History of India

According to consensus in modern genetics, anatomically modern humans first arrived on the Indian subcontinent from Africa between 73,000 and 55,000 years ago. However, the earliest known human remains in India date to 30,000 years ago. Settled life, which involves the transition from foraging to farming and pastoralism, began in Greater India around 7,000 BCE. At the site of Mehrgarh, the domestication of wheat and barley can be documented, rapidly followed by that of goats, sheep, and cattle. By 4,500 BCE, settled life had spread more widely and began to gradually evolve into the Indus Valley Civilisation, an early civilisation of the Old World, contemporaneous with Ancient Egypt and Mesopotamia. This civilisation flourished between 2500 BCE and 1900 BCE in what today is Pakistan and north-western India, and was noted for its urban planning, baked-brick houses, elaborate drainage, and water supply.

In the early second millennium BCE, persistent drought caused the population of the Indus Valley to scatter from large urban centres to villages. Around the same time, Indo-Aryan tribes moved into the Punjab from regions further northwest in several waves of migration. The resulting Vedic period was marked by the composition of the Vedas, large collections of hymns of these tribes whose postulated religious culture, through synthesis with the pre-existing religious cultures of the subcontinent, gave rise to Hinduism. The concept of Varna, a social grouping system which divided people into different groups based on their occupations and abilities, such as priests, warriors, merchants, and tradesmen, was created during this time. Towards the end of this period, around 600 BCE, after the pastoral and nomadic Indo-Aryans had spread from the Punjab into the Gangetic plain, large swaths of which they deforested to make way for agriculture, a second urbanisation took place. The small Indo-Aryan chieftaincies, or janapadas, were consolidated into larger states, or mahajanapadas. This urbanisation was accompanied by the rise of new ascetic movements in Greater Magadha, including Jainism and Buddhism. These movements gave rise to new religious concepts, which opposed the growing influence of Brahmanism and the primacy of rituals, presided over by Brahmin priests, that had come to be associated with the Vedic religion.

Most of the Indian subcontinent was conquered by the Maurya Empire during the 4th and 3rd centuries BCE. From the 3rd century BCE onwards, Prakrit and Pali literature in the north and Tamil Sangam literature in southern India started to flourish. In the 3rd century BCE, Wootz steel originated in south India and was exported to foreign countries. During the Classical period, various parts of India were ruled by numerous dynasties over the next 1,500 years, among which the Gupta Empire stands out. This period, which witnessed a Hindu religious and intellectual resurgence, is known as the classical or Golden Age of India. During this period, many aspects of Indian civilisation, administration, culture, and religion (Hinduism and Buddhism) spread to much of Asia, while kingdoms in southern India developed maritime trade links with the Middle East and the Mediterranean. Indian cultural influence spread over many parts of Southeast Asia, which led to the establishment of Indianised kingdoms in Southeast Asia (Greater India).
The most significant event between the 7th and 11th centuries was the Tripartite Struggle centred on Kannauj, which lasted for more than two centuries between the Pala Empire, the Rashtrakuta Empire, and the Gurjara-Pratihara Empire. From the middle of the fifth century, southern India saw the rise of multiple imperial powers, most notably the Chalukya, Chola, Pallava, Chera, Pandyan, and Western Chalukya empires. The Chola dynasty conquered southern India and successfully invaded parts of Southeast Asia, Sri Lanka, the Maldives, and Bengal in the 11th century. In the early medieval period, Indian mathematics, including Hindu numerals, influenced the development of mathematics and astronomy in the Arab world.

Islamic conquests made limited inroads into modern Afghanistan and Sindh as early as the 8th century, followed by the invasions of Mahmud of Ghazni. The Delhi Sultanate, founded in 1206 CE by Central Asian Turks, ruled a major part of the northern Indian subcontinent in the early 14th century but declined in the late 14th century, which saw the advent of the Deccan sultanates. The wealthy Bengal Sultanate also emerged as a regional and diplomatic power, lasting over three centuries. This period saw the emergence of several powerful Hindu states, notably Vijayanagara, Gajapati, and Ahom, as well as Rajput states such as Mewar. The 15th century saw the advent of Sikhism.

The early modern period began in the 16th century, when the Mughal Empire conquered most of the Indian subcontinent, becoming the biggest global economy and manufacturing power, with a nominal GDP valued at a quarter of world GDP, larger than the GDP of all of Europe combined. The Mughals suffered a gradual decline in the early 18th century, which provided opportunities for the Marathas, Sikhs, Mysoreans, and Nawabs of Bengal to exercise control over large regions of the Indian subcontinent. From the mid-18th century to the mid-19th century, large regions of India were gradually annexed by the East India Company, a chartered company acting as a sovereign power on behalf of the British government. Dissatisfaction with Company rule in India led to the Indian Rebellion of 1857, which rocked parts of north and central India and led to the dissolution of the Company. India was afterwards ruled directly by the British Crown, in the British Raj. After World War I, a nationwide struggle for independence was launched by the Indian National Congress, led by Mahatma Gandhi and noted for its nonviolence. Later, the All-India Muslim League would advocate for a separate Muslim-majority nation-state. The British Indian Empire was partitioned in August 1947 into the Dominion of India (present-day Republic of India) and the Dominion of Pakistan (present-day Islamic Republic of Pakistan and People's Republic of Bangladesh), each gaining its independence.

Prehistoric era (until c. 3300 BCE)

Hominin expansion from Africa is estimated to have reached the Indian subcontinent approximately two million years ago, and possibly as early as 2.2 million years before the present. This dating is based on the known presence of Homo erectus in Indonesia by 1.8 million years before the present and in East Asia by 1.36 million years before the present, as well as the discovery of stone tools made by proto-humans in the Soan River valley, at Riwat, and in the Pabbi Hills, in present-day Pakistan.
Although some older discoveries have been claimed, the suggested dates, based on the dating of fluvial sediments, have not been independently verified. The oldest hominin fossil remains in the Indian subcontinent are those of Homo erectus or Homo heidelbergensis, from the Narmada Valley in central India, dated to approximately half a million years ago. Older fossil finds have been claimed, but are considered unreliable. Reviews of archaeological evidence have suggested that the occupation of the Indian subcontinent by hominins was sporadic until approximately 700,000 years ago, and was geographically widespread by approximately 250,000 years before the present, from which point onward archaeological evidence of proto-human presence is widely attested.

According to Tim Dyson, a historical demographer of South Asia: "Modern human beings—Homo sapiens—originated in Africa. Then, intermittently, sometime between 60,000 and 80,000 years ago, tiny groups of them began to enter the north-west of the Indian subcontinent. It seems likely that initially, they came by way of the coast. ... it is virtually certain that there were Homo sapiens in the subcontinent 55,000 years ago, even though the earliest fossils that have been found of them date to only about 30,000 years before the present." "Y-Chromosome and Mt-DNA data support the colonization of South Asia by modern humans originating in Africa. ... Coalescence dates for most non-European populations average to between 73–55 ka."

And according to Michael Fisher, an environmental historian of South Asia: "Scholars estimate that the first successful expansion of the Homo sapiens range beyond Africa and across the Arabian Peninsula occurred from as early as 80,000 years ago to as late as 40,000 years ago, although there may have been prior unsuccessful emigrations. Some of their descendants extended the human range ever further in each generation, spreading into each habitable land they encountered. One human channel was along the warm and productive coastal lands of the Persian Gulf and northern Indian Ocean. Eventually, various bands entered India between 75,000 years ago and 35,000 years ago."

Archaeological evidence has been interpreted to suggest the presence of anatomically modern humans in the Indian subcontinent 78,000–74,000 years ago, although this interpretation is disputed. The occupation of South Asia by modern humans over a long period, initially in varying forms of isolation as hunter-gatherers, has made the region's population highly diverse, second only to Africa in human genetic diversity. According to Tim Dyson: "Genetic research has contributed to knowledge of the prehistory of the subcontinent's people in other respects. In particular, the level of genetic diversity in the region is extremely high. Indeed, only Africa's population is genetically more diverse. Related to this, there is strong evidence of 'founder' events in the subcontinent. By this is meant circumstances where a subgroup—such as a tribe—derives from a tiny number of 'original' individuals. Further, compared to most world regions, the subcontinent's people are relatively distinct in having practised comparatively high levels of endogamy."

Settled life emerged on the subcontinent on the western margins of the Indus river alluvium approximately 9,000 years ago, evolving gradually into the Indus Valley Civilisation of the third millennium BCE. According to Tim Dyson: "By 7,000 years ago agriculture was firmly established in Baluchistan.
And, over the next 2,000 years, the practice of farming slowly spread eastwards into the Indus valley." And according to Michael Fisher: "The earliest discovered instance ... of well-established, settled agricultural society is at Mehrgarh in the hills between the Bolan Pass and the Indus plain (today in Pakistan) (see Map 3.1). From as early as 7000 BCE, communities there started investing increased labor in preparing the land and selecting, planting, tending, and harvesting particular grain-producing plants. They also domesticated animals, including sheep, goats, pigs, and oxen (both humped zebu [Bos indicus] and unhumped [Bos taurus]). Castrating oxen, for instance, turned them from mainly meat sources into domesticated draft-animals as well."

Bronze Age – first urbanisation (c. 3300 – c. 1800 BCE)

Indus Valley Civilisation

The Bronze Age in the Indian subcontinent began around 3300 BCE. Along with Ancient Egypt and Mesopotamia, the Indus valley region was one of three early cradles of civilisation of the Old World. Of the three, the Indus Valley Civilisation was the most expansive and, at its peak, may have had a population of over five million. The civilisation was primarily centred in modern-day Pakistan, in the Indus river basin, and secondarily in the Ghaggar-Hakra river basin in eastern Pakistan and northwestern India. The Mature Indus civilisation flourished from about 2600 to 1900 BCE, marking the beginning of urban civilisation on the Indian subcontinent. The civilisation included cities such as Harappa, Ganeriwala, and Mohenjo-daro in modern-day Pakistan, and Dholavira, Kalibangan, Rakhigarhi, and Lothal in modern-day India.

Inhabitants of the ancient Indus river valley, the Harappans, developed new techniques in metallurgy and handicraft (carnelian products, seal carving), and produced copper, bronze, lead, and tin. The civilisation is noted for its cities built of brick, roadside drainage systems, and multi-storeyed houses, and is thought to have had some kind of municipal organisation. After the civilisation's collapse, its inhabitants migrated from the river valleys of the Indus and Ghaggar-Hakra towards the Himalayan foothills of the Ganga-Yamuna basin.

Iron Age (1500 – 200 BCE)

Vedic period (c. 1500 – 600 BCE)

The Vedic period is the period when the Vedas, the liturgical hymns of the Indo-Aryan people, were composed. The Vedic culture was located in part of north-west India, while other parts of India had distinct cultural identities during this period. The Vedic culture is described in the texts of the Vedas, still sacred to Hindus, which were orally composed and transmitted in Vedic Sanskrit. The Vedas are some of the oldest extant texts in India. The Vedic period, lasting from about 1500 to 500 BCE, contributed the foundations of several cultural aspects of the Indian subcontinent. In terms of culture, many regions of the Indian subcontinent transitioned from the Chalcolithic to the Iron Age in this period.

Historians have analysed the Vedas to posit a Vedic culture in the Punjab region and the upper Gangetic Plain. Most historians also consider this period to have encompassed several waves of Indo-Aryan migration into the Indian subcontinent from the north-west. The peepal tree and the cow were sanctified by the time of the Atharva Veda. Many of the concepts of Indian philosophy espoused later, like dharma, trace their roots to Vedic antecedents.
Early Vedic society is described in the Rigveda, the oldest Vedic text, believed to have been compiled during the 2nd millennium BCE in the northwestern region of the Indian subcontinent. At this time, Aryan society consisted largely of tribal and pastoral groups, distinct from the Harappan urbanisation, which had been abandoned. The early Indo-Aryan presence probably corresponds, in part, to the Ochre Coloured Pottery culture in archaeological contexts.

At the end of the Rigvedic period, Aryan society began to expand from the northwestern region of the Indian subcontinent into the western Ganges plain. It became increasingly agricultural and was socially organised around the hierarchy of the four varnas, or social classes. This social structure was characterised both by syncretism with the native cultures of northern India and, eventually, by the exclusion of some indigenous peoples through the labelling of their occupations as impure. During this period, many of the previous small tribal units and chiefdoms began to coalesce into janapadas (monarchical, state-level polities).

The Iron Age in the Indian subcontinent, from about 1200 BCE to the 6th century BCE, is defined by the rise of the janapadas, which were realms, republics, and kingdoms, notably the Iron Age kingdoms of Kuru, Panchala, Kosala, and Videha. The Kuru kingdom was the first state-level society of the Vedic period, corresponding to the beginning of the Iron Age in northwestern India, around 1200–800 BCE, as well as to the composition of the Atharvaveda (the first Indian text to mention iron, as śyāma ayas, literally "black metal"). The Kuru state organised the Vedic hymns into collections and developed the orthodox srauta ritual to uphold the social order. Two key figures of the Kuru state were king Parikshit and his successor Janamejaya, who transformed this realm into the dominant political, social, and cultural power of northern Iron Age India. When the Kuru kingdom declined, the centre of Vedic culture shifted to its eastern neighbour, the Panchala kingdom. The archaeological Painted Grey Ware culture, which flourished in the Haryana and western Uttar Pradesh regions of northern India from about 1100 to 600 BCE, is believed to correspond to the Kuru and Panchala kingdoms.

During the late Vedic period, the kingdom of Videha emerged as a new centre of Vedic culture, situated even farther to the east (in what is today Nepal and the Bihar state of India), reaching its prominence under king Janaka, whose court provided patronage for Brahmin sages and philosophers such as Yajnavalkya, Aruni, and Gargi Vachaknavi. The later part of this period corresponds with a consolidation of increasingly large states and kingdoms, called mahajanapadas, all across northern India.

Second urbanisation (800–200 BCE)

Between 800 and 200 BCE the Śramaṇa movement formed, from which originated Jainism and Buddhism. In the same period, the first Upanishads were written. After 500 BCE, the so-called "second urbanisation" started, with new urban settlements arising on the Ganges plain, especially the Central Ganges plain.
The foundations for the "second urbanisation" were laid before 600 BCE, in the Painted Grey Ware culture of the Ghaggar-Hakra and upper Ganges plain. Although most PGW sites were small farming villages, "several dozen" PGW sites eventually emerged as relatively large settlements that can be characterised as towns, the largest of which were fortified by ditches or moats and embankments of piled earth with wooden palisades, albeit smaller and simpler than the elaborately fortified large cities which grew after 600 BCE in the Northern Black Polished Ware culture.

The Central Ganges plain, where Magadha gained prominence, forming the base of the Maurya Empire, was a distinct cultural area, with new states arising after 500 BCE during the so-called "second urbanisation". It was influenced by the Vedic culture but differed markedly from the Kuru-Panchala region. It "was the area of the earliest known cultivation of rice in South Asia and by 1800 BCE was the location of an advanced Neolithic population associated with the sites of Chirand and Chechar". In this region, the Śramaṇic movements flourished, and Jainism and Buddhism originated.

Buddhism and Jainism

The period from around 800 BCE to 400 BCE witnessed the composition of the earliest Upanishads. The Upanishads form the theoretical basis of classical Hinduism and are known as the Vedanta (conclusion of the Vedas). The increasing urbanisation of India in the 7th and 6th centuries BCE led to the rise of new ascetic or Śramaṇa movements which challenged the orthodoxy of rituals. Mahavira (c. 549–477 BCE), proponent of Jainism, and Gautama Buddha (c. 563–483 BCE), founder of Buddhism, were the most prominent icons of this movement. Śramaṇa gave rise to the concept of the cycle of birth and death, the concept of samsara, and the concept of liberation. The Buddha found a Middle Way that ameliorated the extreme asceticism found in the Śramaṇa religions. Around the same time, Mahavira (the 24th Tirthankara in Jainism) propagated a theology that was later to become Jainism. However, Jain orthodoxy holds that the teachings of the Tirthankaras predate all known time, and scholars believe Parshvanatha (c. 872 – c. 772 BCE), accorded status as the 23rd Tirthankara, was a historical figure. The Vedas are believed to document a few Tirthankaras and an ascetic order similar to the Śramaṇa movement.

The Sanskrit epics Ramayana and Mahabharata were composed during this period. The Mahabharata remains, today, the longest single poem in the world. Historians formerly postulated an "epic age" as the milieu of these two epic poems, but now recognise that the texts (which are both familiar with each other) went through multiple stages of development over centuries. For instance, the Mahabharata may have been based on a small-scale conflict (possibly around 1000 BCE) which was eventually "transformed into a gigantic epic war by bards and poets". There is no conclusive proof from archaeology as to whether the specific events of the Mahabharata have any historical basis. The existing texts of these epics are believed to belong to the post-Vedic age, between c. 400 BCE and 400 CE. According to the Puranic chronology, however, the events of these epics are placed before 3000 BCE.

The period from c. 600 BCE to c. 300 BCE witnessed the rise of the Mahajanapadas, sixteen powerful and vast kingdoms and oligarchic republics.
These Mahajanapadas evolved and flourished in a belt stretching from Gandhara in the northwest to Bengal in the eastern part of the Indian subcontinent, and included parts of the trans-Vindhyan region. Ancient Buddhist texts, such as the Anguttara Nikaya, make frequent reference to these sixteen great kingdoms and republics: Anga, Assaka, Avanti, Chedi, Gandhara, Kashi, Kamboja, Kosala, Kuru, Magadha, Malla, Matsya (or Machcha), Panchala, Surasena, Vriji, and Vatsa. This period saw the second major rise of urbanism in India after the Indus Valley Civilisation.

Early "republics", or gaṇa sanghas, such as the Shakyas, Koliyas, Mallas, and Licchavis, had republican governments. Gaṇa sanghas such as the Mallas, centred in the city of Kusinagara, and the Vajjian Confederacy (Vajji), centred in the city of Vaishali, existed as early as the 6th century BCE and persisted in some areas until the 4th century CE. The most famous clan amongst the ruling confederate clans of the Vajji Mahajanapada was the Licchavis.

This period corresponds in an archaeological context to the Northern Black Polished Ware culture. Especially focused on the Central Ganges plain but also spreading across vast areas of the northern and central Indian subcontinent, this culture is characterised by the emergence of large cities with massive fortifications, significant population growth, increased social stratification, wide-ranging trade networks, construction of public architecture and water channels, specialised craft industries (e.g., ivory and carnelian carving), a system of weights, punch-marked coins, and the introduction of writing in the form of the Brahmi and Kharosthi scripts. The language of the gentry at that time was Sanskrit, while the languages of the general population of northern India are referred to as Prakrits.

Many of the sixteen kingdoms had coalesced into four major ones by 500/400 BCE, by the time of Gautama Buddha: Vatsa, Avanti, Kosala, and Magadha. The life of Gautama Buddha was mainly associated with these four kingdoms.

Early Magadha dynasties

Magadha formed one of the sixteen Mahā-Janapadas (Sanskrit: "Great Realms") or kingdoms in ancient India. The core of the kingdom was the area of Bihar south of the Ganges; its first capital was Rajagriha (modern Rajgir), then Pataliputra (modern Patna). Magadha expanded to include most of Bihar and Bengal with the conquest of Licchavi and Anga respectively, followed by much of eastern Uttar Pradesh and Orissa. The ancient kingdom of Magadha is frequently mentioned in Jain and Buddhist texts. It is also mentioned in the Ramayana, the Mahabharata, and the Puranas. The earliest reference to the Magadha people occurs in the Atharva-Veda, where they are listed along with the Angas, Gandharis, and Mujavats. Magadha played an important role in the development of Jainism and Buddhism. The Magadha kingdom included republican communities such as the community of Rajakumara. Villages had their own assemblies under their local chiefs, called Gramakas. Their administrations were divided into executive, judicial, and military functions.

Early sources, from the Buddhist Pāli Canon, the Jain Agamas, and the Hindu Puranas, mention Magadha being ruled by the Haryanka dynasty for some 200 years, c. 600–413 BCE. King Bimbisara of the Haryanka dynasty led an active and expansive policy, conquering Anga in what is now eastern Bihar and West Bengal. King Bimbisara was overthrown and killed by his son, Prince Ajatashatru, who continued the expansionist policy of Magadha.
During this period, Gautama Buddha, the founder of Buddhism, lived much of his life in the Magadha kingdom. He attained enlightenment in Bodh Gaya and gave his first sermon in Sarnath, and the first Buddhist council was held in Rajgriha. The Haryanka dynasty was overthrown by the Shishunaga dynasty. The last Shishunaga ruler, Kalasoka, was assassinated in 345 BCE by Mahapadma Nanda, the first of the so-called Nine Nandas: Mahapadma and his eight sons.

Nanda Empire and Alexander's campaign

The Nanda Empire, at its greatest extent, stretched from Bengal in the east to the Punjab region in the west, and as far south as the Vindhya Range. The Nanda dynasty was famed for its great wealth. The Nandas built on the foundations laid by their Haryanka and Shishunaga predecessors to create the first great empire of north India. To achieve this objective they built a vast army, consisting of 200,000 infantry, 20,000 cavalry, 2,000 war chariots, and 3,000 war elephants (at the lowest estimates). According to the Greek historian Plutarch, the Nanda army was even larger, numbering 200,000 infantry, 80,000 cavalry, 8,000 war chariots, and 6,000 war elephants. However, the Nanda Empire never had the opportunity to see its army face Alexander the Great, who invaded north-western India at the time of Dhana Nanda: Alexander was forced to confine his campaign to the plains of the Punjab and Sindh, as his forces mutinied at the river Beas and refused to go any further upon encountering the Nanda and Gangaridai forces.

The Maurya Empire (322–185 BCE) unified most of the Indian subcontinent into one state and was the largest empire ever to exist on the Indian subcontinent. At its greatest extent, the Mauryan Empire stretched to the north up to the natural boundaries of the Himalayas and to the east into what is now Assam. To the west, it reached beyond modern Pakistan, to the Hindu Kush mountains in what is now Afghanistan. The empire was established by Chandragupta Maurya, assisted by Chanakya (Kautilya), in Magadha (in modern Bihar) when he overthrew the Nanda dynasty. Chandragupta rapidly expanded his power westwards across central and western India, and by 317 BCE the empire had fully occupied northwestern India. The Mauryan Empire then defeated Seleucus I, a diadochus and founder of the Seleucid Empire, during the Seleucid–Mauryan war, thus gaining additional territory west of the Indus River.

Chandragupta's son Bindusara succeeded to the throne around 297 BCE. By the time he died in c. 272 BCE, a large part of the Indian subcontinent was under Mauryan suzerainty. However, the region of Kalinga (around modern-day Odisha) remained outside Mauryan control, perhaps interfering with Mauryan trade with the south. Bindusara was succeeded by Ashoka, whose reign lasted for around 37 years, until his death in about 232 BCE. His campaign against the Kalingans in about 260 BCE, though successful, led to immense loss of life and misery. This filled Ashoka with remorse and led him to shun violence, and subsequently to embrace Buddhism. The empire began to decline after his death, and the last Mauryan ruler, Brihadratha, was assassinated by Pushyamitra Shunga, who established the Shunga Empire.

Under Chandragupta Maurya and his successors, internal and external trade, agriculture, and economic activities all thrived and expanded across India thanks to the creation of a single efficient system of finance, administration, and security.
The Mauryans built the Grand Trunk Road, one of Asia's oldest and longest major roads, connecting the Indian subcontinent with Central Asia. After the Kalinga War, the empire experienced nearly half a century of peace and security under Ashoka. Mauryan India also enjoyed an era of social harmony, religious transformation, and expansion of the sciences and of knowledge. Chandragupta Maurya's embrace of Jainism increased social and religious renewal and reform across his society, while Ashoka's embrace of Buddhism has been said to have been the foundation of a reign of social and political peace and non-violence across all of India. Ashoka sponsored the spreading of Buddhist missionaries into Sri Lanka, Southeast Asia, West Asia, North Africa, and Mediterranean Europe.

The Arthashastra and the Edicts of Ashoka are the primary written records of Mauryan times. Archaeologically, this period falls into the era of Northern Black Polished Ware. The Mauryan Empire was based on a modern and efficient economy and society. However, the sale of merchandise was closely regulated by the government. Although there was no banking in Mauryan society, usury was customary. A significant number of written records on slavery survive, suggesting its prevalence. During this period, a high-quality steel called Wootz steel was developed in south India and was later exported to China and Arabia.

During the Sangam period, Tamil literature flourished from the 3rd century BCE to the 4th century CE. In this period, three Tamil dynasties, collectively known as the Three Crowned Kings of Tamilakam (the Chera, Chola, and Pandyan dynasties), ruled parts of southern India. The Sangam literature deals with the history, politics, wars, and culture of the Tamil people of this period. The scholars of the Sangam period rose from among the common people, sought the patronage of the Tamil kings, and mainly wrote about the common people and their concerns. Unlike Sanskrit writers, who were mostly Brahmins, Sangam writers came from diverse classes and social backgrounds and were mostly non-Brahmins. They belonged to different faiths and professions, including farmers, artisans, merchants, monks, and priests, and counted royalty and women among their number.

Around c. 300 BCE – c. 200 CE, Pathupattu, an anthology of ten mid-length books considered part of the Sangam literature, was composed, along with the Ettuthogai, eight anthologies of poetic works, and the Patiṉeṇkīḻkaṇakku, eighteen minor poetic works; Tolkāppiyam, the earliest grammatical work in the Tamil language, was also developed in this period. Also during the Sangam period, two of the Five Great Epics of Tamil literature were composed. Ilango Adigal composed the Silappatikaram, a non-religious work that revolves around Kannagi, who, having lost her husband to a miscarriage of justice at the court of the Pandyan dynasty, wreaks her revenge on his kingdom. The Manimekalai, composed by Sīthalai Sāttanār, is a sequel to the Silappatikaram and tells the story of the daughter of Kovalan and Madhavi, who became a Buddhist bhikkhuni.

Classical and early medieval periods (c. 200 BCE – c. 1200 CE)

[Image: The Great Chaitya in the Karla Caves; the shrines were developed over the period from the 2nd century BCE to the 5th century CE.]

The time between the Maurya Empire in the 3rd century BCE and the end of the Gupta Empire in the 6th century CE is referred to as the "Classical" period of India.
It can be divided into various sub-periods, depending on the chosen periodisation. The Classical period begins after the decline of the Maurya Empire and the corresponding rise of the Shunga and Satavahana dynasties. The Gupta Empire (4th–6th century) is regarded as the "Golden Age" of Hinduism, although a host of kingdoms ruled over India in these centuries. Also, the Sangam literature flourished from the 3rd century BCE to the 3rd century CE in southern India. During this period, India's economy is estimated to have been the largest in the world, holding between one-third and one-quarter of the world's wealth from 1 CE to 1000 CE.

Early classical period (c. 200 BCE – c. 320 CE)

The Shungas originated from Magadha and controlled areas of the central and eastern Indian subcontinent from around 187 to 78 BCE. The dynasty was established by Pushyamitra Shunga, who overthrew the last Maurya emperor. Its capital was Pataliputra, but later emperors, such as Bhagabhadra, also held court at Vidisha (modern Besnagar) in eastern Malwa.

Pushyamitra Shunga ruled for 36 years and was succeeded by his son Agnimitra. There were ten Shunga rulers. After the death of Agnimitra, however, the empire rapidly disintegrated; inscriptions and coins indicate that much of northern and central India consisted of small kingdoms and city-states independent of any Shunga hegemony. The empire is noted for its numerous wars with both foreign and indigenous powers. The Shungas fought battles with the Mahameghavahana dynasty of Kalinga, the Satavahana dynasty of the Deccan, the Indo-Greeks, and possibly the Panchalas and Mitras of Mathura.

Art, education, philosophy, and other forms of learning flowered during this period, including small terracotta images, larger stone sculptures, and architectural monuments such as the stupa at Bharhut and the renowned Great Stupa at Sanchi. The Shunga rulers helped to establish the tradition of royal sponsorship of learning and art. The script used by the empire was a variant of Brahmi and was used to write the Sanskrit language. The Shunga Empire played an imperative role in patronising Indian culture at a time when some of the most important developments in Hindu thought were taking place. This helped the empire flourish and gain power.

The Sātavāhanas were based in Amaravati in Andhra Pradesh, as well as Junnar (Pune) and Prathisthan (Paithan) in Maharashtra. The territory of the empire covered large parts of India from the 1st century BCE onward. The Sātavāhanas started out as feudatories of the Mauryan dynasty, but declared independence with its decline. They are known for their patronage of Hinduism and Buddhism, which resulted in Buddhist monuments from Ellora (a UNESCO World Heritage Site) to Amaravati. The Sātavāhanas were one of the first Indian states to issue coins struck with the images of their rulers. They formed a cultural bridge and played a vital role in trade, as well as in the transfer of ideas and culture, between the Indo-Gangetic Plain and the southern tip of India.

They had to compete with the Shunga Empire and then the Kanva dynasty of Magadha to establish their rule. Later, they played a crucial role in protecting a large part of India against foreign invaders such as the Sakas, Yavanas, and Pahlavas. In particular, their struggles with the Western Kshatrapas went on for a long time. The notable rulers of the Satavahana dynasty, Gautamiputra Satakarni and Sri Yajna Sātakarni, were able to defeat foreign invaders like the Western Kshatrapas and to stop their expansion.
In the 3rd century CE the empire was split into smaller states.

Trade and travels to India

- The spice trade in Kerala attracted traders from all over the Old World to India. Early writings and Stone Age carvings of the Neolithic age indicate that India's southwest coastal port Muziris, in Kerala, had established itself as a major spice trade centre from as early as 3,000 BCE, according to Sumerian records. Jewish traders from Judea arrived in Kochi, Kerala, as early as 562 BCE.
- Thomas the Apostle sailed to India around the 1st century CE. He landed in Muziris in Kerala and established the Yezh (seven) ara (half) palligal (churches), or Seven and a Half Churches.
- Buddhism entered China through the Silk Road transmission of Buddhism in the 1st or 2nd century CE. The interaction of cultures led several Chinese travellers and monks to enter India, most notably Faxian, Yijing, Song Yun, and Xuanzang. These travellers wrote detailed accounts of the Indian subcontinent, including the political and social aspects of the region.
- Hindu and Buddhist religious establishments of Southeast Asia came to be associated with economic activity and commerce, as patrons entrusted them with large funds which would later be used to benefit the local economy through estate management, craftsmanship, and the promotion of trading activities. Buddhism in particular travelled alongside the maritime trade, promoting coinage, art, and literacy. Indian merchants involved in the spice trade took Indian cuisine to Southeast Asia, where spice mixtures and curries became popular with the native inhabitants.
- The Greco-Roman world followed, trading along the incense route and the Roman-India routes. During the 2nd century BCE, Greek and Indian ships met to trade at Arabian ports such as Aden. During the first millennium, the sea routes to India were controlled by the Indians and the Ethiopians, who became the maritime trading powers of the Red Sea.

The Kushan Empire expanded out of what is now Afghanistan into the northwest of the Indian subcontinent under the leadership of its first emperor, Kujula Kadphises, around the middle of the 1st century CE. The Kushans were possibly a Tocharian-speaking tribe, one of five branches of the Yuezhi confederation. By the time of his grandson, Kanishka the Great, the empire had spread to encompass much of Afghanistan, and then the northern parts of the Indian subcontinent at least as far as Saketa and Sarnath, near Varanasi (Banaras).

Emperor Kanishka was a great patron of Buddhism; however, as the Kushans expanded southward, the deities of their later coinage came to reflect the empire's new Hindu majority. The Kushans played an important role in the establishment of Buddhism in India and its spread to Central Asia and China. The historian Vincent Smith said of Kanishka: "He played the part of a second Ashoka in the history of Buddhism." The empire linked the Indian Ocean maritime trade with the commerce of the Silk Road through the Indus valley, encouraging long-distance trade, particularly between China and Rome. The Kushans brought new trends to the budding and blossoming Gandhara and Mathura schools of art, which reached their peak during Kushan rule. H.G. Rowlinson commented: "The Kushan period is a fitting prelude to the Age of the Guptas."

Classical period: Gupta Empire (c. 320 – 650 CE)

The Gupta period was noted for cultural creativity, especially in literature, architecture, sculpture, and painting.
The Gupta period produced scholars such as Kalidasa, Aryabhata, Varahamihira, Vishnu Sharma, and Vatsyayana, who made great advancements in many academic fields. The Gupta period marked a watershed of Indian culture: the Guptas performed Vedic sacrifices to legitimise their rule, but they also patronised Buddhism, which continued to provide an alternative to Brahmanical orthodoxy. The military exploits of the first three rulers – Chandragupta I, Samudragupta, and Chandragupta II – brought much of India under their leadership. Science and political administration reached new heights during the Gupta era. Strong trade ties also made the region an important cultural centre and established it as a base that would influence nearby kingdoms and regions in Burma, Sri Lanka, Maritime Southeast Asia, and Indochina. The later Guptas successfully resisted the northwestern kingdoms until the arrival of the Alchon Huns, who established themselves in Afghanistan by the first half of the 5th century CE, with their capital at Bamiyan. However, much of the Deccan and southern India were largely unaffected by these events in the north.

The Vākāṭaka Empire originated in the Deccan in the mid-third century CE. Its territory is believed to have extended from the southern edges of Malwa and Gujarat in the north to the Tungabhadra River in the south, and from the Arabian Sea in the west to the edges of Chhattisgarh in the east. The Vākāṭakas were the most important successors of the Satavahanas in the Deccan, contemporaneous with the Guptas in northern India, and were succeeded by the Vishnukundina dynasty. The Vakatakas are noted for having been patrons of the arts, architecture, and literature. They led public works, and their monuments are a visible legacy. The rock-cut Buddhist viharas and chaityas of the Ajanta Caves (a UNESCO World Heritage Site) were built under the patronage of the Vakataka emperor Harishena.

Samudragupta's 4th-century Allahabad pillar inscription mentions Kamarupa (western Assam) and Davaka (central Assam) as frontier kingdoms of the Gupta Empire. Davaka was later absorbed by Kamarupa, which grew into a large kingdom spanning from the Karatoya river to near present-day Sadiya and covering the entire Brahmaputra valley, North Bengal, parts of Bangladesh and, at times, Purnea and parts of West Bengal. It was ruled by three dynasties, the Varmans (c. 350–650 CE), the Mlechchha dynasty (c. 655–900 CE), and the Kamarupa-Palas (c. 900–1100 CE), from their capitals in present-day Guwahati (Pragjyotishpura), Tezpur (Haruppeswara), and North Gauhati (Durjaya) respectively. All three dynasties claimed descent from Narakasura, an immigrant from Aryavarta. In the reign of the Varman king Bhaskar Varman (c. 600–650 CE), the Chinese traveller Xuanzang visited the region and recorded his travels. Later, after weakening and disintegration (following the Kamarupa-Palas), the Kamarupa tradition was extended somewhat, until c. 1255 CE, by the Lunar I (c. 1120–1185 CE) and Lunar II (c. 1155–1255 CE) dynasties. The Kamarupa kingdom came to an end in the middle of the 13th century when Sandhya of the Khen dynasty, ruling from Kamarupanagara (North Guwahati), moved his capital to Kamatapur (North Bengal) after the invasion of Muslim Turks, and established the Kamata kingdom.

The Pallavas, during the 4th to 9th centuries, were, alongside the Guptas of the north, great patrons of Sanskrit development in the south of the Indian subcontinent. The Pallava reign saw the first Sanskrit inscriptions in a script called Grantha.
The early Pallavas had various connections to Southeast Asian countries. The Pallavas used Dravidian architecture to build some very important Hindu temples and academies in Mamallapuram, Kanchipuram, and other places; their rule saw the rise of great poets. The practice of dedicating temples to different deities came into vogue, followed by fine artistic temple architecture and a sculptural style rooted in Vastu Shastra. The Pallavas reached the height of their power during the reigns of Mahendravarman I (571–630 CE) and Narasimhavarman I (630–668 CE) and dominated the Telugu region and the northern parts of the Tamil region for about six hundred years, until the end of the 9th century.

The Kadamba dynasty, founded by Mayurasharma in 345 CE, originated in Karnataka and at times showed the potential of developing to imperial proportions, an indication of which is provided by the titles and epithets assumed by its rulers. King Mayurasharma defeated the armies of the Pallavas of Kanchi, possibly with the help of some native tribes. Kadamba fame reached its peak during the rule of Kakusthavarma, a notable ruler with whom even the kings of the Gupta dynasty of northern India cultivated marital alliances. The Kadambas were contemporaries of the Western Ganga dynasty, and together they formed the earliest native kingdoms to rule the land with absolute autonomy. The dynasty later continued to rule as a feudatory of larger Kannada empires, the Chalukya and Rashtrakuta empires, for over five hundred years, during which time it branched into minor dynasties known as the Kadambas of Goa, the Kadambas of Halasi, and the Kadambas of Hangal.

Empire of Harsha

Harsha ruled northern India from 606 to 647 CE. He was the son of Prabhakarvardhana and the younger brother of Rajyavardhana, who were members of the Vardhana dynasty and ruled Thanesar, in present-day Haryana.

After the downfall of the prior Gupta Empire in the middle of the 6th century, North India reverted to smaller republics and monarchical states. The power vacuum resulted in the rise of the Vardhanas of Thanesar, who began uniting the republics and monarchies from the Punjab to central India. After the death of Harsha's father and brother, representatives of the empire crowned Harsha emperor at an assembly in April 606 CE, giving him the title of Maharaja when he was merely 16 years old. At the height of his power, his empire covered much of north and northwestern India, extending east as far as Kamarupa and south as far as the Narmada River; he eventually made Kannauj (in the present state of Uttar Pradesh) his capital, and ruled until 647 CE. The peace and prosperity that prevailed made his court a centre of cosmopolitanism, attracting scholars, artists, and religious visitors from far and wide. During this time, Harsha converted to Buddhism from Surya worship. The Chinese traveller Xuanzang visited the court of Harsha and wrote a very favourable account of him, praising his justice and generosity. His biography, the Harshacharita ("Deeds of Harsha"), written by the Sanskrit poet Banabhatta, describes his association with Thanesar, besides mentioning the defence wall, a moat, and the palace with a two-storied Dhavalagriha (White Mansion).

Early medieval period (mid 6th c. – 1200 CE)

Early medieval India began after the end of the Gupta Empire in the 6th century CE.
This period also covers the "Late Classical Age" of Hinduism, which began after the end of the Gupta Empire and the collapse of the Empire of Harsha in the 7th century CE; the beginning of Imperial Kannauj, leading to the Tripartite Struggle; and ended in the 13th century with the rise of the Delhi Sultanate in northern India and the end of the Later Cholas with the death of Rajendra Chola III in 1279 in southern India; some aspects of the Classical period, however, continued until the fall of the Vijayanagara Empire in the south around the 17th century.

From the fifth century to the thirteenth, Śrauta sacrifices declined, and initiatory traditions of Buddhism, Jainism, or, more commonly, Shaivism, Vaishnavism, and Shaktism expanded in royal courts. This period produced some of India's finest art, considered the epitome of classical development, and saw the development of the main spiritual and philosophical systems which continued in Hinduism, Buddhism, and Jainism.

In the 7th century CE, Kumārila Bhaṭṭa formulated his school of Mimamsa philosophy and defended the position on Vedic rituals against Buddhist attacks. Scholars note Bhaṭṭa's contribution to the decline of Buddhism in India. In the 8th century, Adi Shankara travelled across the Indian subcontinent to propagate and spread the doctrine of Advaita Vedanta, which he consolidated, and he is credited with unifying the main characteristics of the currents of thought in Hinduism. He was a critic of both Buddhism and the Mimamsa school of Hinduism, and he founded mathas (monasteries) in the four corners of the Indian subcontinent for the spread and development of Advaita Vedanta. Meanwhile, Muhammad bin Qasim's invasion of Sindh (in modern Pakistan) in 711 CE witnessed a further decline of Buddhism. The Chach Nama records many instances of the conversion of stupas to mosques, such as at Nerun.

From the 8th to the 10th century, three dynasties contested for control of northern India: the Gurjara-Pratiharas of Malwa, the Palas of Bengal, and the Rashtrakutas of the Deccan. The Sena dynasty would later assume control of the Pala Empire; the Gurjara-Pratiharas fragmented into various states, notably the Paramaras of Malwa, the Chandelas of Bundelkhand, the Kalachuris of Mahakoshal, the Tomaras of Haryana, and the Chauhans of Rajputana, these states being some of the earliest Rajput kingdoms; while the Rashtrakutas were annexed by the Western Chalukyas. During this period, the Chaulukya dynasty emerged; the Chaulukyas constructed the Dilwara Temples, the Modhera Sun Temple, and Rani ki vav in the Māru-Gurjara style of architecture, and their capital Anhilwara (modern Patan, Gujarat) was one of the largest cities in the Indian subcontinent, with a population estimated at 100,000 in 1000 CE.

The Chola Empire emerged as a major power during the reigns of Raja Raja Chola I and Rajendra Chola I, who successfully invaded parts of Southeast Asia and Sri Lanka in the 11th century. Lalitaditya Muktapida (r. 724–760 CE) was an emperor of the Kashmiri Karkoṭa dynasty, which exercised influence in northwestern India from 625 CE until 1003 and was followed by the Lohara dynasty. Kalhana, in his Rajatarangini, credits king Lalitaditya with leading an aggressive military campaign in northern India and Central Asia. The Hindu Shahi dynasty ruled portions of eastern Afghanistan, northern Pakistan, and Kashmir from the mid-7th century to the early 11th century.
In Odisha, meanwhile, the Eastern Ganga Empire rose to power, noted for the advancement of Hindu architecture, most notably the Jagannath Temple and the Konark Sun Temple, and for its patronage of art and literature.

The Chalukya Empire ruled large parts of southern and central India between the 6th and the 12th centuries. During this period, the Chalukyas ruled as three related yet individual dynasties. The earliest dynasty, known as the "Badami Chalukyas", ruled from Vatapi (modern Badami) from the middle of the 6th century. The Badami Chalukyas began to assert their independence at the decline of the Kadamba kingdom of Banavasi and rapidly rose to prominence during the reign of Pulakeshin II. The rule of the Chalukyas marks an important milestone in the history of South India and a golden age in the history of Karnataka. The political atmosphere in South India shifted from smaller kingdoms to large empires with the ascendancy of the Badami Chalukyas. A southern-India-based kingdom took control of and consolidated the entire region between the Kaveri and the Narmada rivers. The rise of this empire saw the birth of efficient administration, overseas trade and commerce, and the development of a new style of architecture called "Chalukyan architecture". The Chalukya dynasty ruled parts of southern and central India from Badami in Karnataka between 550 and 750, and then again from Kalyani between 970 and 1190.

[Image: Exterior view of the 8th-century Durga temple at the Aihole complex, which includes Hindu, Buddhist, and Jain temples and monuments.]

Founded by Dantidurga around 753, the Rashtrakuta Empire ruled from its capital at Manyakheta for almost two centuries. At its peak, the Rashtrakutas ruled from the Ganges-Yamuna doab in the north to Cape Comorin in the south, a fruitful time of political expansion, architectural achievements, and famous literary contributions. The early rulers of this dynasty were Hindu, but the later rulers were strongly influenced by Jainism. Govinda III and Amoghavarsha were the most famous of the long line of able administrators produced by the dynasty. Amoghavarsha, who ruled for 64 years, was also an author and wrote the Kavirajamarga, the earliest known Kannada work on poetics. Architecture reached a milestone in the Dravidian style, the finest example of which is the Kailasanath Temple at Ellora. Other important contributions are the Kashivishvanatha temple and the Jain Narayana temple at Pattadakal in Karnataka. The Arab traveller Suleiman described the Rashtrakuta Empire as one of the four great empires of the world. The Rashtrakuta period marked the beginning of the golden age of southern Indian mathematics. The great south Indian mathematician Mahāvīra lived in the Rashtrakuta Empire, and his text had a huge impact on the medieval south Indian mathematicians who lived after him. The Rashtrakuta rulers also patronised men of letters who wrote in a variety of languages, from Sanskrit to the Apabhraṃśas.

The Gurjara-Pratiharas were instrumental in containing Arab armies moving east of the Indus River. Nagabhata I defeated the Arab army under Junaid and Tamin during the Caliphate campaigns in India. Under Nagabhata II, the Gurjara-Pratiharas became the most powerful dynasty in northern India. He was succeeded by his son Ramabhadra, who ruled briefly before being succeeded by his son, Mihira Bhoja. Under Bhoja and his successor Mahendrapala I, the Pratihara Empire reached its peak of prosperity and power.
By the time of Mahendrapala, the extent of its territory rivalled that of the Gupta Empire, stretching from the border of Sindh in the west to Bengal in the east, and from the Himalayas in the north to areas past the Narmada in the south. The expansion triggered a tripartite power struggle with the Rashtrakuta and Pala empires for control of the Indian subcontinent. During this period, the Imperial Pratiharas took the title of Maharajadhiraja of Āryāvarta (Great King of Kings of India). By the 10th century, several feudatories of the empire had taken advantage of the temporary weakness of the Gurjara-Pratiharas to declare their independence, notably the Paramaras of Malwa, the Chandelas of Bundelkhand, the Kalachuris of Mahakoshal, the Tomaras of Haryana, and the Chauhans of Rajputana.

[Images: Sculptures near Teli ka Mandir, Gwalior Fort; Jainism-related cave monuments and statues carved into the rock face inside the Siddhachal Caves, Gwalior Fort; the Ghateshwara Mahadeva temple at the Baroli temple complex, a walled complex of eight temples built by the Gurjara-Pratiharas.]

The Khayaravala dynasty ruled parts of the present-day Indian states of Bihar and Jharkhand during the 11th and 12th centuries. Its capital was located at Khayaragarh in Shahabad district. According to an inscription at Rohtas, Pratapdhavala and Shri Pratapa were kings of the dynasty.

The Pala Empire was founded by Gopala I. It was ruled by a Buddhist dynasty from Bengal, in the eastern region of the Indian subcontinent. The Palas reunified Bengal after the fall of Shashanka's Gauda Kingdom. The Palas were followers of the Mahayana and Tantric schools of Buddhism; they also patronised Shaivism and Vaishnavism. The morpheme pala, meaning "protector", was used as an ending for the names of all the Pala monarchs. The empire reached its peak under Dharmapala and Devapala. Dharmapala is believed to have conquered Kanauj and extended his sway up to the farthest limits of India in the northwest.

The Pala Empire can be considered the golden era of Bengal in many ways. Dharmapala founded Vikramashila and revived Nalanda, considered one of the first great universities in recorded history. Nalanda reached its height under the patronage of the Pala Empire. The Palas also built many viharas. They maintained close cultural and commercial ties with the countries of Southeast Asia and Tibet. Sea trade added greatly to the prosperity of the Pala Empire. The Arab merchant Suleiman notes the enormity of the Pala army in his memoirs.

The medieval Cholas rose to prominence during the middle of the 9th century CE and established the greatest empire South India had seen. They successfully united South India under their rule and, through their naval strength, extended their influence into Southeast Asia, over powers such as Srivijaya. Under Rajaraja Chola I and his successors Rajendra Chola I, Rajadhiraja Chola, Virarajendra Chola, and Kulothunga Chola I, the dynasty became a military, economic, and cultural power in South Asia and Southeast Asia. Rajendra Chola I's navies went even further, occupying the sea coasts from Burma to Vietnam, the Andaman and Nicobar Islands, the Lakshadweep (Laccadive) islands, Sumatra, and the Malay Peninsula in Southeast Asia, and the Pegu islands. The power of the new empire was proclaimed to the eastern world by the expedition to the Ganges which Rajendra Chola I undertook, by the occupation of cities of the maritime empire of Srivijaya in Southeast Asia, and by repeated embassies to China.
They dominated the political affairs of Sri Lanka for over two centuries through repeated invasions and occupation. They also had continuing trade contacts with the Arabs in the west and with the Chinese empire in the east. Rajaraja Chola I and his equally distinguished son Rajendra Chola I gave political unity to the whole of southern India and established the Chola Empire as a respected sea power. Under the Cholas, South India reached new heights of excellence in art, religion, and literature. In all of these spheres, the Chola period marked the culmination of movements that had begun in an earlier age under the Pallavas. Monumental architecture in the form of majestic temples, and sculpture in stone and bronze, reached a finesse never before achieved in India.

Western Chalukya Empire

The Western Chalukya Empire ruled most of the western Deccan, in South India, between the 10th and 12th centuries. Vast areas between the Narmada River in the north and the Kaveri River in the south came under Chalukya control. During this period the other major ruling families of the Deccan, the Hoysalas, the Seuna Yadavas of Devagiri, the Kakatiya dynasty, and the Southern Kalachuris, were subordinates of the Western Chalukyas and gained their independence only when the power of the Chalukyas waned during the latter half of the 12th century.

The Western Chalukyas developed an architectural style known today as a transitional style, an architectural link between the style of the early Chalukya dynasty and that of the later Hoysala Empire. Most of its monuments are in the districts bordering the Tungabhadra River in central Karnataka. Well-known examples are the Kasivisvesvara Temple at Lakkundi, the Mallikarjuna Temple at Kuruvatti, the Kallesvara Temple at Bagali, the Siddhesvara Temple at Haveri, and the Mahadeva Temple at Itagi. This was an important period in the development of fine arts in southern India, especially in literature, as the Western Chalukya kings encouraged writers in the native Kannada language and in Sanskrit, such as the philosopher and statesman Basava and the great mathematician Bhāskara II.

Late medieval period (c. 1200–1526 CE)

The late medieval period is marked by repeated invasions by Muslim Central Asian nomadic clans, the rule of the Delhi Sultanate, and the growth of other dynasties and empires built upon the military technology of the Sultanate.

The Delhi Sultanate was a Muslim sultanate based in Delhi, ruled by several dynasties of Turkic, Turko-Indian, and Pathan origins. It ruled large parts of the Indian subcontinent from the 13th century to the early 16th century. In the 12th and 13th centuries, Central Asian Turks invaded parts of northern India and established the Delhi Sultanate in the former Hindu holdings. The subsequent Slave dynasty of Delhi managed to conquer large areas of northern India, while the Khalji dynasty conquered most of central India and forced the principal Hindu kingdoms of South India to become vassal states. However, they were ultimately unsuccessful in conquering and uniting the Indian subcontinent. The Sultanate ushered in a period of Indian cultural renaissance. The resulting "Indo-Muslim" fusion of cultures left lasting syncretic monuments in architecture, music, literature, religion, and clothing. It is surmised that the Urdu language was born during the Delhi Sultanate period as a result of the intermingling of local speakers of Sanskritic Prakrits with immigrants speaking Persian, Turkic, and Arabic under the Muslim rulers.
The Delhi Sultanate was the only Indo-Islamic empire to enthrone a female ruler, Razia Sultana (1236–1240), one of the few women to rule in India. During the Delhi Sultanate, there was a synthesis between Indian civilization and Islamic civilization. The latter was a cosmopolitan civilization, with a multicultural and pluralistic society and wide-ranging international networks, social and economic, spanning large parts of Afro-Eurasia, leading to an escalating circulation of goods, peoples, technologies and ideas. While initially disruptive due to the passing of power from native Indian elites to Turkic Muslim elites, the Delhi Sultanate was responsible for integrating the Indian subcontinent into a growing world system, drawing India into a wider international network, which had a significant impact on Indian culture and society. However, the Delhi Sultanate also caused large-scale destruction and desecration of temples in the Indian subcontinent.

The Mongol invasions of India were successfully repelled by the Delhi Sultanate. A major factor in this success was its Turkic Mamluk slave army, which was highly skilled in the same style of nomadic cavalry warfare as the Mongols, owing to similar nomadic Central Asian roots. The Mongol Empire might well have expanded into India were it not for the Delhi Sultanate's role in repelling it. By repeatedly repulsing the Mongol raiders, the sultanate saved India from the devastation visited on West and Central Asia, setting the scene for centuries of migration of fleeing soldiers, learned men, mystics, traders, artists, and artisans from that region into the subcontinent, thereby creating a syncretic Indo-Islamic culture in the north.

Timur (Tamerlane), a Turco-Mongol conqueror in Central Asia, attacked the reigning Sultan Nasir-ud-Din Mahmud of the Tughlaq dynasty in the north Indian city of Delhi. The Sultan's army was defeated on 17 December 1398. Timur entered Delhi, and the city was sacked, destroyed, and left in ruins after Timur's army had killed and plundered for three days and nights. He ordered the whole city to be sacked except for the sayyids, scholars, and the "other Muslims" (artists); 100,000 war prisoners were put to death in one day. The Sultanate suffered significantly from the sacking of Delhi; it revived briefly under the Lodi dynasty, but remained a shadow of its former self.

The Vijayanagara Empire was established in 1336 by Harihara I and his brother Bukka Raya I of the Sangama dynasty, which originated as a political heir of the Hoysala, Kakatiya, and Pandyan empires. The empire rose to prominence as a culmination of attempts by the south Indian powers to ward off the Islamic invasions of the late 13th century. It lasted until 1646, although its power declined after a major military defeat in 1565 by the combined armies of the Deccan sultanates. The empire is named after its capital city of Vijayanagara, whose ruins surround present-day Hampi, now a World Heritage Site in Karnataka, India. In the first two decades after the founding of the empire, Harihara I gained control over most of the area south of the Tungabhadra river and earned the title of Purvapaschima Samudradhishavara ("master of the eastern and western seas"). By 1374 Bukka Raya I, successor to Harihara I, had defeated the chiefdom of Arcot, the Reddys of Kondavidu, and the Sultan of Madurai, and had gained control over Goa in the west and the Tungabhadra-Krishna River doab in the north.
With the Vijayanagara Kingdom now imperial in stature, Harihara II, the second son of Bukka Raya I, further consolidated the kingdom beyond the Krishna River and brought the whole of South India under the Vijayanagara umbrella. The next ruler, Deva Raya I, emerged successful against the Gajapatis of Odisha and undertook important works of fortification and irrigation. The Italian traveller Niccolò de' Conti described him as the most powerful ruler of India. Deva Raya II (called Gajabetekara) succeeded to the throne in 1424 and was possibly the most capable of the Sangama dynasty rulers. He quelled rebelling feudal lords as well as the Zamorin of Calicut and Quilon in the south. He invaded the island of Sri Lanka and became overlord of the kings of Burma at Pegu and Tenasserim.

The Vijayanagara emperors were tolerant of all religions and sects, as writings by foreign visitors show. The kings used titles such as Gobrahamana Pratipalanacharya (literally, "protector of cows and Brahmins") and Hindurayasuratrana (lit. "upholder of the Hindu faith") that testified to their intention of protecting Hinduism, and yet they were at the same time staunchly Islamicate in their court ceremonials and dress. The empire's founders, Harihara I and Bukka Raya I, were devout Shaivas (worshippers of Shiva), but made grants to the order of Sringeri, with Vidyaranya as their patron saint, and designated Varaha (the boar, an avatar of Vishnu) as their emblem. Archaeological excavation has uncovered an "Islamic Quarter" not far from the "Royal Quarter", and nobles from Central Asia's Timurid kingdoms also came to Vijayanagara. The later Saluva and Tuluva kings were Vaishnava by faith, but worshipped at the feet of Lord Virupaksha (Shiva) at Hampi as well as Lord Venkateshwara (Vishnu) at Tirupati. A Sanskrit work, Jambavati Kalyanam by King Krishnadevaraya, called Lord Virupaksha Karnata Rajya Raksha Mani ("protective jewel of the Karnata Empire"). The kings patronised the saints of the dvaita order (philosophy of dualism) of Madhvacharya at Udupi.

The empire's legacy includes many monuments spread over South India, the best known of which is the group at Hampi. The earlier temple-building traditions of South India came together in the Vijayanagara architectural style. The mingling of all faiths and vernaculars inspired architectural innovation in Hindu temple construction, first in the Deccan and later in the Dravidian idioms, using the local granite. Under the protection of the Vijayanagara Empire, south Indian mathematics flourished in Kerala, where the mathematician Madhava of Sangamagrama founded the famous Kerala School of Astronomy and Mathematics in the 14th century; it produced many great south Indian mathematicians, such as Parameshvara, Nilakantha Somayaji and Jyeṣṭhadeva, in medieval south India. Efficient administration and vigorous overseas trade brought new technologies such as water management systems for irrigation. The empire's patronage enabled fine arts and literature to reach new heights in Kannada, Telugu, Tamil, and Sanskrit, while Carnatic music evolved into its current form.

Vijayanagara went into decline after the defeat in the Battle of Talikota (1565). After the death of Aliya Rama Raya in the battle, Tirumala Deva Raya started the Aravidu dynasty, founded a new capital at Penukonda to replace the destroyed Hampi, and attempted to reconstitute the remains of the Vijayanagara Empire.
Tirumala abdicated in 1572, dividing the remains of his kingdom among his three sons, and pursued a religious life until his death in 1578. The Aravidu dynasty successors ruled the region, but the empire collapsed in 1614, and its final remnants ended in 1646 after continued wars with the Bijapur Sultanate and others. During this period, more kingdoms in South India became independent of Vijayanagara. These include the Mysore Kingdom, Keladi Nayaka, the Nayaks of Madurai, the Nayaks of Tanjore, the Nayakas of Chitradurga and the Nayak Kingdom of Gingee – all of which declared independence and went on to have a significant impact on the history of South India in the coming centuries.

For two and a half centuries from the mid-13th century, politics in Northern India was dominated by the Delhi Sultanate, and in Southern India by the Vijayanagara Empire. However, there were other regional powers present as well. After the fall of the Pala Empire, the Chero dynasty ruled much of eastern Uttar Pradesh, Bihar and Jharkhand from the 12th to the 18th century. The Reddy dynasty successfully defeated the Delhi Sultanate and extended its rule from Cuttack in the north to Kanchi in the south, eventually being absorbed into the expanding Vijayanagara Empire.

In the north, the Rajput kingdoms remained the dominant force in Western and Central India. The Mewar dynasty under Maharana Hammir, with the Bargujars as his main allies, defeated and captured Muhammad Tughlaq. Tughlaq had to pay a huge ransom and relinquish all of Mewar's lands. After this event, the Delhi Sultanate did not attack Chittor for a few hundred years. The Rajputs re-established their independence, and Rajput states were established as far east as Bengal and north into the Punjab. The Tomaras established themselves at Gwalior, and Man Singh Tomar reconstructed the Gwalior Fort, which still stands there. During this period, Mewar emerged as the leading Rajput state, and Rana Kumbha expanded his kingdom at the expense of the sultanates of Malwa and Gujarat. The next great Rajput ruler, Rana Sanga of Mewar, became the principal player in Northern India. His objectives grew in scope: he planned to conquer Delhi, the much sought-after prize of the Muslim rulers of the time. But his defeat in the Battle of Khanwa consolidated the new Mughal dynasty in India. The Mewar dynasty under Maharana Udai Singh II faced further defeat by the Mughal emperor Akbar, with their capital Chittor being captured. Due to this event, Udai Singh II founded Udaipur, which became the new capital of the Mewar kingdom. His son, Maharana Pratap of Mewar, firmly resisted the Mughals. Akbar sent many missions against him, but Pratap survived and ultimately gained control of all of Mewar, excluding the Chittor Fort.

In the south, the Bahmani Sultanate, which was established either by a Brahman convert or with the patronage of a Brahman, and which from that source took the name Bahmani, was the chief rival of Vijayanagara and frequently created difficulties for it. In the early 16th century Krishnadevaraya of the Vijayanagara Empire defeated the last remnant of Bahmani Sultanate power, after which the Bahmani Sultanate collapsed and split into five small Deccan sultanates. In 1490, Ahmadnagar declared independence, followed by Bijapur and Berar in the same year; Golkonda became independent in 1518 and Bidar in 1528. Although generally rivals, the sultanates did ally against the Vijayanagara Empire in 1565, permanently weakening Vijayanagara in the Battle of Talikota.
In the East, the Gajapati Kingdom remained a strong regional power, associated with a high point in the growth of regional culture and architecture. Under Kapilendradeva, the Gajapatis became an empire stretching from the lower Ganga in the north to the Kaveri in the south. In Northeast India, the Ahom Kingdom was a major power for six centuries; led by Lachit Borphukan, the Ahoms decisively defeated the Mughal army at the Battle of Saraighat during the Ahom-Mughal conflicts. Further east in Northeastern India was the Kingdom of Manipur, which ruled from its seat of power at Kangla Fort and developed a sophisticated Hindu Gaudiya Vaishnavite culture.

The Sultanate of Bengal was the dominant power of the Ganges–Brahmaputra Delta, with a network of mint towns spread across the region. It was a Sunni Muslim monarchy with Indo-Turkic, Arab, Abyssinian and Bengali Muslim elites. The sultanate was known for its religious pluralism, where non-Muslim communities co-existed peacefully. The Bengal Sultanate had a circle of vassal states, including Odisha in the southwest, Arakan in the southeast, and Tripura in the east. In the early 16th century, the Bengal Sultanate reached the peak of its territorial growth, with control over Kamrup and Kamata in the northeast and Jaunpur and Bihar in the west. It was reputed as a thriving trading nation and one of Asia's strongest states. The Bengal Sultanate was described by contemporary European and Chinese visitors as a relatively prosperous kingdom. Due to the abundance of goods in Bengal, the region was described as the "richest country to trade with". The Bengal Sultanate left a strong architectural legacy; buildings from the period show foreign influences merged into a distinct Bengali style. It was also the largest and most prestigious authority among the independent medieval Muslim-ruled states in the history of Bengal. Its decline began with an interregnum by the Suri Empire, followed by Mughal conquest and disintegration into petty kingdoms.

Bhakti movement and Sikhism

The Bhakti movement refers to the theistic devotional trend that emerged in medieval Hinduism and was later revolutionised within Sikhism. It originated in seventh-century south India (now parts of Tamil Nadu and Kerala) and spread northwards. It swept over east and north India from the 15th century onwards, reaching its zenith between the 15th and 17th centuries CE.
- The Bhakti movement regionally developed around different gods and goddesses, in traditions such as Vaishnavism (Vishnu), Shaivism (Shiva), Shaktism (Shakti goddesses), and Smartism. The movement was inspired by many poet-saints, who championed a wide range of philosophical positions, ranging from the theistic dualism of Dvaita to the absolute monism of Advaita Vedanta.
- Sikhism is based on the spiritual teachings of Guru Nanak, the first Guru, and the ten successive Sikh gurus. After the death of the tenth Guru, Guru Gobind Singh, the Sikh scripture, the Guru Granth Sahib, became the literal embodiment of the eternal, impersonal Guru, whose word serves as the spiritual guide for Sikhs.
- During the late medieval period, Buddhism in India flourished in the Himalayan kingdoms: the Namgyal Kingdom in Ladakh, the Sikkim Kingdom in Sikkim, and the Chutiya Kingdom in Arunachal Pradesh.

Early modern period (c. 1526–1858 CE)

The early modern period of Indian history is dated from 1526 CE to 1858 CE, corresponding to the rise and fall of the Mughal Empire, which was heir to the Timurid Renaissance.
During this age India's economy expanded, relative peace was maintained, and the arts were patronised. This period witnessed the further development of Indo-Islamic architecture, while the Marathas and Sikhs grew powerful enough to rule significant regions of India in the waning days of the Mughal Empire, which formally came to an end when the British Raj was founded.

In 1526, Babur, a Timurid descendant of Timur and Genghis Khan from the Fergana Valley (modern-day Uzbekistan), swept across the Khyber Pass and established the Mughal Empire, which at its zenith covered much of South Asia. However, his son Humayun was defeated by the Afghan warrior Sher Shah Suri in 1540 and forced to retreat to Kabul. After Sher Shah's death, his son Islam Shah Suri and his Hindu general Hemu Vikramaditya established secular rule in North India from Delhi until 1556, when Akbar the Great defeated Hemu in the Second Battle of Panipat on 6 November 1556, after winning the Battle of Delhi.

Akbar the Great, the grandson of Babur, tried to establish a good relationship with the Hindus. Akbar declared "Amari", or non-killing of animals, on the holy days of Jainism, and he rolled back the jizya tax for non-Muslims. The Mughal emperors married local royalty, allied themselves with local maharajas, and attempted to fuse their Turko-Persian culture with ancient Indian styles, creating a unique Indo-Persian culture and Indo-Saracenic architecture. Akbar married a Rajput princess, Mariam-uz-Zamani, and they had a son, Jahangir, who was part-Mughal and part-Rajput, as were future Mughal emperors. Jahangir more or less followed his father's policy. The Mughal dynasty ruled most of the Indian subcontinent by 1600. The reign of Shah Jahan was the golden age of Mughal architecture. He erected several large monuments, the most famous of which is the Taj Mahal at Agra, as well as the Moti Masjid at Agra, the Red Fort, the Jama Masjid in Delhi, and the Lahore Fort.

The Mughal Empire was the second-largest empire to have existed in the Indian subcontinent, and it surpassed China to become the world's largest economic power, controlling 24.4% of the world economy, and the world leader in manufacturing, producing 25% of global industrial output. The economic and demographic upsurge was stimulated by Mughal agrarian reforms that intensified agricultural production, a proto-industrialising economy that began moving towards industrial manufacturing, and a relatively high degree of urbanization for its time.

The Mughal Empire reached the zenith of its territorial expanse during the reign of Aurangzeb, and in his reign it also began its terminal decline, owing to the Maratha military resurgence under Shivaji. The historian Sir J.N. Sarkar wrote, "All seemed to have been gained by Aurangzeb now, but in reality all was lost." Aurangzeb was less tolerant than his predecessors, reintroducing the jizya tax and destroying several historical temples, while at the same time building more Hindu temples than he destroyed, employing significantly more Hindus in his imperial bureaucracy than his predecessors, and opposing Sunni Muslim bigotry against Hindus and Shia Muslims.
However, Aurangzeb is often blamed for eroding the tolerant, syncretic tradition of his predecessors and for increasing brutality and centralisation. Unlike previous emperors, he imposed less pluralistic policies on the general population, which may have inflamed the majority Hindu population and played a large part in the dynasty's downfall after his death. The empire went into decline thereafter. The Mughals suffered several blows due to invasions from Marathas, Jats and Afghans. In 1737, the Maratha general Bajirao invaded and plundered Delhi. Under the general Amir Khan Umrao Al Udat, the Mughal emperor sent 8,000 troops to drive away the 5,000 Maratha cavalry soldiers. Baji Rao, however, easily routed the novice Mughal general, and the rest of the imperial Mughal army fled. In 1737, in the final defeat of the Mughal Empire, the commander-in-chief of the Mughal Army, Nizam-ul-Mulk, was routed at Bhopal by the Maratha army. This essentially brought an end to the Mughal Empire. Meanwhile, Bharatpur State under the Jat ruler Suraj Mal overran the Mughal garrison at Agra and plundered the city, taking away the two great silver doors of the entrance of the famous Taj Mahal, which Suraj Mal melted down in 1763. In 1739, Nader Shah, emperor of Iran, defeated the Mughal army at the Battle of Karnal. After this victory, Nader captured and sacked Delhi, carrying away many treasures, including the Peacock Throne. Mughal rule was further weakened by constant native Indian resistance: Banda Singh Bahadur led the Sikh Khalsa against Mughal religious oppression; the Hindu rajas of Bengal, Pratapaditya and Raja Sitaram Ray, revolted; and Maharaja Chhatrasal of the Bundela Rajputs fought the Mughals and established the Panna State. The Mughal dynasty was reduced to puppet rulers by 1757. The Vadda Ghallughara, an offensive to wipe out the Sikhs carried out by the Muslim provincial government based at Lahore, killed 30,000 Sikhs; the campaign had begun under the Mughals with the Chhota Ghallughara and lasted several decades under their Muslim successor states.

Marathas and Sikhs

In the early 18th century the Maratha Empire extended its suzerainty over the Indian subcontinent. Under the Peshwas, the Marathas consolidated and ruled over much of South Asia, and they are credited to a large extent with ending Mughal rule in India. The Maratha kingdom was founded and consolidated by Chhatrapati Shivaji, a Maratha aristocrat of the Bhonsle clan. However, the credit for making the Marathas a formidable national power goes to Peshwa Bajirao I. The historian K.K. Datta wrote that Bajirao I "may very well be regarded as the second founder of the Maratha Empire". By the early 18th century, the Maratha Kingdom had transformed itself into the Maratha Empire under the rule of the Peshwas (prime ministers). In 1737, the Marathas defeated a Mughal army at its capital in the Battle of Delhi. The Marathas continued their military campaigns against the Mughals, the Nizam, the Nawab of Bengal and the Durrani Empire to further extend their boundaries. By 1760, the domain of the Marathas stretched across most of the Indian subcontinent; the Marathas even discussed abolishing the Mughal throne and placing Vishwasrao Peshwa on the imperial throne in Delhi. The empire at its peak stretched from Tamil Nadu in the south to Peshawar (in modern-day Khyber Pakhtunkhwa, Pakistan) in the north, and to Bengal in the east.
The northwestern expansion of the Marathas was stopped after the Third Battle of Panipat (1761). However, Maratha authority in the north was re-established within a decade under Peshwa Madhavrao I. Under Madhavrao I, the strongest chiefs were granted semi-autonomy, creating a confederacy of Maratha states under the Gaekwads of Baroda, the Holkars of Indore and Malwa, the Scindias of Gwalior and Ujjain, the Bhonsales of Nagpur and the Puars of Dhar and Dewas. In 1775, the East India Company intervened in a Peshwa family succession struggle in Pune, which led to the First Anglo-Maratha War, resulting in a Maratha victory. The Marathas remained a major power in India until their defeats in the Second and Third Anglo-Maratha Wars (1803–1818), which left the East India Company in control of most of India.

The Sikh Empire, ruled by members of the Sikh religion, was a political entity that governed the northwestern regions of the Indian subcontinent. The empire, based around the Punjab region, existed from 1799 to 1849. It was forged, on the foundations of the Khalsa, under the leadership of Maharaja Ranjit Singh (1780–1839) from an array of autonomous Punjabi misls of the Sikh Confederacy. Maharaja Ranjit Singh consolidated many parts of northern India into an empire. He primarily used his Sikh Khalsa Army, which he trained in European military techniques and equipped with modern military technologies. Ranjit Singh proved himself a master strategist and selected well-qualified generals for his army. He continuously defeated the Afghan armies and successfully ended the Afghan-Sikh Wars. In stages, he added central Punjab, the provinces of Multan and Kashmir, and the Peshawar Valley to his empire. At its peak in the 19th century, the empire extended from the Khyber Pass in the west to Kashmir in the north, to Sindh in the south, and along the Sutlej River to Himachal in the east. After the death of Ranjit Singh, the empire weakened, leading to conflict with the British East India Company. The hard-fought First Anglo-Sikh War and Second Anglo-Sikh War marked the downfall of the Sikh Empire, making it among the last areas of the Indian subcontinent to be conquered by the British.

The Kingdom of Mysore in southern India expanded to its greatest extent under Hyder Ali and his son Tipu Sultan in the latter half of the 18th century. Under their rule, Mysore fought a series of wars against the Marathas and the British, or their combined forces. The Maratha–Mysore War ended in April 1787 with the finalising of the Treaty of Gajendragad, under which Tipu Sultan was obligated to pay tribute to the Marathas. Concurrently, the Anglo-Mysore Wars took place, in which the Mysoreans used Mysorean rockets. The Fourth Anglo-Mysore War (1798–1799) saw the death of Tipu. Mysore's alliance with the French was seen as a threat to the British East India Company, and Mysore was attacked from all four sides. The Nizam of Hyderabad and the Marathas launched an invasion from the north. The British won a decisive victory at the Siege of Seringapatam (1799).

Hyderabad was founded by the Qutb Shahi dynasty of Golconda in 1591. Following a brief period of Mughal rule, Asif Jah, a Mughal official, seized control of Hyderabad and declared himself Nizam-al-Mulk of Hyderabad in 1724. The Nizams lost considerable territory and paid tribute to the Maratha Empire after being routed in multiple battles, such as the Battle of Palkhed.
However, the Nizams maintained their sovereignty from 1724 until 1948 by paying tributes to the Marathas and, later, by being vassals of the British. Hyderabad State became a princely state in British India in 1798.

The Nawabs of Bengal had become the de facto rulers of Bengal following the decline of the Mughal Empire. However, their rule was interrupted by the Marathas, who carried out six expeditions into Bengal from 1741 to 1748, as a result of which Bengal became a tributary state of the Marathas. On 23 June 1757, Siraj ud-Daulah, the last independent Nawab of Bengal, was betrayed in the Battle of Plassey by Mir Jafar. He lost to the British, who took charge of Bengal in 1757, installed Mir Jafar on the Masnad (throne), and established themselves as a political power in Bengal. In 1765 the system of Dual Government was established, in which the Nawabs ruled on behalf of the British as mere puppets. In 1772 the system was abolished and Bengal was brought under the direct control of the British. In 1793, when the Nizamat (governorship) was also taken away from them, the Nawabs remained as mere pensioners of the British East India Company.

In the 18th century, the whole of Rajputana was virtually subdued by the Marathas. The Second Anglo-Maratha War distracted the Marathas from 1807 to 1809, but afterward Maratha domination of Rajputana resumed. In 1817, the British went to war with the Pindaris, raiders based in Maratha territory, in what quickly became the Third Anglo-Maratha War, and the British government offered its protection to the Rajput rulers from the Pindaris and the Marathas. By the end of 1818 similar treaties had been executed between the other Rajput states and Britain. The Maratha Sindhia ruler of Gwalior gave up the district of Ajmer-Merwara to the British, and Maratha influence in Rajasthan came to an end. Most of the Rajput princes remained loyal to Britain in the Revolt of 1857, and few political changes were made in Rajputana until Indian independence in 1947. The Rajputana Agency contained more than 20 princely states, the most notable being Udaipur State, Jaipur State, Bikaner State and Jodhpur State.

After the fall of the Maratha Empire, many Maratha dynasties and states became vassals in a subsidiary alliance with the British, forming the largest bloc of princely states in the British Raj in terms of territory and population. With the decline of the Sikh Empire after the First Anglo-Sikh War in 1846, under the terms of the Treaty of Amritsar, the British government sold Kashmir to Maharaja Gulab Singh, and the princely state of Jammu and Kashmir, the second-largest princely state in British India, was created by the Dogra dynasty. In Eastern and Northeastern India, the Hindu and Buddhist states of the Cooch Behar Kingdom, the Twipra Kingdom and the Kingdom of Sikkim were annexed by the British and made vassal princely states. After the fall of the Vijayanagara Empire, Polygar states emerged in Southern India; they managed to weather invasions and flourished until the Polygar Wars, in which they were defeated by British East India Company forces. Around the 18th century, the Kingdom of Nepal was formed by Rajput rulers.

In 1498, a Portuguese fleet under Vasco da Gama discovered a new sea route from Europe to India, which paved the way for direct Indo-European commerce. The Portuguese soon set up trading posts in Goa, Daman, Diu and Bombay.
After their conquest of Goa, the Portuguese instituted the Goa Inquisition, under which new Indian converts and non-Christians were punished for suspected heresy against Christianity and condemned to be burnt. Goa became the main Portuguese base until it was annexed by India in 1961.

The next to arrive were the Dutch, with their main base in Ceylon. They established ports in Malabar. However, their expansion into India was halted after their defeat in the Battle of Colachel by the Kingdom of Travancore during the Travancore-Dutch War. The Dutch never recovered from the defeat and no longer posed a large colonial threat to India.

The internal conflicts among Indian kingdoms gave European traders opportunities to gradually establish political influence and appropriate lands. Following the Dutch, the British—who set up a trading post in the west-coast port of Surat in 1619—and the French both established trading outposts in India. Although these continental European powers controlled various coastal regions of southern and eastern India during the ensuing century, they eventually lost all their territories in India to the British, with the exception of the French outposts of Pondichéry and Chandernagore and the Portuguese colonies of Goa, Daman and Diu.

East India Company rule in India

The English East India Company was founded in 1600 as The Company of Merchants of London Trading into the East Indies. It gained a foothold in India with the establishment of a factory in Masulipatnam on the eastern coast of India in 1611 and the grant of rights by the Mughal emperor Jahangir to establish a factory in Surat in 1612. In 1640, after receiving similar permission from the Vijayanagara ruler farther south, a second factory was established in Madras on the southeastern coast. Bombay island, not far from Surat, a former Portuguese outpost gifted to England as dowry in the marriage of Catherine of Braganza to Charles II, was leased by the company in 1668. Two decades later, the company established a presence on the eastern coast as well; far up that coast, in the Ganges River delta, a factory was set up in Calcutta. Since other companies—established by the Portuguese, Dutch, French, and Danish—were similarly expanding in the region during this time, the English Company's unremarkable beginnings on coastal India offered no clues to what would become a lengthy presence on the Indian subcontinent.

The company's victory under Robert Clive in the 1757 Battle of Plassey, and another victory in the 1764 Battle of Buxar (in Bihar), consolidated the company's power and forced Emperor Shah Alam II to appoint it the diwan, or revenue collector, of Bengal, Bihar, and Orissa. The company thus became the de facto ruler of large areas of the lower Gangetic plain by 1773. It also proceeded by degrees to expand its dominions around Bombay and Madras. The Anglo-Mysore Wars (1766–99) and the Anglo-Maratha Wars (1772–1818) left it in control of large areas of India south of the Sutlej River. With the defeat of the Marathas, no native power represented a threat to the company any longer.

The expansion of the company's power chiefly took two forms. The first was the outright annexation of Indian states and subsequent direct governance of the underlying regions, which collectively came to comprise British India. The annexed regions included the North-Western Provinces (comprising Rohilkhand, Gorakhpur, and the Doab) (1801), Delhi (1803), Assam (Ahom Kingdom, 1828), and Sindh (1843).
Punjab, the North-West Frontier Province, and Kashmir were annexed after the Anglo-Sikh Wars in 1849–56 (during the tenure of the Marquess of Dalhousie as Governor-General); however, Kashmir was immediately sold under the Treaty of Amritsar (1846) to the Dogra dynasty of Jammu and thereby became a princely state. In 1854 Berar was annexed, and the state of Oudh two years later.

The second form of asserting power involved treaties in which Indian rulers acknowledged the company's hegemony in return for limited internal autonomy. Since the company operated under financial constraints, it had to set up political underpinnings for its rule. The most important such support came from the subsidiary alliances with Indian princes during the first 75 years of Company rule. In the early 19th century, the territories of these princes accounted for two-thirds of India. When an Indian ruler who was able to secure his territory wanted to enter such an alliance, the company welcomed it as an economical method of indirect rule, which did not involve the economic costs of direct administration or the political costs of gaining the support of alien subjects. In return, the company undertook the "defense of these subordinate allies and treated them with traditional respect and marks of honor." Subsidiary alliances created the princely states of the Hindu maharajas and the Muslim nawabs. Prominent among the princely states were Cochin (1791), Jaipur (1794), Travancore (1795), Hyderabad (1798), Mysore (1799), the Cis-Sutlej Hill States (1815), the Central India Agency (1819), Cutch and the Gujarat Gaikwad territories (1819), Rajputana (1818), and Bahawalpur (1833).

Indian indenture system

The Indian indenture system was a system of indentured labour, a form of debt bondage, by which 3.5 million Indians were transported to various colonies of European powers to provide labour for the (mainly sugar) plantations. It began with the end of slavery in 1833 and continued until 1920. This resulted in the development of a large Indian diaspora, which spread from the Indian Ocean (e.g. Réunion and Mauritius) to the Pacific Ocean (e.g. Fiji), as well as the growth of Indo-Caribbean and Indo-African populations.

Modern period and independence (after c. 1850 CE)

Rebellion of 1857 and its consequences

Bahadur Shah Zafar, the last Mughal Emperor, was crowned Emperor of India by the rebels; he was deposed by the British and died in exile in Burma.

The Indian Rebellion of 1857 was a large-scale rebellion by soldiers employed by the British East India Company in northern and central India against the company's rule. The spark that led to the mutiny was the issue of new gunpowder cartridges for the Enfield rifle, which were insensitive to local religious prohibitions; a key mutineer was Mangal Pandey. In addition, underlying grievances over British taxation, the ethnic gulf between the British officers and their Indian troops, and land annexations played a significant role in the rebellion. Within weeks of Pandey's mutiny, dozens of units of the Indian army joined peasant armies in widespread rebellion. The rebel soldiers were later joined by Indian nobility, many of whom had lost titles and domains under the Doctrine of Lapse and felt that the company had interfered with a traditional system of inheritance. Rebel leaders such as Nana Sahib and the Rani of Jhansi belonged to this group. After the outbreak of the mutiny in Meerut, the rebels very quickly reached Delhi.
The rebels also captured large tracts of the North-Western Provinces and Awadh (Oudh). Most notably in Awadh, the rebellion took on the attributes of a patriotic revolt against the British presence. The British East India Company mobilised rapidly with the assistance of friendly princely states, but it took the British the remainder of 1857 and the better part of 1858 to suppress the rebellion. Poorly equipped and with no outside support or funding, the rebels were brutally subdued by the British. In the aftermath, all power was transferred from the British East India Company to the British Crown, which began to administer most of India as a number of provinces. The Crown controlled the company's lands directly and had considerable indirect influence over the rest of India, which consisted of the princely states ruled by local royal families. There were officially 565 princely states in 1947, but only 21 had actual state governments, and only three were large (Mysore, Hyderabad, and Kashmir). They were absorbed into the independent nation in 1947–48.

British Raj (1858–1947)

After 1857, the colonial government strengthened and expanded its infrastructure via the court system, legal procedures, and statutes. The Indian Penal Code came into being. In education, Thomas Babington Macaulay had made schooling a priority for the Raj in his famous minute of February 1835 and succeeded in implementing the use of English as the medium of instruction. By 1890 some 60,000 Indians had matriculated. The Indian economy grew at about 1% per year from 1880 to 1920, and the population also grew at 1%. However, from the 1910s Indian private industry began to grow significantly. India built a modern railway system in the late 19th century, which was the fourth largest in the world. The British Raj invested heavily in infrastructure, including canals and irrigation systems in addition to railways, telegraphy, roads and ports. However, historians have been bitterly divided on issues of economic history, with the Nationalist school arguing that India was poorer at the end of British rule than at the beginning and that impoverishment occurred because of the British.

In 1905, Lord Curzon split the large province of Bengal into a largely Hindu western half and "Eastern Bengal and Assam", a largely Muslim eastern half. The British goal was said to be efficient administration, but the people of Bengal were outraged at the apparent "divide and rule" strategy. It also marked the beginning of the organised anti-colonial movement. When the Liberal Party in Britain came to power in 1906, Curzon was removed. Bengal was reunified in 1911. The new Viceroy Gilbert Minto and the new Secretary of State for India John Morley consulted with Congress leaders on political reforms. The Morley-Minto reforms of 1909 provided for Indian membership of the provincial executive councils as well as the Viceroy's executive council. The Imperial Legislative Council was enlarged from 25 to 60 members, and separate communal representation for Muslims was established in a dramatic step towards representative and responsible government.

Several socio-religious organisations came into being at that time. Muslims set up the All India Muslim League in 1906. It was not a mass party but was designed to protect the interests of aristocratic Muslims. It was internally divided by conflicting loyalties to Islam, the British, and India, and by distrust of Hindus.
The Akhil Bharatiya Hindu Mahasabha and the Rashtriya Swayamsevak Sangh (RSS) sought to represent Hindu interests, though the latter always claimed to be a "cultural" organisation. Sikhs founded the Shiromani Akali Dal in 1920. However, the largest and oldest political party, the Indian National Congress, founded in 1885, attempted to keep its distance from the socio-religious movements and identity politics.

The Bengali Renaissance refers to a social reform movement, dominated by Bengali Hindus, during the nineteenth and early twentieth centuries in the Bengal region of the Indian subcontinent under British rule. The historian Nitish Sengupta describes the renaissance as having started with the reformer and humanitarian Raja Ram Mohan Roy (1775–1833) and ended with Asia's first Nobel laureate, Rabindranath Tagore (1861–1941). This flowering of religious and social reformers, scholars, and writers is described by the historian David Kopf as "one of the most creative periods in Indian history." During this period, Bengal witnessed an intellectual awakening that is in some ways similar to the European Renaissance. This movement questioned existing orthodoxies, particularly with respect to women, marriage, the dowry system, the caste system, and religion. One of the earliest social movements that emerged during this time was the Young Bengal movement, which espoused rationalism and atheism as the common denominators of civil conduct among upper-caste educated Hindus. It played an important role in reawakening Indian minds and intellect across the Indian subcontinent.

During Company rule in India and the British Raj, famines in India were some of the worst ever recorded. These famines, often resulting from crop failures due to El Niño that were exacerbated by the destructive policies of the colonial government, included the Great Famine of 1876–78, in which 6.1 million to 10.3 million people died; the Great Bengal famine of 1770, in which up to 10 million people died; the Indian famine of 1899–1900, in which 1.25 to 10 million people died; and the Bengal famine of 1943, in which up to 3.8 million people died. The Third Plague Pandemic, beginning in the mid-19th century, killed 10 million people in India. Between 15 and 29 million Indians died in such famines during British rule. Despite persistent diseases and famines, the population of the Indian subcontinent, which stood at up to 200 million in 1750, had reached 389 million by 1941.

World War I

Indian Army gunners (probably 39th Battery) with 3.7 inch mountain howitzers, Jerusalem, 1917.

During World War I, over 800,000 Indians volunteered for the army, and more than 400,000 volunteered for non-combat roles, compared with the pre-war annual recruitment of about 15,000 men. The Army saw action on the Western Front within a month of the start of the war, at the First Battle of Ypres. After a year of front-line duty, sickness and casualties had reduced the Indian Corps to the point where it had to be withdrawn. Nearly 700,000 Indians fought the Turks in the Mesopotamian campaign. Indian formations were also sent to East Africa, Egypt, and Gallipoli. Indian Army and Imperial Service Troops fought during the Sinai and Palestine Campaign's defence of the Suez Canal in 1915, at Romani in 1916 and at Jerusalem in 1917. Indian units occupied the Jordan Valley, and after the Spring Offensive they became the major force in the Egyptian Expeditionary Force during the Battle of Megiddo and in the Desert Mounted Corps' advance to Damascus and on to Aleppo.
Other divisions remained in India, guarding the North-West Frontier and fulfilling internal security obligations. One million Indian troops served abroad during the war. In total, 74,187 died, and another 67,000 were wounded. The roughly 90,000 soldiers who lost their lives fighting in World War I and the Afghan Wars are commemorated by the India Gate.

World War II

Sikh soldiers of the British Indian Army being executed by the Japanese (Imperial War Museum, London).

British India officially declared war on Nazi Germany in September 1939. The British Raj, as part of the Allied Nations, sent over two and a half million volunteer soldiers to fight under British command against the Axis powers. Additionally, several Indian princely states provided large donations to support the Allied campaign during the war. India also provided the base for American operations in support of China in the China Burma India Theatre.

Indians fought with distinction throughout the world, including in the European theatre against Germany, in North Africa against Germany and Italy, against the Italians in East Africa, in the Middle East against the Vichy French, in the South Asian region defending India against the Japanese, and fighting the Japanese in Burma. Indians also aided in liberating British colonies such as Singapore and Hong Kong after the Japanese surrender in August 1945. Over 87,000 soldiers from the subcontinent died in World War II.

The Indian National Congress denounced Nazi Germany but would not fight it or anyone else until India was independent. Congress launched the Quit India Movement in August 1942, refusing to co-operate in any way with the government until independence was granted. The government was ready for this move: it immediately arrested over 60,000 national and local Congress leaders. The Muslim League rejected the Quit India Movement and worked closely with the Raj authorities.

Subhas Chandra Bose (also called Netaji) broke with Congress and tried to form a military alliance with Germany or Japan to gain independence. The Germans assisted Bose in the formation of the Indian Legion; however, it was Japan that helped him revamp the Indian National Army (INA) after the First Indian National Army under Mohan Singh was dissolved. The INA fought under Japanese direction, mostly in Burma. Bose also headed the Provisional Government of Free India (Azad Hind), a government-in-exile based in Singapore. The government of Azad Hind had its own currency, court, and civil code, and in the eyes of some Indians its existence gave a greater legitimacy to the independence struggle against the British.

By 1942, neighbouring Burma had been invaded by Japan, which by then had already captured the Indian territory of the Andaman and Nicobar Islands. Japan gave nominal control of the islands to the Provisional Government of Free India on 21 October 1943, and in the following March the Indian National Army, with the help of Japan, crossed into India and advanced as far as Kohima in Nagaland. This advance reached its farthest point on Indian territory before the INA retreated, from the Battle of Kohima in June and from that of Imphal on 3 July 1944.

The region of Bengal in British India suffered a devastating famine in 1943. An estimated 2.1–3 million died from the famine, which is frequently characterised as "man-made", with historians asserting that wartime colonial policies and Winston Churchill's animosity and racism toward Indians exacerbated the crisis.
Indian independence movement (1885–1947)

The numbers of British in India were small, yet they were able to rule 52% of the Indian subcontinent directly and exercise considerable leverage over the princely states that accounted for the remaining 48% of the area.

One of the most important events of the 19th century was the rise of Indian nationalism, leading Indians to seek first "self-rule" and later "complete independence". However, historians are divided over the causes of its rise. Probable reasons include a "clash of interests of the Indian people with British interests", "racial discriminations", and "the revelation of India's past". The first step toward Indian self-rule was the appointment of councillors to advise the British viceroy in 1861; the first Indian was appointed in 1909. Provincial councils with Indian members were also set up. The councillors' participation was subsequently widened into legislative councils. The British built a large British Indian Army, with the senior officers all British and many of the troops drawn from small minority groups such as Gurkhas from Nepal and Sikhs. The civil service was increasingly filled with natives at the lower levels, with the British holding the more senior positions.

Bal Gangadhar Tilak, an Indian nationalist leader, declared Swaraj to be the destiny of the nation. His popular declaration "Swaraj is my birthright, and I shall have it" became a source of inspiration for Indians. Tilak was backed by rising public leaders like Bipin Chandra Pal and Lala Lajpat Rai, who held the same point of view; notably, they advocated the Swadeshi movement, involving the boycott of all imported items and the use of Indian-made goods. The triumvirate were popularly known as Lal Bal Pal. Under them, India's three big provinces – Maharashtra, Bengal and Punjab – shaped the demands of the people and India's nationalism. In 1907, the Congress split into two factions: the radicals, led by Tilak, advocated civil agitation and direct revolution to overthrow the British Empire and the abandonment of all things British, while the moderates, led by leaders like Dadabhai Naoroji and Gopal Krishna Gokhale, wanted reform within the framework of British rule.

The British themselves adopted a "carrot and stick" approach in recognition of India's support during the First World War and in response to renewed nationalist demands. The means of achieving the proposed measures were later enshrined in the Government of India Act 1919, which introduced the principle of a dual mode of administration, or diarchy, in which elected Indian legislators and appointed British officials shared power. In 1919, Colonel Reginald Dyer ordered his troops to fire on peaceful protestors, including unarmed women and children, resulting in the Jallianwala Bagh massacre, which led to the Non-cooperation Movement of 1920–22. The massacre was a decisive episode towards the end of British rule in India. From 1920, leaders such as Mahatma Gandhi began highly popular mass movements to campaign against the British Raj using largely peaceful methods. The Gandhi-led independence movement opposed British rule using non-violent methods like non-co-operation, civil disobedience and economic resistance.
However, revolutionary activities against British rule also took place throughout the Indian subcontinent, and some adopted a militant approach, like the Hindustan Republican Association, founded by Chandrasekhar Azad, Bhagat Singh, Sukhdev Thapar and others, which sought to overthrow British rule by armed struggle. The Government of India Act 1935 was a major constitutional advance in the struggle for self-rule.

The All India Azad Muslim Conference gathered in Delhi in April 1940 to voice its support for an independent and united India. Its members included several Islamic organisations in India, as well as 1,400 nationalist Muslim delegates. The pro-separatist All-India Muslim League worked to try to silence those nationalist Muslims who stood against the partition of India, often using "intimidation and coercion". The murder of the All India Azad Muslim Conference leader Allah Bakhsh Soomro also made it easier for the All-India Muslim League to demand the creation of Pakistan.

After World War II (c. 1946–1947)

In January 1946, several mutinies broke out in the armed services, starting with that of RAF servicemen frustrated with their slow repatriation to Britain. The mutinies came to a head with the mutiny of the Royal Indian Navy in Bombay in February 1946, followed by others in Calcutta, Madras, and Karachi. The mutinies were rapidly suppressed. Also in early 1946, new elections were called, and Congress candidates won in eight of the eleven provinces. Late in 1946, the Labour government decided to end British rule of India, and in early 1947 it announced its intention of transferring power no later than June 1948 and of participating in the formation of an interim government.

Along with the desire for independence, tensions between Hindus and Muslims had also been developing over the years. The Muslims had always been a minority within the Indian subcontinent, and the prospect of an exclusively Hindu government made them wary of independence; they were as inclined to mistrust Hindu rule as they were to resist the foreign Raj, although Gandhi called for unity between the two groups in an astonishing display of leadership. Muslim League leader Muhammad Ali Jinnah proclaimed 16 August 1946 as Direct Action Day, with the stated goal of highlighting, peacefully, the demand for a Muslim homeland in British India; it resulted in the outbreak of the cycle of violence that would later be called the "Great Calcutta Killing of August 1946". The communal violence spread to Bihar (where Muslims were attacked by Hindus), to Noakhali in Bengal (where Hindus were targeted by Muslims), to Garhmukteshwar in the United Provinces (where Muslims were attacked by Hindus), and on to Rawalpindi in March 1947, where Hindus were attacked or driven out by Muslims.

Independence and partition (c. 1947–present)

In August 1947, the British Indian Empire was partitioned into the Union of India and the Dominion of Pakistan. In particular, the partition of Punjab and Bengal led to rioting between Hindus, Muslims, and Sikhs in these provinces, which spread to other nearby regions, leaving some 500,000 dead. The police and army units were largely ineffective: the British officers were gone, and the units were beginning to tolerate, if not actually indulge in, violence against their religious enemies.
Also, this period saw one of the largest mass migrations anywhere in modern history, with a total of 12 million Hindus, Sikhs and Muslims moving between the newly created nations of India and Pakistan (which gained independence on 15 and 14 August 1947, respectively). In 1971, Bangladesh, formerly East Pakistan and East Bengal, seceded from Pakistan.

Historiography

In recent decades there have been four main schools of historiography in how historians study India: Cambridge, Nationalist, Marxist, and subaltern. The once common "Orientalist" approach, with its image of a sensuous, inscrutable, and wholly spiritual India, has died out in serious scholarship. The "Cambridge School", led by Anil Seal, Gordon Johnson, Richard Gordon, and David A. Washbrook, downplays ideology; however, this school of historiography is criticised for Western bias or Eurocentrism. The Nationalist school has focused on Congress, Gandhi, Nehru and high-level politics. It highlighted the Mutiny of 1857 as a war of liberation, and Gandhi's "Quit India", begun in 1942, as defining historical events. This school of historiography has received criticism for elitism. The Marxists have focused on studies of economic development, landownership, and class conflict in precolonial India and of deindustrialisation during the colonial period. The Marxists portrayed Gandhi's movement as a device of the bourgeois elite to harness popular, potentially revolutionary forces for its own ends. In turn, the Marxists are accused of being too ideologically influenced. The "subaltern school" was begun in the 1980s by Ranajit Guha and Gyan Prakash. It shifts attention away from the elites and politicians to "history from below", looking at the peasants using folklore, poetry, riddles, proverbs, songs, oral history and methods inspired by anthropology. It focuses on the colonial era before 1947 and typically emphasises caste and downplays class, to the annoyance of the Marxist school. More recently, Hindu nationalists have created a version of history to support their demands for "Hindutva" ("Hinduness") in Indian society. This school of thought is still in the process of development. In March 2012, Diana L. Eck, professor of Comparative Religion and Indian Studies at Harvard University, wrote in her book India: A Sacred Geography that the idea of India dates to a much earlier time than the British or the Mughals, and that it was not just a cluster of regional identities, nor was it ethnic or racial.
- Upinder Singh 2008, pp. 260–264. - Anguttara Nikaya I. p. 213; IV. pp. 252, 256, 261. - Reddy 2003, p. A107. - Thapar, Romila (2002). Early India: From the Origins to AD 1300. University of California. pp. 146–150. ISBN 978-0520242258. Retrieved 28 October 2013. - Raychaudhuri Hemchandra (1972), Political History of Ancient India, Calcutta: University of Calcutta, p. 107 - Republics in ancient India. Brill Archive. pp. 93–. GGKEY:HYY6LT5CFT0. - Kenoyer, J.M. (2006), "Cultures and Societies of the Indus Tradition", in Historical Roots in the Making of 'the Aryan', R. Thapar (ed.), pp. 21–49. New Delhi: National Book Trust. - Shaffer, Jim. 1993, "Reurbanization: The eastern Punjab and beyond". In Urban Form and Meaning in South Asia: The Shaping of Cities from Prehistoric to Precolonial Times, ed. H. Spodek and D.M. Srinivasan. - Ramesh Chandra Majumdar (1977). Ancient India. Motilal Banarsidass Publishers. ISBN 978-81-208-0436-4. - "Magadha Empire". - "Lumbini Development Trust: Restoring the Lumbini Garden". lumbinitrust.org. Archived from the original on 6 March 2014. Retrieved 6 January 2017. - Mookerji 1988, pp. 28–33. - Upinder Singh 2008, p. 273. - Mookerji 1988, p. 34. - Sastri, K. A. Nilakanta, ed. (1988) [First published 1952]. Age of the Nandas and Mauryas (2nd ed.). Motilal Banarsidass. p. 16. ISBN 978-81-208-0465-4. - Gabriel, Richard A. (2002), The great armies of antiquity (1st ed.), Westport, Conn. [u.a.]: Praeger, p. 218, ISBN 978-0-275-97809-9, archived from the original on 5 January 2014 - Raychaudhuri, H. C.; Mukherjee, B. N. (1996) [First published 1923]. Political History of Ancient India: From the Accession of Parikshit to the Extinction of the Gupta Dynasty (8th ed.). Oxford University Press. pp. 204–210. ISBN 978-0-19-563789-2. - Turchin, Peter; Adams, Jonathan M.; Hall, Thomas D. (December 2006). "East–West Orientation of Historical Empires". Journal of World-Systems Research. 12 (2): 223. ISSN 1076-156X. Retrieved 12 September 2016. - Romila Thapar. A History of India: Volume 1. p. 70. - Thapar 2003, pp. 178–180. - Thapar 2003, pp. 204–206. - Bhandari, Shirin (5 January 2016). "Dinner on the Grand Trunk Road". Roads & Kingdoms. Retrieved 19 July 2016. - Kulke & Rothermund 2004, p. 67. - Romila Thapar. A History of India: Volume 1. p. 78. - Antonova, Bongard-Levin & Kotovsky 1979, p. 91. - Rosen, Elizabeth S. (1975). "Prince ILango Adigal, Shilappadikaram (The Ankle Bracelet), translated by Alain Daniélou. Review". Artibus Asiae. 37 (1/2): 148–150. doi:10.2307/3250226. JSTOR 3250226. - Sen 1999, pp. 204–205. - Essays on Indian Renaissance by Raj Kumar p. 260 - The First Spring: The Golden Age of India by Abraham Eraly p. 655 - Zvelebil, Kamil. 1973. The smile of Murugan on Tamil literature of South India. Leiden: Brill. Zvelebil dates the Ur-Tholkappiyam to the 1st or 2nd century BCE - "Silappathikaram Tamil Literature". Tamilnadu.com. 22 January 2013. Archived from the original on 11 April 2013. - Mukherjee 1999, p. 277 - Manimekalai – English transliteration of Tamil original - Hardy, Adam (1995). Indian Temple Architecture: Form and Transformation: the Karṇāṭa Drāviḍa Tradition, 7th to 13th Centuries. Abhinav Publications. p. 39. ISBN 978-81-7017-312-0. - Le, Huu Phuoc (2010). Buddhist Architecture. Grafikol. p. 238. ISBN 978-0-9844043-0-8. - Stein, B. (27 April 2010), Arnold, D. (ed.), A History of India (2nd ed.), Oxford: Wiley-Blackwell, p. 105, ISBN 978-1-4051-9509-6 - "The World Economy (GDP): Historical Statistics by Professor Angus Maddison" (PDF).
World Economy. Retrieved 21 May 2013. - Maddison, Angus (2006). The World Economy – Volume 1: A Millennial Perspective and Volume 2: Historical Statistics. OECD Publishing by Organisation for Economic Co-operation and Development. p. 656. ISBN 978-92-64-02262-1. - Stadtner, Donald (1975). "A Śuṅga Capital from Vidiśā". Artibus Asiae. 37 (1/2): 101–104. doi:10.2307/3250214. ISSN 0004-3648. JSTOR 3250214. - K. A. Nilkantha Shastri (1970), A Comprehensive History of India: Volume 2, p. 108: "Soon after Agnimitra there was no 'Sunga empire'". - Bhandare, Shailendra. "Numismatics and History: The Maurya-Gupta Interlude in the Gangetic Plain" in Between the Empires: Society in India, 300 to 400 ed. Patrick Olivelle (2006), p. 96 - Schreiber, Mordecai (2003). The Shengold Jewish Encyclopedia. Rockville, MD: Schreiber Publishing. p. 125. ISBN 978-1-887563-77-2. - The Medical Times and Gazette, Volume 1. London: John Churchill. 1867. p. 506. (Original from the University of Michigan) - Donkin 2003: 63 - Collingham 2006: 245 - Fage 1975: 164 - Greatest emporium in the world, CSI, UNESCO. - Loewe, Michael; Shaughnessy, Edward L. (1999). The Cambridge History of Ancient China: From the Origins of Civilization to 221 BC. Cambridge University Press. pp. 87–88. ISBN 978-0-521-47030-8. Retrieved 1 November 2013. - Runion, Meredith L. (2007). The history of Afghanistan. Westport: Greenwood Press. p. 46. ISBN 978-0-313-33798-7. The Yuezhi people conquered Bactria in the second century BCE and divided the country into five chiefdoms, one of which would become the Kushan Empire. Recognizing the importance of unification, these five tribes combined under the one dominant Kushan tribe, and the primary rulers descended from the Yuezhi. - Liu, Xinrui (2001). Adas, Michael (ed.). Agricultural and pastoral societies in ancient and classical history. Philadelphia: Temple University Press. p. 156. ISBN 978-1-56639-832-9. - Buddhist Records of the Western World Si-Yu-Ki, (Tr. Samuel Beal: Travels of Fa-Hian, The Mission of Sung-Yun and Hwei-Sing, Books 1–5), Kegan Paul, Trench, Trubner & Co. Ltd. London. 1906 and Hill (2009), pp. 29, 318–350 - which began about 127 CE: Falk (2001), pp. 121–136; Falk, Harry (2004), pp. 167–176; and Hill (2009), pp. 29, 33, 368–371. - Grégoire Frumkin (1970). Archaeology in Soviet Central Asia. Brill Archive. pp. 51–. GGKEY:4NPLATFACBB. - Rafi U. Samad (2011). The Grandeur of Gandhara: The Ancient Buddhist Civilization of the Swat, Peshawar, Kabul and Indus Valleys. Algora Publishing. pp. 93–. ISBN 978-0-87586-859-2. - Oxford History of India – Vincent Smith - Los Angeles County Museum of Art; Pratapaditya Pal (1986). Indian Sculpture: Circa 500 B.C.–A.D. 700. University of California Press. pp. 151–. ISBN 978-0-520-05991-7. - Ancient and Medieval History of India – H.G. Rowlinson - "The History of Pakistan: The Kushans". kushan.org. Retrieved 6 January 2017. - Si-Yu-Ki, Buddhist Records of the Western World, (Tr. Samuel Beal: Travels of Fa-Hian, The Mission of Sung-Yun and Hwei-Sing, Books 1–5), Kegan Paul, Trench, Trubner & Co. Ltd. London. 1906 - "Gupta dynasty: empire in 4th century". Encyclopædia Britannica. Archived from the original on 30 March 2010. Retrieved 16 May 2010. - "The Story of India – Photo Gallery". PBS. Retrieved 16 May 2010. - Iaroslav Lebedynsky, Les Nomades, p. 172. - Early History of India, p. 339, Dr V.A. Smith; See also Early Empire of Central Asia (1939), W.M. McGovern. - Ancient India, 2003, p. 650, Dr V.D.
Mahajan; History and Culture of Indian People, The Age of Imperial Kanauj, p. 50, Dr R.C. Majumdar, Dr A.D. Pusalkar. - Gopal, Madan (1990). K.S. Gautam (ed.). India through the ages. Publication Division, Ministry of Information and Broadcasting, Government of India. p. 173. - The precise number varies according to whether or not some barely started excavations, such as cave 15A, are counted. The ASI say "In all, total 30 excavations were hewn out of rock which also include an unfinished one", UNESCO and Spink "about 30". The controversies over the end date of excavation are covered below. - Tej Ram Sharma, 1978, "Personal and geographical names in the Gupta inscriptions. (1.publ.)", p. 254, Kamarupa consisted of the Western districts of the Brahmaputra valley, which was the most powerful state. - Suresh Kant Sharma, Usha Sharma – 2005, "Discovery of North-East India: Geography, History, Culture, ... – Volume 3", p. 248, Davaka (Nowgong) and Kamarupa as separate and submissive friendly kingdoms. - The eastern border of Kamarupa is given by the temple of the goddess Tamreshvari (Pūrvāte Kāmarūpasya devī Dikkaravasini in Kalika Purana) near present-day Sadiya. "...the temple of the goddess Tameshwari (Dikkaravasini) is now located at modern Sadiya about 100 miles to the northeast of Sibsagar" (Sircar 1990, pp. 63–68). - Swami, Parmeshwaranand (2001). Encyclopaedic Dictionary of the Puranas. New Delhi: Sarup and Sons. p. 941. ISBN 978-81-7625-226-3. - Barpujari, H.K., ed. (1990). The Comprehensive History of Assam (1st ed.). Guwahati, India: Assam Publication Board. OCLC 499315420. - Sarkar, J.N. (1992), "Chapter II The Turko-Afghan Invasions", in Barpujari, H.K., The Comprehensive History of Assam, 2, Guwahati: Assam Publication Board, pp. 35–48 - "Pallava script". SkyKnowledge.com. 30 December 2010. - Nilakanta Sastri, pp. 412–413 - Hall, John Whitney, ed. (2005). "India". History of the World: Earliest Times to the Present Day. John Grayson Kirk. North Dighton, MA: World Publications Group. p. 246. ISBN 978-1-57215-421-6. - "CNG: eAuction 329. India, Post-Gupta (Ganges Valley). Vardhanas of Thanesar and Kanauj. Harshavardhana. Circa AD 606–647. AR Drachm (13mm, 2.28 g, 1h)". www.cngcoins.com. - RN Kundra & SS Bawa, History of Ancient and Medieval India - International Dictionary of Historic Places: Asia and Oceania by Trudy Ring, Robert M. Salkin, Sharon La Boda p. 507 - "Harsha". Encyclopædia Britannica. 2015. - "Sthanvishvara (historical region, India)". Encyclopædia Britannica. Retrieved 9 August 2014. - "Harsha (Indian emperor)". Encyclopædia Britannica. Retrieved 9 August 2014. - Michaels 2004, p. 41. - Michaels 2004, p. 43. - Sanderson, Alexis (2009). "The Śaiva Age: The Rise and Dominance of Śaivism during the Early Medieval Period". In Einoo, Shingo (ed.). Genesis and Development of Tantrism. Institute of Oriental Culture Special Series no. 23. Tokyo: Institute of Oriental Culture, University of Tokyo. pp. 41–43. ISBN 978-4-7963-0188-6. - Sheridan, Daniel P. "Kumarila Bhatta", in Great Thinkers of the Eastern World, ed. Ian McGready, New York: Harper Collins, 1995, pp. 198–201. ISBN 0-06-270085-5. - Johannes de Kruijf and Ajaya Sahoo (2014), Indian Transnationalism Online: New Perspectives on Diaspora, ISBN 978-1-4724-1913-2, p. 105, Quote: "In other words, according to Adi Shankara's argument, the philosophy of Advaita Vedanta stood over and above all other forms of Hinduism and encapsulated them. This then united Hinduism; [...]
Another of Adi Shankara's important undertakings which contributed to the unification of Hinduism was his founding of a number of monastic centers." - "Shankara", Student's Encyclopædia Britannica – India (2000), Volume 4, Encyclopædia Britannica (UK) Publishing, ISBN 978-0-85229-760-5, p. 379, Quote: "Shankaracharya, philosopher and theologian, most renowned exponent of the Advaita Vedanta school of philosophy, from whose doctrines the main currents of modern Indian thought are derived."; David Crystal (2004), The Penguin Encyclopedia, Penguin Books, p. 1353, Quote: "[Shankara] is the most famous exponent of Advaita Vedanta school of Hindu philosophy and the source of the main currents of modern Hindu thought." - Christophe Jaffrelot (1998), The Hindu Nationalist Movement in India, Columbia University Press, ISBN 978-0-231-10335-0, p. 2, Quote: "The main current of Hinduism – if not the only one – which became formalized in a way that approximates to an ecclesiastical structure was that of Shankara". - Shyama Kumar Chattopadhyaya (2000) The Philosophy of Sankar's Advaita Vedanta, Sarup & Sons, New Delhi ISBN 81-7625-222-0, 978-81-7625-222-5 - Edward Roer (Translator), Shankara's Introduction, p. 3, at Google Books to Brihad Aranyaka Upanishad at pp. 3–4; Quote – "[...] Lokayatikas and Bauddhas who assert that the soul does not exist. There are four sects among the followers of Buddha: 1. Madhyamicas who maintain all is void; 2. Yogacharas, who assert except sensation and intelligence all else is void; 3. Sautranticas, who affirm actual existence of external objects no less than of internal sensations; 4. Vaibhashikas, who agree with later (Sautranticas) except that they contend for immediate apprehension of exterior objects through images or forms represented to the intellect." - Edward Roer (Translator), Shankara's Introduction, p. 3, at Google Books to Brihad Aranyaka Upanishad at p. 3, OCLC 19373677 - KN Jayatilleke (2010), Early Buddhist Theory of Knowledge, ISBN 978-81-208-0619-1, pp. 246–249, from note 385 onwards; Steven Collins (1994), Religion and Practical Reason (Editors: Frank Reynolds, David Tracy), State Univ of New York Press, ISBN 978-0-7914-2217-5, p. 64; Quote: "Central to Buddhist soteriology is the doctrine of not-self (Pali: anattā, Sanskrit: anātman, the opposed doctrine of ātman is central to Brahmanical thought). Put very briefly, this is the [Buddhist] doctrine that human beings have no soul, no self, no unchanging essence."; Edward Roer (Translator), Shankara's Introduction at Google Books Katie Javanaud (2013), Is The Buddhist 'No-Self' Doctrine Compatible With Pursuing Nirvana?, Philosophy Now; John C. Plott et al. (2000), Global History of Philosophy: The Axial Age, Volume 1, Motilal Banarsidass, ISBN 978-81-208-0158-5, p. 63, Quote: "The Buddhist schools reject any Ātman concept. As we have already observed, this is the basic and ineradicable distinction between Hinduism and Buddhism". - The Seven Spiritual Laws Of Yoga, Deepak Chopra, John Wiley & Sons, 2006, ISBN 81-265-0696-2, 978-81-265-0696-5 - Schimmel, Annemarie Schimmel, Religionen – Islam in the Indian Subcontinent, Brill Academic Publishers, 1980, ISBN 978-90-04-06117-0, p. 4 - Avari, Burjor (2007). India: The Ancient Past. A History of the Indian-Subcontinent from 7000 BC to AD 1200. New York: Routledge. pp. 204–205. ISBN 978-0-203-08850-0. Madhyadesha became the ambition of two particular clans among a tribal people in Rajasthan, known as Gurjara and Pratihara. 
They were both parts of a larger federation of tribes, some of which later came to be known as the Rajputs - Kamath (2001), pp. 100–103 - Vinod Chandra Srivastava (2008). History of Agriculture in India, Up to C. 1200 A.D. Concept. p. 857. ISBN 978-81-8069-521-6. - The Dancing Girl: A History of Early India by Balaji Sadasivan p. 129 - Pollock, Sheldon (2006). The Language of the Gods in the World of Men: Sanskrit, Culture, and Power in Premodern India. University of California Press. pp. 241–242. ISBN 978-0-520-93202-9. - Sunil Fotedar (June 1984). The Kashmir Series: Glimpses of Kashmiri Culture – Vivekananda Kendra, Kanyakumari (p. 57). - R.C. Mazumdar, Ancient India, p. 383 - Thapar 2003, p. 334. - Chandra, Satish (2009). History of Medieval India. New Delhi: Orient Blackswan Private Limited. pp. 19–20. ISBN 978-81-250-3226-7. - Kamath (2001), p. 89 - "Mathematical Achievements of Pre-modern Indian Mathematicians", Putta Swamy T.K., 2012, chapter – Mahavira, p. 231, Elsevier Publications, London, ISBN 978-0-12-397913-1 - Sen 1999, p. 380. - Sen 1999, pp. 380–381. - Daniélou 2003, p. 170. - The Britannica Guide to Algebra and Trigonometry by William L. Hosch p. 105 - Wink, André (2002). Al-Hind: Early Medieval India and the Expansion of Islam, 7th–11th Centuries. Leiden: Brill. p. 284. ISBN 978-0-391-04173-8. - Avari 2007, p. 303. - Sircar, D. C. (1971). Studies in the Geography of Ancient and Medieval India. Motilal Banarsidass. p. 146. ISBN 9788120806900. - K.D. Bajpai (2006). History of Gopāchala. Bharatiya Jnanpith. p. 31. ISBN 978-81-263-1155-2. - Niyogi 1959, p. 38. - Prabhu, T. L. (4 August 2019). Majestic Monuments of India: Ancient Indian Mega Structures. Retrieved 25 July 2020. - Epigraphia Indica, XXIV, p. 43, Dr N.G. Majumdar - Nitish K. Sengupta (2011). Land of Two Rivers: A History of Bengal from the Mahabharata to Mujib. Penguin Books India. pp. 43–45. ISBN 978-0-14-341678-4. - Biplab Dasgupta (2005). European Trade and Colonial Conquest. Anthem Press. pp. 341–. ISBN 978-1-84331-029-7. - Hermann Kulke, Dietmar Rothermund (1998), A History of India, ISBN 978-0-203-44345-3 - History of Buddhism in India, Translation by A Shiefner - Chandra, Satish (2009). History of Medieval India. New Delhi: Orient Blackswan Private Limited. pp. 13–15. ISBN 978-81-250-3226-7. - Sen 1999, p. 278. - PN Chopra; BN Puri; MN Das; AC Pradhan, eds. (2003). A Comprehensive History Of Ancient India (3 Vol. Set). Sterling. pp. 200–202. ISBN 978-81-207-2503-4. - History of Ancient India: Earliest Times to 1000 A.D. by Radhey Shyam Chaurasia p. 237 - Kulke & Rothermund 2004, p. 115. - Keay 2000, p. 215: The Cholas were in fact the most successful dynasty since the Guptas ... The classic expansion of Chola power began anew with the accession of Rajaraja I in 985. - "The Last Years of Cholas: The decline and fall of a dynasty". En.articlesgratuits.com. 22 August 2007. Archived from the original on 20 January 2010. Retrieved 23 September 2009. - K. A. Nilakanta Sastri, A History of South India, p. 158 - Buddhism, Diplomacy, and Trade: The Realignment of Sino-Indian Relations by Tansen Sen p. 229 - History of Asia by B.V. Rao p. 297 - Indian Civilization and Culture by Suhas Chatterjee p. 417 - A Comprehensive History of Medieval India: by Farooqui Salma Ahmed, Salma Ahmed Farooqui p. 24 - Ancient Indian History and Civilization by Sailendra Nath Sen pp. 403–405 - World Heritage Monuments and Related Edifices in India, Band 1 by ʻAlī Jāvīd pp. 132–134 - History of Kannada Literature by E.P. Rice p. 
32 - Bilhana by Prabhakar Narayan Kawthekar, p. 29 - Asher & Talbot 2008, p. 47. - Metcalf & Metcalf 2006, p. 6. - Asher & Talbot 2008, p. 53. - Jamal Malik (2008). Islam in South Asia: A Short History. Brill Publishers. p. 104. ISBN 978-9004168596. - William Hunter (1903), A Brief History of the Indian Peoples, p. 124, at Google Books, 23rd Edition, pp. 124–127 - Ramananda Chatterjee (1961). The Modern Review. 109. Indiana University. p. 84. - Delhi Sultanate, Encyclopædia Britannica - Bartel, Nick (1999). "Battuta's Travels: Delhi, capital of Muslim India". The Travels of Ibn Battuta – A Virtual Tour with the 14th Century Traveler. Archived from the original on 12 June 2010. - Asher & Talbot 2008, pp. 50–52. - Richard Eaton (2000), Temple Desecration and Indo-Muslim States, Journal of Islamic Studies, 11(3), pp. 283–319 - Asher & Talbot 2008, pp. 50–51. - Ludden 2002, p. 67. - "Timur – conquest of India". Gardenvisit. Archived from the original on 12 October 2007. - Elliot & Dowson. The History of India As Told by Its Own Historians, Vol. III. pp. 445–446. - History of Classical Sanskrit Literature: by M. Srinivasachariar p. 211 - Eaton 2005, pp. 28–29. - Nilakanta Sastri, K. A. (2002). A history of South India from prehistoric times to the fall of Vijayanagar. New Delhi: Indian Branch, Oxford University Press. p. 239. ISBN 978-0-19-560686-7. - South India by Amy Karafin, Anirban Mahapatra p. 32 - Kamath (2001), p. 162 - Sastri 1955, p. 317 - The success was probably also due to the peaceful nature of Muhammad II Bahmani, according to Sastri 1955, p. 242 - From the notes of Portuguese Nuniz. Robert Sewell notes that a big dam was built across the Tungabhadra and an aqueduct 15 miles (24 km) long was cut out of rock (Sastri 1955, p. 243). - Columbia Chronologies of Asian History and Culture, John Stewart Bowman p. 271, (2013), Columbia University Press, New York, ISBN 0-231-11004-9 - Also deciphered as Gajaventekara, a metaphor for "great hunter of his enemies", or "hunter of elephants" (Kamath 2001, p. 163). - Sastri 1955, p. 244 - From the notes of Persian Abdur Razzak. The writings of Nuniz confirm that the kings of Burma paid tributes to the Vijayanagara empire (Sastri 1955, p. 245). - Kamath (2001), p. 164 - From the notes of Abdur Razzak about Vijayanagara: a city like this had not been seen by the pupil of the eye nor had an ear heard of anything equal to it in the world (Hampi, A Travel Guide 2003, p. 11) - From the notes of Duarte Barbosa (Kamath 2001, p. 178) - Wagoner, Phillip B. (November 1996). "Sultan among Hindu Kings: Dress, Titles, and the Islamicization of Hindu Culture at Vijayanagara". The Journal of Asian Studies. 55 (4): 851–880. doi:10.2307/2646526. JSTOR 2646526. - Kamath (2001), p. 177 - Fritz & Michell, p. 14 - Kamath (2001), pp. 177–178 - "The austere, grandiose site of Hampi was the last capital of the last great Hindu Kingdom of Vijayanagar. Its fabulously rich princes built Dravidian temples and palaces which won the admiration of travellers between the 14th and 16th centuries. Conquered by the Deccan Muslim confederacy in 1565, the city was pillaged over a period of six months before being abandoned." From the brief description, UNESCO World Heritage List. - "Vijayanagara Research Project::Elephant Stables". Vijayanagara.org. 9 February 2014. Archived from the original on 17 May 2017. Retrieved 21 May 2018. - History of Science and Philosophy of Science by Pradip Kumar Sengupta p.
91 - Medieval India: From Sultanat to the Mughals-Delhi Sultanat (1206–1526) by Satish Chandra pp. 188–189 - Art History, Volume II: 1400–present by Boundless p. 243 - Eaton 2005, pp. 100–101. - Kamath (2001), p. 174 - Vijaya Ramaswamy (2007). Historical Dictionary of the Tamils. Scarecrow Press. pp. li–lii. ISBN 978-0-8108-6445-0. - Eaton 2005, pp. 101–115. - Kamath (2001), pp. 220, 226, 234 - Singh, Pradyuman. Bihar General Knowledge Digest. ISBN 9789352667697. - Surendra Gopal (2017). Mapping Bihar: From Medieval to Modern Times. Taylor & Francis. pp. 289–295. ISBN 978-1-351-03416-6. - Surinder Singh; I. D. Gaur (2008). Popular Literature and Pre-modern Societies in South Asia. Pearson Education India. pp. 77–. ISBN 978-81-317-1358-7. - Gordon Mackenzie (1990). A manual of the Kistna district in the presidency of Madras. Asian Educational Services. pp. 9–10, 224–. ISBN 978-81-206-0544-2. - Sen, Sailendra (2013). A Textbook of Medieval Indian History. Primus Books. pp. 116–117. ISBN 978-93-80607-34-4. - Lectures on Rajput history and culture by Dr. Dasharatha Sharma. Publisher: Motilal Banarsidass, Jawahar Nagar, Delhi 1970. ISBN 0-8426-0262-3. - John Merci, Kim Smith; James Leuck (1922). "Muslim conquest and the Rajputs". The Medieval History of India pg 67–115 - The Discovery of India, J.L. Nehru - Farooqui Salma Ahmed, A Comprehensive History of Medieval India: From Twelfth to the Mid-Eighteenth Century, (Dorling Kindersley Pvt. Ltd., 2011) - Eaton 2005, p. 88. - The Five Kingdoms of the Bahmani Sultanate - Majumdar, R.C. (ed.) (2007). The Mughul Empire, Mumbai: Bharatiya Vidya Bhavan, ISBN 81-7276-407-1, p. 412 - Majumdar, Ramesh Chandra; Pusalker, A.D.; Majumdar, A.K., eds. (1960). The History and Culture of the Indian People. VI: The Delhi Sultanate. Bombay: Bharatiya Vidya Bhavan. p. 367. [Describing the Gajapati kings of Orissa] Kapilendra was the most powerful Hindu king of his time, and under him Orissa became an empire stretching from the lower Ganga in the north to the Kaveri in the south. - Sailendra Nath Sen (1999). Ancient Indian History and Civilization. New Age International. p. 305. ISBN 978-81-224-1198-0. - Yasmin Saikia (2004). Fragmented Memories: Struggling to be Tai-Ahom in India. Duke University Press. p. 8. ISBN 978-0-8223-8616-2. - Sarkar, J.N. (1992), "Chapter VIII Assam-Mughal Relations", in Barpujari, H.K. (ed.), The Comprehensive History of Assam, 2, Guwahati: Assam Publication Board, p. 213 - Williams 2004, pp. 83–84, the other major classical Indian dances are: Bharatanatyam, Kathak, Odissi, Kathakali, Kuchipudi, Cchau, Satriya, Yaksagana and Bhagavata Mela. - Massey 2004, p. 177. - Devi 1990, pp. 175–180. - Schomer & McLeod (1987), p. 1. - Johar, Surinder (1999). Guru Gobind Singh: A Multi-faceted Personality. MD Publications. p. 89. ISBN 978-81-7533-093-1. - Schomer & McLeod (1987), pp. 1–2. - Lance Nelson (2007), An Introductory Dictionary of Theology and Religious Studies (Editors: Orlando O. Espín, James B. Nickoloff), Liturgical Press, ISBN 978-0-8146-5856-7, pp. 562–563 - SS Kumar (2010), Bhakti – the Yoga of Love, LIT Verlag Münster, ISBN 978-3-643-50130-1, pp. 35–36 - Wendy Doniger (2009), Bhakti, Encyclopædia Britannica; The Four Denomination of Hinduism Himalayan Academy (2013) - Schomer & McLeod (1987), p. 2. - Novetzke, Christian (2007). "Bhakti and Its Public". International Journal of Hindu Studies. 11 (3): 255–272. doi:10.1007/s11407-008-9049-9. JSTOR 25691067. S2CID 144065168. - Singh, Patwant (2000). The Sikhs. Alfred A Knopf Publishing. 
p. 17. ISBN 0-375-40728-6. - Louis Fenech and WH McLeod (2014), Historical Dictionary of Sikhism, 3rd Edition, Rowman & Littlefield, ISBN 978-1-4422-3600-4, p. 17 - William James (2011), God's Plenty: Religious Diversity in Kingston, McGill Queens University Press, ISBN 978-0-7735-3889-4, pp. 241–242 - Mann, Gurinder Singh (2001). The Making of Sikh Scripture. United States: Oxford University Press. p. 21. ISBN 978-0-19-513024-9. - Asher & Talbot 2008, p. 115. - Robb 2001, pp. 90–91. - Taj Mahal, Description, World Heritage Centre - "The Islamic World to 1600: Rise of the Great Islamic Empires (The Mughal Empire)". University of Calgary. Archived from the original on 27 September 2013. - Jeroen Duindam (2015), Dynasties: A Global History of Power, 1300–1800, p. 105, Cambridge University Press - Rein Taagepera (September 1997). "Expansion and Contraction Patterns of Large Polities: Context for Russia". International Studies Quarterly. 41 (3): 475–504. doi:10.1111/0020-8833.00053. JSTOR 2600793. - Maddison, Angus (2003): Development Centre Studies The World Economy Historical Statistics: Historical Statistics, OECD Publishing, ISBN 92-64-10414-3, p. 261 - Parthasarathi, Prasannan (2011), Why Europe Grew Rich and Asia Did Not: Global Economic Divergence, 1600–1850, Cambridge University Press, p. 2, ISBN 978-1-139-49889-0 - Jeffrey G. Williamson, David Clingingsmith (August 2005). "India's Deindustrialization in the 18th and 19th Centuries" (PDF). Harvard University. Retrieved 18 May 2017. - John F. Richards (1995), The Mughal Empire, p. 190, Cambridge University Press - Lex Heerma van Voss; Els Hiemstra-Kuperus; Elise van Nederveen Meerkerk (2010). "The Long Globalization and Textile Producers in India". The Ashgate Companion to the History of Textile Workers, 1650–2000. Ashgate Publishing. p. 255. ISBN 978-0-7546-6428-4. - Abraham Eraly (2007). The Mughal World: Life in India's Last Golden Age. Penguin Books. p. 5. ISBN 978-0-14-310262-5. - A History of Aurangzib (in 5 volumes) – J.N. Sarkar - Ian Copland; Ian Mabbett; Asim Roy; et al. (2012). A History of State and Religion in India. Routledge. p. 119. ISBN 978-1-136-45950-4. - Audrey Truschke (2017). Aurangzeb: The Life and Legacy of India's Most Controversial King. Stanford University Press. pp. 50–51. ISBN 978-1-5036-0259-5. - Royina Grewal (2007). In the Shadow of the Taj: A Portrait of Agra. Penguin Books India. pp. 220–. ISBN 978-0-14-310265-6. - Dupuy, R. Ernest and Trevor N. Dupuy, The Harper Encyclopedia of Military History, 4th Ed., (HarperCollinsPublishers, 1993), 711. - "Iran in the Age of the Raj". avalanchepress.com. Retrieved 6 January 2017. - Catherine Ella Blanshard Asher; Cynthia Talbot (2006). India before Europe. Cambridge University Press. p. 265. ISBN 978-0-521-80904-7. - A Popular Dictionary of Sikhism: Sikh Religion and Philosophy, p. 86, Routledge, W. Owen Cole, Piara Singh Sambhi, 2005 - Khushwant Singh, A History of the Sikhs, Volume I: 1469–1839, Delhi, Oxford University Press, 1978, pp. 127–129 - Pearson, M.N. (February 1976). "Shivaji and the Decline of the Mughal Empire". The Journal of Asian Studies. 35 (2): 221–235. doi:10.2307/2053980. JSTOR 2053980. - Capper, J. (1918). Delhi, the Capital of India. Asian Educational Services. p. 28. ISBN 978-81-206-1282-2. Retrieved 6 January 2017. - Sen, S.N. (2010). An Advanced History of Modern India. Macmillan India. p. 1941. ISBN 978-0-230-32885-3. Retrieved 6 January 2017. - Shivaji and his Times (1919) – J.N. Sarkar - An Advanced History of India, Dr. K.K. Datta, p. 
546 - M.A. Ghazi (24 July 2018). Islamic Renaissance In South Asia (1707–1867): The Role Of Shah Waliallah & His Successors. Adam Publishers & Distributors. ISBN 978-8174354006 – via Google Books. - Mehta, Jaswant Lal (2005). Advanced Study in the History of Modern India 1707–1813. Sterling. p. 204. ISBN 978-1-932705-54-6. - Sailendra Nath Sen (2010). An Advanced History of Modern India. Macmillan India. p. 16. ISBN 978-0-230-32885-3. - Bharatiya Vidya Bhavan, Bharatiya Itihasa Samiti, Ramesh Chandra Majumdar – The History and Culture of the Indian People: The Maratha Supremacy - N.G. Rathod (1994). The Great Maratha Mahadaji Scindia. Sarup & Sons. p. 8. ISBN 978-81-85431-52-9. - Naravane, M.S. (2014). Battles of the Honourable East India Company. A.P.H. Publishing Corporation. p. 63. ISBN 978-81-313-0034-3. - Ring, Trudy; Watson, Noelle; Schellinger, Paul (2012). Asia and Oceania: International Dictionary of Historic Places. Routledge. pp. 28–29. ISBN 978-1-136-63979-1. - Singh, Gulcharan (July 1981). "Maharaja Ranjit Singh and the Principles of War". USI Journal. 111 (465): 184–192. - Grewal, J.S. (1990). The Sikhs of the Punjab. The New Cambridge History of India. II.3. Cambridge University Press. pp. 101, 103–104. ISBN 978-0-521-26884-4. Aggrandisement which made him the master of an empire ... the British recognized Ranjit Singh as the sole sovereign ruler of the Punjab and left him free to ... oust the Afghans from Multan and Kashmir ... Peshawar was taken over ... The real strength of Ranjit Singh's army lay in its infantry and artillery ... these new wings played an increasingly decisive role ... possessed 200 guns. Horse artillery was added in the 1820s ... nearly half of his army in terms of numbers consisted of men and officers trained on European lines ... In the expansion of Ranjit Singh's dominions ... vassalage proved to be nearly as important as the westernized wings of his army. - History of Modern India by S.N. Sen - Chaudhury, Sushil; Mohsin, KM (2012). "Sirajuddaula". In Islam, Sirajul; Jamal, Ahmed A. (eds.). Banglapedia: National Encyclopedia of Bangladesh (Second ed.). Asiatic Society of Bangladesh. Archived from the original on 14 June 2015. Retrieved 15 August 2018. - Singh, Vipul (2009). Longman History & Civics (Dual Government in Bengal). Pearson Education India. pp. 29–. ISBN 978-8131728888. - Madhya Pradesh National Means-Cum-Merit Scholarship Exam (Warren Hastings' system of Dual Government). Upkar Prakashan. 2009. pp. 11–. ISBN 978-81-7482-744-9. - Black, Jeremy (2006), A Military History of Britain: from 1775 to the Present, Westport, Conn.: Greenwood Publishing Group, p. 78, ISBN 978-0-275-99039-8 - "Treaty of Amritsar" (PDF). Archived from the original (PDF) on 26 August 2014. Retrieved 25 August 2014. - Rai, Mridu (2004). Hindu Rulers, Muslim Subjects: Islam, Rights, and the History of Kashmir. Princeton University Press. pp. 27, 133. ISBN 978-0-691-11688-4. - Indian History. Allied Publishers. 1988. pp. 3–. ISBN 978-81-8424-568-4. - Karl J. Schmidt (20 May 2015). An Atlas and Survey of South Asian History. Routledge. pp. 138–. ISBN 978-1-317-47681-8. - Glenn Ames (2012). Ivana Elbl (ed.). Portugal and its Empire, 1250–1800 (Collected Essays in Memory of Glenn J. Ames). Portuguese Studies Review, Vol. 17, No. 1. Trent University Press. pp. 12–15 with footnotes, context: 11–32. - Sanjay Subrahmanyam, The Portuguese empire in Asia, 1500–1700: a political and economic history (2012) - Koshy, M.O. (1989). The Dutch Power in Kerala, 1729–1758.
Mittal Publications. p. 61. ISBN 978-81-7099-136-6. - http://mod.nic.in Archived 12 March 2016 at the Wayback Machine 9th Madras Regiment - Holden Furber, Rival Empires of Trade in the Orient, 1600–1800, University of Minnesota Press, 1976, p. 201. - Philippe Haudrère, Les Compagnies des Indes Orientales, Paris, 2006, p. 70. - Dossier Goa – A Recusa do Sacrifício Inútil. Shvoong.com. - Markovits, Claude, ed. (2004) [First published 1994 as Histoire de l'Inde Moderne]. A History of Modern India, 1480–1950 (2nd ed.). London: Anthem Press. pp. 271–. ISBN 978-1-84331-004-4. - Ludden 2002, p. 133 - Brown 1994, p. 67 - Brown 1994, p. 68 - Saul David, p. 70, The Indian Mutiny, Penguin Books 2003 - Bandyopadhyay 2004, p. 172, Bose & Jalal 2003, p. 91, Brown 1994, p. 92 - Bandyopadhyay 2004, p. 177, Bayly 2000, p. 357 - Christopher Hibbert, The Great Mutiny: India 1857 (1980) - Pochhammer, Wilhelm von (1981), India's road to nationhood: a political history of the subcontinent, Allied Publishers, ISBN 978-81-7764-715-0 - "Law Commission of India – Early Beginnings" - Suresh Chandra Ghosh (1995). "Bentinck, Macaulay and the introduction of English education in India". History of Education. 24 (1): 17–25. doi:10.1080/0046760950240102. - I.D. Derbyshire (1987). "Economic Change and the Railways in North India, 1860–1914". Modern Asian Studies. 21 (3): 521–545. doi:10.1017/S0026749X00009197. JSTOR 312641. - Neil Charlesworth, British Rule and the Indian Economy, 1800–1914 (1981) pp. 23–37 - Robb, Peter (November 1981). "British Rule and Indian 'Improvement'". Economic History Review. 34 (4): 507–523. doi:10.2307/2595587. JSTOR 2595587. - S.A. Wolpert, Morley and India, 1906–1910, (1967) - Democracy and Hindu nationalism, Chetan Bhatt (2013) - Harjinder Singh Dilgeer. Shiromani Akali Dal (1920–2000). Sikh University Press, Belgium, 2001. - The History of the Indian National Congress, B. Pattabhi Sitaramayya (1935) - History of Bengali-speaking People by Nitish Sengupta, p. 253. - Nitish Sengupta (2001). History of the Bengali-speaking People. UBS Publishers' Distributors. p. 211. ISBN 978-81-7476-355-6. The Bengal Renaissance can be said to have started with Raja Ram Mohan Roy (1775–1833) and ended with Rabindranath Tagore (1861–1941). - Kopf, David (December 1994). "Amiya P. Sen. Hindu Revivalism in Bengal 1872". American Historical Review (Book review). 99 (5): 1741–1742. doi:10.2307/2168519. JSTOR 2168519. - Sharma, Mayank. "Essay on 'Derozio and the Young Bengal Movement'". - Davis, Mike. Late Victorian Holocausts. 1. Verso, 2000. ISBN 1-85984-739-0 p. 173 - Davis, Mike. Late Victorian Holocausts. 1. Verso, 2000. ISBN 1-85984-739-0 p. 7 - Amartya Sen (1981). Poverty and Famines: An Essay on Entitlement and Deprivation. Oxford University Press. p. 39. ISBN 978-0-19-828463-5. - Greenough, Paul Robert (1982). Prosperity and Misery in Modern Bengal: The Famine of 1943–1944. Oxford University Press. ISBN 978-0-19-503082-2. - "Plague". Archived from the original on 17 February 2009. Retrieved 5 July 2014.. World Health Organisation. - "Viewpoint: Britain must pay reparations to India - BBC News". BBC.com. - Colin Clark (1977). Population Growth and Land Use. Springer Science+Business Media. p. 64. ISBN 978-1349157754. - "Reintegrating India with the World Economy". Peterson Institute for International Economics. - Pati, p. 31 - "Participants from the Indian subcontinent in the First World War". Memorial Gates Trust. Retrieved 12 September 2009. 
- "Commonwealth War Graves Commission Annual Report 2007–2008 Online". Archived from the original on 26 September 2007. - Sumner, p. 7 - Kux, Dennis (1992). India and the United States: estranged democracies, 1941–1991. Diane Publishing. ISBN 978-1-4289-8189-8. - Müller 2009, p. 55. - Fay 1993, p. viii - Sarkar 1989, p. 410 - Bandyopadhyay 2004, p. 426 - Arnold 1991, pp. 97–98 - Devereux (2000, p. 6) - Mukerjee (2010, pp. 112–114) - Marshall, P. J. (2001), The Cambridge Illustrated History of the British Empire, Cambridge University Press, p. 179, ISBN 978-0-521-00254-7 Quote: "The first modern nationalist movement to arise in the non-European empire, and one that became an inspiration for many others, was the Indian Congress." - "Information about the Indian National Congress". www.open.ac.uk. Arts & Humanities Research council. Retrieved 29 July 2015. - "Census Of India 1931". archive.org. - Markovits, Claude (2004). A history of modern India, 1480–1950. Anthem Press. pp. 386–409. ISBN 978-1843310044. - Modern India, Bipin Chandra, p. 76 - India Awakening and Bengal, N.S. Bose, 1976, p. 237 - British Paramountcy and Indian Renaissance, Part–II, Dr. R.C. Majumdar, p. 466 - "'India's well-timed diversification of army helped democracy' | Business Standard News". business-standard.com. Retrieved 6 January 2017. - Anil Chandra Banerjee, A Constitutional History of India 1600–1935 (1978) pp. 171–173 - R, B.S.; Bakshi, S.R. (1990). Bal Gangadhar Tilak: Struggle for Swaraj. Anmol Publications Pvt. Ltd. ISBN 978-81-7041-262-5. Retrieved 6 January 2017. - India's Struggle for Independence – Chandra, Bipan; Mridula Mukherjee, Aditya Mukherjee, Sucheta Mahajan, K.N. Panikkar (1989), New Delhi: Penguin Books. ISBN 978-0-14-010781-4. - Albert, Sir Courtenay Peregrine. The Government of India. Clarendon Press, 1922. p. 125 - Bond, Brian (October 1963). "Amritsar 1919". History Today. Vol. 13 no. 10. pp. 666–676. - Qasmi, Ali Usman; Robb, Megan Eaton (2017). Muslims against the Muslim League: Critiques of the Idea of Pakistan. Cambridge University Press. p. 2. ISBN 9781108621236. - Haq, Mushir U. (1970). Muslim politics in modern India, 1857-1947. Meenakshi Prakashan. p. 114. This was also reflected in one of the resolutions of the Azad Muslim Conference, an organization which attempted to be representative of all the various nationalist Muslim parties and groups in India. - Ahmed, Ishtiaq (27 May 2016). "The dissenters". The Friday Times. However, the book is a tribute to the role of one Muslim leader who steadfastly opposed the Partition of India: the Sindhi leader Allah Bakhsh Soomro. Allah Bakhsh belonged to a landed family. He founded the Sindh People’s Party in 1934, which later came to be known as ‘Ittehad’ or ‘Unity Party’. ... Allah Bakhsh was totally opposed to the Muslim League’s demand for the creation of Pakistan through a division of India on a religious basis. Consequently, he established the Azad Muslim Conference. In its Delhi session held during April 27–30, 1940 some 1400 delegates took part. They belonged mainly to the lower castes and working class. The famous scholar of Indian Islam, Wilfred Cantwell Smith, feels that the delegates represented a ‘majority of India’s Muslims’. Among those who attended the conference were representatives of many Islamic theologians and women also took part in the deliberations ... Shamsul Islam argues that the All-India Muslim League at times used intimidation and coercion to silence any opposition among Muslims to its demand for Partition. 
He calls such tactics of the Muslim League as a ‘Reign of Terror’. He gives examples from all over India including the NWFP where the Khudai Khidmatgars remain opposed to the Partition of India. - Ali, Afsar (17 July 2017). "Partition of India and Patriotism of Indian Muslims". The Milli Gazette. - "Great speeches of the 20th century". The Guardian. 8 February 2008. - Philip Ziegler, Mountbatten(1985) p. 401. - Symonds, Richard (1950). The Making of Pakistan. London: Faber and Faber. p. 74. OCLC 1462689. At the lowest estimate, half a million people perished and twelve millions became homeless. - Abid, Abdul Majeed (29 December 2014). "The forgotten massacre". The Nation. On the same dates [4 and 5 March 1947], Muslim League-led mobs fell with determination and full preparations on the helpless Hindus and Sikhs scattered in the villages of Multan, Rawalpindi, Campbellpur, Jhelum and Sargodha. The murderous mobs were well supplied with arms, such as daggers, swords, spears and fire-arms. (A former civil servant mentioned in his autobiography that weapon supplies had been sent from NWFP and money was supplied by Delhi-based politicians.) - Srinath Raghavan (12 November 2013). 1971. Harvard University Press. ISBN 978-0-674-73129-5. - Prakash, Gyan (April 1990). "Writing Post-Orientalist Histories of the Third World: Perspectives from Indian Historiography". Comparative Studies in Society and History. 32 (2): 383–408. doi:10.1017/s0010417500016534. JSTOR 178920. - Anil Seal, The Emergence of Indian Nationalism: Competition and Collaboration in the Later Nineteenth Century (1971) - Gordon Johnson, Provincial Politics and Indian Nationalism: Bombay and the Indian National Congress 1880–1915 (2005) - Rosalind O'Hanlon and David Washbrook, eds. Religious Cultures in Early Modern India: New Perspectives (2011) - Aravind Ganachari, "Studies in Indian Historiography: 'The Cambridge School'", Indica, March 2010, 47#1, pp. 70–93 - Hostettler, N. (2013). Eurocentrism: a marxian critical realist critique. Taylor & Francis. p. 33. ISBN 978-1-135-18131-4. Retrieved 6 January 2017. - "Ranjit Guha, "On Some Aspects of Historiography of Colonial India"" (PDF). - Bagchi, Amiya Kumar (January 1993). "Writing Indian History in the Marxist Mode in a Post-Soviet World". Indian Historical Review. 20 (1/2): 229–244. - Prakash, Gyan (December 1994). "Subaltern studies as postcolonial criticism". American Historical Review. 99 (5): 1475–1500. doi:10.2307/2168385. JSTOR 2168385. - Roosa, John (2006). "When the Subaltern Took the Postcolonial Turn". Journal of the Canadian Historical Association. 17 (2): 130–147. doi:10.7202/016593ar. - Menon, Latha (August 2004). "Coming to Terms with the Past: India". History Today. Vol. 54 no. 8. pp. 28–30. - "Harvard scholar says the idea of India dates to a much earlier time than the British or the Mughals". - "In The Footsteps of Pilgrims". - "India's spiritual landscape: The heavens and the earth". The Economist. 24 March 2012. - Dalrymple, William (27 July 2012). "India: A Sacred Geography by Diana L Eck – review". The Guardian. 
- Arnold, David (1991), Famine: Social Crisis and Historical Change, Wiley-Blackwell, ISBN 978-0-631-15119-7 - Bandyopadhyay, Sekhar (2004), From Plassey to Partition: A History of Modern India, Orient Longman, ISBN 978-81-250-2596-2 - Bayly, Christopher Alan (2000) [First published 1996], Empire and Information: Intelligence Gathering and Social Communication in India, 1780–1870, Cambridge University Press, ISBN 978-0-521-57085-5 - Bose, Sugata; Jalal, Ayesha (2003), Modern South Asia: History, Culture, Political Economy (2nd ed.), Routledge, ISBN 0-415-30787-2 - Brown, Judith M. (1994), Modern India: The Origins of an Asian Democracy (2nd ed.), ISBN 978-0-19-873113-9 - Bentley, Jerry H. (June 1996), "Cross-Cultural Interaction and Periodization in World History", The American Historical Review, 101 (3): 749–770, doi:10.2307/2169422, JSTOR 2169422 - Antonova, K.A.; Bongard-Levin, G.; Kotovsky, G. (1979). A History of India Volume 1. Moscow, USSR: Progress Publishers. - Chauhan, Partha R. (2010). "The Indian Subcontinent and 'Out of Africa 1'". In Fleagle, John G.; Shea, John J.; Grine, Frederick E.; Baden, Andrea L.; Leakey, Richard E. (eds.). Out of Africa I: The First Hominin Colonization of Eurasia. Springer Science & Business Media. pp. 145–164. ISBN 978-90-481-9036-2. - Daniélou, Alain (2003), A Brief History of India, Rochester, VT: Inner Traditions, ISBN 978-0-89281-923-2 - Datt, Ruddar; Sundharam, K.P.M. (2009), Indian Economy, New Delhi: S. Chand Group, ISBN 978-81-219-0298-4 - Devereux, Stephen (2000). Famine in the twentieth century (PDF) (Technical report). IDS Working Paper 105. Brighton: Institute of Development Studies. - Devi, Ragini (1990). Dance Dialects of India. Motilal Banarsidass. ISBN 978-81-208-0674-0. - Eaton, Richard M. (2005), A Social History of the Deccan: 1300–1761: Eight Indian Lives, The new Cambridge history of India, I.8, Cambridge University Press, ISBN 978-0-521-25484-7 - Fay, Peter Ward (1993), The forgotten army : India's armed struggle for independence, 1942–1945, University of Michigan Press, ISBN 978-0-472-10126-9 - Guha, Arun Chandra (1971), First Spark of Revolution, Orient Longman, OCLC 254043308 - Gupta, S.P.; Ramachandran, K.S., eds. (1976), Mahabharata, Myth and Reality – Differing Views, Delhi: Agam prakashan - Doniger, Wendy, ed. (1999), Encyclopedia of World Religions, Merriam-Webster, ISBN 978-0-87779-044-0 - Gupta, S.P.; Ramachandra, K.S. (2007). "Mahabharata, Myth and Reality". In Singh, Upinder (ed.). Delhi – Ancient History. Social Science Press. pp. 77–116. ISBN 978-81-87358-29-9. - Keay, John (2000), India: A History, Atlantic Monthly Press, ISBN 978-0-87113-800-2 - Kenoyer, J. Mark (1998). The Ancient Cities of the Indus Valley Civilisation. Oxford University Press. ISBN 978-0-19-577940-0. - Kulke, Hermann; Rothermund, Dietmar (2004) [First published 1986], A History of India (4th ed.), Routledge, ISBN 978-0-415-15481-9 - Ludden, D. (2002), India and South Asia: A Short History, One World, ISBN 978-1-85168-237-9 - Massey, Reginald (2004). India's Dances: Their History, Technique, and Repertoire. Abhinav Publications. ISBN 978-81-7017-434-9. - Michaels, Axel (2004), Hinduism. Past and present, Princeton, New Jersey: Princeton University Press - Mookerji, Radha Kumud (1988) [First published 1966], Chandragupta Maurya and his times (4th ed.), Motilal Banarsidass, ISBN 81-208-0433-3 - Mukerjee, Madhusree (2010). Churchill's Secret War: The British Empire and the Ravaging of India During World War II. Basic Books. 
ISBN 978-0-465-00201-6. - Müller, Rolf-Dieter (2009). "Afghanistan als militärisches Ziel deutscher Außenpolitik im Zeitalter der Weltkriege". In Chiari, Bernhard (ed.). Wegweiser zur Geschichte Afghanistans. Paderborn: Auftrag des MGFA. ISBN 978-3-506-76761-5. - Petraglia, Michael D.; Allchin, Bridget (2007). The Evolution and History of Human Populations in South Asia: Inter-disciplinary Studies in Archaeology, Biological Anthropology, Linguistics and Genetics. Springer Science & Business Media. ISBN 978-1-4020-5562-1. - Petraglia, Michael D. (2010). "The Early Paleolithic of the Indian Subcontinent: Hominin Colonization, Dispersals and Occupation History". In Fleagle, John G.; Shea, John J.; Grine, Frederick E.; Baden, Andrea L.; Leakey, Richard E. (eds.). Out of Africa I: The First Hominin Colonization of Eurasia. Springer Science & Business Media. pp. 165–179. ISBN 978-90-481-9036-2. - Niyogi, Roma (1959). The History of the Gāhaḍavāla Dynasty. Oriental. OCLC 5386449. - Pochhammer, Wilhelm von (1981), India's road to nationhood: a political history of the subcontinent, Allied Publishers, ISBN 978-81-7764-715-0 - Raychaudhuri, Tapan; Habib, Irfan, eds. (1982), The Cambridge Economic History of India, Volume 1: c. 1200 – c. 1750, Cambridge University Press, ISBN 978-0-521-22692-9 - Reddy, Krishna (2003). Indian History. New Delhi: Tata McGraw Hill. ISBN 978-0-07-048369-9. - Robb, P (2001). A History of India. London: Palgrave. - Samuel, Geoffrey (2010), The Origins of Yoga and Tantra, Cambridge University Press - Sarkar, Sumit (1989) [First published 1983]. Modern India, 1885–1947. MacMillan Press. ISBN 0-333-43805-1. - Sastri, K. A. Nilakanta (1955). A history of South India from prehistoric times to the fall of Vijayanagar. New Delhi: Oxford University Press. ISBN 978-0-19-560686-7. - Schomer, Karine; McLeod, W.H., eds. (1987). The Sants: Studies in a Devotional Tradition of India. Motilal Banarsidass. ISBN 978-81-208-0277-3. - Sen, Sailendra Nath (1 January 1999). Ancient Indian History and Civilization. New Age International. ISBN 978-81-224-1198-0. - Singh, Upinder (2008), A History of Ancient and Early Medieval India: From the Stone Age to the 12th Century, Pearson, ISBN 978-81-317-1120-0 - Sircar, D C (1990), "Pragjyotisha-Kamarupa", in Barpujari, H K (ed.), The Comprehensive History of Assam, I, Guwahati: Publication Board, Assam, pp. 59–78 - Thapar, Romila (1977), A History of India. Volume One, Penguin Books - Thapar, Romila (1978), Ancient Indian Social History: Some Interpretations (PDF), Orient Blackswan, archived from the original (PDF) on 14 February 2015 - Thapar, Romila (2003). The Penguin History of Early India (First ed.). Penguin Books India. ISBN 978-0-14-302989-2. - Williams, Drid (2004). "In the Shadow of Hollywood Orientalism: Authentic East Indian Dancing" (PDF). Visual Anthropology. Routledge. 17 (1): 69–98. doi:10.1080/08949460490274013. S2CID 29065670. - Asher, C.B.; Talbot, C (1 January 2008), India Before Europe (1st ed.), Cambridge University Press, ISBN 978-0-521-51750-8 - Metcalf, B.; Metcalf, T.R. (9 October 2006), A Concise History of Modern India (2nd ed.), Cambridge University Press, ISBN 978-0-521-68225-1 - "The beginning of the historical period, c. 500–150 BCE". Encyclopædia Britannica. 2015. - Basham, A.L., ed. The Illustrated Cultural History of India (Oxford University Press, 2007) - Buckland, C.E. Dictionary of Indian Biography (1906) 495pp full text - Chakrabarti D.K. 2009. 
India, an archaeological history: palaeolithic beginnings to early historic foundations - Dharma Kumar and Meghnad Desai, eds. The Cambridge Economic History of India: Volume 2, c. 1751 – c. 1970 (2nd ed. 2010), 1114pp of scholarly articles - Fisher, Michael. An Environmental History of India: From Earliest Times to the Twenty-First Century (Cambridge UP, 2018) - Guha, Ramachandra. India After Gandhi: The History of the World's Largest Democracy (2007), 890pp; since 1947 - James, Lawrence. Raj: The Making and Unmaking of British India (2000) online - Khan, Yasmin. The Raj At War: A People's History of India's Second World War (2015); also published as India At War: The Subcontinent and the Second World War. - Khan, Yasmin. The Great Partition: The Making of India and Pakistan (2nd ed. Yale UP 2017) excerpt - McLeod, John. The History of India (2002) excerpt and text search - Majumdar, R.C.: An Advanced History of India. London, 1960. ISBN 0-333-90298-X - Majumdar, R.C. (ed.): The History and Culture of the Indian People, Bombay, 1977 (in eleven volumes). - Mansingh, Surjit. The A to Z of India (2010), a concise historical encyclopedia - Markovits, Claude, ed. A History of Modern India, 1480–1950 (2002) by a team of French scholars - Metcalf, Barbara D. and Thomas R. Metcalf. A Concise History of Modern India (2006) - Peers, Douglas M. India under Colonial Rule: 1700–1885 (2006), 192pp - Richards, John F. The Mughal Empire (The New Cambridge History of India) (1996) - Riddick, John F. The History of British India: A Chronology (2006) excerpt - Riddick, John F. Who Was Who in British India (1998); 5000 entries excerpt - Rothermund, Dietmar. An Economic History of India: From Pre-Colonial Times to 1991 (1993) - Sharma, R.S., India's Ancient Past (Oxford University Press, 2005) - Sarkar, Sumit. Modern India, 1885–1947 (2002) - Senior, R.C. (2006). Indo-Scythian coins and history. Volume IV. Classical Numismatic Group, Inc. ISBN 978-0-9709268-6-9. - Singhal, D.P. A History of the Indian People (1983) - Smith, Vincent. The Oxford History of India (3rd ed. 1958), old-fashioned - Spear, Percival. A History of India. Volume 2. Penguin Books. (1990) [First published 1965] - Stein, Burton. A History of India (1998) - Thapar, Romila. Early India: From the Origins to AD 1300 (2004) excerpt and text search - Thompson, Edward, and G.T. Garratt. Rise and Fulfilment of British Rule in India (1934) 690 pages; scholarly survey, 1599–1933 excerpt and text search - Tomlinson, B.R. The Economy of Modern India, 1860–1970 (The New Cambridge History of India) (1996) - Tomlinson, B.R. The political economy of the Raj, 1914–1947 (1979) online - Wolpert, Stanley. A New History of India (8th ed. 2008) online 7th edition - Bannerjee, Gauranganath (1921). India as known to the ancient world. London: Humphrey Milford, Oxford University Press. - Bayly, C.A. (November 1985). "State and Economy in India over Seven Hundred Years". The Economic History Review. 38 (4): 583–596. doi:10.1111/j.1468-0289.1985.tb00391.x. JSTOR 2597191. - Bose, Mihir. "India's Missing Historians: Mihir Bose Discusses the Paradox That India, a Land of History, Has a Surprisingly Weak Tradition of Historiography", History Today 57#9 (2007) pp. 34–. online - Elliot, Henry Miers; John Dowson (1867–77). The History of India, as told by its own historians. The Muhammadan Period. London: Trübner and Co. - Khan, Yasmin.
"Remembering and Forgetting: South Asia and the Second World War' in Martin Gegner and Bart Ziino, eds., The Heritage of War (Routledge, 2011) pp. 177–193. - Jain, M. The India They Saw : Foreign Accounts (4 Volumes) Delhi: Ocean Books, 2011. - Lal, Vinay, The History of History: Politics and Scholarship in Modern India (2003). - Palit, Chittabrata, Indian Historiography (2008). - Arvind Sharma, Hinduism and Its Sense of History (Oxford University Press, 2003) ISBN 978-0-19-566531-4 - E. Sreedharan, A Textbook of Historiography, 500 B.C. to A.D. 2000 (2004) - Warder, A.K., An introduction to Indian historiography (1972). - The Imperial Gazetteer of India (26 vol, 1908–31), highly detailed description of all of India in 1901. online edition |Wikiquote has quotations related to: History of India| - History of India Podcast: https://historyofindiapodcast.libsyn.com/
In colloquial language, an average is the sum of a list of numbers divided by the number of numbers in the list. In mathematics and statistics, this would be called the arithmetic mean. However, the word average may also refer to the median, mode, or other central or typical value. In statistics, these are all known as measures of central tendency.

Calculation

The most common type of average is the arithmetic mean. If n numbers are given, each number denoted by ai (where i = 1, 2, …, n), the arithmetic mean is the sum of the ai divided by n:

$$\bar{a} = \frac{1}{n}\sum_{i=1}^{n} a_i = \frac{a_1 + a_2 + \cdots + a_n}{n}$$

The arithmetic mean, often simply called the mean, of two numbers, such as 2 and 8, is obtained by finding a value A such that 2 + 8 = A + A. One may find that A = (2 + 8)/2 = 5. Switching the order of 2 and 8 to read 8 and 2 does not change the resulting value obtained for A. The mean 5 is not less than the minimum 2 nor greater than the maximum 8. If we increase the number of terms in the list to 2, 8, and 11, the arithmetic mean is found by solving for the value of A in the equation 2 + 8 + 11 = A + A + A. One finds that A = (2 + 8 + 11)/3 = 7.

Along with the arithmetic mean above, the geometric mean and the harmonic mean are known collectively as the Pythagorean means. The geometric mean of n non-negative numbers is obtained by multiplying them all together and then taking the nth root. In algebraic terms, the geometric mean of a1, a2, …, an is defined as

$$\bar{a} = \left(\prod_{i=1}^{n} a_i\right)^{1/n} = \sqrt[n]{a_1 a_2 \cdots a_n}$$

Example: the geometric mean of 2 and 8 is $\sqrt{2 \times 8} = 4$.

The harmonic mean for a non-empty collection of numbers a1, a2, …, an, all different from 0, is defined as the reciprocal of the arithmetic mean of the reciprocals of the ai's:

$$\bar{a} = n\left(\sum_{i=1}^{n} \frac{1}{a_i}\right)^{-1}$$

One example where the harmonic mean is useful is when examining the speed for a number of fixed-distance trips. For example, if the speed for going from point A to B was 60 km/h, and the speed for returning from B to A was 40 km/h, then the harmonic mean speed is given by

$$\frac{2}{\frac{1}{60} + \frac{1}{40}} = 48 \text{ km/h}$$

Inequality concerning AM, GM, and HM

A well-known inequality concerning arithmetic, geometric, and harmonic means for any set of positive numbers is

$$AM \ge GM \ge HM$$

It is easy to remember by noting that the alphabetical order of the letters A, G, and H is preserved in the inequality. See Inequality of arithmetic and geometric means. Thus for the above harmonic mean example: AM = 50, GM ≈ 49, and HM = 48 km/h.

The most frequently occurring number in a list is called the mode. For example, the mode of the list (1, 2, 2, 3, 3, 3, 4) is 3. It may happen that two or more numbers occur equally often and more often than any other number. In this case there is no agreed definition of mode: some authors say they are all modes and some say there is no mode.

The median is the middle number of the group when they are ranked in order. (If there is an even number of numbers, the mean of the middle two is taken.) Thus to find the median, order the list according to its elements' magnitude and then repeatedly remove the pair consisting of the highest and lowest values until either one or two values are left. If exactly one value is left, it is the median; if two values, the median is the arithmetic mean of these two. This method takes the list 1, 7, 3, 13 and orders it to read 1, 3, 7, 13. Then the 1 and 13 are removed to obtain the list 3, 7. Since there are two elements in this remaining list, the median is their arithmetic mean, (3 + 7)/2 = 5.
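To make the Pythagorean means concrete, here is a minimal sketch in Python (the helper names and sample numbers are illustrative, not from the original text):

```python
import math

def arithmetic_mean(xs):
    # Sum of the values divided by how many there are.
    return sum(xs) / len(xs)

def geometric_mean(xs):
    # n-th root of the product of n non-negative values.
    return math.prod(xs) ** (1 / len(xs))

def harmonic_mean(xs):
    # Reciprocal of the arithmetic mean of the reciprocals.
    return len(xs) / sum(1 / x for x in xs)

speeds = [60, 40]  # km/h, the fixed-distance round trip from the text
am, gm, hm = (f(speeds) for f in (arithmetic_mean, geometric_mean, harmonic_mean))
print(am, gm, hm)      # 50.0 48.98... 48.0
assert am >= gm >= hm  # the AM >= GM >= HM inequality
```

Running it reproduces the figures quoted above: AM = 50, GM ≈ 49, and HM = 48 km/h.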
Summary of types

|Name||Equation or description|
|Median||The middle value that separates the higher half from the lower half of the data set|
|Geometric median||A rotation-invariant extension of the median for points in R^n|
|Mode||The most frequent value in the data set|
|Truncated mean||The arithmetic mean of data values after a certain number or proportion of the highest and lowest data values have been discarded|
|Interquartile mean||A special case of the truncated mean, using the interquartile range|
|Winsorized mean||Similar to the truncated mean, but, rather than deleting the extreme values, they are set equal to the largest and smallest values that remain|

Miscellaneous types

One can create one's own average metric using the generalized f-mean:

$$y = f^{-1}\left(\frac{f(x_1) + f(x_2) + \cdots + f(x_n)}{n}\right)$$

where f is any invertible function. The harmonic mean is an example of this using f(x) = 1/x, and the geometric mean is another, using f(x) = log x. However, this method for generating means is not general enough to capture all averages. A more general method for defining an average takes any function g(x1, x2, …, xn) of a list of arguments that is continuous, strictly increasing in each argument, and symmetric (invariant under permutation of the arguments). The average y is then the value that, when replacing each member of the list, results in the same function value: g(y, y, …, y) = g(x1, x2, …, xn). This most general definition still captures the important property of all averages that the average of a list of identical elements is that element itself. The function g(x1, x2, …, xn) = x1 + x2 + ··· + xn provides the arithmetic mean. The function g(x1, x2, …, xn) = x1·x2···xn (where the list elements are positive numbers) provides the geometric mean. The function g(x1, x2, …, xn) = −(x1⁻¹ + x2⁻¹ + ··· + xn⁻¹) (where the list elements are positive numbers) provides the harmonic mean.

Average percentage return and CAGR

A type of average used in finance is the average percentage return. It is an example of a geometric mean. When the returns are annual, it is called the compound annual growth rate (CAGR). For example, if we are considering a period of two years, and the investment return in the first year is −10% and the return in the second year is +60%, then the average percentage return or CAGR, R, can be obtained by solving the equation: (1 − 10%) × (1 + 60%) = (1 − 0.1) × (1 + 0.6) = (1 + R) × (1 + R). The value of R that makes this equation true is 0.2, or 20%. This means that the total return over the 2-year period is the same as if there had been 20% growth each year. Note that the order of the years makes no difference – the average percentage return of +60% and −10% is the same result as that for −10% and +60%.

This method can be generalized to examples in which the periods are not equal. For example, consider a period of half a year for which the return is −23% and a period of two and a half years for which the return is +13%. The average percentage return for the combined period is the single-year return, R, that is the solution of the following equation: (1 − 0.23)^0.5 × (1 + 0.13)^2.5 = (1 + R)^(0.5 + 2.5), giving an average percentage return R of 0.0600 or 6.00%.
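As a quick check of the unequal-period example above, a minimal sketch (Python; variable names are illustrative):

```python
# Sub-period growth factors and their durations in years, from the example
periods = [(1 - 0.23, 0.5), (1 + 0.13, 2.5)]

total, years = 1.0, 0.0
for factor, t in periods:
    total *= factor ** t  # compound each factor over its duration
    years += t

R = total ** (1 / years) - 1  # equivalent constant annual return
print(f"{R:.4f}")             # ~0.0600, i.e. 6.00% per year
```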
Moving average

Given a time series, such as daily stock market prices or yearly temperatures, people often want to create a smoother series. This helps to show underlying trends or periodic behavior. An easy way to do this is to choose a number n and create a new series by taking the arithmetic mean of the first n values, then moving forward one place, and so on. This is the simplest form of moving average. More complicated forms involve using a weighted average. The weighting can be used to enhance or suppress various periodic behaviors, and there is very extensive analysis of which weightings to use in the literature on filtering. In digital signal processing the term "moving average" is used even when the sum of the weights is not 1.0 (so the output series is a scaled version of the averages). The reason for this is that the analyst is usually interested only in the trend or the periodic behavior. A further generalization is the "autoregressive moving average", in which the average also includes some of the recently calculated outputs. This allows samples from further back in the history to affect the current output. (A short code sketch of the simple moving average appears after the references below.)

Etymology

According to the Oxford English Dictionary, "few words have received more etymological investigation." In the 16th century average meant a customs duty, or the like, and was used in the Mediterranean area. It came to mean the cost of damage sustained at sea. From that came an "average adjuster" who decided how to apportion a loss between the owners and insurers of a ship and cargo. Marine damage is either particular average, which is borne only by the owner of the damaged property, or general average, where the owner can claim a proportional contribution from all the parties to the marine venture. The type of calculations used in adjusting general average gave rise to the use of "average" to mean "arithmetic mean".

A second English usage, documented as early as 1674 and sometimes spelled "averish", is as the residue and second growth of field crops, which were considered suited to consumption by draught animals ("avers"). The root is found in Arabic as awar, in Italian as avaria, in French as avarie and in Dutch as averij. It is unclear in which language the word first appeared. There is an earlier (from at least the 11th century), unrelated use of the word. It appears to be an old legal term for a tenant's day labour obligation to a sheriff, probably anglicised from "avera" found in the English Domesday Book (1085).

References

- Merigo, Jose M.; Cananovas, Montserrat (2009). "The Generalized Hybrid Averaging Operator and its Application in Decision Making". Journal of Quantitative Methods for Economics and Business Administration. 9: 69–84. ISSN 1886-516X.
- Bibby, John (1974). "Axiomatisations of the average and a further generalisation of monotonic sequences". Glasgow Mathematical Journal. 15: 63–65.
- Box, George E.P.; Jenkins, Gwilym M. (1976). Time Series Analysis: Forecasting and Control (revised ed.). Holden-Day. ISBN 0816211043.
- Haykin, Simon (1986). Adaptive Filter Theory. Prentice-Hall. ISBN 0130040525.
- "average". Oxford English Dictionary (3rd ed.). Oxford University Press. September 2005.
- Ray, John (1674). A Collection of English Words Not Generally Used. London: H. Bruges.
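As referenced above, a minimal sketch of the simple (unweighted) moving average, assuming a plain Python list as the series:

```python
def moving_average(series, n):
    """Mean of each window of n consecutive values."""
    return [sum(series[i:i + n]) / n for i in range(len(series) - n + 1)]

prices = [10, 11, 13, 12, 15, 16, 14]
print(moving_average(prices, 3))
# [11.33..., 12.0, 13.33..., 14.33..., 15.0] - a smoothed version of the input
```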
We have previously covered the subject of what money is. So now let's see how it is created. That's something I had a hard time understanding the first time I came across it, since it's actually not that intuitive.

The money multiplier model

Let's say that you have some money to deposit in the bank. So you go there and you deposit $1,000. The bank assumes that you probably won't need all that money back at the same time, and it is legally required to keep only a certain percentage of it — let's say 10%, so $100 — in its reserves. That's called a reserve requirement. The bank can therefore lend the remaining $900 of the money you deposited to someone else who needs a loan — let's say person A.

Now person A makes a purchase in person B's shop for, let's say, $900. Person B then goes to another bank — bank B — to deposit the cash, and bank B keeps 10% of the $900 — which is $90 — and lends the remaining $810 to person C. This process then goes on and on until all of the original $1,000 is held in reserves.

However, all these people who have deposited money in the bank can see the full amount on their bank statements, even though that is not actually the amount the bank holds in its reserves. Moreover, even if it is still the same $1,000 that is lent over and over again, the sum of the amounts on everyone's bank statements is way over $1,000. In our example — if we consider that person C made an $810 purchase in person D's shop, who then put this money in bank C — the sum of deposits equals 1000 + 900 + 810 = $2,710. So 2710 − 1000 = $1,710 were created out of thin air! In the limit, with a 10% reserve requirement, total deposits approach $1,000 / 0.10 = $10,000.

Here is a graph that shows how much money this model can create depending on the reserve requirement percentage — in this case we assume that all the money a person has is deposited in the bank.

So this model relies on the reserve requirement, since the amount of money created depends on it. However, some countries — like the UK — have a 0% reserve requirement. In the US, it varies from 0 to 10% depending on the amount of money in a bank's customer accounts.

It can also be a way of controlling the amount of money in the economy. If a government decides to lower the reserve requirement, it pumps money into the economy — by letting banks create more of it. If it decides to increase the reserve requirement, it decreases the amount of money in the economy.

What does it mean if there is no reserve requirement? Well, in this case, the amount of money lent never dies down. Indeed, if person A deposits $1,000 in bank A, the bank can lend the full $1,000 to person B, and so on. Therefore, the amount of money created could in theory be limitless. However, there is something called "capital adequacy requirements": regulatory agencies require banks to hold a certain amount of capital in case of emergency. So basically, a bank cannot hold zero cash, and therefore the amount of money lent does die down after a while. Moreover, in countries where reserve requirements are established, capital requirements are added on top.

If there's anything you would want to add or share, please do so in the comment section :). I am learning as I do these articles, so don't hesitate to correct me if I said something wrong.

Additional resources: A deeper explanation of the model above (however, you should consider their balloon model carefully, since I have found other resources that say otherwise)
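As a rough illustration of the multiplier model described in this post, here is a minimal simulation sketch (Python; the deposit amount and reserve ratio are just the figures from the example above):

```python
def total_deposits(initial_deposit, reserve_ratio, rounds):
    """Sum the deposits created as each loan is re-deposited at the next bank."""
    total, deposit = 0.0, initial_deposit
    for _ in range(rounds):
        total += deposit                # this amount shows up on a statement
        deposit *= (1 - reserve_ratio)  # bank keeps the reserve, lends the rest
    return total

print(total_deposits(1000, 0.10, rounds=3))     # 2710.0, as in the example
print(total_deposits(1000, 0.10, rounds=1000))  # ~10000.0 = 1000 / 0.10
```

The second call shows the limit of the geometric series: total deposits approach the initial deposit divided by the reserve requirement.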
The saturation domes for R-134a and isopentane shown in Figure 13.4 are quite asymmetric, with steep (R-134a) or even recurving (isopentane) saturation curves. Water, on the other hand, has a relatively symmetric saturation dome (Figure 12.9). Explain why this feature makes R-134a and isopentane more suitable fluids for low-temperature heat extraction devices and engines than water. Why is water so widely used for higher-temperature systems such as steam turbines? What kind of shape for the saturation dome would be better for a phase-change fluid in a higher-temperature system?

Consider a parabolic trough where the height of the reflector is identical to the height of the center of the absorber, as depicted in the figure. The concentrator width is 8 m and the absorbing tube of radius 0.5 m is centered along the focal line at a height of 2 m above the bottom of the trough. Incident sunlight hits the concentrator from a direction parallel to the line of symmetry, with an intensity of 1000 W/m². (a) What is the effective concentration of the concentrator? (b) What is the acceptance angle within which all light hits the absorber? (c) Assume that the tube carries a fluid that is heated and removes some of the incoming energy. The tube then radiates as a black body at a temperature of 150 °C. Compute the net rate at which energy is collected and transferred to the fluid, for each meter of length of the trough.

The thickness of Earth's ozone layer at any location is measured in Dobson units (DU), where 1 DU corresponds to a gas layer of thickness 10 μm if the gas pressure were raised to 1 atm. Show that 1 DU corresponds to 2.69 × 10¹⁶ molecules/cm². Data on the absorption cross section for ozone are given in Figure 23.13. Normally the thickness of the ozone layer is ≈ 300 DU, but due to ozone destruction by chlorofluorocarbons it had dropped as low as ≈ 90 DU over Antarctica during the 1990s. How low would the ozone level have to drop to allow 10⁻⁶ of the incident UV flux at λ ≈ 250 nm or 220 nm to reach sea level? (You will need to use the result of Problem 23.5.) A worked check of the Dobson-unit conversion appears after the next problem.

A flat panel solar collector is to be tilted at an angle θ to the horizontal to maximize the amount of sunlight it can collect over the whole year. Once tilted, it is oriented toward the south (in the Northern Hemisphere). One might think that θ should be chosen to equal the latitude λ of the installation so that the Sun would be directly above the panel at noon on the equinoxes. Instead, if total insolation were the only concern, θ would be chosen to be less than λ. Explain why. Describe (but do not attempt to compute) how you would compute the optimal angle for a given location to maximize annual collection. Explain why, despite these considerations, in many practical situations θ may be chosen to be equal to or greater than λ.
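As flagged above, a short check of the Dobson-unit conversion, assuming standard conditions (T = 273.15 K, P = 101325 Pa) and the ideal gas law (a sketch of the arithmetic, not the book's solution):

$$n = \frac{P}{k_B T} = \frac{101325\ \mathrm{Pa}}{(1.381\times10^{-23}\ \mathrm{J/K})(273.15\ \mathrm{K})} \approx 2.69\times10^{25}\ \mathrm{m^{-3}},$$

$$1\ \mathrm{DU} = n \times 10\ \mu\mathrm{m} \approx 2.69\times10^{20}\ \mathrm{m^{-2}} = 2.69\times10^{16}\ \mathrm{molecules/cm^{2}}.$$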
Consider a linear 2D compound parabolic concentrator built from parabolas tilted at 10° to the vertical, with a trough of width 3 m and an absorber width of 0.5 m. Compute the concentration C of the concentrator. If the incoming radiation has intensity I₀ = 1000 W/m² and the (blackbody) absorber is kept at a temperature of 100 °C by circulation of a thermal fluid, compute the rate of energy transfer to the fluid. Compare this rate of energy transfer to that for a linear parabolic concentrator with the same absorber area and acceptance angle; assume that the parabolic concentrator has height equal to the focal length (as in Example 24.4), and that the absorber has an area covering the lower half of a cylinder centered on the focal line, and is kept at the same temperature of 100 °C.

The absorption coefficient of silicon has a strong dependence on photon energy, as shown in Figure 25.16. For simplicity, consider an idealized material, material S, similar to crystalline silicon, with an absorption coefficient of κ = 7 × 10³ m⁻¹ for light at any wavelength. What fraction of (normally incident) light will be absorbed by a wafer of material S that is 200 μm thick? If we could fabricate a layer of material S just 1 μm thick, what fraction of incident radiation would it absorb? The absorption coefficient of amorphous silicon is significantly higher than that of crystalline silicon. If we had another material, material A, with an absorption coefficient κ = 1.5 × 10⁶ m⁻¹ (again independent of wavelength), what fraction would be absorbed by a 1 μm layer?

Derive the Fermi–Dirac distribution (25.8). Start by considering a single electron state of energy E that is either occupied (n = 1) or not occupied (n = 0) by an electron. Now, consider coupling this two-state system to a thermal reservoir at temperature T so that not only energy but also particles can move between the smaller system and the reservoir. The entropy of the reservoir S(U, N) now depends on the total energy and total number of particles in the reservoir. Define EF = −T(∂S/∂N)|U. Use an argument similar to that used to derive the Boltzmann distribution in §8 to derive the Fermi–Dirac distribution (25.8).

Assume that in a given scenario with no significant increase in nuclear power usage and gradually increasing reliance on renewables over the next century, atmospheric CO2 levels reach a maximum of 700 ppmv and then stabilize. Now assume that this scenario is varied by building a thousand 1 GW nuclear power plants at a steady rate over the next century. Estimate the decrease in radiative forcing and average surface temperatures assuming that these nuclear plants replace coal power plants. How would you compare the risks and environmental hazards associated with the nuclear power plants against the risks posed by the marginal warming offset by the nuclear plants?
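For the absorption parts of the problems above, the fractions follow from the Beer–Lambert law, 1 − e^(−κd); and a common simplified expression for CO2 radiative forcing is ΔF = 5.35 ln(C/C0) W/m². A minimal sketch (Python; the 650 ppmv figure is an assumed illustration, not a computed answer to the problem):

```python
import math

# Beer-Lambert absorbed fraction 1 - exp(-kappa * d) for the silicon-like materials
for kappa, d in [(7e3, 200e-6), (7e3, 1e-6), (1.5e6, 1e-6)]:
    print(f"kappa = {kappa:.1e} /m, d = {d:.0e} m -> absorbed {1 - math.exp(-kappa * d):.1%}")
# -> about 75.3%, 0.7%, and 77.7% respectively

# Simplified CO2 forcing: dF = 5.35 * ln(C / C0) in W/m^2
print(5.35 * math.log(700 / 650))  # ~0.40 W/m^2 if the build-out held CO2 at 650 ppmv
```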
Discussion and review

A pendulum is a weight hanging from a fixed point so that it swings freely under the influence of gravity and its own inertia. A simple pendulum consists of a heavy pendulum bob (of mass M) suspended from a light string. It is generally assumed that the mass of the string is negligible. If the bob moves away from the vertical to some angle θ, and is released so that the pendulum swings within a vertical plane, the period of the pendulum is given as:

$$T = 2\pi\sqrt{\frac{L}{g}}$$

Table 1: Contents of formula
|T||Period of the pendulum to complete one cycle|
|L||Length of string|
|g||Acceleration due to gravity: 9.81 m/s²|

Part 1: changing the amplitude

Before beginning, find a solid support from which to hang the pendulum. Ideally, there should be a wall close to the support so the protractor and tape measure can be attached for recording the pendulum's movements. A bathroom or kitchen towel bar is ideal for this purpose. A support similar to that shown in Figure 3 can be constructed and placed on a narrow shelf or tabletop. It is important not only that the support allows the pendulum to hang freely, but also that you are able to read and record measurements from the protractor and tape measure. Do not allow the pendulum string to touch anything or be obstructed from any direction. The pendulum apparatus must also be sturdy enough so that it does not bend, flex, or move in any manner, as this would introduce error into the experiment. See Figure 4 for an example setup with the pendulum bob hanging from an over-the-door hanger.

- Attach a small plastic bag to the spring scale.
- Add washers to the plastic bag until the scale measures approximately 25 g total. The filled bag will hereafter be referred to as the bob. Record this value as "Mass of bob" in the place provided in Data Table 1.
- Measure a piece of string that is approximately 120 cm in length. Tie the string around the top of the bag so that the washers cannot fall out. Suspend the bob from this string so that it measures exactly 1 m (100 cm) between where it attaches to the support and the bottom of the bob.
- Use tape to affix the protractor behind where the string is attached to the support so you can measure the pendulum's amplitude in degrees. The center hole in the protractor should be located directly behind the pivot point. The string should hang straight down so that it lines up with the 90° mark on the protractor. See Figure 4 as an example of the correct placement of the protractor.
- Stretch the measuring tape horizontally and use tape to affix it to the wall or door so that its 50-cm mark is directly behind the bob at rest.
- Displace the bob out to the 5° mark and hold it there. Then observe the bob's location during its first cycle as it swings relative to the tape measure and record the distance in centimeters as "Amplitude (bob horizontal displacement)" in Data Table 1.
- With a stopwatch ready to begin timing, release (do not push) the bob and begin timing how long it takes the bob to move through five complete cycles. Record this first trial time in Data Table 1 for Trial 1. Repeat the procedure for the second and third trials. Then average the three trial times to calculate the average period for one cycle, and record this value in Data Table 1.
- Repeat this procedure, releasing the bob at 10°, 15°, 20°, 25°, and 30°, and recording the results for each of the angles in Data Table 1.
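As a sanity check on your timing data (an estimate assuming L = 1.00 m; not part of the original procedure), the small-angle formula predicts

$$T = 2\pi\sqrt{\frac{1.00\ \mathrm{m}}{9.81\ \mathrm{m/s^2}}} \approx 2.01\ \mathrm{s},$$

so five complete cycles should take roughly 10 s.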
Length of string: _____ cm = _____ m
Mass of bob: _____ g = _____ kg

Data Table 1: Trial values at varying degrees
|Placement of bob (degrees)||Amplitude (bob horizontal displacement) (cm)||Trial 1 (s), 5 cycles||Trial 2 (s), 5 cycles||Trial 3 (s), 5 cycles||Avg. time (s), 5 cycles||Period (s), 1 cycle|

IMPORTANT: The pendulum must swing without obstruction and should not strike the background as it swings.

Part 2: changing the mass

- Add more weights to the bag until the mass has doubled to approximately 50 g. Record this value as "Mass of bob" in grams on the line provided next to Data Table 2.
- Repeat the procedure used in Part 1, using only a 10° amplitude for the starting point of the pendulum. Record the data in Data Table 2.

Length of string: ________ cm = _______ m
Amplitude: 10°

Data Table 2: Trial values for bob masses
|Bob weight (g)||Bob weight (kg)||Trial 1 (s)||Trial 2 (s)||Trial 3 (s)||Avg time (s)||Period (s)|

Part 3: changing the length of string

- Remove the weights until the original mass used in Part 1 (approximately 25 g) is inside the bag. Record this "Mass of bob" in grams on the line provided next to Data Table 3.
- Put the original bob containing the washers back onto the pendulum. Use a 10° amplitude and perform three trials each with successively shorter lengths of string; for example, 1 m, 0.75 m, etc. Record the times in seconds in the columns labeled "Trial 1, 2, or 3 (s)" in Data Table 3.

Mass of bob: ________ g = _______ kg
Amplitude: 10°

Data Table 3: Trial values for string length
|Length (m)||Trial 1 (s)||Trial 2 (s)||Trial 3 (s)||Avg time (s)||Period (s)|

Part 4: Calculations

- Solve the pendulum formula for g using the values derived from this experiment. Equation 3 will be used in calculating g. Substitute the average data for the period and the length of the pendulum into the formula. Calculate to three significant figures. Then calculate your percentage error as compared to the accepted value for g, which is 9.81 m/s², using Equation 4.

$$g = \frac{4\pi^2 L}{T^2} \quad \text{(Equation 3)}$$

- g = acceleration due to gravity
- T = period (time for one cycle) in seconds
- L = length of pendulum string in meters

Note: If you get very large errors, such as 20% or more, in this lab, double-check your calculations.

$$\%\ \text{error} = \frac{|\text{experimental value} - \text{theoretical value}|}{\text{theoretical value}} \times 100 \quad \text{(Equation 4)}$$
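A minimal sketch of the Part 4 arithmetic (Python; the sample numbers are illustrative, not measured data):

```python
import math

L = 1.00                # pendulum length in meters (illustrative)
avg_five_cycles = 10.1  # average stopwatch time for five cycles, seconds (illustrative)
T = avg_five_cycles / 5 # period of one cycle

g_exp = 4 * math.pi**2 * L / T**2           # Equation 3 solved for g
pct_error = abs(g_exp - 9.81) / 9.81 * 100  # Equation 4
print(f"g = {g_exp:.3f} m/s^2, error = {pct_error:.1f}%")  # g = 9.675 m/s^2, error = 1.4%
```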
Sintering happens naturally in mineral deposits, and it is used as a manufacturing process with metals, ceramics, plastics, and other materials. The atoms in the materials diffuse across the boundaries of the particles, fusing the particles together and creating one solid piece. Because the sintering temperature does not have to reach the melting point of the material, sintering is often chosen as the shaping process for materials with extremely high melting points such as tungsten and molybdenum. The study of sintering in metallurgy and powder-related processes is known as powder metallurgy. An example of sintering can be observed when ice cubes in a glass of water adhere to each other, which is driven by the temperature difference between the water and the ice. Examples of pressure-driven sintering are the compacting of snowfall into a glacier, or the forming of a hard snowball by pressing loose snow together.

Sintering is effective when the process reduces the porosity and enhances properties such as strength, electrical conductivity, translucency and thermal conductivity; yet, in other cases, it may be useful to increase the strength but keep the gas absorbency constant, as in filters or catalysts. During the firing process, atomic diffusion drives powder surface elimination in different stages, starting from the formation of necks between powder particles to the final elimination of small pores at the end of the process.

The driving force for densification is the change in free energy from the decrease in surface area and the lowering of the surface free energy by the replacement of solid–vapor interfaces. It forms new but lower-energy solid–solid interfaces, with a total decrease in free energy occurring. On a microscopic scale, material transfer is affected by the change in pressure and the differences in free energy across the curved surface. If the size of the particle is small (and its curvature is high), these effects become very large in magnitude. The change in energy is much higher when the radius of curvature is less than a few micrometres, which is one of the main reasons why much ceramic technology is based on the use of fine-particle materials.

For properties such as strength and conductivity, the bond area in relation to the particle size is the determining factor. The variables that can be controlled for any given material are the temperature and the initial grain size, because the vapor pressure depends upon temperature. Through time, the particle radius and the vapor pressure are proportional to (p₀)^(2/3) and to (p₀)^(1/3), respectively.

The source of power for solid-state processes is the change in free or chemical potential energy between the neck and the surface of the particle. This energy creates a transfer of material through the fastest means possible; if transfer were to take place from the particle volume or the grain boundary between particles, then there would be particle reduction and pore destruction. Pore elimination occurs faster for a sample with many pores of uniform size and higher porosity, where the boundary diffusion distance is smaller. For the latter portions of the process, boundary and lattice diffusion from the boundary become important.

Control of temperature is very important to the sintering process, since grain-boundary diffusion and volume diffusion rely heavily upon temperature, the size and distribution of particles of the material, the material's composition, and often the sintering environment to be controlled.
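To give a feel for why fine particles matter so much here, a minimal sketch of the curvature-driven pressure using the Young–Laplace relation ΔP = 2γ/r (Python; the surface energy value is an assumed, typical order of magnitude, not from the text):

```python
gamma = 1.0  # surface energy in J/m^2 (assumed typical ceramic value)

for r in [10e-6, 1e-6, 0.1e-6]:  # particle radii in meters
    delta_p = 2 * gamma / r      # Young-Laplace pressure across the curved surface
    print(f"r = {r * 1e6:4.1f} um -> delta_P = {delta_p / 1e6:5.1f} MPa")
# 0.2, 2.0 and 20.0 MPa: the driving pressure grows tenfold for every tenfold
# reduction in particle size, which is why fine powders sinter so much more readily.
```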
Sintering is part of the firing process used in the manufacture of pottery and other ceramic objects. These objects are made from substances such as glass, alumina, zirconia, silica, magnesia, lime, beryllium oxide, and ferric oxide. Some ceramic raw materials have a lower affinity for water and a lower plasticity index than clay, requiring organic additives in the stages before sintering. The general procedure of creating ceramic objects via sintering of powders includes:
- Mixing water, binder, deflocculant, and unfired ceramic powder to form a slurry;
- Spray-drying the slurry;
- Putting the spray-dried powder into a mold and pressing it to form a green body (an unsintered ceramic item);
- Heating the green body at low temperature to burn off the binder;
- Sintering at a high temperature to fuse the ceramic particles together.

All the characteristic temperatures associated with phase transformations, glass transitions, and melting points occurring during a sintering cycle of a particular ceramic formulation (i.e., tails and frits) can easily be obtained by observing the expansion-temperature curves during optical dilatometer thermal analysis. In fact, sintering is associated with a remarkable shrinkage of the material, because glass phases flow once their transition temperature is reached and start consolidating the powdery structure, considerably reducing the porosity of the material.

Sintering is performed at high temperature. Additionally, a second and/or third external driving force (such as pressure or an electrical current) can be used. A commonly used second external force is pressure; sintering performed using temperature alone is generally called "pressureless sintering". Pressureless sintering is possible with graded metal-ceramic composites, with a nanoparticle sintering aid and bulk molding technology. A variant used for 3D shapes is called hot isostatic pressing.

To allow efficient stacking of product in the furnace during sintering and to prevent parts sticking together, many manufacturers separate ware using ceramic powder separator sheets. These sheets are available in various materials such as alumina, zirconia and magnesia. They are additionally categorized by fine, medium and coarse particle sizes. By matching the material and particle size to the ware being sintered, surface damage and contamination can be reduced while maximizing furnace loading.

Sintering of metallic powders

Most, if not all, metals can be sintered. This applies especially to pure metals produced in vacuum, which suffer no surface contamination. Sintering under atmospheric pressure requires the use of a protective gas, quite often endothermic gas. Sintering, with subsequent reworking, can produce a great range of material properties. Changes in density, alloying, and heat treatments can alter the physical characteristics of various products. For instance, the Young's modulus En of sintered iron powders remains somewhat insensitive to sintering time, alloying, or particle size in the original powder for lower sintering temperatures, but depends upon the density of the final product. Sintering is static when a metal powder under certain external conditions may exhibit coalescence, and yet reverts to its normal behavior when such conditions are removed. In most cases, the density of a collection of grains increases as material flows into voids, causing a decrease in overall volume.
Mass movements that occur during sintering consist of the reduction of total porosity by repacking, followed by material transport due to evaporation and condensation from diffusion. In the final stages, metal atoms move along crystal boundaries to the walls of internal pores, redistributing mass from the internal bulk of the object and smoothing pore walls. Surface tension is the driving force for this movement.

A special form of sintering (which is still considered part of powder metallurgy) is liquid-state sintering, in which at least one but not all elements are in a liquid state. Liquid-state sintering is required for making cemented carbide and tungsten carbide.

Sintered bronze in particular is frequently used as a material for bearings, since its porosity allows lubricants to flow through it or remain captured within it. Sintered copper may be used as a wicking structure in certain types of heat pipe construction, where the porosity allows a liquid agent to move through the porous material via capillary action. For materials that have high melting points such as molybdenum, tungsten, rhenium, tantalum, osmium and carbon, sintering is one of the few viable manufacturing processes. In these cases, very low porosity is desirable and can often be achieved.

Sintered metal powder is used to make frangible shotgun shells called breaching rounds, as used by military and SWAT teams to quickly force entry into a locked room. These shotgun shells are designed to destroy door deadbolts, locks and hinges without risking lives by ricocheting or by flying on at lethal speed through the door. They work by destroying the object they hit and then dispersing into a relatively harmless powder.

Sintered bronze and stainless steel are used as filter materials in applications requiring high temperature resistance while retaining the ability to regenerate the filter element. For example, sintered stainless steel elements are employed for filtering steam in food and pharmaceutical applications, and sintered bronze in aircraft hydraulic systems.

Particular advantages of the powder technology include:
- Very high levels of purity and uniformity in starting materials
- Preservation of purity, due to the simpler subsequent fabrication process (fewer steps) that it makes possible
- Stabilization of the details of repetitive operations, by control of grain size during the input stages
- Absence of binding contact between segregated powder particles – or "inclusions" (called stringering) – as often occurs in melting processes
- No deformation needed to produce directional elongation of grains
- Capability to produce materials of controlled, uniform porosity
- Capability to produce nearly net-shaped objects
- Capability to produce materials which cannot be produced by any other technology
- Capability to fabricate high-strength materials such as turbine blades
- Higher mechanical strength for handling after sintering

The literature contains many references on sintering dissimilar materials to produce solid/solid-phase compounds or solid/melt mixtures at the processing stage. Almost any substance can be obtained in powder form, through either chemical, mechanical or physical processes, so basically any material can be obtained through sintering. When pure elements are sintered, the leftover powder is still pure, so it can be recycled.

Particular disadvantages of the powder technology include:
- Fully (100%) sintered iron ore cannot be charged in the blast furnace
- Sintering cannot create uniform sizes
- Micro- and nano-structures produced before sintering are often destroyed

Plastic materials are formed by sintering for applications that require materials of specific porosity. Sintered plastic porous components are used in filtration and to control fluid and gas flows. Sintered plastics are used in applications requiring caustic fluid separation processes, such as the nibs in whiteboard markers, inhaler filters, and vents for caps and liners on packaging materials. Sintered ultra-high-molecular-weight polyethylene materials are used as ski and snowboard base materials. The porous texture allows wax to be retained within the structure of the base material, thus providing a more durable wax coating.

Liquid phase sintering

For materials that are difficult to sinter, a process called liquid phase sintering is commonly used. Materials for which liquid phase sintering is common include Si3N4, WC, and SiC. Liquid phase sintering is the process of adding to the powder an additive which will melt before the matrix phase. The process of liquid phase sintering has three stages:
- Rearrangement – As the liquid melts, capillary action will pull the liquid into pores and also cause grains to rearrange into a more favorable packing arrangement.
- Solution-precipitation – In areas where capillary pressures are high (particles are close together), atoms will preferentially go into solution and then precipitate in areas of lower chemical potential where particles are not close or in contact. This is called "contact flattening". This densifies the system in a way similar to grain boundary diffusion in solid state sintering. Ostwald ripening will also occur, where smaller particles go into solution preferentially and precipitate on larger particles, leading to densification.
- Final densification – Densification of the solid skeletal network, with liquid movement from efficiently packed regions into pores.

For liquid phase sintering to be practical, the major phase should be at least slightly soluble in the liquid phase, and the additive should melt before any major sintering of the solid particulate network occurs, otherwise rearrangement of grains will not occur. Liquid phase sintering has been successfully applied to improve grain growth of thin semiconductor layers from nanoparticle precursor films.

Electric current assisted sintering

These techniques employ electric currents to drive or enhance sintering. In 1906, the English engineer A. G. Bloxam registered the first patent on sintering powders using direct current in vacuum. The primary purpose of his inventions was the industrial-scale production of filaments for incandescent lamps by compacting tungsten or molybdenum particles. The applied current was particularly effective in reducing surface oxides that increased the emissivity of the filaments.

In 1913, Weintraub and Rush patented a modified sintering method which combined electric current with pressure. The benefits of this method were proved for the sintering of refractory metals as well as conductive carbide or nitride powders. The starting boron–carbon or silicon–carbon powders were placed in an electrically insulating tube and compressed by two rods which also served as electrodes for the current. The estimated sintering temperature was 2000 °C.

In the United States, sintering was first patented by Duval d'Adrian in 1922. His three-step process aimed at producing heat-resistant blocks from such oxide materials as zirconia, thoria or tantalia.
The steps were: (i) molding the powder; (ii) annealing it at about 2500 °C to make it conducting; (iii) applying current-pressure sintering as in the method by Weintraub and Rush. Sintering that uses an arc produced via a capacitance discharge to eliminate oxides before direct current heating was patented by G. F. Taylor in 1932. This originated sintering methods employing pulsed or alternating current, eventually superimposed on a direct current. Those techniques have been developed over many decades and are summarized in more than 640 patents.

Spark plasma sintering

In spark plasma sintering (SPS), external pressure and an electric field are applied simultaneously to enhance the densification of metallic/ceramic powder compacts. However, after commercialization it was determined that there is no plasma, so the proper name is spark sintering, as coined by Lenel. The electric-field-driven densification supplements sintering with a form of hot pressing, enabling lower temperatures and shorter times than typical sintering. For a number of years, it was speculated that the existence of sparks or plasma between particles could aid sintering; however, Hulbert and coworkers systematically proved that the electric parameters used during spark plasma sintering make it (highly) unlikely. In light of this, the name "spark plasma sintering" has been rendered obsolete. Terms such as "field assisted sintering technique" (FAST), "electric field assisted sintering" (EFAS), and "direct current sintering" (DCS) have been adopted by the sintering community. When a DC pulse is used as the electric current, spark plasma, spark impact pressure, Joule heating, and an electric field diffusion effect may be generated. By modifying the graphite die design and its assembly, it has been demonstrated that a pressureless sintering condition can be created in a spark plasma sintering facility. This modified die design setup is reported to synergize the advantages of both conventional pressureless sintering and spark plasma sintering techniques.

Electro sinter forging

Electro sinter forging is an electric current assisted sintering (ECAS) technology that originated from capacitor discharge sintering. It is used for the production of diamond metal matrix composites and is under evaluation for the production of hard metals, nitinol and other metals and intermetallics. It is characterized by a very short sintering time, allowing machines to sinter at the same speed as a compaction press.

Pressureless sintering

Pressureless sintering is the sintering of a powder compact (sometimes at very high temperatures, depending on the powder) without applied pressure. This avoids density variations in the final component, which occur with more traditional hot pressing methods. The powder compact (if a ceramic) can be created by slip casting, injection moulding, or cold isostatic pressing. After pre-sintering, the final green compact can be machined to its final shape before being sintered.

Three different heating schedules can be performed with pressureless sintering: constant-rate of heating (CRH), rate-controlled sintering (RCS), and two-step sintering (TSS). The microstructure and grain size of the ceramics may vary depending on the material and method used.

Constant-rate of heating (CRH), also known as temperature-controlled sintering, consists of heating the green compact at a constant rate up to the sintering temperature. Experiments with zirconia have been performed to optimize the sintering temperature and sintering rate for the CRH method.
Results showed that the grain sizes were identical when the samples were sintered to the same density, proving that grain size is a function of specimen density rather than of the CRH temperature mode.

In rate-controlled sintering (RCS), the densification rate in the open-porosity phase is lower than in the CRH method. By definition, the relative density, ρrel, in the open-porosity phase is lower than 90%. Although this should prevent separation of pores from grain boundaries, it has been proven statistically that RCS did not produce smaller grain sizes than CRH for alumina, zirconia, and ceria samples.

Two-step sintering (TSS) uses two different sintering temperatures. The first sintering temperature should guarantee a relative density higher than 75% of the theoretical sample density. This will remove supercritical pores from the body. The sample will then be cooled down and held at the second sintering temperature until densification is completed. Grains of cubic zirconia and cubic strontium titanate were significantly refined by TSS compared to CRH. However, the grain size changes in other ceramic materials, like tetragonal zirconia and hexagonal alumina, were not statistically significant.

In microwave sintering, heat is sometimes generated internally within the material, rather than via surface radiative heat transfer from an external heat source. Some materials fail to couple and others exhibit run-away behavior, so its usefulness is restricted. A benefit of microwave sintering is faster heating for small loads, meaning less time is needed to reach the sintering temperature, less heating energy is required, and there are improvements in the product properties. A failing of microwave sintering is that it generally sinters only one compact at a time, so overall productivity turns out to be poor except for situations involving one-of-a-kind sintering, such as for artists. As microwaves can only penetrate a short distance in materials with a high conductivity and a high permeability, microwave sintering requires the sample to be delivered in powders with a particle size around the penetration depth of microwaves in the particular material. The sintering process and side-reactions run several times faster during microwave sintering at the same temperature, which results in different properties for the sintered product. This technique is acknowledged to be quite effective in maintaining fine or nano-sized grains in sintered bioceramics. Magnesium phosphates and calcium phosphates are examples that have been processed through the microwave sintering technique.

Densification, vitrification and grain growth

Sintering in practice is the control of both densification and grain growth. Densification is the act of reducing porosity in a sample, thereby making it denser. Grain growth is the process of grain boundary motion and Ostwald ripening to increase the average grain size. Many properties (mechanical strength, electrical breakdown strength, etc.) benefit from both a high relative density and a small grain size. Therefore, being able to control these properties during processing is of high technical importance. Since densification of powders requires high temperatures, grain growth naturally occurs during sintering. Reduction of this process is key for many engineering ceramics. For densification to occur at a quick pace it is essential to have (1) an amount of liquid phase that is large in size, (2) a near-complete solubility of the solid in the liquid, and (3) wetting of the solid by the liquid.
The power behind the densification is derived from the capillary pressure of the liquid phase located between the fine solid particles. When the liquid phase wets the solid particles, each space between the particles becomes a capillary in which a substantial capillary pressure is developed. For submicrometre particle sizes, capillaries with diameters in the range of 0.1 to 1 micrometres develop pressures in the range of 175 pounds per square inch (1,210 kPa) to 1,750 pounds per square inch (12,100 kPa) for silicate liquids, and in the range of 975 pounds per square inch (6,720 kPa) to 9,750 pounds per square inch (67,200 kPa) for a metal such as liquid cobalt.

Densification requires a constant capillary pressure; solution-precipitation material transfer alone would not produce densification. For further densification, additional particle movement occurs while the particles undergo grain growth and grain-shape changes. Shrinkage results when the liquid slips between particles and increases the pressure at points of contact, causing material to move away from the contact areas and forcing the particle centers to draw nearer to each other.

The sintering of liquid-phase materials involves a fine-grained solid phase to create the needed capillary pressures proportional to its diameter, and the liquid concentration must also create the required capillary pressure within range, else the process ceases. The vitrification rate is dependent upon the pore size, the viscosity and amount of liquid phase present (which determines the viscosity of the overall composition), and the surface tension. Temperature dependence for densification controls the process, because at higher temperatures viscosity decreases and liquid content increases. Therefore, changes to the composition and processing will affect the vitrification process.

Sintering occurs by diffusion of atoms through the microstructure. This diffusion is caused by a gradient of chemical potential – atoms move from an area of higher chemical potential to an area of lower chemical potential. The different paths the atoms take to get from one spot to another are the sintering mechanisms. The six common mechanisms are:
- Surface diffusion – diffusion of atoms along the surface of a particle
- Vapor transport – evaporation of atoms which condense on a different surface
- Lattice diffusion from surface – atoms from the surface diffuse through the lattice
- Lattice diffusion from grain boundary – atoms from the grain boundary diffuse through the lattice
- Grain boundary diffusion – atoms diffuse along the grain boundary
- Plastic deformation – dislocation motion causes flow of matter

One must also distinguish between densifying and non-densifying mechanisms. Mechanisms 1–3 above are non-densifying – they take atoms from the surface and rearrange them onto another surface or part of the same surface. These mechanisms simply rearrange matter inside the porosity and do not cause pores to shrink. Mechanisms 4–6 are densifying mechanisms – atoms are moved from the bulk to the surface of pores, thereby eliminating porosity and increasing the density of the sample.

A grain boundary (GB) is the transition area or interface between adjacent crystallites (or grains) of the same chemical and lattice composition, not to be confused with a phase boundary. The adjacent grains do not have the same orientation of the lattice, thus giving the atoms in the GB shifted positions relative to the lattice in the crystals.
Due to the shifted positioning of the atoms in the GB, they have a higher energy state compared with the atoms in the crystal lattice of the grains. It is this imperfection that makes it possible to selectively etch the GBs when one wants the microstructure to be visible. Striving to minimize its energy leads to the coarsening of the microstructure to reach a metastable state within the specimen. This involves minimizing the GB area and changing the topological structure to minimize the energy. Grain growth can be either normal or abnormal: normal grain growth is characterized by the uniform growth and size of all the grains in the specimen, while abnormal grain growth is when a few grains grow much larger than the remaining majority.

Grain boundary energy/tension

The atoms in the GB are normally in a higher energy state than their equivalents in the bulk material. This is due to their more stretched bonds, which gives rise to a GB tension σGB. This extra energy that the atoms possess is called the grain boundary energy, γGB. The grain will want to minimize this extra energy, thus striving to make the grain boundary area smaller, and this change requires energy. "Or, in other words, a force has to be applied, in the plane of the grain boundary and acting along a line in the grain-boundary area, in order to extend the grain-boundary area in the direction of the force. The force per unit length, i.e. tension/stress, along the line mentioned is σGB. On the basis of this reasoning it would follow: σGB = dG/dA, with dA as the increase of grain-boundary area per unit length along the line in the grain-boundary area considered." [pg 478]

The GB tension can also be thought of as the attractive force between the atoms at the surface; the tension between these atoms exists because there is a larger interatomic distance between them at the surface compared to the bulk (i.e., surface tension). When the surface area becomes bigger, the bonds stretch more and the GB tension increases. To counteract this increase in tension there must be a transport of atoms to the surface, keeping the GB tension constant. This diffusion of atoms accounts for the constant surface tension in liquids, and then the argument σGB = γGB holds true. For solids, on the other hand, diffusion of atoms to the surface might not be sufficient, and the surface tension can vary with an increase in surface area. For a solid, one can derive an expression for the change in Gibbs free energy, dG, upon the change of GB area, dA:

$$dG = \gamma_{GB}\,dA + A\,d\gamma_{GB}$$

which gives

$$\sigma_{GB} = \gamma_{GB} + A\,\frac{d\gamma_{GB}}{dA}$$

γGB is normally expressed in units of J/m² while σGB is normally expressed in units of N/m, since they are different physical properties.

In a two-dimensional isotropic material the grain boundary tension would be the same for the grains. This would give an angle of 120° at a GB junction where three grains meet. This would give the structure a hexagonal pattern, which is the metastable state (or mechanical equilibrium) of the 2D specimen. A consequence of this is that the structure keeps trying to stay as close to equilibrium as possible. Grains with fewer than six sides will bend the GB to try to keep the 120° angle between each other; this results in a curved boundary with its curvature towards itself. A grain with six sides will, as mentioned, have straight boundaries, while a grain with more than six sides will have curved boundaries with their curvature away from itself. A grain with six boundaries (i.e., a hexagonal structure) is in a metastable state (i.e., local equilibrium) within the 2D structure.
In three dimensions structural details are similar but much more complex, and the metastable structure for a grain is a non-regular 14-sided polyhedron with doubly curved faces. In practice all arrays of grains are unstable and thus always grow until prevented by a counterforce. Grains strive to minimize their energy, and a curved boundary has a higher energy than a straight boundary. This means that the grain boundary will migrate towards its center of curvature. The consequence of this is that grains with fewer than 6 sides will decrease in size while grains with more than 6 sides will increase in size.

Grain growth occurs due to motion of atoms across a grain boundary. Convex surfaces have a higher chemical potential than concave surfaces, so grain boundaries will move toward their center of curvature. Since smaller particles tend to have a smaller radius of curvature (i.e., higher curvature), smaller grains lose atoms to larger grains and shrink. This is a process called Ostwald ripening: large grains grow at the expense of small grains. Grain growth in a simple model is found to follow:

$$G^m = G_0^m + Kt$$

Here G is the final average grain size, G0 is the initial average grain size, t is time, m is a factor between 2 and 4, and K is a factor given by:

$$K = K_0 \exp\left(-\frac{Q}{RT}\right)$$

Here Q is the molar activation energy, R is the ideal gas constant, T is absolute temperature, and K0 is a material-dependent factor. In most materials the sintered grain size is proportional to the inverse square root of the fractional porosity, implying that pores are the most effective retardant for grain growth during sintering.
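A minimal sketch of this grain-growth law (Python; all parameter values are illustrative assumptions, not taken from the text):

```python
import math

def grain_size(G0, t, T, m=3, K0=1e-10, Q=3e5, R=8.314):
    """Solve G^m = G0^m + K*t with K = K0 * exp(-Q / (R*T)).

    G0 in meters, t in seconds, T in kelvin; K0 and Q are assumed values.
    """
    K = K0 * math.exp(-Q / (R * T))
    return (G0**m + K * t) ** (1 / m)

# One-hour isothermal hold, starting from 1 um grains
for T in (1500, 1600):
    print(T, grain_size(G0=1e-6, t=3600, T=T))  # ~2.4 um at 1500 K, ~3.9 um at 1600 K
```

The Arrhenius factor makes the coarsening rate strongly temperature-dependent, which is why temperature control is stressed throughout this article.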
With this, the critical diameter that has to be reached before the grains cease to grow is

D_crit = 4r / (3f)

so the critical diameter of the grains depends on the size and volume fraction of the particles at the grain boundaries. It has also been shown that small bubbles or cavities can act as inclusions. More complicated interactions which slow grain boundary motion include interactions of the surface energies of the two grains and the inclusion, and are discussed in detail by C. S. Smith.

Natural sintering in geology

Siliceous sinter is a deposit of opaline or amorphous silica which appears as incrustations near hot springs and geysers. It sometimes forms conical mounds, called geyser cones, but can also form as a terrace. The main agents responsible for the deposition of siliceous sinter are algae and other vegetation in the water. Alteration of wall rocks can also form sinters near fumaroles and in the deeper channels of hot springs. Examples of siliceous sinter are geyserite and fiorite. They can be found in many places, including Iceland, the El Tatio geothermal field in Chile, New Zealand, and Yellowstone National Park and Steamboat Springs in the United States.

Calcareous sinter is also called tufa, calcareous tufa, or calc-tufa. It is a deposit of calcium carbonate, as with travertine. Springs that deposit it are called petrifying springs and are quite common in limestone districts. Their calcareous waters deposit a sintery incrustation on surrounding objects. The precipitation is assisted by mosses and other vegetable structures, which leave cavities in the calcareous sinter after they have decayed.

Sintering of catalysts

Sintering is an important cause of loss of catalyst activity, especially on supported metal catalysts. It decreases the surface area of the catalyst and changes the surface structure. For a porous catalytic surface, the pores may collapse due to sintering, resulting in loss of surface area. Sintering is in general an irreversible process.

Small catalyst particles (which have the highest relative surface area) and high reaction temperatures are in general both factors that increase the reactivity of a catalyst. However, these are also the circumstances under which sintering occurs. Specific materials may also increase the rate of sintering. On the other hand, alloying catalysts with other materials can reduce sintering; rare-earth metals in particular have been shown to reduce sintering of metal catalysts when alloyed. For many supported metal catalysts, sintering starts to become a significant effect at temperatures over 500 °C (932 °F). Catalysts that operate at higher temperatures, such as automotive catalysts, use structural improvements to reduce or prevent sintering. These improvements are in general in the form of a support made from an inert and thermally stable material such as silica, carbon or alumina.

See also:
- Abnormal grain growth
- Capacitor discharge sintering
- Ceramic engineering
- Direct metal laser sintering
- Energetically modified cement
- High-temperature superconductors
- Metal clay
- Room-temperature densification method
- Selective laser sintering, a rapid prototyping technology that includes direct metal laser sintering (DMLS)
- Spark plasma sintering
- W. David Kingery – a pioneer of sintering methods
- Yttria-stabilized zirconia

For the geological aspect:
- "Sinter, v." Oxford English Dictionary, Second Edition on CD-ROM (v. 4.0). Oxford University Press, 2009.
- "Sinter". The Free Dictionary. Accessed May 1, 2014.
- Kingery, W. David; Bowen, H. K.; Uhlmann, Donald R. (April 1976). "Introduction to Ceramics" (2nd ed.). John Wiley & Sons, Academic Press. ISBN 0-471-47860-1.
- "Porex Custom Plastics: Porous Plastics & Porous Polymers". www.porex.com. Retrieved 2017-03-23.
- Uhl, A. R.; et al. (2014). "Liquid-selenium-enhanced grain growth of nanoparticle precursor layers for CuInSe2 solar cell absorbers". Prog. Photovoltaics Res. Appl. 23 (9): 1110–1119. doi:10.1002/pip.2529.
- Orrù, Roberto; Licheri, Roberta; Locci, Antonio Mario; Cincotti, Alberto; Cao, Giacomo (2009). "Consolidation/synthesis of materials by electric current activated/assisted sintering". Materials Science and Engineering: R: Reports. 63 (4–6): 127–287. doi:10.1016/j.mser.2008.09.003.
- Grasso, S; Sakka, Y; Maizza, G (2009). "Electric current activated/assisted sintering (ECAS): a review of patents 1906–2008". Sci. Technol. Adv. Mater. 10 (5): 053001. doi:10.1088/1468-6996/10/5/053001. PMC 5090538. PMID 27877308.
- Tuan, W. H.; Guo, J. K. (2004). "Multiphased ceramic materials: processing and potential". Springer. ISBN 3-540-40516-X.
- Hulbert, D. M.; et al. (2008). "The Absence of Plasma in 'Spark Plasma Sintering'". Journal of Applied Physics. 104: 3305. doi:10.1063/1.2963701.
- Anselmi-Tamburini, U.; et al. (2012). In Castro, R.; van Benthem, K. (eds.). Sintering: Nanodensification and Field Assisted Processes. Springer Verlag.
- Palmer, R. E.; Wilde, G. (December 22, 2008). "Mechanical Properties of Nanocomposite Materials". EBL Database: Elsevier Ltd. ISBN 978-0-08-044965-4.
- Sairam, K.; Sonber, J. K.; Murthy, T. S. R. Ch.; Sahu, A. K.; Bedse, R. D.; Chakravartty, J. K. (2016). "Pressureless sintering of chromium diboride using spark plasma sintering facility". International Journal of Refractory Metals and Hard Materials. 58: 165–171. doi:10.1016/j.ijrmhm.2016.05.002.
- Fais, A. "Discharge sintering of hard metal cutting tools". International Powder Metallurgy Congress and Exhibition, Euro PM 2013.
- Balagna, Cristina; Fais, Alessandro; Brunelli, Katya; Peruzzo, Luca; Horynová, Miroslava; Čelko, Ladislav; Spriano, Silvia (2016). "Electro-sinter-forged Ni–Ti alloy". Intermetallics. 68: 31. doi:10.1016/j.intermet.2015.08.016.
- Maca, Karel (2009). "Microstructure evolution during pressureless sintering of bulk oxide ceramics". Processing and Application of Ceramics. 3 (1–2): 13–17. doi:10.2298/pac0902013m.
- Maca, Karel; Simonikova, Sarka (2005). "Effect of sintering schedule on grain size of oxide ceramics". Journal of Materials Science. 40 (21): 5581–5589. doi:10.1007/s10853-005-1332-1.
- Oghbaei, Morteza; Mirzaee, Omid (2010). "Microwave versus conventional sintering: A review of fundamentals, advantages and applications". Journal of Alloys and Compounds. 494 (1–2): 175–189. doi:10.1016/j.jallcom.2010.01.068.
- Babaie, Elham; Ren, Yufu; Bhaduri, Sarit B. (23 March 2016). "Microwave sintering of fine grained MgP and Mg substitutes with amorphous tricalcium phosphate: Structural, and mechanical characterization". Journal of Materials Research. 31 (8): 995–1003. doi:10.1557/jmr.2016.84.
- Smallman, R. E.; Bishop, Ray J. (1999). Modern Physical Metallurgy and Materials Engineering: Science, Process, Applications. Oxford: Butterworth-Heinemann. ISBN 978-0-7506-4564-5.
- Mittemeijer, Eric J. (2010). Fundamentals of Materials Science: The Microstructure–Property Relationship Using Metals as Model Systems. Springer. pp. 463–496. ISBN 978-3-642-10499-2.
- Kang, Suk-Joong L. (2005). Sintering: Densification, Grain Growth, and Microstructure. Elsevier. pp. 9–18. ISBN 978-0-7506-6385-4.
- Cahn, Robert W.; Haasen, Peter (1996). Physical Metallurgy (4th ed.). pp. 2399–2500. ISBN 978-0-444-89875-3.
- Carter, C. Barry; Norton, M. Grant (2007). Ceramic Materials: Science and Engineering. Springer Science+Business Media. pp. 427–443. ISBN 978-0-387-46270-7.
- Smith, Cyril S. (February 1948). "Grains, Phases, and Interfaces: An Interpretation of Microstructure".
- "Sinter" in thefreedictionary.com.
- "Sinter" in Encyclopædia Britannica.
- Kuczynski, G. (6 December 2012). Sintering and Catalysis. Springer Science & Business Media. ISBN 978-1-4684-0934-5.
- Bartholomew, Calvin H. (2001). "Mechanisms of catalyst deactivation". Applied Catalysis A: General. 212: 17. doi:10.1016/S0926-860X(00)00843-7.
- Harris, P. (1986). "The sintering of platinum particles in an alumina-supported catalyst: Further transmission electron microscopy studies". Journal of Catalysis. 97 (2): 527–542. doi:10.1016/0021-9517(86)90024-2.
- Figueiredo, J. L. (2012). Progress in Catalyst Deactivation: Proceedings of the NATO Advanced Study Institute on Catalyst Deactivation, Algarve, Portugal, May 18–29, 1981. Springer Science & Business Media. p. 11. ISBN 978-94-009-7597-2.
- Chorkendorff, I.; Niemantsverdriet, J. W. (6 March 2006). Concepts of Modern Catalysis and Kinetics. John Wiley & Sons. ISBN 978-3-527-60564-4.
- Chiang, Yet-Ming; Birnie, Dunbar P.; Kingery, W. David (May 1996). "Physical Ceramics: Principles for Ceramic Science and Engineering". John Wiley & Sons. ISBN 0-471-59873-9.
- Green, D. J.; Hannink, R.; Swain, M. V. (1989). Transformation Toughening of Ceramics. Boca Raton: CRC Press. ISBN 0-8493-6594-5.
- German, R. M. (1996). Sintering Theory and Practice. John Wiley & Sons. ISBN 0-471-05786-X.
- Kang, Suk-Joong L. (2005). "Sintering" (1st ed.). Oxford: Elsevier, Butterworth Heinemann. ISBN 0-7506-6385-5.
Venn diagram

A Venn diagram (also known as a set diagram or logic diagram) is a diagram that shows all possible logical relations between a finite collection of different sets. They are thus a special case of Euler diagrams, which do not necessarily show all relations. Venn diagrams were conceived around 1880 by John Venn. They are used to teach elementary set theory, as well as to illustrate simple set relationships in probability, logic, statistics, linguistics and computer science.

This example involves two sets, A and B, represented here as coloured circles. The orange circle, set A, represents all living creatures that are two-legged. The blue circle, set B, represents the living creatures that can fly. Each separate type of creature can be imagined as a point somewhere in the diagram. Living creatures that both can fly and have two legs (for example, parrots) are then in both sets, so they correspond to points in the area where the blue and orange circles overlap. That area contains all such, and only such, living creatures.

Humans and penguins are bipedal, and so are in the orange circle, but since they cannot fly they appear in the left part of the orange circle, where it does not overlap with the blue circle. Mosquitoes have six legs, and fly, so the point for mosquitoes is in the part of the blue circle that does not overlap with the orange one. Creatures that are not two-legged and cannot fly (for example, whales and spiders) would all be represented by points outside both circles.

The combined area of sets A and B is called the union of A and B, denoted by A ∪ B. The union in this case contains all living creatures that are either two-legged or that can fly (or both). The area in both A and B, where the two sets overlap, is called the intersection of A and B, denoted by A ∩ B. For example, the intersection of the two sets is not empty, because there are points that represent creatures that are in both the orange and blue circles.

Venn diagrams were introduced in 1880 by John Venn in a paper entitled "On the Diagrammatic and Mechanical Representation of Propositions and Reasonings" in the Philosophical Magazine and Journal of Science, about the different ways to represent propositions by diagrams. The use of these types of diagrams in formal logic, according to F. Ruskey and M. Weston, is "not an easy history to trace, but it is certain that the diagrams that are popularly associated with Venn, in fact, originated much earlier. They are rightly associated with Venn, however, because he comprehensively surveyed and formalized their usage, and was the first to generalize them".

Venn himself did not use the term "Venn diagram" and referred to his invention as "Eulerian Circles". For example, in the opening sentence of his 1880 article Venn writes, "Schemes of diagrammatic representation have been so familiarly introduced into logical treatises during the last century or so, that many readers, even those who have made no professional study of logic, may be supposed to be acquainted with the general nature and object of such devices. Of these schemes one only, viz. that commonly called 'Eulerian circles,' has met with any general acceptance..." The first to use the term "Venn diagram" was Clarence Irving Lewis in 1918, in his book A Survey of Symbolic Logic. Venn diagrams are very similar to Euler diagrams, which were invented by Leonhard Euler in the 18th century.[note 1]
M. E. Baron has noted that Leibniz (1646–1716) produced similar diagrams before Euler in the 17th century, though much of this work was unpublished. She also observes even earlier Euler-like diagrams by Ramon Llull in the 13th century.

In the 20th century, Venn diagrams were further developed. D. W. Henderson showed in 1963 that the existence of an n-Venn diagram with n-fold rotational symmetry implied that n was a prime number. He also showed that such symmetric Venn diagrams exist when n is 5 or 7. In 2002 Peter Hamburger found symmetric Venn diagrams for n = 11, and in 2003 Griggs, Killian, and Savage showed that symmetric Venn diagrams exist for all other primes. Thus rotationally symmetric Venn diagrams exist if and only if n is a prime number.

Venn diagrams and Euler diagrams were incorporated as part of instruction in set theory during the new math movement in the 1960s. Since then, they have also been adopted in the curriculum of other fields, such as reading.

A Venn diagram is constructed with a collection of simple closed curves drawn in a plane. According to Lewis, the "principle of these diagrams is that classes [or sets] be represented by regions in such relation to one another that all the possible logical relations of these classes can be indicated in the same diagram. That is, the diagram initially leaves room for any possible relation of the classes, and the actual or given relation, can then be specified by indicating that some particular region is null or is not-null" (p. 157).

Venn diagrams normally comprise overlapping circles. The interior of the circle symbolically represents the elements of the set, while the exterior represents elements that are not members of the set. For instance, in a two-set Venn diagram, one circle may represent the group of all wooden objects, while another circle may represent the set of all tables. The overlapping area, or intersection, would then represent the set of all wooden tables. Shapes other than circles can be employed, as shown below by Venn's own higher set diagrams. Venn diagrams do not generally contain information on the relative or absolute sizes (cardinality) of sets; i.e. they are schematic diagrams.

Venn diagrams are similar to Euler diagrams. However, a Venn diagram for n component sets must contain all 2^n hypothetically possible zones, corresponding to every combination of inclusion or exclusion in each of the component sets. Euler diagrams contain only the actually possible zones in a given context. In Venn diagrams, a shaded zone may represent an empty zone, whereas in an Euler diagram the corresponding zone is missing from the diagram. For example, if one set represents dairy products and another cheeses, the Venn diagram contains a zone for cheeses that are not dairy products. Assuming that in the context cheese means some type of dairy product, the Euler diagram has the cheese zone entirely contained within the dairy-product zone; there is no zone for (non-existent) non-dairy cheese. This means that as the number of contours increases, Euler diagrams are typically less visually complex than the equivalent Venn diagram, particularly if the number of non-empty intersections is small. The difference between Euler and Venn diagrams can be seen by drawing both diagrams for the same three example sets (figures omitted).
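The 2^n-zone requirement is easy to make concrete in code. The short Python sketch below (illustrative; the two sets are hypothetical stand-ins for the dairy/cheese example above) enumerates every inclusion/exclusion combination and shows which zones are empty, i.e. shaded in a Venn diagram but simply omitted from an Euler diagram:

    from itertools import product

    # Hypothetical example sets echoing the dairy/cheese discussion above.
    sets = {"dairy":  {"milk", "butter", "cheddar", "brie"},
            "cheese": {"cheddar", "brie"}}

    names = list(sets)
    universe = set().union(*sets.values())

    # A Venn diagram for n sets has all 2**n zones, one per in/out combination.
    for membership in product([True, False], repeat=len(names)):
        zone = set(universe)
        for name, inside in zip(names, membership):
            zone &= sets[name] if inside else universe - sets[name]
        label = ", ".join(("in " if m else "not in ") + n
                          for n, m in zip(names, membership))
        print(f"{label}: {sorted(zone) if zone else 'empty (omitted by Euler)'}")

Running it prints the four zones of the two-set diagram; the "not in dairy, in cheese" zone comes out empty, which is exactly the zone an Euler diagram would drop.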
Extensions to higher numbers of sets

Venn diagrams typically represent two or three sets, but there are forms that allow for higher numbers. Shown below, four intersecting spheres form the highest order Venn diagram that has the symmetry of a simplex and can be visually represented. The 16 intersections correspond to the vertices of a tesseract (or the cells of a 16-cell, respectively). For higher numbers of sets, some loss of symmetry in the diagrams is unavoidable. Venn was keen to find "symmetrical figures...elegant in themselves," that represented higher numbers of sets, and he devised a four-set diagram using ellipses (see below). He also gave a construction for Venn diagrams for any number of sets, where each successive curve that delimits a set interleaves with previous curves, starting with the three-circle diagram.

Non-example: this Euler diagram is not a Venn diagram for four sets, as it has only 13 regions (excluding the outside); there is no region where only the yellow and blue, or only the red and green, circles meet.

Edwards' Venn diagrams

A. W. F. Edwards constructed a series of Venn diagrams for higher numbers of sets by segmenting the surface of a sphere. For example, three sets can be easily represented by taking three hemispheres of the sphere at right angles (x = 0, y = 0 and z = 0). A fourth set can be added to the representation by taking a curve similar to the seam on a tennis ball, which winds up and down around the equator, and so on. The resulting sets can then be projected back to a plane to give cogwheel diagrams with increasing numbers of teeth, as shown here. These diagrams were devised while designing a stained-glass window in memory of Venn.

Edwards' Venn diagrams are topologically equivalent to diagrams devised by Branko Grünbaum, which were based around intersecting polygons with increasing numbers of sides. They are also 2-dimensional representations of hypercubes. Charles Lutwidge Dodgson devised a five-set diagram. Venn diagrams correspond to truth tables for the propositions x ∈ A, x ∈ B, etc., in the sense that each region of the Venn diagram corresponds to one row of the truth table. Another way of representing sets is with R-diagrams.

See also:
- Logical connectives
- Spherical octahedron – a stereographic projection of a regular octahedron makes a 3-set Venn diagram, as three orthogonal great circles, each dividing space into two halves.

Notes:
- [note 1] In Euler's Lettres à une princesse d'Allemagne sur divers sujets de physique et de philosophie [Letters to a German Princess on various physical and philosophical subjects] (Saint Petersburg, Russia: l'Académie Impériale des Sciences, 1768), volume 2, pages 95–126. In Venn's article, however, he suggests that the diagrammatic idea predates Euler, and is attributable to Christian Weise or Johann Christian Lange (in Lange's book Nucleus Logicae Weisianae (1712)).

References:
- Venn, J. (July 1880). "On the diagrammatic and mechanical representation of propositions and reasonings". Philosophical Magazine and Journal of Science. Series 5. 9 (59): 1–18.
- Venn, John (1880). "On the employment of geometrical diagrams for the sensible representations of logical propositions". Proceedings of the Cambridge Philosophical Society. 4: 47–59.
- Sandifer, Ed (2003). "How Euler Did It" (PDF). The Mathematical Association of America: MAA Online. Retrieved 26 October 2009.
- Ruskey, F.; Weston, M. (June 2005). "A Survey of Venn Diagrams". The Electronic Journal of Combinatorics.
- Baron, M. E. (May 1969). "A Note on the Historical Development of Logic Diagrams". The Mathematical Gazette. 53 (384): 113–125. doi:10.2307/3614533. JSTOR 3614533.
- Henderson, D. W. (April 1963). "Venn diagrams for more than four classes". American Mathematical Monthly. 70 (4): 424–426. doi:10.2307/2311865. JSTOR 2311865.
- Ruskey, Frank; Savage, Carla D.; Wagon, Stan (December 2006). "The Search for Simple Symmetric Venn Diagrams" (PDF). Notices of the AMS. 53 (11): 1304–1311.
- "Strategies for Reading Comprehension: Venn Diagrams".
- Lewis, Clarence Irving (1918). A Survey of Symbolic Logic. Berkeley: University of California Press.
- "Euler Diagrams 2004: Brighton, UK: September 22–23". Reasoning with Diagrams project, University of Kent. 2004. Retrieved 13 August 2008.
- Venn, John (1881). Symbolic Logic. Macmillan. p. 108. Retrieved 9 April 2013.
- Edwards, A. W. F. (2004). Cogwheels of the Mind: The Story of Venn Diagrams. JHU Press. p. 65. ISBN 9780801874345.
- Grimaldi, Ralph P. (2004). Discrete and Combinatorial Mathematics. Boston: Addison-Wesley. p. 143. ISBN 0-201-72634-3.
- Johnson, D. L. (2001). "3.3 Laws". Elements of Logic via Numbers and Sets. Springer Undergraduate Mathematics Series. Berlin: Springer-Verlag. p. 62. ISBN 3-540-76123-3.
- Mahmoodian, E. S.; Rezaie, M.; Vatan, F. (1987). "Generalized Venn Diagrams".
- Stewart, Ian (2004). "Ch. 4: Cogwheels of the Mind". Another Fine Math You've Got Me Into. Dover Publications. pp. 51–64. ISBN 0-486-43181-9.
- Glassner, Andrew (2004). "Venn and Now". Morphs, Mallards, and Montages: Computer-Aided Imagination. Wellesley, MA: A K Peters. pp. 161–184. ISBN 978-1568812311.
- Mamakani, Khalegh; Ruskey, Frank (27 July 2012). "A New Rose: The First Simple Symmetric 11-Venn Diagram". arXiv:1207.6452. Bibcode:2012arXiv1207.6452M.
- Hazewinkel, Michiel, ed. (2001). "Venn diagram". Encyclopedia of Mathematics. Springer. ISBN 978-1-55608-010-4.
- Weisstein, Eric W. "Venn Diagram". MathWorld.
External links:
- Lewis Carroll's Logic Game – Venn vs. Euler at Cut-the-knot
- A Survey of Venn Diagrams
- Generating Venn Diagrams to explore Google Suggest results
- Seven-set interactive Venn diagram displaying color combinations
- Six-set Venn diagrams made from triangles
- PostScript for 9-set Venn diagrams and more
- Venn diagram in Excel
Sculpted from a special kind of molecule called a "bottle-brush molecule," the traps consist of tiny, organic tubes whose interior walls carry a negative charge. This feature enables the tubes to selectively encapsulate only positively charged particles.

[Image, University at Buffalo: an illustrated cross-section of a nanotube UB chemists created. The green structures are negatively charged carboxylic acid groups, which help trap positively charged particles.]

In addition, because UB scientists construct the tubes from scratch, they can create traps of different sizes that snare molecular prey of different sizes. The level of fine-tuning possible is remarkable: in the Journal of the American Chemical Society, the researchers report that they were able to craft nanotubes that captured particles 2.8 nanometers in diameter, while leaving particles just 1.5 nanometers larger untouched. These kinds of cages could be used, in the future, to expedite tedious tasks, such as segregating large quantum dots from small quantum dots, or separating proteins by size and charge. Images of the bottle-brush molecule are available here: http://www.buffalo.edu/news/13057.

"The shapes and sizes of molecules and nanomaterials dictate their utility for desired applications. Our molecular cages will allow one to separate particles and molecules with pre-determined dimensions, thus creating uniform building blocks for the fabrication of advanced materials," said Javid Rzayev, the UB assistant professor of chemistry who led the research. "Just like a contractor wants tile squares or bricks to be the same size so they fit well together, scientists are eager to produce nanometer-size particles with the same dimensions, which can go a long way toward creating uniform and well-behaved materials," Rzayev said.

To create the traps, Rzayev and his team first constructed a special kind of molecule called a bottle-brush molecule. These resemble a round hair brush, with molecular "bristles" protruding all the way around a molecular backbone. After stitching the bristles together, the researchers hollowed out the center of each bottle-brush molecule, leaving behind a structure shaped like a toilet-paper tube. The carving process employed simple but clever chemistry: when building their bottle-brush molecules, the scientists constructed the heart of each molecule from molecular structures that disintegrate upon coming into contact with water. Around this core, the scientists attached a layer of negatively charged carboxylic acid groups. To sculpt the molecule, the scientists then immersed it in water, in effect hollowing out the core. The resulting structure was the trap: a nanotube whose inner walls were negatively charged due to the presence of the newly exposed carboxylic acid groups.

To test the tubes' effectiveness as traps, Rzayev and colleagues designed a series of experiments involving a two-layered chemical cocktail. The cocktail's bottom layer consisted of a chloroform solution containing the nanotubes, while the top layer consisted of a water-based solution containing positively charged dyes. (As in a tequila sunrise, the thinner, water-based solution floats on top of the denser chloroform solution, with little mixing.) When the scientists shook the cocktail for five minutes, the nanotubes collided with and trapped the dyes, bringing the dyes into the chloroform solution. (The dyes, on their own, do not dissolve in chloroform.)
In similar experiments, Rzayev and his team were able to use the nanotubes to extract positively charged molecules called dendrimers from an aqueous solution. The nanotubes were crafted so that dendrimers with a diameter of 2.8 nanometers were trapped, while dendrimers that were 4.3 nanometers across were left in solution. To remove the captured dendrimers from the nanotubes, the researchers simply lowered the pH of the chloroform solution, which shuts down the negative charge inside the traps and allows the captured particles to be released from their cages.

The research on nanotubes is part of a larger suite of studies Rzayev is conducting on bottle-brush molecules using a National Science Foundation CAREER award. His other work includes the fabrication of bottle-brush-based nanomembranes that could be adapted for water filtration, and the assembly of layered, bottle-brush polymers that reflect visible light like the wings of a butterfly do.

The University at Buffalo is a premier research-intensive public university, a flagship institution in the State University of New York system and its largest and most comprehensive campus. UB's more than 28,000 students pursue their academic interests through more than 300 undergraduate, graduate and professional degree programs. Founded in 1846, the University at Buffalo is a member of the Association of American Universities.
Influenza A virus

Influenza A virus causes influenza in birds and some mammals, and is the only species of the genus Alphainfluenzavirus in the family Orthomyxoviridae. Strains of all subtypes of influenza A virus have been isolated from wild birds, although disease is uncommon. Some isolates of influenza A virus cause severe disease, both in domestic poultry and, rarely, in humans. Occasionally, viruses are transmitted from wild aquatic birds to domestic poultry, and this may cause an outbreak or give rise to human influenza pandemics.

Influenza A viruses are negative-sense, single-stranded, segmented RNA viruses. The several subtypes are labeled according to an H number (for the type of hemagglutinin) and an N number (for the type of neuraminidase). There are 18 known H antigens (H1 to H18) and 11 known N antigens (N1 to N11). H17N10 was isolated from fruit bats in 2012; H18N11 was discovered in a Peruvian bat in 2013.

A filtered and purified influenza A vaccine for humans has been developed, and many countries have stockpiled it to allow quick administration to the population in the event of an avian influenza pandemic. Avian influenza is sometimes called avian flu or, colloquially, bird flu. In 2011, researchers reported the discovery of an antibody effective against all types of the influenza A virus.

Variants and subtypes

- H = hemagglutinin, a protein that causes red blood cells to agglutinate.
- N = neuraminidase, an enzyme that cleaves the glycosidic bonds of the monosaccharide sialic acid (previously called neuraminic acid).

The hemagglutinin is central to the virus's recognizing and binding to target cells, and also to its then infecting the cell with its RNA. The neuraminidase, on the other hand, is critical for the subsequent release of the daughter virus particles created within the infected cell, so they can spread to other cells. Different influenza viruses encode different hemagglutinin and neuraminidase proteins. For example, the H5N1 virus designates an influenza A subtype that has a type 5 hemagglutinin (H) protein and a type 1 neuraminidase (N) protein. There are 18 known types of hemagglutinin and 11 known types of neuraminidase, so, in theory, 198 different combinations of these proteins are possible.

Some variants are identified and named according to the isolate they resemble, and thus are presumed to share lineage (for example, Fujian flu virus-like); according to their typical host (for example, human flu virus); according to their subtype (for example, H3N2); and according to their deadliness (for example, LP, low pathogenic). So a flu from a virus similar to the isolate A/Fujian/411/2002(H3N2) is called Fujian flu, human flu, and H3N2 flu. Variants are sometimes named according to the species (host) in which the strain is endemic or to which it is adapted; the main variants named using this convention are bird flu, human flu, swine flu, horse flu, and dog flu.

Variants have also sometimes been named according to their deadliness in poultry, especially chickens:
- Low pathogenic avian influenza (LPAI)
- Highly pathogenic avian influenza (HPAI), also called deadly flu or death flu

Most known strains are extinct strains. For example, the annual flu subtype H3N2 no longer contains the strain that caused the Hong Kong flu. The annual flu (also called "seasonal flu" or "human flu") in the US "results in approximately 36,000 deaths and more than 200,000 hospitalizations each year. In addition to this human toll, influenza is annually responsible for a total cost of over $10 billion in the U.S." Globally, the toll of influenza virus is estimated at 291,000–645,000 deaths annually, exceeding previous estimates.
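Because the subtype names are purely combinatorial, the full theoretical set is trivial to enumerate. A short, purely illustrative Python sketch (not part of the article):

    # 18 hemagglutinin types x 11 neuraminidase types = 198 possible labels.
    subtypes = [f"H{h}N{n}" for h in range(1, 19) for n in range(1, 12)]
    print(len(subtypes))   # 198
    print(subtypes[:3])    # ['H1N1', 'H1N2', 'H1N3']

Only a small fraction of these 198 combinations has actually been observed in circulating viruses, as the sections below make clear.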
Structure and genetics

Influenza type A viruses are very similar in structure to influenza viruses of types B, C, and D. The virus particle (also called the virion) is 80–120 nanometers in diameter, such that the smallest virions adopt an elliptical shape. The length of each particle varies considerably, because influenza is pleomorphic, and can be in excess of many tens of micrometers, producing filamentous virions. Confusion about the nature of influenza virus pleomorphy stems from the observation that lab-adapted strains typically lose the ability to form filaments, and these lab-adapted strains were the first to be visualized by electron microscopy. Despite these varied shapes, the virions of all influenza type A viruses are similar in composition. They are all made up of a viral envelope, containing two main types of proteins, wrapped around a central core.

The two large proteins found on the outside of viral particles are hemagglutinin (HA) and neuraminidase (NA). HA is a protein that mediates binding of the virion to target cells and entry of the viral genome into the target cell. NA is involved in release from the abundant non-productive attachment sites present in mucus, as well as the release of progeny virions from infected cells. These proteins are usually the targets for antiviral drugs. Furthermore, they are also the antigen proteins to which a host's antibodies can bind and trigger an immune response. Influenza type A viruses are categorized into subtypes based on the types of these two proteins on the surface of the viral envelope. There are 18 known subtypes of HA and 11 known subtypes of NA, but only H1, H2 and H3, and N1 and N2, are commonly found in humans.

The central core of a virion contains the viral genome and other viral proteins that package and protect the genetic material. Unlike the genomes of most organisms (including humans, animals, plants, and bacteria), which are made up of double-stranded DNA, many viral genomes are made up of a different, single-stranded nucleic acid called RNA. Unusually for a virus, though, the influenza type A virus genome is not a single piece of RNA; instead, it consists of segmented pieces of negative-sense RNA, each piece containing either one or two genes which code for a gene product (protein). The term negative-sense RNA simply means that the RNA genome cannot be translated into protein directly; it must first be transcribed to positive-sense RNA before it can be translated into protein products. The segmented nature of the genome allows for the exchange of entire genes between different viral strains.

The entire influenza A virus genome is 13,588 bases long and is contained on eight RNA segments that code for at least 10 but up to 14 proteins, depending on the strain. The relevance or presence of alternate gene products can vary:
- Segment 1 encodes an RNA polymerase subunit (PB2).
- Segment 2 encodes an RNA polymerase subunit (PB1) and the PB1-F2 protein, which induces cell death, by using different reading frames of the same RNA segment.
- Segment 3 encodes an RNA polymerase subunit (PA) and the PA-X protein, which has a role in host transcription shutoff.
- Segment 4 encodes HA (hemagglutinin). About 500 molecules of hemagglutinin are needed to make one virion. HA determines the extent and severity of a viral infection in a host organism.
- Segment 5 encodes NP, a nucleoprotein.
- Segment 6 encodes NA (neuraminidase). About 100 molecules of neuraminidase are needed to make one virion.
- Segment 7 encodes two matrix proteins (M1 and M2) by using different reading frames of the same RNA segment. About 3,000 matrix protein molecules are needed to make one virion.
- Segment 8 encodes two distinct non-structural proteins (NS1 and NEP) by using different reading frames of the same RNA segment.

The RNA segments of the viral genome have complementary base sequences at their terminal ends, allowing them to bond to each other by hydrogen bonds. Transcription of the viral negative-sense genome (vRNA) can only proceed after the PB2 protein binds to host capped RNAs, allowing the PA subunit to cleave several nucleotides after the cap. This host-derived cap and its accompanying nucleotides serve as the primer for viral transcription initiation. Transcription proceeds along the vRNA until a stretch of several uracil bases is reached, initiating a 'stuttering' whereby the nascent viral mRNA is poly-adenylated, producing a mature transcript for nuclear export and translation by host machinery. The RNA synthesis takes place in the cell nucleus, while the synthesis of proteins takes place in the cytoplasm. Once the viral proteins are assembled into virions, the assembled virions leave the nucleus and migrate towards the cell membrane. The host cell membrane has patches of viral transmembrane proteins (HA, NA, and M2) and an underlying layer of the M1 protein, which assist the assembled virions in budding through the membrane, releasing finished enveloped viruses into the extracellular fluid.

Influenza virus is able to undergo multiplicity reactivation after inactivation by UV radiation or by ionizing radiation. If any of the eight RNA strands that make up the genome contains damage that prevents replication or expression of an essential gene, the virus is not viable when it alone infects a cell (a single infection). However, when two or more damaged viruses infect the same cell (multiple infection), viable progeny viruses can be produced provided each of the eight genomic segments is present in at least one undamaged copy; that is, multiplicity reactivation can occur. Upon infection, influenza virus induces a host response involving increased production of reactive oxygen species, and this can damage the virus genome. If, under natural conditions, virus survival is ordinarily vulnerable to the challenge of oxidative damage, then multiplicity reactivation is likely selectively advantageous as a kind of genomic repair process. It has been suggested that multiplicity reactivation involving segmented RNA genomes may be similar to the earliest evolved form of sexual interaction in the RNA world that likely preceded the DNA world (see also the RNA world hypothesis).

Human influenza virus

"Human influenza virus" usually refers to those subtypes that spread widely among humans. H1N1, H1N2, and H3N2 are the only known influenza A virus subtypes currently circulating among humans. Genetic factors distinguishing "human flu viruses" from "avian influenza viruses" include:
- PB2 (RNA polymerase): amino acid (residue) position 627 in the PB2 protein encoded by the PB2 RNA gene. Until H5N1, all known avian influenza viruses had a glutamic acid (Glu) at position 627, while all human influenza viruses had a lysine (Lys).
- HA (hemagglutinin): avian influenza HA binds alpha 2–3 sialic acid receptors, while human influenza HA binds alpha 2–6 sialic acid receptors.
Swine influenza viruses have the ability to bind both types of sialic acid receptors. "About 52 key genetic changes distinguish avian influenza strains from those that spread easily among people, according to researchers in Taiwan, who analyzed the genes of more than 400 A type flu viruses." "How many mutations would make an avian virus capable of infecting humans efficiently, or how many mutations would render an influenza virus a pandemic strain, is difficult to predict. We have examined sequences from the 1918 strain, which is the only pandemic influenza virus that could be entirely derived from avian strains. Of the 52 species-associated positions, 16 have residues typical for human strains; the others remained as avian signatures. The result supports the hypothesis that the 1918 pandemic virus is more closely related to the avian influenza A virus than are other human influenza viruses."

Human flu symptoms usually include fever, cough, sore throat, muscle aches, conjunctivitis and, in severe cases, severe breathing problems and pneumonia that may be fatal. The severity of the infection depends in large part on the state of the infected person's immune system and on whether the victim has been exposed to the strain before (and is therefore partially immune). Recent follow-up studies on the impact of statins on influenza virus replication show that pre-treatment of cells with atorvastatin suppresses virus growth in culture. Highly pathogenic H5N1 avian influenza in a human is far worse, killing about 50% of the humans who catch it. In one case, a boy with H5N1 experienced diarrhea followed rapidly by a coma, without developing respiratory or flu-like symptoms.

The influenza A virus subtypes that have been confirmed in humans, ordered by the number of known human pandemic deaths, are:
- H1N1, which caused "Spanish flu" in 1918 and the 2009 swine flu outbreak
- H2N2, which caused "Asian flu" in the late 1950s
- H3N2, which caused "Hong Kong flu" in the late 1960s
- H5N1, considered a global influenza pandemic threat through its spread in the mid-2000s
- H7N9, responsible for an ongoing epidemic in China and considered to pose the greatest pandemic threat among the influenza A viruses
- H7N7, which has unusual zoonotic potential
- H1N2, currently endemic in humans and pigs
- H9N2, H7N2, H7N3, H5N2, and H10N7

H1N1 is currently pandemic in both human and pig populations. A variant of H1N1 was responsible for the Spanish flu pandemic that killed some 50 million to 100 million people worldwide over about a year in 1918 and 1919. Another variant was named a pandemic threat in the 2009 flu pandemic. Controversy arose in October 2005, after the H1N1 genome was published in the journal Science, because of fears that this information could be used for bioterrorism.

H1N2 is currently endemic in both human and pig populations. The new H1N2 strain appears to have resulted from the reassortment of the genes of the currently circulating influenza H1N1 and H3N2 subtypes. The hemagglutinin protein of the H1N2 virus is similar to that of the currently circulating H1N1 viruses, and the neuraminidase protein is similar to that of the current H3N2 viruses.

The Asian flu, a pandemic outbreak of H2N2 avian influenza, originated in China in 1957 and spread worldwide that same year, during which an influenza vaccine was developed; it lasted until 1958 and caused between one and four million deaths.

H3N2 is currently endemic in both human and pig populations.
It evolved from H2N2 by antigenic shift and caused the Hong Kong flu pandemic of 1968 and 1969, which killed up to 750,000 people. "An early-onset, severe form of influenza A H3N2 made headlines when it claimed the lives of several children in the United States in late 2003." The dominant strain of annual flu in January 2006 was H3N2. Measured resistance to the standard antiviral drugs amantadine and rimantadine in H3N2 increased from 1% in 1994 to 12% in 2003 to 91% in 2005. "[C]ontemporary human H3N2 influenza viruses are now endemic in pigs in southern China and can reassort with avian H5N1 viruses in this intermediate host."

H5N1 is the world's major influenza pandemic threat. "When he compared the 1918 virus with today's human flu viruses, Dr. Taubenberger noticed that it had alterations in just 25 to 30 of the virus's 4,400 amino acids. Those few changes turned a bird virus into a killer that could spread from person to person."

Japan's Health Ministry said in January 2006 that poultry farm workers in Ibaraki prefecture may have been exposed to H5N2 in 2005; the H5N2 antibody titers of paired sera of 13 subjects increased fourfold or more. A highly pathogenic strain of H5N9 caused a minor flu outbreak in turkeys in Ontario and Manitoba, Canada, in 1966.

One person in New York in 2003 and one person in Virginia in 2002 were found to have serologic evidence of infection with H7N2; both fully recovered. In North America, the presence of avian influenza strain H7N3 was confirmed at several poultry farms in British Columbia in February 2004. As of April 2004, 18 farms had been quarantined to halt the spread of the virus. Two human cases of avian influenza were confirmed in that region; "symptoms included conjunctivitis and mild influenza-like illness." Both fully recovered.

H7N7 has unusual zoonotic potential. In 2003 in the Netherlands, 89 people were confirmed to have H7N7 influenza virus infection following an outbreak in poultry on several farms; one death was recorded. On 2 April 2013, the Centre for Health Protection (CHP) of the Department of Health of Hong Kong confirmed four more H7N9 cases in Jiangsu province, in addition to the three cases initially reported on 31 March 2013; this virus also has the greatest potential for an influenza pandemic among all of the influenza A subtypes.

Low pathogenic avian influenza A (H9N2) infection was confirmed in 1999, in China and Hong Kong, in two children, and in 2003 in Hong Kong in one child; all three fully recovered. In 2004 in Egypt, H10N7 was reported for the first time in humans; it caused illness in two infants, one of whose fathers was a poultry merchant.

"All influenza A pandemics since [the Spanish flu pandemic], and indeed almost all cases of influenza A worldwide (excepting human infections from avian viruses such as H5N1 and H7N7), have been caused by descendants of the 1918 virus, including "drifted" H1N1 viruses and reassorted H2N2 and H3N2 viruses. The latter are composed of key genes from the 1918 virus, updated by subsequently incorporated avian influenza genes that code for novel surface proteins, making the 1918 virus indeed the "mother" of all pandemics."

Researchers from the National Institutes of Health used data from the Influenza Genome Sequencing Project and concluded that during the ten-year period examined, the hemagglutinin gene in H3N2 showed, most of the time, no significant excess of mutations in the antigenic regions, while an increasing variety of strains accumulated.
This resulted in one of the variants eventually achieving higher fitness, becoming dominant and, in a brief interval of rapid evolution, sweeping through the human population and eliminating most other variants.

In the short-term evolution of influenza A virus, a 2006 study found that stochastic, or random, processes are key factors. Influenza A virus HA antigenic evolution appears to be characterized more by punctuated, sporadic jumps than by a constant rate of antigenic change. Using phylogenetic analysis of 413 complete genomes of human influenza A viruses collected throughout the state of New York, the authors of Nelson et al. 2006 were able to show that genetic diversity, and not antigenic drift, shaped the short-term evolution of influenza A via random migration and reassortment. The evolution of these viruses is dominated more by the random importation of genetically different viral strains from other geographic locations and less by natural selection. Within a given season, adaptive evolution is infrequent and has an overall weak effect, as evidenced by the data gathered from the 413 genomes. Phylogenetic analysis revealed that the different strains derived from newly imported genetic material rather than from isolates that had been circulating in New York in previous seasons. Therefore, gene flow in and out of this population, and not natural selection, was more important in the short term.

Avian influenza

(See H5N1 for the current epizootic (an epidemic in nonhumans) and panzootic (a disease affecting animals of many species, especially over a wide area) of H5N1 influenza.)

Fowl act as natural asymptomatic carriers of influenza A viruses. Prior to the current H5N1 epizootic, strains of influenza A virus had been demonstrated to be transmitted from wild fowl only to birds, pigs, horses, seals, whales and humans; and only between humans and pigs and between humans and domestic fowl; and not along other pathways such as domestic fowl to horse.

Wild aquatic birds are the natural hosts for a large variety of influenza A viruses. Occasionally, viruses are transmitted from these birds to other species, and they may then cause devastating outbreaks in domestic poultry or give rise to human influenza pandemics. H5N1 has been shown to be transmitted to tigers, leopards, and domestic cats that were fed uncooked domestic fowl (chickens) carrying the virus. H3N8 viruses from horses have crossed over and caused outbreaks in dogs. Laboratory mice have been infected successfully with a variety of avian flu genotypes.

Influenza A viruses spread in the air and in manure and survive longer in cold weather. They can also be transmitted by contaminated feed, water, equipment and clothing; however, there is no evidence that the virus can survive in well-cooked meat. Symptoms in animals vary, but virulent strains can cause death within a few days. "Highly pathogenic avian influenza virus is on every top ten list available for potential agricultural bioweapon agents."

Avian influenza viruses that the OIE and others test for, to control poultry disease, include: H5N1, H7N2, H1N7, H7N3, H13N6, H5N9, H11N6, H3N8, H9N2, H5N2, H4N8, H10N7, H2N2, H8N4, H14N5, H6N5, H12N5 and others.
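Subtype labels like those in the surveillance list above are compact enough that a parser for them fits in a few lines. A small illustrative Python sketch (the function name and the excerpted list are ours, not the article's):

    import re

    def parse_subtype(label):
        # Split a label such as 'H5N1' into its hemagglutinin and
        # neuraminidase type numbers, checking the known ranges
        # H1-H18 and N1-N11 mentioned earlier in the article.
        m = re.fullmatch(r"H(\d+)N(\d+)", label)
        if not m:
            raise ValueError(f"not a subtype label: {label!r}")
        h, n = int(m.group(1)), int(m.group(2))
        if not (1 <= h <= 18 and 1 <= n <= 11):
            raise ValueError(f"unknown antigen number in {label!r}")
        return h, n

    surveillance = ["H5N1", "H7N2", "H1N7", "H7N3", "H13N6", "H5N9"]
    print(sorted(parse_subtype(s) for s in surveillance))
    # [(1, 7), (5, 1), (5, 9), (7, 2), (7, 3), (13, 6)]

Sorting by the parsed (H, N) pairs, as here, groups strains by hemagglutinin type first, which mirrors how such lists are usually organized.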
Known outbreaks of highly pathogenic flu in poultry, 1959–2003:

| Year | Location | Affected poultry | Strain |
| 1983 | Pennsylvania (US)* | Chicken, turkey | H5N2 |
| 1997 | New South Wales (Australia) | Chicken | H7N4 |
| 1997 | Hong Kong (China)* | Chicken | H5N1 |
| 2002 | Hong Kong (China) | Chicken | H5N1 |

*Outbreaks with significant spread to numerous farms, resulting in great economic losses. Most other outbreaks involved little or no spread from the initially infected farms.

1979: "More than 400 harbor seals, most of them immature, died along the New England coast between December 1979 and October 1980 of acute pneumonia associated with influenza virus, A/Seal/Mass/1/80 (H7N7)."

1995: "[V]accinated birds can develop asymptomatic infections that allow virus to spread, mutate, and recombine (ProMED-mail, 2004j). Intensive surveillance is required to detect these "silent epidemics" in time to curtail them. In Mexico, for example, mass vaccination of chickens against epidemic H5N2 influenza in 1995 has had to continue in order to control a persistent and evolving virus (Lee et al., 2004)."

1997: "Influenza A viruses normally seen in one species sometimes can cross over and cause illness in another species. For example, until 1997, only H1N1 viruses circulated widely in the US pig population. However, in 1997, H3N2 viruses from humans were introduced into the pig population and caused widespread disease among pigs. Most recently, H3N8 viruses from horses have crossed over and caused outbreaks in dogs."

2000: "In California, poultry producers kept their knowledge of a recent H6N2 avian influenza outbreak to themselves due to their fear of public rejection of poultry products; meanwhile, the disease spread across the western United States and has since become endemic."

2003: In the Netherlands, H7N7 influenza virus infection broke out in poultry on several farms.

2004: In North America, the presence of avian influenza strain H7N3 was confirmed at several poultry farms in British Columbia in February 2004. As of April 2004, 18 farms had been quarantined to halt the spread of the virus.

2005: Tens of millions of birds died of H5N1 influenza and hundreds of millions of birds were culled to protect humans from H5N1. H5N1 is endemic in birds in southeast Asia and represents a long-term pandemic threat.

2006: H5N1 spread across the globe, killing hundreds of millions of birds and over 100 people, and causing a significant H5N1 impact from both actual deaths and predicted possible deaths.

Swine flu

Swine influenza (or "pig influenza") refers to a subset of Orthomyxoviridae that create influenza and are endemic in pigs. The species of Orthomyxoviridae that can cause flu in pigs are influenza A virus and influenza C virus, but not all genotypes of these two species infect pigs. The known subtypes of influenza A virus that create influenza and are endemic in pigs are H1N1, H1N2, H3N1 and H3N2.

Horse flu

Horse flu (or "equine influenza") refers to varieties of influenza A virus that affect horses. Horse flu viruses were isolated only in 1956. The two main types of virus are called equine-1 (H7N7), which commonly affects horse heart muscle, and equine-2 (H3N8), which is usually more severe.

Dog flu

Dog flu (or "canine influenza") refers to varieties of influenza A virus that affect dogs. The equine influenza virus H3N8 was found to infect and kill, through respiratory illness, greyhound race dogs at a Florida racetrack in January 2004. H3N8 is now endemic in birds, horses and dogs.

References:
- "Taxonomy". International Committee on Taxonomy of Viruses (ICTV). Retrieved 19 July 2018.
- "Avian influenza ("bird flu") – Fact sheet". WHO.
- Klenk H, Matrosovich M, Stech J (2008). "Avian Influenza: Molecular Mechanisms of Pathogenesis and Host Range". In Mettenleiter TC, Sobrino F (eds.). Animal Viruses: Molecular Biology. Caister Academic Press. ISBN 978-1-904455-22-6.
- Kawaoka Y, ed. (2006). Influenza Virology: Current Topics. Caister Academic Press. ISBN 978-1-904455-06-6.
- "Influenza Type A Viruses and Subtypes". Centers for Disease Control and Prevention. 2 April 2013. Retrieved 13 June 2013.
- Tong S, Zhu X, Li Y, Shi M, Zhang J, Bourgeois M, Yang H, Chen X, Recuenco S, Gomez J, Chen LM, Johnson A, Tao Y, Dreyfus C, Yu W, McBride R, Carney PJ, Gilbert AT, Chang J, Guo Z, Davis CT, Paulson JC, Stevens J, Rupprecht CE, Holmes EC, Wilson IA, Donis RO (October 2013). "New world bats harbor diverse influenza A viruses". PLoS Pathogens. 9 (10): e1003657. doi:10.1371/journal.ppat.1003657. PMC 3794996. PMID 24130481.
- "Unique new flu virus found in bats". NHS Choices. 1 March 2012. Retrieved 16 May 2012.
- Tong S, Li Y, Rivailler P, Conrardy C, Castillo DA, Chen LM, Recuenco S, Ellison JA, Davis CT, York IA, Turmelle AS, Moran D, Rogers S, Shi M, Tao Y, Weil MR, Tang K, Rowe LA, Sammons S, Xu X, Frace M, Lindblade KA, Cox NJ, Anderson LJ, Rupprecht CE, Donis RO (March 2012). "A distinct lineage of influenza A virus from bats". Proceedings of the National Academy of Sciences of the United States of America. 109 (11): 4269–74. Bibcode:2012PNAS..109.4269T. doi:10.1073/pnas.1116200109. PMC 3306675. PMID 22371588.
- Gallagher J (29 July 2011). "'Super antibody' fights off flu". BBC News. Retrieved 29 July 2011.
- whitehouse.gov, National Strategy for Pandemic Influenza – Introduction (archived 9 January 2009 at the Wayback Machine): "Although remarkable advances have been made in science and medicine during the past century, we are constantly reminded that we live in a universe of microbes – viruses, bacteria, protozoa and fungi that are forever changing and adapting themselves to the human host and the defenses that humans create. Influenza viruses are notable for their resilience and adaptability. While science has been able to develop highly effective vaccines and treatments for many infectious diseases that threaten public health, acquiring these tools is an ongoing challenge with the influenza virus. Changes in the genetic makeup of the virus require us to develop new vaccines on an annual basis and forecast which strains are likely to predominate. As a result, and despite annual vaccinations, the US faces a burden of influenza that results in approximately 36,000 deaths and more than 200,000 hospitalizations each year. In addition to this human toll, influenza is annually responsible for a total cost of over $10 billion in the US. A pandemic, or worldwide outbreak of a new influenza virus, could dwarf this impact by overwhelming our health and medical capabilities, potentially resulting in hundreds of thousands of deaths, millions of hospitalizations, and hundreds of billions of dollars in direct and indirect costs. This Strategy will guide our preparedness and response activities to mitigate that impact."
"Estimates of global seasonal influenza-associated respiratory mortality: a modelling study". Lancet. 391 (10127): 1285–1300. doi:10.1016/s0140-6736(17)33293-2. PMC 5935243. PMID 29248255. - Daum LT, Shaw MW, Klimov AI, Canas LC, Macias EA, Niemeyer D, Chambers JP, Renthal R, Shrestha SK, Acharya RP, Huzdar SP, Rimal N, Myint KS, Gould P (August 2005). "Influenza A (H3N2) outbreak, Nepal". Emerging Infectious Diseases. 11 (8): 1186–91. doi:10.3201/eid1108.050302. PMC 3320503. PMID 16102305. "The 2003–2004 influenza season was severe in terms of its impact on illness because of widespread circulation of antigenically distinct influenza A (H3N2) Fujian-like viruses. These viruses first appeared late during the 2002–2003 influenza season and continued to persist as the dominant circulating strain throughout the subsequent 2003–2004 influenza season, replacing the A/Panama/2007/99-like H3N2 viruses (1). Of the 172 H3N2 viruses genetically characterized by the Department of Defense in 2003–2004, only one isolate (from Thailand) belonged to the A/Panama-like lineage. In February 2003, the World Health Organization (WHO) changed the H3N2 component for the 2004–2005 influenza vaccine to afford protection against the widespread emergence of Fujian-like viruses (2). The annually updated trivalent vaccine consists of hemagglutinin (HA) surface glycoprotein components from influenza H3N2, H1N1, and B viruses." - Mahmoud 2005, p. 126 "H5N1 virus is now endemic in poultry in Asia (Table 2-1) and has gained an entrenched ecological niche from which to present a long-term pandemic threat to humans. At present, these viruses are poorly transmitted from poultry to humans, and there is no conclusive evidence of human-to-human transmission. However, continued, extensive exposure of the human population to H5N1 viruses increases the likelihood that the viruses will acquire the necessary characteristics for efficient human-to-human transmission through genetic mutation or reassortment with a prevailing human influenza A virus. Furthermore, contemporary human H3N2 influenza viruses are now endemic in pigs in southern China (Peiris et al., 2001) and can reassort with avian H5N1 viruses in this 'intermediate host.' Therefore, it is imperative that outbreaks of H5N1 disease in poultry in Asia are rapidly and sustainably controlled. The seasonality of the disease in poultry, together with the control measures already implemented, are likely to reduce temporarily the frequency of H5N1 influenza outbreaks and the probability of human infection." - Gallagher J (29 July 2011). "'Super antibody' fights off flu". BBC News – via www.bbc.co.uk. - "Scientists hail the prospect of a universal vaccine for flu". 29 July 2011. - Chan AL (28 July 2011). "Universal Flu Vaccine On The Horizon: Researchers Find 'Super Antibody'" – via Huff Post. - "Details - Public Health Image Library(PHIL)". phil.cdc.gov. Retrieved 24 April 2018. - Sugita Y, Noda T, Sagara H, Kawaoka Y (November 2011). "Ultracentrifugation deforms unfixed influenza A virions". The Journal of General Virology. 92 (Pt 11): 2485–93. doi:10.1099/vir.0.036715-0. PMC 3352361. PMID 21795472. - Nakatsu S, Murakami S, Shindo K, Horimoto T, Sagara H, Noda T, Kawaoka Y (March 2018). "Influenza C and D Viruses Package Eight Organized Ribonucleoprotein Complexes". Journal of Virology. 92 (6): e02084–17. doi:10.1128/jvi.02084-17. PMC 5827381. PMID 29321324. - Noda T (2011). "Native morphology of influenza virions". Frontiers in Microbiology. 2: 269. 
doi:10.3389/fmicb.2011.00269. PMC 3249889. PMID 22291683.
- Dadonaite B, Vijayakrishnan S, Fodor E, Bhella D, Hutchinson EC (August 2016). "Filamentous influenza viruses". The Journal of General Virology. 97 (8): 1755–64. doi:10.1099/jgv.0.000535. PMC 5935222. PMID 27365089.
- Seladi-Schulman J, Steel J, Lowen AC (December 2013). "Spherical influenza viruses have a fitness advantage in embryonated eggs, while filament-producing strains are selected in vivo". Journal of Virology. 87 (24): 13343–53. doi:10.1128/JVI.02004-13. PMC 3838284. PMID 24089563.
- Mosley VM, Wyckoff RW (March 1946). "Electron micrography of the virus of influenza". Nature. 157 (3983): 263. Bibcode:1946Natur.157..263M. doi:10.1038/157263a0. PMID 21016866.
- Bouvier NM, Palese P (September 2008). "The biology of influenza viruses". Vaccine. 26 Suppl 4: D49–53. doi:10.1016/j.vaccine.2008.07.039. PMC 3074182. PMID 19230160.
- Cohen M, Zhang XQ, Senaati HP, Chen HW, Varki NM, Schooley RT, Gagneux P (November 2013). "Influenza A penetrates host mucus by cleaving sialic acids with neuraminidase". Virology Journal. 10: 321. doi:10.1186/1743-422x-10-321. PMC 3842836. PMID 24261589.
- Suzuki Y (March 2005). "Sialobiology of influenza: molecular mechanism of host range variation of influenza viruses". Biological & Pharmaceutical Bulletin. 28 (3): 399–408. doi:10.1248/bpb.28.399. PMID 15744059.
- Wilson JC, von Itzstein M (July 2003). "Recent strategies in the search for new anti-influenza therapies". Current Drug Targets. 4 (5): 389–408. doi:10.2174/1389450033491019. PMID 12816348.
- Lynch JP, Walsh EE (April 2007). "Influenza: evolving strategies in treatment and prevention". Seminars in Respiratory and Critical Care Medicine. 28 (2): 144–58. doi:10.1055/s-2007-976487. PMID 17458769.
- Eisfeld AJ, Neumann G, Kawaoka Y (January 2015). "At the centre: influenza A virus ribonucleoproteins". Nature Reviews. Microbiology. 13 (1): 28–41. doi:10.1038/nrmicro3367. PMC 5619696. PMID 25417656.
- Khaperskyy DA, Schmaling S, Larkins-Ford J, McCormick C, Gaglia MM (February 2016). "Selective Degradation of Host RNA Polymerase II Transcripts by Influenza A Virus PA-X Host Shutoff Protein". PLoS Pathogens. 12 (2): e1005427. doi:10.1371/journal.ppat.1005427. PMC 4744033. PMID 26849127.
- Te Velthuis AJ, Fodor E (August 2016). "Influenza virus RNA polymerase: insights into the mechanisms of viral RNA synthesis". Nature Reviews. Microbiology. 14 (8): 479–93. doi:10.1038/nrmicro.2016.87. PMC 4966622. PMID 27396566.
- Smith AE, Helenius A (April 2004). "How viruses enter animal cells". Science. 304 (5668): 237–42. Bibcode:2004Sci...304..237S. doi:10.1126/science.1094823. PMID 15073366.
- Barry RD (August 1961). "The multiplication of influenza virus. II. Multiplicity reactivation of ultraviolet irradiated virus". Virology. 14 (4): 398–405. doi:10.1016/0042-6822(61)90330-0. PMID 13687359.
- Henle W, Liu OC (October 1951). "Studies on host-virus interactions in the chick embryo-influenza virus system. VI. Evidence for multiplicity reactivation of inactivated virus". The Journal of Experimental Medicine. 94 (4): 305–22. doi:10.1084/jem.94.4.305. PMC 2136114. PMID 14888814.
- Gilker JC, Pavilanis V, Ghys R (June 1967). "Multiplicity reactivation in gamma irradiated influenza viruses". Nature. 214 (5094): 1235–7. Bibcode:1967Natur.214.1235G. doi:10.1038/2141235a0. PMID 6066111.
- Peterhans E (May 1997). "Oxidants and antioxidants in viral diseases: disease mechanisms and metabolic regulation". The Journal of Nutrition. 127 (5 Suppl): 962S–965S.
doi:10.1093/jn/127.5.962S. PMID 9164274.
- Bernstein H, Byerly HC, Hopf FA, Michod RE (October 1984). "Origin of sex". Journal of Theoretical Biology. 110 (3): 323–51. doi:10.1016/S0022-5193(84)80178-2. PMID 6209512.
- CDC: Key Facts About Avian Influenza (Bird Flu) and Avian Influenza A (H5N1) Virus.
- Bloomberg News article "Scientists Move Closer to Understanding Flu Virus Evolution", published 28 August 2006.
- Chen GW, Chang SC, Mok CK, Lo YL, Kung YN, Huang JH, Shih YH, Wang JY, Chiang C, Chen CJ, Shih SR (September 2006). "Genomic signatures of human versus avian influenza A viruses". Emerging Infectious Diseases. 12 (9): 1353–60. doi:10.3201/eid1209.060276. PMC 3294750. PMID 17073083.
- Episcopio D, Aminov S, Benjamin S, Germain G, Datan E, Landazuri J, Lockshin RA, Zakeri Z (April 2019). "Atorvastatin restricts the ability of influenza virus to generate lipid droplets and severely suppresses the replication of the virus". The FASEB Journal. 33: fj.201900428RR. doi:10.1096/fj.201900428RR. PMID 31125254.
- de Jong MD, Bach VC, Phan TQ, Vo MH, Tran TT, Nguyen BH, Beld M, Le TP, Truong HK, Nguyen VV, Tran TH, Do QH, Farrar J (February 2005). "Fatal avian influenza A (H5N1) in a child presenting with diarrhea followed by coma". The New England Journal of Medicine. 352 (7): 686–91. doi:10.1056/NEJMoa044307. PMID 15716562.
- Mahmoud 2005, p. 7.
- Detailed chart of its evolution in the PDF "Ecology and Evolution of the Flu". Archived 9 May 2009 at the Wayback Machine.
- Mahmoud 2005, p. 115: "There is particular pressure to recognize and heed the lessons of past influenza pandemics in the shadow of the worrisome 2003–2004 flu season. An early-onset, severe form of influenza A H3N2 made headlines when it claimed the lives of several children in the United States in late 2003. As a result, stronger than usual demand for annual flu inactivated vaccine outstripped the vaccine supply, of which 10 to 20 percent typically goes unused. Because statistics on pediatric flu deaths had not been collected previously, it is unknown if the 2003–2004 season witnessed a significant change in mortality patterns."
- New York Times: "This Season's Flu Virus Is Resistant to 2 Standard Drugs", by Altman, published 15 January 2006. Archived 26 October 2006 at the Wayback Machine.
- New York Times, published 8 November 2005: "Hazard in Hunt for New Flu: Looking for Bugs in All the Wrong Places".
- CBS News article "Dozens In Japan May Have Mild Bird Flu", January 2006.
- Ogata T, Yamazaki Y, Okabe N, Nakamura Y, Tashiro M, Nagata N, Itamura S, Yasui Y, Nakashima K, Doi M, Izumi Y, Fujieda T, Yamato S, Kawada Y (July 2008). "Human H5N2 avian influenza infection in Japan and the factors associated with high H5N2-neutralizing antibody titer" (PDF). Journal of Epidemiology. 18 (4): 160–6. doi:10.2188/jea.JE2007446. PMC 4771585. PMID 18603824.
- CDC: Avian Influenza Infection in Humans.
- Tweed SA, Skowronski DM, David ST, Larder A, Petric M, Lees W, Li Y, Katz J, Krajden M, Tellier R, Halpert C, Hirst M, Astell C, Lawrence D, Mak A (December 2004). "Human illness from avian influenza H7N3, British Columbia". Emerging Infectious Diseases. 10 (12): 2196–9. doi:10.3201/eid1012.040961. PMC 3323407. PMID 15663860.
- Schnirring L (2 April 2013). "China reports 4 more H7N9 infections". CIDRAP News.
- "Avian Influenza A (H7N9) Virus | Avian Influenza (Flu)". www.cdc.gov. Retrieved 24 February 2017.
- niaid.nih.gov: Timeline of Human Flu Pandemics. Archived 26 December 2005 at the Wayback Machine.
- Taubenberger JK, Morens DM (January 2006). "1918 Influenza: the mother of all pandemics". Emerging Infectious Diseases. 12 (1): 15–22. doi:10.3201/eid1201.050979. PMC 3291398. PMID 16494711.
- Science Daily article "New Study Has Important Implications For Flu Surveillance", published 27 October 2006.
- Nelson MI, Simonsen L, Viboud C, Miller MA, Taylor J, George KS, Griesemer SB, Ghedin E, Sengamalay NA, Spiro DJ, Volkov I, Grenfell BT, Lipman DJ, Taubenberger JK, Holmes EC (December 2006). "Stochastic processes are key determinants of short-term evolution in influenza a virus". PLoS Pathogens. 2 (12): e125. doi:10.1371/journal.ppat.0020125. PMC 1665651. PMID 17140286.
- Smith DJ, Lapedes AS, de Jong JC, Bestebroer TM, Rimmelzwaan GF, Osterhaus AD, Fouchier RA (July 2004). "Mapping the antigenic and genetic evolution of influenza virus". Science. 305 (5682): 371–6. Bibcode:2004Sci...305..371S. doi:10.1126/science.1097211. PMID 15218094.
- Mahmoud 2005, p. 30.
- Mahmoud 2005, p. 82: "Interestingly, recombinant influenza viruses containing the 1918 HA and NA and up to three additional genes derived from the 1918 virus (the other genes being derived from the A/WSN/33 virus) were all highly virulent in mice (Tumpey et al., 2004). Furthermore, expression microarray analysis performed on whole lung tissue of mice infected with the 1918 HA/NA recombinant showed increased upregulation of genes involved in apoptosis, tissue injury, and oxidative damage (Kash et al., 2004). These findings were unusual because the viruses with the 1918 genes had not been adapted to mice. The completion of the sequence of the entire genome of the 1918 virus and the reconstruction and characterization of viruses with 1918 genes under appropriate biosafety conditions will shed more light on these findings and should allow a definitive examination of this explanation. Antigenic analysis of recombinant viruses possessing the 1918 HA and NA by hemagglutination inhibition tests using ferret and chicken antisera suggested a close relationship with the A/swine/Iowa/30 virus and H1N1 viruses isolated in the 1930s (Tumpey et al., 2004), further supporting data of Shope from the 1930s (Shope, 1936). Interestingly, when mice were immunized with different H1N1 virus strains, challenge studies using the 1918-like viruses revealed partial protection by this treatment, suggesting that current vaccination strategies are adequate against a 1918-like virus (Tumpey et al., 2004)."
- Mahmoud 2005, p. 285: "As of October 2001, the potential for use of infectious agents, such as anthrax, as weapons has been firmly established. It has been suggested that attacks on a nation's agriculture might be a preferred form of terrorism or economic disruption that would not have the attendant stigma of infecting and causing disease in humans. Highly pathogenic avian influenza virus is on every top ten list available for potential agricultural bioweapon agents, generally following foot and mouth disease virus and Newcastle disease virus at or near the top of the list. Rapid detection techniques for bioweapon agents are a critical need for the first-responder community, on a par with vaccine and antiviral development in preventing spread of disease."
- "Avian influenza A(H5N1) – update 31: Situation (poultry) in Asia: need for a long-term response, comparison with previous outbreaks". Epidemic and Pandemic Alert and Response (EPR). WHO. 2004.
(Source of the table "Known outbreaks of highly pathogenic flu in poultry 1959–2003" above.)
- Geraci JR, St Aubin DJ, Barker IK, Webster RG, Hinshaw VS, Bean WJ, Ruhnke HL, Prescott JH, Early G, Baker AS, Madoff S, Schooley RT (February 1982). "Mass mortality of harbor seals: pneumonia associated with influenza A virus". Science. 215 (4536): 1129–31. Bibcode:1982Sci...215.1129G. doi:10.1126/science.7063847. PMID 7063847. "More than 400 harbor seals, most of them immature, died along the New England coast between December 1979 and October 1980 of acute pneumonia associated with influenza virus, A/Seal/Mass/1/180 (H7N7). The virus has avian characteristics, replicates principally in mammals, and causes mild respiratory disease in experimentally infected seals. Concurrent infection with a previously undescribed mycoplasma or adverse environmental conditions may have triggered the epizootic. The similarities between this epizootic and other seal mortalities in the past suggest that these events may be linked by common biological and environmental factors."
- Mahmoud 2005, p. 15: "Unlike most other affected countries, Indonesia also instituted mass vaccination of healthy domestic birds against H5N1, followed by routine vaccination (China has a similar policy; other Asian countries are considering it [ProMED-mail, 2004j]) (Soebandrio, 2004). This is a risky strategy, because vaccinated birds can develop asymptomatic infections that allow virus to spread, mutate, and recombine (ProMED-mail, 2004j). Intensive surveillance is required to detect these "silent epidemics" in time to curtail them. In Mexico, for example, mass vaccination of chickens against epidemic H5N2 influenza in 1995 has had to continue in order to control a persistent and evolving virus (Lee et al., 2004)."
- CDC, Centers for Disease Control and Prevention: Transmission of Influenza A Viruses Between Animals and People.
- Mahmoud 2005, p. 27.
- BBC News: "Early bird flu warning for Dutch", 6 November 2005.

Official sources
- Avian influenza and Influenza Pandemics from the Centers for Disease Control and Prevention
- Avian influenza FAQ from the World Health Organization
- Avian influenza information from the Food and Agriculture Organization
- U.S. Government's avian influenza information website
- European Centre for Disease Prevention and Control (ECDC), Stockholm, Sweden

General information
- "The Bird Flu and You": full-color poster provided by the Center for Technology and National Security Policy at the National Defense University, in collaboration with the National Security Health Policy Center
- Influenza Report 2006: online book with research-level quality information. Highly recommended.
- Special issue on avian flu from Nature
- Nature Reports: Homepage: Avian Flu
- Beigel JH, Farrar J, Han AM, Hayden FG, Hyer R, de Jong MD, Lochindarat S, Nguyen TK, Nguyen TH, Tran TH, Nicoll A, Touch S, Yuen KY (September 2005). "Avian influenza A (H5N1) infection in humans". The New England Journal of Medicine. 353 (13): 1374–85. CiteSeerX 10.1.1.730.7890. doi:10.1056/NEJMra052211. PMID 16192482.
- Pandemic Influenza: Domestic Preparedness Efforts. Congressional Research Service report on pandemic preparedness.
- A guide to bird flu and its symptoms from BBC Health
- A variety of avian flu images and pictures
- Mahmoud 2005, p. 285: "Highly pathogenic avian influenza virus is on every top ten list available for potential agricultural bioweapon agents".
- Mahmoud AA, Institute of Medicine, Knobler S, Mack A (2005).
The Threat of Pandemic Influenza: Are We Ready?: Workshop Summary. Washington, D.C.: National Academies Press. ISBN 978-0-309-09504-4.
- "The Threat of Bird Flu": HealthPolitics.com
- Is a Global Flu Pandemic Imminent? from Infection Control Today
- Bird Flu is a Real Pandemic Threat to Humans by Leonard Crane, author of Ninth Day of Creation
- Links to bird flu pictures (Hardin MD/Univ of Iowa)
- Kawaoka Y (2006). Influenza Virology: Current Topics. Caister Academic Press. ISBN 978-1-904455-06-6.
- Sobrino F, Mettenleiter T (2008). Animal Viruses: Molecular Biology. Caister Academic Press. ISBN 978-1-904455-22-6.
What begins as just an attempt to identify flat shapes ends up helping with vocabulary and spelling too. Cut and paste to match the 2D shapes to the correct names. Solutions for the assessment "Area of 2D shapes": 1) Area = 49 cm²; 2) Area = 351 cm². Students will cut and paste various pictures onto a chart with two columns: flat 2D shapes vs. solid 3D shapes. Welcome to the 2nd Grade Math Salamanders 2D shapes worksheets page. 2D Shape Match. Cut and glue is one activity that has stood the test of time and continues to be one of the children's favorites. Find the Triangles. PDF | 8 pages | Grade: 2. The lessons are designed for students to not only learn shape names but also the features of shapes, with a lot of the lessons emphasizing the following feature concepts: sides, corners, and edges. Grade 2 geometry worksheets: our grade 2 geometry worksheets focus on deepening students' understanding of the basic properties of two-dimensional shapes, as well as introducing the concepts of congruency, symmetry, area, and perimeter. There are 2 worksheets where students count the number of sides and vertices and name the shape. That is, shapes are represented on the x and y axes, or on a horizontal and vertical plane. Here you will find our selection of free shape worksheets to help your child recognise and name some of the 2D shapes they will meet in 2nd grade. 2D Shapes and Names | Cut and Glue Activity. Find the Circles. Cut and Paste: Name the Shape. Packed with engaging exercises like identifying and describing basic 2D shapes, followed by quadrilaterals and polygons, coloring 2D figures, matching names with shapes, drawing 2-dimensional figures, identifying and counting, and a crossword puzzle, these PDF worksheets are aimed at laying a strong foundation in recognizing two-dimensional shapes. Grade 3 kids observe the sides and angles of each four-sided figure or quadrilateral in this printable plane shapes worksheet and identify and label them accordingly. Circle the correct answer for each of the following. Best for grade 2 to teach them about shapes. Grade 2 Geometry Worksheet. These are just a few 2D shapes worksheets I made for my classroom. Kids trace a variety of shapes, then brighten them up with some color. DISCOVERING 3D SHAPES. Kids will learn how to sort shapes, as well as partition them into halves, quarters, and thirds. Worksheet 1. This enormous collection of 3D shapes worksheets opens kids to the exciting world of shapes and sparks a hunger for experimentation, making it a great choice for kindergarten through high school students. Drawing 2D shapes also becomes easier and quicker with practice: much as clay kneaded smooth becomes easy to mould, a young child's mind is easiest to shape early. Summary: This document is designed to expand on and broaden students' knowledge of two-dimensional and three-dimensional shapes.
Grade/level: kindergarten to grade 2. Age: 3–7. Main content: 2D shapes. Other contents: number of sides. This teaching resource pack includes worksheets addressing the following concepts: 2D shapes; 3D objects; line symmetry; identifying 2D shapes and 3D objects; and comparing 2D shapes and 3D objects. Count the number of sides and write the apt Greek prefix with the word "gon", as in pentagon, hexagon, heptagon, octagon, and so on. Practice with 23 activities. We can begin schooling them early. Mª ROSA GARCIA BLAZQUEZ, CEIP RAMON LLULL (RUBI). Worksheet 3: Match the name of the shapes with the pictures and objects. We have penned this printable set of identifying and naming 2D shapes worksheets that is easy to use, for the novice and not-so-shape-savvy kids of kindergarten, grade 1, grade 2, and grade 3. Circle / Rectangle / Triangle; Rectangle / Square / Circle; Square / Circle / Rectangle. With 2-dimensional shapes forming the basis of many a drawing, it's vital for kids to learn to draw them. SHAPES WHICH SLIDE. 2D or 3D? Shape names: circle, pyramid, square, cube, cylinder. PDF (7.4 MB). These NO PREP 2D shapes worksheets provide great practice for tracing shapes and drawing shapes. Use these pages to help your students differentiate between 2D and 3D shapes. Tap on the PRINT, PDF, or IMAGE button to print or download this grade-1 worksheet for drawing basic geometry shapes. A few 2-D shapes to name are square, rectangle, heart, star, rhombus, and trapezoid. This fits seamlessly into the […] Geometry – 2D Shapes Mixed Math PDF Workbook for Second Graders; Geometry – 2D Shapes Workbook (all teacher worksheets – large PDF); Second Grade Geometry Worksheets – 2D Shapes Worksheet. Kids in 1st grade and 2nd grade are expected to identify the 2D shapes and write their names in this PDF. This worksheet is colourful and includes real-life examples of 2D shape recognition, like a circle, rectangle, square, etc. Grade-1 Worksheet for Drawing 2D Geometry Shapes. Our final worksheets introduce 3D shapes. Shapes worksheet for grade 2 | Witknowlearn: treat the cool kid in your kindergarten and grade 1 to this worksheet on identifying and coloring 2D figures. As per the names of some basic 2D shapes, we have here the circle, triangle, square, rectangle, pentagon, star, and more. Put your ruler to good use, or you may end up with wiggly lines while sketching the flat figures. PDF worksheets; Grade 1; Math; Geometry; 2D and 3D Shapes Worksheet. With three flat shape names as options to choose from, this printable identifying and naming 2D shapes worksheet doesn't fail to enthuse the little ones and test their recognition skills.
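The answer key above quotes areas such as 49 cm² and 351 cm² without showing the arithmetic. As a quick companion for anyone checking such answer keys, here is a minimal Python sketch of the standard area formulas; the sample dimensions are assumptions chosen purely for illustration and are not taken from the worksheets themselves:

```python
import math

# Standard area formulas behind typical worksheet answer keys.
# The dimensions below are illustrative assumptions, not worksheet data.

def rectangle_area(width: float, height: float) -> float:
    return width * height

def triangle_area(base: float, height: float) -> float:
    return 0.5 * base * height

def circle_area(radius: float) -> float:
    return math.pi * radius ** 2

print(rectangle_area(7, 7))    # 49.0 -- e.g. a 7 cm square has area 49 cm^2
print(rectangle_area(27, 13))  # 351.0 -- one of many rectangles with area 351 cm^2
print(round(triangle_area(10, 6), 1), round(circle_area(5), 1))  # 30.0 78.5
```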
The types of hearing loss in children differ majorly by age. It is crucial to differentiate between congenital hearing loss (present at birth) and acquired hearing loss (developed after birth). Let us get an in-depth idea of both categories:
Congenital Hearing Loss in Children:
- Genetic Factors: Genetic factors are associated with a considerable percentage of cases of congenital hearing loss. When one or both parents carry a gene for hearing loss, it can be inherited and passed down. Hearing loss caused by genetics can be classified as either syndromic—linked to other medical conditions—or non-syndromic—affecting just hearing.
- Infections During Pregnancy: When a pregnant woman contracts infections such as toxoplasmosis, cytomegalovirus (CMV), or rubella (German measles), the result can be congenital hearing loss in the child.
- Premature Birth: Low birth weight and premature delivery are two risk factors for hearing loss. Because their auditory systems are still developing, premature babies are more susceptible to hearing impairment.
- Anoxia or Birth Complications: Hearing impairment can arise from oxygen deprivation during labor, which frequently happens as a result of delivery-related complications.
- Ototoxic Medications: Some drugs taken during pregnancy, such as diuretics or aminoglycoside antibiotics, can be ototoxic, damaging the baby's developing auditory system.
Acquired Hearing Loss (developed after birth):
- Ear Infections: Recurrent or chronic ear infections, especially in the middle ear, can lead to conductive hearing loss in children. Fluid buildup and damage to the middle-ear structures can impact a child's hearing.
- Noise Exposure: Extended exposure to high decibel levels, whether from leisure pursuits like listening to loud music or from external sources like heavy machinery, can lead to noise-induced hearing impairment.
- Head Trauma: Hearing loss may result from serious head injuries that harm the auditory system.
- Infections: When infections such as meningitis or severe cases of flu progress to the inner ear, sensorineural hearing loss may ensue.
- Medications: Some drugs can be ototoxic to a child's hearing if they are taken in large quantities or for a long time; a few antibiotics and chemotherapeutic treatments are among these prescriptions.
- Chronic Illnesses: Children's hearing loss can also result from conditions like diabetes, which can impair nerve function and blood circulation.
- Malformation of the Ear: Certain infants have structural ear defects that impair hearing from birth.
The management of childhood hearing loss depends heavily on early detection and intervention. In many nations, newborn hearing screening programmes are now considered standard practice, making it possible to identify congenital hearing loss early on. Parents and other carers should be on the lookout for symptoms of acquired hearing loss, such as delays in speech and language, a predilection for louder television or music, or behavioural changes in a child during communication.
It is estimated that up to 60 billion brown dwarfs make their home in the Milky Way. Because these elusive celestial objects do not fuse hydrogen in their core, they spend their lives cooling as they lose the gravitational energy from their formation, morphing as they age from looking like a low-mass star to looking like Jupiter. Every brown dwarf that was ever created still exists, because they can't fuse hydrogen, giving them a calm, sustained existence on the vast timeframe of the cosmos.
Historically, brown dwarfs have been defined as objects with 13–80 Jupiter masses that are unable to fuse hydrogen but are still massive enough to fuse deuterium, an isotope of hydrogen with a single neutron paired with its proton in the nucleus. Recently, however, astronomers have suggested that a different definition should be applied, one that encapsulates their formation process or other physical attributes. For example, Jovian planets likely formed from accretion of small planetesimals into a solid core followed by accretion of the surrounding gas, whereas binary stars formed via fragmentation of molecular clouds or their gaseous primordial accretion disks.
In 2021, an international team characterized five companions that were originally identified with the Transiting Exoplanet Survey Satellite (TESS) as TESS objects of interest (TOIs) – TOI-148, TOI-587, TOI-681, TOI-746 and TOI-1213. These are called "companions" because they orbit their respective host stars with periods of 5 to 27 days, with radii between 0.81 and 1.66 times that of Jupiter and masses between 77 and 98 Jupiter masses. Hence, these five objects sit right at the hydrogen-burning mass limit that separates brown dwarfs from low-mass stars.
The Hubble Survey
The image below is part of a Hubble Space Telescope 2018 survey for low-mass stars, brown dwarfs, and planets in the Orion Nebula. Each symbol identifies a pair of objects, which can be seen in the symbol's center as a single dot of light. Special image processing techniques were used to separate the starlight into a pair of objects. The thicker inner circle represents the primary body, and the thinner outer circle indicates the companion. The circles are color-coded: red for a planet; orange for a brown dwarf; and yellow for a star. Located in the upper left corner is a planet-planet pair in the absence of a parent star. In the middle of the right side is a pair of brown dwarfs. The portion of the Orion Nebula pictured measures roughly four by three light-years. Credits: NASA, ESA, and G. Strampelli (STScI)
The artist's illustration below represents the five brown dwarfs discovered with the satellite TESS. These objects are all in close orbits of 5–27 days (at least 3 times closer than Mercury is to the sun) around their much larger host stars. © 2021 Creative Commons (CC BY-NC-SA 4.0) – Thibaut Roger – UNIGE
Brown dwarfs cool from 3,000 K to only 500 K during the 13.8-billion-year age of the Universe. The five newly discovered TOIs are on the hotter side, roughly 2,500 K, due to their close proximity to their host stars. This is still a factor of two lower than the temperature of our sun.
These five new objects contain valuable information about the nature of brown dwarfs.
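Since the discussion leans on the historical 13 and ~80 Jupiter-mass boundaries, a compact way to see why these companions are ambiguous is to classify a few masses against those cutoffs. The following minimal Python sketch is illustrative only; the thresholds are the figures quoted in the text, and the sample masses are hypothetical:

```python
# Illustrative sketch (not from the article): classifying a substellar
# companion by the historical mass boundaries, in Jupiter masses.
# The 13 M_Jup deuterium-burning and ~80 M_Jup hydrogen-burning limits
# are the figures given in the text; the sample masses are hypothetical.

DEUTERIUM_LIMIT_MJUP = 13.0   # below this: planet (no deuterium fusion)
HYDROGEN_LIMIT_MJUP = 80.0    # above this: low-mass star (sustained hydrogen fusion)

def classify(mass_mjup: float) -> str:
    """Classify an object by mass using the historical definition."""
    if mass_mjup < DEUTERIUM_LIMIT_MJUP:
        return "planet"
    if mass_mjup <= HYDROGEN_LIMIT_MJUP:
        return "brown dwarf"
    return "low-mass star"

for m in (12.0, 50.0, 77.0, 98.0):
    print(f"{m:5.1f} M_Jup -> {classify(m)}")
```

The endpoints quoted for the five TESS companions, 77 and 98 Jupiter masses, fall on opposite sides of the ~80 Jupiter-mass cutoff, which is exactly why it remains unclear whether each object is a brown dwarf or a very low-mass star.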
“Each new discovery reveals additional clues about the nature of brown dwarfs and gives us a better understanding of how they form and why they are so rare,” says Monika Lendl, a researcher in the Department of Astronomy at the UNIGE and a member of the NCCR PlanetS.
“It is still unclear what the pathway of formation is for brown dwarfs. Likely they do not form exclusively via one method,” says astrophysicist and dailygalaxy.com editor Jackie Faherty. “Rather, they may form through the collapse of a molecular cloud, which makes stars, and alternatively they may form through accretion around a higher-mass host, which makes planets. These five new objects are bridge sources toward a better understanding of the formation pathways available for substellar-mass objects.”
One of the clues the scientists found to show these objects are brown dwarfs is the relationship between their size and age. “Brown dwarfs are supposed to shrink over time as they burn up their deuterium reserves and cool down,” explains François Bouchy, professor at UNIGE and member of the NCCR PlanetS. “Here we found that the two oldest objects, TOI 148 and 746, have a smaller radius, while the two younger companions have larger radii.”
“Even with these additional objects, we still lack the numbers to draw definitive conclusions about the differences between brown dwarfs and low-mass stars. Further studies are needed to find out more,” concludes Grieves. These objects are so close to the limit that they could just as easily be very low-mass stars, and astronomers are still unsure whether they are brown dwarfs.
Source: Nolan Grieves et al., "Populating the brown dwarf and stellar boundary: Five stars with transiting companions near the hydrogen-burning mass limit", Astronomy & Astrophysics (2021). DOI: 10.1051/0004-6361/202141145
Image credit: top of page, Brown Dwarf, NASA/JPL-Caltech. Maxwell Moe, astrophysicist, NASA Einstein Fellow, University of Arizona, via Jackie Faherty and the University of Geneva.
A circle is a round plane figure with a boundary (called the circumference) that is equidistant from its center. It is a fundamental object studied in geometry.
In order to describe the shape of an object, we give the object appropriate dimensions. For example, a rectangle can be described with its height and width. It is harder to describe the shape of a triangle, since we would require the lengths of all three edges. In the case of a circle, it is much easier, since we only need its radius or diameter to describe its geometry.
Then, what are the radius and diameter of a circle? These concepts are very important in the geometry of a circular shape, so let's review the terminology. The radius of a circle is the distance from the center of the circle to any point on its circumference. The diameter of a circle is the length of a line segment that starts at one point on the circle, passes through the center, and ends on another point on the circle's opposite side. It is also referred to as the longest possible chord in the circle. The radius $r$ and the diameter $d$ are interrelated as $d = 2r$.
The formula for the circumference of a circle is $C = \pi d = 2\pi r$, where $r$ is the radius, $d$ is the diameter, and $\pi$ is the mathematical constant "pi". The first digits of $\pi$ are $3.14159\ldots$, but any finite list of digits can only be an approximation of $\pi$. Furthermore, $\pi$ (pronounced "pie") is an irrational number, meaning it cannot be described by any ratio of whole numbers. For more information about the constant $\pi$, check out the wiki page.
The area of a circle with radius $r$ is $A = \pi r^2$. This is proven by dividing a circle into even parts and then rearranging them into a crooked parallelogram. Observe that the area of a parallelogram is its base times its height. The "base" of the rearranged circle is half of the circumference, which is equal to $\pi r$, and the "height" is equal to the radius itself. Therefore, the area of the circle is equal to $\pi r \cdot r = \pi r^2$.
To find the area of a circle from its radius $r$, apply $A = \pi r^2$ directly. From the diameter $d$, first note that the radius of the circle is $r = d/2$; therefore, the area is $A = \pi (d/2)^2$. From the circumference $C$, recall that $C = 2\pi r$, so $r = C/(2\pi)$; therefore, the area of the circle is $A = \pi \left(\frac{C}{2\pi}\right)^2 = \frac{C^2}{4\pi}$.
One half of a circle is called a semicircle. The area of a semicircle is half the area of the whole circle: from the radius $r$ it is $\frac{1}{2}\pi r^2$, and from the diameter $d$ it is $\frac{1}{2}\pi (d/2)^2$. The circumference of a semicircle is the diameter plus one half of the circumference of the circle; if the radius of the circle is $r$, the circumference of the semicircle is $P = 2r + \pi r$, so $r = \frac{P}{2 + \pi}$. Thus, the area of a semicircle with circumference $P$ is $\frac{1}{2}\pi \left(\frac{P}{2+\pi}\right)^2$.
The arc length of a circle is the length of the curved part. The arc length of a full circle is its circumference, but what about the arc length of sectors (pieces of circles)? It is calculated by the formula $s = r\theta$, where $s$ is the arc length, $r$ is the radius of the circle, and $\theta$ is the angle of the sector. NOTE: The angle should be in radians.
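As a concrete check of these formulas, here is a short worked example; the radius value $r = 6$ is an assumption chosen purely for illustration:

```latex
% Worked example with an assumed radius r = 6 (illustrative value only).
\begin{aligned}
d &= 2r = 12, \\
C &= 2\pi r = 12\pi \approx 37.70, \\
A &= \pi r^2 = 36\pi \approx 113.10, \\
s &= r\theta = 6 \cdot \tfrac{\pi}{3} = 2\pi \approx 6.28
   \quad \text{(arc length for a central angle of } \theta = \tfrac{\pi}{3} \text{ radians)}.
\end{aligned}
```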
In order to measure an arc of a circle, we use the size of the central angle that forms the arc. A central angle of a circle is an angle whose vertex is the center of the circle and whose sides are radii of the circle. In the accompanying diagram, the angle at the center is a central angle that forms an arc of the circle. Two equal arcs are formed by two equal central angles. Among two arcs, the longer one is formed by the larger central angle. If the central angle is $180^\circ$, the arc formed by this angle is half the circumference. If the central angle is $360^\circ$, the arc formed by this angle is the whole circumference.
An inscribed angle is an angle whose vertex is a point on the circumference of the circle and whose sides are chords of the circle that pass through the vertex. In the diagram, the two marked angles are both inscribed angles that form the same arc. The size of an inscribed angle is half the size of the central angle that forms the same arc. Two inscribed angles forming the same arc, or forming two equal arcs, are equal; conversely, if two inscribed angles are equal, the arc(s) that they form are equal. Among two inscribed angles, the larger one forms the longer arc.
The figure above shows a circle with two chords intersecting. The two chords are each cut into two segments by the point of intersection: one chord is cut into two line segments of lengths $a$ and $b$, and the other into two segments of lengths $c$ and $d$. The intersecting chord theorem states that the two chords satisfy $ab = cd$. Thus, $ab$ is always equal to $cd$, regardless of where the two chords intersect inside the circle. Several practice problems apply this theorem to specific figures, asking for an unknown segment length; one such problem combines the theorem with a pair of triangles in SAS similarity with a ratio of 2:3.
As a further example, suppose $\overline{AB}$ is a chord of a circle of radius 4 centered at $O$, with $C$ a point on the circle, and suppose the line segments $\overline{AB}$ and $\overline{OC}$ bisect each other at a point $M$. What is the length of $\overline{AB}$? Let $D$ be the point on the circle where the extension of $\overline{CO}$ passes through. Then, since the radius of the circle is 4 and the two segments bisect each other, the lengths of $\overline{MC}$ and $\overline{MD}$ are $2$ and $4 + 2 = 6$. Now, according to the intersecting chord theorem, we have $AM \cdot MB = MC \cdot MD = 12$. Since $AM = MB$, we know that $AM = 2\sqrt{3}$. Hence, our answer is $AB = 4\sqrt{3}$.
A circle sector is a closed figure bounded by two radii of a circle and the circle's arc. Consider a sector of angle $\theta$ and radius $r$. If the angle is in degrees, then the sector is $\frac{\theta}{360}$ of the circle. Since the area of the circle is $\pi r^2$, the area of the sector is $\frac{\theta}{360}\pi r^2$. If $\theta$ is in radians, then the sector is $\frac{\theta}{2\pi}$ of the circle, and hence the area of the sector is $\frac{\theta}{2\pi}\pi r^2 = \frac{1}{2}r^2\theta$. Given a sector's radius and angle, its area follows directly from this formula; conversely, given a sector's area $A$ and angle $\theta$ in radians, its radius is $r = \sqrt{2A/\theta}$.
Finally, consider a large circle with $\overline{AB}$ as a diameter, so that the large circle has radius $R$ and $AB = 2R$. Two medium-sized circles of equal radius $R/2$ are tangent to each other and to the large circle, with their centers on $\overline{AB}$. A small circle of radius $s$ is tangent to the other three circles. Now, how can we express $s$ in terms of $R$? Note that the center of the small circle lies at distance $R - s$ from the center of the large circle and at distance $R/2 + s$ from the center of each medium circle. Then, in the right triangle formed by these centers, we have $\left(\frac{R}{2}\right)^2 + (R - s)^2 = \left(\frac{R}{2} + s\right)^2$, which gives $s = \frac{R}{3}$. Suppose that we now have an even smaller circle tangent to the large circle, the small circle, and the upper medium-sized circle. Can you find the radius of this new circle?
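The intersecting chord theorem is also easy to verify numerically. The following minimal Python sketch (all names and values are illustrative assumptions) draws random chords through a fixed interior point of a unit circle and confirms that the product of the two segment lengths is the same for every chord:

```python
import math
import random

def chord_segments(px, py, angle, r=1.0):
    """Lengths of the two segments into which the point P = (px, py) splits
    the chord through P with direction `angle`, for a circle of radius r
    centered at the origin. P must lie strictly inside the circle."""
    dx, dy = math.cos(angle), math.sin(angle)
    # Points on the chord are P + t*(dx, dy); |P + t*d|^2 = r^2 gives
    # t^2 + 2*b*t + c = 0 with b = P.d and c = |P|^2 - r^2.
    b = px * dx + py * dy
    c = px * px + py * py - r * r
    disc = math.sqrt(b * b - c)   # real because c < 0 for an interior point
    return abs(-b - disc), abs(-b + disc)

random.seed(0)
px, py = 0.3, -0.4   # an arbitrary point strictly inside the unit circle
for _ in range(3):
    a, b1 = chord_segments(px, py, random.uniform(0.0, math.pi))
    c1, d1 = chord_segments(px, py, random.uniform(0.0, math.pi))
    print(f"ab = {a * b1:.6f}   cd = {c1 * d1:.6f}")   # both 0.750000 each time
```

For the point $(0.3, -0.4)$ inside a unit circle, every product equals $1 - 0.25 = 0.75$, the power of the point with respect to the circle.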
Fun, interesting, engaging, effective, meaningful, crucial, powerful, empowering, real. These are words that teachers want to hear about their instruction. Their goal is to provide instruction that makes a difference in learners' lives. Technology is a powerful resource that is helping many teachers meet this goal. The purpose of this text is to help you meet this goal by addressing what you should know and be able to do with technology. Unlike most technology education texts, the focus of this text is on learners and learning rather than only on the technology itself. This focus will help you to address problems with learning as they arise, integrate new technologies with ease in pedagogically sound ways, and share your knowledge and understandings with your colleagues and students.
Technology should be seen as support for what teachers know and do. Instead of providing a prescription for how to teach, viewing technology as a support for teaching and learning allows teachers to discover ways to do what they already do more efficiently, more effectively, more interestingly, or in new and innovative ways. From this point of view, this text focuses on foundational, or essential, ideas for effective technology-enhanced learning and teaching. This first chapter provides a foundation for the rest of this text by demonstrating and explaining why you should employ a learning focus to plan technology use and how such a focus might help you effectively meet content and technology standards to address the needs of all learners.
Views of technology use in education have changed steadily and rapidly over the past twenty years. The initial focus was on students learning to use technology. That changed to using technology to learn, as demonstrated by the 2007 International Society for Technology in Education standards for students in Figure 1.1.
The ISTE® National Educational Technology Standards and Performance Indicators for Students (revised June 2007). Students:
- Students demonstrate creative thinking, construct knowledge, and develop innovative products and processes using technology.
- Students use digital media and environments to communicate and work collaboratively, including at a distance, to support individual learning and contribute to the learning of others.
- Students apply digital tools to gather, evaluate, and use information.
- Students use critical thinking skills to plan and conduct research, manage projects, solve problems and make informed decisions using appropriate digital tools and resources.
- Students understand human, cultural, and societal issues related to technology and practice legal and ethical behavior.
- Students demonstrate a sound understanding of technology concepts, systems, and operations.
Figure 1.1 2007 NETS for Students
Source: Reprinted with permission from National Educational Technology Standards for Students, Second Edition, © 2007, ISTE® (International Society for Technology in Education), www.iste.org. All rights reserved.
Compare the 2007 standards to those from 2016, listed below. According to the standards for students, learners should use technology to become:
- Empowered learners
- Digital citizens
- Knowledge constructors
- Innovative designers
- Computational thinkers
- Creative communicators
- Global collaborators
(Available online at https://www.iste.org/standards/standards/for-students).
These standards show that the movement in education technology is away from a focus on specific hardware and software and toward what we want learners to be able to do and become; in this way, technology use supports and can be integrated with standards from across the disciplines.
Meeting the Standards: 21st Century Skills
Standards, instructional goals, curricula, legislation, teacher beliefs, student experience, resources, and many other variables guide technology use in classrooms. Ultimately, educational stakeholders agree that the use of technology is to prepare students, but there is often little agreement on what they are being prepared for (jobs? citizenry? life in general?) and how that preparation should be conducted (drill? experiential learning? discovery?). Nonetheless, for teachers looking to understand what is essential to support learning with technology, the common components integrated into national technology and content area standards and state requirements provide a good start. These goals, often termed "21st-century skills" because of their perceived need in the near future, include:
- Content learning
- Critical thinking
Other chapters in this text discuss how to meet these learning goals and how technology can support the process. Find links to your state and disciplinary standards online by searching "+state name +standards" or "+discipline name +standards"; for example, "Idaho standards" or "science standards."
OVERVIEW OF LEARNING AND TECHNOLOGY
In each chapter of this text, the overview section presents definitions, explanations, and examples of the chapter focus. The discussion then gives readers a consistent understanding of the ideas to be presented and grounds the information in the rest of the chapter. In the current chapter, the overview focuses on a basic understanding of learning and technology.
What Is Learning?
This text discusses learning before it addresses technology because the central focus of technology use should be what students learn. The concept of learning is discussed in more detail in chapter 2, but clearly there are many ways to understand what it is and how it happens. Many learning theories exist. For example, two currently popular theories include:
Constructivist Theory (J. Bruner): A major theme in Bruner's theoretical framework is that learning is an active process in which learners construct new ideas or concepts based on their current/past knowledge.
Experiential Learning (C. Rogers): Rogers distinguished two types of learning: cognitive (meaningless) and experiential (significant). The former corresponds to academic knowledge such as learning vocabulary or multiplication tables, and the latter refers to applied knowledge such as learning about engines in order to repair a car.
For links to other learning theories, conduct a web search for "learning theories".
Many technology texts focus on one learning theory or philosophy as a guide for technology use; however, good teachers follow all kinds of philosophies, and good teaching is not necessarily a matter of behaviorism vs. constructivism or any other "-ism" (Ketterer, 2007). Good teachers keep students engaged and challenged and work with both language and content to develop student skills, abilities, knowledge, and experience (Aaronsohn, 2003). Obviously, this can happen in any number of ways, depending on students, context, goals, and tools. Sometimes it calls for a more behavioristic approach and sometimes for a more cognitive or social approach to teaching and learning.
This text points out that whether teachers believe that knowledge is to be memorized or that it is constructed through social interaction, there are ways that technology can help, from providing resources for content learning to supporting independent thought. To illustrate this and other points throughout the text, each chapter includes a feature titled From the Classroom. This feature integrates ideas, suggestions, and opinions from classroom teachers about the topics in the chapter; they can be found at the end of each chapter. Also, note Figure 1.2 below, which defines terms that are used often throughout this text in the discussions of learning goals.
What Is Technology?
As with the word learning, the term "technology" has many definitions. According to a variety of sources, technology is:
- Mechanisms for distributing messages, including postal systems, radio and television broadcasting companies, telephone, satellite and computer networks. (www1.worldbank.org/disted/glossary.html)
- Electronic media (such as video, computers, or lasers) used as tools to create, learn, explain, document, analyze, or present dance.
- The application of knowledge to meet the goals, goods, and services desired by people.
- The set of tools, both hardware (physical) and software, that help us act and think better. Technology includes all the objects from pencil and paper to the latest electronic gadget. Electronic and computer technology help us share information and knowledge quickly and efficiently.
- The application of scientific or other organized knowledge—including any tool, technique, product, process, method, organization or system—to practical tasks.
In general, a broad definition of technology ranges from mechanical assembly lines to Nintendo, from drugs to knowledge. In an even more global sense, technology is seen as a "driver of change" and "the fundamental cause for social shifts toward globalization and the new economy" (NCREL, 2004, p. 1). Technologies of all kinds hold an important place in society, and it is natural that education has been and will continue to be affected by technology uses.
What Is Educational Technology?
Educational technology is a subset of all existing technologies. To many educators, the term "educational technology" is synonymous with computers. Although the major focus of this text and of the field of educational technology is on computers, teachers and students use many other technologies in the course of a day, including the pencil, the telephone, and the stapler. Most teachers, however, do not need lessons on how to use a pencil well, so this text follows the trend to define educational technology as electronic technologies with an emphasis on computing. Basic components of technology include hardware, software, and connection, discussed later in this chapter.
Assessment: Assessment means gathering evidence about student needs, skills, abilities, experience, and performance. Assessment happens in technology-enhanced classrooms in many ways, as described in each of the upcoming chapters.
Context: Context is the environment or circumstances that surround something. For example, if a student poses a problem to be solved, it must be put into context by describing the events that led to it, what features it has, who is involved, and so on. The case at the start of each chapter in this book helps to provide a context for the discussions and examples.
Effective: In essence, effective means the capability to achieve a goal.
In other words, if a technology-enhanced task is effective, it has the potential and means to help students reach the learning goal. In this text, a crucial element for tasks is that they are effective. Engagement: When students are engaged, they are motivated and find the task meaningful. Engagement can be evidenced by willingness to stay on task, progress toward task goals, and ability to apply task content to life. According to McKenzie (1998), we can judge our classrooms “engaged” when we witness the following indicators: - Children are engaged in authentic and multidisciplinary tasks. - Students participate in interactive learning. - Students work collaboratively. - Students learn through exploration. - Students are responsible for their learning. - Students are strategic. Evaluation: Although many educators equate assessment with evaluation, there are qualitative differences in the terms. While assessment covers a range of processes and focuses, evaluation means making a judgment about something. Typically, this means assigning a grade or other value to whatever is being evaluated. Because schools and teachers have different requirements for evaluation, assessment is given more emphasis in this text. Feedback: Responses to student work, questions, and processes are feedback. Feedback can be positive, negative, clarifying, or interactive, and it can be provided in many forms such as spoken, written, or graphical. Feedback is discussed in every chapter as an essential component of the learning process. Goal: A goal is a general statement about what should happen or what the expected outcomes are. For example, a goal for technology use in science might be for students to understand scientific inquiry. The learning goals presented at the beginning of this chapter serve as the foci for this text. Objective: An objective is a specific statement about what students will be able to do when they complete the task or lesson. For example, for the science goal noted above, objectives could be that students will be able to define “inquiry,” to describe each part of the process, and to demonstrate the process. Objectives are usually stated with measurable action verbs—find a thorough list of them at http://www.schoolofed.nova.edu/sso/acad-writing/verbs.htm. Because student outcomes are vital in understanding how to support learning with technology, objectives are mentioned in many chapters. Process: A process is a sequence of events or procedure for accomplishing something. Each chapter in this text describes the process for achieving a learning goal. These processes overlap but each goal also has its own particularities. Scaffold: A scaffold is information, feedback, a tool, or some other form of support that helps students grow from their present level of knowledge, skill, or ability to the next level. Figure 1.2 Terms used in this text Each type of technology affords opportunities for different actions and can help fulfill learning goals in different ways. For example, students can learn to communicate and write with word processing and email tools; they can learn to organize and analyze with database, spreadsheet, and graphical organizer programs; they can learn about the importance of visuals using drawing software, participating in a virtual fieldtrip, or making a photo collage. Educational technology has been categorized in different ways based on these different goals. 
It has been looked at as:
- A tutor that presents information to be memorized (e.g., drill-and-practice software, instructional video)
- Support for student exploration (e.g., through electronic encyclopedias, simulations, and hypermedia-based data presentations that students can control)
- A creativity and production tool (e.g., word processing, videotape recording)
- A communications tool (e.g., email, electronic discussion forums)

In 2001, Levin and Bruce defined technology as media for (a) inquiry, (b) communication, (c) construction, and (d) expression. There are many more ways to describe educational technology, but across all of these descriptions, two main ideas emerge. First, as technology changes, so do the uses to which it is put and the ways in which it is characterized. The Internet, for example, has revolutionized the way that many students can obtain and use resources. The second, and seemingly obvious, idea is that a computer by itself is nothing but a plastic box with wires and silicon. In other words, a computer cannot do anything by itself. Ascione, in 2006, noted that what people do with technology is central to what it does for people; this crucial idea underlying technology use has not changed in the past decade and continues to be central to the use of technology in classrooms.

Technology Effectiveness in Classrooms

Although widely believed to cause better achievement, technology has not been shown overall to be effective at increasing student achievement. In part, this is because the research on effectiveness is "contradictory and/or seriously flawed" (Burns & Ungerleider, 2002–2003, p. 45). However, that does not mean that technology cannot be used to support student achievement in specific contexts. For example, Burns and Ungerleider (2002–2003) note that when age, task, and autonomy are considered in the use of computers, there are benefits to group work, high-level concept understanding for older students, and improvement in student attitudes toward computer technologies. Chauhan (2017), Cheung and Slavin (2013), and other researchers show that:
- Students can learn faster in computer-based instructional contexts.
- Student attitudes toward their classes are more positive when they include computer-based instruction.
- Children with special needs can achieve more in technology-rich environments.
- Students of all ages and levels can achieve more across the curriculum in technology-rich environments.

However, Chauhan also notes that for technology to have a positive effect, learning objectives must be clear and the technology must be used for specific, targeted goals. Research also clearly shows that the effectiveness of technology use is based on context; in other words, it depends on factors such as the learner; the learning environment; the knowledge, experience, and attitude of the teacher; the technology used; the task; and how technology use is assessed. Most important is that effective teaching and learning drive technology use. Two decades ago, McKenzie (1998) supported this view, noting that "there is no credible evidence that [technologies] improve student reading, math, or thinking skills unless they are in service of carefully crafted learning programs" (p. 2). This continues to be the case.

What Drives the Use of Educational Technology?

In spite of mixed reports on its effectiveness for learning, technology is used in classrooms across the nation. For some teachers, an interest in doing something innovative drives technology use.
For other teachers, obligations imposed by their schools or districts (for example, required lab use) drive it. Other impetuses include community/parental pressure, student demands, and economic rewards. State and federal laws push technology use by requiring that teachers and students be proficient and demonstrate learning. For example, the 2015 federal Every Student Succeeds Act requires that every student be technology literate, and teachers must be knowledgeable enough to help students reach this goal. Finally, the increase in student excitement, motivation, and achievement that teachers see as a result of technology use is another motivator for teachers to use educational technology.

STUDENTS AND TECHNOLOGY

In addition to the possible benefits listed above, why else do students need to be taught with and about technology? According to Gordon (2001), "Students may perform a Web search faster and better than their teachers, but they still need to be taught to filter and critically engage with what they read, see, and hear from the multimedia devices they so deftly operate. And school is still the place where they will need to develop the skills they need to function effectively in the world—to read and write, to add and subtract, to understand how nature and societies are organized and where they fit in" (pp. vii–viii). In other words, there are many other reasons why students should study about and with technology. Each chapter in this text presents benefits to students related to the topic of the chapter; some general benefits are presented here.

Student Benefits from Learning with and About Technology

One of the benefits of students learning with technology is that they will be engaged in new literacies, or new ways of being knowledgeable. Within the learning goals, a number of literacies are becoming more focal because technology calls attention to them. Three main literacies are described below.

Information literacy is the basic ability to "recognize when information is needed and have the ability to locate, evaluate, and use effectively the needed information" (American Library Association [ALA], 1998). More recently, the American Association of School Librarians and other organizations have created standards that include the need for lifelong learning and the ability to deal with the ever-increasing number of resources available both online and off (ALA, 2017). Students cannot recognize when information is needed if they do not have a grasp of the information that has already been presented to them. For example, conducting an accurate Web search and finding information that is appropriate and factual is part of being information literate. Information literacy implies that learners also have visual, numerical, computer, and basic (text) literacy. More detail on these standards is available from www.ala.org; also see the association's list of "Best Apps for Teaching and Learning."

Technological literacy is a second important but often overlooked literacy; it is the ability of students to make an "informed, balanced and comprehensive analysis of the technological influences on their lives and then be able to act on the basis of their analysis" (Saskatchewan Education, n.d., p. 1). In other words, students must understand not only how to use technology but also the many ways in which technology affects their lives. Computers are only one of the many technologies that this literacy addresses.
Media literacy addresses technology and more, as it involves critically thinking about the influences of media (including books, TV, radio, movies, and the Internet). It means choosing, reflecting on, appreciating, responding appropriately to, and producing media of all kinds. For example, media-literate students understand the motivations behind television commercials and can judge the merits of the product despite the persuasive techniques employed by advertisers. A great source for media literacy information is the Media Awareness Network.

Clearly, these literacies are tightly linked to the learning goals, and student achievement in these areas provides lifelong benefits. These literacies are integrated, even where not specifically mentioned, throughout the activities and ideas in this text.

Another benefit of student technology use is a change in how learning occurs in classrooms. If we think about how children learn at home and in the world, we can see that there is a disconnect between natural learning and classroom learning. Outside of school, children are encouraged to explore, to inquire, to experiment, and to come to their own conclusions with the help of adults and peers. In classrooms, children are often asked to listen, to memorize, and not to question. Technology use can make it more possible for students to learn in ways that resemble natural learning by providing resources, support, and feedback that teachers alone may not be able to provide. Of course, technology will not have these benefits if it is not used in ways that support this vision of learning. As a number of scholars have noted, just because you can do something with technology does not mean that you should. The goal is to make the technology use itself transparent, while examining the interactions, content, and process of the learning that occurs with technology.

TEACHERS AND TECHNOLOGY

As a technology-using teacher, you are central to meeting the goals of technology-supported learning. However, 50% of teachers describe themselves as unready to use technology for instruction (U.S. Office of Educational Technology, 2016). To support learning with technology effectively, teachers must learn how to integrate technology into effective learning tasks and understand what their roles are during the technology-supported learning process. Each text chapter provides characteristics of effective learning tasks based specifically on the chapter's learning goal. It also provides insights into teacher roles that effectively support learning with technology.

Characteristics of Effective Learning Tasks

In general, effective student tasks are those that result in authentic, meaningful, engaged learning. For a technology-supported task to be effective in this sense, it should have these general characteristics:
- Focuses on goals. Goals are developed based on standards, curricular requirements, and student needs, wants, and interests. Each chapter presents examples of goals.
- Includes technology that is working and available. However, it must be more than just some technology; it has to be the right technology. Guidelines to assist in making appropriate technology choices are presented throughout this text.
- Includes teacher education and support. Each chapter describes ways that teachers might find, discover, request, or use training and support.
- Allows time to learn relevant technologies. Guidelines in all the chapters discuss ways to do this efficiently.
- Provides needed resources.
Resources include lab time, online and offline information sources, and skills lessons. Suggestions for how and when to provide such resources are presented throughout this text.
- Uses technology only if appropriate. Effective tasks do not use technology if goals can be reached and content can be better learned, presented, and/or assessed through other means and tools. Each chapter includes a section on learning activities that demonstrate appropriate uses of technology.

Figure 1.3 summarizes these characteristics.

An effective technology-enhanced task:
- Focuses on goals
- Includes relevant technology
- Includes teacher support
- Integrates time to learn
- Provides a variety of relevant resources
- Uses technology only if it is necessary

Figure 1.3 Effective task characteristics

Teachers' roles in classrooms have changed. Although some teachers continue to work within a curriculum in which teaching is central and pencil and paper the norm, the trend is toward goal-centered and student-centered curricula in which student learning, supported by technology, is focal. This focus has changed the teacher's role in the classroom. A student-centered focus that includes understanding and addressing students' interests, for example, means that teachers need to vary their teaching so that student interests are connected to classroom content and tasks; technology use can help teachers to do so. As one saying goes, "While technology will not replace teachers, teachers who use technology will probably replace those who do not." For more information on why technology cannot replace teachers, see Purewal (2016).

Challenges for Teachers

Teachers using technology may face environmental, physical, attitudinal and philosophical, access, equity, cultural, financial, legal, and other obstacles. These challenges are presented in every chapter and discussed in depth in chapter 9. One challenge that teachers often voice is the idea that computers will put them out of a job. But there are many things that teachers can do that technology cannot. Figure 1.4 presents a very incomplete list that shows why teachers cannot be replaced by technology. As important as understanding what technology cannot do is understanding what it can. Figure 1.4 also presents some of the things that technology is typically more efficient or effective at than teachers are.

What can't technology do?
- Design a seating chart, taking into consideration understandings about children and their attitudes toward one another.
- Make friends or show respect.
- Create lessons that address the needs of diverse students.
- Decorate a classroom.
- Choose a textbook.
- Manage 20 third graders.
- Make a decision based on a gut feeling.
- Give creative feedback.
- Search for or create knowledge.

What can technology do?
- Manipulate streams of meaningless data.
- Repeat itself endlessly.
- Help make learning more efficient by controlling large amounts of data quickly.
- Help make learning more effective by providing a great wealth of resources and allowing students choices.
- Operate in environments where humans cannot.
- Connect people who could not connect cheaply or easily otherwise.
- Provide means to improve students' acquisition of basic skills and content knowledge (Kleiman, 2001).
- Motivate students (Kleiman, 2001).
- Work quickly and objectively.
- Strengthen teachers' preferred instructional approaches; for example, those who lecture can use computer-enhanced visual support, and those who prefer inquiry-based approaches can use raw data on the Web and databases or spreadsheets for analysis.
- Help to change the vision of a classroom as a room with four walls that depends solely on the teacher for information.

Figure 1.4 What technology can and can't do

How do teachers help technology do what it does best? Teachers can treat technology as the tool that it is and integrate its use into every content area. In addition, instead of teaching one or more technologies as the goal (or, where necessary, in addition to doing so), teachers can employ technology to meet curricular goals in all areas. Some teachers fear, often rightly, that technology learning may take the place of content learning and that the curriculum will not be covered. Teachers often do not understand at first how to balance technology and content and worry that there is not enough time to learn the technology they need. In these cases, teachers often stop using technology to focus on content, use only one technology repeatedly, or just jump in and hope for the best. But it does not need to be this way. Support from students and parents, willingness to set aside an hour a week for additional learning, and/or a district that is willing to support grant writing are some of the ways discussed in this text to help teachers find the time they need to learn about technology use. Chapters 8 and 10 address these issues. In addition, the Guidelines section in each chapter supports teachers in understanding the roles of technology in classroom learning and how they might plan their learning about technology.

GUIDELINES FOR USING EDUCATIONAL TECHNOLOGY

In each chapter, the Guidelines provide practical suggestions for teachers to help meet learning goals and overcome potential barriers. In this chapter, the guidelines present general issues to help you meet goals for technology use. These guidelines are summarized in Figure 1.5 below.

Guideline #1: Understand the realities of technology use. In addition to understanding what technology can and cannot do, there are other significant realities that teachers need to understand. For example, learning to use technology well takes time—for everyone to learn, for effective uses to be discovered, and for implementation to be complete. Learning technology will not always be smooth, but help is available from members of the school community, including parents, technology specialists, knowledgeable students, and other teachers. In addition, teachers can join online teacher-based groups such as the Global SchoolNet Foundation (http://www.globalschoolnet.org/index.cfm) for help, ideas, and resources. The special effects of technology such as cool art, stickers, sound effects, and so on (often called "bells and whistles") may take precedence for students over task content at first, but well-designed tasks following the guidelines in this text can help avoid this problem. In addition, there are resources to help with just about every technology need, from using the icons in Microsoft Word (see http://infobitt.blogspot.com/2010/06/toolbars-screentips-and-toolbar-buttons.html) to finding appropriate content for diverse learners (see the Colorín Colorado! site at http://www.colorincolorado.org/teaching-ells/technology-english-language-learners). This text and the accompanying Teacher Toolbox will help you to explore and find additional technology resources by presenting a variety of Web sites, software packages, and support information and by suggesting places to look for further ideas and information.
This text will also encourage you to share your findings with other educators.

Guideline #2: Examine equity and access for your students. Loschert (2003) reported 15 years ago that, although the average school had over 100 computers, each student typically had only 20 minutes per week on the computer. In addition, girls, minorities, and students with special needs often had less access than other students, particularly in high school (Kleiner & Farris, 2002; Male, 2003). Unfortunately, this trend, while decreasing, still holds (National Center for Education Statistics [NCES], 2015). NCES (2015) notes that 8% of school-age students (5–15 years) still had no Internet access as of 2013. If everyone is to learn with these tools, everyone must be able to access them. Other chapters in this text provide ways to arrange and use technology to make access more equitable; these include making the best use of classroom computers and creating arrangements to share technology equitably and effectively within schools.

Guideline #3: Consider student differences. Students bring skills and backgrounds that can add to or detract from technology-enhanced learning experiences. Teachers can assess student needs by first investigating their learning preferences, cultural and language differences, and background experiences and knowledge. Teachers can then address these needs by applying the techniques and strategies presented throughout this and other texts. These techniques include, for example, using content resources at multiple levels, giving students choice in the products they develop, and providing extra support for students who need it. In addition to specific instructional strategies, computer technologies can also help address the needs of diverse students and help to include students with a variety of abilities in classroom tasks. For example, special technologies called assistive devices can help teachers to provide larger text for sight-impaired students, voice recognition for students with physical disabilities, and extra wait time, feedback, or practice for those who need it. Assistive devices are presented later in this chapter and throughout the text. Technology can also provide support for English language learners (ELLs) and other students by providing resources in a variety of languages and many different ways to work (Egbert, 2005), from supportive team-based software to individual remediation Web sites. Suggestions for supporting the learning of ELLs with technology are noted throughout the text.

| Guideline | Summary |
| --- | --- |
| #1: Understand the realities of technology use. | Learning to use technology effectively takes time. Give yourself and your students that time. |
| #2: Examine equity and access for your students. | Not all students have equal access to technology. Teachers must make sure that everyone who needs it is given fair opportunities. |
| #3: Consider student differences. | Students who are physically and/or socially challenged or have other barriers to learning must be considered while technology-enhanced instruction is being designed. |

Figure 1.5 Guidelines for the use of educational technology

TECHNOLOGY-ENHANCED LEARNING ACTIVITIES

The Learning Activities section in each chapter presents suggestions and examples that model effective technology use. In this chapter, you will read about real-life educational technology uses taken directly from school reports. These examples provide an initial idea of effective ways that technology is being applied in classrooms.
The technology uses in the examples below, from the first decade of the 21st century, could still be considered innovative at the end of the second decade. This is one indication not only of how slowly technology uses have made their way into classrooms but also of how much teacher professional development in uses of educational technology is still needed so that all teachers can integrate technology effectively, like the teachers below:

Elaine Insinnia, an eighth-grade language arts teacher from Berkeley Heights, New Jersey, uses Internet research to help her students understand the novels she assigns. Using questions to help focus the students, Insinnia directs them as they research a book's author, the story's time period, and key historical events related to the plot. In the past, Insinnia and her students conducted similar research in the school's library, which often took several class periods. With the Internet, "you can get the same amount of information in 25 to 30 minutes," she says. "It saves you lots of time and the kids pay attention." The project lets students take control of their learning as they explore Web sites and information that interests them, Insinnia says. The project also teaches students how to evaluate the validity of information they find on the Web. After they complete their research, students share their findings in an online chat room [a Web site that allows communication in real time]. "When you are in a classroom discussion, the same kids dominate the discussion," Insinnia says. "In the chat room everyone gets a chance to answer and they are engaged." The chat room discussion also provides a record of each student's contribution, which Insinnia can review later, she adds. (Loschert, 2003, n.p.)

Tony Vincent, a fifth-grade classroom teacher in Omaha, Nebraska, reports: "Using a computer program called Sketchy, which functions like a digital flip book, students create short cartoons that show each step they take to solve a math problem. They move the numbers around the screen as they solve a problem and add 'thought bubbles' to explain their work. Students find the programs so engaging that they watch their cartoons, and ones created by their classmates, repeatedly. The process of creating the product and reviewing it reinforces the thought process students should use to solve the problems. … As a result, a lesson that used to take two weeks now takes just three days for students to comprehend." (Loschert, 2003, n.p.)

When Jane McLane first mentioned her upcoming sabbatical to bicycle around the world to Kristi Rennebohm Franz, a fellow teacher at Sunnyside Elementary in Pullman, Washington, she never dreamed she'd end up with 25 virtual companions. But somehow she did—Kristi's first and second graders! By carrying a digital camera and a small computer, Jane was able to communicate on a daily basis with Kristi and her students. Along the way, Kristi's students learned to write, read, and communicate as they interacted with Jane about world languages, cultures, geography, art, time zones, and architecture. (Learning Point Associates, 2004)

In a project described by FermiLab LInC (2000), seventh-grade students will be challenged to develop a schoolwide recycling program. The challenge will be for everyone (students, teachers, administrators, and especially the cafeteria and lunch program) to recycle waste products. Students will form teams to investigate waste and waste management.
They will also contact other schools throughout the country (via email) and collect data on school recycling programs. Do they exist? How are they managed? What percentage of waste has to be hauled away? What are the costs for running such a program? The teams will be encouraged to develop a Total School Recycle Program to either internally handle waste or to find resources that will productively utilize waste products. This will involve investigating the means of disposing of or recycling all the waste generated from their school building. Can it be done? (FermiLab LInC, 2000)

All of these examples are adaptable for a variety of grade levels and students and can make use of a variety of different technologies. More important, they demonstrate effective task characteristics and focus on 21st-century learning goals such as critical thinking and problem solving. The technology is employed as support for effective student learning. This learning focus is important because technology changes so rapidly. In fact, even by the time you finish reading this text, much of the technology mentioned in it may be in a new version, may have a new format, or may be obsolete entirely. However, having a firm grounding in the learning goals that will continue to be essential—for example, critical thinking, problem solving, content, and communication—means that teachers and students will be able to continue to integrate technology, deal with change, and work toward success.

Technology for Supporting Learning

Each chapter in this text presents a variety of technologies that can be used to support learning. This first chapter presents a general overview of technology for reference at any time during your reading of the text. It focuses on a basic understanding of educational technology that includes awareness of the components of any tool.

Components of Electronic Tools

Electronic tools generally consist of hardware, software, and connection components. Table 1.1 presents a basic overview and broad definitions of hardware components, listed in alphabetical order. For hardware, the three main types are input, processing, and output. Input devices are used to enter information into the computer. Output devices display or deliver the information in a format that users can understand. Processing devices change the input into output. There are also communication devices that connect computers to each other. The components listed in the table will also be mentioned in other chapters in this text.

Software is composed of a set of instructions that controls the operation of a computer. The most important software is the operating system, or OS. The OS manages the rest of the software on the computer. Typically, software is developed for one OS or platform, either Macintosh OS or Windows, but some software can run on these and other less common operating systems such as Unix and Linux. Find tutorials for these common operating systems by searching the Web. Information about types of software, software functions, and parts of a software package is presented in Table 1.2 below. These terms are used throughout this text.

Connection components, some of which are technically hardware (e.g., a modem) and others software (e.g., an e-mail package), allow computers around the world to communicate. A short list of important components is presented in Table 1.3 below.
Table 1.1 Hardware Components

| Component | Type | Description |
| --- | --- | --- |
| CD-ROM (compact disc, read-only memory)/DVD (digital video disc) | Storage device | Portable optical discs that store massive amounts of data. |
| Central processing unit (CPU) | Processing device | The "brains" of the computer. Works with the motherboard, disk drives, and memory chips; loads the operating system so the computer can run and performs the computer's operations. |
| Digital camera | Input device | Enters video and images. |
| Flash drive | Storage device | Portable, very small storage devices, also known as thumb drives and USB drives. Flash drives fit in a computer's USB port, which makes them very convenient for storage. |
| Handheld/mobile device | Combination device | These small computers have almost the same range of uses as their desktop-size counterparts, but they are more portable, cheaper, and wireless. Many people use their cell phones as devices that can receive input, allow for transformation of that input, and send output to many other devices. |
| Hard drive | Storage device | Stores information long-term on a computer. The hard drive contains any software installed on the computer and files that the user has created and saved. |
| Keyboard | Input device | Enters text and numbers. Many people currently use voice input, and more will as the software it employs becomes more accurate. |
| Microphone | Audio input device | Enters audio information, particularly for speech recognition. |
| Modem | Communication device | Allows one computer to talk to another over a phone or cable line. Modems are also part of wireless communications. |
| Monitor | Output device | Displays information from the computer. |
| Mouse/touch screen/touch pad | Input device | Points to and selects information. Touch screens are becoming the norm as more people use their phones and other mobile devices. Users can input with their fingers or with a special pen called a stylus. |
| Printer | Output device | Prints a hard copy of graphics and text on paper or a paper product. 3D printers can also print three-dimensional objects and are being used across disciplines, although they are not yet present in the majority of schools. |
| Projector | Visual/audio output device | Provides a bigger picture than a monitor and can broadcast to a group. |
| RAM (random access memory)/ROM (read-only memory) | Storage device | RAM is the computer's primary memory and stores what is currently in use. ROM stores the computer's instruction set and cannot be changed by the user. |
| Scanner | Input device | Enters drawings, documents, text, designs, or anything else the user wants copied in digital format. |
| Speakers | Output device | Play audio output. |
| Web cam | Input device | Takes pictures for display on the Internet and supports real-time communication with other users who can see you. |

Table 1.2 Software

| Term | Category | Examples/Notes |
| --- | --- | --- |
| Commercial | Software type | Microsoft Office Suite |
| Communications | Software function | Email, courseware (addressed in chapter 3) |
| Freeware | Software type | Programs from sites such as download.com and tucows.com |
| Operating system | Software component | Mac OS, the latest Microsoft Windows version, Linux, and others |
| Personal productivity | Software function | Word processor, database, spreadsheet, presentation software (addressed in chapter 7) |
| Programming software, formatting languages | Software function | C, Java, HTML, and many more types that allow users to create instructions for the computer. Children can code a variety of programs with software such as Blockly, Python, Ruby, and Scratch. Find more information by searching for these programs on the Internet. |
| Shareware | Software type | Software that users can choose to pay for if they like it, found at sites such as totalshareware.com, bestshareware.net, and freshshare.com |
| Teacher tools | Software function | Grade books, letter generators, rubric makers |
| User interface | Software component | The user interface is what the user sees on the screen/monitor. A poorly constructed user interface can make software or a Web site hard to use. |

Table 1.3 Connection Components

| Component | Description |
| --- | --- |
| Internet | Connects computer networks around the world so that they can "talk" to each other. Computers must typically have a modem (see hardware). |
| ISP | Internet service providers (ISPs) are organizations that provide connection to the Internet, typically for profit. |
| LAN | Local area networks (LANs) connect computers on the same network through wireless or cable connections to share printers and applications. |
| WAN | Wide area networks (WANs) connect local computers to a broader network (such as the Internet) or connect LANs together. |
| World Wide Web (WWW or Web) | The part of the Internet that enables electronic communication of text, graphics, audio, and video. |

This text addresses supporting learning with technology for students with a wide range of abilities, skills, and needs. In some instances, the choice of resource or student role in an activity will be enough to help students access academic content. In other cases, special technologies, called assistive devices, will be needed for students to access the information they need. In general, assistive devices are hardware and software designed for specific needs. Table 1.4 presents examples of some of these devices, and others are presented throughout this text. In addition, the Microsoft (www.microsoft.com) and Apple (www.apple.com) Web sites list all of the assistive devices included in their operating systems. The benefits of access to technology for students with disabilities include:
- Being able to bridge ideas
- Sequential practice to master concepts step by step
- Control over their environment
- Timely feedback
- Access to multimodal (visual, auditory, tactile, and kinesthetic) and multi-intelligence materials (Barry & Wise, n.d.)

Teachers need to understand why and how to use assistive technologies to help students effectively. For example, teachers may not think about how students with different abilities will access information from the Web. For students who are visually impaired or physically challenged, access is an important issue. Simple solutions to access problems range from making the text in the Web browser bigger so that sight-impaired students can see it to providing a special large mouse that needs only a light touch to work. For ways to make the Web more accessible to all students, see www.phschool.com/about_ph/web_access.html and other parts of this text.

Table 1.4 Assistive Technologies

| Technology | What it does | Who it helps | Examples |
| --- | --- | --- | --- |
| Accessibility testers | Test whether a Web site is as accessible as possible | All teacher- and student-made Web pages | Bobby software (Watchfire) |
| Closed-captioned TV | Shows the TV audio as text | Students who are hearing-impaired | Every TV sold in the U.S. since 1993 must have closed-captioning capability |
| Touch screen | Students touch the monitor screen to give instructions to the computer, e.g., to click on links. | Can be used instead of a mouse for students who cannot control a mouse well. Touch screens are often used with young children. | Other "mouse emulators" include special keyboards, laser or infrared pointers, keyboard overlays, trackballs, and a variety of devices that can be tailored to students' needs. |
| Screen magnifiers and screen readers | Make screen text bigger and/or have the text read aloud | Sight-impaired users | Usually part of the operating system on computers; there are also free magnifiers that can be downloaded from Internet sites. |
| Signing avatars | Animated characters who use sign language | Students who use sign language | See the Signing Science Dictionary at http://signsci.terc.edu/ and find out more at the University of Toronto's Adaptive Technology Resource Centre, http://www.adaptech.org/en/team/atrc (in particular, see the downloads page). |
| Voice recognition software | Turns oral language into text on a computer screen | Students who cannot physically enter data in other ways | Dragon NaturallySpeaking and IBM's ViaVoice |
| Universally designed software | Features include spoken voice, visual highlighting, and document or page navigation. | Makes software accessible to struggling readers and students with disabilities and enables struggling readers to read the same books as their peers. | eReader (CAST; www.cast.org/our-work/learning-tools.html) and Thinking Reader software (Tom Snyder Productions/Scholastic) |

The University of Washington's DO-IT program provides teachers with outstanding resources, such as videos and articles, for understanding and working with assistive technologies. Read more about this program on the Web at http://www.washington.edu/doit/.

Appropriate Tool Use

The most important point in any discussion of technological tools is that if the tool does not make the task more effective or more efficient, a different tool should be employed. In addition, if there is no appropriate digital technology that fits the task, digital technology should not be used. For example, asking first graders to type sentences on the computer might be fun for them, but teachers need to evaluate whether the time students spend hunting for the correct keys and making editing mistakes might be better spent with a pencil or crayons. Or, setting ninth graders free on the Internet to research famous Americans might result in chaos that could be better organized by employing a more manageable information set in a digital encyclopedia. This theme of principled technology use is repeated throughout the text. The thoughtless use of technology and the problems it causes are well documented and discussed (Aslan & Reigeluth, 2010; Ferneding, 2003; Postman, 1993) and can be avoided.

After you have reviewed the goals for your lesson, decided on an effective task, integrated technology in appropriate and effective ways, and supported students through the task process, it's time to assess. Each chapter in this text presents ways to appropriately assess student progress toward learning goals. Most important in the discussions of assessment is that both the product of student learning and the process of student learning are the foci of assessment. In the examples given throughout this text, technology is both the focus of assessment (for example, did students use it well? was it appropriate for the task?) and used to assess (for example, an observation checklist on the teacher's handheld computer). However, it is important that assessments fit the specific context and students for whom they are developed.
Therefore, note that the assessments in this text only serve as models. They probably cannot be used without at least some adaptations to fit specific classroom, task, and student conditions. For example, a rubric, or detailed scoring outline, that is made to evaluate a technology-supported presentation for fifth graders is most likely inappropriate to evaluate a presentation by 10th-grade students. The text addresses a number of assessments, including:
- Scoring guides (chapter 2)
- Rubrics (chapter 3)
- Multiple-choice tests (chapter 4)
- Checklists and peer team reports (chapter 5)
- Performance assessments (chapter 6)
- Problem-solving notebooks (chapter 7)
- Electronic portfolios (chapter 8)

These assessments can be used in a variety of contexts other than those described in the chapters. The text's brief theoretical discussions that accompany assessment examples will help you to understand how and when to employ them effectively. As you move on to the rest of this text, keep in mind the underlying premise of this chapter: that learning comes before technology. Be sure to review ideas in the chapter as needed and to use the glossary of terms and table data to support your learning throughout the text.

FROM THE CLASSROOM

Below are comments from teachers that relate to the content of this chapter.

Theory and Practice

Our questions and frustrations reminded me of the three main theories which exist… The first is the behaviorist: [learning] is acquired through imitation, direct instruction, practicing through drills, memorization, etc. The second is innatist: [learning] is acquired naturally, just by listening to it and being immersed in an authentic environment. No direct instruction or correction is needed. The last is interactionist, which says that [learning] is acquired naturally, but it stresses the interaction portion, and also says that sometimes it is necessary to teach specific rules or correct student output. These are coming from the experts and it seems to me that perhaps pieces from each are true. I doubt any one theory could ever explain how every unique individual will learn. I think there is a time and a place for flashcards and memorization, but I think it is also crucial to have meaningful interaction. (Jennie, first-grade teacher)

We can't just throw the kids on a computer and expect learning to take place any more than we would show them the text and tell them to learn it by the end of the year. No matter what tools we use, we need to use good teaching practices, or our teaching will be ineffective. (Susan, fifth-grade teacher)

[A reading] says that computers are not capable of teaching, that teachers are the ones who actually perform this. I completely agree with this because it is important to keep in mind as technology continues advancing. This is why I feel that we need to rely on the content of our lessons in incorporating technology rather than using technology just because it will be fun when the activity itself might be better without it. Learning occurs best when it is driven by the human processes, not the technology. When this occurs, students are involved in their learning through negotiation of meaning with one another and are focused on the content of the project. (Cammie, student teacher)

I [keep] thinking about "how do I keep up?" I would love to see my students with digital notebooks, me videoconferencing with parents and students, using voice-generated technology.
First, district and state will need to support technology growth and use in the classrooms with monies for technical support: training, maintenance, wiring. Second, respect for equipment needs to be taught to students and families (now, if a student misplaces a book, parents may or may not pay). Thirdly, as professionals we (educators) will need to embrace the new technology. I am ready! (Jean, sixth-grade teacher)

I also wonder how much the role of teachers will change as technology advances. I even applied for a tutoring job with [a company where] you tutor online with a digital pencil and headset! Pretty crazy. Also, if we can listen and learn from history . . . there were so many predictions that new technology would revolutionize teaching and they really never did. For example, when the radio, TV, and mainframe computer came out, they were all expected to change the entire educational scene, but in reality, the changes were minute. From my reading, educational technology researchers always warn not to get overly excited about the future of technology based on history. (Jennie, first-grade teacher)

I see [the] point about finding the purpose of assessment before deciding what type is more appropriate. However, I feel it's even more important to find out what type of student we are dealing with before deciding which assessment works better. For example, when we test our students in our building, we know certain students with extra barriers (language, attention span, etc.) will benefit more or will show their abilities better in a computer assessment versus paper/pencil. So, teachers decide to give them the computer assessment! It's not really a matter of what but WHO is taking the test! (Andrea, third-grade teacher)

Each chapter in this text includes a Key Points Review that summarizes chapter ideas.
- Explain why a learning focus is important in supporting learning with technology. Technology is a tool that teachers can use to support learning, but learning must be foremost. If teachers do not understand how to support learning, technology use will be ineffective and inefficient. Kleiman (2001) summarizes the focus of this text, noting that "while modern technology has great potential to enhance teaching and learning, turning that potential into reality on a large scale is a complex, multifaceted task. The key determinant of our success will not be the number of computers purchased or cables installed, but rather how we define educational visions, prepare and support teachers, design curriculum, address issues of equity, and respond to the rapidly changing world" (p. 14).
- Describe the relevant standards and the 21st-century skills that ground the learning in this text. The integration of content area and technology standards, along with standards for English language learners, results in six 21st-century skills that can serve as learning goals in the creation of technology-supported learning tasks, including:
  - Content learning
  - Critical thinking
  - Problem solving
- Define "educational technology" and related terms. Pencils, chalkboards, and overhead projectors are all educational technologies. However, in today's classrooms, educational technology is usually understood to be electronic technologies, particularly computers, that are used to support the learning process.
- Discuss the use of technology tools for providing access to learning for all students, including physically challenged students, English language learners, and others who might face barriers to learning.
Hardware, software, and connection are the main components of electronic technologies. Specific applications of these components can determine whether students can access the content and demonstrate their skills.
- Present an overview of computer-based and computer-assisted assessment practices. There are many ways to assess student learning in every classroom. This idea does not change when technology is integrated, but technology use can make assessment easier and more effective.
- Understand how and why to adapt lesson plans for more effective learning. Evaluating lessons according to criteria for effective technology-supported learning can help you provide instruction that is accessible, engaging, and useful for all students in your classroom.

References

Aaronsohn, E. (2003). The exceptional teacher. San Francisco, CA: Jossey-Bass.
ALA (2017). Standards for the 21st-century learner. Available: http://www.ala.org/aasl/standards/learning.
American Library Association and Association for Educational Communications and Technology. (1998). Information power: Building partnerships for learning. Chicago: Author.
Ascione, L. (2006). Study: Ed tech has proven effective. eSchool News Online. www.eschoolnews.com.
Aslan, S., & Reigeluth, C. (2010). What are the factors that contribute to ineffective and limited use of Learning Management Systems (LMS) in the schools? Proceedings of AECT. http://www.aect.org/pdf/proceedings10/2010I/10_01.pdf.
Barry, J., & Wise, B. (n.d.).
Burns, T., & Ungerleider, C. (2002–2003). Information and communication technologies in elementary and secondary education: State of the art review. International Journal of Educational Policy, Research, and Practice, 3(4), 27–54.
Chauhan, S. (2017). A meta-analysis of the impact of technology on learning effectiveness of elementary students. Computers & Education, 105, 14–30.
Cheung, A., & Slavin, R. E. (2013). The effectiveness of educational technology applications for enhancing mathematics achievement in K-12 classrooms: A meta-analysis. Educational Research Review, 9, 88–113. http://dx.doi.org/10.1016/j.edurev.2013.01.001.
Egbert, J. (2005). CALL essentials. Alexandria, VA: TESOL.
Egbert, J., Paulus, T., & Nakamichi, Y. (2002). The impact of CALL instruction on classroom computer use: A foundation for rethinking technology in teacher education. Language Learning and Technology, 6(3), 108–126.
FermiLab LInC. (2000). Project examples. http://www-ed.fnal.gov/lincon/el_proj_examples.shtml.
Ferneding, K. (2003). Questioning technology: Electronic technologies and educational reform. New York: Peter Lang.
Gordon, D. (2001). The digital classroom: How technology is changing the way we teach and learn. Cambridge, MA: Harvard Education Letter.
Ketterer, K. (2007). Online learning in harmony. Learning and Leading with Technology, 34(6), 19.
Kleiman, G. (2001). Myths and realities about technology in K-12 schools. In D. Gordon (Ed.), The digital classroom. Cambridge, MA: Harvard Education Letter.
Kleiner, A., & Farris, E. (2002). Internet access in U.S. public schools and classrooms: 1994–2001. National Center for Education Statistics (NCES 2002018). Web version available: http://nces.ed.gov/pubsearch/pubsinfo.asp?pubid=2002018.
Learning Point Associates (2004). 21st century skills: Kristi Rennebohm Franz's primary classroom. http://www.ncrel.org/engauge/skills/glimpse1.htm.
Levin, J., & Bruce, B. (2001, March). Technology as media: The learner centered perspective. Paper presented at the 2001 AERA Meeting, Seattle, WA. Available: http://tepserver.ucsd.edu/~jlevin/jim-levin/levin-bruce-aera.html.
Loschert, K. (2003, April). Are you ready? NEA Today. Available: http://www.nea.org/neatoday/0304/cover.html.
Male, M. (2003). Technology for inclusion: Meeting the special needs of all students (4th ed.). Boston, MA: Allyn & Bacon.
McKenzie, J. (1998, September). Grazing the Net: Raising a generation of free range students. Phi Delta Kappan. Online version available: http://fno.org/text/grazing.html.
Mills, S., & Roblyer, M. (2006). Technology tools for teachers: A Microsoft Office tutorial (2nd ed.). Upper Saddle River, NJ: Pearson.
National Center for Education Statistics (2015). Digest of education statistics. Available: https://nces.ed.gov/programs/digest/d15/tables/dt15_702.10.asp?current=yes.
National School Boards Foundation (n.d.). Technology's effectiveness in education. Available: http://www.nsba.org/sbot/toolkit/teie.html.
NCREL (2004). enGauge resources what works—Enhancing the process of writing through technology: Integrating research and best practice. Learning Point Associates. http://www.ncrel.org/engauge/resource/techno/whatworks.
O'Connor, J., & Robertson, E. (2002). George Polya. MacTutor History of Mathematics Archive. http://www-groups.dcs.st-and.ac.uk/~history/Mathematicians/Polya.html.
Plotnik, E. (1999). Information literacy. ERIC Digest. ED427777. http://searcheric.org/digests/ed427777.html.
Postman, N. (1993). Technopoly: The surrender of culture to technology. New York: Vintage Books.
Purewal, H. (2016, December 7). Can technology replace teachers? The Guardian. Available: https://www.theguardian.com/commentisfree/2016/dec/07/can-technology-replace-teachers-google.
Saskatchewan Education. (n.d.). Chapter V: Technological literacy. In Understanding the common essential learnings. Regina, SK, Canada: Author. http://www.sasked.gov.sk.ca/docs/policy/cels/el5.html.
U.S. Office of Educational Technology (2016). National education technology plan: Section 2: Teaching with technology. Available: https://tech.ed.gov/netp/teaching/.
Crop rotation is the practice of growing a series of different types of crops in the same area across a sequence of growing seasons. It reduces reliance on one set of nutrients, pest and weed pressure, and the probability of developing resistant pests and weeds.

Growing the same crop in the same place for many years in a row, known as monocropping, gradually depletes the soil of certain nutrients and selects for a highly competitive pest and weed community. Without balancing nutrient use and diversifying pest and weed communities, the productivity of monocultures is highly dependent on external inputs. Conversely, a well-designed crop rotation can reduce the need for synthetic fertilizers and herbicides by better using ecosystem services from a diverse set of crops. Additionally, crop rotations can improve soil structure and organic matter, which reduces erosion and increases farm system resilience.

Agriculturalists have long recognized that suitable rotations — such as planting spring crops for livestock in place of grains for human consumption — make it possible to restore or to maintain productive soils. Ancient Near Eastern farmers practiced crop rotation as early as 6000 BC, alternately planting legumes and cereals without understanding the chemistry involved. Under a two-field rotation, half the land was planted in a year, while the other half lay fallow; in the next year, the two fields were reversed. In China, both the two-field and three-field systems had been used since the Eastern Zhou period. From the time of Charlemagne (died 814), farmers in Europe transitioned from a two-field crop rotation to a three-field crop rotation. From the end of the Middle Ages until the 20th century, Europe's farmers practiced a three-field rotation, in which available lands were divided into three sections. One section was planted in the autumn with rye or winter wheat, followed by spring oats or barley; the second section grew crops such as peas, lentils, or beans; and the third field was left fallow. The three fields were rotated in this manner so that every three years, one of the fields would rest and lie fallow.

Under the two-field system, if one has a total of 600 acres (2.4 km2) of fertile land, one would only plant 300 acres. Under the new three-field rotation system, one would plant (and therefore harvest) 400 acres; the short code sketch below generalizes this arithmetic. But the additional crops had a more significant effect than mere quantitative productivity. Since the spring crops were mostly legumes, they increased the overall nutrition of the people of Northern Europe.

Farmers in the region of Waasland (in present-day northern Belgium) pioneered a four-field rotation in the early 16th century, and the British agriculturist Charles Townshend (1674–1738) popularised this system in the 18th century. The sequence of four crops (wheat, turnips, barley and clover) included a fodder crop and a grazing crop, allowing livestock to be bred year-round. The four-field crop rotation became a key development in the British Agricultural Revolution. The rotation between arable and ley is sometimes called ley farming. George Washington Carver (1860s–1943) studied crop-rotation methods in the United States, teaching southern farmers to rotate soil-depleting crops like cotton with soil-enriching crops like peanuts and peas.
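The field arithmetic above generalizes: when one of n equal fields lies fallow each year, (n − 1)/n of the land is planted. The minimal sketch below (in Python, used here purely for illustration; the function name and the 600-acre figure are chosen to match the example in the text) also covers the later four-field rotation, which eliminated the fallow field entirely.

```python
def planted_acres(total_acres: float, n_fields: int, fallow_fields: int = 1) -> float:
    """Acres planted per year when `fallow_fields` of `n_fields` equal fields rest."""
    if not 0 <= fallow_fields < n_fields:
        raise ValueError("fallow fields must be fewer than total fields")
    return total_acres * (n_fields - fallow_fields) / n_fields

print(planted_acres(600, 2))     # two-field system: 300.0 acres planted
print(planted_acres(600, 3))     # three-field system: 400.0 acres planted
print(planted_acres(600, 4, 0))  # four-field system, no fallow: 600.0 acres planted
```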
In the Green Revolution of the mid-20th century, the traditional practice of crop rotation gave way in some parts of the world to the practice of supplementing the chemical inputs to the soil through topdressing with fertilizers, adding (for example) ammonium nitrate or urea and restoring soil pH with lime. Such practices aimed to increase yields, to prepare soil for specialist crops, and to reduce waste and inefficiency by simplifying planting, harvesting, and irrigation.

A preliminary assessment of crop interrelationships can be found in how each crop:
- contributes to soil organic matter (SOM) content
- provides for pest management
- manages deficient or excess nutrients
- contributes to or controls for soil erosion
- interbreeds with other crops to produce hybrid offspring, and
- impacts surrounding food webs and field ecosystems

Crop choice is often related to the goal the farmer is looking to achieve with the rotation, which could be weed management, increasing available nitrogen in the soil, controlling for erosion, or increasing soil structure and biomass, to name a few. When discussing crop rotations, crops are classified in different ways depending on what quality is being assessed: by family, by nutrient needs/benefits, and/or by profitability (i.e., cash crop versus cover crop). For example, giving adequate attention to plant family is essential to mitigating pests and pathogens. However, many farmers have success managing rotations by planning sequencing and cover crops around desirable cash crops. The following is a simplified classification based on crop quality and purpose.

Row crops

Many crops which are critical for the market, like vegetables, are row crops (that is, grown in tight rows). While often the most profitable for farmers, these crops are more taxing on the soil. Row crops typically have low biomass and shallow roots: this means the plant contributes low residue to the surrounding soil and has limited effects on structure. With much of the soil around the plant exposed to disruption by rainfall and traffic, fields with row crops experience faster breakdown of organic matter by microbes, leaving fewer nutrients for future plants. In short, while these crops may be profitable for the farm, they are nutrient depleting. Crop rotation practices exist to strike a balance between short-term profitability and long-term productivity.

Legumes

A great advantage of crop rotation comes from the interrelationship of nitrogen-fixing crops with nitrogen-demanding crops. Legumes, like alfalfa and clover, collect available nitrogen from the atmosphere and store it in nodules on their root structure. When the plant is harvested, the biomass of uncollected roots breaks down, making the stored nitrogen available to future crops. In addition, legumes have heavy tap roots that burrow deep into the ground, lifting soil for better tilth and absorption of water.

Grasses and cereals

Cereals and grasses are frequent cover crops because of the many advantages they supply to soil quality and structure. The dense and far-reaching root systems give ample structure to surrounding soil and provide significant biomass for soil organic matter. Grasses and cereals are key in weed management, as they compete with undesired plants for soil space and nutrients.

Green manure

Green manure is a crop that is mixed into the soil. Both nitrogen-fixing legumes and nutrient scavengers, like grasses, can be used as green manure.
Green manure
Green manure is a crop that is mixed into the soil. Both nitrogen-fixing legumes and nutrient scavengers, like grasses, can be used as green manure. Green manure of legumes is an excellent source of nitrogen, especially for organic systems; however, legume biomass does not contribute to lasting soil organic matter the way grasses do.
Planning a rotation
There are numerous factors that must be taken into consideration when planning a crop rotation. Planning an effective rotation requires weighing fixed and fluctuating production circumstances: market, farm size, labor supply, climate, soil type, growing practices, etc. Moreover, a crop rotation must consider in what condition one crop will leave the soil for the succeeding crop and how one crop can be seeded with another crop. For example, a nitrogen-fixing crop, like a legume, should always precede a nitrogen-depleting one; similarly, a low residue crop (i.e. a crop with low biomass) should be offset with a high biomass cover crop, like a mixture of grasses and legumes (these two heuristics are sketched in code after this section). There is no limit to the number of crops that can be used in a rotation, or the amount of time a rotation takes to complete. Decisions about rotations are made years prior, seasons prior, or even at the last minute when an opportunity to increase profits or soil quality presents itself. Crop rotation systems may be enriched by the influence of other practices such as the addition of livestock and manure, intercropping or multiple cropping, and such combinations are common in organic cropping systems.
Incorporation of livestock
Introducing livestock makes the most efficient use of critical sod and cover crops; livestock (through manure) are able to distribute the nutrients in these crops throughout the soil rather than removing nutrients from the farm through the sale of hay. Mixed farming, or the practice of crop cultivation with the incorporation of livestock, can help manage crops in a rotation and cycle nutrients. Crop residues provide animal feed, while the animals provide manure for replenishing crop nutrients and draft power. These processes promote internal nutrient cycling and minimize the need for synthetic fertilizers and large-scale machinery. As an additional benefit, cattle, sheep, and goats provide milk and can act as a cash crop in times of economic hardship.
Multiple cropping systems, such as intercropping or companion planting, offer more diversity and complexity within the same season or rotation. An example of companion planting is the three sisters, the inter-planting of corn with pole beans and vining squash or pumpkins. In this system, the beans provide nitrogen; the corn provides support for the beans and a "screen" against squash vine borer; the vining squash provides a weed-suppressive canopy and a discouragement for corn-hungry raccoons. Double-cropping is common where two crops, typically of different species, are grown sequentially in the same growing season, or where one crop (e.g. a vegetable) is grown continuously with a cover crop (e.g. wheat). This is advantageous for small farms, which often cannot afford to leave cover crops to replenish the soil for extended periods of time, as larger farms can. When multiple cropping is implemented on small farms, these systems can maximize the benefits of crop rotation on available land resources.
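A minimal sketch of the two sequencing heuristics referenced under "Planning a rotation" above (the crop attributes are illustrative placeholders, not agronomic data):

CROPS = {
    # crop: (fixes_nitrogen, high_biomass)
    "clover":  (True,  True),
    "corn":    (False, True),
    "lettuce": (False, False),  # shallow-rooted, low-residue row crop
    "rye":     (False, True),   # cereal cover crop
}

def rotation_warnings(sequence):
    """Flag violations of the two heuristics described above."""
    warnings = []
    for prev, cur in zip(sequence, sequence[1:]):
        prev_fixes, prev_biomass = CROPS[prev]
        cur_fixes, cur_biomass = CROPS[cur]
        # Heuristic 1: a nitrogen-depleting crop should follow a nitrogen fixer.
        if not cur_fixes and not prev_fixes:
            warnings.append(f"no nitrogen fixer before '{cur}'")
        # Heuristic 2: a low-residue crop should be offset by high-biomass cover.
        if not prev_biomass and not cur_biomass:
            warnings.append(f"low-residue '{prev}' not followed by high-biomass cover")
    return warnings

print(rotation_warnings(["lettuce", "corn", "clover", "corn"]))
# ["no nitrogen fixer before 'corn'"]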
Crop rotation is a required practice in the United States for farms seeking organic certification. The "Crop Rotation Practice Standard" for the National Organic Program under the U.S. Code of Federal Regulations, section §205.205, states that farmers are required to implement a crop rotation that maintains or builds soil organic matter, works to control pests, manages and conserves nutrients, and protects against erosion. Producers of perennial crops that aren't rotated may utilize other practices, such as cover crops, to maintain soil health. In addition to lowering the need for inputs (by controlling pests and weeds and increasing available nutrients), crop rotation helps organic growers increase the amount of biodiversity on their farms. Biodiversity is also a requirement of organic certification; however, there are no rules in place to regulate or enforce this standard. Increasing the biodiversity of crops has beneficial effects on the surrounding ecosystem and can host a greater diversity of fauna, insects, and beneficial microorganisms in the soil, as found by McDaniel et al. (2014) and Lori et al. (2017). Some studies point to increased nutrient availability from crop rotation under organic systems compared to conventional practices, as organic practices are less likely to inhibit beneficial microbes in soil organic matter. Agronomists describe the benefits to yield in rotated crops as "The Rotation Effect". There are many benefits of rotation systems, and the yield increases are broadly attributable to the alleviation of the negative factors of monoculture cropping systems. Specifically, improved nutrition; pest, pathogen, and weed stress reduction; and improved soil structure have been found in some cases to be correlated with beneficial rotation effects. Other benefits of rotation cropping systems include production cost advantages. Overall financial risks are more widely distributed over more diverse production of crops and/or livestock. Less reliance is placed on purchased inputs, and over time crops can maintain production goals with fewer inputs. This, in tandem with greater short- and long-term yields, makes rotation a powerful tool for improving agricultural systems.
Soil organic matter
The use of different species in rotation allows for increased soil organic matter (SOM), greater soil structure, and improvement of the chemical and biological soil environment for crops. With more SOM, water infiltration and retention improve, providing increased drought tolerance and decreased erosion. Soil organic matter is a mix of decaying material from biomass and active microorganisms. Crop rotation, by nature, increases exposure to biomass from sod, green manure, and various other plant debris. The reduced need for intensive tillage under crop rotation allows biomass aggregation to lead to greater nutrient retention and utilization, decreasing the need for added nutrients. With tillage, disruption and oxidation of soil create a less conducive environment for diversity and proliferation of microorganisms in the soil. These microorganisms are what make nutrients available to plants. So, where "active" soil organic matter is a key to productive soil, soil with low microbial activity provides significantly fewer nutrients to plants; this is true even though the quantity of biomass left in the soil may be the same. Soil microorganisms also decrease pathogen and pest activity through competition. In addition, plants produce root exudates and other chemicals which manipulate their soil environment as well as their weed environment. Thus rotation not only allows increased yields from nutrient availability but also alleviates allelopathy and competitive weed environments.
Studies have shown that crop rotations greatly increase soil organic carbon (SOC) content, the main constituent of soil organic matter. Carbon, along with hydrogen and oxygen, is a macronutrient for plants. Highly diverse rotations spanning long periods of time have been shown to be even more effective in increasing SOC, while soil disturbances (e.g. from tillage) are responsible for exponential decline in SOC levels. In Brazil, conversion to no-till methods combined with intensive crop rotations has been shown to produce an SOC sequestration rate of 0.41 tonnes per hectare per year. Rotating crops adds nutrients to the soil. Legumes, plants of the family Fabaceae, for instance, have nodules on their roots which contain nitrogen-fixing bacteria called rhizobia. During a process called nodulation, the rhizobia bacteria use nutrients and water provided by the plant to convert atmospheric nitrogen into ammonia, which is then converted into an organic compound that the plant can use as its nitrogen source. It therefore makes good sense agriculturally to alternate them with cereals (family Poaceae) and other plants that require nitrates. How much nitrogen is made available to the plants depends on factors such as the kind of legume, the effectiveness of the rhizobia bacteria, soil conditions, and the availability of elements necessary for plant food.
Pathogen and pest control
Crop rotation is also used to control pests and diseases that can become established in the soil over time. The changing of crops in a sequence decreases the population level of pests by (1) interrupting pest life cycles and (2) interrupting pest habitat. Plants within the same taxonomic family tend to have similar pests and pathogens. By regularly changing crops and keeping the soil occupied by cover crops instead of lying fallow, pest cycles can be broken or limited, especially cycles that benefit from overwintering in residue. For example, root-knot nematode is a serious problem for some plants in warm climates and sandy soils, where it slowly builds up to high levels in the soil and can severely damage plant productivity by cutting off circulation from the plant roots. Growing a crop that is not a host for root-knot nematode for one season greatly reduces the level of the nematode in the soil, thus making it possible to grow a susceptible crop the following season without needing soil fumigation. Integrating certain crops, especially cover crops, into crop rotations is of particular value to weed management. These crops crowd out weeds through competition. In addition, the sod and compost from cover crops and green manure slow the growth of those weeds that still manage to make it through the soil, giving the crops a further competitive advantage. By slowing the growth and proliferation of weeds while cover crops are cultivated, farmers greatly reduce the presence of weeds for future crops, including shallow-rooted and row crops, which are less resistant to weeds. Cover crops are, therefore, considered conservation crops because they protect otherwise fallow land from becoming overrun with weeds. This system has advantages over other common practices for weed management, such as tillage. Tillage is meant to inhibit the growth of weeds by overturning the soil; however, this has the countering effect of exposing weed seeds that may have been buried and burying valuable crop seeds. Under crop rotation, the number of viable seeds in the soil is reduced through the reduction of the weed population.
In addition to their negative impact on crop quality and yield, weeds can slow down the harvesting process. Weeds make farmers less efficient when harvesting, because weeds like bindweed and knotgrass can become tangled in the equipment, resulting in a stop-and-go type of harvest.
Preventing soil erosion
Crop rotation can significantly reduce the amount of soil lost to erosion by water. In areas that are highly susceptible to erosion, farm management practices such as zero and reduced tillage can be supplemented with specific crop rotation methods to reduce raindrop impact, sediment detachment, sediment transport, surface runoff, and soil loss. Protection against soil loss is maximized with rotation methods that leave the greatest mass of crop stubble (plant residue left after harvest) on top of the soil. Stubble cover in contact with the soil minimizes erosion from water by reducing overland flow velocity and stream power, and thus the ability of the water to detach and transport sediment. Stubble cover also prevents the disruption and detachment of soil aggregates that cause macropores to block, infiltration to decline, and runoff to increase. This significantly improves the resilience of soils when subjected to periods of erosion and stress. When a forage crop breaks down, binding products are formed that act like an adhesive on the soil, which makes particles stick together and form aggregates. The formation of soil aggregates is important for erosion control, as aggregates are better able to resist raindrop impact and water erosion. Soil aggregates also reduce wind erosion, because they are larger particles and are more resistant to abrasion through tillage practices. The effect of crop rotation on erosion control varies by climate. In regions under relatively consistent climate conditions, where annual rainfall and temperature levels can be anticipated, rigid crop rotations can produce sufficient plant growth and soil cover. In regions where climate conditions are less predictable, and unexpected periods of rain and drought may occur, a more flexible approach for soil cover by crop rotation is necessary. An opportunity cropping system promotes adequate soil cover under these erratic climate conditions. In an opportunity cropping system, crops are grown when soil water is adequate and there is a reliable sowing window. This form of cropping system is likely to produce better soil cover than a rigid crop rotation because crops are only sown under optimal conditions, whereas rigid systems are not necessarily sown in the best conditions available. Crop rotations also affect the timing and length of fallow periods. This is very important because, depending on a particular region's climate, a field can be most vulnerable to erosion while it lies fallow. Efficient fallow management is an essential part of reducing erosion in a crop rotation system. Zero tillage is a fundamental management practice that promotes crop stubble retention under longer unplanned fallows when crops cannot be planted. Such management practices that succeed in retaining suitable soil cover in areas under fallow will ultimately reduce soil loss. In a recent decade-long study, it was found that a common winter cover crop planted after potato harvest, such as fall rye, can reduce soil run-off by as much as 43%, and the soil saved from run-off is typically the most nutrient-rich.
Increased crop biodiversity also benefits the surrounding ecosystem: as noted above, more diverse rotations can host a greater diversity of fauna, insects, and beneficial soil microorganisms, and under organic systems rotation is less likely to inhibit beneficial microbes such as arbuscular mycorrhizae, which increase nutrient uptake in plants. Increasing biodiversity also increases the resilience of agro-ecological systems. Crop rotation contributes to increased yields through improved soil nutrition. By requiring planting and harvesting of different crops at different times, more land can be farmed with the same amount of machinery and labour. While crop rotation requires a great deal of planning, crop choice must respond to a number of fixed conditions (soil type, topography, climate, and irrigation) in addition to conditions that may change dramatically from one year to the next (weather, market, labor supply). For this reason, it is unwise to plan crops too many years in advance. Improper implementation of a crop rotation plan may lead to imbalances in the soil nutrient composition or a buildup of pathogens affecting a critical crop. The consequences of faulty rotation may take years to become apparent, even to experienced soil scientists, and can take just as long to correct. Many challenges exist within the practices associated with crop rotation. For example, green manure from legumes can lead to an invasion of snails or slugs, and the decay of green manure can occasionally suppress the growth of other crops.
- "jan 1, 6000 BC - Crop Rotation (Timeline)". time.graphics. Archived from the original on September 23, 2019. Retrieved 2019-09-23.
- "What Is Crop Rotation?". WorldAtlas. 25 April 2017. Retrieved 2019-01-25.
- Needham 1984, p. 150.
- Organic Production: Using NRCS Practice Standards to Support Organic Growers (Report). Natural Resources Conservation Service. July 2009.
- Dufour, Rex (July 2015). Tipsheet: Crop Rotation in Organic Farming Systems (Report). National Center for Appropriate Technology. Retrieved May 4, 2016.
- Baldwin, Keith R. (June 2006). Crop Rotations on Organic Farms (PDF) (Report). Center for Environmental Farming Systems. Archived from the original (PDF) on May 13, 2015. Retrieved May 4, 2016.
- Johnson, Sue Ellen; Charles L. Mohler (2009). Crop Rotation on Organic Farms: A Planning Manual, NRAES 177. Ithaca, NY: National Resource, Agriculture, and Engineering Services (NRAES). ISBN 978-1-933395-21-0.
- Coleman, Pamela (November 2012). Guide for Organic Crop Producers (PDF) (Report). National Organic Program. Archived (PDF) from the original on 2015-10-04. Retrieved May 4, 2016.
- Lamb, John; Craig Sheaffer & Kristine Moncada (2010). "Chapter 4 Soil Fertility". Risk Management Guide for Organic Producers (Report). University of Minnesota.
- "Green Manures". Royal Horticultural Society. Retrieved May 4, 2016.
- L. H. Bailey, ed. (1907). "Chapter 5, "Crop Management,"". Cyclopedia of American Agriculture. pp. 85–88.
- Gegner, Lance; George Kuepper (August 2004). "Organic Crop Production Overview". National Center for Appropriate Technology. Retrieved May 4, 2016.
- Powell, J.M.; William, T.O. (1993). "An overview of mixed farming systems in sub-Saharan Africa".
Livestock and Sustainable Nutrient Cycling in Mixed Farming Systems of Sub-Saharan Africa: Proceedings of an International Conference, International Livestock Centre for Africa (ILCA). 2: 21–36. - "§205.205 Crop rotation practice standard". CODE OF FEDERAL REGULATIONS. Retrieved May 4, 2016. - Saleem, Muhammad; Hu, Jie; Jousset, Alexandre (2019-11-02). "More Than the Sum of Its Parts: Microbiome Biodiversity as a Driver of Plant Growth and Soil Health". Annual Review of Ecology, Evolution, and Systematics. Annual Reviews. 50 (1): 145–168. doi:10.1146/annurev-ecolsys-110617-062605. ISSN 1543-592X. S2CID 199632146. - Mäder, Paul; et al. (2000). "Arbuscular mycorrhizae in a long-term field trial comparing low-input (organic, biological) and high-input (conventional) farming systems in a crop rotation". Biology and Fertility of Soils. 31 (2): 150–156. doi:10.1007/s003740050638. S2CID 6152990. - Bowles, Timothy M.; Mooshammer, Maria; Socolar, Yvonne; Calderón, Francisco; Cavigelli, Michel A.; Culman, Steve W.; Deen, William; Drury, Craig F.; Garcia y Garcia, Axel; Gaudin, Amélie C.M.; Harkcom, W. Scott; Lehman, R. Michael; Osborne, Shannon L.; Robertson, G. Philip; Salerno, Jonathan; Schmer, Marty R.; Strock, Jeffrey; Grandy, A. Stuart (2020-03-20). "Long-Term Evidence Shows that Crop-Rotation Diversification Increases Agricultural Resilience to Adverse Growing Conditions in North America". One Earth. 2 (3): 284–293. Bibcode:2020OEart...2..284B. doi:10.1016/j.oneear.2020.02.007. ISSN 2590-3322. S2CID 212745944. Retrieved 2021-12-09. - Triberti, Loretta; Anna Nastri & Guido Baldoni (2016). "Long-term effects of crop rotation, manure fertilization on carbon sequestration and soil fertility". European Journal of Agronomy. 74: 47–55. doi:10.1016/j.eja.2015.11.024. - Victoria, Reynaldo (2012). "The Benefits of Soil Carbon". Risk Management Guide for Organic Producers (Report). United Nations Environment Programme. - Loynachan, Tom (December 1, 2016). "Nitrogen Fixation by Forage Legumes" (PDF). Iowa State University. Department of Agrology. Archived from the original (PDF) on May 3, 2013. Retrieved December 1, 2016. - Adjei, M. B.; et al. (December 1, 2016). "Nitrogen Fixation and Inoculation of Forage Legumes" (PDF). Forage Beef. University of Florida. Archived from the original (PDF) on December 2, 2016. Retrieved December 1, 2016. - Moncada, Kristine; Craig Sheaffer (2010). "Chapter 2 Rotation". Risk Management Guide for Organic Producers (Report). University of Minnesota. - Davies, Ken (March 2007). "Weed Control in Potatoes" (PDF). British Potato Council. Archived (PDF) from the original on 2016-10-19. Retrieved December 1, 2016. - Unger PW, McCalla TM (1980). "Conservation Tillage Systems". Advances in Agronomy. 33: 2–53. doi:10.1016/s0065-2113(08)60163-7. ISBN 9780120007332. - Rose CW, Freebairn DM. "A mathematical model of soil erosion and deposition processes with application to field data". - Loch RJ, Foley JL (1994). "Measurement of Aggregate Breakdown under rain: comparison with tests of water stability and relationships with field measurements of infiltration". Australian Journal of Soil Research. 32 (4): 701–720. doi:10.1071/sr9940701. - "Forages in Rotation" (PDF). Saskatchewan Soil Conservation Association. 2016. Archived (PDF) from the original on 2016-12-02. Retrieved December 1, 2016. - "Aggregate Stability". Natural Resources Conservation Centre. 2011. Retrieved December 1, 2016. - Carroll C, Halpin M, Burger P, Bell K, Sallaway MM, Yule DF (1997). 
"The effect of crop type, crop rotation, and tillage practice on runoff and soil loss on a Vertisol in central Queensland". Australian Journal of Soil Research. 35 (4): 925–939. doi:10.1071/s96017. - Littleboy M, Silburn DM, Freebairn DM, Woodruff DR, Hammer GL (1989). "PERFECT. A computer simulation model of Productive Erosion Runoff Functions to Evaluate Conservation Techniques". Queensland Department of Primary Industries. Bulletin QB89005. - Huang M, Shao M, Zhang L, Li Y (2003). "Water use efficiency and sustainability of different long-term crop rotation systems in the Loess Plateau of China". Soil & Tillage Research. 72: 95–104. doi:10.1016/s0167-1987(03)00065-5. - Walker, Andy. "Cover crops have major role to play in soil health". peicanada.com. Retrieved 2016-12-01. - "Crop Rotation – A Vital Component of Organic Farming". 2016-06-15. - Yamoah, Charles F.; Francis, Charles A.; Varvel, Gary E.; Waltman, William J. (April 1998). "Weather and Management Impact on Crop Yield Variability in Rotations". Journal of Production Agriculture. 11 (2): 219–225. doi:10.2134/jpa1998.0219. S2CID 54785967. Retrieved 9 November 2022. - Anderson, Randy L. (1 January 2005). "Are Some Crops Synergistic to Following Crops?" (PDF). Agronomy Journal. 97 (1): 7–10. doi:10.2134/agronj2005.0007a. S2CID 215776836. - Bullock, D. G. (1992). "Crop rotation". Critical Reviews in Plant Sciences. 11 (4): 309–326. doi:10.1080/07352689209382349. - Francis, Charles A. (2003). "Advances in the Design of Resource-Efficient Cropping Systems". Journal of Crop Production. 8 (1–2): 15–32. doi:10.1300/j144v08n01_02. - Needham, Joseph (1984), Science and Civilization in China 6-2 - Porter, Paul M.; Lauer, Joseph G.; Lueschen, William E.; Ford, J. Harlan; Hoverstad, Tom R.; Oplinger, Edward S.; Crookston, R. Kent (1997). "Environment Affects the Corn and Soybean Rotation Effect". Agronomy Journal. 89 (3): 442–448. doi:10.2134/agronj1997.00021962008900030012x. - White, L.T. (1962). Medieval Technology and Social Change. Oxford University Press.
Functions in Python are named blocks of code that are designed to do a specific task. Functions make your code smaller and more flexible, and thus your code can be reused multiple times. There are predefined functions in Python, like print(), but you can also write your own.
How to Define a Function
A function can be defined in a simple way (see the reconstructed listing after this section):
- We give our function a name using the keyword def. This is called the function definition.
- Then, the only task of the function is described, which is to print the string "Hi! How are you?"
- Finally, we call the function by writing the function's name followed by closed small brackets.
A docstring explains what the function does. It is enclosed in triple quotes, and Python searches for it in a function while making documentation for the function. You can read the docstring by using the __doc__ attribute. Please note that in '__doc__' there are two underscores on either side of the word 'doc'.
We can change the function start_a_conversation() a little bit, so that it not only asks the user "Hi! How are you?" but also uses their name to ask the question. This can be done by writing some information in the brackets that follow the function's name; this information is called a parameter. The value 'Roger' passed in the call is called an argument. Note that Python does not check the data type of the argument passed to the function; as long as the task inside the function can be performed, no error is raised. In this case, the argument should be of data type 'str'.
Passing a List as an Argument
Let's look at a better example of a Python function where the argument is a list. We are going to create a function that cubes and adds all of the items in a list using a for loop.
Default Parameter Values
While writing parameters for a function, you can assign a default value to one or more parameters. However, the default values can be overridden when passing arguments:
- With one argument, the function uses the default value 'Potter' as the last name of the person.
- With two arguments, the function uses the most recent argument as the last name, that is, 'Trask'.
Importance of Order while Passing Arguments
If a function takes more than one parameter, then while passing the arguments you should be aware of the order in which you pass them. As you can see in the output, due to the order of the arguments the username got mixed up with the name of the book. How do you avoid committing this mistake? In Python, while passing arguments, you can specify which argument belongs to which parameter. These kinds of name-value pairs are called keyword arguments. If a function expects 2 arguments and only 1 argument is passed, the interpreter will raise a TypeError.
Arbitrary Arguments
If the number of arguments to be passed is unknown to you, then adding an asterisk * before the parameter name solves the problem described above. By doing this, Python collects the arguments into a tuple before the function uses them. Because the arguments arrive as a tuple, we can also perform inside the function all the operations that a tuple supports.
Arbitrary Keyword Arguments
If the number of keyword arguments to be passed is unknown to you, then add two asterisks ** before the parameter name. By doing this, Python will collect the arguments into a dictionary, and items can be accessed according to need. Sometimes people either don't have a middle name or don't want to provide it, so with the help of arbitrary keyword arguments we can make it optional. These are also known as **kwargs.
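The original code listings did not survive, so here is a minimal Python reconstruction of the examples the text walks through (the name start_a_conversation comes from the text; the other names and values are illustrative):

def start_a_conversation(name):
    """Greet a person by name."""                    # the docstring
    print(f"Hi {name}! How are you?")

start_a_conversation("Roger")                        # 'Roger' is the argument
print(start_a_conversation.__doc__)                  # reading the docstring

def cube_sum(numbers):
    """Cube every item in a list and add them up."""
    total = 0
    for n in numbers:
        total += n ** 3
    return total

print(cube_sum([1, 2, 3]))                           # 36

def full_name(first, last="Potter"):                 # default parameter value
    print(first, last)

full_name("Harry")                                   # Harry Potter (default used)
full_name("Harry", "Trask")                          # Harry Trask (default overridden)

def borrow(username, book):
    print(f"{username} borrowed {book}")

borrow("Moby-Dick", "alice")                         # order mixed up!
borrow(username="alice", book="Moby-Dick")           # keyword arguments fix it

def shopping(*items):                                # arbitrary arguments
    print(items)                                     # items arrives as a tuple

shopping("tea", "toast", "jam")

def register(first, last, **extra):                  # arbitrary keyword arguments
    middle = extra.get("middle", "")                 # middle name is optional
    print(" ".join(part for part in (first, middle, last) if part))

register("Harry", "Potter")
register("Harry", "Potter", middle="James")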
We do not have to use print all of the time to display a simple output. Instead, a function can return a set of values to the caller. The values returned by a function are called return values.
Returning a Dictionary
As the sketch below shows, a dictionary can be returned by a function. Likewise, any kind of value, like a list, tuple, etc., can be returned using the return keyword.
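A minimal reconstruction of the missing listing (the names are illustrative):

def build_user(name, age):
    """Return the new user as a dictionary instead of printing it."""
    return {"name": name, "age": age}

user = build_user("Roger", 30)
print(user)             # {'name': 'Roger', 'age': 30}
print(user["name"])     # Roger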
Pythagoras of Samos (c. 569–c. 500 BC)
Fig 1. Marble bust of Pythagoras.
Fig 2. A diagram of the famous theorem of Pythagoras. The number of squares in A plus B equals the number in C.
Fig 3. Pythagorean square puzzle.
Pythagoras was a Greek philosopher, mathematician, and astronomer who founded a secretive philosophical and religious school – the Pythagorean school – in Croton, southern Italy. Pythagoras left no writings, and virtually nothing is known about him as an individual, so it is almost impossible to disentangle the beliefs and discoveries of the "Pythagoreans" from those of their leader. To the Pythagoreans "everything is number", and every number was supposed to be a quantity that could be expressed as the ratio of two integers. Such a number we now call a rational number. The Pythagoreans used music as an example of the perfection and harmony of numbers that can be expressed as ratios. They showed that pitch could be represented by simple ratios derived from the lengths of equally taut strings that could be plucked. Perhaps the most famous of the Pythagoreans' mathematical results is Pythagoras's theorem. Then the sky fell in on the Pythagoreans' world-view. Using their very own theorems they showed that not all numbers are rational. Their discovery that the square root of 2 (the length of the hypotenuse of a right triangle with sides one and one) cannot be expressed as the ratio of two whole numbers was meant to be kept a closely guarded secret, but was later revealed by one of the cult's members. Pythagoras believed that Earth was a sphere at the center of the universe, and correctly realized that the morning star and the evening star were the same object (Venus). The Pythagorean school also came to teach that the Earth was a sphere revolving around a central "fire". Pythagoras was born on the island of Samos, in the Aegean Sea, near Greece. Unfortunately, he left no writings behind him, for in those days parchment had not yet been invented, and wax tablets were too small and awkward for anything but letters or other brief writings. Most of what we know about him, therefore, comes from later writers, and, as with all great figures, fact is often mixed with legend in the story of his life and thought. In the 6th century BC the Greeks were a prosperous and highly civilized people, and the island of Samos was one of their important trading centers, with a growing culture. The young Pythagoras, son of a well-to-do citizen, would thus have had the best education possible. Even at an early age he showed great intelligence, and it seems that by the age of sixteen his teachers could no longer answer his questions, so he was sent to study under Thales of Miletus, the first Greek to make a scientific study of numbers and one of the Seven Wise Men of Greece. At this time Pythagoras may have formulated his best-known theorem, which he then set out to demonstrate. He was, in fact, one of the founders of the system of geometrical proof. Before then, mathematicians did not think it important whether their conclusions corresponded with reality or not.
Years of travel
Pythagoras was interested not only in geometry and numbers but in the other sciences then known, and above all in religion. As there were then no books, the only way to study further was to travel and meet other scholars. During the next 30 years we hear of him in Persia, Babylon, Arabia, and as far away as India, where Buddha was founding his new religion.
Callimachus, the librarian at Alexandria in the third century BC, records that Pythagoras spent many years in Egypt. Here he may have learned more about music and worked out the connection between arithmetic and music, one of his most important discoveries; for example, the octave (doh-doh) and the fifth (doh-soh) can be produced by stopping the string of an instrument at half and two-thirds of its length. The terms 'harmonic mean' and 'harmonic progression' came from this discovery. By the time he reached his fifties, Pythagoras had learned much. Now he wanted to establish a school where he could teach others.
School at Croton
Pythagoras established his school in about 529 BC at Croton, a prosperous Greek port in southern Italy, and soon had a following of 300 young men. It was more like a religious sect than a school, members recognizing each other by a secret sign. They owned all things in common and swore to help one another. The subjects of study were the four degrees of wisdom (arithmetic, geometry, music, and astronomy), the duties of a man toward others, and religion. Pupils were expected to practice the virtues of valor, piety, obedience, and loyalty – in fact, all the virtues found in the Greek ideal of the good and brave man. One of Pythagoras' main beliefs, referred to by Shakespeare in Twelfth Night and The Merchant of Venice, was the transmigration of souls; that is, the belief that when someone dies their soul passes into another body, human or animal. According to Pythagoras, only after a pure life could the soul be freed from the prison or 'tomb' of the flesh and win life in the heavens. A pure life meant an austere life, but many of the rules laid down by Pythagoras were like primitive taboos. For instance, pupils could not eat beans, break bread, stir the fire with iron, or pick up what had fallen! Music was thought most important in purifying the soul. Thus the pupils studied musical theory and astrology. The whole heaven, he taught, was formed of a 'musical scale or number'. Pythagoras was, in fact, one of the first people to hold that the Earth and the universe are round. Scientific study, religion, and a moral code were thus combined, and the teaching of Pythagoras himself was a strange mixture of mysticism and reason. His disciples came to look on him as semi-divine; even the mathematical and astronomical discoveries made after his death were believed to be really his. Unfortunately, the Pythagoreans became involved in politics. Wherever they gained power they showed contempt for the ignorant and unphilosophic masses, who could not lead the highest life – that of contemplation. This led to their downfall. The people rose against them, and Pythagoras went into exile, where he eventually died at the age of eighty. His thought, however, continued to have a real influence; Plato in particular was inspired by it. Two hundred years after his death the Senate erected a statue to Pythagoras in Rome, honoring him as the "wisest and bravest of Greeks".
Pythagoras's lute
Pythagoras's lute is the kite-shaped figure that forms the enclosing shape for a progression of diminishing pentagons and pentagrams, linking the vertices together. The resulting diagram is replete with lines in the golden ratio.
Pythagorean square puzzle
The Pythagorean square puzzle is to combine the two squares shown in the accompanying diagram so as to make a single large square (see Figure 3).
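Two of the mathematical claims above can be checked in a few lines of Python (an illustrative sketch, not part of the original article). The first block confirms numerically that no fraction with a small denominator squares to exactly 2; the second reproduces the string-length ratios for the octave and the fifth (the 220 Hz fundamental is an arbitrary choice):

from math import isqrt

# No p/q with q up to 1000 satisfies (p/q)**2 == 2, i.e. p**2 == 2*q**2.
# (The classical parity proof: if p**2 = 2*q**2 in lowest terms, p must be
# even, which forces q to be even too -- contradicting "lowest terms".)
hits = [(p, q) for q in range(1, 1001)
        for p in (isqrt(2 * q * q),) if p * p == 2 * q * q]
print(hits)  # [] -- sqrt(2) is not a ratio of small whole numbers

# Stopping a string at a fraction of its length raises the pitch by the
# reciprocal of that fraction.
fundamental = 220.0  # Hz, illustrative open-string pitch
for name, fraction in [("octave (doh-doh)", 1 / 2), ("fifth (doh-soh)", 2 / 3)]:
    print(f"{name}: stop at {fraction:.3f} of the length -> "
          f"{fundamental / fraction:.0f} Hz ({1 / fraction:.2f}x the pitch)")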
Data frames in the R language are generic data objects used to store tabular data. A data frame can also be interpreted as a matrix in which each column may be of a different data type. A data frame is made up of three principal components: the data, the rows, and the columns. Extracting data from a data frame means accessing its rows or columns, and one can extract a specific column using its column name.

# R program to extract
# data from the data frame

# creating a data frame
friend.data <- data.frame(
  friend_id = c(1:5),
  friend_name = c("Sachin", "Sourav",
                  "Dravid", "Sehwag",
                  "Dhoni"),
  stringsAsFactors = FALSE
)

# Extracting friend_name column
result <- data.frame(friend.data$friend_name)
print(result)

Output:
  friend.data.friend_name
1                  Sachin
2                  Sourav
3                  Dravid
4                  Sehwag
5                   Dhoni
Supermassive black holes could be forced to collide by the gentle drag from interstellar gas, new computer simulations suggest. That's good news for an experiment being designed to look for gravitational waves, as they should be emitted when black holes collide. Black holes with masses of millions or even a billion Suns are commonly found in the centres of galaxies. Galaxies often collide and merge, and it is rare to find a galaxy with more than one monster black hole in its heart, so scientists suspect that the black holes in colliding galaxies also merge. It's not certain, however. Something has to slow the two giant black holes down, or they will just keep distantly orbiting the centre of the galaxy. Computer simulations have suggested that a dense gas could provide the necessary drag. It wouldn't work like ordinary wind resistance – after all, a black hole will swallow any gas that reaches its event horizon. Instead, a black hole leaves a trail of denser gas behind it, pulled inwards by the hole's gravity, and the gravity from that denser gas pulls back on the hole. Gradually slowing down, the two black holes should spiral in towards the centre of the galaxy and eventually merge. Doubts remained, because no one had shown that a galaxy merger would leave the holes orbiting within sufficiently dense gas, but new computer simulations have bolstered the idea. Lucio Mayer of the Swiss Federal Institute of Technology in Zurich led the team that carried out the simulations. The team simulated the merger of two galaxies similar to our own Milky Way, taking into account the behaviour of stars, gas, and dark matter, as well as the two supermassive black holes themselves, which start out about 300,000 light years from each other. They find that a lot of gas from the two galaxies – enough to make 3 billion Suns – ends up in the central region of the merged galaxy along with the two black holes. After that, it takes less than a million years for the gas drag to move the black holes to within just a few light years of one another. The simulation did not have enough resolution to see what would happen next, but black hole expert Chris Reynolds of the University of Maryland in College Park, US, who was not a member of the team, says it would be reasonable to assume that the black holes will continue getting closer. As long as the black holes get just a little closer than their final separation in the simulation, they will start giving off gravitational waves – ripples in space-time that carry energy away – which would then guarantee a final merger, he says. The strong gravitational waves from such collisions might be detectable by a proposed NASA mission. A trio of spacecraft called the Laser Interferometer Space Antenna, or LISA, is designed to be sensitive to low-frequency gravitational waves, with periods of between 100 and 1000 seconds, in the range expected from colliding supermassive black holes. It could be launched as soon as 2015.
Journal reference: Science Express (DOI: 10.1126/science.1141858)
Tree cutting is a major practice in the forestry industry. It involves cutting down trees to make way for new timber production, or to provide land for other uses such as building housing developments or clearing roadways. Tree cutting can also be done for aesthetic reasons, such as improving the view from a home or creating an outdoor space with fewer trees. However, this activity can have serious consequences for the environment, and it is important to understand these impacts before engaging in tree-cutting activities. The most obvious negative impact of tree cutting is the loss of habitat for animals and plants that live in forests. Trees are essential habitats, providing food, shelter, and protection from predators. When they are cut down, animals have nowhere else to go, which can lead to a decrease in biodiversity and an increase in endangered species. Additionally, the removal of trees can lead to soil erosion, as fewer trees mean fewer roots to hold onto the soil. This can lead to more sediment being washed into rivers and streams, causing pollution and damage to aquatic life. Tree cutting also affects climate change. Trees act as natural carbon sinks, sequestering CO2 from the atmosphere. When they are cut down, this stored carbon is released back into the atmosphere, adding to the greenhouse gas concentrations which contribute to global warming. Additionally, forests play a key role in regulating regional climates by trapping moisture and stabilizing temperatures year-round. Without them, it can become much hotter or cooler depending on local weather conditions. It is clear that tree cutting can have severe consequences for the environment and should be approached with caution. If a tree needs to be cut, it is important to consider alternatives such as selective pruning or replanting other species of trees in its place. It is also important to make sure that any tree-cutting activities are conducted according to local laws and regulations. By doing so, we can help ensure that our forests remain healthy and intact for future generations.
What Are The 3 Most Common Tree-Cutting Methods?
Numerous criteria, such as the tree's size and position, the available tools and equipment, and the skill level of the person doing the cutting, determine which of several methods should be used to bring down a tree. Three of the most typical approaches are as follows:
Felling With A Chainsaw: Trees that are small enough and easy enough to reach are often felled with a chainsaw, the most popular form of tree removal. To bring down a tree with a chainsaw, the arborist first cuts a notch in the trunk on the side facing the intended direction of fall. The tree trimmer then makes the final felling cut with the chainsaw on the opposite side of the tree.
Felling With A Felling Wedge: A felling wedge is a compact, triangular tool used to tip the trunk of a tree in a controlled manner during the felling process. As with a chainsaw, felling with a wedge requires the tree cutter to make a sequence of cuts in the trunk of the tree. The tree is then felled not by making a final cut with the chainsaw but by inserting the felling wedge into the cut and driving it deeper into the trunk with a sledgehammer.
Felling With A Rope And Pulley System: Trees that are too big or too hard to reach with a chainsaw or felling wedge alone are candidates for the rope and pulley system.
When felling a tree with the rope and pulley method, the rope is fastened to the tree's crown and the pulley is secured to another tree or other solid object. The tree cutter then uses the rope and pulley system to lower the tree to the ground in a measured and controlled manner. Please note that felling trees is a potentially hazardous task that should be left to the experts.
What Equipment Is Needed To Cut Down A Large Tree?
Felling a large tree takes several different pieces of equipment, including:
A Chainsaw: A portable mechanical saw that is driven by either an electric motor or a small internal combustion engine. Chainsaws are commonly used for felling trees and chopping logs because of their effectiveness at cutting through wood.
Personal Protective Equipment (PPE): When felling a tree, it is crucial to use safety equipment such as a hard hat, gloves, earplugs, and eye protection.
A Felling Wedge: A small, triangular tool used to tip the trunk of a tree in a controlled manner.
A Rope And Pulley System: When a tree is too big or too inconveniently located to fell with a chainsaw or felling wedge, a rope and pulley system is employed to lower it to the ground in a controlled manner.
A First Aid Kit: If you plan on felling a tree, you should always have a first aid kit available in case someone gets hurt.
Because of the inherent risks involved, tree cutting should be left to the experts who have extensive expertise with the necessary equipment and know how to operate it safely and effectively.
When Should You Cut Down A Large Tree?
When deciding whether or not to cut down a large tree, it's important to consider the overall health of the tree. If the tree is diseased or infested with pests, then cutting it down may be necessary to prevent further damage to other surrounding vegetation. Additionally, if the tree poses a safety risk due to its size and proximity to structures, then cutting it down may be necessary for safety reasons. In any case, before deciding to cut down a large tree (or any tree at all), you should consult with an expert arborist to assess the situation and provide guidance on what steps can be taken. If a large tree needs to be removed, proper care should also be taken during removal so that its falling trunk and branches do not cause damage. By following the advice of an expert arborist, you can ensure that any trees you cut down are removed safely and responsibly.
How Do You Cut Down A Big Tree Next To Your House?
If you have a large tree next to your house that needs removal, the first step is to call an experienced arborist or tree service. An arborist can assess the tree and determine whether it can safely be removed without damaging nearby property. If the tree must be cut down, the arborist will provide advice on how to proceed in order to minimize damage and ensure safety. Before attempting any work with a chainsaw or other cutting equipment, familiarize yourself with safety precautions. Wear all recommended protective gear such as goggles, hard hats, gloves, hearing protection, long pants, and closed-toe shoes. Be careful not to accidentally damage power lines while working near them! Make sure there is plenty of room around the tree so that you can move safely away from falling limbs. Start by cutting a notch in the side of the trunk facing the direction in which the tree should fall. This notch should be slightly above waist height and extend about one-third of the way through the diameter of the trunk.
Next, make a second, angled cut above the first on the same side, slanting down until it meets the first cut so that a wedge-shaped notch of wood can be removed. Make sure these cuts are not too deep into the trunk, as this could cause it to split or break before it falls in the desired direction. Finally, make a third, horizontal back cut on the opposite side of the trunk, slightly above the level of the notch, sawing gradually inward until enough wood has been removed for the tree to fall naturally in the direction of the notch. Once the tree starts to fall, move away from the area quickly and remain at a safe distance until it has completely fallen. After the tree is on the ground, use caution when cutting the fallen trunk into logs for firewood or other uses. Remember that safety should always be your top priority when dealing with large trees near houses or other structures. With proper planning and precautionary measures, you can successfully and safely remove a large tree next to your house. It is important to understand all of the risks involved before attempting any work, and to consult an experienced arborist or professional tree-cutting service if necessary. Taking these steps will help ensure that everyone remains safe while getting rid of this big tree!
Designing a basic lighting scheme requires the consideration of many factors, not just the achievement of a desired lighting level. Basic objectives must first be established, such as:
- What sort of tasks will be performed in the area?
- What 'mood' needs to be created?
- What type of lighting will create a comfortable environment?
There are also standards and legislation that need to be complied with. For example:
- How energy efficient must the lighting be?
- How will Building Regulations affect the design?
- Is emergency lighting required?
When all of these objectives and requirements have been established, they can be expressed as a series of lighting criteria in order to facilitate a quality lighting design.
- Light is the medium that makes visual perception possible.
- Light is radiant energy, usually referring to electromagnetic radiation that is visible to the human eye and is responsible for the sense of sight.
- Insufficient light or darkness gives rise to a sense of insecurity.
- Artificial lighting during the hours of darkness makes us feel safe.
- So light not only enables us to see; it also affects our mood and sense of wellbeing.
The main source of natural light is the sun. It provides us with different tones of light during the day, such as:
- Morning: warm tones and low light intensity
- Midday: bright white light, high intensity
- Late evening: warm tones and low light intensity
When we construct buildings, we try to make use of this daylight to perform activities inside a sheltered space. The admission of daylight is quantified as follows.
1) Daylight Factor
- The ratio, in percent, of work plane illuminance (at a given point) to the outdoor illuminance on a horizontal plane, evaluated under cloudy sky conditions only (no direct solar beam).
DF = (Ei/Eo) x 100%
Ei = illuminance due to daylight at a point on the indoor working plane
Eo = simultaneous outdoor illuminance on a horizontal plane from an unobstructed hemisphere of overcast sky
2) Components of Daylight
- There are three possible paths along which light can reach a point inside a room through glazed windows:
- (a) light from the patch of sky visible at the point considered, expressed as the sky component (SC),
- (b) light reflected from opposing exterior surfaces and then reaching the point, expressed as the externally reflected component (ERC),
- (c) light entering through the window but reaching the point only after reflection from internal surfaces, expressed as the internally reflected component (IRC).
The sum of the three components gives the illuminance due to daylight:
Ei = SC + ERC + IRC
3) Purpose and Uses
- It serves as a guideline for determining the quantitative characteristics of daylight in a particular work space.
- Based on these guidelines, it can be determined whether a room has sufficient daylight.
- In certain cases it may even dictate a change in the design.
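A minimal Python sketch of the two formulas above (the lux values are illustrative placeholders):

def daylight_factor(sc, erc, irc, eo):
    """DF = (Ei / Eo) x 100%, with Ei = SC + ERC + IRC (all in lux)."""
    ei = sc + erc + irc  # indoor illuminance at the work-plane point
    return ei / eo * 100

# e.g. SC = 80 lx, ERC = 15 lx, IRC = 25 lx under a 6000 lx overcast sky
print(f"DF = {daylight_factor(80, 15, 25, 6000):.1f}%")  # DF = 2.0%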
4) Artificial Lighting
- Artificial lighting is being used more and more around the world, though its usage is quite non-homogeneous. In developing countries we can still find widespread use of fuel-based lighting, but the situation is changing and the demand for electric lighting is growing.
- Electric lighting consumes about 19% of the world's total electricity use. So we should remember that improvements in energy-efficient lighting will also help progress in developing countries.
- Every change in technologies, in customers' consumption behaviour, even in lifestyle, has an influence on global energy consumption and, indirectly, on the environment.
5) Fundamental Photometric Magnitudes
i) Luminous Flux
- The rate of flow of light energy, or the power output of visible light radiated from a source. The unit used for luminous flux is the lumen (lm).
ii) Luminous Intensity
- The amount of light that a source gives off in a given direction (i.e. its brightness). The unit of luminous intensity is the candela (cd).
iii) Illuminance
- The amount of light falling on a surface area is called illuminance (E) and is measured in lumens/m2, or lux.
iv) Luminance
- A photometric measure of the luminous intensity per unit area of light travelling in a given direction. It describes the amount of light that passes through, is emitted from, or is reflected by a particular area, and falls within a given solid angle.
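The sketch below relates these quantities in Python (the numbers are illustrative; the inverse-square relation for a point source is a standard photometric formula, not stated in the text above):

def illuminance_from_flux(lumens, area_m2):
    """Average illuminance E (lux) = luminous flux (lm) / area (m^2)."""
    return lumens / area_m2

def illuminance_from_intensity(candela, distance_m):
    """Point-source inverse-square law: E (lux) = I (cd) / d^2."""
    return candela / distance_m ** 2

print(illuminance_from_flux(1600, 4.0))      # 400.0 lx over a 4 m^2 desk
print(illuminance_from_intensity(100, 2.0))  # 25.0 lx at 2 m from a 100 cd source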
6) Light Mechanics
- Light travels in a straight line until it strikes a surface. It is then modified by transmission, refraction, reflection, or absorption.
- There are three general categories of transmission:
- Direct transmission occurs when light strikes a transparent material which can be seen through. These materials absorb almost none of the light in its passage through the material, and do not alter the direction of the light ray.
- Spread transmission occurs with translucent materials, in which the light passing through the material emerges in a wider angle than the incident beam, but the general direction of the beam remains the same.
- Diffuse transmission occurs with semi-opaque materials such as opal glass, where the light passing through the material is scattered in all directions. These materials absorb some of the light, and the emerging rays are of less intensity than the incident rays.
- Refraction occurs when a beam of light is bent as it passes from air to a medium of higher density. This occurs because the speed of the light is slightly lower in the medium of higher density.
- Two commonly used refractive devices are prisms and lenses. A prism is made of transparent material which has nonparallel sides. Smaller prisms are used in lighting fixtures to lower brightness or to redirect light into useful zones. Lenses are used to cause parallel light rays to converge or diverge, focusing or spreading the light.
- Reflection occurs when light strikes a shiny opaque surface, or any shiny surface at an angle. Reflection can be classified in three general categories:
- specular reflection
- spread reflection
- diffuse reflection
- Specular reflection occurs when light strikes a highly polished or mirror surface. The ray of light is reflected, or bounced off the surface, at an angle equal to that at which it arrives. Very little of the light is absorbed, and almost all of the incident light leaves the surface at the reflected angle.
- Spread reflection occurs when a ray of light strikes a polished but granular surface. The reflected rays are spread in diverging angles, due to reflection from the facets of the granular surface.
- Diffuse reflection occurs when the ray of light strikes a reflective opaque but non-polished surface, such as flat white paint.
- Absorption occurs when the object struck by the light ray retains the energy of the ray in the form of heat. Some surfaces, like flat black paint, absorb nearly all of the incident light rays. These surfaces, such as those of a solar collector panel, tend to get very hot when placed in the sunlight.
With these principles in mind, you can predict how the light itself will behave when used with the various control devices.
7) Glare
- Glare is the excessive brightness from a direct light source that makes it difficult to see what one wishes to see.
- Glare is defined as brightness within the field of vision of such a character as to cause discomfort and interference with vision.
- It causes:
- eye fatigue
- disturbance of the nervous system
- risk of accidents
8) Types of glare
- Direct glare
- Indirect / reflected glare
- Direct glare results from bright luminaires in the field of vision. Its minimization or avoidance is possible by mounting luminaires well above the line of vision or field of vision, and by limiting both brightness and light flux.
- Reflected glare arises from the reflection of such a source in a glossy surface. It is more annoying than direct glare and can be avoided by an appropriate choice of interiors.
9) Lighting and human needs
In all areas of life and throughout the working world, good and appropriate lighting is a prime requirement for enabling us to:
- see clearly,
- enjoy a sense of wellbeing,
- perform concentrated, fatigue-free work,
- perceive and interpret important information and our surroundings correctly.
- This calls for good, professional lighting design.
10) Layers of light
- There are four layers of light typically used in residential lighting: general (also called ambient) lighting, task lighting, accent lighting, and decorative lighting. Combining and balancing these lighting types gives visual interest to the space and creates a more attractive, exciting and inviting environment.
i) General lighting
- General lighting is the main source of illumination in a space. This uniform, base level of lighting can easily become the focus of energy reduction, as the light levels from other fixtures can be lowered. It provides the area with overall illumination, more specifically for orientation and general tasks.
- Ambient lighting should radiate a comfortable level of brightness and provide a sense of relaxation and spaciousness.
- The light level should be uniform throughout the space, inconspicuous and neutral.
- A simple way to achieve this is by arranging recessed fixtures using reflectors, baffles, and lensed trims in overlapping positions.
ii) Task lighting
- Task lighting is used to illuminate an area for a specific task, providing a focused, localized, and higher level of illumination.
- Necessary to the functioning of a space, it is important to use energy-efficient sources to reduce operating costs.
- Task lighting is most effective when used as a supplement to general lighting in workspaces, conference areas and on counter tops.
- Effective task lighting should eliminate shadows on the specific illuminated area, while preventing glare from the lamp or off surfaces. Although ambient light should still provide the majority of illumination, task lighting reduces the reliance on overhead lighting, and provides a better quality of light for specific tasks.
- When lighting a task area, take into account the difference in brightness, or contrast, between the task area and the surrounding space. A 3:1 ratio of task lighting to general illumination provides a nice contrast.
iii) Accent lighting
- Accent lighting reinforces design aesthetics and creates a dramatic emphasis on shapes, textures, finishes and colors. It creates visual interest in the space and can enhance almost anything. It adds depth and contrast and creates a focal point; it highlights shape, texture, finish and color. The key is to make this illumination more precise and of higher intensity than the surrounding ambient light.
- Track fixtures, recessed housings with adjustable trims and concealed adjustable illumination with point-source lamps provide directional control and are especially effective for accent lighting. They are easy to aim precisely to highlight products' best attributes and influence the customers' impression.
- Accenting everything and emphasizing nothing is a common mistake with accent lighting; always keep in mind that there is such a thing as providing too much light. The standards recommend a 5:1 ratio of accent lighting to ambient light to make objects stand out and create a significant visual effect; dark merchandise may require a higher ratio to bring out detail. For feature displays, higher ratios of 15:1 or 30:1 are used, especially to create sparkle in jewelry or crystal.
11) Lighting schemes
Lighting schemes are classified according to location, requirement, purpose, etc., as follows:
- Direct lighting
- Indirect lighting
- Semi-direct lighting
- Semi-indirect lighting
- Diffuse lighting
i) Direct Lighting
- As is clear from the name, in this system almost 90 to 95% of the light falls directly on the object or the surface. The light is made to fall upon the surface with the help of deep reflectors.
- This type of lighting scheme is most used in industrial and commercial lighting. Although this scheme is the most efficient, it is liable to cause glare and shadows.
ii) Indirect Lighting
- In this system, the light does not fall directly on the surface; instead, more than 90% of the light is directed upwards by using diffusing reflectors.
- Here the ceiling acts as a source of light, this light is uniformly distributed over the surface, and glare is reduced to a minimum.
- It provides shadowless illumination, which is useful for drawing offices and composing rooms.
- It is also used for decorative purposes in cinema halls, hotels, etc.
iii) Semi-Direct Lighting
- This is also an efficient system of lighting, and the chances of glare are reduced.
- Here transparent shades are used, through which about 60% of the light is directed downward and 40% is directed upward.
- This also provides a uniform distribution of light and is best suited for rooms with high ceilings.
iv) Semi-Indirect Lighting
- In this system about 60 to 90% of the total light is thrown upward to the ceiling for diffused reflection, and the rest reaches the working plane directly.
- A very small amount of light is absorbed by the bowl. It is mainly used for interior decoration.
v) Diffused Lighting
- This system employs luminaires, shades and reflectors that give equal illumination in all directions.
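The contrast ratios quoted above translate directly into target light levels. A small illustrative sketch (the 200 lx ambient level is an arbitrary assumption):

def target_level(ambient_lux, ratio):
    """Illuminance needed to hit a given task/accent-to-ambient ratio."""
    return ambient_lux * ratio

ambient = 200  # lx of general lighting, illustrative
for label, ratio in [("task (3:1)", 3), ("accent (5:1)", 5),
                     ("feature display (15:1)", 15), ("sparkle (30:1)", 30)]:
    print(f"{label}: {target_level(ambient, ratio)} lx")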
Implementation Examples
Three examples of assemblers for real machines are:
1. MASM assembler
2. SPARC assembler
3. AIX assembler
MASM Assembler
Programs in the x86 system view memory as a collection of segments. Each segment belongs to a particular class corresponding to its contents. The commonly used classes are:
1. CODE
2. DATA
3. CONST
4. STACK
During program execution, segments are addressed via an x86 segment register. In most cases:
Code segments are addressed using register CS.
Stack segments are addressed using register SS.
* The loader automatically sets CS and SS when the program is loaded. CS is set to indicate the segment that contains the starting label specified by the END statement of the program.
* SS is set to indicate the last stack segment processed by the loader.
* The programmer can explicitly specify the segment register to be used; otherwise, the assembler selects one.
* Data segments are addressed using DS, ES, FS and GS.
* By default the assembler assumes that all references to data segments use register DS, but the following statement with the assembler directive ASSUME tells the assembler to assume that register ES indicates the segment DATASEG2.
ASSUME ES:DATASEG2
* Thus any references to labels that are defined in DATASEG2 will be assembled using register ES.
* It is also possible to group several segments together. The following instructions would set ES to indicate data segment DATASEG2.
MOV AX, DATASEG2
MOV ES, AX
* The BASE directive tells the SIC/XE assembler the contents of register B; the ASSUME directive tells MASM the contents of a segment register.
Jump instructions are assembled in two ways:
1. Near jump
2. Far jump
Near jump
* A jump to a target location in the same code segment.
* The assembled instruction for a near jump is 2 or 3 bytes.
Far jump
* A jump to a target location in a different code segment.
* The assembled instruction for a far jump is 5 bytes.
Pass 1 of the x86 assembler
It is more complex than for SIC because operands have to be analyzed in addition to operation codes.
Segments of MASM
* Segments of a MASM source program can be written in more than one part.
* If a segment directive has the same name as a previously defined segment, it is taken to be a continuation of that segment. The assembly process combines all such segments together.
* These segments are similar to program blocks.
* The assembler handles references between segments.
* External references between separately assembled modules are handled by the linker.
MASM directives
* The MASM directive PUBLIC functions like EXTDEF.
* The MASM directive EXTRN functions like EXTREF.
SPARC Assembler
Sections
* A SPARC assembly language program is divided into units called sections.
* The assembler provides a set of predefined section names, such as the following: .TEXT .DATA .RODATA .BSS
* The programmer can switch between sections at any time in the source program by using assembler directives.
* The assembler maintains a separate location counter for each named section.
Similarity between sections and program blocks
* Each time the assembler switches to a different section, it also switches to the location counter associated with that section. In this way sections are similar to program blocks.
Difference between sections and program blocks
* References between different sections are resolved by the linker in the case of sections, and by the assembler in the case of program blocks.
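To make the section mechanism concrete, here is a minimal, hypothetical sketch in SPARC assembler syntax (the section names follow the predefined set listed above; the labels, values, and instructions are invented for illustration):

        .section ".data"
count:  .word   0                ! placed under the .data location counter
        .section ".text"
        .global start
start:  set     count, %l0       ! synthetic instruction: load the address of count
        ld      [%l0], %l1       ! load the value stored at count
        .section ".data"         ! switching back resumes the .data location counter
limit:  .word   100

Each .section directive switches the active location counter, so count and limit end up adjacent in the .data section even though limit is written after the .text code; this is the same bookkeeping the text compares to program blocks.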
Symbols used in the program
* Local symbols
* Global symbols
* Weak symbols
Object file of SPARC
* The object file written by the SPARC assembler contains translated versions of the segments of the program and a list of relocation and linking operations that need to be performed.
* The object program also includes a symbol table that describes the global symbols, weak symbols and section names.
Delayed branches
* SPARC assembler language branch instructions are delayed branches.
* The instruction immediately following a branch instruction is actually executed before the branch is taken.
AIX Assembler
The AIX assembler supports various models of PowerPC microprocessors as well as machines that implement the original POWER architecture.
.MACHINE assembler directive
* The programmer can declare which architecture is being used with the assembler directive .MACHINE.
* A PowerPC program that contains only instructions that are also in the original POWER architecture would be executable on either type of system.
Base registers
* PowerPC load and store instructions use a base register and a displacement value to specify an address in memory. Any register except GPR0 can be used as a base register.
* Decisions about which registers to use are left to the programmer.
* The programmer specifies which registers are available for use as base registers, and the contents of these registers, with the .USING assembler directive. Thus the statements
.USING LENGTH, 1
.USING BUFFER, 4
would identify GPR1 and GPR4 as base registers.
* GPR1 contains the address of LENGTH.
* GPR4 contains the address of BUFFER.
If a base register is to be used later for some other purpose, the programmer uses the .DROP statement, which indicates that the register is no longer available for addressing purposes.
Selection of base register
* For each instruction whose operand is an address in memory, the assembler scans the table to find a base register that can be used to address that operand.
* If more than one register can be used to address the operand, the assembler selects the base register that results in the smallest signed displacement. If no suitable base register is available, the instruction cannot be assembled.
* The AIX assembler language also allows the programmer to write base registers and displacements explicitly in the source program.
Dummy control sections
* The AIX assembler provides a special type of control section called a dummy section. Data items included in a dummy section do not actually become part of the object program; they serve only to define labels within the section.
* Dummy sections are most commonly used to describe the layout of a record or table that is defined externally.
Table of Contents (TOC)
* By using this assembler directive the programmer can create a table of contents (TOC) for the assembled program.
* The TOC contains the addresses of control sections and global symbols defined within the control sections.
The two passes of the AIX assembler
The AIX assembler itself has a two-pass structure.
Pass 1
* The first pass of the AIX assembler writes a listing file that contains warnings and error messages.
* If errors are found during the first pass, the assembler terminates and does not continue to the second pass. If no errors are detected during the first pass, the assembler proceeds to pass 2.
Pass 2
* The second pass reads the source program again, instead of using an intermediate file. This means that the location counter values must be recalculated during pass 2.
* Any non-serious warning messages that were generated during pass 1 are lost.
* The assembled control sections are placed into the object program.
Relocation and linking
* Relocation and linking operations are specified by entries in a relocation table, which is similar to the modification records for SIC.
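As a rough illustration of the base-register bookkeeping described above, consider the following hypothetical fragment (directive spelling and operand forms vary between assembler versions, so treat this as a sketch rather than verbatim AIX syntax):

        .using  LENGTH, 1       # assume GPR1 holds the address of LENGTH
        .using  BUFFER, 4       # assume GPR4 holds the address of BUFFER
        lwz     5, LENGTH       # assembler chooses base GPR1, displacement 0
        lwz     6, BUFFER+8     # assembler chooses base GPR4, displacement 8
        .drop   4               # GPR4 is no longer available for addressing

After the .drop, a reference such as BUFFER+8 can no longer be assembled unless some other register in the table reaches it with a valid signed displacement.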
Domain and Range: Numerical Representations
Given a function in the form of a table, mapping diagram, and/or set of ordered pairs, the student will identify the domain and range using set notation, interval notation, or a verbal description as appropriate.
Transformations of Square Root and Rational Functions
Given a square root function or a rational function, the student will determine the effect on the graph when f(x) is replaced by af(x), f(x) + d, f(bx), and f(x - c) for specific positive and negative values.
Transformations of Exponential and Logarithmic Functions
Given an exponential or logarithmic function, the student will describe the effects of parameter changes.
Solving Square Root Equations Using Tables and Graphs
Given a square root equation, the student will solve the equation using tables or graphs - connecting the two methods of solution.
Functions and their Inverses
Given a functional relationship in a variety of representations (table, graph, mapping diagram, equation, or verbal form), the student will determine the inverse of the function.
Rational Functions: Predicting the Effects of Parameter Changes
Given parameter changes for rational functions, students will be able to predict the resulting changes on important attributes of the function, including domain and range and asymptotic behavior.
Transformations of Absolute Value Functions
Given an absolute value function, the student will analyze the effect on the graph when f(x) is replaced by af(x), f(bx), f(x – c), and f(x) + d for specific positive and negative real values.
Domain and Range: Graphs
Given a function in graph form, identify the domain and range using set notation, interval notation, or a verbal description as appropriate.
Domain and Range: Function Notation
Given a function in function notation form, identify the domain and range using set notation, interval notation, or a verbal description as appropriate.
Domain and Range: Verbal Description
The student will be able to identify and determine reasonable values for the domain and range from any given verbal description.
Domain and Range: Contextual Situations
The student will be able to identify and determine reasonable values for the domain and range from any given contextual situation.
Modeling Data with Linear Functions
Given a scatterplot where a linear function is the best fit, the student will interpret the slope and intercepts, determine an equation using two data points, identify the conditions under which the function is valid, and use the linear model to predict data points.
Formulating Systems of Inequalities
Given a contextual situation, the student will formulate a system of two linear inequalities with two unknowns to model the situation.
Solving Systems of Equations Using Substitution
Given a system of two equations where at least one of the equations is linear, the student will solve the system using the algebraic method of substitution.
Solving Systems of Equations Using Elimination
Given a system of two equations where at least one of the equations is linear, the student will solve the system using the algebraic method of elimination.
Solving Systems of Equations with Three Variables
Given a system of three linear equations, the student will solve the system with a unique solution.
Solving Systems of Equations Using Matrices
Given a system of up to three linear equations, the student will solve the system using matrices with technology.
TEA
AP® Biology
AP® Biology covers the scope and sequence requirements of a typical two-semester biology course for AP® students. The text provides comprehensive coverage of foundational research and core biology concepts through an evolutionary lens. AP® Biology was designed to meet and exceed the requirements of the College Board's AP® Biology Framework, while allowing significant flexibility for instructors. Each section of the book includes an introduction based on the AP® curriculum as well as rich features that engage students in scientific practice and AP® test preparation. It also highlights careers and research opportunities in the biological sciences.
Content requirements for AP® Biology are prescribed in the College Board publication Advanced Placement Course Description: Biology, published by The College Board (http://ritter.tea.state.tx.us/rules/tac/chapter112/ch112d.html#112.62).
This open-education-resource instructional material by TEA is licensed under a Creative Commons Attribution 4.0 International Public License in accordance with Chapter 31 of the Texas Education Code.
Square Root Regression
This lesson is a student discovery lesson that culminates in square root regression with technology. Students will use their study of inverses, the relationship between quadratic and square root functions, and their previous knowledge of regression to determine how to find the square root regression of real-world data.
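As a sketch of what square root regression "with technology" might look like in code (this assumes SciPy is available and uses invented data points; it is an illustration, not the lesson's prescribed tool):

import numpy as np
from scipy.optimize import curve_fit

def sqrt_model(x, a, b):
    # Model of the form y = a*sqrt(x) + b.
    return a * np.sqrt(x) + b

# Hypothetical real-world measurements (e.g., pendulum length vs. period).
x = np.array([1.0, 4.0, 9.0, 16.0, 25.0])
y = np.array([2.1, 4.2, 6.0, 8.1, 10.2])

params, _ = curve_fit(sqrt_model, x, y)
a, b = params
print(f"y = {a:.2f}*sqrt(x) + {b:.2f}")

curve_fit returns least-squares estimates of a and b; students would compare the fitted curve against a scatterplot of the data, echoing their earlier work with linear regression.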
What Is an Imperfect Market?
An imperfect market is any economic market that does not meet the rigorous standards of a hypothetical perfectly or purely competitive market, as established by Marshallian partial equilibrium models. An imperfect market is one in which individual buyers and sellers can influence prices and production, there is no full disclosure of information about products and prices, and there are high barriers to entry or exit in the market.
It is the opposite of a perfect market, which is characterized by perfect competition, market equilibrium, and an unlimited number of buyers and sellers.
Imperfect markets are found in the real world and are used by businesses and other sellers to earn profits.
Understanding Imperfect Markets
All real-world markets are theoretically imperfect, and the study of real markets is always complicated by various imperfections. These include:
- Competition for market share.
- High barriers to entry and exit.
- Differentiated products and services.
- Prices set by price makers rather than by supply and demand.
- Imperfect or incomplete information about products and prices.
- A small number of buyers and sellers.
For example, traders in a financial market do not possess perfect or even identical knowledge about financial products. The traders and assets in a financial market are not perfectly homogeneous. New information is not instantaneously transmitted, and reactions to it occur with limited velocity.
Economists use perfect competition models only to think through the implications of economic activity. The moniker "imperfect market" is somewhat misleading: most people will assume an imperfect market is deeply flawed or undesirable, but this is not always the case. The range of market imperfections is as wide as the range of all real-world markets; some are much more efficient than others.
Implications of Imperfect Markets
Not all market imperfections are harmless or natural. Situations can arise in which too few sellers control too much of a single market, or in which prices fail to adequately adjust to material changes in market conditions. The majority of economic debate originates from these instances.
Some economists argue that any deviation from perfect competition models justifies government intervention to promote increased efficiency in production or distribution. Such interventions may come in the form of monetary policy, fiscal policy, or market regulation. One common example of such interventionism is antitrust law, which is explicitly derived from perfect competition theory. (Governments may also use taxation, quotas, licenses, and tariffs to help regulate so-called perfect markets.)
Other economists argue that government intervention may be necessary to correct imperfect markets, but not always, because governments are also imperfect, and government actors may not possess the right incentives or information to intervene correctly. Finally, many economists argue that government intervention is rarely, if ever, justified in markets. The Austrian and Chicago schools notably blame many market imperfections on erroneous government intervention.
- Imperfect markets do not meet the rigorous standards of a hypothetical perfectly or purely competitive market.
- They are characterized by competition for market share, high barriers to entry and exit, differentiated products and services, and a small number of buyers and sellers.
- Perfect markets are theoretical and don't exist, while all real-world markets have some form of imperfection.
- Structures of an imperfect market include monopolies, oligopolies, monopolistic competition, monopsonies, and oligopsonies.
Structures of Imperfect Markets
When at least one condition of a perfect market is not met, the result is an imperfect market. Every industry has some form of imperfection. Imperfect competition can be found in the following structures:
Monopoly: A structure in which there is only one (dominant) seller. Products offered by this entity have no substitutes. These markets have high barriers to entry and a single seller who sets the prices on goods and services. Prices can change without notice to consumers.
Oligopoly: This structure has many buyers but few sellers. These few players in the market may bar others from entering. They may set prices together or, in the case of a cartel, only one takes the lead in determining the price for goods and services while the others follow.
Monopolistic competition: Here, there are many sellers who offer similar products that are not perfect substitutes for one another. Businesses compete with one another and are price makers, but their individual decisions do not affect the others.
Monopsony and oligopsony: These structures have many sellers but few buyers. In both cases, the buyer is the one who manipulates market prices by playing firms against one another.
Imperfect Markets vs. Perfect Markets
No serious economist believes that a perfectly competitive market could ever arise, and very few consider such a market desirable. Perfect markets are characterized by:
- An unlimited number of buyers and sellers.
- Identical or substitutable products.
- No barriers to entry or exit.
- Buyers with complete information on products and prices.
- Companies that are price takers, meaning they have no power to set prices.
In reality, no market can ever have an unlimited number of buyers and sellers. Economic goods in every market are heterogeneous, not homogeneous, as long as more than one producer exists. A diversity of goods and tastes is preferred in an imperfect market.
Perfect markets, although impossible to achieve, are useful because they help us think through the logic of prices and economic incentives. It is a mistake, however, to try to extrapolate the rules of perfect competition into a real-world scenario. Logical problems arise from the start, especially the fact that it is impossible for any purely competitive industry to conceivably attain a state of equilibrium from any other position. Perfect competition can therefore only be theoretically assumed; it can never be dynamically reached.
An alphabet string is a sequence of characters drawn from the standard English alphabet, A to Z. Such strings are commonly used in programming languages to represent data such as names, emails, and addresses. They are also used in creating passwords and usernames.
In programming, alphabet strings are often incorporated into various algorithms, data structures, and applications. They are treated as a data type, meaning they can be stored, manipulated, and retrieved in various ways.
One of the best examples of using alphabet strings in programming is generating random alphanumeric codes. For instance, in website sign-up forms, passwords are often required to be a combination of letters and numbers. In such cases, the system generates a random sequence of letters and digits that conforms to certain rules, such as being uppercase, lowercase, or a mixture of both.
Here is an example of code for generating a random alphanumeric string of length n:

import random
import string

def generate_random_string(length: int) -> str:
    allowed_chars = string.ascii_letters + string.digits
    return ''.join(random.choices(allowed_chars, k=length))

In the code above, we have imported two modules, random and string. The random module helps generate random integers and choices from sequences, while the string module provides the ascii_letters and digits constants, which supply the allowed characters in our random string. We defined a function, generate_random_string, that takes in the length of the string and returns a random sequence of characters. The function first declares a string, allowed_chars, that contains the allowed ascii_letters and digits. The random.choices() function is called with two arguments: the first is the allowed characters, and the second (k) is the length of the returned sequence.
Here is an example output for a random string of length 8 generated by the code above (the output varies on each run):

print(generate_random_string(8))  # e.g. 'aB3xZ9Qw'

Another example of using alphabet strings in programming is in data manipulation or cleaning, where certain patterns or characters need to be removed or replaced. For instance, when working with text data, punctuation marks or any non-alphabetic characters can be removed to extract meaningful information.
Here is an example of code for removing non-alphabetic characters using a regex:

import re

def remove_non_alpha(string: str) -> str:
    regex = re.compile('[^a-zA-Z]')
    return regex.sub('', string)

In the code above, we have imported the re module, which provides methods for handling regular expressions. Using the re.compile() function, we defined a regex pattern, '[^a-zA-Z]', that matches any character that is not an uppercase or lowercase alphabet character. The remove_non_alpha() function takes in a string and applies the regex pattern using the compiled pattern's sub() method. The sub() method replaces matches with an empty string, effectively removing them from the input string. The function then returns the cleaned-up alphabetic string.
Here is an example output for a string with non-alphabetic characters cleaned up using the code above:

string = 'Hello, my name is @OpenAI !'
cleaned_string = remove_non_alpha(string)
print(cleaned_string)  # output: 'HellomynameisOpenAI'

In conclusion, alphabet strings are an essential component in programming languages due to their versatility in data representation and manipulation. Using code such as the examples above, programmers can generate and manipulate alphabetic data with greater ease and efficiency.
I'd be happy to provide more information on the previous topics.
Alphabet strings are a fundamental data type in programming because they are used to represent a wide range of data, including names, addresses, passwords, and emails. Alphabet strings are treated as a data type, just like integers and floats, which means that they can be manipulated and stored in various ways in a program.
Programmers often use alphabet strings in different algorithms and data structures. One popular application of alphabet strings is in searches, where programmers use them to search for specific data. For example, if you want to look for an email in a dataset, you can use an algorithm that searches alphabet strings to pull email addresses that share the same domain. Alphabet strings are also essential in machine learning and natural language processing, where they are used as inputs for predicting and generating natural language text or for analyzing text data.
The code examples I provided earlier demonstrate how programmers can generate and manipulate alphabet strings. The first example shows how to generate a random string of a specific length using the random and string modules in Python. The second example shows how to remove non-alphabetic characters using regular expressions.
In Python, alphabet strings are represented as the string data type, and Python provides various built-in functions and methods for working with them. Some of these methods include:
upper() – convert all lowercase characters in a string to uppercase.
lower() – convert all uppercase characters in a string to lowercase.
strip() – remove any leading or trailing whitespace from a string.
split() – split a string into a list of substrings based on a delimiter.
join() – concatenate a list of substrings into a single string using a delimiter.
Strings can also be sorted. For example, you can sort a list of strings in alphabetical order using the sorted() function or the list's sort() method.
In programming, it's essential to pay attention to the format of alphabet strings, especially when working with text data. This is because text data can come in various encodings, such as Unicode and ASCII. Thus, it's crucial to ensure that the encoding of an alphabet string matches the format that the program is using.
In conclusion, alphabet strings are a crucial component in programming. They are used extensively to represent and manipulate data, and they play a vital role in fields such as machine learning, natural language processing, and data analysis. Programmers must understand how to generate, manipulate, and store alphabet strings correctly in order to develop effective programs.
Sure, here are five possible questions along with their answers:
Q: What is an alphabet string and how is it used in programming?
A: An alphabet string is a sequence of characters consisting of the English alphabet letters A to Z. It is commonly used in programming to represent data such as names, addresses, and passwords. Alphabet strings are treated as a data type in programming, meaning they can be manipulated and stored in various ways in a program.
Q: How can you generate a random string of alphabets in Python?
A: You can use the random and string modules in Python to generate a random string of alphabets.
Here's an example of code:

import random
import string

def random_string(length):
    letters = string.ascii_lowercase
    return ''.join(random.choice(letters) for i in range(length))

This code uses the string.ascii_lowercase constant from the string module to obtain the lowercase alphabet and the random.choice() function from the random module to choose random characters from it.
Q: What are some of the built-in functions and methods in Python that can be used with alphabet strings?
A: Some of the built-in methods in Python that can be used with alphabet strings include upper(), lower(), strip(), split(), and join(), along with the sorted() function for ordering lists of strings.
Q: What is a regular expression and how is it used to manipulate alphabet strings?
A: A regular expression is a sequence of characters that defines a search pattern. It can be used in programming to manipulate alphabet strings by finding or replacing specific patterns of characters within a string. For example, a regular expression can be used to remove all non-alphabetic characters from a string.
Q: What is the role of encoding in working with alphabet strings in programming?
A: Encoding refers to the representation of text in a specific format, such as Unicode or ASCII. In programming, it's essential to pay attention to the encoding of alphabet strings to ensure that it matches the format the program is using. Failure to do so can result in issues such as incorrect translation, reading, or writing of the text data.
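To tie the Q&A together, here is a brief, self-contained demonstration of the built-in string methods and sorting functions mentioned above (the sample values are invented):

name = "  Alice Smith  "
print(name.upper())              # '  ALICE SMITH  '
print(name.lower())              # '  alice smith  '
print(name.strip())              # 'Alice Smith'
words = name.strip().split(" ")  # ['Alice', 'Smith']
print("-".join(words))           # 'Alice-Smith'

names = ["carol", "alice", "bob"]
names.sort()                     # in-place sort: ['alice', 'bob', 'carol']
print(sorted(["zoe", "amy"]))    # new sorted list: ['amy', 'zoe']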
Timeline of African-American history
This is a timeline of African-American history, the part of history that deals with African Americans.
The first African slaves in what would become the present-day United States of America arrived on August 9, 1526. During the American Revolution of 1776–1783, enslaved African Americans in the South escaped to British lines, as they were promised freedom in exchange for fighting with the British; additionally, many free blacks in the North fought with the colonists for the rebellion, and the Vermont Republic (a sovereign nation at the time) became the first future state to abolish slavery. Following the Revolution, numerous slaveholders in the Upper South freed their slaves. The importation of slaves became a felony in 1808. After the American Civil War began in 1861, tens of thousands of enslaved African Americans of all ages escaped to Union lines for freedom. Later, the Emancipation Proclamation was issued, formally freeing slaves in the Confederate States of America. After the Civil War ended, the Thirteenth Amendment to the United States Constitution, which prohibits slavery (except as punishment for crime), was passed in 1865. In the mid-20th century, the civil rights movement took place, and racial segregation and discrimination were outlawed.
- The first African slaves in what would become the present-day United States of America arrived on August 9, 1526, in Winyah Bay, South Carolina. Spanish explorer Lucas Vázquez de Ayllón led around six hundred settlers, including an unknown number of African slaves, there in an attempt to start a colony. The attempt failed after a month, and Ayllón moved the colony, including the slaves, to what is now the state of Georgia. This colony also failed, but slavery would continue in Georgia until 1865.
- The Spanish colony of St. Augustine in Florida became the first permanent European settlement in what would become, centuries later, the U.S.; it included an unknown number of enslaved Africans.
- The first recorded Africans in English North America arrive when "twenty and odd" men, women and children are brought first to Fort Monroe off the coast of Hampton, Virginia, and then to Jamestown. They had been taken as prizes from a Portuguese slave ship. The group of Africans are treated as indentured servants, and at least one is recorded as eventually owning land in the colony.
- John Punch, a Black indentured servant, runs away with three white servants, James, Gregory, and Victor. After the four are captured, Punch is sentenced to serve Virginian planter Hugh Gwyn for life. This makes John Punch the first legally documented slave in colonial Virginia.
- John Casor, a Black man who claimed to have completed his term of indenture, becomes the first legally recognized slave-for-life in a civil case in colonial Virginia. The court rules for his master, who claimed Casor owed him indefinite servitude for life.
- The Colony of Virginia, using the principle of partus sequitur ventrem, proclaims that children in the colony are born into their mother's social status; therefore children born to enslaved mothers are classified as slaves, regardless of their father's ethnicity or status. This is contrary to English common law for English subjects, which held that children took their father's social status.
- September 20 – The Province of Maryland passes the first law in Colonial America banning interracial marriage.
- Zipporah Potter Atkins, a free woman of color, becomes the first African-American landowner in Boston and the first Black woman to own land in Colonial America.
- Both free and enslaved African Americans fight in Bacon's Rebellion alongside white indentured servants.
- French king Louis XIV issues the Code Noir ("Black Code"), a slave code which applies to France's overseas colonies, including Louisiana.
- The Virginia Slave Codes of 1705 define as slaves all those servants brought into the colony who were not Christian in their original countries, as well as Native American slaves sold by other Indians to colonists.
- First free African-American community: Gracia Real de Santa Teresa de Mose (later named Fort Mose) in Spanish Florida.
- September 9 – In the Stono Rebellion, South Carolina slaves gather at the Stono River to plan an armed march for freedom.
- Benjamin Banneker designs and builds the first clock of its type in the Thirteen Colonies. He also creates a series of almanacs. He corresponds with Thomas Jefferson, writing that "blacks were intellectually equal to whites". Banneker works with Pierre L'Enfant to survey and design a street and urban plan for Washington, D.C.
- March 5 – Crispus Attucks is among the five men killed by a detachment of the 29th Regiment of Foot in the Boston Massacre, a precursor to the American Revolution.
- As part of a broader non-importation movement aimed at Britain, the First Continental Congress calls on all the colonies to ban the importation of slaves, and the colonies pass acts doing so.
- The first black Baptist congregations are organized in the American South: Silver Bluff Baptist Church in South Carolina, and First African Baptist Church near Petersburg, Virginia.
- April 14 – The Society for the Relief of Free Negroes Unlawfully Held in Bondage holds four meetings. It is re-formed in 1784 as the Pennsylvania Abolition Society, and Benjamin Franklin later serves as its president.
- Thomas Paine publishes one of the earliest and most influential anti-slavery essays in the U.S., called "African Slavery in America."
1776–1783 American Revolution
- Thousands of enslaved African Americans in the South escape to British lines, as they are promised freedom to fight with the British. In South Carolina, 25,000 enslaved African Americans, one-quarter of those held, escape to the British or otherwise leave their plantations. After the war, many African Americans are evacuated with the British to England; more than 3,000 Black Loyalists are transported with other Loyalists to Nova Scotia and New Brunswick, where they are granted land. Still others go to Jamaica and the West Indies. An estimated 8,000–10,000 are evacuated from the colonies in these years as free people, about 50 percent of those slaves who defected to the British and about 80 percent of those who survived.
- Many Black Patriots in the North fight with the rebelling colonists during the Revolutionary War.
- July 8 – The Vermont Republic (a sovereign nation at the time) abolishes slavery, the first future state to do so. No slaves were held in Vermont.
- Pennsylvania becomes the first U.S. state to abolish slavery.
- Capt. Paul Cuffe and six other African-American residents of Massachusetts successfully petition the state legislature for the right to vote, claiming "no taxation without representation."
- In challenges by Elizabeth Freeman and Quock Walker, two independent county courts in Massachusetts find slavery illegal under the state constitution and declare each to be a free person.
- The Massachusetts Supreme Judicial Court affirms that the Massachusetts state constitution has abolished slavery, ruling that "the granting of rights and privileges [was] wholly incompatible and repugnant to" slavery, in an appeal case arising from the escape of former slave Quock Walker.
- When the British leave New York and Charleston in 1783, they take the last of 5,500 Loyalists to the Caribbean, who bring along with them some 15,000 slaves.
- July 13 – The Northwest Ordinance bans the expansion of slavery into U.S. territories north of the Ohio River and east of the Mississippi River.
1790–1810 Manumission of slaves
- Following the Revolution, numerous slaveholders in the Upper South free their slaves; the percentage of free blacks rises from less than one to 10 percent. By 1810, 75 percent of all blacks in Delaware are free, and 7.2 percent of blacks in Virginia are free.
- February – Major Andrew Ellicott hires Benjamin Banneker, an African-American draftsman, to assist in a survey of the boundaries of the 100-square-mile (260 km2) federal district that would later become the District of Columbia.
- March 14 – Eli Whitney is granted a patent on the cotton gin. This enables the cultivation and processing of short-staple cotton to be profitable in the uplands and interior areas of the Deep South; as this cotton can be cultivated in a wide area, the change dramatically increases the need for enslaved labor and leads to the development of King Cotton as the chief commodity crop. To satisfy labor demand, one million slaves are forcibly migrated from the Upper South and coast to the area in the antebellum period, mostly through the domestic slave trade.
- July – Two independent black churches open in Philadelphia: the African Episcopal Church of St. Thomas, with Absalom Jones, and the Bethel African Methodist Episcopal Church, with Richard Allen, the latter the first church of what would become in 1816 the first independent black denomination in the United States.
Early 19th century
- The first Black Codes are enacted.
- August 30 – Gabriel Prosser's planned attempt to lead a slave rebellion in Richmond, Virginia is suppressed.
- At the urging of President Thomas Jefferson, Congress passes the Act Prohibiting Importation of Slaves. It makes it a federal crime to import a slave from abroad.
- January 1 – The importation of slaves becomes a felony. This is the earliest day under the United States Constitution on which a law restricting slavery could take effect.
- The first separate black denomination, the African Methodist Episcopal Church (AME), is founded by Richard Allen, who is elected its first bishop.
- The American Colonization Society is begun by Robert Finley, to send free African Americans to what is to become Liberia in West Africa.
- The Shockoe Hill African Burying Ground is established in Richmond, VA. With estimated interments of upwards of 22,000, it is likely the largest burial ground for free people of color and the enslaved in the United States.
- The First African Baptist Church has its beginnings in 1817, when John Mason Peck and the formerly enslaved John Berry Meachum begin holding church services for African Americans in St. Louis. Meachum founds the First African Baptist Church in 1827. It is the first African-American church west of the Mississippi River.
Although there were ordinances preventing blacks from assembling, the congregation grows from 14 people at its founding to 220 people by 1829. Two hundred of the parishioners are slaves, who can only travel to the church and attend services with the permission of their owners.
- March 6 – The Missouri Compromise allows for the entry as states of Maine (free) and Missouri (slave); no more slave states are allowed north of 36°30′.
- The British West Africa Squadron's slave trade suppression activities are assisted by forces from the United States Navy, starting in 1820 with the USS Cyane. With the Webster–Ashburton Treaty of 1842, the relationship is formalised and they jointly run the Africa Squadron.
- The African Methodist Episcopal Zion Church is formed.
- July 14 – Denmark Vesey's planned slave rebellion in Charleston, South Carolina is suppressed (known also as "The Vesey Conspiracy").
- March 16 – Freedom's Journal, the first African-American newspaper in the U.S., begins publication.
- September – David Walker begins publication of the abolitionist pamphlet Walker's Appeal.
- October 28 – Josiah Henson, a slave, flees to Canada; he becomes an author, abolitionist and minister, and the inspiration behind the book Uncle Tom's Cabin.
- William Lloyd Garrison begins publication of the abolitionist newspaper The Liberator. He declares that owning a slave is a great sin and must stop immediately.
- August – Nat Turner leads the most successful slave rebellion in U.S. history. The rebellion is suppressed, but only after many deaths.
- Sarah Harris Fayerweather, an aspiring teacher, is admitted to Prudence Crandall's all-girl school in Canterbury, Connecticut, resulting in the first racially integrated schoolhouse in the United States. Her admission leads to the school's forcible closure under the Connecticut Black Law of 1833.
- The American Anti-Slavery Society, an abolitionist society, is founded by William Lloyd Garrison and Arthur Tappan. Frederick Douglass becomes a key leader of the society.
- February – The first institute of higher education for African Americans is founded, as the African Institute in February 1837; it is renamed the Institute for Colored Youth (ICY) in April 1837 and is now known as Cheyney University of Pennsylvania.
- July 2 – Slaves revolt on La Amistad, an illegal slave ship, resulting in a hearing before the U.S. Supreme Court (see United States v. The Amistad) and their gaining freedom.
- The Liberty Party breaks away from the American Anti-Slavery Society due to grievances with William Lloyd Garrison's leadership.
- The U.S. Supreme Court rules, in Prigg v. Pennsylvania (1842), that states do not have to offer aid in the hunting or recapture of slaves, greatly weakening the fugitive slave law of 1793.
- June 1 – Isabella Baumfree, a former slave, changes her name to Sojourner Truth and begins to preach for the abolition of slavery.
- August – Henry Highland Garnet delivers his famous speech, the "Call to Rebellion".
- Frederick Douglass begins publication of the abolitionist newspaper the North Star.
- Joseph Jenkins Roberts of Virginia becomes the first president of Liberia.
- Roberts v. Boston seeks to end racial discrimination in Boston public schools.
- Harriet Tubman escapes from slavery to Philadelphia, and begins helping other slaves escape via the Underground Railroad.
- September 18 – As part of the Compromise of 1850, Congress passes the Fugitive Slave Act of 1850, which requires any federal official to arrest anyone suspected of being a runaway slave.
- December – Clotel; or, The President's Daughter is the first novel published by an African American.
- President Franklin Pierce signs the Kansas–Nebraska Act, which repeals the Missouri Compromise and allows slaves to be brought into the new territories.
- In opposition to the Kansas–Nebraska Act, the Republican Party is formed with an anti-slavery platform.
- John Mercer Langston becomes one of the first African Americans elected to public office when he is elected a town clerk in Ohio.
- May 21 – The Sacking of Lawrence in Bleeding Kansas.
- May 25 – John Brown, whom Abraham Lincoln called a "misguided fanatic", retaliates for Lawrence's sacking in the Pottawatomie massacre.
- Wilberforce University is founded through a collaboration between Methodist Episcopal and African Methodist Episcopal representatives.
- March 6 – In Dred Scott v. Sandford, the U.S. Supreme Court upholds slavery. This decision is regarded as a key cause of the American Civil War.
- Harriet E. Wilson writes the autobiographical novel Our Nig.
- In Ableman v. Booth, the U.S. Supreme Court rules that state courts cannot issue rulings that contradict the decisions of federal courts; this decision upholds the Fugitive Slave Act of 1850.
- August 22 – The last known slave ship to arrive in the U.S., the Clotilde, docks in secrecy at Mobile, Alabama.
- April 12 – The American Civil War begins (secessions began in December 1860) and lasts until April 9, 1865. Tens of thousands of enslaved African Americans of all ages escape to Union lines for freedom. Contraband camps are set up in some areas, where blacks start learning to read and write. Others travel with the Union Army. By the end of the war, more than 180,000 African Americans, mostly from the South, fight with the Union Army and Navy as members of the US Colored Troops and as sailors.
- May 2 – The first North American military unit with African-American officers is the 1st Louisiana Native Guard of the Confederate Army (disbanded in February 1862).
- May 24 – General Benjamin Butler refuses to extradite three escaped slaves, declaring them contraband of war.
- August 6 – The Confiscation Act of 1861 authorizes the confiscation of any Confederate property, including all slaves who fought or worked for the Confederate military.
- August 30 – Frémont emancipation in Missouri.
- September 11 – Lincoln orders Frémont to rescind the edict.
- March 13 – Act Prohibiting the Return of Slaves.
- April 16 – (Emancipation Day) – District of Columbia Compensated Emancipation Act.
- May 9 – General David Hunter declares emancipation in Georgia, Florida and South Carolina.
- May 19 – Lincoln rescinds Hunter's order.
- July 17 – The Confiscation Act of 1862 frees confiscated slaves.
- September 22 – Lincoln announces the Emancipation Proclamation, to go into effect January 1, 1863.
- June 1 – Harriet Tubman and the 2nd South Carolina Volunteers liberate 750 people in the Raid at Combahee Ferry.
- July 13–16 – Protests by ethnic Irish immigrants against the draft in New York City turn into riots against blacks, the New York Draft Riots.
- July 18 – The Second Battle of Fort Wagner begins when the 54th Regiment Massachusetts Volunteer Infantry, an African-American military unit led by the white Colonel Robert Gould Shaw, attacks a Confederate fort on Morris Island, South Carolina. The attack fails to take the fort, and Shaw is killed in the battle. However, the Confederates abandon the fort on September 7, 1863, after many could no longer stand the weeks of constant bombardment and the sickening smell of the dead Union black soldiers.
- April 12 – The Battle of Fort Pillow, which results in controversy about whether a massacre of surrendered African-American troops was conducted or condoned.
- October 13 – A controversial election results in approval of the Maryland Constitution of 1864; emancipation in Maryland.
- January 16 – Sherman's Special Field Orders, No. 15 allocate a tract of land in coastal South Carolina and Georgia for Black-only settlement.
- January 31 – The United States Congress passes the Thirteenth Amendment to the United States Constitution, abolishing slavery, and submits it to the states for ratification.
- March 3 – Congress passes the bill that forms the Freedmen's Bureau; it mandates distribution of "not more than forty acres" of confiscated land to all loyal freedmen and refugees.
- May 29 – Andrew Johnson's amnesty proclamation initiates the return of land to pre-war owners.
- December 18 – The Thirteenth Amendment to the United States Constitution prohibits slavery except as punishment for crime; emancipation in Delaware and Kentucky.
- Shaw Institute is founded in Raleigh, North Carolina, as the first black college in the South.
- Atlanta College is founded.
- Southern states pass Black Codes that restrict the freedmen, who were emancipated but not yet full citizens.
- April 9 – The Civil Rights Act of 1866 is passed by Congress over Johnson's presidential veto. All persons born in the United States are now citizens.
- The Ku Klux Klan is formed in Pulaski, Tennessee, made up of white Confederate veterans; it becomes a paramilitary insurgent group to enforce white supremacy.
- May 1–3 – The Memphis Massacre transpires.
- July – New Orleans Riot: white citizens riot against blacks.
- July 21 – The Southern Homestead Act of 1866 opens 46 million acres of land in Alabama, Arkansas, Florida, Louisiana, and Mississippi; African Americans have priority access until January 1, 1877.
- September 21 – The U.S. Army regiment of Buffalo Soldiers (African Americans) is formed.
- One version of the Second Freedmen's Bureau Act is vetoed and fails; another is vetoed and passed via override in July.
- February 14 – Augusta Institute, now known as Morehouse College, is founded in the basement of Springfield Baptist Church in Augusta, Georgia.
- March 2 – Howard University is founded in Washington, D.C.
- April 1 – Hampton Institute is founded in Hampton, Virginia.
- July 9 – Section 1 of the Fourteenth Amendment to the United States Constitution requires due process and equal protection.
- Through 1877, whites attack black and white Republicans to suppress voting. Every election cycle is accompanied by violence, increasing in the 1870s.
- Elizabeth Keckly publishes Behind the Scenes (or, Thirty Years a Slave and Four Years in the White House).
- February 3 – The Fifteenth Amendment to the United States Constitution guarantees the right of male citizens of the United States to vote regardless of race, color or previous condition of servitude.
- February 25 – Hiram Rhodes Revels becomes the first black member of the Senate (see African Americans in the United States Congress).
- The Christian Methodist Episcopal Church is founded.
- The first two Enforcement Acts are passed.
- October 10 – Octavius Catto, a civil rights activist, is murdered during harassment of blacks on Election Day in Philadelphia.
- The U.S. Civil Rights Act of 1871 is passed, also known as the Klan Act and the Third Enforcement Act.
- December 11 – P. B. S. Pinchback is sworn in as the first black member of the U.S. House of Representatives.
- A disputed gubernatorial election in Louisiana causes political violence for more than two years. Both the Republican and Democratic governors hold inaugurations and certify local officials.
- Elijah McCoy patents his first invention, an automatic lubricator that supplied oil to moving parts while a machine was still operating.
- April 14 – In the Slaughter-House Cases, the U.S. Supreme Court votes 5–4 for a narrow reading of the Fourteenth Amendment. The court also discusses dual citizenship: state citizenship and U.S. citizenship.
- Easter – The Colfax Massacre: more than 100 blacks in the Red River area of Louisiana are killed when attacked by white militia after defending Republicans in local office – continuing controversy from the gubernatorial election.
- The Coushatta Massacre transpires: Republican officeholders are run out of town and murdered by white militia before leaving the state – four of the six were relatives of a Louisiana state senator, a northerner who had settled in the South, married into a local family and established a plantation. Five to twenty black witnesses are also killed.
- Founding of paramilitary groups that act as the "military arm of the Democratic Party": the White League in Louisiana and the Red Shirts in Mississippi and in North and South Carolina. They terrorize blacks and Republicans, turning them out of office, killing some, disrupting rallies, and suppressing voting.
- September – In New Orleans, continuing political violence erupts related to the still-contested gubernatorial election of 1872. Thousands of the White League's armed militia march into New Orleans, then the seat of government, where they outnumber the integrated city police and black state militia forces. They defeat the Republican forces and demand that Gov. Kellogg leave office. The Democratic candidate McEnery is installed, and White Leaguers occupy the capitol, state house and arsenal. This is called the "Battle of Liberty Place". The White League and McEnery withdraw after three days, in advance of federal troops arriving to reinforce the Republican state government.
- March 1 – The Civil Rights Act of 1875 is signed.
- The Mississippi Plan is devised to intimidate blacks and suppress black voter registration and voting.
- Lewis Latimer prepares the drawings for Alexander Graham Bell's application for a telephone patent.
- July 8 – The Hamburg Massacre occurs when local people riot against African Americans who were trying to celebrate the Fourth of July.
- varied – White Democrats regain power in many southern state legislatures and pass the first Jim Crow laws.
- With the Compromise of 1877, Republican Rutherford B.
Hayes withdraws federal troops from the South in exchange for being elected President of the United States, causing the collapse of the last three remaining Republican state governments. The compromise formally ends the Reconstruction Era.
- Spring – Thousands of African Americans refuse to live under segregation in the South and migrate to Kansas. They become known as Exodusters.
- In Strauder v. West Virginia, the U.S. Supreme Court rules that African Americans cannot be excluded from juries.
- During the 1880s, African Americans in the South reach a peak in the numbers elected to and holding local offices, even while white Democrats are working to assert control at the state level.
- April 11 – Spelman Seminary is founded as the Atlanta Baptist Female Seminary.
- July 4 – Booker T. Washington opens the Tuskegee Normal and Industrial Institute in Tuskegee, Alabama.
- Lewis Latimer invents the first long-lasting filament for light bulbs and installs his lighting system in New York City, Philadelphia, and Canada. Later, he becomes one of the 28 members of Thomas Edison's Pioneers.
- A biracial populist coalition briefly achieves power in Virginia. The legislature founds the first public college for African Americans, Virginia Normal and Collegiate Institute, as well as the first mental hospital for African Americans, both near Petersburg, Virginia. The hospital was established in December 1869 at Howard's Grove Hospital, a former Confederate unit, but is moved to a new campus in 1882.
- October 16 – In the Civil Rights Cases, the U.S. Supreme Court strikes down the Civil Rights Act of 1875 as unconstitutional.
- Mark Twain's Adventures of Huckleberry Finn is published, featuring the admirable African-American character Jim.
- Judy W. Reed, of Washington, D.C., and Sarah E. Goode, of Chicago, are the first African-American women inventors to receive patents. Signed with an "X", Reed's patent no. 305,474, granted September 23, 1884, is for a dough kneader and roller. Goode's patent for a cabinet bed, patent no. 322,177, is issued on July 14, 1885. Goode, the owner of a Chicago furniture store, invented a folding bed that could be formed into a desk when not in use.
- Ida B. Wells sues the Chesapeake, Ohio & South Western Railroad Company for its use of segregated "Jim Crow" cars.
- Norris Wright Cuney becomes the chairman of the Texas Republican Party, the most powerful role held by any African American in the South during the 19th century.
- October 3 – The State Normal School for Colored Students, which would become Florida A&M University, is founded.
- Mississippi, with a white Democrat-dominated legislature, passes a new constitution that effectively disfranchises most blacks through voter registration and electoral requirements, e.g., poll taxes, residency tests and literacy tests. This shuts them out of the political process, including service on juries and in local offices.
- By 1900, two-thirds of the farmers in the bottomlands of the Mississippi Delta are African Americans who cleared and bought land after the Civil War.
- Ida B. Wells publishes her pamphlet Southern Horrors: Lynch Law in All Its Phases.
- Daniel Hale Williams performs open-heart surgery in 1893 and founds Provident Hospital in Chicago, the first with an interracial staff.
- September 18 – Booker T. Washington delivers his Atlanta Compromise address at the Cotton States and International Exposition in Atlanta, Georgia.
- W. E. B. Du Bois becomes the first African American to earn a Ph.D.
from Harvard University.
- May 18 – In Plessy v. Ferguson, the U.S. Supreme Court upholds de jure racial segregation of "separate but equal" facilities (see "Jim Crow laws" for historical discussion).
- The National Association of Colored Women is formed by the merger of smaller groups.
- As one of the earliest Black Hebrew Israelites in the United States, William Saunders Crowdy establishes the Church of God and Saints of Christ.
- George Washington Carver is invited by Booker T. Washington to head the Agricultural Department at what would become Tuskegee University. His work would revolutionize farming – he found about 300 uses for peanuts.
- Louisiana enacts the first statewide grandfather clause, which exempts illiterate whites from the literacy test requirements for voter registration.
- In Williams v. Mississippi, the U.S. Supreme Court upholds the voter registration and election provisions of Mississippi's constitution because they apply to all citizens. Effectively, however, they disenfranchise blacks and poor whites. As a result, other southern states copy these provisions in their new constitutions and amendments through 1908, disfranchising most African Americans and tens of thousands of poor whites until the 1960s.
- November 10 – A coup d'état begins in Wilmington, North Carolina, resulting in considerable loss of life and property in the African-American community and the installation of a white supremacist Democratic Party regime.
- Since the Civil War, 30,000 African-American teachers have been trained and put to work in the South. The majority of blacks have become literate.
- Booker T. Washington's autobiography Up from Slavery is published.
- Benjamin Tillman, senator from South Carolina, comments on Theodore Roosevelt's dining with Booker T. Washington: "The action of President Roosevelt in entertaining that nigger will necessitate our killing a thousand niggers in the South before they learn their place again."
- September – W. E. B. Du Bois's article "The Talented Tenth" is published.
- W. E. B. Du Bois's seminal work The Souls of Black Folk is published.
- May 15 – Sigma Pi Phi, the first African-American Greek-letter organization, is founded by African-American men as a professional organization, in Philadelphia, Pennsylvania.
- Orlando, Florida hires its first black postman.
- The Brownsville Affair, which eventually involves President Roosevelt.
- December 4 – African-American men found Alpha Phi Alpha at Cornell University, the first intercollegiate fraternity for African-American men.
- December 26 – Jack Johnson wins the World Heavyweight Title.
- Alpha Kappa Alpha is founded at Howard University; African-American college women found the first college sorority for African-American women.
- February 12 – Planned first meeting of the group which would become the National Association for the Advancement of Colored People (NAACP), an interracial group devoted to civil rights. The meeting actually occurs on May 31, but February 12 is normally cited as the NAACP's founding date.
- May 31 – The National Negro Committee meets and is formed; it will be the precursor to the NAACP.
- August 14 – A lynch mob moves through Springfield, Illinois, burning the homes and businesses of black people and black sympathizers, killing many.
- May 30 – The National Negro Committee chooses "National Association for the Advancement of Colored People" as its organization name.
- September 29 – The Committee on Urban Conditions Among Negroes is formed; the next year it will merge with other groups to form the National Urban League.
- The NAACP begins publishing The Crisis.
- January 5 – Kappa Alpha Psi fraternity is founded at Indiana University.
- November 17 – Omega Psi Phi fraternity is founded at Howard University.
- The Moorish Science Temple of America, a religious organization, is founded by Noble Drew Ali (Timothy Drew).
- January 13 – Delta Sigma Theta sorority is founded at Howard University.
- January 9, 1914 – Phi Beta Sigma fraternity is founded at Howard University.
- Newly elected president Woodrow Wilson orders physical re-segregation of federal workplaces and employment after nearly 50 years of integrated facilities.
- February 8 – The Birth of a Nation is released in theaters. The NAACP protests in cities across the country, convincing some not to show the film.
- June 21 – In Guinn v. United States, the U.S. Supreme Court rules against grandfather clauses used to deny blacks the right to vote.
- September 9 – Professor Carter G. Woodson founds the Association for the Study of African American Life and History in Chicago.
- A schism from the National Baptist Convention, USA, Inc. forms the National Baptist Convention of America, Inc.
- January – Professor Carter Woodson and the Association for the Study of Negro Life and History begin publishing the Journal of Negro History, the first academic journal devoted to the study of African-American history.
- March 23 – Marcus Garvey arrives in the U.S. (see Garveyism).
- Los Angeles hires the country's first black female police officer.
- The Great Migration begins and lasts until 1940. Approximately one and a half million African Americans move from the Southern United States to the North and Midwest. More than five million migrate in the Second Great Migration from 1940 to 1970, which includes more destinations in California and the West.
- May–June – East St. Louis Riot
- August 23 – Houston Riot
- In Buchanan v. Warley, the U.S. Supreme Court unanimously rules that a ban on selling property in white-majority neighborhoods to black people and vice versa violates the 14th Amendment.
- Viola Pettus, an African-American nurse in Marathon, Texas, wins attention for her courageous care of victims of the Spanish Influenza, including members of the Ku Klux Klan.
- Mary Turner, a 33-year-old woman who was eight months pregnant, is lynched in Lowndes County, Georgia, after she publicly denounces the extrajudicial killing of her husband by a mob. Her death is considered a stark example of racially motivated mob violence in the American South and was referenced by the NAACP's anti-lynching campaign of the 1920s, 1930s, and 1940s.
- Summer – Red Summer of 1919 riots: Chicago; Washington, D.C.; Knoxville; Indianapolis; and elsewhere.
- September 28 – Omaha Race Riot of 1919, Nebraska.
- October 1–5 – Elaine Race Riot, Phillips County, Arkansas. Numerous blacks are convicted by an all-white jury or plead guilty. In Moore v. Dempsey (1923), the U.S. Supreme Court overturns six convictions for denial of due process under the Fourteenth Amendment.
- February 13 – Negro National League (1920–1931) is established.
- Fritz Pollard and Bobby Marshall are the first two African-American players in the National Football League (NFL). Pollard goes on to become the first African-American coach in the NFL.
- January 16 – Zeta Phi Beta sorority is founded at Howard University.
- May 23 – Shuffle Along is the first major African-American hit musical on Broadway.
- May 31 – Tulsa Race Riot, Oklahoma
- Bessie Coleman becomes the first African American to earn a pilot's license.
- November 12 – Sigma Gamma Rho sorority is founded at Butler University.
- Garrett A. Morgan invents and patents an automatic three-position traffic signal.
- January 1–7 – Rosewood massacre: Six African Americans and two whites die in a week of violence when a white woman in Rosewood, Florida, claims she was beaten and raped by a black man.
- February 19 – In Moore v. Dempsey, the U.S. Supreme Court holds that mob-dominated trials violate the Due Process Clause of the Fourteenth Amendment.
- Jean Toomer's novel Cane is published.
- The Knights of Columbus commissions and publishes The Gift of Black Folk: The Negroes in the Making of America by civil rights activist and NAACP cofounder W. E. B. Du Bois as part of the organization's Racial Contribution Series.
- Spelman Seminary becomes Spelman College.
- Spring – The American Negro Labor Congress is founded.
- August 8 – 35,000 Ku Klux Klan members march in Washington, D.C. (see List of protest marches on Washington, D.C.)
- Countee Cullen publishes his first collection of poems, Color.
- The Brotherhood of Sleeping Car Porters is organized.
- The Harlem Renaissance (also known as the New Negro Movement) is named after the anthology The New Negro, edited by Alain Locke.
- The Harlem Globetrotters are founded.
- Historian Carter G. Woodson proposes Negro History Week.
- Corrigan v. Buckley challenges deed restrictions preventing a white seller from selling to a black buyer. The U.S. Supreme Court rules in favor of Buckley, stating that the 14th Amendment does not apply because Washington, DC is a city and not a state, and that the Due Process Clause does not apply to private agreements.
- The League of United Latin American Citizens, the first organization to fight for the civil rights of Latino Americans, is founded in Corpus Christi, Texas.
- John Hope becomes president of Atlanta University. Graduate classes are offered in the liberal arts, and Atlanta University becomes the first predominantly black university to offer graduate education.
- Unknown – Hallelujah! is released, one of the first films to star an all-black cast.
- August 7 – Thomas Shipp and Abram Smith, African-American men, are lynched in Marion, Indiana, after being taken from jail and beaten by a mob. They had been arrested that night as suspects in a robbery, murder, and rape case. A third African-American suspect, 16-year-old James Cameron, had also been arrested and narrowly escaped being killed by the mob. He later became a civil rights activist.
- The League of Struggle for Negro Rights is founded in New York City.
- Jessie Daniel Ames forms the Association of Southern Women for the Prevention of Lynching. She gets 40,000 white women to sign a pledge against lynching and for change in the South.
- March 25 – The Scottsboro Boys are arrested in what would become a nationally controversial case.
- Walter Francis White becomes the executive secretary of the NAACP.
- The Tuskegee Study of Untreated Syphilis in the Negro Male begins at the Tuskegee Institute (now Tuskegee University).
- Hocutt v. Wilson unsuccessfully challenges segregation in higher education in the United States.
- Wallace D. Fard, leader of the Nation of Islam, mysteriously disappears. He is succeeded by Elijah Muhammad.
- June 18 – In Murray v. Pearson, Thurgood Marshall and Charles Hamilton Houston of the NAACP successfully argue the landmark case in Maryland to open admissions to the segregated University of Maryland School of Law on the basis of equal protection under the Fourteenth Amendment.
- Zora Neale Hurston writes the novel Their Eyes Were Watching God.
- The Southern Negro Youth Congress is founded.
- Joe Louis becomes the first African-American heavyweight boxing world champion since Jack Johnson.
- October – The National Negro Congress meets at the Metropolitan Opera House in Philadelphia, Pa.
- In Missouri ex rel. Gaines v. Canada, the U.S. Supreme Court rules that a state providing a law school for white students must also provide in-state legal education for black students.
- Easter Sunday – Marian Anderson performs on the steps of the Lincoln Memorial in Washington, D.C. at the instigation of Secretary of the Interior Harold Ickes, after the Daughters of the American Revolution (DAR) refused permission for Anderson to sing to an integrated audience in Constitution Hall and the federally controlled District of Columbia Board of Education declined a request to use the auditorium of a white public high school.
- Billie Holiday first performs "Strange Fruit" in New York City. The song, a protest against lynching written by Abel Meeropol under the pen name Lewis Allan, becomes a signature song for Holiday.
- The Little League is formed, becoming the nation's first non-segregated youth sport.
- August 21 – Five African-American men recruited and trained by African-American attorney Samuel Wilbert Tucker conduct a sit-in at the then-segregated Alexandria, Virginia, library and are arrested after being refused library cards.
- September 21 – Followers of Father Divine and the International Peace Mission Movement join with workers to protest racially unfair hiring practices by conducting "a kind of customers' nickel sit down strike" in a restaurant.

1940s to 1970
- Second Great Migration – In multiple acts of resistance and in response to factory labor shortages in World War II, more than 5 million African Americans leave the violence and segregation of the South for jobs, education, and the chance to vote in northern, midwestern, and western cities (mainly to the West Coast).
- February 12 – In Chambers v. Florida, the U.S. Supreme Court frees three black men who were coerced into confessing to a murder.
- February 29 – Hattie McDaniel becomes the first African American to win an Academy Award. She wins Best Supporting Actress for her performance as Mammy in Gone with the Wind.
- October 25 – Benjamin O. Davis, Sr. is promoted to be the first African-American general in the U.S. Army.
- Richard Wright's Native Son is published.
- The NAACP Legal Defense and Educational Fund is formed.
- January 25 – A. Philip Randolph proposes a March on Washington, effectively beginning the March on Washington Movement.
- Early 1941 – The U.S. Army forms African-American air combat units, the Tuskegee Airmen. The Tuskegee Airmen fly 15,000 combat sorties, winning 150 Distinguished Flying Crosses, 744 Air Medals, 8 Purple Hearts, and 14 Bronze Stars.
- June 25 – President Franklin Delano Roosevelt issues Executive Order 8802, the "Fair Employment Act", to require equal treatment and training of all employees by defense contractors.
- Mitchell v. US – the Interstate Commerce Act is used to successfully desegregate seating on trains.
- Six nonviolence activists in the Fellowship of Reconciliation (Bernice Fisher, James Russell Robinson, George Houser, James Farmer, Jr., Joe Guinn, and Homer Jack) found the Committee on Racial Equality, which becomes the Congress of Racial Equality.
- Dr. Charles R. Drew develops techniques for separating and storing blood. He heads an American Red Cross effort to collect blood for American armed forces, and serves as chief surgeon of Howard University's medical school and professor of surgery. His achievements are recognized when he becomes the first African-American surgeon to serve as an examiner on the American Board of Surgery.
- The 1943 Detroit race riot erupts in Detroit, Michigan.
- Lena Horne stars in the all African-American film Stormy Weather.
- April 3 – In Smith v. Allwright, the U.S. Supreme Court rules that the whites-only Democratic Party primary in Texas is unconstitutional.
- April 25 – The United Negro College Fund is incorporated.
- July 17 – Port Chicago disaster, which led to the Port Chicago mutiny.
- August 1–7 – The Philadelphia transit strike of 1944, a strike by white transit workers protesting against job advancement by black workers, is broken by the U.S. military under the provisions of the Smith-Connally Act.
- September 3 – Recy Taylor is kidnapped and gang-raped in Abbeville, Alabama, by six white men, who later confess to the crimes but are never charged. The case is investigated by Rosa Parks and provides an early organizational spark for the Montgomery bus boycott.
- November 7 – Adam Clayton Powell, Jr. is elected to the U.S. House of Representatives from Harlem, New York.
- Miami hires its first black police officers.

1945–1975: The Civil Rights Movement
- April 5–6 – Freeman Field Mutiny, in which black officers of the U.S. Army Air Corps attempt to desegregate an all-white officers' club in Indiana.
- August – The first issue of Ebony is published.
- June 3 – In Morgan v. Virginia, the U.S. Supreme Court invalidates provisions of the Virginia Code which require the separation of white and colored passengers where applied to interstate bus transport. The state law is unconstitutional insofar as it burdens interstate commerce, an area of federal jurisdiction.
- In Florida, Daytona Beach, DeLand, Sanford, Fort Myers, Tampa, and Gainesville all have black police officers. So do Little Rock, Arkansas; Louisville, Kentucky; Charlotte, North Carolina; Austin, Houston, Dallas, and San Antonio in Texas; Richmond, Virginia; and Chattanooga and Knoxville in Tennessee.
- Renowned actor/singer Paul Robeson founds the American Crusade Against Lynching.
- April 9 – The Congress of Racial Equality (CORE) sends 16 men on the Journey of Reconciliation.
- April 15 – Jackie Robinson plays his first game for the Brooklyn Dodgers, becoming the first black player in major league baseball in 60 years.
- John Hope Franklin's non-fiction book From Slavery to Freedom is published.
- The United Nations adopts the Universal Declaration of Human Rights; Article 4 bans slavery globally.
- January 12 – In Sipuel v. Board of Regents of Univ. of Okla., the U.S. Supreme Court rules that the State of Oklahoma and the University of Oklahoma Law School could not deny admission based on race ("color").
- May 3 – In Shelley v. Kraemer and companion case Hurd v. Hodge, the U.S. Supreme Court rules that the government cannot enforce racially restrictive covenants and asserts that they are in conflict with the nation's public policy.
- July 12 – Hubert Humphrey makes a controversial speech in favor of American civil rights at the Democratic National Convention.
- July 26 – President Harry S. Truman issues Executive Order 9981 ordering the end of racial discrimination in the Armed Forces. Desegregation comes after 1950.
- Atlanta hires its first black police officers.
- June 5 – In McLaurin v. Oklahoma State Regents, the U.S. Supreme Court rules that a public institution of higher learning could not provide different treatment to a student solely because of his race.
- June 5 – In Sweatt v. Painter, the U.S. Supreme Court rules that a separate-but-equal Texas law school was actually unequal, partly in that it deprived black students of the collegiality of future white lawyers.
- June 5 – In Henderson v. United States, the U.S. Supreme Court abolishes segregation in railroad dining cars.
- September 15 – The University of Virginia, under a federal court order, admits a black student to its law school.
- The Leadership Conference on Civil Rights is created in Washington, DC to promote the enactment and enforcement of effective civil rights legislation and policy.
- Orlando, Florida, hires its first black police officers.
- Dr. Ralph Bunche wins the 1950 Nobel Peace Prize.
- Chuck Cooper, Nathaniel Clifton, and Earl Lloyd break the color barrier in the NBA.
- February 2 and 5 – Execution of the Martinsville Seven.
- February 15 – The Maryland legislature ends segregation on trains and boats; meanwhile, the Georgia legislature votes to deny funds to schools that integrate.
- April 23 – High school students in Farmville, Virginia, go on strike; the case Davis v. County School Board of Prince Edward County is heard by the U.S. Supreme Court in 1954 as part of Brown v. Board of Education.
- June 23 – A Federal Court ruling upholds segregation in SC public schools.
- July 11 – White residents riot in Cicero, Illinois, when a black family tries to move into an apartment in the all-white suburb of Chicago; the National Guard disperses them the next day.
- July 26 – The United States Army high command announces it will desegregate the Army.
- December 17 – The "We Charge Genocide" petition, presented to the United Nations by the Civil Rights Congress, accuses the United States of violating the Genocide Convention.
- December 24 – The home of NAACP activists Harry and Harriette Moore in Mims, Florida, is bombed by a KKK group; both die of injuries.
- December 28 – The Regional Council of Negro Leadership (RCNL) is founded in Cleveland, Mississippi by T. R. M. Howard, Amzie Moore, Aaron Henry, and other civil rights activists. Assisted by member Medgar Evers, the RCNL distributes more than 50,000 bumper stickers bearing the slogan, "Don't Buy Gas Where You Can't Use the Restroom." This campaign successfully pressures many Mississippi service stations to provide restrooms for blacks.
- January 5 – Governor of Georgia Herman Talmadge criticizes television shows for depicting blacks and whites as equal.
- January 28 – Briggs v. Elliott: after a District Court had ordered separate but equal school facilities in South Carolina, the U.S. Supreme Court agrees to hear the case as part of Brown v. Board of Education.
- March 7 – Another federal court upholds segregated education laws in Virginia.
- April 1 – Chancellor Collins J. Seitz finds for the black plaintiffs (Gebhart v. Belton, Gebhart v. Bulah) and orders the integration of Hockessin elementary and Claymont High School in Delaware, based on an assessment of the "separate but equal" public school facilities required by the Delaware constitution.
- September 4 – Eleven black students attend the first day of school at Claymont High School, Delaware, becoming the first black students in the 17 segregated states to integrate a white public school. The day occurs without incident or notice by the community.
- September 5 – The Delaware State Attorney General informs Claymont Superintendent Stahl that the black students will have to go home because the case is being appealed. Stahl, the School Board, and the faculty refuse, and the students remain. The two Delaware cases are argued before the Warren U.S. Supreme Court by Redding, Greenberg, and Marshall and are used as an example of how integration can be achieved peacefully. They are a primary influence in the Brown v. Board case. The students become active in sports, music, and theater. The first two black students graduate in June 1954, just one month after the Brown v. Board decision.
- Ralph Ellison's novel Invisible Man is published and wins the National Book Award.
- June 8 – The U.S. Supreme Court strikes down segregation in Washington, DC restaurants.
- August 13 – Executive Order 10479, signed by President Dwight D. Eisenhower, establishes the anti-discrimination Committee on Government Contracts.
- September 1 – In the landmark case Sarah Keys v. Carolina Coach Company, WAC Sarah Keys, represented by civil rights lawyer Dovey Roundtree, becomes the first black person to challenge "separate but equal" in bus segregation before the Interstate Commerce Commission.
- James Baldwin's semi-autobiographical novel Go Tell It on the Mountain is published.
- May 3 – In Hernandez v. Texas, the U.S. Supreme Court rules that Mexican Americans and all other racial groups in the United States are entitled to equal protection under the 14th Amendment to the U.S. Constitution.
- May 17 – The U.S. Supreme Court rules against the "separate but equal" doctrine in Brown v. Board of Education of Topeka, Kans. and in Bolling v. Sharpe, thus overturning Plessy v. Ferguson.
- July 11 – The first White Citizens' Council meeting takes place, in Mississippi.
- July 30 – At a special meeting in Jackson, Mississippi called by Governor Hugh White, T.R.M. Howard of the Regional Council of Negro Leadership, along with nearly one hundred other black leaders, publicly refuses to support a segregationist plan to maintain "separate but equal" in exchange for a crash program to increase spending on black schools.
- September 2 – In Montgomery, Alabama, 23 black children are prevented from attending all-white elementary schools, defying the recent U.S. Supreme Court ruling.
- September 7 – The District of Columbia ends segregated education; Baltimore, Maryland follows suit on September 8.
- September 15 – Protests by white parents in White Sulphur Springs, WV force schools to postpone desegregation another year.
- September 16 – Mississippi responds with a constitutional amendment permitting the abolition of its public schools.
- September 30 – Integration of a high school in Milford, Delaware collapses when white students boycott classes.
- October 4 – Student demonstrations take place against integration of Washington, DC public schools.
- October 19 – A federal judge upholds an Oklahoma law requiring African-American candidates to be identified on voting ballots as "negro".
- October 30 – Desegregation of the U.S. Armed Forces is said to be complete.
- November – Charles Diggs, Jr., of Detroit is elected to Congress, the first African American elected from Michigan.
- Frankie Muse Freeman is the lead attorney for the landmark NAACP case Davis et al. v. the St. Louis Housing Authority, which ends legal racial discrimination in public housing in the city. Constance Baker Motley is also an attorney for the NAACP; it is a rarity to have two women attorneys leading such a high-profile case.
- January 7 – Marian Anderson (of 1939 fame) becomes the first African American to perform with the New York Metropolitan Opera.
- January 15 – President Dwight D. Eisenhower signs Executive Order 10590, establishing the President's Committee on Government Policy to enforce a nondiscrimination policy in Federal employment.
- January 20 – Demonstrators from CORE and Morgan State University stage a successful sit-in to desegregate Read's Drug Store in Baltimore, Maryland.
- April 5 – Mississippi passes a law penalizing white students who attend school with blacks with jail and fines.
- May 7 – NAACP and Regional Council of Negro Leadership activist Reverend George W. Lee is killed in Belzoni, Mississippi.
- May 31 – The U.S. Supreme Court rules in "Brown II" that desegregation must occur with "all deliberate speed".
- June 8 – The University of Oklahoma decides to allow black students.
- June 23 – The Virginia governor and Board of Education decide to continue segregated schools into 1956.
- June 29 – The NAACP wins a U.S. Supreme Court suit which orders the University of Alabama to admit Autherine Lucy.
- July 11 – The Georgia Board of Education orders that any teacher supporting integration be fired.
- July 14 – A Federal Appeals Court overturns segregation on Columbia, SC buses.
- August 1 – The Georgia Board of Education fires all black teachers who are members of the NAACP.
- August 13 – Regional Council of Negro Leadership registration activist Lamar Smith is murdered in Brookhaven, Mississippi.
- August 28 – Teenager Emmett Till is murdered after reportedly whistling at a white woman in Money, Mississippi.
- November 7 – The Interstate Commerce Commission bans bus segregation in interstate travel in Sarah Keys v. Carolina Coach Company, extending the logic of Brown v. Board to the area of bus travel across state lines. On the same day, the U.S. Supreme Court bans segregation in public parks and playgrounds. The governor of Georgia responds that his state would "get out of the park business" rather than allow playgrounds to be desegregated.
- December 1 – Rosa Parks refuses to give up her seat on a bus, starting the Montgomery bus boycott. This occurs nine months after 15-year-old high school student Claudette Colvin became the first to refuse to give up her seat. Colvin's was the legal case which eventually ended the practice in Montgomery.
- Roy Wilkins becomes the NAACP executive secretary.
- January 2 – Georgia Tech president Blake R. Van Leer stands up to Governor Marvin Griffin's threats to bar Georgia Tech from playing against Pittsburgh and its black player Bobby Grier over segregation.
- January 9 – Virginia voters and representatives decide to fund private schools with state money to maintain segregation.
- January 16 – FBI Director J. Edgar Hoover writes a rare open letter of complaint directed to civil rights leader Dr. T. R. M. Howard after Howard charged in a speech that the "FBI can pick up pieces of a fallen airplane on the slopes of a Colorado mountain and find the man who caused the crash, but they can't find a white man when he kills a Negro in the South."
- January 24 – The governors of Georgia, Mississippi, South Carolina, and Virginia agree to block integration of schools.
- February 1 – The Virginia legislature passes a resolution that the U.S. Supreme Court integration decision was an "illegal encroachment".
- February 3 – Autherine Lucy is admitted to the University of Alabama. Whites riot for days, and she is suspended. Later, she is expelled for her part in further legal action against the university.
- February 24 – The policy of Massive Resistance is declared by U.S. Senator Harry F. Byrd, Sr.
- February/March – The Southern Manifesto, opposing integration of schools, is created and signed by members of the Congressional delegations of Southern states, including 19 senators and 81 members of the House of Representatives, notably the entire delegations of the states of Alabama, Arkansas, Georgia, Louisiana, Mississippi, South Carolina, and Virginia. On March 12, it is released to the press.
- February 13 – The Wilmington, Delaware school board decides to end segregation.
- February 22 – Ninety black leaders in Montgomery, Alabama are arrested for leading a bus boycott.
- February 29 – The Mississippi legislature declares the U.S. Supreme Court integration decision "invalid" in that state.
- March 1 – The Alabama legislature votes to ask for federal funds to deport blacks to northern states.
- March 12 – The U.S. Supreme Court orders the University of Florida to admit a black law school applicant "without delay".
- March 22 – Martin Luther King Jr. is sentenced to a fine or jail for instigating the Montgomery bus boycott; the sentence is suspended pending appeal.
- April 11 – Singer Nat King Cole is assaulted during a segregated performance at Municipal Auditorium in Birmingham, Alabama.
- April 23 – The U.S. Supreme Court strikes down segregation on buses nationwide.
- May 26 – Circuit Judge Walter B. Jones issues an injunction prohibiting the NAACP from operating in Alabama.
- May 28 – The Tallahassee, Florida bus boycott begins.
- June 5 – The Alabama Christian Movement for Human Rights (ACMHR) is founded at a mass meeting in Birmingham, Alabama.
- September 2–11 – Teargas and the National Guard are used to quell segregationist rioting in Clinton, TN; 12 black students enter high school under Guard protection. Smaller disturbances occur in Mansfield, TX and Sturgis, KY.
- September 10 – Two black students are prevented by a mob from entering a junior college in Texarkana, Texas. Schools in Louisville, KY are successfully desegregated.
- September 12 – Four black children enter an elementary school in Clay, KY under National Guard protection; white students boycott. The school board bars the four again on September 17.
- October 15 – Integrated athletic or social events are banned in Louisiana.
- November 5 – Nat King Cole hosts the first show of The Nat King Cole Show. The show goes off the air after only 13 months because no national sponsor could be found.
- November 13 – In Browder v. Gayle, the U.S. Supreme Court strikes down Alabama laws requiring segregation of buses. This ruling, together with the ICC's 1955 ruling in Sarah Keys v. Carolina Coach banning "Jim Crow laws" in bus travel among the states, is a landmark in outlawing "Jim Crow" in bus travel.
- December 20 – Federal marshals enforce the ruling to desegregate bus systems in Montgomery.
- December 24 – Blacks in Tallahassee, Florida begin defying segregation on city buses.
- December 25 – The parsonage in Birmingham, Alabama occupied by movement leader Fred Shuttlesworth is bombed. Shuttlesworth receives only minor scrapes.
- December 26 – The ACMHR tests the Browder v. Gayle ruling by riding in the white sections of Birmingham city buses; 22 demonstrators are arrested.
- The Mississippi State Sovereignty Commission is formed.
- Director J. Edgar Hoover orders the FBI to begin the COINTELPRO program to investigate and disrupt "dissident" groups within the United States.
- February 8 – The Georgia Senate votes to declare the 14th and 15th Amendments to the United States Constitution null and void in that state.
- February 14 – The Southern Christian Leadership Conference is formed; Dr. Martin Luther King Jr. is named its chairman.
- April 18 – The Florida Senate votes to consider the U.S. Supreme Court's desegregation decisions "null and void".
- May 17 – The Prayer Pilgrimage for Freedom in Washington, DC is at the time the largest nonviolent demonstration for civil rights, and features Dr. King's "Give Us The Ballot" speech.
- September 2 – Orval Faubus, governor of Arkansas, calls out the National Guard to block integration of Little Rock Central High School.
- September 6 – A federal judge orders Nashville public schools to integrate immediately.
- September 9 – The Civil Rights Act of 1957 is signed by President Eisenhower.
- September 15 – The New York Times reports that in the three years since the decision, there has been minimal progress toward integration in four southern states, and no progress at all in seven.
- September 24 – President Dwight Eisenhower federalizes the National Guard and also orders US Army troops to ensure Little Rock Central High School in Arkansas is integrated. Federal and National Guard troops escort the Little Rock Nine.
- October 7 – The finance minister of Ghana is refused service at a Dover, Delaware restaurant. President Eisenhower hosts him at the White House to apologize on October 10.
- October 9 – The Florida legislature votes to close any school if federal troops are sent to enforce integration.
- October 31 – Officers of the NAACP are arrested in Little Rock for failing to comply with a new financial disclosure ordinance.
- November 26 – The Texas legislature votes to close any school where federal troops might be sent.
- January 18 – Willie O'Ree breaks the color barrier in the National Hockey League, in his first game playing for the Boston Bruins.
- June 29 – Bethel Baptist Church (Birmingham, Alabama) is bombed by Ku Klux Klan members; a church guard moves the bomb away before it explodes, and no one is killed.
- June 30 – In NAACP v. Alabama, the U.S. Supreme Court rules that the NAACP was not required to release membership lists to continue operating in the state.
- July – The NAACP Youth Council sponsors sit-ins at the lunch counter of a Dockum Drug Store in downtown Wichita, Kansas. After three weeks, the movement successfully gets the store to change its policy of segregated seating, and soon afterward all Dockum stores in Kansas are desegregated.
- August 19 – Clara Luper and the NAACP Youth Council conduct the largest successful sit-in to date, on drug store lunch counters in Oklahoma City. This starts a successful six-year campaign by Luper and the council to desegregate businesses and related institutions in Oklahoma City.
- August – Jimmy Wilson is sentenced to death in Alabama for stealing $1.95; Secretary of State John Foster Dulles asks Governor Jim Folsom to commute his sentence because of international criticism.
- September 2 – Governor J. Lindsay Almond of Virginia threatens to shut down any school if it is forced to integrate.
- September 4 – The Justice Department sues under the Civil Rights Act to force Terrell County, Georgia to register blacks to vote.
- September 8 – A federal judge orders Louisiana State University to desegregate; 69 African Americans enroll successfully on September 12.
- September 12 – In Cooper v. Aaron, the U.S. Supreme Court rules that the states are bound by the Court's decisions. Governor Faubus responds by shutting down all four high schools in Little Rock, and Governor Almond shuts one in Front Royal, Virginia.
- September 18 – Governor Almond closes two more schools in Charlottesville, Virginia, and six in Norfolk on September 27.
- September 29 – The U.S. Supreme Court rules that states may not use evasive measures to avoid desegregation.
- October 8 – A federal judge in Harrisonburg, VA rules that public money may not be used for segregated private schools.
- October 20 – Thirteen blacks are arrested for sitting in the front of a bus in Birmingham.
- November 28 – A federal court throws out a Louisiana law against integrated athletic events.
- December 8 – Voter registration officials in Montgomery refuse to cooperate with a US Civil Rights Commission investigation.
- Publication of Here I Stand, Paul Robeson's manifesto-autobiography.
- January 9 – One federal judge throws out segregation on Atlanta, GA buses, while another orders Montgomery registrars to comply with the Civil Rights Commission.
- January 12 – Motown Records is founded by Berry Gordy.
- January 19 – A federal appeals court overturns Virginia's closure of the schools in Norfolk; they reopen January 28 with 17 black students.
- February 2 – A high school in Arlington, VA desegregates, allowing four black students.
- April 10 – Three schools in Alexandria, Virginia desegregate with a total of nine black students.
- April 18 – King speaks for the integration of schools at a rally of 26,000 at the Lincoln Memorial in Washington, DC.
- April 24 – Mack Charles Parker is lynched three days before his trial.
- November 20 – Alabama passes laws to limit black voter registration.
- A Raisin in the Sun, a play by Lorraine Hansberry, debuts on Broadway. The 1961 film version will star Sidney Poitier.
- February 1 – Four black students sit at the Woolworth's lunch counter in Greensboro, North Carolina, sparking six months of the Greensboro sit-ins.
- February 13 – The Nashville sit-ins begin, although the Nashville students, trained by activist and nonviolence teacher James Lawson, had been doing preliminary groundwork towards the action for two months.
- February 17 – An Alabama grand jury indicts Dr. King for tax evasion.
- February 19 – Virginia Union University students, called the Richmond 34, stage a sit-in at the Woolworth's lunch counter in Richmond, Virginia.
- February 22 – The Richmond 34 stage a sit-in in the Richmond Room at Thalhimer's department store.
- March 3 – Vanderbilt University expels James Lawson for sit-in participation.
- March 4 – Houston's first sit-in, led by Texas Southern University students, is held at the Weingarten's lunch counter, located at 4110 Almeda in Houston, Texas.
- March 7 – Felton Turner of Houston is beaten and hanged upside-down in a tree, the initials KKK carved on his chest.
- March 19 – San Antonio becomes the first city to integrate lunch counters.
- March 20 – Florida Governor LeRoy Collins calls lunch counter segregation "unfair and morally wrong".
- April 8 – A weak civil rights bill survives a Senate filibuster.
- April 15–17 – The Student Nonviolent Coordinating Committee (SNCC) is formed in Raleigh, North Carolina.
- April 19 – Z. Alexander Looby's home is bombed, with no injuries. Looby, a Nashville civil rights lawyer, is active in the city's ongoing sit-in movement.
- May – The Nashville sit-ins end successfully.
- May 6 – The Civil Rights Act of 1960 is signed by President Dwight D. Eisenhower.
- May 28 – William Robert Ming and Hubert Delaney obtain an acquittal of Dr. King from an all-white jury in Alabama.
- June 24 – King meets Senator John F. Kennedy (JFK).
- June 28 – Bayard Rustin resigns from SCLC after condemnation by Rep. Adam Clayton Powell Jr.
- July 11 – To Kill a Mockingbird is published.
- July 31 – Elijah Muhammad calls for an all-black state. Membership in the Nation of Islam is estimated at 100,000.
- August – Reverend Wyatt Tee Walker replaces Ella Baker as SCLC's Executive Director.
- October 19 – Dr. King and fifty others are arrested at a sit-in at Atlanta's Rich's Department Store.
- October 26 – Dr. King's earlier probation is revoked; he is transferred to Reidsville State Prison.
- October 28 – After intervention from Robert F. Kennedy (RFK), King is freed on bond.
- November 8 – John F. Kennedy defeats Richard Nixon in the 1960 presidential election.
- November 14 – Ruby Bridges becomes the first African-American child to attend an all-white elementary school in the South (William Frantz Elementary School) following court-ordered integration in New Orleans, Louisiana. This event was portrayed by Norman Rockwell in his 1964 painting The Problem We All Live With.
- December 5 – In Boynton v. Virginia, the U.S. Supreme Court holds that racial segregation in bus terminals is illegal because such segregation violates the Interstate Commerce Act. This ruling, in combination with the ICC's 1955 decision in Keys v. Carolina Coach, effectively outlaws segregation on interstate buses and at the terminals servicing such buses.
- January 11 – Rioting over the court-ordered admission of the first two African Americans (Hamilton E. Holmes and Charlayne Hunter-Gault) at the University of Georgia leads to their suspension, but they are ordered reinstated.
- January 31 – A member of the Congress of Racial Equality (CORE) and nine students are arrested in Rock Hill, South Carolina for a sit-in at a McCrory's lunch counter.
- March 6 – JFK issues Executive Order 10925, which establishes a Presidential committee that later becomes the Equal Employment Opportunity Commission.
- May 4 – The first group of Freedom Riders, with the intent of integrating interstate buses, leaves Washington, D.C. by Greyhound bus. The group, organized by the Congress of Racial Equality (CORE), leaves shortly after the U.S. Supreme Court has outlawed segregation in interstate transportation terminals.
- May 14 – The Freedom Riders' bus is attacked and burned outside of Anniston, Alabama. A mob beats the Freedom Riders upon their arrival in Birmingham. The Freedom Riders are arrested in Jackson, Mississippi, and spend forty to sixty days in Parchman Penitentiary.
- May 17 – Nashville students, coordinated by Diane Nash and James Bevel, take up the Freedom Ride, signaling the increased involvement of SNCC.
- May 20 – Freedom Riders are assaulted in Montgomery, Alabama, at the Greyhound Bus Station.
- May 21 – Dr. King, the Freedom Riders, and a congregation of 1,500 at Reverend Ralph Abernathy's First Baptist Church in Montgomery are besieged by a mob of segregationists; Attorney General Robert F. Kennedy sends federal marshals to protect them.
- May 29 – Attorney General Robert F. Kennedy, citing the 1955 landmark ICC ruling in Sarah Keys v. Carolina Coach Company and the U.S. Supreme Court's 1960 decision in Boynton v. Virginia, petitions the ICC to enforce desegregation in interstate travel.
- June–August – The U.S. Department of Justice initiates talks with civil rights groups and foundations on beginning the Voter Education Project.
- July – SCLC begins citizenship classes; Andrew J. Young is hired to direct the program. Bob Moses begins voter registration in McComb, Mississippi.
- September – James Forman becomes SNCC's Executive Secretary.
- September 23 – The Interstate Commerce Commission, at RFK's insistence, issues new rules ending discrimination in interstate travel, effective November 1, 1961, six years after the ICC's own ruling in Sarah Keys v. Carolina Coach Company.
- September 25 – Voter registration activist Herbert Lee is killed in McComb, Mississippi.
- November 1 – All interstate buses are required to display a certificate that reads: "Seating aboard this vehicle is without regard to race, color, creed, or national origin, by order of the Interstate Commerce Commission."
- November 1 – SNCC workers Charles Sherrod and Cordell Reagon and nine Chatmon Youth Council members test the new ICC rules at the Trailways bus station in Albany, Georgia.
- November 17 – SNCC workers help encourage and coordinate black activism in Albany, Georgia, culminating in the founding of the Albany Movement as a formal coalition.
- November 22 – Three high school students from Chatmon's Youth Council are arrested after using "positive actions" by walking into white sections of the Albany bus station.
- November 22 – Albany State College students Bertha Gober and Blanton Hall are arrested after entering the white waiting room of the Albany Trailways station.
- December 10 – Freedom Riders from Atlanta, SNCC leader Charles Jones, and Albany State student Bertha Gober are arrested at Albany Union Railway Terminal, sparking mass demonstrations, with hundreds of protesters arrested over the next five days.
- December 11–15 – Five hundred protesters are arrested in Albany, Georgia.
- December 15 – King arrives in Albany, Georgia in response to a call from Dr. W. G. Anderson, the leader of the Albany Movement to desegregate public facilities.
- December 16 – Dr. King is arrested at an Albany, Georgia demonstration. He is charged with obstructing the sidewalk and parading without a permit.
- December 18 – Albany truce, including a 60-day postponement of King's trial; King leaves town.
- Whitney Young is appointed executive director of the National Urban League and begins expanding its size and mission.
- Black Like Me, written by John Howard Griffin, a white southerner who deliberately tanned and dyed his skin to allow him to directly experience the life of the Negro in the Deep South, is published, displaying the brutality of "Jim Crow" segregation to a national audience.
- January 18–20 – Student protests over sit-in leaders' expulsions at Baton Rouge's Southern University, the nation's largest black school, close it down.
- February – Representatives of SNCC, CORE, and the NAACP form the Council of Federated Organizations (COFO). A grant request to fund COFO voter registration activities is submitted to the Voter Education Project (VEP).
- February 26 – Segregated transportation facilities, both interstate and intrastate, are ruled unconstitutional by the U.S. Supreme Court.
- March – SNCC workers sit in at U.S. Attorney General Robert F. Kennedy's office to protest jailings in Baton Rouge.
- March 20 – The FBI installs wiretaps on NAACP activist Stanley Levison's office.
- April 3 – The Defense Department orders full racial integration of military reserve units, except the National Guard.
- April 9 – Corporal Roman Duckworth is shot by a police officer in Taylorsville, Mississippi.
- June – Leroy Willis becomes the first black graduate of the University of Virginia College of Arts and Sciences.
- June – SNCC workers establish voter registration projects in rural southwest Georgia.
- July 10 – August 28 – SCLC renews protests in Albany; King is in jail July 10–12 and July 27 – August 10.
- August 31 – Fannie Lou Hamer attempts to register to vote in Indianola, Mississippi.
- September 9 – Two black churches used by SNCC for voter registration meetings are burned in Sasser, Georgia.
- September 20 – James Meredith is barred from becoming the first black student to enroll at the University of Mississippi.
- September 30 – October 1 – U.S. Supreme Court Justice Hugo Black orders James Meredith admitted to Ole Miss; he enrolls and a riot ensues. French photographer Paul Guihard and Oxford resident Ray Gunter are killed.
- October – Leflore County, Mississippi, supervisors cut off surplus food distribution in retaliation against the voter drive.
- October 23 – The FBI begins a Communist Infiltration (COMINFIL) investigation of SCLC.
- November 7–8 – Edward Brooke is elected Massachusetts Attorney General, Leroy Johnson is elected Georgia State Senator, and Augustus F. Hawkins is elected the first black member of Congress from California.
- November 20 – Attorney General Kennedy authorizes an FBI wiretap on Stanley Levison's home telephone.
- November 20 – President Kennedy upholds his 1960 presidential campaign promise to eliminate housing segregation by signing Executive Order 11063, banning segregation in Federally funded housing.
- January 14 – Incoming Alabama governor George Wallace calls for "segregation now, segregation tomorrow, segregation forever" in his inaugural address.
- April 3 – May 10 – The Birmingham campaign, organized by the Southern Christian Leadership Conference (SCLC) and the Alabama Christian Movement for Human Rights, challenges city leaders and business owners in Birmingham, Alabama, with daily mass demonstrations.
- April – Mary Lucille Hamilton, Field Secretary for the Congress of Racial Equality, refuses to answer a judge in Gadsden, Alabama, until she is addressed by the honorific "Miss". It was the custom of the time to address white people by honorifics and people of color by their first names. Hamilton is jailed for contempt of court and refuses to pay bail. The case Hamilton v. Alabama is filed by the NAACP and appealed to the U.S. Supreme Court, which rules in 1964 that courts must address persons of color with the same courtesy extended to whites.
- April 7 – Ministers John Thomas Porter, Nelson H. Smith, and A. D. King lead a group of 2,000 marchers to protest the jailing of movement leaders in Birmingham.
- April 12 – Dr. King is arrested in Birmingham for "parading without a permit".
- April 16 – Dr. King's Letter from Birmingham Jail is completed.
- April 23 – CORE activist William L. Moore is murdered in Gadsden, Alabama.
- May 2–4 – Birmingham's juvenile court is inundated with African-American children and teenagers arrested after James Bevel launches his "D-Day" youth march. The action spans three days, becoming the Birmingham Children's Crusade.
- May 9–10 – After images of fire hoses and police dogs turned on protesters are televised, the Children's Crusade lays the groundwork for a negotiated truce on Thursday, May 9, which puts an end to mass demonstrations in return for rolling back oppressive segregation laws and practices. Dr. King and Reverend Fred Shuttlesworth announce the settlement terms on Friday, May 10, only after King holds out to orchestrate the release of thousands of jailed demonstrators with bail money from Harry Belafonte and Robert Kennedy.
- May 11–12 – A double bombing in Birmingham, probably conducted by the KKK in cooperation with local police, precipitates rioting, police retaliation, intervention of state troopers, and finally mobilization of federal troops.
- May 13 – In United States of America and Interstate Commerce Commission v. the City of Jackson, Mississippi et al., the United States Court of Appeals for the Fifth Circuit rules unlawful the city's attempt to circumvent laws desegregating interstate transportation facilities by posting sidewalk signs outside Greyhound, Trailways, and Illinois Central terminals reading "Waiting Room for White Only – By Order Police Department" and "Waiting Room for Colored Only – By Order Police Department".
- May 24 – A group of Black leaders (assembled by James Baldwin) meets with Attorney General Robert F. Kennedy to discuss race relations.
- May 29 – Violence escalates at an NAACP picket of a Philadelphia construction site.
- May 30 – Police attack Florida A&M anti-segregation demonstrators with tear gas and arrest 257.
- June 9 – Fannie Lou Hamer is among several SNCC workers badly beaten by police in the Winona, Mississippi, jail after their bus stops there.
- June 11 – "The Stand in the Schoolhouse Door": Alabama Governor George Wallace stands in front of a schoolhouse door at the University of Alabama in an attempt to stop desegregation by the enrollment of two black students, Vivian Malone and James Hood. Wallace stands aside only after being confronted by federal marshals, Deputy Attorney General Nicholas Katzenbach, and the Alabama National Guard. Later in life he apologizes for his opposition to racial integration.
- June 11 – President Kennedy makes his historic civil rights address, promising a bill to Congress the next week. Speaking of civil rights for "Negroes", he asks for "the kind of equality of treatment which we would want for ourselves."
- June 12 – NAACP field secretary Medgar Evers is assassinated in Jackson, Mississippi. (His murderer is convicted in 1994.)
- Summer – 80,000 blacks quickly register to vote in Mississippi in a test project demonstrating their desire to participate.
- June 19 – President Kennedy sends Congress (H. Doc. 124, 88th Cong., 1st session) his proposed Civil Rights Act. White leaders in business and philanthropy gather at the Carlyle Hotel to raise initial funds for the Council on United Civil Rights Leadership.
- August 28 – Gwynn Oak Amusement Park in northwest Baltimore County, Maryland is desegregated.
- August 28 – The March on Washington for Jobs and Freedom is held. King gives his I Have a Dream speech.
- September 10 – Birmingham, Alabama city schools are integrated by National Guardsmen under orders from President Kennedy.
- September 15 – The 16th Street Baptist Church bombing in Birmingham kills four young girls. That same day, in response to the killings, James Bevel and Diane Nash begin the Alabama Project, which will later grow into the Selma Voting Rights Movement.
- September 19 – Iota Phi Theta fraternity is founded at Morgan State College (now Morgan State University).
- November 10 – Malcolm X delivers his "Message to the Grass Roots" speech, calling for unity against the white power structure and criticizing the March on Washington.
- November 22 – President Kennedy is assassinated. The new president, Lyndon B. Johnson, decides that accomplishing Kennedy's legislative agenda is his best strategy, and pursues it.
- All year – The Alabama Voting Rights Project continues organizing as Bevel, Nash, and James Orange work without the support of SCLC.
- January 23 – The Twenty-fourth Amendment abolishes the poll tax for Federal elections.
- Summer – Mississippi Freedom Summer brings voter registration drives to the state. Organizers create the Mississippi Freedom Democratic Party to elect an alternative slate of delegates for the national convention, as blacks are still officially disfranchised.
- April 13 – Sidney Poitier wins the Academy Award for Best Actor for his role in Lilies of the Field.
- June 21 – Murders of Chaney, Goodman, and Schwerner: the three civil rights workers disappear, later to be found murdered.
- June 28 – The Organization of Afro-American Unity is founded by Malcolm X; it lasts until his death.
- July 2 – The Civil Rights Act of 1964 is signed, banning discrimination based on "race, color, religion, sex or national origin" in employment practices and public accommodations.
- August – Congress passes the Economic Opportunity Act which, among other things, provides federal funds for legal representation of Native Americans in both civil and criminal suits. This allows the ACLU and the American Bar Association to represent Native Americans in cases that later win them additional civil rights.
- August – The Mississippi Freedom Democratic Party delegates challenge the seating of the all-white Mississippi representatives at the Democratic national convention.
- December 10 – Martin Luther King Jr. is awarded the Nobel Peace Prize, at the time the youngest person so honored.
- December 14 – In Heart of Atlanta Motel v. United States, the U.S. Supreme Court upholds the Civil Rights Act of 1964.
- February 18 – A peaceful protest march in Marion, Alabama leads to Jimmie Lee Jackson being shot by Alabama state trooper James Bonard Fowler. Jackson dies on February 26, and Fowler is indicted for his murder in 2007.
- February 21 – Malcolm X is assassinated in Manhattan, New York, probably by three members of the Nation of Islam.
- March 7 – Bloody Sunday: Civil rights workers in Selma, Alabama, begin the Selma to Montgomery march but are forcibly stopped by a massive Alabama State trooper and police blockade as they cross the Edmund Pettus Bridge. Many marchers are injured. This march, initiated and organized by James Bevel, becomes the visual symbol of the Selma Voting Rights Movement.
- March 15 – President Lyndon Johnson uses the phrase "We Shall Overcome" in a speech before Congress on the voting rights bill.
- March 25 – After the completion of the Selma to Montgomery march, a white volunteer, Viola Liuzzo, is shot and killed by Ku Klux Klan members in Alabama, one of whom is an FBI informant.
- June 2 – Black deputy sheriff Oneal Moore is murdered in Varnado, Louisiana.
- July 2 – The Equal Employment Opportunity Commission opens.
- August 6 – The Voting Rights Act of 1965 is signed by President Johnson. It eliminates literacy tests, poll taxes, and other subjective voter tests that were widely responsible for the disfranchisement of African Americans in the Southern states, and provides Federal oversight of voter registration in states and individual voting districts where such discriminatory tests were used.
- August 11–15 – Following accusations of mistreatment and police brutality by the Los Angeles Police Department towards the city's African-American community, the Watts riots erupt in South Central Los Angeles, lasting over five days. Thirty-four people are killed, 1,032 injured, and 3,438 arrested, with over $40 million in property damage.
- September – Raylawni Branch and Gwendolyn Elaine Armstrong become the first African-American students to attend the University of Southern Mississippi.
- September 15 – Bill Cosby co-stars in I Spy, becoming the first black actor to star in a dramatic series on American television.
- September 24 – President Johnson signs Executive Order 11246, requiring Equal Employment Opportunity by federal contractors.
- January 10 – NAACP local chapter president Vernon Dahmer is injured by a bomb in Hattiesburg, Mississippi. He dies the next day.
- June 5 – James Meredith begins a solitary March Against Fear from Memphis, Tennessee to Jackson, Mississippi. Shortly after starting, he is shot with birdshot and injured. Civil rights leaders and organizations rally and continue the march; on June 16, Stokely Carmichael first uses the slogan "Black Power" in a speech.
- Summer – The Chicago Open Housing Movement, led by King, Bevel, and Al Raby, includes a large rally, marches, and demands to Mayor Richard J. Daley and the City of Chicago which are discussed in a movement-ending Summit Conference.
- September – Nichelle Nichols is cast as a female black officer on television's Star Trek. She briefly considers leaving the role, but is encouraged by Dr. King to continue as an example for the black community.
- October – The Black Panther Party is founded by Huey P. Newton and Bobby Seale in Oakland, California.
- November – Edward Brooke is elected to the U.S. Senate from Massachusetts. He is the first black senator since 1881.
- January 9 – Julian Bond is seated in the Georgia House of Representatives by order of the U.S. Supreme Court after his election.
- April 4 – King delivers his "Beyond Vietnam" speech, calling for defeat of "the giant triplets of racism, materialism, and militarism".
- June 12 – In Loving v. Virginia, the U.S. Supreme Court rules that prohibiting interracial marriage is unconstitutional.
- June 13 – Thurgood Marshall is the first African American appointed to the U.S. Supreme Court.
- July 23–27 – The Detroit riot erupts in Detroit, Michigan, for five days following a raid by the Detroit Police Department on an unlicensed club hosting a celebration for returning Vietnam veterans, attended mostly by African Americans. Forty-three people (33 black and ten white) are killed, 467 injured, 7,231 arrested, and 2,509 stores looted or burned during the riot. It is one of the deadliest and most destructive riots in United States history, surpassing the violence and property destruction of Detroit's 1943 race riot.
- August 2 – The film In the Heat of the Night is released, starring Sidney Poitier.
- November 17 – Philadelphia Student School Board Demonstration: students peacefully present 26 demands, but the event becomes a police riot.
- December 11 – The film Guess Who's Coming to Dinner is released, also with Sidney Poitier.
- In the trial of the accused killers in the murders of Chaney, Goodman, and Schwerner, the jury convicts seven of the 18 accused men. Conspirator Edgar Ray Killen is later convicted in 2005.
- The film The Great White Hope, starring James Earl Jones, is released; it is based on the experience of heavyweight Jack Johnson.
- The book Death at an Early Age: The Destruction of the Hearts and Minds of Negro Children in the Boston Public Schools is published.
- February 1 – Two Memphis sanitation workers are killed in the line of duty, exacerbating labor tensions.
- February 8 – The Orangeburg massacre occurs during a university protest in South Carolina.
- February 12 – First day of the (wildcat) Memphis sanitation strike.
- March – While filming a prime time television special, Petula Clark touches Harry Belafonte's arm during a duet. Chrysler Corporation, the show's sponsor, insists the moment be deleted, but Clark stands firm, destroys all other takes of the song, and delivers the completed program to NBC with the touch intact. The show is broadcast on April 8, 1968.
- April 3 – King returns to Memphis; delivers "Mountaintop" speech.
- April 4 – Assassination of Martin Luther King Jr. in Memphis, Tennessee.
- April 4–8 (and one riot in May 1968) – In response to the killing of Dr. King, over 150 cities experience rioting.
- April 11 – The Civil Rights Act of 1968 is signed. The Fair Housing Act is Title VIII of this Civil Rights Act; it bans discrimination in the sale, rental, and financing of housing. The law is passed following a series of contentious open housing campaigns throughout the urban North. The most significant of these campaigns were the Chicago Open Housing Movement of 1966 and organized events in Milwaukee during 1967–68. In both cities, angry white mobs attacked nonviolent protesters.
- May 12 – The Poor People's Campaign marches on Washington, DC.
- June 6 – Senator Robert F. Kennedy, a civil rights advocate, is assassinated after winning the California presidential primary. His appeal to minorities helped him secure the victory.
- September 17 – Diahann Carroll stars in the title role of Julia, becoming the first African-American actress to star in her own television series in a role other than a domestic worker.
- October 3 – The play The Great White Hope opens; it runs for 546 performances and later becomes a film.
- October – Tommie Smith and John Carlos raise their fists to symbolize black power and unity after winning the gold and bronze medals, respectively, at the 1968 Summer Olympic Games.
- November 22 – First interracial kiss on American television, between Nichelle Nichols and William Shatner on Star Trek.
- In Powe v. Miles, a federal court holds that the portions of private colleges that are funded by public money are subject to the Civil Rights Act.
- Shirley Chisholm becomes the first African-American woman elected to Congress.
- January 8–18 – Student protesters at Brandeis University take over Ford and Sydeman Halls, demanding creation of an Afro-American Department. This is approved by the University on April 24.
- February 13 – The National Guard, with teargas and riot sticks, crushes a pro-black student demonstration at the University of Wisconsin.
- February 16 – After three days of clashes between police and Duke University students, the school agrees to establish a Black Studies program.
- February 23 – The UNC food worker strike begins when workers abandon their positions in Lenoir Hall to protest racial injustice.
- April 3–4 – The National Guard is called into Chicago, and Memphis is placed under curfew, on the anniversary of King's assassination.
- April 19 – Armed African-American students protesting discrimination take over Willard Straight Hall, the student union building at Cornell University. They end the seizure the following day after the university accedes to their demands, including an Afro-American studies program.
- April 25–28 – Activist students take over Merrill House at Colgate University, demanding Afro-American studies programs.
- May 8 – City College of New York is closed following a two-week-long campus takeover demanding Afro-American and Puerto Rican studies; riots among students break out when the school tries to reopen.
- June – The second of two U.S. federal appeals court decisions confirms that members of the public hold legal standing to participate in broadcast station license hearings and, under the Fairness Doctrine, finds the record of segregationist TV station WLBT beyond repair. The FCC is ordered to open proceedings for a new licensee.
- September 1–2 – Race rioting in Hartford, Connecticut, and Camden, New Jersey.
- October 29 – The U.S. Supreme Court in Alexander v. Holmes County Board of Education orders immediate desegregation of public schools, signaling the end of the "all deliberate speed" doctrine established in Brown II.
- December – Fred Hampton, chairman of the Illinois chapter of the Black Panther Party, is shot and killed while asleep in bed during a police raid on his home.
- The United Citizens Party is formed in South Carolina when the Democratic Party refuses to nominate African-American candidates.
- The W. E. B. Du Bois Institute for African and African-American Research is founded at Harvard University.
- The Revised Philadelphia Plan is instituted by the Department of Labor.
- The Congressional Black Caucus is formed.
- January 19 – G. Harrold Carswell is nominated to the U.S. Supreme Court; the Senate rejects the nomination that April following protests from the NAACP and feminists.
- April 23 – Black Panther Marshall "Eddie" Conway is arrested in Baltimore, Maryland.
- May 27 – The film Watermelon Man, directed by Melvin Van Peebles and starring Godfrey Cambridge, is released. The film is a comedy about a bigoted white man who wakes up one morning to discover that his skin pigment has changed to black.
- August 7 – Marin County courthouse incident.
- August 14 – Hoover adds Angela Davis to the FBI's Most Wanted list.
- October 13 – Angela Davis is captured in New York City.
- The first blaxploitation films are released.
- April 20 – The U.S. Supreme Court, in Swann v. Charlotte-Mecklenburg Board of Education, upholds desegregation busing of students to achieve integration.
- April 27 – The FBI officially ends COINTELPRO.
- June – Control of segregationist TV station WLBT is given to a biracial foundation.
- June 4 – Angela Davis is acquitted of all charges.
- August 21 – George Jackson is shot to death in San Quentin Prison.
- Ernest J. Gaines's Reconstruction-era novel The Autobiography of Miss Jane Pittman is published.
- January 25 – Shirley Chisholm becomes the first major-party African-American candidate for President of the United States and the first woman to run for the Democratic presidential nomination.
- November 16 – In Baton Rouge, two Southern University students are killed by white sheriff's deputies during a protest over lack of state funding for the school. The university's Smith-Brown Memorial Union is named in their memory.
- November 16 – The infamous Tuskegee syphilis experiment ends. Begun in 1932, the U.S. Public Health Service's 40-year experiment on 399 black men in the late stages of syphilis has been described as one that "used human beings as laboratory animals in a long and inefficient study of how long it takes syphilis to kill someone."
- May 8 – Nelson Rockefeller signs the Rockefeller Drug Laws for New York State, with draconian indeterminate sentences for drug possession as well as sale.
- July 31 – The FBI ends its Ghetto Informant Program.
- The Combahee River Collective, a Black feminist group, is established in Boston, growing out of New York's National Black Feminist Organization.
- July 25 – In Milliken v. Bradley, the U.S. Supreme Court holds in a 5–4 decision that outlying districts can be forced into a desegregation busing plan only if there is a pattern of violation on their part. The decision reinforces the trend of white flight.
- The Salsa Soul Sisters, Third World Wimmin Inc Collective, the first "out" organization for lesbians, womanists, and women of color, is formed in New York City.
- April 30 – In the pilot episode of Starsky and Hutch, Richard Ward plays an African-American supervisor of white employees, a first on American television.
- February – Black History Month is founded by the Association for the Study of Afro-American Life and History, the organization established by historian Carter G. Woodson.
- The novel Roots: The Saga of an American Family by Alex Haley is published.
- The Combahee River Collective publishes the Combahee River Collective Statement.
- President Jimmy Carter appoints Andrew Young to serve as Ambassador to the United Nations, the first African American to hold the position.
- June 28 – Regents of the University of California v. Bakke bars racial quota systems in college admissions but affirms the constitutionality of affirmative action programs giving equal access to minorities.
- In United Steelworkers of America v. Weber, an affirmative action case, the U.S. Supreme Court holds that the Civil Rights Act of 1964 does not bar employers from favoring women and minorities.
- November 2 – Assata Shakur escapes from prison.
- December 9 – Mumia Abu-Jamal is arrested.
- Charles Fuller writes A Soldier's Play, which is later made into the film A Soldier's Story.
- November 30 – Michael Jackson releases Thriller, which becomes the best-selling album of all time.
- May 24 – The U.S. Supreme Court rules that Bob Jones University qualifies as neither a tax-exempt nor a charitable organization because of its racially discriminatory practices.
- August 30 – Guion Bluford becomes the first African American to go into space.
- November 2 – President Ronald Reagan signs a bill creating a federal holiday to honor Martin Luther King Jr., fifteen years after King's death.
- Alice Walker receives the Pulitzer Prize for her novel The Color Purple.
- September 13 – The film A Soldier's Story, dealing with racism in the U.S. military, is released.
- The Cosby Show begins and comes to be regarded as one of the defining television shows of the decade.
- The first contract for the complete privatization of a prison is awarded to Corrections Corporation of America, beginning a new era of racially disproportionate mass incarceration.
- May 13 – Bombing of the MOVE house in Philadelphia.
- January 20 – Established by legislation in 1983, Martin Luther King Jr. Day is first celebrated as a national holiday.
- October 27 – The Anti-Drug Abuse Act of 1986 establishes a 100:1 sentencing disparity between crack and powder cocaine.
- The Public Broadcasting Service's six-part documentary Eyes on the Prize, covering the years 1954–1965, is first shown. In 1990 it is followed by the eight-part Eyes on the Prize II, covering 1965–1985.
- Dr. Benjamin Carson leads the first successful separation of conjoined twins joined at the head.
- The Civil Rights Restoration Act of 1988 is passed.
- December 9 – The film Mississippi Burning, based on the 1964 murders of Chaney, Goodman, and Schwerner, is released.
- February 10 – Ron Brown is elected chairman of the Democratic National Committee, becoming the first African American to lead a major United States political party.
- October 1 – Colin Powell becomes Chairman of the Joint Chiefs of Staff.
- December 15 – The film Glory, which features African-American Civil War soldiers, is released.
- January 13 – Douglas Wilder becomes the first elected African-American governor as he takes office in Richmond, Virginia.
- March 3 – Four white police officers are videotaped beating African-American Rodney King in Los Angeles.
- October 15 – The Senate confirms the nomination of Clarence Thomas to the U.S. Supreme Court.
- November 21 – The Civil Rights Act of 1991 is enacted.
- Henry Louis Gates Jr. becomes director of Harvard University's W. E. B. Du Bois Institute for African and African American Research.
- April 29 – The 1992 Los Angeles riots erupt after the officers accused of beating Rodney King are acquitted.
- September 12 – Mae Carol Jemison becomes the first African-American woman to travel in space when she goes into orbit aboard the Space Shuttle Endeavour.
- November 3 – Carol Moseley Braun becomes the first African-American woman elected to the United States Senate.
- November 18 – Director Spike Lee's film Malcolm X is released.
- June 30 – In Miller v. Johnson, the U.S. Supreme Court rules that gerrymandering based on race is unconstitutional.
- October 16 – The Million Man March, co-initiated by Louis Farrakhan and James Bevel, is held in Washington, D.C.
- May 16 – President Bill Clinton apologizes to victims of the Tuskegee syphilis experiment.
- July 9 – Director Spike Lee releases his documentary 4 Little Girls, about the 1963 16th Street Baptist Church bombing.
- October 25 – The Million Woman March is held in Philadelphia.
- June 7 – James Byrd Jr. is brutally murdered by white supremacists in Jasper, Texas, in a scene reminiscent of earlier lynchings. In response, Byrd's family creates the James Byrd Foundation for Racial Healing.
- October 23 – The film American History X, a stark portrayal of urban racism, is released.
- Franklin Raines becomes the first black CEO of a Fortune 500 company.
- February 4 – Amadou Diallo is shot and killed by New York City police officers (a precursor to Daniels, et al. v. the City of New York).
- May 3 – Bob Jones University, a fundamentalist South Carolina private institution, ends its ban on interracial dating.
- June 23 – The U.S. Supreme Court in Grutter v. Bollinger upholds the University of Michigan Law School's admissions policy, while in the simultaneously decided Gratz v. Bollinger it requires the university to change its undergraduate admissions policy.
- June 21 – Edgar Ray Killen is convicted of participating in the murders of Chaney, Goodman, and Schwerner.
- October 15 – The Millions More Movement holds a march in Washington, D.C.
- October 25 – Rosa Parks dies at age 92. Her solitary action spearheaded the Montgomery bus boycott in 1955.
Her body lies in honor in the Capitol Rotunda in Washington, D.C., before interment.
- March 26 – Capitol Hill police fail to recognize Cynthia McKinney as a member of Congress.
- May 10 – Alabama state trooper James Bonard Fowler is indicted for the murder of Jimmie Lee Jackson on February 18, 1965.
- June 28 – Parents Involved in Community Schools v. Seattle School District No. 1, decided together with Meredith v. Jefferson County Board of Education, prohibits assigning students to public schools solely for the purpose of achieving racial integration and declines to recognize racial balancing as a compelling state interest.
- December 10 – The U.S. Supreme Court rules 7–2 in Kimbrough v. United States that judges may deviate from federal sentencing guidelines for crack cocaine.
- June 3 – Barack Obama receives enough delegates by the end of the state primaries to become the presumptive Democratic Party nominee for President.
- July 12 – Cynthia McKinney accepts the Green Party nomination in the presidential race.
- July 30 – The United States Congress apologizes for slavery and "Jim Crow".
- August 28 – At the 2008 Democratic National Convention, in a stadium filled with supporters, Barack Obama accepts the Democratic nomination for President of the United States.
- November 4 – Barack Obama is elected 44th President of the United States, opening his victory speech with, "If there is anyone out there who still doubts that America is a place where all things are possible; who still wonders if the dream of our founders is alive in our time; who still questions the power of our democracy, tonight is your answer."
- January 20 – Barack Obama is sworn in as the 44th President of the United States, the first African American to hold the office.
- January 30 – Former Maryland Lt. Governor Michael Steele becomes the first African-American chairman of the Republican National Committee.
- The U.S. Postal Service issues a commemorative six-stamp set portraying twelve civil rights pioneers.
- October 6 – Judge Keith Bardwell refuses to officiate an interracial marriage in Louisiana.
- October 9 – Barack Obama is awarded the Nobel Peace Prize.
- March 14 – Disney officially crowns its first African-American Disney princess, Tiana.
- July 19 – Shirley Sherrod is pressured to resign from the U.S. Department of Agriculture, which apologizes to her immediately thereafter, after she is inaccurately accused of being racist toward white Americans.
- August 3 – The Fair Sentencing Act reduces the sentencing disparity between crack and powder cocaine to an 18:1 ratio.
- January 14 – Michael Steele, the first African-American chairman of the RNC, loses his bid for re-election.
- August 22 – The Martin Luther King Jr. Memorial on the National Mall in Washington, D.C., opens to the public; it is officially dedicated on October 16.
- November 19 – Killing of Kenneth Chamberlain Sr.
- January 20 – Barack Obama is sworn in for his second term as president.
- March 9 – New York police officers shoot 16-year-old Kimani Gray, triggering weeks of protests in Brooklyn.
- May 2 – The FBI adds Assata Shakur to its list of "most wanted terrorists".
- May 9 – Malcolm Shabazz is killed in Mexico.
- June 24 – State of Florida v. George Zimmerman begins.
- June 25 – The U.S. Supreme Court overturns part of the 1965 Voting Rights Act in Shelby County v. Holder.
- July 13 – George Zimmerman is acquitted, provoking nationwide protests.
The Black Lives Matter movement is created by Alicia Garza, Patrisse Cullors, and Opal Tometi in response to the ongoing racial profiling of, and police brutality against, young black men.
- July 17 – Eric Garner dies in Staten Island, New York City, after a police officer puts him in a chokehold for 15 seconds.
- August 9 – The shooting of Michael Brown by police officer Darren Wilson in Ferguson, Missouri, is followed by demonstrations and protests that adopt the phrase "Hands up, don't shoot". Demonstrations focused on the incident, using the "Hands up" expression, are held across the U.S. and overseas.
- June 17 – Nine African Americans are killed in the Charleston church shooting at Emanuel African Methodist Episcopal Church in downtown Charleston, South Carolina.
- July 13 – Sandra Bland dies in jail, days after being pulled over for a traffic stop in Texas.
- In Texas Dept. of Housing and Community Affairs v. Inclusive Communities Project, Inc., 576 U.S. ___ (2015), the U.S. Supreme Court holds that Congress specifically intended to include disparate-impact claims in the Fair Housing Act, which prohibits discrimination based on race, but that such claims require a plaintiff to prove that the defendant's policies cause a disparity.
- November 1 – Michael Bruce Curry becomes the first African-American Presiding Bishop of the Episcopal Church (United States), having been elected by an overwhelming margin on the first ballot of the 78th General Convention the preceding June.
- September 24, 2016 – The National Museum of African American History and Culture opens its doors for the first time, becoming the 19th museum of the Smithsonian Institution.
- March 13 – Shooting of Breonna Taylor.
- May 25 – The murder of George Floyd leads to a cascade of protests with mottos such as "I can't breathe" and "Defund the police", and to the mass removal of Confederate monuments and renaming of slave-trade memorials around the world.
- May 25 – Central Park birdwatching incident, followed by Black Birders Week.
- June 12 – Killing of Rayshard Brooks.
- August 19 – Kamala Harris of the Democratic Party becomes the first African American to be nominated as a major-party U.S. vice-presidential candidate (see also: 2021).
- August 23 – Shooting of Jacob Blake.
- November 7 – Kamala Harris becomes the first African American elected Vice President of the United States.
- January 20 – Kamala Harris is sworn in as 49th Vice President of the United States, the first African-American and first Asian-American vice president, as well as the first woman to hold the office.
- April 11 – Killing of Daunte Wright.
- May 14 – A racist mass shooting at a Buffalo, New York, supermarket kills 10 people, most of them African American; the shooter, who drove over 200 kilometers to reach the store, livestreams the attack on Twitch.
- African American history
- Baseball color line
- Big Six (activists)
- Birmingham Civil Rights District
- Birmingham Civil Rights Institute
- Black pride
- Black school
- Black suffrage
- Civil rights movement (1896–1954)
- Driving While Black
- Freedom Schools
- Hate crime laws in the United States
- History of slavery in the United States
- Human rights in the United States
- List of African-American firsts
- List of African-American U.S. state firsts
- List of African-American United States Cabinet members
- Mass racial violence in the United States
- Race and sports
- Racial segregation in the United States
- Racism in the United States
- Timeline of the civil rights movement
- Voting rights in the United States
- Wednesdays in Mississippi
- "The American Revolution and Slavery", Digital History. Retrieved March 5, 2008. - Cassadra Pybus, "Jefferson's Faulty Math: the Question of Slave Defections in the American Revolution", William and Mary Quarterly, 2005, 62#2: 243–264. in JSTOR - Allen, Robert S. (1982). Loyalist Literature: An Annotated Bibliographic Guide to the Writings on the Loyalists of the American Revolution. Dundurn. p. 30. ISBN 9780919670617. - Raboteau, Albert J. (2004). Slave Religion: The "Invisible Institution" in the Antebellum South. Oxford University Press. p. 139. ISBN 978-0-19-517413-7. Retrieved 28 May 2013. - Brooks, Walter H. (April 1, 1922). "The Priority of the Silver Bluff Church and its Promoters". The Journal of Negro History. 7 (2): 172–196. doi:10.2307/2713524. ISSN 0022-2992. JSTOR 2713524. S2CID 149920027. - Peter Kolchin, American Slavery: 1619–1877, New York: Hill and Wang, pp. 78 and 81. - "PBS documentary". PBS. Retrieved October 30, 2014. - Wilbon, Roderick (April 28, 2017). "First Baptist Church of St. Louis, oldest African-American church west of the Mississippi River, celebrates its 200th anniversary". Retrieved 2022-02-14. - "First African Baptist Church History (S0006)" (PDF). State Historical Society of Missouri. 1974. - The Life of Josiah Henson, Formerly a Slave, Now an Inhabitant of Canada, as Narrated by Himself: Electronic Edition. page58 - Wormley, G. Smith."Prudence Crandall", The Journal of Negro History Vol. 8, No. 1 January 1923. - "Connecticut's "Black Law" (1833)". Citizens All (project). Yale University. Archived from the original on 2012-10-19. Retrieved 2012-03-19. Lacking no legal means to prevent Prudence Crandall from opening her school, Andrew Judson, a local politician, pushed legislation through the Connecticut Assembly outlawing the establishment of schools 'for the instruction of colored persons belonging to other states and countries.' - "Morehouse Legacy". morehouse.edu. Morehouse College. Archived from the original on 27 September 2018. Retrieved 16 March 2012. - Potter, Joan (2002). African American Firsts. Kensington. p. 292. - Potter, Joan (2002). African American Firsts. Kensington. p. 293. - John C. Willis, Forgotten Time: The Yazoo-Mississippi Delta after the Civil War, Charlottesville: University of Virginia Press, 2000 - Potter, Joan (2002). African American Firsts. Kensington. pp. 295–296. - Williams, Yvonne, "Harvard", in Young, p. 99. - James D. Anderson, Black Education in the South, 1860–1935, Chapel Hill: University of North Carolina, 1988, pp. 244–245. - Sean Dennis Cashman (1992). African-Americans and the Quest for Civil Rights, 1900-1990. NYU Press. pp. 16–. ISBN 9780814714416. - Taylor, Quintard (ed.), "African American History Timeline: 1901-2000", BlackPast.org, Seattle, Washington, retrieved November 1, 2014 - Frum, David (2000). How We Got Here: The '70s. New York, New York: Basic Books. p. 41. ISBN 0-465-04195-7. - Wolgemuth, Kathleen L. (April 1959). "Woodrow Wilson and Federal Segregation". The Journal of Negro History. Association for the Study of African-American Life and History, Inc. 44 (2): 158–173. doi:10.2307/2716036. JSTOR 2716036. S2CID 150080604. - Blumenthal, Henry (January 1963). "Woodrow Wilson and the Race Question". The Journal of Negro History. Association for the Study of African-American Life and History, Inc. 48 (1): 1–21. doi:10.2307/2716642. JSTOR 2716642. S2CID 149874271. - Potter, Joan (2002). African American Firsts. Kensington. p. 300. - Monroe H. 
Little, Review of James Madison's A Lynching in the Heartland, History-net Retrieved June 11, 2014. - Angela Y. Davis,Women, Race & Class. New York: Vintage Books, 1983, pp. 194–195. - "America's First Sit-Down Strike: The 1939 Alexandria Library Sit-In". City of Alexandria. Retrieved August 20, 2009. - "Divine's Followers Give Aid to Strikers – With Evangelist's Sanction They 'Sit Down' in Restaurant". The New York Times. US. September 23, 1939. Retrieved July 20, 2010. [The workers] are seeking wage increases, shorter hours, a closed shop and cessation of what they charge has been racial discrimination. - Potter, Joan (2002). African American Firsts. Kensington. p. 215. - Potter, Joan (2002). African American Firsts. Kensington. pp. 301–302. - "Smith v. Allwright, 321 U.S. 649 (1944)". Retrieved October 30, 2014. - McGuire, Danielle L. (2010). At the Dark End of the Street: Black Women, Rape, and Resistance- A New History of the Civil Rights Movement from Rosa Parks to the Rise of Black Power. Random House. pp. xv–xvii. ISBN 978-0-307-26906-5. - Jessie Carney Smith, ed. (2010). "Timeline". Encyclopedia of African American Popular Culture. ABC-CLIO. ISBN 978-0-313-35797-8. - Morgan v. Virginia, 1946 - For more detail during this period, see Freedom Riders website chronology - David T. Beito and Linda Royster Beito, Black Maverick: T.R.M. Howard's Fight for Civil Rights and Economic Power, Urbana: University of Illinois Press, 2009, pp. 154–55. - "The Virginia Center for Digital History". Retrieved October 30, 2014. - Carson, Clayborne (1998). The autobiography of Martin Luther King, Jr. Grand Central Publishing. p. 141. ISBN 978-0-446-52412-4. - The King Center, The Chronology of Dr. Martin Luther King, Jr. "1961". Archived from the original on October 13, 2007. Retrieved 2007-10-20. - Arsenault, Raymond (2006). Freedom Riders: 1961 and the Struggle for Racial Justice. Oxford University Press. p. 439. ISBN 0-19-513674-8. - Branch, Taylor (1988). Parting the Waters: America in the King Years. Simon & Schuster Paperbacks. pp. 527–530. ISBN 978-0-671-68742-7. - Branch, pp.533–535 - Branch, pp. 555–556 - Branch, pp. 756–765. - Branch, pp. 786–791. - UNITED STATES of America and Interstate Commerce Commission v. The City of Jackson, Mississippi, Allen Thompson, Douglas L. Lucky and Thomas B. Marshall, Commissioners of the City of Jackson, and W.D. Rayfield, Chief of Police of the City of Jackson, United States Court of Appeals Fifth Circuit, May 13, 1963. - "Northern City Site of Most Violent Negro Demonstrations", Rome News-Tribune (CWS), May 30, 1963. - "Tear Gas Used to Stall Florida Negroes, Drive Continues, Evening News (AP), May 31, 1963. - "Medgar Evers". Retrieved October 30, 2014. - "Proposed Civil Rights Act". Archived from the original on August 23, 2014. Retrieved October 30, 2014. - March on Washington. Archived October 12, 2007, at the Wayback Machine - Loevy, Robert. "A Brief History of the Civil Rights Act of 1964". Retrieved 2007-12-31. - "Civil Rights Act of 1964". Retrieved October 30, 2014. - "Nobel Peace Prize acceptance speech". Retrieved October 30, 2014. - Gavin, Philip. "The History Place, Great Speeches Collection, Lyndon B. Johnson, "We Shall Overcome"". Retrieved 2007-12-31. - "James L. Bevel The Strategist of the 1960s Civil Rights Movement" by Randall L. 
Kryn, a paper in David Garrow's 1989 book We Shall Overcome, Volume II, Carlson Publishing Company - "Movement Revision Research Summary Regarding James Bevel" by Randy Kryn, October 2005 published by Middlebury College - "When Harry Met Petula". Retrieved October 30, 2014. - James Ralph, Northern Protest: Martin Luther King, Jr., Chicago, and the Civil Rights Movement (1993) Harvard University Press ISBN 0-674-62687-7 - Jones, Patrick D. (2009). The Selma of the North: Civil Rights Insurgency in Milwaukee. Harvard University Press. pp. 1–6, 169ff. ISBN 978-0-674-03135-7. - "Changing Channels – The Civil Rights Case That Transformed Television, page 2". March 8, 2012. Retrieved October 30, 2014. - "Bob Jones University v. United States, 461 U.S. 574 (1983)". Retrieved October 30, 2014. - "The 15 Year Battle for Martin Luther King, Jr. Day". National Museum of African American History and Culture. 2021-01-11. Retrieved 2021-01-31. - Potter, Joan (2002). African American Firsts. Kensington. p. 309. - "CNN: Bob Jones University ends ban on interracial dating". Retrieved October 30, 2014. - "CNN: Obama: I will be the Democratic nominee". CNN. Retrieved October 30, 2014. - "Transcript: 'This is your victory,' says Obama". Retrieved October 30, 2014. - Barned-Smith, St. John (July 14, 2015). "Authorities investigate apparent suicide at Waller County Jail". Houston Chronicle. Retrieved March 29, 2017. - Inclusive Communities Project, slip op. at 16-17, 19-20. - "Title VIII: Fair Housing and Equal Opportunity - HUD". Portal.hud.gov. Archived from the original on 2015-07-08. Retrieved 2015-07-06. - "Joe Biden selects California Sen. Kamala Harris as running mate". Associated Press. August 11, 2020. selecting the first African American woman and South Asian American to compete on a major party's presidential ticket - "Kamala Harris's selection as VP resonates with Black women". Associated Press. August 12, 2020. making her the first Black woman on a major party's presidential ticket ... It also marks the first time a person of Asian descent is on the presidential ticket. - Martin, Jonathan; Burns, Alexander (November 7, 2020). "Biden Wins Presidency, Ending Four Tumultuous Years Under Trump". The New York Times. Retrieved November 7, 2020. - McKinley, Jesse; Traub, Alex; Closson, Troy (14 May 2022). "Gunman Kills 10 at Buffalo Supermarket in Racist Attack". The New York Times. - Finkelman, Paul (ed.), Encyclopedia of African American History, 1896 to the Present: From the Age of Segregation to the Twenty-first Century (5 vols, 2009) excerpt and text search - Hornsby, Jr., Alton (ed.), Chronology of African American History (2nd edn 1997) 720pp. - Hornsby, Jr., Alton (ed.), Black America: A State-by-State Historical Encyclopedia (2 vol 2011) excerpt - Lowery, Charles D., and John F. Marszalek, Encyclopedia of African-American civil rights: from emancipation to the present (Greenwood, 1992). - Palmer, Colin A. (ed.), Encyclopedia Of African American Culture And History: The Black Experience In The Americas (2nd edn, 6 vol, 2005) - first edition was: Salzman, Jack, et al. (eds), Encyclopedia of African-American Culture and History (5 vols, 1995) - Encyclopædia Britannica's Guide to Black History (international view) - Tullos, Allen. "Selma Bridge: Always Under Construction," Southern Spaces July 28, 2008. 
- Detailed year-by-year timeline 1951–1968 - University of Southern Mississippi's Civil Rights Documentation Project, includes an extensive Timeline - Freedom Riders website chronology, extremely detailed - Civil Rights Movement Archive movement timeline - Civil Rights Timeline, sections on Martin Luther King Jr. - 41 Lives for Freedom - Black baseball firsts - African-American Pioneers of Texas - Memphis Civil Rights Digital Archive - Civil Rights: Pivotal Events – slideshow by Life magazine - "Cases: U.S. civil rights movement". Global Nonviolent Action Database. Pennsylvania: Swarthmore College. - African American and African Pamphlet Collection
All deserts have two things in common: they are dry, and they support little plant and animal life. If a region receives an average of fewer than 10 inches (25 centimeters) of rain each year, it is classified as a desert. Contrary to what most people believe, not all deserts are hot. Some deserts near the North and South Poles are so cold that all moisture is frozen—these are called polar deserts. Tropical desert areas are near the equator. Temperate desert areas are between the tropics and the North and South Poles.

True deserts cover about one-fifth of the world's land area. With the addition of polar deserts, the total rises to 30 percent. Another 25 percent of Earth's land surface possesses desertlike characteristics. In all, deserts constitute 33 million square miles (86 million square kilometers).

Most deserts lie near the tropic of Cancer and the tropic of Capricorn, two lines of latitude about 23.5 degrees from the equator. The area between these two lines is called the Torrid Zone (torrid means very hot).

Deserts are generally caused by the presence of dry air. The average humidity (moisture in the air) is between 10 and 30 percent. In some cases, mountain ranges prevent moisture-laden clouds from reaching the area. Mountains force heavy, moisture-filled clouds to rise into the colder atmosphere, where the moisture condenses and falls as rain, leaving the air dry as it crosses the range. In other cases, certain wind patterns along the equator bring air in from dry regions. Cold-water ocean currents can cause moist air to drop its moisture over the ocean; the resulting dry air quickly evaporates ground moisture along the coastal regions as it moves inland.

Deserts have always existed, even when glaciers covered large portions of Earth during the great Ice Ages. Although geological evidence is scarce, scientists tend to agree that some desert areas have always been present, but they were probably smaller than those of today. Fossils, the ancient remains of living organisms that have turned to stone, can reveal the climatic history of a region. For example, scientists believe that the Arabian Desert, which covers most of the Arabian Peninsula to the east of North Africa, once included wetlands because fossils of a small species of hippopotamus have been found there. In the Sahara Desert of North Africa, rock paintings made 5,000 years ago show pictures of elephants, giraffes, and herds of antelope that are no longer present.

|WORDS TO KNOW|
|Arroyo: The dry bed of a stream that flows only after rain; also called a wash or a wadi.|
|Butte: A small hill.|
|Deforestation: The cutting down of all the trees in a forest.|
|Desertification: The changing of fertile lands into deserts through destruction of vegetation (plant life) or depletion of soil nutrients. Topsoil and groundwater are eventually lost as well.|
|Dormant: A suspension of growing (plants) or activity (animals) when conditions are harsh.|
|Estivation: An inactive period experienced by some animals during very hot months.|
|Mesa: A flat-topped hill.|
|Oasis (plural oases): A fertile area in the desert having a water supply that enables trees and other plants to grow there.|
|Wadi: The dry bed of a stream that flows only after rain; also called a wash or an arroyo.|
|Xeriscaping: A landscaping method that uses drought-tolerant plants and efficient watering techniques.|
Desertification (DES-aurt-ih-fih-KAY-shun; desert formation) occurs continuously, primarily on the edges of existing deserts. It is caused by a combination of droughts (rainless periods) and human activity such as deforestation (cutting down forests) or overgrazing of herd animals. When all the grass is eaten and rain is scarce, plants do not grow back. Without plants to hold the soil in place, wind blows away the smaller and finer particles of soil, exposing the less compacted layer of sand. This leaves a barren, unprotected surface. Eventually, even groundwater disappears.

Scientists measure a region's aridity (dryness) by comparing the amount of precipitation (rain, sleet, or snow) to the rate of evaporation; in a desert, evaporation always exceeds precipitation. Deserts can be classified as hyperarid (less than 1 inch [2.5 centimeters] of rain per year); arid (up to 10 inches [25 centimeters] of rain per year); and semiarid (as much as 20 inches [50 centimeters] of rain per year, but so hot that moisture evaporates rapidly). Most true deserts receive fewer than 4 inches (10 centimeters) of rain annually.
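Because these thresholds amount to a simple decision rule, they can be written out in a few lines of code. The sketch below is purely illustrative; the function name is ours, and it simplifies by ignoring the evaporation comparison noted above:

```python
def classify_aridity(annual_rain_inches: float) -> str:
    """Classify a region by average annual rainfall, using the
    thresholds quoted above. A simplification: a full aridity
    measure also compares precipitation with the evaporation rate."""
    if annual_rain_inches < 1:
        return "hyperarid"
    if annual_rain_inches <= 10:
        return "arid"
    if annual_rain_inches <= 20:
        return "semiarid"  # counts as desert only where evaporation is rapid
    return "not a desert"

# Most true deserts receive fewer than 4 inches of rain annually.
print(classify_aridity(0.5))   # hyperarid
print(classify_aridity(4.0))   # arid
print(classify_aridity(15.0))  # semiarid
```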
Except for those at the North and South Poles, which are special cases, deserts are classified as hot or cold. Daytime average temperatures in hot deserts are warm during all seasons of the year, usually above 65°F (18°C). Nighttime temperatures are chilly and sometimes go below freezing. Typical hot deserts include the Sahara and the Namib Desert of Namibia. Cold deserts have hot summers and cold winters; during at least one month of the year the mean temperature is below 45°F (7°C). Cold deserts include the Turkestan Desert in Kazakhstan and Uzbekistan, the Gobi (GOH-bee) in China and Mongolia, and the Great Salt Lake Desert in Utah. These deserts usually get some precipitation in the form of snow.

Deserts can be further characterized by their appearance and plant life. They may be flat, mountainous, broken by gorges and ravines, or covered by a sea of sand. Plants may range from nearly invisible fungi to towering cacti and trees.

Although desert climates vary from very hot to very cold, they are always arid (dry). In hot deserts, days are usually sunny and skies are cloudless. During the summer, daytime air temperatures between 105°F and 110°F (41°C and 43°C) are not unusual. A record air temperature of 136.4°F (58°C) was measured in the Sahara Desert, in a place called El Azizia, on September 13, 1922. The absence of vegetation exposes rocks and soil to the sun, which may cause ground temperatures in the hottest deserts to reach 170°F (77°C). Nights are much cooler. The lack of cloud cover allows heat to escape, and the temperature may drop 25 degrees or more after the sun sets. At night, temperatures of 50°F (10°C) or less are common, and they may even drop below freezing.

The Sands of Time
When living things die, moisture in the air aids the bacteria that cause decay. Before long, tissues dissolve and eventually disappear. Desert air is so dry that decay does not take place or occurs extremely slowly. Instead, tissues dry out and shrink, turning an animal or human being into a mummy. In ancient Egypt around 3000 BC, the dead were buried in shallow graves in the sand. The very dry conditions mummified the bodies, preserving them. Later, for those who could afford it, Egyptian burials became more complex. Internal organs were removed, and the bodies underwent special treatments designed to preserve them. They were then placed into tombs dug into rocky cliffs or, in the case of certain pharaohs (kings), placed within huge pyramids of stone. In most cases, bodies of the ancient Egyptians are so well preserved that much can still be learned about what they ate, how they lived, and what caused their deaths.

Graves discovered in the Takla Makan (TAHK-lah mah-KAN) Desert of China have also given scientists important information. (The name Takla Makan means "the place from which there is no return.") Well-preserved mummies as much as 3,800 years old have been found in the graves. The mummies have European features, and some are dressed in fine woolens woven in tartan (plaid) patterns commonly used by the ancient Celts and Saxons of Northern Europe. Scientists believe these people were the first Europeans to enter China, which was officially closed to outsiders for thousands of years. Evidence exists that they rode horses using saddles as early as 800 BC, and they may have introduced the wheel to China. Their descendants, who have intermarried with the Chinese, still live in the Takla Makan.

Winters in cold deserts at latitudes midway between the polar and equatorial regions can be bitter. In the Gobi Desert, for example, temperatures below freezing are common. Blizzards and violent winds often accompany the icy temperatures.

Rainfall varies from desert to desert and from year to year. The driest deserts may receive no rainfall for several years, or as much as 17 inches (43 centimeters) in a single year. Rainfall may be spread out over many months or fall within a few hours. In the Atacama Desert of Chile, considered the world's driest desert, more than half an inch (1.3 centimeters) of rain fell in one shower after four years of drought. Such conditions often cause flash floods, which sweep vast quantities of mud, sand, and boulders through gullies and dry river beds (called washes, wadis, or arroyos). The water soon evaporates or disappears into the ground. The Atacama Desert is also the site of the world's longest known drought: no rain fell there for 400 years, from 1571 until 1971.

In coastal deserts, fog and mist may be common. Fog occurs when cold-water ocean currents cool the air and moisture condenses. The Atacama Desert lies in a depression behind mountains, so most of its precipitation arrives as fog. Some deserts, such as the polar deserts, experience snow rather than rain or fog.

The geography of deserts involves landforms, elevation, soil, mineral resources, and water resources. Desert terrain may consist of mountains, a basin surrounded by mountains, or a high plain. Many desert areas were once lake beds that show the effects of erosion and of soil deposits carried there by rivers. Wind helps shape the desert terrain by blowing great clouds of dust and sand that break down rock, sometimes sculpting it into strange and magnificent shapes. In the Australian Desert, unusual pinnacles (tall, towerlike shapes) of limestone rock, formed over thousands of years by the wind, stand on the desert floor. Flat-topped rock formations carved by erosion fall into two types. A mesa, from the Spanish word for table, is a steep-sided, flat-topped hill. A mesa may be reduced to a butte (BYUT), from the French word for hill or knoll, by further erosion.

Disappearance of Lady Be Good
Becoming lost in the desert can often end in tragedy. In 1943, an American bomber, called Lady Be Good, went off course and crashed in the Sahara Desert.
The surviving crew saw a line of hills in the distance and mistook them for the hills around the Mediterranean, where they hoped to find human settlements. They walked at night and rested by day, covering 75 miles (120 kilometers) in a week. As it turned out, the hills were not those around the Mediterranean, and the crewmen were still 375 miles (600 kilometers) from the sea. The entire crew died from sun exposure and lack of food and water. Their bodies were not found until 1960—seventeen years later.

Bare rock, boulders, gravel, and large areas of sand appear in most desert landscapes. Vast expanses of sand dunes, sometimes called ergs, are not very common: sand dunes make up less than 2 percent of deserts in North America, only 20 percent of the Sahara, and 20 percent of the Arabian Desert. The Empty Quarter (Rub' al-Khali) in the Arabian Desert is the largest sandy desert in the world, covering about 250,000 square miles (647,500 square kilometers).

Unless anchored by grass or other vegetation, sand dunes migrate constantly. Their rate of movement depends upon their size—smaller dunes move faster—and the speed of the wind. Some dunes may move over 100 feet (30 meters) in a year and can bury entire villages. In the Sahara, dunes created by strong winds can reach heights of 1,000 feet (300 meters). Scientists estimate that dunes in the Namib Desert are the largest in the world.

Dunes take different shapes, depending upon how they lie with respect to prevailing winds. When the wind tends to blow in one direction, dunes often form ridges. The ridges may lie parallel to the wind, forming seif (SAFE), or longitudinal, dunes, or at right angles to it, forming transverse dunes. Seif dunes are the largest, with some in the Sahara approaching 100 miles (160 kilometers) in length. At desert margins where there is less sand, dunes may assume crescent shapes with pointed ends. These are called barchan (bahr-KAN) dunes, and the wind blows in the direction of their "points." In the Sahara, stellar (star-shaped) dunes are commonly found. Stellar dunes form when the wind shifts often, blowing from several directions.

Desert soils tend to be coarse, light colored, and high in mineral content. They contain little organic matter because there is little vegetation. If the area is a basin or a catch-all for flash-flood waters, mineral salts may be carried to the center, where concentrations in the soil become heavy. If the area was once an inland sea, like the Kalahari (kah-lah-HAHR-ee) Desert of Botswana, eastern Namibia, and northern South Africa, exposed bottom sediments (matter deposited by water or wind) are very high in salt.

Most desert sand is made of tiny particles of the mineral quartz. Placed under pressure for long periods, grains of sand may stick together, forming a type of rock called sandstone. Some deserts have little soil, exposing bare, wind-polished, pebbly rock called desert pavement. Rocks are often broken by the contraction and expansion caused by extreme temperature variations. Basin areas scoured by winds show surfaces of gravel and boulders, and on steep slopes whipping winds may leave little soil.

Dust devils are spinning columns of dust, raised by whirlwinds, that move across the desert landscape. Dust storms can produce clouds thousands of feet high. In the Sahara, up to 200 million tons (180 million metric tons) of dust are raised each year. Saharan dust has occasionally crossed the Atlantic to the United States and has been known to travel as far north as Finland.
Two long-lasting chemical reactions affect desert rocks and soil. One, called desert varnish, gives rocks, sand, and gravel a dark sheen. Desert varnish is believed to be caused by the reaction between overnight dew and minerals in the soil. The second reaction is the formation of duricrusts—hard, rocklike crusts that form on ridges when dew and minerals such as limestone combine, creating a type of cement.

Desert soils offer little help to plant life because they lack the nutrients provided by decaying vegetation and are easily blown away, exposing plant roots to the dry air. Some deep-rooted plants can exist on rock, where moisture accumulates in cracks. Other plants remain dormant during the driest periods, thriving and blooming after brief rains. Soil also reveals much about a desert's geological history. In Jordan, a Middle Eastern country, the Black Desert takes its name from black basalt, a rock formed from volcanic lava.

Deserts exist at many altitudes. North American deserts are partly mountainous, but Death Valley, a large basin in California, is 282 feet (86 meters) below sea level at its lowest point. The main plateau of the Gobi is 3,000 feet (914 meters) above sea level, and the Sahara extends from 436 feet (130 meters) below to 10,712 feet (3,265 meters) above sea level. Temperature and plant and animal life are all influenced by elevation.

Valuable minerals like gold and oil (petroleum) are often found in desert regions. In the Great Sandy Desert of Australia, miners hunt for gold nuggets. "Black gold," as oil is often called, is found beneath the desert regions of the Middle East, where it formed over time from the sediment of prehistoric oceans. Iron ore is mined in portions of the Sahara. Borax—a white salt used in the manufacture of such products as glass and detergent—was once mined in Death Valley, California.

Water sources in the desert include underground reserves and surface water. In addition to occasional rainfall, deserts may have reserves of underground water. These reserves, often trapped in layers of porous rock called aquifers, were formed over thousands of years as rainwater seeped underground. Reserves close to the surface may create an oasis, a green, fertile haven where trees and smaller plants thrive. The presence of water may allow a completely different biome to form, like an island in the desert. People who live in the desert dig wells into aquifers and other underground water sources to irrigate crops and water their animals.
|WELL-KNOWN DESERTS OF THE WORLD|
|Desert|Climate|Location|Size|
|Arabian|Hot; extremely arid and arid|Arabian Peninsula|900,000 square miles (2,330,000 square kilometers)|
|Atacama|Hot; extremely arid and arid|Chile|70,000 square miles (181,300 square kilometers)|
|Australian|Hot; arid and semiarid|Australia|529,346 square miles (1,371,000 square kilometers)|
|Death Valley|Hot; arid|California (United States)|3,012 square miles (7,800 square kilometers)|
|Gobi|Cold; arid and semiarid|China, Mongolia|500,000 square miles (1,300,000 square kilometers)|
|Kalahari|Hot; arid|Botswana, eastern Namibia, and northern South Africa|100,400 square miles (260,000 square kilometers)|
|Mojave|Hot; arid|California and Nevada (United States)|25,000 square miles (65,000 square kilometers)|
|Namib|Hot; arid|Coastal Namibia|52,000 square miles (135,000 square kilometers)|
|Negev|Hot; arid|Israel|4,700 square miles (12,170 square kilometers)|
|Patagonian|Cold; arid|Argentina|260,000 square miles (673,000 square kilometers)|
|Sahara|Hot; extremely arid and arid|North Africa|3,500,000 square miles (9,065,000 square kilometers)|
|Thar|Hot; extremely arid|India and Pakistan|92,163 square miles (238,700 square kilometers)|

As desert populations grow, water sources shrink and cannot be replaced fast enough. There is a real danger that groundwater reserves will one day be depleted.

Water may also be found in desert areas in the form of rivers or streams. Some streams form only after a rain, when water sweeps along a dry river bed in a torrent (violent stream) and then quickly sinks into the ground or evaporates. Moisture sometimes remains under the surface, and plants can be seen growing along the path of such streams.

Permanent rivers are also found in desert regions. The Colorado River is one example: over a million years ago, it began cutting a path into the plateau of limestone and sandstone rock in northern Arizona, ultimately forming the Grand Canyon, which is 1.2 miles (1.9 kilometers) deep and 277 miles (446 kilometers) long. Perhaps the most famous desert river is the Nile, which bisects Egypt. Since ancient times, Nile floods have brought enough rich soil from countries farther south to turn Egypt's river valley into fertile country well known for its agricultural products, such as cotton.

Permanent lakes rarely occur in desert regions. Two exceptions are the Great Salt Lake of Utah, which is all that remains of what was once a great inland sea, and the Dead Sea of Israel and Jordan, a salt lake that was once part of the Mediterranean.

One of the most important characteristics of any biome is its plant life. Not only do plants provide food and shelter for animals, they recycle gases in the atmosphere and add beauty and color to the landscape. Deserts support many types of plants, although not in large numbers.

Algae, fungi, and lichens
Algae (AL-jee), fungi (FUHN-ji), and lichens (LY-kens) do not fit neatly into either the plant or animal categories. Most algae are single-celled organisms; a few are multicellular. Like plants, nearly all algae have the ability to make their own food by means of photosynthesis (foh-toh-SIHN-thuh-sihs). Photosynthesis is the process by which plants use the energy from sunlight to change water and carbon dioxide into the sugars and starches they use for food. Other algae absorb nutrients from their surroundings. Although most algae live in water, the bacteria known as blue-green algae do appear in the desert.
They survive as spores during the long dry periods and return to life as soon as it rains. (Spores are single cells that have the ability to grow into a new organism.)

Fungi are commonly found in desert regions wherever other living organisms are found. Fungi cannot make their own food by means of photosynthesis. Some species, like molds and mushrooms, obtain nutrients from dead or decaying organic matter. They assist in the decomposition (breaking down) of this matter, releasing nutrients needed by other desert plants. Fungi that attach themselves to living plants are parasites. Parasites can be found wherever green plants live, and they may weaken the host plant so that it eventually dies. Other fungi, by contrast, actually help their host absorb nutrients more effectively from the soil. All fungi reproduce by means of spores.

Lichens are actually combinations of algae and fungi living in cooperation. The fungi surround the algal cells, and the algae obtain food for themselves and the fungi by means of photosynthesis. It is not known whether the fungi aid the algal organisms, but they may provide them with protection and moisture. Lichens are among the longest-living organisms; some lichens in polar deserts are believed to have survived at least 4,000 years. Although lichens grow slowly, they are very hardy and can live in barren places under extreme conditions, such as on bare desert rock or arctic ice. Crusty types, colored gray, green, or orange, often cover desert rocks and soil. During very dry periods they remain dormant; when it rains, they grow and make food.

Most green plants need several basic things to grow: light, air, water, warmth, and nutrients. In the desert, light, air, and warmth are abundant, although water is always scarce. The nutrients obtained from soil, primarily nitrogen, phosphorus, and potassium, may be in short supply.

Earth's Balancing Act
Many scientists recognize the many links among all life forms living on our planet. One scientist, James Lovelock, has theorized that life itself is responsible for changes in the land, water, and air. For example, until about 2 billion years ago, there was almost no oxygen in the atmosphere. Then blue-green bacteria, also called blue-green algae, began using energy from the sun for photosynthesis (a food-making process), which produces oxygen as a by-product. After enough blue-green algae, and later plants, got to work, the atmosphere eventually became, and is still maintained at, 21 percent oxygen, which is ideal for animal life. Lovelock believes that living things somehow work together in this way, instinctively providing a comfortable environment for themselves and one another.

To test his theory, Lovelock produced a computer model. (Computer models enable scientists to study quickly processes that take very long periods of actual time to show a result.) Suppose, for example, there are two species of flowers, one white and one dark blue. The white flowers reflect the sun's heat and can survive in warm climates. The dark blue flowers absorb heat and do better in cooler climates. According to Lovelock's model, the flowers help control the environment. The presence of blue flowers in a cool environment means heat is absorbed, so the surroundings are kept from becoming too cool. The white flowers, on the other hand, reflect the heat and help keep a warm environment from becoming too warm.
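A feedback model of this kind can be sketched in a few lines of code. The toy simulation below only illustrates the idea just described; the growth rules and every constant in it are invented for this sketch and are not taken from Lovelock's published model. As the "sun" brightens, heat-reflecting white flowers spread, and the surface temperature rises far less than the sunlight alone would suggest:

```python
# A toy version of the two-flower feedback model described above.
# All numbers are made up for illustration.

def step(sunlight, white, blue):
    """Advance the model one step; return (temperature, white, blue).

    white and blue are the fractions of ground covered by each flower.
    White flowers reflect heat (cooling); blue flowers absorb it (warming).
    """
    # Surface temperature: sunshine warms the ground; white cover cools
    # it, blue cover warms it further.
    temp = sunlight - 20 * white + 20 * blue
    # White flowers spread when it is warm; blue flowers spread when cool.
    white += 0.1 * white * (temp - 50) / 50
    blue -= 0.1 * blue * (temp - 50) / 50
    # Keep cover fractions within plausible bounds.
    white = min(max(white, 0.01), 0.7)
    blue = min(max(blue, 0.01), 0.7)
    return temp, white, blue

white, blue = 0.2, 0.2
for sunlight in range(40, 71, 5):      # the "sun" slowly brightens
    for _ in range(200):               # let the flower populations settle
        temp, white, blue = step(sunlight, white, blue)
    print(f"sunlight={sunlight}  temp={temp:.1f}  "
          f"white={white:.2f}  blue={blue:.2f}")
```

Running the sketch shows the simulated temperature hovering near the model's comfort point (50, in these made-up units) even as the incoming sunlight varies widely; regulation fails only when one flower type runs out of ground to cover.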
Desert plants must protect against water loss and wilting, which can damage their cells. Large plants require strong fibers or thick, woody cell walls to hold them upright. Even smaller plants have a great number of these cells, which makes them fibrous and tough. Their leaves tend to be small and thick, with fewer surfaces exposed to the air. Outer leaf surfaces are often waxy to prevent water loss. Pores in the surface of green leaves allow the plant to take in carbon dioxide and release oxygen. The leaves of some desert plants have grooves to protect their pores against the movement of hot, dry air. Other leaves curl up or develop a thick covering of tiny hairs for protection. Still others have adjusted to the dry environment by adapting the shape of their leaves: cactus leaves, for example, are actually spines (needles). Spines have less surface area from which water can evaporate, so more water is stored within the plant.

Common desert plants
Several types of plants grow in the desert, including cacti, shrubs, trees, palms, and annuals.

Cacti
Cactus plants originated in southern North America, Central America, and northern South America. Instead of leaves, a cactus has spines, which come in many forms, from long, sharp spikes to soft hairs. Photosynthesis takes place in the stems and trunk of the plant. Nectar, a sweet liquid that appeals to insects, birds, and bats, is produced in the often spectacular flowers.

A Prickly Compass
The stems of the barrel cactus of the southwestern United States grow in a curve. The curve always points south because the cactus grows faster on its northern side, which is in the shade.

One of the largest plants in the desert is the giant saguaro (sah-GWAH-roh) cactus. Its large central trunk can grow as tall as 65 feet (20 meters) and have a diameter of 2 feet (60 centimeters). Ninety percent of its weight is water, which it stores in its soft, spongy interior. During very dry conditions, as the plant uses up this stored water, the trunk shrinks in size. Like many desert plants, the saguaro has a wide, shallow root system designed to cover a large area. After a long dry period, its roots can take up as much as 1 ton (1 metric ton) of water in 24 hours, and the trunk expands as it absorbs and stores the new supply.

Pores run in deep grooves along the saguaro's stems. These pores open during the cooler nighttime hours to take in carbon dioxide and release oxygen. For protection from wind and animals, long, sharp spines run along the grooves; the spines reduce air movement, which conserves water, and keep grazing animals away. Birds and bats love the nectar produced in saguaro flowers, and bats help to pollinate the saguaro. A bat's head fits the shape of the flowers almost perfectly: as the bat drinks the nectar, its head gets heavily dusted with pollen grains, which it then carries to the next plant.

Woody shrubs and trees
Small, woody perennials that flourish in the desert climate include sagebrush, saltbush, creosote, and mesquite (meh-SKEET). They have small leaves and wide-ranging root systems, and most shrubs have spines or thorns to protect them against grazing animals. The mesquite is a tree whose roots may grow 80 feet (24 meters) deep, although most of the root system lies in the upper 3 feet (1 meter) of soil. Because its long roots find a constant supply of water, the mesquite remains green all year. It produces long seedpods with hard, waterproof coverings. When the pods are eaten by desert animals, the partially digested seeds pass out of the animal's body and begin to grow.
Joshua trees, a form of yucca plant, live for hundreds of years in the Mojave Desert of California. They can grow to 35 feet (11 meters) tall. Yuccas and treelike aloes store water in their leaves, not their stems. One type of aloe, the kokerboom tree of southwest Africa, can survive several years without rain.

Palms

Date palms, found at many oases in the Sahara and Arabian Deserts, can grow in soil with a high salt content. Only female trees produce dates, and only a few male trees are necessary for cross-pollination. The dates can be eaten raw, dried, or cooked and are an important food source for desert dwellers.

The Washingtonian fan palm is the largest palm in North America. Native to California, it requires a dependable supply of water to survive and sends out a thick web of tiny roots at its base. When its fronds (branches) die, they droop down around the trunk and form a “skirt,” where animals like to live. The fan palm produces a datelike fruit that is eaten by many desert inhabitants.

Annuals

Long grasses, such as alfa, or esparto, grass, often flourish in the desert after seasonal rains. Their stems can be used to make ropes, baskets, mats, and paper. Tufts of grasses sometimes become so entwined they form balls that drop their seeds as they spin across the landscape in the wind. Almost every desert has its share of blooming annuals that add masses of color after a rain. Primroses and daisies adorn the California desert, while daisies, blue bindweed, dandelions, and red vetch beautify the Sahara.

The growing season in deserts is limited to the brief periods of rain that, in some cases, do not occur for several years. In some coastal deserts, certain plants absorb mist from the nearby ocean through their leaves. Where the soil is rich and rain is more regular and dependable, desert plants may flourish. In cold deserts where real winters occur, plants are much like those in temperate climates. The portion above ground dies off, but the root system goes deep and is protected from freezing by layers of snow.

Deserts are home to both annuals and perennials. Annuals live only one year or one season, and they require at least a brief rainy period that occurs regularly. Their seeds seem to sprout and grow overnight into a sea of colorful, blossoming plants. This period of rapid growth may last only a few weeks. When the rains disappear, the plants die and the species withdraws into seed form, remaining dormant (inactive) until the next period of rain.

Unlike annuals, perennials live at least two years or two growing seasons, appearing to die in between but reviving when conditions improve. Those that live many years must be strong and have several survival methods. As young plants, they devote most of their energy to developing a large root system to collect any available moisture. Plants are often many yards apart because their roots require a large area of ground in order to find enough water. The above-ground portion of young perennials is small in comparison to the root system because their leaves do not have to compete for sunlight or air as they do for water. Some perennials are succulents (SUHK-yoo-lents), which are able to store water during long dry periods.

Except for occasional rivers and oases, water in the desert comes from the brief rains. Plants may grow in greater numbers in arroyos or wadis, where moisture may remain beneath the surface.
In coastal deserts, some plants absorb moisture from fog and mists that condense on their leaves. In cold deserts, spring thaws provide water from melting snow.

Pollination (the transfer of pollen from the male reproductive organs to the female reproductive organs of plants) is often a problem in deserts. The wind may carry pollen from one plant to another, but this method is not efficient because plants usually grow far apart. Insect pollination is rare because there are fewer insects than in other biomes. As a result, many plants have both male and female reproductive organs and pollinate themselves.

Many desert plants, such as the candy cactus, saguaro cactus, and the silver dollar cactus, are threatened because of their popularity as house plants and for landscaping. They are available for sale in nurseries, but some people take them from the wild. Desert plants are sparse to begin with, so removal from their native home upsets the delicate balance of their reproduction.

All animals face the same problems in adapting to the desert. They must find shelter from daytime heat and nighttime cold, as well as food and water, which are often scarce. In spite of these extreme conditions, many animal species are represented in the desert environment, even some typically associated with temperate or wet surroundings.

Animals lacking backbones are called invertebrates. They include simple desert animals, such as worms, and more complex animals, such as the locust. Certain groups of invertebrates must spend part of their lives in water. These types are usually not found in deserts. One exception is the brine shrimp, an ancient species that lives in desert salt lakes. Other exceptions are certain species of worms, leeches, midges, and flies that live in the fresh water of oases and waterholes.

Most invertebrates are better adapted to desert life than vertebrates. Many have an exoskeleton (an external skeleton, or hard shell, made from a chemical substance called chitin [KY-tin]). Chitin is like armor and is usually waterproof. It protects against the heat of the desert sun, preventing its owner from drying out.

Common desert invertebrates

Termites, spiders, locusts, and scorpions are all invertebrates found in the desert.

Termites

Termites, found all over the world, build the skyscrapers of the desert. Their mounds, often more than 6 feet (2 meters) tall, are erected over a vast underground system of tunnels. These tunnels go as deep as the groundwater, providing a water supply readily available to the termite colony. The mounds, which are made of dirt, decaying plants, and termite secretions that dry rock-hard in the sun, have many air ducts. As the sun warms the mounds, they grow very hot. The hot air inside the mounds rises, drawing cooler air through the tunnels and creating a type of air-conditioning system. Termites eat plant foods, especially the cellulose (the substance making up a plant's cell walls) found in woody plants.

Living Juice Boxes

Honeypot ants, which live in Africa, Australia, and the Americas, take the business of food very seriously. They maintain herds of aphids, insects that suck the juices of plants and then secrete a sugar-rich “honey.” Worker ants collect this honey and bring it back to the nest, where they feed it to a second group of workers. This second group takes in so much aphid honey that their stomachs swell and it is difficult for them to move. They then suspend themselves from the ceiling of the nest and wait.
Later, when food becomes scarce, they spit the honey back up for other ants to eat. Some desert peoples like eating honeypot ants and seek out their nests.

Termites have an elaborate social structure. A single female—the queen—lays all the eggs and is tended to by workers. Soldier termites, equipped with huge jaws, guard the entrances to the mound. Soldiers cannot feed themselves, and the workers must tend to them as well.

Locusts

Locusts are found in the deserts of northern Africa, the Middle East, India, and Pakistan. As members of the grasshopper family, they have wings and can fly, as well as leap, considerable distances. For years they live quietly, nibbling on plants and producing a modest number of young. Then, for reasons not completely understood, their numbers increase dramatically. Great armies of locusts suddenly emerge, hopping or flying through the desert in search of food. Eating every plant in sight, these swarms may travel thousands of miles before their feeding frenzy ends. In a short time, the hordes die off and locust life returns to normal. The devastated landscape they leave behind may take years to recover. The most destructive species, the desert locust, lives wherever the average rainfall is less than 8 inches (20 centimeters). A swarm may contain as many as 40 million to 80 million insects.

Food and water

Many invertebrates are winged and can fly considerable distances in search of food. They eat plant foods or decaying animal matter. Some invertebrates, like the Guinea worm, are parasites that lurk at waterholes waiting for an unsuspecting animal to wander by. Parasites attach themselves to the animal's body or are swallowed and invade the animal from the inside.

Arachnids (spiders and scorpions), which are carnivores (meat eaters), seem well suited to desert life. They prey on insects and, if they are large enough, small lizards, mice, and birds. Scorpions use their pincers to catch prey, then inject it with venom (poison) from the stinger in their tails. Their venom can be lethal, even for large animals and humans. Arachnids do not usually drink water but get what they need from their prey. Most desert spiders live on the ground rather than on webs, hiding in holes or under stones to escape the heat. The large, hairy camel spiders are nocturnal, which means they rest during the daytime hours and hunt at night. Scorpions hunt at night to avoid the heat of the day, taking shelter beneath rocks or burying themselves in sand or loose gravel. Several American and Australian species dig burrows more than 3 feet (1 meter) below the surface.

Some invertebrates, such as the scorpion, go through a mating ritual. Male and female scorpions appear to dance with their pincers clasped together as they do a two-step back and forth. After mating, the males hurry away to avoid being killed and eaten by the female.

Insects, the largest group of invertebrates, have a four-part life cycle that increases their ability to survive in an unfavorable environment. The first stage of this cycle is the egg. The egg's shell is usually tough and resistant to long dry spells. After a rain and during a period of plant growth, the egg hatches. The second stage is the larva, which may pass through several stages of its own, shedding its outer covering, or skin, as it increases in size. Larvae have it the easiest of all in the desert, often being able to spend a portion of their life cycle below ground, where it is cooler and more moist than on the surface.
Some larvae store fat in their bodies and do not have to seek food. The third stage of development is the pupal stage. During this stage, the animal often lives inside a casing, in a resting state, which may offer as much protection as an egg. Finally, the adult emerges.

Amphibians are vertebrates (animals with backbones) that usually spend part, if not most, of their lives in water. Unlikely as it seems, such animals can be found in a desert. Frogs and toads manage to survive in significant numbers in desert environments. The short, active portion of their lives occurs during and immediately after the seasonal rains, when pools of water form. Mating, egg-laying, and young adulthood all take place in these pools. Offspring that survive into maturity leave the pools and take their chances on the desert floor, where they spend a few weeks feeding on both plants and insects. They must find shade or risk dying in the heat of the sun.

Avoiding Hot Sand

Several species of lizards have evolved methods for traveling over hot sand without burning their toes. The agamid lizard has long legs and keeps one foot raised and swinging in the breeze to cool it off while the other three support the animal. It does this in rotation, so that all its feet get an equal chance to cool off.

Frogs may spend up to ten months out of the year in their burrows, which are usually 3 feet (1 meter) under the desert floor. The Australian water frog can wait up to seven years for rain in a burrow made of its own shed skin. The Sonoran Desert toad is poisonous to predators and the largest toad in Arizona.

Frogs and toads feed on algae, plants, and freshwater crustaceans such as tiny shrimps that manage to survive in egg or spore form until brought to life by the rains. Frogs that eat the meat of crustaceans while they are tadpoles often become cannibals as they mature, eating their smaller, algae-eating brothers and sisters. If the rainy season is short, the cannibals have a better chance of survival because they have more food choices. If the rainy season lingers, the smaller tadpoles have a better chance because the cannibals cannot see their prey as well in muddy waters churned up by the rains, and the plant-eaters receive an increased supply of algae.

During the hottest, driest seasons, amphibians go through estivation (ess-tih-VAY-shun), an inactive period. While the soil is still moist from the rain, they dig themselves a foot or more into the ground. Only their nostrils remain open to the surface. Normally, their skin is moist and soft and helps them absorb oxygen. During estivation the skin hardens and forms a watertight casing. All the animal's bodily processes slow down to a minimum, and it remains in this state until the next rainfall, when it emerges. When water is scarce, Australian Aborigines (native peoples) dig up estivating frogs or toads and squeeze the animal's moisture into their mouths.

Mating and egg-laying for amphibians takes place in water; the male's sperm is deposited in the water on top of the female's jellylike eggs. As the young develop into larvae and young adults, they often have gills and require a watery habitat. If there is not enough rain for pools of water to form, amphibian populations may not survive.
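The Mathematics of a Cool Burrow

The table that follows records how much cooler the ground becomes with depth. As a rough, back-of-the-envelope illustration (not taken from this text), standard heat conduction says the daily temperature wave is damped by a factor of exp(-depth/d) as it travels downward, where d is a "damping depth" of only a few inches in dry soil. In the sketch below, the diffusivity figure for dry sand is an assumed value, and the surface mean and swing are chosen simply so the surface peaks at the table's 165°F reading.

import math

KAPPA = 0.2e-6    # thermal diffusivity of dry sand, m^2/s (assumed value)
PERIOD = 86400.0  # length of one day, in seconds
DAMPING_DEPTH = math.sqrt(KAPPA * PERIOD / math.pi)  # roughly 0.07 m

def peak_temp_f(depth_m, mean_f=95.0, swing_f=70.0):
    # Hottest temperature reached at this depth over a full day.
    # mean_f and swing_f are picked so the surface peaks at 165 F,
    # matching the table's surface reading.
    return mean_f + swing_f * math.exp(-depth_m / DAMPING_DEPTH)

for inches in (0, 12, 24, 36, 48):
    meters = inches * 0.0254  # convert inches to meters
    print(f"{inches:2d} inches down: peaks near {peak_temp_f(meters):5.1f} F")

In this rough model the daily heat wave all but vanishes within the first foot of ground, which is why a burrow even a few feet down stays near the day's average temperature instead of its extremes.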
|HOW TEMPERATURES GROW COOLER UNDERGROUND|
|Burrowing animals escape the heat by going underground, where it can be up to twice as cool as above ground.|
|Surface land temperature: 165°F (74°C)|
|Readings were also taken at 12 inches (31 centimeters), 24 inches (61 centimeters), 36 inches (91 centimeters), and 48 inches (122 centimeters) down; those values are missing from the source table.|

Of all the animals, reptiles are perhaps most suited to living in the desert. Snakes, lizards, and some species of tortoises are the most common. The scaly, hard skin of reptiles prevents water loss, and their urine is almost solid, so no water is wasted. Reptiles are cold-blooded, which means their body temperature changes with the temperature of the surrounding air. Early in the day, they expose as much of their bodies as possible to the sun for warmth. As the temperature climbs, they expose less and less of their bodies. During the hottest period of the day, they find shade or a hole in which to wait for cooler temperatures. During the chilly nights they become sluggish because, unlike warm-blooded mammals and birds, they do not have to use energy keeping their body temperatures up. Some species of lizards also have specially developed clear membranes in their lower eyelids that cover the entire eye and protect it from losing moisture.

Snakes have no legs. They move using special muscles that flex their flat belly scales forward and backward. Ridges on their scales grip the ground and pull them along. Some rattlesnakes, like the sidewinders of North American deserts, manage to move diagonally by coiling into a kind of S-shape and propelling themselves with the outside back portion of each S-shaped curve.

Common desert reptiles

Snakes are less common in deserts than lizards. Common desert snakes include the gopher snake, horned viper, Gaboon viper, rattlesnake, and cobra. The Egyptian cobra can grow up to 8 feet (2.5 meters) long and is found in Africa. The western diamondback rattlesnake, the most dangerous American snake, can grow to 6.5 feet (2 meters). Common lizards include the gecko, the skink, the bearded dragon, the iguanid lizard, and the monitor lizard. The only two species of venomous lizards, the Gila monster and the Mexican beaded lizard, are related to the monitor. They are found in the southwestern United States and western Mexico.

Food and water

The diet of lizards varies, depending upon the species. Some have long tongues with sticky tips good for catching insects. Many are carnivores that eat small mammals and birds. The water they need is usually obtained from the food they eat. Geckos can survive long periods without food by living on stored fat. While most lizards hunt food during the day, the Gila monster, found in the southwestern United States, looks for reptile eggs and baby animals after dark. Making the most of its opportunities, the lizard stuffs itself. During periods when food is scarce, its body draws on extra nutrients stored as fat in its tail, which can double in size after a big meal.

A Living Funnel

Most lizards obtain water from the food they eat. The thorny devil, a small lizard from Australia, is an exception. During the cool nights, dew condenses in the creases and folds of its skin. Most of these folds lead toward the lizard's mouth, and the lizard is able to lap up the moisture.

All snakes are carnivores, and one decent-sized meal will last them days or weeks. In the desert, they use their eyes to hunt during the cool nights when their prey is most active. Snakes cannot close their eyes because they have no eyelids.
A transparent covering protects their eyes from the dry air, dust, and sand. Snakes often hunt underground and are adapted to detecting ground vibrations. Many desert snakes bury themselves in the sand so that only their eyes and flickering tongues are visible. Some snakes kill their prey with venom (poison). Although commonly thought of as jungle dwellers, boa constrictors and pythons (a species of constrictor) live in the desert. Constrictors strike their prey, hold it with a mouthful of tiny teeth, and then wrap their body around it like a coil. The prey suffocates and the constrictor swallows its meal whole, gradually working it into its stomach with its hinged lower jaw and strong throat muscles.

Lizards and snakes hide in the shade during the hottest hours of the day to escape the sun. Only a few make their own burrows. Most take over the abandoned burrows of other animals, find shelter in rock crevices, or bury themselves in the sand.

Desert tortoises obtain some shade from the sun with their thick outer shells. Most of the time they escape the heat of the day by retreating to burrows they dig. In the spring and autumn, when the days are not excessively warm, the tortoise ventures out during the day to forage for food. During the summer they forage for food at night when it is cooler, remaining in burrows during the day. In the winter, tortoises hibernate (become dormant) in a second burrow they have dug.

The eggs of reptiles are leathery and tough and do not dry out easily. Some females remain with the eggs, but most bury them in a hole. Offspring are seldom coddled and are left to hatch by themselves. Once free of the eggs, the offspring dig themselves out of the hole and begin life on their own.

Rattlesnakes and other pit vipers have small pits on both sides of their face. These pits detect the body heat of prey, just like heat-seeking missiles. Pit vipers hunt at night, and their secret weapon allows them to find small animals hidden in the dark.

All deserts have bird populations. Tropical and subtropical deserts are visited twice each year by hundreds of species of migratory birds traveling from one seasonal breeding place to another. These migrators include small birds such as wheatears, as well as larger species such as storks and cranes. Some species know the route and where to find food or water. Others fly at night, when it is cool. Migrators are not true desert birds. They cannot survive for long periods in the desert as can birds for which the desert is home.

Birds have the highest body temperature of any animal—around 104°F (40°C). They do not need to lose body heat until the desert temperature is greater than their own. This makes desert life easier for them than for mammals, which must lose heat regularly during the warmest months, usually by panting or sweating. Feathers protect birds not only from the cold in winter but from the sun and heat. Air trapped between layers of feathers acts as insulation. Birds do not sweat but, by flexing certain muscles, can make their feathers stand erect. This allows them to direct cooling breezes to their skin. Those having broad wing spans, such as eagles and buzzards, can soar at high altitudes and find cooler temperatures.

Common desert birds

Common desert birds include ground birds and birds of prey.

Ground birds

Ground birds are not hunters or scavengers (animals that eat decaying matter) but obtain most of their food from plants and insects.
They have strong legs that enable them to dart around on the ground without tiring.

The Flying Sponge

The sandgrouse, found in the deserts of Africa and Asia, is a water-drinking bird. Most members of this species live near waterholes, except during nesting periods, when they may be forced to remain in an area experiencing drought (extreme dryness). Male sandgrouse have evolved an unusual method of overcoming this problem. They fly to the nearest waterhole, perhaps 20 miles (32 kilometers) away, and turn themselves into sponges. They wade into the water and let their special belly feathers absorb the liquid—as much as twenty times their own weight. Then they fly back to the nest, where the nestlings drink by squeezing the water out with their beaks.

In Asia, Africa, Australia, and the Arabian Peninsula, families of thrushes, called chats, are common. Varieties of chats live at many different altitudes, including those over 13,000 feet (4,000 meters). They are found in both arid and semiarid regions, and their diets and habits vary according to their location.

Wrens are common in desert habitats all over the world. They eat insects, and species in North American deserts eat seeds and soft fruits. Cactus wrens, as their name implies, live among prickly desert plants, where they build their nests among the spines.

A small desert bird that is a popular pet is the parakeet. Originally from Australia, parakeets live in huge flocks containing tens of thousands of birds. During years when food is plentiful, a flock may number in the millions. Parakeets prefer seeds and are nomads, often traveling from habitat to habitat in search of the seeds of annuals that bloom after a rainfall.

The largest ground birds found in deserts are members of the bustard family. The houbara bustard is found in the Sahara and the deserts of central Asia. It is about 2 feet (60 centimeters) tall and weighs as much as 7 pounds (3 kilograms). Houbaras depend primarily upon plants for food but will eat invertebrates and small lizards. Houbaras can run fast—up to 25 miles (40 kilometers) per hour—and seldom fly. Other birds include the pale crag martin and the swallow.

Birds of prey

Birds of prey are hunters and meat eaters. They soar high in the air on the lookout for small animals to eat. Their eyesight and hearing are usually very sensitive and enable them to see and hear creatures scurrying on the ground far below.

The Eyes Have It

The eyes of all birds are large in relation to their body size. Some of the largest are the eyes of owls, which take up about one-third of the space in the skull. Owl eyes are specially adapted to see better at night. They have more rods, the cells that are most sensitive to low light levels, and fewer cones, the cells sensitive to bright light. An owl's eyes are pear shaped and face forward, which gives the owl a kind of binocular vision and superior judgment of distances. An owl's eyes cannot turn in their sockets because of their size and shape. To look around, the owl must swivel its entire head.

The roadrunner, found in the southwestern United States, can scurry over the desert floor for long stretches at speeds of about 13 miles (20 kilometers) per hour. Roadrunners are carnivores and are known to come running at the sound of a creature in trouble.

Several species of falcons live in semiarid regions; one of the true desert falcons is the prairie falcon of North America. Prairie falcons hunt for other birds such as larks and quail, and small mammals such as rabbits and prairie dogs.
When food is scarce, they eat insects and reptiles. A falcon attacks its prey by diving and trying to seize the head in its talons. Prairie falcons do not build their own nests. Instead, they move into the abandoned nests of other birds or use a hollow in a rock.

Owls live on the edges of deserts. The eagle owl is the largest, measuring 27 inches (68 centimeters) in length with a wingspan of over 6.5 feet (2 meters). It is powerful enough to attack small deer. The smallest owl, the elf owl, hunts invertebrates, mainly insects. This owl is about 5 to 6 inches (12 to 15 centimeters) long and weighs 1.25 to 1.75 ounces (35 to 55 grams). Most owls hunt in the evening or at night, when their extraordinary eyes and superior hearing allow them to find their prey.

The bird most often pictured in the desert is the vulture, waiting patiently while a creature dies of thirst. Most vultures are not really desert birds, although they do spend much time hunting in desert regions. They soar high in the air, circling and looking for carrion (dead animals), which their excellent eyesight allows them to see easily. When one bird spots a potential food source it begins to descend, and the other vultures follow.

Food and water

Birds are found in greater variety and numbers around oases and waterholes, where there is an ample supply of water, seeds, and insects. Some, like the Australian scarlet robin, often drink water. Birds that live in the desert itself are able to fly long distances in search of food or water. Some birds become nomads, following the rains from habitat to habitat.

Birds usually require less than 10 percent of the amount of water needed by mammals. For this reason, many birds, like shrikes and some wheatears, can obtain enough moisture from the seeds, plants, and insects that they eat and do not need an additional water source. The same is true of vultures and birds of prey, which obtain water from the flesh of animals. Birds have kidneys that are very efficient at conserving water, and their urine is not liquid but jellylike.

Sometimes animals are involved in relationships in which one animal is helped and the other is not affected. This is called commensalism. One example is a desert snake that takes over the abandoned burrow of a prairie dog.

During the hottest part of the day, most desert birds rest by roosting in the shade or in underground burrows. During excessively hot or dry periods, birds fly to more comfortable regions. It has been estimated that one-third of Australian birds are constantly on the move to escape the heat.

Many desert birds build nests in rock crevices, in abandoned burrows, or on the ground in the open, because there are few trees in the desert. Those that build on the ground may put walls of pebbles around the nest that act as insulation and reduce the force of the wind.

Except in Australia, desert birds appear to breed as other birds do—according to the seasons. In Australia, they adapt their breeding habits to periods of rainfall, and breeding cycles may be years apart. Birds are free to fly away from the heat most of the year, but during the breeding cycle they must remain in the same spot from the time nest building begins until the young birds can fly. This is usually a period of many weeks. Normally, the parents sit on the nest to protect the eggs from heat or cold. During very hot weather, the parents may stand over the nest to give the eggs or the nestlings shade.

Many mammals live in the desert.
More than 70 species live in North American deserts alone. Of the major mammal groups, only monkeys and apes are rarely seen in deserts.

Mammals must prevent the loss of moisture from their bodies in the desert. The urine and feces of many desert mammals are concentrated, containing only a small amount of water. Small desert mammals, such as rodents, do not sweat. They manage to lose enough heat through their skin. During the hottest times, they burrow underground. Some estivate in summer months.

No Need for Sunscreen

The naked mole rat, sometimes called the sand puppy, is an animal that seems poorly designed for desert life. It has no fur, just its wrinkly skin for protection, and it is almost blind. To make matters worse, unlike most mammals, it cannot maintain a steady body temperature. Its name gives a clue as to how it survives in the hot drylands of East Africa; it burrows. Naked mole rats are tunneling experts, and a clan of rats will create an entire apartment complex of nurseries, storage chambers, and bedrooms. They dig so fast with their sharp buck teeth that most predators cannot catch them and get just a face full of dirt for their trouble. The clan leader is always a female. She is the only one who breeds. Other females do not mature sexually until the queen dies or they decide to leave the den to create their own clan elsewhere.

Medium-sized mammals, such as rabbits and hares, do not burrow or estivate, but they will use another animal's burrow to escape from immediate danger. They have no sweat glands and cannot keep cool by sweating, although some heat escapes from their large ears. Hares and rabbits stay in whatever shade they can find during the hottest time of day. The quokka, a rabbitlike marsupial (a mammal that carries new offspring in a pouch) from Australia, copes with the heat by producing large amounts of saliva and licking itself. It is not known how the quokka makes up for this water loss.

Larger grazing animals, such as gazelles, can sweat, which helps them tolerate the heat. Some carnivores, such as coyotes, release body heat by panting (breathing rapidly through their mouths). This results in lost moisture. True desert dwellers, like the mongoose, the meerkat, and the hyena, avoid the midday heat by retiring to underground dens.

Common desert mammals

Other mammals commonly found in deserts include the kangaroo, the camel, and the hyena.

Kangaroos

A well-known grazing animal of Australia is the red kangaroo. Kangaroos will hop long distances in search of food. Water is less of a problem, as they get much of the moisture they need from grasses. They have efficient kidneys and produce a concentrated urine.

The Rabbit Plague

In the nineteenth century, English settlers introduced rabbits into the Australian Desert and grasslands. A few soon multiplied into a population half a billion strong. They ate everything growing in the pasturelands and then, when there was no more grass to supply moisture, the rabbits mobbed the water holes. Scientists got rid of vast numbers by introducing rabbit diseases in 1950. Some hardy animals survived and are still multiplying.

A kangaroo's strong legs allow it to move as fast as 20 miles (30 kilometers) an hour. Each hop can carry it as far as 25 feet (8 meters). Kangaroos are marsupials, which means that females carry their young, called joeys, in a pouch on the abdomen. Females with young in their pouches are seldom able to travel fast. If the female is in danger and the joey is too heavy, the mother will dump the young out in order to escape.
This may seem cruel, but if she is caught the joey will die anyway. If she escapes, she will breed again.

Camels

Camels are the mammals most people associate with the desert. The Bactrian camel has two humps and lives in the deserts of central Asia. The one-humped Arabian camel, called a dromedary, lives in the Arabian and Sahara Deserts.

Camels are well adapted for desert life. They can travel almost 100 miles (160 kilometers) a day, for as long as four days, without drinking. When they do drink, they can take in more than 20 gallons (100 liters) in just a few minutes. At one time it was believed that camels stored water in their humps. This is not true. The hump contains fat, which is used up during long journeys and may shrink in lean times. Camels have strong teeth, and the membranes that line the insides of their mouths are tough, allowing them to eat almost anything that grows in the desert, even the thorniest plants. They can drink bitter, salty water that other animals cannot tolerate. Camels have thick fur that molts (falls out) during the hottest seasons and is replaced by new, thinner hair. Camels can sweat to reduce body temperature, but their body temperature is not constant and varies depending upon surrounding air temperatures. Their eyes have long protective lashes, and their nostrils can be closed to keep out blowing sand. Their padded, two-toed feet are well insulated against the hot desert floor.

Packing it Away

Packrats are rodents that collect almost anything—seeds, bones, rocks—and store it away in their dens. Some of those dens need housecleaning so badly that, over the years, the stuff becomes glued together. Many generations of packrats often live in the same spot, and the piles can grow for thousands of years. The packrats' messy lifestyle may actually help scientists. When the burrows are studied, scientists can find out what an area looked like thousands of years ago.

Camels can scarcely be considered wild animals any longer; it is doubtful that many truly wild groups still exist. They have been thoroughly domesticated (tamed) and are used by humans for transportation and other needs. Their ability to travel long distances and carry up to 600 pounds (270 kilograms) makes them useful.

Hyenas

Hyenas resemble dogs but belong to a family of their own, and they have one of the strongest sets of jaws of any animal. They range over the deserts of Africa and Arabia, hunting in packs for antelope and other game, or stealing the meals of other carnivores. They also eat carrion and, occasionally, plant foods. Hyenas are not particularly fast, but they do not give up easily and may simply wear out their prey, which collapses beneath the snarling pack. Hyenas live as part of a clan in dens they dig themselves.

Food and water

Some small mammals eat plant foods and insects; others, like the desert hedgehog, eat bird and reptile eggs and young. In Australia, a tiny mole no more than 8 inches (20 centimeters) long eats at least its own weight in insects and young lizards every day. Many small mammals do not need to drink water as often because they obtain moisture from the food they eat.

Large grazing animals need to drink and require a water source to replace the moisture lost in sweating. A few, such as the Arabian gazelle and the Nubian ibex, seldom drink water. They obtain what they need from the plants they eat. Grazing animals can travel to areas where rain has recently fallen. In temperate climates, most grazing animals live in large herds.
The desert food supply will not support such numbers, and desert groups are usually very small.

Carnivores, such as mountain lions and coyotes, do not live in the deep desert but remain on the desert fringes, where a supply of water is more readily found. Carnivores obtain much of their moisture requirements from the flesh of their prey, but they still need to drink.

Small mammals remain in burrows during the day. The air inside a burrow is up to five times more humid than the outside air. This helps the animal prevent moisture loss. Medium-sized hares and rabbits do not live in burrows but seek shelter in the shade of plants or rocks and in shallow depressions. Grazing animals look for shade during the midday heat. Young animals may lie on the ground in the shade created by the adults. Carnivores dig their own dens, which may consist of many tunnels to accommodate a family clan. The males mark their territories with scent, usually that of urine.

Young mammals develop inside the mother's body, where they are protected from heat, cold, and predators. The extra weight can make it difficult for the pregnant females themselves to escape danger. Female mammals produce milk to feed their young. In the desert this presents a problem in lost moisture. Those that live in dens must remain nearby until the young can survive on their own, making survival for both mother and young during drought conditions more difficult.

As in other biomes, many desert animal species are threatened. The addax, a desert antelope with beautiful horns, has been greatly reduced in numbers by Saharan nomads who believe its stomach contents have healing powers. Its skin is valuable, for it is believed to have the power to ward off attacks from snakes and scorpions.

After World War II (1939–45), when dichlorodiphenyltrichloroethane (DDT) and other dangerous pesticides were in common use, aplomado falcons shrank in number. These small falcons lived in desert areas of Mexico, Texas, and Arizona but are no longer found north of Mexico. The poisons were used to kill grasshoppers that were devouring grasses and other vegetation. Grasshoppers are part of the falcons' diet, and as the birds ate the contaminated insects, the poison killed the falcons. Now that DDT is banned, the number of falcons may increase.

The California condor is one of the world's rarest birds. When early settlers slaughtered deer herds for food, they reduced the condor's food supply. Later, cattle ranchers set out poison baits to kill wolves and coyotes. The poison also killed the condors that ate the dead animals. California condors are now protected.

The Web of Life: Natural Balance

Balance is important to every biome. Changes made by humans with even the best of intentions can create serious problems. During the nineteenth century, early settlers brought the American prickly pear cactus to Australia because they liked the plant's appearance. In America the prickly pear is a useful desert shrub, often serving as fencing for livestock. In Australia, where it had no natural enemies, it was a disaster. By 1925, prickly pear cacti covered more than 100,000 square miles (260,000 square kilometers) of the states of Queensland and New South Wales. Nobody wanted the job of cutting down all those plants. Instead, a natural predator, a little moth that in caterpillar form loves to munch on prickly pears, was brought to Australia. Within five years, the prickly pears were all but gone.
As a result of captive breeding, by 2008 there were about 300 condors, with about half of those in captivity.

The beautiful spiral horns of the Arabian oryx are prized by hunters looking for trophies. By 1960 the Arabian oryx had been reduced to only about a dozen animals. In 1961 the species became protected, and several hundred were bred in zoos. After 1982 many of the zoo-bred animals were reintroduced into the wild.

Human beings are able to adapt to many unfriendly environments. It is no surprise that they have learned to live in the desert and make it their home. Ten percent of the world's population lives on arid (dry) land, including a large variety of native peoples of all races. This number is growing because most desert inhabitants live in developing countries with high rates of population growth.

Impact of the desert on human life

Humans are able to maintain a safe body temperature in the desert by sweating. Under extreme heat, a human being may lose as much as 5 pints (3 liters) of moisture in an hour and up to 21 pints (12 liters) in a day. This water must be replaced or the person will die from dehydration. Dehydration occurs when tissues dry out, depleting the body of fluids that help keep it cool. Unlike the kidneys of many desert animals, human kidneys cannot concentrate urine to conserve water. The loss of salt through sweating is also a problem. Humans need a certain amount of salt to maintain energy production. If too much salt is lost, painful muscle cramps occur.

Humans can change very little physically to adapt to desert conditions, so they must change their behavior. This has been done in many ways, from the choice of lifestyle to the development of technologies such as irrigation (watering of crops).

Food and water

Until the mid-1900s, many native desert dwellers were nomads—either hunter-gatherers, like the Bushmen of the Kalahari and Namib, or herders, like the Bedouin of the Middle East. They moved regularly, usually along an established route, in order to seek food and shelter for themselves or their herds of animals. Most nomads return year after year to the same areas within a given territory. They know when the rains usually come, where to find food or pastures, and where a water supply is located. By the late twentieth century, fewer than 3 percent of desert peoples lived nomadic lives, having been driven from their lands by ranchers or by mining companies looking for mineral, gas, and oil resources. Since 1950, many nomadic peoples have moved to cities, where they often live in poverty.

In some cases it might be possible to reverse desertification, because nature fights back. Grasslands in northern Uganda, Africa, had been lost when herds of cattle were allowed to overgraze the area. Then tsetse flies moved in and attacked the cattle. When the cattle herders moved to avoid the flies, the plants returned.

The diet of hunter-gatherers consists primarily of plant foods and game animals, although meat is usually scarce. Some, like the Aborigines of Australia, eat the grubs (larvae) of certain insects, which provide a source of protein. Herding tribes depend primarily upon their animals for food, but they may also raise grains or trade for them. In the Sahara, dates from the date palm are an important food.

Many Native Americans found important uses for desert plants. The leatherplant, also called sangre de drago (blood of the dragon), contains a red juice used as a medicine for eye and gum diseases.
Wine and jelly were made from the fruit of the saguaro cactus, the fruit of the prickly pear was made into jam, and ocotillo branches were useful as building materials.

The homes of hunter-gatherers tend to be camps, not houses. The Bushmen of the Kalahari, for example, make huts from tree branches and dry grass. Traditionally, the nomadic tribes of the Sahara, whose shelters had to be portable, used tents made from the hides or hair of herd animals such as goats. The tents of the Bedouin are an example.

Village houses in the desert are often made of mud bricks dried in the sun. The bricks do not need to be waterproof because there is so little rain. In the desert country of Mongolia, desert dwellers commonly live in yurts—tentlike structures made from felt created from sheep's wool. The felt is stretched over a wood frame, and a hole at the top of the yurt allows smoke from cooking fires to escape.

Some Native Americans of the southwestern desert, such as the Hopi and Zuni tribes, built homes called pueblos from mud, wood, and stone. The thick walls and small windows kept the interior cool. The Anasazi, who lived around 1100 AD, built their homes in the sides of cliffs. The homes were reached by ladders, which could be pulled up for defense. The Navajos made hogans—houses of logs and mud. Modern-day Native Americans live in the same kinds of homes as other Americans.

One advantage humans have over other animals is clothing, which is a substitute for fur. Unlike fur, clothing can be put on or taken off at will. Most traditional desert peoples wear layers of loose-fitting garments that protect the body from the heat. A naked person absorbs twice as much heat as a person in lightweight clothes. Loose clothing absorbs sweat and, as the air moves through, it produces a cooling effect. As a result, the person sweats less, conserving water. Many desert tribes, especially in the Middle East, have at least partially adopted western clothing styles.

The only desert peoples to go naked were the Bushmen of the Kalahari and the Aborigines of Australia. When nights were chilly they occasionally wore “blankets” made from bark, but more often their warmth came from a campfire. Those Bushmen and Aborigines who still live a traditional lifestyle continue to go without clothes. Some kind of headgear is usually worn by people in the desert to shield their face from the sun and blowing sand. The Fulani, who live on the edge of the Sahara in West Africa, wear decorated hats made from plant fibers and leather.

For traditional hunter-gatherers, possessions are almost meaningless. If they favor a particular stone for sitting on, they might carry it along. However, a stone, as well as other possessions, gets very heavy after a few miles. Their economy tends to be simple and little trading is done.

A century ago, desert herders were self-sufficient. Their wealth moved with them in the form of herd animals, jewelry, tents, and other possessions. Commerce usually involved selling goats, camels, or cattle. In the Middle East, the discovery of oil made important changes in the economy. In some cases, sweeping modernization made irrigation and food production more stable. Nomads settled in one spot and became farmers. In other cases, the wealth fell into the hands of a few people, while the large majority lived in poverty.

Impact of human life on the desert

The fact that the desert is so unfriendly to human life has helped preserve it from those who could destroy its ecological balance.
Use of plants and animals

As long as traditional lifestyles remained in effect, the human impact on the desert was not severe. Desert dwellers understood the need to maintain balance between themselves and their environment. While animals and plants were used for food, they were not exploited (overused), and their numbers could recover. Since the introduction of firearms and the rapid growth of human populations, however, many plants and animals have become threatened. Desertification, the deterioration of land until it becomes desertlike, continues in spite of efforts by conservationists (people who work to preserve the environment) to stop it.

Several popular desert plants, such as cacti, and animals, such as lizards, are sold at high prices to collectors. Many of these species are threatened as a result. Overpopulation has diminished many natural resources. The digging of wells has caused the water table (the level of groundwater) to drop in many desert regions. Supplies of oil and minerals are being removed from beneath the desert surface and cannot be replaced.

For thousands of years, crops have been grown in desert soil with the aid of irrigation (mechanical watering systems). Furrows were dug between rows of plants, and water pumped from wells was allowed to run along the furrows. In modern times, dams and machinery are used to control the rivers or pump groundwater for irrigation, allowing many former nomads to become settled farmers. To irrigate large cotton farms in the Kara Kum Desert of central Asia, water is brought from the Amu Darya River by means of a canal 500 miles (800 kilometers) long.

Green lawns and many popular trees and shrubs require constant watering in the summer. This puts a strain on water supplies. Some desert communities ask residents to forget the lawns and xeriscape instead. Xeriscaping is a kind of landscaping using desert plants that need less water.

Irrigation must be controlled. If too much groundwater is pumped, it may be used up faster than it can be replaced. When the water table drops in regions near the ocean, the land may slump. Salt water may enter aquifers (underground layers of earth that collect water), destroying the fresh water. Another problem caused by irrigation is an accumulation of salt in the soil. All soils contain some salts and, if irrigation water is used without proper drainage, the salt builds up within the surface layers and prevents plant growth.

Quality of the environment

People have impacted the desert environment in several ways. Drilling for oil and mining for other resources require roads. The people who operate the drills need houses. Most of these changes have occurred along the Mediterranean in mineral-rich countries on the edges of the desert. However, the deep centers of deserts have usually not been disturbed. Roads remain tracks and have escaped being blacktopped. The world's climate may be changing because of human activity. If so, the climate of the desert will change, with unknown consequences for the plant and animal life.

The Bushmen and the Tuareg are two examples of peoples who make the desert their home.

The Bushmen of the Kalahari and Namib Deserts of Africa live in clans consisting of several families. A clan's territory is about 400 square miles (1,036 square kilometers). Clans move according to the rains or the seasons, returning to familiar campgrounds year after year. Their territory includes good waterholes. Bushmen live off the land, eating berries, roots, and wild game.
Plants, which make up the greatest share of their diet, are gathered by the women. Bushmen are expert trackers and use these skills to hunt game for food with bows and poisoned arrows. The meat is cut into strips and dried so that it will not spoil.

Bushmen are not tall, ranging between 55 and 63 inches (140 and 160 centimeters) in height. This may be partially due to a diet deficient in some nutrients. Overnight shelters built from grass and branches provide protection from the wind. In the winter, clans may break up into smaller groups and build stronger huts to keep out the rains. The Bushmen population numbers approximately 20,000. Probably fewer than half of those still live as hunter-gatherers.

The Bushmen and the Aborigines are excellent huntsmen and trackers. Bushmen can follow animal signs over the hardest ground and can distinguish the signs of one individual animal from those of another. During World War II (1939–45), when pilots whose planes had crashed were lost in the Australian deserts, Aborigines could find them by following footprints no one else could see.

The northern Tuareg of the Sahara depend upon the camel for their livelihood, grazing their animals on what little pasture exists on the desert fringes. Camels may be killed for meat or their milk used to make butter and cheese. They are also the primary means of transportation. Since the Tuareg are traders, camels are essential to carry goods such as cloth and dates.

The southern Tuareg tribes are more settled. As a result of modern technology that allows the digging of deep wells, the Tuareg have established cattle ranches on the edges of the Sahara. Unfortunately, the land is often overgrazed, and more territory is lost to the desert every year.

Tuareg men wear a characteristic blue veil wound into a turban (head covering) on their heads. Both men and women wear loose robes for protection against the sun. An indigo (blue) dye is used to color some clothing. The Tuareg have been known as the “blue men” because the dye often rubs off on the skin.

Homes are usually low tents made of animal hides dyed red. The tent roof is supported by poles and the sides are tied to the ground with ropes to keep out the wind and sand. During the hottest parts of the year, the Tuareg build a zeriba, a large, tall hut made from grasses attached to a wooden frame.

The Tuareg are known for their ability to fight. Before firearms were introduced during the eighteenth century, they made impressive weapons such as daggers and swords. They now prefer high-powered rifles. Historically, the Tuareg have had a tendency to raid other tribes, and wars between one tribe and another may last for years. It is estimated that the Tuareg number as many as 1 million people. This figure is only approximate, since about 700,000 of the Tuareg are nomads and rarely remain in one place long enough for a reliable count to be made.

The transfer of energy from organism to organism forms a series called a food chain. All the possible feeding relationships that exist in a biome make up its food web. In the desert, as elsewhere, the food web consists of producers, consumers, and decomposers. The following shows how these three types of organisms transfer energy to create the food web within the desert.

Green plants are the primary producers. They make organic materials from inorganic chemicals and outside sources of energy, primarily the sun. Desert annuals and hardy perennials, such as cacti and palms, turn energy into plant matter. Animals are consumers.
Plant-eating animals, such as locusts, gazelles, and rabbits, are primary consumers in the desert food web. Secondary consumers eat plant-eaters. They include the waterhole tadpoles that consume smaller, plant-eating family members. Tertiary consumers are meat-eating predators, like mongooses, owls, and coyotes. They will eat any prey small enough for them to kill. Humans, like the Bushmen, fall into this category. Humans are omnivores, which means they eat both plants and animals.

Decomposers feed on dead organic matter and include fungi and animals like the vulture. In moister environments, bacteria aid in decomposition, but they are less effective in the desert's dry climate.

The Gobi Desert

In the eastern part of central Asia, extending into Mongolia and western China, is the great Gobi Desert. It is part of a chain of deserts, including the Kara-Kum, the Kyzyl-Kum, the Takla Makan, the Alashan, and the Ordos. Gobi is a Mongolian word meaning “waterless place.”

|The Gobi Desert|
|Location: Mongolia and western China|
|Area: 500,000 square miles (1,300,000 square kilometers)|
|Classification: Cold; arid and semiarid|

Surrounded by mountains—the Pamirs in the west, the Great Kingan in the east, the Altai, Khangai, and Yablonoi in the north, and the Nan Shan in the south—the Gobi is a high, barren, gravelly plain where few grasses grow. It is so flat that a person can see for miles in any direction. Except for the polar deserts, the Gobi is the coldest because of its altitude—about 3,000 feet (900 meters) in the east and about 5,000 feet (1,500 meters) in the south and west. Arctic winds blow down from the north so that, in the winter, some areas become snow covered. Any rainfall occurs in the spring and fall. Temperatures average between 27°F (-3°C) in January and 86°F (30°C) in July.

Several rivers flow into the Gobi from the surrounding mountains, but the desert has no oases (waterholes). Dry river beds and signs of old lakes indicate that water once existed here. The only surface water that remains is alkaline (containing too many undesirable minerals). Fresh water is obtained primarily from wells.

The few trees found here are willow, elm, poplar, and birch. In general, the largest plants are the tamarisk bush and the saxaul. Grass, thorn, and scrub brush survive, and bush peas, saltbush, and camel sage are abundant. During the brief spring, annuals (plants that live only one season) flourish.

Few reptiles are present because they do not favor the cold winters, but several species of snakes live in the Gobi. Many birds, including sand grouse, bustards, eagles, hawks, and vultures, thrive here. Many of these are migrators (animals that have no permanent home). Small mammals include jerboas, hamsters, rats, and hedgehogs. Larger mammals include gazelles, sheep, and rare Mongolian wild horses.

The Mongolian people living in the Gobi include settled farmers, who cultivate grains on its fringes, and herders, who prefer life on the windy plain. Herders tend to be nomads (wandering tribes) who raise horses, donkeys, camels, sheep, goats, cattle, and, in upland areas, yaks. The two groups often trade; the farmers provide grain and the herders supply meat. Since the 1940s, irrigation has been used to make cotton and wheat crops possible, and many nomadic peoples have settled on farms.

Famous Venetian explorer Marco Polo (1254–1324) reached the Gobi during his travels in the thirteenth century.
Modern explorers, including archaeologists and anthropologists, still visit in search of dinosaur bones and eggs commonly found there. Such fossils are evidence that the Gobi was once a friendlier environment supporting diverse animal life.

The Thar Desert

The Thar (TAHR) Desert (also called the Great Indian Desert) begins near the Arabian Sea and extends almost to the base of the Himalaya Mountains. It contains a variety of landforms and differing climates. It is considered an arid, lowland desert characterized by extreme temperature variations. January temperatures average 61°F (16°C), while June temperatures average 99°F (37°C).

|The Thar Desert|
|Location: India and Pakistan|
|Area: 77,000 square miles (200,000 square kilometers)|
|Classification: Hot; arid|

The plain of the Thar slopes gently from more than 1,000 feet (305 meters) in the southeast to fewer than 300 feet (91 meters) in the northwest near the Indus River, broken only by areas of sand dunes—primarily seif (SAFE) dunes that run parallel to the wind. Some more or less permanent dunes may soar to heights of 200 feet (61 meters). Several salt lakes lie on the desert's margins. The salt covering the lake beds is mined and sold.

Rainfall is unreliable and drops to about 4 inches (100 millimeters) a year in the west. In the summer, the Thar benefits from seasonal monsoons (rainy periods) that arrive in July and August. During monsoon season, the largest salt lake may cover 90 square miles (about 230 square kilometers) and be 4 feet (1.22 meters) deep, only to disappear during the dry period.

Wheat and cotton have been grown on the irrigated Indus plain in the northeast since the 1930s, but most areas support only grass, scrub, jujube, and acacia, which are eaten by domesticated camels, cattle, goats, and sheep. Birds, such as the great bustard and quail, are found here. Wild mammals live in or on the fringes of the Thar, including hyenas, jackals, foxes, wild asses, and rabbits.

Nomads raise sheep, cattle, and camels. Villages are located in areas where grass will grow after a rain and support herds of livestock. Small industries are based on wool, camel hair, and leather. About 5,000 years ago, the Indus Valley was home to the great Indus civilization. The decline and disappearance of this civilization is a mystery, but it may be partly due to climatic changes that caused the Thar to spread.

The Australian Desert

About three-quarters of the Australian continent is desert or semiarid land, a mixture of stone, rock, sand, and clay. January temperatures average 79°F (26°C), and July temperatures average 53°F (12°C). The central region is a basin; the western area is a high plateau. The Australian desert region is composed of six different deserts—the Simpson, the Great Victoria, the Sturt Stony, the Gibson, the Great Sandy, and the Tanami. They blend into one another and are part of the same whole.

|The Australian Desert|
|Area: 600,000 square miles (1,500,000 square kilometers)|
|Classification: Hot; arid and semiarid|

The basin region is a flat land, where a visitor can gaze for miles without the view being interrupted by hills. This makes it a difficult place in which to navigate, for there are no landmarks. The terrain varies, changing from semiarid grassland to rocky stretches, to areas with sandy dunes. Strange, massive, solitary rock formations, such as Ayers Rock and the Olgas, have spiritual meaning to the Aborigines. The lowest point of the Australian Desert is near Lake Eyre, which lies 60 feet (18 meters) below sea level.
The lake, situated close to sand dunes, is about 50 miles (80 kilometers) long and so salt-encrusted that during dry periods it appears white. It fills with water only once or twice during a ten-year period. When full, it supports abundant wildlife, including birds, frogs, and toads.

Mulga, a type of thorny acacia bush, and mallee, a species of eucalyptus, grow in the most arid regions of the Australian Desert. At the desert's edges, spinifex grass grows.

Australia is home to a number of animals found nowhere else in the world, but their ways of adapting to the desert are the same as animals elsewhere. Insects, such as locusts and termites; snakes, such as the bandy bandy and the brown snake; and lizards, such as the blue-tongued skink, are commonly found in the Australian Desert. Birds include parrots, quail, cockatoos, kookaburras, and the emu, a large, flightless bird that resembles an ostrich. Kangaroos are Australia's largest mammals. They prefer to live in grasslands but can travel long distances in the desert without water. Kangaroos are marsupials and carry their young in a pouch. Other marsupials unique to Australia are the wallaby and the wombat.

Until the middle of the twentieth century, most Aborigines of Australia still lived as hunter-gatherers. Traveling in small groups, or clans, they moved over an established territory. They gathered plant roots, bulbs, termites, and grubs, and hunted kangaroos and wallabies with spears and boomerangs. When young Aborigine men attained a certain age, they went on a "walkabout." During this time they had to leave their clan to wander in the desert, perhaps for years, learning about life and survival.

The Arabian Desert

The Arabian Desert covers most of the Arabian Peninsula, from the coast of the Red Sea to the Persian Gulf. It measures 900,000 square miles (2,330,000 square kilometers). Except for fertile spots in the southeast and southwest, the peninsula is all desert. During prehistoric times, volcanic cones and craters formed along its western edge. On the eastern side, sedimentary rocks and prehistoric sea life formed the world's richest oilfields. Inside some of these same rocks are vast supplies of underground water, captured during ages past when the area included wetlands.

The Arabian Desert
Location: The Arabian Peninsula
Area: 900,000 square miles (2,330,000 square kilometers)
Classification: Hot; extremely arid

The desert plateau consists of bare rock, gravel, or sand having a characteristically golden color, with the exception of deep ravines in the south. Wind and occasional flooding have carved the rocks into fantastic shapes. The Empty Quarter—about one-third of the southern part of the peninsula—is a vast sea of sand dunes. Treacherous quicksand is found here, its particles so smooth they act like ball bearings, drawing unwary creatures below the surface to their deaths. Another area of dunes, the Great Nafud, is found farther north and contains so few waterholes that even camels find it difficult to cross. Along the coasts, changing sea levels have resulted in large salt flats, some as much as 20 miles (30 kilometers) wide.

A monsoon season (rainy period) occurs along the southeast; rainfall elsewhere is only 1.4 inches (35 millimeters) per year. Flash floods are common during the infrequent rains, and hailstorms sometimes occur. Droughts may last several years. January temperatures average 65°F (18°C).
In the southern portion of the Arabian Desert, annuals spread a colorful blanket over the soil right after a rain; otherwise, hardy perennials are the most common plant life, including mimosas, acacias, and aloe. Oleanders and some species of roses also thrive. Few trees can survive. The tamarisk tree helps control drifting sand, and junipers grow in the southwest. Date palms grow almost everywhere except at high elevations, and coconut palms are found on the southern coast. Plants that can be cultivated with the aid of irrigation include alfalfa, wheat, barley, rice, cotton, and many fruits, including mangoes, melons, pomegranates, bananas, and grapes.

Swarms of locusts move over the land periodically, causing much destruction. Other common invertebrates include ticks, beetles, scorpions, and ants. Horned vipers and a special species of cobra make their home in the Arabian Desert, as do monitors and skinks. Ostriches are now extinct there, but eagles, vultures, and owls are common. Seabirds, such as pelicans, can be seen on the coasts. Wild mammals include the gazelle, oryx, ibex, hyena, wolf, jackal, fox, rabbit, and jerboa. The lion once lived there but has long been extinct in the area.

For centuries, the Arabian Desert has been home to nomadic tribes of Bedouin. The camel makes life in the desert possible for them. A camel's owner can live for months in the desert on the camel's milk. The camel is also used for meat, clothing, and muscle power, and its dung (solid waste) is burned for fuel. Domestic sheep and goats are raised, as well as donkeys and horses.

The Sahara Desert

The Sahara Desert ranges across the upper third of Africa, from the Atlantic Ocean to the Red Sea, and is about 1,250 miles (2,000 kilometers) wide from north to south. It is the world's largest hot desert, covering an area of 3.5 million square miles (9 million square kilometers). Its landforms, which tend to have a golden color, range from rocky mountains and highlands (some as high as 10,712 feet [3,265 meters]) to stretches of gravel and vast sand dunes. Erosion has shaped the sandstone rocks in some areas into unusual shapes and deep, narrow canyons.

The Sahara Desert
Location: North Africa
Area: 3,500,000 square miles (9,000,000 square kilometers)
Classification: Hot; extremely arid

Millions of years ago, volcanic activity occurred here. The region was the site of shallow seas and lakes, which contributed to the vast reserves of oil deposits found there. Around 150 BC, when the Romans controlled North Africa, the northern Sahara was a rich agricultural area. Over the centuries, sand has claimed the once-fertile landscape.

The Sahara is the hottest desert, with a mean annual temperature of 85°F (29°C). Nights are cool and occasionally fall below freezing in the winter months. Except along the southern fringe, rain is not dependable and may be absent for as many as 10 years in succession. When it does rain, it tends to fall in sudden storms.

In some areas, such as the Tanezrouft region, nothing appears to grow. In other areas, annuals bloom after the unpredictable rains and provide food for camels and wild animals. At one time, the only perennial vegetation at oases consisted of tamarisks and oleander bushes. The date palm has since been introduced, and now citrus fruits, peaches, apricots, wheat, barley, and millet are all cultivated. Animals such as gazelles, oryx, addax, foxes, badgers, and jackals live in the wild. Domesticated animals include camels, sheep, and goats.
Three main groups of people now live in the Sahara: the Tuareg, the Tibbu, and the Moors. Two-thirds of the population live at oases, where they depend upon irrigation and deep wells that tap underground water.

The Patagonian Desert

Along much of the length of Argentina, between the Andes Mountains in the west and the Atlantic Ocean in the east, lies the Patagonian Desert, which covers 300,000 square miles (777,000 square kilometers). It owes its existence to a cold ocean current and the Andes Mountains, both of which keep the air that reaches it dry. The terrain is a series of plateaus, some as high as 5,000 feet (1,500 meters) near the Andes, which slope toward the sea. Deep, wide valleys made by ancient rivers have created clifflike walls, but only a few streams remain. These are usually formed by melting snow from the Andes.

The Patagonian Desert
Area: 300,000 square miles (777,000 square kilometers)
Classification: Cold; arid

In some areas, the soil is alkaline or covered by salt deposits and does not support plant life. Elsewhere it is primarily gravel. Mineral resources, including coal, oil, and iron ore, have been found in several places, but the quantities are too small to be important.

Rainfall, which occurs in the summer, is usually less than 10 inches (25 centimeters) annually. Although arid, the Patagonian Desert does not experience great extremes of temperature, primarily because it is so close to the ocean. Mean annual temperatures range from 43° to 78°F (6° to 26°C), although during the colder months subzero temperatures and snow are common. Winds of 70 miles (112 kilometers) per hour are not uncommon.

Plants, mostly saltbushes and other members of the amaranth family, cover 15 percent of the desert. In moister areas, cushion plants, shrubs, feather grass, and meadow grass cover almost half the ground.

Many reptiles are found here, but small mammals are the most numerous animals. Rabbits and hares are widespread. The rhea, an American ostrich, lives throughout the region. Other common animals are the mara and the guanaco, a llamalike animal prized for its long, fine hair; the guanaco was hunted almost to extinction because of the quality of its coat. The most unusual animal of the Patagonian Desert is the armadillo. Its skin consists of plates of tough armor, including one that covers its face. It can travel very fast on its short legs and, when threatened, digs a hole to escape.

Patagonia means "big feet"; the name refers to the Tehuelche Indians, who were first seen by the Portuguese explorer Ferdinand Magellan (c. 1480–1521) in 1520. The Tehuelche were nomadic hunters whose size and vigor caused Europeans to consider them giants. After the Europeans introduced the Tehuelche to horses, their ability to seek new territory and intermarry with other tribes increased. By 1960, the Tehuelche had virtually disappeared as a distinct people. Modern settlements in and around the Patagonian Desert were established during the twentieth century after oil was discovered on the coast.

The Atacama Desert

The Atacama Desert is a narrow coastal desert that runs for 600 miles (965 kilometers) parallel to the coast of Chile. The Pacific Ocean lies to the west, where sheer cliffs about 1,475 feet (450 meters) high rise from the sea. Beyond these cliffs lies a barren valley that runs along the foothills of the Andes Mountains.
The Atacama Desert
Location: Chile and parts of Peru
Area: 54,000 square miles (140,000 square kilometers)
Classification: Hot; extremely arid

Cold ocean currents are responsible for the dry conditions that make the Atacama the world's driest desert. Droughts may persist for many years, and there is no dependable rainy season. Although it is classified as a hot desert, mean temperatures seldom exceed 68°F (20°C).

Much of the Atacama consists of salt flats over gravelly soil. Sand dunes have formed in a few areas. In the south, a raised plateau nearly 3,300 feet (1,006 meters) high and broken with volcanic cones takes on an otherworldly appearance. In the southeast, a plateau bordering the Andes reaches a height of 13,100 feet (4,000 meters). The area is rich in boron, sodium nitrate, and other minerals, which contaminate any underground water, making it unusable by both plants and animals.

Plant life consists primarily of coarse grasses, mesquite, and a few cacti. Animal life is rare. The most commonly found animals are lizards, but even they are not numerous. The rather timid giant iguanas, some growing to more than 6 feet (2 meters) long and resembling prowling dragons, are scavengers. Huge Andean condors that feed on carrion can be seen soaring overhead. Wherever cacti are found, cactus wrens feed and breed. Ovenbirds build little adobe (mud) huts instead of conventional nests, and live in the less arid regions. The only mammals living in the Atacama are small rodents, such as the chinchilla, with guanaco and vicuña inhabiting higher elevations.

Indians may have lived in or on the fringes of the Atacama at one time; however, they may have been killed by European settlers during the eighteenth century. Those people who still live in the area are descendants of the European immigrants.

The Atacama was once important to the fertilizer industry, for which sodium nitrate was mined. Another fertilizer, bird droppings called guano (GWAN-oh), was collected on the offshore islands, where seabirds would breed. In the 1920s, synthetic fertilizers became popular, and the mining settlements in the Atacama were abandoned.

The Namib Desert

A coastal desert, the Namib is part of the vast tableland (flat highland) of southern Africa. Although it borders the Kalahari Desert to the east, the two are very different. Some scientists do not consider the Kalahari a true desert, while the Namib certainly is one. The desert is dry, but not very hot, with temperatures between 66° and 75°F (19° and 24°C). It has been nicknamed the world's oldest desert, dating back some 55 million years. The terrain is gravelly in the north, a rich area for gemstones, particularly diamonds. Sand prevails in the south, creating enormous dunes as much as 980 feet (300 meters) high in some areas.

The Namib Desert
Location: Western Namibia
Area: 52,000 square miles (135,000 square kilometers)
Classification: Hot; arid

Rain is rare, and years may pass between showers. When rain does come, it creates flash floods and then disappears as quickly as it came. The winds that blow in from the Atlantic do not bring storms. Instead, dense fog rolls in almost nightly, creating very humid air. Unusual life forms exist in the Namib, particularly in the southern dunes, where plant life depends upon the fog and has evolved to draw moisture from it. The strange welwitschia plant is an example. The Hartmann's zebra that lives in the Namib is able to sniff out small pools of water that may lie in gullies or dry stream beds.
The zebras dig into the ground, sometimes 2 or 3 feet (0.6 to 1 meter) deep, until the water is uncovered. The Namib is also inhabited by elephants, rhinos, giraffes, and lions. Bushmen live principally in the Kalahari, but they occasionally frequent the Namib.

The Mojave Desert

Deserts in North America stretch from just south of the Canadian border into parts of Mexico. In the north, cold semiarid deserts such as the Great Basin have formed, ringed by the Sierra Nevada and Rocky Mountains. Farther south lie the hot deserts, the Mojave and the nearby Sonoran Desert.

The Mojave Desert
Location: Southeastern California, Nevada, Arizona, Utah
Area: 25,000 square miles (65,000 square kilometers)
Classification: Hot; arid

The Mojave Desert, named for the Mojave Indians who once lived along the Colorado River, consists of salt flats, barren mountains, deep ravines, high plateaus, and wide, windy plains of sand. Elevations range from 2,000 to 5,000 feet (600 to 1,500 meters), and it measures 25,000 square miles (65,000 square kilometers). During prehistoric times, the Pacific Ocean covered the area until volcanic action built the mountain ranges, leaving saltpans and mudflats as the only remaining signs. The desert is rich in minerals. Borax, potash, salt, silver, and tungsten are mined here.

Annual rainfall is less than 5 inches (13 centimeters). Frost is common in the winter, and snow occasionally falls. One major river, the Mojave, crosses the desert, running underground for part of its length. Summer temperatures often rise above 100°F (38°C), and winter temperatures often drop below freezing.

Vegetation characteristic of North American deserts includes different species of cacti, such as organ pipe, prickly pear, and saguaro. Although nothing grows on the salt flats, plants such as the creosote bush can find a foothold in other areas. Joshua trees are protected in Joshua Tree National Park. On the highest mountains, piñon and juniper grow.

A wide variety of invertebrates, amphibians, reptiles, birds, and smaller mammals make a home in this region. The rare Mojave ground squirrel may spend nine months estivating (being dormant in hot weather) in its burrow. Larger mammals include the puma, jaguar, peccary, pronghorn antelope, and bighorn sheep.

Death Valley is a basin in the Mojave Desert of which approximately 550 square miles (1,430 square kilometers) of the total 5,312 square miles (13,812 square kilometers) lie below sea level. Near its center is the lowest point in North America, 282 feet (86 meters) below sea level. Prehistoric salt flats exist in the lowest areas, where nothing grows.

Death Valley
Area: 5,312 square miles (13,812 square kilometers)
Classification: Hot; arid

Death Valley is the hottest place on the continent. Average summer temperatures are 117°F (47°C), with a record temperature of 134°F (57°C) on July 10, 1913. Rainfall is seldom more than 2 inches (50 millimeters) per year.

Many species of plants are found here. Annuals, such as poppies, appear in late winter and early spring; perennials, such as cacti and mesquite, survive all year. Lizards, foxes, rats, mice, squirrels, coyotes, bighorn sheep, wild burros, and rabbits live here, as well as many birds, such as ravens.

The name Death Valley came from the numbers of gold-seekers who died there during the gold rush in the mid-1800s.
Although gold, silver, lead, and copper have been mined in the area, Death Valley became known for borax (a compound used as an antiseptic and cleanser), discovered in 1873 and brought out by mule teams. Mining ghost towns now draw tourists. Death Valley became a national monument in 1933 and a national park in 1994.

Walter Edward Scott (1872–1954), known as "Death Valley Scotty," was a cowboy in Buffalo Bill's Wild West Show. A mansion of approximately 32,000 square feet (2,973 square meters) was built in 1927 by Albert Johnson, a wealthy Chicago executive, in the Grapevine Canyon of Death Valley. Johnson met Scott at one of the Wild West shows, and the two became friends. Through their friendship, the ranch became known as Scotty's Castle, and it is one of the most popular tourist attractions in Death Valley National Park.

The Antarctic Desert

Polar deserts are like other deserts only in the dryness of the air. Here, all moisture is frozen. During the warmest season in the polar desert, temperatures never rise above 26.6°F (-3°C), with the mean temperature ranging from -30° to -4°F (-34° to -20°C). In the cold season, the mean temperature is -52.9°F (-47.2°C) and ranges between -94° and -40°F (-70° to -40°C).

The Antarctic Desert
Area: Approximately 360,000 square miles (936,000 square kilometers)

Scientists know very little about the landforms in Antarctica because they are buried beneath such deep ice—over 14,000 feet (4,270 meters) deep in places. It is believed that the area is a series of islands connected by great ice sheets. Western Antarctica is mountainous, and eastern Antarctica is a plateau (high, flat land). The desert proper measures approximately 360,000 square miles (936,000 square kilometers); counting its entire ice-covered surface, however, the Antarctic desert is the largest desert in the world, followed by the Sahara.

Tiny communities of microbes (microorganisms) have been found in the ice desert of Antarctica. During the summer, the sun warms little pockets of dirt and grit and turns them to slush. In the dim light, bacteria suspended in the slush can photosynthesize. Plant growth is limited to the bordering tundra regions, where lichens and mosses are the largest plant forms. Algae, yeasts, fungi, and bacteria are found in some areas. Grasses and a few other seed plants may grow on the fringes. The driest areas support no known plant life.

The only land animals are about 100 species of invertebrates, half of which appear to be parasites that live on birds and mammals. These include lice, mites, and ticks. Seabirds, such as petrels and terns, frequent the coast, where they feed on fish. The only true Antarctic bird is the penguin, and about eighteen species populate the area. Penguins live in large flocks and nest in the autumn, living on fish they catch in the coastal waters. They range in size from the fairy penguin, which is 16 inches (41 centimeters) tall, to the emperor penguin, which is about 4 feet (1.2 meters) tall. Leopard seals, among the largest of the Antarctic seals, are 10 feet (3 meters) long and weigh 770 pounds (300 kilograms). They inhabit the Antarctic waters and are the natural predators of penguins. Marine mammals, such as whales and seals, live in the coastal waters, but they do not enter desert areas. (Polar bears live only in the Arctic; they do not live in Antarctica.)

Year-round human settlements, though with rotating populations, were established in Antarctica around 1900, primarily for exploration purposes. Any industry is centered on the surrounding sea, primarily whaling and seal hunting.

Desertion and related lesser offenses, such as going AWOL (absent without leave), bedeviled both the Confederate and Union armies during the American Civil War. Estimates of the total number of desertions vary depending on historic sources and individual definitions of desertion, but historians generally put the number of Union desertions from military duty somewhere between 200,000 and 260,000 troops and the number of Confederate deserters somewhere in excess of 100,000 troops. The North, however, had a far larger military in the first place, and a greater pool of potential replacements with which to replenish it. As a result, desertions never threatened to cripple the overall Union war effort. By contrast, in the South the military margin for error was much smaller, and the pool of replacements much shallower. So while the number of deserters as a percentage of the total Confederate army was not that much greater than the percentage of deserters within the Union ranks, Southern desertion had a much more severe impact on military operations and morale. As the war turned decisively against the South in the last two years of the conflict, rates of desertion soared in many Confederate units, and historians cite desertion as a leading factor in the South's military collapse.

Answering Desperate Calls from Home

The great majority of Civil War soldiers—even those who endured horrendous battles and deadly skirmishes on multiple occasions—never abandoned their military obligations, even in their darkest hours of doubt and fear. Soldiers who rallied to the cause in the opening months of the conflict had a particularly low rate of desertion—and a correspondingly high intensity of loathing for those soldiers, whether volunteer or conscript, who slipped out of the line before the war was over. For these veteran soldiers, notions of honor and duty sustained their motivation throughout the years (McPherson 1997, p. 168).
Tens of thousands of other soldiers, however, left the ranks of the North and South before they had fulfilled their military obligations. The factors behind these premature departures were legion. Some slipped away out of cowardice—a failure to control and manage the fear that afflicted all Civil War soldiers. Many other soldiers deserted for more complicated reasons, however. For example, numerous soldiers reluctantly slipped away for home under cover of darkness or in the chaos of battle out of concern for loved ones. This was especially true of some Confederate soldiers, who knew in the war's latter stages that much of the South was being overrun by enemy troops and sought to protect their families back home.

This agony of divided loyalties was further deepened by beseeching letters from home. "It is useless to conceal the truth any longer," wrote one North Carolina soldier in early 1865. "Most of our people at home have become so demoralized that they write to their husbands, sons and brothers that desertion now is not dishonorable" (Robertson 1988, p. 136). Some letters from loved ones even warned soldiers that failure to immediately set off for home meant certain doom for family members. One Alabama soldier, for example, received a letter from home in 1864 informing him that "if you put off a-coming, 'twont be no use to come, for we'll all hands of us be out there in the grave yard with your ma and mine" (Martin 2003, p. 172).

One of the better-known desertion trials of the Civil War concerned Confederate Private Edward Cooper, whose defense was based in part on one of these "please come home" letters—from his wife. This letter was reportedly a major factor in convincing authorities to spare Cooper's life. "I have been always proud of you, and since your connection with the Confederate army, I have been prouder of you than ever before," his wife's letter stated. "I would not have you do anything wrong for the world, but before God, Edward, unless you come home, we must die. Last night, I was aroused by little Eddie's crying. I called and said, 'what is the matter, Eddie?' And he said, 'O Mamma, I am so hungry.' And Lucy, Edward, your darling Lucy; she never complains, but she is growing thinner and thinner every day. And before God, Edward, unless you come home, we must die" (Moore 1880, p. 237).

This peril to loved ones—whether real or imagined—has been cited by historians as a contributing factor in the higher desertion rates among married soldiers than unmarried ones, as well as the higher desertion rates among privates than officers. Many of the latter came from comparatively affluent backgrounds and thus had families that were better able to sustain and protect themselves during the war.

Erosion of Morale

Myriad other factors contributed to soldiers' decisions to desert, either for home or for destinations that promised anonymity or opportunities to construct new lives. The Confederate Army's intensifying difficulties in procuring basic food and supplies for its soldiers undoubtedly played a role in rising desertion rates. In addition, the South had growing difficulty meeting its payroll obligations as the war went on. Both armies, meanwhile, experienced greater problems with desertion when they tried to transfer soldiers far from home. In the case of Confederate troops, the desire to be close to home increased as the war progressed and Northern troops pushed further and further into Southern territory, possibly endangering family members.
In some cases, opposition to transfers to distant locales was so strong that large-scale desertions occurred. Another contributor to diminished morale—and thus higher rates of desertion—in both Union and Confederate units was the decision by each side to build up its military units in response to mounting casualties. As gaps in regiments and divisions were filled with conscripts and other replacements, the esprit de corps that had predominated in the all-volunteer force was supplanted by tensions between the new arrivals and veteran volunteers who viewed the former as useless and untrustworthy. In some instances, the hostility of fellow regimental members was enough to make already unenthusiastic conscripts want to desert.

Other desertions stemmed from a growing sense among the infantry rank and file that the Civil War was "a rich man's war and a poor man's fight." This conviction, which could be amply supported by even a cursory glance at the socioeconomic inequities contained within the conscription acts of the Federal and Confederate governments, was further underscored by the furlough programs that both militaries instituted, which made it easier for wealthy soldiers to periodically return home. The failure of military authorities to grant deserved furloughs was especially commonplace in the increasingly soldier-strapped South. Confederate authorities tried to assuage the anger of troops with promises of future compensation, appeals to duty, and assorted excuses, but to little avail: Thousands of frustrated troops simply went home without permission (Wiley 1992, p. 139).

Punishments for Desertion

From the opening months of the Civil War, both the North and South recognized that desertion posed a potentially serious threat to their respective causes. With this in mind, the administrations of Jefferson Davis and Abraham Lincoln, as well as leading military officers from both armies, kept up a steady drumbeat of entreaties and threats to keep their men from slipping away. Washington, DC, and Richmond, VA, even resorted to proclamations that promised pardons and general amnesties to deserters willing to return to military duty and thus remove the "stain" upon their honor.

"THE EXECUTION OF DESERTERS"

In September of 1863, Harper's Weekly published an illustration depicting the execution of five deserters, drawn by staff artist Alfred Rudolph Waud (1828–1891). Waud appended some brief remarks on the necessity of capital punishment for deserters:

The crime of desertion has been one of the greatest drawbacks to our army. If the men who have deserted their flag had but been present on more than one occasion defeat would have been victory, and victory the destruction of the enemy. It may be therefore fairly asserted that desertion is the greatest crime of the soldier, and no punishment too severe for the offense. But the dislike to kill in cold blood—a Northern characteristic—the undue exercise of executive clemency, and in fact the very magnitude and vast spread of the offense, has prevented the proper punishment being applied. That is past; now the very necessity of saving life will cause the severest penalties to be rigorously exacted. The picture represents the [five] men who were sentenced to death in the Fifth Corps for desertion at the moment of their execution. Some of these had enlisted, pocketed the bounty, and deserted again and again. The sentence of death being so seldom enforced they considered it a safe game.
They all suffered terribly mentally, and as they marched to their own funeral they staggered with mortal agony like a drunken man. Through the corps, ranged in hushed masses on the hill-side, the procession moved to a funeral march, the culprits walking each behind his own coffin. On reaching the grave they were, as usual, seated on their coffins; the priests made short prayers; their eyes were bandaged; and with a precision worthy of praise for its humanity, the orders were given and the volley fired which launched them into eternity. They died instantly, although one sat up nearly a minute after the firing; and there is no doubt that their death has had a very salutary influence on discipline.

Rebecca J. Frey

SOURCE: "The Execution of Deserters." Harper's Weekly, September 26, 1863, p. 622.

These official efforts met with some limited success, but punishment (and the threat thereof) quickly emerged as the primary officially sanctioned means of addressing the desertion issue. Punishments for desertion ranged greatly, depending on the perceived severity of the offense and the personal characteristics of the authorities imposing the sentence. For example, soldiers found guilty of being absent without leave usually were punished with some combination of pay forfeiture and increased manual labor. Those who were found guilty of the far more serious crime of desertion, however, might be sentenced to branding (often with a C to denote a coward or a D to denote a deserter), public flogging, extended imprisonment, or even death by execution.

According to historian Jeffrey Rogers Hummel in Emancipating Slaves, Enslaving Free Men (1996), the Union and the Confederacy executed a total of five hundred of their own troops during the course of the Civil War. This total exceeds the total number of executions in all other American wars combined. Two-thirds of the executions that took place during the Civil War were for the crime of desertion. Almost invariably, they were staged publicly, so as to send a harsh warning to anyone contemplating leaving the ranks.

These executions undoubtedly had their intended effect in some cases. But in others, the brutal spectacles seemed to engender a deeper demoralization among some witnesses. A Rebel soldier from Florida, for example, was profoundly shaken after he witnessed the execution of a young deserter who spent the last moments of his life desperately begging for mercy. The soldier called the execution "one of the most sickening scenes I ever witnessed[;] … [it] looked more like some tragedy of the dark ages, than the civilization of the nineteenth century" (Dean 2002, p. 414). A Union soldier from Indiana expressed similar sentiments after witnessing an execution of a deserter from his army. "I don't think I will ever witness another such a horror if I can get away from it," he wrote. "I have seen men shot in battle but never in cold blood before" (Dean 2002, p. 414).

Escalating Levels of Desertion

After the Civil War turned decisively against the South in mid-1863, rates of desertion from the Confederate forces rose dramatically. "In the wake of Gettysburg the highways of Virginia were crowded daily with homeward-bound troops, still in possession of full accouterments; and, according to one observer, these men 'when halted and asked for their furloughs or their authority to be absent from their commands, … just pat their guns defiantly and say, 'this is my furlough,' and even enrolling officers turn away as peaceably as possible" (Wiley 1992, pp. 143–144).
As the months passed, entire garrisons and companies quietly left the Confederate ranks. Many of these deserters separated and returned to their far-flung homes. Others banded together into outlaw groups that sustained themselves by robbing local communities or military stores. In some areas of the South, these guerrilla bands grew so powerful that they became a threat to the Confederate detachments that were sent to neutralize them. The need to send such detachments put a further drain on an army that was already groaning under the methodical, unrelenting pressure of a foe with superior resources.

By the time the final hours of 1864 were ticking away, desertion had reached epidemic levels in many Confederate units. Sentries walked away from their posts, infantrymen crept from their trenches under cover of darkness, and cavalrymen turned the heads of their mounts away from the front and toward home. Even Confederate General Robert E. Lee (1807–1870), the most respected and beloved military leader of the entire South, was powerless to stop some defections from the battered ranks of his Army of Northern Virginia. According to historian Bell Irvin Wiley's The Life of Johnny Reb (1992), the Confederate War Department reported that there were a total of 198,494 officers and men absent and only 160,198 present in the armies of the Confederacy on the eve of surrender (pp. 144–145). These figures confirm that although desertion constituted a problem for the North, its impact was far more crippling for the South.

Alotta, Robert I. Stop the Evil: A Civil War History of Desertion and Murder. San Rafael, CA: Presidio Press, 1978.
Dean, Eric T., Jr. "'Dangled over Hell': The Trauma of the Civil War." In The Civil War Soldier: A Historical Reader, ed. Michael Barton and Larry M. Logue. New York: New York University Press, 2002.
Donald, David Herbert, Jean Harvey Baker, and Michael F. Holt. The Civil War and Reconstruction. New York: W. W. Norton, 2001.
Hummel, Jeffrey Rogers. Emancipating Slaves, Enslaving Free Men: A History of the American Civil War. Chicago: Open Court, 1996.
Martin, Bessie. A Rich Man's War, a Poor Man's Fight: Desertion of Alabama Troops from the Confederate Army. Library of Alabama Classics Series. Tuscaloosa: University of Alabama Press, 2003. Originally published as Desertion of Alabama Troops from the Confederate Army, New York: Columbia University Press, 1932.
McPherson, James M. For Cause and Comrades: Why Men Fought in the Civil War. New York: Oxford University Press, 1997.
Mitchell, Reid. Civil War Soldiers: Their Expectations and Their Experiences. New York: Viking, 1988.
Moore, John W. History of North Carolina: From the Earliest Discoveries to the Present Time, Vol. 2. Raleigh, NC: Alfred Williams, 1880.
Power, J. Tracy. Lee's Miserables: Life in the Army of Northern Virginia from the Wilderness to Appomattox. Chapel Hill, NC: University of North Carolina Press, 1998.
Robertson, James I., Jr. Soldiers Blue and Gray. Columbia, SC: University of South Carolina Press, 1988.
Robertson, James I., Jr., and the editors of Time-Life Books. Tenting Tonight: The Soldier's Life. Alexandria, VA: Time-Life Books, 1984.
Ward, Geoffrey C., with Ric Burns and Ken Burns. The Civil War. New York: Vintage, 1994.
Wiley, Bell Irvin. The Life of Johnny Reb: The Common Soldier of the Confederacy. Indianapolis, IN, and New York: Bobbs-Merrill Company, 1943. Reprint, Baton Rouge: Louisiana State University Press, 1992.
Williams, David. A People's History of the Civil War: Struggles for the Meaning of Freedom. New York: New Press, 2005.

A desert is an arid land area that generally receives less than 10 inches (250 millimeters) of rainfall per year. What little water it does receive is quickly lost through evaporation. Average annual precipitation in the world's deserts ranges from about 0.4 to 1 inch (10 to 25 millimeters) in the driest areas to 10 inches (250 millimeters) in semiarid regions. Other features that mark desert systems include high winds, low humidity, and temperatures that can fluctuate dramatically. It is not uncommon for the temperature to soar above 90°F (32°C) and then drop below 32°F (0°C) in a single day in the desert.

Most of the world's desert ecosystems (communities of plants and animals) are located in two belts near the tropics at 30 degrees north and 30 degrees south of the equator. These areas receive little rainfall because of the downward flow of dry air currents that originate at the equator. As this equatorial air moves north and south, it cools and loses whatever moisture it contains. Once this cool, dry air moves back toward Earth's surface, it is rewarmed, making it even drier. Over the desert areas, the dry air currents draw moisture away from the land on their journey back toward the equator.

Deserts around the world

The vast Sahara Desert in northern Africa encompasses an area 3,000 miles (4,800 kilometers) wide and 1,000 miles (1,600 kilometers) deep. Sand composes just 20 percent of the Sahara, while plains of rock, pebble, and salt flats, punctuated by mountains, make up the rest. The Sahara can experience temperatures that rise and fall 100°F (56°C) in a single day. Decades can go by without rain.

By contrast, the Gobi Desert, covering 500,000 square miles (1,295,000 square kilometers) in north-central Asia, sits at a higher altitude than the Sahara. As a result, temperatures in the Gobi remain below freezing most of the year.

Words to Know
Arid land: Land that receives less than 10 inches (250 millimeters) of rainfall annually and has a high rate of evaporation.
Desert pavement: Surface of flat desert lands covered with closely spaced, smooth rock fragments that resemble cobblestones.
Desert varnish: Dark film of iron oxide and manganese oxide on the surface of exposed desert rocks.
Rain-shadow deserts: Areas that lie in the shadow of mountain ranges and receive little precipitation.

Much of the interior of Australia is also desert, known as the Outback. Antarctica, the land mass at the southern pole of the globe, is a polar desert. One of the driest places on Earth, it receives only a dusting of snow each year. Warmest summer temperatures in Antarctica reach only 25°F (−4°C).

The deserts of the United States are located at higher latitudes and higher altitudes than is typical of many other arid regions of the world. Death Valley in California is both extremely arid and extremely hot in the summer. South of it are the relatively cooler and wetter Mojave and Sonoran Deserts.

Rain-shadow deserts are those that lie in the shadow of mountain ranges. As air ascends on one side of a range, it releases any moisture it carries. Once on the other side, the air contains little moisture, forming deserts on the lee slope of the range. Among rain-shadow deserts is Death Valley.

Dunes, wind-blown piles of sand, are the most common image of a desert landscape. Wind constantly sculpts sand piles into a wide variety of shapes. Dunes move as wind bounces sand up the dune's gently sloping windward side (facing the wind) to the peak of the slope.
At the peak, the wind's speed drops and sends sand cascading down the steeper lee side (downwind). As this process continues, the dune migrates in the direction the wind blows. Given enough sand and time, dunes override other dunes to thicknesses of thousands of feet, as in the Sahara Desert.

Desertification refers to the gradual transformation of productive land into land with desertlike conditions. Desertification may occur in rain forests and tropical mountainous areas. Even a desert itself can become desertified, losing its sparse collection of plants and animals and becoming a barren wasteland. Desertification occurs in response to continued land abuse, and may be brought about by natural or man-made actions. Among the natural forces are constant wind and water (which erode topsoil) and long-term changes in rainfall patterns (such as a drought). The list of human actions includes overgrazing of farm animals, strip mining, the depletion of groundwater supplies, the removal of forests, and the physical compacting of the soil (such as by cattle and off-road vehicles).

Almost 33 percent of Earth's land surface is desert, a proportion that is increasing by as much as 40 square miles (104 square kilometers) each day. The arid lands of North America are among those most affected by desertification: almost 90 percent are moderately to severely desertified. Fortunately, scientists believe that severe desertification is rare. Many feel that most desertified areas can be restored to productivity through careful land management.

Sand carried by the wind can act as an abrasive on the land over which it flows. Rocks on the floor of a desert can become polished in this way. Closely spaced, smooth rock fragments that resemble cobblestones on the surface of flat lands are referred to as desert pavement. The dark film of iron oxide and manganese oxide on the surface of the exposed rocks is called desert varnish.

Life in the desert

The plants and animals that are able to survive the extremes of desert conditions have all evolved ways of compensating for the lack of water. Plants that are able to thrive in the desert include lichens (algae and fungi growing together). Lichens have no roots and can absorb water and nutrients from rain, dew, and the dust on which they grow. Succulent plants, such as cacti, quickly absorb rainwater when it comes and store it in their stems and leaves, if they have them. Other plants store nutrients in their roots and stems. Many desert shrubs have evolved into upside-down cone shapes. They collect large amounts of rain on their surfaces, then funnel it down to their bases.

Deserts are not lifeless, but are inhabited by insects, arachnids (spiders and scorpions), reptiles, birds, and mammals. Unlike plants, these animals can seek shelter from the scorching sun, cold, and winds by crawling into underground burrows. Many have adapted to the harsh desert environment by developing specific body processes. Some small mammals, such as rodents, excrete only concentrated urine and dry feces, and perspire little as a way of conserving body fluids. The camel's body temperature can soar to 105°F (41°C) before this mammal sweats. It can lose up to one-third of its body weight and replace it at a single drinking.

[See also Biome]

In peacetime, desertion has been a continuing phenomenon in American military history, at least through the early twentieth century, although its extent has varied widely depending upon the circumstances facing the service people.
Unlike European nations, the U.S. government had little control over its citizens, and deserters could escape relatively easily, particularly into the rural and frontier regions of the country. Low pay and poor conditions have contributed significantly to peacetime desertions. The armed forces require enlisted men and women to serve tours of duty of specific duration. Unlike commissioned officers, enlisted personnel are not legally permitted to resign unilaterally. Thus, desertion constitutes an enlisted person's repudiation of his or her legal obligation. A correlation has existed in peacetime between desertion rates and the business cycle. When the country experienced economic depression and high unemployment, fewer people abandoned the service. Yet in an expanding economy, with workers in demand and wage scales increasing, many more service men and women have forsaken the high job security but lesser monetary rewards of the military. The highest peacetime desertion rates in American history were reached during periods of economic growth in the 1820s, early 1850s, early 1870s, the 1880s, early 1900s, and the 1920s, when the annual flow of deserters averaged between 7 and 15 percent of the U.S. Army. A peak of 32.6 percent was recorded in 1871, when 8,800 of the 27,010 enlisted men deserted in protest against a pay cut. (By contrast, the desertion rate in the British army was only about 2 percent.) Lured by higher civilian wages and prodded by miserable living conditions—low pay, poor food, inadequate amenities, and boredom—on many frontier western outposts, a total of 88,475 soldiers (one‐third of the men recruited by the army) deserted between 1867 and 1891. The peacetime navy had its own desertion problems. In the nineteenth century, many of the enlisted men had grim personal backgrounds or criminal records or were foreigners with little loyalty to the United States. A rigid class system and iron discipline contributed to high rates of alcoholism and desertion. In 1880, there were 1,000 desertions from an enlisted force of 8,500 seamen. During wartime, desertion rates in all the military services have varied widely but have generally been lower than in peacetime—perhaps reflecting the increased numbers of service people, national spirit, and more severe penalties prescribed for combat desertion. The end of hostilities, however, generally was accompanied by a dramatic flight from the military. After almost every war, the desertion rate doubled temporarily as many regular enlisted personnel joined other Americans in returning to peacetime pursuits. The variation in wartime desertion rates seems to result from differences in public sentiment and prospects for military success. Although many factors are involved, generally the more swift and victorious the campaign and the more popular the conflict, the lower the desertion rate. Defeat and disagreement or disillusionment about a war have been accompanied by a higher incidence of desertion. In the Revolutionary War, desertion depleted both the state militias and the Continental army after such reverses as the British seizures of New York City and Philadelphia; at spring planting or fall harvesting times, when farmer‐soldiers returned to their fields; and as veterans deserted in order to reenlist, seeking the increased bounties of cash or land that the states offered new enlistees. Widespread desertion, even in the midst of battle, plagued the military during the setbacks of the War of 1812. 
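The peacetime rates quoted above follow directly from the raw counts, and a reader can recompute them. A minimal sketch in Python (the recruitment total in the second calculation is an inference from the article's own "one-third" figure, not an independently documented number):

# Recomputing the peacetime desertion figures quoted above.

def rate_percent(deserters: int, strength: int) -> float:
    """Desertions as a percentage of enlisted strength."""
    return 100 * deserters / strength

# The 1871 peak: 8,800 deserters out of 27,010 enlisted men.
print(f"{rate_percent(8_800, 27_010):.1f}%")  # -> 32.6%

# 1867-1891: 88,475 deserters, described as one-third of the men
# recruited, implying roughly 3 * 88,475 = 265,425 recruits overall.
print(3 * 88_475)  # -> 265425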
In the Mexican War, 6,825 men, or nearly 7 percent of the army, deserted; and one unit of the Mexican Army, the San Patricio Artillery Battalion, was composed of American deserters. The Civil War produced the highest American wartime desertion rates because of its bloody battles, new enlistment bounties, and the relative ease with which deserters could escape capture, particularly in the mountain regions. The Union armies recorded 278,644 cases of desertion, representing 11 percent of the troops. As the Confederate military situation deteriorated, desertion reached epidemic proportions. The Appalachian Mountains, Florida swamps, and Texas chaparral became the domain of armed bands of Southern deserters. In the final year of the war, whole companies and regiments, sometimes with most of their officers, left together to return to their homes. In all, Confederate deserters numbered 104,428, or 10 percent of the South's armies.

The brief and successful Spanish-American War resulted in 5,285 desertions, or less than 2 percent of the armed forces in 1898. However, the rate climbed to 4 percent during the long and arduous Philippine War between 1900 and 1902. In World War I, because conscription regulations classified any draftee failing to report for induction at the prescribed time as a deserter, the records of 1917–18 showed 363,022 deserters, who would have been more appropriately designated draft evaders. Traditionally defined deserters amounted to 21,282, or less than 1 percent of the army in World War I.

In World War II, desertion rates reached 6.3 percent of the armed forces in 1944, and during the American reverses at the Battle of the Bulge, the army executed one American soldier, Private Eddie Slovik, for desertion in the face of the enemy as an example to other troops. Desertion rates dropped to 4.5 percent in 1945. During the Korean War, the use of short-term service and the rotation system helped keep desertion rates down to 1.4 percent of the armed forces in fiscal year (FY) 1951 and to 2.2 percent, or 31,041, in FY 1953.

The divisive Vietnam War generated the highest percentage of wartime desertion since the Civil War. From 13,177 cases—or 1.6 percent of the armed forces—in FY 1965, the annual desertion statistics mounted to 2.9 percent in FY 1968, 4.2 percent in FY 1969, 5.2 percent in FY 1970, and 7.4 percent (79,027 incidents of desertion) in FY 1971. Like the draft resisters from this same war, many deserters sought sanctuary in Canada, Mexico, or Sweden. In 1974, the Defense Department reported that between 1 July 1966 and 31 December 1973, there had been 503,926 incidents of desertion in all services during the Vietnam War. The end of the draft and the Vietnam War, together with the enhancement of pay and living conditions in the All-Volunteer Force, dramatically reduced desertions, although there was another modest upsurge during the Persian Gulf War (1991).

[See also Military Justice; Morale, Troop.]
John Whiteclay Chambers II

A desert is generally a very hot, barren region on Earth that receives little rainfall. Most sources describe a region as being a desert if it receives less than 10 inches (25.4 centimeters) of rain a year. It has also been described as a place where more water evaporates than falls as precipitation. Despite being an extremely harsh environment, deserts support a diverse community of both plant and animal life. As one of the six terrestrial (land) biomes (particular types of large geographic regions), deserts cover between one-fifth and one-quarter of Earth's surface.

A desert is a stark, dramatic place whose topography (surface conditions) is almost immediately recognizable. Its miles of sand dunes or endless stretches of flat, featureless sand are not easily forgotten; nor are its strangely adapted plants (like cacti) apt to be confused with vegetation from some other region. It is easy to understand what makes a desert what it is. Any part of Earth that constantly experiences a water "debt" rather than a water "surplus" is so dry that the need to capture, conserve, and store water is not only overwhelming, but affects and determines everything living in that place. Despite the impression that a desert is a lifeless place, it is home to certain plants and animals who have adapted to its harsh conditions and who do very well there.

THE LOCATION OF DESERTS

Most of the world's deserts are located on two desert belts that wrap around Earth near the equator (the circular band around Earth's middle which divides the Northern and Southern Hemispheres). The belt in the Northern Hemisphere is along the tropic of Cancer and includes the Gobi Desert in China, the Sahara Desert in North Africa, the deserts of southwestern North America, and the Arabian and Iranian deserts in the Middle East. The belt in the Southern Hemisphere is along the tropic of Capricorn and includes the Patagonian Desert in Argentina, the Kalahari Desert of southern Africa, and the Great Victoria and Great Sandy Deserts of Australia. Altogether, there are about twelve major deserts, the largest of which is the Sahara Desert, which measures 3.5 million square miles (9 million square kilometers). This is an area almost as big as the entire United States.

THE CREATION OF DESERTS

In a way, deserts are made and not born, meaning that Earth's weather patterns created a desert in the first place and continue to work to keep it that way. These regular patterns, or moving currents of hot and cold air, interact with each other so that descending currents of dry air pick up moisture and dry out the land. Mountain ranges also influence these currents, as dry air moving off their slopes evaporates even more moisture. The steady lack of moisture in the air above a desert region leads to extreme changes in temperature once the Sun goes down. In normally humid areas, the moisture in the air acts as an insulating barrier, and clouds keep some of the daytime warmth from the Sun trapped, thereby moderating temperatures. However, in a desert, which has no moisture in the air above it, there are no clouds to act as a blanket, meaning that although daytime temperatures are extremely hot, they can be near freezing at night.

As with any biome, deserts vary considerably throughout the world, and they can be as diverse as the lifeless-looking and appropriately named Death Valley in California and Nevada, and the almost lush-looking Vizcaíno Desert in Mexico when it bursts into flower following its annual spring rain.
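Paired figures like "3.5 million square miles (9 million square kilometers)" rest on two easily confused conversion rules: an area converts by the square of the mile-to-kilometer factor, and a temperature reading converts with a 32-degree offset that a temperature difference (such as a day-night swing) does not use. A minimal Python sketch of the arithmetic, for readers who want to check a figure:

# Conversions behind the paired imperial/metric figures (illustrative).

MI_TO_KM = 1.609344  # linear miles to kilometers

def sq_miles_to_sq_km(area_mi2):
    # Area scales by the SQUARE of the linear factor (~2.59 km^2 per mi^2).
    return area_mi2 * MI_TO_KM ** 2

def f_to_c(temp_f):
    # A temperature reading: subtract 32, then scale by 5/9.
    return (temp_f - 32) * 5 / 9

def f_swing_to_c(diff_f):
    # A temperature difference scales by 5/9 with no offset.
    return diff_f * 5 / 9

print(f"{sq_miles_to_sq_km(3_500_000):,.0f}")  # Sahara: ~9,065,000 km^2
print(f"{f_to_c(134):.0f}")                    # Death Valley record: ~57 C
print(f"{f_swing_to_c(100):.0f}")              # a 100 F day-night swing: ~56 C degrees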
Even in as harsh an environment as Death Valley or the Sahara Desert, life can be found. Sometimes life is a dormant seed buried for years and waiting for a bit of moisture so that the seed can spring into existence as an aboveground plant. Other times desert life is a toad hibernating below ground and rushing to find a mate and lay its eggs as soon as it rains. Life in a desert is a constant challenge, and plant and animal inhabitants do not have the luxury of being wasteful that other organisms in more temperate climates might have.

PLANT LIFE IN THE DESERT

Desert plants have evolved many methods to obtain and efficiently use available water. Certain ones compress their entire life cycle into one growing season. The seeds or bulbs of some flowering desert plants can lie dormant in the soil for years until a heavy rain enables them to germinate (sprout), grow, and bloom. Woody plants may develop deep root systems—like the mesquite, whose taproot can reach 50 feet (15 meters) down although the aboveground tree is only about 10 feet (3 meters) tall. They may also develop a network of shallow, spreading, hairlike roots that can take up water from dew and the occasional rain shower before it seeps below ground. For many plants, the answer to years of absolute drought is to drop leaves, allow the aboveground part of the plant to die, and keep the underground root alive in a state of dormancy (functioning slowly or not at all).

Conserving and storing water becomes important for a plant once it has obtained moisture. Since all plants lose water by evaporation from their leaves, many desert species minimize this by having very small or rolled leaves, or by turning their leaves into spines or barbs. These thornlike leaves protect a plant's water supply from animals. The problem of storing water is solved by the cactus, which is a succulent and can store water in its leaves, stems, and roots. An amazing example of adaptation is the saguaro cactus of the American southwest. The trunk of this large cactus is folded or pleated like an accordion and can unfold and expand as the plant absorbs water after a heavy rain. A saguaro that is 20 feet (6.1 meters) tall can hold more than one ton (0.9 metric tons) of water.

ANIMAL LIFE IN THE DESERT

Desert animals, like desert plants, have also evolved ways to cope with the desert's arid environment using avoidance and/or adaptation. Besides the highly adapted camel, most desert animals are small and do not have an extensive range. While their size limits their ability to travel far, it makes it easier for them to remain in cool underground burrows during the day and emerge only after dark to feed. Animals that do this are called "nocturnal" or "crepuscular." Other small mammals and reptiles survive the most extreme times by a process called estivation, which is similar to the sleeplike state of hibernation.

Other animals have adapted specialized body parts to help them cool off. A well-known example is the huge ears of the small fennec fox of the Sahara Desert and the kit fox of North America; in both, a dense network of tiny blood vessels runs just below the skin of the ears, which act as radiators, releasing excess heat. Larger desert animals developed broad hooves that allow them to move more easily over soft sand. Some animals can actually slow down their production of body heat by varying their heart rate, while others reabsorb the water in their urine several times before finally excreting it in a highly concentrated form.
Just as the desert's difficult conditions can support only small numbers of plants and animals, deserts cannot support humans in large numbers. People must, like animals and plants, make adjustments in order to survive in the desert's extremes, and in the past they have lived in mud houses that kept cool in the daytime and provided warmth at night. Long robes were often used in Africa and the Middle East for protection against the scorching sunlight and blowing sand. With today's technology, however, people can live comfortably in a desert if they have air conditioning and an adequate water supply. A good, steady source of water also allows humans to raise crops in a desert, since desert soils are usually naturally fertile: there is seldom enough water to leach away important nutrients. Crops can be grown on desert lands with irrigation, but farmers must be prepared to deal with a buildup of salts in the soil as a result of evaporation (which takes away most of the water they put down). Humans can also be responsible for creating deserts or allowing an existing desert to spread. This is usually the result of burning vegetation or overgrazing by livestock. When a desert encroaches, or spreads, onto arable land (land able to be farmed), that process is called "desertification." [See also Biome] DESERTION from military service has been a continual phenomenon in American history, although its extent has varied widely depending upon the circumstances that have confronted soldiers. The armed forces require enlisted men and women to serve tours of duty of specific duration and, unlike commissioned officers, enlisted personnel may not legally resign before the end of that period. Thus desertion (being absent without authorization for over a month) constitutes the enlisted person's repudiation of his or her legal obligation. In peacetime there has been a direct correlation between desertion rates and the business cycle. When the country has experienced a depression and a labor surplus, fewer soldiers have abandoned the army. By contrast, in an expanding economy, with workers in demand and wage scales increasing, many more servicemen and women have forsaken the high job security but low monetary rewards of the army. The highest peacetime desertion rates in American history occurred during the periods of economic growth in the 1820s, early 1850s, early 1870s, 1880s, early 1900s, and 1920s, when the flow of deserters averaged between 7 and 15 percent each year. A peak of 32.6 percent was reached in 1871, when 8,800 of the 27,010 enlisted men deserted in protest against a pay cut. Lured by higher civilian wages and prodded by the miserable living conditions of most frontier outposts, a total of 88,475 men, or one-third of those recruited by the army, deserted between 1867 and 1891. During wartime, desertion rates have varied widely but have generally been lower than in peacetime service, a tendency that perhaps reflects the increased numbers of troops, national spirit, and the more severe penalties prescribed for combat desertion. A dramatic flight from military duty has generally accompanied the termination of hostilities. After almost every war the desertion rate has doubled temporarily as many servicemen and women have joined other Americans in returning to peacetime pursuits. The variation in wartime desertion rates seems to result from differences in public sentiment and military prospects.
Although many factors are involved, generally the swifter and more victorious the campaign and the more popular the conflict, the lower the desertion rate. Defeat and disagreement or disillusionment about a war have been accompanied by a higher incidence of desertion. In the American Revolution, desertion depleted both the state militias and the Continental army after such reverses as the British seizure of New York City; at spring planting or fall harvesting time, when farmer-soldiers returned to their fields; and as veterans deserted in order to reenlist, seeking the increased bounties of cash or land that the states offered for new enlistees. Widespread desertion, even in the midst of battle, plagued the military during the War of 1812. In the Mexican-American War, 6,825 men, or nearly 7 percent of the army, deserted. Moreover, American deserters composed one unit of the Mexican army, the San Patricio Artillery Battalion. The Civil War produced the highest American wartime desertion rates because of its bloody battles, new enlistment bounties, and the relative ease with which deserters could escape capture in the interior regions. The Union armies recorded 278,644 cases of desertion, representing 11 percent of the troops. As the Confederate military situation deteriorated, desertion reached epidemic proportions. Whole companies and regiments, sometimes with most of their officers, fled together. In all, Confederate deserters numbered 104,428, or 10 percent of the armies of the South. The Spanish-American War resulted in 5,285 desertions, or less than 2 percent of the armed forces in 1898. The rate climbed to 4 percent during the Philippine Insurrection between 1900 and 1902. In World War I, because Selective Service regulations classified anyone failing to report for induction at the prescribed time as a deserter, the records of 1917–1918 showed 363,022 deserters who would have been more appropriately designated draft evaders. Traditionally defined deserters amounted to 21,282, or less than 1 percent of the army. In World War II desertion rates reached 6.3 percent of the armed forces in 1944 but dropped to 4.5 percent by 1945. The use of short-term service and the rotation system during the Korean War kept desertion rates down to 1.4 percent of the armed forces in fiscal year 1951 and to 2.2 percent, or 31,041 soldiers, in fiscal year 1953. The unpopular war in Vietnam generated the highest percentage of wartime desertion since the Civil War. From 13,177 cases, or 1.6 percent of the armed forces, in fiscal year 1965, the annual desertion statistics mounted to 2.9 percent in fiscal year 1968, 4.2 percent in fiscal year 1969, 5.2 percent in fiscal year 1970, and 7.4 percent in fiscal year 1971. Like the draft resisters from this same war, many deserters sought sanctuary in Canada, Mexico, or Sweden. In 1974 the Defense Department reported that there had been 503,926 incidents of desertion between 1 July 1966 and 31 December 1973.
John Whiteclay Chambers II / a. e. A desert is an arid land area where more water is lost through evaporation than is gained from precipitation. Deserts include the familiar hot, dry desert of rock and sand that is almost barren of plants, the semiarid deserts of scattered trees, scrub, and grasses, coastal deserts, and the deserts on the polar ice caps of the Antarctic and Greenland. Most deserts are the result of large-scale climatic patterns. As Earth turns on its axis, hot air rises over the equator and then flows northward and southward. The air currents cool in the upper regions and descend as high-pressure areas in two subtropical zones. North and south of these zones are two more areas of ascending air and low pressures. Still farther north and south are the two polar regions of descending air. As air rises, it cools and loses its moisture. As it descends, it warms and picks up moisture, drying out the land. This downward movement of warm air has produced two belts of deserts. The belt in the northern hemisphere is along the Tropic of Cancer and includes the Gobi Desert in China, the Sahara Desert in North Africa, the deserts of southwestern North America, and the Arabian and Iranian deserts in the Middle East. The belt in the southern hemisphere is along the Tropic of Capricorn and includes the Patagonia Desert in Argentina, the Kalahari Desert of southern Africa, and the Great Victoria and Great Sandy Deserts of Australia. Coastal deserts are formed when cold water moves from the Arctic and Antarctic regions toward the equator and comes into contact with the edges of continents. The cold water is augmented by upwellings of cold water from ocean depths. The air cools as it moves across cold water, carrying fog and mist but little rain. This pattern of air flow is responsible for coastal deserts in southern California, Baja California, southwest Africa, and Chile. Mountain ranges also influence the formation of deserts by creating rain shadows. As moisture-laden air flows upward over windward slopes, it cools and loses its moisture. Dry air descending over the leeward slopes evaporates moisture from the soil, creating a desert. The Great Basin Desert in the western United States was formed from a rain shadow produced by the Sierra Nevada. Desert areas also form in the interior of continents when prevailing winds are far from large bodies of water and have lost much of their moisture. Desert plants have evolved methods to conserve water. Some flowering desert plants are ephemeral and live for only a few days. Their seeds or bulbs can lie dormant in the soil for years, until a heavy rain enables them to germinate, grow, and bloom. Woody desert plants can have either long root systems to reach groundwater sources or spreading shallow roots to take up moisture from dew or occasional rains. Most desert plants have small or rolled leaves to reduce the surface area from which transpiration of water can take place, while others drop their leaves during dry periods. Some leaves have waxy coatings that prevent water loss. Many desert plants are succulents, which store water in leaves, stems, and roots. Thorns and spines of the cactus are used to protect a plant's water supply from animals. Desert animals have also developed protective mechanisms to allow them to survive in deserts.
Most desert animals and insects are small, so they can remain in cool underground burrows or hide under vegetation during the day and feed at night when it is cooler. Desert amphibians are capable of dormancy during dry periods, but when it rains, they mature rapidly, mate, and lay eggs. Many birds and rodents reproduce only during or following periods of winter rain that stimulate vegetative growth. Some desert rodents (e.g., the North American kangaroo rat and the African gerbil) have large, sparsely furred ears that radiate excess heat and help them cool down. They also require very little water. The desert camel can survive nine days or more without drinking water. Many larger desert animals have broad hooves or feet to allow them to move over soft sand. Desert reptiles such as the horned toad can control their metabolic heat production by varying their rate of heartbeat and the rate of body metabolism. Some snakes have developed a sideways shuffle that allows them to move across soft sand. Deserts are difficult places for humans to live, but some people do live in deserts, for example, the Aborigines in Australia and the Tuaregs in the Sahara. Desert soils are usually naturally fertile because little water is available to leach nutrients. Crops can be grown on desert lands with irrigation, but evaporation of the irrigation water can result in the accumulation of salts on the soil surface, making the soil unsuitable for further crop production. Burning, deforestation, and overgrazing of lands on the semiarid edges of deserts are enabling deserts to encroach on the nearby arable lands in a process called desertification. Desertification in combination with shifts in global atmospheric circulation has resulted in the southern boundary of the Sahara Desert advancing 600 mi (1,000 km) southward. A desertification study conducted for the United Nations in 1984 determined that 35% of Earth's land surface was threatened by desertification processes. Deserts are environments shaped by aridity, or dryness. Aridity reflects the balance between precipitation and potential evapotranspiration (PET), or the air's ability to absorb water (determined by temperature and water content). In arid zones, precipitation may be 5 to 20 percent of PET; semi-arid regions receive more precipitation, and hyper-arid regions less, in relation to PET.
Features of a Desert
Roughly one-third of Earth's land surface is arid or semi-arid. The major desert regions are: Australia, western North America, western South America (Atacama), southern Africa (Namib), and Asia-northern Africa. There are so-called polar deserts; however, most arid lands are in the warm subtropics. There are two primary causes of aridity. One is the subtropical high-pressure belts, where high-altitude air masses move away from the tropics. Tropical heat causes air to rise and cool, dropping its moisture as it moves away from the equator. The air then becomes cooler and denser, sinks, warms as it nears the surface, and regains the ability to absorb water, thus creating zones of aridity. A second cause is the rain-shadow effect created by mountain ranges. Continental interiors are dry because most air masses have moved long distances or over mountains and in doing so have lost water. Desert conditions may be quite harsh. Intense solar radiation and lack of shade cause surface temperatures as high as 50 degrees Celsius (122 degrees Fahrenheit).
Limited precipitation and rapid evaporation greatly limit plant growth, and water is rarely available for animal consumption. Precipitation is predictable in some systems (such as winter rains in California's Mojave) but nonseasonal in others. Many sites experience long rain-free periods; in portions of the Atacama, rainfall has never been recorded. Variability is another characteristic of deserts. Precipitation is episodic; rainstorms may be quite intense, with much of the annual total falling in just minutes. Similarly, resources may be spatially patchy. Arroyos or erosion channels and low spots may collect runoff from surrounding areas; rockiness and soil surface crusts contribute to runoff. Seeds and litter accumulate and support plant growth in low, relatively moist locations. Permanent water sources (desert springs or oases) are rare but important. Evaporation draws water from the surface, leaving dissolved minerals as a salty crust. Sparse plant growth adds little organic material to the soil; thus the soil has limited capacity to retain water and minerals. Sparse vegetation also increases the erosional influences of high wind, runoff, and extreme temperatures. Sand dunes are accumulations of eroded materials; their instability makes them harsh environments for most organisms. Desert organisms adapt to arid environments either by tolerating extreme conditions or by escaping them. Toleration is survival under stress. Many adaptations are related to water acquisition. Plants may have shallow, extensive root systems to absorb rainfall from the largest area possible. Animals obtain moisture from live food. Tenebrionid beetles of the Namib extract water from coastal fogs: The beetles do "headstands" on dune ridges, and moisture condensing on the beetle's textured carapace trickles down to the mouth. Kangaroo rats obtain virtually all of their water by oxidation of fats in dry seeds (metabolic water). Other adaptations involve water retention: storage of water in succulent tissues; specialized photosynthetic processes minimizing water loss; leaflessness, small leaves, or leaf loss during drought, also reducing plant water use; and animal use of burrows or shade. Finally, some organisms simply tolerate tissue dehydration. Escape or avoidance results in activity only during favorable periods. Annual plants, completing their life cycle in a single year, are abundant in many deserts. They may spend years as dormant seeds; only after sufficient rainfall do they germinate and grow, reproducing quickly before the soil redries. Some invertebrates and amphibians remain dormant up to several years, the invertebrates as eggs or in the "suspended animation" of cryptobiosis, the amphibians as aestivating (dormant) adults beneath the surface. When temporary ponds form after rain, these organisms hatch or awaken; feeding, reproduction, and growth of juveniles are all a race against time so that at least some mature before the ponds dry. Some organisms are nomadic or migratory, finding temporary patches created by local rainfall: These include large mammals such as antelope, birds, and even insects (for example, desert locusts or grasshoppers). Arid and semiarid regions have been important for livestock grazing throughout history. As energy sources have made irrigation feasible, some regions have been converted to cultivation.
Urban populations are increasing rapidly where groundwater or river water is available and affordable; the southwestern United States, for example, contains several rapidly growing metropolitan areas in desert regions, such as Phoenix, Arizona. Depletion of underlying groundwater is a major environmental consequence in such areas. [See also Biome; Grassland; Water Cycle] Laura F. Huenneke
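Because the aridity classes described above are defined by the ratio of precipitation to PET, the scheme is easy to express as a small calculation. The sketch below, in Python, is purely illustrative: the function name is invented, and the boundary placement between classes, beyond the 5 and 20 percent figures quoted above, is an assumption rather than a published standard.

```python
def aridity_class(precip_mm: float, pet_mm: float) -> str:
    """Classify a site by its ratio of annual precipitation to
    potential evapotranspiration (PET).

    Thresholds follow the rough figures given above: arid zones
    receive about 5-20% of PET as precipitation, hyper-arid zones
    less, and semi-arid zones more. The 0.50 upper bound for the
    semi-arid class is an assumption for illustration only.
    """
    ratio = precip_mm / pet_mm
    if ratio < 0.05:
        return "hyper-arid"
    if ratio < 0.20:
        return "arid"
    if ratio < 0.50:
        return "semi-arid"
    return "non-arid"

# Example: 250 mm of rain against 2,000 mm of PET gives a ratio
# of 0.125, placing the site squarely in the arid class.
print(aridity_class(250, 2000))  # -> 'arid'
```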
Standard deviation is a widely used measurement of variability or diversity in statistics and probability theory. It shows how much variation, or "dispersion," there is from the "average" (the mean, or expected value). A low standard deviation indicates that the data points tend to be very close to the mean, whereas a high standard deviation indicates that the data are spread out over a large range of values. Technically, the standard deviation of a statistical population, data set, or probability distribution is the square root of its variance. It is algebraically simpler, though in practice less robust, than the average absolute deviation. A useful property of standard deviation is that, unlike variance, it is expressed in the same units as the data. Note, however, that for measurements with percentage as the unit, the standard deviation will have percentage points as the unit. In addition to expressing the variability of a population, standard deviation is commonly used to measure confidence in statistical conclusions. For example, the margin of error in polling data is determined by calculating the expected standard deviation in the results if the same poll were to be conducted multiple times. The reported margin of error is typically about twice the standard deviation, the radius of a 95% confidence interval. In science, researchers commonly report the standard deviation of experimental data, and only effects that fall far outside this range are considered statistically significant; normal random error or variation in the measurements is in this way distinguished from causal variation. Standard deviation is also important in finance, where the standard deviation of the rate of return on an investment is a measure of the volatility of the investment.
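As a concrete illustration of the definitions above, here is a minimal sketch in Python (not part of the original text; the function names are invented for this example). The first function computes the standard deviation as the square root of the variance, with an optional Bessel correction for samples; the second applies the polling rule of thumb quoted above, that the 95% margin of error is roughly twice the standard deviation of the sampling distribution.

```python
import math

def standard_deviation(data, sample=False):
    """Square root of the variance: the average squared distance
    from the mean. With sample=True, divide by n - 1 (Bessel's
    correction), as is usual when estimating a population's
    spread from a sample."""
    n = len(data)
    mean = sum(data) / n
    variance = sum((x - mean) ** 2 for x in data) / (n - 1 if sample else n)
    return math.sqrt(variance)

def poll_margin_of_error(p, n):
    """Rough 95% margin of error for a poll: about twice the standard
    deviation of a sample proportion p taken from n respondents."""
    return 2 * math.sqrt(p * (1 - p) / n)

# The result is expressed in the same units as the data.
readings = [2, 4, 4, 4, 5, 5, 7, 9]
print(standard_deviation(readings))               # 2.0 (population)
print(standard_deviation(readings, sample=True))  # ~2.14 (sample)

# A poll of 1,000 people split 50/50 has a margin of error near 3.2%.
print(poll_margin_of_error(0.5, 1000))            # ~0.032
```

Note how the sample version is slightly larger than the population version; with only eight readings, the n - 1 correction is noticeable.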
History of Kansas
The history of Kansas, argued historian Carl L. Becker a century ago, reflects American ideals. He wrote:
- The Kansas spirit is the American spirit double distilled. It is a new grafted product of American individualism, American idealism, American intolerance. Kansas is America in microcosm.
Located on the eastern edge of the Great Plains, the U.S. state of Kansas was the home of nomadic Native American tribes who hunted the vast herds of bison (often called "buffalo"). The region was explored by Spanish conquistadores in the 16th century. It was later explored by French fur trappers who traded with the Native Americans. Most of Kansas became permanently part of the United States in the Louisiana Purchase of 1803. When the area was opened to settlement by the Kansas–Nebraska Act of 1854, it became a battlefield that helped cause the American Civil War. Settlers from North and South came in order to vote slavery down or up. The free-state element prevailed. After the war, Kansas was home to frontier towns; their railroads were destinations for cattle drives from Texas. With the railroads came heavy immigration from the East and from Germany, as well as freedmen known as "Exodusters". Farmers first tried to replicate Eastern patterns and grow corn and raise pigs, but they failed because rainfall was insufficient. The solution, as James Malin showed, was to switch to soft spring wheat and later to hard winter wheat. The wheat was exported to Europe, and was subject to wide variations in price. Many frustrated farmers joined the Populist movement around 1890, but conservative townspeople finally prevailed politically. They supported the progressive movement down to about 1940, but isolationism in foreign affairs, combined with prosperity for the farmers and townsfolk, has made the state a center of conservative support for the Republican Party since 1940. Since 1945 the farm population has sharply declined and manufacturing has become more important, typified by the aircraft industry of Wichita.
The Paleo-Indians and Archaic peoples
Around 7000 BC, Paleolithic descendants of Asian migrants into North America reached Kansas, and the region has been inhabited ever since. They were later augmented by other indigenous peoples migrating from other parts of the continent. These bands of newcomers encountered mammoths, camels, ground sloths, and horses. The sophisticated big-game hunters did not keep a balance, resulting in the "Pleistocene overkill", the rapid and systematic destruction of nearly all the species of large ice-age mammals in North America by 8000 BC. The hunters who pursued the mammoths may have represented the first of the northern Great Plains' cycles of boom and bust, relentlessly exploiting the resource until it was depleted or destroyed. After the disappearance of the big-game hunters, some archaic groups survived by becoming generalists rather than specialists, foraging in seasonal movements across the plains. The groups did not abandon hunting altogether, but also consumed wild plant foods and small game.
Their tools became more varied, with grinding and chopping implements becoming more common, a sign that seeds, fruits, and greens constituted a greater proportion of their diet. Also, pottery-making societies emerged.
Introduction of agriculture
For most of the Archaic period, people did not transform their natural environment in any fundamental way. Groups outside the region, particularly in Mesoamerica, introduced major innovations, such as maize cultivation. Some archaic groups transitioned from food gathering to food production around 3,000 years ago. They also possessed many of the cultural features that accompany semi-sedentary agricultural life: storage facilities, more permanent dwellings, larger settlements, and cemeteries or burial grounds. El Quartelejo was the northernmost Indian pueblo. This settlement is the only pueblo in Kansas from which archaeological evidence has been recovered. Despite the early advent of farming, late Archaic groups still exercised little control over their natural environment. Wild food resources remained important components of their diet even after the invention of pottery and the development of irrigation. The introduction of agriculture never resulted in the complete abandonment of hunting and foraging, even in the largest of Archaic societies.
Early European exploration and local tribes
In 1541, Francisco Vázquez de Coronado, the Spanish conquistador, visited Kansas, allegedly turning back near "Coronado Heights" in present-day Lindsborg. Near the Great Bend of the Arkansas River, in a place he called Quivira, he met the ancestors of the Wichita people. Near the Smoky Hill River, he met the Harahey, who were probably the ancestors of the Pawnee. This was the first time that the Plains Indians had seen horses. Later, they acquired horses from the Spanish, and rapidly and radically altered their lifestyle and range. Following this transformation, the Kansa (sometimes Kaw) and Osage Nation (originally Ouasash) arrived in Kansas in the 17th century. (The Kansa claimed that they had occupied the territory since 1673.) By the end of the 18th century, these two tribes were dominant in the eastern part of the future state: the Kansa on the Kansas River to the north and the Osage on the Arkansas River to the south. At the same time, the Pawnee (sometimes Paneassa) were dominant on the plains to the west and north of the Kansa and Osage nations, in regions home to massive herds of bison. Europeans visited the Northern Pawnee in 1719. In 1720, the Spanish military's Villasur expedition was wiped out by Pawnee and Otoe warriors near present-day Columbus, Nebraska, effectively ending Spanish expeditions into the region. The French commander at Fort Orleans, Étienne de Bourgmont, visited the Kansas River in 1724 and established a trading post there, near the main Kansa village at the mouth of the river. Around the same time, the Otoe, a Siouan tribe, also inhabited various areas around the northeast corner of Kansas. Apart from brief explorations, neither France nor Spain had any settlement or military or other activity in Kansas. In 1763, following the Seven Years' War in which Great Britain defeated France, Spain acquired the French claims west of the Mississippi River. It returned this territory to France in 1800, keeping title to about 7,500 square miles (19,000 km2).
In the Louisiana Purchase of 1803, the United States (US) acquired all of the French claims west of the Mississippi River; the area of Kansas was unorganized territory. In 1819 the United States confirmed Spanish rights to the 7,500 square miles (19,000 km2) as part of the Adams–Onís Treaty with Spain. That area became part of Mexico, which also ignored it. After the Mexican–American War and the US victory, the United States took over that part in 1848. The Lewis and Clark Expedition left St. Louis on a mission to explore the Louisiana Purchase all the way to the Pacific Ocean. In 1804, Lewis and Clark camped for three days at the confluence of the Kansas and Missouri rivers in present-day Kansas City, Kansas (today commemorated at Kaw Point Riverfront Park). They met French fur traders and mapped the area. In 1806, Zebulon Pike passed through Kansas and labeled it "the Great American Desert" on his maps. This view of Kansas would help form U.S. policy for the next 40 years, prompting the government to set it aside as land reserved for Native American resettlement. After a brief period as part of Missouri Territory, Kansas returned to unorganized status in 1821. In 1821, the Santa Fe Trail was opened across Kansas as the country's transportation route to the Southwest, connecting Missouri with the well-established Santa Fe, New Mexico. Because of the burgeoning trade, the United States Army set up posts throughout the area. On May 8, 1827, Cantonment Leavenworth, or Fort Leavenworth, was built to protect travelers. A section of the Santa Fe Trail through Kansas was used by emigrants on the Oregon Trail, which opened in 1841. The westward trails served as vital commercial and military highways until the railroad took over this role in the 1860s. To travelers en route to Utah, California, or Oregon, Kansas was an essential way station and outfitting location. Wagon Bed Spring (also Lower Spring or Lower Cimarron Spring) was an important watering spot on the Cimarron Cutoff of the Santa Fe Trail. Other important locations along the trail were the Point of Rocks and Pawnee Rock.
1820s–1840s: Indian territory
Beginning in the 1820s, the area that would become Kansas was set aside as Indian Territory by the U.S. government and was closed to settlement by whites. The government pressed the Native American tribes based in eastern Kansas, principally the Kansa and Osage, to cede most of their lands (these tribes were eventually resettled in Indian Territory, now part of Oklahoma), opening land to move eastern tribes into the area. By treaty dated June 3, 1825, 20 million acres (81,000 km²) of land were ceded by the Kansa Nation to the United States, and the Kansa tribe was limited to a specific reservation in northeast Kansas. In the same month, the Osage Nation was limited to a reservation in southeast Kansas. By another treaty of 1825, the United States granted to:
- "the Shawanoe tribe of Indians within the State of Missouri, for themselves, and for those of the same nation now residing in Ohio who may hereafter immigrate to the west of the Mississippi, a tract of land equal to fifty miles [80 km] square, situated west of the State of Missouri, and within the purchase lately made from the Osage."
The Delaware came to Kansas from Ohio and other eastern areas by the treaty of September 24, 1829. The treaty described:
- "the country in the fork of the Kansas and Missouri Rivers, extending up the Kansas River to the Kansas (Indian's) line, and up the Missouri River to Camp Leavenworth, and thence by a line drawn westerly, leaving a space ten miles (16 km) wide, north of the Kansas boundary line, for an outlet."
After this point, the Indian Removal Act of 1830 expedited the process. By treaty dated August 30, 1831, the Ottawa ceded land to the United States and moved to a small reservation on the Kansas River and its branches. The treaty was ratified April 6, 1832. On October 24, 1832, the U.S. government moved the Kickapoos to a reservation in Kansas. On October 29, 1832, the Piankeshaw and Wea agreed to occupy 250 sections of land, bounded on the north by the Shawanoe; east by the western boundary line of Missouri; and west by the Kaskaskia and Peoria peoples. By treaty made with the United States on September 21, 1833, the Otoe tribe ceded their country south of the Little Nemaha River. By a treaty of September 17, 1836, the confederated Sac and Fox moved to lands north of the Kickapoo. By treaty of February 11, 1837, the United States agreed to convey to the Pottawatomi an area on the Osage River, southwest of the Missouri River. The tract selected was in the southwest part of what is now Miami County. In 1842, after a treaty between the United States and the Wyandots, the Wyandot moved to the junction of the Kansas and Missouri Rivers (on land that was shared with the Delaware until 1843). In an unusual provision, 35 Wyandot were given "floats" in the 1842 treaty: ownership of sections of land that could be located anywhere west of the Missouri River. In 1847, the Pottawatomi were moved again, to an area containing 576,000 acres (2,330 km²), comprising the eastern part of the lands ceded to the United States by the Kansa tribe in 1846. This tract comprised a part of the present counties of Pottawatomie, Wabaunsee, Jackson and Shawnee.
Early 1850s and the territory organization
Despite the extensive plans that were made to settle Native Americans in Kansas, by 1850 white Americans were illegally squatting on their land and clamoring for the entire area to be opened for settlement. Presaging events to come, several U.S. Army forts, including Fort Riley, were established deep in Indian Territory to guard travelers on the various Western trails. Although the Cheyenne and Arapaho tribes were still negotiating with the United States for land in western Kansas (the current state of Colorado), and signed a treaty on September 17, 1851, momentum was already building to settle the land. Congress began the process of creating Kansas Territory in 1852. That year, petitions were presented at the first session of the Thirty-second Congress for a territorial organization of the region lying west of Missouri and Iowa. No action was taken at that time. However, during the next session, on December 13, 1852, a Representative from Missouri submitted to the House a bill organizing the Territory of Platte: the entire tract lying west of Iowa and Missouri and extending west to the Rocky Mountains. The bill was referred to the United States House Committee on Territories, and passed by the full U.S. House of Representatives on February 10, 1853. However, Southern senators stalled the bill in the Senate while its implications for slavery and the Missouri Compromise were debated. Heated debate over the bill and other competing proposals continued for a year, before eventually resulting in the Kansas–Nebraska Act, which became law on May 30, 1854, establishing the Nebraska Territory and Kansas Territory.
Native American territory ceded
Meanwhile, by the summer of 1853, it was clear that eastern Kansas would soon be opened to American settlers.
The Commissioner of the Bureau of Indian Affairs negotiated new treaties that would assign new reservations with annual federal subsidies for the Indians. Nearly all the tribes in the eastern part of the Territory ceded the greater part of their lands prior to the passage of the Kansas territorial act in 1854, and were eventually moved south to the future state of Oklahoma. In the three months immediately preceding the passage of the bill, treaties were quietly made at Washington with the Delaware, Otoe, Kickapoo, Kaskaskia, Shawnee, Sac, Fox and other tribes, whereby the greater part of eastern Kansas, lying within one or two hundred miles of the Missouri border, was suddenly opened to white settlement. (The Kansa reservation had already been reduced by treaty in 1846.) On March 15, 1854, the Otoe and Missouri Indians ceded to the United States all their lands west of the Mississippi, except a small strip on the Big Blue River. On May 6 and May 10, 1854, the Shawnees ceded 6,100,000 acres (25,000 km2), reserving only 200,000 acres (810 km2) for homes. Also on May 6, 1854, the Delaware ceded all their lands to the United States, except a reservation defined in the treaty. On May 17, the Iowa similarly ceded their lands, retaining only a small reservation. On May 18, 1854, the Kickapoo too ceded their lands, except 150,000 acres (610 km2) in the western part of the Territory. In 1854 lands were also ceded by the Kaskaskia, Peoria, Piankeshaw and Wea and by the Sac and Fox. The final step in Americanizing the Indians was taking land from tribal control and assigning it to individual Indian households, to buy and sell as European Americans would. For example, in 1854, the Chippewa (Swan Creek and Black River bands) inhabited 8,320 acres (33.7 km2) in Franklin County, but in 1859 the tract was transferred to individual Chippewa families. Upon the passage of the Kansas–Nebraska Act on May 30, 1854, the borders of Kansas Territory were set from the Missouri border to the summit of the Rocky Mountain range (now in central Colorado); the southern boundary was the 37th parallel north, and the northern was the 40th parallel north. North of the 40th parallel was Nebraska Territory. When Congress set the southern border of the Kansas Territory as the 37th parallel, it was thought that the Osage southern border was also the 37th parallel. The Cherokees immediately complained, saying that it was not the true boundary and that the border of Kansas should be moved north to accommodate the actual border of the Cherokee land. This became known as the Cherokee Strip controversy.
An invitation to violence
The most controversial provision in the Kansas–Nebraska Act was the stipulation that settlers in Kansas Territory would vote on whether to allow slavery within its borders. This provision repealed the Missouri Compromise of 1820, which had prohibited slavery in any new states created north of latitude 36°30'. Predictably, violence resulted between the Northerners and Southerners who rushed to settle there in order to control the vote. Within a few days after the passage of the Act, hundreds of pro-slavery Missourians crossed into the adjacent territory, selected an area of land, and then united with other Missourians in a meeting or meetings, intending to establish a pro-slavery preemption upon the entire region. As early as June 10, 1854, the Missourians held a meeting at Salt Creek Valley, a trading post three miles (5 km) west of Fort Leavenworth, at which a "Squatter's Claim Association" was organized.
They said they were in favor of making Kansas a slave state, if it should require half the citizens of Missouri, musket in hand, to emigrate there, and even to sacrifice their lives in accomplishing this end. To counter this action, the Massachusetts Emigrant Aid Company (and other smaller organizations) quickly arranged to send anti-slavery settlers (known as "Free-Staters") into Kansas in 1854 and 1855. The principal towns founded by the New Englanders were Topeka, Manhattan, and Lawrence. Several Free-State men also came to Kansas Territory from Ohio, Iowa, Illinois and other Midwestern states. Despite the proximity and opposite aims of the settlers, the lid was largely kept on the violence until the election of the Kansas Territorial legislature on March 30, 1855. On that date, Missourians who had streamed across the border (known as "Border Ruffians") filled the ballot boxes in favor of pro-slavery candidates. As a result, pro-slavery candidates prevailed at every polling district except one (the future Riley County), and the first official legislature was overwhelmingly composed of pro-slavery delegates. From 1855 to 1858, Kansas Territory experienced extensive violence and some open battles. This period, known as "Bleeding Kansas" or "the Border Wars", directly presaged the American Civil War. The major incidents of Bleeding Kansas include the Wakarusa War, the Sacking of Lawrence, the Pottawatomie massacre, the Battle of Black Jack, the Battle of Osawatomie, and the Marais des Cygnes massacre.
- Wakarusa War On December 1, 1855, a small army of Missourians, acting under the command of Douglas County, Kansas Sheriff Samuel J. Jones, laid siege to the Free-State stronghold of Lawrence in what would later become known as "The Wakarusa War." A peace treaty was announced shortly afterwards amid much disorder and cries for the reading of the treaty; the reading quelled the disorder, and the treaty's provisions were generally accepted.
- Sacking of Lawrence On May 21, 1856, pro-slavery forces led by Sheriff Jones attacked Lawrence, killing two men, burning the Free-State Hotel to the ground, destroying two printing presses, and robbing homes.
- Pottawatomie massacre The Pottawatomie massacre occurred during the night of May 24 to the morning of May 25, 1856. In what appears to have been a reaction to the Sacking of Lawrence, John Brown and a band of abolitionists (some of them members of the Pottawatomie Rifles) killed five settlers, thought to be pro-slavery, north of Pottawatomie Creek in Franklin County, Kansas. Brown later said that he had not participated in the killings during the Pottawatomie massacre, but that he did approve of them. He went into hiding after the killings, and two of his sons, John Jr. and Jason, were arrested. During their confinement, they were allegedly mistreated, which left John Jr. mentally scarred. On June 2, Brown led a successful attack on a band of Missourians led by Captain Henry Pate in the Battle of Black Jack. Pate and his men had entered Kansas to capture Brown and others. That autumn, Brown went back into hiding and engaged in other guerrilla warfare activities.
The violently feuding pro-slavery and anti-slavery factions tried to defeat the opposition by pushing through their own version of a state constitution that would either endorse or condemn slavery. Congress had the final say. The Topeka Constitution was adopted on November 11, 1855, in Topeka by delegates elected from across the Kansas Territory.
This Free-State document was in response to the fraudulent takeover of the Territorial government by pro-slavery forces seven months earlier. The Topeka Constitution's Bill of Rights proposed: "There shall be no slavery in this state." It was ratified by the people of the Territory on December 15, 1855, and presented in Congress in March 1856. It passed in the U.S. House of Representatives but was prevented from a vote in the Senate by pro-slavery Southern senators. The Lecompton Constitution was adopted by a convention convened by the official pro-slavery government on November 7, 1857. The constitution would have allowed slavery in Kansas as drafted, but the slavery provision was put to a vote. After a series of votes on the provision and the constitution were boycotted alternately by pro-slavery settlers and Free-State settlers, the Lecompton Constitution was eventually presented to the U.S. Congress for approval. In the end, because it was never clear if the constitution represented the will of the people, it was rejected. While the Lecompton Constitution was being debated, a new Free-State legislature was elected and seated in Kansas Territory. The new legislature convened a new convention, which framed the Leavenworth Constitution. This constitution was the most radically progressive of the four proposed, outlawing slavery and providing a framework for women's rights. The constitution was adopted by the convention at Leavenworth on April 3, 1858, and by the people at an election held May 18, 1858 (all while the Lecompton Constitution was still under consideration). President Buchanan sent the Lecompton Constitution to Congress for approval. The Senate approved the admission of Kansas as a state under the Lecompton Constitution, despite the opposition of Senator Douglas, who believed that the Kansas referendum on the Constitution, by failing to offer the alternative of prohibiting slavery, was unfair. The measure was subsequently blocked in the House of Representatives, where northern congressmen refused to admit Kansas as a slave state. Senator James Hammond of South Carolina characterized this rejection as the expulsion of the state, asking, "If Kansas is driven out of the Union for being a slave state, can any Southern state remain within it with honor?" Following the failure of the Lecompton and Leavenworth charters, a fourth constitution was drafted; the Wyandotte Constitution was adopted by the convention which framed it on July 29, 1859. It was adopted by the people at an election held October 4, 1859. It outlawed slavery but was far less progressive than the Leavenworth Constitution. Kansas was admitted into the Union as a free state under this constitution on January 29, 1861.
End of hostilities
By the time the Wyandotte Constitution was framed in 1859, it was clear that the pro-slavery forces had lost in their bid to control Kansas. With this dawning realization and the departure of John Brown from the state, Bleeding Kansas violence virtually ceased by 1859. Kansas thus became the 34th state. The 1860s saw several important developments in the history of Kansas, including participation in the Civil War, the beginning of the cattle drives, the roots of Prohibition in Kansas (which would fully take hold in the 1880s), and the start of the Indian Wars on the western plains. James Lane was elected to the Senate from the state of Kansas in 1861, and reelected in 1865.
After years of small-scale civil war, Kansas was admitted into the Union as a free state under the Wyandotte Constitution on January 29, 1861. Most Kansans strongly supported the Union cause. However, guerrilla warfare and raids from pro-slavery forces, many spilling over from Missouri, occurred during the Civil War. At the start of the war in April 1861, the Kansas government had no well-organized militia, no arms, accoutrements, or supplies, nothing with which to meet the demands except the united will of officials and citizens. During the years 1859 to 1860, the military organizations had fallen into disuse or been entirely broken up. The first Kansas regiment was called into service on June 3, 1861, and the seventeenth, the last raised during the Civil War, on July 28, 1864. The entire quota assigned to Kansas was 16,654, and the number raised was 20,097, leaving a surplus of 3,443 to the credit of Kansas. Statistics indicate that the losses of Kansas regiments, in men killed in battle and dead of disease, were greater per thousand than those of any other state. Apart from small formal battles, there were 29 Confederate raids into Kansas during the war. The most serious episode came when Lawrence, Kansas, came under attack on August 21, 1863, by guerrillas led by William Clarke Quantrill. The raid was in part retaliation for "Jayhawker" raids against pro-Confederate settlements in Missouri; after Union Brigadier General Thomas Ewing, Jr. ordered the imprisonment of women who had provided aid to Confederate guerrillas, the jail's roof collapsed, killing five, and these deaths further enraged guerrillas in Missouri. Quantrill's raiders burned much of Lawrence and killed over 150 men and boys. Beyond avenging the jail collapse, Quantrill rationalized that the attack on this citadel of abolition would bring revenge for any wrongs, real or imagined, that the Southerners had suffered at the hands of Jayhawkers. The Battle of Baxter Springs, sometimes called the Baxter Springs Massacre, was a minor battle in the war, fought on October 6, 1863, near the modern-day town of Baxter Springs, Kansas. The Battle of Mine Creek, also known as the Battle of the Osage, was a cavalry battle that occurred in Kansas during the war.
Marais des Cygnes
On October 25, 1864, the Battle of Marais des Cygnes occurred in Linn County, Kansas. This battle, also known as the Battle of Trading Post, was fought between Major General Sterling Price and Union forces under Major General Alfred Pleasonton. Price, fleeing south after a defeat at Kansas City, was driven out by Union forces.
Indian Wars in Kansas
Fort Larned (central Kansas) was established in 1859 as a base of military operations against hostile Indians of the Central Plains and to protect traffic along the Santa Fe Trail; after 1861 it became an agency for the administration of the Central Plains Indians by the Bureau of Indian Affairs under the terms of the Fort Wise Treaty of 1861.
Kansas Pacific railroad
In 1863, the Union Pacific Eastern Division (renamed the Kansas Pacific in 1869) was authorized by the United States Congress's Pacific Railway Act to create the southerly branch of the transcontinental railroad alongside the Union Pacific. The Pacific Railway Act also authorized large land grants to the railroad along its main line. The company began construction on its main line westward from Kansas City in September 1863. In the postwar era, many railroads were planned, but not all were actually built. The nationwide Panic of 1873 dried up funding.
Land speculators and local boosters identified many potential towns, and those reached by the railroad had a chance, while the others became ghost towns. In Kansas, nearly 5,000 towns were mapped out, but by 1970 only 617 were actually operating. In the mid-20th century, proximity to an interstate interchange determined whether a town would flourish or struggle for business. After the Civil War, the railroads did not reach Texas, so the herdsmen brought their cattle to Kansas railheads. In 1867, Joseph G. McCoy built stockyards in Abilene, Kansas, and helped develop the Chisholm Trail, encouraging Texas cattlemen to undertake cattle drives to his stockyards from 1867 to 1887. The stockyards became the largest west of Kansas City. Once the cattle were driven north, they were shipped eastward from the railhead of the Kansas Pacific Railway. In 1871, Wild Bill Hickok became marshal of Abilene, Kansas. His encounter there with John Wesley Hardin resulted in the latter fleeing the town after Wild Bill managed to disarm him. Hickok also served as a deputy marshal at Fort Riley and as marshal at Hays in the Wild West era. In the 1880s at Greensburg, Kansas, the Big Well was built to provide water for the Santa Fe and Rock Island railroads. At 109 feet (33 m) deep and 32 feet (9.8 m) in diameter, it is the world's largest hand-dug well. Coronado, Kansas, was established in 1885. It was involved in one of the bloodiest county seat fights in the history of the American West. The shoot-out on February 27, 1887, with boosters (some would say hired gunmen) from nearby Leoti left several people dead and wounded. In 1879, after the end of Reconstruction in the South, thousands of freedmen moved from Southern states to Kansas. Known as the Exodusters, they were lured by the prospect of good, cheap land and better treatment. The all-black town of Nicodemus, Kansas, which was founded in 1877, was an organized settlement that predates the Exodusters but is often associated with them. On February 19, 1881, Kansas became the first state to amend its constitution to prohibit all alcoholic beverages. This action was spawned by the temperance movement and was enforced by the ax-toting Carrie A. Nation beginning in 1888. After 1890 prohibition was joined with progressivism to create a reform movement that elected four successive governors between 1905 and 1919; they favored extreme prohibition enforcement policies and claimed Kansas was truly dry. Kansas did not repeal prohibition until 1948, and even then it continued to prohibit public bars, a restriction which was not lifted until 1987. Kansas did not allow retail liquor sales on Sundays until 2005, and most localities still prohibit Sunday liquor sales. Under Kansas's alcohol laws today, 29 counties remain dry. The city of Topeka played a notable role in the history of American Christianity around the beginning of the 20th century. Charles Sheldon, a leader in the Social Gospel movement who first used the phrase "What would Jesus do?", preached in Topeka. Topeka was also the home to the church of Charles Fox Parham, whom many historians associate with the beginning of the modern Pentecostal movement. Early settlers discovered that Kansas was not the "Great American Desert", but they also found that the very harsh climate, with tornadoes, blizzards, drought, hail, floods, and grasshoppers, made for a high risk of a ruined crop.
Many early settlers were financially ruined and, especially in the early 1890s, either protested through the Populist movement or went back east. In the 20th century, crop insurance, new conservation techniques, and large-scale federal aid have lowered the risk. Immigrants, especially Germans and their children, comprised the largest element of settlers after 1860; they were attracted by the good soil, low-priced lands from the railroad companies, and (if they were American citizens) the chance to homestead 160 acres (0.65 km2) and receive title to the land at no cost from the federal government. The problem of blowing dust arose not because farmers grew too much wheat, but because rainfall was too scant to grow enough wheat to keep the topsoil from blowing away. In the 1930s techniques and technologies of soil conservation, most of which had been available but ignored before the Dust Bowl conditions began, were promoted by the Soil Conservation Service (SCS) of the US Department of Agriculture, so that, with cooperation from the weather, soil conditions were much improved by 1940. On the Great Plains very few single men attempted to operate a farm or ranch; farmers clearly understood the need for a hard-working wife, and numerous children, to handle the many chores, including child-rearing, feeding and clothing the family, managing the housework, feeding the hired hands, and, especially after the 1930s, handling the paperwork and financial details. During the early years of settlement in the late 19th century, farm women played an integral role in assuring family survival by working outdoors. After a generation or so, women increasingly left the fields, thus redefining their roles within the family. New conveniences such as sewing and washing machines encouraged women to turn to domestic roles. The scientific housekeeping movement was promoted across the land by the media and government extension agents, as well as by county fairs, which featured achievements in home cookery and canning, advice columns for women in the farm papers, and home economics courses in the schools. Although the eastern image of farm life on the prairies emphasizes the isolation of the lonely farmer, in reality rural folk created a rich social life for themselves. They often sponsored activities that combined work, food, and entertainment, such as barn raisings, corn huskings, quilting bees, Grange meetings, church activities, and school functions. The womenfolk organized shared meals and potluck events, as well as extended visits between families. In 1947, Lyle Yost founded Hesston Manufacturing Company. The company specialized in farm equipment, including self-propelled windrowers and the StakHand hay harvester. In 1974, Hesston commissioned its first belt buckles, which became popular on the rodeo circuit and with collectors. In 1991, the American-based equipment manufacturer AGCO Corporation purchased Hesston Corporation, and farm equipment is still manufactured in the city. In 1896 William Allen White, editor of the Emporia Gazette, attracted national attention with a scathing attack on William Jennings Bryan, the Democrats, and the Populists titled "What's the Matter With Kansas?" White sharply ridiculed Populist leaders for letting Kansas slip into economic stagnation and for not keeping up economically with neighboring states, arguing that their anti-business policies frightened capital away from the state.
The Republicans sent out hundreds of thousands of copies of the editorial in support of William McKinley during the 1896 United States presidential election. While McKinley carried the small towns and cities of the state, Bryan swept the wheat farms and won the state's electoral votes, even as McKinley won the national election.

Kansas was a center of the progressive movement, with enthusiastic support from the middle classes, editors such as William Allen White of the Emporia Gazette, and the prohibitionists of the WCTU and the Methodist Church. White, in his novels and short stories, developed his idea of the small town as a metaphor for understanding social change and for preaching the necessity of community. While he expressed his views in terms of his small Kansas city, he tailored his rhetoric to the needs and values of all of urban America. The cynicism of the post-World War I world stilled his imaginative literature, but for the remainder of his life he continued to propagate his vision of small-town community. He opposed chain stores and mail order firms as a threat to the business owner on Main Street. The Great Depression shook his faith in a cooperative, selfless, middle-class America.

In 1916, Kansas troops served on the U.S.–Mexico border during the Mexican Revolution. After the United States declared war on Germany in April 1917, 80,000 Kansans enlisted in the military. They were attached mostly to the 35th, the 42nd, the 89th, and the 92nd infantry divisions. The state's large German element had favored neutrality and was kept under close watch; many were pressured to buy war bonds or to stop speaking German in public.

In 1915, the El Dorado Oil Field, around the city of El Dorado, became the first oil field discovered using geologic mapping; it formed part of the Mid-Continent oil province. By 1918, the El Dorado Oil Field was the largest single field producer in the United States, responsible for 12.8% of national oil production and 9% of world production. It was deemed by some "the oil field that won World War I". While urban areas prospered in the 1920s, the farm economy had overexpanded when wheat prices were high during the war and had to cut back sharply.

The flag of Kansas was designed in 1925. It was officially adopted by the Kansas State Legislature in 1927 and modified in 1961, when the word "Kansas" was added below the seal in gold block lettering. It was first flown in 1927 by Governor Ben S. Paulen for the troops at Fort Riley and the Kansas National Guard.

The Dust Bowl was a series of dust storms caused by a massive drought that began in 1930 and lasted until 1941. The effect of the drought was overshadowed by plunging wheat prices and the financial crisis of the Great Depression. Many local banks were forced to close. Some farmers left the land, but even larger numbers of unemployed men left the cities to return to their families' farms. The state became an eager participant in such major New Deal relief programs as the Civil Works Administration, the Federal Emergency Relief Administration, the Civilian Conservation Corps, and the Works Progress Administration, which put hundreds of thousands of Kansans, mostly men, to work at unskilled labor. Most important of all were the New Deal farm programs, which raised prices of wheat and other crops and allowed economic recovery by 1936. Republican Governor Alf Landon also employed emergency measures, including a moratorium on mortgage foreclosures and a balanced budget initiative.
The Agricultural Adjustment Administration succeeded in raising wheat prices after 1933, thus alleviating the most serious distress.

World War II

The state's main contribution to the war effort, besides tens of thousands of servicemen and servicewomen, was the enormous increase in the output of grain production. Farmers nevertheless grumbled about price ceilings for their wheat, production quotas, the movement of hired hands to well-paid factory jobs, and the shortage of farm machinery; they lobbied Congress to make sure that young farmers were deferred from the draft. Wichita, which had long shown an interest in aviation, became a major manufacturing center for the aircraft industry during the war, attracting tens of thousands of underemployed workers from the farms and small towns of the state.

The Women's Land Army of America (WLA) was a wartime women's labor pool organized by the U.S. Department of Agriculture. It failed to attract many town or city women to do farm work, but it succeeded in training several hundred farm wives in machine handling, safety, proper clothing, time-saving methods, and nutrition.

Cold War era

Kansas state law permitted segregated public schools, which operated in Topeka and other cities. On May 17, 1954, the Supreme Court in Brown v. Board of Education unanimously declared that separate educational facilities are inherently unequal and, as such, violate the 14th Amendment to the United States Constitution, which guarantees all citizens "equal protection of the laws." Brown v. Board of Education of Topeka explicitly outlawed de jure racial segregation of public education facilities (the legal establishment of separate government-run schools for blacks and whites). The Brown v. Board of Education National Historic Site consists of Monroe Elementary School, one of the four segregated elementary schools for African American children in Topeka, Kansas, and the adjacent grounds.

During the 1950s and 1960s, intercontinental ballistic missiles designed to carry a single nuclear warhead were stationed at facilities throughout Kansas, stored in and launched from hardened underground silos. The Kansas facilities were deactivated in the early 1980s.

On June 8, 1966, Topeka, Kansas was struck by a tornado rated F5 on the Fujita scale. The 1966 Topeka tornado started on the southwest side of town and moved northeast, hitting various landmarks, including Washburn University. The total cost was put at $100 million.

Kansas was home to President Dwight D. Eisenhower, who grew up in Abilene, to presidential candidates Bob Dole and Alf Landon, and to the aviator Amelia Earhart. Famous athletes from Kansas include Barry Sanders, Gale Sayers, Jim Ryun, Walter Johnson, Maurice Greene, and Lynette Woodard. The Kansas Sports Hall of Fame chronicles the history of competitive athletics in the state.

Kansas sports history includes several significant firsts. The first college football game played in Kansas was the 1890 Kansas vs. Baker football game played in Baldwin City; Baker won 22–9. The first night football game west of the Mississippi was played in Wichita, Kansas in 1905 between Cooper College (now Sterling College) and Fairmount College (now Wichita State University). Later that year, Fairmount also played an experimental game against the Washburn Ichabods that was used to test new rules designed to make football safer. In 1911, the Kansas Jayhawks traveled to play the Missouri Tigers in what is considered the first homecoming game ever.
The first college football homecoming game ever televised was played in Manhattan between the Kansas State Wildcats and the Nebraska Cornhuskers. The 1951 season saw Southwestern head coach Harold Hunt gain national recognition for rejecting a touchdown in a game against Central Missouri. Hunt informed the officials that his player had stepped out of bounds, nullifying a long touchdown run. Not one of the referees had been in a position to see the player do so, but they agreed to nullify the touchdown and returned the ball to the point where Hunt said his player, Johnson, had stepped out. A photo of the run later confirmed Hunt's observation.

On October 2, 1970, a plane carrying about half of the Wichita State football team crashed on the way to a game against Utah State University, killing 31 people. The game was canceled, and the Utah State football team held a memorial service at the stadium where the game was to have been played.

The history of professional sports in Kansas probably dates from the establishment of the Minor League Baseball Topeka Capitals and Leavenworth Soldiers in 1886 in the Western League. The African-American Bud Fowler played on the Topeka team that season, one year before the "color line" descended in professional baseball. In 1887, the Western League was dominated by a reorganized Topeka team called the Golden Giants – a high-priced collection of major league players, including Bug Holliday, Jim Conway, Dan Stearns, Perry Werden and Jimmy Macullar – which won the league by 15½ games. On April 10, 1887, the Golden Giants also won an exhibition game against the defending World Series champions, the St. Louis Browns (the present-day Cardinals), by a score of 12–9. However, Topeka was unable to support the team, and it disbanded after one year.

The first night game in the history of professional baseball was played in Independence on April 28, 1930, when the Muscogee (Oklahoma) Indians beat the Independence Producers 13 to 3 in a minor league game sanctioned by the Western League of the Western Baseball Association, with 1,500 fans attending. The permanent lighting system had first been used for an exhibition game on April 17, 1930, between the Independence Producers and the House of David semi-professional baseball team of Benton Harbor, Michigan, with the Independence team winning 9 to 1 before a crowd of 1,700 spectators.
Thermoelectric power generator, any of a class of solid-state devices that either convert heat directly into electricity or transform electrical energy into thermal power for heating or cooling. Such devices are based on thermoelectric effects involving interactions between the flow of heat and of electricity through solid bodies.

All thermoelectric power generators have the same basic configuration, as shown in the figure. A heat source provides the high temperature, and the heat flows through a thermoelectric converter to a heat sink, which is maintained at a temperature below that of the source. The temperature differential across the converter produces a direct current (DC) to a load (RL) having a terminal voltage (V) and a terminal current (I). There is no intermediate energy conversion process; for this reason, thermoelectric power generation is classified as direct power conversion. The amount of electrical power generated is given by I²RL, or VI.

A unique aspect of thermoelectric energy conversion is that the direction of energy flow is reversible. So, for instance, if the load resistor is removed and a DC power supply is substituted, the thermoelectric device can be used to draw heat from the "heat source" element and lower its temperature. In this configuration, the reversed energy-conversion process of thermoelectric devices is invoked, using electrical power to pump heat and produce refrigeration. This reversibility distinguishes thermoelectric energy converters from many other conversion systems, such as thermionic power converters. Electrical input power can be directly converted to pumped thermal power for heating or refrigerating, or thermal input power can be converted directly to electrical power for lighting, operating electrical equipment, and other work. Any thermoelectric device can be applied in either mode of operation, though the design of a particular device is usually optimized for its specific purpose.

Systematic study of thermoelectricity began between about 1885 and 1910. By 1910, Edmund Altenkirch, a German scientist, had satisfactorily calculated the potential efficiency of thermoelectric generators and delineated the parameters of the materials needed to build practical devices. Unfortunately, metallic conductors were the only materials available at the time, making it infeasible to build thermoelectric generators with an efficiency of more than about 0.5 percent. By 1940, a semiconductor-based generator with a conversion efficiency of 4 percent had been developed. After 1950, in spite of increased research and development, gains in thermoelectric power-generating efficiency were relatively small, with efficiencies of not much more than 10 percent by the late 1980s. Better thermoelectric materials will be required to go much beyond this performance level. Nevertheless, some low-power varieties of thermoelectric generators have proven to be of considerable practical importance. Those fueled by radioactive isotopes are the most versatile, reliable, and generally used power source for isolated or remote sites, such as those recording and transmitting data from space.

Major types of thermoelectric generators

Thermoelectric power generators vary in geometry, depending on the type of heat source and heat sink, the power requirement, and the intended use. During World War II, some thermoelectric generators were used to power portable communications transmitters.
Substantial improvements were made in semiconductor materials and in electrical contacts between 1955 and 1965 that expanded the practical range of application. In practice, many units require a power conditioner to convert the generator output to a usable voltage. Generators have been constructed to use natural gas, propane, butane, kerosene, jet fuels, and wood, to name but a few heat sources. Commercial units are usually in the 10- to 100-watt output power range. These are for use in remote areas in applications such as navigational aids, data collection and communications systems, and cathodic protection, which prevents electrolysis from corroding metallic pipelines and marine structures.

Solar thermoelectric generators have been used with some success to power small irrigation pumps in remote areas and underdeveloped regions of the world. An experimental system has been described in which warm surface ocean water is used as the heat source and cooler deep ocean water as the heat sink. Solar thermoelectric generators have been designed to supply electric power in orbiting spacecraft, though they have not been able to compete with silicon solar cells, which have better efficiency and lower unit weight. However, consideration has been given to systems featuring both heat pumping and power generation for thermal control of orbiting spacecraft. Utilizing solar heat from the Sun-oriented side of the spacecraft, thermoelectric devices can generate electrical power for use by other thermoelectric devices in dark areas of the spacecraft and to dissipate heat from the vehicle.

The decay products of radioactive isotopes can be used to provide a high-temperature heat source for thermoelectric generators. Because thermoelectric device materials are relatively immune to nuclear radiation and because the source can be made to last for a long period of time, such generators provide a useful source of power for many unattended and remote applications. For example, radioisotope thermoelectric generators provide electric power for isolated weather monitoring stations, for deep-ocean data collection, for various warning and communications systems, and for spacecraft. In addition, a low-power radioisotope thermoelectric generator was developed as early as 1970 and used to power cardiac pacemakers. The power range of radioisotope thermoelectric generators is generally between 10⁻⁶ and 100 watts.

Principles of operation

An introduction to the phenomena of thermoelectricity is necessary to understand the operating principles of thermoelectric devices. In 1821, the German physicist Thomas Johann Seebeck discovered that when two strips of different electrically conducting materials were separated along their length but joined together by two "legs" at their ends, a magnetic field developed around the legs, provided that a temperature difference existed between the two junctions. He published his observations the following year, and the phenomenon came to be known as the Seebeck effect. However, Seebeck did not identify the cause of the magnetic field. It results from equal but opposite electric currents in the two metal-strip legs, driven by an electric potential difference across the junctions that is induced by the temperature difference between the materials. If one junction is opened but the temperature differential is maintained, current no longer flows in the legs, but a voltage can be measured across the open circuit.
This generated voltage (V) is the Seebeck voltage and is related to the difference in temperature (ΔT) between the heated junction and the open junction by a proportionality factor (α) called the Seebeck coefficient, or V = αΔT. The value of α depends on the types of material at the junction.

In 1834, the French physicist and watchmaker Jean-Charles-Athanase Peltier observed that if a current is passed through a single junction of the type described above, the amount of heat generated is not consistent with what would be predicted solely from ohmic heating caused by electrical resistance. This observation is called the Peltier effect. As in Seebeck's case, Peltier failed to define the cause of the anomaly: he did not identify that heat is absorbed or evolved at the junction depending on the direction of the current. He also did not recognize the reversible nature of this thermoelectric phenomenon, nor did he associate his discovery with that of Seebeck.

It was not until 1855 that William Thomson (later Lord Kelvin) drew the connection between the Seebeck and Peltier effects, which was the first significant contribution to the understanding of thermoelectric phenomena. He showed that the Peltier heat or power (Qp) at a junction was proportional to the junction current (I) through the relationship Qp = πI, where π is the Peltier coefficient. Through thermodynamic analysis, Thomson also showed the direct relation between the Seebeck and Peltier effects, namely that π = αT, where T is the temperature of the junction. Furthermore, on the basis of thermodynamic considerations, he predicted what came to be known as the Thomson effect: heat power (Qτ) is absorbed or evolved along the length of a material rod whose ends are at different temperatures. This heat was shown to be proportional to the flow of current and to the temperature gradient along the rod. The proportionality factor τ is known as the Thomson coefficient.

Analysis of a thermoelectric device

In practice, the thermoelectric property of a device is adequately described using only one thermoelectric parameter, the Seebeck coefficient α. As was shown by Thomson, the Peltier coefficient at a junction is equal to the Seebeck coefficient multiplied by the operating junction temperature. The Thomson effect is comparatively small, and so it is generally neglected. While there is a Seebeck effect in junctions between different metals, the effect is small. A much larger Seebeck effect is achieved by use of p-n junctions between p-type and n-type semiconductor materials, typically silicon or germanium. The figure shows p-type and n-type semiconductor legs between a heat source and a heat sink with an electrical power load of resistance RL connected across the low-temperature ends. A practical thermoelectric device can be made up of many p-type and n-type semiconductor legs connected electrically in series and thermally in parallel between a common heat source and a heat sink; its behaviour can be discussed by considering only one couple.

An understanding of the thermal and electric power flows in a thermoelectric device involves two factors in addition to the Seebeck effect. First, there is the heat conduction in the two semiconductor legs between the source and the sink.
The thermal flow down these two legs is given by 2κ(a/L)ΔT, where κ is their average thermal conductivity in watts per metre-kelvin, a is the cross-sectional area of each leg in square metres (w² for a square leg of width w), L is the length of each leg in metres, and ΔT is the temperature differential between source and sink in kelvins. The second factor is the ohmic heating that occurs in both of the legs because of electrical resistance. The heat power produced in each leg is given by ρI²(L/a), where ρ is the average electrical resistivity of the semiconductor materials in ohm-metres and I is the electric current in amperes. Approximately half of the resistance-produced heat in each of the two legs flows toward the source and half toward the sink.

In a thermoelectric power generator, a temperature differential between the upper and lower surfaces of the two legs of the device can result in the generation of electric power. If no electrical load is connected to the generator, the applied heat source power results in a temperature differential (ΔT) with a value dictated only by the thermal conductivity of the p- and n-type semiconductor legs and their dimensions; the same amount of heat power is extracted at the heat sink. However, because of the Seebeck effect, a voltage Vα = αΔT will be present at the output terminals. When an electrical load is attached to these terminals, current will flow through the load. The electrical power generated in the device is equal to the product of the Seebeck coefficient α, the current I, and the temperature differential ΔT. For a given temperature differential, the flow of this current causes an increase in the thermal power into the device equal to the electric power generated. Some of the electric power generated in the device is dissipated by ohmic heating in the resistances of the two legs; the remainder is the electrical power output to the load resistance RL.

The leg geometry has a considerable effect on the operation. The thermal conduction power depends on the ratio of area to length, while ohmic heating depends on the inverse of that ratio. Thus, an increase in this ratio increases the thermal conduction power but reduces the power dissipated in the leg resistances. An optimum design normally results in relatively long and thin legs. In choosing or developing semiconductor materials suitable for thermoelectric generators, a useful figure of merit is the square of the Seebeck coefficient (α) divided by the product of the electrical resistivity (ρ) and the thermal conductivity (κ), often written Z = α²/(ρκ).
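As a worked illustration of the relations above, the sketch below evaluates the Seebeck voltage V = αΔT, the Peltier relation π = αT, the one-couple power balance (conduction 2κ(a/L)ΔT, ohmic heating ρI²(L/a) per leg, generated power αIΔT), and the figure of merit Z = α²/(ρκ). All material values and dimensions here are illustrative assumptions, not figures from the text:

```python
# One-couple thermoelectric generator: a minimal numerical sketch of the
# relations described in the text. All parameter values are assumed,
# illustrative numbers, not data from the article.

ALPHA = 400e-6   # Seebeck coefficient of the couple, V/K (assumed)
RHO   = 1.0e-5   # average electrical resistivity, ohm-metres (assumed)
KAPPA = 1.5      # average thermal conductivity, W/(m*K) (assumed)

A   = 1.0e-6     # cross-sectional area of each leg, m^2 (1 mm^2)
LEG = 5.0e-3     # length of each leg, m
T_HOT, T_COLD = 500.0, 300.0
DT = T_HOT - T_COLD

# Seebeck voltage and Peltier coefficient at the hot junction.
v_seebeck = ALPHA * DT                  # V = alpha * dT
peltier   = ALPHA * T_HOT               # pi = alpha * T (Thomson's relation)

# Internal resistance of the two legs in series; a matched load is assumed.
r_int  = 2 * RHO * LEG / A
r_load = r_int
current = v_seebeck / (r_int + r_load)

p_generated = ALPHA * current * DT              # alpha * I * dT
p_ohmic     = 2 * RHO * current**2 * (LEG / A)  # dissipated in the two legs
p_out       = current**2 * r_load               # delivered to the load
q_conducted = 2 * KAPPA * (A / LEG) * DT        # heat conducted source -> sink

z = ALPHA**2 / (RHO * KAPPA)                    # figure of merit, 1/K

print(f"V = {v_seebeck*1e3:.0f} mV, I = {current*1e3:.0f} mA, pi = {peltier*1e3:.0f} mV")
print(f"P_generated = {p_generated*1e3:.0f} mW = ohmic {p_ohmic*1e3:.0f} mW + load {p_out*1e3:.0f} mW")
print(f"Conducted heat = {q_conducted:.2f} W, Z = {z:.2e} per K")
```

With the matched-load choice (RL equal to the internal leg resistance), which maximizes power delivered to the load, the generated power αIΔT splits exactly between ohmic loss in the legs and output to the load, consistent with the power balance described above.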
A gyroscope is a device for measuring or maintaining orientation, based on the principle of conservation of angular momentum. The key component, a relatively heavy spinning rotor, is mounted with nearly frictionless bearings inside two concentric lightweight rings (gimbals), each of which is also mounted with similar bearings inside the next outer ring, or inside the support frame in the case of the outer ring. The rotor and the two rings are mounted so that the plane of rotation for each is perpendicular to the plane of rotation of the other two.

The spinning rotor naturally resists changes to its orientation due to the angular momentum of the wheel. In physics, this phenomenon is also known as gyroscopic inertia or rigidity in space. Thanks to its unique support in the nested gimbals, the rotor is able to hold a nearly constant orientation even as the support frame shifts its orientation. The gyroscope's ability to hold its axis fixed in a certain orientation, or in some applications to precess about an axis, even as its supporting structure is moved into different positions, has permitted vast improvements to navigational systems and precision instruments.

Description and diagram

A conventional gyroscope comprises two concentric rings or gimbals plus a central rotor mounted in bearings on the inside of the inner gimbal, which in turn is mounted in bearings set in the outer gimbal, which is itself supported by bearings set into a support frame. The rotor, the inner gimbal, and the outer gimbal can then each move freely in its own plane determined by its level of support. The inner gimbal is mounted in the outer gimbal in such a way that the inner gimbal pivots about an axis in its own plane that is always perpendicular to the pivotal axis of the outer gimbal. Similarly, the bearings of the rotor's axis are mounted in the inner gimbal in a position that assures that the rotor's spin axis is always perpendicular to the axis of the inner gimbal.

The rotor wheel responds to a force applied about the input axis (connected with the inner gimbal) by a reaction force about the output axis (connected with the outer gimbal). The three axes are mutually perpendicular, and this cross-axis response is the essence of the gyroscopic effect. A gyroscope flywheel will roll or resist about the output axis depending upon whether the output gimbal is of a free or fixed configuration. Examples of free-output-gimbal devices are the attitude reference gyroscopes used to sense or measure the pitch, roll, and yaw attitude angles in a spacecraft or aircraft. The center of gravity of the rotor can be in a fixed position; the rotor then simultaneously spins about one axis and is capable of oscillating about the two other axes, and thus, except for its inherent resistance due to rotor spin, it is free to turn in any direction about the fixed point. Some gyroscopes have mechanical equivalents substituted for one or more of the elements; for example, the spinning rotor may be suspended in a fluid instead of being pivotally mounted in gimbals. A control moment gyroscope (CMG) is an example of a fixed-output-gimbal device that is used on spacecraft to hold or maintain a desired attitude angle or pointing direction using the gyroscopic resistance force. In some special cases, the outer gimbal (or its equivalent) may be omitted so that the rotor has only two degrees of freedom.
In other cases, the center of gravity of the rotor may be offset from the axis of oscillation, so that the center of gravity of the rotor and the center of suspension of the rotor do not coincide.

The gyroscope effect was discovered in 1817 by Johann Bohnenberger; the gyroscope was invented, and the effect named after it, in 1852 by Léon Foucault for an experiment involving the rotation of the Earth. Foucault's instrument to see (Greek skopeein) the Earth's rotation (Greek gyros, circle or rotation) was unsuccessful due to friction, which effectively limited each trial to 8 to 10 minutes, too short a time to observe significant movement. In the 1860s, however, electric motors made the concept feasible, leading to the first prototype gyrocompasses; the first functional marine gyrocompass was developed between 1905 and 1908 by the German inventor Hermann Anschütz-Kaempfe. The American Elmer Sperry followed with his own design in 1910, and other nations, in an age in which naval might was the most significant measure of military power, soon realized the military importance of the invention and created their own gyroscope industries. The Sperry Gyroscope Company quickly expanded to provide aircraft and naval stabilizers as well, and other gyroscope developers followed suit. In the first several decades of the twentieth century, other inventors attempted (unsuccessfully) to use gyroscopes as the basis for early black box navigational systems by creating a stable platform from which accurate acceleration measurements could be performed, in order to bypass the need for star sightings to calculate position. Similar principles were later employed in the development of inertial guidance systems for ballistic missiles.

A gyroscope exhibits a number of types of behavior, including precession and nutation. Gyroscopes can be used to construct gyrocompasses which complement or replace magnetic compasses (in ships, aircraft, spacecraft, and vehicles in general), to assist in stability (bicycles, the Hubble Space Telescope, ships, vehicles in general), or as part of an inertial guidance system. Gyroscopic effects are used in toys like yo-yos and Powerballs. Many other rotating devices, such as flywheels, behave gyroscopically, although the gyroscopic effect itself is not used.

The fundamental equation describing the behavior of the gyroscope is:

τ = dL/dt = d(Iω)/dt = Iα

where the vectors τ and L are, respectively, the torque on the gyroscope and its angular momentum, the scalar I is its moment of inertia, the vector ω is its angular velocity, and the vector α is its angular acceleration. It follows from this that a torque τ applied perpendicular to the axis of rotation, and therefore perpendicular to L, results in a motion perpendicular to both τ and L. This motion is called "precession". The angular velocity of precession ΩP is given by the cross product:

τ = ΩP × L

Precession can be demonstrated by placing a spinning gyroscope with its axis horizontal and supported loosely (frictionless toward precession) at one end. Instead of falling, as might be expected, the gyroscope appears to defy gravity by remaining with its axis horizontal even though the other end of the axis is unsupported; the free end of the axis slowly describes a circle in a horizontal plane. This turning is the precession. The effect is explained by the above equations. The torque on the gyroscope is supplied by a couple of forces: gravity acting downwards on the device's center of mass, and an equal force acting upwards to support one end of the device.
The motion resulting from this torque is not downwards, as might be intuitively expected, causing the device to fall, but perpendicular to both the gravitational torque (downwards) and the axis of rotation (outwards from the point of support), that is, in a forward horizontal direction, causing the device to rotate slowly about the supporting point. As the second equation shows, under a constant torque, whether due to gravity or not, the gyroscope's speed of precession is inversely proportional to its angular momentum. This means that, for instance, if friction causes the gyroscope's spin to slow down, the rate of precession increases. This continues until the device is unable to rotate fast enough to support its own weight, at which point it stops precessing and falls off its support, mostly because friction against the precession causes another precession that brings about the fall. By convention, these three vectors – torque, spin, and precession – are all oriented with respect to each other according to the right-hand rule. To easily ascertain the direction of the gyroscopic effect, simply remember that a rolling wheel tends, when entering a corner, to turn over to the inside.

A gyrostat is a variant of the gyroscope. The first gyrostat was designed by Lord Kelvin to illustrate the more complicated state of motion of a spinning body when free to wander about on a horizontal plane, like a top spun on the pavement, or a hoop or bicycle on the road. It consists essentially of a massive flywheel concealed in a solid casing. Its behavior on a table, or with various modes of suspension or support, serves to illustrate the curious reversal of the ordinary laws of static equilibrium due to the gyrostatic behavior of the interior invisible flywheel when rotated rapidly. Small, manually spun gyrostats are sold as children's toys.

Fiber optic gyroscope

A fiber optic gyroscope (FOG) is a device that uses the interference of light to detect mechanical rotation. The sensor is a coil of as much as 5 km of optical fiber. Two light beams travel along the fiber in opposite directions. Due to the Sagnac effect, the beam traveling against the rotation experiences a slightly shorter path than the other beam. The resulting phase shift affects how the beams interfere with each other when they are combined, and the intensity of the combined beam then depends on the rotation rate of the device.

A FOG provides extremely precise rotational rate information, in part because of its lack of cross-axis sensitivity to vibration, acceleration, and shock. Unlike the classic spinning-mass gyroscope, the FOG has virtually no moving parts and no inertial resistance to movement. The FOG typically shows a higher resolution than a ring laser gyroscope but also a higher drift and worse scale factor performance. It is used in surveying, stabilization, and inertial navigation tasks. FOGs are designed in both open-loop and closed-loop configurations.

Ring laser gyroscope

A ring laser gyroscope uses interference of laser light within a bulk optic ring to detect changes in orientation and spin. It is an application of the Sagnac interferometer. Ring laser gyros (RLGs) can be used as the stable elements (for one degree of freedom each) in an inertial reference system. The advantage of using an RLG is that there are no moving parts. Compared to the conventional spinning gyro, this means there is no friction, which in turn means there are no inherent drift terms.
Additionally, the entire unit is compact, lightweight, and virtually indestructible, meaning it can be used in aircraft. Unlike a mechanical gyroscope, the device does not resist changes to its orientation.

Physically, an RLG is composed of segments of transmission paths configured as either a square or a triangle and connected with mirrors. One of the mirrors is partially silvered, allowing light through to the detectors. A laser beam is launched into the transmission path in both directions, establishing a standing wave resonant with the length of the path. As the apparatus rotates, light in one branch travels a different distance than in the other branch, changing its phase and resonant frequency with respect to the light traveling in the other direction, so that the interference pattern beats at the detector. The angular position is measured by counting the interference fringes.

RLGs, while more accurate than mechanical gyros, suffer from an effect known as "lock-in" at very slow rotation rates. When the ring laser is rotating very slowly, the frequencies of the counter-rotating lasers become very close (within the laser bandwidth). At this low rotation, the nulls in the standing wave tend to "get stuck" on the mirrors, locking the frequency of each beam to the same value, and the interference fringes no longer move relative to the detector; in this scenario, the device will not accurately track its angular position over time. Dithering can compensate for lock-in: the entire apparatus is twisted and untwisted about its axis at a rate convenient to the mechanical resonance of the system, thus ensuring that the angular velocity of the system is usually far from the lock-in threshold. Typical rates are 400 Hz, with a peak dither velocity of 1 arc-second per second.

Primary applications include navigation systems on commercial airliners, ships, and spacecraft, where RLGs are often referred to as inertial reference systems. In these applications, the RLG has replaced its mechanical counterpart, the inertial guidance system. Examples of aerospace vehicles or weapons that use RLG systems:
- Trident missile (D5 Trident II)
- F-15E Strike Eagle
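To make the precession relation τ = ΩP × L introduced above concrete, the following sketch computes the precession rate of a rotor supported at one end of a horizontal axis. For this geometry the gravitational torque has magnitude mgr (mass m, support-to-center distance r) and the spin angular momentum is Iω, so ΩP = mgr/(Iω). The dimensions below are illustrative assumptions for a small toy gyroscope, not values from the text:

```python
import math

# Precession of a gyroscope supported at one end of its horizontal axis,
# using Omega_p = torque / angular_momentum = (m*g*r) / (I*omega).
# All dimensions are illustrative assumptions for a small toy gyroscope.

g = 9.81              # gravitational acceleration, m/s^2
m = 0.20              # rotor mass, kg (assumed)
disc_radius = 0.04    # rotor disc radius, m (assumed)
r = 0.05              # distance from support to center of mass, m (assumed)
spin_rpm = 6000.0     # rotor spin rate, revolutions per minute (assumed)

omega = spin_rpm * 2 * math.pi / 60     # spin angular velocity, rad/s
I = 0.5 * m * disc_radius**2            # moment of inertia of a solid disc
L = I * omega                           # spin angular momentum, kg*m^2/s
torque = m * g * r                      # gravitational torque about the support

omega_p = torque / L                    # precession angular velocity, rad/s
period = 2 * math.pi / omega_p          # time for one full precession circle

print(f"L = {L:.4f} kg*m^2/s, precession = {omega_p:.2f} rad/s "
      f"(one circle every {period:.1f} s)")
```

Halving spin_rpm doubles the computed precession rate, matching the statement above that the speed of precession is inversely proportional to the angular momentum.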
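The Sagnac phase shift that a FOG reads out can be estimated in the same spirit. The text gives no formula, so the sketch below uses the standard Sagnac scale factor for a fiber coil, Δφ = (2πLD/λc)·Ω, with fiber length L, coil diameter D, wavelength λ, speed of light c, and rotation rate Ω; the coil parameters are assumed, illustrative values:

```python
import math

# Sagnac phase shift for a fiber optic gyroscope coil, using the standard
# scale factor delta_phi = (2*pi*L*D / (lambda*c)) * Omega. This formula is
# not quoted in the text above; it is the commonly used FOG relation.
# Coil parameters below are illustrative assumptions.

C = 2.998e8              # speed of light, m/s
WAVELENGTH = 1550e-9     # source wavelength, m (typical telecom band)

fiber_length = 1000.0    # m of fiber in the coil (the text notes up to ~5 km)
coil_diameter = 0.10     # coil diameter, m (assumed)

def sagnac_phase(rotation_rate_rad_s: float) -> float:
    """Phase difference between the counter-propagating beams, in radians."""
    scale = 2 * math.pi * fiber_length * coil_diameter / (WAVELENGTH * C)
    return scale * rotation_rate_rad_s

if __name__ == "__main__":
    earth_rate = 7.292e-5    # Earth's rotation rate, rad/s
    print(f"Earth-rate phase shift: {sagnac_phase(earth_rate)*1e3:.1f} mrad")
```

Even at Earth's rotation rate this modest coil produces a phase shift of roughly a tenth of a radian, which illustrates why kilometre-scale fiber lengths make FOGs such sensitive rotation sensors.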
Prepositions and postpositions, together called adpositions (or broadly, in English, simply prepositions), are a class of words that express spatial or temporal relations (in, under, towards, before) or mark various semantic roles (of, for). A preposition or postposition typically combines with a noun or pronoun, or more generally a noun phrase, this being called its complement, or sometimes object. A preposition comes before its complement; a postposition comes after its complement.

English generally has prepositions rather than postpositions – words such as in, under and of precede their objects, as in in England, under the table, of Jane – although there are a small handful of exceptions, including ago and notwithstanding, as in "three days ago" and "financial limitations notwithstanding". Some languages that use a different word order have postpositions instead, or have both types. The phrase formed by a preposition or postposition together with its complement is called a prepositional phrase (or postpositional phrase, adpositional phrase, etc.); such phrases usually play an adverbial role in a sentence.

A less common type of adposition is the circumposition, which consists of two parts that appear on each side of the complement. Other terms sometimes used for particular types of adposition include ambiposition, inposition and interposition. Some linguists use the word preposition in place of adposition regardless of the applicable word order.

The word preposition comes from Latin prae ("before") and ponere ("to put"). This refers to the situation in Latin and Greek (and in English), where such words are placed before their complement (except sometimes in Ancient Greek), and are hence "pre-positioned". In some languages, including Urdu, Turkish, Hindi, Korean, and Japanese, the same kind of words typically come after their complement. To indicate this, they are called postpositions (using the prefix post-, from Latin post meaning "behind, after"). There are also some cases where the function is performed by two parts coming before and after the complement; this is called a circumposition (from Latin circum, "around"). Prepositions, postpositions and circumpositions are collectively known as adpositions (using the Latin prefix ad-, meaning "to"). However, some linguists prefer to use the well-known and longer-established term preposition in place of adposition, irrespective of position relative to the complement.

An adposition typically combines with exactly one complement, most often a noun phrase (or, in a different analysis, a determiner phrase). In English, this is generally a noun (or something functioning as a noun, e.g., a gerund), together with its modifiers such as adjectives, articles, etc. The complement is sometimes called the object of the adposition. The resulting phrase, formed by the adposition together with its complement, is called an adpositional phrase or prepositional phrase (PP) (or, for specificity, a postpositional or circumpositional phrase). An adposition establishes a grammatical relationship that links its complement to another word or phrase in the context. It also generally establishes a semantic relationship, which may be spatial (in, on, under, ...), temporal (after, during, ...), or of some other type (of, for, via, ...). The World Atlas of Language Structures treats a word as an adposition if it takes a noun phrase as complement and indicates the grammatical or semantic relationship of that phrase to the verb in the containing clause.
Some examples of the use of English prepositions are given below; in each case the prepositional phrase is marked with brackets, and it expresses a relation to a particular word, the word to which the phrase is an adjunct or complement:
- The cat slept [on the sofa]. (adjunct of slept)
- She argued [with him] [about politics]. (the same word, argued, has two prepositional phrases as adjuncts)
- We can leave [from here]. (the complement, here, has the form of an adverb, which has been nominalised to serve as a noun phrase; see Different forms of complement, below)

Prepositional phrases themselves are sometimes nominalized, as in "[Under the bed] is a good hiding place", where the phrase serves as the subject of the sentence.

An adposition may determine the grammatical case of its complement. In English, the complements of prepositions take the objective case where available (from him, not *from he). In Koine Greek, for example, certain prepositions always take their objects in a certain case (e.g., ἐν always takes its object in the dative), while other prepositions may take their object in one of two or more cases, depending on the meaning of the preposition (e.g., διά takes its object in the genitive or in the accusative, depending on the meaning). Some languages have cases that are used exclusively after prepositions (a prepositional case), or special forms of pronouns for use after prepositions (prepositional pronouns).

The functions of adpositions overlap with those of case markings (for example, the meaning of the English preposition of is expressed in many languages by a genitive case ending), but adpositions are classed as syntactic elements, while case markings are morphological. Adpositions themselves are usually non-inflecting ("invariant"): they do not have paradigms of form (such as tense, case, gender, etc.) the way that verbs, adjectives, and nouns do. There are exceptions, though, such as prepositions that have fused with a pronominal object to form inflected prepositions.

As noted above, adpositions are referred to by various terms, depending on their position relative to the complement. While the term preposition is sometimes used to denote any adposition, in its stricter meaning it refers only to one which precedes its complement. Examples of this, from English, have been given above; similar examples can be found in many European and other languages.

In certain grammatical constructions, the complement of a preposition may be absent or may be moved from its position directly following the preposition. This may be referred to as preposition stranding (see also below), as in "Who did you go with?" and "There's only one thing worse than being talked about." There are also some (mainly colloquial) expressions in which a preposition's complement may be omitted, such as "I'm going to the park. Do you want to come with?", and the French Il fait trop froid, je ne suis pas habillée pour ("It's too cold, I'm not dressed for [the situation]"). The words with and pour in these examples are generally still considered prepositions, because when they form a phrase with a complement (in more ordinary constructions) they must appear first.

A postposition follows its complement to form a postpositional phrase; English ago, as in "three days ago", is one example. Some adpositions can appear either before or after their complement: English notwithstanding, for instance, occurs both in "notwithstanding the objections" and in "the objections notwithstanding".
However, ambiposition may also be used to refer to a circumposition (see below), or to a word that appears to function as a preposition and postposition simultaneously, as in the Vedic Sanskrit construction (noun-1) ā (noun-2), meaning "from (noun-1) to (noun-2)".

Whether a language has primarily prepositions or postpositions is seen as an aspect of its typological classification, and tends to correlate with other properties related to head directionality. Since an adposition is regarded as the head of its phrase, prepositional phrases are head-initial (or right-branching), while postpositional phrases are head-final (or left-branching). There is a tendency for languages that feature postpositions also to have other head-final features, such as verbs that follow their objects, and for languages that feature prepositions to have other head-initial features, such as verbs that precede their objects. This is only a tendency, however; an example of a language that behaves differently is Latin, which employs mostly prepositions even though it typically places verbs after their objects.

A circumposition consists of two or more parts, positioned on both sides of the complement. Circumpositions are very common in Pashto and Kurdish, including Northern Kurdish (Kurmanji). Various constructions in other languages might also be analyzed as circumpositional, for example English from now on or Mandarin cóng bīngxīang lǐ ("from the refrigerator inside", i.e. "from inside the refrigerator"). Most such phrases, however, can be analyzed as having a different hierarchical structure (such as a prepositional phrase modifying a following adverb). The Chinese example could be analyzed as a prepositional phrase headed by cóng ("from"), taking the locative noun phrase bīngxīang lǐ ("refrigerator inside") as its complement.

An inposition is a rare type of adposition that appears between parts of a complex complement. For example, in the native Californian Timbisha language, the phrase "from a mean cold" can be translated using the word order "cold from mean" – the inposition follows the noun but precedes any following modifiers that form part of the same noun phrase. The term interposition has been used for adpositions in structures such as word for word, French coup sur coup ("one after another, repeatedly"), and Russian друг с другом ("one with the other"). This is not a case of an adposition appearing inside its complement, as the two nouns do not form a single phrase (there is no phrase *word word, for example); such uses have more of a coordinating character.

Preposition stranding is a syntactic construct in which a preposition occurs somewhere other than immediately before its complement. For example, in the English sentence "What did you sit on?" the preposition on has what as its complement, but what is moved to the start of the sentence, because it is an interrogative word. This sentence is much more common and natural than the equivalent sentence without stranding, "On what did you sit?" Preposition stranding is commonly found in English, as well as in North Germanic languages such as Swedish. Its existence in German and Dutch is debated. Preposition stranding is also found in some Niger–Congo languages such as Vata and Gbadi, and in some North American varieties of French.

Some prescriptive English grammars teach that prepositions cannot end a sentence, although there is no rule prohibiting that use. The rule arose during the rise of classicism, when it was applied to English in imitation of classical languages such as Latin.
Otto Jespersen, in his Essentials of English Grammar (first published 1933), commented on this definition-derived rule: "...nor need a preposition (Latin: praepositio) stand before the word it governs (go the fools among (Sh[akespeare]); What are you laughing at?). You might just as well believe that all blackguards are black or that turkeys come from Turkey; many names have either been chosen unfortunately at first or have changed their meanings in course of time." Simple adpositions consist of a single word (on, in, for, towards, etc.). Complex adpositions consist of a group of words that act as one unit. Examples of complex prepositions in English include in spite of, with respect to, except for, by dint of, and next to. The distinction between simple and complex adpositions is not clear-cut. Many simple adpositions are derived from complex forms (e.g., with + in → within, by + side → beside) through grammaticalisation. This change takes time, and during the transitional stages the adposition acts in some ways like a single word, and in other ways like a multi-word unit. For example, current German orthographic conventions recognize the indeterminate status of certain prepositions, allowing two spellings: anstelle/an Stelle ("instead of"), aufgrund/auf Grund ("because of"), mithilfe/mit Hilfe ("by means of"), zugunsten/zu Gunsten ("in favor of"), zuungunsten/zu Ungunsten ("to the disadvantage of"), zulasten/zu Lasten ("at the expense of"). The distinction between complex adpositions and free combinations of words is not a black-and-white issue: complex adpositions (in English, "prepositional idioms") can be more fossilized or less fossilized. In English, this applies to a number of structures of the form "preposition + (article) + noun + preposition", such as in front of and for the sake of; a number of characteristics can indicate that a given combination is "frozen" enough to be considered a complex preposition. In descriptions of some languages, prepositions are divided into proper (or "essential") and improper (or "accidental"). A preposition is called improper if it is some other part of speech being used in the same way as a preposition. Examples of simple and complex prepositions that have been so classified include prima di ("before") and davanti (a) ("in front of") in Italian, and ergo ("on account of") and causa ("for the sake of") in Latin. In reference to Ancient Greek, however, an improper preposition is one that cannot also serve as a prefix to a verb. In other cases, the complement of an adposition may have the form of an adjective or adjective phrase, or an adverbial. This may be regarded as a complement representing a different syntactic category, or simply as an atypical form of noun phrase (see nominalization). In a phrase such as from under the bed, the complement of the preposition from is in fact another prepositional phrase. The resulting sequence of two prepositions (from under) may be regarded as a complex preposition; in some languages such a sequence may be represented by a single word, as in Russian из-под iz-pod ("from under"). Some adpositions appear to combine with two complements, as in "With Sammy president, we can all come out of hiding" and "With Sammy in prison, we can all come out of hiding", where with seems to take both Sammy and the following predicate as complements. It is more commonly assumed, however, that Sammy and the following predicate form a "small clause", which then becomes the single complement of the preposition. (In the first example, a word such as as may be considered elided; if present, it would clarify the grammatical relationship.)
Adpositions can be used to express a wide range of semantic relations between their complement and the rest of the context. The relations expressed may be spatial (denoting location or direction), temporal (denoting position in time), or relations expressing comparison, content, agent, instrument, means, manner, cause, purpose, reference, etc. Most common adpositions are highly polysemous (they have a variety of different meanings). In many cases a primary, spatial meaning becomes extended to non-spatial uses by metaphorical or other processes. Because of the variety of meanings, a single adposition often has many possible equivalents in another language, depending on the exact context in which it is used; this can cause significant difficulties in foreign language learning. Usage can also vary between dialects of the same language (for example, American English has on the weekend, where British English uses at the weekend). In some contexts (as in the case of some phrasal verbs) the choice of adposition may be determined by another element in the construction or be fixed by the construction as a whole. Here the adposition may have little independent semantic content of its own, and there may be no clear reason why the particular adposition is used rather than another; examples of such expressions are listen to, insist on, and proud of. Prepositions sometimes mark roles that may be considered largely grammatical, such as the agent of a passive construction (bitten by a dog) or the possessor (the house of my parents). Spatial meanings of adpositions may be either directional or static. A directional meaning usually involves motion in a particular direction ("Kay went to the store"), the direction in which something leads or points ("A path into the woods"), or the extent of something ("The fog stretched from London to Paris"). A static meaning indicates only a location ("at the store", "behind the chair", "on the moon"). Some prepositions can have both uses: "he sat in the water" (static); "he jumped in the water" (probably directional). In some languages, the case of the complement varies depending on the meaning, as with several prepositions in German, such as in: in dem Haus ("in the house": static, with the dative) versus in das Haus ("into the house": directional, with the accusative). In English and many other languages, prepositional phrases with static meaning are commonly used as predicative expressions after a copula ("Bob is at the store"); this may happen with some directional prepositions as well ("Bob is from Australia"), but this is less common. Directional prepositional phrases combine mostly with verbs that indicate movement ("Jay is going into her bedroom", but not *"Jay is lying down into her bedroom"). Directional meanings can be further divided into telic and atelic. Telic prepositional phrases imply movement all the way to the endpoint ("she ran to the fence"), while atelic ones do not ("she ran towards the fence"). Static meanings can be divided into projective and non-projective, where projective meanings are those whose understanding requires knowledge of the perspective or point of view. For example, the meaning of "behind the rock" is likely to depend on the position of the speaker (projective), whereas the meaning of "on the desk" is not (non-projective). Sometimes the interpretation is ambiguous, as in "behind the house", which may mean either at the natural back of the house, or on the opposite side of the house from the speaker. There are often similarities in form between adpositions and adverbs. Some adverbs are derived from the fusion of a preposition and its complement (such as downstairs, from down (the) stairs, and underground, from under (the) ground).
Some words can function both as adverbs and as prepositions, such as inside, aboard, underneath (for instance, one can say "go inside", with adverbial use, or "go inside the house", with prepositional use). Such cases are analogous to verbs that can be used either transitively or intransitively, and the adverbial forms might therefore be analyzed as "intransitive prepositions". This analysis could also be extended to other adverbs, such as here, there, afterwards, etc., even though these never take complements. Many English phrasal verbs contain particles that are used adverbially, even though they mostly have the form of a preposition (such words may be called prepositional adverbs). Examples are on in carry on, get on, etc., and over in take over, fall over, and so on. The equivalents in Dutch and German are separable prefixes, which also often have the same form as prepositions: for example, Dutch aanbieden and German anbieten (both meaning "to offer") contain the separable prefix aan/an, which is also a preposition meaning "on" or "to". Some words, such as before, after, and since, can be used both as adpositions and as subordinating conjunctions. It would be possible to analyze such conjunctions (or even other subordinating conjunctions) as prepositions that take an entire clause as a complement. In some languages, including a number of Chinese varieties, many of the words that serve as prepositions can also be used as verbs. For instance, in Standard Chinese, 到 dào can be used in either a prepositional or a verbal sense: prepositionally in wǒ dào Běijīng qù ("I am going to Beijing"), and verbally in huǒchē dào le ("the train has arrived"). Because of this overlap, and the fact that a sequence of prepositional phrase and verb phrase often resembles a serial verb construction, Chinese prepositions (and those of other languages with similar grammatical structures) are often referred to as coverbs. As noted in previous sections, Chinese can also be said to have postpositions, although these can be analyzed as nominal (noun) elements. For more information, see the article on Chinese grammar, particularly the sections on coverbs and locative phrases. Some grammatical case markings have a similar function to adpositions; a case affix in one language may be equivalent in meaning to a preposition or postposition in another. For example, in English the agent of a passive construction is marked by the preposition by, while in Russian it is marked by use of the instrumental case. Sometimes such equivalences exist within a single language; for example, the genitive case in German is often interchangeable with a phrase using the preposition von (just as in English, the preposition of is often interchangeable with the possessive suffix 's). Adpositions combine syntactically with their complement, whereas case markings combine with a noun morphologically. In some instances it may not be clear which applies, though there are several possible means of making such a distinction. Even so, a clear distinction cannot always be made. For example, the post-nominal elements in Japanese and Korean are sometimes called case particles and sometimes postpositions. Sometimes they are analysed as two different groups because they have different characteristics (e.g., the ability to combine with focus particles), but in such an analysis, it is unclear which words should fall into which group. In Finnish, for example, case markings form a word with their hosts (as shown by vowel harmony, other word-internal effects, and the agreement of adjectives), while postpositions are independent words.
Adpositions are often used in conjunction with case affixes – in languages that have case, a given adposition usually takes a complement in a particular case, and sometimes (as has been seen above) the choice of case helps specify the meaning of the adposition.
In the last post, we discussed some image processing techniques in OpenCV, such as denoising, edge detection, and histograms. Here I explain color filtering, contour creation, and drawing geometric shapes on an image. You can access the previous posts from the following links:

Table of Contents:
- Color filtering
- Contour creation and finding its coordinates
- Drawing geometric shapes

Image pre-processing using color filtering

Color filtering means identifying and separating out a region or object in an image based on its color. It is a kind of segmentation technique that uses a threshold function based on color intensity.

Use of color filtering in image processing

To understand color filtering, we first need to understand color spaces. A color space defines how the colors that make up an image are represented. The two most commonly used color spaces are:
- RGB/BGR: represents an image as a combination of three color channels, i.e. Red, Green, and Blue (OpenCV loads images in BGR channel order).
- HSV: represents an image in terms of Hue, Saturation, and Value.

Using the color filtering process we can extract an object or region of a particular color. Let's check how:

```python
import cv2
import numpy as np
from matplotlib import pyplot as plt

img = cv2.imread('zebra.jpg')
# cv2.imread returns BGR, so convert from BGR (not RGB) to HSV
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)

# Threshold of white in HSV space
lower_white = np.array([0, 0, 230])
upper_white = np.array([180, 25, 255])
mask = cv2.inRange(hsv, lower_white, upper_white)
result = cv2.bitwise_and(img, img, mask=mask)
plt.imshow(result)
```

A contour is basically an outline that defines the shape of an object. Contour creation is usually used to differentiate an object from the rest of the image, and it is used in image segmentation and in creating bounding boxes for object detection. Let's understand how to create contours using OpenCV:

```python
image = cv2.imread('zebra.jpg')
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
edge = cv2.Canny(gray, 50, 400)
contours, hierarchy = cv2.findContours(edge, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
print("Number of Contours found = " + str(len(contours)))
contour = cv2.drawContours(image, contours, -1, (0, 255, 0), 1)
plt.figure(figsize=(20, 10))
plt.subplot(121), plt.imshow(edge), plt.title('Output1')
plt.subplot(122), plt.imshow(contour), plt.title('Output2')
plt.show()
```

Drawing geometric shapes covers:
- Drawing a line
- Drawing an arrowed line (see the sketch after this section)
- Drawing a circle
- Drawing an ellipse (see the sketch after this section)
- Drawing a rectangle

Drawing can be used to create a stroke on an image.

Drawing a line

```python
img1 = cv2.imread('zebra.jpg')
img1 = cv2.cvtColor(img1, cv2.COLOR_BGR2RGB)
start_point = (150, 100)
end_point = (250, 200)
color = (16, 180, 25)  # any value can be given according to requirement
thickness = 5
image_line = cv2.line(img1, start_point, end_point, color, thickness)
plt.imshow(image_line)
```

Drawing a circle

```python
image = cv2.imread('zebra.jpg')
image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
center_coordinates = (120, 50)
radius = 50
color = (255, 0, 0)
thickness = 2
image_circle = cv2.circle(image, center_coordinates, radius, color, thickness)
plt.imshow(image_circle)
```

Drawing a rectangle

```python
image = cv2.imread('zebra.jpg')
image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
start_point = (75, 10)
end_point = (275, 200)
color = (100, 100, 40)
thickness = 2
image_rect = cv2.rectangle(image, start_point, end_point, color, thickness)
plt.imshow(image_rect)
```

You can get the code from my GitHub link:
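The post's original code for the arrowed line and the ellipse did not survive, so here is a minimal sketch of those two sections using OpenCV's cv2.arrowedLine and cv2.ellipse. The coordinate, axis, and color values are illustrative placeholders, not the post's originals:

```python
import cv2
from matplotlib import pyplot as plt

img = cv2.imread('zebra.jpg')
img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)

# Drawing an arrowed line: same arguments as cv2.line, with an arrow tip at end_point
image_arrow = cv2.arrowedLine(img, (50, 50), (200, 150), (0, 0, 255), 3)

# Drawing an ellipse: center, axes (half-width, half-height), rotation angle,
# then the start/end angles of the arc (0 to 360 draws the full ellipse), color, thickness
image_ellipse = cv2.ellipse(image_arrow, (150, 120), (80, 40), 0, 0, 360, (255, 255, 0), 2)
plt.imshow(image_ellipse)
plt.show()
```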
An Excel Tutorial

An Introduction to Excel's Normal Distribution Functions

Excel provides several worksheet functions for working with normal distributions, or "bell-shaped curves." This introduction to Excel's normal distribution functions offers help for the statistically challenged.

by Charley Kyd, MBA
Microsoft Excel MVP
The Father of Spreadsheet Dashboards

When a visitor asked me how to generate a random number from a normal distribution, she set me to thinking about doing statistics in Excel.

Many of us were introduced to statistics in school and then forgot what little we learned...often within seconds of the final exam. Also, when we took statistics, many of us weren't taught how to use it with Excel. This is unfortunate, because in business it's often useful to have some grasp of that topic.

For all these reasons, I thought it would be worthwhile to briefly explore normal -- or "bell-shaped" -- curves in Excel. This is a commonly used area of statistics, and one for which Excel provides several useful functions.

One interesting thing about the normal curve is that it occurs frequently in many different settings:
- The height of each gender in a population is normally distributed.
- The measure of LDL cholesterol is normally distributed.
- The width of stripes on a zebra is said to be normally distributed.
- Most measurement errors are assumed to be normally distributed.
- Many Six-Sigma calculations assume normal distribution.

As a final example, here's a surprising occurrence of the normal curve: Take any population, whether it's normally distributed or not. Randomly select at least 30 members from that population, measure them for some characteristic, and then find the average of those measures. That average is one data point. Return the samples, select another random sample of the same number, and find the average of their measures. Do the same again and again. The Central Limit Theorem says that those averages tend to have a normal distribution.

Normal distributions are all around us. Therefore, as painlessly as possible, let's take a closer look at how we work with them using Excel. First, we need to get some brief definitions out of the way so that we can start to describe data using Excel functions.

From cholesterol to zebra stripes, the normal probability distribution describes the proportion of a population having a specific range of values for an attribute. Most members have amounts that are near the average; some have amounts that are farther away from the average; and some have amounts extremely distant from the average. For example, a population could be all the stripes on all the zebras in the world. The normal curve would show the proportion of stripes that have various widths.

The standard deviation of a sample is a measure of the spread of the sample from its mean. (We're talking about many items in a "sample," of course, not just a single item.) In a normal distribution, about 68% of a sample is within one standard deviation of the mean. About 95% is within two standard deviations. And about 99.7% is within three standard deviations. The numbers in the figure above mark standard deviations from the mean.

The z value is the distance between a value and the mean in terms of standard deviations. In the figure above, each number is a z value.

New Excel's Improved Statistics Functions

Beginning with Excel 2010, Microsoft updated many of their statistics functions. To provide backward compatibility, they changed the names of their updated functions by adding periods within the name.
I show both versions in this article, but Microsoft recommends that you use the new version if you use New Excel.

Calculating or Estimating the Standard Deviation

Several of the following functions require a value for the standard deviation. There are at least two ways to find that value.

First, if you have a sample of the data, you can estimate the standard deviation from the sample using one of these formulas:

=STDEV.S(range)
=STDEV(range)

On the other hand, if you're working with the entire population, you calculate the standard deviation using:

=STDEV.P(range)
=STDEVP(range)

However, if you're working with rough estimates, you must take a different approach, because you don't have actual data to support your estimates. In this case, first calculate the range. This is the smallest likely value subtracted from the largest likely value. By likely, let's use the assumption that all possible values will be within that range about 95% of the time. Remember that about 95% of a sample is within two standard deviations on each side of the mean. (This is a total of four standard deviations, of course.) Therefore, if we divide the range by four we should have the approximate standard deviation.

Merely dividing the range by four might seem to be a slipshod approach. But consider the way this calculation often is used. Suppose you're forecasting sales for next year. You think sales will be about 1,000, but the number could be as high as 1,200 and as low as 800. With that information, you can put a normal curve around your estimated sales and begin to generate a variety of forecasts for profits and cash flow. To emphasize, these numbers are only your best estimates. Therefore, using an estimated standard deviation doesn't seem quite as sloppy as it otherwise might. Based on these estimates, your mean sales will be about 1,000 and your standard deviation will be about (1200 - 800) / 4 = 100. With this information, you can use the following functions to perform many of the calculations you will need in your work.

NORM.DIST(x, mean, standard_dev, cumulative)
NORMDIST(x, mean, standard_dev, cumulative)

gives the probability that a number falls at or below a given value of a normal distribution.
- x -- The value you want to test.
- mean -- The average value of the distribution.
- standard_dev -- The standard deviation of the distribution.
- cumulative -- If FALSE or zero, returns the probability density at x; if TRUE or non-zero, returns the probability that the value will be less than or equal to x.

Example: The distribution of heights of American women aged 18 to 24 is approximately normally distributed with a mean of 65.5 inches (166.37 cm) and a standard deviation of 2.5 inches (6.35 cm). What percentage of these women is taller than 5' 8", that is, 68 inches (172.72 cm)? The percentage of women less than or equal to 68 inches is:

=NORM.DIST(68, 65.5, 2.5, TRUE) = 84.13%
=NORMDIST(68, 65.5, 2.5, TRUE) = 84.13%

Therefore, the percentage of women taller than 68 inches is 1 - 84.13%, or approximately 15.87%. This value is represented by the shaded area in the chart above.

NORM.S.DIST translates the number of standard deviations (z) into cumulative probabilities.
- z -- The value for which you want the distribution.
- cumulative -- A logical value that determines the form of the function. If cumulative is TRUE, NORM.S.DIST returns the cumulative distribution function; if FALSE, it returns the height of the standard normal curve at z. (This is often labeled the probability mass function, but strictly speaking that term applies only to discrete variables; for a continuous distribution such as the normal, it is a probability density.)
=NORM.S.DIST(1, TRUE) = 84.13%
=NORMSDIST(1) = 84.13%
=NORM.S.DIST(-1, TRUE) = 15.87%
=NORMSDIST(-1) = 15.87%

Therefore, the probability of a value being within one standard deviation of the mean is the difference between these values, or 68.27%. This range is represented by the shaded area of the chart.

NORM.INV(probability, mean, standard_dev)
NORMINV(probability, mean, standard_dev)

is the inverse of the NORM.DIST function. It calculates the x variable given a probability. To illustrate, consider the heights of the American women used in the illustration of the NORM.DIST function above. How tall would a woman need to be if she wanted to be among the tallest 75% of American women? Using NORM.INV, she would learn that she needs to be at least 63.81 inches tall, as shown by this formula:

=NORM.INV(0.25, 65.5, 2.5) = 63.81 inches
=NORMINV(0.25, 65.5, 2.5) = 63.81 inches

The figure shows the area represented by the 25% of the American women who are shorter than this height.

NORM.S.INV(probability)
NORMSINV(probability)

is the inverse of the NORM.S.DIST function. Given the probability that a variable is within a certain distance of the mean, it finds the z value. To illustrate, suppose you care about the half of the sample that's closest to the mean. That is, you want the z values that mark the boundary that is 25% less than the mean and 25% more than the mean. The following two formulas provide those boundaries of -.674 and +.674, as illustrated by the figure:

=NORM.S.INV(0.25) = -0.674
=NORM.S.INV(0.75) = +0.674

STANDARDIZE(x, mean, standard_dev)

returns the z value for a specified value, mean, and standard deviation. To illustrate, in the NORM.INV example above, we found that a woman would need to be at least 63.81 inches tall to avoid the bottom 25% of the population, by height. The STANDARDIZE function tells us that the z value for 63.81 inches is:

=STANDARDIZE(63.81, 65.5, 2.5) = -0.6745

We can check this number by using the NORM.S.DIST function:

=NORM.S.DIST(-0.6745, TRUE) = 25%
=NORMSDIST(-0.6745) = 25%

That is, a z value of -.6745 has a probability of 25%.

Two Ways to Calculate a Random Number from a Normal Distribution

Remember that the NORM.INV function returns a value given a probability:

NORM.INV(probability, mean, standard_dev)
NORMINV(probability, mean, standard_dev)

Also, remember that the RAND() function returns a random number between 0 and 1. That is, RAND() generates random probabilities. Therefore, you can use the NORM.INV function to calculate a random number from a normal distribution, using this formula:

=NORM.INV(RAND(), mean, standard_dev)
=NORMINV(RAND(), mean, standard_dev)

However, if you use Classic Excel with a large number of standard deviations, you might want to use a different approach. Nearly ten years ago, Jerry W. Lewis -- a former Excel MVP and a professional statistician -- offered a stern warning. Prior to Excel XP (2002), he wrote, NORMINV "produced a very un-normal fraction of values around 6 million standard deviations from the mean."

Instead, Jerry recommended the Box-Muller method, described below. This method uses a formula of the following form (the standard Box-Muller transform, written in Excel terms) to calculate a random number from a normal distribution:

=mean + standard_dev * SQRT(-2 * LN(RAND())) * COS(2 * PI() * RAND())

The Box-Muller method is mathematically exact, Jerry writes, if implemented with a perfect uniform random number generator and infinite precision.

A Note About the Charts

I created all of the figures for this article in Excel. If you would like to know how, see How to Create Normal Curves with Shaded Areas in New Excel. I must have at least 15 statistics books gathering dust on bookshelves in the basement.
Even so, these two books offered clear explanations that you might find useful:

Statistical Analysis with Excel For Dummies, by Joseph Schmuller

Excel Data Analysis For Dummies, by Stephen L. Nelson, MBA
Collisionless Shock Waves in Interstellar Matter
By Roald Z. Sagdeev and Charles F. Kennel
Reprinted without permission from Scientific American, April 1991

Shock waves resonate through the solar system, much like the reverberating boom from a supersonic jet. In the latter case, the disturbance is caused by an aerodynamic shock, an abrupt change in gas properties that propagates faster than the speed of sound. It had long been recognized that in a neutral gas, such as the earth's atmosphere, particles must collide if shocks are to form. Beginning in the 1950s, we and our colleagues theorized that, contrary to the expectations of many scientists, similar shock waves could form even in the near vacuum of outer space, where particle collisions are extremely rare. If so, shocks could play a significant role in shaping space environments. "Collisionless" shocks cannot occur naturally on the earth, because nearly all matter here consists of electrically neutral atoms and molecules. In space, however, high temperatures and ultraviolet radiation from hot stars decompose atoms into their constituent nuclei and electrons, producing a soup of electrically charged particles known as a plasma. Plasma physicists proposed that the collective electrical and magnetic properties of plasmas could produce interactions that take the place of collisions and permit shocks to form. In 1964 the theoretical work found its first experimental confirmation. Norman F. Ness and his colleagues at the Goddard Space Flight Center, using data collected from the IMP-1 spacecraft, detected clear signs that a collisionless shock exists where the solar wind encounters the earth's magnetic field. (The solar wind is the continuous flow of charged particles outward from the sun.) More recent research has demonstrated that collisionless shocks appear in a dazzling array of astronomical settings. For example, shocks have been found in the solar wind upstream (sunward) of all the planets and comets that have been visited by spacecraft. Violent flares on the sun generate shocks that propagate to the far reaches of the solar system; tremendous galactic outbursts create disruptions in the intergalactic medium that are trillions of times larger. In addition, many astrophysicists think that shocks from supernova explosions in our galaxy accelerate cosmic rays, a class of extraordinarily energetic elementary particles and atomic nuclei that rain down on the earth from all directions. The study of plasmas began in the 19th century, when Michael Faraday investigated electrical discharges through gases. Modern plasma research dates from 1957 and 1958. During those years, Soviet Sputnik and American Explorer spacecraft discovered that space near the earth is filled with plasma. At the same time, until-then-secret research on controlled thermonuclear fusion conducted by the U.S., the Soviet Union and Europe was revealed at the Atoms for Peace Conference in Geneva, greatly increasing the freely available information on plasmas. Fusion research focuses on producing extremely hot plasmas and confining them in magnetic "bottles," to create the conditions necessary for energy-producing nuclear reactions to occur. In 1957, while searching for a method to heat fusion plasmas, one of us (Sagdeev) realized that an instantaneous magnetic compression could propagate through a collisionless plasma, much as a shock moves through an ordinary fluid. Magnetic fields that thread through plasmas make them behave somewhat like such a fluid.
A magnetic field exerts a force (the Lorentz force) on a moving electrically charged particle. The field can be thought of as a series of magnetic lines through the plasma, like the field lines around a bar magnet that can be made visible with iron filings. The Lorentz force always acts perpendicular both to the direction of the magnetic field line and to the direction in which a particle is moving. If the particle moves perpendicular to the field, the force acts like a rubber band, pulling the particle back and constraining it to move in small circles about the magnetic field line. The particle can, however, move freely in the direction of the magnetic field line. The combination of the free motion along and constrained, circular rotation across the magnetic field shapes the particle's trajectory into a helix that winds around a magnetic field line. The Lorentz force makes it difficult to disperse the plasma in the direction perpendicular to the magnetic field. The maximum distance over which particles can move away from the field, called the Larmor radius, is inversely proportional to the field strength. In the weak interplanetary magnetic field, the Larmor radius amounts to several kilometers for electrons and several hundred kilometers for more massive ions. These distances may seem large, but they are tiny compared with the size of the region where the solar wind encounters the earth's magnetic field. The shock that forms there, called a bow shock, has the same parabolic shape as the waves that pile up ahead of a speedboat. It stretches more than 100,000 kilometers across. When the scale is larger than the Larmor radius for ions, the collective motion of plasma particles across the magnetic field actually drags the field lines along with it. The magnetic field thus becomes "frozen" into the plasma. In short, a magnetic field endows collisionless plasmas with elastic properties analogous to those of a dense gas, and so a plasma wave crossing a magnetic field behaves somewhat like an ordinary sound wave. The theoretical analysis of collisionless shocks therefore started by following the ideas developed from earlier research on shocks in ordinary gases.
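For reference, the two quantities introduced above – the Lorentz force and the Larmor radius – can be written compactly. These are the standard textbook definitions, not formulas from the original article:

$$\mathbf{F} = q\,\mathbf{v} \times \mathbf{B}, \qquad r_L = \frac{m v_\perp}{|q|\,B},$$

where $q$ and $m$ are the particle's charge and mass, $v_\perp$ is its velocity component perpendicular to the magnetic field $\mathbf{B}$, and $r_L$ is the Larmor radius. The inverse dependence on $B$ is why the Larmor radius shrinks as the field strengthens.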
Molecular viscosity becomes highly efficient when the thickness of the wave front shrinks to the average distance that a particle can travel before it collides with another, a distance known as the collision mean free path. (The mean free path of a molecule in air is about one ten-thousandth of a centimeter long.) At this thickness, steepening and viscosity balance each other, and a steady shock wave forms. The resulting shock represents an almost steplike change in gas velocity, density and pressure. Before physicists knew of a mechanism that could replace molecular viscosity in plasmas, it made little sense for them to talk of collisionless shocks. Consequently, the topic lay fairly dormant for many years. Then, in the late 1950s, one of us (Sagdeev) and, independently, Arthur R. Kantrowitz and Harry E. Petschek, then at the Avco-Everett Research Laboratory near Boston, suggested that a similar sort of momentum relay race could take place in a tenuous plasma. They theorized that in a plasma, waves rather than individual particles pass along the baton. The plasma relay race depends on the fact that the speed of a plasma wave changes with wavelength, an effect called dispersion. Indeed, whereas in ordinary gases the speed of a sound wave is practically independent of wavelength, in collisionless plasma a wave is very dispersive. That is, its speed may either increase or decrease as its wavelength shortens, depending on the angle between the direction of propagation of the wave and the orientation of the magnetic field. According to Fourier's theorem, a fundamental theorem of mathematics, any wave profile consists of many superimposed waves, or harmonics, of different wavelengths. (By analogy, white light is composed of many distinct colors, each of a different wavelength.) If the wave profile steepens, it excites harmonics of ever shorter wavelength. For wave propagation that is not exactly perpendicular to the magnetic field, dispersion causes shorter-wavelength harmonics to travel faster than the longer-wavelength ones (negative dispersion). The effects of dispersion become significant when a steepening shock front becomes about as thin as the Larmor radius for ions. At this point, the shorter-wavelength harmonics race ahead of the front into the undisturbed plasma upstream. These harmonics carry along the momentum, like the fast molecules in a sound wave. The competing actions of steepening and dispersion yield a series of wave pulses that propagate in the direction of the shock. As a result, the front acquires the shape of a "wave train." The weakest (smaller-amplitude) waves announce the arrival of the train, and successively stronger oscillations build up until the full shock transition arrives. The length of the train (in other words, the thickness of the shock front) depends on how rapidly the energy of the waves is dissipated into heat. For waves propagating exactly perpendicular to the magnetic field, dispersion causes the harmonic wave speed to decrease at shorter wavelengths. Short-wavelength harmonics now trail behind the shock front, and so they cannot affect steepening of the overall wave. In this case, the shock passes the momentum baton to a series of compressional pulses called solitons. Solitons in perpendicular shocks are approximately the thickness of an electron's Larmor radius, and they are created when the wave profile steepens to that scale.
The steepening front radiates an ordered sequence of solitons, led by the largest (highest-amplitude) one and trailed by successively smaller ones that ultimately blend into the smooth state behind the shock. The length of the soliton train depends on how fast the soliton energy is dissipated into heat. Waves on the surface of shallow water behave very much like dispersive waves in collisionless plasma. The theory of shallow water waves was developed in the late 19th century, culminating in the classic work of Diederik J. Korteweg and G. de Vries that first described the solitons that occasionally propagate down Dutch canals. The seemingly recondite analogy between shallow water solitons and plasma solitons expresses a general physical truth: solitons can form whenever wave steepening and dispersion compete. One implication of this fact is that solitons form even in shocks that do not propagate exactly perpendicular to the magnetic field. The wave pulses mentioned earlier can also be thought of as solitons, the difference being that these solitons are rarefactive (low density) rather than compressive. In this case, short-wavelength harmonics travel relatively slowly (positive dispersion), and the greater the amplitude of the rarefactive soliton, the more slowly it propagates. As a result, the wave train terminates with the strongest soliton. Surface tension in water creates small waves that have positive dispersion and rarefactive solitons. The physics of water waves therefore provides an analogy to both types of dispersion found in collisionless plasmas. The elegant theory of solitons is an impressive achievement of modern mathematical physics. In 1967 Martin Kruskal and his colleagues at Princeton University proved that any wave profile in a dispersive medium that can support steepening evolves into a sequence of solitons. By relating soliton theory to the problem of elementary particle collisions, which has been studied in depth in quantum physics since the 1920s, they showed that solitons preserve their identities when they collide, just as particles do. The understanding of dispersive shocks remains incomplete without a knowledge of how to dissipate the energy of waves or solitons into heat. If not for the effect of dissipation, the train of wave structures making up the shock front would be infinitely long. In effect, the fundamental question of how collisionless shock waves transport energy and momentum has reappeared, but in a new guise. In 1945 the great Soviet physicist Lev D. Landau discovered a dissipation mechanism that requires no collisions between particles. Among the randomly moving particles in a plasma, a few happen to travel at a velocity that matches the velocity of the plasma wave. These particles are said to be in resonance with the wave. An intense exchange of energy can take place between a wave and the particles resonant with it. In the early 1970s one of us (Sagdeev) and Vitaly Shapiro, also at the Institute of Space Research in Moscow, showed that Landau's mechanism damps solitons by accelerating resonant ions. Consider, for example, a train of compressive solitons propagating perpendicular to the magnetic field. Each soliton generates an electric field parallel to its direction of motion. Ions traveling close to the resonant velocity move slowly compared with the solitons, and the soliton electric field is able to stop and reverse the motion of these ions. The soliton loses part of its energy to the ions resonant with it during the interaction.
The process does not end here, because the magnetic Lorentz force curves the path of the reflected ion so that it returns again and again to the same soliton. Each encounter adds to the energy of the particle. The Lorentz force, which grows stronger as the particle velocity increases, eventually throws the ion over the top of the first soliton. The acceleration continues as the ion encounters the remaining solitons in the wave train. The resonant ions gain energy much as surfers gain speed by riding ocean waves. This analogy inspired John M. Dawson of the University of California at Los Angeles to design a new kind of charged particle accelerator, which he dubbed the surfatron. The heating of ions by solitons can form a shock if the number of ions in resonance is great enough. Such is the case if the ions are hot. If not, the solitons find another way to dissipate energy: they themselves generate microscopic plasma waves that heat the plasma. Plasma electrons flow over ions, thereby creating the electric current responsible for the characteristic soliton magnetic field profile. If the ions are cold, the electrons can easily move at supersonic velocities relative to the ions, in which case the electrons amplify extremely small scale electric field oscillations called ion acoustic waves. These waves, which do not affect the magnetic field, grow in an avalanchelike fashion. The plasma particles collide not with one another but with these ion acoustic waves. After the waves develop, the plasma enters a microturbulent state. In 1968 Robert W. Fredericks and his colleagues at TRW in Los Angeles were the first to detect ion acoustic waves in shocks. They made this discovery using instruments on the OGO-5 spacecraft that were designed specifically to study plasma waves in space. Since then, plasma-wave detectors have been included on most space missions concerned with solar system plasmas, notably the International Sun-Earth Explorers (ISEE 1, 2 and 3) in earth orbit and the Voyager 1 and 2 missions to the outer planets. The late Fred Scarf of TRW and his collaborators often played back the microturbulent-wave electric fields recorded by the ISEE and Voyager spacecraft through an ordinary loudspeaker. To most listeners, shocks would sound cacophonous; to our ears, however, they were a symphony of space. Although easy to record, microturbulence has proved difficult to understand completely. Theorists turned to numerical computations to help elucidate the behavior of a strongly microturbulent plasma. By solving millions of equations of motion for the particles, computer simulation shows how ion acoustic waves grow and heat the plasma. Today's supercomputers are just beginning to give scientists a comprehensive understanding of many different kinds of microturbulence. Even without knowing the detailed nature of microturbulent plasma, physicists can deduce its general behavior. Electrons in the plasma transfer their momentum to ion acoustic waves, which in turn transfer it to ions. This process retards the motion of the electrons in the plasma and so creates resistance to the electric current. In some shocks, ion acoustic-wave resistance grows sufficiently intense to suppress the generation of solitons. When this happens, no wave train forms, and the shock is called resistive. Although both simple dispersive and resistive shocks have been found in space, most shocks observed there have entirely different characteristics from those discussed so far.
Most shocks are sufficiently powerful that neither dispersion nor resistance can prevent steepening from causing the waves to overturn. Overturning then leads to a host of new shock phenomena. A consideration of shallow water waves, once more, helps to illustrate the process of overturning. When a shallow ocean wave grows sufficiently high, the tip of its wave crest swings forward through an arc and ultimately collapses under gravity. The water stream from behind the crest collides with that ahead, giving rise to the foam on whitecaps. Thus, a large wave crashing toward shore repeatedly overturns, or "breaks." A plasma wave also develops overlapping velocity streams as it overturns. The fastest stream, which comes from the wave crest, invades the plasma ahead of the shock front. The Lorentz force turns the ions in this stream back into the shock. These reflected ions ultimately mix with those behind the front. If the shock is weak, its structure will remain steady. If the shock is strong, ion reflection will temporarily overwhelm steepening; however, the shock soon steepens again, and the cycle repeats. Recent numerical simulations by Kevin B. Quest and his colleagues at Los Alamos National Laboratory confirm the idea that very strong shock waves consist of a repeated cycle of steepening, overturning and ion reflection. The interactions between reflected and flowing ions can also lead to microturbulence. The Voyager spacecraft detected ion acoustic waves, this time generated by ions reflected by Jupiter's bow shock. Near the earth, reflected ions generate waves in the solar wind at the geometric mean of the frequencies of rotation of the ions and electrons about the earth's magnetic field; this mean is called the lower hybrid resonance frequency. In 1985 the Soviet-Czech Intershock spacecraft made the first definitive measurements of lower hybrid turbulence in the earth's bow shock. Around both planets, the ion acoustic waves take energy from ions and give it to electrons. Some heated electrons escape forward into the solar-wind flow, others back into the shock zone. So far we have concentrated on shocks propagating more or less at right angles to the magnetic field, which physicists call quasiperpendicular. Plasma turbulence is even more important when the shock propagates almost parallel to the magnetic field. The field no longer holds back the fast particles that rush ahead of a quasiparallel shock. These particles are a major source of turbulent wave energy. The ability of the magnetic field to channel particle motion along field lines creates a situation analogous to a fire hose left spraying water on the ground. Bends in the hose become increasingly curved by the centrifugal force of the flowing water; eventually the hose wriggles uncontrollably on the ground. The magnetic field channeling the overlapping plasma streams ahead of a quasiparallel shock experiences a similar instability, often called the fire-hose instability. The centrifugal force that bends the magnetic field lines is proportional to the density of energy in plasma motion along the magnetic field. Instability occurs when this energy density exceeds that of the magnetic field. Many physicists conceived of the fire-hose instability independently, but the version invented in 1961 by Eugene N. Parker of the University of Chicago was tailored specifically to quasiparallel shocks. The plasma fire-hose instability leads to a random flexing of the magnetic field lines.
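In symbols, the instability threshold just described takes a standard form in plasma theory; this is a textbook statement rather than an equation from the original article:

$$p_\parallel - p_\perp > \frac{B^2}{\mu_0},$$

where $p_\parallel$ and $p_\perp$ are the plasma pressures along and across the magnetic field. When the parallel pressure of the streaming plasma exceeds the perpendicular pressure by more than the magnetic term, bends in the field lines grow instead of being pulled straight.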
This kind of magnetic turbulence can be regarded as a chaotic ensemble of "torsional" waves, that is, ones that twist the magnetic field lines. They are known as Alfven waves, after Hannes Alfven of the Royal Institute of Technology in Stockholm, who first described them. Alfven waves, like ion acoustic waves, can exchange energy and momentum with ions in resonance with them. As far as the ions are concerned, the interaction with Alfven waves mimics the effect of collisions. Thus, Alfven waves limit how far ions escaping the shock can penetrate upstream and determine the thickness of the shock front. Theory predicts that collisions between ions and Alfven waves should be nearly elastic, that is, they should involve only slight changes in energy despite a large change in momentum (for example, when a rubber ball bounces off a hard wall, its momentum reverses, but its energy remains essentially the same). As a result, the Alfven turbulence inside the shock front should disintegrate relatively slowly. This notion led us to conclude in 1967 that quasiparallel shocks could be much thicker than quasiperpendicular ones. The very first measurements of the earth's bow shock by the IMP-1 spacecraft in 1964 hinted at the substantial differences between parallel and perpendicular shocks. The data returned by IMP-1 were somewhat puzzling at first because sometimes the shock appeared thin and other times it appeared thick. Three years later we suggested that shock structure could depend on the orientation of the interplanetary magnetic field. In 1971 Eugene W. Greenstadt of TRW and his colleagues assembled the first evidence that the thickness of the earth's bow shock does indeed vary with the direction of the solar-wind magnetic field. Since this field constantly changes direction, the regions where the bow shock is locally quasiperpendicular and where it is quasiparallel are always moving, even if the shock itself remains fairly stationary. Wherever the shock is quasiperpendicular, it is thin; where it is quasiparallel, it is thick. In the early 1970s spacecraft began to detect small fluxes of energetic particles, ion acoustic waves and Alfven waves far upstream of where the earth's bow shock was understood to be. The ISEE program, which started in 1977, established that all the upstream activity is actually part of the extended quasiparallel shock. The shock is so thick that it dwarfs the earth, and therefore earth-orbiting satellites cannot really measure its size. Another, larger class of shocks does lend itself to investigation by spacecraft, however. Flares in the solar corona occasionally launch gigantic shock waves that propagate through the interplanetary medium to the far reaches of the solar system. These can be observed as they sweep by instrumented spacecraft. One of us (Kennel), along with colleagues in the ISEE project, found that the region of Alfven and ion acoustic turbulence upstream of quasiparallel interplanetary shocks can be more than a million kilometers thick. Alfven waves play a particularly prominent role in the shocks that form ahead of comets as they pass through the solar wind in the inner solar system. Cometary nuclei are far too small to cause any detectable physical disturbance in the flow of the solar wind (the nucleus of Halley's comet, for instance, measures only about 15 kilometers across), and the nuclei possess a negligible magnetic field. Because of these properties, comets cannot generate shocks in the way that planets do.
Nevertheless, scientists have found that when comets approach the sun, they create large collisionless shocks. Sunlight evaporates atoms and molecules from the surface of a comet's nucleus. Most of the liberated gas is ionized by solar ultraviolet light and forms a plasma cloud similar to the earth's ionosphere. The solar wind never penetrates the cometary ionosphere, and it is not the ionosphere that forms the shock wave. The key players in producing cometary shocks are the few neutral atoms and molecules that manage to escape the comet's ionosphere. These, too, are ultimately ionized, but farther out, where they have entered the solar wind. The newly ionized particles respond to the electric and magnetic fields of the solar wind by joining the flow. They increase the mass density of the solar wind, which, according to the law of conservation of momentum, decreases the wind speed. Because cometary ions are much heavier than the protons of the solar wind, a number of cometary ions can slow the wind appreciably. More than 20 years ago Ludwig Biermann of the Max Planck Institute for Astrophysics in Munich suggested that such a decelerating solar-wind flow should produce a shock similar to a planetary bow shock. During its 1986 encounter with Comet Halley, the Soviet spacecraft Vega-1 heard the plasma wave cacophony that signaled the existence of a shock wave about one million kilometers from the nucleus, the distance predicted by Biermann's theory. The Soviet Vega, Japanese Suisei and European Giotto spacecraft encountered both quasiperpendicular and quasiparallel shocks at Comet Halley. The quasiparallel shocks were similar to those at the planets. Heavy ions upstream of the quasiperpendicular cometary shocks generated intense Alfven-wave turbulence, however, something that does not happen around the planets. Shocks that generate Alfven waves can also accelerate a small group of particles to high energies. The "collisions" of particles with Alfven waves return escaping particles back to the shock front. Each time they recross the shock, the particles increase their energy. This acceleration mechanism is based on one proposed by Enrico Fermi in 1954. In 1986 one of us (Kennel) and his ISEE collaborators found that a theory of Fermi acceleration developed for interplanetary shocks by Martin A. Lee of the University of New Hampshire successfully passed the test of observations. Yet the Fermi process develops so slowly that the protons accelerated by quasiparallel interplanetary shocks only reach energies of a few hundred thousand electron volts in the one day it takes the shock to travel from the sun to the earth. In comparison, cosmic rays – energetic subatomic particles and atomic nuclei from deep space – have energies up to 100 trillion electron volts. Exploding stars – supernovas – create very strong shocks that speed into the interstellar plasma at tens of thousands of kilometers per second. We cannot put a space probe ahead of a supernova shock, so we cannot say for sure whether the shock generates Alfven waves and accelerates interstellar ions. We can, however, apply to supernova shocks the theory of particle acceleration that is being tested today using shocks in the solar system. Since supernova shocks last about a million years before dying out, particles have time to reach extremely high energies via the Fermi process. Working independently, Germogen F. Krymskii of the Institute of Space Physics Research and Aeronomy in Yakutsk, U.S.S.R., Roger D.
Blandford of the California Institute of Technology and Ian W. Axford of the Max Planck Institute for Aeronomy in Katlenburg-Lindau, together with their colleagues, showed in 1977 that the distribution in energy of the particles accelerated by collisionless shocks is virtually identical to that of cosmic rays. The origin of cosmic rays has long been a puzzle. Many astrophysicists now believe that they are created when supernova shocks accelerate particles, although it is still not understood how the particles reach the highest energies observed. Collisionless shocks probably exist even around remote galaxies. Dynamic processes in the centers of some active galaxies (possibly involving a massive black hole) create supersonic jets hundreds of thousands of light-years long. Shocks are likely to occur when the jets interact with the plasma surrounding the galaxy. Radio emissions from the jets indicate that electrons are accelerated to extremely high energies. Albert A. Galeev, director of the Soviet Institute of Space Research, suggests that a theory he and his colleagues developed to explain how lower hybrid waves accelerate electrons in the earth's bow shock may also clarify how electrons are accelerated in galactic jets. Contemporary collisionless shock research encompasses phenomena that vary tremendously in scale and origin. The concepts that we and others developed 20 years ago have turned out to be a reasonable basis for understanding collisionless shocks. Spacecraft have found individual examples of most of the shock types predicted by theory. Still to come are refined measurements and numerical calculations that simulate in detail the impressive variety of shocks found in nature. In most cases, the fairly simple mechanisms we have described here are intertwined in fascinating ways. Yet even now collisionless shock theory has enabled physicists to speculate with some confidence on the physical processes underlying some of the grandest and most violent phenomena in the universe.

ROALD Z. SAGDEEV and CHARLES F. KENNEL have been friends and colleagues since they met at the International Centre for Theoretical Physics in Trieste in 1965. Sagdeev heads the theory division of the Soviet Institute of Space Research and is professor of physics at the Moscow Physico-Technical Institute. Last year he joined the physics department of the University of Maryland at College Park. In addition to his astronomical and physical research, Sagdeev has been active in the areas of arms control, science policy and global environment protection. Kennel is professor of physics at the University of California, Los Angeles, as well as consultant to TRW Systems Group, where he participates in space plasma experiments. He is also a distinguished visiting scientist at the Geophysical Institute of the University of Alaska, Fairbanks, and a collector of native Alaskan art.

FURTHER READING
SHOCK WAVES IN COLLISIONLESS PLASMAS. D. A. Tidman and N. A. Krall.
UPSTREAM WAVES AND PARTICLES. Journal of Geophysical Research, Vol. 86, No. A6, pages 4319-4529; June 1, 1981.
HANDBOOK OF PLASMA PHYSICS. Edited by M. N. Rosenbluth and R. Z. Sagdeev. North-Holland Publishing Company, 1983.
COLLISIONLESS SHOCKS IN THE HELIOSPHERE: REVIEW OF CURRENT RESEARCH. Edited by Bruce T. Tsurutani and Robert G. Stone. American Geophysical Union, 1985.
NONLINEAR PHYSICS: FROM THE PENDULUM TO TURBULENCE AND CHAOS. R. Z. Sagdeev, D. A. Usikov and G. M. Zaslavsky. Translated from the Russian by Igor R. Sagdeev. Harwood Academic Publishers, 1988.
Fusion power is the generation of energy by nuclear fusion. Fusion reactions are high-energy reactions in which two lighter atomic nuclei fuse to form a heavier nucleus. When they combine, a release of energy is expected in accordance with Einstein's formula E = mc²: the energy released equals the mass lost in the reaction multiplied by the speed of light squared. This major area of plasma physics research is concerned with harnessing this reaction as a source of large-scale sustainable energy. There is no question of fusion's scientific feasibility, since stellar nucleosynthesis is the process in which stars transmute matter into energy emitted as radiation. In almost all large-scale commercial proposals, heat from neutron scattering in a controlled fusion reaction is used to operate a steam turbine that drives electrical generators, as in existing fossil fuel and nuclear fission power stations. Many different fusion concepts have come in and out of vogue over the years. The current leading designs are the tokamak and inertial confinement fusion (laser) approaches. As of January 2016, these technologies are not yet practically viable, as they are not energetically viable: it currently takes more energy to initiate and contain a fusion reaction than the reaction then produces. There are also smaller-scale commercial proposals relying on other means of energy transfer, mostly forms of aneutronic fusion, but these are largely considered to be more remote than the large-scale neutron scattering approaches. Fusion reactions occur when two (or more) atomic nuclei come close enough for the strong nuclear force pulling them together to exceed the electrostatic force pushing them apart, fusing them into heavier nuclei. For nuclei lighter than iron-56, the reaction is exothermic, releasing energy. For nuclei heavier than iron-56, it is endothermic, requiring an external source of energy. Hence, nuclei smaller than iron-56 are more likely to fuse while those heavier than iron-56 are more likely to break apart. To fuse, nuclei must be brought close enough together for the strong force to act, which occurs only at very short distances. The electrostatic force keeping them apart acts over long distances, so a significant amount of kinetic energy is needed to overcome this "Coulomb barrier" before the reaction can take place. There are several ways of doing this, including speeding up atoms in a particle accelerator, or more commonly, heating them to very high temperatures. Once an atom is heated above its ionization energy, its electrons are stripped away, leaving just the bare nucleus (the ion). The result is a hot cloud of ions and the electrons formerly attached to them. This cloud is known as a plasma. Because the charges are separated, plasmas are electrically conductive and magnetically controllable. Many fusion devices take advantage of this to control the particles as they are being heated. Theoretically, any atoms can be fused if the pressure and temperature are high enough. Studies have been made of the conditions required to create fusion for a variety of atoms, but practical power stations are currently limited to only the lightest elements. Hydrogen is ideal for this purpose because of its small charge, which makes it the easiest atom to fuse, and its fusion produces helium. A reaction's cross section, denoted σ, is the measure of how likely a fusion reaction is to happen. It is a probability, and it depends on the velocity of the two nuclei when they strike one another.
A reaction's cross section, denoted σ, is a measure of how likely a fusion reaction is to happen. It is a probability, and it depends on the relative velocity of the two nuclei when they strike one another: if the atoms move faster, fusion is more likely, and if the atoms hit head on, fusion is more likely. Cross sections for many different fusion reactions were measured mainly in the 1970s using particle beams: a beam of ions of material A was fired at material B at different speeds, and the amount of neutrons coming off was measured. Neutrons are a key product of most fusion reactions.

In most cases, the nuclei are flying around in a hot cloud with some distribution of velocities. If the plasma is thermalized, then the distribution looks like a bell curve, or Maxwellian distribution. In this case, it is useful to take the cross section averaged over the velocity distribution, written ⟨σv⟩. This average enters the volumetric fusion rate:

f = n_A n_B ⟨σv⟩ E_fus

- f is the energy made by fusion, per unit time and volume
- n_A and n_B are the number densities of species A and B in the volume
- ⟨σv⟩ is the cross section of the reaction, averaged over all relative velocities v of the two species
- E_fus is the energy released by a single fusion reaction.

This equation shows that the fusion energy varies with the temperature, density, speed of collision, and fuel used. It was central to John Lawson's analysis of fusion power stations working with a hot plasma. Lawson assumed an energy balance, shown below:

Net Power = Efficiency * (Fusion - Radiation Loss - Conduction Loss)

- Net Power is the net power output of a fusion power station.
- Efficiency accounts for how much energy is needed to drive the device and how well it collects power.
- Fusion is the rate of energy generated by the fusion reactions.
- Radiation Loss is the energy lost as light leaving the plasma.
- Conduction Loss is the energy lost as particles carrying kinetic energy leave the plasma.

Plasma clouds lose energy through conduction and radiation. Conduction occurs when ions, electrons or neutrals hit a surface and transfer a portion of their kinetic energy to the atoms of the surface. Radiation occurs when energy leaves the cloud as light, whether visible, UV, IR, or X-ray. Radiation increases as the temperature rises. To get net power from fusion, these losses must be overcome.

Triple product: density, temperature, time. The Lawson criterion argues that a machine holding a hot, thermalized and quasi-neutral plasma must achieve a minimum value of the product of plasma density, temperature, and confinement time to overcome its radiation and conduction losses, assuming a power station efficiency of about 30 percent. This product became known as the "triple product": the plasma density, the temperature, and how long the plasma is held. For many years, fusion research has focused on achieving the highest triple product possible. This emphasis on the triple product as a metric of success has drawn attention away from other considerations such as cost, size, complexity and efficiency, and has led to larger, more complicated and more expensive machines such as ITER and NIF.
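To put illustrative numbers on the rate equation and power balance above, here is a minimal sketch for a 50:50 deuterium-tritium plasma. The density, temperature, and the reactivity ⟨σv⟩ are assumed, textbook-style figures (not measurements), and the bremsstrahlung coefficient is the standard approximate value for a pure hydrogen plasma:

```python
import math

n = 1.0e20                  # total fuel ion density, m^-3 (assumed)
T_keV = 20.0                # ion temperature, keV (assumed)
sigmav = 4.2e-22            # D-T reactivity <sigma v> near 20 keV, m^3/s (approximate)
E_fus = 17.6e6 * 1.602e-19  # energy per D-T reaction, J

# Volumetric fusion power: f = n_D * n_T * <sigma v> * E_fus, with n_D = n_T = n/2
P_fus = (n / 2) * (n / 2) * sigmav * E_fus         # W/m^3

# Bremsstrahlung radiation loss (hydrogen plasma, approximate coefficient)
P_brem = 5.35e-37 * n * n * math.sqrt(T_keV)       # W/m^3

print(f"fusion power density : {P_fus / 1e6:.2f} MW/m^3")    # ~3 MW/m^3
print(f"bremsstrahlung loss  : {P_brem / 1e6:.3f} MW/m^3")   # ~0.02 MW/m^3
print(f"radiation fraction   : {P_brem / P_fus:.1%}")
```

At these conditions radiation is a small correction; the dominant loss in practice is conduction, which is what the confinement-time factor in the triple product is meant to capture. A commonly quoted D-T ignition threshold is a triple product of roughly 3×10^21 keV·s/m³.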
Plasma can be made by fully ionizing a gas. Plasma is a fluid that conducts electricity. In bulk, it is modeled using magnetohydrodynamics, a combination of the Navier-Stokes equations governing fluids and Maxwell's equations governing how magnetic and electric fields behave. Fusion exploits several plasma properties, including:

Self-organization: A plasma conducts electric and magnetic fields, which means it can self-organize. Its motions can generate fields which can, in turn, contain it.

Diamagnetism: A plasma can generate its own internal magnetic field. This can reject an externally applied magnetic field, making the plasma diamagnetic.

Magnetic mirrors: A plasma can be reflected when it moves from a low- to a high-density magnetic field.

There are several proposals for energy capture. The simplest is a heat cycle, using the fusion reactions to heat a fluid. It has also been proposed to use the neutrons generated by fusion to regenerate spent fission fuel. In addition, direct energy conversion has been developed (at LLNL in the 1980s) as a method to maintain a voltage using the products of a fusion reaction. This has demonstrated an energy capture efficiency of 48 percent.

Magnetic confinement fusion

Tokamak: The tokamak is the most well-developed and well-funded approach to fusion energy. As of April 2012 there were an estimated 215 experimental tokamaks worldwide, either planned, decommissioned or currently operating (35 of them operating). This method races hot plasma around in a magnetically confined ring, with an internal current. When completed, ITER will be the world's largest tokamak.

Spherical tokamak: A variation on the tokamak with a spherical shape.

Stellarator: These are twisted rings of hot plasma. The stellarator attempts to create a naturally twisted plasma path using external magnets, whereas tokamaks create the twisting field using an internal current. Stellarators were developed by Lyman Spitzer in 1950 and have four designs: Torsatron, Heliotron, Heliac and Helias. One example is Wendelstein 7-X, a German fusion device that produced its first plasma on December 10, 2015. Wendelstein 7-X, the world's largest stellarator-type fusion device, is not intended to produce energy, but will investigate the suitability of this type of device for a power station.

Levitated Dipole Experiment (LDX): This uses a solid superconducting torus that is magnetically levitated inside the reactor chamber. The superconductor forms an axisymmetric magnetic field that contains the plasma. The LDX was developed after 2000 by Jay Kesner and Michael E. Mauel in a collaboration between MIT and Columbia University.

Magnetic mirror: Developed by Richard F. Post and teams at LLNL in the 1960s. Magnetic mirrors reflect hot plasma back and forth along a line. Variations included the magnetic bottle and the biconic cusp. A series of well-funded, large mirror machines were built by the US government in the 1970s and 1980s. Mirror research continues today.

Field-reversed configuration: This device traps plasma in a self-organized quasi-stable structure, where the particle motion makes an internal magnetic field which then traps the plasma itself.

Reversed field pinch: Here the plasma moves inside a ring and carries an internal magnetic field. Moving out from the center of this ring, the magnetic field reverses direction.

Inertial confinement fusion

Direct drive: In this technique, lasers directly blast a pellet of fuel. The goal is to start ignition, a fusion chain reaction. Ignition was first suggested by John Nuckolls in 1972. Notable direct drive experiments have been conducted at the Laboratory for Laser Energetics and at the Laser Mégajoule and GEKKO XII facilities. Good implosions require fuel pellets with close to a perfect shape in order to generate a symmetrical inward shock wave and produce the high-density plasma.

Fast ignition: This method uses two laser blasts. The first blast compresses the fusion fuel, while the second, high-energy pulse ignites it. Experiments have been conducted at the Laboratory for Laser Energetics using the Omega and Omega EP systems and at the GEKKO XII laser at the Institute for Laser Engineering in Osaka, Japan.

Indirect drive: In this technique, lasers blast a structure around the pellet of fuel. This structure is known as a Hohlraum.
As it disintegrates, the pellet is bathed in a more uniform X-ray light, creating better compression. The largest system using this method is the National Ignition Facility.

Magneto-inertial fusion or Magnetized Liner Inertial Fusion: This combines a laser pulse with a magnetic pinch. The pinch community refers to it as Magnetized Liner Inertial Fusion while the ICF community refers to it as magneto-inertial fusion.

Heavy ion beams: There are also proposals to do inertial confinement fusion with ion beams instead of laser beams. The main difference is that a massive ion beam carries significant momentum, whereas a laser beam carries essentially none.

Magnetic or electric pinches

Z-pinch: This method sends a strong current (in the z-direction) through the plasma. The current generates a magnetic field that squeezes the plasma to fusion conditions. Pinches were the first method for man-made controlled fusion. Examples include the dense plasma focus and the Z machine at Sandia National Laboratories.

Theta-pinch: This method drives a current in a plasma in the theta (azimuthal) direction.

Screw pinch: This method combines a theta- and a z-pinch for improved stabilization.

Inertial electrostatic confinement

Fusor: This method uses an electric field to heat ions to fusion conditions. The machine typically uses two spherical cages, a cathode inside the anode, inside a vacuum. These machines are not considered a viable approach to net power because of their high conduction and radiation losses. They are simple enough to build that amateurs have fused atoms using them.

Uncontrolled: Fusion has been initiated by man using uncontrolled fission explosions to ignite so-called hydrogen bombs. Early proposals for fusion power included using bombs to initiate reactions.

Beam fusion: A beam of high-energy particles can be fired at another beam or at a target and fusion will occur. This was used in the 1970s and 1980s to study the cross sections of high-energy fusion reactions.

Bubble fusion: This was a fusion reaction supposed to occur inside extraordinarily large collapsing gas bubbles, created during acoustic liquid cavitation. This approach was discredited.

Muon-catalyzed fusion: Muons allow atoms to get much closer and thus reduce the kinetic energy required to initiate fusion. However, muons require more energy to produce than can be obtained from muon-catalysed fusion, making this approach impractical for the generation of power.

Gas must first be heated to form a plasma, which then needs to be hot enough to start fusion reactions. A number of heating schemes have been explored:

Radiofrequency heating: A radio wave is applied to the plasma, causing it to oscillate. This is basically the same concept as a microwave oven. It is also known as electron cyclotron resonance heating or dielectric heating.
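The resonance behind this scheme can be illustrated with one line of arithmetic: the wave must match the frequency at which electrons gyrate around the field lines, f_ce = eB/2πm_e. A minimal sketch (the field value is an assumed example):

```python
import math

# Electron cyclotron frequency f_ce = e*B / (2*pi*m_e): the frequency an
# ECRH source must match for resonant heating.
e = 1.602176634e-19     # elementary charge, C
m_e = 9.1093837015e-31  # electron mass, kg
B = 5.0                 # magnetic field strength, T (illustrative)

f_ce = e * B / (2 * math.pi * m_e)
print(f"electron cyclotron frequency at {B} T: {f_ce / 1e9:.0f} GHz")  # ~140 GHz
```

Fields of a few tesla put the resonance near 100-200 GHz, which is roughly the range in which real ECRH sources operate.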
Neutral beam injection: An external source of hydrogen is ionized and accelerated by an electric field to form a charged beam, which is shone through a source of neutral hydrogen gas towards the plasma; the plasma itself is ionized and contained in the reactor by a magnetic field. Some of the intermediate hydrogen gas is accelerated towards the plasma by collisions with the charged beam while remaining neutral. This neutral beam is unaffected by the magnetic field and so shines through it into the plasma. Once inside the plasma, the neutral beam transmits energy to the plasma by collisions, becomes ionized as a result, and is thus contained by the magnetic field, thereby both heating and refuelling the reactor in one operation. The remainder of the charged beam is diverted by magnetic fields onto cooled beam dumps.

Thomson scattering: Certain wavelengths of light scatter off a plasma. This light can be detected and used to reconstruct the plasma's behavior, including its density and temperature. The technique is common in inertial confinement fusion, tokamaks, and fusors. In ICF systems, it can be done by firing a second beam into a gold foil adjacent to the target, which makes X-rays that scatter off or traverse the plasma. In tokamaks, it can be done using mirrors and detectors to reflect light across a plane (two dimensions) or in a line (one dimension).

Langmuir probe: This is a metal object placed in a plasma. A potential is applied to it, giving it a positive or negative voltage against the surrounding plasma. The metal collects charged particles, drawing a current. As the voltage changes, the current changes, tracing out an I-V curve. The I-V curve can be used to determine the local plasma density, potential and temperature.
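As an illustration of how the probe's I-V curve yields a temperature: in the electron-retardation region the electron current grows as exp(e(V − V_p)/kT_e), so the slope of ln(I) against V is 1/T_e in electronvolts. A minimal sketch on synthetic, noise-free data (all values are assumed):

```python
import numpy as np

T_e_true = 5.0                     # electron temperature, eV (assumed)
V = np.linspace(-20.0, -5.0, 40)   # probe bias below plasma potential, V
I = 1e-3 * np.exp(V / T_e_true)    # idealized electron current, A

# Linear fit of ln(I) against V; the slope is 1/T_e in eV.
slope, _ = np.polyfit(V, np.log(I), 1)
print(f"recovered T_e = {1.0 / slope:.2f} eV")   # -> 5.00 eV
```

Real probe traces also include ion saturation current and noise, so the fit is restricted to the exponential region, but the principle is the same.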
Geiger counter: Deuterium or tritium fusion produces neutrons. Geiger counters record the rate of neutron production, so they are an essential tool for demonstrating success.

Flux loop: A loop of wire is inserted into the magnetic field. As the field passes through the loop, a current is induced. The current is measured and used to find the total magnetic flux through that loop. This has been used on the National Compact Stellarator Experiment, the polywell, and the LDX machines.

X-ray detector: All plasma loses energy by emitting light, across the whole spectrum: visible, IR, UV, and X-rays. This occurs any time a particle changes speed, for any reason. If the reason is deflection by a magnetic field, the radiation is cyclotron radiation at low speeds and synchrotron radiation at high speeds. If the reason is deflection by another particle, the plasma radiates X-rays, known as bremsstrahlung radiation. X-rays are termed hard or soft, based on their energy.

Steam turbines: It has been proposed that steam turbines be used to convert the heat from the fusion chamber into electricity. The heat is transferred into a working fluid that turns into steam, driving electric generators.

Neutron blankets: Deuterium and tritium fusion generates neutrons. The rate varies by technique (NIF has a record of 3×10^14 neutrons per second, while a typical fusor produces 10^5 to 10^9 neutrons per second). It has been proposed to use these neutrons as a way to regenerate spent fission fuel or as a way to breed tritium from a liquid lithium blanket.

Direct conversion: This is a method where the kinetic energy of a particle is converted into voltage. It was first suggested by Richard F. Post in conjunction with magnetic mirrors in the late sixties, and it has also been suggested for field-reversed configurations. The process takes the plasma, expands it, and converts a large fraction of the random energy of the fusion products into directed motion. The particles are then collected on electrodes at various large electrical potentials. This method has demonstrated an experimental efficiency of 48 percent.

Confinement refers to all the conditions necessary to keep a plasma dense and hot long enough to undergo fusion. Some general principles:

- Equilibrium: The forces acting on the plasma must be balanced for containment. One exception is inertial confinement, where the relevant physics must occur faster than the disassembly time.
- Stability: The plasma must be so constructed that disturbances will not lead to the plasma disassembling.
- Transport or conduction: The loss of material must be sufficiently slow. The plasma carries energy off with it, so rapid loss of material will disrupt any machine's power balance. Material can be lost by transport into different regions or by conduction through a solid or liquid.

To produce self-sustaining fusion, the energy released by the reaction (or at least a fraction of it) must be used to heat new reactant nuclei and keep them hot long enough that they also undergo fusion reactions.

The first human-made, large-scale fusion reaction was the test of the hydrogen bomb, Ivy Mike, in 1952. As part of the PACER project, it was once proposed to use hydrogen bombs as a source of power by detonating them in underground caverns and then generating electricity from the heat produced, but such a power station is unlikely ever to be constructed.

At the temperatures required for fusion, the fuel is heated to a plasma state, in which it has very good electrical conductivity. This opens the possibility of confining the plasma with magnetic fields, an approach generally known as magnetic confinement; in a magnetized plasma, the magnetic fields and the plasma intermix. The field lines exert a Lorentz force on the plasma. The force works perpendicular to the magnetic field, so one problem in magnetic confinement is preventing the plasma from leaking out the ends of the field lines.

A general measure of magnetic trapping in fusion is the beta ratio, the ratio of the plasma's internal pressure to the pressure of the externally applied magnetic field:

β = plasma pressure / magnetic pressure = n k T / (B² / 2μ₀)

A value of 1 is ideal trapping. Some examples of beta values include:

- The START machine: 0.32
- The Levitated Dipole Experiment: 0.26
- Spheromaks: ≈ 0.1, with a maximum of 0.2 based on the Mercier limit
- The DIII-D machine: 0.126
- The Gas Dynamic Trap, a magnetic mirror: 0.6 for 5×10^-3 seconds
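A minimal numeric sketch of this beta ratio, with assumed round numbers for density, temperature, and field:

```python
import math

e = 1.602176634e-19    # J per eV, used to convert keV to joules
mu0 = 4e-7 * math.pi   # vacuum permeability, T*m/A

n = 1.0e20             # particle density, m^-3 (assumed)
T_keV = 10.0           # plasma temperature, keV (assumed)
B = 5.0                # confining field, T (assumed)

p_plasma = n * T_keV * 1e3 * e     # thermal pressure n*k*T, Pa
p_magnetic = B**2 / (2 * mu0)      # magnetic pressure B^2/(2*mu0), Pa
print(f"beta = {p_plasma / p_magnetic:.3f}")   # ~0.016
```

Values of a few percent, as in this example, are typical of conventional tokamaks; the machines listed above span roughly 0.1 to 0.6.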
Magnetic mirror: One example of magnetic confinement is the magnetic mirror effect: a particle that follows a field line into a region of higher field strength can be reflected. Several devices have tried to use this effect. The most famous were the magnetic mirror machines, a series of large, expensive devices built at the Lawrence Livermore National Laboratory from the 1960s to the mid-1980s. Other examples include magnetic bottles and the biconic cusp. Because the mirror machines were straight, they had some advantages over a ring shape: mirrors were easier to construct and maintain, and direct conversion energy capture was easier to implement. As the confinement achieved in experiments was poor, this approach was abandoned.

Magnetic loops: Another example of magnetic confinement is to bend the field lines back on themselves, either in circles or, more commonly, in nested toroidal surfaces. The most highly developed system of this type is the tokamak, with the stellarator being next most advanced, followed by the reversed field pinch. Compact toroids, especially the field-reversed configuration and the spheromak, attempt to combine the advantages of toroidal magnetic surfaces with those of a simply connected (non-toroidal) machine, resulting in a mechanically simpler and smaller confinement area.

Inertial confinement is the use of a rapidly imploding shell to heat and confine plasma. The shell is imploded using a direct laser blast (direct drive), a secondary X-ray blast (indirect drive), or heavy ion beams. Theoretically, fusion using lasers would be done using tiny pellets of fuel that explode several times a second. To induce the explosion, the pellet must be compressed to about 30 times solid density with energetic beams. If direct drive is used, with the beams focused directly on the pellet, it can in principle be very efficient, but in practice it is difficult to obtain the needed uniformity. The alternative approach, indirect drive, uses beams to heat a shell; the shell then radiates X-rays, which implode the pellet. The beams are commonly laser beams, but heavy and light ion beams and electron beams have all been investigated.

There are also electrostatic confinement fusion devices, which confine ions using electrostatic fields. The best known is the fusor. This device has a cathode inside an anode wire cage. Positive ions fly towards the negative inner cage and are heated by the electric field in the process. If they miss the inner cage they can collide and fuse. Ions typically hit the cathode, however, creating prohibitively high conduction losses. Fusion rates in fusors are also very low because of competing physical effects, such as energy loss in the form of light radiation. Designs have been proposed to avoid the problems associated with the cage by generating the field using a non-neutral cloud. These include a plasma oscillating device, a magnetically shielded grid, a Penning trap, and the polywell. The technology is relatively immature, however, and many scientific and engineering questions remain.

History of research

Research into nuclear fusion started in the early part of the 20th century. In 1920 the British physicist Francis William Aston discovered that the total mass of four hydrogen atoms is greater than the mass of the one helium atom (He-4, two protons and two neutrons) into which they can be combined, which implied that net energy can be released by combining hydrogen atoms to form helium, and provided the first hints of a mechanism by which stars could produce energy in the quantities being measured. Through the 1920s, Arthur Stanley Eddington became a major proponent of the proton-proton chain reaction (PP reaction) as the primary system running the Sun. The theory was verified by Hans Bethe in 1939, who showed that beta decay and quantum tunneling in the Sun's core might convert one of the protons into a neutron, thereby producing deuterium rather than a diproton. The deuterium would then fuse through other reactions to further increase the energy output. For this work, Bethe won the Nobel Prize in Physics.

In 1942, nuclear fusion research was subsumed into the Manhattan Project, and the field was obscured by the secrecy surrounding the project. The first patent related to a fusion reactor was registered in 1946 by the United Kingdom Atomic Energy Authority. The inventors were Sir George Paget Thomson and Moses Blackman. This was the first detailed examination of the Z-pinch concept.

Z-pinch is based on the fact that plasmas are electrically conducting. Running a current through a plasma generates a magnetic field around it. This field, through the Lorentz force, creates an inward directed force that causes the plasma to collapse inward, raising its density. A denser plasma generates a stronger magnetic field, increasing the inward force and leading to a chain reaction. If the conditions are correct, this can lead to the densities and temperatures needed for fusion.
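How hot a pinch can get in equilibrium may be estimated with the classical Bennett relation (a standard textbook result, not part of the original text), which balances the magnetic pinch force against plasma pressure: μ₀I²/8π = N·k·(T_e + T_i), where N is the line density of particles per metre of column. A sketch with assumed numbers:

```python
import math

# Bennett pinch relation: mu0 * I^2 / (8*pi) = N * k * (Te + Ti)
mu0 = 4e-7 * math.pi   # vacuum permeability, T*m/A

I = 1.0e6              # pinch current, A (assumed)
N = 1.0e19             # line density, particles per metre (assumed)

kT_sum_J = mu0 * I**2 / (8 * math.pi * N)                    # k*(Te + Ti), J
print(f"k(Te + Ti) ~ {kT_sum_J / 1.602e-19 / 1e3:.0f} keV")  # ~31 keV
```

On paper, a megaampere through a thin column is enough for fusion-relevant temperatures, which is why the early pinch programs described below looked so attractive before the instabilities intervened.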
The difficulty is getting the current into the plasma, which would normally melt any sort of mechanical electrode. A solution emerges, again because of the conducting nature of the plasma: by placing the plasma in the middle of an electromagnet, induction can be used to generate the current.

Starting in 1947, two UK teams carried out small experiments and began building a series of ever-larger experiments. When the Huemul results hit the news (see below), James L. Tuck, a UK physicist working at Los Alamos, introduced the pinch concept in the US and produced a series of machines known as the Perhapsatron. The Soviet Union, unbeknownst to the West, was also building a series of similar machines. All of these devices quickly demonstrated a series of instabilities when the pinch was applied, which broke up the plasma column long before it reached the densities and temperatures required for fusion.

The first successful man-made fusion device was the boosted fission weapon tested in 1951 in the Greenhouse Item test. This was followed by true fusion weapons in 1952's Ivy Mike, and the first practical examples in 1954's Castle Bravo. This was uncontrolled fusion. In these devices, the energy released by the fission explosion is used to compress and heat fusion fuel, starting a fusion reaction. Fusion releases neutrons, and these neutrons hit the surrounding fission fuel, causing the atoms to split apart much faster than in normal fission processes, almost instantly by comparison. This increases the effectiveness of bombs: normal fission weapons blow themselves apart before all their fuel is used; fusion/fission weapons do not have this practical upper limit.

In 1949 an expatriate German, Ronald Richter, proposed the Huemul Project in Argentina, announcing positive results in 1951. These turned out to be fake, but the episode prompted considerable interest in the concept as a whole. In particular, it prompted Lyman Spitzer to begin considering ways to solve some of the more obvious problems involved in confining a hot plasma, and, unaware of the z-pinch efforts, he developed a new solution to the problem known as the stellarator. Spitzer applied to the US Atomic Energy Commission for funding to build a test device.

During this period, Jim Tuck, who had worked with the UK teams, had been introducing the z-pinch concept to his coworkers at his new job at Los Alamos National Laboratory (LANL). When he heard of Spitzer's pitch for funding, he applied to build a machine of his own, the Perhapsatron. Spitzer's idea won funding and he began work on the stellarator under the code name Project Matterhorn. His work led to the creation of the Princeton Plasma Physics Laboratory. Tuck returned to LANL and arranged local funding to build his machine. By this time, however, it was clear that all of the pinch machines were suffering from the same stability issues, and progress stalled. In 1953, Tuck and others suggested a number of solutions to the stability problems. This led to the design of a second series of pinch machines, led by the UK ZETA and Sceptre devices.

Spitzer had planned an aggressive development project of four machines: A, B, C, and D. A and B were small research devices, C would be the prototype of a power-producing machine, and D would be the prototype of a commercial device.
The A model worked without issue, but even by the time the B model was in use it was clear the stellarator was also suffering from instabilities and plasma leakage. Progress on C slowed as attempts were made to correct for these problems. By the mid-1950s it was clear that the simple theoretical tools being used to calculate the performance of all fusion machines were simply not predicting their actual behavior: machines invariably leaked plasma from their confinement area at rates far higher than predicted.

In 1954, Edward Teller held a gathering of fusion researchers at the Princeton Gun Club, near the Project Matterhorn (now known as Project Sherwood) grounds. Teller started by pointing out the problems that everyone was having, and suggested that any system where the plasma was confined within concave fields was doomed to fail. Attendees remember him saying something to the effect that the fields were like rubber bands, and they would attempt to snap back to a straight configuration whenever the power was increased, ejecting the plasma. He went on to say that it appeared the only way to confine the plasma in a stable configuration would be to use convex fields, a "cusp" configuration.

When the meeting concluded, most of the researchers quickly turned out papers explaining why Teller's concerns did not apply to their particular device: the pinch machines did not use magnetic fields in this way at all, while the mirror and stellarator seemed to have various ways out. This was soon followed by a paper by Martin David Kruskal and Martin Schwarzschild discussing pinch machines, however, which demonstrated that the instabilities in those devices were inherent to the design.

The largest "classic" pinch device was the ZETA, which included all of these suggested upgrades and started operations in the UK in 1957. In early 1958, John Cockcroft announced that fusion had been achieved in the ZETA, an announcement that made headlines around the world. When physicists in the US expressed concerns about the claims, they were initially dismissed. US experiments soon demonstrated the same neutrons, although temperature measurements suggested these could not be from fusion reactions. The neutrons seen in the UK were later demonstrated to be from different versions of the same instability processes that had plagued earlier machines. Cockcroft was forced to retract the fusion claims, and the entire field was tainted for years. ZETA ended its experiments in 1968.

The first controlled fusion experiment was accomplished using Scylla I at the Los Alamos National Laboratory in 1958. This was a pinch machine with a cylinder full of deuterium. Electric current shot down the sides of the cylinder. The current made magnetic fields that compressed the plasma to 15 million degrees Celsius, squeezed the gas, fused it, and produced neutrons.

In 1950–1951, I. E. Tamm and A. D. Sakharov in the Soviet Union first discussed a tokamak-like approach. Experimental research on those designs began in 1956 at the Kurchatov Institute in Moscow by a group of Soviet scientists led by Lev Artsimovich. The tokamak essentially combined a low-power pinch device with a low-power simple stellarator. The key was to combine the fields in such a way that the particles orbited within the reactor a particular number of times, a quantity today known as the "safety factor". The combination of these fields dramatically improved confinement times and densities, resulting in huge improvements over existing devices.

A key plasma physics text was published by Lyman Spitzer at Princeton in 1963.
In it, Spitzer took the ideal gas laws and adapted them to an ionized plasma, developing many of the fundamental equations used to model a plasma.

Laser fusion was suggested in 1962 by scientists at Lawrence Livermore National Laboratory, shortly after the invention of the laser itself in 1960. At the time, lasers were low-power machines, but low-level research began as early as 1965. Laser fusion, formally known as inertial confinement fusion, involves imploding a target by using laser beams. There are two ways to do this: indirect drive and direct drive. In direct drive, the laser blasts a pellet of fuel. In indirect drive, the lasers blast a structure around the fuel, making X-rays that squeeze the fuel. Both methods compress the fuel so that fusion can take place.

At the 1964 World's Fair, the public was given its first demonstration of nuclear fusion. The device was a θ-pinch from General Electric, similar to the Scylla machine developed earlier at Los Alamos.

The magnetic mirror was first published in 1967 by Richard F. Post and many others at the Lawrence Livermore National Laboratory. The mirror consisted of two large magnets arranged so they had strong fields within them and a weaker, but connected, field between them. Plasma introduced in the area between the two magnets would "bounce back" from the stronger fields at either end.

The A. D. Sakharov group constructed the first tokamaks, the most successful being the T-3 and its larger version, the T-4. T-4 was tested in 1968 in Novosibirsk, producing the world's first quasistationary fusion reaction. When these results were first announced, the international community was highly skeptical. A British team was invited to see T-3, however, and after measuring it in depth they released results that confirmed the Soviet claims. A burst of activity followed as many planned devices were abandoned and new tokamaks were introduced in their place; the C model stellarator, then under construction after many redesigns, was quickly converted to the Symmetrical Tokamak.

In his work with vacuum tubes, Philo Farnsworth observed that electric charge would accumulate in regions of the tube, an effect known today as the multipactor effect. Farnsworth reasoned that if ions were concentrated highly enough they could collide and fuse. In 1962, he filed a patent on a design using a positive inner cage to concentrate plasma in order to achieve nuclear fusion. During this time, Robert L. Hirsch joined the Farnsworth Television labs and began work on what became the fusor. Hirsch patented the design in 1966 and published it in 1967.

In 1972, John Nuckolls outlined the idea of ignition, a fusion chain reaction in which hot helium made during fusion reheats the fuel and starts more reactions. Nuckolls argued that ignition would require lasers of about 1 kJ; this turned out to be wrong. Nevertheless, his paper started a major development effort. Several laser systems were built at LLNL, including Argus, Cyclops, Janus, the Long Path, the Shiva laser, and Nova in 1984. This prompted the UK to build the Central Laser Facility in 1976.

During this time, great strides in understanding the tokamak system were made. A number of improvements to the design are now part of the "advanced tokamak" concept, which includes non-circular plasma, internal divertors and limiters, often superconducting magnets, and operation in the so-called "H-mode" island of increased stability.
Two other designs have also become fairly well studied: the compact tokamak, wired with the magnets on the inside of the vacuum chamber, and the spherical tokamak, which reduces its cross section as much as possible.

In 1974 a study of the ZETA results demonstrated an interesting side-effect: after an experimental run ended, the plasma would enter a short period of stability. This led to the reversed field pinch concept, which has seen some level of development since. On May 1, 1974, the KMS fusion company (founded by Kip Siegel) achieved the world's first laser-induced fusion in a deuterium-tritium pellet.

In the mid-1970s, Project PACER, carried out at Los Alamos National Laboratory (LANL), explored the possibility of a fusion power system that would involve exploding small hydrogen bombs (fusion bombs) inside an underground cavity. As an energy source, the system is the only fusion power system that could be demonstrated to work using existing technology. It would also require a large, continuous supply of nuclear bombs, however, making the economics of such a system rather questionable.

In 1976, the two-beam Argus laser became operational at Livermore. In 1977, the 20-beam Shiva laser at Livermore was completed, capable of delivering 10.2 kilojoules of infrared energy on target. At a price of $25 million and a size approaching that of a football field, Shiva was the first of the megalasers. That same year, the JET project was approved by the European Commission and a site was selected.

As a result of advocacy, the cold war, and the 1970s energy crisis, a massive magnetic mirror program was funded by the US federal government in the late 1970s and early 1980s. This program resulted in a series of large magnetic mirror devices including 2X, Baseball I, Baseball II, the Tandem Mirror Experiment and its upgrade, the Mirror Fusion Test Facility, and MFTF-B. These machines were built and tested at Livermore from the late 1960s to the mid-1980s. A number of institutions collaborated on these machines and conducted experiments on them, including the Institute for Advanced Study and the University of Wisconsin–Madison. The last machine, the Mirror Fusion Test Facility, cost 372 million dollars and was, at that time, the most expensive project in Livermore history. It opened on February 21, 1986 and was promptly shut down; the reason given was to balance the United States federal budget. This program was supported from within the Carter and early Reagan administrations by Edwin E. Kintner, a US Navy captain, under Alvin Trivelpiece.

Laser fusion also progressed. In 1983, the NOVETTE laser was completed, and in December 1984 the ten-beam NOVA laser was finished. Five years later, NOVA would produce a maximum of 120 kilojoules of infrared light during a nanosecond pulse. Meanwhile, efforts focused on either fast delivery or beam smoothness; both tried to deliver the energy uniformly to implode the target. One early problem was that light at the infrared wavelength lost much of its energy before hitting the fuel. Breakthroughs were made at the Laboratory for Laser Energetics at the University of Rochester, where scientists used frequency-tripling crystals to transform the infrared laser beams into ultraviolet beams. In 1985, Donna Strickland and Gérard Mourou invented a method to amplify laser pulses by "chirping". This method changes a single wavelength into a full spectrum.
The system then amplifies the laser at each wavelength and then reconstitutes the beam into one color. Chirped pulse amplification became instrumental in building the National Ignition Facility and the Omega EP system.

Most research into ICF was directed towards weapons research, because the implosion is relevant to nuclear weapons. During this time Los Alamos National Laboratory constructed a series of laser facilities, including Gemini (a two-beam system), Helios (eight beams), Antares (24 beams) and Aurora (96 beams). The program ended in the early nineties with a cost on the order of one billion dollars.

In 1987, Akira Hasegawa noticed that in a dipolar magnetic field, fluctuations tended to compress the plasma without energy loss. This effect was noticed in data taken by Voyager 2 when it encountered Uranus. This observation would become the basis for a fusion approach known as the levitated dipole.

Among tokamaks, the Tore Supra was under construction from 1983 to 1988 in Cadarache, France. In 1983, JET was completed and first plasmas were achieved. In 1985, the Japanese tokamak JT-60 was completed. In 1988, the T-15, a Soviet tokamak, was completed. It was the first industrial fusion reactor to use superconducting magnets, which were helium-cooled, to control the plasma.

In 1989, Pons and Fleischmann submitted papers to the Journal of Electroanalytical Chemistry claiming that they had observed fusion in a room-temperature device, and disclosed their work in a press release. Some scientists reported excess heat, neutrons, tritium, helium and other nuclear effects in so-called cold fusion systems, which for a time gained interest as showing promise. Hopes fell when replication failures were weighed in view of several reasons cold fusion is not likely to occur, the discovery of possible sources of experimental error, and finally the discovery that Fleischmann and Pons had not actually detected nuclear reaction byproducts. By late 1989, most scientists considered cold fusion claims dead, and cold fusion subsequently gained a reputation as pathological science. However, a small community of researchers continues to investigate cold fusion, claiming to replicate Fleischmann and Pons' results, including nuclear reaction byproducts. Claims related to cold fusion are largely disbelieved in the mainstream scientific community. In 1989, the majority of a review panel organized by the US Department of Energy (DOE) found that the evidence for the discovery of a new nuclear process was not persuasive. A second DOE review, convened in 2004 to look at new research, reached conclusions similar to the first.

In 1984, Martin Peng of ORNL proposed an alternate arrangement of the magnet coils that would greatly reduce the aspect ratio while avoiding the erosion issues of the compact tokamak: the spherical tokamak. Instead of wiring each magnet coil separately, he proposed using a single large conductor in the center and wiring the magnets as half-rings off of this conductor. What was once a series of individual rings passing through the hole in the center of the reactor was reduced to a single post, allowing for aspect ratios as low as 1.2. The ST concept appeared to represent an enormous advance in tokamak design. However, it was being proposed during a period when US fusion research budgets were being dramatically scaled back. ORNL was provided with funds to develop a suitable central column built out of a high-strength copper alloy called "Glidcop".
However, they were unable to secure funding to build a demonstration machine, "STX". Failing to build an ST at ORNL, Peng began a worldwide effort to interest other teams in the ST concept and get a test machine built. One way to do this quickly would be to convert a spheromak machine to the spherical tokamak layout. Peng's advocacy also caught the interest of Derek Robinson of the United Kingdom Atomic Energy Authority fusion center at Culham. Robinson was able to gather together a team and secure funding on the order of 100,000 pounds to build an experimental machine, the Small Tight Aspect Ratio Tokamak, or START. Several parts of the machine were recycled from earlier projects, while others were loaned from other labs, including a 40 keV neutral beam injector from ORNL. Construction of START began in 1990; it was assembled rapidly and started operation in January 1991.

In 1991 the Preliminary Tritium Experiment at the Joint European Torus in England achieved the world's first controlled release of fusion power.

In 1992, a major article was published in Physics Today by Robert McCrory at the Laboratory for Laser Energetics outlining the current state of ICF and advocating for a national ignition facility. This was followed by a major review article from John Lindl in 1995, also advocating for NIF. During this time a number of ICF subsystems were being developed, including target manufacturing, cryogenic handling systems, new laser designs (notably the NIKE laser at NRL) and improved diagnostics like time-of-flight analyzers and Thomson scattering. This work was done at the NOVA laser system, General Atomics, Laser Mégajoule and the GEKKO XII system in Japan. Through this work and lobbying by groups like the fusion power associates and John Sethian at NRL, Congress voted to authorize funding for the NIF project in the late nineties.

In the early nineties, theory and experimental work regarding fusors and polywells was published. In response, Todd Rider at MIT developed general models of these devices, arguing that all plasma systems at thermodynamic equilibrium were fundamentally limited. In 1995, William Nevins published a criticism arguing that the particles inside fusors and polywells would build up angular momentum, causing the dense core to degrade. In 1995, the University of Wisconsin–Madison built a large fusor, known as HOMER, which is still in operation. Meanwhile, George H. Miley at Illinois built a small fusor that produced neutrons using deuterium gas and discovered the "star mode" of fusor operation. The following year, the first "US-Japan Workshop on IEC Fusion" was conducted. At this time in Europe, an IEC device was developed as a commercial neutron source by Daimler-Chrysler and NSD Fusion.

In 1996, the Z-machine was upgraded; it was shown to the general public in an August 1998 article in Scientific American. The key attributes of Sandia's Z machine are its 18 million amperes and a discharge time of less than 100 nanoseconds. This generates a magnetic pulse inside a large oil tank, which strikes an array of tungsten wires called a liner. Firing the Z-machine has become a way to test very high energy, high temperature (2 billion degrees) conditions.

In 1996, the Tore Supra created a plasma for two minutes with a current of almost 1 million amperes, driven non-inductively by 2.3 MW of lower hybrid frequency waves. This corresponds to 280 MJ of injected and extracted energy.
This result was possible because of the actively cooled plasma-facing components.

In 1997, JET produced a peak of 16.1 MW of fusion power (65% of input power), with fusion power of over 10 MW sustained for over 0.5 sec. Its successor, the International Thermonuclear Experimental Reactor (ITER), was officially announced as part of a seven-party consortium (six countries and the EU). ITER is designed to produce ten times more fusion power than the power put into the plasma. It is currently under construction in Cadarache, France.

In the late nineties, a team at Columbia University and MIT developed the levitated dipole, a fusion device consisting of a superconducting electromagnet floating in a saucer-shaped vacuum chamber. Plasma swirled around this donut and fused along the center axis.

In the March 8, 2002 issue of the peer-reviewed journal Science, Rusi P. Taleyarkhan and colleagues at the Oak Ridge National Laboratory (ORNL) reported that acoustic cavitation experiments conducted with deuterated acetone (C3D6O) showed measurements of tritium and neutron output consistent with the occurrence of fusion. Taleyarkhan was later found guilty of misconduct, the Office of Naval Research debarred him for 28 months from receiving federal funding, and his name was listed in the "Excluded Parties List".

"Fast ignition" was developed in the late nineties and was part of a push by the Laboratory for Laser Energetics for building the Omega EP system, which was finished in 2008. Fast ignition showed such dramatic power savings that ICF appears to be a useful technique for energy production. There are even proposals to build an experimental facility dedicated to the fast ignition approach, known as HiPER.

In April 2005, a team from UCLA announced it had devised a way of producing fusion using a machine that "fits on a lab bench", using lithium tantalate to generate enough voltage to smash deuterium atoms together. The process, however, does not generate net power (see Pyroelectric fusion). Such a device would be useful in the same sort of roles as the fusor.

In 2006, China's EAST test reactor was completed. It was the first tokamak to use superconducting magnets to generate both the toroidal and poloidal fields.

In the early 2000s, researchers at LANL reasoned that an oscillating plasma could be at local thermodynamic equilibrium. This prompted the POPS and Penning trap designs. At this time, researchers at MIT became interested in fusors for space propulsion and powering space vehicles, specifically developing fusors with multiple inner cages. Greg Piefer graduated from Madison and founded Phoenix Nuclear Labs, a company that developed the fusor into a neutron source for the mass production of medical isotopes. Robert Bussard began speaking openly about the polywell in 2006, attempting to generate interest in the research before his death. In 2008, Taylor Wilson achieved notoriety for achieving nuclear fusion at age 14 with a homemade fusor.

In 2009, a high-energy laser system, the National Ignition Facility (NIF), was finished in the US. It can heat hydrogen atoms to temperatures that in nature exist only in the cores of stars. The new laser was expected to have the ability to produce, for the first time, more energy from controlled, inertially confined nuclear fusion than was required to initiate the reaction.
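The headline results of this period are usually expressed as a fusion gain factor Q = P_fusion / P_heating. A small arithmetic sketch using the JET figures quoted above and ITER's stated design target (the 500 MW / 50 MW pairing is ITER's published design point, included here for illustration):

```python
# Fusion gain bookkeeping: Q = fusion power / heating power.
P_fus_jet = 16.1                  # MW, JET 1997 peak fusion power
Q_jet = 0.65                      # stated above as 65% of input power
P_heat_jet = P_fus_jet / Q_jet    # implied heating power

print(f"JET implied heating power : {P_heat_jet:.1f} MW")   # ~24.8 MW
print(f"JET gain Q                : {Q_jet:.2f}")
print("ITER design gain Q        : 10 (500 MW fusion from 50 MW heating)")
```

Q = 1 (breakeven) separates net-consuming from net-producing plasmas; a practical power station would need Q well above 10 once conversion and drive efficiencies are included.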
In 2010, NIF researchers were conducting a series of "tuning" shots to determine the optimal target design and laser parameters for high-energy ignition experiments with fusion fuel in the following months. Two firing tests were performed on October 31, 2010 and November 2, 2010. In early 2012, NIF director Mike Dunne expected the laser system to generate fusion with net energy gain by the end of 2012; however, this was delayed and not achieved by that date.

Inertial (laser) confinement is being developed at the United States National Ignition Facility (NIF) based at Lawrence Livermore National Laboratory in California, the French Laser Mégajoule, and the planned European Union High Power laser Energy Research (HiPER) facility. NIF reached initial operational status in 2010 and has been in the process of increasing the power and energy of its "shots", with fusion ignition tests to follow. A three-year goal announced in 2009 to produce net energy from fusion by 2012 was missed; in September 2013, however, the facility announced a significant milestone from an August 2013 test that produced more energy from the fusion reaction than had been provided to the fuel pellet. This was reported as the first time this had been accomplished in fusion power research. The facility reported that their next step involved improving the system to prevent the hohlraum from breaking up either asymmetrically or too soon.

A 2012 paper demonstrated that a dense plasma focus had achieved temperatures of 1.8 billion degrees Celsius, sufficient for boron fusion, and that fusion reactions were occurring primarily within the contained plasmoid, a necessary condition for net power. The focus consists of two coaxial cylindrical electrodes made from copper or beryllium and housed in a vacuum chamber containing a low-pressure fusible gas. An electrical pulse is applied across the electrodes, heating the gas into a plasma. The current forms into a minuscule vortex along the axis of the machine, which then kinks into a cage of current with an associated magnetic field. The cage of current and magnetic-field-entrapped plasma is called a plasmoid. The acceleration of the electrons about the magnetic field lines heats the nuclei within the plasmoid to fusion temperatures.

In April 2014, Lawrence Livermore National Laboratory ended the Laser Inertial Fusion Energy (LIFE) program and redirected its efforts towards NIF. In August 2014, Phoenix Nuclear Labs announced the sale of a high-yield neutron generator that could sustain 5×10^11 deuterium fusion reactions per second over a 24-hour period. In October 2014, Lockheed Martin's Skunk Works announced the development of a high-beta fusion reactor that they hoped would yield a functioning 100-megawatt prototype by 2017 and be ready for regular operation by 2022. In January 2015, the polywell was presented at Microsoft Research. In August 2015, MIT announced a tokamak it named the ARC fusion reactor, a design using rare-earth barium-copper oxide (REBCO) superconducting tapes to produce high-magnetic-field coils that, it claimed, produce comparable magnetic field strength in a smaller configuration than other designs.

In October 2015, researchers at the Max Planck Institute of Plasma Physics completed building the largest stellarator to date, named Wendelstein 7-X, and on December 10 they successfully produced the first helium plasma. This was followed on February 3, 2016 by the first hydrogen plasma, thereby starting the experimental journey of this sophisticated new device.
With plasma discharges lasting up to 30 minutes, Wendelstein 7-X will try to demonstrate the essential attribute of a stellarator: continuous operation of a high-temperature hydrogen plasma.

By firing particle beams at targets, many fusion reactions have been tested, while the fuels considered for power have all been light elements, above all the isotopes of hydrogen: deuterium and tritium. Other reactions, like the deuterium-helium-3 reaction or the helium-3-helium-3 reaction, would require a supply of helium-3, which must come either from other nuclear reactions or from extraterrestrial sources. Finally, researchers hope to achieve the p-11B reaction, because it does not directly produce neutrons, though side reactions can.

The easiest nuclear reaction, at the lowest energy, is:

D + T → 4He + n

This reaction is common in research, industrial and military applications, usually as a convenient source of neutrons. Deuterium is a naturally occurring isotope of hydrogen and is commonly available. The large mass ratio of the hydrogen isotopes makes their separation easy compared to the difficult uranium enrichment process. Tritium is a natural isotope of hydrogen, but because it has a short half-life of 12.32 years, it is hard to find and store and is expensive to produce. Consequently, the deuterium-tritium fuel cycle requires the breeding of tritium from lithium using one of the following reactions:

n + 6Li → T + 4He
n + 7Li → T + 4He + n

The reactant neutron is supplied by the D-T fusion reaction shown above, the fusion reaction with the greatest energy yield. The reaction with 6Li is exothermic, providing a small energy gain for the reactor. The reaction with 7Li is endothermic, but does not consume the neutron. At least some 7Li reactions are required to replace the neutrons lost to absorption by other elements. Most reactor designs use the naturally occurring mix of lithium isotopes.

Several drawbacks are commonly attributed to D-T fusion power:
- It produces substantial amounts of neutrons that result in the neutron activation of the reactor materials.
- Only about 20% of the fusion energy yield appears in the form of charged particles, with the remainder carried off by neutrons, which limits the extent to which direct energy conversion techniques might be applied.
- It requires the handling of the radioisotope tritium. Similar to hydrogen, tritium is difficult to contain and may leak from reactors in some quantity. Some estimates suggest that this would represent a fairly large environmental release of radioactivity.

The neutron flux expected in a commercial D-T fusion reactor is about 100 times that of current fission power reactors, posing problems for material design. After a series of D-T tests at JET, the vacuum vessel was sufficiently radioactive that remote handling was required for the year following the tests.

In a production setting, the neutrons would be used to react with lithium in order to create more tritium. This also deposits the energy of the neutrons in the lithium, which would then be transferred to drive electrical production. The lithium neutron absorption reaction protects the outer portions of the reactor from the neutron flux. Newer designs, the advanced tokamak in particular, also use lithium inside the reactor core as a key element of the design. The plasma interacts directly with the lithium, preventing a problem known as "recycling". The advantage of this design was demonstrated in the Lithium Tokamak Experiment.
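The roughly 80/20 split between neutrons and charged particles quoted above follows directly from two-body kinematics: starting from a (nearly) stationary fuel pair, momentum conservation splits the 17.6 MeV D-T yield between the two products in inverse proportion to their masses. A minimal sketch (masses rounded):

```python
# D-T energy partition by momentum conservation: E_i = Q * m_other / (m_n + m_He).
Q = 17.6                 # MeV, total D-T yield
m_n, m_He = 1.009, 4.002 # neutron and helium-4 masses, atomic mass units (rounded)

E_neutron = Q * m_He / (m_n + m_He)   # ~14.1 MeV
E_alpha   = Q * m_n  / (m_n + m_He)   # ~3.5 MeV
print(f"neutron: {E_neutron:.1f} MeV, alpha: {E_alpha:.1f} MeV")
print(f"charged-particle fraction: {E_alpha / Q:.0%}")   # ~20%, as noted above
```

This is also where the 14.1 MeV neutron energy cited in the next section comes from.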
Fusing deuterium with itself (D-D) is the second easiest fusion reaction. The reaction has two branches that occur with nearly equal probability:

D + D → T + 1H
D + D → 3He + n

This reaction is also common in research. The optimum energy to initiate this reaction is 15 keV, only slightly higher than the optimum for the D-T reaction. The first branch does not produce neutrons, but it does produce tritium, so that a D-D reactor will not be completely tritium-free, even though it does not require an input of tritium or lithium. Unless the tritons can be quickly removed, most of the tritium produced would be burned before leaving the reactor, which would reduce the handling of tritium but would produce more neutrons, some of which are very energetic. The neutron from the second branch has an energy of only 2.45 MeV (0.393 pJ), whereas the neutron from the D-T reaction has an energy of 14.1 MeV (2.26 pJ), resulting in a wider range of isotope production and material damage.

When the tritons are removed quickly while the 3He is allowed to react, the fuel cycle is called "tritium suppressed fusion". The removed tritium decays to 3He with a 12.32-year half-life. By recycling the 3He produced from the decay of tritium back into the fusion reactor, the fusion reactor does not require materials resistant to fast 14.1 MeV (2.26 pJ) neutrons. Assuming complete tritium burn-up, the reduction in the fraction of fusion energy carried by neutrons would be only about 18%, so the primary advantage of the D-D fuel cycle is that tritium breeding would not be required. Other advantages are independence from scarce lithium resources and a somewhat softer neutron spectrum. The disadvantage of D-D compared to D-T is that the energy confinement time (at a given pressure) must be 30 times longer and the power produced (at a given pressure and volume) would be 68 times less. Assuming complete removal of tritium and recycling of 3He, only 6% of the fusion energy is carried by neutrons. The tritium-suppressed D-D fusion requires an energy confinement that is 10 times longer than for D-T and a plasma temperature that is twice as high.

Deuterium, helium-3

A second-generation approach to controlled fusion power involves combining helium-3 (3He) and deuterium (2H):

D + 3He → 4He + 1H

This reaction produces a helium-4 nucleus (4He) and a high-energy proton. As with the p-11B aneutronic fusion fuel cycle, most of the reaction energy is released as charged particles, reducing activation of the reactor housing and potentially allowing more efficient energy harvesting (via any of several speculative technologies). In practice, D-D side reactions produce a significant number of neutrons, resulting in p-11B being the preferred cycle for aneutronic fusion.

Proton, boron-11

If aneutronic fusion is the goal, then the most promising candidate may be the hydrogen-1 (proton)/boron reaction, which releases alpha (helium) particles but does not rely on neutron scattering for energy transfer:

1H + 11B → 3 4He

Under reasonable assumptions, side reactions will result in about 0.1% of the fusion power being carried by neutrons. At 123 keV, the optimum temperature for this reaction is nearly ten times higher than that for the pure hydrogen reactions, the energy confinement must be 500 times better than that required for the D-T reaction, and the power density will be 2500 times lower than for D-T. Because the confinement properties of conventional approaches to fusion such as the tokamak and laser pellet fusion are marginal, most proposals for aneutronic fusion are based on radically different confinement concepts, such as the Polywell and the Dense Plasma Focus. Results have been extremely promising:

- "In the October 2013 edition of Nature Communications, a research team led by Christine Labaune at École Polytechnique in Palaiseau, France, reported a new record fusion rate: an estimated 80 million fusion reactions during the 1.5 nanoseconds that the laser fired, which is at least 100 times more than any previous proton-boron experiment."
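The energy yields of the fuel cycles discussed above can be recomputed from standard tabulated nuclear mass excesses, Q = Σ(reactants) − Σ(products). A sketch using rounded values from standard mass-excess tables (so treat the last digit with caution):

```python
# Energy yields from nuclear mass excesses (MeV, rounded tabulated values).
delta = {
    "n": 8.071, "p": 7.289, "D": 13.136, "T": 14.950,
    "He3": 14.931, "He4": 2.425, "B11": 8.668,
}

def Q(reactants, products):
    """Q-value in MeV: total mass excess of reactants minus that of products."""
    return sum(delta[x] for x in reactants) - sum(delta[x] for x in products)

print(f"D + T   -> He4 + n : {Q(['D', 'T'],   ['He4', 'n']):.2f} MeV")   # ~17.59
print(f"D + D   -> T   + p : {Q(['D', 'D'],   ['T', 'p']):.2f} MeV")     # ~4.03
print(f"D + D   -> He3 + n : {Q(['D', 'D'],   ['He3', 'n']):.2f} MeV")   # ~3.27
print(f"D + He3 -> He4 + p : {Q(['D', 'He3'], ['He4', 'p']):.2f} MeV")   # ~18.35
print(f"p + B11 -> 3 He4   : {Q(['p', 'B11'], ['He4'] * 3):.2f} MeV")    # ~8.68
```

The first line reproduces the 17.6 MeV used throughout this article, and the last gives the roughly 8.7 MeV per reaction behind proton-boron experiments such as the Labaune result quoted above.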
Any power station using hot plasma is going to have plasma-facing walls. In even the simplest plasma approaches, the material will be blasted with matter and energy. This leads to a minimum list of considerations, including dealing with:

- A heating and cooling cycle, up to a 10 MW/m² thermal load.
- Neutron radiation, which over time leads to neutron activation and embrittlement.
- High-energy ions leaving at tens to hundreds of electronvolts.
- Alpha particles leaving at millions of electronvolts.
- Electrons leaving at high energy.
- Light radiation (IR, visible, UV, X-ray).

Depending on the approach, these effects may be higher or lower than in typical fission reactors like the pressurized water reactor (PWR). One estimate put the radiation load at 100 times that of a PWR. Materials need to be selected or developed that can withstand these basic conditions. Depending on the approach, however, there may be other considerations such as electrical conductivity, magnetic permeability and mechanical strength. There is also a need for materials whose primary components and impurities do not result in long-lived radioactive wastes.

For long-term use, each atom in the wall is expected to be hit by a neutron and displaced about a hundred times before the material is replaced. High-energy neutrons will produce hydrogen and helium by way of various nuclear reactions; these tend to form bubbles at grain boundaries and result in swelling, blistering or embrittlement. One can choose either a low-Z material, such as graphite or beryllium, or a high-Z material, usually tungsten with molybdenum as a second choice. Use of liquid metals (lithium, gallium, tin) has also been proposed, e.g., by injection of 1–5 mm thick streams flowing at 10 m/s on solid substrates.

If graphite is used, the gross erosion rates due to physical and chemical sputtering would be many meters per year, so one must rely on redeposition of the sputtered material. The location of the redeposition will not exactly coincide with the location of the sputtering, so one is still left with erosion rates that may be prohibitive. An even larger problem is the tritium co-deposited with the redeposited graphite. The tritium inventory in graphite layers and dust in a reactor could quickly build up to many kilograms, representing a waste of resources and a serious radiological hazard in case of an accident. The consensus of the fusion community seems to be that graphite, although a very attractive material for fusion experiments, cannot be the primary plasma-facing component (PFC) material in a commercial reactor.

The sputtering rate of tungsten by the plasma fuel ions is orders of magnitude smaller than that of carbon, and tritium is much less incorporated into redeposited tungsten, making tungsten a more attractive choice.
On the other hand, tungsten impurities in a plasma are much more damaging than carbon impurities, and self-sputtering of tungsten can be high, so it will be necessary to ensure that the plasma in contact with the tungsten is not too hot (a few tens of eV rather than hundreds of eV). Tungsten also has disadvantages in terms of eddy currents and melting in off-normal events, as well as some radiological issues.

Safety and the environment

Nuclear fusion is unlike nuclear fission: fusion requires extremely precise and controlled temperature, pressure, and magnetic field parameters for any net energy to be produced. If a reactor suffers damage or loses even a small degree of required control, fusion reactions and heat generation rapidly cease. Additionally, fusion reactors contain only small amounts of fuel, enough to "burn" for minutes, or in some cases, microseconds. Unless they are actively refueled, the reactions quickly end. Therefore, fusion reactors are considered extremely safe. Runaway reactions cannot occur in a fusion reactor: the plasma is burnt at optimal conditions, and any significant change will quench the reactions. The reaction process is so delicate that this level of safety is inherent. Although the plasma in a fusion power station is expected to have a volume of 1,000 cubic metres (35,000 cu ft) or more, the plasma density is low and the total amount of fusion fuel in the vessel is typically only a few grams. If the fuel supply is shut off, the reaction stops within seconds. In comparison, a fission reactor is typically loaded with enough fuel for several months or years, and no additional fuel is necessary to continue the reaction. It is this large amount of fuel that gives rise to the possibility of a meltdown; nothing analogous exists in a fusion reactor. In the magnetic approach, strong fields are developed in coils that are held in place mechanically by the reactor structure. Failure of this structure could release this tension and allow the magnet to "explode" outward. The severity of this event would be similar to any other industrial accident or an MRI machine quench/explosion, and could be effectively contained by a containment building similar to those used in existing (fission) nuclear generators. The laser-driven inertial approach is generally lower-stress because of the increased size of the reaction chamber. Although failure of the reaction chamber is possible, simply stopping fuel delivery would prevent any sort of catastrophic failure. Most reactor designs rely on liquid lithium as both a coolant and a method for converting stray neutrons from the reaction into tritium, which is fed back into the reactor as fuel. Lithium is highly flammable, and in the case of a fire it is possible that the lithium stored on-site could burn and escape. In this case, the tritium content of the lithium would be released into the atmosphere, posing a radiation risk. Calculations suggest that the total amount of tritium and other radioactive gases in a typical power station would be so small, at about 1 kg, that they would have diluted to legally acceptable limits by the time they reached the station's perimeter fence. The likelihood of small industrial accidents, including the local release of radioactivity and injury to staff, cannot yet be estimated. These would include accidental releases of lithium or tritium or mishandling of decommissioned radioactive components of the reactor itself.
A quench is an abnormal termination of magnet operation that occurs when part of the superconducting coil enters the normal (resistive) state. This can occur because the field inside the magnet is too large, the rate of change of field is too large (causing eddy currents and resultant heating in the copper support matrix), or a combination of the two. More rarely, a defect in the magnet can cause a quench. When this happens, that particular spot is subject to rapid Joule heating from the enormous current, which raises the temperature of the surrounding regions. This pushes those regions into the normal state as well, which leads to more heating in a chain reaction. The entire magnet rapidly becomes normal (this can take several seconds, depending on the size of the superconducting coil). This is accompanied by a loud bang as the energy in the magnetic field is converted to heat, and by rapid boil-off of the cryogenic fluid. The abrupt decrease of current can result in kilovolt inductive voltage spikes and arcing. Permanent damage to the magnet is rare, but components can be damaged by localized heating, high voltages, or large mechanical forces. In practice, magnets usually have safety devices to stop or limit the current when the beginning of a quench is detected. If a large magnet undergoes a quench, the inert vapor formed by the evaporating cryogenic fluid can present a significant asphyxiation hazard to operators by displacing breathable air. A large section of the superconducting magnets in CERN's Large Hadron Collider unexpectedly quenched during start-up operations in 2008, necessitating the replacement of a number of magnets. To mitigate potentially destructive quenches, the superconducting magnets that form the LHC are equipped with fast-ramping heaters, which are activated once a quench event is detected by the complex quench protection system. As the dipole bending magnets are connected in series, each power circuit includes 154 individual magnets, and should a quench event occur, the entire combined stored energy of these magnets must be dumped at once. This energy is transferred into dumps, massive blocks of metal which heat up to several hundred degrees Celsius (because of resistive heating) in a matter of seconds. Although undesirable, a magnet quench is a "fairly routine event" during the operation of a particle accelerator. The natural product of the fusion reaction is a small amount of helium, which is completely harmless to life. Of more concern is tritium, which, like other isotopes of hydrogen, is difficult to retain completely. During normal operation, some amount of tritium will be continually released. There would be no acute danger, but the cumulative effect on the world's population from a fusion economy could be a matter of concern. Although tritium is volatile and biologically active, the health risk posed by a release is much lower than that of most radioactive contaminants, because of tritium's short half-life (12.32 years) and very low decay energy (~14.95 keV), and because it does not bioaccumulate (instead being cycled out of the body as water, with a biological half-life of 7 to 14 days). Current ITER designs are investigating total containment facilities for any tritium. The large flux of high-energy neutrons in a reactor will make the structural materials radioactive. The radioactive inventory at shut-down may be comparable to that of a fission reactor, but there are important differences.
The half-life of the radioisotopes produced by fusion tends to be shorter than that of those from fission, so the inventory decreases more rapidly. Unlike fission reactors, whose waste remains radioactive for thousands of years, most of the radioactive material in a fusion reactor would be the reactor core itself, which would be dangerous for about 50 years, and low-level waste for another 100. Although this waste will be considerably more radioactive during those 50 years than fission waste, the very short half-life makes the process very attractive, as the waste management is fairly straightforward: by 500 years the material would have the same radiotoxicity as coal ash. Additionally, the choice of materials used in a fusion reactor is less constrained than in a fission design, where many materials are required for their specific neutron cross-sections. This allows a fusion reactor to be designed using materials that are selected specifically to be "low activation", materials that do not easily become radioactive. Vanadium, for example, would become much less radioactive than stainless steel. Carbon fiber materials are also low-activation, as well as strong and light, and are a promising area of study for laser-inertial reactors, where a magnetic field is not required. In general terms, fusion reactors would create far less radioactive material than a fission reactor, the material they would create is less damaging biologically, and the radioactivity "burns off" within a time period that is well within existing engineering capabilities for safe long-term waste storage. Although fusion power uses nuclear technology, the overlap with nuclear weapons would be limited. A huge amount of tritium could be produced by a fusion power station; tritium is used in the trigger of hydrogen bombs and in modern boosted fission weapons, but it can also be produced by nuclear fission. The energetic neutrons from a fusion reactor could be used to breed weapons-grade plutonium or uranium for an atomic bomb (for example, by transmutation of 238U to 239Pu, or 232Th to 233U). A study conducted in 2011 assessed the risk of three scenarios:

- Use in a small-scale fusion station: as a result of much higher power consumption, heat dissipation, and a more distinctive design compared to enrichment gas centrifuges, this choice would be much easier to detect and is therefore implausible.
- Modifications to produce weapon-usable material in a commercial facility: the production potential is significant, but no fertile or fissile substances necessary for the production of weapon-usable materials need to be present at a civil fusion system at all. If not shielded, these materials can be detected by their characteristic gamma radiation, and the underlying redesign could be detected by regular design information verifications. In the (technically more feasible) case of solid breeder blanket modules, incoming components would need to be inspected for the presence of fertile material; otherwise, plutonium for several weapons could be produced each year.
- Prioritizing a fast production of weapon-grade material regardless of secrecy: the fastest way to produce weapon-usable material was seen to be modifying a civil fusion power station. Unlike in some nuclear power stations, there is no weapon-compatible material present during civil use.
Even without the need for covert action, this modification would still take about two months to start production, and at least an additional week to generate an amount significant for weapon production. This was seen as enough time to detect a military use and to react with diplomatic or military means. To stop the production, a military destruction of essential parts of the facility, leaving out the reactor itself, would be sufficient. This, together with the intrinsic safety of fusion power, would carry only a low risk of radioactive contamination. Another study concludes that "[..]large fusion reactors – even if not designed for fissile material breeding – could easily produce several hundred kg Pu per year with high weapon quality and very low source material requirements." It was emphasized that the implementation of features for intrinsic proliferation resistance might only be possible at this early phase of research and development. The theoretical and computational tools needed for hydrogen bomb design are closely related to those needed for inertial confinement fusion, but have very little in common with the more scientifically developed magnetic confinement fusion. Large-scale reactors using neutronic fuels (e.g., ITER) and thermal power production (turbine based) are most comparable to fission power from an engineering and economics viewpoint. Both fission and fusion power stations involve a relatively compact heat source powering a conventional steam-turbine-based power station, while producing enough neutron radiation to make activation of the station materials problematic. The main distinction is that fusion power produces no high-level radioactive waste (though activated station materials still need to be disposed of). There are some power station ideas that may significantly lower the cost or size of such stations; however, research in these areas is nowhere near as advanced as in tokamaks. Fusion power commonly proposes the use of deuterium, an isotope of hydrogen, as fuel, and many current designs also use lithium. Assuming a fusion energy output equal to the 1995 global power output of about 100 EJ/yr (= 1 × 10²⁰ J/yr), and that this does not increase in the future, which is unlikely, the known current lithium reserves would last 3,000 years. Lithium from sea water would last 60 million years, however, and a more complicated fusion process using only deuterium from sea water would have fuel for 150 billion years. To put this in context, 150 billion years is close to 30 times the remaining lifespan of the sun, and more than 10 times the estimated age of the universe (a quick check of this arithmetic appears at the end of this section). While fusion power is still in early stages of development, substantial sums have been and continue to be invested in research. In the EU, almost €10 billion was spent on fusion research up to the end of the 1990s, and the new ITER reactor alone is budgeted at €6.6 billion for the timeframe between 2008 and 2020. It is estimated that, up to the point of possible implementation of electricity generation by nuclear fusion, R&D will need further funding totalling around €60–80 billion over a period of 50 years or so (of which €20–30 billion within the EU), based on a report from 2002. Nuclear fusion research receives €750 million (excluding ITER funding) from the European Union, compared with €810 million for sustainable energy research, putting research into fusion power well ahead of that of any single rival technology.
Indeed, the size of the investments and the time frame of the expected results mean that fusion research is almost exclusively publicly funded, while research into other forms of energy can be done by the private sector. In spite of that, a number of start-up companies active in the field of fusion power have managed to attract private money. Fusion power would provide more energy for a given weight of fuel than any fuel-consuming energy source currently in use, and the fuel itself (primarily deuterium) exists abundantly in the Earth's oceans: about 1 in 6,500 hydrogen atoms in seawater is deuterium. Although this may seem a low proportion (about 0.015%), because nuclear fusion reactions are so much more energetic than chemical combustion, and seawater is easier to access and more plentiful than fossil fuels, fusion could potentially supply the world's energy needs for millions of years. Despite being technically non-renewable, fusion power has many of the benefits of renewable energy sources (such as being a long-term energy supply and emitting no greenhouse gases) as well as some of the benefits of resource-limited energy sources such as hydrocarbons and nuclear fission (without reprocessing). Like these currently dominant energy sources, fusion could provide very high power-generation density and uninterrupted power delivery (because it is not dependent on the weather, unlike wind and solar power). Another aspect of fusion energy is that the cost of production does not suffer from diseconomies of scale. The cost of water and wind energy, for example, goes up as the optimal locations are developed first, while further generators must be sited in less ideal conditions. With fusion energy, the production cost will not increase much even if large numbers of stations are built, because the raw resource (seawater) is abundant and widespread. Some problems that are expected to be an issue in this century, such as fresh water shortages, can alternatively be regarded as problems of energy supply. For example, in desalination stations, seawater can be purified through distillation or reverse osmosis; nonetheless, these processes are energy intensive. Even if the first fusion stations are not competitive with alternative sources, fusion could still become competitive if large-scale desalination requires more power than the alternatives are able to provide. A scenario has been presented of the effect of the commercialization of fusion power on the future of human civilization. ITER and the later DEMO are envisioned to bring online the first commercial nuclear fusion energy reactor by 2050. Using this as the starting point, and the history of the uptake of nuclear fission reactors as a guide, the scenario depicts a rapid take-up of nuclear fusion energy starting after the middle of this century. Fusion power could also be used in interstellar space, where solar energy is not available. Because commercial fusion projects are very large and complex, and ongoing funding is a political issue, such projects usually involve cost overruns and missed deadlines. For example, the construction of the National Ignition Facility cost $5 billion and took seven years longer than expected. ITER's expected cost has gone from $5 billion to $20 billion, and the date for full power operation has been put back to 2027, from the original estimate of 2016. And ITER will never supply electricity to the power grid: it is only a "proof of concept" science project.
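Two of the numerical claims above are easy to verify with back-of-the-envelope arithmetic. In the sketch below, the sun's remaining lifespan (roughly 5 billion years) and the universe's age (roughly 13.8 billion years) are standard astronomical estimates, not figures from this article.

```python
# Consistency checks on the fuel-supply figures quoted above.

deuterium_fraction = 1 / 6500
print(f"Deuterium share of hydrogen in seawater: {deuterium_fraction:.4%}")  # ~0.0154%

fuel_years = 150e9          # quoted duration of the deuterium-only fuel supply
sun_remaining_years = 5e9   # standard astronomical estimate
universe_age_years = 13.8e9 # standard astronomical estimate

print(f"Multiples of the sun's remaining lifespan: {fuel_years / sun_remaining_years:.0f}")  # 30
print(f"Multiples of the universe's age: {fuel_years / universe_age_years:.1f}")             # ~10.9
```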
Currently, quantum computers are not stable enough to perform complex operations for an extended period of time. The most powerful quantum computer to date, IBM's Osprey, has 433 quantum bits (qubits), while computer scientists estimate that it would take 1 million qubits to fully realize the technology's potential. However, in 1994, mathematician Peter Shor developed an algorithm that, theoretically, could be used on a powerful quantum computer to crack the RSA encryption protocol commonly used in online transactions. Recently, a research paper suggested that a hybrid classical-quantum computing approach could bring quantum computing forward. Countries, corporations, and venture capitalists are in a race to develop the first robust quantum computer, as such machines could be used both to crack encryption and to secure communications in a quantum world. Investment to commercialize the technology has been significant. To understand why, first you need to understand how a classical computer functions. The basic unit of classical computing is a bit. It can sit in one of two binary states: off or on, often described as 0 or 1. A sequence of eight bits is known as a "byte", which can store much more data than a bit: while each individual bit holds just one of two values, a full byte has 256 unique combinations. In a quantum computer, bits are replaced with quantum bits, or qubits. These exist in what's called a quantum state, where, until they are measured, they can be considered both "on" and "off" at the same time. If our bits were coins, think of qubits as those same coins but mid-coin flip. At some point they will land on heads or tails but, while in the air, they have some probability of being one or the other. In quantum computing, this "mid-coin flip" state is called "superposition". A classical computer will check candidate answers to a problem in sequence, one at a time. But a quantum computer can arrange qubits in ways that maximise the probability of finding the correct outcome. The maths behind these arrangements is referred to as "quantum algorithms", and they are the complicated magic at the heart of quantum computing. The normally time-consuming task of exploring every possible outcome of a problem becomes far more tractable on a quantum machine, and, given all the outcomes, checking which is best (the shortest route, say) is relatively easy with the right algorithm. Today's quantum computers face several obstacles that need to be addressed before they can reliably solve problems that classical computers cannot. The biggest challenge is the instability of qubits, which are built from delicate subatomic particles in fragile quantum states that are easily disrupted. Any interaction with the environment, such as heat, electronic signals, magnetic fields, or cosmic rays, can alter the qubits' state, making it difficult to measure the correct answer. This outside noise masks what is happening in the quantum machine, degrading the computation. Although some interaction with the environment is necessary, it creates reliability issues. That's why most prototype quantum computers operate in a cryogenic chamber just above absolute zero, which is colder than deep space. This keeps the qubits stable for long enough to be usable. And remember, just a small reliability issue can completely change the value of a full byte or introduce errors into a system. Quantum technologies available today can already be used to optimize logistics or monitor brain activity in hospital patients.
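A short sketch makes the bit-versus-qubit bookkeeping above concrete. It uses plain Python and no quantum libraries; the uniform superposition modelled here is the "mid-coin flip" state described in the text.

```python
# Classical bits versus qubits, in plain Python (no quantum libraries).

n = 8
print(f"Distinct states of a classical byte: {2 ** n}")  # 256, but only one at a time

# An n-qubit register in uniform superposition carries 2**n amplitudes,
# one per basis state; measuring it yields each outcome with equal probability.
amplitude = (1 / 2 ** n) ** 0.5       # each amplitude is 1/sqrt(2**n)
prob = amplitude ** 2
print(f"Amplitudes tracked for {n} qubits: {2 ** n}")
print(f"Probability of each measured outcome: {prob:.4%}")  # ~0.39%

# The classical cost of tracking a quantum state doubles with every added
# qubit, which is why a few dozen qubits already strain classical simulation.
for qubits in (10, 20, 30, 40):
    print(f"{qubits} qubits -> {2 ** qubits:,} amplitudes")
```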
The real potential, however, will be unlocked with the development of robust and error-free quantum computers. The competition to develop this technology is driven by both commercial interests and geopolitical rivalry, with tech giants such as Google, IBM, and Microsoft investing heavily, as well as numerous start-ups. Apart from the economic possibilities, governments are concerned about the security implications of quantum computers. The most common method used to secure our digital data relies on the RSA algorithm, which is vulnerable to being cracked by a quantum machine. However, quantum technology might also help us invent new materials and drugs, develop smarter financial trading strategies, and create secure new methods of communication. The potential applications of quantum computing open up entirely new areas of technology and unlock solutions that we could not have achieved in the past. The quantum computers we have today are not good enough to run the complex algorithms needed to crack RSA encryption. We need to make big improvements before we can build quantum computers with enough qubits to solve such problems. Some estimate it may take between 20 and 40 years; others claim we are years, not decades, away from this level of innovation. For several years, the US government has been planning for a quantum world and has been running competitions to find the most secure communication protocols of the future, protocols that would forestall the threat of "Q-day", the day a quantum computer can break today's encryption. The US National Institute of Standards and Technology is in the process of approving new cryptography systems, based on problems other than factorisation, that are secure against both quantum and classical computers. It's a race between quantum computers and the fix, which is to stop using RSA. But whatever new security protocols are finally approved, it will take years for governments, banks, and internet companies to implement them. That is why many security experts argue that every company with sensitive data should be preparing for Q-day today. Even if private sector investment slows, the escalating geopolitical rivalry between the US and China will provide added impetus to develop the world's first robust quantum computer. Neither Washington nor Beijing wants to come second in that particular race.

Source: Financial Times
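To make the RSA discussion above concrete, here is a toy sketch showing why RSA's security rests on the difficulty of factoring. The primes are tiny and purely illustrative; real RSA keys use primes hundreds of digits long and offer security precisely because factoring their product is classically infeasible.

```python
# Toy RSA with tiny primes, to illustrate why factoring breaks it.
# These numbers are for illustration only and offer no security.

from math import gcd

p, q = 61, 53                 # secret primes
n = p * q                     # public modulus (3233)
phi = (p - 1) * (q - 1)       # Euler's totient, derivable only from p and q
e = 17                        # public exponent, coprime with phi
assert gcd(e, phi) == 1
d = pow(e, -1, phi)           # private exponent (modular inverse of e)

msg = 42
cipher = pow(msg, e, n)       # encryption: c = m^e mod n
plain = pow(cipher, d, n)     # decryption: m = c^d mod n
assert plain == msg

# An attacker who can factor n recovers p and q, hence phi, hence d.
# Shor's algorithm factors n efficiently on a large quantum computer,
# which is exactly the "Q-day" threat described above.
```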
Life expectancy is a statistical measure of the average time an organism is expected to live, based on the year of its birth, current age, and other demographic factors such as sex. The most commonly used measure is life expectancy at birth (LEB), which can be defined in two ways. Cohort LEB is the mean length of life of a birth cohort (all individuals born in a given year) and can be computed only for cohorts born so long ago that all their members have died. Period LEB is the mean length of life of a hypothetical cohort assumed to be exposed, from birth through death, to the mortality rates observed in a given year. National LEB figures reported by national agencies and international organizations for human populations are estimates of period LEB. In the Bronze Age and the Iron Age, human LEB was 26 years; the 2010 world LEB was 67.2 years. In recent years, LEB in Eswatini (formerly Swaziland) has been 49, while LEB in Japan has been 83. The combination of high infant mortality and deaths in young adulthood from accidents, epidemics, plagues, wars, and childbirth, before modern medicine was widely available, significantly lowers LEB. For example, a society with a LEB of 40 would have relatively few people dying at exactly 40: most would die before 30 or after 55. In populations with high infant mortality rates, LEB is highly sensitive to the rate of death in the first few years of life. Because of this sensitivity, LEB can be grossly misinterpreted, leading to the belief that a population with a low LEB would necessarily have a small proportion of older people. A different measure, such as life expectancy at age 5 (e5), can be used to exclude the effect of infant mortality and provide a simple measure of overall mortality rates other than in early childhood. Aggregate population measures, such as the proportion of the population in various age groups, are also used alongside individual-based measures like formal life expectancy when analyzing population structure and dynamics. However, pre-modern societies still had universally higher mortality rates and lower life expectancies at every age for both males and females, so long lives were relatively rare. In societies with life expectancies of 30, for instance, a 40-year remaining lifespan at age 5 may not have been uncommon, but a 60-year one was. Until the middle of the 20th century, infant mortality was approximately 40–60% of the total mortality. Excluding child mortality, the average life expectancy during the 12th–19th centuries was approximately 55 years. If a person survived childhood, they had about a 50% chance of living 50–55 years, instead of only 25–40 years. Mathematically, life expectancy is denoted $e_x$,[a] the mean number of subsequent years of life for someone now aged $x$, according to a particular mortality experience. Life expectancy, longevity, and maximum lifespan are not synonymous. Longevity refers to the relatively long lifespan of some members of a population. Maximum lifespan is the age at death for the longest-lived individual of a species.
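A toy calculation makes the skew described above concrete. In the sketch below, the 30%/70% split and the ages of death are invented for illustration; the point is that a LEB near 39 can arise in a population where nobody actually dies near that age, and that e5 strips the infant-mortality effect out.

```python
# Why a LEB near 40 does not mean "people died around 40":
# an illustrative cohort where 30% die at age 1 and 70% die at age 55.

deaths = [(0.30, 1), (0.70, 55)]   # (share of cohort, age at death) - invented
leb = sum(share * age for share, age in deaths)
print(f"Life expectancy at birth: {leb:.1f} years")   # 38.8, yet nobody dies near 39

# Life expectancy at age 5 (e5) excludes infant deaths entirely.
survivors = [(share, age) for share, age in deaths if age > 5]
total = sum(share for share, _ in survivors)
e5 = sum(share * (age - 5) for share, age in survivors) / total
print(f"Remaining life expectancy at age 5: {e5:.0f} years")  # 50
```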
Because life expectancy is an average, a particular person may die many years before or many years after the "expected" survival. Life expectancy is also used in plant and animal ecology and in life tables (also known as actuarial tables). The concept of life expectancy may also be used in the context of manufactured objects, though the related term shelf life is commonly used for consumer products, and the terms "mean time to breakdown" (MTTB) and "mean time between failures" (MTBF) are used in engineering.

Records of human lifespan above age 100 are highly susceptible to errors. For example, the previous world-record holder for human lifespan, Carrie C. White, was uncovered as a simple typographic error after more than two decades. The longest verified lifespan for any human is that of Frenchwoman Jeanne Calment, who is verified as having lived to age 122 years, 164 days, between 21 February 1875 and 4 August 1997. This is referred to as the "maximum life span", which is the upper boundary of life, the maximum number of years any human is known to have lived. A theoretical study suggests that the maximum life expectancy at birth is limited by the human life characteristic value δ, which is around 104 years. According to a study by biologists Bryan G. Hughes and Siegfried Hekimi, there is no evidence for a limit on human lifespan; however, this view has been questioned on the basis of error patterns.

The following information is derived from the 1961 Encyclopædia Britannica and other sources, some with questionable accuracy. Unless otherwise stated, it represents estimates of the life expectancies of the world population as a whole. In many instances, life expectancy varied considerably according to class and gender. Life expectancy at birth takes account of infant mortality and child mortality, but not prenatal mortality.

| Era | Life expectancy at birth in years | Notes |
| Paleolithic | 22–33 | Based on data from modern hunter-gatherer populations, it is estimated that at 15, life expectancy was an additional 39 years (total 54), with a 60% probability of reaching 15. |
| Neolithic | 20–33 | Based on Early Neolithic data, total life expectancy at 15 would be 28–33 years. |
| Bronze Age and Iron Age | 26 | Based on Early and Middle Bronze Age data, total life expectancy at 15 would be 28–36 years. |
| Classical Greece | 25–28 | Based on Athens Agora and Corinth data, total life expectancy at 15 would be 37–41 years. Most Greeks and Romans died young. About half of all children died before adolescence. Those who survived to the age of 30 had a reasonable chance of reaching 50 or 60. The truly elderly, however, were rare. Because so many died in childhood, life expectancy at birth was probably between 20 and 30 years. |
| Classical Rome | 20–33 | Data is lacking, but computer models provide the estimate. If a person survived to age 20, they could expect to live around 30 years more. Life expectancy was probably slightly longer for women than men. When infant mortality is factored out (i.e., counting only the 67–75% who survived the first year), life expectancy is around 34–41 more years (i.e., expected to live to 35–42). When child mortality is factored out (i.e., counting only the 55–65% who survived to age 5), life expectancy is around 40–45 (i.e., age 45–50). The ~50% that reached age 10 could also expect to reach ~45–50; at 15, ~48–54; at 40, ~60; at 50, ~64–68; at 60, ~70–72; at 70, ~76–77. |
| Vedic India | 25–35 | 30 was considered the average lifespan by Vedic texts. |
| Wang clan of China, 1st c. AD – 1749 | 35 | For the 60% that survived the first year (i.e., excluding infant mortality), it rose to ~35. |
| Early Middle Ages (Europe, from the late 5th or early 6th century to the 10th century AD) | 30–35 | Life expectancy for those of both sexes who survived birth averaged about 30–35 years. However, if a Gaulish boy made it past age 20, he might expect to live 25 more years, while a woman at age 20 could normally expect about 17 more years. Anyone who survived until 40 had a good chance at another 15 to 20 years. |
| Pre-Columbian Mesoamerica | >40 | The average Aztec life expectancy was 41.2 years for men and 42.1 for women. |
| Late medieval English peerage | 30–33 | In Europe, around one-third of infants died in their first year. Once children reached the age of 10, their life expectancy was an additional 32.2 years, and for those who survived to 25, the remaining life expectancy was 23.3 years. Such estimates reflected the life expectancy of adult males from the higher ranks of English society in the Middle Ages, and were similar to that computed for monks of the Christ Church in Canterbury during the 15th century. At age 21, life expectancy of an aristocrat was an additional 43 years (total age 64). |
| Early modern England (16th–18th cent.) | 33–40 | 34 years for males in the 18th century. For 15-year-old girls: around the 15th and 16th centuries it was ~33 more years (48 total), and in the 18th it was ~42 (57 total). |
| 18th-century England | 25–40 | For most of the century it ranged from 35 to 40; however, in the 1720s it dipped as low as 25. For 15-year-old girls, it was ~42 (57 total). During the second half of the century it was ~37, while for the elite it passed 40 and approached 50. |
| Pre-Champlain Canadian Maritimes | 60 | Samuel de Champlain wrote that in his visits to Mi'kmaq and Huron communities, he met people over 100 years old. Daniel Paul attributes the remarkable lifespan in the region to low stress and a healthy diet of lean meats, diverse vegetables, and legumes. |
| 18th-century Prussia | 24.7 | For males. |
| 18th-century France | 27.5–30 | For males: 24.8 years in 1740–1749, 27.9 years in 1750–1759, 33.9 years in 1800–1809. |
| 18th-century American colonies | 28 | Massachusetts colonists who reached the age of 50 could expect to live until 71, and those who were still alive at 60 could expect to reach 75. |
| Beginning of the 19th century | ~29 | Demographic research suggests that at the beginning of the 19th century no country in the world had a life expectancy longer than 40 years. India was ~25, while Belgium was around 40. For Europe as a whole, it was ~33 years. |
| Early 19th-century England | 40 | For the 84% who survived the first year (i.e., excluding infant mortality), the average age at death was ~46–48. If they reached 20 it was ~60; if 50, then ~70; if 70, then ~80. For a 15-year-old girl it was ~60–65. For the upper class, LEB rose from ~45 to 50. Put another way, less than half of the people born in the mid-19th century made it past their 50th birthday; in contrast, 97% of the people born in 21st-century England and Wales can expect to live longer than 50 years. |
| 19th-century British India | 25.4 | |
| 19th-century world average | 28.5–32 | Over the course of the century: Europe rose from ~33 to 43, the Americas from ~35 to 41, Oceania from ~35 to 48, Asia ~28, Africa 26. In 1820s France, LEB was ~38, and for the 80% that survived the first year, it rose to ~47.
For Moscow serfs, LEB was ~34, and for the 66% that survived the first year, it rose to ~36. Western Europe in 1830 was ~33 years, while for the people of Hau-Lou in China it was ~40. The life expectancy of a 10-year-old in Sweden rose from ~44 to ~54. |
| 1900 world average | 31–32 | Around 48 in Oceania, 43 in Europe, and 41 in the Americas; ~47 in the U.S. Around 48 for 15-year-old girls in England. |
| 1950 world average | 45.7–48 | Around 60 years in Europe, North America, Oceania, Japan, and parts of South America, but only 41 in Asia and 36 in Africa. Norway, at 72, had double the African figure, while in Mali it was merely 26. |
| 2019–2020 world average | 72.6–73.2 | Females: 75.6 years; males: 70.8 years. |

Life expectancy increases with age as the individual survives the higher mortality rates associated with childhood. For instance, the table gives the life expectancy at birth among 13th-century English nobles as 30; having survived to the age of 21, a male member of the English aristocracy in this period could expect to live considerably longer. 17th-century English life expectancy was only about 35 years, largely because infant and child mortality remained high. Life expectancy was under 25 years in the early Colony of Virginia, and in seventeenth-century New England about 40 percent died before reaching adulthood. During the Industrial Revolution, the life expectancy of children increased dramatically: the under-5 mortality rate in London decreased from 74.5% in 1730–1749 to 31.8% in 1810–1829. Public health measures are credited with much of the recent increase in life expectancy. During the 20th century, despite a brief drop due to the 1918 flu pandemic, the average lifespan in the United States increased by more than 30 years, of which 25 years can be attributed to advances in public health. The life expectancy for people who reach adulthood is correspondingly greater, since infant and child mortality no longer weighs on the average: for instance, 16th-century English and Welsh women who reached 15 years may have had a life expectancy of around 35 more years (50 total). Human beings are expected to live on average 30–40 years in Eswatini and 82.6 years in Japan, though the latter's recorded life expectancy may have been very slightly increased by counting many infant deaths as stillborn. An analysis published in 2011 in The Lancet attributes Japanese life expectancy to equal opportunities and public health, as well as diet. There are great variations in life expectancy between different parts of the world, mostly caused by differences in public health, medical care, and diet. The impact of AIDS on life expectancy is particularly notable in many African countries. According to projections made by the United Nations (UN) in 2002, life expectancy at birth for 2010–2015 in the affected countries would have been noticeably higher if HIV/AIDS did not exist. Actual life expectancy in Botswana declined from 65 in 1990 to 49 in 2000, before increasing to 66 in 2011. In South Africa, life expectancy was 63 in 1990, 57 in 2000, and 58 in 2011. And in Zimbabwe, life expectancy was 60 in 1990, 43 in 2000, and 54 in 2011. During the last 200 years, African countries have generally not had the same improvements in mortality rates that have been enjoyed by countries in Asia, Latin America, and Europe. In the United States, African-American people have shorter life expectancies than their European-American counterparts. For example, white Americans born in 2010 were expected to live until age 78.9, but black Americans only until age 75.1. This 3.8-year gap, however, is the lowest it has been since at least 1975.
The greatest difference was 7.1 years, in 1993. In contrast, Asian-American women live the longest of all ethnic groups in the United States, with a life expectancy of 85.8 years. The life expectancy of Hispanic Americans is 81.2 years. According to recent government reports in the US, life expectancy in the country dropped again because of the rise in suicide and drug overdose rates. The Centers for Disease Control and Prevention (CDC) found nearly 70,000 more Americans died in 2017 than in 2016, with rising rates of death among 25- to 44-year-olds. Cities also experience a wide range of life expectancy based on neighborhood breakdowns. This is largely due to economic clustering and poverty conditions that tend to associate based on geographic location; multi-generational poverty found in struggling neighborhoods also contributes. In United States cities such as Cincinnati, the life expectancy gap between low-income and high-income neighborhoods reaches 20 years. Economic circumstances also affect life expectancy. For example, in the United Kingdom, life expectancy in the wealthiest areas is several years higher than in the poorest areas. This may reflect factors such as diet and lifestyle, as well as access to medical care. It may also reflect a selective effect: people with chronic life-threatening illnesses are less likely to become wealthy or to reside in affluent areas. In Glasgow, the disparity is amongst the highest in the world: life expectancy for males in the heavily deprived Calton area stands at 54, which is 28 years less than in the affluent area of Lenzie, only 8 km away. A 2013 study found a pronounced relationship between economic inequality and life expectancy. However, a study by José A. Tapia Granados and Ana Diez Roux at the University of Michigan found that life expectancy actually increased during the Great Depression, and during recessions and depressions in general. The authors suggest that when people work more intensely during prosperous economic times, they undergo more stress, exposure to pollution, and a greater likelihood of injury, among other longevity-limiting factors. Life expectancy is also likely to be affected by exposure to high levels of highway air pollution or industrial air pollution. This is one way that occupation can have a major effect on life expectancy: coal miners (and, in prior generations, asbestos cutters) often have lower life expectancies than average. Other factors affecting an individual's life expectancy are genetic disorders, drug use, tobacco smoking, excessive alcohol consumption, obesity, access to health care, diet, and exercise. In the present, female human life expectancy is greater than that of males, despite females having higher morbidity rates (see the health-survival paradox). There are many potential reasons for this. Traditional arguments tend to favor sociological and environmental factors: historically, men have generally consumed more tobacco, alcohol, and drugs than women in most societies, and are more likely to die from many associated diseases such as lung cancer, tuberculosis, and cirrhosis of the liver. Men are also more likely to die from injuries, whether unintentional (such as occupational, war, or car accidents) or intentional (suicide), and are more likely than women to die from most of the leading causes of death (some already stated above).
Some of these in the United States include cancer of the respiratory system, motor vehicle accidents, suicide, cirrhosis of the liver, emphysema, prostate cancer, and coronary heart disease. These far outweigh the female mortality rate from breast cancer and cervical cancer. In the past, mortality rates for females in child-bearing age groups were higher than for males at the same age. A paper from 2015 found that female fetuses have a higher mortality rate than male fetuses; this finding contradicts papers dating from 2002 and earlier that attribute higher in-utero mortality rates to males. Among the smallest premature babies (those under 2 pounds or 900 g), females have a higher survival rate. At the other extreme, about 90% of individuals aged 110 are female. The difference in life expectancy between men and women in the United States dropped from 7.8 years in 1979 to 5.3 years in 2005, with women expected to live to age 80.1 in 2005. Data from the UK shows the gap in life expectancy between men and women decreasing in later life; this may be attributable to the effects of infant mortality and young adult death rates. Some argue that shorter male life expectancy is merely another manifestation of the general rule, seen in all mammal species, that larger-sized individuals within a species tend, on average, to have shorter lives. Another view is that the biological difference occurs because women have more resistance to infections and degenerative diseases. In her extensive review of the existing literature, Kalben concluded that the fact that women live longer than men was observed at least as far back as 1750, and that, with relatively equal treatment, today males in all parts of the world experience greater mortality than females. Kalben's study, however, was restricted to data from Western Europe alone, where the demographic transition occurred relatively early. United Nations statistics from the mid-twentieth century onward show that, in all parts of the world, females have a higher life expectancy at age 60 than males. Of 72 selected causes of death, only 6 yielded greater female than male age-adjusted death rates in 1998 in the United States. Except for birds, in almost all of the animal species studied, males have higher mortality than females. Evidence suggests that the sex mortality differential in people is due to both biological/genetic and environmental/behavioral risk and protective factors. One recent suggestion is that mitochondrial mutations that shorten lifespan continue to be expressed in males (but less so in females) because mitochondria are inherited only through the mother. By contrast, natural selection weeds out mitochondria that reduce female survival; therefore such mitochondria are less likely to be passed on to the next generation. This suggests that females would tend to live longer than males; the authors claim this is a partial explanation. Another explanation is the unguarded X hypothesis: according to this hypothesis, one reason the average lifespan of males is not as long as that of females (by 18% on average, according to the study) is that males have a Y chromosome, which cannot protect an individual from harmful genes expressed on the X chromosome, whereas a duplicate X chromosome, as present in female organisms, can ensure harmful genes are not expressed.
Before the Industrial Revolution, men lived longer than women on average. In developed countries, starting around 1880, death rates decreased faster among women, leading to differences in mortality rates between males and females; before 1880, death rates were the same. In people born after 1900, the death rate of 50- to 70-year-old men was double that of women of the same age. Men may be more vulnerable to cardiovascular disease than women, but this susceptibility was evident only after deaths from other causes, such as infections, started to decline. Most of the difference in life expectancy between the sexes is accounted for by differences in the rate of death by cardiovascular diseases among persons aged 50–70. The heritability of lifespan is estimated to be less than 10%, meaning the majority of variation in lifespan is attributable to differences in environment rather than to genetic variation. However, researchers have identified regions of the genome which can influence the length of life and the number of years lived in good health. For example, a genome-wide association study of 1 million lifespans found 12 genetic loci which influenced lifespan by modifying susceptibility to cardiovascular and smoking-related disease. The locus with the largest effect is APOE: carriers of the APOE ε4 allele live approximately one year less than average (per copy of the ε4 allele), mainly due to increased risk of Alzheimer's disease. In July 2020, scientists identified 10 genomic loci with consistent effects across multiple lifespan-related traits, including healthspan, lifespan, and longevity. The genes affected by variation in these loci highlighted haem metabolism as a promising candidate for further research within the field. This study suggests that high levels of iron in the blood likely reduce, and genes involved in metabolising iron likely increase, healthy years of life in humans. A follow-up study, which investigated the genetics of frailty and self-rated health in addition to healthspan, lifespan, and longevity, also highlighted haem metabolism as an important pathway, and found that genetic variants which lower blood protein levels of LPA and VCAM1 were associated with increased healthy lifespan. In developed countries, the number of centenarians is increasing at approximately 5.5% per year, which means the centenarian population doubles every 13 years, pushing it from some 455,000 in 2009 to a projected 4.1 million in 2050. Japan is the country with the highest ratio of centenarians (347 for every 1 million inhabitants in September 2010); Shimane Prefecture had an estimated 743 centenarians per million inhabitants. In the United States, the number of centenarians grew from 32,194 in 1980 to 71,944 in November 2010 (232 centenarians per million inhabitants). Mental illness is reported to occur in approximately 18% of the American population. The mentally ill have been shown to suffer a 10- to 25-year reduction in life expectancy, and the reduction of lifespan in the mentally ill population compared with the mentally healthy population has been widely studied and documented. The greater mortality of people with mental disorders may be due to death from injury, from co-morbid conditions, or from medication side effects. For instance, psychiatric medications can increase the risk of developing diabetes, and the psychiatric medication olanzapine can increase the risk of developing agranulocytosis, among other comorbidities.
Psychiatric medicines also affect the gastrointestinal tract: the mentally ill have four times the risk of gastrointestinal disease. During the COVID-19 pandemic that began in 2020, researchers also found an increased risk of death among the mentally ill. The life expectancy of people with diabetes, who make up 9.3% of the U.S. population, is reduced by roughly ten to twenty years. People over 60 years old with Alzheimer's disease have a median survival of about 3 to 10 years. Other demographics that tend to have a lower life expectancy than average include transplant recipients and the obese. Education at all levels has been shown to be strongly associated with increased life expectancy. This association may be due partly to higher income, which can lead to increased life expectancy. Despite the association, among identical twin pairs with different education levels there is only weak evidence of a relationship between educational attainment and adult mortality. According to a paper from 2015, the mortality rate for the Caucasian population in the United States from 1993 to 2001 was four times higher for those who did not complete high school than for those who had at least 16 years of education. In fact, within the U.S. adult population, those who have less than a high school education have the shortest life expectancies. Pre-school education also plays a role in life expectancy: high-quality early-childhood education has been found to have positive effects on health. Researchers discovered this by analyzing the results of the Carolina Abecedarian Project (ABC), finding that the disadvantaged children who were randomly assigned to treatment had lower instances of risk factors for cardiovascular and metabolic diseases in their mid-30s. Various species of plants and animals, including humans, have different lifespans. Evolutionary theory states that organisms that, by virtue of their defenses or lifestyle, live for long periods and avoid accidents, disease, predation, etc. are likely to have genes that code for slow aging, which often translates to good cellular repair. One theory is that if predation or accidental deaths prevent most individuals from living to an old age, there will be less natural selection to increase the intrinsic life span. That finding was supported in a classic study of opossums by Austad; however, the opposite relationship was found in an equally prominent study of guppies by Reznick. One prominent and very popular theory states that lifespan can be lengthened by a tight budget for food energy, called caloric restriction. Caloric restriction observed in many animals (most notably mice and rats) shows a near doubling of life span from a very limited calorific intake. Support for the theory has been bolstered by several studies linking lower basal metabolic rate to increased life expectancy; this may help explain why animals like giant tortoises can live so long. Studies of humans with life spans of at least 100 years have shown a link to decreased thyroid activity, resulting in a lowered metabolic rate. In a broad survey of zoo animals, no relationship was found between an animal's investment in reproduction and its life span. The starting point for calculating life expectancy is the age-specific death rates of the population members.
If a large amount of data is available, a statistical population can be created that allows the age-specific death rates to be taken simply as the mortality rates actually experienced at each age (the number of deaths divided by the number of years "exposed to risk" in each data cell). However, it is customary to apply smoothing to iron out, as much as possible, the random statistical fluctuations from one year of age to the next. In the past, a very simple model used for this purpose was the Gompertz function, but more sophisticated methods are now in common use. While the data required are easily identified in the case of humans, the computation of the life expectancy of industrial products and wild animals involves more indirect techniques. The life expectancy and demography of wild animals are often estimated by capturing, marking, and recapturing them. The life of a product, more often termed shelf life, is computed using similar methods. In the case of long-lived components, such as those used in critical applications (for example, in aircraft), methods like accelerated aging are used to model the life expectancy of a component. The age-specific death rates are calculated separately for separate groups of data that are believed to have different mortality rates (such as males and females, and perhaps smokers and non-smokers, if data are available separately for those groups) and are then used to calculate a life table, from which one can calculate the probability of surviving to each age. In actuarial notation, the probability of surviving from age $x$ to age $x+t$ is denoted ${}_tp_x$, and the probability of dying during age $x$ (i.e., between ages $x$ and $x+1$) is denoted $q_x$. For example, if 10% of a group of people alive at their 90th birthday die before their 91st birthday, the age-specific death probability at 90 would be 10%. This is a probability, not a mortality rate: a mortality rate divides deaths by person-years of exposure, whereas $q_x$ is the probability that a life aged exactly $x$ dies within the year. The expected future lifetime of a life aged $x$ in whole years (the curtate expected lifetime of $(x)$) is denoted by the symbol $e_x$.[a] It is the conditional expected future lifetime (in whole years), assuming survival to age $x$. If $K(x)$ denotes the curtate future lifetime at age $x$,

$$e_x = \sum_{k=0}^{\infty} k \, \Pr[K(x) = k] = \sum_{k=0}^{\infty} k \,\, {}_{k}p_x \, q_{x+k}.$$

Substituting in the sum and simplifying gives the equivalent formula:

$$e_x = \sum_{k=1}^{\infty} {}_{k}p_x.$$

If the assumption is made that, on average, people live half a year in the year of death, the complete expectation of future lifetime at age $x$ is

$$\mathring{e}_x \approx e_x + \tfrac{1}{2}.$$

Life expectancy is by definition an arithmetic mean. It can also be calculated by integrating the survival curve from 0 to positive infinity (or equivalently to the maximum lifespan, sometimes called "omega"). For an extinct or completed cohort (all people born in the year 1850, for example), it can of course simply be calculated by averaging the ages at death. For cohorts with some survivors, it is estimated by using mortality experience in recent years; such estimates are called period cohort life expectancies. It is important to note that this statistic is usually based on past mortality experience and assumes that the same age-specific mortality rates will continue. Thus, such life expectancy figures need to be adjusted for temporal trends before calculating how long a currently living individual of a particular age is expected to live. Period life expectancy remains a commonly used statistic to summarize the current health status of a population.
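As a worked illustration of the curtate formula $e_x = \sum_k {}_kp_x$ just derived, the sketch below computes $e_{60}$ from a small, made-up mortality table. The $q_x$ values are invented for illustration only; real life tables are built from observed death rates and run to very high ages.

```python
# Worked example of the curtate life expectancy e_x = sum over k of kp_x.
# The q_x values below are invented for illustration, not real mortality data.

q = {age: min(1.0, 0.01 + 0.002 * (age - 60)) for age in range(60, 111)}
q[110] = 1.0   # close the table: assume no one survives past the final age

def curtate_e(x, q):
    """e_x: expected whole years of future life for someone aged x."""
    e, survival = 0.0, 1.0
    for age in range(x, max(q) + 1):
        survival *= 1.0 - q[age]   # survival is now kp_x, with k = age - x + 1
        e += survival              # e_x = sum over k >= 1 of kp_x
    return e

e60 = curtate_e(60, q)
print(f"e_60 = {e60:.1f} whole years")
print(f"Complete expectation of life at 60 ~ {e60 + 0.5:.1f} years")  # add half a year
```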
However, for some purposes, such as pensions calculations, it is usual to adjust the life table used by assuming that age-specific death rates will continue to decrease over the years, as they have usually done in the past. That is often done by simply extrapolating past trends, but some models exist to account for the evolution of mortality, such as the Lee–Carter model. As discussed above, on an individual basis, some factors correlate with longer life. Factors that are associated with variations in life expectancy include family history, marital status, economic status, physique, exercise, diet, drug use (including smoking and alcohol consumption), disposition, education, environment, sleep, climate, and health care. To assess the quality of these additional years of life, "healthy life expectancy" has been calculated for the last 30 years. Since 2001, the World Health Organization has published a statistic called Healthy Life Expectancy (HALE), defined as the average number of years that a person can expect to live in "full health", excluding the years lived in less than full health due to disease and/or injury. Since 2004, Eurostat has published annual statistics called Healthy Life Years (HLY), based on reported activity limitations. The United States uses similar indicators in the framework of its national health promotion and disease prevention plan, "Healthy People 2010". More and more countries are using health expectancy indicators to monitor the health of their populations. The long-standing quest for longer life led in the 2010s to a more promising focus on increasing HALE, also known as a person's "healthspan". Besides the benefit of keeping people healthier longer, a goal is to reduce health-care expenses on the many diseases associated with cellular senescence. Approaches being explored include fasting, exercise, and senolytic drugs. Forecasting life expectancy and mortality forms an important subdivision of demography. Future trends in life expectancy have huge implications for old-age support programs like U.S. Social Security and pensions, since the cash flow in these systems depends on the number of recipients who are still living (along with the rate of return on the investments or the tax rate in pay-as-you-go systems). With longer life expectancies, the systems see increased cash outflow; if the systems underestimate increases in life expectancy, they will be unprepared for the large payments that will occur as humans live longer and longer. Life expectancy forecasting is usually based on two different approaches: forecasting life expectancy directly, or forecasting age-specific death rates and computing life expectancy from the results. Life expectancy is one of the factors in measuring the Human Development Index (HDI) of each nation, along with adult literacy, education, and standard of living. Life expectancy is also used in describing the physical quality of life of an area or, for an individual, when the value of a life settlement is determined (a life settlement being the sale of a life insurance policy for a cash asset). Disparities in life expectancy are often cited as demonstrating the need for better medical care or increased social support. A strongly associated indirect measure is income inequality. For the top 21 industrialized countries, if each person is counted equally, life expectancy is lower in more unequal countries (r = −0.907). There is a similar relationship among states in the US (r = −0.620). Life expectancy is commonly confused with the average age an adult could expect to live.
This confusion may create the expectation that an adult would be unlikely to exceed an average life expectancy, even though, in all statistical probability, an adult who has already avoided many statistical causes of adolescent mortality should be expected to outlive the average life expectancy calculated from birth. To estimate the life expectancy of an adult, one must instead consider life expectancy conditional on having survived childhood. Life expectancy can change dramatically after childhood, even in preindustrial times, as is demonstrated by the Roman life expectancy table, which estimates life expectancy to be 25 years at birth but 53 years upon reaching age 25. Studies like "Plymouth Plantation: Dead at Forty" and "Life Expectancy by Age, 1850–2004" similarly show a dramatic increase in life expectancy once adulthood was reached.

Life expectancy differs from maximum life span. Life expectancy is an average for all people in the population, including those who die shortly after birth, those who die in early adulthood (for example, in childbirth or war), and those who live unimpeded until old age. Maximum lifespan is an individual-specific concept; it is therefore an upper bound rather than an average. Science author Christopher Wanjek asks, "Has the human race increased its life span? Not at all. This is one of the biggest misconceptions about old age." The maximum life span, or oldest age a human can live, may be constant. Further, there are many examples of people living significantly longer than the average life expectancy of their time period, such as Socrates (71), Saint Anthony the Great (105), Michelangelo (88), and John Adams, second president of the United States (90).

However, anthropologist John D. Hawks criticizes the popular conflation of life expectancy and maximum life span when popular science writers falsely imply that the average adult human does not live longer than their ancestors did. He writes, "[a]ge-specific mortality rates have declined across the adult lifespan. A smaller fraction of adults die at 20, at 30, at 40, at 50, and so on across the lifespan. As a result, we live longer on average... In every way we can measure, human lifespans are longer today than in the immediate past, and longer today than they were 2000 years ago... age-specific mortality rates in adults really have reduced substantially."

a. ^ In standard actuarial notation, $e_x$ refers to the expected future lifetime of $(x)$ in whole years, while $\mathring{e}_x$ (with a ring above the e) denotes the complete expected future lifetime of $(x)$, including the fraction.

Senescence, or biological aging, is the gradual deterioration of functional characteristics in living organisms. The word senescence can refer either to cellular senescence or to senescence of the whole organism. Organismal senescence involves an increase in death rates and/or a decrease in fecundity with increasing age, at least in the latter part of an organism's life cycle.

Life extension is the concept of extending the human lifespan, either modestly through improvements in medicine or dramatically by increasing the maximum lifespan beyond its generally settled limit of 125 years.

The word "longevity" is sometimes used as a synonym for "life expectancy" in demography.
However, the term longevity is sometimes meant to refer only to especially long-lived members of a population, whereas life expectancy is always defined statistically as the average number of years remaining at a given age. For example, a population's life expectancy at birth is the same as the average age at death for all people born in the same year. Longevity is best thought of as a term for general audiences meaning "typical length of life"; specific statistical definitions should be clarified when necessary.

Maximum life span is a measure of the maximum amount of time one or more members of a population have been observed to survive between birth and death. The term can also denote an estimate of the maximum amount of time that a member of a given species could survive between birth and death, provided circumstances that are optimal to that member's longevity.

In actuarial science and demography, a life table is a table which shows, for each age, the probability that a person of that age will die before their next birthday. In other words, it represents the survivorship of people from a certain population. It can also be described as a long-term mathematical way to measure a population's longevity. Tables have been created by demographers including Graunt, Reed and Merrell, Keyfitz, and Greville.

Biodemography is a multidisciplinary approach integrating biological knowledge with demographic research on human longevity and survival. Biodemographic studies are important for understanding the driving forces of the current longevity revolution, forecasting the future of human longevity, and identifying new strategies for further increasing healthy and productive life span.

The Gompertz–Makeham law states that the human death rate is the sum of an age-dependent component, which increases exponentially with age, and an age-independent component. In a protected environment where external causes of death are rare, the age-independent mortality component is often negligible; in this case the formula simplifies to a Gompertz law of mortality. In 1825, Benjamin Gompertz proposed an exponential increase in death rates with age. (A numerical sketch of this law appears at the end of this section.)

In demography and medical geography, epidemiological transition is a theory which "describes changing population patterns in terms of fertility, life expectancy, mortality, and leading causes of death." For example, a phase of development marked by a sudden increase in population growth rates, brought about by improved food security and innovations in public health and medicine, can be followed by a re-leveling of population growth due to subsequent declines in fertility rates. Such a transition can account for the replacement of infectious diseases by chronic diseases over time, due to increased life span as a result of improved health care and disease prevention. This theory was originally posited by Abdel Omran in 1971.

Enquiry into the evolution of ageing, or aging, aims to explain why a detrimental process such as ageing would evolve, and why there is so much variability in the lifespans of organisms. The classical theories of evolution suggest that environmental factors, such as predation, accidents, disease, and starvation, ensure that most organisms living in natural settings will not live until old age, and so there will be very little pressure to conserve genetic changes that increase longevity.
Natural selection will instead strongly favor genes which ensure early maturation and rapid reproduction, and the selection for genetic traits which promote molecular and cellular self-maintenance will decline with age for most organisms.

Aging in dogs varies from breed to breed and affects the dog's health and physical ability. As with humans, advanced years often bring changes in a dog's ability to hear, see, and move about easily. Skin condition, appetite, and energy levels often degrade with geriatric age, and medical conditions such as cancer, kidney failure, arthritis, dementia, and joint conditions may appear, along with other signs of old age.

James W. Vaupel was an American scientist in the fields of aging research, biodemography, and formal demography. He was instrumental in developing and advancing the idea of the plasticity of longevity, and pioneered research on the heterogeneity of mortality risks and on the deceleration of death rates at the highest ages.

Ageing (British English) or aging (American English) is the process of becoming older. The term refers mainly to humans, many other animals, and fungi; by contrast, bacteria, perennial plants, and some simple animals are potentially biologically immortal. In a broader sense, ageing can refer to single cells within an organism which have ceased dividing, or to the population of a species.

In 2006, life expectancy for males in Cyprus was 79 years and for females 82 years. Infant mortality in 2002 was 5 per 1,000 live births, comparing favourably with most developed nations.

The disposable soma theory of aging states that organisms age due to an evolutionary trade-off between growth, reproduction, and DNA repair maintenance. Formulated by Thomas Kirkwood, the disposable soma theory holds that an organism has only a limited amount of resources to allocate to its various cellular processes. Therefore, a greater investment in growth and reproduction results in reduced investment in DNA repair maintenance, leading to increased cellular damage, shortened telomeres, accumulation of mutations, compromised stem cells, and, ultimately, senescence. Although many animal and human models appear to support this theory, parts of it are still controversial. Specifically, while the evolutionary trade-off between growth and aging has been well established, the relationship between reproduction and aging is still without scientific consensus, and the cellular mechanisms remain largely undiscovered.

The male-female health-survival paradox, also known as the morbidity-mortality paradox or gender paradox, is the phenomenon in which women experience more medical conditions and disability during their lives, yet unexpectedly live longer than men. This paradox, in which women experience greater morbidity (disease) but lower mortality (death) than men, is unusual because experiencing disease is expected to increase the likelihood of death; in this case, the part of the population that experiences more disease and disability is the one that lives longer.

The Taeuber Paradox is a paradox in demography which results from two seemingly contradictory expectations about the gain in life expectancy produced by a population-wide decrease in mortality, e.g. from curing or reducing the mortality of a disease.

This timeline lists notable events in the history of research into senescence or biological aging. People have long been interested in making their lives longer and healthier.
The oldest Egyptian, Indian, and Chinese books contain reasoning about aging. Ancient Egyptians used garlic in large quantities to extend their lifespan. Hippocrates, in his Aphorisms, and Aristotle (384–322 BC), in On Youth and Old Age, expressed their opinions about the causes of old age and gave advice about lifestyle. The medieval Persian physician Ibn Sina, known in the West as Avicenna, summarized the achievements of earlier generations on this issue.

Japan has the highest life expectancy in the world, but the reasons, says an analysis, have as much to do with equality and public health measures as with diet... According to a paper in a Lancet series on healthcare in Japan... the reduction in health inequalities, together with improved average population health, was partly attributable to equal educational opportunities and financial access to care.
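The Gompertz–Makeham law discussed earlier in this section lends itself to a quick numerical illustration. The sketch below evaluates the hazard $\mu(x) = A e^{Bx} + C$ and the implied survival curve; the parameter values are invented for illustration, not fitted to any real population.

```python
import math

# Gompertz-Makeham hazard: mu(x) = A*exp(B*x) + C.
# A and B define the age-dependent Gompertz part; C is the
# age-independent Makeham term. Values below are illustrative only.
A, B, C = 5e-5, 0.085, 5e-4

def hazard(x):
    return A * math.exp(B * x) + C

def survival(x, dx=0.01):
    """S(x) = exp(-integral of mu from 0 to x), via a simple Riemann sum."""
    steps = int(x / dx)
    integral = sum(hazard(i * dx) for i in range(steps)) * dx
    return math.exp(-integral)

for age in (0, 40, 80):
    print(f"age {age}: hazard {hazard(age):.4f}, survival {survival(age):.3f}")
```

With C set to zero, the same code reproduces the pure Gompertz law that applies in protected environments where external causes of death are rare.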
You will need to find the slope of a line for pre-algebra and algebra courses. The easiest way to do so is to plug two points of the line into the slope formula. This article will show you how!

1. Understand the slope formula. Slope is defined as "rise over run," with "rise" indicating the vertical distance between two points and "run" indicating the horizontal distance between two points.
2. Get a line of which you want to know the slope. Make sure that the line is straight. You can't find the slope of a line that isn't straight.
3. Pick any two coordinates that the line goes through. Coordinates are the x and y points written as (x, y). It doesn't matter which points you pick, as long as they're different points on the same line.
4. Pick which point's coordinates are dominant in your equation. It doesn't matter which one you pick, as long as it stays the same throughout the calculation. The dominant coordinates will be x1 and y1. The other coordinates will be x2 and y2.
5. Set up the equation using the y-coordinates on top and the x-coordinates on the bottom: m = (y1 − y2) / (x1 − x2).
6. Subtract the two y-coordinates from one another.
7. Subtract the two x-coordinates from one another.
8. Divide the y-coordinates' result by the x-coordinates' result. Reduce the fraction if at all possible. The reduced fraction is your final answer!
9. Double-check to see that your number makes sense.
- Lines that go up from left to right always have positive slopes, even if they're fractions.
- Lines that go down from left to right always have negative slopes, even if they're fractions.

Tips:
- Once you choose your dominant point's coordinates, do not switch them around, or you will get the wrong answer.
- You have found "m" in the line formula, y = mx + b, with "y" being the y-coordinate of any given point, "m" being the slope, "x" being the x-coordinate that corresponds with the y-coordinate of any given point, and "b" being the y-intercept.
- Do not confuse the slope formula with other formulas, such as the distance formula, the equation of a line, or the midpoint formula.
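The procedure above is mechanical enough to express in a few lines of code. The sketch below mirrors steps 5 through 8; the function and variable names are our own, not part of the article.

```python
from fractions import Fraction

def slope(p1, p2):
    """Slope of the line through points p1 and p2, as a reduced fraction."""
    (x1, y1), (x2, y2) = p1, p2
    if x1 == x2:
        raise ValueError("vertical line: slope is undefined")
    return Fraction(y1 - y2, x1 - x2)   # rise over run, auto-reduced

print(slope((2, 4), (6, 12)))  # 2  (goes up left to right: positive)
print(slope((0, 5), (5, 0)))   # -1 (goes down left to right: negative)
```

Using Fraction performs step 8's "reduce the number if at all possible" automatically, and the two test points illustrate the sign check in step 9.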
Students create and explore a box plot and histogram for a data set. They then compare the two data displays by viewing them together and use the comparison to draw conclusions about the data. In the first part of the activity, students are given a scenario. They use the data supplied to discuss any visible trends when studying the data. Students create a box plot from the data in the spreadsheet. They will analyze the different sections of the plot and determine where the mean falls relative to the median. In the second part of the activity, students use the same data to create a histogram. They study and manipulate the histogram to gather information about the data. They explore the relationship of the mean and median by plotting both values.
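The same box plot and histogram comparison can be reproduced outside a graphing calculator. Here is a minimal matplotlib sketch; the data values are invented for illustration, and the mean and median are marked on the histogram as in the second part of the activity.

```python
import statistics
import matplotlib.pyplot as plt

# Invented data set, standing in for the activity's spreadsheet data.
data = [12, 15, 15, 17, 18, 21, 22, 22, 23, 25, 28, 30, 41]

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(8, 3))

ax1.boxplot(data, vert=False)           # box plot: quartiles and outliers
ax1.set_title("Box plot")

ax2.hist(data, bins=6, edgecolor="black")   # histogram: shape of the data
ax2.axvline(statistics.mean(data), linestyle="--", label="mean")
ax2.axvline(statistics.median(data), linestyle=":", label="median")
ax2.set_title("Histogram")
ax2.legend()

plt.tight_layout()
plt.show()
```

Viewing the two displays side by side makes it easy to see where the mean falls relative to the median, which is the comparison the activity asks students to draw.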
A space rendezvous is an orbital maneuver during which two spacecraft, one of which is often a space station, arrive at the same orbit and approach to a very close distance (e.g. within visual contact). Rendezvous requires a precise match of the orbital velocities and position vectors of the two spacecraft, allowing them to remain at a constant distance through orbital station-keeping. Rendezvous may or may not be followed by docking or berthing, procedures which bring the spacecraft into physical contact and create a link between them.

The same rendezvous technique can be used for spacecraft "landing" on natural objects with a weak gravitational field; landing on one of the Martian moons, for example, would require the same matching of orbital velocities, followed by a "descent" that shares some similarities with docking.

In its first human spaceflight program, Vostok, the Soviet Union launched pairs of spacecraft from the same launch pad, one or two days apart (Vostok 3 and 4 in 1962, and Vostok 5 and 6 in 1963). In each case, the launch vehicles' guidance systems inserted the two craft into nearly identical orbits; however, this was not nearly precise enough to achieve rendezvous, as the Vostok lacked maneuvering thrusters to adjust its orbit to match that of its twin. The initial separation distances were in the range of 5 to 6.5 kilometers (3.1 to 4 mi), and slowly diverged to thousands of kilometers (over a thousand miles) over the course of the missions.

In 1963 Buzz Aldrin submitted his doctoral thesis, titled Line-Of-Sight Guidance Techniques For Manned Orbital Rendezvous. As a NASA astronaut, Aldrin worked to "translate complex orbital mechanics into relatively simple flight plans for my colleagues."

First attempt failed

The first attempt at rendezvous was made on June 3, 1965, when US astronaut Jim McDivitt tried to maneuver his Gemini 4 craft to meet back up with its spent Titan II launch vehicle's upper stage. McDivitt was unable to get close enough to achieve station-keeping, due to depth-perception problems and venting of the stage's propellant, which kept moving it around. However, the Gemini 4 attempts at rendezvous were unsuccessful largely because NASA engineers had yet to learn the orbital mechanics involved in the process. Simply pointing the active vehicle's nose at the target and thrusting was unsuccessful. If the target is ahead in the orbit and the tracking vehicle increases speed, its altitude also increases, actually moving it away from the target. The higher altitude then increases the orbital period, due to Kepler's third law, putting the tracker not only above, but also behind the target. The proper technique requires changing the tracking vehicle's orbit to allow the rendezvous target to either catch up or be caught up with, and then at the correct moment changing to the same orbit as the target with no relative motion between the vehicles (for example, putting the tracker into a lower orbit, which has a shorter orbital period allowing it to catch up, then executing a Hohmann transfer back to the original orbital height).

As GPO engineer André Meyer later remarked, "There is a good explanation for what went wrong with rendezvous." The crew, like everyone else at MSC, "just didn't understand or reason out the orbital mechanics involved.
As a result, we all got a whole lot smarter and really perfected rendezvous maneuvers, which Apollo now uses."

First successful rendezvous

Rendezvous was first successfully accomplished by US astronaut Wally Schirra on December 15, 1965. Schirra maneuvered the Gemini 6 spacecraft within 1 foot (30 cm) of its sister craft Gemini 7. The spacecraft were not equipped to dock with each other, but maintained station-keeping for more than 20 minutes. Schirra later commented:

"Somebody said ... when you come to within three miles (5 km), you've rendezvoused. If anybody thinks they've pulled a rendezvous off at three miles (5 km), have fun! This is when we started doing our work. I don't think rendezvous is over until you are stopped – completely stopped – with no relative motion between the two vehicles, at a range of approximately 120 feet (37 m). That's rendezvous! From there on, it's stationkeeping. That's when you can go back and play the game of driving a car or driving an airplane or pushing a skateboard – it's about that simple."

The first docking of two spacecraft was achieved on March 16, 1966, when Gemini 8, under the command of Neil Armstrong, rendezvoused and docked with an unmanned Agena Target Vehicle. Gemini 6 was to have been the first docking mission, but had to be cancelled when that mission's Agena vehicle was destroyed during launch.

The first Soviet cosmonaut to attempt a manual docking was Georgy Beregovoy, who unsuccessfully tried to dock his Soyuz 3 craft with the unmanned Soyuz 2 in October 1968. He was able to bring his craft from 200 meters (660 ft) to as close as 1 foot (0.3 m), but was unable to dock before exhausting his maneuvering fuel.

The first rendezvous of two spacecraft from different countries took place on July 17, 1975, when an Apollo spacecraft docked with a Soyuz spacecraft as part of the Apollo-Soyuz Test Project.

A rendezvous takes place each time a spacecraft brings crew members or supplies to an orbiting space station. The first spacecraft to do this was the ill-fated Soyuz 11, which successfully docked with the Salyut 1 station on June 7, 1971. Human spaceflight missions have successfully made rendezvous with six Salyut stations, with Skylab, with Mir and with the International Space Station (ISS). Currently Soyuz spacecraft are used at approximately six-month intervals to transport crew members to and from the ISS. Robotic spacecraft are also used to rendezvous with and resupply space stations. Soyuz and Progress spacecraft have automatically docked with both Mir and the ISS using the Kurs docking system; the Automated Transfer Vehicle also uses this system. The robotic H-II Transfer Vehicle flies to a close rendezvous and maintains station-keeping without docking, allowing the ISS Canadarm2 to grapple it and berth it to the station.

Space rendezvous has been used for a variety of other purposes, including recent service missions to the Hubble Space Telescope. Historically, for the missions of Project Apollo that landed astronauts on the Moon, the ascent stage of the Apollo Lunar Module would rendezvous and dock with the Apollo Command/Service Module in lunar orbit rendezvous maneuvers. Also, the STS-49 crew rendezvoused with and attached a rocket motor to the Intelsat VI F-3 communications satellite to allow it to make an orbital maneuver.
Possible future rendezvous may be made by a yet-to-be-developed automated Hubble Robotic Vehicle (HRV), and by the CX-OLEV, which is being developed for rendezvous with a geosynchronous satellite that has run out of fuel. The CX-OLEV would take over orbital stationkeeping and/or finally bring the satellite to a graveyard orbit, after which the CX-OLEV could possibly be reused for another satellite. Gradual transfer from the geostationary transfer orbit to the geosynchronous orbit would take a number of months, using Hall effect thrusters.

Alternatively, the two spacecraft may already be together, and simply undock and re-dock in a different way:
- Soyuz spacecraft from one docking point to another on the ISS or Salyut.
- In the Apollo program, a maneuver known as transposition, docking, and extraction was performed an hour or so after trans-lunar injection. At that point the stack consisted of the Saturn V third stage, the LM inside the LM adapter, and the CSM (in order from bottom to top at launch, which was also back to front with respect to the direction of motion), with the CSM crewed and the LM at this stage unmanned:
  - the CSM separated, while the four upper panels of the LM adapter were disposed of;
  - the CSM turned 180 degrees (from engine backward, toward the LM, to forward);
  - the CSM connected to the LM while it was still connected to the third stage;
  - the CSM/LM combination then separated from the third stage.

Phases and methods

The standard technique for rendezvous and docking is to dock an active vehicle, the "chaser", with a passive "target". This technique has been used successfully for the Gemini, Apollo, Apollo/Soyuz, Salyut, Skylab, Mir, ISS, and Tiāngōng programs.

To properly understand spacecraft rendezvous it is essential to understand the relation between spacecraft velocity and orbit. A spacecraft in a certain orbit cannot arbitrarily alter its velocity; each orbit correlates to a certain orbital velocity. If the spacecraft fires thrusters and increases (or decreases) its velocity, it will obtain a different orbit, one that correlates to the higher (or lower) velocity. For circular orbits, higher orbits have a lower orbital velocity and lower orbits have a higher orbital velocity. This might seem counter-intuitive, but in a circular orbit gravity must supply exactly the centripetal force required for the circular motion; at higher altitudes gravity is weaker, so the required orbital velocity is lower, following $v = \sqrt{GM/r}$ (the sketch below puts numbers to this).

For orbital rendezvous to occur, both spacecraft must be in the same orbital plane, and the phase of the orbit (the position of the spacecraft in the orbit) must be matched. The "chaser" is placed in a slightly lower orbit than the target. The lower the orbit, the higher the orbital velocity. The difference in orbital velocities of chaser and target is therefore such that the chaser is faster than the target, and catches up with it. Once the two spacecraft are sufficiently close, the chaser's orbit is synchronized with the target's orbit. That is, the chaser will be accelerated. This increase in velocity carries the chaser to a higher orbit. The increase in velocity is chosen such that the chaser approximately assumes the orbit of the target. Stepwise, the chaser closes in on the target, until proximity operations (see below) can be started.
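To put numbers to the altitude-velocity relation just described, here is a small sketch using the circular-orbit formulas $v = \sqrt{GM/r}$ and $T = 2\pi\sqrt{r^3/GM}$. The two altitudes are chosen arbitrarily for illustration.

```python
import math

MU_EARTH = 3.986004418e14  # Earth's gravitational parameter GM, m^3/s^2
R_EARTH = 6_371_000.0      # mean Earth radius, m

def circular_orbit(altitude_m):
    r = R_EARTH + altitude_m
    v = math.sqrt(MU_EARTH / r)                    # orbital velocity
    period = 2 * math.pi * math.sqrt(r**3 / MU_EARTH)
    return v, period

# A chaser 50 km below a target at 400 km altitude (illustrative values).
for name, alt in (("target, 400 km", 400e3), ("chaser, 350 km", 350e3)):
    v, T = circular_orbit(alt)
    print(f"{name}: v = {v:.0f} m/s, period = {T / 60:.1f} min")
# The lower orbit is faster and has the shorter period,
# so the chaser gradually catches up with the target.
```

Running this shows the chaser moving a few tens of meters per second faster than the target and completing each orbit about a minute sooner, which is exactly the catch-up effect the phasing strategy exploits.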
In the very final phase, the closure rate is reduced by use of the active vehicle's reaction control system. Docking typically occurs at a rate of 0.1 ft/s (0.03 m/s) to 0.2 ft/s (0.06 m/s).

Space rendezvous of an active, or "chaser", spacecraft with an (assumed) passive spacecraft may be divided into several phases, and typically starts with the two spacecraft in separate orbits, typically separated by more than 10,000 kilometers (6,200 mi):

|Phase|Separation distance|Typical phase duration|
|Drift Orbit A (out of sight, out of contact)|>2 λmax|1 to 20 days|
|Drift Orbit B (in sight, in contact)|2 λmax to 1 kilometer (3,300 ft)|1 to 5 days|
|Proximity Operations A|1,000–100 meters (3,300–330 ft)|1 to 5 orbits|
|Proximity Operations B|100–10 meters (330–33 ft)|45 to 90 minutes|
|Docking|<10 meters (33 ft)|<5 minutes|

(λmax is defined in the notes below.)

Methods of approach

The two most common methods of approach for proximity operations are in-line with the flight path of the spacecraft (called V-bar, as it is along the velocity vector of the target) and perpendicular to the flight path along the line of the radius of the orbit (called R-bar, as it is along the radial vector, with respect to Earth, of the target). The chosen method of approach depends on safety, spacecraft/thruster design, mission timeline, and, especially for docking with the ISS, on the location of the assigned docking port.

- V-bar approach

The V-bar approach is an approach of the "chaser" horizontally along the passive spacecraft's velocity vector. That is, from behind or from ahead, and in the same direction as the orbital motion of the passive target. The motion is parallel to the target's orbital velocity. In the V-bar approach from behind, the chaser fires small thrusters to increase its velocity in the direction of the target. This, of course, also drives the chaser to a higher orbit. To keep the chaser on the V-vector, other thrusters are fired in the radial direction. If this is omitted (for example due to a thruster failure), the chaser will be carried to a higher orbit, which is associated with an orbital velocity lower than the target's. Consequently, the target moves faster than the chaser and the distance between them increases. This is called a natural braking effect, and is a natural safeguard in case of a thruster failure.

STS-104 was the third Space Shuttle mission to conduct a V-bar arrival at the International Space Station. The V-bar, or velocity vector, extends along a line directly ahead of the station. Shuttles approached the ISS along the V-bar when docking at the PMA-2 docking port.

- R-bar approach

The R-bar approach consists of the chaser moving below or above the target spacecraft, along its radial vector. The motion is orthogonal to the orbital velocity of the passive spacecraft. When below the target, the chaser fires radial thrusters to close in on the target. By this it increases its altitude. However, the orbital velocity of the chaser remains unchanged (thruster firings in the radial direction have no effect on the orbital velocity). Now in a slightly higher position, but with an orbital velocity that does not correspond to the local circular velocity, the chaser slightly falls behind the target.
Small rocket pulses in the orbital velocity direction are necessary to keep the chaser along the radial vector of the target. If these rocket pulses are not executed (for example due to a thruster failure), the chaser will move away from the target. This is a natural braking effect. For the R-bar approach, this effect is stronger than for the V-bar approach, making the R-bar approach the safer of the two. Generally, the R-bar approach from below is preferable, as the chaser is in a lower (faster) orbit than the target, and thus "catches up" with it. For the R-bar approach from above, the chaser is in a higher (slower) orbit than the target, and thus has to wait for the target to approach it.

Astrotech proposed meeting ISS cargo needs with a vehicle which would approach the station "using a traditional nadir R-bar approach." The nadir R-bar approach is also used for flights to the ISS of H-II Transfer Vehicles, and of SpaceX Dragon vehicles.

- Z-bar approach

An approach of the active, or "chaser," spacecraft horizontally from the side and orthogonal to the orbital plane of the passive spacecraft (that is, from the side and out of the plane of the orbit of the passive spacecraft) is called a Z-bar approach.

See also:
- Androgynous Peripheral Attach System
- Common Berthing Mechanism
- Lunar orbit rendezvous
- Nodal regression (causes precession of orbits around the Earth's axis)
- Path-constrained rendezvous (the process of moving an orbiting object from its current position to a desired position, in such a way that no orbiting obstacles are contacted along the way)

References and notes:
1. Buzz Aldrin. "Orbital Rendezvous". http://buzzaldrin.com/space-vision/rocket_science/orbital-rendezvous/
2. Buzz Aldrin. "From Earth to Moon to Earth". http://www.waterkeeper.org/wp-content/uploads/2013/08/Fall-2005-Hawks-Doves.pdf
3. Oral History Transcript, James A. McDivitt, interviewed by Doug Ward, Elk Lake, Michigan, June 29, 1999.
4. "Gemini 4". Encyclopedia Astronautica. http://www.astronautix.com/flights/gemini4.htm
5. On The Shoulders of Titans, Ch. 12-7.
6. http://nssdc.gsfc.nasa.gov/nmc/spacecraftDisplay.do?id=GEM
7. NSSDC ID: 1967-105A. NASA, NSSDC Master Catalog.
8. Mark Wade. "Soyuz 11". Encyclopedia Astronautica. http://www.astronautix.com/flights/soyuz11.htm
9. Marcia S. Smith (3 February 2012). "Space Station Launch Delays Will Have Little Impact on Overall Operations". spacepolicyonline.com.
10. Bryan Burrough, Dragonfly: NASA and the Crisis Aboard Mir (1998, ISBN 0-88730-783-3; 2000, ISBN 0-06-093269-4), page 65: "Since 1985 all Russian spacecraft had used the Kurs computers to dock automatically with the Mir station" ... "All the Russian commanders had to do was sit by and watch."
11. http://www.orbitalrecovery.com/news15.html
12. "Track and Capture of the Orbiter with the Space Station Remote Manipulator System" (PDF). NASA. http://ntrs.nasa.gov/archive/nasa/casi.ntrs.nasa.gov/19870015906_1987015906.pdf
13. Note: λmax is the angular radius of the spacecraft's true horizon as seen from the center of the planet; for LEO, it is the maximum Earth central angle from the altitude of the spacecraft.
14. Pearson, Don J. (November 1989). "Shuttle Rendezvous and Proximity Operations". Originally presented at Colloque: Mécanique Spatiale (Space Dynamics), Toulouse, France, November 1989. NASA. http://home.comcast.net/~djpearson/rndz/rndzpaper.html. Retrieved November 26, 2011.
- ↑ "STS-104 Crew Interviews with Charles Hobaugh, Pilot". NASA. http://spaceflight.nasa.gov/shuttle/archives/sts-104/crew/inthobaugh.html. - ↑ WILLIAM HARWOOD (March 9, 2001). "Shuttle Discovery nears rendezvous with station". SPACEFLIGHT NOW. http://spaceflightnow.com/station/stage5a1/010309fd2/. - ↑ Template:Cite conference - ↑ Rendezvous Strategy of the Japanese Logistics Support Vehicle to the International Space Station, - ↑ Success! Space station snags SpaceX Dragon capsule - ↑ Template:Cite journal |40x40px||Wikimedia Commons has media related to Category:Space rendezvous.| - The Visitors (rendezvous) - Space Rendezvous Video of Space Shuttle Atlantis and Space Station - "Lunar Orbit Rendezvous and the Apollo Program". NASA. http://www.nasa.gov/centers/langley/news/factsheets/Rendezvous.html. - PEARSON, DON J. (1989). "SHUTTLE RENDEZVOUS AND PROXIMITY OPERATIONS". http://home.comcast.net/~djpearson/rndz/rndzpaper.html. - Handbook Automated Rendezvous and Docking of Spacecraft by Wigbert Fehse - Docking system agreement key to global space policy – October 20, 2010 |This page uses Creative Commons Licensed content from Wikipedia (view authors).|
CPU, or processor, and RAM, more commonly known as memory, are the two most important components when choosing a computer's specs. There is a major difference between RAM and CPU in terms of their roles within a computer. Unlike RAM, the CPU is the part that actually performs the calculations. As an example, should a computer add two numbers, say 5 and 8, the CPU fetches the two numbers from RAM, adds them, and writes the result, 13, back to RAM. Processing occurs on the CPU, which is the component that determines how fast the whole system runs.

Having insufficient RAM used to simply mean your program wouldn't run. But nowadays, operating systems use page files to extend RAM in order to keep programs running. However, accessing data in a page file is far slower than accessing RAM and can leave the CPU waiting for data, slowing down the entire computer.

RAM vs Processor

The main difference between RAM and processor is that RAM capacity determines how many programs and how much data the system can hold ready at once, while processor performance determines how quickly that work gets done. RAM, or Random Access Memory, is used as short-term storage, whereas the CPU, or Central Processing Unit, is the processor that collects information from RAM and performs all the functions. RAM maintains all data related to the functions currently being performed, while the CPU retrieves, processes, and delivers this data back to RAM. If RAM is a car's fuel tank, the CPU is its engine. Operating systems run successfully when RAM and CPU work together.

What is RAM?

RAM, or random access memory, is vital no matter what kind of computer it is. This memory may also be called system memory in some cases. RAM provides your computer with the ability to store all the short-term data it is currently using. RAM is also called volatile memory, since every time the system is rebooted, the data it stores is reset. The more RAM your system has, the more data it is able to work with for tasks like gaming, 3D modelling, or compiling large amounts of code.

What is a CPU?

It is a cliche to say that the CPU is like the brain of the computer, but it does describe it reasonably well. Processors, also called microprocessors or central processing units, are responsible for processing data on your computer and providing instructions to the other components. Put simply, a powerful CPU translates into a powerful PC. It goes without saying that all components, whether they be RAM, CPU, or GPU, must be within a certain range of power, or else your system will experience bottlenecks that prevent you from achieving maximum performance.

Main Differences Between RAM and CPU

As you know, RAM is a temporary memory storage unit, but the CPU is a computer's main processor. A computer's memory determines how many programs or applications the system can run at once, while its processor determines the speed at which a program or application can be launched. CPU stands for Central Processing Unit, while RAM stands for Random Access Memory. Unlike RAM, the performance of a CPU depends on its number of cores, cache size, and processing speed. On a typical idle system, a sizeable fraction of RAM, often around half, may be in use, while CPU usage stays in the low single digits.
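As a small illustration of the division of labor described above, the following sketch reports current memory, page file, and CPU utilization. It assumes the third-party psutil package is installed; the printed figures will of course vary by system.

```python
import psutil  # third-party: pip install psutil

# RAM: the short-term working storage the system has and is using.
mem = psutil.virtual_memory()
print(f"RAM: {mem.total / 2**30:.1f} GiB total, {mem.percent}% in use")

# Swap / page file: the slower, disk-backed extension of RAM.
swap = psutil.swap_memory()
print(f"Page file: {swap.total / 2**30:.1f} GiB total, {swap.percent}% in use")

# CPU: the component actually executing instructions.
print(f"CPU: {psutil.cpu_count(logical=True)} logical cores, "
      f"{psutil.cpu_percent(interval=1.0)}% busy over the last second")
```

Watching these numbers while opening programs makes the roles concrete: launching applications raises RAM usage, while heavy computation raises CPU usage.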
Phylogenetics is the science of estimating and analyzing evolutionary relationships. Phylogenetic relationships among micro-organisms are especially difficult to discern. Molecular biology often helps in determining genetic relationships between different organisms. Nucleic acids (DNA and RNA) and proteins are "information molecules" in that they retain a record of an organism's evolutionary history. The approach is to compare nucleic acid or protein sequences from different organisms using computer programs and estimate the evolutionary relationships based on the degree of homology between the sequences.

Nucleic acids and proteins are linear molecules made of smaller units called nucleotides and amino acids, respectively. The nucleotide or amino acid differences within a gene reflect the evolutionary distance between two organisms. In other words, closely related organisms will exhibit fewer sequence differences than distantly related organisms. In particular, the sequence of the small-subunit ribosomal RNA (rRNA) is widely used in molecular phylogeny. One advantage of the molecular approach in determining phylogenetic relationships over the more classical approaches, such as those based on morphology or life cycle traits, is that the differences are readily quantifiable. Sequences from different organisms can be compared and the number of differences can be established. These data are often expressed in the form of "trees" in which the positions and lengths of the "branches" depict the relatedness between organisms.

Shown below is a three-domain tree of life based on small-subunit rRNA sequences (modified from N.R. Pace, ASM News 62:464, 1996). This tree depicts three major branches: eubacteria, archaebacteria, and eukaryotes. The organisms on the early branches of the eukaryote lineage are all protozoa or other protists (dark green). The relative distance occupied by these organisms, as compared to the so-called higher organisms (light green), is quite notable. These data are consistent with an extremely long evolutionary history and the extreme diversity among the protozoa.

However, the above tree is not entirely consistent with other criteria used to determine relationships between protozoa. Furthermore, phylogenetic trees produced from other gene sequences will produce different topologies. Possible reasons for these inconsistencies include unequal rates of evolution in different lineages, mutational saturation of rapidly evolving sequences, and differences among the genes analyzed. The first two phenomena result in a long-branch attraction artefact, in which many slowly evolving sequences will cluster to the exclusion of a few rapidly evolving sequences. In other words, the long branches that are far apart in the lower portion of the eukaryotic branch may be a result of the experimental procedure. On the other hand, it has also been proposed that a relatively rapid (10–100 million year time span) radiation event, or "big bang", may have occurred early in the evolution of eukaryotes, giving rise to major taxa. This would also result in a similar tree topology. In addition, events like horizontal DNA transfer and gene duplications will complicate the analysis of molecular phylogenetic data.

Some of these problems are resolved by combining data into consensus trees. For example, the following tree was derived by combining protein data from elongation factor-1α, actin, α-tubulin, and β-tubulin (modified from S.L. Baldauf, Am. Nat. 154:S178–S188; see also Science 290:972).
This tree shows that the various groups of protozoa are quite diverse and distantly related to each other, as well as showing relationships between the protozoa and other eukaryotes. The probable branch positions for some other protists are indicated by arrows (N = Naegleria; P = Porphyra, a red alga; A = Acanthamoeba; and E = Encephalitozoon, a microsporidian).
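The quantifiable nature of sequence comparison described above is easy to demonstrate. The sketch below counts pairwise differences between short, pre-aligned toy sequences (invented for illustration) to build a simple distance matrix; real analyses use full rRNA or protein alignments and statistically corrected distances.

```python
# Toy aligned sequences (invented). Organism names are placeholders.
seqs = {
    "org_A": "ACGTACGTACGT",
    "org_B": "ACGTACGAACGT",
    "org_C": "ACGAACTAACTT",
}

def p_distance(s1, s2):
    """Fraction of aligned sites that differ (uncorrected p-distance)."""
    assert len(s1) == len(s2), "sequences must be aligned"
    diffs = sum(a != b for a, b in zip(s1, s2))
    return diffs / len(s1)

names = list(seqs)
for i, a in enumerate(names):
    for b in names[i + 1:]:
        print(f"{a} vs {b}: {p_distance(seqs[a], seqs[b]):.2f}")
# Closely related organisms show fewer differences; a tree-building
# method (e.g. neighbor joining) would then cluster the smallest distances.
```

Running this shows org_A and org_B separated by a much smaller distance than either is from org_C, which is exactly the kind of pattern a tree-building algorithm converts into branch positions and lengths.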
Natural philosophers have speculated on the existence of worlds around other suns for millennia. Now that real data are available, we find a diversity far beyond that expected by scientists or science-fiction writers.

In the 1960s, with great fanfare, the discovery of first one, and then two, Jupiter-like planets in orbit around Barnard's star was announced (Fig. 1). Only 6 light years away (but still too faint to see with the unaided eye), Barnard's star is one of the Sun's nearest neighbours; only the Alpha Centauri system is closer. But by the 1970s, the evidence for these purported planets had been discredited. More claims of the discovery of the first extrasolar planet, or "exoplanet", continued to capture newspaper headlines, but these too failed to stand up to scrutiny. It was only after decades of false leads that, in 1992, two bona fide extrasolar planets were detected [1], and this discovery has stood the test of time.

Exoplanets are small, very faint objects located close to much brighter stars. The planets themselves have not been seen; instead, they have been identified by the gravitational tugs that they exert on their stars. About 100 exoplanets are now known; most are comparable in mass to Jupiter and have orbital periods of a few years or less. Astronomers are amassing a variety of detection techniques to better assess the diversity of planetary systems within our Galaxy. And the hunt is on for a true analogue of our Solar System that has an Earth-like planet, perhaps harbouring life as we know it.

What is a planet?

Five planets, or "wandering stars", were known to the ancients: Mercury, Venus, Mars, Jupiter and Saturn. The astronomical revolution brought about by Copernicus, Kepler and Newton showed that these objects were more akin to the Earth than to the Sun and other stars. Thus our home orb was added to the list of known planets. Then, in 1781, the scientific world was taken by surprise when amateur telescope-maker William Herschel announced the discovery of a more distant planet, subsequently named Uranus. In 1801, the small planetary object Ceres was discovered in orbit between Mars and Jupiter, and tens of thousands of even smaller minor planets (asteroids) have since been detected in that region. The planet Neptune signalled its presence through its gravitational effect on the orbit of Uranus, and was first actually seen in 1846. And Pluto, the furthermost Solar System planet known to us, appeared in a careful optical search carried out by Clyde Tombaugh in 1930. The search for more planetary companions to the Sun continues, using direct imaging as well as the indirect signature of gravitational perturbations of the motions of known planets, comets and even spacecraft.

Pluto was originally thought to be more massive than the Earth, but subsequent observations showed that it has less than 5% of the mass of Mercury, the smallest of the planets known before 1800 and itself less than 6% of the mass of the Earth. This realization, together with the discovery of many minor planets beyond Neptune during the past decade (the largest of which may be bigger than Ceres), has led astronomers to question exactly how a planet should be defined. For exoplanets, the question is not whether an object is too small to call a planet (small objects are difficult to detect), but rather whether it is too large.
A star maintains itself against gravitational collapse using energy released by nuclear fusion in its interior; only objects at least 7–8% as massive as our Sun can maintain sufficiently high temperatures in their interiors to become stars. In comparison, the most massive planet in our Solar System, Jupiter, has less than 0.1% of the mass of the Sun. Various definitions of a planet have been proposed, some based on mass, the origins of the body, or its current orbit. The provisional definition adopted by the International Astronomical Union's working group on extrasolar planets is an object that is in orbit about a star and that is smaller than the limit for deuterium fusion to occur (about 13 times the mass of Jupiter).

How to find a planet

There are several ways to search for exoplanets, and, as planets located many light years away are extremely faint, most methods are indirect: a planet is detected through its influence on the star that it orbits. Different methods are sensitive to different classes of planets and provide complementary information about the planets they find, so most or all of them will contribute to our understanding of the diversity of planetary-system characteristics.

In 1992, Alexander Wolszczan and Dale Frail announced the presence of two planets in orbit about a pulsar [1], in what became the first exoplanet discovery. Pulsars are magnetized, rotating neutron stars, which emit radio waves that appear as periodic pulses to an observer on Earth. It was variations in the arrival times of these pulses that signalled the planets' presence to Wolszczan and Frail. The pulse period can be determined very precisely (stable pulsars rank among the most accurate clocks known), and the mean time of pulse arrival at the telescope receiver can be measured especially accurately for rapidly rotating, millisecond pulsars, whose frequent pulses provide an abundance of data. Even though the pulses are emitted periodically, the times at which they reach the receiver are not equally spaced if the distance between the pulsar and the telescope varies in a nonlinear fashion. The Earth's motion around the Sun and its rotation cause such variations, and these can be calculated and removed from the data. If periodic variations are still present in the data, they may indicate the presence of companion planets orbiting the pulsar.

But by far the most successful planet-finding method at present is the radial-velocity technique. The wavelength of light emitted by a distant star becomes lengthened or shortened depending on whether the star is moving away from or towards the observer (Fig. 2a). By fitting this Doppler shift in the wavelengths of a large number of features within a star's spectrum, the velocity at which the star is moving towards or away from the observer can be precisely measured. Astronomers then subtract the motion of the telescope relative to the centre-of-mass of the Solar System and other known motions, leaving the radial motion of the target star that results from the tug of its own planets. Precise radial-velocity measurements require a large number of spectral lines, and so are not possible for the hottest stars, which have far fewer spectral features than do cooler stars like the Sun. Moreover, stellar rotation and intrinsic variability (including starspots) are major sources of noise for radial-velocity measurements.

There is another slight drawback to both pulsar timing and radial-velocity measurements.
Although both are sensitive to the period and the eccentricity of the exoplanet's orbit, an important quantity to measure for any newly discovered planet is its mass. But both methods yield only the product, $M\sin i$, of the planet's mass (divided by that of the star, whose mass can usually be estimated accurately from its spectral characteristics) and the sine of the angle $i$ between its orbital plane and the plane of the sky; $i$ usually cannot be determined. Still, in radial-velocity measurements the leading research groups are now achieving a precision of 3 m s⁻¹ on spectrally stable stars (this represents a Doppler shift of 1 part in 10⁸). Compare this with our own Solar System: Jupiter causes the Sun's velocity to vary with an amplitude of 12.5 m s⁻¹ and a period of 11.86 years; Saturn's effect is the next largest, with an amplitude of 2.7 m s⁻¹ and a period of almost 30 years (a numerical check on these figures appears in the sketch below). Thus, with current precision, Jupiter-like planets orbiting Sun-like stars are detectable, although these detections require a long timeline of observations (comparable to the planet's orbital period). Planets smaller than Uranus orbiting very close to stars can also be detected. But finding Earth-like planets in Earth-like orbits is well beyond the capabilities of the radial-velocity technique.

Planets can also be detected from the wobble they induce in the motion of their stars projected onto the plane of the sky (Fig. 2b, c). This astrometric technique is most sensitive to massive planets orbiting stars that are relatively close to us. But here, because the star's motion is detectable in two dimensions, the planet's actual mass, rather than just the combination $M\sin i$, can be measured. Planets in more distant orbits are ultimately easier to detect using astrometry because the amplitude of the star's motion is larger; but finding these planets requires a longer timeline of observations because of their greater orbital periods.

The detection methods mentioned so far look for a planet's gravitational pull on its star, and so are sensitive to the planet's mass. In contrast, transit photometry detects the amount of starlight that a planet obscures, and gives an estimate of the planet's size. If the Earth lies in or near the orbital plane of an extrasolar planet, then, viewed from Earth, that planet periodically blocks a small fraction of the star's light once each orbit. Measuring a star's brightness very precisely can reveal such transits, easily distinguished from other, random effects, such as starspots, by their periodicity and the distinctive, square-well shape of the brightness variation (Fig. 2d). Although this technique detects only the small proportion of planets that happen to line up in this way, thousands of stars can be surveyed within the field of view of one telescope, so transit photometry should be fairly efficient.

Reaching down further into the box of astronomical tricks, we find the microlensing method, which is being used to investigate the distribution of faint stellar and substellar bodies within our Galaxy [2]. Microlensing arises from the general relativistic bending of the light from a distant star by a massive object (the lens) passing between the source and the observer. Lensing causes the source to appear to brighten gradually to a few times its usual intensity over a period of weeks or months.
If the lensing star has planetary companions, then these less massive bodies would produce brief enhancements in the brightness, provided that the line of sight from Earth passes close to the planet. Under favourable circumstances, planets as small as Earth could be detected. But we would only be able to make statistical estimates of the properties of individual planets, and often even of the stars that they orbit, because of the many parameters that influence microlensed light [3].

So there are many indirect ways to find planets beyond the Solar System, but what about imaging an exoplanet directly? Distant planets are very faint and located near much brighter objects (the star or stars that they orbit), making them extremely difficult to image. The reflected starlight from planets similar in orbit and size to those in our Solar System is roughly only one-billionth as bright as the star, although the contrast decreases a thousandfold for thermal, infrared radiation. Scattering of light by telescope optics and atmospheric variability on Earth add to the difficulty. Nonetheless, technological advances such as adaptive optics should eventually permit imaging and spectroscopic studies of planets orbiting nearby and/or younger, brighter stars. Ground-based searches using all of these techniques are in progress, and, for the future, higher-precision astrometry, transit, microlensing and imaging surveys using spacecraft are being considered.

There are still other techniques to explore. Precise timing of the eclipses of eclipsing binary stars has the potential to reveal the masses and orbits of unseen companions. Spectroscopy could be used to identify gases that would be stable in planetary atmospheres but not in stars, and Doppler variations of such signals could yield planetary orbital parameters. Radio emissions similar to those detected from Jupiter could reveal the presence of extrasolar planets. And, of course, artificial signals from an alien civilization could betray the presence of the planets on which they lived (and they might even be willing to provide us with substantially more information).

Three small planets, the first and the smallest exoplanets known, orbit the pulsar PSR1257+12, a rapidly rotating neutron star around 1.4 times the mass of the Sun and between 2,000 and 3,000 light years from Earth (placing it in our general region of the Milky Way Galaxy, but not a close neighbour even by interstellar standards). The first two to be found have orbital periods of a few months, small eccentricities, and masses a few times as large as the Earth's (here we can estimate the actual masses: the star's response to orbital changes produced by mutual gravitational interactions of the planets provides estimates of the masses independent of the orbital inclination). The minimum mass ($M\sin i$) of the inner planet is only slightly more than that of the Earth's Moon, and it completes its orbit in just under a month [4].

So far, all other extrasolar planets for which there is good evidence have been discovered using the radial-velocity (Doppler) technique, although other methods have given tantalizing hints. These exoplanets are much more massive than those around PSR1257+12 and orbit normal, hydrogen-burning stars. Most of these stars have masses within a few tens of per cent of that of our Sun and are situated between 20 and 200 light years away, much closer than the pulsar with known planets, but still not our very closest neighbours.
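As a numerical check on the radial-velocity figures quoted earlier, the following sketch evaluates the standard semi-amplitude formula for a circular orbit, $K = (2\pi G/T)^{1/3}\, M_p \sin i \,/\, (M_\star + M_p)^{2/3}$. The physical constants are public values; the Earth-mass case is added for comparison and is not from the text.

```python
import math

G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30       # kg
M_JUP = 1.898e27       # kg
YEAR = 3.156e7         # seconds

def rv_semi_amplitude(m_planet, m_star, period_s, sin_i=1.0):
    """Stellar radial-velocity semi-amplitude K for a circular orbit, m/s."""
    return ((2 * math.pi * G / period_s) ** (1 / 3)
            * m_planet * sin_i / (m_star + m_planet) ** (2 / 3))

# Jupiter's tug on the Sun: should come out near the 12.5 m/s quoted above.
print(f"Jupiter: K = {rv_semi_amplitude(M_JUP, M_SUN, 11.86 * YEAR):.1f} m/s")
# An Earth-mass planet in an Earth-like orbit, for comparison.
print(f"Earth:   K = {rv_semi_amplitude(5.97e24, M_SUN, 1.0 * YEAR):.2f} m/s")
```

The Jupiter case reproduces the 12.5 m s⁻¹ amplitude, while the Earth case comes out below 0.1 m s⁻¹, far under the 3 m s⁻¹ precision quoted above, which is why Earth-like planets in Earth-like orbits are out of reach for the radial-velocity technique.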
The first such exoplanet was discovered [5] around the star 51 Pegasi (which is slightly less massive than our Sun and a few billion years older) by Michel Mayor and Didier Queloz in 1995. For this planet, $M\sin i$ is 45% of the mass of Jupiter ($M_J$) and its orbital period is just 4.23 days, less than a twentieth that of Mercury, the closest planet to our Sun. Several similar planets have been found subsequently, implying that about 1% of Sun-like stars are orbited by Jupiter-like planets with orbital periods of less than one week.

Of the 100 exoplanets now known (with the exception of the pulsar planets), the minimum masses (strictly, $M\sin i$) range from about 40 Earth masses (equivalent to 0.12 $M_J$, or a bit over twice the mass of Neptune, the third most massive planet in our Solar System) for the planet orbiting the Sun-like star HD 49674, right up to the maximum allowed by the definition of a planet. Most of the observed exoplanets have masses close to (within a factor of a few of) Jupiter's mass, because larger planets are scarcer and smaller ones are more difficult to detect. The only exoplanet so far discovered with a period exceeding that of Jupiter orbits the star 55 Cancri, with a 14-year period and $M\sin i = 4\,M_J$.

The actual size has been measured for only one exoplanet. The close companion to HD 209458 (another Sun-like star) was first detected using the Doppler technique but was later observed during transit in front of its star. Measurement of the transit shows that this planet has a radius about 1.35 times that of Jupiter (the sketch after this section shows the simple geometry behind such a measurement). Moreover, as the tilt of its orbit is known from the transit duration, the planet's actual mass (0.65 $M_J$), rather than merely the product $M\sin i$, is known. Together, the mass and size of a planet reveal its density, which implies that the planet orbiting HD 209458 is composed primarily of hydrogen, the lightest and most common element in the Universe, and also the primary constituent of Jupiter and Saturn.

The exoplanetary system of greatest interest to me is that in orbit about Gliese 876. At about one-third the mass of our Sun, this faint red orb is by far the least massive star known to possess any planets. Gliese 876, only 15 light years from Earth, is also the nearest star for which an exoplanet has unambiguously been found. It is one of fewer than a dozen stars known to have multiple planets [6], and the only normal star for which the mutual gravitational perturbations of the planets are clearly evident in the data [7,8] (Fig. 3).

More to come

The pace of planet discoveries using radial velocities will continue to increase for at least the next few years, as more observers obtain the spectrometers and the skills to achieve the high Doppler precision required. Smaller inner planets will be detected as precision improves, and planets with longer periods will be found as the data sets grow longer. This should include planets that are true analogues of Jupiter. Transit photometry, which has already been used to observe the planet orbiting HD 209458 both from the ground and from space, should bear fruit in planet detection in the near future. Several groups are conducting transit searches for close-in giant planets from the ground, mostly using wide-field telescopes with diameters smaller than 25 cm. Both NASA and its European counterpart, ESA, are looking into missions to image Earth-like planets before 2020.
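The transit measurement for HD 209458 described above rests on simple geometry: the fractional dimming of the star equals the ratio of the planet's and star's projected areas, $(R_p/R_\star)^2$. The sketch below evaluates this for an HD 209458 b-like planet; the radius values are approximate and chosen for illustration.

```python
R_SUN = 6.957e8   # solar radius, m
R_JUP = 7.149e7   # Jupiter radius, m

def transit_depth(r_planet, r_star):
    """Fractional loss of starlight when the planet crosses the disk."""
    return (r_planet / r_star) ** 2

# HD 209458 b-like case: radius ~1.35 R_Jup, star of roughly 1.2 solar radii.
depth = transit_depth(1.35 * R_JUP, 1.2 * R_SUN)
print(f"giant planet: ~ {depth * 100:.1f}% dimming")

# An Earth-sized planet around the same star, for comparison.
print(f"Earth-size:   ~ {transit_depth(6.371e6, 1.2 * R_SUN) * 100:.4f}% dimming")
```

The giant-planet case gives a dimming of order 1%, readily measurable from the ground, while the Earth-sized case is about a hundred times shallower, which is why detecting true Earth analogues by transit requires the photometric precision of a space mission such as Kepler.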
A few small telescopes that are designed primarily for studying stellar properties will be launched into space during the next five years and should be able to find planets only a few times the radius of Earth. In 2007, NASA will launch the Kepler mission, which will be capable of detecting true Earth analogues (Box 1). The precision of astrometry from the ground is improving as large interferometers are being built, and even higher-precision astrometry should be achieved from special-purpose spacecraft; but as astrometry is more sensitive to planets far from stars, observations will need to be conducted for years to observe a full planetary orbit. With technological advances in the coming years, we shall learn more about how common planets are, and their distributions of size, mass and orbital properties, as well as their densities, colours and atmospheric compositions. We may find that our Solar System, and our own planet, are not that special. But it would be piling speculation on speculation to foresee the discovery of life elsewhere. Although broad classes of future discoveries can be confidently predicted, the particulars cannot. This is because the most successful planet-formation theories are designed to explain the observed properties of planetary systems — the uncertainties in initial conditions, and the complexity of the physics and chemistry of star and planet formation, preclude detailed modelling from first principles9. Extrapolation of observed distributions is highly unreliable if the processes creating them are not fully understood. The theorists need more data, and after decades of trying, the observers are now providing a bountiful harvest.
1. Wolszczan, A. & Frail, D. A. Nature 355, 145–147 (1992).
2. Alcock, C. et al. Nature 414, 617–619 (2001).
3. Peale, S. J. Icarus 127, 269–289 (1997).
4. Wolszczan, A. et al. Astrophys. J. 528, 907–912 (2000).
5. Mayor, M. & Queloz, D. Nature 378, 355–359 (1995).
6. Marcy, G. W. et al. Astrophys. J. 556, 296–301 (2001).
7. Laughlin, G. & Chambers, J. E. Astrophys. J. 551, L109–L113 (2001).
8. Rivera, E. J. & Lissauer, J. J. Astrophys. J. 558, 392–402 (2001).
9. Lissauer, J. J. Nature 402, C11–C14 (1999).
10. Gilliland, R. L. et al. Astron. J. 106, 2441–2476 (1993).
Glaciers are made up of fallen snow that, over many years, compresses into large, thickened ice masses. Glaciers form when snow remains in one location long enough to transform into ice. What makes glaciers unique is their ability to move. Due to sheer mass, glaciers flow like very slow rivers. Some glaciers are as small as football fields, while others grow to be over a hundred kilometers long. Presently, glaciers occupy about 10 percent of the world's total land area, with most located in polar regions like Antarctica and Greenland. Glaciers can be thought of as remnants from the last Ice Age, when ice covered nearly 32 percent of the land and 30 percent of the oceans. An Ice Age occurs when cool temperatures endure for extended periods of time, allowing polar ice to advance into lower latitudes. For example, during the last Ice Age, giant glacial ice sheets extended from the poles to cover most of Canada, all of New England, much of the upper Midwest, large areas of Alaska, most of Greenland, Iceland, Svalbard and other arctic islands, Scandinavia, much of Great Britain and Ireland, and the northwestern part of the former Soviet Union. Scientists know that within the past 750,000 years there have been eight Ice Age cycles, separated by warmer periods called interglacial periods. Currently, the Earth is nearing the end of an interglacial, meaning that another Ice Age is due in a few thousand years. This is part of the normal climate variation cycle. Greenhouse warming may delay the onset of another glacial era, but scientists still have many questions to answer about climate change. Although glaciers change very slowly over long periods, they may provide important global climate change signals. Glaciers are of four chief types. Valley, or mountain, glaciers are tongues of moving ice sent out by mountain snowfields following valleys originally formed by streams. In the Alps there are more than 1,200 valley glaciers. Piedmont glaciers, which occur only in high latitudes, are formed by the spreading of valley glaciers where they emerge from their valleys or by the confluence of several valley glaciers. Small ice sheets known as ice caps are flattened, somewhat dome-shaped glaciers that spread out horizontally in all directions, covering mountains and valleys. Continental glaciers are huge ice sheets whose margins may break off to form icebergs (see iceberg). The only existing continental glaciers are the ice sheets of Greenland and Antarctica, but during glacial periods they were far more widespread. Glaciers may be classified as warm or cold depending on whether their temperatures are above or below −10°C (14°F). Glaciers alter topography, and their work includes erosion, transportation, and deposition. Mountain glaciers carve out amphitheater-like vertical-walled valley heads, or cirques, at their sources. They transform V-shaped valleys into U-shaped valleys by grinding away the projecting bases of slopes and cliffs and leveling the floors of the valleys; in this process tributary valleys are frequently left "hanging," with their outlets high above the new valley floor. When the tributary valleys contain streams, waterfalls and cascades are formed, such as Bridal Veil Falls of Yosemite National Park. Elevations over which glaciers pass usually are left with gently sloping sides in the direction from which the glacier approached (stoss sides) and rougher lee sides.
Humps and bosses of rock so shaped are known as roches moutonnées. The debris from glacial erosion is carried upon, within, and underneath the ice. The debris frozen into the underside of the glacier acts as a further erosive agent, polishing the underlying rock and leaving scratches, or striae, running in the direction of the movement of the glacier. Glacial deposits are often known as till or drift. The melting of the ice in summer forms glacial streams flowing under the ice, while the retreat of a large glacier sometimes leaves a temporary glacial lake, such as the ice age Lake Agassiz. Fjords generally owe their origin to glaciers. A glacier moves as a solid rather than as a liquid, as is indicated by the formation of crevasses (see crevasse). The center of a glacier moves more rapidly than the sides, and the surface more rapidly than the bottom, because the sides and bottom are held back by friction. The rate of flow depends largely on the volume of ice in movement, the slope of the ground over which it is moving, the slope of the upper surface of the ice, the amount of water the ice contains, the amount of debris it carries, the temperature, and the friction it encounters. Glaciers are always in movement, but the extent of the apparent movement depends on the rate of advance and the rate of melting. If the ice melts at its edge faster than it moves forward, the edge of the glacier retreats; if it moves more rapidly than it melts, the edge advances; it is stationary only if the rate of movement and the rate of melting are the same. The causes of glacial movement are exceedingly complex and doubtless are not all operative on the same glacier at the same time. Important elements in glacial movement are melting under pressure followed by refreezing, which may push the mass in the direction of least resistance; sliding or shearing of layers of ice one on top of the other; and rearrangement of the granules when pressure causes melting. Sudden, rapid movements of glaciers, called glacier surges, have been observed in Alaskan and other glaciers; the crumpled lines of surface debris found on such glaciers provide evidence of these abnormal movements. It is thought that the relatively sudden movement and melting of glaciers may be indicative of climate warming. Glaciers are categorized in many ways, including by their morphology, thermal characteristics or their behavior. Two common types of glacier are alpine glaciers, which originate in mountains, and continental ice sheets, which cover larger areas. Ice sheets are not visibly affected by the landscape, as they cover the entire surface beneath them, with the possible exception of areas near the glacier margins, where the ice is thinnest. Antarctica and Greenland are the only places where continental ice sheets currently exist. These regions contain vast quantities of fresh water. The volume of ice is so large that if the Greenland ice sheet melted, it would cause sea levels to rise six meters (20 ft) all around the world. If the Antarctic ice sheet melted, sea levels would rise up to 65 meters (210 ft). Ice shelves are areas of floating ice, commonly located at the margin of an ice sheet; as a result they are thinner and have limited slopes and reduced velocities. Ice streams are fast-moving sections of an ice sheet. They can be several hundred kilometers long. Ice streams have narrow margins, and on either side of them the ice usually flows an order of magnitude more slowly. In Antarctica, many ice streams drain into large ice shelves.
However, some drain directly into the sea, often with an ice tongue, like Mertz Glacier. In Greenland and Antarctica, ice streams ending at the sea, such as Jakobshavn Isbræ, are often referred to as tidewater glaciers or outlet glaciers; these are glaciers that terminate in the sea. As the ice reaches the sea, pieces break off, or calve, forming icebergs. Most tidewater glaciers calve above sea level, which often results in a tremendous splash as the iceberg strikes the water. If the water is deep, glaciers can calve underwater, causing the iceberg to suddenly leap up out of the water. The Hubbard Glacier is the longest tidewater glacier in Alaska and has a calving face over 10 km (6 mi) long. Yakutat Bay and Glacier Bay are both popular with cruise ship passengers because of the huge glaciers descending hundreds of feet to the water. This glacier type undergoes centuries-long cycles of advance and retreat that are much less affected by the climate changes currently causing the retreat of most other glaciers. Most tidewater glaciers are outlet glaciers of ice caps and ice fields. In terms of thermal characteristics, a temperate glacier is at the melting point throughout the year, from its surface to its base. The ice of a polar glacier is always below the freezing point from the surface to its base, although the surface snowpack may experience seasonal melting. A sub-polar glacier has both temperate and polar ice, depending on the depth beneath the surface and the position along the length of the glacier. Glaciers form where the accumulation of snow and ice exceeds ablation. As the snow and ice thicken, they reach a point where they begin to move, due to a combination of the surface slope and the pressure of the overlying snow and ice. On steeper slopes this can occur with as little as 50 feet of snow-ice. The snow which forms temperate glaciers is subject to repeated freezing and thawing, which changes it into a form of granular ice called firn. Under the pressure of the layers of ice and snow above it, this granular ice fuses into denser and denser firn. Over a period of years, layers of firn undergo further compaction and become glacial ice. Glacier ice has a slightly lower density than ice formed by the direct freezing of water, because the air between snowflakes becomes trapped and creates air bubbles between the ice crystals. The distinctive blue tint of glacial ice is often wrongly attributed to Rayleigh scattering from bubbles in the ice. The blue color is actually created for the same reason that water is blue: its slight absorption of red light due to an overtone of the infrared OH stretching mode of the water molecule. Glaciers move, or flow, downhill due to gravity and the internal deformation of ice. Ice behaves like a brittle solid until its thickness exceeds about 50 meters (160 ft); the pressure on ice deeper than that causes plastic flow. At the molecular level, ice consists of stacked layers of molecules with relatively weak bonds between the layers. When the stress of the layer above exceeds the inter-layer binding strength, it moves faster than the layer below. Another type of movement is basal sliding. In this process, the glacier slides over the terrain on which it sits, lubricated by the presence of liquid water. As the pressure increases toward the base of the glacier, the melting point of water decreases, and the ice melts. Friction between ice and rock and geothermal heat from the Earth's interior also contribute to melting.
This type of movement is dominant in temperate, or warm-based, glaciers. The geothermal heat flux becomes more important the thicker a glacier becomes. The rate of movement depends on the underlying slope, amongst many other factors.
Fracture zone and cracks
(Images: ice cracks in the Titlis Glacier; signs warning of the hazards of a glacier in New Zealand.)
The top 50 meters of the glacier, being under less pressure, are more rigid; this section is known as the fracture zone, and mostly moves as a single unit over the plastic-like flow of the lower section. When the glacier moves through irregular terrain, cracks up to 50 meters deep form in the fracture zone. The lower layers of glacial ice flow and deform plastically under the pressure, allowing the glacier as a whole to move slowly like a viscous fluid. Glaciers usually flow downslope; generally this reflects the slope of their base, but it may reflect the surface slope instead, so a glacier can flow over rises in the terrain at its base. The upper layers of glaciers are more brittle, and often form deep cracks known as crevasses. The presence of crevasses is a sure sign of a glacier. The moving ice and snow of a glacier is often separated from the stationary snow and ice clinging to a mountainside by a bergschrund; this looks like a crevasse, but it is a singular feature at the margin of the glacier. Crevasses form due to differences in glacier velocity. As the parts move at different speeds and in different directions, shear forces cause the two sections to break apart, opening the crack of a crevasse all along the disconnecting faces. Hence the distance between the two separated parts, while touching and rubbing deep down, frequently widens significantly towards the surface layers, many times creating a wide chasm. Crevasses are seldom more than 150 feet deep, but in some cases can be 1,000 feet deep or even deeper; beneath this point, the plastic deformation of the ice under pressure is too great for the differential motion to generate cracks. Transverse crevasses form transverse to the direction of flow, where a glacier accelerates as the slope steepens. Longitudinal crevasses form semi-parallel to flow where a glacier expands laterally. Marginal crevasses form near the edge of the glacier, due to the reduction in speed caused by friction with the valley walls; marginal crevasses are usually largely transverse to flow.
(Image: crossing a crevasse on the Easton Glacier, Mount Baker, in the North Cascades, USA.)
Crevasses make travel over glaciers hazardous.
Subsequent heavy snow may form fragile snow bridges, increasing the danger by hiding the presence of crevasses at the surface. Below the equilibrium line, glacier meltwater is concentrated in stream channels. The meltwater can pool in a proglacial lake or in a lake on top of the glacier, or can descend into the depths of the glacier via moulins. Within or beneath the glacier, the stream flows in an englacial or sub-glacial tunnel. Sometimes these tunnels re-emerge at the surface of the glacier. The speed of glacial displacement is partly determined by friction. Friction makes the ice at the bottom of the glacier move more slowly than the upper portion. In alpine glaciers, friction is also generated at the valley's side walls, which slows the edges relative to the center. This was confirmed by experiments in the 19th century, in which stakes were planted in a line across an alpine glacier; as time passed, those in the center moved farther. Mean speeds vary greatly. There may be no motion in stagnant areas, where trees can establish themselves on surface sediment deposits, such as in Alaska. In other cases glaciers can move as fast as 20–30 meters per day, as in the case of Greenland's Jakobshavn Isbræ (Kalaallisut: Sermeq Kujalleq), or 2–3 m per day on Byrd Glacier in Antarctica, one of the largest glaciers in the world. Velocity increases with increasing slope, increasing thickness, increasing snowfall, increasing longitudinal confinement, increasing basal temperature, increasing meltwater production and reduced bed hardness. A few glaciers have periods of very rapid advancement called surges. These glaciers exhibit normal movement until suddenly they accelerate, then return to their previous state. During these surges, the glacier may reach velocities far greater than its normal speed. These surges may be caused by failure of the underlying bedrock, the ponding of meltwater at the base of the glacier — perhaps delivered from a supraglacial lake — or the simple accumulation of mass beyond a critical "tipping point". In glaciated areas where the glacier moves faster than one kilometer per year, glacial earthquakes occur. These are large-scale temblors that have seismic magnitudes as high as 6.1. The number of glacial earthquakes in Greenland peaks every year in July, August and September, and the number is increasing over time. In a study using data from January 1993 through October 2005, more events were detected every year after 2002, and twice as many events were recorded in 2005 as in any other year. This increase in the number of glacial earthquakes in Greenland may be a response to global warming. Seismic waves are also generated by the Whillans Ice Stream, a large, fast-moving river of ice pouring from the West Antarctic Ice Sheet into the Ross Ice Shelf. Two bursts of seismic waves are released every day, each one equivalent to a magnitude 7 earthquake, and they seem to be related to the tidal action of the Ross Sea. During each event a 96 by 193 kilometer (60 by 120 mile) region of the glacier moves as much as 0.67 meters (2.2 feet) over about 25 minutes, remains still for 12 hours, then moves another half-meter. The seismic waves are recorded at seismographs around Antarctica, and even as far away as Australia, a distance of more than 6,400 kilometers. Because the motion takes place over such a long period of time (10 to 25 minutes), it cannot be felt by scientists standing on the moving glacier.
It is not known if these events are related to global warming. Ogives are alternating dark and light bands of ice occurring as narrow wave crests and wave valleys on glacier surfaces. They occur only below icefalls, but not all icefalls have ogives below them. Once formed, they bend progressively downglacier due to the increased velocity toward the glacier's centerline. Ogives are linked to the seasonal motion of the glacier, as the width of one dark and one light band generally equals the annual movement of the glacier. The ridges and valleys are formed because ice from an icefall is severely broken up, thereby increasing the ablation surface area during the summertime. This creates a swale and space for snow accumulation in the winter, which in turn creates a ridge. Sometimes ogives are described as either wave ogives or band ogives, in which they are solely undulations or varying color bands, respectively.
(Image: black ice glacier in the vicinity of Aconcagua, Argentina.)
Glaciers occur on every continent and in approximately 47 countries. Extensive glaciers are found in Antarctica, Chilean Patagonia, Canada, Alaska, Greenland and Iceland. Mountain glaciers are widespread, e.g., in the Andes, the Himalaya, the Rocky Mountains, the Caucasus, and the Alps. On mainland Australia no glaciers exist today, although a small glacier on Mount Kosciuszko was present in the last glacial period, and Tasmania was extensively glaciated. The South Island of New Zealand has many glaciers, including the Tasman, Fox and Franz Josef Glaciers. In New Guinea, small, rapidly diminishing glaciers are located on its highest summit massif, Puncak Jaya. Africa has glaciers on Mount Kilimanjaro in Tanzania, on Mount Kenya and in the Ruwenzori Range. Permanent snow cover is affected by factors such as the degree of slope of the land, the amount of snowfall and the winds. As temperature decreases with altitude, high mountains — even those near the Equator — have permanent snow cover on their upper portions, above the snow line. Examples include Mount Kilimanjaro and the Tropical Andes in South America; however, the only snow to occur exactly on the Equator is at 4,690 m (15,387 ft) on the southern slope of Volcán Cayambe in Ecuador. Conversely, areas of the Arctic, such as Banks Island, and the Dry Valleys in Antarctica are considered polar deserts, as they receive little snowfall despite the bitter cold. Cold air, unlike warm air, is unable to transport much water vapor. Even during glacial periods of the Quaternary, Manchuria, lowland Siberia, and central and northern Alaska, though extraordinarily cold, with winter temperatures believed to reach −100 °C (−148 °F) in parts, had such light snowfall that glaciers could not form. In addition to the dry, unglaciated polar regions, some mountains and volcanoes in Bolivia, Chile and Argentina are high (4,500–6,900 m / 14,800–22,600 ft) and cold, but the relative lack of precipitation prevents snow from accumulating into glaciers. This is because these peaks are located near or in the hyperarid Atacama Desert.
(Image: diagram of glacial plucking and abrasion.)
Rocks and sediments are added to glaciers through various processes. Glaciers erode the terrain principally through two methods: abrasion and plucking. As the glacier flows over the bedrock's fractured surface, it loosens and lifts blocks of rock, which are brought into the ice.
This process is known as plucking, and it occurs when subglacial water penetrates fractures in the rock and the subsequent freezing expansion separates blocks from the bedrock. When the ice expands, it acts as a lever that loosens the rock by lifting it. In this way, sediments of all sizes become part of the glacier's load. The rocks frozen into the bottom of the ice then act like grit in sandpaper. Abrasion occurs when the ice and its load of rock fragments slide over the bedrock and function as sandpaper that smooths and polishes the surface below. This pulverized rock is called rock flour. The flour is formed of rock grains between 0.002 and 0.00625 mm in size. Sometimes the amount of rock flour produced is so high that currents of meltwater acquire a grayish color. These processes of erosion lead to steeper valley walls and mountain slopes in alpine settings, which can cause avalanches and rock slides; these further add material to the glacier. Visible characteristics of glacial abrasion are glacial striations. These are produced when the ice at the bottom contains large chunks of rock that scratch the bedrock. By mapping the direction of the striations, researchers can determine the direction of the glacier's movement. Chatter marks are seen as lines of roughly crescent-shaped depressions in the rock underlying a glacier, caused by abrasion as a boulder in the ice catches and is then released repetitively while the glacier drags it over the underlying basal rock.
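The plastic flow described earlier is commonly modelled with Glen's flow law, in which the shear strain rate grows as the cube of the applied stress. Below is a minimal sketch of the resulting textbook estimate for surface speed from internal deformation alone; the flow-rate parameter, slope and thickness are assumed illustrative values, and basal sliding, which often dominates in temperate glaciers, is ignored:

import math

RHO_ICE = 917.0    # density of glacier ice, kg/m^3
G = 9.81           # gravitational acceleration, m/s^2
N = 3              # Glen's flow-law exponent
A = 2.4e-24        # flow-rate parameter for temperate ice, Pa^-3 s^-1 (assumed)
SECONDS_PER_YEAR = 3.15e7

def deformation_speed(thickness_m, slope_rad):
    # Shallow-ice estimate of surface speed from internal deformation:
    #   u_s = (2*A / (n+1)) * (rho*g*sin(alpha))^n * H^(n+1)
    tau = RHO_ICE * G * math.sin(slope_rad)   # driving-stress factor, Pa per meter of depth
    return 2.0 * A / (N + 1) * tau ** N * thickness_m ** (N + 1)

# A 200-m-thick glacier on a ~3-degree slope:
u = deformation_speed(200.0, math.radians(3.0))
print(f"deformation speed ~ {u * SECONDS_PER_YEAR:.1f} m/yr")  # of order a few m/yr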
Question 1: What precautions should be taken while using a meter scale to measure the length of an object?
Question 2: What is meant by a standard unit of measurement? Why is it necessary to have standard units of measurement?
Question 3: Name the S.I. unit of length. Write its symbol.
Question 4: How many centimeters are there in a meter?
Question 5: How many millimeters are there in a centimeter?
Question 6: How many meters make one kilometer?
Question 7: Why can a foot step not be used as a standard unit of length?
Question 8: Why can the hand-span method not be used as a standard unit of length?
Question 9: Name the various length measuring devices.
Question 10: How will you measure the length of a curved line using a thread and scale?
Question 11: What is a measuring tape? Explain its use.
The length of the space between two points (or two places) is called distance. For example: the distance between Delhi and Agra is 200 km. If the two points (or two places) are close by, the distance between them will be small; if the two points (or two places) are far off, then the distance between them will be large. Measurement is the process of comparing an object with a standard 'unit of measurement'. The standard unit for measuring length is the meter.
Need for standard units of measurement: We can use a variety of objects as units of measurement of length. We can measure the length of an object by using 'hand-span', 'forearm length' or 'foot step' as the unit of measuring length. But hand span, forearm length and foot step cannot be used as standard units of measurement because their length is not the same for all persons; it varies from person to person. So, hand span, forearm length and foot step are not standard units of measuring length. A unit of measurement which has a fixed value, one that does not change from person to person or place to place, is called a standard unit of measurement of length. Whether a meter is used by one person or another, and whether it is used in one country or another, it always represents exactly the same length. The length of a meter does not change from person to person or place to place. It is necessary to have standard units of measurement for the sake of uniformity in measurements.
Every measurement consists of a number and a unit. The result of every measurement consists of two parts: (1) a number (1, 2, 3, 4, 5, etc.) which tells us the magnitude of the measurement, and (2) the name of the unit of measurement. For example: if the length of a table is 2 meters, then 2 is the number and meter is the unit. The number '2' tells us the magnitude of the length of the table and 'meter' tells us the unit in which the length has been measured. A measurement is not complete unless both the number and the unit are mentioned.
SI unit of Length
The SI unit of measuring length is the meter. The symbol of the meter is m. The SI unit of measuring mass is the kilogram (kg) and the SI unit of measuring time is the second (s).
Prefixes used with SI units
A prefix is a word used before the name of an SI unit to get a bigger or smaller value of the unit. Three common prefixes are: kilo, centi and milli. (1) Kilo is a prefix which denotes one thousand, i.e.
kilo means "one thousand" or 1000. 1 kilometer = 1000 meters. (2) Centi is a prefix which denotes one hundredth, i.e. centi means "one hundredth" or 1/100. 1 meter = 100 centimeters. (3) Milli is a prefix which denotes one thousandth, i.e. milli means "one thousandth" or 1/1000. So, if we write 'milli' before the unit of length 'meter', it becomes 'millimeter', which means one thousandth of a meter or 1/1000 meter. In other words, 1 meter = 1000 millimeters. Kilometer is written as 'km', centimeter as 'cm' and millimeter as 'mm'. (A short worked example of these conversions is given at the end of these notes.)
Measurement of Length
Length is the distance between two points. The meter is used as a standard unit for measuring the length of an object. We measure the length of an object by using a meter scale. A meter scale is graduated (or marked) in 100 centimeters, and every centimeter is further divided into 10 divisions called millimeters. 1 meter = 100 centimeters. 1 centimeter = 10 millimeters.
The use of proper Units of Length
"Meter" is the standard unit of length, but sometimes other units of length like centimeters, millimeters and kilometers are also used for the sake of convenience. The type of unit used depends on the magnitude of the length to be measured. (1) The meter can be used to measure the length of a table or a room, or the height of a tree or building. (2) The length of small objects, e.g. a pencil or a notebook, is measured in centimeters. (3) Very small lengths, e.g. the thickness of a coin or a thin wire, are expressed in a still smaller unit called the millimeter. One mm (millimeter) is the smallest length which can be measured accurately by using a meter scale. The distances between cities are very large. Large distances (or lengths) are measured in a big unit of length called the kilometer. 1 kilometer = 1000 meters. 1 km = 1000 m.
Length Measuring Devices
The devices commonly used for measuring length are the meter scale and the measuring tape. A meter scale is made of metal and cannot be bent. Rulers are made of plastic, wood or metal and also cannot be bent. The measuring tape, however, is made of a flexible material which can bend easily around the object which is to be measured. (1) If the object is straight, we can use either a meter scale or a measuring tape to measure its length. For example, the length of a table can be measured by using a meter scale or a measuring tape. A cloth seller uses a meter rod for measuring cloth. (2) If the object is round, then we use a measuring tape to measure its length. For example: (a) the girth of a tree can be measured only by using a measuring tape, because a measuring tape can be bent around the tree; (b) a tailor also measures our chest and waist by using a measuring tape. We cannot use an elastic measuring tape to measure lengths. This is because an elastic measuring tape can stretch while taking measurements and hence give a value greater than the actual length of the object. So, although a measuring tape is flexible, it is made of non-stretchable material. A ruler is a short scale marked in centimeters and millimeters. Rulers are usually made of plastic; scales made of metal or plastic are also available in the market.
Precautions to be taken while using a scale
The precautions to be taken while using a meter scale to measure lengths are as follows: (1) The scale should be placed along the side of the object being measured. It should also be in touch with the object being measured.
(2) While reading the scale, the eye must be placed vertically above the scale mark being read. If the eye is not vertically above the mark being read, the reading will be wrong. (3) If the scale has a damaged zero mark or a broken left end, measure the length of the object starting from the 1 cm mark of the scale and then subtract 1 cm from the reading taken at the right end to get the actual length of the object.
To measure the Length of a curved line
A wavy line is known as a curved line. We cannot measure the length of a curved line by using a scale directly. The length of a curved line can be measured by using a thread and a scale. (1) Take a piece of thread and put a knot near one of its ends. This knot will act as the starting point for measuring the length of the curved line. (2) Place the knot of the thread at point A on the left end of the curved line with the help of the thumb and forefinger. (3) Hold the thread a little distance away from the knot and keep it along the curved line with the help of the right thumb and forefinger. In this way, run the thread all along the curved line in little steps at a time, keeping the thread taut, till the other end B of the curved line is reached. Make an ink mark on the thread where it touches the other end B of the curved line. Now straighten the thread and measure its length between the knot and the ink mark by keeping it along a scale. This will give us the length of the curved line.
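As promised above, here is a short worked sketch of the unit conversions; the helper function and example values are illustrative, not part of the original notes:

# Factors to convert each unit into meters:
# 1 km = 1000 m, 1 cm = 1/100 m, 1 mm = 1/1000 m
TO_METERS = {"mm": 0.001, "cm": 0.01, "m": 1.0, "km": 1000.0}

def convert_length(value, from_unit, to_unit):
    # Convert by going through the standard unit, the meter.
    meters = value * TO_METERS[from_unit]
    return meters / TO_METERS[to_unit]

print(convert_length(200, "km", "m"))   # 200 km  -> 200000 m
print(convert_length(2.5, "cm", "mm"))  # 2.5 cm  -> 25 mm
print(convert_length(1500, "mm", "m"))  # 1500 mm -> 1.5 m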
Gravitational Acceleration
Acceleration is the ratio of the change in velocity to the time elapsed between any two points in an object's path. Your object was accelerating because gravity was pulling it down. Even the object tossed straight up is falling — and it begins falling the minute it leaves your hand. If it wasn't, it would have continued moving away from you in a straight line. This is the acceleration due to gravity. What are the factors that affect this acceleration due to gravity? If you were to ask this of a typical person, they would most likely say "weight", by which they actually mean "mass" (more on this later). That is, heavy objects fall fast and light objects fall slow. Although this may seem true on first inspection, it doesn't answer my original question. The two quantities are independent of one another. Light objects accelerate more slowly than heavy objects only when forces other than gravity are also at work. When this happens, an object may be falling, but it is not in free fall. Free fall occurs whenever an object is acted upon by gravity alone. Obtain a piece of paper and a pencil. Hold them at the same height above a level surface and drop them simultaneously. The acceleration of the pencil is noticeably greater than the acceleration of the piece of paper, which flutters and drifts about on its way down. Something else is getting in the way here — and that thing is air resistance, also known as aerodynamic drag. If we could somehow reduce this drag we'd have a real experiment. Repeat the experiment, but before you begin, wad the piece of paper up into the tightest ball possible. Now when the paper and pencil are released, it should be obvious that their accelerations are identical, or at least more similar than before. We're getting closer to the essence of this problem. If only somehow we could eliminate air resistance altogether. The only way to do that is to drop the objects in a vacuum. It is possible to do this in the classroom with a vacuum pump and a sealed column of air. Under such conditions, a coin and a feather can be shown to accelerate at the same rate. In the olden days in Great Britain, a guinea coin was used, and so this demonstration is sometimes still called the "guinea and feather". A more dramatic demonstration was done on the surface of the moon — which is as close to a true vacuum as humans are likely to experience any time soon. In accordance with the theory I am about to present, the two objects landed on the lunar surface simultaneously (or nearly so). Only an object in free fall will experience a pure acceleration due to gravity.
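The paper-and-pencil demonstration can also be mimicked numerically. Here is a minimal sketch (all masses, areas and drag coefficients are made-up illustrative values) that integrates a falling object with and without quadratic air drag; with drag turned off, the fall time is the same regardless of mass, just as the vacuum experiments show:

G = 9.8          # acceleration due to gravity, m/s^2
RHO_AIR = 1.2    # air density, kg/m^3

def fall_time(mass, area, height, c_d=1.0, dt=1e-4):
    # Time to fall `height` meters with quadratic drag F = 0.5*rho*Cd*A*v^2.
    # Set c_d=0 to recover free fall (no air resistance).
    v, y, t = 0.0, height, 0.0
    while y > 0.0:
        drag = 0.5 * RHO_AIR * c_d * area * v * v
        a = G - drag / mass        # net downward acceleration
        v += a * dt
        y -= v * dt
        t += dt
    return t

# Illustrative values: a 10-g pencil vs. a 5-g flat sheet of paper, dropped 2 m
print(f"pencil: {fall_time(0.010, 0.0004, 2.0):.2f} s")         # close to the vacuum value
print(f"paper : {fall_time(0.005, 0.0600, 2.0):.2f} s")         # much slower
print(f"vacuum: {fall_time(0.010, 0.0004, 2.0, c_d=0):.2f} s")  # sqrt(2h/g) ~ 0.64 s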
Aristotle's account of falling bodies was an immensely popular work among academicians, and over the centuries it had acquired a certain devotion verging on the religious. It wasn't until the Italian scientist Galileo Galilei came along that anyone put Aristotle's theories to the test. Unlike everyone else up to that point, Galileo actually tried to verify his own theories through experimentation and careful observation. He then combined the results of these experiments with mathematical analysis in a method that was totally new at the time, but is now generally recognized as the way science gets done. For the invention of this method, Galileo is generally regarded as the world's first scientist. In a tale that may be apocryphal, Galileo (or, more likely, an assistant) dropped two objects of unequal mass from the Leaning Tower of Pisa. Quite contrary to the teachings of Aristotle, the two objects struck the ground simultaneously (or nearly so). Given the speed at which such a fall would occur, it is doubtful that Galileo could have extracted much information from this experiment. Most of his observations of falling bodies were really of bodies rolling down ramps. This slowed things down enough to the point where he was able to measure the time intervals with water clocks and his own pulse (stopwatches and photogates having not yet been invented). This he repeated "a full hundred times" until he had achieved "an accuracy such that the deviation between two observations never exceeded one-tenth of a pulse beat". Professors at the time were appalled by Galileo's comparatively vulgar methods, even going so far as to refuse to acknowledge that which anyone could see with their own eyes. In a move that any thinking person would now find ridiculous, Galileo's method of controlled observation was considered inferior to pure reason. I could say the sky was green, and as long as I presented a better argument than anyone else, it would be accepted as fact, contrary to the observation of nearly every sighted person on the planet. Galileo called his method "new" and wrote a book called Discourses on Two New Sciences wherein he used the combination of experimental observation and mathematical reasoning to explain such things as one-dimensional motion with constant acceleration, the acceleration due to gravity, the behavior of projectiles, the speed of light, the nature of infinity, the physics of music, and the strength of materials. His conclusions on the acceleration due to gravity were that… the variation of speed in air between balls of gold, lead, copper, porphyry, and other heavy materials is so slight that in a fall of 100 cubits a ball of gold would surely not outstrip one of copper by as much as four fingers. Having observed this I came to the conclusion that in a medium totally devoid of resistance all bodies would fall with the same speed. For I think no one believes that swimming or flying can be accomplished in a manner simpler or easier than that instinctively employed by fishes and birds. When, therefore, I observe a stone initially at rest falling from an elevated position and continually acquiring new increments of speed, why should I not believe that such increases take place in a manner which is exceedingly simple and rather obvious to everybody? I greatly doubt that Aristotle ever tested by experiment. (Galileo Galilei) Despite that last quote, Galileo was not immune to using reason as a means to validate his hypothesis. In essence, his argument ran as follows. Imagine two rocks, one large and one small.
Since they are of unequal mass they will accelerate at different rates — the large rock will accelerate faster than the small rock. Now place the small rock on top of the large rock. According to Aristotle, the large rock will rush away from the small rock. What if we reverse the order and place the small rock below the large rock? It seems we should reason that two objects together should have a lower acceleration: the small rock would get in the way and slow the large rock down. But two objects together are heavier than either by itself, and so we should also reason that they will have a greater acceleration. This is a contradiction. Here's another thought problem. Take two objects of equal mass. According to Aristotle, they should accelerate at the same rate. Now tie them together with a light piece of string. Together they weigh twice as much as either alone, so by Aristotle's reasoning they should have twice their original acceleration. Einstein's resulting General Theory of Relativity (over ten years in the making, it was published in 1915) has been called the greatest contribution to science by a single human mind.
Newton's Law of Universal Gravitation
Gravity is the organizing force for the cosmos, crucial in allowing structure to unfold from an almost featureless Big Bang origin. Although it is a very weak force (feebler than the other fundamental forces which govern the sub-atomic world by a factor of about 10^36), it is a cumulative and consistent force which acts on everything and can act over large distances. So, even though gravity can be effectively ignored by chemists studying how groups of atoms bond together, for bodies more massive than the planet Jupiter the effects of gravity overwhelm the other forces, and it is largely responsible for building the large-scale structures in the universe. Even before Newton, the great 17th Century Italian physicist Galileo Galilei had shown that all bodies fall at the same rate, any perceived differences in practice being caused by differences in air resistance and drag. Newton, however, had assumed that the force of gravity acts instantaneously, and Einstein had already shown that nothing can travel at infinite speed, not even gravity, being limited by the de facto universal speed limit of the speed of light. Furthermore, Newton had assumed that the force of gravity was purely generated by mass, whereas Einstein had shown that all forms of energy have effective mass and must therefore also be sources of gravity. The principle of equivalence says that gravity is not a force at all, but is in fact the same thing as acceleration. Einstein realized that if he were to fall freely in a gravitational field (such as a skydiver before opening his parachute, or a person in an elevator when its cable breaks) he would be unable to feel his own weight, a rather remarkable insight many years before the idea of the freefall of astronauts in space became commonplace. A simple thought experiment serves to clarify this: a person inside a closed, windowless box cannot tell whether the box is resting on the surface of the Earth or being accelerated upward at 9.8 m/s² far from any gravitating body; the two situations are locally indistinguishable.
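Newton's law of universal gravitation gives the local free-fall acceleration directly: at the surface of a body of mass M and radius R, g = GM/R². A minimal sketch using standard values for the Earth and the Moon (these constants are not taken from the text above):

G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def surface_gravity(mass_kg, radius_m):
    # Free-fall acceleration g = G*M/R^2 at a body's surface.
    # Note the falling object's own mass does not appear, echoing Galileo.
    return G * mass_kg / radius_m ** 2

print(f"Earth: {surface_gravity(5.972e24, 6.371e6):.2f} m/s^2")  # ~9.82
print(f"Moon : {surface_gravity(7.342e22, 1.737e6):.2f} m/s^2")  # ~1.62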
If humans want to travel about the solar system, they'll need to be able to communicate. As we look forward to crewed missions to the Moon and Mars, communication technology will pose a challenge we haven't faced since the 1970s. We communicate with robotic missions through radio signals, which requires a network of large radio antennas. Spacecraft have relatively weak receivers, so you need to beam a strong radio signal to them. They also transmit relatively weak signals back, so you need a large, sensitive radio dish to capture the reply. For spacecraft beyond the orbit of Earth, this is done through the Deep Space Network (DSN), which is a collection of radio telescopes custom-designed for the job.
(Image: a 1969 photograph of CSIRO's Parkes radio telescope. Credit: CSIRO)
The only major crewed mission we currently have is the International Space Station (ISS). Since the ISS orbits only about 400 kilometers above the Earth, it's relatively easy to send radio signals back and forth. But as humans travel deeper into space, we'll require a Deep Space Network far more powerful than the current one. The DSN is already being pushed to its data limits, given the large number of active missions. Human missions would require orders of magnitude more bandwidth. For the Apollo missions to the Moon, NASA developed a new radio communication system known as the Unified S-band, or USB. Earlier low-orbit missions used separate radio channels for voice, telemetry, and tracking data. Radio telescopes at the time weren't sensitive enough to capture this independent data from lunar distances, so USB combined them into a single data stream. But even this wasn't powerful enough to capture video signals from the Moon. It took the Parkes radio telescope, one of the largest and most sensitive radio antennas at the time, to capture the blurry, low-resolution videos of the first Moon landing.
(Image: artist's concept of a new antenna dish for crewed missions. Credit: NASA/JPL-Caltech)
When we return to the Moon and place our first footsteps on Mars, we will want not only scientific data but live video feeds, high-resolution images, and even tweets from the astronauts. Imagine trying to stream gigabytes of data between Earth and Mars. Even the most sophisticated radio network isn't capable of that level of bandwidth. While NASA is working on modern radio designs, radio communication might not meet all our needs. A new study looks at an alternative: it uses visible light rather than radio. While visible light can carry more data due to its shorter wavelengths, it also scatters more readily and loses fidelity over a shorter distance. To overcome this, the team proposes combining the signal with a second reference signal. The whole thing is then passed through a non-linear optical fiber, which generates a third signal known as an idler wave. All three of these are then amplified and sent on their way. On the other end, the signals are captured and processed. Because the idler wave depends on the other two signals, it can be used to reconstruct the original signal without much data loss. In lab experiments, the team reached a data rate of more than 10 Gb/s, which is ten times higher than current technology. This work is still highly experimental, so it's too early to tell if it will solve the challenges of human space exploration. But who knows, it might just be the technology that lets astronauts send Instagram selfies from another world. Reference: Kakarla, R., Schröder, J. & Andrekson, P.A.
“One photon-per-bit receiver using near-noiseless phase-sensitive amplification.” Light: Science & Applications Vol. 9, no. 153 (2020)
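To put the bandwidth and distance numbers in perspective, here is a minimal sketch of the arithmetic; the distances and the 1 Mb/s comparison rate are round illustrative figures, and only the 10 Gb/s lab rate comes from the study above:

C = 3.0e8  # speed of light, m/s

def one_way_delay_minutes(distance_m):
    # Light travel time for a one-way signal.
    return distance_m / C / 60.0

# The Earth-Mars distance varies between roughly 55 and 400 million km
print(f"Mars, closest : {one_way_delay_minutes(55e9):.1f} min")   # ~3 min
print(f"Mars, farthest: {one_way_delay_minutes(400e9):.1f} min")  # ~22 min

# Transferring 1 GB (8e9 bits) at the ~10 Gb/s lab rate vs an assumed ~1 Mb/s link
print(f"at 10 Gb/s: {8e9 / 10e9:.1f} s")        # under a second
print(f"at 1 Mb/s : {8e9 / 1e6 / 3600:.1f} h")  # a couple of hours

No amount of bandwidth removes the light-time delay, which is why live conversation with a Mars crew will never be possible; high data rates mainly determine how much can be sent, not how soon it arrives.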
Developed by: W3C & WHATWG
Type of format: Document file format
Versions: 5.0 (28 October 2014); 5.1 (working draft)
HTML elements are the building blocks of HTML pages. With HTML constructs, images and other objects, such as interactive forms, may be embedded into the rendered page. It provides a means to create structured documents by denoting structural semantics for text such as headings, paragraphs, lists, links, quotes and other items. HTML elements are delineated by tags, written using angle brackets. Tags such as <img /> and <input /> introduce content into the page directly. Others, such as <p>...</p>, surround and provide information about document text and may include other tags as sub-elements. Browsers do not display the HTML tags, but use them to interpret the content of the page.
In 1980, physicist Tim Berners-Lee, a contractor at CERN, proposed and prototyped ENQUIRE, a system for CERN researchers to use and share documents. In 1989, Berners-Lee wrote a memo proposing an Internet-based hypertext system. Berners-Lee specified HTML and wrote the browser and server software in late 1990. That year, Berners-Lee and CERN data systems engineer Robert Cailliau collaborated on a joint request for funding, but the project was not formally adopted by CERN. In his personal notes from 1990 he listed "some of the many areas in which hypertext is used" and put an encyclopedia first. The first publicly available description of HTML was a document called "HTML Tags", first mentioned on the Internet by Tim Berners-Lee in late 1991. It describes 18 elements comprising the initial, relatively simple design of HTML. Except for the hyperlink tag, these were strongly influenced by SGMLguid, an in-house Standard Generalized Markup Language (SGML)-based documentation format at CERN. Eleven of these elements still exist in HTML 4. HTML is a markup language that web browsers use to interpret and compose text, images, and other material into visual or audible web pages. Default characteristics for every item of HTML markup are defined in the browser, and these characteristics can be altered or enhanced by the web page designer's additional use of CSS. Many of the text elements are found in the 1988 ISO technical report TR 9537 Techniques for using SGML, which in turn covers the features of early text formatting languages such as that used by the RUNOFF command developed in the early 1960s for the CTSS (Compatible Time-Sharing System) operating system: these formatting commands were derived from the commands used by typesetters to manually format documents. However, the SGML concept of generalized markup is based on elements (nested annotated ranges with attributes) rather than merely print effects, with also the separation of structure and markup; HTML has been progressively moved in this direction with CSS. Berners-Lee considered HTML to be an application of SGML. It was formally defined as such by the Internet Engineering Task Force (IETF) with the mid-1993 publication of the first proposal for an HTML specification, the "Hypertext Markup Language (HTML)" Internet Draft by Berners-Lee and Dan Connolly, which included an SGML Document Type Definition to define the grammar.
The draft expired after six months, but was notable for its acknowledgment of the NCSA Mosaic browser's custom tag for embedding in-line images, reflecting the IETF's philosophy of basing standards on successful prototypes. Similarly, Dave Raggett's competing Internet-Draft, "HTML+ (Hypertext Markup Format)", from late 1993, suggested standardizing already-implemented features like tables and fill-out forms. After the HTML and HTML+ drafts expired in early 1994, the IETF created an HTML Working Group, which in 1995 completed "HTML 2.0", the first HTML specification intended to be treated as a standard against which future implementations should be based. Further development under the auspices of the IETF was stalled by competing interests. Since 1996, the HTML specifications have been maintained, with input from commercial software vendors, by the World Wide Web Consortium (W3C). However, in 2000, HTML also became an international standard (ISO/IEC 15445:2000). HTML 4.01 was published in late 1999, with further errata published through 2001. In 2004, development began on HTML5 in the Web Hypertext Application Technology Working Group (WHATWG), which became a joint deliverable with the W3C in 2008, and completed and standardized on 28 October 2014. HTML versions timeline - November 24, 1995 - HTML 2.0 was published as IETF RFC 1866. Supplemental RFCs added capabilities: - January 14, 1997 - HTML 3.2 was published as a W3C Recommendation. It was the first version developed and standardized exclusively by the W3C, as the IETF had closed its HTML Working Group on September 12, 1996. - Initially code-named "Wilbur", HTML 3.2 dropped math formulas entirely, reconciled overlap among various proprietary extensions and adopted most of Netscape's visual markup tags. Netscape's blink element and Microsoft's marquee element were omitted due to a mutual agreement between the two companies. A markup for mathematical formulas similar to that in HTML was not standardized until 14 months later in MathML. - December 18, 1997 - HTML 4.0 was published as a W3C Recommendation. It offers three variations: - Strict, in which deprecated elements are forbidden - Transitional, in which deprecated elements are allowed - Frameset, in which mostly only frame related elements are allowed. - Initially code-named "Cougar", HTML 4.0 adopted many browser-specific element types and attributes, but at the same time sought to phase out Netscape's visual markup features by marking them as deprecated in favor of style sheets. HTML 4 is an SGML application conforming to ISO 8879 – SGML. - April 24, 1998 - HTML 4.0 was reissued with minor edits without incrementing the version number. - December 24, 1999 - HTML 4.01 was published as a W3C Recommendation. It offers the same three variations as HTML 4.0 and its last errata were published on May 12, 2001. - May 2000 - ISO/IEC 15445:2000 ("ISO HTML", based on HTML 4.01 Strict) was published as an ISO/IEC international standard. In the ISO this standard falls in the domain of the ISO/IEC JTC1/SC34 (ISO/IEC Joint Technical Committee 1, Subcommittee 34 – Document description and processing languages). - After HTML 4.01, there was no new version of HTML for many years as development of the parallel, XML-based language XHTML occupied the W3C's HTML Working Group through the early and mid-2000s. - October 28, 2014 - HTML5 was published as a W3C Recommendation. - November 1, 2016 - HTML 5.1 was published as a W3C Recommendation. 
HTML draft version timeline - October 1991 - HTML Tags, an informal CERN document listing 18 HTML tags, was first mentioned in public. - June 1992 - First informal draft of the HTML DTD, with seven subsequent revisions (July 15, August 6, August 18, November 17, November 19, November 20, November 22) - November 1992 - HTML DTD 1.1 (the first with a version number, based on RCS revisions, which start with 1.1 rather than 1.0), an informal draft - June 1993 - Hypertext Markup Language was published by the IETF IIIR Working Group as an Internet Draft (a rough proposal for a standard). It was replaced by a second version one month later, followed by six further drafts published by IETF itself that finally led to HTML 2.0 in RFC 1866. - November 1993 - HTML+ was published by the IETF as an Internet Draft and was a competing proposal to the Hypertext Markup Language draft. It expired in May 1994. - April 1995 (authored March 1995) - HTML 3.0 was proposed as a standard to the IETF, but the proposal expired five months later (28 September 1995) without further action. It included many of the capabilities that were in Raggett's HTML+ proposal, such as support for tables, text flow around figures and the display of complex mathematical formulas. - W3C began development of its own Arena browser as a test bed for HTML 3 and Cascading Style Sheets, but HTML 3.0 did not succeed for several reasons. The draft was considered very large at 150 pages and the pace of browser development, as well as the number of interested parties, had outstripped the resources of the IETF. Browser vendors, including Microsoft and Netscape at the time, chose to implement different subsets of HTML 3's draft features as well as to introduce their own extensions to it. (see Browser wars). These included extensions to control stylistic aspects of documents, contrary to the "belief [of the academic engineering community] that such things as text color, background texture, font size and font face were definitely outside the scope of a language when their only intent was to specify how a document would be organized." Dave Raggett, who has been a W3C Fellow for many years, has commented for example: "To a certain extent, Microsoft built its business on the Web by extending HTML features." - January 2008 - HTML5 was published as a Working Draft by the W3C. - Although its syntax closely resembles that of SGML, HTML5 has abandoned any attempt to be an SGML application and has explicitly defined its own "html" serialization, in addition to an alternative XML-based XHTML5 serialization. - 2011 HTML5 – Last Call - On 14 February 2011, the W3C extended the charter of its HTML Working Group with clear milestones for HTML5. In May 2011, the working group advanced HTML5 to "Last Call", an invitation to communities inside and outside W3C to confirm the technical soundness of the specification. The W3C developed a comprehensive test suite to achieve broad interoperability for the full specification by 2014, which was the target date for recommendation. In January 2011, the WHATWG renamed its "HTML5" living standard to "HTML". The W3C nevertheless continues its project to release HTML5. - 2012 HTML5 – Candidate Recommendation - In July 2012, WHATWG and W3C decided on a degree of separation. W3C will continue the HTML5 specification work, focusing on a single definitive standard, which is considered as a "snapshot" by WHATWG. The WHATWG organization will continue its work with HTML5 as a "Living Standard". 
The concept of a living standard is that it is never complete and is always being updated and improved. New features can be added, but functionality will not be removed. - In December 2012, W3C designated HTML5 as a Candidate Recommendation. The criterion for advancement to W3C Recommendation is "two 100% complete and fully interoperable implementations". - 2014 HTML5 – Proposed Recommendation and Recommendation - In September 2014, W3C moved HTML5 to Proposed Recommendation. - On 28 October 2014, HTML5 was released as a stable W3C Recommendation, meaning the specification process is complete. XHTML is a separate language that began as a reformulation of HTML 4.01 using XML 1.0. It is no longer being developed as a separate standard. - XHTML 1.0 was published as a W3C Recommendation on January 26, 2000, and was later revised and republished on August 1, 2002. It offers the same three variations as HTML 4.0 and 4.01, reformulated in XML, with minor restrictions. - XHTML 1.1 was published as a W3C Recommendation on May 31, 2001. It is based on XHTML 1.0 Strict, but includes minor changes, can be customized, and is reformulated using modules in the W3C recommendation "Modularization of XHTML", which was published on April 10, 2001. - XHTML 2.0 was a working draft; work on it was abandoned in 2009 in favor of work on HTML5 and XHTML5. XHTML 2.0 was incompatible with XHTML 1.x and, therefore, would be more accurately characterized as an XHTML-inspired new language than an update to XHTML 1.x. - An XHTML syntax, known as "XHTML5", is being defined alongside HTML5 in the HTML5 draft. HTML markup consists of several key components, including those called tags (and their attributes), character-based data types, character references and entity references. HTML tags most commonly come in pairs like <h1> and </h1>, although some represent empty elements and so are unpaired, for example <img>. The first tag in such a pair is the start tag, and the second is the end tag (they are also called opening tags and closing tags). The following is an example of the classic Hello world program, a common test employed for comparing programming languages, scripting languages and markup languages. This example is made using 9 lines of code:

<!DOCTYPE html>
<html>
<head>
<title>This is a title</title>
</head>
<body>
<p>Hello world!</p>
</body>
</html>

(The text between <html> and </html> describes the web page, and the text between <body> and </body> is the visible page content. The markup text <title>This is a title</title> defines the browser page title.) In the simple, general case, the extent of an element is indicated by a pair of tags: a "start tag" <p> and "end tag" </p>. The text content of the element, if any, is placed between these tags. Tags may also enclose further tag markup between the start and end, including a mixture of tags and text. This indicates further (nested) elements, as children of the parent element. The start tag may also include attributes within the tag. These indicate other information, such as identifiers for sections within the document, identifiers used to bind style information to the presentation of the document, and for some tags such as the <img> used to embed images, the reference to the image resource. Some elements, such as the line break <br>, do not permit any embedded content, either text or further tags. These require only a single empty tag (akin to a start tag) and do not use an end tag. Many tags, particularly the closing end tag for the very commonly used paragraph element <p>, are optional.
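To make this anatomy concrete, here is a minimal sketch of a single parent element with nested children (the id value and the image filename are invented for illustration): a paired <p> element containing a paired <em> element, plus the unpaired empty elements <br> and <img>, the latter carrying attributes in its start tag.

<p id="intro">This paragraph contains <em>nested</em> markup,<br>
a line break, and an embedded image: <img src="photo.jpg" alt="A photo"></p>

Note that <br> and <img> take no end tags, while <p> and <em> are opened and closed as pairs.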
An HTML browser or other agent can infer the closure for the end of an element from the context and the structural rules defined by the HTML standard. These rules are complex and not widely understood by most HTML coders. The general form of an HTML element is therefore: <tag attribute1="value1" attribute2="value2">content</tag>. Some HTML elements are defined as empty elements and take the form <tag attribute1="value1" attribute2="value2">. Empty elements may enclose no content, for instance, the <br> tag or the inline <img> tag. The name of an HTML element is the name used in the tags. Note that the end tag's name is preceded by a slash character, "/", and that in empty elements the end tag is neither required nor allowed. If attributes are not mentioned, default values are used in each case. Header of the HTML document: <head>...</head>. The title is included in the head, for example:

<head>
<title>The Title</title>
</head>

Headings: HTML headings are defined with the <h1> to <h6> tags:

<h1>Heading level 1</h1>
<h2>Heading level 2</h2>
<h3>Heading level 3</h3>
<h4>Heading level 4</h4>
<h5>Heading level 5</h5>
<h6>Heading level 6</h6>

Paragraphs:

<p>Paragraph 1</p> <p>Paragraph 2</p>

Line breaks: <br>. The difference between <br> and <p> is that "br" breaks a line without altering the semantic structure of the page, whereas "p" sections the page into paragraphs. Note also that "br" is an empty element in that, although it may have attributes, it can take no content and it may not have an end tag.

<p>This <br> is a paragraph <br> with <br> line breaks</p>

This is a link in HTML. To create a link the <a> tag is used. The href attribute holds the URL address of the link.

<a href="https://www.wikipedia.org/">A link to Wikipedia!</a>

<!-- This is a comment -->

Comments can help in the understanding of the markup and do not display in the webpage. There are several types of markup elements used in HTML: - Structural markup indicates the purpose of text - For example, <h2>Golf</h2> establishes "Golf" as a second-level heading. Structural markup does not denote any specific rendering, but most web browsers have default styles for element formatting. Content may be further styled using Cascading Style Sheets (CSS). - Presentational markup indicates the appearance of the text, regardless of its purpose - For example, <b>boldface</b> indicates that visual output devices should render "boldface" in bold text, but gives little indication of what devices that are unable to do this (such as aural devices that read the text aloud) should do. In the case of both <b>bold</b> and <i>italic</i>, there are other elements that may have equivalent visual renderings but that are more semantic in nature, such as <strong>strong text</strong> and <em>emphasised text</em> respectively. It is easier to see how an aural user agent should interpret the latter two elements. However, they are not equivalent to their presentational counterparts: it would be undesirable for a screen-reader to emphasize the name of a book, for instance, but on a screen such a name would be italicized. Most presentational markup elements have become deprecated under the HTML 4.0 specification in favor of using CSS for styling. - Hypertext markup makes parts of a document into links to other documents - An anchor element creates a hyperlink in the document and its href attribute sets the link's target URL. For example, the HTML markup <a href="http://www.google.com/">Wikipedia</a> will render the word "Wikipedia" as a hyperlink. To render an image as a hyperlink, an "img" element is inserted as content into the "a" element.
Like "br", "img" is an empty element with attributes but no content or closing tag. <img src="image.gif" alt="descriptive text" width="50" height="50" border="0"></a> Most of the attributes of an element are name-value pairs, separated by "=" and written within the start tag of an element after the element's name. The value may be enclosed in single or double quotes, although values consisting of certain characters can be left unquoted in HTML (but not XHTML) . Leaving attribute values unquoted is considered unsafe. In contrast with name-value pair attributes, there are some attributes that affect the element simply by their presence in the start tag of the element, like the ismap attribute for the There are several common attributes that may appear in many elements : idattribute provides a document-wide unique identifier for an element. This is used to identify the element so that stylesheets can alter its presentational properties, and scripts may alter, animate or delete its contents or presentation. Appended to the URL of the page, it provides a globally unique identifier for the element, typically a sub-section of the page. For example, the ID "Attributes" in classattribute provides a way of classifying similar elements. This can be used for semantic or presentation purposes. For example, an HTML document might semantically use the designation class="notation"to indicate that all elements with this class value are subordinate to the main text of the document. In presentation, such elements might be gathered together and presented as footnotes on a page instead of appearing in the place where they occur in the HTML source. Class attributes are used semantically in microformats. Multiple class values may be specified; for example class="notation important"puts the element into both the "notation" and the "important" classes. - An author may use the styleattribute to assign presentational properties to a particular element. It is considered better practice to use an element's classattributes to select the element from within a stylesheet, though sometimes this can be too cumbersome for a simple, specific, or ad hoc styling. titleattribute is used to attach subtextual explanation to an element. In most browsers this attribute is displayed as a tooltip. langattribute identifies the natural language of the element's contents, which may be different from that of the rest of the document. For example, in an English-language document: <p>Oh well, <span lang="fr">c'est la vie</span>, as they say in France.</p> The abbreviation element, abbr, can be used to demonstrate some of these attributes : <abbr id="anId" class="jargon" style="color:purple;" title="Hypertext Markup Language">HTML</abbr> This example displays as HTML; in most browsers, pointing the cursor at the abbreviation should display the title text "Hypertext Markup Language." Character and entity references As of version 4.0, HTML defines a set of 252 character entity references and a set of 1,114,050 numeric character references, both of which allow individual characters to be written via simple markup, rather than literally. A literal character and its markup counterpart are considered equivalent and are rendered identically. The ability to "escape" characters in this way allows for the characters & (when written as &, respectively) to be interpreted as character data, rather than markup. 
For example, a literal < normally indicates the start of a tag, and & normally indicates the start of a character entity reference or numeric character reference; writing them as &lt; and &amp; allows them to be included in the content of an element or in the value of an attribute. The double-quote character ("), when not used to quote an attribute value, must also be escaped as &quot; when it appears within the attribute value itself. Equivalently, the single-quote character ('), when not used to quote an attribute value, must also be escaped as &#39; (or as &apos; in HTML5 or XHTML documents) when it appears within the attribute value itself. If document authors overlook the need to escape such characters, some browsers can be very forgiving and try to use context to guess their intent. The result is still invalid markup, which makes the document less accessible to other browsers and to other user agents that may try to parse the document for search and indexing purposes, for example. Escaping also allows for characters that are not easily typed, or that are not available in the document's character encoding, to be represented within element and attribute content. For example, the acute-accented e (é), a character typically found only on Western European and South American keyboards, can be written in any HTML document as the entity reference &eacute; or as the numeric references &#233; or &#xE9;, using characters that are available on all keyboards and are supported in all character encodings. Unicode character encodings such as UTF-8 are compatible with all modern browsers and allow direct access to almost all the characters of the world's writing systems. HTML defines several data types for element content, such as script data and stylesheet data, and a plethora of types for attribute values, including IDs, names, URIs, numbers, units of length, languages, media descriptors, colors, character encodings, dates and times, and so on. All of these data types are specializations of character data. Document type declaration The original purpose of the doctype was to enable parsing and validation of HTML documents by SGML tools based on the Document Type Definition (DTD). The DTD to which the DOCTYPE refers contains a machine-readable grammar specifying the permitted and prohibited content for a document conforming to such a DTD. Browsers, on the other hand, do not implement HTML as an application of SGML and as a consequence do not read the DTD. An example of an HTML 4 doctype:

<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01//EN" "http://www.w3.org/TR/html4/strict.dtd">

This declaration references the DTD for the "strict" version of HTML 4.01. SGML-based validators read the DTD in order to properly parse the document and to perform validation. In modern browsers, a valid doctype activates standards mode as opposed to quirks mode. In addition, HTML 4.01 provides Transitional and Frameset DTDs, as explained below. The Transitional type is the most inclusive, incorporating current tags as well as older or "deprecated" tags, while the Strict DTD excludes deprecated tags. The Frameset DTD has all the tags necessary to make frames on a page, along with the tags included in the Transitional type. Semantic HTML is a way of writing HTML that emphasizes the meaning of the encoded information over its presentation (look). HTML has included semantic markup from its inception, but has also included presentational markup, such as <center> tags. There are also the semantically neutral span and div tags.
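To illustrate this distinction, here is a hedged sketch (the class name is invented) showing the same message written first with presentational markup and then with semantic markup whose appearance is delegated to CSS:

<!-- Presentational (deprecated): appearance is hard-coded into the markup -->
<center><font color="red">Warning!</font></center>

<!-- Semantic: the markup states what the text is; a stylesheet states how it looks -->
<p class="warning">Warning!</p>

/* in an embedded or external stylesheet */
.warning { color: red; text-align: center; }

Both render similarly on screen, but only the second version tells non-visual user agents, search engines, and future maintainers what the text means.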
Since the late 1990s, when Cascading Style Sheets were beginning to work in most browsers, web authors have been encouraged to avoid the use of presentational HTML markup with a view to the separation of presentation and content. In a 2001 discussion of the Semantic Web, Tim Berners-Lee and others gave examples of ways in which intelligent software "agents" may one day automatically crawl the web and find, filter and correlate previously unrelated, published facts for the benefit of human users. Such agents are not commonplace even now, but some of the ideas of Web 2.0, mashups and price comparison websites may be coming close. The main difference between these web application hybrids and Berners-Lee's semantic agents lies in the fact that the current aggregation and hybridization of information is usually designed by web developers, who already know the web locations and the API semantics of the specific data they wish to mash, compare and combine. An important type of web agent that does crawl and read web pages automatically, without prior knowledge of what it might find, is the web crawler or search-engine spider. These software agents are dependent on the semantic clarity of the web pages they find, as they use various techniques and algorithms to read and index millions of web pages a day and provide web users with search facilities without which the World Wide Web's usefulness would be greatly reduced. In order for search-engine spiders to be able to rate the significance of pieces of text they find in HTML documents, and also for those creating mashups and other hybrids, as well as for more automated agents as they are developed, the semantic structures that exist in HTML need to be widely and uniformly applied to bring out the meaning of published text. Good semantic HTML also improves the accessibility of web documents (see also Web Content Accessibility Guidelines). For example, when a screen reader or audio browser can correctly ascertain the structure of a document, it will not waste the visually impaired user's time by reading out repeated or irrelevant information. The World Wide Web is composed primarily of HTML documents transmitted from web servers to web browsers using the Hypertext Transfer Protocol (HTTP). However, HTTP is used to serve images, sound, and other content, in addition to HTML. To allow the web browser to know how to handle each document it receives, other information is transmitted along with the document. This metadata usually includes the MIME type (e.g. text/html or application/xhtml+xml) and the character encoding (see Character encoding in HTML). In modern browsers, the MIME type that is sent with the HTML document may affect how the document is initially interpreted. A document sent with the XHTML MIME type is expected to be well-formed XML; syntax errors may cause the browser to fail to render it. The same document sent with the HTML MIME type might be displayed successfully, since some browsers are more lenient with HTML. The W3C recommendations state that XHTML 1.0 documents that follow the guidelines set forth in the recommendation's Appendix C may be labeled with either MIME type. The XHTML 1.1 recommendation likewise states that XHTML 1.1 documents should be labeled with either MIME type. Most graphical email clients allow the use of a subset of HTML (often ill-defined) to provide formatting and semantic markup not available with plain text.
This may include typographic information like coloured headings, emphasized and quoted text, inline images and diagrams. Many such clients include both a GUI editor for composing HTML e-mail messages and a rendering engine for displaying them. Use of HTML in e-mail is criticized by some because of compatibility issues, because it can help disguise phishing attacks, because of accessibility issues for blind or visually impaired people, because it can confuse spam filters and because the message size is larger than that of plain text. The most common filename extension for files containing HTML is .html. A common abbreviation of this is .htm, which originated because some early operating systems and file systems, such as DOS with the limitations imposed by its FAT data structure, restricted file extensions to three letters. An HTML Application (HTA; file extension ".hta") is a Microsoft Windows application that uses HTML and Dynamic HTML in a browser to provide the application's graphical interface. A regular HTML file is confined to the web browser's security model, communicating only with web servers and manipulating only webpage objects and site cookies. An HTA runs as a fully trusted application and therefore has more privileges, like creation/editing/removal of files and Windows Registry entries. Because they operate outside the browser's security model, HTAs cannot be executed via HTTP, but must be downloaded (just like an EXE file) and executed from the local file system. Since its inception, HTML and its associated protocols gained acceptance relatively quickly. However, no clear standards existed in the early years of the language. Though its creators originally conceived of HTML as a semantic language devoid of presentation details, practical uses pushed many presentational elements and attributes into the language, driven largely by the various browser vendors. The latest standards surrounding HTML reflect efforts to overcome the sometimes chaotic development of the language and to create a rational foundation for building both meaningful and well-presented documents. To return HTML to its role as a semantic language, the W3C has developed style languages such as CSS and XSL to shoulder the burden of presentation. In conjunction, the HTML specification has slowly reined in the presentational elements. There are two axes differentiating the variants of HTML as currently specified: SGML-based HTML versus XML-based HTML (referred to as XHTML) on one axis, and strict versus transitional (loose) versus frameset on the other axis. SGML-based versus XML-based HTML One difference in the latest HTML specifications lies in the distinction between the SGML-based specification and the XML-based specification. The XML-based specification is usually called XHTML to distinguish it clearly from the more traditional definition. However, the root element name continues to be "html" even in the XHTML-specified HTML. The W3C intended XHTML 1.0 to be identical to HTML 4.01 except where limitations of XML over the more complex SGML require workarounds. Because XHTML and HTML are closely related, they are sometimes documented in parallel. In such circumstances, some authors conflate the two names as (X)HTML or X(HTML). Like HTML 4.01, XHTML 1.0 has three sub-specifications: strict, transitional and frameset. Aside from the different opening declarations for a document, the differences between an HTML 4.01 and XHTML 1.0 document, in each of the corresponding DTDs, are largely syntactic.
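As a minimal sketch of how syntactic these differences are (the paragraph text is invented for illustration), compare the same fragment in each serialization:

<!-- HTML 4.01: optional end tags and bare empty elements are permitted -->
<p>First paragraph
<p>Second paragraph<br>

<!-- XHTML 1.0: every element must be explicitly opened and closed -->
<p>First paragraph</p>
<p>Second paragraph<br /></p>

An HTML parser infers the missing </p> tags from context; an XML parser would reject the first fragment as not well-formed.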
The underlying syntax of HTML allows many shortcuts that XHTML does not, such as elements with optional opening or closing tags, and even empty elements which must not have an end tag. By contrast, XHTML requires all elements to have an opening tag and a closing tag. XHTML, however, also introduces a new shortcut: an XHTML tag may be opened and closed within the same tag, by including a slash before the end of the tag like this: <br/>. The introduction of this shorthand, which is not used in the SGML declaration for HTML 4.01, may confuse earlier software unfamiliar with this new convention. A fix for this is to include a space before closing the tag, as such: <br />. To understand the subtle differences between HTML and XHTML, consider the transformation of a valid and well-formed XHTML 1.0 document that adheres to Appendix C (see below) into a valid HTML 4.01 document. To make this translation requires the following steps: - The language for an element should be specified with a lang attribute rather than the XHTML xml:lang attribute. XHTML uses XML's built-in language-defining attribute. - Remove the XML namespace (xmlns=URI). HTML has no facilities for namespaces. - Change the document type declaration from XHTML 1.0 to HTML 4.01 (see the DTD section for further explanation). - If present, remove the XML declaration. (Typically this is: <?xml version="1.0" encoding="utf-8"?>). - Ensure that the document's MIME type is set to text/html. For both HTML and XHTML, this comes from the HTTP Content-Type header sent by the server. - Change the XML empty-element syntax to an HTML-style empty element (<br /> becomes <br>). Those are the main changes necessary to translate a document from XHTML 1.0 to HTML 4.01. To translate from HTML to XHTML would also require the addition of any omitted opening or closing tags. Whether coding in HTML or XHTML, it may be best simply to always include the optional tags within an HTML document rather than remembering which tags can be omitted. A well-formed XHTML document adheres to all the syntax requirements of XML. A valid document adheres to the content specification for XHTML, which describes the document structure. The W3C recommends several conventions to ensure an easy migration between HTML and XHTML (see HTML Compatibility Guidelines). The following steps can be applied to XHTML 1.0 documents only: - Include both xml:lang and lang attributes on any elements assigning language. - Use the empty-element syntax only for elements specified as empty in HTML. - Include an extra space in empty-element tags: for example <br /> instead of <br/>. - Include explicit close tags for elements that permit content but are left empty (for example, <div></div>, not <div />). - Omit the XML declaration. By carefully following the W3C's compatibility guidelines, a user agent should be able to interpret the document equally as HTML or XHTML. For documents that are XHTML 1.0 and have been made compatible in this way, the W3C permits them to be served either as HTML (with a text/html MIME type), or as XHTML (with an application/xhtml+xml or application/xml MIME type). When delivered as XHTML, browsers should use an XML parser, which adheres strictly to the XML specifications for parsing the document's contents. Transitional versus strict HTML 4 defined three different versions of the language: Strict, Transitional (once called Loose) and Frameset.
The Strict version is intended for new documents and is considered best practice, while the Transitional and Frameset versions were developed to make it easier to transition documents that conformed to older HTML specifications, or didn't conform to any specification, to a version of HTML 4. The Transitional and Frameset versions allow for presentational markup, which is omitted in the Strict version. Instead, cascading style sheets are encouraged to improve the presentation of HTML documents. Because XHTML 1 only defines an XML syntax for the language defined by HTML 4, the same differences apply to XHTML 1 as well. The Transitional version allows the following parts of the vocabulary, which are not included in the Strict version: - A looser content model: inline elements and plain text are allowed directly in elements such as body and blockquote. - Presentation-related elements (all deprecated; use CSS instead): underline (u), which can confuse a visitor expecting a hyperlink; strike-through (s and strike); center; font; and basefont. - Presentation-related attributes (all deprecated; use CSS instead): the background and bgcolor attributes for the body element (a required element according to the W3C); the align attribute on form, paragraph (p) and heading elements; the align, noshade, size and width attributes on hr; alignment and spacing attributes on img and object (the object element is only supported in Internet Explorer among the major browsers); the align and bgcolor attributes on table and related elements; the clear (obsolete) attribute on br; and the type, compact and start attributes on list elements. - Additional elements in the Transitional specification: the menu list and dir list (no substitute, though an unordered list is recommended); isindex (deprecated; the element requires server-side support and is typically added to documents server-side, and form and input elements can be used as a substitute); and applet (deprecated; use the object element instead). - The language (obsolete) attribute on the script element (redundant with the type attribute). - Frame-related entities: the target attribute (deprecated in some elements, such as form) on anchor, client-side image-map, link, form and base elements. The Frameset version includes everything in the Transitional version, as well as the frameset element (used instead of body) and the frame element. Frameset versus transitional In addition to the above transitional differences, the frameset specifications (whether XHTML 1.0 or HTML 4.01) specify a different content model, with frameset replacing body, containing either frame elements, or optionally noframes with a body. Summary of specification versions As this list demonstrates, the loose versions of the specification are maintained for legacy support. However, contrary to popular misconceptions, the move to XHTML does not imply a removal of this legacy support. Rather, the X in XML stands for extensible, and the W3C is modularizing the entire specification and opening it up to independent extensions. The primary achievement in the move from XHTML 1.0 to XHTML 1.1 is the modularization of the entire specification.
The strict version of HTML is deployed in XHTML 1.1 through a set of modular extensions to the base XHTML 1.1 specification. Likewise, someone looking for the loose (transitional) or frameset specifications will find similar extended XHTML 1.1 support (much of it is contained in the legacy or frame modules). The modularization also allows separate features to develop on their own timetable. So, for example, XHTML 1.1 will allow quicker migration to emerging XML standards such as MathML (a presentational and semantic math language based on XML) and XForms, a new, highly advanced web-form technology intended to replace the existing HTML forms. In summary, the HTML 4 specification primarily reined in all the various HTML implementations into a single clearly written specification based on SGML. XHTML 1.0 ported this specification, as is, to the new XML-defined specification. Next, XHTML 1.1 takes advantage of the extensible nature of XML and modularizes the whole specification. XHTML 2.0 was intended to be the first step in adding new features to the specification in a standards-body-based approach. WHATWG HTML versus HTML5 The WHATWG considers its work a living standard for HTML, representing the state of the art in major browser implementations by Apple (Safari), Google (Chrome), Mozilla (Firefox), Opera (Opera), and others. HTML5 is specified by the HTML Working Group of the W3C following the W3C process. As of 2013, both specifications are similar and mostly derived from each other; that is, the work on HTML5 started from an older WHATWG draft, and the WHATWG living standard was later based on HTML5 drafts in 2011. Hypertext features not in HTML HTML lacks some of the features found in earlier hypertext systems, such as source tracking, fat links and others. Even some hypertext features that were in early versions of HTML have until recently been ignored by most popular web browsers, such as the link element and in-browser Web page editing. There are some WYSIWYG editors (What You See Is What You Get), in which the user lays out everything as it is to appear in the HTML document using a graphical user interface (GUI), often similar to word processors. The editor renders the document rather than showing the code, so authors do not require extensive knowledge of HTML. The WYSIWYG editing model has been criticized, primarily because of the low quality of the generated code; there are voices advocating a change to the WYSIWYM model (What You See Is What You Mean). WYSIWYG editors remain a controversial topic because of their perceived flaws, such as: - Relying mainly on layout as opposed to meaning, often using markup that does not convey the intended meaning but simply copies the layout. - Often producing extremely verbose and redundant code that fails to make use of the cascading nature of HTML and CSS. - Often producing ungrammatical markup, called tag soup, or semantically incorrect markup (such as <em> for italics). - As a great deal of the information in HTML documents is not in the layout, the model has been criticized for its "what you see is all you get" nature. - Breadcrumb (navigation) - Comparison of HTML parsers - Dynamic web page - HTML decimal character rendering - List of document markup languages - List of XML and HTML character entity references - Microdata (HTML) - Polyglot HTML - Semantic HTML - W3C (X)HTML Validator
- Tim Berners-Lee, "Information Management: A Proposal." CERN (March 1989, May 1990). W3.org - Tim Berners-Lee, "Design Issues" - Tim Berners-Lee, "Design Issues" - "First mention of HTML Tags on the www-talk mailing list". World Wide Web Consortium. October 29, 1991. Retrieved April 8, 2007. - "Index of elements in HTML 4". World Wide Web Consortium. December 24, 1999. Retrieved April 8, 2007. - Tim Berners-Lee (December 9, 1991). "Re: SGML/HTML docs, X Browser (archived www-talk mailing list post)". Retrieved June 16, 2007. SGML is very general. HTML is a specific application of the SGML basic syntax applied to hypertext documents with simple structure. - Berners-Lee, Tim; Connolly, Daniel (June 1993). "Hypertext Markup Language (HTML): A Representation of Textual Information and MetaInformation for Retrieval and Interchange". w3.org. Retrieved 2017-01-04. - Raymond, Eric. "IETF and the RFC Standards Process". The Art of Unix Programming. Archived from the original on 2005-03-17. In IETF tradition, standards have to arise from experience with a working prototype implementation — but once they become standards, code that does not conform to them is considered broken and mercilessly scrapped. ...Internet-Drafts are not specifications; software implementers and vendors are specifically barred from claiming compliance with them as if they were specifications. Internet-Drafts are focal points for discussion, usually in a working group... Once an Internet-Draft has been published with an RFC number, it is a specification to which implementers may claim conformance. It is expected that the authors of the RFC and the community at large will begin correcting the specification with field experience. - Raggett, Dave. "A Review of the HTML+ Document Format". Archived from the original on 2000-02-29. The hypertext markup language HTML was developed as a simple non-proprietary delivery format for global hypertext. HTML+ is a set of modular extensions to HTML and has been developed in response to a growing understanding of the needs of information providers. These extensions include text flow around floating figures, fill-out forms, tables and mathematical equations. - Berners-Lee, Tim; Connelly, Daniel (November 1995). "RFC 1866 – Hypertext Markup Language – 2.0". Internet Engineering Task Force. Retrieved 1 December 2010. This document thus defines an HTML 2.0 (to distinguish it from the previous informal specifications). Future (generally upwardly compatible) versions of HTML with new features will be released with higher version numbers. - Raggett, Dave (1998). Raggett on HTML 4. Retrieved July 9, 2007. - "HTML5 – Hypertext Markup Language – 5.0". Internet Engineering Task Force. 28 October 2014. Retrieved 25 November 2014. This document recommends HTML 5.0 after completion. - "HTML 3.2 Reference Specification". World Wide Web Consortium. January 14, 1997. Retrieved November 16, 2008. - "IETF HTML WG". Retrieved June 16, 2007. Note: This working group is closed - Arnoud Engelfriet. "Introduction to Wilbur". Web Design Group. Retrieved June 16, 2007. - "HTML 4.0 Specification". World Wide Web Consortium. December 18, 1997. Retrieved November 16, 2008. - "HTML 4 – 4 Conformance: requirements and recommendations". Retrieved December 30, 2009. - "HTML 4.0 Specification". World Wide Web Consortium. April 24, 1998. Retrieved November 16, 2008. - "HTML 4.01 Specification". World Wide Web Consortium. December 24, 1999. Retrieved November 16, 2008. - ISO (2000). 
"ISO/IEC 15445:2000 – Information technology – Document description and processing languages – HyperText Markup Language (HTML)". Retrieved December 26, 2009. - Cs.Tcd.Ie. Cs.Tcd.Ie (2000-05-15). Retrieved on 2012-02-16. - "HTML5: A vocabulary and associated APIs for HTML and XHTML". World Wide Web Consortium. 28 October 2014. Retrieved 31 October 2014. - "Open Web Platform Milestone Achieved with HTML5 Recommendation" (Press release). World Wide Web Consortium. 28 October 2014. Retrieved 31 October 2014. - "HTML 5.1". World Wide Web Consortium. 1 November 2016. Retrieved 6 January 2017. - "HTML 5.1 is a W3C Recommendation". World Wide Web Consortium. 1 November 2016. Retrieved 6 January 2017. - Philippe le Hegaret (17 November 2016). "HTML 5.1 is the gold standard". World Wide Web Consortium. Retrieved 6 January 2017. - Connolly, Daniel (6 June 1992). "MIME as a hypertext architecture". CERN. Retrieved 24 October 2010. - Connolly, Daniel (15 July 1992). "HTML DTD enclosed". CERN. Retrieved 24 October 2010. - Connolly, Daniel (18 August 1992). "document type declaration subset for Hyper Text Markup Language as defined by the World Wide Web project". CERN. Retrieved 24 October 2010. - Connolly, Daniel (24 November 1992). "Document Type Definition for the Hyper Text Markup Language as used by the World Wide Web application". CERN. Retrieved 24 October 2010. See section "Revision History" - Berners-Lee, Tim; Connolly, Daniel (June 1993). "Hyper Text Markup Language (HTML) Internet Draft version 1.1". IETF IIIR Working Group. Retrieved 18 September 2010. - Berners-Lee, Tim; Connolly, Daniel (June 1993). "Hypertext Markup Language (HTML) Internet Draft version 1.2". IETF IIIR Working Group. Retrieved 18 September 2010. - Berners-Lee, Tim; Connolly, Daniel (28 November 1994). "HyperText Markup Language Specification – 2.0 INTERNET DRAFT". IETF. Retrieved 24 October 2010. - "HTML 3.0 Draft (Expired!) Materials". World Wide Web Consortium. December 21, 1995. Retrieved November 16, 2008. - "HyperText Markup Language Specification Version 3.0". Retrieved June 16, 2007. - Raggett, Dave (28 March 1995). "HyperText Markup Language Specification Version 3.0". HTML 3.0 Internet Draft Expires in six months. World Wide Web Consortium. Retrieved 17 June 2010. - Bowers, Neil. "Weblint: Just Another Perl Hack". - Lie, Håkon Wium; Bos, Bert (April 1997). Cascading style sheets: designing for the Web. Addison Wesley Longman. p. 263. Retrieved 9 June 2010. - "HTML5". World Wide Web Consortium. June 10, 2008. Retrieved November 16, 2008. - "HTML5, one vocabulary, two serializations". Retrieved February 25, 2009. - "W3C Confirms May 2011 for HTML5 Last Call, Targets 2014 for HTML5 Standard". World Wide Web Consortium. 14 February 2011. Retrieved 18 February 2011. - Hickson, Ian. "HTML Is the New HTML5". Retrieved 21 January 2011. - "HTML5 gets the splits.". netmagazine.com. Retrieved 23 July 2012. - "HTML5". W3.org. 2012-12-17. Retrieved 2013-06-15. - "When Will HTML5 Be Finished?". FAQ. WHAT Working Group. Retrieved 29 November 2009. - "HTML5: A vocabulary and associated APIs for HTML and XHTML (Editor's Draft).". World Wide Web Consortium. Retrieved 12 April 2010. - "Call for Review: HTML5 Proposed Recommendation Published W3C News". W3.org. 2014-09-16. Retrieved 2014-09-27. - "Open Web Platform Milestone Achieved with HTML5 Recommendation". W3C. 28 October 2014. Retrieved 29 October 2014. - "HTML5 specification finalized, squabbling over specs continues". Ars Technica. 2014-10-29. Retrieved 2014-10-29. 
- "XHTML 1.0: The Extensible HyperText Markup Language (Second Edition)". World Wide Web Consortium. January 26, 2000. Retrieved November 16, 2008. - "XHTML 1.1 – Module-based XHTML — Second Edition". World Wide Web Consortium. February 16, 2007. Retrieved November 16, 2008. - "Modularization of XHTML". www.w3.org. Retrieved 2017-01-04. - "XHTM 2.0". World Wide Web Consortium. July 26, 2006. Retrieved November 16, 2008. - "XHTML 2 Working Group Expected to Stop Work End of 2009, W3C to Increase Resources on HTML5". World Wide Web Consortium. July 17, 2009. Retrieved November 16, 2008. - "W3C XHTML FAQ". - "HTML5". W3C. 19 October 2013. - Activating Browser Modes with Doctype. Hsivonen.iki.fi. Retrieved on 2012-02-16. - "HTML Elements". w3schools. Retrieved 16 March 2015. - "CSS Introduction". W3schools. Retrieved 16 March 2015. - "On SGML and HTML". World Wide Web Consortium. Retrieved November 16, 2008. - "XHTML 1.0 – Differences with HTML 4". World Wide Web Consortium. Retrieved November 16, 2008. - Korpela, Jukka (July 6, 1998). "Why attribute values should always be quoted in HTML". Cs.tut.fi. Retrieved November 16, 2008. - "Objects, Images, and Applets in HTML documents". World Wide Web Consortium. December 24, 1999. Retrieved November 16, 2008. - "H56: Using the dir attribute on an inline element to resolve problems with nested directional runs". Techniques for WCAG 2.0. W3C. Retrieved 18 September 2010. - "Character Entity Reference Chart". World Wide Web Consortium. October 24, 2012. - "The Named Character Reference '". World Wide Web Consortium. January 26, 2000. - "The Unicode Standard: A Technical Introduction". Retrieved 2010-03-16. - "HTML: The Markup Language (an HTML language reference)". Retrieved 2013-08-19. - Berners-Lee, Tim; Fischetti, Mark (2000). Weaving the Web: The Original Design and Ultimate Destiny of the World Wide Web by Its Inventor. San Francisco: Harper. ISBN 978-0-06-251587-2. - Raggett, Dave (2002). "Adding a touch of style". W3C. Retrieved October 2, 2009. This article notes that presentational HTML markup may be useful when targeting browsers "before Netscape 4.0 and Internet Explorer 4.0". See the list of web browsers to confirm that these were both released in 1997. - Tim Berners-Lee, James Hendler and Ora Lassila (2001). "The Semantic Web". Scientific American. Retrieved October 2, 2009. - Nigel Shadbolt, Wendy Hall and Tim Berners-Lee (2006). "The Semantic Web Revisited" (PDF). IEEE Intelligent Systems. Retrieved October 2, 2009. - "XHTML 1.0 The Extensible HyperText Markup Language (Second Edition)". World Wide Web Consortium. 2002 . Retrieved December 7, 2008. XHTML Documents which follow the guidelines set forth in Appendix C, "HTML Compatibility Guidelines" may be labeled with the Internet Media Type "text/html" [RFC2854], as they are compatible with most HTML browsers. Those documents, and any other document conforming to this specification, may also be labeled with the Internet Media Type "application/xhtml+xml" as defined in [RFC3236]. - "RFC 2119: Key words for use in RFCs to Indicate Requirement Levels". Harvard University. 1997. Retrieved December 7, 2008. 3. SHOULD This word, or the adjective "RECOMMENDED", mean that there may exist valid reasons in particular circumstances to ignore a particular item, but the full implications must be understood and carefully weighed before choosing a different course. - "XHTML 1.1 – Module-based XHTML — Second Edition". World Wide Web Consortium. 2007. Retrieved December 7, 2008. 
XHTML 1.1 documents SHOULD be labeled with the Internet Media Type text/html as defined in [RFC2854] or application/xhtml+xml as defined in [RFC3236]. - "Naming Files, Paths, and Namespaces". Microsoft. Retrieved 16 March 2015. - HTML Design Constraints, W3C Archives - WWW:BTB – HTML, Pris Sears - Freeman, E (2005). Head First HTML. O'Reilly. - Hickson, Ian (2011-01-19). "HTML is the new HTML5". The WHATWG blog. Retrieved 2013-01-14. - "HTML5 — Smile, it's a Snapshot!". W3C Blog. 2012-12-17. Retrieved 2013-01-14. - Jakob Nielsen (January 3, 2005). "Reviving Advanced Hypertext". Retrieved June 16, 2007. - Sauer, C.: WYSIWIKI – Questioning WYSIWYG in the Internet Age. In: Wikimania (2006) - Spiesser, J., Kitchen, L.: Optimization of HTML automatically generated by WYSIWYG programs. In: 13th International Conference on World Wide Web, pp. 355—364. WWW '04. ACM, New York, NY (New York, NY, U.S., May 17–20, 2004) - XHTML Reference: blockquote. Xhtml.com. Retrieved on 2012-02-16. - Doug Engelbart's INVISIBLE REVOLUTION . Invisiblerevolution.net. Retrieved on 2012-02-16.
Constructing an argument When writing an essay it is essential to construct an argument. An argument is a particular stand on an issue or question. It is made up of a series of claims. There are two types of claim: - the conclusion: the final claim that you are trying to prove. This is often the answer to a direct question, and is also known as the thesis statement. - the premises: other claims that lead to or contribute to the thesis statement. These are often topic sentences of paragraphs. In order to prove the premises, you must also provide: - the evidence: the research, facts and discussion used to prove those points. Therefore, if you are asked to argue a concept you are being asked to provide evidence to support your premises, which in turn support your conclusion. When writing an essay, for example, the thesis statement will appear in your introduction and conclusion. Each premise is usually in a separate paragraph, supported by the evidence for that premise. For more on structuring an essay, see essay planning and structure. Identifying a claim You can often identify a premise or a conclusion by the kinds of words used: - Premise: since, because, as, for, given that, assuming that - Conclusion: thus, therefore, hence, so, it follows that, we may conclude that (Flage, 2003, pp. 58-9) As Allen (2004, p. 19) observes, sometimes the same claim can be used as either a conclusion or a premise, depending on the point you want to make: "Your car is dirty [conclusion] because you drove through some mud [premise]." "You should wash your car [conclusion] since your car is dirty [premise]." What makes a strong argument? An argument is strong if it offers logical support for its conclusion. An argument is weak if there are gaps or bad connections between the premises which undermine their link to the conclusion. A strong argument is: - supported: the evidence is convincing and objective, and it supports the claims - balanced: the argument considers all the different perspectives, and comes to a reasonable conclusion based on those perspectives - logical: the argument is clearly and consistently reasoned. An argument that contains errors of logic (also known as logical fallacies) is weak. You can examine the strength of your argument by applying the principles of critical reading.
What is the topic of the lesson? How does Ms. Chandler introduce this unit? What activities does she plan for the students to help with the introduction? What does Ms. Chandler do to make the unit interesting and relevant to students' lives? Notice the essential questions she's listed. How do these questions relate to the lesson's objectives? Ms. Chandler takes time to discuss different kinds of scars with students. This is called "building background". Why is it important to build background about scars? Now think and discuss in detail: is it essential to build background in all lessons or units? Why or why not? Take time to study and watch the unit below: Designing a Garden Bench https://www.georgiastandards.org/resources/Pages/Innovation-in-Teaching-Competition-2nd-Grade-Courtney-Bryant.aspx (75 minutes of classroom observation) Read the Unit Plan under the heading: Available Materials. Identify and list the broad categories of the unit. Describe the sequence of this unit from beginning to end. Do you see yourself using a lesson plan template similar to this one? Explain. In chapter 1 we learn that teachers experience the love for teaching and identify extrinsic and intrinsic rewards with this occupation. List the extrinsic and intrinsic rewards you can associate with Ms. Bryant. Do you relate to any of the rewards you listed? Which of those rewards do you think you'll experience when you become a teacher? In chapter 2 we learn there are four basic purposes of school. Having learned about this unit with Ms. Bryant, which school purpose (or purposes) do you think is represented here? Justify your answer. In chapter 3 we learn about multiple intelligences (see page 67). Identify and discuss the different intelligences addressed in this unit. Visualize the classroom where you'll be teaching: how will you address your students' different intelligences and learning styles? Are you committed to appealing to different learning styles? Do you think this will be a challenge? How so? In chapter 5 we learn about the curriculum in schools. Identify and explain which specific subjects are taught in Ms. Bryant's unit. Is each subject taught in isolation, or do you think this is an example of interdisciplinary curriculum? Explain your answer. In your opinion, which method is most effective with students: subjects taught in isolation or interdisciplinary curriculum? Explain. THEORETICAL FRAMEWORK OF MEDIA AND POLITICS The relationship between media and politics can be considered one of the most debatable topics in the political science literature. In this study, this relationship will be examined through several models and approaches in order to describe it from different angles. First, the role of the media in politics will be discussed and illustrated with the help of a literature review on media and politics. In this way, the foundation will be laid for further discussion of the media-politics relationship. A general understanding of the models will provide grounds for placing them in a different setting: foreign policy. The main assumptions of the models will be examined in the foreign policy section. To observe the role of the media more specifically, Turkey will be the case study in the following section. The historical development of the Turkish press, Turkey's political sphere and its effects on the media, as well as the influence of the media on the Turkish political sphere, will be discussed.
This study will constitute the theoretical framework of media and politics. ROLE OF MEDIA IN POLITICS The relationship between media and politics has always been one of the fundamental areas of discussion. Political actors' need for public support, and the media's desire to influence society, have made the media an important and indispensable power for rulers. The term "media" derives from the Latin root "medi", which means "in the middle." In this respect, the media can be considered the intermediary between two parts of society. Media is used as a general concept, but also as a form expressing mass communication. "The 'mass' character of the mass media derives from the fact that the media channel communication towards a large and undifferentiated audience using relatively advanced technology. Grammatically and politically, the mass media are plural. Different messages may be put out by the 'broadcast' media (television and radio) and the 'printed' media (newspapers and magazines)" (Heywood, 2007: 232). The mission of the mass media is to convey and share information with individuals, groups, communities and masses in written, visual and interactive forms. Heywood also underlines the mass role of the media through new technological developments, and the internet in particular (Heywood, 2007: 231-237). The media is the shortest and most effective way to influence society. Every community that wants to direct the dynamics of society turns to the media. Political actors constitute one of these communities. Direct or indirect power relations are observed in this area. Governments regularly tend to engage with the media. Political actors may be found engaging in legal or illegal practices concerning the press. Those who hold media power can also use media organizations for these purposes in order to benefit from their unlimited opportunities. "In authoritarian regimes, the content of news and the media is carefully controlled by the government" (Orum and Dale, 2009: 273). As in the Chinese model, governments can control the mass media in order to sustain their political security (Orum and Dale, 2009). The media, in every phase of social and political life, seek a channel to reach the thoughts and opinions of individuals. Communication between individuals forms the cement and structure of what we call "politics". In this respect, the mass media not only create the channel for this communication, but also participate in and influence the political process (Heywood, 2007: 231-232). The "watchdog" role of the mass media is, in a sense, a subset of the political-debate argument. "The role of the media, from this perspective, is to ensure that public accountability takes place, by scrutinizing the activities of government and exposing abuses of power. The mass media promote democracy by widening the distribution of power and influence in society" (Heywood, 2007: 236). The importance of the media for society is a long-accepted fact. With the invention of the printing press in the fifteenth century, the role of printed works in the dissemination of new scientific, political and religious ideas led, for the first time, to an understanding of the importance of communication policies (Arslan, 2007; Bekci, 2013: 4).
Particularly in the nineteenth and twentieth centuries, newspapers became important tools for trade and industry, as well as for political parties and governments (Cuilenburg, 2010: 101; Bekci, 2013: 4). "At the beginning of the twentieth century, the influence of the media on foreign policy continued to grow with the invention of the radio" (Bekci, 2013: 4). "The news media played an active role in many events of the twentieth century. The active role of the newspaper Pravda in the organization and struggle of workers during the Bolshevik revolution has been noted by many researchers. The United States used the media effectively to propagate its legitimacy in the wars it fought. In World War II, Adolf Hitler established the 'Ministry of Propaganda' in order to channel society into war and legitimize his policies, and used the media in a powerful way" (Temel, 2014: 7). It can be said that the media, with missions such as organizing societies' information flows, acting as a bridge between policy makers and the people, transmitting political developments, educating society and raising awareness, have an important place in public life (Karadoğan, 1996: 54; Bekci, 2013: 7). It has been established that the media, sometimes characterized as the "fourth power", play a key role in democracies, and that through them individuals increase their awareness of one another and of their surroundings (Haas, 2009: 77; Bekci, 2013: 6). In no other area of activity in society, for instance art, pressure groups or trade unions, is the relationship with politics as complex as the relationship between media and politics (Arslan, 2007: 51; Özdemir, 2013: 61). Some sources argue that separating media and politics in parliamentary democracies is completely unfeasible (Alver, 1998: 39; Bekci, 2013: 7). This situation also gives rise to the concept of "mediocracy", which has been used frequently in recent years (Alver, 1998: 39; Bekci, 2013: 7). This concept, which emphasizes that there is no border between the political system and the media, also highlights the close relationship between political actors and media actors (journalists, columnists, etc.) in democracies (Alver, 1998: 47; Bekci, 2013: 7). The individuals who provide the media and media publications are directly connected to politics and politicians (Bayram, 2011: 69; Bekci, 2013: 8). In this connection, Kırlı underscores the media's role in generating society's public opinion (Kırlı, 2004: 20-25). "Instruments that help to determine public opinion also help to form and develop the public at the same time. The media affect the formation and direction of public opinion through the interpretation and evaluation of events" (Kapani, 1970: 241; Kırlı, 2004: 21). Some researchers have focused on the power relations between the media and political actors. "While the media put pressure on other power actors, they can at the same time submit to the pressures of those actors. 'Manufacturing Consent' and the 'CNN Effect' are the leading theories explaining the complex relationship between the media and other power structures" (Temel, 2014: 8).
In addition, the Agenda-Setting Model attempts to explain the relationship between the media, the public and the political agenda, and how these agendas influence one another (Terkan, 2003: 564; Bekci, 2013: 21). MANUFACTURING CONSENT - PROPAGANDA MODEL Walter Lippmann, an American journalist, first used the concept of "manufacturing consent" in his book "Public Opinion", published in 1922 (Chomsky, 2013; Temel, 2014: 8). In Chomsky and Herman's book "Manufacturing Consent" (1994), the concept is explained through the way the state cooperates with capital to make policies appear as though they fit the needs of a public for whom they are not in fact beneficial. In capitalist democracies, individuals are controlled through soft power and use their voting rights according to the wishes and desires of the elite minority. According to them, through the control of collective consciousness it is possible to ensure that people reach for the "thing" they do not want, and consent to situations the community would not accept under normal conditions (Chomsky, 2013; Temel, 2014: 8). According to the two authors, the media "serve the interests of the powerful social groups that oversee and fund them" and "produce propaganda for these groups" (Chomsky and Herman, 2006: 16; Temel, 2014: 8). "Noam Chomsky and Ed Herman, in Manufacturing Consent, identified five 'filters' through which news and political coverage are distorted by the structures themselves. These filters are as follows: the business interests of owner companies; a sensitivity to the views and concerns of advertisers and sponsors; the sourcing of news and information from 'agents of power', such as governments and business-backed think tanks; 'flak', or pressure applied to journalists, including threats of legal action; and an unquestioning belief in the benefits of market competition and consumer capitalism" (Heywood, 2007: 233-234). "The state and owners of capital transfer huge resources to the media sector. These resources flow to news outlets that are either mainstream or 'agenda setters'. According to Chomsky and Herman, this situation causes the media to lose their polyphony and to move away from public-interest journalism" (Temel, 2014: 9). Chomsky and Herman assume that the information required for systematic and aggressive propaganda by governments and global corporations
What are the Circle ⭕️ Theorems? Students of geometry often find circles particularly difficult to deal with. Unlike other shapes, circles lack the straight sides joined at definite angles which much of geometry equips the student to understand. Nevertheless, getting to grips with circles is of central importance to geometry and its applications. Architects and engineers often have to make precise calculations about circles when designing towers or wheels. Astronomers need a good understanding of circles when studying the orbits of the planets. As such, mathematicians since the ancient Greek thinker Euclid have developed a set of simple theorems which describe the properties of circles. This post will outline these ‘circle theorems’ by using diagrams and easy-to-understand language. It will then go through some hints and tips for making the best of circle theorems in your work. As simple as possible This post will go through the circle theorems without all of the unnecessary jargon that you find in some explanations. Nevertheless, there are a few technical terms and ways of talking which should help us along. One way of talking about angles that makes understanding circle theorems easier is to name angles after the three points which join the lines that make them up. Take a look at the triangle in figure 1. This triangle is made up of three points: q, r, and s. If we wanted to talk about this triangle, we would need an easy way to identify which of the three angles we’re talking about. The way we do this is to list the three points which join the lines which make up the angle, with the point at the angle in the middle. So, for example, the blue angle in figure 1 is made up of two lines from points q, r, and s (with point q at the angle itself). Therefore, we can label the blue angle rqs. In a similar way, we can call the red angle qsr and the green angle qrs. Some more definitions can help us understand concepts in this article better: A ‘chord’ is a straight line that extends between two points on the circumference of the circle. A ‘tangent’ is a straight line which only touches the circumference of a circle at a single point. A triangle is said to be ‘inscribed’ within a circle if each of its corners lies on the circumference of the circle. The ‘vertex angle’ of a triangle is the angle opposite its base. So, in figure 1, if we take the line between r and s as the base of the triangle, then the angle in blue is the vertex angle. Theorem 1: The centre angle is always twice the size of the circumference angle. In figure 2 we have three points, a, b, and c, around the circumference of a circle and a point at the centre of the circle, which we can call point m. The angle amc is always double the angle abc, for any set of three points along the circumference. Theorem 2: The vertex angles of inscribed triangles with the same base chord will always be the same. In figure 3 we have four points on the circumference of a circle, a, b, c, and d. There are two triangles, both with the chord between a and b as their base. One triangle has c as its third point, while the other has d. The angles acb and adb (the two ‘vertex angles’ of the triangles) will always be the same. Theorem 3: The length of two tangents from a point outside the circle to the circumference of the circle will always be the same. In figure 4 we have two tangents which touch different points on the circumference of a circle, a and b. They then cross each other at another point, c.
The length between a and c will always be equal to the length between b and c. Theorem 4: Opposite angles of a quadrilateral inscribed within a circle will always add up to 180°. In figure 5 we have a quadrilateral whose four corners are points on the circumference of a circle. The opposite angles of this quadrilateral will add up to 180°. For example, cab added to cdb will add to 180°. Theorem 5: The angle between a chord, which is one side of a triangle inscribed within a circle, and a tangent is always equal to the angle of the triangle opposite that chord. In figure 6, we have an inscribed triangle, whose corners are at points a, b, and c on the circumference of a circle. We also have a tangent between points d and e, which meets the circle at point c. The angle bce is equal to the angle of the triangle opposite that chord, cab. The same is true of angles dca and cba. Theorem 6: The vertex angle of an inscribed triangle will be a right angle if it has the diameter as its base. In figure 7, we have a triangle inscribed within a circle. The base of the triangle is a chord that goes through the middle of the circle. As such, it is the length of the diameter of the circle. This means that the angle opposite the base, which is angle bac in figure 7, will always be a right angle (90°), no matter where on the circle it lies. Theorem 7: The angle between the radius and a tangent is always a right angle. In figure 8, we have the radius of the circle, represented by the line between the midpoint m and a point of the circumference b. There is also a tangent, between points a and c, which meets the circle at point b. As the radius line meets the edge of the circle at the point where the tangent meets the circle, the angle between them must be a right angle. Theorem 8: If a line that goes through the centre of a circle is perpendicular to a chord, then it will always bisect that chord. In figure 9 we have a line which goes through the centre of the circle, point m. We also have a chord, between points a and c. The two lines meet at point b. The angles mbc and mba are right angles. From this we know that the line through the centre ‘bisects’ the chord; it cuts the chord in half, so to speak. What this means is that the part of the line between a and b must be equal in length to the part of the line between b and c. How do I deal with so many circle theorems? When studying circle theorems, it is easy to get overwhelmed by questions containing complex language and multiple parts. Here are some hints and tips for making the best use of circle theorems in your work. Firstly, practice makes perfect. While it may seem dull, nothing beats actually using the circle theorems themselves to answer questions and solve problems. Fundamentally, circle theorems are important as they provide a way of understanding and manipulating the world around us. As such, the practical skill of using circle theorems is best developed through just using them. Secondly, these theorems are best understood intuitively. It is important not only to know the right equations and technical terms, but also to understand why these theorems describe circles the way that they do. Just sitting with a list of the theorems and trying to learn them word for word will not help you very much. You have to look at the circles themselves and just 'see' why, for example, the vertex angle of an inscribed triangle must always be a right angle if it has the diameter as its base.
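One way to build that kind of intuition (a side note from us, not part of the original post) is to check a theorem numerically. The short Python sketch below samples random points on a unit circle and confirms Theorem 1: the centre angle amc always comes out as exactly twice the circumference angle abc. The point names follow figure 2; everything else is our own illustration.

import math
import random

def angle_at(vertex, p, q):
    # Angle p-vertex-q in degrees, computed from the dot product.
    ax, ay = p[0] - vertex[0], p[1] - vertex[1]
    bx, by = q[0] - vertex[0], q[1] - vertex[1]
    cos_t = (ax * bx + ay * by) / (math.hypot(ax, ay) * math.hypot(bx, by))
    return math.degrees(math.acos(max(-1.0, min(1.0, cos_t))))

def on_circle(theta):
    # A point on the unit circle centred at m = (0, 0).
    return (math.cos(theta), math.sin(theta))

m = (0.0, 0.0)
a, c = on_circle(0.3), on_circle(1.7)
for _ in range(3):
    b = on_circle(random.uniform(2.0, 6.0))  # b lies on the major arc from c round to a
    # Both printed numbers should agree: amc = 2 x abc.
    print(round(angle_at(m, a, c), 6), round(2 * angle_at(b, a, c), 6))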
You must imagine all the different ways of constructing whichever shape is relevant to the theorem, whether it be a triangle or quadrilateral, inside of a circle. In doing so, you get an intuitive feel for the properties of circles and of inscribed shapes. This will mean that when you are faced with questions in exams or homework, you don't have to sift through a mental list of theorems that you've remembered word for word. Instead, you simply know what to do just by looking. Fingers crossed 🤞 Hopefully now you’ll be in a better position to tackle questions involving circles head on. To help you to revise, why not try constructing your own circles on paper? Draw some tangents, or perhaps an inscribed triangle. You can practice applying the theorems by working out what you can discover about the circles you've constructed using them. Alternatively, why not head over to studysquare.co.uk to test yourself on what you’ve learnt: Logic Enthusiast is an independent writer and is studying for an MA in Philosophy at the University of Edinburgh. He is particularly interested in Logic and the Philosophy of Science.
Derivatives of Exponential Function Teacher Resources Find Derivatives of Exponential Function educational lesson plans and worksheets What is the Slope of the Stairs in Front of the School? Mathematicians apply the formula for line slope to determine the slope of stairs in their school. They work in small groups to take the appropriate measurements, perform the necessary calculations, and find the mean of their group slope... 8th - 11th Math CCSS: Designed Similarities and Differences in Properties of Different Families of Functions - An Investigation Exploring families of functions allows young scholars to compare and contrast properties of functions. Students discuss properties that include symmetry, max and min points, asymptotes, derivatives, etc. 9th - 12th Math CCSS: Designed Relationships Between Two Numerical Variables Is there another way to view whether the data is linear or not? Class members work alone and in pairs to create scatter plots in order to determine whether there is a linear pattern or not. The exit ticket provides a quick way to... 9th - 10th Math CCSS: Designed Polynomial Approximations--An Introduction to the Taylor Series Twelfth graders examine the Taylor Series. For this calculus lesson, 12th graders explore the representation of a function as an infinite sum of terms calculated from the values of its derivatives at a single point, hence the Taylor...
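As a point of reference for that last resource (our addition here, not part of the original listing), the Taylor series expands a function f about a point a in terms of its derivatives at that single point:

\[
  f(x) \;=\; \sum_{n=0}^{\infty} \frac{f^{(n)}(a)}{n!}\,(x-a)^{n}
  \;=\; f(a) + f'(a)\,(x-a) + \frac{f''(a)}{2!}\,(x-a)^{2} + \cdots
\]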
Astronomers use angular measure to describe the apparent size of an object in the night sky. An angle is the opening between two lines that meet at a point, and angular measure describes the size of an angle in degrees, designated by the symbol °. A full circle is divided into 360° and a right angle measures 90°. One degree can be divided into 60 arcminutes (abbreviated 60 arcmin or 60'). An arcminute can also be divided into 60 arcseconds (abbreviated 60 arcsec or 60"). The angle covered by the diameter of the full moon is about 31 arcmin, or 1/2°, so astronomers would say the Moon's angular diameter is 31 arcmin, or that the Moon subtends an angle of 31 arcmin. If you extend your hand to arm's length, you can use your fingers to estimate angular distances and sizes in the sky. Your index finger is about 1° and the distance across your palm is about 10°. The angular sizes of objects show how much of the sky an object appears to cover. Angular size does not, however, say anything about the actual size of an object. If you extend your arm while looking at the full moon, you can completely cover the moon with your thumb. Of course, the moon is much larger than your thumb; it only appears smaller because of its distance. How large an object appears depends not only on its size, but also on its distance. The angular size of an object, its actual (linear) size, and its distance are related by the small-angle formula: D = θ d / 206,265 D = linear size of an object θ = angular size of the object, in arcsec d = distance to the object A certain telescope on Earth can see details as small as 2 arcsec. What is the greatest distance at which you could see details as small as the height of a typical person (1.6 m)? d = 206,265 D / θ = 206,265 × 1.6 m / 2 = 165,012 m = 165.012 km This is much less than the distance to the Moon (approximately 384,000 km), so this telescope would not be able to see an astronaut walking on the moon. (In fact, no Earth-based telescope could.) 1. The average distance to the Moon is approximately 384,000 km. The Moon subtends an angle of 31 arcminutes, or about 1/2°. Use this information and the small-angle formula to find the diameter of the Moon in kilometers. 2. At what distance would you have to hold a quarter (which has a diameter of about 2.5 cm) for it to subtend an angle of 1°? 1. The diameter of the Moon is about 3,463 km 2. You would have to hold it at a distance of 1.43 meters.
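If you would like to check these numbers yourself, here is a minimal Python sketch of the small-angle formula (our addition; the function names are our own, and distances come out in whatever unit you feed in):

ARCSEC_PER_RADIAN = 206_265  # the constant in the small-angle formula

def linear_size(theta_arcsec, dist):
    # D = theta * d / 206,265
    return theta_arcsec * dist / ARCSEC_PER_RADIAN

def distance(size, theta_arcsec):
    # rearranged: d = 206,265 * D / theta
    return ARCSEC_PER_RADIAN * size / theta_arcsec

print(distance(1.6, 2))               # worked example: ~165,012 m
print(linear_size(31 * 60, 384_000))  # exercise 1: ~3,463 km (distance given in km)
print(distance(0.025, 3600))          # exercise 2: ~1.43 m (1 degree = 3,600 arcsec)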
Lunar distance (astronomy) Lunar distance is a unit of measure in astronomy. It is the average distance from the center of Earth to the center of the Moon. More technically, it is the mean semi-major axis of the geocentric lunar orbit. It may also refer to the time-averaged distance between the centers of the Earth and the Moon, or less commonly, the instantaneous Earth-Moon distance. Lunar distance is also called Earth-Moon distance, Earth–Moon characteristic distance, or distance to the Moon, and is commonly indicated with LD. The mean semi-major axis has a value of 384,402 km (238,856 mi), or about 0.002570 AU. The time-averaged distance between Earth and Moon centers is 385,000.6 km (239,228.3 mi). The actual distance varies over the course of the orbit of the Moon, from 356,500 km (221,500 mi) at the perigee to 406,700 km (252,700 mi) at apogee, resulting in a differential range of 50,200 km (31,200 mi). Lunar distance is commonly used to express the distance to Near-Earth object encounters. Lunar distance is also an important astronomical datum; the precision of this measurement, to a few parts in a trillion, has useful implications for testing gravitational theories such as general relativity, and for refining other astronomical values such as Earth mass, Earth radius, and Earth's rotation. The measurement is also useful in characterizing the lunar radius, the mass of the Sun and the distance to the Sun. The Moon is spiraling away from the Earth at an average rate of 3.8 cm (1.5 in) per year, as detected by the Lunar Laser Ranging Experiment. By coincidence, the diameter of corner cubes in retroreflectors on the Moon is also 3.8 cm. The instantaneous lunar distance is constantly changing. In fact the true distance between the Moon and Earth can change at as much as 75 m/s, or by more than 1,000 kilometers in just 6 hours, due to its non-circular orbit. There are other effects that also influence the lunar distance. Some factors are described in this section. Perturbations and eccentricity The distance to the Moon can be measured to sub-millimeter accuracy, which results in an overall uncertainty of 2–3 cm. However, due to its elliptical orbit with varying eccentricity, the instantaneous distance varies with monthly periodicity. Furthermore, the distance is perturbed by various astronomical bodies - most significantly the Sun, and less so by Jupiter. Other sources responsible for minor perturbations are the other planets in the solar system, asteroids, tidal forces, and relativistic effects. Although the instantaneous uncertainty is sub-millimeter, the measured lunar distance can change by more than 21,000 km from the mean value throughout a typical month. These perturbations are well understood and the lunar distance can be accurately modeled over thousands of years. Through the action of tidal forces, angular momentum is slowly being transferred from Earth's rotation to the Moon's orbit. The result is that Earth's rate of spin is imperceptibly decreasing (at a rate of 2.3 milliseconds/century), and the lunar orbit is gradually expanding. The current rate of recession is 3.805 ± 0.004 cm per year. However, it is believed that this rate has recently increased, as a constant rate of 3.8 cm/year would imply that the Moon is only 1.5 billion years old, whereas scientific consensus assumes an age of ~4 billion years.
It is also believed that this anomalously high rate of recession may continue to accelerate. (Note that a change in the total angular momentum of a system requires an external torque; the tides are internal to the Earth-Moon system and simply transfer angular momentum within it.) The average lunar distance is increasing, which implies that the Moon was closer in the past. There is geological evidence that the average lunar distance was about 52 R⊕ during the Precambrian Era, 2,500 million years BP. The giant impact hypothesis, a widely accepted theory, states that the Moon was created as a result of a catastrophic impact between another planet and Earth, resulting in a re-accumulation of fragments at an initial distance of 3.8 R⊕. In this theory, the initial impact is assumed to have occurred 4.5 billion years ago. History of measurement Until the late 1950s all measurements of lunar distance were based on optical angular measurements. The space age marked a turning point which greatly advanced the precision and accuracy of our knowledge of this value. During the 1950s and 1960s, experiments were conducted that utilized radar, lasers, spacecraft, and computer modeling. This section is intended to illustrate some of the historically significant or otherwise interesting methods of determining the lunar distance, and is not intended to be an exhaustive or all-encompassing list. The oldest method of determining the lunar distance involves measuring the angle between the Moon and a chosen reference point from multiple locations, simultaneously. The synchronization can be coordinated by making measurements at a pre-determined time, or during an event which is observable to all parties. Before accurate mechanical chronometers, the synchronization event was typically a lunar eclipse, or the moment when the Moon crossed the meridian (if the observers shared the same longitude). This measurement technique is known as lunar parallax. For increased accuracy, certain systematic errors must be accounted for, such as adjusting the measured angle to account for refraction and distortion of light through the atmosphere. Early attempts to measure the distance to the Moon exploited observations of a lunar eclipse combined with knowledge of Earth's radius and an understanding that the Sun is much further away than the Moon. By observing the geometry of a lunar eclipse, the lunar distance can be calculated using trigonometry. The earliest account of an attempt to measure the distance to the Moon using this technique was by the 4th-century-BC Greek astronomer and mathematician Aristarchus of Samos, and later by Hipparchus, whose calculations produced a result of 59-67 R⊕. This method later found its way into the work of Ptolemy, who produced a result of 64 1/6 R⊕ at its farthest point. An expedition by the British astronomer A. C. D. Crommelin observed meridional transits of the Moon (the moment when the Moon crosses an imaginary great circle that passes directly overhead and through the poles) on the same night from two different locations. Careful measurements from 1905 through 1910 determined the angle of elevation at the moment when a specific lunar crater (Mösting A) crossed the meridian, from stations at Greenwich and at the Cape of Good Hope, which share nearly the same longitude. A distance was calculated with an uncertainty of ±30 km, and this remained the definitive lunar distance value for the next half century.
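To make the parallax idea concrete, here is a small Python sketch (our own illustration, not from the article). It estimates the Moon's distance from the chord baseline between two stations on nearly the same meridian and a measured parallax angle. The 1.29° parallax below is an illustrative figure chosen to land near the true mean distance, not a quoted measurement from the 1905-1910 campaign, and the baseline is assumed to be roughly perpendicular to the line of sight.

import math

R_EARTH_KM = 6371.0

def chord_baseline(lat1_deg, lat2_deg):
    # Straight-line (chord) distance between two stations on the same meridian.
    delta = math.radians(abs(lat1_deg - lat2_deg))
    return 2 * R_EARTH_KM * math.sin(delta / 2)

def parallax_distance(baseline_km, parallax_deg):
    # Small-angle estimate: distance ~ baseline / parallax (in radians),
    # valid when the baseline is roughly perpendicular to the line of sight.
    return baseline_km / math.radians(parallax_deg)

b = chord_baseline(51.5, -34.0)  # roughly Greenwich and the Cape of Good Hope
print(f"baseline ~ {b:,.0f} km")                           # ~8,650 km
print(f"distance ~ {parallax_distance(b, 1.29):,.0f} km")  # ~384,000 km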
By recording the instant when the Moon occults a background star (or similarly, measuring the angle between the Moon and a background star at a predetermined moment) the lunar distance can be determined, as long as the measurements are taken from multiple locations of known separation. Astronomers O'Keefe and Anderson calculated the lunar distance by observing 4 occultations from 9 locations in 1952. They calculated a mean distance of 384,407.6 ± 4.7 km; however, the value was refined in 1962 by Irene Fischer, who incorporated updated geodetic data to produce a value of 384,403.7 ± 2 km. An experiment was conducted in 1957 at the U.S. Naval Research Laboratory that used the echo from radar signals to determine the Earth-Moon distance. Radar pulses lasting 2 µs were broadcast from a 50 ft diameter radio dish. After the radio waves echoed off the surface of the Moon, the return signal was detected and the delay time measured. From that measurement, the distance could be calculated. In practice, however, the signal-to-noise ratio was so low that an accurate measurement could not be reliably produced. The experiment was repeated in 1958 at the Royal Radar Establishment, in England. Radar pulses lasting 5 µs were transmitted with a peak power of 2 megawatts, at a repetition rate of 260 pulses per second. After the radio waves echoed off the surface of the Moon, the return signal was detected and the delay time measured. Multiple signals were added together to obtain a reliable signal by superimposing oscilloscope traces onto photographic film. From the measurements, the distance was calculated with an uncertainty of 1.25 km. These initial experiments were intended to be proof-of-concept experiments and only lasted one day. Follow-on experiments lasting one month produced a mean value of 384,402 ± 1.2 km, which was the most accurate measurement of the lunar distance at the time. An experiment which measured the round-trip time of flight of laser pulses reflected directly off the surface of the Moon was performed in 1962, by a team from the Massachusetts Institute of Technology and a Soviet team at the Crimean Astrophysical Observatory. During the Apollo missions in 1969, astronauts placed retroreflectors on the surface of the Moon for the purpose of refining the accuracy of this technique. The measurements are ongoing and involve multiple laser facilities. The instantaneous accuracy of the Lunar Laser Ranging experiments can exceed sub-millimeter resolution, and laser ranging is the most reliable method of determining the lunar distance to date. Amateur astronomers and citizen scientists Due to the modern accessibility of accurate timing devices, high resolution digital cameras, GPS receivers, powerful computers and near-instantaneous communication, it has become possible for amateur astronomers to make high accuracy measurements of the lunar distance. On May 23, 2007, digital photographs of the Moon during a near-occultation of Regulus were taken from two locations, in Greece and England. By measuring the parallax between the Moon and the chosen background star, the lunar distance was calculated. A more ambitious project called the "Aristarchus Campaign" was conducted during the lunar eclipse of 15 April 2014. During this event, participants were invited to record a series of 5 digital photographs from moonrise through culmination - the point of greatest altitude.
The method took advantage of the fact that the Moon is actually closest to an observer when it is at its highest point in the sky, compared to when it is on the horizon. Although it appears that the Moon is biggest when it is near the horizon, the opposite is true. (This phenomenon is known as the moon illusion.) The reason for the change in distance is that the distance from the center of the Moon to the center of Earth is nearly constant throughout the night, but an observer on the surface of Earth is actually 1 Earth radius away from the center of Earth. This offset brings them closest to the Moon when it is overhead. Modern cameras have reached a resolution level capable of capturing the Moon with enough precision to perceive, and more importantly to measure, this tiny variation in apparent size. The results of this experiment were calculated as LD = 60.51 +3.91/−4.19 R⊕. The accepted value for that night was 60.61 R⊕, which implied a 3% accuracy. The benefit of this method is that the only measuring equipment needed is a modern digital camera (equipped with an accurate clock and a GPS receiver). Other experimental methods of measuring the lunar distance that can be performed by amateur astronomers involve: - Taking pictures of the Moon before it enters into the penumbra and after it is completely eclipsed. - Measuring, as precisely as possible, the times of the eclipse contacts. - Taking good pictures of the partial eclipse when the shape and size of the Earth's shadow are clearly visible. - Taking a picture of the Moon that includes Spica and Mars in the same field of view, from various locations. In popular culture An experiment was depicted in the television comedy "The Big Bang Theory", which portrayed the measurement of the lunar distance using lasers. See also - Astronomical unit - Jet Propulsion Laboratory Development Ephemeris - Lunar Laser Ranging Experiment - Lunar theory - On the Sizes and Distances (Aristarchus) - Orbit of the Moon - The Prutenic Tables of Erasmus Reinhold
Einstein’s Theory of Relativity Newton considered space, time, and mass to be absolute, or independent; Albert Einstein, in his theory of relativity, considered them relative. Absolute means not variable with respect to anything. It has been mentioned earlier that in order to measure the position or motion of an object, a frame of reference is needed, and with respect to that frame of reference the position of the object is expressed by three quantities. Besides, in order to measure time, a clock or some other standard is needed. These are known as space-time frames. Mechanics is based on Newton’s three laws of motion. But it is not stated there with respect to which frame of reference the laws are valid. It is known from mechanics that Newton’s laws are not correct with respect to every frame of reference. If Newton’s first law of motion is analyzed, it is seen that a motion which is uniform for one observer need not be uniform for another. So it is meaningless to ask whether a frame of reference of motion or rest is absolute or independent. If an object does not change its position with respect to its surroundings, it is called static, and if it changes its position, it is in motion. So, anything other than relative rest or relative motion is meaningless. But Newton believed in absolute motion. On the other hand, Einstein clearly stated that none of space, time, and mass is absolute: each one of them is considered with reference to something else. Considering anything with reference to something else is what is meant by relativity. According to the special theory of relativity, absolute motion is meaningless; all motions are relative. The theory of relativity has two parts: (1) the general theory of relativity and (2) the special theory of relativity. The general theory of relativity deals with objects or systems having accelerated as well as uniform motion with respect to one another. For example, the motions of the sun, moon, stars, comets, meteors, etc., gravitation, and the scientific and philosophical theories and postulates on the formation of the universe belong to the general theory of relativity. It was published in 1916. On the other hand, the special theory of relativity discusses objects or systems having uniform (non-accelerated) velocity, and is valid only for inertial systems, i.e., systems moving with constant relative velocity. Actually, the special theory of relativity is a special case of the general theory of relativity. It was published in 1905.
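The passage above stops short of any formulas. For concreteness (our addition, the standard textbook statement rather than anything in the original text), the special theory relates the coordinates of two inertial frames moving at constant relative velocity v along the x-axis by the Lorentz transformation:

\[
  t' = \gamma\left(t - \frac{v x}{c^{2}}\right), \qquad
  x' = \gamma\,(x - v t), \qquad
  \gamma = \frac{1}{\sqrt{1 - v^{2}/c^{2}}},
\]

where c is the speed of light; for v much smaller than c, γ ≈ 1 and the familiar Newtonian picture is recovered.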
Learn about the link between logical arguments and electronic circuits. Investigate the logical connectives by making and testing your own circuits and record your findings in truth tables. Investigate circuits and record your findings in this simple introduction to truth tables and logic. Eight children enter the autumn cross-country race at school. How many possible ways could they come in at first, second and third? Replace each letter with a digit to make this addition correct. Three dice are placed in a row. Find a way to turn each one so that the three numbers on top of the dice total the same as the three numbers on the front of the dice. Can you find all the ways to do... What happens to the perimeter of triangle ABC as the two smaller circles change size and roll around inside the bigger circle? This article invites you to get familiar with a strategic game called "sprouts". The game is simple enough for younger children to understand, and has also provided experienced mathematicians with... Here are three 'tricks' to amaze your friends. But the really clever trick is explaining to them why these 'tricks' are maths not magic. Like all good magicians, you should practice by trying... Learn about the link between logical arguments and electronic circuits. Investigate the logical connectives by making and testing your own circuits and fill in the blanks in truth tables to record... This jar used to hold perfumed oil. It contained enough oil to fill granid silver bottles. Each bottle held enough to fill ozvik golden goblets and each goblet held enough to fill vaswik crystal... Carry out cyclic permutations of nine-digit numbers containing the digits from 1 to 9 (until you get back to the first number). Prove that whatever number you choose, they will add to the same total. This is the second of two articles and discusses problems relating to the curvature of space, shortest distances on surfaces, triangulations of surfaces and representation by graphs. Can you visualise whether these nets fold up into 3D shapes? Watch the videos each time to see if you were correct. If you know the sizes of the angles marked with coloured dots in this diagram, which angles can you find by calculation? We have exactly 100 coins. There are five different values of coins. We have decided to buy a piece of computer software for 39.75. We have the correct money, not a penny more, not a penny less! Can... The first of two articles on Pythagorean Triples which asks how many right-angled triangles you can find with the lengths of each side exactly a whole number measurement. Try it! A paradox is a statement that seems to be both untrue and true at the same time. This article looks at a few examples and challenges you to investigate them for yourself. Spotting patterns can be an important first step - explaining why it is appropriate to generalise is the next step, and often the most interesting and important. A game for 2 players that can be played online. Players take it in turns to select a word from the 9 words given. The aim is to select all the occurrences of the same letter. What does logic mean to us and is that different to mathematical logic? We will explore these questions in this article. Find the area of the annulus in terms of the length of the chord which is tangent to the inner circle. Can you discover whether this is a fair game? Here are some examples of 'cons'; see if you can figure out where the trick is.
Take any whole number between 1 and 999, add the squares of the digits to get a new number. Make some conjectures about what happens in general. This is the second article on right-angled triangles whose edge lengths are whole numbers. Some puzzles requiring no knowledge of knot theory, just a careful inspection of the patterns. A glimpse of the classification of knots and a little about prime knots, crossing numbers and... Patterns that repeat in a line are strangely interesting. How many types are there and how do you tell one type from another? In this 7-sandwich: 7 1 3 1 6 4 3 5 7 2 4 6 2 5 there are 7 numbers between the 7s, 6 between the 6s etc. The article shows which values of n can make n-sandwiches and which cannot. Which set of numbers that add to 10 has the largest product? Write down a three-digit number. Change the order of the digits to get a different number. Find the difference between the two three-digit numbers. Follow the rest of the instructions, then try... From a group of any 4 students in a class of 30, each has exchanged Christmas cards with the other three. Show that some students have exchanged cards with all the other students in the class. How... Choose any three by three square of dates on a calendar page... In the following sum the letters A, B, C, D, E and F stand for six distinct digits. Find all the ways of replacing the letters with digits so that the arithmetic is correct. This addition sum uses all ten digits 0, 1, 2...9 exactly once. Find the sum and show that the one you give is the only... Nine cross country runners compete in a team competition in which there are three matches. If you were a judge how would you decide who would win? Six points are arranged in space so that no three are collinear. How many line segments can be formed by joining the points in... After some matches were played, most of the information in the table containing the results of the games was accidentally deleted. What was the score in each match played? What are the missing numbers in the pyramids? I start with a red, a blue, a green and a yellow marble. I can trade any of my marbles for three others, one of each colour. Can I end up with exactly two marbles of each colour? I start with a red, a green and a blue marble. I can trade any of my marbles for two others, one of each colour. Can I end up with five more blue marbles than red after a number of such trades? An introduction to how patterns can be deceiving, and what is and is not a proof. Pick the number of times a week that you eat chocolate. This number must be more than one but less than ten. Multiply this number by 2. Add 5 (for Sunday). Multiply by 50... Can you explain why it... What can you say about the angles on opposite vertices of any cyclic quadrilateral? Working on the building blocks will give you insights that may help you to explain what is special about them. Baker, Cooper, Jones and Smith are four people whose occupations are teacher, welder, mechanic and programmer, but not necessarily in that order. What is each person’s occupation? Advent Calendar 2011 - a mathematical activity for each day during the run-up to Christmas. This article stems from research on the teaching of proof and offers guidance on how to move learners from focussing on experimental arguments to mathematical arguments and deductive... How many pairs of numbers can you find that add up to a multiple of 11? Do you notice anything interesting about your results? Powers of numbers behave in surprising ways.
Take a look at some of these and try to explain why they are true. If you can copy a network without lifting your pen off the paper and without drawing any line twice, then it is traversable. Decide which of these diagrams are traversable. Are these statements always true, sometimes true or never true?
Mathematics Intervention Resources for Middle School - Learning to Love Math: Teaching Strategies That Change Student Attitudes and Get Results by Judy Willis (Jul 13, 2010) - Dr. Judy Willis responds with an emphatic yes in this informative guide to getting better results in math class. Tapping into abundant research on how the brain works, Willis presents a practical approach for how we can improve academic results by demonstrating certain behaviors and teaching students in a way that minimizes negativity - Mastering the Basic Math Facts in Multiplication and Division: Strategies, Activities & Interventions to Move Students Beyond Memorization by Susan O'Connell and John SanGiovanni (Mar 29, 2011) - Provides insights into the teaching of basic math facts, including a multitude of instructional strategies, teacher tips, and classroom activities to help students master their facts while strengthening their understanding of numbers, patterns, and properties. - Pre-Referral Intervention Manual (3rd edition): The Most Common Learning and Behavior Problems Encountered in the Educational Environment by Stephen B. McCarney. Hawthorne Educational Services, Inc. (2006) - This book provides a wealth of intervention ideas based on learning/behavior concerns for improvement. The table of contents organizes the student learning/behavior and provides twenty-six strategies to try with the student to assist with that specific learning/behavior. - RTI and Mathematics: Practical Tools for Teachers in K-8 Classrooms by Regina Gresham and Mary E. Little (Sep 17, 2012)- This interactive, practical resource gives educators sound knowledge and expertise for successfully implementing RTI in mathematics and addressing the challenges involved. Clarifies and describes the issues of RTI, the connections among teachers’ knowledge and skills and their use with RTI, and the role of the teacher within the classroom and school, and provides evidence-based content, scenarios, examples, resources, and activities; modeling description; and reflection upon the key learning outcomes of RTI. - Solving Equations: An Algebra Intervention (Math Intervention Series) by Bradley S. Witzel and Paul J. Riccomini (Jul 17, 2010) - This timely new book is filled with essential research-based information that teachers and pre-service teachers alike need in order to help more students achieve mathematical standards by employing the concrete to representational to abstract (CRA) sequence of instruction with forms of algebraic equations. - Strategies for Teaching Whole Number Computation: Using Error Analysis for Intervention and Assessment by David B. Spangler (Jun 2, 2010) - Through error analysis and targeted instruction, you can uncover students’ misconceptions in addition, subtraction, multiplication, and division and help students understand and correct their own mistakes. - Teaching Learners Who Struggle with Mathematics: Responding With Systematic Intervention and Remediation (3rd Edition) (Pearson Professional Development) by Helene J. Sherman, Lloyd I. Richardson and George J. Yard (Apr 23, 2012) - This book is designed for aspiring and practicing teachers who will work or are working with K-6 students in need of remediation and additional math instruction. 
Addressing the mathematical concepts students struggle with most, including place value, addition and subtraction of whole numbers, multiplication, division, fractions, and time and money, this book analyzes the roots and causes of frequent error patterns in student work and offers implementable solutions for solving them and teaching lifelong math skills. - Understanding RTI in Mathematics: Proven Methods and Applications by Russell Gersten, Ph.D. and Rebecca Newman-Gonchar, Ph.D. (Aug 31, 2011) - This is the definitive volume on RTI in math: what we know about it, why it works, and how to use it to ensure high-quality math instruction and higher student achievement. Filled with vignettes, accessible summaries of the most recent studies, and best-practice guidelines for making the most of RTI, this comprehensive research volume is ideal for use as a textbook or as a key resource to guide decision makers. - Using Formative Assessment to Differentiate Mathematics Instruction, Grades 4-10: Seven Practices to Maximize Learning by Leslie Laud (Mar 28, 2011) - Staff development expert Leslie Laud provides seven research-based practices that show teachers how to implement formative assessment, create tiered instruction, and manage a multitasking classroom. - ABRI - ABRI (Academic and Behavioral Response to Intervention) is structured to provide state-wide access to support with the emphasis on creating an infrastructure toward sustainability and capacity building within schools and educational cooperatives. - Intervention Central - Provides teachers, schools and districts with free resources to help struggling learners and implement Response to Intervention - Kentucky Center for Mathematics - Drawing on the expertise and research of mathematics educators and mathematicians, the Kentucky Center for Mathematics supports diverse teacher and student populations across the Commonwealth by facilitating the development of mathematical proficiency, power for future success, and enjoyment of teaching and learning mathematics. - Kentucky Council of Teachers of Mathematics - Provides support and professional development by grade level. - Mathematics Instruction for Students with Learning Disabilities or Difficulty Learning Mathematics: A Guide for Teachers - This guide from the Center on Instruction describes seven effective instructional practices for teaching mathematics to K–12 students with learning disabilities that were identified in the Center’s synthesis of intervention research, and also incorporates recommendations from "The Final Report of The National Mathematics Advisory Panel". - Mathwire - This series of pages is designed as a resource to teachers as they differentiate instruction for varied learners in the class. Suggested activities include multi-sensory approaches to various mathematical skills and games to help struggling students construct deep meaning for numbers. - School-Wide Strategies for Managing Mathematics - Provides a variety of strategies to implement when students are struggling with common math topics. The website also has links to a variety of other resources related to math interventions. Please contact firstname.lastname@example.org with any questions.
KEY STAGE 2 ARITHMETIC PACK 1: ADDITION **KS2 ARITHMETIC PRACTICE QUESTIONS FOR ALL TYPES OF ADDITION** 1.1 Simple addition: whole numbers with and without carrying 1.2 Adding decimals: line up the decimal and use the columns provided 1.3 Missing number problems: use the inverse law to find the missing numbers 1.4 Mixed Addition Problems: a mixture of all of the above to really see who knows their stuff Dozens of questions tailored towards the different types of addition problems children will encounter in the KS2 SATs arithmetic paper. All answers provided too. They say practice makes perfect. If that's the case then this will give your class all the practice they need to become perfect. Set out in the same style as the KS2 SATs arithmetic paper.
Before discussing Early Phonics, the first question to get out of the way is "What is phonics?"[†] Phonics is an approach to reading and spelling that focuses on the letter-sound matches (also known as grapheme-phoneme correspondences) from which words are constructed. The next important thing to realise about phonics is that it has two strands. As soon as that is recognised, what is involved in learning phonics becomes clear. There are phonic facts: these are the letter-sound matches, also known as grapheme-phoneme correspondences (GPCs). Letter-sound matches are the pieces of the code we use to represent spoken words in writing. The code specifies which letter (and letter-combination) can represent which sound (in time-honoured phrases such as c for cat, etc.). Letter-sound matches are facts, and as facts, they are to be memorised. The other important part of phonics is phonic skills; there's the skill of blending the letter-sound matches in reading, and there's the skill of building them in spelling. As skills, they are to be developed. (The development begins in the earlier work described in Discovering Words - Stage 1 of the Spelling Route: the 'juggling' of sounds of words, and continues into real phonics, as sounds and letter-sound matches are juggled into place in reading and spelling.) And so, given that we're thinking of written words as representing spoken words, via a code, the two terms "decoding" and "encoding" are handy reminders of the tasks that learners carry out when they read (decode) and spell (encode). This section first describes the teaching of phonic facts: knowledge of the code. Then it goes on to describe the development of phonic skills. In the Early Phonics stage, learners consolidate the basic understanding that they have already gained: that words are made up of sounds, which are represented by letters. But now they also need the detail as to which letters represent which sounds. So, importantly, now is the time for them to lay a truly solid foundation of known letter-sound matches. A few matches will already have been learned; to these, this stage adds a further sizeable proportion of the most-used letter-sound matches. It's clear, then, that a significant amount of the learning in Early Phonics is learning phonic facts. But in addition, learners need to develop the necessary phonic skills, in order to grasp what is involved in actual spelling. Note that also during this stage, learners need to memorise a good word-bank of frequently used words. We will look first at the learning of phonic facts, the letter-sound matches. There are two broad (and unequal) steps to learning the whole code: · The single-letter matches · All the rest! (The remaining matches, usually of more than one letter e.g. ch, igh or aigh.) Broadly speaking, Early Phonics works in that order, tackling first the single-letter matches and some digraphs (e.g. ch), moving on then to trigraphs (e.g. igh) and perhaps quadrigraphs (e.g. ough), although quadrigraphs are often tricky creatures representing more than one sound (through, though etc.) and so perhaps sit more easily in Further Phonics. You may find it helpful to return to The Complexities of English Spelling; you're better able to help learners with their letter-sound matches if you have a grasp of the nature of the code. It not only helps you to plan sequences of work, but also helps you to explain to your pupils how the code works.
About letter-sound matches Letter-sound matches are also known as grapheme-phoneme correspondences, or GPCs. There are certain frequently-used digraphs that learners need to know quite early in their phonic progress: sh, ch, th, for instance, and ar, ea, etc. The concept should be introduced quite straightforwardly: "This sound needs two letters that you already know. When they come together, they lose their own sounds and join together to make this new one". Later however, when dealing with a much wider range of letter-sound matches, learners often need practice to sharpen their understanding of the difference between the number of letters in a word, and the number of graphemes (see Matching Letters and Sounds - Activities for Stage 3). Activities for learning a letter-sound match: three steps There are three steps to learning a letter-sound match. 1) The letter-sound match is 'discovered': · Most learners in Early Phonics need to have letter-sound matches pointed out to them. · You can do this by introducing the letter in isolation, separate from any word, and saying its sound. · Or you can do this by finding a suitable word and drawing attention to the letter-sound match in various ways to suit the learners' maturity level. · Of course, a new letter-sound match is more easily memorised when met within an interesting word in an interesting context. 2) Provide lots of opportunities to meet that same letter-sound match in other words: · Collect objects whose names begin with, or contain, the letter-sound match. · Draw pictures. · Note its place on an alphabet frieze. · Incorporate it into I-spy games, odd-one-out games, etc. · List relevant words in the learner's individual vocabulary-book/glossary. 3) Provide a wide range of activities to practise and consolidate that learning: · There are several ideas for off-computer activities. On computer, StarSpell's five modes offer a wealth of opportunity for all three steps described here. This is illustrated with ideas for interactive whiteboard StarSpell sessions. Session 11: Here Comes a New Sound, Session 12: c-grabber at work and Session 13: Count One, Count Two show lessons for the first step (introducing a new letter-sound match). Sessions 5 to 10 and Sessions 14 to 17 suggest various ideas for all three steps. The previous section tells how StarSpell's activities are designed specifically to support the learning of letter-sound matches. But StarSpell also deploys another powerful aid to learning: its list organisation. As The complexities of English spelling describes, the phonic code for English spelling is complex. There is no doubt that this puts a responsibility on teachers to devise a clear route through the system. The clearer the route, the more support for learning. StarSpell has carefully graded, helpfully labelled word-lists in Phonics Lists and StarSpell Lists. Now we're considering Early Phonics, let's look at these in more detail. The two sets of lists together offer a well-signposted map of the phonic terrain, and comprehensive coverage of all the letter-sound matches. But to get the most out of StarSpell, selecting StarSpell features to match your teaching focus, you do need to appreciate that the two routes have different rationales. StarSpell's Phonics Lists are organised according to a frequency-of-use rationale, introducing letter-sound matches in an order from most-used to less-used. Each list is labelled descriptively[‡].
· The purpose of Phase Four is to consolidate knowledge of single letter graphemes, and, for consonants, to practise bringing them together as blends (adjacent consonants). See Phonics Lists: Phase 4: Adjacent consonants. Then we come to Phase Five, whose purpose is three-fold. Phase 5: · Introduces 18 new graphemes · Teaches the alternative sounds that can be represented by some graphemes (for instance, the grapheme ow can represent both the sound in cow and in snow) · Teaches the alternative graphemes that some sounds have (e.g. the sound of sh can also be written as in chef and as in station), letter behaviours sometimes described as 'multiple mapping'. In order to reflect Phase Five's threefold structure, StarSpell provides three separate headings: · Phonics Lists: Phase 5: Introducing more graphemes, deals with those 18 new graphemes. · Phonics Lists: Phase 5: Alternative pronunciations, presents word lists for 22 graphemes that represent sounds additional to the sounds originally learned for them. · Phonics Lists: Phase 5: Alternative spellings, has no fewer than 50 word lists, each demonstrating another way of writing a sound. E.g. in wrap, the wr has the sound /r/. Note that certain spellings do not appear in Phonics Lists, but are to be found in StarSpell Lists; sc, as in science, is one. Others are noted in the next section, StarSpell Lists. The StarSpell Lists The StarSpell Lists for letter-sound matches are to be found in a block, One letter for one sound up to Words with silent letters, with a few further letter-sound matches included in Further explorations. These StarSpell Lists are organised according to a rationale slightly different from that of the Phonics Lists. The basis for designing their progression is letter behaviour, and the names of these headings signal the behaviour they demonstrate: · One letter for one sound · Letters combined for one sound · One letter alters another · Words with silent letters. To add a little explanation to that: · One Letter for One Sound deals with single letter graphemes, including adjacent consonants (sometimes known as consonant blends). · Letters Combined for One Sound deals with digraphs, some trigraphs, and some quadrigraphs. · One Letter Alters Another is the place to explore spellings such as "c followed by an e" (as in cell), and so on. · Words with Silent Letters is self-explanatory. But these four lists pack a further punch. They actually protect the learner, because they build up knowledge of letter-sound matches incrementally. The StarSpell Staircase shows this graphically. Of course, diversions from the sequence are totally under the control of the user; certainly, if you stick to the List sequence, the safety measure of that incremental organisation is there. As noted in the previous section, certain spellings do not appear in Phonics Lists, but are to be found in the StarSpell Lists, for instance sc, as in science. Others include the final e as in house, silent u as in build and guess, gue as in vague, silent h as in hour, rh as in rhyme, ei as in vein, gh as in cough. StarSpell Lists also includes headings such as "Patterns in word endings" (e.g. table, pencil, label, and so on) and "Words ending in vowels" (e.g. banana, tattoo, potato, etc.). Note: The StarSpell Lists do assume that learners coming to One letter for one sound have already learned the single consonant sounds. That heading begins with work on the short vowels as centres of CVC words; see Thinking through letter behaviour.
Phonic skills: an overview

Next comes learning phonic skills: that is, learning to make words out of these "building bricks". As we begin to consider the phonic skills to be learned in Early Phonics, it's worth looking back to Preparing for Phonics - Stage 2 of the Spelling Route for a reminder of the skills learners will already have acquired. Learners will have practised distinguishing each separate sound in a word (segmentation). From that, they will have grown to understand that sounds, in speech, run together to make whole words (re-assembly; see Further notes). This experience will have led them to the idea that letters stand for sounds, and that letters can be written in sequence to stand for spoken words. So a learner who has achieved phonic readiness has, in fact, had some experience of segmentation (breaking words down) and word re-assembly. But something else of immense importance enters in Early Phonics. Now there is the need to bring segmentation and re-assembly together in the act that is "spelling". And spelling is a further cluster of skills …

The basic skills developed in phonic readiness (distinguishing a word's sounds, playing around with a word's sounds) become an even broader cluster of skills in the actual act of spelling. Consider, step by step, the cluster of skills that is spelling. A speller has to:
· "Hear" the word's sounds mentally
· Keep them in mind long enough to call up the image of each sound's letter/s
· Mentally hold these sounds-plus-images (letter-sound matches) in their right order
· Write them, in the proper sequence for the word s/he is spelling.

In technical terms, these are tasks that call upon auditory discrimination (to distinguish each sound), visual memorisation (to recall the letters that match the sounds in print), auditory memorisation (to keep the sounds and their correct order in memory long enough to spell the word) and the physical skills of writing or typing.

Activities to develop spelling skills

We have set out some ideas for off-computer work. On computer, each of StarSpell's five modes consistently provides the experience of spelling. StarSpell's Spelling mode provides the experience of spelling in a variety of ways. For example, you can enforce a delay between seeing the word and spelling it. The StarPick Spelling Game can then complement that with further practice in re-assembling scattered letters to spell a freshly memorised word. The StarGuess Spelling Game goes one step further again: the letters are not provided as a prop. We also illustrate a range of relevant teaching ideas for StarSpell group work using an interactive whiteboard. For instance, Sessions 7, 8 and 9 suggest the introduction to the Spelling mode; Session 18 directly tackles "spelling for real"; Session 17 and Session 23 offer illustrations of StarPick Spelling Game activities.

Throughout this stage, and the other stages too, Look & Learn work marches alongside Listen & Build. This is vital because, alongside their steady acquisition of phonic knowledge, learners need to memorise a good word-bank of frequently used words.

The concept of high frequency words

The term "high frequency word" is self-explanatory: it denotes the words that occur most frequently in written language. Since 1932 at least, various lists have been compiled, typically seeking to identify "The Hundred Most-Used Words", with twelve words, 'a, and, he, is, in, it, of, that, to, was, I, the', always at or near the top of the list.
The DfE (Department for Education) National Literacy Strategy (England) included its own version in 1998, and their Letters & Sounds (2007) programme continued the tradition with a 2003 analysis[§]. Of course it makes absolute sense for learners to learn these words off by heart, for recognition in their reading (known as acquiring sight vocabulary) and for use in their writing. After all, according to one reckoning, those twelve words comprise one quarter of all print; a further hundred have been identified as comprising one half. Not all high frequency words are difficult to decode and encode; some are phonically perfectly simple. The irony, however, is that all too often these most-used words employ rarely-used phonics, earning many of them the description "phonically irregular". Because they contain unusual or as-yet-untaught letter-sound matches, they well deserve the label "tricky words".

How spelling works explained the Look & Learn approach as calling upon spellers to examine the word and "photo" it in their memory. Of course, such a blanket statement needs quite a bit of unpicking: just how, exactly, does one examine a word? What techniques help you to "photo" it in your memory? A technical term by which Look & Learn is known can get us started on this unpicking. Look & Learn is a visuo-motor memorisation activity. That is to say, it is an activity that involves memorising a spelling by looking (that's the visual part) and by hand-movement (that's the motor part). There is a long-standing and well-used routine which embodies this. It's LCWC (short for Look-Cover-Write-Check):
1. Look at the word
2. Cover the word
3. Write it out from memory
4. Check to see if you were right
5. And repeat as necessary!

A straightforward LCWC routine is a perfectly good and serviceable help to many spellers in memorising words. However, it can be expanded to engage a greater number of the learner's senses:
· Visual experience of the word
· Auditory experience of the word
· Articulation of the word
· Tactile experience of the word
· Motor experience of the word

Expanding the basic LCWC routine in this way provides multi-sensory experience, enormously helpful in memorising spellings. In fact, it's probably true to say that, as a full description, "visuo-motor memorisation" actually falls short, because a listening component makes a vital contribution, as does the learner's own speaking of the word out loud. Indeed, the approach is sometimes known as the V.A.K. approach, the visual-auditory-kinaesthetic approach (movement may be referred to by the term "kinaesthetic"). However, to understand fully what is implied in that first, generalised description of Look & Learn as "examining the word and mentally photographing it", we need to explore how each step of Look-Cover-Write-Check is open to expansion; for instance, we need to consider how we can help learners to 'Look', and so on. And there is a wealth of support available throughout the whole process, through the addition of such refinements. Off-computer spelling activities sets out ideas for off-computer activities. Early Experience of Words, Sounds and Letters - Activities for Stages 1 and 2 sets the ball rolling, while Look and Learn - Activities for Stage 3 provides extensive practice, including A basic sequence for learning high frequency words. StarSpell activities and the Spelling Route - finding your place details the support that StarSpell's five modes provide for LCWC work.
Indeed, the Spelling mode was developed specifically to embed Look & Learn techniques alongside attention to phonics. The success of this union is shown in the sample StarSpell-on-interactive-whiteboard sessions in Sample sessions - Pointers for StarSpell in class. In particular, the Look & Learn Session 22: Hot Spot Study-Spot and Session 23: A Tricky Word rely on strategies that are just as applicable to easier words.

Where to find High Frequency Words in StarSpell

StarSpell's Phonics Lists include 100 high frequency words. The words are given in four lists, one for each Phase from Two to Five. Each list helpfully classifies the words as either decodable or "tricky". This matches the categories developed in the English DfE Letters & Sounds (2007). The StarSpell Lists place High Frequency Words in Important 'sight' words. The words are listed according to frequency of use, based on recognised surveys, including those of Burke (1964) and Huxford (1997). Within each frequency-level, the words are grouped thematically. There are also three other thematically organised lists. Finally, there are two sets of lists in Yr2 to KS3 support: '100 most common words' and '200 next most common words'. These are presented in order of frequency of appearance, based on the Children's Printed Word Database by Masterson, Stuart, Dixon & Lovejoy (2003). In each route you can search for any individual word through the "Find a Word" button on the opening screen.

When tackling groups of High Frequency Words, remember that they are best learned in groups having some internal rationale, with the words grouped together either through phonics or through theme. The first two sets of StarSpell's lists do follow such organisations. However, you may wish to create a list that's precisely individualised: a custom list.

The why and the how of custom lists

Anywhere within any of the sections you can create whatever list immediately meets the needs of your group of learners, or of one individual learner. There is a special heading for custom lists in each section (within the Menu item Management/Edit word lists). This facility is invaluable. It's a blank canvas: you enter the words you need, and they then acquire all the StarSpell features and activities. Why should custom lists be necessary? Well, learners, like all of us, live, move and breathe in the real world, and the real world is not circumscribed by learning schemes. Of course it's wise - essential, in fact - to have an organised learning scheme as we lead our learners into the labyrinth of English spelling. StarSpell itself is based on its own sensible learning scheme. But wherever we have learners reading and writing in the multi-faceted real world, real language needs will assert themselves. Learners constantly find themselves needing words that are outside their current learning framework. Furthermore, language is personal, and it is specific. Every learner has "important words" that are not necessarily easily accessed through the headings of any spelling scheme: words for matters that are personal, and for matters that are related to immediate concerns. This Guide opened in How spelling works with the recognition that spelling serves a broader purpose beyond its acquisition for its own sake. Learners want to write, and their writing must be encouraged from the outset, whatever the status of their spelling. This is why StarSpell offers the chance to create your own custom lists, which allow you to focus on any learner's immediate word requirements.
Note: There's an important link here to High Frequency Words. By definition, most writing will contain them. If StarSpell's High Frequency lists don't match your learners' needs closely enough, create yourself a custom list that does. It is worth repeating that High Frequency Words are best learned in groups having some internal rationale: the words grouped together either through phonics or through theme. StarSpell's lists do follow such organisations, but you may wish to create a list that's precisely individualised.

Here are the StarSpell lists which, broadly speaking, apply to Early Phonics. As has been noted, StarSpell approaches learning from a "Stage not Age" perspective. No two learners are alike. Progress differs. These lists are given as guidelines only.

Listen & Build

The level of phonic knowledge described on The Spelling Route as the stage "Early Phonics" is best catered for in the following lists. The lists are shown progressively within the routes StarSpell Lists and Phonics Lists, but you may usefully mix and match across the two routes. For example, you could make choices from Phonics Lists: Phase 2: Introducing simple graphemes for phonemes to match up with choices from StarSpell Lists: One letter for one sound > Short vowels.

Phonics Lists:
· Phase 2: Introducing simple graphemes for phonemes
· Phase 3: The remaining phonemes, with graphemes
· Phase 4: Adjacent consonants
· Phase 5: Introducing more graphemes.

StarSpell Lists:
· One letter for one sound
· Letters combined for one sound
· One letter alters another
· Words with silent letters.

Look & Learn

The lists dedicated to High Frequency Words are hugely important:
· Phonics Lists: 100 high frequency words
· StarSpell Lists: Important 'sight' words
· Yr2 to KS3 Support: 100 and next 200 most common words

In addition, the Listen & Build lists above are still appropriate.
The historical origin of common law can be traced back to medieval England, specifically to the period following the Norman Conquest in 1066. Prior to this event, England had a decentralized legal system, with different regions and communities following their own customary laws. However, after William the Conqueror became the King of England, he sought to establish a unified legal system that would consolidate his power and maintain control over the newly conquered territory. To achieve this, William appointed judges who traveled across the country to administer justice and enforce royal laws. These judges were known as "justices in eyre" and were responsible for hearing cases and applying the law uniformly throughout the kingdom. Over time, a body of legal principles and rules began to emerge from their decisions, forming the basis of what would later become known as common law. The term "common law" itself refers to the law that is common to all people, as opposed to laws that are specific to certain regions or groups. It represented a departure from the localized customary laws and aimed to establish a consistent legal framework that applied to all individuals within the realm. This development was significant because it laid the foundation for a legal system based on precedent and the principle of stare decisis, which means that judges are bound by previous decisions and must follow established legal principles. The evolution of common law was further shaped by the emergence of legal institutions such as the Court of Common Pleas and the Court of King's Bench. These courts played a crucial role in developing and refining legal principles through their decisions. Additionally, legal scholars and jurists began to write treatises and commentaries on the law, further contributing to the growth and codification of common law. Over time, common law expanded beyond England's borders through colonization and trade. Many countries that were once part of the British Empire, including the United States, Canada, Australia, and India, adopted common law as their legal system. However, each jurisdiction has developed its own unique body of common law, influenced by local customs, statutes, and judicial decisions. In summary, the historical origin of common law can be attributed to the efforts of William the Conqueror to establish a unified legal system in medieval England. Through the appointment of judges and the development of legal principles based on their decisions, common law emerged as a system that aimed to provide consistent and universal justice. Its subsequent spread and adaptation in various jurisdictions have contributed to its enduring influence in many legal systems around the world. Common law and civil law systems are two distinct legal frameworks that exist in various countries around the world. While both systems aim to provide a structure for resolving legal disputes, they differ significantly in their origins, principles, sources of law, and methods of interpretation. One fundamental difference between common law and civil law systems lies in their historical development. Common law originated in England and spread to many English-speaking countries, including the United States, Canada, Australia, and India, through colonization and historical ties. On the other hand, civil law systems trace their roots back to ancient Roman law and have been adopted by many European countries, Latin American nations, and parts of Asia and Africa. 
The principles underlying common law and civil law systems also diverge. Common law relies heavily on case law, which refers to legal decisions made by judges in previous cases. These decisions serve as precedents and are binding on lower courts within the same jurisdiction. The doctrine of stare decisis, meaning "to stand by things decided," is a key principle in common law systems. It ensures consistency and predictability in legal outcomes by requiring judges to follow established precedents when deciding similar cases. In contrast, civil law systems are based on codified laws, which are comprehensive legal codes that outline general principles and rules governing various areas of law. These codes are enacted by legislatures and serve as the primary source of law. Judges in civil law systems have a more limited role in interpreting the law and are expected to apply the law as written rather than relying on precedent. This approach allows for greater flexibility in adapting to changing societal needs but may result in less predictability in legal outcomes. Another significant distinction between common law and civil law systems lies in their sources of law. In common law systems, statutes enacted by legislatures are one source of law, but judicial decisions play a crucial role in shaping legal principles and filling gaps in legislation. These judicial decisions become part of the common law and are binding on future cases. Additionally, legal scholars' writings and legal customs also contribute to the development of common law. In civil law systems, legislation is the primary source of law, and judges' role is to interpret and apply these laws. Legal codes provide a comprehensive framework for various legal matters, including civil, criminal, and administrative law. While judicial decisions in civil law systems do not create binding precedents like in common law, they may still have persuasive value in subsequent cases. The methods of interpretation employed in common law and civil law systems also differ. Common law judges engage in a process called "judicial reasoning" or "common law reasoning," where they analyze previous cases and legal principles to arrive at a decision. They often rely on analogical reasoning, distinguishing relevant facts and applying established legal principles to the case at hand. In civil law systems, judges primarily use a method called "systematic interpretation." This approach involves interpreting legislation based on its text, context, and purpose. The goal is to determine the legislator's intent when enacting the law. Civil law judges may also refer to legal commentaries and scholarly writings to aid in their interpretation. In summary, common law and civil law systems differ in their historical development, principles, sources of law, and methods of interpretation. Common law relies on case law and precedents, while civil law is based on codified laws enacted by legislatures. Common law systems emphasize consistency and predictability through stare decisis, while civil law systems prioritize flexibility and adaptability. Understanding these distinctions is crucial for comprehending the legal systems in different jurisdictions around the world. Common law is a legal system that originated in England and has since been adopted by many countries around the world, including the United States. It is based on judicial decisions and precedents rather than legislative statutes. 
The key principles and characteristics of common law can be summarized as follows:

1. Precedent: Common law relies heavily on the principle of precedent, which means that decisions made in previous cases serve as binding authority for future cases with similar facts and legal issues. This principle ensures consistency and predictability in the legal system, as judges are expected to follow established precedents unless there are compelling reasons to depart from them.

2. Stare Decisis: Stare decisis, Latin for "to stand by things decided," is closely related to the principle of precedent. It requires lower courts to follow the decisions of higher courts within the same jurisdiction. This hierarchical structure ensures uniformity in the application of the law and allows for the development of a coherent body of legal principles over time.

3. Case Law: Common law is primarily developed through judicial decisions rather than legislation. Judges play a crucial role in interpreting and applying the law to specific cases, thereby shaping the legal landscape. Unlike civil law systems that rely heavily on codified statutes, common law evolves incrementally through the accumulation of case law.

4. Flexibility: One of the defining characteristics of common law is its flexibility. Unlike statutory law, which can be rigid and difficult to adapt to changing circumstances, common law allows judges to consider the unique facts and circumstances of each case. This flexibility enables the law to evolve and respond to societal changes, technological advancements, and emerging legal issues.

5. Adversarial System: Common law operates under an adversarial system, where two opposing parties present their arguments before an impartial judge or jury. This system encourages vigorous advocacy and ensures that all relevant arguments and evidence are considered before reaching a decision. The judge's role is to act as an impartial arbiter and apply the law to the facts presented by the parties.

6. Equity: Common law incorporates principles of equity, which originated from the English Court of Chancery. Equity provides a mechanism to address situations where strict application of the law would lead to unfair or unjust outcomes. Courts have the power to grant equitable remedies, such as injunctions or specific performance, to prevent harm or provide relief when monetary damages are insufficient.

7. Incremental Development: Common law is characterized by its incremental development. Rather than sudden and sweeping changes, legal principles evolve gradually through the accumulation of judicial decisions. This evolutionary process allows for a more nuanced and context-specific approach to legal issues, as judges can refine and adapt the law based on practical experience and societal needs.

In conclusion, common law is a legal system built on the principles of precedent, stare decisis, case law, flexibility, an adversarial system, equity, and incremental development. These key principles and characteristics ensure consistency, fairness, and adaptability within the common law framework.

Common law is a legal system that has evolved and adapted over time through a process known as judicial precedent. This system relies on the principle of stare decisis, which means that courts are bound to follow the decisions of higher courts in similar cases. Through this process, common law has been able to adapt to changing societal needs and values.
One of the key ways in which common law evolves is through the interpretation and application of existing legal principles to new and emerging situations. As society changes and new issues arise, courts are tasked with applying existing legal principles to these novel circumstances. This process often involves examining previous court decisions and considering how they can be applied to the current situation. By doing so, courts are able to develop new legal rules or modify existing ones to address the specific needs of the case at hand.

Another important aspect of the evolution of common law is the concept of judicial activism. Judicial activism refers to the willingness of judges to interpret and apply the law in a way that reflects their own values and beliefs. This can lead to the development of new legal principles or the reinterpretation of existing ones. For example, in cases involving constitutional rights, judges may interpret these rights in light of changing societal norms and values, leading to a broader or narrower interpretation of those rights.

Additionally, common law can evolve through legislative action. While common law is primarily developed by courts, legislatures can also play a role in shaping the law. Legislatures have the power to enact statutes that can override or modify existing common law principles. These statutes can be enacted in response to societal changes or to address perceived gaps or shortcomings in the common law. When a statute conflicts with existing common law, courts are tasked with interpreting and applying both sources of law to determine the outcome of a case.

Furthermore, common law can also evolve through the influence of international law and legal principles. As countries become more interconnected through trade and globalization, legal systems are increasingly influenced by international norms and standards. Courts may look to decisions from international tribunals or consider international treaties and agreements when interpreting and applying common law principles. This can lead to the development of new legal rules or the modification of existing ones to align with international standards.

In conclusion, common law evolves and adapts over time through a combination of judicial precedent, interpretation and application of existing legal principles, judicial activism, legislative action, and the influence of international law. This dynamic process allows common law to respond to changing societal needs and values, ensuring its continued relevance and effectiveness in addressing legal issues.

Judges play a crucial role in the development of common law. Common law is a legal system primarily based on judicial decisions, rather than legislative statutes or executive actions. In this system, judges have the responsibility of interpreting and applying the law to specific cases brought before them. Through their decisions, judges contribute to the ongoing development and evolution of common law.

One of the key functions of judges in the development of common law is the creation of legal precedents. Precedents are previous court decisions that serve as authoritative guidelines for future cases with similar legal issues. When judges make decisions, they often provide detailed explanations of their reasoning, which becomes part of the legal record. These explanations, known as judgments or opinions, establish legal principles and interpretations that can be relied upon in subsequent cases.
Over time, a body of precedents accumulates, forming the foundation of common law. Judges also have the power to distinguish or overrule existing precedents. When faced with a case that presents unique circumstances or conflicts with established legal principles, judges may choose to distinguish it from previous cases. Distinguishing involves finding relevant differences between the current case and the precedent, allowing the judge to reach a different conclusion. This process allows common law to adapt to changing societal values, technological advancements, and new legal challenges. Furthermore, judges play a role in filling gaps in legislation through common law development. In some instances, statutes enacted by legislatures may not address every possible scenario or provide clear guidance on how to interpret certain provisions. Judges are then tasked with interpreting and applying these statutes in a manner consistent with the underlying principles and objectives of the law. By doing so, judges contribute to the development of common law by establishing new legal principles or refining existing ones. Additionally, judges have the authority to engage in statutory interpretation. When faced with ambiguous or unclear statutory language, judges must determine the intended meaning behind the legislation. Through their interpretation, judges provide clarity and guidance on how the law should be understood and applied. This process helps shape the development of common law by establishing consistent interpretations of statutes that can be relied upon in future cases. It is important to note that judges are not the sole contributors to the development of common law. Legal scholars, practitioners, and lawmakers also play significant roles. However, judges hold a unique position as they are responsible for applying the law in specific cases and their decisions have binding authority. Their interpretations and reasoning become part of the legal landscape, influencing future judicial decisions and contributing to the ongoing development of common law. In conclusion, judges play a central role in the development of common law. Through their decisions, they establish legal precedents, distinguish or overrule existing precedents, fill gaps in legislation, and engage in statutory interpretation. Their contributions shape the evolution of common law, ensuring its adaptability to changing circumstances and societal needs. The significance of precedent in common law cannot be overstated. Precedent, also known as case law or judge-made law, forms the backbone of the common law system and plays a crucial role in shaping legal principles and ensuring consistency in judicial decision-making. It refers to the practice of courts relying on previous decisions or judgments when deciding similar cases. One of the primary reasons for the significance of precedent is its ability to provide predictability and stability in the legal system. By following established precedents, judges are able to apply consistent legal principles to similar cases, ensuring that similar situations are treated similarly. This predictability is essential for individuals and businesses to understand their rights and obligations under the law, as it allows them to make informed decisions and plan their actions accordingly. Precedent also contributes to the development and evolution of the law. As judges interpret and apply existing legal principles to new cases, they contribute to the growth of legal doctrines and refine legal concepts. 
Over time, this iterative process helps shape the law, allowing it to adapt to changing societal needs and values. Precedent thus ensures that the law remains relevant and responsive to contemporary challenges.

Moreover, precedent serves as a mechanism for maintaining consistency and coherence within the legal system. When courts follow established precedents, they create a hierarchy of authority, with higher courts binding lower courts within their jurisdiction. This hierarchical structure ensures that decisions made by higher courts are authoritative and must be followed by lower courts. This principle of stare decisis, or "let the decision stand," promotes uniformity in legal interpretation and minimizes arbitrary decision-making.

Furthermore, precedent fosters fairness and equality before the law. By relying on past decisions, judges avoid making ad hoc or subjective judgments, which could lead to unequal treatment of similar cases. Instead, precedent ensures that like cases are treated alike, promoting fairness and impartiality in the legal system.

The significance of precedent extends beyond individual cases and has broader implications for legal scholarship and legal education. Precedents serve as valuable sources of legal authority, providing guidance to lawyers, judges, and legal scholars when analyzing legal issues. They form the basis for legal arguments and help shape legal theories. Additionally, studying precedents allows law students to understand the development of legal principles and the reasoning behind judicial decisions, enabling them to become proficient legal practitioners.

In conclusion, precedent is of immense significance in common law. It provides predictability, stability, and consistency in the legal system, contributes to the development and evolution of the law, promotes fairness and equality, and serves as a valuable resource for legal scholarship and education. By relying on past decisions, common law courts ensure that legal principles are applied consistently and that the law remains adaptable to changing circumstances.

Common law, as a legal system, has developed various mechanisms to handle conflicts between different legal jurisdictions. These conflicts arise when there is a clash between laws and regulations from different jurisdictions, such as between states within a country or between countries themselves. Common law provides a framework for resolving these conflicts through principles such as choice of law, conflict of laws, and the doctrine of comity.

One of the primary mechanisms used in common law to handle conflicts between legal jurisdictions is the choice of law. This principle allows parties involved in a dispute to select the governing law that will be applied to their case. Parties may include a choice of law provision in their contracts, specifying which jurisdiction's laws will govern any disputes that may arise. The choice of law provision is typically enforceable unless it violates public policy or is contrary to mandatory provisions of the chosen jurisdiction.

In the absence of a choice of law provision, common law employs conflict of laws rules to determine which jurisdiction's laws should apply. Conflict of laws rules aim to identify the most appropriate jurisdiction by considering factors such as the parties' domicile, the place where the contract was formed, and the place where the contract was intended to be performed.
These rules help courts determine which jurisdiction has the most significant relationship to the dispute and should therefore apply its laws.

Additionally, common law recognizes the doctrine of comity, which promotes cooperation and respect between different legal jurisdictions. Comity refers to the recognition and enforcement of foreign judgments and laws by one jurisdiction based on respect for the legal systems of other jurisdictions. Under this doctrine, courts may give deference to decisions made by foreign courts if they are satisfied that the foreign court had jurisdiction and provided due process. Comity allows for the recognition and enforcement of judgments from other jurisdictions, promoting international cooperation and avoiding conflicting outcomes.

Furthermore, common law jurisdictions often engage in judicial dialogue and rely on precedent to handle conflicts between legal jurisdictions. Courts may consider decisions made by other courts in similar cases to guide their own decision-making process. This practice helps ensure consistency and predictability in the application of laws across different jurisdictions.

It is important to note that the specific approach to handling conflicts between legal jurisdictions may vary among common law jurisdictions. While the principles discussed above are generally applicable, each jurisdiction may have its own set of rules and procedures for resolving conflicts.

In conclusion, common law provides a comprehensive framework for handling conflicts between different legal jurisdictions. Through mechanisms such as choice of law, conflict of laws, the doctrine of comity, and reliance on precedent, common law aims to ensure fairness, consistency, and predictability in resolving disputes that involve multiple legal jurisdictions.

Advantages of a Common Law Legal System:

1. Flexibility and Adaptability: One of the key advantages of a common law legal system is its flexibility and adaptability to changing circumstances. Common law is based on judicial decisions and precedents, allowing judges to interpret and apply the law in a manner that reflects the evolving needs of society. This flexibility enables the legal system to respond to new situations and address emerging issues, ensuring that the law remains relevant and effective.

2. Case-by-Case Development: Common law is developed through the accumulation of judicial decisions over time. This case-by-case development allows for a nuanced understanding of legal principles and their application in specific contexts. Judges have the authority to interpret statutes and fill gaps in legislation, providing a more comprehensive and detailed body of law. This approach ensures that legal principles are refined and clarified through practical application, leading to a more robust and sophisticated legal system.

3. Protection of Individual Rights: Common law places a strong emphasis on protecting individual rights and liberties. Through the development of precedents, common law ensures that similar cases are treated consistently, promoting fairness and equality before the law. The principle of stare decisis, which requires lower courts to follow higher court decisions, helps maintain legal certainty and predictability, reducing the risk of arbitrary or discriminatory judgments.

4. Evolutionary Nature: Common law evolves gradually over time, reflecting societal changes, values, and norms. This evolutionary nature allows for the incorporation of new ideas and perspectives into legal principles.
As societal attitudes shift, common law can adapt to ensure that legal outcomes align with contemporary expectations. This adaptability helps maintain public confidence in the legal system and promotes its legitimacy.

Disadvantages of a Common Law Legal System:

1. Complexity and Uncertainty: The reliance on precedents and case-by-case development can lead to a complex and intricate legal system. The sheer volume of case law can make it challenging for legal professionals and individuals to navigate and understand the law fully. Additionally, the interpretation and application of precedents may vary, leading to uncertainty and inconsistency in legal outcomes. This complexity and uncertainty can increase legal costs, prolong litigation, and create difficulties in predicting the outcome of legal disputes.

2. Slow Pace of Change: While the evolutionary nature of common law is often seen as an advantage, it can also be a disadvantage in certain situations. The gradual development of legal principles through judicial decisions means that changes in the law may occur slowly. This can be problematic when urgent societal issues require immediate legal responses. Legislative reforms may be necessary to address such situations promptly, but the common law's reliance on judicial decisions can sometimes impede swift changes.

3. Limited Codification: Common law relies heavily on judge-made law rather than statutory law. While this allows for flexibility and adaptability, it can also result in a lack of codification. The absence of comprehensive legislation on certain matters can make it difficult for individuals to understand their legal rights and obligations. Moreover, the absence of codification may lead to inconsistencies and gaps in the law, requiring individuals to rely on judicial interpretation and precedents.

4. Judicial Discretion: Common law grants judges significant discretion in interpreting and applying the law. While this discretion allows for tailored decision-making based on specific circumstances, it can also lead to subjective judgments. Different judges may interpret legal principles differently, potentially resulting in inconsistent outcomes. Excessive judicial discretion can also raise concerns about accountability and transparency, as it may be challenging to challenge or review a judge's decision based on their interpretation of the law.

In conclusion, a common law legal system offers advantages such as flexibility, case-by-case development, protection of individual rights, and evolutionary nature. However, it also presents disadvantages including complexity and uncertainty, slow pace of change, limited codification, and judicial discretion. Understanding these advantages and disadvantages is crucial for comprehending the functioning and implications of a common law legal system.

Common law is a legal system that originated in England and has greatly influenced the legal systems of countries around the world. Its impact can be seen in various aspects, including the development of legal principles, the role of judges, and the flexibility of the law.

One of the primary ways in which common law influences legal systems globally is through the development of legal principles. Common law is based on the principle of stare decisis, which means that courts are bound by previous decisions and must follow established legal precedents. This principle ensures consistency and predictability in the law, as similar cases are decided in a similar manner.
As a result, common law jurisdictions tend to have a well-developed body of case law that provides guidance for future legal disputes. Furthermore, common law allows for the evolution and adaptation of legal principles over time. Unlike civil law systems that rely heavily on codified statutes, common law relies on judicial decisions to shape and interpret the law. Judges play a crucial role in common law systems by interpreting statutes, filling gaps in legislation, and developing new legal principles through their decisions. This judicial activism allows the law to adapt to changing societal values and circumstances, ensuring its relevance and effectiveness. Another significant influence of common law on legal systems worldwide is its emphasis on precedent. Precedent plays a vital role in common law jurisdictions as it provides a basis for decision-making and promotes consistency in legal outcomes. Courts are bound by previous decisions made by higher courts within the same jurisdiction. This hierarchical structure ensures that lower courts follow the legal principles established by higher courts, creating a coherent and interconnected legal system. Moreover, common law's flexibility allows for the consideration of equity and fairness in legal decision-making. Common law recognizes that not all cases can be addressed by rigid rules or statutes, and therefore grants judges the discretion to consider individual circumstances and apply equitable principles when necessary. This flexibility enables common law systems to address unique or novel situations that may not have been anticipated by legislation, promoting justice and fairness in the legal process. The influence of common law extends beyond its original jurisdiction in England. Many countries, particularly those that were former British colonies, have adopted common law as their legal system. These countries include the United States, Canada, Australia, India, and various countries in Africa and the Caribbean. In these jurisdictions, common law has been integrated into the legal framework and has shaped their legal systems, often alongside other legal traditions. In conclusion, common law has a profound influence on the legal systems of countries around the world. Its impact can be observed in the development of legal principles, the role of judges, the reliance on precedent, and the flexibility of the law. By providing consistency, adaptability, and fairness, common law has become a cornerstone of legal systems globally, ensuring the rule of law and promoting justice. Common law and statutory law are two distinct legal systems that coexist within many jurisdictions, including the United States and the United Kingdom. While they serve different purposes and have different origins, they are interconnected and influence each other in various ways. Common law refers to the body of law that is derived from judicial decisions and precedents established by courts over time. It is a system of law that has evolved through the application of legal principles and reasoning to specific cases. Common law is primarily based on the principle of stare decisis, which means that courts are bound to follow the decisions of higher courts in similar cases. This principle ensures consistency and predictability in the legal system. Statutory law, on the other hand, refers to laws that are enacted by legislative bodies such as parliaments or congresses. These laws are written and codified, providing a clear set of rules and regulations that govern society. 
Statutory law is created through a democratic process and reflects the will of the legislature. It covers a wide range of areas, including criminal law, contract law, property law, and many others. The relationship between common law and statutory law is complex and multifaceted. While statutory law is considered superior to common law in terms of hierarchy, common law plays a crucial role in interpreting and applying statutory law. Common law fills in the gaps left by statutory law, providing guidance on how to interpret and apply the statutes in specific cases. In many instances, statutory law may be broad or ambiguous, leaving room for interpretation. When faced with such situations, courts rely on common law principles and precedents to guide their decision-making process. Common law acts as a source of legal principles and doctrines that help courts interpret statutes and determine their intended scope and application. Furthermore, common law can also influence statutory law through judicial activism or interpretation. Courts may interpret statutes in a way that aligns with common law principles or societal values, effectively shaping the development of statutory law. This process allows the law to adapt and evolve over time to meet the changing needs of society. It is important to note that the relationship between common law and statutory law can vary depending on the jurisdiction. In some countries, such as the United States, common law plays a more significant role due to the principle of judicial review, which allows courts to declare statutes unconstitutional. In contrast, in civil law jurisdictions, statutory law is often more dominant, and courts have limited power to deviate from the explicit provisions of statutes. In conclusion, common law and statutory law are interconnected legal systems that complement each other. While statutory law provides a clear set of rules and regulations, common law fills in the gaps and guides the interpretation and application of statutes. The relationship between these two legal systems is dynamic and influenced by societal values, judicial decisions, and legislative actions. Common law, as a legal system based on judicial precedent and case law, has evolved over centuries to address emerging legal issues and technological advancements. The flexibility inherent in common law allows it to adapt to changing circumstances and provide guidance in areas where statutory law may be lacking or insufficient. When it comes to emerging legal issues and technological advancements, common law plays a crucial role in shaping legal principles and providing clarity in an ever-evolving landscape. One way common law addresses emerging legal issues is through the process of judicial decision-making. As new legal issues arise, courts have the authority to interpret existing laws and apply them to the specific circumstances of the case at hand. This process allows common law to develop and evolve in response to societal changes and technological advancements. Judges consider the facts of each case, analyze relevant legal principles, and provide reasoned judgments that become part of the body of common law. These judgments serve as precedents for future cases, providing guidance for judges and lawyers when faced with similar legal issues. Technological advancements present unique challenges for the legal system, as they often outpace the development of statutory law. 
Common law fills this gap by providing a framework for addressing legal issues arising from technological advancements. For example, in cases involving intellectual property rights in the digital age, common law principles such as copyright law are applied to new forms of expression and innovation. Courts have developed doctrines like fair use and transformative use to adapt copyright law to the challenges posed by digital technologies.

Moreover, common law also addresses emerging legal issues through the principle of equity. Equity refers to a set of legal principles that supplement common law when strict application would lead to unfair or unjust outcomes. In cases where existing laws do not adequately address emerging legal issues, courts can invoke equitable principles to provide remedies or relief. This flexibility allows common law to respond to novel situations and ensure fairness in the face of technological advancements.

Additionally, common law recognizes the importance of legal precedent in guiding future decisions. Courts often consider previous judgments and rulings when deciding cases involving emerging legal issues and technological advancements. This reliance on precedent helps ensure consistency and predictability in the legal system, as well as the development of coherent legal principles over time.

In recent years, common law has faced numerous challenges due to the rapid pace of technological advancements. Issues such as data privacy, cybersecurity, artificial intelligence, and blockchain technology have presented novel legal questions that require careful consideration. Common law has responded by engaging in a process of interpretation and adaptation, drawing on existing legal principles and precedents to address these emerging issues. Courts have grappled with questions of liability, jurisdiction, and the application of traditional legal doctrines to new technologies.

In conclusion, common law addresses emerging legal issues and technological advancements through the process of judicial decision-making, the application of equitable principles, reliance on legal precedent, and the interpretation and adaptation of existing laws. Its flexibility and ability to evolve make it a vital tool in navigating the complex legal landscape created by technological advancements. As society continues to advance technologically, common law will play a crucial role in shaping legal principles and providing guidance in this ever-changing domain.

Legal reasoning and interpretation play a crucial role in the common law system, serving as the foundation for the development and application of legal principles. Common law, which originated in England and has been adopted by many countries around the world, relies heavily on judicial decisions and precedents to shape and evolve the law. In this system, judges are tasked with interpreting statutes, regulations, and prior court decisions to resolve disputes and provide guidance for future cases. The importance of legal reasoning and interpretation in common law can be understood through several key aspects.

Firstly, legal reasoning is essential for the consistent and predictable application of the law. Common law is based on the principle of stare decisis, which means that courts are bound by previous decisions and must follow established legal precedents. Through legal reasoning, judges analyze the facts and legal issues of a case, consider relevant statutes and regulations, and apply existing precedents to reach a decision.
This process ensures that similar cases are treated similarly, promoting fairness, predictability, and stability in the legal system. Secondly, legal reasoning and interpretation allow for the adaptation and evolution of the law to changing societal needs and values. As common law is not codified in a single comprehensive statute, judges have the authority to interpret and develop the law incrementally through their decisions. By engaging in legal reasoning, judges can consider the social, economic, and cultural context in which a case arises, allowing them to shape legal principles that are responsive to contemporary circumstances. This flexibility enables common law to remain relevant and adaptable over time. Furthermore, legal reasoning and interpretation contribute to the development of legal principles that are grounded in reason and logic. Judges engage in a process of analyzing legal arguments, evaluating evidence, and applying legal principles to reach a reasoned decision. This process ensures that legal outcomes are based on sound logic and rationality, enhancing the legitimacy of the judicial system. Legal reasoning also promotes consistency in decision-making by requiring judges to provide clear and coherent justifications for their rulings. Additionally, legal reasoning and interpretation foster the development of a rich body of case law. Common law relies heavily on the accumulation of judicial decisions, which collectively form a comprehensive and nuanced legal framework. Through legal reasoning, judges provide detailed explanations of their decisions, often accompanied by extensive legal analysis. These written judgments serve as valuable resources for future cases, guiding judges and legal practitioners in their understanding and application of the law. The accumulation of case law also allows for the refinement and clarification of legal principles over time. In conclusion, legal reasoning and interpretation are of paramount importance in the common law system. They ensure consistency, predictability, and fairness in the application of the law, facilitate the adaptation of legal principles to changing societal needs, promote rational decision-making, and contribute to the development of a comprehensive body of case law. By engaging in rigorous legal reasoning, judges uphold the integrity and effectiveness of the common law system, ultimately serving the interests of justice and the rule of law. Common law is a legal system that relies on judicial decisions and precedents established through court cases, rather than relying solely on statutes or written laws. It is a system that has evolved over centuries and is based on the principle of fairness and justice. Common law ensures fairness and justice in legal disputes through several key mechanisms. Firstly, common law promotes consistency and predictability in legal outcomes. The principle of stare decisis, which means "to stand by things decided," is a fundamental aspect of common law. Under this principle, courts are bound to follow the precedents set by higher courts in similar cases. This ensures that similar cases are treated similarly, providing certainty and predictability to individuals involved in legal disputes. By relying on past decisions, common law helps to create a stable and consistent legal framework that promotes fairness and justice. Secondly, common law allows for flexibility and adaptability. 
Unlike civil law systems that rely heavily on codified laws, common law allows judges to interpret and apply the law based on the specific facts and circumstances of each case. This flexibility enables judges to consider the unique aspects of each dispute and make decisions that are fair and just. Common law judges have the authority to fill gaps in the law, develop new legal principles, and adapt to changing societal norms. This ability to evolve and adapt ensures that common law remains relevant and responsive to the needs of society, promoting fairness and justice. Furthermore, common law ensures fairness and justice by providing an opportunity for parties to present their case before an impartial judge or jury. In common law jurisdictions, legal disputes are typically resolved through an adversarial process where each party presents their arguments and evidence before an impartial decision-maker. This allows for a fair and balanced consideration of the facts and legal arguments presented by both sides. The judge or jury then applies the law to the facts of the case to reach a decision. This process ensures that all parties have an equal opportunity to present their case and have it decided by an impartial decision-maker, promoting fairness and justice. Additionally, common law promotes fairness and justice by allowing for the development of legal principles that reflect societal values and norms. As judges interpret and apply the law in individual cases, they contribute to the development of legal principles that reflect the changing needs and expectations of society. This ensures that the law remains relevant and responsive to societal changes, promoting fairness and justice. For example, common law has played a crucial role in recognizing and protecting individual rights and liberties, such as freedom of speech, privacy, and equality. By adapting to societal values, common law helps to ensure that legal disputes are resolved in a manner that is fair and just. In conclusion, common law ensures fairness and justice in legal disputes through its emphasis on consistency, flexibility, impartiality, and responsiveness to societal values. By relying on precedents, allowing for judicial interpretation, providing an adversarial process, and reflecting societal norms, common law creates a framework that promotes fairness and justice in resolving legal disputes.

Some notable landmark cases in the development of common law have played a crucial role in shaping the legal system and establishing important principles that continue to influence legal decisions today. These cases have set precedents and provided guidance for future legal disputes, contributing to the evolution and refinement of common law. One such landmark case is Donoghue v Stevenson (1932), which established the modern concept of negligence in tort law. In this case, Mrs. Donoghue consumed a bottle of ginger beer that contained a decomposed snail, causing her to fall ill. The House of Lords held that the manufacturer owed a duty of care to consumers, even if there was no contractual relationship between them. This ruling introduced the "neighbour principle," which states that individuals must take reasonable care to avoid acts or omissions that could reasonably be foreseen as likely to injure their neighbours. This case expanded the scope of liability and set a precedent for negligence claims.
Another significant case is Carlill v Carbolic Smoke Ball Company (1893), which established the principles of unilateral contracts and the efficacy of advertisements as offers. The Carbolic Smoke Ball Company advertised that they would pay £100 to anyone who contracted influenza after using their product as directed. Mrs. Carlill used the smoke ball but still fell ill, and she sued the company for the promised reward. The court held that the advertisement constituted an offer, and by using the product as directed, Mrs. Carlill had accepted the offer, creating a binding contract. This case clarified the legal status of advertisements as offers and reinforced the concept of unilateral contracts. Mabo v Queensland (No 2) (1992) is a landmark case in Australian common law that recognized the existence of native title rights for Indigenous Australians. Eddie Mabo and other Torres Strait Islanders claimed ownership of land on Murray Island based on their traditional laws and customs. The High Court of Australia ruled that the doctrine of terra nullius, which considered Australia as unoccupied before British colonization, was invalid. The court recognized the existence of native title and held that Indigenous Australians have rights to their traditional lands, subject to certain limitations. This case marked a significant shift in Australian law, acknowledging the rights and interests of Indigenous peoples and influencing subsequent legislation. Brown v Board of Education (1954) is a landmark case in the United States that addressed racial segregation in public schools. The Supreme Court held that state laws establishing separate public schools for black and white students were unconstitutional, as they violated the Equal Protection Clause of the Fourteenth Amendment. This decision overturned the "separate but equal" doctrine established in Plessy v Ferguson (1896) and paved the way for desegregation in public schools. Brown v Board of Education played a pivotal role in the civil rights movement and highlighted the judiciary's role in safeguarding individual rights and promoting equality. These landmark cases, among many others, have significantly influenced the development of common law by establishing legal principles, expanding liability, clarifying contractual obligations, recognizing indigenous rights, and promoting equality. They serve as important milestones in legal history and continue to shape the legal landscape by providing guidance for future legal disputes. Common law plays a significant role in shaping contract law and business transactions. It provides a framework of legal principles and precedents that guide the interpretation and enforcement of contracts, ensuring fairness, predictability, and stability in commercial dealings. By examining the historical development and key principles of common law, we can gain a deeper understanding of its impact on contract law and business transactions. One of the fundamental aspects of common law is its reliance on judicial decisions and precedents. Common law evolves through the accumulation of court rulings over time, creating a body of legal principles that are applied to similar cases in the future. This principle of stare decisis, or "let the decision stand," ensures consistency and predictability in contract law. When courts interpret and apply contract terms, they often refer to previous cases with similar facts or legal issues. 
This reliance on precedent helps establish a consistent approach to contract interpretation and promotes fairness and certainty in business transactions. Common law also recognizes the importance of freedom of contract, allowing parties to negotiate and agree upon their own terms. This principle allows businesses to tailor their agreements to their specific needs and circumstances. However, common law imposes certain limitations on freedom of contract to protect parties from unfair or unconscionable terms. Courts may refuse to enforce contracts that are deemed illegal, against public policy, or involve fraud, duress, or undue influence. This balancing act between freedom of contract and protecting parties from unfairness ensures that business transactions are conducted within ethical and legal boundaries. Moreover, common law provides default rules and gap-filling principles when contracts are silent or incomplete. In many jurisdictions, common law implies certain terms into contracts to fill gaps or address situations not explicitly covered by the parties' agreement. For example, common law may imply a duty of good faith and fair dealing in every contract, requiring parties to act honestly and reasonably in their performance and enforcement of the agreement. These default rules help ensure that contracts are enforceable and provide a level of protection to parties who may not have explicitly addressed certain contingencies or obligations. Another significant impact of common law on contract law and business transactions is the development of various legal remedies for breach of contract. Common law recognizes the right of injured parties to seek damages, specific performance, or other equitable remedies when a contract is breached. Damages aim to compensate the non-breaching party for the losses suffered as a result of the breach, while specific performance may be ordered by the court to compel the breaching party to fulfill their contractual obligations. These remedies provide parties with legal recourse and incentivize compliance with contractual obligations, thereby promoting trust and reliability in business transactions. Furthermore, common law principles such as privity of contract and consideration have shaped the enforceability of contracts. Privity of contract refers to the principle that only parties to a contract can enforce its terms or be held liable for its breach. This principle ensures that contractual rights and obligations are limited to the parties involved, preventing third parties from interfering in contractual relationships. Consideration, on the other hand, requires that each party provides something of value in exchange for the promises made in the contract. This principle ensures that contracts are supported by mutual exchange and prevents gratuitous promises from being enforced. In conclusion, common law has a profound impact on contract law and business transactions. Its reliance on judicial decisions and precedents, protection of freedom of contract within ethical boundaries, default rules and gap-filling principles, development of legal remedies for breach of contract, and establishment of enforceability requirements have all contributed to the development of a robust and predictable legal framework for commercial dealings. Understanding the influence of common law is essential for businesses and individuals engaging in contractual relationships, as it provides a foundation for interpreting, enforcing, and negotiating contracts in a fair and consistent manner.
Under common law, fundamental rights and liberties are protected through a combination of statutory law, judicial decisions, and legal principles that have evolved over centuries. Common law is a legal system that originated in England and spread to various countries, including the United States, Canada, Australia, and India. It is characterized by its reliance on precedent and the principle of stare decisis, which means that courts are bound by previous decisions. The fundamental rights and liberties protected under common law can be broadly categorized into civil liberties, political rights, and property rights. These rights are considered essential for the functioning of a democratic society and are aimed at safeguarding individual freedoms and promoting justice. Civil liberties encompass a range of rights that protect individuals from arbitrary government actions. These include the right to life, liberty, and security of person; freedom of thought, conscience, religion, speech, and expression; freedom of assembly and association; and the right to privacy. These rights are crucial for ensuring personal autonomy, protecting individual beliefs and opinions, and fostering a diverse and inclusive society. Political rights are another important aspect of common law protections. These rights ensure that individuals have the ability to participate in the political process and hold their governments accountable. They include the right to vote, the right to run for public office, the right to freedom of political expression, and the right to access information. Political rights are fundamental for upholding democratic principles and ensuring that citizens have a say in the governance of their countries. Property rights are also protected under common law. These rights include the right to acquire, use, and dispose of property, as well as the right to be free from unlawful seizure or deprivation of property. Property rights are essential for promoting economic stability, incentivizing investment and innovation, and providing individuals with a sense of security in their possessions. It is important to note that common law recognizes that these fundamental rights and liberties are not absolute. They may be subject to reasonable limitations in certain circumstances, such as when necessary to protect public safety, national security, or the rights of others. The balancing of individual rights with societal interests is a complex task that courts undertake when interpreting and applying common law principles. In conclusion, common law provides a robust framework for protecting fundamental rights and liberties. Through its reliance on precedent and the principle of stare decisis, common law ensures that legal decisions are consistent and predictable. The fundamental rights and liberties protected under common law encompass civil liberties, political rights, and property rights, all of which are crucial for upholding democratic values and promoting justice in society. Common law is a legal system that relies on judicial decisions and precedents established by courts rather than statutory laws or codes. When it comes to criminal offenses, common law plays a significant role in handling and determining guilt or innocence. In this context, common law relies on several key principles and procedures to ensure a fair and just process. One of the fundamental principles of common law is the presumption of innocence. 
This principle holds that an individual accused of a crime is presumed innocent until proven guilty beyond a reasonable doubt. This presumption places the burden of proof on the prosecution, requiring them to present sufficient evidence to convince a judge or jury of the defendant's guilt. The accused is not required to prove their innocence; instead, they are entitled to a fair trial where the prosecution must establish their guilt. To establish guilt or innocence under common law, criminal offenses are typically divided into two categories: felonies and misdemeanors. Felonies are serious crimes that carry severe penalties, while misdemeanors are less serious offenses. The procedures for handling these offenses may differ slightly, but the underlying principles remain the same. In common law jurisdictions, criminal trials involve several stages. The first stage is the arrest and charging of the accused. Law enforcement agencies investigate the alleged crime and gather evidence to support the charges. Once sufficient evidence is obtained, the accused is formally charged, and the case proceeds to trial. During the trial, both the prosecution and defense present their cases before a judge or jury. The prosecution's role is to prove the defendant's guilt beyond a reasonable doubt by presenting evidence, witnesses, and legal arguments. The defense, on the other hand, aims to challenge the prosecution's case and establish reasonable doubt regarding the defendant's guilt. In common law systems, the judge or jury acts as a neutral arbiter, responsible for evaluating the evidence and determining guilt or innocence. The judge provides legal guidance throughout the trial, ensuring that both parties adhere to the rules of evidence and procedure. In some cases, the judge may render a verdict, while in others, a jury of peers decides the outcome. To establish guilt, common law requires a high standard of proof—beyond a reasonable doubt. This standard ensures that the accused is not convicted based on mere suspicion or weak evidence. It demands that the prosecution presents evidence that is strong, credible, and convincing enough to eliminate any reasonable doubt about the defendant's guilt. If the prosecution fails to meet this burden of proof, the accused is entitled to an acquittal, and they are considered innocent under the law. However, if the prosecution successfully proves guilt beyond a reasonable doubt, the accused may be convicted and face appropriate penalties, such as imprisonment, fines, or probation. It is important to note that common law also recognizes various defenses that can be raised by the accused. These defenses include self-defense, duress, insanity, mistake of fact, and others. The availability and applicability of these defenses may vary depending on the jurisdiction and specific circumstances of the case. In conclusion, common law provides a comprehensive framework for handling criminal offenses and establishing guilt or innocence. It upholds the presumption of innocence, places the burden of proof on the prosecution, and requires a high standard of proof—beyond a reasonable doubt—to convict an individual. By relying on judicial decisions and precedents, common law ensures fairness and justice in criminal proceedings. Legal professionals, such as barristers and solicitors, play a crucial role in the functioning of a common law system. Common law is a legal system that relies on judicial decisions and precedent rather than codified laws. 
In this system, legal professionals act as intermediaries between individuals and the courts, providing legal advice, representation, and ensuring the fair administration of justice. One of the primary roles of barristers and solicitors in a common law system is to provide legal advice and guidance to individuals and organizations. They possess in-depth knowledge of the law and its application, enabling them to interpret complex legal principles and provide tailored advice to their clients. Legal professionals assist clients in understanding their rights and obligations, as well as the potential legal consequences of their actions. This guidance is essential for individuals to make informed decisions and navigate the complexities of the legal system. Another crucial role of legal professionals in a common law system is representation. Barristers and solicitors represent their clients in various legal proceedings, including negotiations, mediations, arbitrations, and court hearings. They act as advocates for their clients, presenting their case and arguments before the court or other relevant authorities. Legal professionals are skilled in analyzing evidence, researching legal precedents, and constructing persuasive arguments to support their clients' positions. Their expertise ensures that clients have a fair opportunity to present their case and defend their rights. Legal professionals also play a vital role in maintaining the integrity of the common law system. They are responsible for upholding ethical standards and ensuring that justice is administered fairly. Barristers and solicitors are bound by professional codes of conduct that require them to act in the best interests of their clients while upholding the principles of justice and fairness. They must maintain confidentiality, avoid conflicts of interest, and diligently represent their clients within the bounds of the law. By adhering to these ethical standards, legal professionals contribute to the overall trust and credibility of the legal system. Furthermore, legal professionals assist in the development and evolution of the common law system. Through their involvement in legal proceedings, they contribute to the creation of legal precedents. Precedents are decisions made by judges in previous cases that serve as authoritative interpretations of the law. Legal professionals analyze and interpret these precedents, applying them to current cases and helping shape the future direction of the law. Their expertise and experience in navigating the complexities of the legal system make them valuable contributors to the ongoing development of legal principles and doctrines. In summary, legal professionals, including barristers and solicitors, play a multifaceted role in a common law system. They provide legal advice and guidance, represent clients in legal proceedings, uphold ethical standards, and contribute to the development of legal principles. Their expertise and dedication are essential for ensuring access to justice, maintaining the integrity of the legal system, and upholding the rule of law in a common law jurisdiction.

Common law, administrative law, and regulatory frameworks are interconnected aspects of the legal system that work together to govern various aspects of society. Common law refers to the body of law derived from judicial decisions and precedents, while administrative law deals with the rules and regulations established by administrative agencies.
Regulatory frameworks, on the other hand, encompass the laws and regulations that govern specific industries or sectors. The interaction between common law, administrative law, and regulatory frameworks is complex and multifaceted. While common law provides a foundation for legal principles and precedents, administrative law and regulatory frameworks play a crucial role in shaping and implementing specific rules and regulations within a given jurisdiction. One way in which common law interacts with administrative law is through the process of judicial review. Administrative agencies, such as regulatory bodies or government departments, are responsible for implementing and enforcing regulations within their respective areas of authority. However, their actions are subject to review by the courts to ensure they are consistent with common law principles and constitutional rights. In this context, common law acts as a check on administrative agencies, ensuring that their decisions and actions are lawful and fair. Courts may review administrative decisions to determine if they are within the scope of the agency's authority, if they comply with procedural requirements, or if they are reasonable based on the evidence presented. If a court finds that an administrative decision is unlawful or unreasonable, it may be overturned or modified. Furthermore, common law principles can also influence the interpretation and application of administrative law. Courts often rely on common law doctrines, such as natural justice or due process, when reviewing administrative decisions. These doctrines ensure that individuals affected by administrative actions are given a fair opportunity to be heard, have their rights protected, and receive reasons for decisions that affect them. On the other hand, administrative law and regulatory frameworks can also shape common law principles. Administrative agencies have the power to create regulations that fill gaps in the common law or address specific issues that arise in a particular industry or sector. These regulations become part of the legal framework and are considered binding unless they are successfully challenged in court. Regulatory frameworks, which are often established by legislation, provide a comprehensive set of rules and regulations for specific industries or sectors. These frameworks may include licensing requirements, standards, and procedures that govern the conduct of individuals or businesses operating within the regulated area. They provide a more detailed and specific set of rules than what may be available through common law principles alone. In summary, common law, administrative law, and regulatory frameworks are interconnected elements of the legal system. Common law provides a foundation for legal principles and precedents, while administrative law and regulatory frameworks shape and implement specific rules and regulations. The interaction between these aspects ensures that administrative actions are lawful, fair, and consistent with common law principles, while also allowing for the development of specific regulations to address industry-specific needs. Common law and equity are two distinct legal systems that originated in England and have had a significant impact on the development of legal systems around the world. While they share a common historical background, there are key differences between the two. One of the fundamental differences between common law and equity lies in their origins and historical development. 
Common law emerged from the decisions made by judges in English courts over centuries, based on the principle of stare decisis, which means that judges are bound to follow precedents set by higher courts. Equity, on the other hand, developed as a response to the limitations of common law, aiming to provide fairness and justice in cases where the strict application of common law rules would lead to unjust outcomes. Another key difference between common law and equity is the type of remedies they offer. Common law primarily provides monetary damages as a remedy for a legal wrong or breach of duty. It focuses on compensating the injured party for their losses. Equity, on the other hand, offers a broader range of remedies, including injunctions, specific performance, and restitution. These equitable remedies aim to prevent harm, enforce obligations, or restore parties to their rightful positions. The principles applied in common law and equity also differ. Common law relies heavily on legal precedent and the interpretation of statutes. It follows a more rigid and formalistic approach, emphasizing the application of established rules and doctrines. Equity, on the other hand, is guided by principles of fairness, justice, and conscience. It allows judges greater discretion to consider individual circumstances and tailor remedies accordingly. In terms of procedure, common law and equity also have distinct characteristics. Common law proceedings are adversarial in nature, with parties presenting their cases before a judge or jury. The burden of proof lies with the party making the claim, and the standard of proof is usually "beyond a reasonable doubt" in criminal cases and "preponderance of the evidence" in civil cases. Equity proceedings, on the other hand, are more flexible and less formalistic. Judges have greater discretion to consider evidence and fashion appropriate remedies. Furthermore, common law and equity have different courts that historically dealt with their respective matters. Common law cases were heard in the courts of law, while equity cases were heard in the courts of equity. Although these separate court systems have been merged in many jurisdictions, the principles and remedies associated with common law and equity continue to coexist and influence legal decision-making. In summary, common law and equity are two distinct legal systems that have evolved side by side. While common law focuses on the application of established rules and precedents, equity seeks to provide fairness and justice in situations where strict adherence to common law principles may lead to unjust outcomes. The remedies, principles, procedures, and historical development of these two systems set them apart, yet they continue to shape modern legal systems around the world.
Summary of Key Concepts

An equation that is true for all acceptable values of the variable is called an identity. \(x+3=x+3\) is an identity. Contradictions are equations that are never true regardless of the value substituted for the variable. \(x+1=x\) is a contradiction. An equation whose truth is conditional upon the value selected for the variable is called a conditional equation.

Solutions and Solving an Equation

The collection of values that make an equation true are called the solutions of the equation. An equation is said to be solved when all its solutions have been found. Equations that have precisely the same collection of solutions are called equivalent equations. An equivalent equation can be obtained from a particular equation by applying the same binary operation to both sides of the equation, that is,
- adding or subtracting the same number to or from both sides of that particular equation.
- multiplying or dividing both sides of that particular equation by the same non-zero number.
A literal equation is an equation that is composed of more than one variable.

Recognizing an Identity

If, when solving an equation, all the variables are eliminated and a true statement results, the equation is an identity.

Recognizing a Contradiction

If, when solving an equation, all the variables are eliminated and a false statement results, the equation is a contradiction.

Translating from Verbal to Mathematical Expressions

When solving word problems it is absolutely necessary to know how certain words translate into mathematical symbols.

Five-Step Method for Solving Word Problems
- Let \(x\) (or some other letter) represent the unknown quantity.
- Translate the words to mathematics and form an equation. A diagram may be helpful.
- Solve the equation.
- Check the solution by substituting the result into the original statement of the problem.
- Write a conclusion.

A linear inequality is a mathematical statement that one linear expression is greater than or less than another linear expression.
- \(>\) Strictly greater than
- \(<\) Strictly less than
- \(\ge\) Greater than or equal to
- \(\leq\) Less than or equal to
An inequality of the form \(a < x < b\) is called a compound inequality.

Solution to an Equation in Two Variables and Ordered Pairs

A pair of values that when substituted into an equation in two variables produces a true statement is called a solution to the equation in two variables. These values are commonly written as an ordered pair. The expression (a, b) is an ordered pair. In an ordered pair, the independent variable is written first and the dependent variable is written second.
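As a brief worked illustration of the concepts summarized above (the specific equations are chosen here for the example):

\[
\begin{aligned}
3x + 5 &= 11 && \text{a conditional equation}\\
3x &= 6 && \text{subtract 5 from both sides (an equivalent equation)}\\
x &= 2 && \text{divide both sides by 3; the only solution is } x = 2\\[6pt]
2(x+1) &= 2x + 2 && \text{the variables eliminate to } 2 = 2\text{, a true statement: an identity}\\
x + 1 &= x && \text{the variables eliminate to } 1 = 0\text{, a false statement: a contradiction}
\end{aligned}
\]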
Last Updated on April 26, 2023 by Prepbytes

In this tutorial, we will be explaining the applications of the stack in data structures. The stack is a major topic in the realm of computer science. Additionally, you must be familiar with every aspect of the stack when it comes to competitive exams like the GATE. You will learn comprehensive information about the applications of the stack in data structures from this post. We think the material in the CSE topic notes will help you comprehend this subject matter more clearly.

What is a Stack in Data Structures?

A stack is a linear data structure used to store an ordered, linear sequence of elements. It is an abstract data type. A stack operates according to the Last In First Out (LIFO) principle, which states that the element that was added last will be deleted first. Because we can only access the element on the top of the stack, implementing the stack requires maintaining a pointer to the top of the stack, which is the last element to be inserted.

1. PUSH: The PUSH operation inserts a new element into a stack. A new element is always added at the top of the stack, so we must first check whether the stack is full, i.e., whether TOP = Max-1. If this condition is true, the stack is full and no other elements may be added; if we attempt to add an element anyway, a stack overflow message will be shown.
Step-1: If TOP = Max-1, PRINT "Stack Overflow" and Goto Step 4
Step-2: Set TOP = TOP + 1
Step-3: Set Stack[TOP] = ELEMENT
Step-4: END

2. POP: POP denotes removing an element from the stack. Before deleting an element, make sure to verify that the stack is not empty, i.e., that TOP is not NULL. If TOP = NULL, the stack is empty, making a deletion operation impossible; if a deletion is attempted anyway, a stack underflow warning will be produced.
Step-1: If TOP = NULL, PRINT "Stack Underflow" and Goto Step 4
Step-2: Set VAL = Stack[TOP]
Step-3: Set TOP = TOP - 1
Step-4: END

3. PEEK: The PEEK operation is employed when it is necessary to return the value of the topmost stack element without erasing it. This operation first determines whether the stack is empty, i.e., whether TOP = NULL; if it is, an appropriate notice is displayed; otherwise, the value of the top element is returned.
Step-1: If TOP = NULL, PRINT "Stack is Empty" and Goto Step 3
Step-2: Return Stack[TOP]
Step-3: END

Representation of the Stack

A stack may have a fixed, predetermined size, or it may be dynamic, meaning that the size of the stack may fluctuate over time. It can be represented using a pointer, an array, a structure, or a linked list.
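The PUSH, POP, and PEEK steps above translate directly into code. The following is a minimal sketch in Python of a fixed-size stack; the class and method names are chosen for illustration and are not taken from any particular library:

```python
class Stack:
    """Fixed-size stack following the PUSH/POP/PEEK steps above."""

    def __init__(self, max_size):
        self.max_size = max_size
        self.items = [None] * max_size
        self.top = -1                        # -1 plays the role of TOP = NULL

    def push(self, element):
        if self.top == self.max_size - 1:    # TOP = Max-1: stack is full
            raise OverflowError("Stack Overflow")
        self.top += 1                        # TOP = TOP + 1
        self.items[self.top] = element       # Stack[TOP] = ELEMENT

    def pop(self):
        if self.top == -1:                   # TOP = NULL: stack is empty
            raise IndexError("Stack Underflow")
        val = self.items[self.top]           # VAL = Stack[TOP]
        self.top -= 1                        # TOP = TOP - 1
        return val

    def peek(self):
        if self.top == -1:
            raise IndexError("Stack is Empty")
        return self.items[self.top]          # return Stack[TOP] without removing it


s = Stack(3)
s.push(10)
s.push(20)
print(s.peek())  # 20 (top element, not removed)
print(s.pop())   # 20
print(s.pop())   # 10
```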
Applications of the stack in data structures include the following:
- Evaluation of Arithmetic Expressions
- Backtracking
- Delimiter Checking
- Reversing Data
- Processing Function Calls

1. Evaluation of Arithmetic Expressions

In computer languages, a stack is an extremely efficient data structure for evaluating arithmetic expressions. Operands and operators are the components of an arithmetic expression. In addition to operands and operators, the arithmetic expression may also contain parentheses, i.e., a "left parenthesis" and a "right parenthesis".
Example: A + (B – C)
The normal precedence rules for arithmetic expressions must be understood in order to evaluate the expressions. The following are the precedence rules for the five fundamental arithmetic operators:

| Operator | Associativity | Precedence |
|---|---|---|
| ^ (exponentiation) | Right to left | Highest, followed by * and / |
| * (multiplication), / (division) | Left to right | Next highest, followed by + and – |
| + (addition), – (subtraction) | Left to right | Lowest |

Evaluation of an arithmetic expression requires two steps:
- Put the provided expression first in a special notation.
- Evaluate the expression in this new notation.

Notations for Arithmetic Expressions

There are three notations to represent an arithmetic expression:
- Infix Notation
- Prefix Notation
- Postfix Notation

In infix notation, each operator is positioned between the operands. Depending on the requirements of the task, infix expressions may be parenthesized or not. Example: A + B, (C – D), etc. All of these expressions are written in infix notation because the operator appears between the operands.

In prefix notation, the operator is listed before the operands. Since the Polish mathematician Jan Łukasiewicz invented this system, it is frequently referred to as Polish notation. Example: + A B, – C D, etc. All of these expressions are in prefix notation because the operator occurs before the operands.

In postfix notation, the operator is listed after the operands. Polish notation is simply reversed in this notation, which is therefore also referred to as Reverse Polish notation. Example: AB+, CD+, etc. All these expressions are in postfix notation because the operator comes after the operands. (A short code sketch of postfix evaluation appears at the end of this section.)

2. Backtracking

Another use for the stack is backtracking, a recursive technique used in solving optimization problems: partial solutions are pushed onto the stack, and when a path fails, the most recent choice is popped off so that an alternative can be tried.

3. Delimiter Checking

The most prevalent application of the stack in data structures is delimiter checking, or parsing, which entails analysing a source program syntactically. It is also known as parenthesis checking. When a source program written in a programming language such as C or C++ is translated into machine language, the compiler separates the program into several components, such as variable names and keywords, by scanning from left to right. The main issue when translating is mismatched delimiters. We employ a variety of delimiters, including parentheses ( and ), curly braces { and }, square brackets [ and ], and the comment delimiters /* and */. Each opening delimiter must be followed by a corresponding closing delimiter, i.e., each opening parenthesis must be followed by a corresponding closing parenthesis. Delimiters can also be nested: an opening delimiter that occurs later in the source program should be closed before those occurring earlier. (See the delimiter-checking sketch at the end of this section.)

4. Reversing Data

If we want to reverse a given collection of data, we must reorder the data in such a way that the first and last elements are switched, the second and second-to-last elements are switched, and so on for all subsequent elements.
Example: If we reverse the string Welcome, we get emocleW.
There are different reversing applications:
- Reversing a string
- Converting decimal to binary

Reversing a String

A stack can be used to reverse a string's characters. This can be done by pushing the characters onto the stack one at a time and then popping them off one at a time. The first character of the string ends up at the bottom of the stack and the last character at the top; because of the stack's last-in-first-out property, popping the stack returns the string in reverse order.
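To make the postfix notation concrete, here is a minimal sketch of a stack-based postfix evaluator in Python; the function name and the restriction to space-separated single tokens are simplifying assumptions for this example:

```python
def evaluate_postfix(tokens):
    """Evaluate a postfix (Reverse Polish) expression given as a list of tokens."""
    stack = []
    for token in tokens:
        if token in ("+", "-", "*", "/", "^"):
            right = stack.pop()              # operands come off in reverse order
            left = stack.pop()
            if token == "+":
                stack.append(left + right)
            elif token == "-":
                stack.append(left - right)
            elif token == "*":
                stack.append(left * right)
            elif token == "/":
                stack.append(left / right)
            else:                            # "^" is exponentiation
                stack.append(left ** right)
        else:
            stack.append(float(token))       # operand: push onto the stack
    return stack.pop()                       # the final result is the only item left


# "2 3 4 - +" is the postfix form of 2 + (3 - 4)
print(evaluate_postfix("2 3 4 - +".split()))  # 1.0
```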
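The delimiter-checking idea can be sketched the same way: push each opening delimiter, and pop when the matching closing delimiter appears. This simplified version handles only brackets, not the /* and */ comment delimiters:

```python
def delimiters_balanced(source):
    """Return True if (), {}, and [] are properly nested in the source text."""
    pairs = {")": "(", "}": "{", "]": "["}
    stack = []
    for ch in source:
        if ch in "({[":
            stack.append(ch)                 # remember the opening delimiter
        elif ch in ")}]":
            if not stack or stack.pop() != pairs[ch]:
                return False                 # closing delimiter has no match
    return not stack                         # leftover openers mean an unclosed delimiter


print(delimiters_balanced("int main() { return arr[0]; }"))  # True
print(delimiters_balanced("f(x[1)]"))                        # False
```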
Converting Decimal to Binary

Although most business programs employ decimal numbers, some scientific and technical applications need binary, octal, or hexadecimal numbers. A number can be transformed from decimal to binary, octal, or hexadecimal using a stack. Any decimal number can be converted to a binary number by repeatedly dividing it by two and pushing the remainder of each division onto the stack until the quotient reaches 0. The binary counterpart of the provided decimal number is then obtained by popping the entire stack. (A short code sketch of this procedure appears at the end of this article.)

Example: Converting the decimal number 14 to binary. When we divide 14 by 2, we get seven as the quotient and zero as the remainder, and the remainder is pushed onto the stack. When we divide seven by two again, we get three as the quotient and one as the remainder, which is once more pushed onto the stack. The given number is reduced in this manner until it reaches zero. After we pop the entire stack, we obtain the equivalent binary number, 1110.

5. Processing Function Calls

In programs that call multiple functions in quick succession, the stack is crucial. Assume we have a program with three functions, A, B, and C. Function A calls function B, and function B calls function C. Function A's processing will not be finished until function B has executed and returned, because function A contains a call to function B. The same is true of functions B and C. As a result, we see that function A can only be finished after function B, and function B can only be finished after function C. Function A should therefore be begun first and finished last. This activity follows last-in-first-out behaviour, so it is simple to handle using a stack. Consider the addresses of the statements to which control is transferred following the completion of functions A, B, and C as addrA, addrB, and addrC, respectively. The return addresses are pushed onto the stack in the order in which the functions are called. As each function finishes, a pop operation retrieves the return address, and execution resumes at that position. The stack data structure can thus handle, in the best possible way, a program that calls multiple functions one after another: each function receives control in the proper location, which is the reverse of the calling sequence.

This brings us to the end of this article on the applications of the stack in data structures. We have covered the applications of the stack in detail, and we have also explained the stack data structure itself, together with its operations and examples.
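As a closing illustration, the decimal-to-binary procedure described above maps directly onto stack push and pop operations; a minimal Python sketch (the function name is chosen for this example):

```python
def decimal_to_binary(n):
    """Convert a non-negative decimal integer to a binary string using a stack."""
    if n == 0:
        return "0"
    stack = []
    while n > 0:
        stack.append(n % 2)                  # push the remainder of the division by 2
        n //= 2                              # continue with the quotient
    digits = []
    while stack:
        digits.append(str(stack.pop()))      # popping reverses the remainders
    return "".join(digits)


print(decimal_to_binary(14))  # 1110, matching the worked example above
```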
Euclidean geometry is a mathematical system attributed to the Alexandrian Greek mathematician Euclid, which he described in his textbook on geometry: the Elements. Euclid's method consists in assuming a small set of intuitively appealing axioms, and deducing many other propositions (theorems) from these. Although many of Euclid's results had been stated by earlier mathematicians, Euclid was the first to show how these propositions could fit into a comprehensive deductive and logical system. The Elements begins with plane geometry, still taught in secondary school (high school) as the first axiomatic system and the first examples of formal proof. It goes on to the solid geometry of three dimensions. Much of the Elements states results of what are now called algebra and number theory, explained in geometrical language. For more than two thousand years, the adjective "Euclidean" was unnecessary because no other sort of geometry had been conceived. Euclid's axioms seemed so intuitively obvious (with the possible exception of the parallel postulate) that any theorem proved from them was deemed true in an absolute, often metaphysical, sense. Today, however, many other self-consistent non-Euclidean geometries are known, the first ones having been discovered in the early 19th century. An implication of Albert Einstein's theory of general relativity is that physical space itself is not Euclidean, and Euclidean space is a good approximation for it only over short distances (relative to the strength of the gravitational field). Euclidean geometry is an example of synthetic geometry, in that it proceeds logically from axioms describing basic properties of geometric objects such as points and lines, to propositions about those objects, all without the use of coordinates to specify those objects. This is in contrast to analytic geometry, which uses coordinates to translate geometric propositions into algebraic formulas.

The Elements

The Elements is mainly a systematization of earlier knowledge of geometry. Its improvement over earlier treatments was rapidly recognized, with the result that there was little interest in preserving the earlier ones, and they are now nearly all lost. There are 13 books in the Elements: Books I–IV and VI discuss plane geometry. Many results about plane figures are proved, for example "In any triangle two angles taken together in any manner are less than two right angles." (Book I, proposition 17) and the Pythagorean theorem "In right angled triangles the square on the side subtending the right angle is equal to the squares on the sides containing the right angle." (Book I, proposition 47) Books V and VII–X deal with number theory, with numbers treated geometrically as lengths of line segments or areas of regions. Notions such as prime numbers and rational and irrational numbers are introduced. It is proved that there are infinitely many prime numbers.

Euclidean geometry is an axiomatic system, in which all theorems ("true statements") are derived from a small number of simple axioms. Until the advent of non-Euclidean geometry, these axioms were considered to be obviously true in the physical world, so that all the theorems would be equally true.
However, Euclid's reasoning from assumptions to conclusions remains valid independent of their physical reality.

Let the following be postulated:
- To draw a straight line from any point to any point.
- To produce [extend] a finite straight line continuously in a straight line.
- To describe a circle with any centre and distance [radius].
- That all right angles are equal to one another.
- [The parallel postulate]: That, if a straight line falling on two straight lines make the interior angles on the same side less than two right angles, the two straight lines, if produced indefinitely, meet on that side on which the angles are less than two right angles.

Although Euclid only explicitly asserts the existence of the constructed objects, in his reasoning they are implicitly assumed to be unique.

The Elements also include the following five "common notions":
- Things that are equal to the same thing are also equal to one another (the Transitive property of a Euclidean relation).
- If equals are added to equals, then the wholes are equal (Addition property of equality).
- If equals are subtracted from equals, then the differences are equal (Subtraction property of equality).
- Things that coincide with one another are equal to one another (Reflexive Property).
- The whole is greater than the part.

Modern scholars agree that Euclid's postulates do not provide the complete logical foundation that Euclid required for his presentation. Modern treatments use more extensive and complete sets of axioms.

To the ancients, the parallel postulate seemed less obvious than the others. They aspired to create a system of absolutely certain propositions, and to them it seemed as if the parallel line postulate required proof from simpler statements. It is now known that such a proof is impossible, since one can construct consistent systems of geometry (obeying the other axioms) in which the parallel postulate is true, and others in which it is false. Euclid himself seems to have considered it as being qualitatively different from the others, as evidenced by the organization of the Elements: his first 28 propositions are those that can be proved without it. Many alternative axioms can be formulated that are logically equivalent to the parallel postulate; for example, Playfair's axiom states:
- In a plane, through a point not on a given straight line, at most one line can be drawn that never meets the given line.

The "at most" clause is all that is needed since it can be proved from the remaining axioms that at least one parallel line exists.

Methods of proof

Euclidean geometry is constructive. Postulates 1, 2, 3, and 5 assert the existence and uniqueness of certain geometric figures, and these assertions are of a constructive nature: that is, we are not only told that certain things exist, but are also given methods for creating them with no more than a compass and an unmarked straightedge. In this sense, Euclidean geometry is more concrete than many modern axiomatic systems such as set theory, which often assert the existence of objects without saying how to construct them, or even assert the existence of objects that cannot be constructed within the theory. Strictly speaking, the lines on paper are models of the objects defined within the formal system, rather than instances of those objects. For example, a Euclidean straight line has no width, but any real drawn line will.
Though nearly all modern mathematicians consider nonconstructive methods just as sound as constructive ones, Euclid's constructive proofs often supplanted fallacious nonconstructive ones—e.g., some of the Pythagoreans' proofs that involved irrational numbers, which usually required a statement such as "Find the greatest common measure of ..." Euclid often used proof by contradiction. Euclidean geometry also allows the method of superposition, in which a figure is transferred to another point in space. For example, proposition I.4, side-angle-side congruence of triangles, is proved by moving one of the two triangles so that one of its sides coincides with the other triangle's equal side, and then proving that the other sides coincide as well. Some modern treatments add a sixth postulate, the rigidity of the triangle, which can be used as an alternative to superposition.

System of measurement and arithmetic

Euclidean geometry has two fundamental types of measurements: angle and distance. The angle scale is absolute, and Euclid uses the right angle as his basic unit, so that, e.g., a 45-degree angle would be referred to as half of a right angle. The distance scale is relative; one arbitrarily picks a line segment with a certain nonzero length as the unit, and other distances are expressed in relation to it. Addition of distances is represented by a construction in which one line segment is copied onto the end of another line segment to extend its length, and similarly for subtraction. Measurements of area and volume are derived from distances. For example, a rectangle with a width of 3 and a length of 4 has an area that represents the product, 12. Because this geometrical interpretation of multiplication was limited to three dimensions, there was no direct way of interpreting the product of four or more numbers, and Euclid avoided such products, although they are implied, e.g., in the proof of book IX, proposition 20. Euclid refers to a pair of lines, or a pair of planar or solid figures, as "equal" (ἴσος) if their lengths, areas, or volumes are equal, and similarly for angles. The stronger term "congruent" refers to the idea that an entire figure is the same size and shape as another figure. Alternatively, two figures are congruent if one can be moved on top of the other so that it matches up with it exactly. (Flipping it over is allowed.) Thus, for example, a 2×6 rectangle and a 3×4 rectangle are equal but not congruent, and the letter R is congruent to its mirror image. Figures that would be congruent except for their differing sizes are referred to as similar. Corresponding angles in a pair of similar shapes are congruent and corresponding sides are in proportion to each other.

Notation and terminology

Naming of points and figures

Points are customarily named using capital letters of the alphabet. Other figures, such as lines, triangles, or circles, are named by listing a sufficient number of points to pick them out unambiguously from the relevant figure, e.g., triangle ABC would typically be a triangle with vertices at points A, B, and C.

Complementary and supplementary angles

Angles whose sum is a right angle are called complementary. Complementary angles are formed when a ray shares the same vertex and is pointed in a direction that is in between the two original rays that form the right angle. The number of rays in between the two original rays is infinite. Angles whose sum is a straight angle are supplementary.
Supplementary angles are formed when a ray shares the same vertex and is pointed in a direction that is in between the two original rays that form the straight angle (180 degree angle). The number of rays in between the two original rays is infinite.

Modern versions of Euclid's notation

Modern school textbooks often define separate figures called lines (infinite), rays (semi-infinite), and line segments (of finite length). Euclid, rather than discussing a ray as an object that extends to infinity in one direction, would normally use locutions such as "if the line is extended to a sufficient length," although he occasionally referred to "infinite lines". A "line" in Euclid could be either straight or curved, and he used the more specific term "straight line" when necessary.

Some important or well known results

The Bridge of Asses (Pons Asinorum) states that in isosceles triangles the angles at the base equal one another, and, if the equal straight lines are produced further, then the angles under the base equal one another. Its name may be attributed to its frequent role as the first real test in the Elements of the intelligence of the reader and as a bridge to the harder propositions that followed. It might also be so named because of the geometrical figure's resemblance to a steep bridge that only a sure-footed donkey could cross.

Congruence of triangles

Triangles are congruent if they have all three sides equal (SSS), two sides and the angle between them equal (SAS), or two angles and a side equal (ASA) (Book I, propositions 4, 8, and 26). Triangles with three equal angles (AAA) are similar, but not necessarily congruent. Also, triangles with two equal sides and an adjacent angle are not necessarily equal or congruent.

Triangle angle sum

The sum of the angles of a triangle is equal to a straight angle (180 degrees). This causes an equilateral triangle to have three interior angles of 60 degrees. It also causes every triangle to have at least two acute angles and at most one obtuse or right angle.

Pythagorean theorem

The celebrated Pythagorean theorem (Book I, proposition 47) states that in any right triangle, the area of the square whose side is the hypotenuse (the side opposite the right angle) is equal to the sum of the areas of the squares whose sides are the two legs (the two sides that meet at a right angle).

Thales' theorem

Thales' theorem, named after Thales of Miletus, states that if A, B, and C are points on a circle where the line AC is a diameter of the circle, then the angle ABC is a right angle. Cantor supposed that Thales proved his theorem by means of Euclid Book I, Prop. 32 after the manner of Euclid Book III, Prop. 31.

Scaling of area and volume

In modern terminology, the area of a plane figure is proportional to the square of any of its linear dimensions, \(A \propto L^2\), and the volume of a solid to the cube, \(V \propto L^3\). Euclid proved these results in various special cases such as the area of a circle and the volume of a parallelepipedal solid. Euclid determined some, but not all, of the relevant constants of proportionality. E.g., it was his successor Archimedes who proved that a sphere has 2/3 the volume of the circumscribing cylinder.
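As a brief worked illustration of these scaling relations (the numbers are chosen here for the example): if every linear dimension of a figure is scaled by a factor \(k\), then

\[
A' = k^2 A, \qquad V' = k^3 V.
\]

For instance, doubling the side of a square (\(k = 2\)) quadruples its area: a square of side 3 has area 9, while a square of side 6 has area \(36 = 2^2 \cdot 9\). Doubling the edge of a cube multiplies its volume by \(2^3 = 8\).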
Applications

Because of Euclidean geometry's fundamental status in mathematics, it is impractical to give more than a representative sampling of applications here. As suggested by the etymology of the word, one of the earliest reasons for interest in geometry was surveying, and certain practical results from Euclidean geometry, such as the right-angle property of the 3-4-5 triangle, were used long before they were proved formally. The fundamental types of measurements in Euclidean geometry are distances and angles, both of which can be measured directly by a surveyor. Historically, distances were often measured by chains, such as Gunter's chain, and angles using graduated circles and, later, the theodolite. An application of Euclidean solid geometry is the determination of packing arrangements, such as the problem of finding the most efficient packing of spheres in n dimensions. This problem has applications in error detection and correction. Geometric optics uses Euclidean geometry to analyze the focusing of light by lenses and mirrors. Geometry is used extensively in architecture. Quite a lot of CAD (computer-aided design) and CAM (computer-aided manufacturing) is based on Euclidean geometry. Design geometry typically consists of shapes bounded by planes, cylinders, cones, tori, etc. CAD/CAM is essential in the design of almost everything nowadays, including cars, airplanes, ships, and the iPhone. A few decades ago, sophisticated draftsmen learned some fairly advanced Euclidean geometry, including things like Pascal's theorem and Brianchon's theorem. But now they don't have to, because the geometric constructions are all done by CAD programs.

As a description of the structure of space

Euclid believed that his axioms were self-evident statements about physical reality. Euclid's proofs depend upon assumptions perhaps not obvious in Euclid's fundamental axioms, in particular that certain movements of figures do not change their geometrical properties such as the lengths of sides and interior angles, the so-called Euclidean motions, which include translations, reflections and rotations of figures. Taken as a physical description of space, postulate 2 (extending a line) asserts that space does not have holes or boundaries (in other words, space is homogeneous and unbounded); postulate 4 (equality of right angles) says that space is isotropic and figures may be moved to any location while maintaining congruence; and postulate 5 (the parallel postulate) that space is flat (has no intrinsic curvature). The ambiguous character of the axioms as originally formulated by Euclid makes it possible for different commentators to disagree about some of their other implications for the structure of space, such as whether or not it is infinite (see below) and what its topology is. Modern, more rigorous reformulations of the system typically aim for a cleaner separation of these issues. Interpreting Euclid's axioms in the spirit of this more modern approach, axioms 1-4 are consistent with either infinite or finite space (as in elliptic geometry), and all five axioms are consistent with a variety of topologies (e.g., a plane, a cylinder, or a torus for two-dimensional Euclidean geometry).

Later work

Archimedes and Apollonius

Archimedes (c. 287 BCE – c. 212 BCE), a colorful figure about whom many historical anecdotes are recorded, is remembered along with Euclid as one of the greatest of ancient mathematicians.
Although the foundations of his work were put in place by Euclid, his work, unlike Euclid's, is believed to have been entirely original. He proved equations for the volumes and areas of various figures in two and three dimensions, and enunciated the Archimedean property of finite numbers. Apollonius of Perga (c. 262 BCE – c. 190 BCE) is mainly known for his investigation of conic sections.

17th century: Descartes

René Descartes (1596–1650) developed analytic geometry, an alternative method for formalizing geometry. In this approach, a point on a plane is represented by its Cartesian (x, y) coordinates, a line is represented by its equation, and so on. In Euclid's original approach, the Pythagorean theorem follows from Euclid's axioms. In the Cartesian approach, the axioms are the axioms of algebra, and the equation expressing the Pythagorean theorem is then a definition of one of the terms in Euclid's axioms, which are now considered theorems. In terms of analytic geometry, the restriction of classical geometry to compass and straightedge constructions means a restriction to first- and second-order equations, e.g., \(y = 2x + 1\) (a line), or \(x^2 + y^2 = 7\) (a circle). Also in the 17th century, Girard Desargues, motivated by the theory of perspective, introduced the concept of idealized points, lines, and planes at infinity. The result can be considered as a type of generalized geometry, projective geometry, but it can also be used to produce proofs in ordinary Euclidean geometry in which the number of special cases is reduced.

18th century

Geometers of the 18th century struggled to define the boundaries of the Euclidean system. Many tried in vain to prove the fifth postulate from the first four. By 1763, at least 28 different proofs had been published, but all were found incorrect. Leading up to this period, geometers also tried to determine what constructions could be accomplished in Euclidean geometry. For example, the problem of trisecting an angle with a compass and straightedge is one that naturally occurs within the theory, since the axioms refer to constructive operations that can be carried out with those tools. However, centuries of efforts failed to find a solution to this problem, until Pierre Wantzel published a proof in 1837 that such a construction was impossible. Other constructions that were proved impossible include doubling the cube and squaring the circle. In the case of doubling the cube, the impossibility of the construction originates from the fact that the compass and straightedge method involves equations whose order is an integral power of two, while doubling a cube requires the solution of a third-order equation.
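In modern algebraic terms, the impossibility argument can be sketched as follows: each compass and straightedge step combines known lengths using the four arithmetic operations and square roots, so every constructible length satisfies an irreducible polynomial equation whose degree is a power of two. Doubling the unit cube requires a segment of length \(x\) with

\[
x^3 = 2, \qquad x = \sqrt[3]{2},
\]

and the minimal polynomial of \(\sqrt[3]{2}\), namely \(x^3 - 2\), has degree 3, which is not a power of two; hence no compass and straightedge construction can produce it.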
In the 19th century, it was also realized that Euclid's ten axioms and common notions do not suffice to prove all of the theorems stated in the Elements. For example, Euclid assumed implicitly that any line contains at least two points, but this assumption cannot be proved from the other axioms, and therefore must be an axiom itself. The very first geometric proof in the Elements is the construction of a triangle on a given line segment; Euclid constructs it in the usual way, by drawing circles of radius equal to the segment around both endpoints and taking their intersection as the third vertex (a coordinate version of this construction is sketched at the end of this passage). His axioms, however, do not guarantee that the circles actually intersect, because they do not assert the geometrical property of continuity, which in Cartesian terms is equivalent to the completeness property of the real numbers. Starting with Moritz Pasch in 1882, many improved axiomatic systems for geometry have been proposed, the best known being those of Hilbert, George Birkhoff, and Tarski.

20th century and general relativity

Einstein's theory of general relativity shows that the true geometry of spacetime is not Euclidean geometry. For example, if a triangle is constructed out of three rays of light, then in general the interior angles do not add up to 180 degrees, due to gravity. A relatively weak gravitational field, such as the Earth's or the Sun's, is represented by a metric that is approximately, but not exactly, Euclidean. Until the 20th century, there was no technology capable of detecting these deviations from Euclidean geometry, but Einstein predicted that such deviations would exist. They were later verified by observations such as the slight bending of starlight by the Sun during a solar eclipse in 1919, and such considerations are now an integral part of the software that runs the GPS system. It is possible to object to this interpretation of general relativity on the grounds that light rays might be improper physical models of Euclid's lines, or that relativity could be rephrased so as to avoid the geometrical interpretations. However, one of the consequences of Einstein's theory is that there is no possible physical test that can distinguish between a beam of light as a model of a geometrical line and any other physical model. Thus, the only logical possibilities are to accept non-Euclidean geometry as physically real, or to reject the entire notion of physical tests of the axioms of geometry, which can then be imagined as a formal system without any intrinsic real-world meaning.

Treatment of infinity

Euclid sometimes distinguished explicitly between "finite lines" (e.g., Postulate 2) and "infinite lines" (book I, proposition 12). However, he typically did not make such distinctions unless they were necessary. The postulates do not explicitly refer to infinite lines, although for example some commentators interpret postulate 3, the existence of a circle with any radius, as implying that space is infinite. The notion of infinitesimal quantities had previously been discussed extensively by the Eleatic School, but nobody had been able to put them on a firm logical basis, and paradoxes such as Zeno's had not been resolved to universal satisfaction. Euclid used the method of exhaustion rather than infinitesimals.
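Returning to the triangle construction of Elements I.1 mentioned above: in coordinates, the intersection of the two circles always exists precisely because the real numbers are complete. A minimal C sketch under that interpretation (the endpoint coordinates are example values, and the apex is found analytically by a perpendicular offset from the midpoint rather than by an actual compass construction):

#include <stdio.h>
#include <math.h>

/* Elements I.1: erect an equilateral triangle on segment AB.
   The circles of radius |AB| centred at A and B meet at the apex C,
   which sits over the midpoint of AB, offset along the perpendicular
   by (sqrt(3)/2)*|AB|. */
int main(void) {
    double ax = 0.0, ay = 0.0;   /* endpoint A (example values) */
    double bx = 1.0, by = 0.0;   /* endpoint B */

    double dx = bx - ax, dy = by - ay;
    double len = hypot(dx, dy);          /* |AB| */
    double mx = (ax + bx) / 2.0;         /* midpoint of AB */
    double my = (ay + by) / 2.0;
    double px = -dy / len, py = dx / len;  /* unit perpendicular to AB */
    double h = len * sqrt(3.0) / 2.0;      /* height of the triangle */

    double cx = mx + h * px, cy = my + h * py;
    printf("apex C = (%f, %f)\n", cx, cy);

    /* sanity check: C is at distance |AB| from both endpoints */
    printf("|AC| = %f, |BC| = %f\n", hypot(cx - ax, cy - ay),
                                     hypot(cx - bx, cy - by));
    return 0;
}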
Later ancient commentators, such as Proclus (410–485 CE), treated many questions about infinity as issues demanding proof; e.g., Proclus claimed to prove the infinite divisibility of a line, based on a proof by contradiction in which he considered the cases of even and odd numbers of points constituting it. At the turn of the 20th century, Otto Stolz, Paul du Bois-Reymond, Giuseppe Veronese, and others produced controversial work on non-Archimedean models of Euclidean geometry, in which the distance between two points may be infinite or infinitesimal, in the Newton–Leibniz sense. Fifty years later, Abraham Robinson provided a rigorous logical foundation for Veronese's work.

One reason that the ancients treated the parallel postulate as less certain than the others is that verifying it physically would require us to inspect two lines to check that they never intersected, even at some very distant point, and this inspection could potentially take an infinite amount of time.

The modern formulation of proof by induction was not developed until the 17th century, but some later commentators consider it implicit in some of Euclid's proofs, e.g., the proof of the infinitude of primes. Supposed paradoxes involving infinite series, such as Zeno's paradox, predated Euclid. Euclid avoided such discussions, giving, for example, the expression for the partial sums of the geometric series in IX.35 (in modern notation, a + ar + ar² + ⋯ + arⁿ⁻¹ = a(rⁿ − 1)/(r − 1)) without commenting on the possibility of letting the number of terms become infinite.

Logical basis

Euclid frequently used the method of proof by contradiction, and therefore the traditional presentation of Euclidean geometry assumes classical logic, in which every proposition is either true or false, i.e., for any proposition P, the proposition "P or not P" is automatically true.

Modern standards of rigor

Placing Euclidean geometry on a solid axiomatic basis was a preoccupation of mathematicians for centuries. The role of primitive notions, or undefined concepts, was clearly put forward by Alessandro Padoa of the Peano delegation at the 1900 Paris conference:

...when we begin to formulate the theory, we can imagine that the undefined symbols are completely devoid of meaning and that the unproved propositions are simply conditions imposed upon the undefined symbols. Then, the system of ideas that we have initially chosen is simply one interpretation of the undefined symbols; but ... this interpretation can be ignored by the reader, who is free to replace it in his mind by another interpretation ... that satisfies the conditions ... Logical questions thus become completely independent of empirical or psychological questions ... The system of undefined symbols can then be regarded as the abstraction obtained from the specialized theories that result when ... the system of undefined symbols is successively replaced by each of the interpretations ...
— Padoa, Essai d'une théorie algébrique des nombres entiers, avec une Introduction logique à une théorie déductive quelconque

If our hypothesis is about anything, and not about some one or more particular things, then our deductions constitute mathematics. Thus, mathematics may be defined as the subject in which we never know what we are talking about, nor whether what we are saying is true.
— Bertrand Russell, Mathematics and the metaphysicians

Geometry is the science of correct reasoning on incorrect figures.
— George Pólya, How to Solve It, p. 208
- Euclid's axioms: In his dissertation to Trinity College, Cambridge, Bertrand Russell summarized the changing role of Euclid's geometry in the minds of philosophers up to that time. It was a conflict between certain knowledge, independent of experiment, and empiricism, requiring experimental input. This issue became clear once it was discovered that the parallel postulate was not necessarily valid, and that its applicability was an empirical matter, deciding whether the applicable geometry was Euclidean or non-Euclidean.
- Hilbert's axioms: Hilbert's axioms had the goal of identifying a simple and complete set of independent axioms from which the most important geometric theorems could be deduced. The outstanding objectives were to make Euclidean geometry rigorous (avoiding hidden assumptions) and to make clear the ramifications of the parallel postulate.
- Birkhoff's axioms: Birkhoff proposed four postulates for Euclidean geometry that can be confirmed experimentally with scale and protractor. This system relies heavily on the properties of the real numbers. The notions of angle and distance become primitive concepts.
- Tarski's axioms: Alfred Tarski (1902–1983) and his students defined elementary Euclidean geometry as the geometry that can be expressed in first-order logic and does not depend on set theory for its logical basis, in contrast to Hilbert's axioms, which involve point sets. Tarski proved that his axiomatic formulation of elementary Euclidean geometry is consistent and complete in a certain sense: there is an algorithm that, for every proposition, can show it to be either true or false. (This does not violate Gödel's theorem, because Euclidean geometry cannot describe a sufficient amount of arithmetic for the theorem to apply.) This is equivalent to the decidability of real closed fields, of which elementary Euclidean geometry is a model.

Constructive approaches and pedagogy

The process of abstract axiomatization as exemplified by Hilbert's axioms reduces geometry to theorem proving or predicate logic. In contrast, the Greeks used construction postulates and emphasized problem solving. For the Greeks, constructions are more primitive than existence propositions, and can be used to prove existence propositions, but not vice versa. To describe problem solving adequately requires a richer system of logical concepts. The contrast in approach may be summarized:

- Axiomatic proof: Proofs are deductive derivations of propositions from primitive premises that are 'true' in some sense. The aim is to justify the proposition.
- Analytic proof: Proofs are non-deductive derivations of hypotheses from problems. The aim is to find hypotheses capable of giving a solution to the problem.

One can argue that Euclid's axioms were arrived at in this manner. In particular, it is thought that Euclid felt the parallel postulate was forced upon him, as indicated by his reluctance to make use of it and his arrival at it by the method of contradiction. Andrei Nikolaevich Kolmogorov proposed a problem-solving basis for geometry. This work was a precursor of a modern formulation in terms of constructive type theory. This development has implications for pedagogy as well.
If proof simply follows conviction of truth rather than contributing to its construction and is only experienced as a demonstration of something already known to be true, it is likely to remain meaningless and purposeless in the eyes of students.
— Celia Hoyles, The curricular shaping of students' approach to proof

- Absolute geometry
- Analytic geometry
- Birkhoff's axioms
- Cartesian coordinate system
- Hilbert's axioms
- Incidence geometry
- List of interactive geometry software
- Metric space
- Non-Euclidean geometry
- Ordered geometry
- Parallel postulate
- Type theory
- Angle bisector theorem
- Butterfly theorem
- Ceva's theorem
- Heron's formula
- Menelaus' theorem
- Nine-point circle
- Pythagorean theorem

- Eves 1963, p. 19
- Eves 1963, p. 10
- Misner, Thorne, and Wheeler (1973), p. 47
- The assumptions of Euclid are discussed from a modern perspective in Harold E. Wolfe (2007). Introduction to Non-Euclidean Geometry. Mill Press. p. 9. ISBN 1-4067-1852-1.
- tr. Heath, pp. 195–202.
- Venema, Gerard A. (2006), Foundations of Geometry, Prentice-Hall, p. 8, ISBN 978-0-13-143700-5
- Florence P. Lewis (Jan 1920), "History of the Parallel Postulate", The American Mathematical Monthly, 27 (1): 16–23, doi:10.2307/2973238, JSTOR 2973238.
- Ball, p. 56
- Within Euclid's assumptions, it is quite easy to give a formula for the area of triangles and squares. However, in a more general context like set theory, it is not as easy to prove that the area of a square is the sum of the areas of its pieces, for example. See Lebesgue measure and Banach–Tarski paradox.
- Daniel Shanks (2002). Solved and Unsolved Problems in Number Theory. American Mathematical Society.
- Coxeter, p. 5
- Euclid, book I, proposition 5, tr. Heath, p. 251
- Ignoring the alleged difficulty of Book I, Proposition 5, Sir Thomas L. Heath mentions another interpretation. This rests on the resemblance of the figure's lower straight lines to a steeply inclined bridge that could be crossed by an ass but not by a horse: "But there is another view (as I have learnt lately) which is more complimentary to the ass. It is that, the figure of the proposition being like that of a trestle bridge, with a ramp at each end which is more practicable the flatter the figure is drawn, the bridge is such that, while a horse could not surmount the ramp, an ass could; in other words, the term is meant to refer to the sure-footedness of the ass rather than to any want of intelligence on his part." (in "Excursis II," volume 1 of Heath's translation of The Thirteen Books of the Elements.)
- Euclid, book I, proposition 32
- Heath, p. 135
- Heath, p. 318
- Euclid, book XII, proposition 2
- Euclid, book XI, proposition 33
- Ball, p. 66
- Ball, p. 5
- Eves, vol. 1, p. 5; Mlodinow, p. 7
- Tom Hull. "Origami and Geometric Constructions".
- Richard J. Trudeau (2008). "Euclid's axioms". The Non-Euclidean Revolution. Birkhäuser. pp. 39 ff. ISBN 0-8176-4782-1.
- See, for example: Luciano da Fontoura Costa; Roberto Marcondes Cesar (2001). Shape analysis and classification: theory and practice. CRC Press. p. 314. ISBN 0-8493-3493-4. and Helmut Pottmann; Johannes Wallner (2010). Computational Line Geometry. Springer. p. 60. ISBN 3-642-04017-9. The group of motions underlies the metric notions of geometry. See Felix Klein (2004). Elementary Mathematics from an Advanced Standpoint: Geometry (Reprint of 1939 Macmillan Company ed.). Courier Dover. p. 167. ISBN 0-486-43481-8.
- Roger Penrose (2007). The Road to Reality: A Complete Guide to the Laws of the Universe. Vintage Books. p. 29. ISBN 0-679-77631-1.
- Heath, p. 200
- e.g., Tarski (1951)
- Eves, p. 27
- Ball, pp. 268 ff
- Eves (1963)
- Hofstadter 1979, p. 91.
- Theorem 120, Elements of Abstract Algebra, Allan Clark, Dover, ISBN 0-486-64725-0
- Eves (1963), p. 64
- Ball, p. 485
- Howard Eves, 1997 (1958). Foundations and Fundamental Concepts of Mathematics. Dover.
- Birkhoff, G. D., 1932, "A Set of Postulates for Plane Geometry (Based on Scale and Protractors)," Annals of Mathematics 33.
- Tarski (1951)
- Misner, Thorne, and Wheeler (1973), p. 191
- Rizos, Chris. University of New South Wales. GPS Satellite Signals. 1999.
- Ball, p. 31
- Heath, p. 268
- Giuseppe Veronese, On Non-Archimedean Geometry, 1908. English translation in Real Numbers, Generalizations of the Reals, and Theories of Continua, ed. Philip Ehrlich, Kluwer, 1994.
- Robinson, Abraham (1966). Non-standard analysis.
- For the assertion that this was the historical reason for the ancients considering the parallel postulate less obvious than the others, see Nagel and Newman 1958, p. 9.
- Cajori (1918), p. 197
- A detailed discussion can be found in James T. Smith (2000). "Chapter 2: Foundations". Methods of geometry. Wiley. pp. 19 ff. ISBN 0-471-25183-6.
- Société française de philosophie (1900). Revue de métaphysique et de morale, Volume 8. Hachette. p. 592.
- Bertrand Russell (2000). "Mathematics and the metaphysicians". In James Roy Newman. The world of mathematics. 3 (Reprint of Simon and Schuster 1956 ed.). Courier Dover Publications. p. 1577. ISBN 0-486-41151-6.
- Bertrand Russell (1897). "Introduction". An essay on the foundations of geometry. Cambridge University Press.
- George David Birkhoff; Ralph Beatley (1999). "Chapter 2: The five fundamental principles". Basic Geometry (3rd ed.). AMS Bookstore. pp. 38 ff. ISBN 0-8218-2101-6.
- James T. Smith. "Chapter 3: Elementary Euclidean Geometry". Cited work. pp. 84 ff.
- Edwin E. Moise (1990). Elementary geometry from an advanced standpoint (3rd ed.). Addison–Wesley. ISBN 0-201-50867-2.
- John R. Silvester (2001). "§1.4 Hilbert and Birkhoff". Geometry: ancient and modern. Oxford University Press. ISBN 0-19-850825-5.
- Alfred Tarski (2007). "What is elementary geometry". In Leon Henkin; Patrick Suppes; Alfred Tarski. Studies in Logic and the Foundations of Mathematics – The Axiomatic Method with Special Reference to Geometry and Physics (Proceedings of International Symposium at Berkeley 1957–8; Reprint ed.). Brouwer Press. p. 16. ISBN 1-4067-5355-6. "We regard as elementary that part of Euclidean geometry which can be formulated and established without the help of any set-theoretical devices."
- Keith Simmons (2009). "Tarski's logic". In Dov M. Gabbay; John Woods. Logic from Russell to Church. Elsevier. p. 574. ISBN 0-444-51620-4.
- Franzén, Torkel (2005). Gödel's Theorem: An Incomplete Guide to its Use and Abuse. AK Peters. ISBN 1-56881-238-8. pp. 25–26.
- Petri Mäenpää (1999). "From backward reduction to configurational analysis". In Michael Otte; Marco Panza. Analysis and synthesis in mathematics: history and philosophy. Springer. p. 210. ISBN 0-7923-4570-3.
- Carlo Cellucci (2008). "Why proof? What is proof?". In Rossella Lupacchini; Giovanna Corsi. Deduction, Computation, Experiment: Exploring the Effectiveness of Proof. Springer. p. 1. ISBN 88-470-0783-6.
- Eric W. Weisstein (2003). "Euclid's postulates". CRC concise encyclopedia of mathematics (2nd ed.). CRC Press. p. 942. ISBN 1-58488-347-2.
- Deborah J. Bennett (2004). Logic made easy: how to know when language deceives you. W. W. Norton & Company. p. 34. ISBN 0-393-05748-8.
- AN Kolmogorov; AF Semenovich; RS Cherkasov (1982). Geometry: A textbook for grades 6–8 of secondary school [Geometriya. Uchebnoe posobie dlya 6–8 klassov srednie shkoly] (3rd ed.). Moscow: "Prosveshchenie" Publishers. pp. 372–376. A description of the approach, which was based upon geometric transformations, can be found in Teaching geometry in the USSR, Chernysheva, Firsov, and Teljakovskii.
- Viktor Vasilʹevich Prasolov; Vladimir Mikhaĭlovich Tikhomirov (2001). Geometry. AMS Bookstore. p. 198. ISBN 0-8218-2038-9.
- Petri Mäenpää (1998). "Analytic program derivation in type theory". In Giovanni Sambin; Jan M. Smith. Twenty-five years of constructive type theory: proceedings of a congress held in Venice, October 1995. Oxford University Press. p. 113. ISBN 0-19-850127-7.
- Celia Hoyles (Feb 1997). "The curricular shaping of students' approach to proof". For the Learning of Mathematics. FLM Publishing Association. 17 (1): 7–16. JSTOR 40248217.
- Ball, W.W. Rouse (1960). A Short Account of the History of Mathematics (4th ed.; reprint of London: Macmillan & Co., 1908). New York: Dover Publications. pp. 50–62. ISBN 0-486-20630-0.
- Coxeter, H.S.M. (1961). Introduction to Geometry. New York: Wiley.
- Eves, Howard (1963). A Survey of Geometry (Volume One). Allyn and Bacon.
- Heath, Thomas L. (1956). The Thirteen Books of Euclid's Elements (2nd ed.; facsimile of Cambridge University Press, 1925). New York: Dover Publications. In 3 vols.: vol. 1 ISBN 0-486-60088-2, vol. 2 ISBN 0-486-60089-0, vol. 3 ISBN 0-486-60090-4. Heath's authoritative translation of Euclid's Elements, plus his extensive historical research and detailed commentary throughout the text.
- Misner, Thorne, and Wheeler (1973). Gravitation. W.H. Freeman.
- Mlodinow (2001). Euclid's Window. The Free Press.
- Nagel, E.; Newman, J.R. (1958). Gödel's Proof. New York University Press.
- Alfred Tarski (1951). A Decision Method for Elementary Algebra and Geometry. Univ. of California Press.
- Hazewinkel, Michiel, ed. (2001), "Euclidean geometry", Encyclopedia of Mathematics, Springer Science+Business Media B.V. / Kluwer Academic Publishers, ISBN 978-1-55608-010-4
- Hazewinkel, Michiel, ed. (2001), "Plane trigonometry", Encyclopedia of Mathematics, Springer Science+Business Media B.V. / Kluwer Academic Publishers, ISBN 978-1-55608-010-4
- Kiran Kedlaya, Geometry Unbound (a treatment using analytic geometry; PDF format, GFDL licensed)
In physics, acceleration is the rate of change of velocity of an object with respect to time. An object's acceleration is the net result of all forces acting on the object, as described by Newton's Second Law. The SI unit for acceleration is the metre per second squared (m·s⁻²). Accelerations are vector quantities (they have magnitude and direction) and add according to the parallelogram law. The vector of the net force acting on a body has the same direction as the vector of the body's acceleration, and its magnitude is proportional to the magnitude of the acceleration, with the object's mass (a scalar quantity) as the proportionality constant.

|SI unit||m/s², m·s⁻²|
|Dimension||L T⁻²|

For example, when a car starts from a standstill (zero velocity, in an inertial frame of reference) and travels in a straight line at increasing speeds, it is accelerating in the direction of travel. If the car turns, an acceleration occurs toward the new direction. The forward acceleration of the car is called a linear (or tangential) acceleration, the reaction to which passengers in the car experience as a force pushing them back into their seats. When changing direction, this is called radial (as orthogonal to tangential) acceleration, the reaction to which passengers experience as a sideways force. If the speed of the car decreases, this is an acceleration in the direction opposite to the velocity of the vehicle, sometimes called deceleration (or, in spacecraft, retrograde burning). Passengers experience the reaction to deceleration as a force pushing them forwards. Acceleration and deceleration are treated the same way: both are changes in velocity. Each of these accelerations (tangential, radial, deceleration) is felt by passengers until their velocity (speed and direction) matches that of the uniformly moving car.

Definition and properties

An object's average acceleration over a period of time is its change in velocity divided by the duration of the period, Δv/Δt. Instantaneous acceleration, meanwhile, is the limit of the average acceleration over an infinitesimal interval of time. In the terms of calculus, instantaneous acceleration is the derivative of the velocity vector with respect to time: a = dv/dt. It can be seen that the integral of the acceleration function a(t) is the velocity function v(t); that is, the area under the curve of an acceleration vs. time (a vs. t) graph corresponds to velocity. As acceleration is defined as the derivative of velocity, v, with respect to time t, and velocity is defined as the derivative of position, x, with respect to time, acceleration can be thought of as the second derivative of x with respect to t: a = dv/dt = d²x/dt². Acceleration has the dimensions of velocity (L/T) divided by time, i.e. L T⁻². The SI unit of acceleration is the metre per second squared (m s⁻²), or "metre per second per second", as the velocity in metres per second changes by the acceleration value every second.

An object moving in a circular motion—such as a satellite orbiting the Earth—is accelerating due to the change of direction of motion, although its speed may be constant. In this case it is said to be undergoing centripetal (directed towards the center) acceleration.

In classical mechanics, for a body with constant mass, the (vector) acceleration of the body's center of mass is proportional to the net force vector (i.e.
sum of all forces) acting on it (Newton's second law): F = ma, where F is the net force acting on the body, m is the mass of the body, and a is the center-of-mass acceleration. As speeds approach the speed of light, relativistic effects become increasingly large.

Tangential and centripetal acceleration

The velocity of a particle moving on a curved path as a function of time can be written as v(t) = v(t)·u_t, with v(t) equal to the speed of travel along the path and u_t a unit vector tangent to the path pointing in the direction of motion at the chosen moment in time. Taking into account both the changing speed v(t) and the changing direction of u_t, the acceleration of a particle moving on a curved path can be written using the chain rule of differentiation for the product of two functions of time as a = (dv/dt)·u_t + (v²/r)·u_n, where u_n is the unit (inward) normal vector to the particle's trajectory (also called the principal normal), and r is its instantaneous radius of curvature based upon the osculating circle at time t. These components are called the tangential acceleration and the normal or radial acceleration (or centripetal acceleration in circular motion; see also circular motion and centripetal force).

Special cases

Uniform or constant acceleration is a type of motion in which the velocity of an object changes by an equal amount in every equal time period. A frequently cited example of uniform acceleration is that of an object in free fall in a uniform gravitational field. The acceleration of a falling body in the absence of resistances to motion is dependent only on the gravitational field strength g (also called acceleration due to gravity). By Newton's Second Law the force acting on a body is given by F = mg.

Because of the simple analytic properties of the case of constant acceleration, there are simple formulas relating the displacement, initial and time-dependent velocities, and acceleration to the time elapsed:

v(t) = v₀ + a·t
s(t) = s₀ + v₀·t + (1/2)·a·t²
v(t)² = v₀² + 2a·(s(t) − s₀)

where
- t is the elapsed time,
- s₀ is the initial displacement from the origin,
- s(t) is the displacement from the origin at time t,
- v₀ is the initial velocity,
- v(t) is the velocity at time t, and
- a is the uniform rate of acceleration.

In particular, the motion can be resolved into two orthogonal parts, one of constant velocity and the other according to the above equations. As Galileo showed, the net result is parabolic motion, which describes, e.g., the trajectory of a projectile in a vacuum near the surface of Earth.

In uniform circular motion, that is moving with constant speed along a circular path, a particle experiences an acceleration resulting from the change of the direction of the velocity vector, while its magnitude remains constant. The derivative of the location of a point on a curve with respect to time, i.e. its velocity, turns out to be always exactly tangential to the curve, and thus orthogonal to the radius at this point. Since in uniform motion the velocity in the tangential direction does not change, the acceleration must be in the radial direction, pointing to the center of the circle. This acceleration constantly changes the direction of the velocity to be tangent at the neighboring point, thereby rotating the velocity vector along the circle.

- For a given speed v, the magnitude of this geometrically caused acceleration (centripetal acceleration) is inversely proportional to the radius r of the circle, and increases as the square of the speed: a_c = v²/r.
- Note that, for a given angular velocity ω, the centripetal acceleration is directly proportional to the radius r: a_c = ω²·r. This follows from the dependence of velocity on the radius, v = ω·r.
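As a quick illustration, a minimal C sketch of the constant-acceleration formulas and the two centripetal-acceleration expressions above (all numeric values are examples, not data from the text):

#include <stdio.h>

int main(void) {
    /* constant-acceleration kinematics: example values */
    double s0 = 0.0;   /* initial displacement, m */
    double v0 = 5.0;   /* initial velocity, m/s */
    double a  = 2.0;   /* uniform acceleration, m/s^2 */
    double t  = 3.0;   /* elapsed time, s */

    double v = v0 + a * t;                      /* v(t) = v0 + a*t */
    double s = s0 + v0 * t + 0.5 * a * t * t;   /* s(t) = s0 + v0*t + a*t^2/2 */
    printf("v(%.1f s) = %.2f m/s, s(%.1f s) = %.2f m\n", t, v, t, s);

    /* uniform circular motion: the two centripetal expressions agree */
    double r     = 10.0;              /* radius, m */
    double speed = 4.0;               /* constant speed, m/s */
    double omega = speed / r;         /* angular speed, rad/s (v = omega*r) */
    double ac1 = speed * speed / r;   /* a_c = v^2 / r */
    double ac2 = omega * omega * r;   /* a_c = omega^2 * r (same value) */
    printf("a_c = %.3f m/s^2 (v^2/r) = %.3f m/s^2 (omega^2 r)\n", ac1, ac2);
    return 0;
}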
Expressing the centripetal acceleration vector in polar components, where r is a vector from the centre of the circle to the particle with magnitude equal to this distance, and considering the orientation of the acceleration towards the centre, yields a_c = −(v²/|r|²)·r. As usual in rotations, the speed v of a particle may be expressed as an angular speed ω with respect to a point at the distance |r| as v = ω·|r|, so that a_c = −ω²·r. This acceleration and the mass of the particle determine the necessary centripetal force, directed toward the centre of the circle, as the net force acting on this particle to keep it in this uniform circular motion. The so-called 'centrifugal force', appearing to act outward on the body, is a pseudo force experienced in the frame of reference of the body in circular motion, due to the body's linear momentum, a vector tangent to the circle of motion.

In nonuniform circular motion, i.e., when the speed along the curved path is changing, the acceleration has a non-zero component tangential to the curve and is not confined to the principal normal, which points to the center of the osculating circle and determines the radius for the centripetal acceleration. The tangential component is given by the angular acceleration α, i.e., the rate of change of the angular speed ω, times the radius. That is, a_t = r·α. The sign of the tangential component of the acceleration is determined by the sign of the angular acceleration α, and the tangent is of course always directed at right angles to the radius vector.

Relation to relativity

The special theory of relativity describes the behavior of objects traveling relative to other objects at speeds approaching that of light in a vacuum. Newtonian mechanics is revealed to be an approximation to reality, valid to great accuracy at lower speeds. As the relevant speeds increase toward the speed of light, acceleration no longer follows classical equations. As speeds approach that of light, the acceleration produced by a given force decreases, becoming infinitesimally small as light speed is approached; an object with mass can approach this speed asymptotically, but never reach it.

Unless the state of motion of an object is known, it is impossible to distinguish whether an observed force is due to gravity or to acceleration—gravity and inertial acceleration have identical effects. Albert Einstein called this the equivalence principle, and said that only observers who feel no force at all—including the force of gravity—are justified in concluding that they are not accelerating.

Conversions

|Base value||Gal, or cm/s²||ft/s²||m/s²||Standard gravity, g₀|
|1 Gal, or cm/s²||1||0.0328084||0.01||0.00101972|

See also

- Jerk (physics)
- Four-vector: making the connection between space and time explicit
- Gravitational acceleration
- Acceleration (differential geometry)
- Orders of magnitude (acceleration)
- Shock (mechanics)
- Shock and vibration data logger measuring 3-axis acceleration
- Space travel using constant acceleration
- Specific force

References

- Crew, Henry (2008). The Principles of Mechanics. BiblioBazaar, LLC. p. 43. ISBN 978-0-559-36871-4.
- Bondi, Hermann (1980). Relativity and Common Sense. Courier Dover Publications. p. 3. ISBN 978-0-486-24021-3.
- Lehrman, Robert L. (1998). Physics the Easy Way. Barron's Educational Series. p. 27. ISBN 978-0-7641-0236-3.
- Raymond A. Serway; Chris Vuille; Jerry S. Faughn (2008). College Physics, Volume 10. Cengage. p. 32. ISBN 9780495386933.
- Weisstein, Eric W. "Chain Rule". Wolfram MathWorld. Wolfram Research. Retrieved 2 August 2016.
- Larry C. Andrews; Ronald L.
Phillips (2003). Mathematical Techniques for Engineers and Scientists. SPIE Press. p. 164. ISBN 978-0-8194-4506-3. - Ch V Ramana Murthy; NC Srinivas (2001). Applied Mathematics. New Delhi: S. Chand & Co. p. 337. ISBN 978-81-219-2082-7. - Keith Johnson (2001). Physics for you: revised national curriculum edition for GCSE (4th ed.). Nelson Thornes. p. 135. ISBN 978-0-7487-6236-1. - David C. Cassidy; Gerald James Holton; F. James Rutherford (2002). Understanding physics. Birkhäuser. p. 146. ISBN 978-0-387-98756-9. - Brian Greene, The Fabric of the Cosmos: Space, Time, and the Texture of Reality, page 67. Vintage ISBN 0-375-72720-5
In this lesson we are going to talk about one-step equations. An equation states that two expressions have the same value. An equation can be written in the form N = M, where N and M are expressions and at least one of them contains at least one variable. There are different types of equations, but we will focus on algebraic one-step equations in this lesson. One-step equations are equations that can be solved in a single step: we only need to perform a single mathematical operation in order to solve the equation. As a result, we find the value of the variable (named x in our case).

An example of a one-step equation that includes addition is:

x + 2 = 5

This equation consists of the variable x and two constants, the numbers 2 and 5. The variable x is the unknown number whose value we need to find. To find the value of the unknown number in the example above, we need to get rid of the 2 on the left side of the equation. We need to get the equation into the form "x = a number". When we calculate the expression on the right side, we will get a number, and that will be the value of our variable x. In our example, we do that by subtracting the number 2 from both sides of the equation. It should look like this:

x + 2 − 2 = 5 − 2
x = 3

The value of our variable is 3.

Another kind of one-step equation includes subtraction, with −5 as a constant on the left side. We need to get rid of that number to get the equation into the form "x equals a number". In order to achieve that, we simply add 5 to both sides of the equation; the result is x alone on the left side and a number on the right.

One-step equations can also contain multiplication or division. A one-step equation with multiplication can be solved by dividing both sides of the equation by the coefficient, which is the number that multiplies x. An equation with 6x on the left side, for example, is solved by dividing both sides by 6, since we need to get rid of the 6 on the left side of the equation. The result is x alone on the left side and a number on the right.

A one-step equation that includes division can be solved in a similar way: we just need to multiply both sides of the equation by the number that divides x. In the example x/4 = 5, we need to multiply both sides by 4. When we solve the equation, we see that the value of the variable is 20. That means that 20 divided by 4 is 5.

So, this is basically it for one-step equations. A compact summary of all four cases is given after the worksheet list below. If you want to practice a bit, feel free to use the worksheets posted below. Otherwise, you can follow the link to the other lessons, such as the one on two-step equations.

One-step equations worksheets

Addition and subtraction of integers (264.1 KiB, 471 hits)
Addition and subtraction of decimal numbers (276.3 KiB, 354 hits)
Addition and subtraction of fractions (452.6 KiB, 340 hits)
Multiplication and division of integers (263.7 KiB, 425 hits)
Multiplication and division of decimal numbers (279.9 KiB, 344 hits)
Multiplication and division of fractions (382.4 KiB, 356 hits)
Using addition and subtraction in solving word problems (370.4 KiB, 679 hits)
Using multiplication and division in solving word problems (284.9 KiB, 619 hits)
Solve the word problems using basic math operations (347.3 KiB, 259 hits)
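For reference, the four one-step cases worked through above can be written compactly as follows, where a and b stand for known numbers (with a ≠ 0 in the last two cases); this is just a restatement of the rules in LaTeX notation:

\begin{align*}
x + a = b &\;\Rightarrow\; x = b - a \\
x - a = b &\;\Rightarrow\; x = b + a \\
a\,x = b &\;\Rightarrow\; x = b / a \\
x / a = b &\;\Rightarrow\; x = a\,b
\end{align*}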
The Dark Energy Survey (DES) is an effort to image as many galaxies as possible as a proxy for mapping out dark matter, which is possible because dark matter's gravity plays a strong role in governing how these galaxies are distributed. From August 2013 to January 2019, dozens upon dozens of scientists came together to use the four-meter Victor M. Blanco Telescope in Chile to survey the sky in the near infrared. There are two keys to creating the map. The first is simply observing the location and distribution of galaxies throughout the universe. That arrangement clues scientists in to where the largest concentrations of dark matter are located. The second is observing gravitational lensing, a phenomenon in which the light emitted by galaxies is gravitationally stretched by dark matter as it moves through space. The effect is similar to looking through a magnifying glass. Scientists use gravitational lensing to infer how much actual space nearby dark matter is taking up. The more distorted the light, the clumpier the dark matter. The latest results take into account the first three years of DES data, tallying more than 226 million galaxies observed over 345 nights. "We are now able to map out dark matter over a quarter of the Southern Hemisphere," says Niall Jeffrey, a researcher from University College London and École Normale Supérieure in Paris, one of the DES project leads. In general, the data lines up with the so-called Standard Model of Cosmology, which posits that the universe was created in the Big Bang and that its total mass-energy content is 95% dark matter and dark energy. And the new map provided scientists with a more detailed look at some vast dark-matter structures of the universe that otherwise remain invisible to us. The brightest spots on the map represent the highest concentrations of dark matter, and they form clusters and halos around voids of very low densities. But some results were surprising. "We found hints that the universe is smoother than expected," says Jeffrey. "These hints are also seen in other gravitational-lensing experiments." This is not what is predicted by general relativity, which suggests that dark matter should be more clumpy and less uniformly distributed. The authors write in one of the 30 papers being released that "though the evidence is by no means definitive, we are perhaps beginning to see hints of new physics." For cosmologists, "this would correspond to possibly changing the laws of gravity as described by Einstein," says Jeffrey. Although the implications are huge, caution is paramount, because we still actually know so little about dark matter (something we've yet to directly observe). For example, Jeffrey notes that "if nearby galaxies form in an alignment in a strange way due to complex astrophysics, then our lensing results would be misled." In other words, there might very well be some exotic explanations for the results—perhaps accounting for them in ways that are reconcilable with general relativity. That would be a huge relief to any astrophysicist whose entire life's work is based on Einstein being, well, correct. And let's not forget: general relativity has stood up remarkably well to every other test that has been thrown at it over the years. The results are already making waves, even with several more DES data releases pending. "Already, astronomers are using these maps to study the structures of the cosmic web and understand the connection between galaxies and dark matter better," says Jeffrey.
We may not have to wait too long to find out whether the results really are a blip or whether our understanding of the universe needs some massive rewriting.
A programmer can solve any problem in any manner, or a coder can write program code just for its own sake; in the end, it may not even matter whether it works or not. But if the quality of the program is to be judged, that is a different matter. In that case, if the code is built around a good algorithm, the program runs as quickly as possible or uses the fewest resources. Pointers, among many other language features, exist for exactly this purpose. Although different types of pointers were mentioned in the last section, only null pointers were explained there. This part discusses the void pointer and the const pointer.

In simple words, the void pointer is a special pointer that can hold the address of a variable of any type without casting. In other words, this type of pointer can point to a variable of any type without a cast. The rule for declaring such a pointer is:

void *pointer_name;

It turns out that to declare this type of pointer, the void keyword is used as the data type. We know that int *p means that p points to an integer. Similarly, double *p means that p points to a double. Likewise, void *p means that p is a pointer that can point to an integer, a float, a double, or a variable of any other type, all without casting. If the pointer were not of void type, it could point to variables of other types only through casting.

Note that void cannot be used as the data type of an ordinary variable, since such a variable could hold no data (and the main purpose of a variable is to hold data). If the return type of a function is void, it returns no value. That is, the void data type is normally used where no value is needed. With pointers, the situation is exactly the opposite: where data of any type may be needed, the void pointer is used.

Here is a small example using a void pointer:

int x = 10;
double y = 3.12;
void *ptr;
ptr = &x;

Here ptr is a pointer of void type, and it is assigned the address of the integer variable x, which means ptr points to x. Similarly, after

ptr = &y;

ptr holds the address of the double variable y, so ptr points to y.

A pointer of another type can also be assigned to a void pointer. For example:

int x = 20;
int *ip;
void *vp;
ip = &x;
vp = ip;

Here x is declared as an integer variable initialized to 20. Then two pointers are declared, ip of integer type and vp of void type, and ip is made to point to x. Since vp is a void pointer, it can be assigned any pointer value. So at the end of the code above, vp points to the same address as ip, i.e. to x, and the compiler shows no error. If any other pointer type were used in place of vp, a cast would be needed; with a void pointer, nothing extra is required.

In this way, a pointer of any type can be assigned to a void pointer without casting. But to work with the data at the address a void pointer holds, the void pointer must be cast to the pointed-to data type, as in:

*(pointed_data_type *)void_ptr;

int x = 10, y;
void *ptr;
ptr = &x;
y = *(int *)ptr;

Here ptr is assigned the address of x.
Then ptr is cast to int * so that integer data can be read through it. To work with the data at the address a void pointer holds, the void pointer has to be cast to the appropriate pointer type. A significant point is that the data will be interpreted according to whatever type the void pointer is cast to, and the output will follow that type. The void pointer is also called a generic pointer. A void pointer cannot be used as an operand of increment, decrement, or any other arithmetic operation until it is cast to some other type.

The const keyword can be used in several ways while declaring a pointer. But first, let's see what happens when this keyword is used while declaring a non-pointer variable:

const int i = 10;

Here the const keyword informs the compiler that i is a constant variable. Changing the data of this variable anywhere within the same scope will produce an error when the program is compiled. That is, the value of a constant variable cannot be changed. Thus, if a variable has to remain unchanged, it must be declared as a constant.

Similarly, the const keyword is used to make a pointer constant in a program. It can be used in either of two positions:

const datatype *pointerName = value;
datatype * const pointerName = value;

That is, the const keyword can sit either before the data type or after it. But these two forms do not mean the same thing. Let's see what happens in each case.

If the keyword is placed before the data type, then the pointed-to value is constant. This means that the value of the pointer itself can be changed, but the value being pointed to cannot. In other words, the pointed-to data can be read through the pointer, but cannot be changed through it. This is like a read-only file in Windows: the file can be read, but not changed. For example:

int i = 10, j;
const int *ptr;
ptr = &i;
j = *ptr;
*ptr = 20;

Here the data of i can be read through ptr and assigned to j, but the statement *ptr = 20; cannot set i to 20. Because of the way the pointer was declared, the compiler treats the pointed-to value as unchangeable, so it reports an error on the last line, where the code tries to change the pointed-to data through *ptr. In short, const int *ptr; means that the pointer can be assigned the address of an integer variable, and the data of that variable can only be read through the pointer, not changed.

But if the const keyword is used after the data type, as in:

int * const ptr;

then the pointer itself remains constant. That means the value of the pointer (the address it holds) cannot be changed, but the data it points to can be. A small program as an example:

int i = 10, j;
int * const ptr = &i;
j = *ptr;
*ptr = 20;
ptr = &j;

The pointer used here is constant, which means it cannot be assigned the address of any other variable; the address stored in the pointer remains unchanged, so the last line is an error. But the pointed-to value can be read through the pointer (the third line) and changed if needed (the fourth line).

Finally, int * const ptr means the pointer variable itself is constant, so this pointer cannot be made to point at anything else. And const int *ptr means the pointed-to data is constant, so that data cannot be changed through the pointer, although the pointer may be made to point at something else. The const here applies not to the pointer's own value but to the pointed-to data. But this is not the end.
The const keyword can also be used on both sides at once if needed, as in:

const int * const ptr = &i;

In this case both the value of the pointer and the value of the pointed-to data remain constant: the pointer cannot be made to point at anything else, and the value it points to cannot be changed through it.
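Pulling the void-pointer and const-pointer rules together, here is a small, self-contained C program (variable names and values are illustrative, not taken from the text above); the commented-out statements are exactly the ones the compiler would reject:

#include <stdio.h>

int main(void) {
    int x = 10, z = 7;
    double y = 3.12;

    /* a void pointer can hold the address of any object type */
    void *vp = &x;
    printf("x via void*: %d\n", *(int *)vp);    /* cast back before use */
    vp = &y;
    printf("y via void*: %f\n", *(double *)vp);

    /* const int *: the pointed-to data is read-only through this pointer */
    const int *pc = &x;
    printf("x via const int*: %d\n", *pc);
    /* *pc = 20;      error: cannot modify the data through pc */
    pc = &z;          /* but the pointer itself may be reseated */
    printf("z via const int*: %d\n", *pc);

    /* int * const: the pointer itself is fixed, the data is not */
    int * const cp = &x;
    *cp = 20;         /* fine: modifies x through the pointer */
    /* cp = &z;       error: cp cannot point anywhere else */
    printf("x after *cp = 20: %d\n", x);
    return 0;
}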
For a complete lesson on sine, cosine, and tangent, or SOHCAHTOA, go to http://www.MathHelp.com - 1000+ online math lessons featuring a personal math teacher inside every lesson! In this lesson, students learn that the sine of an angle of a right triangle is equal to the length of the side opposite the angle over the length of the hypotenuse (SOH), the cosine of an angle of a right triangle is equal to the length of the side adjacent to the angle over the length of the hypotenuse (CAH), and the tangent of an angle of a right triangle is equal to the length of the side opposite the angle over the length of the side adjacent to the angle (TOA). Students are then asked to find the values of the sine, cosine, and tangent of given angles in given right triangles.
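In symbols, for an acute angle θ of a right triangle, the three ratios described above are:

\sin\theta = \frac{\text{opposite}}{\text{hypotenuse}}, \qquad
\cos\theta = \frac{\text{adjacent}}{\text{hypotenuse}}, \qquad
\tan\theta = \frac{\text{opposite}}{\text{adjacent}}

For example, in a 3-4-5 right triangle, the angle opposite the side of length 3 has sin θ = 3/5, cos θ = 4/5, and tan θ = 3/4.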
Vectors are physical quantities that consist of a magnitude as well as a direction (for example velocity, acceleration, and displacement), as opposed to scalars, which consist of magnitude only (for example speed, distance, or energy). While scalars can be added by adding their magnitudes (for example 5 kJ of work plus 6 kJ of work equals 11 kJ of work), vectors are slightly more complicated to add or subtract. This article details how to add or subtract vectors.

Vector Addition or Subtraction Steps

- 1. Suppose we have two vectors, A = <a1, b1, c1> and B = <a2, b2, c2>.
- 2. If we want to add vector A to vector B, then
  A + B = <a1+a2, b1+b2, c1+c2>
- 3. If we want to subtract vector B from vector A, then
  A − B = <a1−a2, b1−b2, c1−c2>

Method One: Head to Tail

- 1. Draw the first vector to scale, pointing in the correct direction.
- 2. Draw the second vector to the same scale, with its tail at the head of the first.
- 3. Draw the resultant vector from the tail of the first vector to the head of the last.
- 4. Find the magnitude and direction of the resultant:
  - If you were drawing the diagram to a scale, drawing all angles exactly, you may measure the length of the resultant vector using a ruler. Then measure the angle that the resultant makes with either a specified vector or the horizontal/vertical, etc.
  - If you were making a sketch, you will need to calculate the magnitude of the resultant using trigonometry. You may find the Sine Rule and the Cosine Rule helpful here. If you are adding more than two vectors together, it is helpful to first add two, and then use the resultant with the third vector, and so on.
- 5. Represent your resultant vector. For example, if the vectors represented velocities, then write "A velocity of x m s⁻¹ at y° to the horizontal/vertical/etc."

Method Two: Perpendicular Components

This method is usually used in the Cartesian plane, but can be used for other vectors too.

- 1. Split each vector into two perpendicular components. For example, split each vector into its horizontal and vertical components. It is common to split vectors into components along the x- and y-axes in the Cartesian plane. The unit vector along the x-axis is conventionally written as i, that along the y-axis as j.
  - If a component points to the left or downwards, it is given a negative sign (−).
- 2. Add all the magnitudes of the horizontal components (or those along the x-axis) together. Separately, add all the magnitudes of the vertical components (or those along the y-axis). If a component has a negative sign (−), its magnitude is subtracted, rather than added.
- 3. Calculate the magnitude of the resultant using the Pythagorean Theorem. The theorem may be stated: c² = a² + b², where c is the magnitude of the resultant vector, a is the magnitude of the sum of the components along the x-axis, and b is the magnitude of the sum of the components along the y-axis.
- 4. Calculate the angle that the resultant makes with the horizontal (or the x-axis). Use the formula θ = tan⁻¹(b/a), where θ is the angle that the resultant makes with the x-axis or the horizontal.
- 5. Represent your resultant vector.
  - For example, if the vectors represented forces, then write "A force of x N at y° to the horizontal/x-axis/etc."

Method Three: Vector Subtraction

- 1. Subtract by adding a negative. Subtracting a vector from another can be seen as adding its "negative".
- 2. Form the negative of the vector being subtracted by reversing its direction while keeping its magnitude the same.
- 3. Follow either addition method above, using the negative. Use either of the two addition methods described above to add the "negative" of the vector to be subtracted and the vector it had to be subtracted from.

Tips:

- Vectors represented in the form xi + yj + zk can be added or subtracted by simply adding or subtracting the coefficients of the three unit vectors. The answer will also be in i, j, k form.
- Column vectors can be added or subtracted by simply adding or subtracting the values in each row.
- You can find the magnitude of a vector in three dimensions by using the formula a² = b² + c² + d², where a is the magnitude of the vector, and b, c, and d are the components in each direction.

Notes on i, j, k and column vectors

- Vectors in the same direction can be added or subtracted by adding or subtracting their magnitudes. If you add two vectors in opposite directions, their magnitudes are subtracted, not added.
- Vectors are not to be confused with magnitudes.

A worked example in code is given below.
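As an illustration of the component-wise rules above, a minimal C sketch (the vectors and the helper functions add and sub are examples, not part of the original article); the angle is measured from the x-axis as in Method Two:

#include <stdio.h>
#include <math.h>

typedef struct { double x, y; } Vec2;

/* component-wise addition and subtraction */
Vec2 add(Vec2 a, Vec2 b) { return (Vec2){ a.x + b.x, a.y + b.y }; }
Vec2 sub(Vec2 a, Vec2 b) { return (Vec2){ a.x - b.x, a.y - b.y }; }

int main(void) {
    const double PI = 3.141592653589793;
    Vec2 a = { 3.0, 4.0 };   /* example vectors */
    Vec2 b = { 1.0, -2.0 };

    Vec2 r = add(a, b);
    double mag = hypot(r.x, r.y);                    /* Pythagorean theorem */
    double theta = atan2(r.y, r.x) * 180.0 / PI;     /* angle to the x-axis */
    printf("a + b = <%.1f, %.1f>, |a+b| = %.3f at %.1f deg\n",
           r.x, r.y, mag, theta);

    Vec2 d = sub(a, b);   /* subtraction = adding the negative of b */
    printf("a - b = <%.1f, %.1f>\n", d.x, d.y);
    return 0;
}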
Dr. Claud Anderson delivered a very interesting and informative lecture concerning the history of slavery and how the lingering effects are still being felt today. There are some opinions expressed by Dr. Anderson that I disagree with and some facts that conflict with the research I have done (see comments), but the majority of his lecture is on point. The cartoon, "The Unequal Opportunity", provides a great visual representation of what Dr. Anderson discussed in his lecture.

From the colonial period, colonies and states passed laws that discriminated against blacks. Over the period 1687–1865, Virginia alone enacted more than 130 slave statutes, among which were seven major slave codes, some containing more than fifty provisions. "Black Codes" in the antebellum South contained more regulations of free blacks than of slaves. Chattel slaves basically lived under the complete control of their owners; free blacks presented a challenge to the boundaries of white-dominated society. After the Civil War, Black Codes were part of a larger pattern of Southern whites trying to suppress the new freedom of emancipated African American slaves, the freedmen.

A Sampling of Major Laws Enacted Against Blacks:

- 1619 Maryland Segregation Policy recommended that blacks be socially excluded
- 1642 Virginia Fugitive Law authorized branding of an "R" on the face of runaway slaves
- 1660 Connecticut Military Law barred blacks from military service
- 1664 Maryland Marriage Law enacted the first anti-interracial-marriage statutes
- 1667 British Plantation Act established codes of conduct for slaves and slaveholders
- 1686 Carolina Trade Law barred blacks from all trades
- 1691 Virginia Marriage Law prescribed banishment for any white woman marrying a black man
- 1705 Massachusetts Anti-Miscegenation Law criminalized interracial marriages
- 1705 New York Runaway Law prescribed execution for recaptured runaway slaves
- 1705 Virginia Public Office Law prohibited blacks from holding or assuming any public office
- 1710 Virginia Meritorious Manumission Law rewarded slaves with freedom for informing on other slaves
- 1712 South Carolina Fugitive Slave Act criminalized runaway slaves to protect owners' investment
- 1715 North Carolina law criminalized marriages between blacks and whites
- 1721 Delaware Marriage Law prohibited marriage between black men and white women
- 1722 Pennsylvania Morality Law condemned blacks for sexual acts with whites
- 1722 Pennsylvania Anti-Miscegenation Law criminalized interracial marriages
- 1723 Virginia Anti-Assembly Law impeded blacks from meeting or having a sense of community
- 1723 Virginia Weapons Law prohibited African-Americans from keeping weapons
- 1740 South Carolina Consolidated Slave Act prohibited slaves from raising or owning farm animals
- 1775 Virginia Runaway Law allowed sale or execution of slaves attempting to flee
- 1775 North Carolina Manumission Law prohibited freeing slaves except for meritorious service
- 1784 Connecticut Military Law prohibited blacks from serving in the militia
- 1790 First Naturalization Law: Congress declared the United States a white nation
- 1792 Federal Militia Law restricted enrollment in peacetime to whites only
- 1793 Fugitive Slave Law prevented slaves from running away; protected planters' invested capital
- 1783 Virginia Migration Law prevented free blacks from entering the state
- 1800 Maryland Agricultural Law prohibited blacks from raising and selling agricultural products
- 1804 Ohio Anti-Mobility Law ("Black Laws") restricted African-Americans' movements
- 1804 Ohio Registration Law required blacks to register and annually post a bond
- 1805 Maryland License Law prohibited blacks from selling tobacco or corn without a license
- 1806 Louisiana Migration Law prohibited immigration of free black males over 15 years old
- 1807 Maryland Residence Law limited residence of entering free blacks to two weeks
- 1809 Congressional Mail Law excluded blacks from carrying U.S. mail
- 1810 Maryland Voting Law restricted voting rights to whites only
- 1811 Delaware Migration Law prohibited migration of blacks and levied a $10 per week fine
- 1811 Kentucky Conspiracy Law made conspiracy among slaves a capital offense
- 1813 Virginia Poll Tax exacted a $1.50 tax on blacks, who were forbidden to vote
- 1814 Louisiana Migration Law prohibited free blacks from entering the state
- 1815 Virginia Poll Tax required free blacks to pay a tax so whites could vote
- 1816 Louisiana Jury Law provided that no black person could testify against a white person
- 1818 Connecticut Voting Law disenfranchised black voters
- 1819 Missouri Literacy Law prohibited assembling or teaching slaves to read or write
- 1820 South Carolina Migration Law prohibited free blacks from entering the state
- 1821 District of Columbia Registration Law required blacks to register annually and post bond
- 1826 North Carolina Migration Law prohibited entry of free blacks; violators fined $500.00
- 1827 Maryland Occupation Acts prohibited blacks from driving or owning hacks, carts, and drays
- 1827 Florida Voting Law restricted voting to whites
- 1829 Illinois Marriage Law prohibited marriages between blacks and whites
- 1829 Georgia Literacy Law punished teaching a black person to read by fine and imprisonment
- 1830 Louisiana Expulsion Law required all free blacks to leave the state within 60 days
- 1830 Mississippi Employment Law prohibited black employment in printing and entertainment
- 1830 Kentucky Property Tax Law taxed blacks and prohibited them from voting or attending school
- 1831 North Carolina License Law required all black traders and peddlers to be licensed
- 1831 South Carolina Licensing Prohibition denied free blacks any kind of business license
- 1831 Indiana Mobility Law required blacks to register in order to work and to post bond
- 1831 Mississippi Preaching Law prohibited free blacks from preaching except with permission
- 1832 Alabama and Virginia Literacy Laws fined and flogged whites for teaching blacks to read or write
- 1833 Georgia Employment Law prohibited blacks from working in reading or writing jobs
- 1833 Georgia Literacy Law provided fines and whippings for teaching blacks
- 1833 Kentucky Licensing Prohibition: no free person of color could obtain a license
- 1835 Missouri Registration Law required the registration and bonding of all free African-Americans
- 1835 Georgia Employment Law prohibited employing blacks in drug stores
- 1836 District of Columbia Business License Law prohibited blacks from profit-making activities
- 1837 South Carolina Curfew Law required blacks to be off the streets by a certain hour
- 1838 Virginia School Law prevented African-Americans who had gone North to school from returning
- 1838 North Carolina Marriage Law declared void all interracial marriages to the 3rd generation
- 1841 South Carolina Observing Law prohibited blacks and whites from looking out the same windows
- 1842 Maryland Information Law made it a felony for blacks to request or receive abolition newspapers
- 1844 Maryland Color Tax placed a tax on all employed black artisans
- 1844 South Carolina Amusement Law prohibited blacks from playing games with whites
- 1844 Maryland Occupation Act excluded blacks from the carpentry trade
- 1845 Georgia Contracting Law prohibited contracts with black mechanics
- 1846 Kentucky Incitement Law provided imprisonment for inciting blacks to rebel
- 1847 Missouri Literacy Law prohibited teaching blacks to read or write
- 1848 Virginia Incitement Law provided the death penalty for advising blacks to rebel
- 1850 Fugitive Slave Law enacted with stronger enforcement provisions
- 1852 Georgia Tax Law imposed an annual $5.00 per capita tax on all free blacks
- 1853 Virginia Poll Tax Law levied a tax on all free blacks
- 1856 Virginia Drug Law prohibited selling poisonous drugs to blacks
- 1857 Dred Scott Decision: the U.S. Supreme Court dehumanized and disenfranchised all blacks, whether free or slave
- 1858 Maryland Recreation Law prohibited free blacks and slaves from boating on the Potomac
- 1868 Southern Black Codes deprived blacks of the right to vote and hold public office
- 1883 Civil Rights Cases: the U.S. Supreme Court weakened the Civil Rights Act of 1875 by holding key provisions unconstitutional
- 1898 The Grandfather Clause deprived blacks of the right to vote in Louisiana

In addition to the Black Codes, and immediately following the end of the Civil War, a new form of slavery was created, called convict leasing. The 13th Amendment abolished slavery except as punishment for a crime. Convict leasing was a system of penal labor practiced in the Southern United States, beginning with the emancipation of slaves at the end of the Civil War in 1865, peaking around 1880, and officially ending in the last state, Alabama, in 1928. It persisted in various forms until it was abolished in 1942 by President Franklin D. Roosevelt during World War II, several months after the attack on Pearl Harbor brought the U.S. into the conflict.

At the same time, and persisting for decades beyond convict leasing, peonage (debt slavery) was a problem for poor blacks and whites in the South, who became entrapped by systems of low pay and indebtedness, purportedly to company or plantation stores. Peonage is a form of involuntary servitude that was outlawed by an 1867 United States federal statute. The statute was not enforced for thirty-one years, even though peonage in its various guises was defined in half a dozen Supreme Court cases and scores of federal district cases. The federal government began enforcing the peonage statute in 1898.

Between 1890 and 1910, ten of the eleven former Confederate states, starting with Mississippi, passed new constitutions or amendments that effectively disenfranchised most blacks and tens of thousands of poor whites through a combination of poll taxes, literacy and comprehension tests, and residency and record-keeping requirements. Grandfather clauses temporarily permitted some illiterate whites to vote but gave no relief to most blacks. Jim Crow laws were state and local laws enforcing racial segregation in the Southern United States. Enacted after the Reconstruction period, these laws continued in force until 1965.
They mandated statutory racial segregation in all public facilities in the states of the former Confederate States of America, starting in 1890 with a "separate but equal" status for African Americans. Conditions for African Americans were consistently inferior and underfunded compared to those available to white Americans. This body of law institutionalized a number of economic, educational, and social disadvantages. Segregation legislated and enforced by statute was mainly a feature of the Southern United States, while Northern segregation generally took the form of institutionalized patterns of discrimination rather than explicit racial segregation laws. Housing segregation enforced through private covenants and bank lending practices, job discrimination, and whites-only business practices, including discriminatory employment and union practices, were collectively part of systematic discrimination in the North.

Institutional racism, a term coined in the late 1960s by activists Stokely Carmichael and Charles V. Hamilton, is any system of inequality based on race. It can occur in institutions such as public government bodies, private business corporations (such as media outlets), and universities (public and private). Institutional racism is the differential access to the goods, services, and opportunities of society. When differential access becomes integral to institutions, it becomes common practice and is difficult to rectify. Eventually, this racism pervades public bodies, private corporations, and public and private universities, and is reinforced by the actions of conformists and newcomers. Another difficulty in reducing institutionalized racism is that there is no sole, identifiable perpetrator: when racism is built into the institution, it emerges as the collective action of the population.

Below is an interesting video illustration titled "Slavery Isn't Over They Just Changed What They Called It".

Anthony Johnson's case, which was decided in 1655, did not create the institution of permanent slavery in Virginia. Slaves were already being sold in Virginia in 1620, when a Dutch man-of-war arrived with 20 Negro slaves for sale. Source: "An Inquiry into The Law of Negro Slavery in the United States of America", by Thomas R.R. Cobb, published 1858 (pg 148). Johnson was captured in his native Angola by an enemy tribe and sold to Arab slave traders. He was eventually sold as an indentured servant to a merchant working for the Virginia Company. In 1623, "Mary, a Negro" arrived from England aboard the ship Margaret. She was brought to work on the same plantation as Antonio, where she was the only woman. Antonio and Mary married and lived together for over forty years. Sometime after 1635, Antonio and Mary gained their freedom from indenture. Antonio changed his name to Anthony Johnson. Johnson took ownership of a large plot of farmland after he paid off his indenture contract through his labor. On 24 July 1651, he acquired 250 acres of land under the headright system by buying the contracts of five indentured servants, one of whom was his son Richard Johnson. In 1651 he owned 250 acres and the services of five indentured servants, four white and one black. In 1653, John Casor, a black indentured servant whose contract Johnson appeared to have bought in the early 1640s, approached Captain Goldsmith, claiming his indenture had expired seven years earlier and that he was being held illegally by Johnson. A neighbor, Robert Parker, intervened and persuaded Johnson to free Casor.
Parker offered Casor work, and he signed a term of indenture to the planter. Johnson sued Parker in the Northampton Court in 1654 for the return of Casor. The court initially found in favor of Parker, but Johnson appealed. In 1655, the court reversed its ruling. Finding that Anthony Johnson still "owned" John Casor, the court ordered that he be returned, with the court dues paid by Robert Parker. This was the first instance of a judicial determination in the Thirteen Colonies holding that a person who had committed no crime could be held in servitude for life.

Although Casor was the first person declared a slave in a civil case, there were both black and white indentured servants sentenced to lifetime servitude before him. Many historians describe the indentured servant John Punch as the first documented slave in America, as he was sentenced to life in servitude as punishment for escaping in 1640. The Punch case was significant because it established the disparity between his sentence as a Negro and that of the two European indentured servants who escaped with him (one described as Dutch and one as a Scotchman). It is the first documented case in Virginia of an African sentenced to lifetime servitude and is considered one of the first legal cases to make a racial distinction between black and white indentured servants.

Black Slave Owners

The law of some slave states prohibited slave owners from freeing their slaves. In 1830, the year most carefully studied by Carter G. Woodson, about 13.7 percent (319,599) of the black population was free. Of these, 3,776 free Negroes owned 12,907 slaves, out of a total of 2,009,043 slaves owned in the entire United States. In his essay " 'The Known World' of Free Black Slaveholders," Thomas J. Pressly, using Woodson's statistics, calculated that 54 (about 1 percent) of these black slave owners in 1830 owned between 20 and 84 slaves; 172 (about 4 percent) owned between 10 and 19 slaves; and 3,550 (about 94 percent) each owned between 1 and 9 slaves. Crucially, 42 percent owned just one slave. Pressly also shows that free black slave owners, as a percentage of free black heads of families, were quite numerous in several states: 43 percent in South Carolina, 40 percent in Louisiana, 26 percent in Mississippi, 25 percent in Alabama, and 20 percent in Georgia.

So why did these free black people own these slaves? It is reasonable to assume that the 42 percent of free black slave owners who owned just one slave probably owned a family member in order to protect that person, as did many of the other black slave owners who owned only slightly larger numbers of slaves. In many instances, the husband purchased the wife or vice versa. Slaves of Negroes were in some cases the children of a free father who had purchased his wife; if he did not thereafter emancipate the mother, as many such husbands could not or failed to do, his own children were born his slaves, and thus, officially, his number of slaves increased. Moreover, Woodson explains, "Benevolent Negroes often purchased slaves to make their lot easier by granting them their freedom for a nominal sum, or by permitting them to work it out on liberal terms." In other words, these black slave-owners, the clear majority, cleverly used the system of slavery to protect their loved ones.

See: The Willie Lynch Letter – How to Make A Slave, under the History tab, which includes the full text of the letter along with commentary.
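As a quick sanity check on the figures quoted above, the short sketch below recomputes the percentages from Woodson's and Pressly's counts. The variable names are mine; the numbers are exactly those quoted in the text.

```python
# Recomputing the shares quoted above from Woodson's 1830 counts.
black_slaveholders = 3_776          # free black slave owners (Woodson)
slaves_owned_by_blacks = 12_907
total_slaves_us = 2_009_043

# Black-owned slaves as a share of all slaves in the United States:
print(f"{slaves_owned_by_blacks / total_slaves_us:.2%}")  # about 0.64%

# Pressly's size distribution of black-owned holdings:
for label, owners in [("20-84 slaves", 54),
                      ("10-19 slaves", 172),
                      ("1-9 slaves", 3_550)]:
    print(f"{label}: {owners / black_slaveholders:.1%} of black slave owners")
# Prints roughly 1.4%, 4.6%, and 94.0%, matching the rounded figures above.
```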
Slavery in historical Africa was practiced in many different forms, and some of these do not clearly fit the definitions of slavery elsewhere in the world. Debt slavery, the enslavement of war captives, military slavery, and criminal slavery were all practiced in various parts of Africa. In most African societies where slavery was prevalent, the enslaved people were not treated as chattel slaves; they were given certain rights, in a system similar to indentured servitude elsewhere in the world, and were often not enslaved for life.

Africa is not a country. Then as now, Africa comprised many different villages, kingdoms, and countries; Africans were therefore not selling "their own", they were selling their enemies, just as the Greeks and Romans once did. Chattel slavery had been legal and widespread throughout North Africa when the region was controlled by the Roman Empire. When the Arab slave trade and the Atlantic slave trade began, many of the local slave systems changed and began supplying captives for slave markets outside of Africa. Most African countries did not sell slaves, and some even fought against the trade. To increase the supply of slaves, slave traders often incited conflicts among villages and tribes. Because Europeans controlled the supply of guns, there was little Africans could do to stop it. Slave traders would befriend a village or tribal chief, give them guns, and then create a conflict with neighboring tribes. The slave trader would then visit the neighboring village and offer guns for sale so the villagers could protect themselves; however, the only payment the traders would accept was slaves. The slave traders would then go back to the first tribe and offer more guns, except this time payment in slaves was required.

Other Related Material
- Goodbye Uncle Tom – a graphic 1971 movie about slavery, which was subsequently banned in the United States
- Slavery by Another Name – how another forced-labor system replaced slavery between the Civil War and World War II
- Slavery Isn't Over, They Just Changed What They Called It – discussions about mental and economic slavery
- Post Traumatic Slave Syndrome – a mental disorder caused by the legacy of slavery, Jim Crow, and discrimination
Behaviorism (or behaviourism) is a systematic approach to the understanding of human and animal behavior. It assumes that all behaviors are either reflexes produced by a response to certain stimuli in the environment, or a consequence of that individual's history, including especially reinforcement and punishment, together with the individual's current motivational state and controlling stimuli. Thus, although behaviorists generally accept the important role of inheritance in determining behavior, they focus primarily on environmental factors. Behaviorism combines elements of philosophy, methodology, and psychological theory. It emerged in the late nineteenth century as a reaction to depth psychology and other traditional forms of psychology, which often had difficulty making predictions that could be tested experimentally.

The earliest derivatives of behaviorism can be traced back to the late 1800s, when Edward Thorndike pioneered the law of effect (a process that involved strengthening behavior through the use of reinforcement). During the first half of the twentieth century, John B. Watson devised methodological behaviorism, which rejected introspective methods and sought to understand behavior by measuring only observable behaviors and events. It was not until the 1930s that B. F. Skinner suggested that private events, including thoughts and feelings, should be subjected to the same controlling variables as observable behavior; this became the basis for his philosophy, called radical behaviorism. While Watson and Ivan Pavlov investigated the stimulus-response procedures of classical conditioning, Skinner assessed the controlling nature of consequences and also of the antecedents (or discriminative stimuli) that signal the behavior; the technique became known as operant conditioning.

The application of radical behaviorism, known as applied behavior analysis, is used in a variety of settings, from organizational behavior management to the treatment of mental disorders such as autism and substance abuse. In addition, while behaviorism and cognitive schools of psychological thought may not agree theoretically, they have complemented each other in cognitive behavior therapies, which have demonstrated utility in treating certain pathologies, including simple phobias, PTSD, and mood disorders.

There is no universally agreed-upon classification of the branches of behaviorism. B. F. Skinner proposed radical behaviorism as the conceptual underpinning of the experimental analysis of behavior. This view differs from other approaches to behavioral research in various ways but, most notably here, it contrasts with methodological behaviorism in accepting feelings, states of mind, and introspection as behaviors subject to scientific investigation. Like methodological behaviorism, it rejects the reflex as a model of all behavior, and it defends the science of behavior as complementary to but independent of physiology. Radical behaviorism overlaps considerably with other Western philosophical positions, such as American pragmatism. This essentially philosophical position gained strength from the success of Skinner's early experimental work with rats and pigeons, summarized in his books The Behavior of Organisms and Schedules of Reinforcement. Of particular importance was his concept of the operant response, of which the canonical example was the rat's lever-press.
In contrast with the idea of a physiological or reflex response, an operant is a class of structurally distinct but functionally equivalent responses. For example, while a rat might press a lever with its left paw, its right paw, or its tail, all of these responses operate on the world in the same way and have a common consequence. Operants are often thought of as species of responses, where the individuals differ but the class coheres in its function: shared consequences define an operant as reproductive success defines a species. This is a clear distinction between Skinner's theory and S-R theory.

Skinner's empirical work expanded on earlier research on trial-and-error learning by researchers such as Thorndike and Guthrie, with both conceptual reformulations (Thorndike's notion of a stimulus-response "association" or "connection" was abandoned) and methodological ones (the use of the "free operant", so called because the animal was now permitted to respond at its own rate rather than in a series of trials determined by the experimenter). With this method, Skinner carried out substantial experimental work on the effects of different schedules and rates of reinforcement on the rates of operant responses made by rats and pigeons; a minimal simulation of such schedules is sketched below. He achieved remarkable success in training animals to perform unexpected responses, to emit large numbers of responses, and to demonstrate many empirical regularities at the purely behavioral level. This lent some credibility to his conceptual analysis. It is largely his conceptual analysis that made his work much more rigorous than his peers', a point which can be seen clearly in his seminal work "Are Theories of Learning Necessary?", in which he criticizes what he viewed to be theoretical weaknesses then common in the study of psychology. An important descendant of the experimental analysis of behavior is the Society for Quantitative Analysis of Behavior.

As Skinner turned from experimental work to concentrate on the philosophical underpinnings of a science of behavior, his attention turned to human language with his 1957 book Verbal Behavior and other language-related publications; Verbal Behavior laid out a vocabulary and theory for the functional analysis of verbal behavior, and was strongly criticized in a review by Noam Chomsky. Skinner did not respond in detail, but claimed that Chomsky failed to understand his ideas, and the disagreements between the two and the theories involved have been further discussed. Innateness theory is opposed to behaviorist theory, which claims that language is a set of habits that can be acquired by means of conditioning. According to some, the process the behaviorists describe is too slow and gradual to explain a phenomenon as complicated as language learning. What was important for a behaviorist's analysis of human behavior was not language acquisition so much as the interaction between language and overt behavior. In an essay republished in his 1969 book Contingencies of Reinforcement, Skinner took the view that humans could construct linguistic stimuli that would then acquire control over their behavior in the same way that external stimuli could. The possibility of such "instructional control" over behavior meant that contingencies of reinforcement would not always produce the same effects on human behavior as they reliably do in other animals.
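To make the schedules of reinforcement mentioned above concrete, here is a minimal, illustrative simulation of two classic schedules, fixed-ratio and variable-ratio. This is a sketch of the general idea only; the function names and parameter values are invented for the example, not Skinner's procedures or data.

```python
import random

def fixed_ratio(n):
    """FR-n schedule: reinforce every n-th response."""
    count = 0
    def respond():
        nonlocal count
        count += 1
        if count == n:
            count = 0
            return True      # reinforcer delivered
        return False
    return respond

def variable_ratio(mean_n):
    """VR-mean_n schedule: reinforce after a randomly varying number of
    responses whose average is mean_n."""
    count, target = 0, random.randint(1, 2 * mean_n - 1)
    def respond():
        nonlocal count, target
        count += 1
        if count >= target:
            count, target = 0, random.randint(1, 2 * mean_n - 1)
            return True
        return False
    return respond

random.seed(0)
fr5, vr5 = fixed_ratio(5), variable_ratio(5)
print(sum(fr5() for _ in range(1000)))  # exactly 200 reinforcers
print(sum(vr5() for _ in range(1000)))  # roughly 200, at unpredictable points
```

Under the variable-ratio schedule the animal cannot predict which response will pay off, the property usually invoked to explain the high, steady response rates such schedules produce.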
Given this possibility of instructional control, the focus of a radical behaviorist analysis of human behavior shifted to an attempt to understand the interaction between instructional control and contingency control, and also to understand the behavioral processes that determine what instructions are constructed and what control they acquire over behavior. Recently, a new line of behavioral research on language was started under the name of relational frame theory.

Behaviourism focuses on one particular view of learning: a change in external behaviour achieved through the use of reinforcement and repetition (rote learning) to shape behavior. Skinner found that behaviors could be shaped through the use of reinforcement: desired behavior is rewarded, while undesired behavior is punished. Incorporating behaviorism into the classroom allowed educators to assist their students in excelling both academically and personally. In the field of language learning, this type of teaching was called the audio-lingual method, characterised by whole-class choral chanting of key phrases and dialogues, with immediate correction. Within the behaviourist view of learning, the "teacher" is the dominant person in the classroom and takes complete control; evaluation of learning comes from the teacher, who decides what is right or wrong. The learner does not have any opportunity for evaluation or reflection within the learning process; they are simply told what is right or wrong. The conceptualization of learning using this approach could be considered "superficial", as the focus is on external changes in behaviour; it is not concerned with the internal processes of learning that lead to behaviour change, and it has no place for the emotions involved in the process. Whether this approach is right or wrong, it cannot be denied that an aspect of memorization is regarded by key scholars as critical in any language learning.

Operant conditioning was developed by B. F. Skinner in 1937 and deals with the modification of "voluntary behaviour", or operant behaviour. Operant behavior operates on the environment and is maintained by its consequences. Reinforcement and punishment, the core tools of operant conditioning, are either positive (delivered following a response) or negative (withdrawn following a response). Skinner created the Skinner Box, or operant conditioning chamber, to test the effects of operant conditioning principles on rats. From this study, he discovered that the rats learned very effectively if they were rewarded frequently. Skinner also found that he could shape the rats' behavior through the use of rewards, which could, in turn, be applied to human learning as well.

Although operant conditioning plays the largest role in discussions of behavioral mechanisms, classical conditioning (or Pavlovian conditioning or respondent conditioning) is also an important behavior-analytic process that need not refer to mental or other internal processes. Pavlov's experiments with dogs provide the most familiar example of the classical conditioning procedure. In simple conditioning, the dog was presented with a stimulus such as a light or a sound, and then food was placed in the dog's mouth. After a few repetitions of this sequence, the light or sound by itself caused the dog to salivate. Although Pavlov proposed some tentative physiological processes that might be involved in classical conditioning, these have not been confirmed.
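Acquisition curves of the kind Pavlov observed are often formalized with the Rescorla-Wagner rule, a later quantitative model of classical conditioning that the text above does not itself mention. The sketch below uses illustrative parameter values to show the associative strength V of the light or sound growing toward an asymptote over repeated pairings with food.

```python
# A minimal, illustrative sketch of classical conditioning acquisition
# using the Rescorla-Wagner rule: delta_V = alpha * beta * (lam - V).
# The parameter values are invented for illustration, not fitted to data.

alpha, beta = 0.3, 0.5   # salience of the conditioned/unconditioned stimuli
lam = 1.0                # maximum associative strength the food supports
V = 0.0                  # current associative strength of light/sound

for trial in range(1, 11):
    V += alpha * beta * (lam - V)
    print(f"trial {trial:2d}: associative strength V = {V:.3f}")

# V rises steeply on early trials and levels off, mirroring the observation
# that after a few pairings the light or sound alone elicits salivation.
```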
The idea of classical conditioning helped behaviorist John Watson discover the key mechanism behind how humans acquire the behaviors that they do: finding a natural reflex that produces the response being considered. Watson's "Behaviourist Manifesto" has three aspects that deserve special recognition: the first is that psychology should be purely objective, with any interpretation of conscious experience removed, leading to psychology as the "science of behaviour"; the second is that the goals of psychology should be to predict and control behaviour (as opposed to describing and explaining conscious mental states); the third is that there is no notable distinction between human and non-human behaviour. Following Darwin's theory of evolution, this would simply mean that human behaviour is just a more complex version of the behaviour displayed by other species.

Skinner's view of behavior is most often characterized as a "molecular" view of behavior; that is, behavior can be decomposed into atomistic parts or molecules. This view is inconsistent with Skinner's complete description of behavior as delineated in other works, including his 1981 article "Selection by Consequences". Skinner proposed that a complete account of behavior requires understanding of selection history at three levels: biology (the natural selection or phylogeny of the animal); behavior (the reinforcement history or ontogeny of the behavioral repertoire of the animal); and, for some species, culture (the cultural practices of the social group to which the animal belongs). This whole organism then interacts with its environment. Molecular behaviorists use notions from melioration theory, negative power function discounting, or additive versions of negative power function discounting.

Molar behaviorists, such as Howard Rachlin, Richard Herrnstein, and William Baum, argue that behavior cannot be understood by focusing on events in the moment. That is, they argue that behavior is best understood as the ultimate product of an organism's history, and that molecular behaviorists are committing a fallacy by inventing fictitious proximal causes for behavior. Molar behaviorists argue that standard molecular constructs, such as "associative strength", are better replaced by molar variables such as rate of reinforcement. Thus, a molar behaviorist would describe "loving someone" as a pattern of loving behavior over time; there is no isolated, proximal cause of loving behavior, only a history of behaviors (of which the current behavior might be an example) that can be summarized as "love".

Behaviorism is a psychological movement that can be contrasted with philosophy of mind. The basic premise of radical behaviorism is that the study of behavior should be a natural science, such as chemistry or physics, without any reference to hypothetical inner states of organisms as causes for their behavior. Less radical varieties are unconcerned with philosophical positions on internal, mental, and subjective experience. Behaviorism takes a functional view of behavior. According to Edmund Fantino and colleagues: "Behavior analysis has much to offer the study of phenomena normally dominated by cognitive and social psychologists. We hope that successful application of behavioral theory and methodology will not only shed light on central problems in judgment and choice but will also generate greater appreciation of the behavioral approach."
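A concrete example of a molar variable at work is Herrnstein's matching law (Herrnstein is named above as a molar behaviorist), which relates the relative rate of responding on two alternatives to their relative rates of reinforcement, with no appeal to momentary causes. The sketch below uses made-up reinforcement rates purely for illustration.

```python
# Herrnstein's matching law: B1 / (B1 + B2) = r1 / (r1 + r2),
# i.e. the share of responses on an alternative matches the share of
# reinforcement it delivers. The rates below are invented for the example.

def matching_share(r1: float, r2: float) -> float:
    """Predicted share of behavior allocated to alternative 1, given
    reinforcement rates r1 and r2 (e.g., reinforcers per hour)."""
    return r1 / (r1 + r2)

# Two concurrent schedules paying 40 vs. 10 reinforcers per hour:
print(f"{matching_share(40, 10):.0%} of responses on alternative 1")  # 80%
```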
Behaviorist sentiments are not uncommon within philosophy of language and analytic philosophy. It is sometimes argued that Ludwig Wittgenstein defended a behaviorist position (e.g., the beetle-in-a-box argument), but while there are important relations between his thought and behaviorism, the claim that he was a behaviorist is quite controversial. The mathematician Alan Turing is also sometimes considered a behaviorist, but he himself did not make this identification. In logical and empirical positivism (as held, e.g., by Rudolf Carnap and Carl Hempel), the meaning of psychological statements is their verification conditions, which consist of performed overt behavior. W. V. Quine made use of a type of behaviorism, influenced by some of Skinner's ideas, in his own work on language. Gilbert Ryle defended a distinct strain of philosophical behaviorism, sketched in his book The Concept of Mind. Ryle's central claim was that instances of dualism frequently represented "category mistakes", and hence that they were really misunderstandings of the use of ordinary language. Daniel Dennett likewise acknowledges himself to be a type of behaviorist, though he offers extensive criticism of radical behaviorism and refutes Skinner's rejection of the value of intentional idioms and the possibility of free will. This is Dennett's main point in "Skinner Skinned":

Dennett argues that there is a crucial difference between explaining and explaining away... If our explanation of apparently rational behavior turns out to be extremely simple, we may want to say that the behavior was not really rational after all. But if the explanation is very complex and intricate, we may want to say not that the behavior is not rational, but that we now have a better understanding of what rationality consists in. (Compare: if we find out how a computer program solves problems in linear algebra, we don't say it's not really solving them, we just say we know how it does it. On the other hand, in cases like Weizenbaum's ELIZA program, the explanation of how the computer carries on a conversation is so simple that the right thing to say seems to be that the machine isn't really carrying on a conversation, it's just a trick.) – Curtis Brown, Philosophy of Mind, "Behaviorism: Skinner and Dennett"

The early term behavior modification has been obsolete since the 1990s, as it now refers to the brief revival of methodological behaviorism in the late 1950s and again from the late 1970s to the early 1980s. Applied behavior analysis, the term that replaced behavior modification, has emerged as a thriving field. The Association for Behavior Analysis International (ABAI) currently has 32 state and regional chapters within the United States. Approximately 30 additional chapters have also developed throughout Europe, Asia, South America, and the South Pacific. In addition to 34 annual conferences held by ABAI in the United States and Canada, ABAI held the 5th annual international conference in Norway in 2009. The independent development of behaviour analysis outside the US also continues. For example, the UK Society for Behaviour Analysis was founded in 2013 to further the advancement of the science and practice of behaviour analysis across the UK.
In terms of motivation, there remains strong interest in the variety of human motivational factors; indeed, one could argue that the entire career counselling and advisory industry has been at least partly predicated on analysing individual behaviours. Some go as far as to suggest that the current rapid change in organisational behaviour can be partly attributed to these and related theories.

The interests among behavior analysts today are wide-ranging, as a review of the 30 Special Interest Groups (SIGs) within ABAI indicates. Such interests include everything from developmental disabilities and autism to cultural psychology, clinical psychology, verbal behavior, and Organizational Behavior Management (OBM; behavior-analytic I-O psychology). OBM has developed a particularly strong following within behavior analysis, as evidenced by the formation of the OBM Network and the influential Journal of Organizational Behavior Management (JOBM), recently rated the third-highest-impact journal in applied psychology by ISI.

Applications of behavioral technology, also known as applied behavior analysis (ABA), have been particularly well established in the area of developmental disabilities since the 1960s. Treatment of individuals diagnosed with autism spectrum disorders has grown especially rapidly since the mid-1990s. This demand for services encouraged the formation of a professional credentialing program administered by the Behavior Analyst Certification Board, Inc. (BACB) and accredited by the National Commission for Certifying Agencies. As of early 2012, there were over 300 BACB-approved course sequences offered by about 200 colleges and universities worldwide preparing students for this credential, and approximately 11,000 BACB certificants, most working in the United States. The Association of Professional Behavior Analysts was formed in 2008 to meet the needs of these ABA professionals.

Modern behavior analysis has also witnessed a massive resurgence in research and applications related to language and cognition, with the development of relational frame theory (RFT; described as a "post-Skinnerian account of language and cognition"). RFT also forms the empirical basis for the highly successful and data-driven acceptance and commitment therapy (ACT). Researchers and practitioners in RFT/ACT have become sufficiently prominent that they have formed their own specialized, highly behaviorally oriented organization, the Association for Contextual Behavioral Science (ACBS), which has rapidly grown in its few years of existence to about 5,000 members worldwide.

Some of the current prominent behavior-analytic journals include the Journal of Applied Behavior Analysis (JABA), the Journal of the Experimental Analysis of Behavior (JEAB), the Journal of Organizational Behavior Management (JOBM), Behavior and Social Issues (BSI), and the Psychological Record. Currently, the US has 14 ABAI-accredited MA and PhD programs for comprehensive study in behavior analysis.

Cultural analysis has always been at the philosophical core of radical behaviorism, from the early days (as seen in Skinner's Walden Two, Science & Human Behavior, Beyond Freedom & Dignity, and About Behaviorism).
During the 1980s, behavior analysts, most notably Sigrid Glenn, had a productive interchange with the cultural anthropologist Marvin Harris (the most notable proponent of "cultural materialism") regarding interdisciplinary work. More recently, behavior analysts have produced a set of basic exploratory experiments in an effort toward this end. Behaviorism is also frequently used in game development, although this application is controversial.

With the fast growth of behavioral data and applications, behavior analysis is becoming ubiquitous. Understanding behavior from the informatics and computing perspective is increasingly critical for an in-depth understanding of what behaviors are formed, why, and how they interact, evolve, change, and affect business decisions. Behavior informatics and behavior computing explore behavior intelligence and behavioral insights from these perspectives.

In the second half of the 20th century, behaviorism was largely eclipsed as a result of the cognitive revolution. This shift was due to methodological behaviorism being heavily criticized for not examining mental processes, and it led to the development of the cognitive therapy movement. In the mid-20th century, three main influences arose that would inspire and shape cognitive psychology as a formal school of thought. In the early years of cognitive psychology, behaviorist critics held that the empiricism it pursued was incompatible with the concept of internal mental states. Cognitive neuroscience, however, continues to gather evidence of direct correlations between physiological brain activity and putative mental states, endorsing the basis for cognitive psychology.
White people is a racial classification specifier, used mostly and often exclusively for people of European descent, depending on context, nationality, and point of view. The term has at times been expanded to encompass persons of Middle Eastern and North African descent (for example, in the US Census definition), persons who are often considered non-white in other contexts. The usage of "white people" or a "white race" for a large group of mainly or exclusively European populations, defined by their light skin among other physical characteristics, and contrasting with "black people", Amerindians, and other "colored" people or "persons of color", originated in the 17th century. It was only during the 19th century that this vague category was transformed into a quasi-scientific system of race and skin color relations. The concept of a unified white race did not achieve universal acceptance in Europe when it first came into use in the 17th century, or in the centuries afterward. Nazi Germany regarded some European peoples, such as Slavs, as racially distinct from themselves. Prior to the modern age, no European peoples regarded themselves as "white"; rather, they defined their race, ancestry, or ethnicity in terms of their nationality. Moreover, there is no accepted standard for determining the geographic barrier between white and non-white people. Contemporary anthropologists and other scientists, while recognizing the reality of biological variation between different human populations, regard the concept of a unified, distinguishable "white race" as socially constructed. As a group with several different potential boundaries, it is an example of a fuzzy concept.

The concept of whiteness has particular resonance in racially diverse countries with large majority or minority populations of more or less mixed European ancestry: e.g., in the United States (white Americans), Canada (white Canadians), Australia (white Australians), New Zealand (white New Zealanders), the United Kingdom (white British), and South Africa (white South Africans). In much of the rest of Europe, the distinction between race and nationality is more blurred; when people are asked to describe their race or ancestry, they often describe it in terms of their nationality. Various social constructions of whiteness have been significant to national identity, public policy, religion, population statistics, racial segregation, affirmative action, white privilege, eugenics, racial marginalization, and racial quotas.

The term "white race" or "white people" entered the major European languages in the later 17th century, in the context of racialized slavery and unequal social status in the European colonies. Description of populations as "white" in reference to their skin color predates this notion and is occasionally found in Greco-Roman ethnography and other ancient or medieval sources, but these societies did not have any notion of a white, pan-European race. Scholarship on race distinguishes the modern concept from pre-modern descriptions, which focused on physical complexion rather than race.

Physical descriptions in antiquity

According to anthropologist Nina Jablonski: In ancient Egypt as a whole, people were not designated by color terms […] Egyptian inscriptions and literature only rarely, for instance, mention the dark skin color of the Kushites of Upper Nubia.
We know the Egyptians were not oblivious to skin color, however, because artists paid attention to it in their works of art, to the extent that the pigments of the time permitted. The Ancient Egyptian (New Kingdom) funerary text known as the Book of Gates distinguishes "four groups" in a procession: the Egyptians, the Levantine and Canaanite peoples or "Asiatics", the "Nubians", and the "fair-skinned Libyans". The Egyptians are depicted as considerably darker-skinned than the Levantines (persons from what is now Lebanon, Israel, Palestine, and Jordan) and Libyans, but considerably lighter than the Nubians (modern Sudan).

The assignment of positive and negative connotations to white and black dates to a very early period in a number of Indo-European languages, but these differences were not necessarily applied to skin colors. Religious conversion was sometimes described figuratively as a change in skin color. Similarly, the Rigveda uses krsna tvac, "black skin", as a metaphor for irreligiosity. Classicist James H. Dee states that "the Greeks do not describe themselves as 'white people', or as anything else, because they had no regular word in their color vocabulary for themselves." People's skin color did not carry useful meaning; what mattered was where they lived. Herodotus described the Scythian Budini as having deep blue eyes and bright red hair, and the Egyptians, quite like the Colchians, as melánchroes (μελάγχροες, "dark-skinned") and curly-haired. He also gives possibly the first reference to the common Greek name of the tribes living south of Egypt, otherwise known as Nubians: Aithíopes (Αἰθίοπες, "burned-faced"). Later, Xenophanes of Colophon described the Aethiopians as black and the Persian troops as white, compared to the sun-tanned skin of Greek troops.

Modern racial hierarchies

The term "white race" or "white people" entered the major European languages in the later 17th century, originating with the racialization of slavery at the time, in the context of the Atlantic slave trade and the enslavement of indigenous peoples in the Spanish Empire. It has repeatedly been ascribed to strains of blood, ancestry, and physical traits, and was eventually made into a subject of scientific research, culminating in scientific racism, which was later widely repudiated by the scientific community. According to historian Irene Silverblatt, "Race thinking […] made social categories into racial truths." Bruce David Baum, citing the work of Ruth Frankenberg, states, "the history of modern racist domination has been bound up with the history of how European peoples defined themselves (and sometimes some other peoples) as members of a superior 'white race'." Alastair Bonnett argues that "white identity", as it is presently conceived, is an American project, reflecting American interpretations of race and history. According to Gregory Jay, a professor of English at the University of Wisconsin–Milwaukee: Before the age of exploration, group differences were largely based on language, religion, and geography. […] the European had always reacted a bit hysterically to the differences of skin color and facial structure between themselves and the populations encountered in Africa, Asia, and the Americas (see, for example, Shakespeare's dramatization of racial conflict in Othello and The Tempest).
Beginning in the 1500s, Europeans began to develop what became known as "scientific racism," the attempt to construct a biological rather than cultural definition of race […] Whiteness, then, emerged as what we now call a "pan-ethnic" category, as a way of merging a variety of European ethnic populations into a single "race" […] – Gregory Jay, "Who Invented White People? A Talk on the Occasion of Martin Luther King, Jr. Day, 1998"

In the 16th and 17th centuries, "East Asian peoples were almost uniformly described as white, never as yellow." Michael Keevak's history Becoming Yellow finds that East Asians were redesignated as yellow-skinned because "yellow had become a racial designation," and that the replacement of white with yellow as a description came through scientific discourse.

A three-part racial schema in color terms was used in seventeenth-century Latin America under Spanish rule. Irene Silverblatt traces "race thinking" in South America to the social categories of colonialism and state formation: "White, black, and brown are abridged, abstracted versions of colonizer, slave, and colonized." By the mid-seventeenth century, the novel term español ("Spaniard") was being equated in written documents with blanco, or "white". In Spain's American colonies, African, Native American (indio), Jewish, or morisco ancestry formally excluded individuals from the "purity of blood" (limpieza de sangre) requirements for holding any public office under the Royal Pragmatic of 1501. Similar restrictions applied in the military, some religious orders, colleges, and universities, leading to a nearly all-white priesthood and professional stratum. Blacks and indios were subject to tribute obligations and forbidden to bear arms, and black and indio women were forbidden to wear jewels, silk, or precious metals in early colonial Mexico and Peru. Those pardos (people with dark skin) and mulattos (people of mixed African and European ancestry) with resources largely sought to evade these restrictions by passing as white. A brief royal scheme allowing the privileges of whiteness to be purchased for a substantial sum of money attracted fifteen applicants before pressure from white elites ended the practice.

In the British colonies in North America and the Caribbean, the designation English or Christian was initially used in contrast to Native Americans or Africans. Early appearances of white race or white people in the Oxford English Dictionary begin in the seventeenth century. Historian Winthrop Jordan reports that "throughout the [thirteen] colonies the terms Christian, free, English, and white were […] employed indiscriminately" in the 17th century as proxies for one another. In 1680, Morgan Godwyn "found it necessary to explain" to English readers that "in Barbados, 'white' was 'the general name for Europeans.'" Several historians report a shift towards greater use of white as a legal category alongside a hardening of restrictions on free or Christian blacks. White remained a more familiar term in the American colonies than in Britain well into the 1700s, according to historian Theodore W. Allen.

Science of race

Western studies of race and ethnicity in the 18th and 19th centuries developed into what would later be termed scientific racism. Prominent European scientists writing about human and natural difference included a white or west Eurasian race among a small set of human races and imputed physical, mental, or aesthetic superiority to this white category.
These ideas were discredited by twentieth-century scientists.

18th century beginnings

In 1758, Carl Linnaeus proposed what he considered to be natural taxonomic categories of the human species. He distinguished between Homo sapiens and Homo sapiens europaeus, and he later added four geographical subdivisions of humans: white Europeans, red Americans, yellow Asians, and black Africans. Although Linnaeus intended them as objective classifications, his descriptions of these groups included cultural patterns and derogatory stereotypes.

In 1775, the naturalist Johann Friedrich Blumenbach asserted that "The white colour holds the first place, such as is that of most European peoples. The redness of the cheeks in this variety is almost peculiar to it: at all events it is but seldom to be seen in the rest." In the various editions of his On the Natural Variety of Mankind, he categorized humans into four or five races, largely built on Linnaeus' classifications. But while, in 1775, he had grouped into his "first and most important" race "Europe, Asia this side of the Ganges, and all the country situated to the north of the Amoor, together with that part of North America, which is nearest both in position and character of the inhabitants", he somewhat narrowed his "Caucasian variety" in the third edition of his text, of 1795: "To this first variety belong the inhabitants of Europe (except the Lapps and the remaining descendants of the Finns) and those of Eastern Asia, as far as the river Obi, the Caspian Sea and the Ganges; and lastly, those of Northern Africa." Blumenbach quotes various other systems by his contemporaries, ranging from two to seven races, authored by the authorities of the time, including, besides Linnaeus, Georges-Louis Leclerc, Comte de Buffon, Christoph Meiners, and Immanuel Kant. On the question of color, he conducts a rather thorough enquiry, considering also factors of diet and health, but ultimately believes that "climate, and the influence of the soil and the temperature, together with the mode of life, have the greatest influence". Blumenbach's conclusion, however, was to assign all races to one single human species. Blumenbach argued that physical characteristics like skin color and cranial profile depended on environmental factors, such as solarization and diet. Like other monogenists, Blumenbach held to the "degenerative hypothesis" of racial origins. He claimed that Adam and Eve were Caucasian inhabitants of Asia, and that other races came about by degeneration from environmental factors, such as the sun and poor diet. He consistently believed that the degeneration could be reversed under proper environmental control and that all contemporary forms of man could revert to the original Caucasian race.

19th and 20th century: the "Caucasian race"

During the period from the mid-19th to the mid-20th century, race scientists, including most physical anthropologists, classified the world's populations into three, four, or five races, which, depending on the authority consulted, were further divided into various sub-races. During this period the Caucasian race, named after the people of the North Caucasus (Caucasus Mountains) but extending to all Europeans, figured as one of these races, and was incorporated as a formal category of both scientific research and, in countries including the United States, social classification.
There was never any scholarly consensus on the delineation between the Caucasian race, including the populations of Europe, and the Mongoloid one, including the populations of East Asia. Thus, Carleton S. Coon (1939) included the populations native to all of Central and Northern Asia under the Caucasian label, while Thomas Henry Huxley (1870) classified the same populations as Mongoloid, and Lothrop Stoddard (1920) classified as "brown" most of the populations of the Middle East, North Africa, and Central Asia, and counted as "white" only the European peoples and their descendants, as well as some populations in parts of Anatolia and the northern areas of Morocco, Algeria, and Tunisia. Some authorities, following Huxley (1870), distinguished the Xanthochroi or "light whites" of Northern Europe from the Melanochroi or "dark whites" of the Mediterranean.

Although modern neo-Nazis often invoke National Socialist iconography on behalf of white nationalism, National Socialist Germany repudiated the idea of a unified white race, instead promoting Nordicism. In National Socialist propaganda, Eastern European Slavs were often referred to as Untermensch, and the relatively under-developed status of Eastern European countries such as Poland and the USSR was attributed to the racial inferiority of their inhabitants. Fascist Italy took the same view, and both of these nations justified their colonial ambitions in Eastern Europe on racist, anti-Slavic grounds. These nations were not alone in their view; there are numerous cases in the 20th century where some European ethnic groups labeled or treated other Europeans as members of another, inferior race.

Alastair Bonnett has stated that a strong "current of scientific research supports the theory that Europeans were but one expression of a wider racial group (termed sometimes Caucasian)," a group that, Bonnett notes, would include not only Europeans but also South Asians, North Africans, and even Northeast Africans such as Ethiopians. Bonnett notes that this scientific definition of a Caucasoid race has little currency "outside certain immigration bureaucracies and traditional anthropology," and concludes that popular notions of whiteness are not scientific but socially constructed.

Racial categories remain widely used in medical research, but this can create important problems. For example, researchers Raj Bhopal and Liam Donaldson opine that since white people are a heterogeneous group, the term white should be abandoned as a classification for the purposes of epidemiology and health research, and identifications based on geographic origin and migration history used instead.

According to geneticist David Reich, based on ancient human genomes that his laboratory sequenced in 2016, ancient West Eurasians descend from a mixture of as few as four ancestral components, related to the Eastern Hunter-Gatherers (EHG), the Neolithic Iran population, the Neolithic Levant population and Natufians, and the Western Hunter-Gatherers (WHG): "whatever we currently believe about the genetic nature of differences among populations is most likely wrong... 'whites' are not derived from a population that existed from time immemorial, as some people believe. Instead 'whites' represent a mixture of four ancient populations that lived 10,000 years ago and were each as different from one another as Europeans and East Asians are today."

White people are estimated at about 11.5% of the total world population (out of a world population of 7.5 billion), not counting those of partial European descent.
[Infobox summary: significant populations include Mexico (16,000,000–56,000,000); languages: the languages of Europe (English, French, German, Italian, Portuguese, Russian, and Spanish, among other minority European languages); religion: majority Christianity (Catholic, Protestant, and Orthodox), with irreligion and other religions also represented.]

Definitions of white have changed over the years, including the official definitions used in many countries, such as the United States and Brazil. Through the mid to late 20th century, numerous countries had formal legal standards or procedures defining racial categories (see cleanliness of blood, casta, apartheid in South Africa, hypodescent). Below are some census definitions of white, which may differ from the social definition of white within the same country; the social definition has also been added where possible.

| Country or territory | White share of population | White population | Source |
|---|---|---|---|
| United Kingdom | 87.2% | 55.0 million | 2011 Census |
| Puerto Rico (US) | 75.8% | 2.8 million | 2010 Census |
| United States | 72.4% | 223.5 million | 2010 Census |
| Bermuda (UK) | 31.0% | 19,938 | 2010 Census |
| Dominican Republic | 13.6% or 16.0% | 2.0 million | 1960 Census; 2006 |
| US Virgin Islands (US) | 15.6% | 16,646 | 2010 Census |
| Panama | 6.7% (est.) | – | 2010 WFB^2 |
| Mexico | 9.0% to 47.0% | 10.8 or 56.0 million | WFB^2; Lizcano^3 2010 |
| El Salvador | 12.7% | 0.7 million | 2007 Census |
| Turks and Caicos (UK) | 7.9% | 1,562 | 2001 Census |
| Virgin Islands (UK) | 6.9% | – | 2001 Census |
| The Bahamas | 5.0% | 16,598 | 2010 Census |
| Anguilla (UK) | 3.2% | 431 | 2011 Census |
| St. Vincent | 1.4% | 1,478 | 2001 Census |
| Trinidad and Tobago | 0.7% | – | 2011 Census |
| Colombia | 37.0% | 17 million | 2010 study (est.) |
| Australia and Oceania | N/D | 23.6 million | – |
| New Zealand | 74.0% | 2.97 million | 2013 Census |
| New Caledonia (Fr) | 29.2% | 71,721 | 2009 Census |
| Guam (US) | 7.1% | 11,321 | 2010 Census |
| Northern Mariana Islands (US) | 2.4% | 1,117 | 2010 Census |
| South Africa | 8.9% | 4.5 million | 2011 Census |
| Namibia | 4.0% to 7.0% | 75,000–100,000 | est. |

^2 CIA, The World Factbook.
^3 Lizcano, "Composición Étnica de las Tres Áreas Culturales del Continente Americano".

Argentina, along with other areas of new settlement like Canada, Australia, Brazil, New Zealand, the United States, and Uruguay, is considered a country of immigrants, the vast majority of whom originated from Europe. Although no official censuses based on ethnic classification have been carried out in Argentina, some international sources state that white Argentines and other whites (Europeans) in Argentina make up somewhere between 72.3% (around 34.4 million people) and 89.7% (36.7 million) of the total population. White people can be found in all areas of the country, but especially in the central-eastern region (Pampas), the central-western region (Cuyo), the southern region (Patagonia), and the north-eastern region (Litoral). White Argentines are mainly descendants of immigrants who came from Europe and the Middle East in the late 19th and early 20th centuries. After the original Spanish colonists, waves of European settlers came to Argentina from the late nineteenth to the mid-twentieth centuries. Major contributors included Italy (initially from Piedmont, Veneto, and Lombardy, later from Campania, Calabria, and Sicily) and Spain (mostly Galicians and Basques, but also Asturians, Cantabrians, Catalans, and Andalusians).
Smaller but significant numbers of immigrants included Germans, primarily Volga Germans from Russia but also Germans from Germany, Switzerland, and Austria; French, who came mainly from the Occitania region of France; Portuguese, who had already formed an important community in colonial times; Slavic groups, most of whom were Croats, Bosniaks, and Poles, but also Ukrainians, Belarusians, Russians, Bulgarians, Serbs, and Montenegrins; Britons, mainly from England and Wales; Irish, who left because of the Potato Famine or British rule; Scandinavians from Sweden, Denmark, Finland, and Norway; and, from the Ottoman Empire, mainly Armenians and various Semitic peoples such as Syriac-Assyrians, Maronites, and Arabs (from what are now Lebanon and Syria). Smaller waves of settlers from Australia, South Africa, and the United States can be traced in Argentine immigration records.

The majority of Argentina's Jewish population are Ashkenazi Jews from diaspora communities in Central, Northern, and Eastern Europe, and about 15–20% are from Sephardic communities from Syria. Argentina is home to the fifth-largest Ashkenazi Jewish community in the world. (See also History of the Jews in Argentina.)

By the 1910s, after immigration rates peaked, over 30 percent of the country's population was from outside Argentina, and over half of Buenos Aires' population was foreign-born. The 1914 National Census revealed that around 80% of the national population were either European immigrants, their children, or grandchildren. Among the remaining 20 percent (those descended from the population residing locally before this immigrant wave took shape in the 1870s), around a third were white. European immigration continued to account for over half the nation's population growth during the 1920s, and was again significant (albeit in a smaller wave) following World War II. It is estimated that Argentina received a total of 6.6 million European and Middle Eastern immigrants during the period 1857–1940. White Argentines therefore likely peaked as a percentage of the national population at over 90%, on or shortly after the 1947 census. Since the 1960s, increasing immigration from bordering countries to the north (especially from Bolivia and Paraguay, which have Amerindian and Mestizo majorities) has lessened that majority somewhat.

Critics of the national census state that data has historically been collected using the category of national origin rather than race in Argentina, leading to an undercounting of Afro-Argentines and Mestizos. África Viva (Living Africa), a black rights group in Buenos Aires, is working, with the support of the Organization of American States, financial aid from the World Bank, and Argentina's census bureau, to add an "Afro-descendants" category to the 2010 census. The 1887 national census was the final year in which blacks were included as a separate category before it was eliminated by the government.

A study conducted on 218 individuals in 2010 by the Argentine geneticist Daniel Corach established that the genetic map of Argentina is composed of 79% different European ethnicities (mainly Spanish and Italian), 18% different indigenous ethnicities, and 4.3% African ethnic groups; 63.6% of the tested group had at least one ancestor who was indigenous.

Genetic studies of the Argentine population:
- Homburger et al., 2015, PLOS One Genetics: 67% European, 28% Amerindian, 4% African, and 1.4% Asian.
- Avena et al., 2012, PLOS ONE: 65% European, 31% Amerindian, and 4% African.
- Buenos Aires Province: 76% European and 24% others.
- South Zone (Chubut Province): 54% European and 46% others.
- Northeast Zone (Misiones, Corrientes, Chaco and Formosa provinces): 54% European and 46% others.
- Northwest Zone (Salta Province): 33% European and 67% others.
- Oliveira, 2008, Universidade de Brasília: 60% European, 31% Amerindian and 9% African.
- National Geographic: 52% European, 27% Amerindian, 9% African and 9% others.
From 1788, when the first British colony in Australia was founded, until the early 19th century, most immigrants to Australia were English, Scottish, Welsh and Irish convicts. These were augmented by small numbers of free settlers from the British Isles and other European countries. Until the mid-19th century there were few restrictions on immigration, although members of ethnic minorities tended to be assimilated into the Anglo-Celtic population. People of many nationalities, including many non-white people, emigrated to Australia during the gold rushes of the 1850s. The vast majority, however, were still white, and the gold rushes inspired the first racist activism and policy, directed mainly at Chinese people. From the late 19th century, the colonial (later state) and federal governments of Australia restricted all permanent immigration to the country by non-Europeans. These policies became known as the "White Australia policy", which was consolidated and enabled by the Immigration Restriction Act 1901, but was never universally applied. Immigration inspectors were empowered to ask immigrants to take dictation in any European language as a test for admittance, a test used in practice to exclude people from Asia, Africa, and some European and South American countries, depending on the political climate. Although southern and eastern Europeans were not the prime targets of the policy, it was not until after World War II that they were admitted in large numbers for the first time. Following this, the White Australia policy was relaxed in stages: non-European nationals who could demonstrate European descent were admitted (e.g., descendants of European colonizers and settlers from Latin America or Africa), as were autochthonous inhabitants (such as Maronites, Assyrians and Mandeans) of various nations of the Middle East, most significantly Lebanon and, to a lesser degree, Iraq, Syria and Iran. In 1973, all immigration restrictions based on race and geographic origin were officially terminated. Australia enumerated its population by race between 1911 and 1966, by racial origin in 1971 and 1976, and by self-declared ancestry alone since 1981. As of the 2016 census, it was estimated that around 58% of the Australian population were Anglo-Celtic Australians, with 18% being of other European origins, a total of 76% of European ancestry as a whole. In 1958, about 3,500 white German-speaking Mennonites, who had previously settled in Canada and Russia, arrived in Belize. They established communities in the upper reaches of the Belize River: Blue Creek on the border with Mexico; Shipyard and Indian Creek in Orange Walk District; Spanish Lookout and Barton Creek in Cayo District; and Little Belize in Corozal District. They make up 3.6 percent of the population of Belize and have their own schools, churches and financial institutions in their various communities. Recent censuses in Brazil are conducted on the basis of self-identification.
According to the 2010 Census, white Brazilians totaled 91,051,646 people, making up 47.73% of the Brazilian population. This significant percentage change is considered to be driven by people who used to identify as white reappraising their African, Amerindian or East Asian ancestry and changing their self-identification to "pardo" or "Asian". White in Brazil is applied as a term to people of European descent and to Middle Easterners of all ethnicities. The census shows a trend of fewer Brazilians of a different descent (most likely mixed) identifying as white as their social status increases. Nevertheless, light-skinned mulattoes and mestizos with Caucasian features were also historically deemed more closely related to the branco group of Middle Eastern and European descendants than to the pardo ("grayish-skinned") multiracial group by a set of distinctive social constructs, especially among multiracials with non-Portuguese European ancestry. Such changes of identity may therefore reflect a westernization of the concept of race in Brazil (mixed ancestry, as explained below, was not historically a factor against whiteness in Brazil), rather than, as common sense among some Brazilians and foreigners holds, a change in the self-esteem of "marginalized and unconscious multiracial populations trying to paint themselves as white in a hopeful attempt to deny their unprivileged person of color status". Aside from Portuguese colonization, there were large waves of immigration from the rest of Europe, as well as the Balkans and the Middle East. In Brazil, most members of these communities of European and Middle Eastern descent also have some Sub-Saharan African or Amerindian ancestry. Non-Portuguese ancestry is generally associated with an image of the foreigner and the European, and as such contributed to a social perception of being whiter in the color range of Brazilian society. In the results of Statistics Canada's 2001 Canadian Census, white is one category in the population groups data variable, derived from data collected in question 19 (the results of this question are also used to derive the visible minority groups variable). In the 1995 Employment Equity Act, "'members of visible minorities' means persons, other than Aboriginal peoples, who are non-Caucasian in race or non-white in colour". In the 2001 Census, persons who selected Chinese, South Asian, African, Filipino, Latin American, Southeast Asian, Arab, West Asian, Middle Eastern, Japanese or Korean were included in the visible minority population. A separate census question on "cultural or ethnic origin" (question 17) does not refer to skin color. Scholarly estimates of the white population in Chile vary dramatically, ranging from 20% to 52%. According to a study by the University of Chile, about 30% of the Chilean population is Caucasian, while the 2011 Latinobarómetro survey shows that some 60% of Chileans consider themselves white. During colonial times in the 18th century, an important influx of emigrants from Spain populated Chile, mostly Basques, who vitalized the Chilean economy, rose rapidly in the social hierarchy, and became the political elite that still dominates the country. An estimated 1.6 million (10%) to 3.2 million (20%) Chileans have a surname (one or both) of Basque origin. The Basques liked Chile because of its great similarity to their native land: similar geography, a cool climate, and the presence of fruit, seafood, and wine.
Chile was never an attractive destination for European migrants in the 19th and 20th centuries simply because it was far from Europe and difficult to reach. Chile experienced a small but steady arrival of Spanish, Italian, Irish, French, Greek, German, English, Scottish, Croatian, Jewish, and Palestinian migrants (in addition to immigration from other Latin American countries). The original arrival of Spaniards was the most radical demographic change brought about by Europeans in Chile, since there was never a period of massive immigration, as happened in neighboring nations such as Argentina and Uruguay. The actual scale of immigration does not match certain nationalist chauvinistic discourse, which claims that Chile, like Argentina or Uruguay, should be considered one of the "white" Latin American countries, in contrast to the racial mixture that prevails in the rest of the continent. However, it is undeniable that immigrants have played a major role in Chilean society. Between 1851 and 1924, Chile received only 0.5% of the European immigration flow to Latin America, compared to the 46% received by Argentina, 33% by Brazil, 14% by Cuba, and 4% by Uruguay. This was because most of the migration occurred across the Atlantic before the construction of the Panama Canal; Europeans preferred to stay in countries closer to their homelands instead of taking the long trip through the Straits of Magellan or across the Andes. In 1907, European-born immigrants composed 2.4% of the Chilean population, which fell to 1.8% in 1920, and 1.5% in 1930. After the failed liberal revolution of 1848 in the German states, a significant German immigration took place, laying the foundation for the German-Chilean community. Sponsored by the Chilean government to "civilize" and colonize the southern region, these Germans (including German-speaking Swiss, Silesians, Alsatians and Austrians) settled mainly in Valdivia, Llanquihue and Los Ángeles. The Chilean Embassy in Germany estimates that 150,000 to 200,000 Chileans are of German origin. It is estimated that nearly 5% of the Chilean population is of Asian descent, chiefly from the Middle East (Israelis/Jews, Palestinians, Syrians, and Lebanese), totalling around 800,000. Chile is home to a large population of immigrants, mostly Christian, from the Levant. Roughly 500,000 Palestinian descendants are believed to reside in Chile, making it home to the largest Palestinian community outside the Middle East. Another historically significant immigrant group is Croatian; the number of their descendants today is estimated at 380,000 persons, the equivalent of 2.4% of the population. Other authors claim, on the other hand, that close to 4.6% of the Chilean population has some Croatian ancestry. Over 700,000 Chileans may have British (English, Scottish or Welsh) origins, equivalent to 4.5% of Chile's population. Chileans of Greek descent are estimated at 90,000 to 120,000; most live either in the Santiago area or in the Antofagasta area, and Chile is one of the five countries with the most descendants of Greeks in the world. The descendants of the Swiss number around 90,000, and it is estimated that about 5% of the Chilean population has some French ancestry. An estimated 184,000 to 800,000 are descendants of Italians. Other groups of European descendants are found in smaller numbers. The census figures show how Colombians see themselves in terms of race.
The white Colombian population is approximately 25% to 37% of the Colombian population according to estimates; in surveys and in the 2005 Census, 37% of the total population self-identified as white. Genetic research by the National University of Colombia, based on more than 60,000 blood tests, concluded that Colombian genetic admixture consists of 70% European, 20% Amerindian, and 10% African ancestry. White Colombians are mostly descendants of Spaniards; Italian, German, Irish, Portuguese, and Lebanese (Arab diaspora in Colombia) Colombians are also found in notable numbers. Many Spaniards began their explorations searching for gold, while others established themselves as leaders of the native social organizations, teaching the natives the Christian faith and the ways of their civilization. Catholic priests provided education for Native Americans that was otherwise unavailable. Within 100 years of the first Spanish settlement, nearly 95 percent of all Native Americans in Colombia had died. The majority of these deaths were caused by diseases such as measles and smallpox, which were spread by European settlers; many Native Americans were also killed in armed conflicts with European settlers. Between 1540 and 1559, 8.9 percent of the residents of Colombia were of Basque origin. It has been suggested that the present-day incidence of business entrepreneurship in the region of Antioquia is attributable to Basque immigration and Basque character traits. Few Colombians of distant Basque descent are aware of their Basque ethnic heritage. In Bogotá, there is a small colony of thirty to forty families who emigrated as a consequence of the Spanish Civil War or in search of opportunities. Basque priests introduced handball into Colombia, and Basque immigrants in Colombia were devoted to teaching and public administration. In the first years of the Andean multinational company, Basque sailors navigated as captains and pilots on the majority of the ships until the country was able to train its own crews. In December 1941, the United States government estimated that there were 4,000 Germans living in Colombia. There were some Nazi agitators in Colombia, such as the Barranquilla businessman Emil Prufurt, and Colombia invited Germans who were on the U.S. blacklist to leave. SCADTA, a Colombian-German air transport corporation established by German expatriates in 1919, was the first commercial airline in the western hemisphere. The first and largest wave of immigration from the Middle East began around 1880 and continued through the first two decades of the twentieth century. The immigrants were mainly Maronite Christians from Greater Syria (Syria and Lebanon) and Palestine, fleeing the then Ottoman-controlled territories; Syrians, Palestinians, and Lebanese have continued to settle in Colombia since then. Because of poor existing records, it is impossible to know the exact number of Lebanese and Syrians who immigrated to Colombia; a figure of 5,000–10,000 for the period 1880 to 1930 may be reliable. Whatever the figure, Syrians and Lebanese are perhaps the biggest immigrant group after the Spanish since independence. Those who left their homeland in the Middle East to settle in Colombia did so for different reasons, including religious, economic, and political ones; some left to experience the adventure of migration.
After Barranquilla and Cartagena, Bogotá, followed by Cali, was among the cities with the largest number of Arabic speakers in Colombia in 1945. The Arabs who went to Maicao were mostly Sunni Muslim, with some Druze and Shiites, as well as Orthodox and Maronite Christians. The mosque of Maicao is the second largest mosque in Latin America. Middle Easterners are generally called turcos (Turks). In 2009, Costa Rica had an estimated population of 4,509,290. White people (including mestizos) make up 94% of the population, black people 3%, Amerindians 1%, and Chinese 1%. White Costa Ricans are mostly of Spanish ancestry, but there are also significant numbers of Costa Ricans descended from British, Italian, German, English, Dutch, French, Irish, Portuguese, Lebanese and Polish families, as well as a sizable Jewish community. White people in Cuba make up 64.1% of the total population, according to the 2012 census, with the majority being of diverse Spanish descent. However, after the mass exodus resulting from the Cuban Revolution of 1959, the number of white Cubans actually residing in Cuba diminished. Today the various records claiming a percentage of whites in Cuba are conflicting and uncertain; some reports (usually coming from Cuba) still give a similar, though somewhat lower, figure to the pre-1959 one of 65%, while others (usually from outside observers) report 40–45%. Although most white Cubans are of Spanish descent, many others are of French, Portuguese, German, Italian and Russian descent. During the 18th, 19th and early part of the 20th century, large waves of Canarians, Catalans, Andalusians, Castilians, and Galicians emigrated to Cuba. Another significant ethnic influx derived from various Middle Eastern nations, and many Jews have also immigrated there, some of them Sephardic. Between 1901 and 1958, more than a million Spaniards arrived in Cuba from Spain; many of them, and their descendants, left after Castro's communist regime took power. In 1958, it was estimated that approximately 74% of Cubans were of European ancestry, mainly of Spanish origin, 10% of African ancestry, 15% of both African and European ancestry (mulattos), and a small 1% of the population was Asian, predominantly Chinese. However, after the Cuban Revolution, owing to a combination of factors (mainly the mass exodus to Miami in the United States, a drastic decrease in immigration, and interracial reproduction), Cuba's demography has changed: those of complete European ancestry and those of pure African ancestry have decreased, the mulatto population has increased, and the Asian population has, for all intents and purposes, disappeared. The Institute for Cuban and Cuban-American Studies at the University of Miami says the present Cuban population is 38% white and 62% black/mulatto. The Minority Rights Group International says that "An objective assessment of the situation of Afro-Cubans remains problematic due to scant records and a paucity of systematic studies both pre- and post-revolution. Estimates of the percentage of people of African descent in the Cuban population vary enormously, ranging from 33.9 per cent to 62 per cent". According to the most recent census, in 2012, Cuba's population was 11,167,325. In 2013, white Salvadorans were a minority ethnic group in El Salvador, accounting for 12.7% of the country's population. An additional 86.3% of the population were mestizo, having mixed indigenous and European ancestry.
In 2010, 18.5% of Guatemalans belonged to the white ethnic group, with 41.7% of the population being mestizo and 39.8% belonging to the 23 indigenous groups. It is difficult to make an accurate census of whites in Guatemala, because the country categorizes all non-indigenous people as mestizo or ladino, and a large majority of white Guatemalans consider themselves mestizos or ladinos. By the 19th century, the majority of immigrants were Germans, many of whom were granted fincas and coffee plantations in Cobán, while others went to Quetzaltenango and Guatemala City. Many young Germans married mestiza and indigenous Q'eqchi' women, which caused a gradual whitening. There was also immigration of Belgians to Santo Tomás, which contributed to intermixing with black and mestiza women in that region. As of 2013, Hondurans of solely white ancestry are a small minority in Honduras, accounting for 1% of the country's population. An additional 90% of the population is mestizo, having mixed indigenous and European ancestry. White Mexicans are Mexican citizens of complete or predominant European descent. While the Mexican government does conduct ethnic censuses in which a Mexican has the option of identifying as "white", the results obtained from these censuses are not published. What the Mexican government publishes instead is the percentage of "light-skinned Mexicans" in the country: 47% in 2010 and 49% in 2017. Because its racial undertone is less direct, the label "light-skinned Mexican" has been favored by the government and media outlets over "white Mexican" as the go-to choice for referring to the segment of Mexico's population that possesses European physical traits when discussing ethno-racial dynamics in Mexican society; nonetheless, "white Mexican" is sometimes used. Europeans began arriving in Mexico during the Spanish conquest of the Aztec Empire, and while during the colonial period most European immigration was Spanish (mostly from northern provinces such as Cantabria, Navarra, Galicia and the Basque Country), in the 19th and 20th centuries European and European-derived populations from North and South America also immigrated to the country. According to 20th- and 21st-century academics, large-scale intermixing between European immigrants and the native indigenous peoples produced a mestizo group which would become the overwhelming majority of Mexico's population by the time of the Mexican Revolution. However, according to church and census registers from colonial times, the majority (73%) of Spanish men married Spanish women. Said registers also call into question other narratives held by contemporary academics, such as the idea that European immigrants who arrived in Mexico were almost exclusively men, or that "pure Spanish" people were all part of a small powerful elite: Spaniards were often the most numerous ethnic group in the colonial cities, and there were menial workers and people in poverty who were of complete Spanish origin.
Another ethnic group in Mexico, the mestizos, is composed of people with varying degrees of European and indigenous ancestry, with some showing a European genetic ancestry higher than 90%. However, the criteria for defining what constitutes a mestizo vary from study to study, as a large number of white people in Mexico have historically been classified as mestizos: after the Mexican Revolution, the Mexican government began defining ethnicity by cultural standards (mainly the language spoken) rather than racial ones, in an effort to unite all Mexicans under the same racial identity. Estimates of Mexico's white population differ greatly in both methodology and the percentages given. Extra-official sources such as the World Factbook and Encyclopædia Britannica, which use the 1921 census results as the base of their estimations, calculate Mexico's white population as only 9%, or between one tenth and one fifth (the results of the 1921 census, however, have been contested by various historians and deemed inaccurate). Surveys that account for phenotypical traits and have performed actual field research suggest rather higher percentages: using the presence of blond hair as a criterion for classifying a Mexican as white, the Metropolitan Autonomous University of Mexico calculated the percentage of said ethnic group at 23%; with a similar methodology, the American Sociological Association obtained a percentage of 18.8%. Another study, made by University College London in collaboration with Mexico's National Institute of Anthropology and History, found that the frequencies of blond hair and light eyes in Mexicans are 18% and 28% respectively. Nationwide surveys of the general population that use skin color as a reference, such as those made by Mexico's National Council to Prevent Discrimination and Mexico's National Institute of Statistics and Geography, report percentages of 47% and 49% respectively. A study performed in hospitals of Mexico City reported that an average of 51.8% of Mexican newborns presented the congenital skin birthmark known as the Mongolian spot, while it was absent in 48.2% of the analyzed babies. The Mongolian spot appears with a very high frequency (85–95%) in Asian, Native American, and African children; the skin lesion reportedly almost always appears on South American and Mexican children who are racially mestizo, while having a very low frequency (5–10%) in Caucasian children. According to the Mexican Social Security Institute (IMSS), around half of Mexican babies nationwide have the Mongolian spot. Mexico's northern and western regions have the highest percentages of white population; according to the American historian and anthropologist Howard F. Cline, the majority of the people there have no native admixture or are of predominantly European ancestry, resembling in aspect northern Spaniards. In the north and west of Mexico, the indigenous tribes were substantially smaller than those found in central and southern Mexico, and also much less organized; thus they remained isolated from the rest of the population, or in some cases were even hostile towards Mexican colonists. The northeast region, in which the indigenous population was eliminated by early European settlers, became the region with the highest proportion of whites during the Spanish colonial period. However, recent immigrants from southern Mexico have been changing its demographic trends to some degree.
The white population of central Mexico, despite not being as numerous as in the north because of greater mixing, is ethnically more diverse, as there are large numbers of other European and Middle Eastern ethnic groups aside from Spaniards. This also results in non-Iberian surnames (mostly French, German, Italian and Arab) being more common in central Mexico, especially in the country's capital and in the state of Jalisco. A number of settlements in which European immigrants have maintained their original culture and language survive to this day, spread all over Mexican territory; the most notable are the Mennonites, who have colonies in states as varied as Chihuahua and Campeche, and the town of Chipilo in the state of Puebla, inhabited almost entirely by descendants of Italian immigrants who still speak their Venetian-derived dialect. James Cook claimed New Zealand for Britain on his arrival in 1769. The establishment of British colonies in Australia from 1788 and the boom in whaling and sealing in the Southern Ocean brought many Europeans to the vicinity of New Zealand. Whalers and sealers were often itinerant, and the first real settlers were missionaries and traders in the Bay of Islands area from 1809. Early visitors to New Zealand included whalers, sealers, missionaries, mariners, and merchants, attracted by natural resources in abundance. They came from the Australian colonies, Great Britain and Ireland, Germany (forming the next biggest immigrant group after the British and Irish), France, Portugal, the Netherlands, Denmark, the United States, and Canada. In the 1860s, the discovery of gold started a gold rush in Otago. By 1860 more than 100,000 British and Irish settlers lived throughout New Zealand. The Otago Association actively recruited settlers from Scotland, creating a definite Scottish influence in that region, while the Canterbury Association recruited settlers from the south of England, creating a definite English influence over its region. In the 1870s, the MP Julius Vogel borrowed millions of pounds from Britain to help fund capital development such as a nationwide rail system, lighthouses, ports and bridges, and encouraged mass migration from Britain. By 1870 the non-Māori population had reached over 250,000. Other smaller groups of settlers came from Germany, Scandinavia, and other parts of Europe, as well as from China and India, but British and Irish settlers made up the vast majority and did so for the next 150 years. As of 2013, the white ethnic group in Nicaragua accounts for 17% of the country's population. An additional 69% of the population is mestizo, having mixed indigenous and European ancestry. In the 19th century, Nicaragua received immigrants, mostly from Germany, England and the United States, who often married native Nicaraguan women. Some Germans were given land to grow coffee in Matagalpa, Jinotega and Estelí, although most Europeans settled in San Juan del Norte. In the late 17th century, pirates from England, France and Holland mixed with the indigenous population and started a settlement at Bluefields (Mosquito Coast). According to the 2017 census, 5.9%, or 1.3 million (1,336,931) people 12 years of age and above, self-identified as white: 619,402 males (5.5%) and 747,528 females (6.3%). This was the first time a question on ethnic origin had been asked.
The regions with the highest proportions of self-identified whites were La Libertad (10.5%), Tumbes and Lambayeque (9.0% each), Piura (8.1%), Callao (7.7%), Cajamarca (7.5%), Lima Province (7.2%) and Lima Region (6.0%). White Dutch colonists first arrived in South Africa around 1652. By the beginning of the eighteenth century, some 2,000 Europeans and their descendants were established in the region. Although these early Afrikaners represented various nationalities, including German peasants and French Huguenots, the community retained a thoroughly Dutch character. The British Empire seized Cape Town in 1795, during the Napoleonic Wars, and permanently acquired South Africa from the Netherlands in 1814. The first British immigrants numbered about 4,000 and arrived in 1820. They represented groups from England, Ireland, Scotland, and Wales, and were typically more literate than the Dutch. The discovery of diamonds and gold led to a greater influx of English speakers, who were able to develop the mining industry with capital unavailable to Afrikaners. They have been joined in subsequent decades by former colonials from elsewhere, such as Zambia and Kenya, and by poorer British nationals looking to escape famine at home. Both Afrikaners and the English have been politically dominant in South Africa in the past; owing to the controversial racial order under apartheid, the nation's predominantly Afrikaner government became a target of condemnation by other African states and the site of considerable dissension between 1948 and 1991.
United Kingdom and Ireland
Historical white identities
Before the Industrial Revolution in Europe, whiteness may have been associated with social status. Aristocrats may have had less exposure to the sun, and therefore a pale complexion may have been associated with status and wealth. This may be the origin of "blue blood" as a description of royalty: the skin being so lightly pigmented that the blueness of the veins could be clearly seen. The change in the meaning of white that occurred in the colonies (see above) to distinguish Europeans from non-Europeans did not apply to the "home land" countries (England, Ireland, Scotland and Wales). Whiteness therefore retained, for the time being, a meaning associated with social status, and during the 19th century, when the British Empire was at its peak, many of the bourgeoisie and aristocracy developed extremely chauvinistic attitudes towards those of lower social rank. Edward Lhuyd discovered that Welsh, Gaelic, Cornish and Breton are all part of the same language family, which he called "Celtic", and are distinct from the Germanic English; this can be seen in the context of 19th-century romantic nationalism. On the other hand, the discovery of Anglo-Saxon remains led to a belief that the English were descended from a distinct Germanic lineage that was fundamentally (and racially) different from that of the Celts. Early British anthropologists such as John Beddoe and Robert Knox emphasised this distinction, and it was common to find texts claiming that Welsh, Irish and Scottish people are the descendants of the indigenous, more "primitive" inhabitants of the islands, while the English are the descendants of a more advanced and recent "Germanic" migration.
Beddoe in particular postulated that the Welsh and Irish people are closer to the Cro-Magnon, whom he also considered Africanoid, and it was common to find references to the swarthiness of the skin of peoples from the west of the islands, by comparison with the paler-skinned and blonder English residing in the east. For example, Thomas Huxley's On the Geographical Distribution of the Chief Modifications of Mankind (1870) described Irish, Scots and Welsh peoples as a mixture of melanochroi ("dark colored") and xanthochroi, while the English were "xanthochroi" ("light colored"). Just as race reified whiteness in the colonies, so capitalism without social welfare reified whiteness with regard to social class in 19th-century Britain and Ireland; this social distinction of whiteness became, over time, associated with racial difference. For example, George Sims in How the Poor Live (1883) wrote of "a dark continent that is within easy reach of the General Post Office […] the wild races who inhabit it will, I trust, gain public sympathy as easily as [other] savage tribes".
Modern and official use
From the early 1700s, Britain received a small-scale immigration of black people owing to the African slave trade. The oldest Chinese community in Britain (as well as in Europe) dates from the 19th century. Since the end of World War II, substantial immigration from the African, Caribbean and South Asian (namely the British Raj) colonies changed the picture more radically, while accession to the European Union brought with it heightened immigration from Central and Eastern Europe. Today the Office for National Statistics uses the term white as an ethnic category, with the classifications White British, White Irish, White Scottish and White Other. These classifications rely on individuals' self-identification, since it is recognised that ethnic identity is not an objective category. Socially, in the UK white usually refers only to people of native British, Irish and European origin. According to the 2011 census, the white population stood at 85.5% in England (White British: 79.8%), 96% in Scotland (White British: 91.8%), and 95.6% in Wales (White British: 93.2%), while in Northern Ireland 98.28% identified themselves as white, amounting to a total white population of 87.2% (or c. 82% White British and Irish). The cultural boundaries separating white Americans from other racial or ethnic categories are contested and always changing. Professor David R. Roediger of the University of Illinois suggests that the construction of the white race in the United States was an effort to mentally distance slave owners from slaves. By the 18th century, white had become well established as a racial term. According to John Tehranian, among those not considered white at some points in American history have been the Germans, Greeks, white Hispanics, Arabs, Iranians, Afghans, Irish, Italians, Jews, Slavs and Spaniards. Finns were also on several occasions "racially" discriminated against and seen not as white but as "Asian", on the basis of arguments and theories that the Finns were originally of Mongolian rather than "native" European origin, because the Finnish language belongs to the Uralic and not the Indo-European language family. During American history, the process of being officially defined as white by law often came about in court disputes over the pursuit of citizenship.
The Naturalization Act of 1790 offered naturalization only to "any alien, being a free white person". In at least 52 cases, people denied the status of white by immigration officials sued in court for status as white people. By 1923, courts had vindicated a "common-knowledge" standard, concluding that "scientific evidence" was incoherent. Legal scholar John Tehranian argues that in reality this was a "performance-based" standard, relating to religious practices, education, intermarriage and a community's role in the United States. In 1923, the Supreme Court decided in United States v. Bhagat Singh Thind that people of Indian descent were not white men, and thus not eligible for citizenship. While Thind was a high-caste Hindu born in the northern Punjab region and classified by certain scientific authorities as of the Aryan race, the court conceded that he was not white or Caucasian, since the word Aryan "has to do with linguistic and not at all with physical characteristics" and "the average man knows perfectly well that there are unmistakable and profound differences" between Indians and white people. In United States v. Cartozian (1925), an Armenian immigrant successfully argued (and the Supreme Court agreed) that his nationality was white in contradistinction to other people of the Near East (Kurds, Turks, and Arabs in particular) on the basis of their Christian religious traditions. In the conflicting rulings In re Hassan (1942) and Ex parte Mohriez, United States District Courts found, respectively, that Arabs did not and did qualify as white under immigration law. Even today the relationship between some ethnic groups and whiteness remains complex. In particular, some Jewish and Arab individuals both self-identify and are considered part of the white American racial category, but others with the same ancestry feel they are not white, nor are they perceived as white by American society. The United States Census Bureau proposed, but then withdrew, plans to add a new category to classify Middle Eastern and North African peoples in the 2020 U.S. Census, amid a dispute over whether this classification should be considered a white ethnicity or a race. According to Frank Sweet, "various sources agree that, on average, people with 12 percent or less admixture appear White to the average American and those with up to 25 percent look ambiguous (with a Mediterranean skin tone)". The current U.S. Census definition includes as white "a person having origins in any of Europe, the Middle East or North Africa". The U.S. Department of Justice's Federal Bureau of Investigation describes white people as "having origins in any of the original peoples of Europe, the Middle East, or North Africa through racial categories used in the Uniform Crime Reports Program adopted from the Statistical Policy Handbook (1978) and published by the Office of Federal Statistical Policy and Standards, U.S. Department of Commerce". The "white" category in the UCR includes non-black Hispanics. White Americans made up nearly 90% of the population in 1950. A 2008 report from the Pew Research Center projects that by 2050, non-Hispanic white Americans will make up 47% of the population, down from 67% in 2005. According to a study on the genetic ancestry of Americans, white Americans (stated as "European Americans") are on average 98.6% European, 0.19% African and 0.18% Native American. Southern states with higher African American populations tend to have higher percentages of African ancestry.
According to the 23andMe database, up to 13% of self-identified white American Southerners have greater than 1% African ancestry. Southern states with the highest African American populations tended to have the highest percentages of hidden African ancestry. Robert P. Stuckert, a member of the Department of Sociology and Anthropology at Ohio State University, has stated that today the majority of the descendants of African slaves are white. The "one-drop rule", under which a person with any amount of known African ancestry (however small or invisible) is not white, is a classification that was used in parts of the United States. It is a colloquial term for a set of laws passed by 18 U.S. states between 1910 and 1931, many as a consequence of Plessy v. Ferguson, a Supreme Court decision that upheld the concept of racial segregation by accepting a "separate but equal" argument. These laws were finally declared unconstitutional in 1967, when the Supreme Court ruled on anti-miscegenation laws in Loving v. Virginia, which also found Virginia's Racial Integrity Act of 1924 unconstitutional. The one-drop rule attempted to create a bifurcated system of either black or white regardless of a person's physical appearance, but sometimes failed, as people with African ancestry occasionally passed as "white", as noted above. This contrasts with the more flexible social structures present in Latin America (derived from the Spanish colonial-era casta system), where there were less clear-cut divisions between the various ethnicities. As a result of centuries of having children with white people, the majority of African Americans have some European admixture, and many white people also have African ancestry. Writer and editor Debra Dickerson questions the legitimacy of the one-drop rule, stating that "easily one-third of black people have white DNA". She argues that in ignoring their European ancestry, African Americans are denying their fully articulated multi-racial identities. The peculiarity of the one-drop rule may be illustrated by the case of singer Mariah Carey, who was publicly called "another white girl trying to sing black" but who, in an interview with Larry King, responded that, despite her physical appearance and the fact that she was raised primarily by her white mother, due to the one-drop rule she did not "feel white". More recently, the possibility of genetic testing has raised new questions about the way African Americans describe their race. In contrast to most other Caribbean places, Puerto Rico gradually became predominantly populated by European immigrants. Puerto Ricans of Spanish, Italian (primarily via Corsica) and French descent comprise the majority (see Spanish settlement of Puerto Rico). In 1899, one year after the U.S. acquired the island, 61.8%, or 589,426 people, self-identified as white. One hundred years later (in 2000), the total had increased to 80.5% (3,064,862); not because of an influx of whites to the island (or an exodus of non-whites), but because of a change in conceptions of race, driven mainly by Puerto Rican elites seeking to portray Puerto Rico as the "white island of the Antilles", partly as a response to scientific racism. Hundreds of immigrants came from Corsica, France, Italy, Portugal, Lebanon, Ireland, Scotland, and Germany, along with large numbers from Spain.
This was the result of land granted by Spain under the Real Cédula de Gracias de 1815 (Royal Decree of Graces of 1815), which allowed European Catholics to settle on the island with a certain amount of free land. Between 1960 and 1990, the census questionnaire in Puerto Rico did not ask about race or color; racial categories therefore disappeared from the dominant discourse on the Puerto Rican nation. However, the 2000 census included a racial self-identification question in Puerto Rico and, for the first time since 1950, allowed respondents to choose more than one racial category to indicate mixed ancestry (only 4.2% chose two or more races). With few variations, the census of Puerto Rico used the same questionnaire as on the U.S. mainland. According to census reports, most islanders responded to the new federally mandated categories on race and ethnicity by declaring themselves "white"; few declared themselves to be black or some other race. However, it has been estimated that 20% of white Puerto Ricans may have black ancestry. Uruguayans and Argentines share close demographic ties. Different estimates state that Uruguay's population of 3.4 million is composed of 88% to 93% white Uruguayans. Uruguay's population is overwhelmingly of European origin, mainly Spaniards, followed closely by Italians, including numbers of French, Greeks, Lebanese, Armenians, Swiss, Scandinavians, Germans, Irish, Dutch, Belgians, Austrians, and other Southern and Eastern Europeans who migrated to Uruguay in the late 19th and 20th centuries. According to the 2006 National Survey of Homes by the Uruguayan National Institute of Statistics, 94.6% self-identified as having a white background, 9.1% chose black ancestry, and 4.5% chose Amerindian ancestry (those surveyed were allowed to choose more than one option). According to the 2011 National Population and Housing Census, 43.6% of the Venezuelan population (approximately 13.1 million people) identify as white. Genetic research by the University of Brasília shows an average admixture of 60.6% European, 23.0% Amerindian and 16.3% African ancestry in Venezuelan populations. The majority of white Venezuelans are of Spanish, Italian, Portuguese and German descent. Nearly half a million European immigrants, mostly from Spain (as a consequence of the Spanish Civil War), Italy and Portugal, entered the country during and after World War II, attracted by a prosperous, rapidly developing country where educated and skilled immigrants were welcomed. Spaniards arrived in Venezuela during the colonial period, most of them from Andalusia, Galicia, the Basque Country and the Canary Islands. Until the last years of World War II, a large part of the European immigrants to Venezuela came from the Canary Islands, and their cultural impact was significant, influencing the development of Castilian in the country, as well as its gastronomy and customs. With the beginning of oil operations in the first decades of the 20th century, citizens and companies from the United States, the United Kingdom and the Netherlands established themselves in Venezuela.
Later, in the middle of the century, there was a new wave of immigrants from Spain (mainly from Galicia, Andalusia and the Basque Country), Italy (mainly from southern Italy and Venice) and Portugal (from Madeira), as well as new immigrants from Germany, France, England, Croatia, the Netherlands, the Middle East and other countries, encouraged by the program of immigration and colonization implemented by the government.
See also
- Ethnic groups in Europe
- Ethnic groups in West Asia
- European diaspora
- Genetic history of Europe
- Criollo people
- White supremacy
References
- "On both sides of the chronological divide between the modern and the pre-modern (wherever it may lie), there is today a remarkable consensus that the earlier vocabularies of difference are innocent of race." Nirenberg, David (2009). "Was there race before modernity? The example of 'Jewish' blood in late medieval Spain" (PDF). In Eliav-Feldon, Miriam; Isaac, Benjamin H.; Ziegler, Joseph (eds.). The Origins of Racism in the West. Cambridge, UK: Cambridge University Press. pp. 232–264. Retrieved 16 September 2014.
- Jablonski, Nina G. (27 September 2012). Living Color: The Biological and Social Meaning of Skin Color. Berkeley, California: University of California Press. p. 106. ISBN 978-0-520-95377-2.
- "The first are RETH, the second are AAMU, the third are NEHESU, and the fourth are THEMEHU. The RETH are Egyptians, the AAMU are dwellers in the deserts to the east and north-east of Egypt, the NEHESU are the Cushites, and the THEMEHU are the fair-skinned Libyans." Book of Gates, chapter VI (archived 10 March 2016 at the Wayback Machine), translated by E. A. Wallis Budge, 1905.
- James H. Dee, "Black Odysseus, White Caesar: When Did 'White People' Become 'White'?" The Classical Journal, Vol. 99, No. 2 (December 2003 – January 2004), pp. 162 ff.
- Michael Witzel, "Rgvedic History", in: The Indo-Aryans of South Asia (1995): "while it would be easy to assume reference to skin colour, this would go against the spirit of the hymns: for Vedic poets, black always signifies evil, and any other meaning would be secondary in these contexts."
- Painter, Nell (2 February 2016). The History of White People. New York, NY: W. W. Norton & Company. p. 1. ISBN 978-0-393-04934-3.
- Herodotus: Histories, 4.108.
- Herodotus: Histories, 2.104.2.
- Herodotus: Histories, 2.17.
- Xenophanes of Colophon: Fragments, J. H. Lesher, University of Toronto Press, 2001, ISBN 0-8020-8508-3, p. 90.
- Dee, James H. (2004). "Black Odysseus, White Caesar: When Did 'White People' Become 'White'?". The Classical Journal. 99 (2): 157–167. JSTOR 3298065.
- Silverblatt, Irene (2004). Modern Inquisitions: Peru and the Colonial Origins of the Civilized World. Durham: Duke University Press. p. 139. ISBN 978-0-8223-8623-0.
- Baum, Bruce David (2006). The Rise and Fall of the Caucasian Race: A Political History of Racial Identity. NYU Press. p. 247. ISBN 978-0-8147-9892-8.
- Alastair Bonnett, White Identities: An Historical & International Introduction. Routledge, London 1999, ISBN 0-582-35627-X / ISBN 978-0-582-35627-6.
- Gregory Jay, "Who Invented White People? A Talk on the Occasion of Martin Luther King, Jr. Day, 1998". Archived from the original on 2 May 2007. Retrieved 19 December 2006.
- Keevak, Michael (2011). Becoming Yellow: A Short History of Racial Thinking. Princeton University Press. pp. 26–27.
- Keevak, Michael (2011). Becoming Yellow: A Short History of Racial Thinking. Princeton University Press. p. 2.
- Silverblatt, Irene (2004). Modern Inquisitions: Peru and the Colonial Origins of the Civilized World. Durham: Duke University Press. pp. 113–16. ISBN 978-0-8223-8623-0.
- Silverblatt, Irene (2004). Modern Inquisitions: Peru and the Colonial Origins of the Civilized World. Durham: Duke University Press. p. 115. ISBN 978-0-8223-8623-0.
- Twinam, Ann (2005). "Racial Passing: Informal and Official 'Whiteness' in Colonial Spanish America". In Smolenski, John; Humphrey, Thomas J. (eds.). New World Orders: Violence, Sanction, and Authority in the Colonial Americas. Philadelphia: University of Pennsylvania Press. pp. 249–272. ISBN 978-0-8122-3895-2.
- Duenas, Alcira (2010). Indians and Mestizos in the "Lettered City": Reshaping Justice, Social Hierarchy, and Political Culture in Colonial Peru. Boulder, CO: University Press of Colorado. ISBN 978-1-60732-019-7. Retrieved 23 April 2012.
- Jordan, Winthrop (1974). White Over Black: American Attitudes Towards the Negro. p. 97.
- Allen, Theodore (1994). The Invention of the White Race. Vol. 2. New York: Verso. p. 351.
- Baum (2006), p. 48. Winthrop Jordan, White Over Black: American Attitudes Towards the Negro (1974), p. 52, puts the shift to white from the earlier Christian, free, and English to around 1680. Allen, Theodore (1994). The Invention of the White Race: Racial Oppression and Social Control. Verso. ISBN 978-0-86091-660-4. Archived from the original on 7 November 2011. Retrieved 24 December 2006.
- Hirschman, Charles (2004). "The Origins and Demise of the Concept of Race". Population and Development Review. 30 (3): 385–415. doi:10.1111/j.1728-4457.2004.00021.x. ISSN 1728-4457.
- Sarah A. Tishkoff and Kenneth K. Kidd (2004): "Implications of biogeography of human populations for 'race' and medicine" (archived 14 July 2016 at the Wayback Machine), Nature Genetics.
- Painter, Nell Irvin (2003). "Why White People are Called Caucasian?" Yale University. Archived from the original (PDF) on 20 October 2013. Retrieved 9 October 2006.
- Johann Friedrich Blumenbach: The Anthropological Treatises. Longman Green, London 1865, pp. 99, 265 ff.
- Painter, Nell (2010). The History of White People. New York, NY: W. W. Norton & Company. pp. 79–90. ISBN 978-0-393-04934-3.
- Blumenbach, Johann Friedrich (2000). "On the Natural Variety of Mankind". In Robert Bernasconi (ed.). The Idea of Race. Indianapolis, IN: Hackett Publishing. pp. 27–37. ISBN 978-0-87220-458-4.
- Johann Friedrich Blumenbach: The Anthropological Treatises. Longman Green, London 1865, p. 107.
- Brian Regal: Human Evolution: A Guide to the Debates. ABC-CLIO, Santa Barbara, CA 2004, p. 72. Also see Johann Friedrich Blumenbach: The Institutions of Physiology, translated by John Elliotson. Bensley, London 1817.
- Marvin Harris (2001). The Rise of Anthropological Theory: A History of Theories of Culture. Rowman Altamira. pp. 84 ff. ISBN 978-0-7591-0133-3. Retrieved 5 April 2012.
- Baum (2006), p. 120, gives the range 1840 to 1935.
- McAuliffe, Garrett (30 May 2018). Culturally Alert Counseling: A Comprehensive Introduction. SAGE. ISBN 9781412910064 – via Google Books.
- Zecker, Robert M. (30 June 2011). Race and America's Immigrant Press: How the Slovaks were Taught to Think Like White People. Bloomsbury Publishing USA. ISBN 9781441161994 – via Google Books.
- "The Encyclopædia Britannica: A Dictionary of Arts, Sciences, Literature and General Information". [Cambridge] University Press. 30 May 2018 – via Google Books.
- Bendersky, Joseph W. (2007). A Concise History of Nazi Germany, pp. 161–162. Rowman & Littlefield Publishers Inc., Plymouth, United Kingdom.
- Benito Mussolini, Richard Washburn Child, Max Ascoli, Richard Lamb. My Rise and Fall. Da Capo Press, 1998. pp. 105–106.
- Bonnett, Alastair (2000): White Identities. Pearson Education. ISBN 0-582-35627-X.
- Mahmood Hoormand; Iraj Milanian; Alireza Salek Moghaddam; Nader Tajik; Negin Zand (July 2005). "Allele Frequency of CYP2C19 Gene Polymorphisms in a Healthy Iranian Population". Iranian Journal of Pharmacology & Therapeutics. 4 (2): 124–127. "In this study we determined genotypes of CYP2C19 in Iranian population to compare allele frequencies with previous findings in other ethnic groups […] By contrast, the absence of CYP2C19*3 in our study further illustrates the ethnical difference between Caucasian and Oriental populations, by confirming the Asian specificity of this allelic variant, whose frequency is very low, or totally absent, in different Caucasian populations. No CYP2C19*3 was detected in our study. This allele is extremely rare in non-Oriental populations […] the frequency of CYP2C19 allelic variants in Iranians was similar to other Caucasian populations."
- Bhopal, R.; Donaldson, L. (September 1998). "White, European, Western, Caucasian, or what? Inappropriate labeling in research on race, ethnicity, and health". American Journal of Public Health. 88 (9): 1303–1307. doi:10.2105/AJPH.88.9.1303. PMC 1509085. PMID 9736867.
- Iosif Lazaridis; et al. (2016). "Genomic insights into the origin of farming in the ancient Near East" (PDF). Nature. 536 (7617): 419–24. Bibcode:2016Natur.536..419L. doi:10.1038/nature19310. PMC 5003663. PMID 27459054. Retrieved 18 April 2018. "Bottom-left: Western Hunter-Gatherers (WHG); top-left: Eastern Hunter-Gatherers (EHG); bottom-right: Neolithic Levant and Natufians; top-right: Neolithic Iran. This suggests the hypothesis that diverse ancient West Eurasians can be modelled as mixtures of as few as four streams of ancestry related to these populations."
- "How Genetics Is Changing Our Understanding of 'Race'", NY Times, 23 March 2018.
- Current World Population 2017, Worldometers.info.
- "Overview of Race and Hispanic Origin: 2010 Census Briefs". US Census Bureau. March 2011. Archived from the original (PDF) on 5 May 2011.
- "Всероссийская перепись населения 2002 года" [All-Russian Population Census of 2002]. Perepis2002.ru. Retrieved 8 October 2017.
- "Tabelas de resultados: Branca, Preta, Amarela, Parda, Indígena, Sem declaração" (PDF). G1.globo.com. 25 November 2016. Retrieved 11 July 2014.
- "Population". destatis.de. Retrieved 16 December 2019.
- Yazid Sabeg and Laurence Méhaignerie, Les oubliés de l'égalité des chances, Institut Montaigne, January 2004.
- "Bilancio demografico nazionale". Istat.it. 17 June 2015. Archived from the original on 17 June 2015. Retrieved 6 November 2017.
- "Wayback Machine". Ons.gov.uk. 16 January 2013. Archived from the original on 16 January 2013. Retrieved 6 November 2017.
- "Ethnicity and Race by Countries". Infoplease.com. Retrieved 8 October 2017.
- "21 de Marzo Día Internacional de la Eliminación de la Discriminación Racial", p. 7, CONAPRED, Mexico, 21 March. Retrieved 28 April 2017.
- "Encuesta Nacional Sobre Discriminación en México", CONAPRED, Mexico DF, June 2011. Retrieved 28 April 2017.
- "DOCUMENTO INFORMATIVO SOBRE DISCRIMINACIÓN RACIAL EN MÉXICO", CONAPRED, Mexico, 21 March 2011, retrieved on 28 April 2017. - "Wayback Machine". 17 December 2011. Archived from the original on 17 December 2011. Retrieved 8 October 2017. - "The World Factbook". Cia.gov. Central Intelligence Agency. Retrieved 8 October 2017. - Adams, J. Q.; Strother-Adams, Pearlie (2001). Dealing with Diversity. Chicago: Kendall/Hunt Publishing. ISBN 978-0-7872-8145-8. - "CSO 2011 Census – Volume 5 – Ethnic or Cultural Background (including the Irish Traveller Community)" (PDF). 2011. Retrieved 9 July 2009. - 2011 Census: KS201UK Ethnic group, local authorities in the United Kingdom ONS, Retrieved 21 October 2013 - Être né en France d'un parent immigré, Insee Première, n°1287, mars 2010, Catherine Borrel et Bertrand Lhommeau; Insee.fr - Francisco Lizcano Fernández (2005). "Composición Étnica de las Tres Áreas Culturales del Continente Americano al Comienzo del Siglo XXI" (PDF). UAEM. p. 218. Archived from the original (PDF) on 20 September 2008. - "Puerto Rico: People; Ethnic groups". 2010.census.gov. Archived from the original on 31 May 2011. Retrieved 14 April 2011. - "Bermuda: 2010 Census of Population & Housing Final Results" (PDF). Bermuda Department of Statistics. December 2011. Archived from the original (PDF) on 28 February 2013. Retrieved 20 November 2012. - INE- Caracterización estadística República de Guatemala 2012 Retrieved, 2015/04/17. - "Nicaragua: People; Ethnic groups". CIA World Factbook. Retrieved 26 November 2007. - "D.R.: People; Ethnic groups". CIA World Factbook. Retrieved 26 November 2007. - Bureau, U.S. Census. "American FactFinder – Results". factfinder2.census.gov. Retrieved 8 October 2017. - "Panama: People; Ethnic groups". CIA World Factbook. Retrieved 26 November 2007. - "El Salvador: Censos de Población 2007" [El Salvador: Population Census 2007] (PDF) (in Spanish). digestyc.gob.sv. 2008. p. 13. Retrieved 20 December 2015. - Turks and Caicos 2001 Census (Page: 22) - National Population Census Report 2001, The British Virgin Islands Archived 12 May 2016 at the Wayback Machine Percentage Distribution of Population by Ethnic Group, Intercensal Change and Sex, 1991 and 2001 White/Caucasian 6.8% + Portuguese 0.1%. - Bahamas 2010 census TOTAL POPULATION BY SEX, AGE GROUP AND RACIAL GROUP "In 1722 when the first official census of The Bahamas was taken, 74% of the population was white and 26% black. Three centuries later, and according to the 99% response rate obtained from the race question on the 2010 Census questionnaire, 91% of the population identified themselves as being black, five percent (5%) white and two percent (2%) of a mixed race (black and white) and (1%) other races and (1%) not stated." (Page: 10 and 82) - Anguilla Population and Housing Census (AP&HC) 2011 Who are we? – Ethnic Composition and Religious Affiliation. - BARBADOS – 2010 POPULATION AND HOUSING CENSUS Archived 18 January 2017 at the Wayback Machine Table 02.03: Population by Sex, Age Group and Ethnic Origin (Page: 51-54) - POPULATION, DEMOGRAPHIC CHARACTERISTICS Archived 11 September 2018 at the Wayback Machine POPULATION BY ETHNIC GROUPS (Page:16-17) 1.4% white (608 "Portuguese" and 870 other "white"). - Trinidad and Tobago 2011 Census Archived 19 October 2017 at the Wayback Machine Ethnic Composition: "Caucasian 0.59%, Portuguese 0.06%", Total: 0.65% (Page: 15) - "Extended National Household Survey, 2006: Ancestry" (PDF) (in Spanish). National Institute of Statistics. 
- Ethnic Groups Worldwide: A Ready Reference Handbook. by David Levinson. Page 313. Greenwood Publishing Group, 1998. ISBN 1-57356-019-7 - "The World Factbook". cia.gov. Retrieved 12 April 2014. - "2010 Brazilian Census" (PDF). ibge.gov.br (in Portuguese). 2011. Retrieved 19 December 2015. - Resultado Basico del XIV Censo Nacional de Población y Vivienda 2011, (p. 14). - "Colombia: A Country Study" (PDF). Federal Research Division of the Library of Congress. Washington, D.C.: Library of Congress. pp. 101–102. - Simon Schwartzman (25 July 2008). "Étnia, condiciones de vida y discriminación" (PDF). Retrieved 11 July 2014. - EL UNIVERSO (2 September 2011). "Población del país es joven y mestiza, dice censo del INEC". El Universo. Retrieved 17 June 2015. - "Perú: Perfil Sociodemográfico" (PDF). Instituto Nacional de Estadística e Informática. p. 214. - El Dia Encusta (Ipsos) 2014: "INE: el 69% de los bolivianos no pertenece a ningún pueblo indígena. Estudio. Según la encuesta Ipsos, el 25% se autodefine aymara, el 11% quechua, el 3% blanco y el 1% guaraní y afroboliviano. Los indígenas aseguran que están visibilizados." - "2013 Census QuickStats about national highlights". Archived from the original on 14 July 2014. Retrieved 17 June 2015. - Estimer appartenir à une ou plusieurs communautés 2009 census – New Caledonia according to ethnic group - Guam (Territory of the US) CIA Factbook – based on the 2010 official Census statistics - The Northern Mariana Islands 2010 Census - Census 2011: Census in brief (PDF). Pretoria: Statistics South Africa. 2012. ISBN 9780621413885. Archived (PDF) from the original on 13 May 2015. - Namibia-Travel Archived 1 October 2017 at the Wayback Machine – retrieved 3 February 2016 - ZIMBABWE Archived 10 January 2017 at the Wayback Machine – POPULATION CENSUS 2012 – retrieved November 2017 - Schweimler, Daniel (12 February 2007). "Argentina's last Jewish cowboys". BBC News. Retrieved 6 January 2010. - Argentina Archived 6 November 2016 at the Wayback Machine This figure is the sum of 72,3 of White/European and 10% Arab. - The Joshua Project: Ethnic people groups of Argentina "Archived copy". Archived from the original on 13 December 2013. Retrieved 2015-03-29.CS1 maint: archived copy as title (link) These figures do not show up explicitly, but after doing some mathematics, the results are as follows: Argentinians White -the resulting ethnic group out of the melting pot of immigration in Argentina- sum up 29,031,000 or 72,3% of the population. The other relatively unmixed European/Caucasus ethnic groups sum up 4,258,500 (10.6%), and the Arabs sum 1,173,100 more (2.9%). All together, whites in Argentina comprise 34,462,600 or 85.8% out of a total population of 40,133,230. - "CIA – The World Factbook – Argentina". Archived from the original on 13 May 2009. - Enrique Oteiza y Susana Novick sostienen que «la Argentina desde el siglo XIX, al igual que Australia, Canadá o Estados Unidos, se convierte en un país de inmigración, entendiendo por esto una sociedad que ha sido conformada por un fenómeno inmigratorio masivo, a partir de una población local muy pequeña.» Iigg.fsoc.uba.ar Archived 31 May 2011 at the Wayback Machine - Oteiza, Enrique; Novick, Susana. Inmigración y derechos humanos. Política y discursos en el tramo final del menemismo. [en línea]. Buenos Aires: Instituto de Investigaciones Gino Germani, Facultad de Ciencias Sociales, Universidad de Buenos Aires, 2000 [Citado FECHA]. (IIGG Documentos de Trabajo, N° 14). 
Disponible en la World Wide Web: Iigg.fsoc.uba.ar[dead link] - El antropólogo brasileño Darcy Ribeiro incluye a la Argentina dentro de los «pueblos trasplantados» de América, junto con Uruguay, Canadá y Estados Unidos (Ribeiro, Darcy. Las Américas y la Civilización (1985). Buenos Aires:EUDEBA, pp. 449 ss.) - El historiador argentino José Luis Romero define a la Argentina como un «país aluvial» (Romero, José Luis. «Indicación sobre la situación de las masas en Argentina (1951)», en La experiencia Argentina y otros ensayos, Buenos Aires: Universidad de Belgrano, 1980, p. 64) - Federaciones Regionales Archived 2 May 2016 at the Wayback Machine feditalia.org.ar - Dinámica migratoria: coyuntura y estructura en la Argentina de fines del XX Archived 1 November 2008 at the Wayback Machine. Alhim.revues.org (3 November 2004). - "Buenosaires.gov.ar". Archived from the original on 29 September 2008. Retrieved 16 November 2010. - Rock, David. Argentina: 1516–1982. University of California Press, 1987. - Levene, Ricardo. History of Argentina. University of North Carolina Press, 1937. - Yale immigration study Archived 16 April 2016 at the Wayback Machine. Yale.edu. - Racial Discrimination in Argentina Archived 3 March 2016 at the Wayback Machine. Academic.udayton.edu. - Ackerman, Ruthie (27 November 2005). "Blacks in Argentina – officially a few, but maybe a million". The San Francisco Chronicle. - Corach, Daniel; Lao, Oscar; Bobillo, Cecilia; Van Der Gaag, Kristiaan; Zuniga, Sofia; Vermeulen, Mark; Van Duijn, Kate; Goedbloed, Miriam; Vallone, Peter M; Parson, Walther; De Knijff, Peter; Kayser, Manfred (2010). "Inferring Continental Ancestry of Argentineans from Autosomal, Y-Chromosomal and Mitochondrial DNA". Annals of Human Genetics. 74 (1): 65–76. doi:10.1111/j.1469-1809.2009.00556.x. PMID 20059473. - "Medicina (B. Aires) vol.66 número2; Resumen: S0025-76802006000200004". Archived from the original on 19 July 2011. - Homburger; et al. (2015). "Genomic Insights into the Ancestry and Demographic History of South America". PLOS Genetics. 11 (12): e1005602. doi:10.1371/journal.pgen.1005602. PMC 4670080. PMID 26636962. - Avena; et al. (2012). "Heterogeneity in Genetic Admixture across Different Regions of Argentina". PLOS ONE. 7 (4): e34695. Bibcode:2012PLoSO...734695A. doi:10.1371/journal.pone.0034695. PMC 3323559. PMID 22506044. - "O impacto das migrações na constituição genética de populações latino-americanas" (PDF). Repositorio.unb.br. Archived (PDF) from the original on 1 October 2018. Retrieved 15 January 2018. - "Reference Populations – Geno 2.0 Next Generation". Genographic.nationalgeographic.com. Archived from the original on 24 November 2017. Retrieved 15 January 2018. - Immigration Restriction Act 1901 Archived 1 June 2011 at the Wayback Machine. Foundingdocs.gov.au. - Stephen Castles, "The Australian Model of Immigration and Multiculturalism: Is It Applicable to Europe?," International Migration Review, Vol. 26, No. 2, Special Issue: The New Europe and International Migration. (Summer, 1992), pp. 549–67. - "Aboriginal and Torres Strait Islander Peoples and the Census After the 1967 Referendum". Abs.gov.au. 5 July 2011. Retrieved 3 February 2016. - "Belize Mennonites". Retrieved 12 August 2014. - "Censo Demográfi co 2010 Características da população e dos domicílios Resultados do universo" (PDF). 8 November 2011. Retrieved 12 July 2014. - Gregory Rodriguez, "Brazil Separates Into Black and White Archived 5 March 2016 at the Wayback Machine," LA Times, 3 September 2006. 
Note that the figures belie the title. - Rodriguez, Gregory. (3 September 2006) Brazil Separates Into a World of Black and White | The New America Foundation Archived 2 April 2015 at the Wayback Machine. Newamerica.net. - "Groups" in Statistics Canada, Sample 2001 Census form Archived 26 March 2009 at the Wayback Machine. Statistics Canada, 2001 Census Visible Minority and Population Group User Guide Archived 24 January 2016 at the Wayback Machine - Human Resources and Social Development Canada, 2001 Employment Equity Data Report[dead link] - Census 2001: 2B (Long Form) - "Chile". Encyclopædia Britannica. Retrieved 15 September 2012. Chile's ethnic makeup is largely a product of Spanish colonization. About three fourths of Chileans are mestizo, a mixture of European and Amerindian ancestries. One fifth of Chileans are of white European (mainly Spanish) descent. - Fernández, Francisco Lizcano (May–August 2005). "Composición Étnica de las Tres Áreas Culturales del Continente Americanoal Comienzo del Siglo XXI" (PDF). Convergencia. Archived from the original (PDF) on 20 September 2008. Retrieved 23 January 2015. - "5.2.6. Estructura racial". University of Chile (in Spanish). Retrieved 10 February 2013.[permanent dead link] - "Online Data Analysis". Latinobarómetro. Corporación Latinobarómetro. 2011. Retrieved 23 January 2015. - "Chile". Encyclopædia Britannica. Retrieved 15 September 2012. ...Basque families who migrated to Chile in the 18th century vitalized the economy and joined the old Castilian aristocracy to become the political elite that still dominates the country. - Madariaga, Ainara (19 November 2008). "Presentación del libro Santiago de Chile". Departmento de Salud. Eusko Jaurlaritza – Gobierno Vasco. Retrieved 23 January 2015. - Elorza, Waldo Ayarza (1995). ...de los Vascos, Oñati y Los Elorza. pp. 59, 65, 66, 68. - Salazar Vergara, Gabriel; Pinto, Julio (1999). "La Presencia Inmigrante". Historia Contemporánea de Chile. Santiago de Chile: LOM Ediciones. pp. 76–81. ISBN 978-956-282-174-2. Retrieved 16 September 2012. - Censo de Población 1907 Archived 4 March 2016 at the Wayback Machine - Censo de Población 1920 Archived 4 March 2016 at the Wayback Machine - Censo de Población 1930 Archived 3 March 2016 at the Wayback Machine - Durán, Hipólito (1997). "El crecimiento de la población latinoamericana y en especial de Chile • Academia Chilena de Medicina". Superpoblación. Madrid: Real Academia Nacional de Medicina. p. 217. ISBN 978-84-923901-0-6. Retrieved 16 September 2012. - Pérez Rosales, Vicente (1860). Recuerdos del Pasado. Santiago de Chile: Editorial Andrés Bello. Retrieved 16 September 2012. - "Embajada de Chile en Alemania". www.echile.de. Archived from the original on 5 August 2009. - "Kuwi.europa-uni.de" (PDF). Archived from the original (PDF) on 2 November 2012. - http://www.blog-v.com, BLOGG /. "Arabes en Chile". www.blog-v.com. - "Aurora | Aurora". www.aurora-israel.co.il. Archived from the original on 18 March 2012. - http://www.blog-v.com, BLOGG /. "Arabes en Chile". www.blog-v.com. Archived from the original on 18 August 2013. - "Chile: Palestinian refugees arrive to warm welcome". adnkronos International. 7 April 2008. Archived from the original on 19 September 2011. Retrieved 23 January 2015. - "Comunidad palestina en Chile acusa "campaña de terror" tras nuevas pintadas". soitu.es actualidad. 16 October 2009. Retrieved 23 January 2015. - "www.Hrvatskiimigracije.es.tl - Diaspora Croata". hrvatskimigracije.es.tl. Archived from the original on 9 May 2016. 
- "Naslovna". HRVATSKA MATICA ISELJENIKA. Archived from the original on 4 June 2012. - "Hrvatski". Hrvatski.cl. Archived from the original on 3 March 2016. - "Historia de Chile, Británicos y Anglosajones en Chile durante el siglo XIX". Retrieved 26 April 2009. - "ar.vg – Desde Argentina para el mundo". Archived from the original on 16 October 2015. - 90,000 descendants Swiss in Chile. Archived 25 September 2009 at the Wayback Machine - "5% de los chilenos tiene origen frances". Archived from the original on 12 April 2008. - "Italiani nel Mondo: diaspora italiana in cifre" (PDF) (in Italian). Migranti Torino. 30 April 2004. Archived from the original (PDF) on 27 February 2008. Retrieved 24 September 2012. - Library of Congress Country Studies. "Colombia: Race and Ethnicity". Retrieved 12 April 2011. - "Archived copy". Archived from the original on 4 March 2016. Retrieved 2016-09-07.CS1 maint: archived copy as title (link) - "En blanco y negro". semana.com. 25 October 1993. - "El 85 por ciento de las madres colombianas tiene origen indígena". eltiempo.com. 13 October 2006. - Hudson, Rex A.; Division, Library of Congress (U S. ), Federal Research (8 September 2010). Colombia: A Country Study. Government Printing Office. ISBN 9780844495026 – via Google Books. - "Colombia - History Background". education.stateuniversity.com. Retrieved 7 August 2019. - Amerikanuak: Basques in the New World by William A. Douglass, Jon Bilbao, p. 167 - Possible paradises: Basque emigration to Latin America by José Manuel Azcona Pastor, p. 203 - Latin America during World War II by Thomas M. Leonard, John F. Bratzel, P.117 - "SCADTA Joins the Fight". stampnotes.com. - juntaislamica.com. "La comunidad musulmana de Maicao (Colombia) – Webislam". www.webislam.com (in Spanish). Retrieved 17 January 2018. - (in Spanish) Luis Angel Arango Library: Los sirio-libaneses en Colombia Archived 25 October 2006 at the Wayback Machine lablaa.org - "Costa Rica". Microsoft Encarta Online Encyclopedia. Microsoft. 2007. Archived from the original on 29 May 2008. Retrieved 29 December 2010. - "Costa Rica". The World Factbook. U.S. Central Intelligence Agency. Archived from the original on 12 August 2015. - "Report on the Census of Cuba, 1899". sc.edu. - Pedraza, Silvia (17 September 2007). Political Disaffection in Cuba's Revolution and Exodus. ISBN 9780521867870. - "Official 2012 Census" (PDF). Archived from the original (PDF) on 3 June 2014. - "2012 Cuban Census". One.cu. 28 April 2006. Retrieved 23 April 2014. - "Censo en Cuba concluye que la población decrece, envejece y se vuelve cada vez más mestiza". latercera.com. Grupo Copesa. 8 November 2013. - "Etat des propriétés rurales appartenant à des Français dans l'île de Cuba". (from Cuban Genealogy Center) - "In Cuba, Finding a Tiny Corner of Jewish Life". The New York Times. 4 February 2007. Retrieved 19 November 2008. - "A barrier for Cuba's blacks – New attitudes on once-taboo race questions emerge with a fledgling black movement". Archived from the original on 21 August 2013. - Refugees, United Nations High Commissioner for. "Refworld | World Directory of Minorities and Indigenous Peoples – Cuba : Afro-Cubans". Refworld. Retrieved 17 January 2018. - "World Directory of Minorities and Indigenous Peoples – Cuba : Overview". Archived from the original on 10 May 2011. - "El Salvador". The World Factbook. U.S. Central Intelligence Agency. Retrieved 12 October 2013. - Bonnet 2000, p. 37 - "Caracterización estadística República de Guatemala 2012" (PDF). INE. Retrieved 2 November 2014. 
- Metz, Brent (1 May 2006). Ch'orti'-Maya Survival in Eastern Guatemala: Indigeneity in Transition. UNM Press. ISBN 9780826338815 – via Google Books. - "La cara Europea de Guatemala". europaenguatemala.blogspot.com. Retrieved 12 May 2014. - "Retrato de la familia Fagoaga-Arozqueta". electronic magazine Imágenes of the Institute of Aesthetic Research of the National Autonomous University of Mexico. - "Resultados del Modulo de Movilidad Social Intergeneracional" Archived 9 July 2018 at the Wayback Machine, INEGI, 16 June 2017, Retrieved on 30 April 2018. - "Visión INEGI 2021 Dr. Julio Santaella Castell", INEGI, 3 July 2017, Retrieved on 30 April 2018. - "Por estas razones el color de piel determina las oportunidades de los mexicanos" Archived 22 June 2018 at the Wayback Machine, Huffington post, 26 July 2017, Retrieved on 30 April 2018. - "Ser Blanco", El Universal, 6 July 2017, Retrieved on 19 June 2018. - "Comprobado con datos: en México te va mejor si eres blanco", forbes, 7 August 2018, Retrieved on 4 November 2018. - David A. Branding; Woodrow Borah (1975). Mineros y comerciantes en el México borbónico (1763–1810). Fondo de Cultura Económica. p. 150. ISBN 9789681613402. Retrieved 27 January 2018. - "Ser mestizo en la nueva España a fines del siglo XVIII. Acatzingo, 1792", Scielo, Jujuy, November 2000. Retrieved on 1 July 2017. - Federico Navarrete (2016). Mexico Racista. Penguin Random house Grupo Editorial Mexico. p. 86. ISBN 9786073143646. Retrieved 23 February 2018. - Sherburne Friend Cook; Woodrow Borah (1998). Ensayos sobre historia de la población. México y el Caribe 2. Siglo XXI. p. 223. ISBN 9789682301063. Retrieved 12 September 2017. - "Household Mobility and Persistence in Guadalajara, Mexico: 1811–1842, page 62", fsu org, 8 December 2016. Retrieved on 9 December 2018. - Sijia Wang; Nicolas Ray; Winston Rojas; Maria V. Parra; Gabriel Bedoya; Carla Gallo; et al. (21 March 2008). "Geographic Patterns of Genome Admixture in Latin American Mestizos". PLOS Genetics. 4 (3): e1000037. doi:10.1371/journal.pgen.1000037. PMC 2265669. PMID 18369456. Large differences in the variation of individual admixture estimates were seen across populations, with the variance in Native American ancestry between individuals ranging from 0.005 in Quetalmahue to 0.07 in Mexico City (Figure 4, Figure S1, and Table S2), an observation consistent with previous studies... - Fernández, Francisco Lizcano (2005). "Composición Étnica de las Tres Áreas Culturales del Continente Americano al Comienzo del Siglo XXI" [Ethnic Composition of the Three Cultural Areas of the American Continent at the Beginning of the 21st Century] (PDF). Convergencia. Revista de Ciencias Sociales (in Spanish). 12 (38): 169. ISSN 1405-1435. Retrieved 23 August 2017. Al respecto no debe olvidarse que en estos países buena parte de las per so nas consideradas biológicamente blancas son mestizas en el aspecto cultural, el que aquí nos interesa. [In this respect, it should not be forgotten that in these countries a large part of the people considered to be biologically white are mixed in the cultural aspect, which concerns us here.] - "Mexico | History, Geography, Facts, & Points of Interest". Encyclopædia Britannica. Retrieved 17 January 2018. - "Racismo y salud mental en estudiantes universitarios en la Ciudad de México", Scielo, Cuernavaca, April–March 2011. Retrieved on 28 April 2017. - "Stratification by Skin Color in Contemporary Mexico", Jstor org, available creating a free account , Retrieved on 27 January 2018. 
- "Admixture in Latin America: Geographic Structure, Phenotypic Diversity and Self-Perception of Ancestry Based on 7,342 Individuals" table 1, Plosgenetics, 25 September 2014. Retrieved on 9 May 2017. - "Alteraciones cutáneas del neonato en dos grupos de población de México", Scielo, March/April 2005. Retrieved on 18 May 2017. - Miller (1999). Nursing Care of Older Adults: Theory and Practice (3, illustrated ed.). Lippincott Williams & Wilkins. p. 90. ISBN 978-0781720762. Retrieved 17 May 2014. - "Congenital Dermal Melanocytosis (Mongolian Spot): Background, Pathophysiology, Epidemiology". EMedicine.medscape.com. 7 January 2017. Retrieved 8 October 2017. - Lawrence C. Parish; Larry E. Millikan, eds. (2012). Global Dermatology: Diagnosis and Management According to Geography, Climate, and Culture. M. Amer, R.A.C. Graham-Brown, S.N. Klaus, J.L. Pace. Springer Science & Business Media. p. 197. ISBN 978-1461226147. Retrieved 17 May 2014. - "About Mongolian Spot". tokyo-med.ac.jp. Retrieved 1 October 2015. - "Tienen manchas mongólicas 50% de bebés", El Universal, January 2012. Retrieved on 3 July 2017. - Howard F. Cline (1963). THE UNITED STATES AND MEXICO. Harvard University Press. p. 104. ISBN 9780674497061. Retrieved 18 May 2017. - Cuéllar Moreno, Raúl (12 December 2004). "Coahuila y sus Hombres / Los indios bárbaros del norte". Elsiglodetorreon.com (in Spanish). - Avila, Oscar (22 November 2008). "Mexico's insular Mennonites under siege, overlooked: The Tribune's Oscar Avila reports on Mexico's insular and targeted sect". McClatche-Tribune Business News. Washington. p. 8. - "Menonitas que huyeron de Chihuahua ahora alimentan Asia desde Campeche", El Financiero, 1 March 2018. Retrieved on 8 December 2018. - Montagner Anguiano, Eduardo. "El dialecto véneto de Chipilo" [The Venician dialect of Chipilo]. Orbis Latinus (in Spanish). Retrieved 19 July 2011. - Germans: First Arrivals (from the Te Ara: The Encyclopedia of New Zealand) - Taonga, New Zealand Ministry for Culture and Heritage Te Manatu. "4. – History of immigration – Te Ara Encyclopedia of New Zealand". teara.govt.nz. - Taonga, New Zealand Ministry for Culture and Heritage Te Manatu. "5. – History of immigration – Te Ara Encyclopedia of New Zealand". teara.govt.nz. - "Nicaragua". The World Factbook. U.S. Central Intelligence Agency. Retrieved 22 May 2013. - Eddy Kuhl Inmigración centro-europea a Matagalpa, Nicaragua Archived 4 December 2014 at the Wayback Machine Consultado, 05/12/2014. - Revista Vinculado Nicaragua: historia de inmigrantes. De dónde eran y por qué emigraron Retrieved, 05/12/2014. - Thomas McGhee, Charles C., ed. (1989). The plot against South Africa (2nd ed.). Pretoria: Varama. ISBN 978-0-620-14537-4. - Fryxell, Cole. To Be Born a Nation. pp. 9, 327. - Kaplan, Irving. Area Handbook for the Republic of South Africa. pp. 120–166. - Study Commission on U.S. Policy toward Southern Africa (1981). South Africa: Time running out: The report of the Study Commission on U.S. Policy Toward Southern Africa. University of California Press. p. 42. ISBN 978-0-520-04547-7. - Mafika (11 August 2017). "South Africa's population". Brand South Africa. Archived from the original on 21 November 2016. Retrieved 17 January 2018. - Million whites leave SA – study, fin24.com, 24 September 2006 - Kruszelnicki, Karl (March 2001), News in Science: Skin Colour 1 - Bonnet 2000, p. 32 - Bonnet 2000, p. 31 - "Short History of Immigration". BBC News. Retrieved 18 March 2015. - "Culture and Ethnicity Differences in Liverpool – Chinese Community". 
Chambré Hardman Trust. Archived from the original on 24 July 2009. Retrieved 9 March 2015. - Vargas-Silva, Carlos (10 April 2014). "Migration Flows of A8 and other EU Migrants to and from the UK". Migration Observatory, University of Oxford. Retrieved 18 March 2015. - "Ethnic group statistics: A guide for the collection and classification of ethnicity data" (PDF). Office for National Statistics. 2003. p. 9. Retrieved 3 January 2011. - Kissoon, Priya.Asylum Seekers: National Problem or National Solution. 2005. 7 November 2006. - 2011 Census: Ethnic group, local authorities in England and Wales, accessed 13 June 2014. - Table 2 – Ethnic groups, Scotland, 2001 and 2011 Scotlands Census published 30 September 2013, accessed 13 June 2014. - "2011 Census – Key Statistics for Northern Ireland". Northern Ireland Statistics and Research Agency. 11 January 2017. - "Table DC2206NI: National identity (classification 1) by ethnic group". Northern Ireland Statistics and Research Agency. Retrieved 25 October 2016. - "2011 Census: Key Results on Population, Ethnicity, Identity, Language, Religion, Health, Housing and Accommodation in Scotland – Release 2A" (PDF). National Records for Scotland. 26 September 2013. Retrieved 30 September 2013. - "NISRA 2011 Census: Ethnic Group: Accessed 3 June 2013". - Table 1. United States – Race and Hispanic Origin: 1790 to 1990 (pdf). Archived 18 January 2015 at the Wayback Machine - Census 2000 Summary File 1 (SF 1) 100-Percent Data Geographic Area: United States. Factfinder.census.gov. - The White Population: 2000, Census 2000 Brief C2010BR-05., U.S. Census Bureau, September 2011. - The White Population: 2010, Census 2010 Brief C2KBR/01-4, U.S. Census Bureau, August 2001. - Roediger, Wages of Whiteness, 186; Tony Horwitz, Confederates in the Attic: Dispatches from the Unfinished Civil War (New York, 1998). - Tehranian, John (2000). "Performing Whiteness: Naturalization Litigation and the Construction of Racial Identity in America". The Yale Law Journal. 109 (4): 825–27. doi:10.2307/797505. JSTOR 797505. - Armas Kustaa Ensio Holmio, "History of the Finns in Michigan", p. 17 | She had barely reached the front porch when the friend's mother realized that her daughter's playmate was a Finn. Helmi was turned away immediately, and the daughter of the house was forbidden to associate with "that Mongolian". John Wargelin, a pastor of the Evangelical Lutheran Church and a former president of Suomi College, also tells how, when he was a child in Crystal Falls some years earlier, he and his friends were ridiculed and stoned on their way to school. "Because of our strange language," he says, "we were considered an alien race who had no right to settle in this country." - Eric Dregni, Vikings in the attic: In search of Nordic America, p. 176. - John Tehranian, "Performing Whiteness: Naturalization Litigation and the Construction of Racial Identity in America," The Yale Law Journal, Vol. 109, No. 4. (Jan. 2000), pp. 817–48. - United States v. Bhagat Singh Thind, Certificate From The Circuit Court Of Appeals for the Ninth Circuit, No. 202. Argued 11, 12 January 1923. —Decided 19 February 1923, United States Reports, v. 261, The Supreme Court, October Term, 1922, 204–215. - John Tehranian, "Performing Whiteness: Naturalization Litigation and the Construction of Racial Identity in America," The Yale Law Journal, Vol. 109, No. 4. (Jan. 2000), pp. 833–36. 
- John Tehranian, "Performing Whiteness: Naturalization Litigation and the Construction of Racial Identity in America," The Yale Law Journal, Vol. 109, No. 4. (Jan. 2000), pp. 837–39. - "No Middle Eastern Or North African Category On 2020 Census, Bureau Says". NPR.org. Retrieved 16 August 2019. - Frank W Sweet, Legal History of the Color Line: The Rise and Triumph of the One-Drop Rule, Backintyme (3 July 2013), p. 50. - Uniform Crime Reporting Handbook, U.S. Department of Justice. Federal Bureau of Investigation, p. 97 (2004) Archived 3 May 2015 at the Wayback Machine - Anthony Walsh (2004). "Race and crime: a biosocial analysis". Nova Publishers. p. 23. ISBN 1-59033-970-3 - Jeffrey S. Passel and D'Vera Cohn: U.S. Population Projections: 2005–2050. Archived 3 January 2010 at the Wayback Machine Pew Research Center, 11 February 2008. - Bryc, Katarzyna et al. "The genetic ancestry of African, Latino, and European Americans across the United States" 23andme. pp. 22, 38 doi:10.1101/009340. "Supplemental Tables and Figures". p. 42. 18 September 2014. Retrieved 16 July 2015. - Scott Hadly, "Hidden African Ancestry Redux", DNA USA*, 23andMe, 4 March 2014. - "African Ancestry of the White American Population" (pdf). The Ohio Journal of Science, vol. 58, n. 3 (May, 1958), pp. 155–60. - One drop of blood. People.vcu.edu (24 July 1994). - Bryc, Katarzyna; Auton, Adam; Nelson, Matthew R.; Oksenberg, Jorge R.; Hauser, Stephen L.; Williams, Scott; Froment, Alain; Bodo, Jean-Marie; Wambebe, Charles; Tishkoff, Sarah A.; Bustamante, Carlos D.; et al. (2009). "Genome-wide patterns of population structure and admixture in West Africans and African Americans". Proceedings of the National Academy of Sciences of the United States of America. 107 (2): 786–791. Bibcode:2010PNAS..107..786B. doi:10.1073/pnas.0909559107. PMC 2818934. PMID 20080753. - Shriver, Mark D.; et al. (2003). "Skin pigmentation, biogeographical ancestry and admixture mapping" (PDF). Human Genetics. 112 (4): 387–99. doi:10.1007/s00439-002-0896-y. PMID 12579416. Archived from the original (PDF) on 15 April 2012. - Frank W Sweet (2004). "Afro-European Genetic Admixture in the United States: Essays on the Color Line and the One-Drop Rule". Archived from the original on 21 February 2013. Retrieved 11 February 2013. - Debra J. Dickerson: The End of Blackness. Returning the Souls of Black Folk to Their Rightful Owners. Anchor Books, New York and Toronto 2005. - Mariah Carey: 'Not another White girl trying to sing Black.'. Findarticles.com. - Larry King interview with Mariah Carey. Transcripts.cnn.com (19 December 2002). - Cf. Jim Wooten, "Race Reversal Man Lives as ‘Black’ for 50 Years – Then Finds Out He’s Probably Not", ABC News (2004). - "Wayback Machine" (PDF). Census.gov. 20 July 2015. Archived from the original on 20 July 2015. Retrieved 6 November 2017.CS1 maint: BOT: original-url status unknown (link) - "Racial composition data for Puerto Rico: 2000 Census" (PDF). Topuertorico.org. Retrieved 8 October 2017. - Klein, Herbert S. (28 May 2012). A Population History of the United States. ISBN 9781107379206. - How Puerto Rico Became White—University of Wisconsin-Madison Archived 7 February 2012 at the Wayback Machine. (PDF). - "Home". Center for Demography and Ecology. - Representation of racial identity among Island Puerto Ricans. Mona.uwi.edu. - Uruguay (07/08). State.gov (2 April 2012). - CIA – The World Factbook – Uruguay. Cia.gov. - Uruguay – Population. Countrystudies.us. - Publishing, D. K. (17 January 2005). 
Financial Times World Desk Reference 2005. Penguin. ISBN 9780756673093 – via Google Books. - Lesser, Jeff; Rein, Raanan (30 May 2018). Rethinking Jewish-Latin Americans. UNM Press. ISBN 9780826344014 – via Google Books. - "Resultado Básico del XIV Censo Nacional de Población y Vivienda 2011 (Mayo 2014)" (PDF). Ine.gov.ve. p. 29. Retrieved 8 September 2014. - Ine.gob.ve Venezuelan population by 30/Jun/2014 is 30,206,2307 according to the National Institute of Statistics - Godinho, Neide Maria de Oliveira (2008). "O impacto das migrações na constituição genética de populações latino-americanas" (PDF). Universidade de Brasília. Archived from the original on 6 July 2011. Retrieved 29 October 2012. - Tinker-Salas, Miguel (30 May 2018). Venezuela: What Everyone Needs to Know. Oxford University Press. ISBN 9780199783298 – via Google Books. - Allen, Theodore, The Invention of the White Race, 2 vols. Verso, London 1994. - Baum, Bruce David, The rise and fall of the Caucasian race: a political history of racial identity. NYU Press, New York and London 2006, ISBN 978-0-8147-9892-8. - Bonnett, Alastair (2000), White Identities: Historical and International Perspectives, Harlow: Pearson - Brodkin, Karen, How Jews Became White Folks and What That Says About Race in America, Rutgers, 1999, ISBN 0-8135-2590-X. - Coon, Carleton Stevens (1939). The Races Of Europe. New York: The Macmillan Company. - Foley, Neil, The White Scourge: Mexicans, Blacks, and Poor Whites in Texas Cotton Culture (Berkeley: University of California Press, 1997) - Gossett, Thomas F., Race: The History of an Idea in America, New ed. (New York: Oxford University, 1997) - Guglielmo, Thomas A., White on Arrival: Italians, Race, Color, and Power in Chicago, 1890–1945, 2003, ISBN 0-19-515543-2 - Hannaford, Ivan, Race: The History of an Idea in the West (Baltimore: Johns Hopkins University, 1996) - Ignatiev, Noel, How the Irish Became White, Routledge, 1996, ISBN 0-415-91825-1. - Jackson, F. L. C. (2004). Book chapter: Human genetic variation and health: new assessment approaches based on ethnogenetic layering at the Wayback Machine (archived 16 February 2008) British Medical Bulletin 2004; 69: 215–35 doi:10.1093/bmb/ldh012. Retrieved 29 December 2006. - Jacobson, Matthew Frye, Whiteness of a Different Color: European Immigrants and the Alchemy of Race, Harvard, 1999, ISBN 0-674-95191-3. - Oppenheimer, Stephen (2006). The Origins of the British: A Genetic Detective Story. Constable and Robinson Ltd., London. ISBN 978-1-84529-158-7. - Rosenberg NA, Mahajan S, Ramachandran S, Zhao C, Pritchard JK, et al. (2005) Clines, Clusters, and the Effect of Study Design on the Inference of Human Population Structure. PLoS Genet 1(6) e70 doi:10.1371/journal.pgen.0010070 PMID 16355252 - Rosenberg NA, Pritchard JK, Weber JL, Cann HM, Kidd KK, et al. (2002) Genetic structure of human populations. Science 298: 2381–85. Abstract - Segal, Daniel A., Review of Racial Situations: Class Predicaments of Whiteness in Detroit American Ethnologist May 2002, Vol. 29, No. 2, pp. 470–73 doi:10.1525/ae.2002.29.2.470 - Smedley, Audrey, Race in North America: Origin and Evolution of a Worldview, 2nd ed. (Boulder: Westview, 1999). - Tang, Hua., Tom Quertermous, Beatriz Rodriguez, Sharon L. R. Kardia, Xiaofeng Zhu, Andrew Brown, James S. Pankow, Michael A. Province, Steven C. Hunt, Eric Boerwinkle, Nicholas J. Schork, and Neil J. Risch (2005) Genetic Structure, Self-Identified Race/Ethnicity, and Confounding in Case-Control Association Studies Am. J. Hum. Genet. 76:268–75. 
- Wang, Sijia; Ray, Nicolas; Rojas, Winston; Parra, Maria V.; Bedoya, Gabriel; Gallo, Carla; Poletti, Giovanni; Mazzotti, Guido; Hill, Kim (21 March 2008). "Geographic Patterns of Genome Admixture in Latin American Mestizos". PLOS Genetics. 4 (3): e1000037. doi:10.1371/journal.pgen.1000037. ISSN 1553-7404. PMC 2265669. PMID 18369456. |Wikimedia Commons has media related to White people.| |Wikiquote has quotations related to: White people| - The dictionary definition of Wikisaurus:white person at Wiktionary
Merriam-Webster defines the term school choice as an option for students “to attend a school other than their district’s public school.” A growing body of literature shows that parents consider multiple factors when choosing the most appropriate school for their children, including academics, school safety, morals and values, character development, school reputation, and more. Unless parents have other options, however, compulsory education laws mean that where a child is educated is determined by geography alone. Parents may send their children to a private school, but only if they can afford both to pay taxes that support a public school system they will not use and to pay private school tuition. School choice options make alternatives to traditional public schools more affordable by allowing parents to apply their child’s share of public education funding to learning environments that better serve their educational needs. Education policy, including establishing schools, curricula, general requirements for enrollment and graduation, and funding, is primarily under state and local government authority. School choice policies, therefore, are adopted and implemented by state or local government. This Legal Memorandum will first examine the philosophical foundation and historical development of school choice and outline the types of school choice options available today. It will then look at school choice litigation, concluding that school choice options will likely survive legal challenges as advocates work to broaden the availability of these policies.

The Roots of School Choice

The idea of maximizing parental choices in the education of their children has deep philosophical roots. In his 1791 work The Rights of Man, Thomas Paine advocated giving parents money to let them choose the type of education their children receive. Eight decades later, John Stuart Mill similarly advocated for “parents to obtain the education where and how they pleased.” Paine and Mill both explained why parents should have choices in their children’s education and suggested a framework for implementing such a policy. The modern champion of school choice is Milton Friedman, who, as Paine and Mill had done, both explained the philosophical basis for school choice and offered an approach for implementing it. Friedman did this in two important essays and a book that, together, influenced the transition from philosophical ideas to concrete policies. Friedman made his case for school choice in free-market terms. In his 1955 essay, “The Role of Government in Education,” for example, he said that rather than being limited to schools run solely by the government, “parents could express their views about schools directly, by withdrawing their children from one school and sending them to another.” He noted in another essay that “support for free choice of schools has been growing rapidly and cannot be held back indefinitely by the vested interests of the unions and educational bureaucracy.” Friedman proposed an approach in which the government would provide “parents vouchers redeemable for a specified maximum sum per child per year if spent on ‘approved’ educational services.
Parents would then be free to spend this sum and any additional sum on purchasing educational services from an ‘approved’ institution of their own choice.” Vouchers, Friedman argued, “are not an end in themselves; they are a means to make a transition from a government to a market system.”

School Choice Options

This philosophical foundation for school choice has supported the development of concrete policies aimed at giving parents more options for their children’s education, often distinguished by the way schools are funded. Whereas public schools are supported by local, state, and—to a far lesser degree—federal funding, private education choice options include both public and private means of financial support.

Private School Choice. States have adopted private education choice in various forms, including education savings accounts (ESAs), school vouchers, tax-credit ESAs, tax-credit scholarships, and individual tax credits and deductions. Below is a breakdown of private education choice options; a brief sketch of the funding arithmetic follows the list.
- ESAs are government-authorized savings accounts, in which the state deposits a portion of a child’s per-pupil funding from the state education formula into a private account that parents use to purchase education products and services such as private school tuition, tutoring, learning programs, and materials.
- School vouchers pay, in part or in full, for a student to attend a private school: the family receives a voucher representing a portion of the funds that would otherwise have been spent on the child’s public education. Parents can use school vouchers for both religious and non-religious education options.
- Tax-credit ESAs allow taxpayers who donate to nonprofits that fund and manage parent-directed K–12 ESAs to receive full or partial tax credits. In general, and similar to the accounts described above, the funds can be used for various educational needs, including private school tuition, tutors, online learning programs, and higher education expenses. With some exceptions, ESAs and tax-credit ESAs allow parents to save unused funds from year to year.
- Tax-credit scholarships provide full or partial tax credits for donating to nonprofit organizations that provide private school scholarships directly to students.
- Individual tax credits and deductions grant parents state income tax relief for certain education expenses such as tuition, school supplies, tutors, and transportation.
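The funding arithmetic behind these parent-directed options can be made concrete with a short sketch. The numbers below are hypothetical placeholders, not any state’s actual formula: per-pupil amounts, deposit shares, and voucher shares vary widely by program.

```python
# Illustrative arithmetic only (hypothetical numbers): how the parent-directed
# mechanisms above move money. No actual state program is modeled here.

STATE_PER_PUPIL_FUNDING = 10_000  # hypothetical annual per-pupil allocation

def esa_balance(deposit_share: float, yearly_spending: list[float]) -> float:
    """Model an ESA: the state deposits a share of per-pupil funding each year,
    parents direct the spending, and unused funds roll over."""
    balance = 0.0
    for spent in yearly_spending:
        balance += deposit_share * STATE_PER_PUPIL_FUNDING  # annual state deposit
        balance -= min(spent, balance)  # parents cannot spend more than the balance
    return balance  # the remainder carries forward (in some programs, toward college)

def voucher_amount(voucher_share: float) -> float:
    """Model a voucher: a share of the public-school allocation follows the
    child to the school the parents choose, with no year-to-year rollover."""
    return voucher_share * STATE_PER_PUPIL_FUNDING

# A family receiving 90% ESA deposits and spending less than the deposit each
# year accumulates a rollover balance; a 75% voucher simply follows the child.
print(esa_balance(0.9, [7_000, 7_500, 8_000]))  # 4500.0 carried forward
print(voucher_amount(0.75))                     # 7500.0 toward private tuition
```

The rollover line captures the feature noted above that distinguishes ESAs from most vouchers: families can bank unused funds from year to year.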
Public School Choice. Although private school choice options help make it financially possible to educate children outside of the public school system, public school choice diversifies the options within that system. In the traditional public school model, a child’s zip code determines which public school he or she will attend. Public school choice options such as charter schools, magnet schools, and open enrollment schools—which allow students to attend a public school that may be located elsewhere within a child’s school district or outside that district altogether—are described below.
- Charter schools may have a physical location within a school district or, in some states, be operated virtually. Charter schools have their own school boards and are operated by a private entity, such as a nonprofit organization or corporation, according to a contract, or charter, established with the state. That charter outlines the school’s mission and includes operational, programmatic, and accountability requirements that may differ from those of traditional public schools.
- Magnet schools emphasize specialized programs and/or curricula that may be only a small part of a traditional public school’s educational offerings—or may not be available at all. Magnet schools are run by the public school system rather than private entities, but, unlike traditional public schools, typically require an application for admission.
- Open enrollment allows parents to choose a traditional public school other than the one to which their child would otherwise be assigned. Inter-district open enrollment allows attendance at a school in another school district, while intra-district open enrollment allows attendance elsewhere within the district in which a child resides.

School Choice Development in the United States

The importance of public support for education options is not a new concept. The Massachusetts Constitution of 1780, which was largely drafted by John Adams and served as a model for the U.S. Constitution, provides in Chapter V, Section II:

Wisdom, and knowledge, as well as virtue, diffused generally among the body of the people, being necessary for the preservation of their rights and liberties; and as these depend on spreading the opportunities and advantages of education in the various parts of the country, and among the different orders of the people, it shall be the duty of legislatures and magistrates, in all future periods of this commonwealth, to cherish the interests of literature and the sciences, and all seminaries of them; especially the university at Cambridge, public schools, and grammar-schools in the towns; to encourage private societies and public institutions…public and private charity, industry and frugality, honesty and punctuality in their dealings; sincerity, and good humor, and all social affections and generous sentiments, among the people.

Since the Massachusetts Constitution, public support for education has evolved and developed into a variety of education options across the United States. The section below traces a series of milestones in this development.

Vouchers. The first school choice program in America implemented the oldest school choice policy idea, the voucher. Vermont established the Town Tuitioning Program in 1869, the same year that Mill wrote about the subject. Some towns in Vermont lacked an elementary, middle, or high school, and this program gave parents a voucher, in the amount allocated for their student by the state, to use at another school in Vermont, or even in a different state. From the outset, vouchers could be used at both public and private nonsectarian schools. Vermont recently expanded this program following the Supreme Court’s 2022 ruling in Carson v. Makin so that the vouchers can now be used at religious schools.

Magnet Schools. The first magnet school, opened in 1968 in Tacoma, Washington, was an elementary school with the goal of reducing racial isolation. The school focused on “high caliber instruction, resources, and amenities, with an admissions policy based on a system of controlled choice.” It introduced public school choice in Washington by giving students an option to attend a school focused on advanced learning rather than the standardized offerings of their traditional public elementary schools. The magnet school was considered a success and opened the door for other states, including Massachusetts, to launch their own magnet school programs the following year.

Charter Schools.
In 1991, Minnesota became the first state to enact legislation providing for charter schools, with City Academy opening in 1992. Today, 46 states provide for some form of charter schools.

Tax-Credit Scholarships. Arizona pioneered school choice programs, enacting in 1997 the Arizona Individual School Tuition Organization Tax Credit Scholarship program. It provides tax credits for “charitable contributions to school tuition organizations (STOs).” Those STOs, in turn, provide scholarships for attendance at qualified private schools.

Education Savings Accounts. In addition to different forms of vouchers, magnet schools, and charter schools, some states have recently enacted ESAs. In 2011, for example, Arizona launched the Empowerment Scholarship Account program, which allows parents to use money that would have been spent on public education to pay, through an ESA, for a customized educational experience for their children. Unused money can be rolled over each year and used to pay for college. A number of states have adopted similar options; as of 2023, 14 states have enacted ESAs or ESA-style accounts, of which three are tax credit–funded ESAs and six are completely universal. ESAs appear to be following the same path as vouchers, implemented by direct grants to parents or by compensating them for educational expenses.

Tax-Credit ESAs. In 2021, Kentucky established the first tax-credit ESA as part of its Education Opportunity Account Program. This program allows people who donate to groups that fund education savings accounts to receive tax credits for their donations.
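The tax-credit mechanisms described above route money differently from vouchers and state-funded ESAs: a donor gives to a scholarship- or account-granting nonprofit and claims a credit against state income tax, so the donated dollars pass from donor to nonprofit to family rather than through the state treasury. The sketch below illustrates that flow with hypothetical amounts and rates; no actual state’s credit schedule is represented.

```python
# Illustrative arithmetic only (hypothetical rates): how tax-credit scholarships
# and tax-credit ESAs are financed through donations rather than appropriations.

def tax_after_credit(liability: float, donation: float, credit_rate: float) -> float:
    """State income tax owed after a full (rate 1.0) or partial credit for a
    qualifying donation; credits offset liability but not below zero."""
    return max(liability - credit_rate * donation, 0.0)

# Under a hypothetical 100% credit, a $1,000 donation cuts a $4,000 tax bill to
# $3,000, and the nonprofit (not the state) disburses the $1,000 as scholarships
# or ESA deposits. A 50% credit would leave $3,500 owed.
print(tax_after_credit(4_000, 1_000, 1.0))  # 3000.0
print(tax_after_credit(4_000, 1_000, 0.5))  # 3500.0
```

As discussed later in connection with Magee v. Boyd, this distinction between forgone tax revenue and appropriated public funds has proved legally significant.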
Private School Choice Litigation

Alexis de Tocqueville, the French diplomat and political philosopher, famously wrote in 1835 that “there is hardly a political question in the United States that does not sooner or later turn into a judicial one.” Significant social and cultural changes have followed this pattern. Abortion advocates, for example, turned to litigation after having little success persuading state legislatures to change or repeal their long-standing pro-life laws. The Supreme Court’s 1973 decision in Roe v. Wade effectively invalidated those laws, severely limiting legislative efforts to protect the unborn until the Court overruled Roe in 2022. The pattern is similar in the school choice context. The Wall Street Journal declared 2011 the “Year of School Choice” when 13 states enacted school choice legislation. That title was eclipsed in 2021 when “18 states enact[ed] seven new educational choice programs and expand[ed] 21 existing ones.” As of May 2023, 32 states provide some form of private school choice option, including:
- 21 states have tax-credit scholarships;
- 15 states, Puerto Rico, and Washington, D.C., have voucher programs;
- 11 states have education savings accounts;
- Nine states have tax-credit or deduction programs; and
- Three states have tax-credit education savings accounts.

Litigation over private school choice options, which focuses on the inclusion of religious schools, falls into three categories.
- School choice opponents argue that providing for a religious school choice option is an “establishment of religion” prohibited by the First Amendment to the U.S. Constitution.
- Opponents also argue that providing for a religious school choice option violates a ban, appearing in different forms in many state constitutions, on using public funds to aid religious schools.
- School choice supporters have challenged the prohibition of any religious school choice option under these no-aid constitutional provisions, arguing that such provisions violate the First Amendment’s Free Exercise Clause.

The following analysis examines each of these categories, concluding that school choice programs are likely to survive these legal challenges.

Establishment Clause Challenges to School Choice Options

School choice opponents argue that any form of government aid that, even indirectly, benefits a religious school violates the Constitution. Specifically, opponents argue that the Establishment Clause requires excluding religious schools from school choice programs. The Supreme Court has rejected this view. The First Amendment provides: “Congress shall make no law respecting an establishment of religion, or prohibiting the free exercise thereof.” Few areas of Supreme Court jurisprudence are as confusing as its Religion Clause cases. Stanford Law Professor Michael McConnell, a noted First Amendment scholar, writes that “a more confused and often counterproductive mode of interpreting the First Amendment would have been difficult to devise.” Justice Clarence Thomas agrees, writing that “our Establishment Clause jurisprudence is in hopeless disarray” and “in shambles.”

The Metaphorical Wall. The Supreme Court has chosen, and then abandoned, several different approaches to interpreting and applying the Establishment Clause. The first was simply a metaphor. In a letter to the Danbury, Connecticut, Baptist Association dated January 1, 1802, President Thomas Jefferson stated the view that the First Amendment built “a wall of separation between Church & State.” The Supreme Court mentioned this metaphor only once, in 1872, before making it First Amendment doctrine in its 1947 decision in Everson v. Board of Education. Quoting from that single precedent, Justice Hugo Black wrote for the majority that the Establishment Clause “was intended to erect ‘a wall of separation between Church and State’” that “must be kept high and impregnable. We could not approve the slightest breach.” Especially as an interpretation of the Constitution, however, this metaphorical wall could not bear any weight. Its multiple problems included: (1) Jefferson used the metaphor in expressing his personal views—not as a substantive interpretation of the Establishment Clause; (2) Jefferson expressed those views in 1802, more than a decade after the First Amendment had been ratified; (3) Jefferson was not involved in writing either the Constitution or the Bill of Rights because he was the United States Minister to France at the time; and (4) a mere metaphor cannot yield a sound interpretation capable of consistent application. The wall of separation began a long and slow crumble almost immediately after the Supreme Court built it.
- Writing just one year after the wall went up, Justice Robert Jackson warned against judges using “our own prepossessions” to interpret the Establishment Clause. In doing so, “we are likely to make the legal ‘wall of separation between church and state’ as winding as the famous serpentine wall designed by Mr.
Jefferson for the University he founded.”
- Justice Potter Stewart wrote in 1962 that constitutional adjudication “is not responsibly aided by the uncritical invocation of metaphors like ‘the wall of separation,’ a phrase nowhere to be found in the Constitution.”
- Justice William Rehnquist wrote in 1985 that “[i]t is impossible to build sound constitutional doctrine upon a mistaken understanding of constitutional history, but unfortunately the Establishment Clause has been expressly freighted with Jefferson’s misleading metaphor for nearly 40 years.”

The Lemon Test. Twenty-four years after building the wall of separation, the Supreme Court created a new test for identifying Establishment Clause violations. Lemon v. Kurtzman involved Establishment Clause challenges to two state laws. A Rhode Island statute authorized a salary supplement for teachers of secular subjects in private schools. A Pennsylvania statute reimbursed private schools for secular educational services such as teacher salaries and instructional materials. In both cases, the large majority of eligible schools were religiously affiliated. The Supreme Court announced a new three-part test for analyzing whether a law violates the Establishment Clause: “First, the statute must have a secular legislative purpose; second, its principal or primary effect must be one that neither advances nor inhibits religion; finally, the statute must not foster ‘an excessive government entanglement with religion.’” The Court found that both the Rhode Island and Pennsylvania statutes were unconstitutional because they violated the second Lemon prong by impermissibly advancing religion.

Applying Lemon to School Choice. Thus begins the story of school choice litigation. In the first chapter, the Supreme Court applied Lemon to strike down government programs that provided aid directly to religious schools. Committee for Public Education and Religious Liberty v. Nyquist, decided two years after Lemon, involved a challenge to a New York state law that created three aid programs for private schools. These included direct grants to schools for the maintenance and repair of facilities and equipment, reimbursement to parents for a portion of private school tuition, and a tax benefit for those who did not qualify for the reimbursement. The Supreme Court held that all three programs violated one or more parts of the Lemon test. The maintenance and repair program “inevitably…subsidize[d] and advance[d] the religious mission of sectarian schools.” Even though the tuition reimbursement went to parents instead of directly to schools, a distinction that would become more important in later cases, “the effect of the aid is unmistakably to provide desired financial support for nonpublic, sectarian institutions.” And the tax benefit program suffered the same fate for the same reason. Aguilar v. Felton challenged a federal program that allowed funds to be used to pay the salaries of public school teachers who provided remedial instruction in private schools. Those instructional activities used government-provided materials and equipment, minimized interaction with private school personnel, and eliminated any visible religious symbols. The Supreme Court held that, even with these safeguards, this program failed the Lemon test because the necessary supervision and management amounted to “excessive entanglement of church and state.” It appeared that the effort to avoid advancing religion, Lemon’s second prong, required entanglement that violated Lemon’s third prong.
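For readers tracking the prongs, the Lemon test can be restated schematically as a three-part checklist, as in the sketch below. This is a reading aid only, not legal doctrine: courts applied the prongs through case-by-case judgment rather than mechanical flags, and, as discussed below, the Court later abandoned the test altogether.

```python
# Schematic restatement of the three Lemon prongs. The boolean inputs are
# simplifications; real cases turned on contested, fact-bound judgments.

def violates_lemon(has_secular_purpose: bool,
                   effect_advances_or_inhibits_religion: bool,
                   excessive_entanglement: bool) -> bool:
    """A statute failed the Lemon test if it flunked any one prong."""
    return (not has_secular_purpose                   # prong one: purpose
            or effect_advances_or_inhibits_religion   # prong two: primary effect
            or excessive_entanglement)                # prong three: entanglement

# The bind Aguilar illustrates: supervising aid closely enough to satisfy prong
# two can itself create the entanglement that fails prong three.
print(violates_lemon(True, False, True))  # monitored program fails on entanglement
print(violates_lemon(True, True, False))  # unmonitored program fails on effects
```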
In a second group of decisions, the Supreme Court applied Lemon but upheld programs that provided aid to parents rather than directly to religious schools. Mueller v. Allen involved a challenge to a 1955 Minnesota law that allowed taxpayers to claim a state income tax deduction for a portion of expenses incurred in educating their children. Applying Lemon, the Supreme Court held, 5–4, that this tax deduction program did not violate the Establishment Clause. The Court emphasized the distinction between “public funds…available only as a result of numerous, private choices of individual parents of school-age children” and “direct transmission of assistance from the state to the schools themselves.” Witters v. Washington Dept. of Services for the Blind involved a challenge to a state agency policy excluding religious schools from a program providing vocational rehabilitation assistance. The Washington constitution provided that “no public money or property shall be appropriated or applied to any religious worship, exercise or instruction, or the support of any religious establishment.” Citing this provision, the Washington Commission for the Blind had a policy “forbid[ding] the use of public funds to assist an individual in the pursuit of a career or degree in theology or related areas.” On this basis, the Commission denied Larry Witters’ application for vocational rehabilitation assistance because he was preparing for the ministry at a private Christian college. The Supreme Court unanimously reversed, holding that the Washington program provides assistance “directly to the student” rather than as a “direct subsidy to the religious school.” As a result, aid reaches a religious school “only as a result of the genuinely independent and private choices of aid recipients” and “the link between the State and [a religious] school [is] highly attenuated.”

Lemon’s Demise. The Supreme Court’s 1993 decision in Lamb’s Chapel v. Center Moriches Union Free School District showed how badly Lemon had distorted Establishment Clause jurisprudence. This case involved a challenge to a school district policy allowing school facilities to be used for “social, civic, or recreational” but not “religious” purposes. The school district argued that allowing any religious use of its facilities would violate the Establishment Clause. Lemon’s “effects” prong, in particular, had led to the unusual argument that the Establishment Clause effectively required violating the Free Exercise Clause by categorically discriminating against religious organizations. Justice Antonin Scalia opened his concurring opinion with this indictment of Lemon:

As to the Court’s invocation of the Lemon test: Like some ghoul in a late-night horror movie that repeatedly sits up in its grave and shuffles abroad, after being repeatedly killed and buried, Lemon stalks our Establishment Clause jurisprudence…. When we wish to strike down a practice it forbids, we invoke it…when we wish to uphold a practice it forbids, we ignore it entirely…. Such a docile and useful monster is worth keeping around, at least in a somnolent state; one never knows when one might need him.

In Zelman v. Simmons-Harris, the Supreme Court applied these precedents in the contemporary school choice context. After a federal judge in 1995 placed the Cleveland, Ohio, public school district under state control, an audit found that it had failed to meet any of the state’s 18 standards for minimum acceptable performance.
The Ohio legislature responded by enacting the Pilot Project Scholarship Program to provide tuition aid to parents in a school district, like Cleveland, that had been placed by court order under the state’s supervision and operational management. In this program, where the aid is spent “depends solely upon where parents who receive tuition aid choose to enroll their child.” The program included all private schools within a designated district that met state educational and non-discrimination standards, more than 80 percent of which at the time had a religious affiliation. Applying Lemon, the U.S. Court of Appeals for the Sixth Circuit held that the program violated the Constitution because it “has the primary effect of advancing religion, and…constitutes an endorsement of religion and sectarian education in violation of the Establishment Clause.” The Supreme Court reversed. Writing for the majority, Chief Justice Rehnquist noted that the Court’s precedents “have drawn a consistent distinction between government programs that provide aid directly to religious schools…and programs of true private choice, in which government aid reaches religious schools only as a result of the genuine and independent choices of private individuals.” Such a program of “true private choice,” Rehnquist explained, breaks “the circuit between government and religion.”

Lemon has now gone the way of the “wall of separation.” As problems with the wall mounted, the Supreme Court soon described it as merely a “useful signpost.” Just two years after Lemon appeared, the Court similarly described its three prongs as “no more than helpful signposts.” By 2019, the Court acknowledged that the Lemon test had been “harshly criticized by Members of this Court, lamented by lower court judges, and questioned by a diverse roster of scholars.” Three years later, in Kennedy v. Bremerton School District, the Court stated that it had “long ago abandoned Lemon,” and, in Groff v. DeJoy, simply declared Lemon “abrogated.” Despite the confusion and jurisprudential detours, these Supreme Court decisions at least establish that school choice programs, such as tuition assistance through vouchers or tax benefits, that include religious schools do not violate the Establishment Clause if the aid or benefit is provided to parents, reaching a religious school solely by their independent and private decisions.

State Constitution Challenges to School Choice Options

Thirty-eight state constitutions include a provision that, in different forms, explicitly prohibits or restricts government aid to religious schools or institutions. School choice opponents argue that providing for a religious school choice option violates these no-aid constitutional provisions. This category of litigation, however, is complicated by the fact that no-aid provisions, like school choice options, come in different forms. As a result, it may be possible for state legislatures to craft a religious school choice option in a way that can pass muster under a particular no-aid provision.

No-Aid Provisions Were “Born of Bigotry.” Using state constitutions to exclude religious schools from government benefit programs was a strategy “born of bigotry.” Professor, and later U.S. Circuit Judge, Jay Bybee and author David Newton explain that its roots lay in anti-Catholic cultural and political prejudice dating back to the early 19th century. At its founding, the United States was “overwhelmingly Protestant,” and religious tolerance often did not extend to other religious bodies or faiths.
As a result, Catholics viewed public schools as Protestant schools because they "routinely required pupils to pray, sing hymns, and read from the Bible." By the mid-19th century, however, Catholics' share of the American population, as well as their political influence, was on the rise. "Perhaps the greatest source of friction between the Protestant majority and the Catholic minority," Bybee and Newton write, "was the public school system." Catholic demands for "public funding for their own schools" only intensified as states, starting with Massachusetts in 1852, began enacting compulsory education laws. The Protestant reaction included 19 states amending their constitutions between 1835 and 1875 to prohibit government aid or support for "sectarian" institutions. In 1838, for example, Florida added a constitutional provision which today reads: "No revenue of the state or any political subdivision or agency thereof shall ever be taken from the public treasury directly or indirectly in aid of any church, sect, or religious denomination or in aid of any sectarian institution." "Not One Dollar" for Sectarian Schools. This movement to change state constitutions gained momentum after Congress nearly proposed a similar exclusionary amendment to the U.S. Constitution. Members of Congress had begun calling for such a constitutional amendment in 1871, and the effort drew national attention four years later. In a September 1875 speech, President Ulysses S. Grant called for "free schools" and argued that "not one dollar, appropriated for their support, [should] be appropriated to the support of any sectarian schools." Neither "the State nor Nation [should] support institutions of learning other than those sufficient to afford to every child growing up in the land the opportunity of a good common school education, unmixed with sectarian, pagan, or atheistical dogmas." Grant took the next step in his final address to Congress on December 7, 1875. He recommended amending the U.S. Constitution to require that states "maintain free public schools…forbidding the teaching in said schools of religious, atheistic, or pagan tenets; and prohibiting the granting of any school funds or school taxes, or any part thereof…for the benefit or in aid, directly or indirectly, of any religious sect or denomination." A few days later, Representative James G. Blaine (R–ME), who had been Speaker of the House for the previous six years, introduced a resolution to amend the Constitution along these lines. It would prohibit "money raised by taxation in any State for the support of public schools or derived from any public funds therefor" from "ever be[ing] under the control of any religious sect or denomination." In August 1876, the House of Representatives voted 180–7 in favor of this resolution, far more than the two-thirds required by the Constitution for proposing constitutional amendments, but the resolution fell four votes short of that threshold in the Senate. Even though the movement began before Blaine's campaign, these no-aid provisions are often referred to collectively as "Blaine amendments." Blaine Amendment Variations. On the surface, Blaine amendments appear to be absolute or all-encompassing prohibitions on aid to religious schools. A careful look at a particular no-aid provision's text, though, might reveal some space that a current or future religious school choice option might occupy in order to pass constitutional muster.
As an example, consider the text of Blaine’s own amendment that nearly went before the states for ratification. It would apply only to “money raised…for the support of public schools.” This language suggests that a state government could, separately or independently from public school funding, appropriate money that could, in some way, benefit or aid religious schools. Similar scrutiny of no-aid provisions in state constitutions reveals that they use different language regarding several common features. First, Blaine amendments identify the government funds or resources subject to religious exclusion. Some, for example, use narrow language similar to Blaine’s own proposal, such as “money raised for the support of the public schools” or “funds for educational purposes.” A restriction on money intended for public schools may not foreclose the legislature separately appropriating money to provide a religious school choice option. The West Virginia Supreme Court, for example, has held that the state constitution’s requirement that “free schools” be supported through an “invested school fund” does not contain “any prohibition on the Legislature using general revenue funds to support [other] educational initiatives.” Other Blaine amendments use broader language such as “any public fund or moneys whatever,” “money from the treasury,” or simply “public funds” or “public money.” Even this seemingly comprehensive language, however, may not foreclose all religious school choice options. The Nevada Constitution, for example, provides that “[n]o public funds of any kind or character whatever…shall be used for sectarian purpose.” In Schwartz v. Lopez, however, the Nevada Supreme Court held that funds deposited in an ESA established by parents to pay for their child’s educational expenses “are no longer public funds” but “belong to the parents.” Sectarian Exclusion. The second feature of state Blaine amendments is the government action being prohibited, with 24 of them applying their sectarian exclusion to money or funds that are “appropriated” or “drawn.” While this might apply to voucher programs that involve money appropriated for grants, the most common school choice program instead utilizes tax benefits such as deductions or credits. The Alabama Constitution provides that “[n]o money raised for the support of the public schools shall be appropriated to or used for the support of any sectarian or denominational school.” The Alabama Accountability Act provided tax credits for parents living in a “failing school” zone “who choose to send their children to a nonpublic school or a nonfailing public school.” In Magee v. Boyd, the Alabama Supreme Court held that, by utilizing tax credits rather than appropriated funds, the state was not actually collecting income tax and spending that revenue to help private schools. “The tax credit…merely allows the taxpayers to retain more of their earned income as an incentive to contributing to scholarship-granting organizations.” As such, tax credits did not constitute appropriations for purposes of the state’s Blaine amendment. Similarly, the Arizona Constitution provides that “[n]o public money…shall be appropriated for or applied to…the support of any religious establishment” and that “[n]o…appropriation of public money [shall be] made in aid of any church, or private or sectarian school.” In Kotterman v. 
Killian, however, the Arizona Supreme Court held that because “no money ever enters the state’s control as a result of this tax credit…we are not here dealing with ‘public money.’” In addition, the court held that a tax credit is not the same as an appropriation simply because it “diverts…funds that would otherwise be state revenue…. It does not follow…that reducing a taxpayer’s liability is the equivalent of spending a certain sum of money.” In Arizona Christian School Tuition Organization v. Winn, the U.S. Supreme Court similarly explained that while “tax credits and governmental expenditures can have similar economic consequences,” utilizing tax credits means that “taxpayers…spend their own money, not money the State has collected from…other taxpayers.” The Kentucky Supreme Court, however, has come to the opposite conclusion on this issue. The Kentucky Constitution provides that “any sum which may be produced by taxation or otherwise for purposes of common school education, shall be appropriated to the common schools, and to no other purpose.” The Kentucky Supreme Court held that a school choice program utilizing tax credits violated this provision. “Taxpayers who owe Kentucky income tax owe real dollars to the state and when they are not required to pay those real dollars in the first instance or have them refunded because [a] tax credit reduces or eliminates their tax bill, the public treasury is diminished and the Commonwealth and other taxpayers must subsidize that taxpayer’s personal choice to send money…for use at nonpublic schools.” The Kentucky court declined to follow the reasoning in Magee or Kotterman because the tax credits available in the Alabama and Arizona programs were “de minimis compared to the significant credits available…under the Kentucky [program].” The court did not explain how the potential amount of a tax credit—rather than its nature or operation—determined whether it constituted a “sum” within the meaning of the Kentucky Constitution. Government Objective. The third feature of state Blaine amendments is the purpose for which the prohibited government action would be taken. Examples of narrow language in this category include “for support of” or “for the use of” religious schools, while broader language would include “for the benefit of.” In either case, the language suggests a legislative intention to help or benefit religious schools. The previous discussion of how the Establishment Clause applies to school choice is relevant here. Programs that, for example, give grants or reimbursements directly to religious schools might be said to have the purpose of supporting or benefitting those religious schools. Voucher or tax benefit programs, as well as educational savings accounts, however, benefit parents by making a broader range of educational choices more affordable. Those programs would have the same effect even if every parent used the assistance at secular private schools. At the same time, parents who would use that assistance at a religious school would be primarily motivated by the educational benefit for their children rather than by the school itself. Nonetheless, even if some parents wanted to benefit a religious school by using the tuition assistance they received at that institution, that purpose cannot be attributed to the government and, therefore, should not run afoul of a Blaine amendment. 
Free Exercise Clause Challenges to Exclusionary School Choice Options The previous section explored how Blaine amendments do not necessarily block every effort to provide religious school choice options. The third category of school choice litigation challenges these exclusionary constitutional provisions themselves. Three Supreme Court decisions have invalidated such exclusions and, in doing so, severely undermined the constitutionality of all Blaine amendments. Trinity Lutheran v. Comer. In Trinity Lutheran Church of Columbia v. Comer, a church applied for a reimbursement grant from the Missouri Department of Natural Resources for the cost of resurfacing its learning center playground. Under the program's objective criteria, the church ranked fifth out of 44 applicants in 2012, when the government awarded 14 grants. The government, however, categorically excluded churches or religious schools from the program under the Missouri Constitution's Blaine amendment, which prohibited any "money…taken from the public treasury, directly or indirectly, in aid of any church, sect or denomination of religion." The Supreme Court made clear that the "express discrimination against religious exercise here is not the denial of a grant, but rather the refusal to allow the Church—solely because it is a Church—to compete with secular organizations for a grant…. Here there is no question that Trinity Lutheran was denied a grant simply because of what it is—a church…. The rule is simple: No churches need apply." Requiring a church to "renounce its religious character in order to participate in an otherwise generally available public benefit program, for which it is fully qualified…imposes a penalty on the free exercise of religion that must be subjected to the 'most rigorous' scrutiny." Under this "stringent standard," the Court held, "only a state interest 'of the highest order' can justify the Department's discriminatory policy." The government, however, "offers nothing more than Missouri's policy preference for skating as far as possible from religious establishment concerns." Trinity Lutheran thus rejected the notion that the Establishment Clause somehow requires violating the Free Exercise Clause. Espinoza v. Montana. Espinoza v. Montana Department of Revenue involved that state's Blaine amendment, which prohibited any public entity from "mak[ing] any direct or indirect appropriation or payment from any public fund or monies…for any sectarian purpose." The legislature enacted a school choice program that granted tax credits for contributions to organizations that award scholarships for private school tuition. The Department of Revenue asserted that the state constitution's Blaine amendment required categorically excluding religious schools from the program. Citing Trinity Lutheran, the Supreme Court held that this exclusion violated the Free Exercise Clause. The Montana Constitution, the Court held, "discriminates based on religious status just like the Missouri policy in Trinity Lutheran." The majority concluded that "[a] State need not subsidize private education. But once a state decides to do so, it cannot disqualify some private schools solely because they are religious." In a concurring opinion, Justice Samuel Alito explained how bigotry and prejudice not only explain the origin of these Blaine amendments, but might also be relevant to their constitutionality. He referenced Ramos v.
Louisiana, a case in which the Supreme Court held that laws in Louisiana and Oregon allowing non-unanimous verdicts in criminal cases violated the Sixth Amendment. In that case, the Court explained that "[t]hough it's hard to say why these laws persist, their origins are clear" as part of a strategy to undermine African American participation on juries. Alito dissented in Ramos, disagreeing that such discriminatory origins were relevant to those statutes' constitutionality. In his Espinoza concurrence, however, Alito wrote that, on that point, "I lost and Ramos is now precedent. If the original motivation for the laws mattered there, it certainly matters here." Carson v. Makin. Carson v. Makin involved a Maine school choice program that categorically excluded sectarian schools. The Maine Constitution requires towns to make "suitable provision, at their own expense, for the support and maintenance of public schools," and a Maine statute requires providing every school-age child in the state "an opportunity to receive the benefits of a free public education." Because many of Maine's school administrative units do not operate a public secondary school of their own, however, the state legislature enacted a program for paying the tuition "at the public or the approved private school of the parent's choice at which the student is accepted." The program has no geographic limitation and, while "approved" private schools include single-sex schools, the program expressly excludes sectarian schools. The Supreme Court reiterated that "a State violates the Free Exercise Clause when it excludes religious observers from otherwise available public benefits." Citing the application of this "unremarkable" principle in Trinity Lutheran and Espinoza, the Supreme Court came to the same conclusion in this case. The Court also rejected the distinction between discrimination against a religious school based on its "religious character" and whether it would put public assistance to a "religious use." Either way, using such a religion-based criterion to condition availability of a widely available public benefit unconstitutionally burdens the exercise of religion. Finally, as it had in Lamb's Chapel and in Trinity Lutheran, the Court again rejected the idea that religious discrimination that violates the Free Exercise Clause is, in some way, necessary to comply with the Establishment Clause. The Supreme Court, therefore, has signaled in three different ways that Blaine amendments violate the Free Exercise Clause. - The Court has consistently come to that conclusion in individual cases. - The Court held in Ramos v. Louisiana that the discriminatory origin of laws can undermine their constitutionality. - In deciding these cases, the Court has consistently applied the principle that excluding churches or schools from even being eligible for generally available public benefits solely because they are religious—the objective of every Blaine amendment—violates the Free Exercise Clause. The principle that parents should be able to choose the best education for their children has long and deep roots. In the United States, nearly every state has implemented that principle by providing some form of school choice option. After failing to prevent legislative enactment of these programs, school choice opponents have turned to the courts either to eliminate all school choice options or, at least, to make religious school choice impossible. These litigation strategies will likely fail.
The Supreme Court has held that programs in which government assistance is provided to parents, rather than directly to religious schools, do not violate the First Amendment's Establishment Clause. At the state level, while 38 states have Blaine amendments in their constitutions, careful attention to their wording and judicial interpretations may identify how a school choice program can be crafted to withstand scrutiny while those provisions remain. Blaine amendments were "born of bigotry" and, hopefully, will be eliminated through individual or collective invalidation by the Supreme Court or repeal by each state's citizens. And, finally, the Supreme Court has been clear that school choice programs that categorically exclude religious schools violate the Free Exercise Clause. These legal challenges should not stop school choice advocates from working to expand ways to give parents more choice in their children's education. Thomas Jipping is a Senior Legal Fellow in the Edwin Meese III Center for Legal and Judicial Studies at The Heritage Foundation. Caroline Heckman is an Administrative Assistant for the Institute for Constitutional Government at The Heritage Foundation.
3.3 Using ratio tables One way to support learners in developing their own mental strategies for solving proportion problems is through the use of ratio tables. Ratio tables are a way to symbolise the problem and can support learners in finding strategies for solution. They encourage approaches such as halving, doubling, and multiplying by 10. Activity 20 Reflecting Two learners, Sophie and Alejandro, have used ratio tables to work on the problem shown below. They have each taken a slightly different approach. Have a look at their workings below. Can you identify each of their strategies? Problem: Seedling plants come in boxes of 35 plants. How many plants would be in 16 boxes? Sophie's working: - Repeated doubling to get to 16 boxes. This works because 16 is a power of 2. If the question had asked for 15 boxes, she could use the same strategy but then subtract one lot of 35. Alejandro's working: - Multiply by 10. - Separately, also multiply the original amount by 2, then multiply by 3 to get 6 lots. - Add 10 lots and 6 lots to get 16. Alejandro has used a building-up strategy. He did not just use a scalar multiplier. Instead, he had to find different parts and sum these. This is sometimes known as the addition and scaling method. In the example above, Alejandro was given the unit amount (that 1 box contained 35 seedlings). This meant that he only needed to use multiplication. In the second example below, the unit cost is not given, so some division, as well as multiplication, is required. Activity 21 Proportion problem Think about how you might approach this problem before reading the learner's response below. Problem: Mangoes are 2 for £3. How many could you buy for £7.50? This problem is slightly different from the previous examples because it is not as clear how to get from £3 to £7.50 as it was to get from 1 to 16. Because £7.50 is not a whole-number multiple of £3, it cannot be made using either Sophie's or Alejandro's strategy alone. - Double to get £6. - Need £1.50 more, so adding another £3 will be too much. - Halve the original amount to get £1.50. - Add £1.50 (half a lot, 1 mango) to £6 (2 lots, 4 mangoes) to get £7.50, which buys 5 mangoes.
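The strategies above can be checked with plain arithmetic. The short Python sketch below is not part of the activity; the comments map each step to a row of a ratio table.

```python
# Sophie: repeated doubling (works because 16 is a power of 2).
plants = 35        # 1 box
plants *= 2        # 2 boxes  -> 70
plants *= 2        # 4 boxes  -> 140
plants *= 2        # 8 boxes  -> 280
plants *= 2        # 16 boxes -> 560
print(plants)      # 560 plants in 16 boxes

# Alejandro: build up from easy parts (the addition and scaling method).
ten_lots = 35 * 10           # 10 boxes -> 350
six_lots = (35 * 2) * 3      # double, then take three of those: 6 boxes -> 210
print(ten_lots + six_lots)   # 10 + 6 = 16 boxes -> 560

# Mangoes, 2 for £3: double to £6 (4 mangoes), halve £3 to £1.50 (1 mango).
mangoes = 2 * 2 + 2 / 2      # £6 + £1.50 = £7.50
print(mangoes)               # 5.0 mangoes for £7.50
```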
In the unlikely event of a volcanic supereruption at Yellowstone National Park, the northern Rocky Mountains would be blanketed in meters of ash, and millimeters would be deposited as far away as New York City, Los Angeles and Miami, according to a new study. An improved computer model developed by the study's authors finds that the hypothetical, large eruption would create a distinctive kind of ash cloud known as an umbrella, which expands evenly in all directions, sending ash across North America. [Figure: An example of the possible distribution of ash from a month-long Yellowstone supereruption. The distribution map was generated by a new model developed by the U.S. Geological Survey using wind information from January 2001.] Ash distribution will vary depending on cloud height, eruption duration, diameter of volcanic particles in the cloud, and wind conditions, according to the new study. A supereruption is the largest class of volcanic eruption, during which more than 1,000 cubic kilometers (240 cubic miles) of material is ejected. If such a supereruption were to occur, which is extremely unlikely, it could shut down electronic communications and air travel throughout the continent, and alter the climate, the study notes. A giant underground reservoir of hot and partly molten rock feeds the volcano at Yellowstone National Park. It has produced three huge eruptions about 2.1 million, 1.3 million and 640,000 years ago. Geological activity at Yellowstone shows no signs that volcanic eruptions, large or small, will occur in the near future. The most recent volcanic activity at Yellowstone—a relatively non-explosive lava flow at the Pitchstone Plateau in the southern section of the park—occurred 70,000 years ago. Researchers at the U.S. Geological Survey used a hypothetical Yellowstone supereruption as a case study to run their new model that calculates ash distribution for eruptions of all sizes. The model, Ash3D, incorporates data on historical wind patterns to calculate the thickness of ash fall for a supereruption like the one that occurred at Yellowstone 640,000 years ago. The new study provides the first quantitative estimates of the thickness and distribution of ash in cities around the U.S. if the Yellowstone volcanic system were to experience this type of huge, yet unlikely, eruption. Cities close to the modeled Yellowstone supereruption could be covered by more than a meter (a few feet) of ash. There would be centimeters (a few inches) of ash in the Midwest, while cities on both coasts would see millimeters (a fraction of an inch) of accumulation, according to the new study that was published online today in Geochemistry, Geophysics, Geosystems, a journal of the American Geophysical Union. The paper has been made available at no charge at http://onlinelibrary.wiley.com/doi/10.1002/2014GC005469/abstract. The model results help scientists understand the extremely widespread distribution of ash deposits from previous large eruptions at Yellowstone. Other USGS scientists are using the Ash3D model to forecast possible ash hazards at currently restless volcanoes in Alaska.
Unlike smaller eruptions, whose ash deposition looks roughly like a fan when viewed from above, the spreading umbrella cloud from a supereruption deposits ash in a pattern more like a bull’s eye – heavy in the center and diminishing in all directions – and is less affected by prevailing winds, according to the new model. “In essence, the eruption makes its own winds that can overcome the prevailing westerlies, which normally dominate weather patterns in the United States,” said Larry Mastin, a geologist at the USGS Cascades Volcano Observatory in Vancouver, Washington, and the lead author of the new paper. Westerly winds blow from the west. “This helps explain the distribution from large Yellowstone eruptions of the past, where considerable amounts of ash reached the west coast,” he added. The three large past eruptions at Yellowstone sent ash over many tens of thousands of square kilometers (thousands of square miles). Ash deposits from these eruptions have been found throughout the central and western United States and Canada. Erosion has made it difficult for scientists to accurately estimate ash distribution from these deposits. Previous computer models also lacked the ability to accurately determine how the ash would be transported. Using their new model, the study’s authors found that during very large volcanic eruptions, the expansion rate of the ash cloud’s leading edge can exceed the average ambient wind speed for hours or days depending on the length of the eruption. This outward expansion is capable of driving ash more than 1,500 kilometers (932 miles) upwind – westward — and crosswind – north to south — producing a bull’s eye-like pattern centered on the eruption site. In the simulated modern-day eruption scenario, cities within 500 kilometers (311 miles) of Yellowstone like Billings, Montana, and Casper, Wyoming, would be covered by centimeters (inches) to more than a meter (more than three feet) of ash. Upper Midwestern cities, like Minneapolis, Minnesota, and Des Moines, Iowa, would receive centimeters (inches), and those on the East and Gulf coasts, like New York and Washington, D.C. would receive millimeters or less (fractions of an inch). California cities would receive millimeters to centimeters (less than an inch to less than two inches) of ash while Pacific Northwest cities like Portland, Oregon, and Seattle, Washington, would receive up to a few centimeters (more than an inch). Even small accumulations only millimeters or centimeters (less than an inch to an inch) thick could cause major effects around the country, including reduced traction on roads, shorted-out electrical transformers and respiratory problems, according to previous research cited in the new study. Prior research has also found that multiple inches of ash can damage buildings, block sewer and water lines, and disrupt livestock and crop production, the study notes. The study also found that other eruptions – powerful but much smaller than a Yellowstone supereruption — might also generate an umbrella cloud. “These model developments have greatly enhanced our ability to anticipate possible effects from both large and small eruptions, wherever they occur,” said Jacob Lowenstern, USGS Scientist-in-Charge of the Yellowstone Volcano Observatory in Menlo Park, California, and a co-author on the new paper. The American Geophysical Union is dedicated to advancing the Earth and space sciences for the benefit of humanity through its scholarly publications, conferences, and outreach programs. 
The study: L. G. Mastin, A. R. Van Eaton, and J. B. Lowenstern, "Modeling ash fall distribution from a Yellowstone supereruption," Geochemistry, Geophysics, Geosystems (2014), http://onlinelibrary.wiley.com/doi/10.1002/2014GC005469/abstract. Mastin and Van Eaton are at the U.S. Geological Survey Cascades Volcano Observatory, Vancouver, Washington (Van Eaton also at the School of Earth and Space Exploration, Arizona State University, Tempe, Arizona); Lowenstern is at the U.S. Geological Survey Yellowstone Volcano Observatory, Menlo Park, California.
Gross Domestic Product and How It Affects You Gross domestic product is the total value of everything produced in the country. It doesn't matter if it's produced by citizens or foreigners. If they are located within the country's boundaries, their production is included in GDP. To avoid double-counting, GDP includes the final value of the product, but not the parts that go into it. For example, a U.S. footwear manufacturer uses laces and other materials made in the United States. Only the value of the shoe gets counted; the shoelace does not. In the United States, the Bureau of Economic Analysis measures GDP quarterly. Each month, it revises the quarterly estimate as it receives updated data. The components of GDP include personal consumption expenditures plus business investment plus government spending plus (exports minus imports). Now that you know what the components are, it's easy to calculate a country's gross domestic product using this standard formula: C + I + G + (X - M). When economists talk about the "size" of an economy, they are referring to GDP. There are many different ways to measure a country's GDP. It's important to know all the different types and how they are used. Nominal GDP: This is the raw measurement that includes price increases. In 2018, nominal U.S. GDP was $20.494 trillion. Real GDP: To compare GDP by year, the BEA removes the effects of inflation. Otherwise, it might seem like the economy is growing when really it's suffering from double-digit inflation. The BEA calculates real GDP by using a price deflator. It tells you how much prices have changed since a base year. The BEA divides nominal GDP by the deflator to get real GDP. The BEA makes the following three important distinctions: - Income from U.S. companies and people from outside the country is not included. That removes the impact of exchange rates and trade policies. - The effects of inflation are taken out. - Only the final product is counted. For example, it counts the value of a new car engine only after it's assembled in the vehicle. Real GDP is lower than nominal. In 2018, it was $18.566 trillion. The BEA provides it using 2012 as the base year in the National Income and Product Accounts, Table 1.1.6. Real Gross Domestic Product-Chained Dollars. Growth Rate: The GDP growth rate is the percentage increase in GDP from quarter to quarter. It tells you exactly whether the economy is growing quicker or slower than the quarter before. Most countries use real GDP to remove the effect of inflation. As bad as a recession is, you also don't want the growth rate to be too high. Then you'll get inflation. The ideal growth rate is between 2% and 3%. The BEA calculates the U.S. GDP growth rate. It provides current U.S. GDP statistics monthly. In 2018, it was 2.9%. The U.S. GDP growth rate has changed each year since 1929 depending on the phase of the business cycle. GDP per Capita: GDP per capita is the best way to compare gross domestic product between countries. This divides the gross domestic product by the number of residents. It's a good measure of the country's standard of living. Some countries have enormous economic outputs only because they have so many people. In 2018 the U.S. GDP per capita was $57,170. The best way to compare GDP per capita by year or between countries is with real GDP per capita. This takes out the effects of inflation, exchange rates, and differences in population. In 2007, the United States lost its position as the world's largest economy.
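To make these formulas concrete, here is a minimal sketch in Python. All figures are illustrative round numbers, not BEA data, and the variable names simply mirror the standard formula above.

```python
# Components of GDP, in trillions of dollars (illustrative numbers only).
C = 13.0   # personal consumption expenditures
I = 3.6    # business investment
G = 3.5    # government spending
X = 2.5    # exports
M = 3.1    # imports

nominal_gdp = C + I + G + (X - M)                 # the standard formula
print(f"Nominal GDP: ${nominal_gdp:.1f} trillion")   # $19.5 trillion

# Real GDP: divide nominal GDP by the price deflator (base year = 100).
deflator = 110.0                                  # prices 10% above base year
real_gdp = nominal_gdp / (deflator / 100)
print(f"Real GDP: ${real_gdp:.2f} trillion")         # $17.73 trillion

# Growth rate: percentage change in real GDP from the prior period.
prior_real_gdp = 17.30
growth_rate = (real_gdp - prior_real_gdp) / prior_real_gdp * 100
print(f"Growth rate: {growth_rate:.1f}%")            # 2.5%

# GDP per capita: divide output by the number of residents.
population = 327_000_000
per_capita = nominal_gdp * 1e12 / population
print(f"GDP per capita: ${per_capita:,.0f}")         # $59,633
```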
How GDP Affects You GDP impacts personal finance, investments, and job growth. Investors look at a nation's growth rate to decide if they should adjust their asset allocation. They also compare country growth rates to find their best international opportunities. They purchase shares of companies that are in rapidly growing countries. The U.S. central bank, the Federal Reserve, uses the growth rate to determine monetary policy. It implements expansionary monetary policy to ward off recession and contractionary monetary policy to prevent inflation. Its primary tool is the federal funds rate. For example, if the growth rate is increasing, then the Fed raises interest rates to stem inflation. In this case, you should lock in a fixed-rate mortgage. Your payments on an adjustable-rate mortgage will rise along with the fed funds rate. If growth slows or becomes negative, then you should update your resume. Slow economic growth leads to layoffs and unemployment. That can take several months. It takes time for executives to compile the layoff list and prepare exit packages. Use the GDP report from the BEA to determine which sectors of the economy are growing and which are declining. You can apply for jobs in growing sectors. Even during the 2008 financial crisis, health care industries continued to add jobs. This report also helps you determine whether you should invest in, say, a tech-specific mutual fund versus a fund that focuses on agribusiness. Difference Between GNP and GNI Gross national product measures the value of everything produced by a country's citizens, no matter where they are in the world. The World Bank now calculates gross national income instead, but the differences are insignificant. Problems With GDP One of the biggest criticisms of GDP is that it doesn't count the environmental costs. For example, the price of plastic is cheap because it doesn't include the cost of pollution. GDP doesn't measure how these costs impact the well-being of society. A country will improve its standard of living when it factors in environmental costs. Another criticism is that GDP doesn't include unpaid services. It leaves out child care and unpaid volunteer work. As a result, the economy undervalues these contributions to the quality of life. GDP also does not count the shadow or black economy. GDP underestimates economic output in countries where a lot of people receive their income from illegal activities. These products aren't taxed and don't show up in government records. The government estimates, but cannot accurately measure, this output. Global Financial Integrity estimated the black market contributed up to $2.2 trillion to the $128 trillion global economy in 2017. Likewise, societies only value what they measure. For example, Nordic countries rank high in the World Economic Forum's Global Competitiveness Report. Their budgets focus on the drivers of economic growth. These are world-class education, social programs, and a high standard of living. These factors create a skilled and motivated workforce. These countries also have a high tax rate. That slows GDP growth. But they use the revenues to invest in the long-term building blocks of economic growth. Riane Eisler's book, "The Real Wealth of Nations," proposes changes to the U.S. economic system by giving value to activities at the individual, societal, and environmental levels.
2: Transcription Is the Synthesis of an RNA Molecule from a DNA Template

Within a single gene, only one of the two DNA strands, the template strand, is usually transcribed into RNA.

✔ Concept Check 2 What is the difference between the template strand and the nontemplate strand?

[Figure 10.3: Under the electron microscope, DNA molecules undergoing transcription exhibit Christmas-tree-like structures. The trunk of each "Christmas tree" (a transcription unit) represents a DNA molecule; the tree branches (granular strings attached to the DNA) are RNA molecules that have been transcribed from the DNA. As the transcription apparatus proceeds down the DNA, transcribing more of the template, the RNA molecules become longer and longer. Dr. Thomas Broker/Phototake.]

Unlike replication, however, the transcription of a gene takes place on only one of the two nucleotide strands of DNA (Figure 10.4). The nucleotide strand used for transcription is termed the template strand. The other strand, called the nontemplate strand, is not ordinarily transcribed. Thus, within a gene, only one of the nucleotide strands is normally transcribed into RNA (there are some exceptions to this rule). Although only one strand within a single gene is normally transcribed, different genes may be transcribed from different strands, as illustrated in Figure 10.5. During transcription, an RNA molecule that is complementary and antiparallel to the DNA template strand is synthesized (see Figure 10.4). The RNA transcript has the same polarity and base sequence as that of the nontemplate strand, with the exception that RNA contains U rather than T.

The transcription unit. A transcription unit is a stretch of DNA that encodes an RNA molecule and the sequences necessary for its transcription. How does the complex of enzymes and proteins that performs transcription—the transcription apparatus—recognize a transcription unit? How does it know which DNA strand to read and where to start and stop? This information is encoded by the DNA. Included within a transcription unit are three critical regions: a promoter, an RNA-coding sequence, and a terminator (Figure 10.6). The promoter is a DNA sequence that the transcription apparatus recognizes and binds. It indicates which of the two DNA strands is to be read as the template and the direction of transcription. The promoter also determines the transcription start site, the first nucleotide that will be transcribed into RNA. In most transcription units, the promoter is located next to the transcription start site but is not, itself, transcribed. The second critical region of the transcription unit is the RNA-coding region, a sequence of DNA nucleotides that is copied into an RNA molecule. The third component of the transcription unit is the terminator, a sequence of nucleotides that signals where transcription is to end. Terminators are usually part of the coding sequence; that is, transcription stops only after the terminator has been copied into RNA. Molecular biologists often use the terms upstream and downstream to refer to the direction of transcription and the location of nucleotide sequences surrounding the RNA-coding sequence.
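The template/nontemplate relationship described above lends itself to a short worked example. The sketch below is illustrative and not from the text: it transcribes a template strand given 3′→5′ into an RNA written 5′→3′, and confirms that the transcript matches the nontemplate strand with U in place of T.

```python
# Minimal illustration of transcription: the RNA is complementary and
# antiparallel to the template strand, so it matches the nontemplate
# strand except that U replaces T.

DNA_TO_RNA = {"A": "U", "T": "A", "G": "C", "C": "G"}

def transcribe(template_3to5: str) -> str:
    """Return the RNA (written 5'->3') for a template strand given 3'->5'."""
    return "".join(DNA_TO_RNA[base] for base in template_3to5)

template_3to5    = "TACGGATTC"   # template strand, read 3'->5'
nontemplate_5to3 = "ATGCCTAAG"   # the complementary (nontemplate) strand, 5'->3'

rna = transcribe(template_3to5)
print(rna)                                        # AUGCCUAAG
print(rna == nontemplate_5to3.replace("T", "U"))  # True
```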
[Figure 10.4: RNA molecules are synthesized complementary and antiparallel to one of the two nucleotide strands of DNA, the template strand. Callouts: (1) RNA synthesis is complementary and antiparallel to the template strand. (2) New nucleotides are added to the 3′-OH group of the growing RNA, so transcription proceeds in a 5′→3′ direction. (3) The nontemplate strand is not usually transcribed.]

[Figure 10.5: RNA is transcribed from one DNA strand. In most organisms, each gene is transcribed from a single DNA strand, but different genes may be transcribed from one or the other of the two strands. In this example, genes a and c are transcribed from the (+) strand, and gene b is transcribed from the (–) strand.]

The transcription apparatus is said to move downstream as transcription takes place: it binds to the promoter (which is usually upstream of the start site) and moves toward the terminator (which is downstream of the start site). When DNA sequences are written out, often the sequence of only one of the two strands is listed. Molecular biologists typically write the sequence of the nontemplate strand, because it will be the same as the sequence of the RNA transcribed from the template (with the exception that U in RNA replaces T in DNA). By convention, the sequence on the nontemplate strand is written with the 5′ end on the left and the 3′ end on the right. The first nucleotide transcribed (the transcription start site) is numbered +1; nucleotides downstream of the start site are assigned positive numbers, and nucleotides upstream of the start site are assigned negative numbers. So, nucleotide +34 would be 34 nucleotides downstream of the start site, whereas nucleotide –75 would be 75 nucleotides upstream of the start site. There is no nucleotide assigned 0.

A transcription unit is a piece of DNA that encodes an RNA molecule and the sequences necessary for its proper transcription. Each transcription unit includes a promoter, an RNA-coding region, and a terminator.

The Substrate for Transcription

RNA is synthesized from ribonucleoside triphosphates (rNTPs; Figure 10.7). In synthesis, nucleotides are added one at a time to the 3′-OH group of the growing RNA molecule. Two phosphate groups are cleaved from the incoming ribonucleoside triphosphate; the remaining phosphate group participates in a phosphodiester bond that connects the nucleotide to the growing RNA molecule. The overall chemical reaction for the addition of each nucleotide is

RNAn + rNTP → RNAn+1 + PPi

where PPi represents pyrophosphate. Nucleotides are always added to the 3′ end of the RNA molecule, and the direction of transcription is therefore 5′→3′ (Figure 10.8), the same as the direction of DNA synthesis in replication. The synthesis of RNA is complementary and antiparallel to one of the DNA strands (the template strand). Unlike DNA synthesis, RNA synthesis does not require a primer.

RNA is synthesized from ribonucleoside triphosphates. Transcription is 5′→3′: each new nucleotide is joined to the 3′-OH group of the last nucleotide added to the growing RNA molecule.

The Transcription Apparatus

Recall that DNA replication requires a number of different enzymes and proteins. Transcription might initially appear to be quite different because a single enzyme—RNA polymerase—carries out all the required steps of transcription but, on closer inspection, the processes are actually similar. The action of RNA polymerase is enhanced by a number of accessory proteins that join and leave the polymerase at different stages of the process. Each accessory protein is responsible for providing or regulating a special function. Thus, transcription, like replication, requires an array of proteins.
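One practical wrinkle of the numbering convention above is that, because there is no nucleotide 0, simple subtraction over-counts by one whenever an interval crosses the start site. A minimal sketch (the function name is illustrative, not standard software):

```python
# The start site is +1; upstream positions are negative; there is no 0.
# Stepping across the start site skips one integer, so plain subtraction
# needs a one-step correction.

def nucleotides_between(pos_a: int, pos_b: int) -> int:
    """Signed number of single-nucleotide steps from pos_a to pos_b."""
    if pos_a == 0 or pos_b == 0:
        raise ValueError("there is no nucleotide numbered 0")
    step = pos_b - pos_a
    if pos_a < 0 < pos_b:
        step -= 1          # skip the nonexistent position 0
    elif pos_b < 0 < pos_a:
        step += 1
    return step

print(nucleotides_between(-35, -10))  # 25: the -10 box lies 25 nt downstream of -35
print(nucleotides_between(-1, +1))    # 1: positions -1 and +1 are adjacent
```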
Bacterial RNA polymerase. Bacterial cells typically possess only one type of RNA polymerase, which catalyzes the synthesis of all classes of bacterial RNA: mRNA, tRNA, and rRNA. Bacterial RNA polymerase is a large, multimeric enzyme (meaning that it consists of several polypeptide chains).

[Figure 10.6: A transcription unit includes a promoter, a region that encodes RNA, and a terminator.]

[Figure 10.7: Ribonucleoside triphosphates are substrates used in RNA synthesis.]

[Table 10.3: Eukaryotic RNA polymerases and the RNAs they transcribe. RNA polymerase I: rRNA. RNA polymerase II: pre-mRNA, some snRNAs, snoRNAs. RNA polymerase III: tRNAs, small rRNAs, some snRNAs. RNA polymerase IV: some siRNAs in plants.]

At the heart of most bacterial RNA polymerases are five subunits (individual polypeptide chains) that make up the core enzyme. This enzyme catalyzes the elongation of the RNA molecule by the addition of RNA nucleotides. Other functional subunits join and leave the core enzyme at particular stages of the transcription process. The sigma (σ) factor controls the binding of RNA polymerase to the promoter. Without sigma, RNA polymerase will initiate transcription at a random point along the DNA. After sigma has associated with the core enzyme (forming a holoenzyme), RNA polymerase binds stably only to the promoter region and initiates transcription at the proper start site. Sigma is required only for promoter binding and initiation; when a few RNA nucleotides have been joined together, sigma usually detaches from the core enzyme. Many bacteria have multiple types of sigma factors; each type of sigma initiates the binding of RNA polymerase to a particular set of promoters.

Eukaryotic RNA polymerases. Most eukaryotic cells possess three distinct types of RNA polymerase, each of which is responsible for transcribing a different class of RNA: RNA polymerase I transcribes rRNA; RNA polymerase II transcribes pre-mRNAs, snoRNAs, some miRNAs, and some snRNAs; and RNA polymerase III transcribes small RNA molecules—specifically tRNAs, small rRNA, some miRNAs, and some snRNAs (Table 10.3). A fourth RNA polymerase, named RNA polymerase IV, has been found in plants. It functions in the nucleus and transcribes siRNAs that play a role in DNA methylation and chromatin structure. All eukaryotic polymerases are large, multimeric enzymes, typically consisting of more than a dozen subunits. Some subunits are common to all RNA polymerases, whereas others are limited to one of the polymerases. As in bacterial cells, a number of accessory proteins bind to the core enzyme and affect its function.

Bacterial cells possess a single type of RNA polymerase, consisting of a core enzyme and other subunits that participate in various stages of transcription. Eukaryotic cells possess three distinct types of RNA polymerase: RNA polymerase I transcribes rRNA; RNA polymerase II transcribes pre-mRNA, snoRNAs, and some snRNAs; and RNA polymerase III transcribes tRNAs, small rRNAs, and some snRNAs.

✔ Concept Check 3 What is the function of the sigma factor?

[Figure 10.8: In transcription, nucleotides are always added to the 3′ end of the RNA molecule. Callouts: (1) Initiation of RNA synthesis does not require a primer. (2) New nucleotides are added to the 3′ end of the RNA. (3) DNA unwinds at the front of the transcription bubble (4) and then rewinds.]

The Process of Bacterial Transcription

Now that we've considered some of the major components of transcription, we're ready to take a detailed look at the process. Transcription can be conveniently divided into three stages:
1. initiation, in which the transcription apparatus assembles on the promoter and begins the synthesis of RNA;

2. elongation, in which DNA is threaded through RNA polymerase, the polymerase unwinding the DNA and adding new nucleotides, one at a time, to the 3′ end of the growing RNA strand; and

3. termination, the recognition of the end of the transcription unit and the separation of the RNA molecule from the DNA template.

We will examine each of these steps in bacterial cells, where the process is best understood.

Initiation. Initiation comprises all the steps necessary to begin RNA synthesis, including (1) promoter recognition, (2) formation of the transcription bubble, (3) creation of the first bonds between rNTPs, and (4) escape of the transcription apparatus from the promoter. Transcription initiation requires that the transcription apparatus recognize and bind to the promoter. At this step, the selectivity of transcription is enforced; the binding of RNA polymerase to the promoter determines which parts of the DNA template are to be transcribed and how often. Different genes are transcribed with different frequencies, and promoter binding is primarily responsible for determining the frequency of transcription for a particular gene. Promoters also have different affinities for RNA polymerase. Even within a single promoter, the affinity can vary with the passage of time, depending on the promoter's interaction with RNA polymerase and a number of other factors. Essential information for the transcription unit—where it will start transcribing, which strand is to be read, and in what direction the RNA polymerase will move—is imbedded in the nucleotide sequence of the promoter.

Promoters are DNA sequences that are recognized by the transcription apparatus and are required for transcription to take place. In bacterial cells, promoters are usually adjacent to an RNA-coding sequence. An examination of many promoters in E. coli and other bacteria reveals a general feature: although most of the nucleotides within the promoters vary in sequence, short stretches of nucleotides are common to many. Furthermore, the spacing and location of these nucleotides relative to the transcription start site are similar in most promoters. These short stretches of common nucleotides are called consensus sequences; "consensus sequence" refers to sequences that possess considerable similarity, or consensus (Figure 10.9). The presence of consensus in a set of nucleotides usually implies that the sequence is associated with an important function.

[Figure 10.9: A consensus sequence consists of the most commonly encountered bases at each position in a group of related sequences; the consensus comprises the most common nucleotides at each site. Example sequences: 5′–T A T A A A A G–3′, 5′–T C C A A T G C–3′, 5′–A A T A G C C G–3′, 5′–T A C A G G A G–3′; consensus: 5′–T A Y A R N A C/G–3′. Purines (adenine and guanine) are indicated by R, pyrimidines by Y; N means that base is more variable.]

The most commonly encountered consensus sequence, found in almost all bacterial promoters, is centered about 10 bp upstream of the start site. Called the –10 consensus sequence or, sometimes, the Pribnow box, its consensus sequence is

5′–T A T A A T–3′
3′–A T A T T A–5′

and is often written simply as TATAAT (Figure 10.10). Remember that TATAAT is just the consensus sequence—representing the most commonly encountered nucleotides at each of these positions. In most prokaryotic promoters, the actual sequence is not TATAAT.
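Deriving a consensus is a simple column-wise tally, which a short sketch can make concrete. This is illustrative only (the promoter sequences below are made up, and ties between equally common bases are resolved arbitrarily):

```python
# Illustrative sketch: derive a consensus sequence by taking the most
# common base at each position of a set of aligned sequences.
from collections import Counter

def consensus(aligned: list[str]) -> str:
    """Most commonly encountered base at each position."""
    return "".join(
        Counter(column).most_common(1)[0][0]  # top base in this column
        for column in zip(*aligned)           # iterate position by position
    )

promoters = ["TATAAT", "TATAAA", "TACAAT", "TATACT", "CATAAT"]
print(consensus(promoters))  # TATAAT
```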
Another consensus sequence common to most bacterial promoters is TTGACA, which lies approximately 35 nucleotides upstream of the start site and is termed the –35 consensus sequence (see Figure 10.10). The nucleotides on either side of the –10 and –35 consensus sequences and those between them vary greatly from promoter to promoter, suggesting that they are not very important in promoter recognition.

[Figure 10.10: In bacterial promoters, consensus sequences are found upstream of the start site, approximately at positions –10 and –35.]

The sigma factor associates with the core enzyme (Figure 10.11a) to form a holoenzyme, which binds to the –35 and –10 consensus sequences in the DNA promoter (Figure 10.11b). Although it binds only the nucleotides of consensus sequences, the enzyme extends from –50 to +20 when bound to the promoter. The holoenzyme initially binds weakly to the promoter but then undergoes a change in structure that allows it to bind more tightly and unwind the double-stranded DNA (Figure 10.11c). Unwinding begins within the –10 consensus sequence and extends downstream for about 14 nucleotides, including the start site (from nucleotides –12 to +2).

A promoter is a DNA sequence adjacent to a gene and required for transcription. Promoters contain short consensus sequences that are important in the initiation of transcription.

[Figure 10.11: Transcription in bacteria is catalyzed by RNA polymerase, which must bind to the sigma factor to initiate transcription. (1) The sigma factor associates with the core enzyme to form a holoenzyme, (2) which binds to the –35 and –10 consensus sequences in the promoter, creating a closed complex. (3) The holoenzyme binds the promoter tightly and unwinds the double-stranded DNA, creating an open complex. (4) A nucleoside triphosphate complementary to the DNA at the start site serves as the first nucleotide in the RNA molecule. (5) Two phosphate groups are cleaved from each subsequent nucleoside triphosphate, creating an RNA nucleotide that is added to the 3′ end of the growing RNA molecule. (6) The sigma factor is released as the RNA polymerase moves beyond the promoter. Conclusion: RNA transcription is initiated when core RNA polymerase binds to the promoter with the help of sigma.]

After the holoenzyme has attached to the promoter, RNA polymerase is positioned over the start site for transcription (at position +1) and has unwound the DNA to produce a single-stranded template. The orientation and spacing of consensus sequences on a DNA strand determine which strand will be the template for transcription and thereby determine the direction of transcription. The position of the start site is determined not by the sequences located there but by the location of the consensus sequences, which positions RNA polymerase so that the enzyme's active site is aligned for the initiation of transcription at +1. If the consensus sequences are artificially moved upstream or downstream, the location of the starting point of transcription correspondingly changes. To begin the synthesis of an RNA molecule, RNA polymerase pairs the base on a ribonucleoside triphosphate with its complementary base at the start site on the DNA template strand (Figure 10.11d). No primer is required to initiate the synthesis of the 5′ end of the RNA molecule. Two of the three phosphate groups are cleaved from the ribonucleoside triphosphate as the nucleotide is added to the 3′ end of the growing RNA molecule.
However, because the 5′ end of the first ribonucleoside triphosphate does not take part in the formation of a phosphodiester bond, all three of its phosphate groups remain. An RNA molecule therefore possesses, at least initially, three phosphate groups at its 5′ end (Figure 10.11e).

Elongation. At the end of initiation, RNA polymerase undergoes a change in conformation (shape) and is thereafter no longer able to bind to the consensus sequences in the promoter. This change allows the polymerase to escape from the promoter and begin transcribing downstream. The sigma subunit is usually released after initiation, although some populations of RNA polymerase may retain sigma throughout transcription. Transcription takes place within a short stretch of about 18 nucleotides of unwound DNA called the transcription bubble. As it moves downstream along the template, RNA polymerase progressively unwinds the DNA at the leading (downstream) edge of the transcription bubble, joining nucleotides to the RNA molecule according to the sequence on the template, and rewinds the DNA at the trailing (upstream) edge of the bubble.

Transcription is initiated at the start site, which, in bacterial cells, is set by the binding of RNA polymerase to the consensus sequences of the promoter. No primer is required. Transcription takes place within the transcription bubble. DNA is unwound ahead of the bubble and rewound behind it.

Termination. RNA polymerase adds nucleotides to the 3′ end of the growing RNA molecule until it transcribes a terminator. Most terminators are found upstream of the site at which termination actually takes place. Transcription therefore does not suddenly stop when polymerase reaches a terminator, as does a car stopping at a stop sign. Rather, transcription stops after the terminator has been transcribed, like a car that stops only after running over a speed bump. At the terminator, several overlapping events are needed to bring an end to transcription: RNA polymerase must stop synthesizing RNA, the RNA molecule must be released from RNA polymerase, the newly made RNA molecule must dissociate fully from the DNA, and RNA polymerase must detach from the DNA.

Bacterial cells possess two major types of terminators. Rho-dependent terminators are able to cause the termination of transcription only in the presence of an ancillary protein called the rho factor. Rho-independent terminators are able to cause the end of transcription in the absence of rho. In bacteria, a group of genes is often transcribed into a single RNA molecule, which is termed a polycistronic RNA. Thus, polycistronic RNA is produced when a single terminator is present at the end of a group of several genes that are transcribed together, instead of each gene having its own terminator. Typically, eukaryotic genes are each transcribed and terminated separately, and so polycistronic mRNA is uncommon in eukaryotes.

Transcription ends after RNA polymerase transcribes a terminator. Bacterial cells possess two types of terminator: a rho-independent terminator, which RNA polymerase can recognize by itself; and a rho-dependent terminator, which RNA polymerase can recognize only with the help of the rho protein.

The Basic Rules of Transcription

Before we examine how RNA molecules are modified after transcription, let's pause to summarize some of the general principles of bacterial transcription.

1. Transcription is a selective process; only certain parts of the DNA are transcribed at any one time.

2. RNA is transcribed from single-stranded DNA.
Within a gene, only one of the two DNA strands—the template strand—is normally copied into RNA.

3. Ribonucleoside triphosphates are used as the substrates in RNA synthesis. Two phosphate groups are cleaved from a ribonucleoside triphosphate, and the resulting nucleotide is joined to the 3′-OH group of the growing RNA strand.

4. RNA molecules are antiparallel and complementary to the DNA template strand. Transcription is always in the 5′→3′ direction, meaning that the RNA molecule grows at the 3′ end.

5. Transcription depends on RNA polymerase—a complex, multimeric enzyme. RNA polymerase consists of a core enzyme, which is capable of synthesizing RNA, and other subunits that may join transiently to perform additional functions. A sigma factor enables the core enzyme of RNA polymerase to bind to a promoter and initiate transcription.

6. Promoters contain short sequences crucial in the binding of RNA polymerase to DNA.

7. RNA polymerase binds to DNA at a promoter, begins transcribing at the start site of the gene, and ends transcription after a terminator has been transcribed.

10.3 Many Genes Have Complex Structures

What is a gene? As noted in Chapter 3, the definition of a gene changes as we explore different aspects of heredity. A gene was defined there as an inherited factor that determines a characteristic. This definition may have seemed vague, because it says only what a gene does but nothing about what a gene is. Nevertheless, this definition was appropriate for our purposes at the time, because our focus was on how genes influence the inheritance of traits. We did not have to consider the physical nature of the gene in learning the rules of inheritance.

Knowing something about the chemical structure of DNA and the process of transcription now enables us to be more precise about what a gene is. Chapter 8 described how genetic information is encoded in the base sequence of DNA; so a gene consists of a set of DNA nucleotides. But how many nucleotides are encompassed in a gene, and how is the information in these nucleotides organized?

In 1902, Archibald Garrod suggested, correctly, that genes encode proteins. Proteins are made of amino acids; so a gene contains the nucleotides that specify the amino acids of a protein. We could, then, define a gene as a set of nucleotides that specifies the amino acid sequence of a protein, which indeed was, for many years, the working definition of a gene. As geneticists learned more about the structure of genes, however, it became clear that this concept of a gene was an oversimplification.

Early work on gene structure was carried out largely through the examination of mutations in bacteria and viruses. This research led Francis Crick in 1958 to propose that genes and proteins are colinear—that there is a direct correspondence between the nucleotide sequence of DNA and the amino acid sequence of a protein (Figure 10.12). The concept of colinearity suggests that the number of nucleotides in a gene should be proportional to the number of amino acids in the protein encoded by that gene. In a general sense, this concept is true for genes found in bacterial cells and many viruses, although these genes are slightly longer than would be expected if colinearity were strictly applied (the mRNAs encoded by the genes contain sequences at their ends that do not specify amino acids).
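To make the proportionality arithmetic concrete, here is a minimal sketch; the protein length and the untranslated-end lengths are hypothetical example values, not figures from the text.

```python
# A rough sketch of the colinearity arithmetic discussed above: a protein of
# N amino acids needs about 3*N coding nucleotides (plus a stop codon), yet
# real bacterial mRNAs run slightly longer because of untranslated ends.
# The protein length and UTR lengths below are hypothetical examples.

def minimum_coding_length(num_amino_acids: int) -> int:
    """Nucleotides required under strict colinearity: 3 per codon plus a stop codon."""
    return 3 * num_amino_acids + 3

protein_length = 300           # hypothetical protein of 300 amino acids
utr_5, utr_3 = 30, 50          # hypothetical untranslated ends of the mRNA

coding = minimum_coding_length(protein_length)
print(coding)                  # 903 nucleotides of coding sequence
print(coding + utr_5 + utr_3)  # 983 nucleotides: longer than strict colinearity predicts
```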
At first, eukaryotic genes and proteins also were generally assumed to be colinear, but there were hints that eukaryotic gene structure is fundamentally different. Eukaryotic cells contain far more DNA than is required to encode proteins. Furthermore, many large RNA molecules observed in the nucleus were absent from the cytoplasm, suggesting that nuclear RNAs undergo some type of change before they are exported to the cytoplasm.

[Figure 10.12: The concept of colinearity suggests that a continuous sequence of nucleotides in DNA encodes a continuous sequence of amino acids in a protein; as illustrated, a codon specifies each amino acid (Arg Gly Tyr Thr Phe Ala Val Ser). (1) A continuous sequence of nucleotides in the DNA… (2) …codes for a continuous sequence of amino acids in the protein. Conclusion: with colinearity, the number of nucleotides in the gene is proportional to the number of amino acids in the protein.]

Most geneticists were nevertheless surprised by the announcement in the 1970s that four coding sequences in a gene from a eukaryotic virus were interrupted by nucleotides that did not specify amino acids. This discovery was made when the viral DNA was hybridized with the mRNA transcribed from it, and the hybridized structure was examined with the use of an electron microscope (Figure 10.13). The DNA was clearly much longer than the mRNA, because regions of DNA looped out from the hybridized molecules. These regions contained nucleotides in the DNA that were absent from the coding nucleotides in the mRNA. Many other examples of interrupted genes were subsequently discovered; it quickly became apparent that most eukaryotic genes consist of stretches of coding and noncoding sequences.

[Figure 10.13: The noncolinearity of eukaryotic genes was discovered by hybridizing DNA and mRNA. Question: is the coding sequence in a gene always continuous? (1) Mix DNA with RNA and heat to separate the DNA strands. (2) Cool the mixture; DNA may reanneal with its complementary strand… …or with RNA, in which case loops of DNA are seen. Conclusion: coding sequences in a gene may be interrupted by noncoding sequences. Electron micrograph from O. L. Miller, B. R. Beatty, D. W. Fawcett/Visuals Unlimited.]

When a continuous sequence of nucleotides in DNA encodes a continuous sequence of amino acids in a protein, the two are said to be colinear. In eukaryotes, not all genes are colinear with the proteins that they encode.

✔ Concept Check 4: What evidence indicated that eukaryotic genes are not colinear with their proteins?

Many eukaryotic genes contain coding regions called exons and noncoding regions called intervening sequences, or introns. For example, the ovalbumin gene has eight exons and seven introns; the gene for cytochrome b has five exons and four introns (Figure 10.14). The average human gene contains from 8 to 9 introns. All the introns and exons are initially transcribed into RNA but, after transcription, the introns are removed by splicing and the exons are joined to yield the mature RNA.

Introns are common in eukaryotic genes but are rare in bacterial genes. All classes of eukaryotic genes—those that encode rRNA, tRNA, and proteins—may contain introns. The number and size of introns vary widely: some eukaryotic genes have no introns, whereas others may have more than 60; intron length varies from fewer than 200 nucleotides to more than 50,000. Introns tend to be longer than exons, and most eukaryotic genes contain more noncoding nucleotides than coding nucleotides. Finally, most introns do not encode proteins (an intron of one gene is not usually an exon for another), although geneticists are finding a growing number of exceptions.

Many eukaryotic genes contain exons and introns, both of which are transcribed into RNA, but introns are later removed by RNA processing. The number and size of introns vary from gene to gene; they are common in many eukaryotic genes but uncommon in bacterial genes.

How does the presence of introns affect our concept of a gene? To define a gene as a sequence of nucleotides that encodes amino acids in a protein no longer seems appropriate, because this definition excludes introns, which do not specify amino acids. This definition also excludes nucleotides that encode the 5′ and 3′ ends of an mRNA molecule, which are required for translation but do not encode amino acids. And defining a gene in these terms also excludes sequences that encode rRNA, tRNA, and other RNAs that do not encode proteins. In view of our current understanding of DNA structure and function, we need a more satisfactory definition of the gene.

Many geneticists have broadened the concept of a gene to include all sequences in the DNA that are transcribed into a single RNA molecule. Defined in this way, a gene includes all exons, introns, and those sequences at the beginning and end of the RNA that are not translated into a protein. This definition also includes DNA sequences that encode rRNAs, tRNAs, and other types of nonmessenger RNA. Some geneticists have expanded the definition of a gene even further, to include the entire transcription unit—the promoter, the RNA coding sequence, and the terminator. The discovery of introns forced a reevaluation of the definition of the gene. Today, a gene is often defined as a DNA sequence that encodes an RNA molecule, or as the entire DNA sequence required to transcribe and encode an RNA molecule.
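As a footnote to the splicing described in this section, the toy example below joins the exon regions of a pre-mRNA and discards the intron; the sequence and coordinates are hypothetical, and the real splicing machinery is of course far more elaborate.

```python
# Minimal sketch of splicing: exons are kept and joined, introns discarded.
# The toy pre-mRNA and its exon coordinates are hypothetical examples.

def splice(pre_mrna: str, exon_spans: list[tuple[int, int]]) -> str:
    """Join the exon regions (half-open [start, end) spans) of a pre-mRNA."""
    return "".join(pre_mrna[start:end] for start, end in exon_spans)

# Toy pre-mRNA: exon 1 (positions 0-6), intron (6-14), exon 2 (14-20).
pre_mrna = "AUGGCA" + "GUAAGUAG" + "UUCUAA"
mature = splice(pre_mrna, [(0, 6), (14, 20)])
print(mature)  # AUGGCAUUCUAA -- exons joined, intron removed
```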
Two to three kilometres beneath the surface of Australia's Northern Territory sits buried energy. The layered rock formations known as the Velkerri Shale were recently estimated to contain over 118 trillion cubic feet of gas. While these gas reserves are clearly large, what is really remarkable is their age. These rocks were deposited 1,400 million years ago, in an ocean known as the Roper Seaway. These rocks, and the oil and gas they encase, are about a billion years older than the rocks in which oil and gas are usually found.

The molecules that make up the oil and gas – called hydrocarbons because they consist of hydrogen and carbon atoms – are the long-decomposed remains of dead bacteria that inhabited ancient oceans. These are the most unconventional hydrocarbons yet discovered; "unconventional" because of the type of rock they're contained in, and because of their age. This antiquity gives us a rare chance to use the remains of the bacteria to examine the chemistry of the ancient oceans, the composition of the ancient atmosphere, and the nature of life 1,400 million years ago.

Recently we have learned a lot about Earth's ancient marine environment. This has been achieved by analysing rare elements, particularly cerium (Ce) and molybdenum (Mo), extracted from the once-living organic matter within the Velkerri Shale. Ce and Mo act as indicators of how much oxygen was available in the oceans 1,400 million years ago. The studies reveal an ocean starved of oxygen, even in surface waters. At greater depths this ocean was completely toxic, rich in hydrogen sulfide. These results indicate that Earth's atmosphere at this time was oxygen-poor; in fact, it likely contained less than 3% oxygen. Today we enjoy 21% oxygen in Earth's atmosphere and a largely oxygenated ocean.

Further work by our colleagues has focused on extracting molecules of biological origin from the same rocks. These "biomarkers" have revealed an ocean dominated by bacteria. But why is the age of the Velkerri Shale so unusual? It has a lot to do with the unique sequence of events that must occur to produce and preserve oil and gas.

How oil and gas forms underground

Oil and gas are produced from rocks known as black shales. These rocks contain high levels of organic matter (materials that were once living); in this case, the organic matter is the decomposed remains of bacteria. To convert organic matter to oil or gas requires heat, but not too much: you need to delicately cook the rocks at temperatures between 60℃ and 160℃ for a few million years. You do this by burying the rocks. Earth gets hotter as you go deeper, something miners know well. In fact, the deepest mine in the world (in South Africa) goes down 3.4 kilometres, and the rock temperatures down there are an incredible 60℃. Oil and gas form when the rocks get a little deeper than this. These temperatures are often referred to as the oil and gas "window". If the organic matter does not reach this temperature window, oil and gas are not produced (a rough version of this burial arithmetic is sketched after this section).

Avoiding colliding plates

The enemy of preserving ancient oil and gas is plate tectonics – the movement of the rocky plates at Earth's surface. Much of the planet is sculpted by plate tectonics. Major continental collisions happen episodically, during the assembly of supercontinents: massive continental amalgamations that recur on a roughly 600-million-year cycle. If oil- and gas-bearing black shales find themselves at a plate margin, they often get buried too deeply, heating them too much and destroying them.
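To make the burial arithmetic concrete, here is a minimal sketch; the geothermal gradient and surface temperature are assumed illustrative values, not figures from the article.

```python
# Minimal sketch: burial depths bracketing the oil and gas "window" for an
# assumed linear geothermal gradient. Both constants below are illustrative
# assumptions, not values from the article.

SURFACE_TEMP_C = 20.0      # assumed mean surface temperature
GRADIENT_C_PER_KM = 25.0   # assumed geothermal gradient

def depth_for_temperature(target_c: float) -> float:
    """Burial depth (km) at which rock reaches target_c under a linear gradient."""
    return (target_c - SURFACE_TEMP_C) / GRADIENT_C_PER_KM

print(depth_for_temperature(60.0))   # ~1.6 km: top of the window
print(depth_for_temperature(160.0))  # ~5.6 km: beyond this the rocks are overcooked
```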
The fact that the Velkerri Shale contains vast quantities of oil and gas means that these rocks have not experienced temperatures above 160℃ for 1,400 million years. This is quite remarkable considering that Australia has been involved in major tectonic upheavals associated with the assembly of two supercontinents during this time, Rodinia and Pangaea.

Most of the oil and gas we use as an energy source comes from rocks less than 500 million years old, and the vast majority from rocks less than 300 million years old. Young rocks haven't been around as long, so they are less likely to have been sandwiched between colliding plates. These young oil and gas reserves formed after the Cambrian explosion, at a time when plant and animal life was abundant on Earth.

Rethinking "Peak Oil"

The Velkerri Shale contains enough gas to power Australia for more than 90 years at current consumption rates. While only a proportion of this gas would be recoverable, a resource estimate of 118 trillion cubic feet makes this a large onshore unconventional shale gas reserve in a country soon to be the largest gas exporter in the world.

Oil and gas are non-renewable resources. The concept of "Peak Oil", or more broadly "Peak Hydrocarbons", refers to the point in time when the maximum rate of oil and gas extraction is reached. However, the technological development of unconventional oil and gas, and now the realisation that gas can be sourced from extremely old rocks such as the Velkerri Shale, means that the arrival of "Peak Hydrocarbons" may be further delayed.

The development of unconventional oil and gas remains contentious, and well-informed public debate will ultimately decide whether such shale gas resources are developed. If the Velkerri Shale moves from exploration to production, we will be making use of gas produced in a "slime world" that existed nearly a billion years before the first complex life on Earth evolved, where bacteria ruled the seas and the atmosphere was largely devoid of oxygen.

This and ongoing work are a collaborative effort involving The University of Adelaide, The University of South Australia, The University of Wollongong, the Czech Academy of Sciences, the Northern Territory Geological Survey, Origin Energy and Santos.
The Andromeda Galaxy

The Andromeda Galaxy, a spiral galaxy also known by the scientific designations M31 and NGC 224, is the closest full-sized galaxy to the Milky Way Galaxy, home of the Earth. While there are smaller galaxies closer by, such as the Large and Small Magellanic Clouds which orbit the Milky Way, Andromeda is the closest large galaxy, at a distance of about 2.5 million light-years from Earth. It gets its name from the fact that it is located in the night sky in the constellation Andromeda. The Andromeda Galaxy is thought to be somewhat larger than the Milky Way Galaxy, although not by quite as much as previously thought – we will go into more detail on this a little later. It is also easily visible on moonless nights with the naked eye, despite being about 2.5 million light-years away.

Andromeda Galaxy Facts

In this section I will go over some facts about the Andromeda Galaxy, some of which we may explore in later sections. The Andromeda Galaxy is also known by various scientific designations such as M31, NGC 224, UGC 454, and PGC 2557. It can be seen in the constellation of Andromeda at the coordinates of right ascension 00 hours 42 minutes 44.3 seconds and declination +41 degrees 16 minutes 9 seconds. The redshift value for Andromeda is −0.001001; the minus sign indicates that this value is actually a blueshift, which means that the Andromeda Galaxy is moving towards Earth and the Milky Way Galaxy. The radial velocity of Andromeda is about 187 miles (301 kilometers) per second, and its distance has been determined to be approximately 2.54 million light-years from Earth. The apparent magnitude of Andromeda is about 3.44, which means it is easily visible in moonless night skies with low light pollution, while its absolute magnitude is about −21.5. It is scientifically classified as a type SA(s)b galaxy – a spiral galaxy with spiral arms and a medium-sized nucleus. It has a diameter of about 220,000 light-years, a mass of around 1.5 trillion solar masses, and is thought to contain approximately 1 trillion stars.

Early Observations Of The Andromeda Galaxy

Observations of the Andromeda Galaxy go back much further in time than one might think. The first recorded observation of the Andromeda Galaxy appears to be around the year 964, when a Persian astronomer named Abd al-Rahman al-Sufi described it in his Book of Fixed Stars as a "nebulous smear". Later, in 1612, a German astronomer named Simon Marius described it based on telescopic observations. The French mathematician and philosopher Pierre Louis Maupertuis observed Andromeda in 1745 as a blurry spot, conjecturing that it was some sort of island universe. Charles Messier was a French astronomer famous for his astronomical catalogue of nebulae and star clusters known as Messier objects, and he cataloged Andromeda as M31 in his book. William Herschel was a German-born British astronomer who made observations of Andromeda and claimed he saw a reddish color in its core in 1785. Making an assessment of its color and magnitude, he came to the conclusion that the Andromeda Galaxy was around 2,000 times as distant as the star Sirius, which is about 8.6 light-years from Earth. This would have put Andromeda at a distance of roughly 17,000 light-years from Earth, which, as we now know, is grossly incorrect.
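Magnitude-based distance estimates like these can be made precise with the distance modulus. Below is a minimal sketch using the rough figures quoted above (apparent magnitude 3.44, absolute magnitude −21.5); it ignores interstellar extinction and magnitude uncertainties, so it overshoots the accepted ~2.5 million light-year figure.

```python
# Minimal sketch of the distance-modulus relation m - M = 5*log10(d_pc) - 5,
# using the rough magnitudes quoted above (m = 3.44, M = -21.5). Extinction
# is ignored, so the answer lands above the accepted ~2.5 Mly figure.

import math

def distance_light_years(apparent_mag: float, absolute_mag: float) -> float:
    """Distance from the distance modulus, converted from parsecs to light-years."""
    d_parsecs = 10 ** ((apparent_mag - absolute_mag + 5) / 5)
    return d_parsecs * 3.2616  # light-years per parsec

print(f"{distance_light_years(3.44, -21.5):.3e}")  # ~3.2e6 light-years
```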
In 1850 William Parsons, an Anglo-Irish astronomer who had built a 72-inch telescope known as the Leviathan of Parsonstown, made the first known drawing of the spiral structure of the Andromeda Galaxy. In 1864 William Huggins, an English astronomer who pioneered astronomical spectroscopy, observed that the spectrum of Andromeda was superimposed with dark absorption lines corresponding to various elements, more indicative of stars than of nebulae, leading him to believe that Andromeda was composed of stars.

Later, in 1885, a supernova was observed in the Andromeda Galaxy, known as S Andromedae – this is the only supernova (essentially an explosion of a massive star) that has ever been observed in Andromeda, even to this day. When it was first discovered it was believed to be only a nova (the transient bright appearance of a new star), since it was thought at that time that Andromeda was much closer to Earth than it actually is, thus grossly underestimating the event's luminosity.

Isaac Roberts was a Welsh engineer and amateur astronomer who is believed to have taken the first photographs of Andromeda, in 1887. At that time it was widely believed that Andromeda was only a nebula in our own Milky Way Galaxy, and so Isaac Roberts thought that Andromeda was an early-forming solar system in our galaxy.

Vesto Melvin Slipher was an American astronomer who made the first measurements of the radial velocities of galaxies, also noting their redshift values, which helped establish the fact that the Universe is expanding. In 1912 he used spectroscopy to measure the radial velocity of Andromeda with respect to our Solar System, which came out to a value of 190 miles per second, the largest velocity that had ever been recorded up to that time.

In 1917 Heber Curtis, an American astronomer, discovered 11 novae in Andromeda and observed that they were about 10 magnitudes fainter than those occurring in other areas of the night sky. Because of this he estimated that the distance of the Andromeda Galaxy from Earth was about 500,000 light-years, still far less than the actual distance of 2.5 million light-years that we know today, but definitely a big improvement over previous estimates. He thus came to the conclusion that Andromeda was an independent galaxy far outside our own Milky Way.

In 1920 there was a Great Debate between Heber Curtis and another American astronomer, Harlow Shapley, concerning the nature of spiral nebulae and the size of the Universe. Shapley believed they were smaller nebulae lying within the outer regions of our own galaxy, while Curtis thought that they were independent galaxies which were quite large and at a very great distance from Earth. Of course, Heber Curtis turned out to be correct and was therefore the winner of the Great Debate.

Ernst Öpik was an Estonian astronomer who in 1922 used the measured velocities of stars in the Andromeda Galaxy to calculate its distance from Earth, which he estimated at 1.5 million light-years, yet another significant improvement. Then in 1925 Edwin Hubble, the famous American astronomer, definitively proved that the Andromeda Galaxy was a large spiral galaxy far from Earth by observing extragalactic Cepheid variable stars (a type of pulsating star) in Andromeda for the first time, which confirmed the great distance of Andromeda from the Milky Way.
In 1943 Walter Baade, a German astronomer working in the United States at the time, became the first person to actually resolve stars in the galactic core of Andromeda. He classified two different types of stars: type I, which were young, high-velocity stars, and type II, which were older, red-colored stars. He also discovered two types of Cepheid variable stars, which led him to conclude that the Andromeda Galaxy was almost twice as far away as the previous estimate of 1.5 million light-years, much closer to the distance of 2.5 million light-years that we know today.

Radio emissions from the Andromeda Galaxy were first detected in 1950 by Hanbury Brown, a British astronomer, and Cyril Hazard with radio telescopes located at the Jodrell Bank Observatory in Manchester, in the United Kingdom. Later in the 1950s the British astronomer John Baldwin and his associates made the first radio maps of the Andromeda Galaxy at the Cambridge Radio Astronomy Group at the University of Cambridge. In 2009 what is believed to be the very first planet in the Andromeda Galaxy was reported, found using a gravitational microlensing technique – detecting deflected light from the gravitational effects of a larger object – to separate the mass of the planet from that of its parent star.

Formation Of The Andromeda Galaxy

It is now thought that the Andromeda Galaxy formed about 10 billion years ago from the collision and subsequent merger of a number of smaller protogalaxies – essentially clouds of interstellar gas in the early stage of galaxy formation. These violent collisions formed the extended disk and galactic halo of Andromeda. During this formation epoch there would have been a very high rate of star formation in the Andromeda Galaxy, causing it to become a luminous infrared galaxy for around another 100 million years.

History Of The Andromeda Galaxy

What is the history of the Andromeda Galaxy after this initial period of formation, which lasted roughly 100 million years? For billions of years gravitational forces, along with occasional mergers with smaller galaxies and star clusters, have been the main factors in the evolution of Andromeda. It is currently thought that somewhere around 85 percent of all the matter in the Universe is dark matter, with only the remaining 15 percent ordinary, or visible, matter. So the gravitational forces of all this matter, both ordinary matter and dark matter, were crucial for Andromeda to evolve into the galaxy that we observe today. I say 'today', but since Andromeda is about 2.5 million light-years away we are really seeing it 2.5 million years in the past!

It is very interesting to mention that the Andromeda Galaxy and the Triangulum Galaxy had a very close encounter, or passage, with each other around 2 to 4 billion years ago. The gravitational forces from this event are thought to have resulted in very high rates of star formation across the disk of Andromeda, even creating some globular star clusters. During the past 2 billion years new star formation in Andromeda has greatly decreased, almost to the point of inactivity. There also seem to have been interactions between Andromeda and a number of its satellite galaxies, which are thought to have produced Andromeda's Giant Stellar Stream – a stream of stars orbiting Andromeda which may be the remains of smaller satellite galaxies and globular star clusters torn apart by intense gravitational tidal forces.

How Far Away Is The Andromeda Galaxy?
The Andromeda Galaxy is roughly 2.54 million light-years from Earth according to our best estimates at the present time. Methods of estimating this distance include measuring the variance in light distribution from the luminosity of stars in the galaxy and its fluctuations, observing Cepheid variable stars and their luminosity fluctuations, timing eclipses of binary stars in the galaxy and comparing apparent and absolute magnitudes, and measuring the luminosity of the brightest red-giant-branch stars in the galaxy (the TRGB method). Using these methods and averaging the results gives us the 2.54 million light-year figure for the distance of the Andromeda Galaxy.

How Big Is The Andromeda Galaxy?

Estimates of the size of the Andromeda Galaxy are still uncertain. In 2006 the spheroid of the Andromeda Galaxy was determined to have a higher stellar density than that of the Milky Way Galaxy, and the stellar disk of Andromeda was estimated at about twice the diameter of the Milky Way's – Andromeda was thought to contain over 1 trillion stars, compared to around 400 billion stars in the Milky Way. This gave a total stellar mass estimate about twice that of the Milky Way, at about 1.5 trillion solar masses, with about 30 percent of this mass in the galactic core, 56 percent in the galactic disk, and the remaining 14 percent in the galactic halo. A 2018 study of radio-frequency emissions from Andromeda seemed to indicate a mass more or less equal to that of the Milky Way, in the range of 0.8 trillion solar masses – this contradicts the earlier studies but is by no means certain. Much ongoing research is still being done to determine how big the Andromeda Galaxy is.

Luminosity Of The Andromeda Galaxy

The Andromeda Galaxy appears to have a significantly older population of stars than the Milky Way Galaxy, with many having ages greater than 7 billion years. However, the estimated luminosity of Andromeda is about 26 billion times that of the Sun, which is about 25 percent greater than that of the Milky Way. This may be in large part because Andromeda is believed to have over twice the number of stars of the Milky Way, around a trillion, despite having a predominantly older star population. The absolute magnitude of Andromeda is believed to be about −21.5, although this is only an estimate, and some believe that the Andromeda Galaxy is the second-brightest galaxy within a radius of about 32.6 million light-years of Earth, behind the Sombrero Galaxy, a lenticular galaxy visible in the constellation of Virgo about 31.1 million light-years from Earth.

Structure Of The Andromeda Galaxy

The Andromeda Galaxy is currently classified as an SA(s)b galaxy – a spiral disk galaxy with tightly bound spiral arms and a medium-sized nucleus. Andromeda may have to be reclassified as a barred galaxy, since the Two Micron All-Sky Survey (2MASS) seems to indicate that it has a bar structure spanning its long axis. In past years Andromeda was thought to have a diameter of 70,000 to 120,000 light-years, comparable to the Milky Way.
However, starting in 2005 with observations from the two telescopes at the Keck Observatory near the top of Mauna Kea in Hawaii, USA (the primary mirrors are both 394 inches in diameter, making them the second-largest astronomical telescopes behind the Gran Telescopio Canarias on the island of La Palma in the Canaries, Spain, which has a primary mirror 410 inches in diameter), discoveries have been made indicating that the diameter of the disk of Andromeda is much greater than previously thought, perhaps as much as three times as great. Observations with the Keck telescopes show that stars extending outward from Andromeda are actually part of the disk itself, giving strong evidence of a much larger and more expansive stellar disk as much as 220,000 light-years across.

Andromeda is inclined at an angle of 77 degrees as seen from Earth (90 degrees would be edge-on), which enables a marginal view of some of its cross-sectional structure; this seems to show a significant S-shaped warp rather than a simple flat disk. This may be due in part to the gravitational influence of some of Andromeda's closer satellite galaxies.

Spectroscopic studies of Andromeda have enabled us to estimate rotational velocities inside the galaxy as a function of distance from the galactic core. At a distance of 1,300 light-years from the galactic core the rotational velocity is around 140 miles per second, and moving out to about 7,000 light-years from the core it drops to about 31 miles per second. From this point rotational velocities keep rising until they reach a peak of about 160 miles per second at a radius of 33,000 light-years from the galactic core. Going outward from the 33,000 light-year radius, velocities slowly decline until they drop to 120 miles per second at a radius of about 80,000 light-years from the core. From these velocity measurements it is possible to estimate that there is a concentrated mass equivalent to about 6 billion solar masses in the nucleus region of the Andromeda Galaxy (a rough version of this estimate is sketched below).

The Andromeda Galaxy is viewed close to edge-on with respect to its orientation from Earth, 77 degrees as opposed to 90 degrees for a completely edge-on view, so its spiral structure is somewhat difficult to discern. Even so, it seems that the Andromeda Galaxy is pretty much an ordinary spiral galaxy, with two spiral arms beginning at about 1,600 light-years from the core and extending outward, separated from each other by at least 13,000 light-years. The spiral arms are somewhat indistinct, possibly due to interaction with the satellite galaxies M32 and M110 and the resulting gravitational influence.

In 1998 images from the European Space Agency's Infrared Space Observatory indicated the presence of several overlapping rings of gas and dust in Andromeda, with an especially prominent one, called the ring of fire by some astronomers, at a distance of 32,000 light-years from the galactic core. This is mostly cold gas and dust which cannot be seen at visible wavelengths of light. The presence of these rings of gas and dust has led some scientists to believe that Andromeda could be evolving into a ring type of galaxy in the far distant future, although this is by no means certain at this time. Close examinations of the inner structure of Andromeda suggest that a collision with the smaller satellite galaxy M32 some 200 million years ago may have been responsible for some of the ring structures.
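Before returning to the M32 encounter: here is a rough version of the enclosed-mass estimate mentioned above, using the circular-orbit approximation M(<r) = v²r/G with the innermost rotation figures quoted in the text. The point-mass assumption is a simplification, so treat the result as an order-of-magnitude check rather than a measurement.

```python
# Rough sketch of the enclosed-mass estimate: M(<r) = v^2 * r / G for a body
# on a circular orbit. Inputs are the rotation figures quoted in the text
# (about 140 miles per second at 1,300 light-years from the core).

G = 6.674e-11              # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30           # solar mass, kg
METERS_PER_LY = 9.461e15
METERS_PER_MILE = 1609.34

def enclosed_solar_masses(v_miles_per_s: float, r_light_years: float) -> float:
    """Mass (in solar masses) enclosed within radius r for circular speed v."""
    v = v_miles_per_s * METERS_PER_MILE
    r = r_light_years * METERS_PER_LY
    return v * v * r / G / M_SUN

print(f"{enclosed_solar_masses(140, 1300):.1e}")  # ~5e9, near the ~6 billion quoted
```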
Computer simulations show that this smaller galaxy, M32, may have passed through the disk of Andromeda along its polar axis, stripping away about half of the mass of M32 and creating the ring structures in Andromeda. The galactic halo of Andromeda, an extended halo of stars surrounding the galactic disk, is thought to have followed an evolutionary path similar to that of the galactic halo of the Milky Way, resulting from the assimilation of up to one hundred smaller galaxies over the course of about 12 billion years.

Galactic Center Of The Andromeda Galaxy

Andromeda appears through Earth-bound telescopes to have a single central bulge at its center, but in 1991 the Hubble Space Telescope was used to image the center of the Andromeda Galaxy, and the result was rather surprising. The nucleus of Andromeda really consists of two concentrations of stars, separated by about 4.9 light-years. The brighter concentration, called P1, is offset from the center of Andromeda, while the dimmer concentration of stars, called P2, is at the true center of the Andromeda Galaxy. In the true center of Andromeda, in the P2 concentration, there is a black hole which is estimated to be in the range of 110 million to 230 million solar masses. The velocity of the stars and other material dispersed around it is estimated to be about 99.4 miles per second, or 160 kilometers per second. At the current time there is not thought to be any black hole at the center of the P1 concentration, judging from the distribution of stars and other material around it.

Contents Of The Andromeda Galaxy

The types of objects which the Andromeda Galaxy contains are probably somewhat similar to those of our own Milky Way Galaxy: planets in orbit around stars, various types of stars in different stages of evolution, black holes, rogue planets drifting in the void of interstellar space, interstellar gas and dust, and so on. One difference is that Andromeda is more massive and contains over twice the number of stars as the Milky Way.

At the current time it is thought that there are about 460 globular clusters in the Andromeda Galaxy. The most massive of these, called Globular One, is the most luminous globular cluster in the Local Group of galaxies, which includes Andromeda, the Milky Way, and Triangulum as its largest galaxies. It has several large stellar populations with a total of several million stars, and is about twice as luminous as Omega Centauri, the most luminous globular cluster in the Milky Way Galaxy. It seems to be much too massive to be an ordinary globular cluster – it is now thought that Globular One might be the remnant core of a dwarf galaxy that was consumed by Andromeda in the distant past. G76 is the globular cluster with the greatest apparent brightness and is located in the eastern half of the southwest spiral arm of Andromeda. In 2006 another massive globular cluster was discovered which seems to have properties similar to Globular One and has a reddish color because of a concentration of interstellar gas. The globular clusters in the Andromeda Galaxy have a much larger range of ages than those in the Milky Way Galaxy, with some as old as Andromeda itself, around 12 billion years, and others much younger, in the range of several hundred million years to five billion years old.

In 2005 a completely new type of star cluster was discovered in Andromeda – these clusters contained hundreds of thousands of stars, like regular globular clusters.
However, they were much different in that they are far larger across – hundreds of light-years – and hundreds of times less dense, making the distance between the stars in these new types of clusters far greater than in ordinary clusters. In 2012 a microquasar (a burst of radio emissions from a small black hole), which is a smaller version of a quasar, was discovered in the Andromeda Galaxy. The black hole believed to be causing this microquasar is only about 10 solar masses and is located near the galactic center of Andromeda. This was the first microquasar observed outside of the Milky Way Galaxy.

Satellites Of The Andromeda Galaxy

The Andromeda Galaxy is like the Milky Way in that it has a number of smaller satellite galaxies; 14 of them are classified as dwarf galaxies. M32 and M110 are the best known and most easily observable of these. It is thought that M32 used to be a larger galaxy but had its stellar disk removed by M31 in the distant past. M110 may have a younger star population and also appears to be interacting with Andromeda, contributing to the galactic halo of Andromeda. In 2006 it was discovered that nine of these satellite galaxies lie in the same plane, which intersects the galactic core of Andromeda, instead of being randomly distributed – this may very well be because of a common gravitational tidal effect upon these galaxies.

Andromeda Collision With Milky Way

It is now known that the Andromeda Galaxy is slowly moving towards the Milky Way Galaxy at a rate of around 68 miles per second (110 kilometers per second), giving it a blueshift value of about 0.001001. The approach velocity of the Andromeda Galaxy is much greater than its tangential, or sideways, velocity – this means that there is likely to be a direct collision between Andromeda and the Milky Way in about 4 billion years. One possible outcome is that the two galaxies will merge to form a giant elliptical galaxy or disc galaxy, a frequent occurrence among galaxies in the Universe. The fate of the Earth and its Solar System in such an event is, of course, unknown. We should not be too concerned, though, since it is highly unlikely that humans and their civilization will be around in 4 billion years.

Observing The Andromeda Galaxy

The Andromeda Galaxy has an apparent magnitude of about 3.44, which makes it bright enough to be easily visible in moonless night skies with low levels of light pollution. It is probably best to view Andromeda during autumn nights in the Northern Hemisphere – in the mid-latitudes Andromeda reaches its zenith, the highest point in the sky, around midnight, and so is visible for almost the whole night. In the Southern Hemisphere Andromeda is visible in the same months, which fall in spring there, but it stays close to the horizon and is much more difficult to observe unless you are close to the equator. Andromeda is easily visible through binoculars, which can reveal some of its larger structure along with its two brightest satellite galaxies, M32 and M110. With a decent-sized amateur telescope, Andromeda's disk, some of the brightest globular clusters, dark dust lanes, and the large star cloud designated NGC 206 are visible.
Chemistry revision notes on rates of chemical reactions cover much the same ground across the GCSE boards: AQA science examines the factors that affect the rate of a reaction, while Edexcel science asks what controls the speed of a reaction. At a more advanced level, courses deal with the experimental and theoretical aspects of chemical reaction kinetics, including transition-state theories, molecular beam scattering, classical techniques, quantum and statistical mechanical estimation of rate constants, pressure dependence and chemical activation, modeling of complex reacting mixtures, and uncertainty analysis.

Chemical kinetics defines the rate of a reaction in terms of the rates at which the products are formed and the reactants (the reacting substances) are consumed. For chemical systems it is usual to work with the concentrations of the substances involved. The rate of a chemical reaction can be measured in two ways: the first is to measure how quickly the reactants (the substances on the left of the arrow in the equation) decrease; the second is to measure how quickly the products form.

Enthalpy also enters into rates-of-reaction coursework. Enthalpy, in chemistry, is the heat content in a chemical reaction; the enthalpy change is the amount of heat absorbed or released when a chemical reaction occurs at constant pressure.

Rate laws express how the rate depends on conditions. The rate of a certain chemical reaction might, for example, be directly proportional to the square of the concentration of chemical A present and inversely proportional to the concentration of another species; at a constant temperature, one can then ask which factors would be expected to affect the rate of a given chemical reaction. Topics here include relative rates of reaction, the power-law model, the rate constant, elementary and non-elementary reactions, reversible reactions, and batch-system and flow-system stoichiometric tables. The more molecules of a substance present in a given volume, the more concentrated it is, and concentration is one of the standard factors investigated in GCSE rates-of-reaction coursework.

A classic GCSE chemistry investigation of this kind is the decomposition of sodium thiosulphate: an experiment on how the concentrations of sodium thiosulphate (STS) and hydrochloric acid (HCl) affect the rate of the reaction between them. The area of chemistry that deals with the rate or speed of chemical reactions is known as chemical kinetics. A general chemistry course covering these topics typically correlates with the standard topics established by the American Chemical Society, with prerequisites in basic chemistry including nomenclature, reactions, and stoichiometry. A chemical equation is a symbolic representation of a chemical reaction in which the reactant entities are given on the left-hand side and the product entities on the right-hand side. Introductory physical chemistry modules explore the rate of reaction, stoichiometry and order, zero-order reactions, first-order reactions, and second-order reactions.
Rates of reaction also matter in industry: the chemical industry makes medicines and many other substances, and controlling the rate of reaction is central to doing so economically. There are many ways to change the rate of a chemical reaction, and rate-of-reaction experiments explore them systematically. For most chemical reactions, the rate increases with temperature, so raising the temperature speeds up the reaction to a certain extent; precaution is needed while increasing the temperature of a reaction to avoid accidents. Determining whether the rate of a chemical reaction is affected by temperature is itself a standard investigation: in such coursework, an experiment is conducted to show how this particular factor affects the rate.

Ideas for coursework assignments or projects involving the rates or speed of chemical reactions focus on the factors affecting the rates of chemical reactions, with brief descriptions of experimental methods and equations, particle pictures, and explanations of each factor that affects the rate of a chemical reaction. A typical kinetics laboratory exercise has the student add a reagent to the solution and stir; in an iodine-clock-style experiment, the solution should then turn blue. Kinetics of chemical reactions concerns how fast a reaction occurs and how the presence and concentrations of reactants affect reaction rates; in one such experiment, the rate of the reaction between Fe³⁺ and I⁻ ions is determined.
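As a minimal sketch of two of the ideas in these notes, the code below evaluates an Arrhenius rate constant and an integrated first-order rate law; the activation energy and pre-exponential factor are hypothetical example values, not data from any of the experiments described.

```python
# Illustrative sketch: Arrhenius temperature dependence, k = A*exp(-Ea/(R*T)),
# and the integrated first-order rate law [A](t) = [A]0 * exp(-k*t).
# The A-factor and activation energy below are hypothetical examples.

import math

R = 8.314  # gas constant, J mol^-1 K^-1

def rate_constant(a_factor: float, ea_j_per_mol: float, temp_k: float) -> float:
    """Arrhenius equation for the rate constant k."""
    return a_factor * math.exp(-ea_j_per_mol / (R * temp_k))

def first_order_concentration(c0: float, k: float, t_seconds: float) -> float:
    """Concentration remaining after time t for a first-order reaction."""
    return c0 * math.exp(-k * t_seconds)

k_298 = rate_constant(1e10, 75_000, 298.15)  # hypothetical A and Ea, at 25 C
k_308 = rate_constant(1e10, 75_000, 308.15)  # the same reaction at 35 C
print(k_308 / k_298)                         # ~2.7: rate roughly doubles per 10 K
print(first_order_concentration(1.0, k_298, 60.0))  # fraction left after a minute
```

The ratio printed first matches the familiar rule of thumb that a modest temperature rise markedly speeds up most reactions.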
A trade union (or a labor union in the U.S.) is an association of workers forming a legal unit or legal personhood, usually called a "bargaining unit", which acts as bargaining agent and legal representative for a unit of employees in all matters of law or right arising from or in the administration of a collective agreement. Labour unions typically fund the formal organization, head office, and legal-team functions of the union through regular fees or union dues. The delegate staff of the labour union's representation in the workforce are made up of workplace volunteers who are appointed by members in democratic elections.

Today, unions are usually formed for the purpose of securing improvement in pay, benefits, working conditions, or social and political status through collective bargaining, by the increased bargaining power wielded by the banding together of the workers. The trade union, through an elected leadership and bargaining committee, bargains with the employer on behalf of union members (rank-and-file members) and negotiates labour contracts (collective bargaining) with employers. The most common purpose of these associations or unions is "maintaining or improving the conditions of their employment". This may include the negotiation of wages, work rules, occupational health and safety standards, complaint procedures, rules governing the status of employees including promotions, just-cause conditions for termination, and employment benefits.

Unions may organize a particular section of skilled workers (craft unionism), a cross-section of workers from various trades (general unionism), or attempt to organize all workers within a particular industry (industrial unionism). The agreements negotiated by a union are binding on the rank-and-file members and the employer, and in some cases on other non-member workers. Trade unions traditionally have a constitution which details the governance of their bargaining unit, and they are also subject to governance at various levels of government, depending on the industry, which binds them legally to their negotiations and functioning.

Originating in Great Britain, trade unions became popular in many countries during the Industrial Revolution. Trade unions may be composed of individual workers, professionals, past workers, students, apprentices or the unemployed. Trade union density, or the percentage of workers belonging to a trade union, is highest in the Nordic countries.

Definition

Since the publication of the History of Trade Unionism (1894) by Sidney and Beatrice Webb, the predominant historical view is that a trade union "is a continuous association of wage earners for the purpose of maintaining or improving the conditions of their employment." Karl Marx described trade unions thus: "The value of labour-power constitutes the conscious and explicit foundation of the trade unions, whose importance for the […] working class can scarcely be overestimated. The trade unions aim at nothing less than to prevent the reduction of wages below the level that is traditionally maintained in the various branches of industry. That is to say, they wish to prevent the price of labour-power from falling below its value" (Capital V1, 1867, p. 1069).

A modern definition by the Australian Bureau of Statistics states that a trade union is "an organization consisting predominantly of employees, the principal activities of which include the negotiation of rates of pay and conditions of employment for its members." Yet historian R.A.
Leeson, in United we Stand (1971), said:

Two conflicting views of the trade-union movement strove for ascendancy in the nineteenth century: one the defensive-restrictive guild-craft tradition passed down through journeymen's clubs and friendly societies, ... the other the aggressive-expansionist drive to unite all 'labouring men and women' for a 'different order of things'.

Recent historical research by Bob James in Craft, Trade or Mystery (2001) puts forward the view that trade unions are part of a broader movement of benefit societies, which includes medieval guilds, Freemasons, Oddfellows, friendly societies, and other fraternal organizations. In The Wealth of Nations (1776), Adam Smith observed the imbalance between the positions of masters and workmen:

We rarely hear, it has been said, of the combination of masters, though frequently of those of workmen. But whoever imagines, upon this account, that masters rarely combine, is as ignorant of the world as of the subject. Masters are always and everywhere in a sort of tacit, but constant and uniform combination, not to raise the wages of labor above their actual rate[.] When workers combine, masters ... never cease to call aloud for the assistance of the civil magistrate, and the rigorous execution of those laws which have been enacted with so much severity against the combination of servants, labourers and journeymen.

As Smith noted, unions were illegal for many years in most countries, although Smith argued that it should remain illegal to fix wages or prices, whether by employees or employers. There were severe penalties for attempting to organize unions, up to and including execution. Despite this, unions were formed and began to acquire political power, eventually resulting in a body of labour law that not only legalized organizing efforts but codified the relationship between employers and those employees organized into unions.

History

The origins of trade unions can be traced back to 18th-century Britain, where the rapid expansion of industrial society then taking place drew women, children, rural workers and immigrants into the work force in large numbers and in new roles. They encountered considerable hostility in their early existence from employers and government groups; at the time, unions and unionists were regularly prosecuted under various restraint-of-trade and conspiracy statutes. This pool of unskilled and semi-skilled labour spontaneously organized in fits and starts throughout its beginnings, and would later be an important arena for the development of trade unions. Trade unions have sometimes been seen as successors to the guilds of medieval Europe, though the relationship between the two is disputed, as the masters of the guilds employed workers (apprentices and journeymen) who were not allowed to organize.

Trade unions and collective bargaining were outlawed from no later than the middle of the 14th century, when the Ordinance of Labourers was enacted in the Kingdom of England, but their way of thinking was the one that endured down the centuries, inspiring evolutions and advances in thinking which eventually gave workers their necessary rights. As collective bargaining and early worker unions grew with the onset of the Industrial Revolution, the government began to clamp down on what it saw as the danger of popular unrest at the time of the Napoleonic Wars. In 1799, the Combination Act was passed, which banned trade unions and collective bargaining by British workers. Although the unions were subject to often severe repression until 1824, they were already widespread in cities such as London.
Workplace militancy had also manifested itself as Luddism and had been prominent in struggles such as the 1820 Rising in Scotland, in which 60,000 workers went on a general strike, which was soon crushed. Sympathy for the plight of the workers brought repeal of the acts in 1824, although the Combination Act 1825 severely restricted their activity.

By the 1810s, the first labour organizations to bring together workers of divergent occupations had been formed. Possibly the first such union was the General Union of Trades, also known as the Philanthropic Society, founded in 1818 in Manchester. The latter name was chosen to hide the organization's real purpose at a time when trade unions were still illegal.

National general unions

The first attempts at setting up a national general union were made in the 1820s and '30s. The National Association for the Protection of Labour was established in 1830 by John Doherty, after an apparently unsuccessful attempt to create a similar national presence with the National Union of Cotton-spinners. The Association quickly enrolled approximately 150 unions, consisting mostly of textile-related unions, but also including mechanics, blacksmiths, and various others. Membership rose to between 10,000 and 20,000 individuals spread across the five counties of Lancashire, Cheshire, Derbyshire, Nottinghamshire and Leicestershire within a year. To establish awareness and legitimacy, the union started the weekly Voice of the People publication, with the declared intention "to unite the productive classes of the community in one common bond of union."

In 1834, the Welsh socialist Robert Owen established the Grand National Consolidated Trades Union. The organization attracted a range of socialists from Owenites to revolutionaries and played a part in the protests after the Tolpuddle Martyrs' case, but soon collapsed.

More permanent trade unions were established from the 1850s, better resourced but often less radical. The London Trades Council was founded in 1860, and the Sheffield Outrages spurred the establishment of the Trades Union Congress in 1868, the first long-lived national trade union center. By this time, the existence and the demands of the trade unions were becoming accepted by liberal middle-class opinion. In Principles of Political Economy (1871) John Stuart Mill wrote:

If it were possible for the working classes, by combining among themselves, to raise or keep up the general rate of wages, it needs hardly be said that this would be a thing not to be punished, but to be welcomed and rejoiced at. Unfortunately the effect is quite beyond attainment by such means. The multitudes who compose the working class are too numerous and too widely scattered to combine at all, much more to combine effectually. If they could do so, they might doubtless succeed in diminishing the hours of labour, and obtaining the same wages for less work. They would also have a limited power of obtaining, by combination, an increase of general wages at the expense of profits.

Legalization and expansion

Trade unions were finally legalized in 1872, after a Royal Commission on Trade Unions in 1867 agreed that the establishment of the organizations was to the advantage of both employers and employees. This period also saw the growth of trade unions in other industrializing countries, especially the United States, Germany and France. In the United States, the first effective nationwide labour organization was the Knights of Labor, founded in 1869, which began to grow after 1880.
Legalization occurred slowly as a result of a series of court decisions. The Federation of Organized Trades and Labor Unions began in 1881 as a federation of different unions that did not directly enrol workers. In 1886, it became known as the American Federation of Labor, or AFL. In France, labour organization was illegal until 1884. The Bourse du Travail was founded in 1887 and merged with the Fédération nationale des syndicats (National Federation of Trade Unions) in 1895 to form the General Confederation of Labour (France).

Prevalence worldwide

The prevalence of unions in various countries can be measured by the concept of "union density", which is expressed as a percentage of the total number of workers in a given location who are trade union members. Trade union density around the world shows great variation, for example:

Country                               Year   Union density (%)
Bosnia and Herzegovina                2012   30.0
Hong Kong, China                      2016   26.1
Korea, Republic of                    2015   10.1
Lao People's Democratic Republic      2010   15.5
Moldova, Republic of                  2016   23.9
Saint Vincent and the Grenadines      2010    4.9
Taiwan, Republic of China             2010   39.3
Tanzania, United Republic of          2015   24.3
Trinidad and Tobago                   2013   19.8

Trade unions by country

Australia

Supporters of unions, such as the ACTU or the Australian Labor Party (ALP), often credit trade unions with leading the labour movement in the early 20th century. This movement generally sought to end child labour practices, improve worker safety, increase wages for both union and non-union workers, raise the entire society's standard of living, reduce the hours in a work week, provide public education for children, and bring other benefits to working-class families.

Melbourne Trades Hall was opened in 1859, with Trades and Labour Councils and Trades Halls opening in all cities and most regional towns in the next forty years. During the 1880s trade unions developed among shearers, miners, and stevedores (wharf workers), but soon spread to cover almost all blue-collar jobs. Shortages of labour led to high wages for a prosperous skilled working class, whose unions demanded and got an eight-hour day and other benefits unheard of in Europe. Australia gained a reputation as "the working man's paradise." Some employers tried to undercut the unions by importing Chinese labour. This produced a reaction which led to all the colonies restricting Chinese and other Asian immigration. This was the foundation of the White Australia Policy. The "Australian compact", based around centralised industrial arbitration, a degree of government assistance particularly for primary industries, and White Australia, was to continue for many years before gradually dissolving in the second half of the 20th century.

In the 1870s and 1880s, the growing trade union movement began a series of protests against foreign labour. Their arguments were that Asians and Chinese took jobs away from white men, worked for "substandard" wages, lowered working conditions and refused unionisation. Objections to these arguments came largely from wealthy land owners in rural areas. It was argued that without Asiatics to work in the tropical areas of the Northern Territory and Queensland, those areas would have to be abandoned. Despite these objections to restricting immigration, between 1875 and 1888 all Australian colonies enacted legislation which excluded all further Chinese immigration. Asian immigrants already residing in the Australian colonies were not expelled and retained the same rights as their Anglo and Southern compatriots.
The Barton Government, which came to power after the first elections to the Commonwealth parliament in 1901, was formed by the Protectionist Party with the support of the Australian Labor Party. The support of the Labor Party was contingent upon restricting non-white immigration, reflecting the attitudes of the Australian Workers Union and the other labour organisations upon whose support the Labor Party was founded.

Baltic states

In the Baltic states, trade unions were part of the Soviet trade union system and closely connected with the party in the state; industrial action was not part of their activities. After 1990, trade unions in the Baltic states experienced a rapid loss of membership and economic power, while employers' organisations increased both power and membership. Low financial and organisational capacity caused by declining membership adds to the problem of defining, aggregating and protecting workers' interests in negotiations with employers' and state organisations. The three states also differ in how unions are organized and in union density. From 2008 onwards, union density declined slightly in Latvia and Lithuania. In Estonia, density is lower than in Latvia and Lithuania but has stayed stable at an average of 7 percent of total employment. Weak historical legitimacy is one of the factors behind unions' low associational power.

Belgium

With 65% of workers belonging to a union, Belgium has one of the highest rates of union membership; only the Scandinavian countries have a higher union density. The biggest union, with around 1.7 million members, is the Christian-democratic Confederation of Christian Trade Unions (ACV-CSC), founded in 1904. Its origins can be traced back to the "Anti-Socialist Cotton Workers Union" founded in 1886. The second biggest union is the socialist General Federation of Belgian Labour (ABVV-FGTB), with more than 1.5 million members. The ABVV-FGTB traces its origins to 1857, when the first Belgian union was founded in Ghent by a group of weavers; the socialist union in its current form was founded in 1898. The third "big" union in Belgium is the liberal General Confederation of Liberal Trade Unions of Belgium (ACLVB-CGSLB), relatively small in comparison with the first two at a little under 290,000 members. The ACLVB-CGSLB was founded in 1920 in an effort to unite the many small liberal unions; it was then known as the "Nationale Centrale der Liberale Vakbonden van België" and adopted its current name in 1930. Besides these "big three" there is a long list of smaller unions, some more influential than others. These smaller unions tend to specialize in one profession or economic sector. Next to these specialized unions, there is also the Neutral and Independent Union, which rejects the pillarization that, in its view, the "big three" represent. There is also a small Flemish nationalist union, the Vlaamse Solidaire Vakbond, which exists only in the Flemish-speaking part of Belgium, and the very small but highly active anarchist union called the Vrije Bond.

Canada

Canada's first trade union, the Labourers' Benevolent Association (now International Longshoremen's Association Local 273), formed in Saint John, New Brunswick, in 1849.
The union was formed when Saint John's longshoremen banded together to lobby for regular pay and a shorter workday. Canadian unionism had early ties with Britain and Ireland: tradesmen who came from Britain brought traditions of the British trade union movement, and many British unions had branches in Canada. Canadian unionism's ties with the United States eventually replaced those with Britain. Collective bargaining was first recognized in 1945, after the strike by the United Auto Workers at the General Motors plant in Oshawa, Ontario. Justice Ivan Rand issued a landmark legal decision after a strike in Windsor, Ontario, involving 17,000 Ford workers, granting the union the compulsory check-off of union dues. Rand ruled that all workers in a bargaining unit benefit from a union-negotiated contract; therefore, he reasoned, they must pay union dues, although they do not have to join the union. The post-World War II era also saw an increased pattern of unionization in the public service. Teachers, nurses, social workers, professors and cultural workers (those employed in museums, orchestras and art galleries) all sought collective bargaining rights of the kind established in the private sector. The Canadian Labour Congress was founded in 1956 as the national trade union center for Canada. In the 1970s, the federal government came under intense pressure to curtail labour costs and inflation, and in 1975 the Liberal government of Pierre Trudeau introduced mandatory price and wage controls. Under the new law, wage increases were monitored, and those ruled to be unacceptably high were rolled back by the government. Pressures on unions continued into the 1980s and '90s. Private sector unions faced plant closures in many manufacturing industries and demands to reduce wages and increase productivity. Public sector unions came under attack by federal and provincial governments as those governments attempted to reduce spending, reduce taxes and balance budgets. Legislation was introduced in many jurisdictions rolling back union collective bargaining rights, and many jobs were lost to contractors. Prominent domestic unions in Canada include ACTRA, the Canadian Union of Postal Workers, the Canadian Union of Public Employees, the Public Service Alliance of Canada, the National Union of Public and General Employees, and Unifor. International unions active in Canada include the International Alliance of Theatrical Stage Employees, United Automobile Workers, United Food and Commercial Workers, and United Steelworkers.

Colombia

Until around 1990, Colombian trade unions were among the strongest in Latin America. However, the 1980s expansion of paramilitarism in Colombia saw trade union leaders and members increasingly targeted for assassination, and as a result Colombia has been the most dangerous country in the world for trade unionists for several decades. Between 2000 and 2010, Colombia accounted for 63.12% of trade unionists murdered globally. According to the International Trade Union Confederation (ITUC), there were 2,832 murders of trade unionists between 1 January 1986 and 30 April 2010, meaning that "on average, men and women trade unionists in Colombia have been killed at the rate of one every three days over the last 23 years."

Costa Rica

In Costa Rica, trade unions first appeared in the late 1800s to support workers in a variety of urban and industrial jobs, such as railroad builders and craft tradesmen.
After facing violent repression, such as during the 1934 United Fruit Strike, unions gained more power after the 1948 Costa Rican Civil War. Today, Costa Rican unions are strongest in the public sector, including the fields of education and medicine, but also have a strong presence in the agricultural sector. In general, Costa Rican unions support government regulation of the banking, medical, and education fields, as well as improved wages and working conditions.

Germany

Trade unions in Germany have a history reaching back to the German revolution of 1848, and they still play an important role in the German economy and society. In 1875, the SPD (the Social Democratic Party of Germany), one of the country's biggest political parties, supported the formation of unions in Germany. The most important labour organisation is the German Confederation of Trade Unions (Deutscher Gewerkschaftsbund, DGB), which represents more than 6 million people (as of 31 December 2011) and is the umbrella association of several single trade unions for particular economic sectors. The DGB is not the only confederation representing working people: there are smaller organizations, such as the CGB, a Christian-based confederation, which represents over 1.5 million people.

India

In India, the trade union movement is generally divided along political lines. According to provisional statistics from the Ministry of Labour, trade unions had a combined membership of 24,601,589 in 2002. As of 2008, there are 11 Central Trade Union Organisations (CTUO) recognized by the Ministry of Labour. The formation of these unions was a significant development in India, prompting a push for more regulatory legislation that gave workers considerably more power. One trade union with nearly 2,000,000 members is the Self Employed Women's Association (SEWA), which protects the rights of Indian women working in the informal economy. In addition to protecting rights, SEWA educates, mobilizes, finances, and promotes its members' trades. Multiple other organizations also represent workers; they are aligned with different political groups, allowing people with differing political views to join a union.

Japan

Labour unions emerged in Japan in the second half of the Meiji period as the country underwent rapid industrialization. Until 1945, however, the labour movement remained weak, impeded by a lack of legal rights, anti-union legislation, management-organised factory councils, and political divisions between "cooperative" and radical unionists. In the immediate aftermath of the Second World War, the US occupation authorities initially encouraged the formation of independent unions. Legislation was passed that enshrined the right to organise, and membership rapidly rose to 5 million by February 1947. The organisation rate, however, peaked at 55.8% in 1949 and subsequently declined to 18.2% (2006). The labour movement went through a process of reorganisation from 1987 to 1991, from which emerged the present configuration of three major labour union federations, Rengo, Zenroren, and Zenrokyo, along with other smaller national union organisations.

Mexico

Before the 1990s, unions in Mexico had historically been part of a state institutional system.
From 1940 until the 1980s, a period that saw the worldwide spread of neoliberalism through the Washington Consensus, the Mexican unions did not operate independently, but instead as part of a state institutional system, largely controlled by the ruling party. During these decades, the primary aim of the labour unions was not to benefit the workers but to carry out the state's economic policy under their cosy relationship with the ruling party. This economic policy, which peaked in the 1950s and '60s with the so-called "Mexican Miracle", saw rising incomes and improved standards of living, but the primary beneficiaries were the wealthy. In the 1980s, Mexico began adhering to Washington Consensus policies, selling off state industries such as railroads and telecommunications to private owners. The new owners had an antagonistic attitude towards unions, which, accustomed to comfortable relationships with the state, were not prepared to fight back. A movement of new unions began to emerge under a more independent model, while the formerly institutionalized unions had become very corrupt, violent, and led by gangsters. From the 1990s onwards, this new model of independent unions prevailed, a number of them represented by the National Union of Workers (Unión Nacional de Trabajadores). Older institutions such as the Oil Workers Union and the National Education Workers' Union (Sindicato Nacional de Trabajadores de la Educación, or SNTE) illustrate how government benefits channelled through unions have not gone toward improving oil research or basic education in Mexico, even as their leaders publicly live in wealth. With 1.4 million members, the teachers' union is Latin America's largest; half of Mexico's government employees are teachers. It controls school curriculums and all teacher appointments. Until recently, retiring teachers routinely "gave" their lifelong appointment to a relative or "sold" it for anywhere between $4,700 and $11,800.

Nordic countries

Trade unions (Danish/Norwegian: Fagforeninger, Swedish: Fackföreningar) have a long tradition in Scandinavian and Nordic society. Beginning in the mid-19th century, they have come to have a large impact on the nature of employment and workers' rights in many of the Nordic countries. One of the largest trade unions in Sweden is the Swedish Confederation of Trade Unions (LO, Landsorganisationen), incorporating unions such as the Swedish Metal Workers' Union (IF Metall, Industrifacket Metall), the Swedish Electricians' Union (Svenska Elektrikerförbundet) and the Swedish Municipality Workers' Union (Svenska Kommunalarbetareförbundet, abbreviated Kommunal). One of the aims of IF Metall is to transform jobs into "good jobs", also called "developing jobs". The Swedish system is strongly based on the so-called Swedish model, which stresses the importance of collective agreements between trade unions and employers. Today, the world's highest rates of union membership are in the Nordic countries. As of 2018 or the latest available year, the percentage of workers belonging to a union (union density) was 90.4% in Iceland, 67.2% in Denmark, 66.1% in Sweden, 64.4% in Finland and 52.5% in Norway, while it is unknown in Greenland, the Faroe Islands and the Åland Islands. Excluding full-time students working part-time, Swedish union density was 67% in 2018. In all the Nordic countries with a Ghent system (Sweden, Denmark and Finland), union density is about 70%.
The considerable increase in membership fees for Swedish union unemployment funds, implemented by the new center-right government in January 2007, caused large drops in membership in both unemployment funds and trade unions. From 2006 to 2008, union density declined by six percentage points, from 77% to 71%.

United Kingdom

Moderate New Model Unions dominated the union movement from the mid-19th century, a period in which trade unionism was stronger than the political labour movement; this lasted until the formation and growth of the Labour Party in the early years of the 20th century. Trade unionism in the United Kingdom was a major factor in some of the economic crises of the 1960s and 1970s, culminating in the "Winter of Discontent" of late 1978 and early 1979, when a significant percentage of the nation's public sector workers went on strike. By this stage, some 12,000,000 workers in the United Kingdom were trade union members. However, the election of the Conservative Party led by Margaret Thatcher at the general election in May 1979, at the expense of Labour's James Callaghan, brought substantial trade union reform, and the level of strikes fell. The level of trade union membership also fell sharply in the 1980s and continued falling for most of the 1990s. The long decline of most of the industries in which manual trade unions were strong (e.g. steel, coal, printing, the docks) was one of the causes of this loss of members. In 2011 there were 6,135,126 members in TUC-affiliated unions, down from a peak of 12,172,508 in 1980. Trade union density was 14.1% in the private sector and 56.5% in the public sector.

United States

Labour unions are legally recognized as representatives of workers in many industries in the United States. Trade unions in the United States were formed on the principle of power exercised with the people, rather than over the people in the manner of the government of the day. Their activity today centres on collective bargaining over wages, benefits and working conditions for their membership, and on representing their members in disputes with management over violations of contract provisions. Larger unions also typically engage in lobbying activities and in supporting endorsed candidates at the state and federal level. Most unions in America are aligned with one of two larger umbrella organizations: the AFL-CIO, created in 1955, and the Change to Win Federation, which split from the AFL-CIO in 2005. Both advocate policies and legislation on behalf of workers in the United States and Canada, and take an active role in politics; the AFL-CIO is especially concerned with global trade issues. In 2010, the percentage of workers belonging to a union in the United States (total union density) was 11.4%, compared to 18.3% in Japan, 27.5% in Canada and 70% in Finland. Union membership in the private sector has fallen under 7%, levels not seen since 1932. Unions allege that employer-incited opposition has contributed to this decline in membership. The most prominent unions are among public sector employees such as teachers, police and other non-managerial or non-executive federal, state, county and municipal employees. Members of unions are disproportionately older, male and residents of the Northeast, the Midwest, and California. Union workers in the private sector average 10-30% higher pay than non-union workers in America after controlling for individual, job, and labour market characteristics.
Because of their inherently governmental function, public sector workers are paid the same regardless of union affiliation, after controlling for individual, job, and labour market characteristics. The economist Joseph Stiglitz has asserted that "Strong unions have helped to reduce inequality, whereas weaker unions have made it easier for CEOs, sometimes working with market forces that they have helped shape, to increase it." The decline in unionization since the Second World War in the United States has been associated with a pronounced rise in income and wealth inequality and, since 1967, with a loss of middle-class income.

Vatican (Holy See)

The Association of Vatican Lay Workers represents lay employees in the Vatican.

Structure and politics

Unions may organize a particular section of skilled workers (craft unionism, traditionally found in Australia, Canada, Denmark, Norway, Sweden, Switzerland, the UK and the US), a cross-section of workers from various trades (general unionism, traditionally found in Australia, Belgium, Canada, Denmark, the Netherlands, the UK and the US), or attempt to organize all workers within a particular industry (industrial unionism, found in Australia, Canada, Germany, Finland, Norway, South Korea, Sweden, Switzerland, the UK and the US). These unions are often divided into "locals" and united in national federations. The federations themselves may affiliate with internationals, such as the International Trade Union Confederation. In Japan, however, union organization is slightly different, owing to the presence of enterprise unions, i.e. unions specific to a plant or company. These enterprise unions join industry-wide federations, which in turn are members of Rengo, the Japanese national trade union confederation. In Western Europe, professional associations often carry out the functions of a trade union. In these cases, they may be negotiating for white-collar or professional workers, such as physicians, engineers or teachers. Typically such trade unions refrain from politics or pursue a more liberal politics than their blue-collar counterparts. A union may acquire the status of a "juristic person" (an artificial legal entity) with a mandate to negotiate with employers for the workers it represents. In such cases, unions have certain legal rights, most importantly the right to engage in collective bargaining with the employer (or employers) over wages, working hours, and other terms and conditions of employment. The inability of the parties to reach an agreement may lead to industrial action, culminating in either strike action or management lockout, or binding arbitration. In extreme cases, violent or illegal activities may develop around these events. In other circumstances, unions may not have the legal right to represent workers, or the right may be in question. This lack of status can range from non-recognition of a union to political or criminal prosecution of union activists and members, with many cases of violence and deaths having been recorded historically. Unions may also engage in broader political or social struggle. Social unionism encompasses many unions that use their organizational strength to advocate for social policies and legislation favourable to their members or to workers in general. In some countries, unions are closely aligned with political parties. Unions are also delineated by the service model and the organizing model.
The service model union focuses more on maintaining worker rights, providing services, and resolving disputes. Alternatively, the organizing model typically involves full-time union organizers, who work by building up confidence, strong networks, and leaders within the workforce, and by mounting confrontational campaigns involving large numbers of union members. Many unions are a blend of these two philosophies, and the definitions of the models themselves are still debated. In Britain, the perceived left-leaning nature of trade unions has resulted in the formation of a reactionary right-wing trade union called Solidarity, which is supported by the far-right BNP. In Denmark, there are some newer apolitical "discount" unions that offer a very basic level of services, as opposed to the dominant Danish pattern of extensive services and organizing. In contrast, in several European countries (e.g. Belgium, Denmark, the Netherlands and Switzerland), religious unions have existed for decades. These unions typically distanced themselves from some of the doctrines of orthodox Marxism, such as the preference for atheism and the rhetoric suggesting that employees' interests are always in conflict with those of employers. Some of these Christian unions have had ties to centrist or conservative political movements, and some do not regard strikes as an acceptable political means of achieving employees' goals. In Poland, the biggest trade union, Solidarity, emerged as an anti-communist movement with religious-nationalist overtones, and today it supports the right-wing Law and Justice party. Although their political structure and autonomy vary widely, union leaderships are usually formed through democratic elections. Some research, such as that conducted by the Australian Centre for Industrial Relations Research and Training, argues that unionized workers enjoy better conditions and wages than those who are not unionized.

Shop types

Companies that employ workers with a union generally operate on one of several models:

- A closed shop (US) or a "pre-entry closed shop" (UK) employs only people who are already union members. The compulsory hiring hall is an example of a closed shop; in this case the employer must recruit directly from the union, and the employee works strictly for unionized employers.
- A union shop (US) or a "post-entry closed shop" (UK) also employs non-union workers, but sets a time limit within which new employees must join a union.
- An agency shop requires non-union workers to pay a fee to the union for its services in negotiating their contract. This is sometimes called the Rand formula. In certain situations involving state public employees in the United States, such as in California, "fair share laws" make it easy to require these sorts of payments.
- An open shop does not require union membership in employing or keeping workers. Where a union is active, workers who do not contribute to the union may include those who approve of the union contract (free riders) and those who do not. In the United States, state-level right-to-work laws mandate the open shop in some states. In Germany only open shops are legal; that is, all discrimination based on union membership is forbidden. This affects the function and services of the union.
An EU case concerning Italy stated that "The principle of trade union freedom in the Italian system implies recognition of the right of the individual not to belong to any trade union ("negative" freedom of association/trade union freedom), and the unlawfulness of discrimination liable to cause harm to non-unionized employees." In Britain, prior to this EU jurisprudence, a series of laws introduced during the 1980s by Margaret Thatcher's government restricted closed and union shops; all agreements requiring a worker to join a union are now illegal. In the United States, the Taft-Hartley Act of 1947 outlawed the closed shop. In 2006, the European Court of Human Rights found Danish closed-shop agreements to be in breach of Article 11 of the European Convention on Human Rights and Fundamental Freedoms, stressing that Denmark and Iceland were among a limited number of contracting states that continued to permit the conclusion of closed-shop agreements.

Diversity of international unions

Union law varies from country to country, as does the function of unions. For example, German and Dutch unions have played a greater role in management decisions, through participation in corporate boards and co-determination, than have unions in the United States. Moreover, in the United States collective bargaining is most commonly undertaken by unions directly with employers, whereas in Austria, Denmark, Germany or Sweden unions most often negotiate with employers' associations.

- "In the Continental European System of labour market regulation, the government plays an important role, as there is a strong legislative core of employee rights which provides the basis for agreements as well as a framework for discord between unions on one side and employers or employers' associations on the other. This model was said to be found in EU core countries such as Belgium, France, Germany, the Netherlands and Italy, and it is also mirrored and emulated to some extent in the institutions of the EU, due to the relative weight that these countries had in the EU until the EU expansion by the inclusion of 10 new Eastern European member states in 2004.
- In the Anglo-Saxon System of labour market regulation, the government's legislative role is much more limited, which allows for more issues to be decided between employers and employees and any union or employers' associations which might represent these parties in the decision-making process. However, in these countries, collective agreements are not widespread; only a few businesses and a few sectors of the economy have a strong tradition of finding collective solutions in labour relations. Ireland and the UK belong to this category, and in contrast to the EU core countries above, these countries first joined the EU in 1973.
- In the Nordic System of labour market regulation, the government's legislative role is limited in the same way as in the Anglo-Saxon system. However, in contrast to the countries in the Anglo-Saxon category, there is a much more widespread network of collective agreements, covering most industries and most firms. This model was said to encompass Denmark, Finland, Norway and Sweden. Here, Denmark joined the EU in 1973, whereas Finland and Sweden joined in 1995."

The United States takes a more laissez-faire approach, setting some minimum standards but leaving most workers' wages and benefits to collective bargaining and market forces. Thus, it comes closest to the Anglo-Saxon model above.
The Eastern European countries that have recently entered the EU also come closest to the Anglo-Saxon model. In contrast, in Germany the relation between individual employees and employers is considered to be asymmetrical; in consequence, many working conditions are not negotiable, due to strong legal protection of individuals. The German model of works legislation, however, has as its main objective to create a balance of power between employees organized in unions and employers organized in employers' associations. This allows much wider legal boundaries for collective bargaining, compared with the narrow boundaries for individual negotiations. As a condition for obtaining the legal status of a trade union, an employee association must prove that its leverage is strong enough to serve as a counterforce in negotiations with employers. If such an association competes against another union, its leverage may be questioned by unions and then evaluated in a court trial. In Germany, only very few professional associations have obtained the right to negotiate salaries and working conditions for their members, notably the medical doctors' association Marburger Bund and the pilots' association Vereinigung Cockpit. The engineers' association Verein Deutscher Ingenieure does not strive to act as a union, as it also represents the interests of engineering businesses.

Beyond the classification listed above, unions' relations with political parties vary. In many countries unions are tightly bonded, or even share leadership, with a political party intended to represent the interests of the working class. Typically this is a left-wing, socialist, or social democratic party, but many exceptions exist, including some of the aforementioned Christian unions. In the United States, trade unions are almost always aligned with the Democratic Party, with a few exceptions: the International Brotherhood of Teamsters, for example, has supported Republican Party candidates on a number of occasions, and the Professional Air Traffic Controllers Organization (PATCO) endorsed Ronald Reagan in 1980. In Britain, the trade union movement's relationship with the Labour Party frayed as the party leadership embarked on privatization plans at odds with what unions saw as workers' interests. However, it strengthened once more after Labour's election of Ed Miliband, who beat his brother David Miliband to become leader of the party after Ed secured the trade union votes. Additionally, in the past there was a group known as the Conservative Trade Unionists (CTU), formed of people who sympathized with right-wing Tory policy but were trade unionists. Historically, the Republic of Korea has regulated collective bargaining by requiring employers to participate, but collective bargaining has only been legal if held in sessions before the lunar new year.

International unionization

The largest trade union federation in the world is the Brussels-based International Trade Union Confederation (ITUC), which has approximately 309 affiliated organizations in 156 countries and territories, with a combined membership of 166 million. The ITUC is a federation of national trade union centres, such as the AFL-CIO in the United States and the Trades Union Congress in the United Kingdom. Other global trade union organizations include the World Federation of Trade Unions.
National and regional trade unions organizing in specific industry sectors or occupational groups also form global union federations, such as Union Network International, the International Transport Workers' Federation, the International Federation of Journalists, the International Arts and Entertainment Alliance and Public Services International.

Criticisms

In the United States, the outsourcing of labour to Asia, Latin America, and Africa has been partially driven by the increasing costs of union partnership, which give other countries a comparative advantage in labour and make it more profitable to purchase unorganized, low-wage labour from these regions. Milton Friedman, economist and advocate of laissez-faire capitalism, sought to show that unionization produces higher wages (for the union members) at the expense of fewer jobs, and that, if some industries are unionized while others are not, wages will tend to decline in the non-unionized industries. On the other hand, several studies have emphasized so-called revitalization strategies, in which trade unions attempt to better represent labour market outsiders, such as the unemployed and precarious workers. Thus, for instance, trade unions in both Nordic and southern European countries have devised collective bargaining agreements that improved the conditions of temporary agency workers. Several studies have found evidence that trade unions can reduce competitiveness by cutting into business profits, which can then lead to job losses as firms become unable to compete. Unions have also been criticized for prolonging recessions and depressions by discouraging investment.

Union publications

Several sources of current news exist about the trade union movement in the world. These include LabourStart and the official website of the international trade union movement, Global Unions. Another source of international union news is RadioLabour, which provides daily (Monday to Friday) news reports. Labor Notes is the largest-circulation cross-union publication remaining in the United States; it reports news and analysis about union activity and problems facing the labour movement. A further source of union news is the Workers Independent News, a news organization providing radio articles to independent and syndicated radio shows in the United States.

Film

- The 2010 British film Made in Dagenham, starring Sally Hawkins, dramatizes the Ford sewing machinists strike of 1968 that aimed for equal pay for women.
- Trade unions were often portrayed in the scripts of Jim Allen. Examples include The Big Flame, The Rank and File and Days of Hope. These films all depict union leaders as untrustworthy and prone to betraying the striking workers.
- The British National Union of Mineworkers has been portrayed in numerous films such as Brassed Off, Billy Elliot and Pride.
- Bastard Boys, a 2007 dramatization of the 1998 Australian waterfront dispute.
- The 2000 film Bread and Roses deals with the struggle of poorly paid janitorial workers in Los Angeles and their fight for better working conditions and the right to unionize.
- Hoffa, a 1992 American biographical film directed by Danny DeVito and based on the life of Teamsters Union leader Jimmy Hoffa.
- Matewan is a 1987 American drama film, written and directed by John Sayles, that dramatizes the events of the Battle of Matewan, a coal miners' strike in 1920 in Matewan, a small town in the hills of West Virginia.
Haskell Wexler was nominated for the Academy Award for Best Cinematography for the film.
- The 1985 documentary film Final Offer by Sturla Gunnarsson and Robert Collison shows the 1984 union contract negotiations with General Motors.
- The 1979 film Norma Rae, directed by Martin Ritt and starring Sally Field, is based on the true story of Crystal Lee Jordan's successful attempt to unionize her textile factory.
- The 1978 film F.I.S.T., directed by Norman Jewison and starring Sylvester Stallone, is loosely based on the Teamsters Union and their former president Jimmy Hoffa.
- The 1959 film I'm All Right Jack, a comedy with Peter Sellers playing the shop steward Fred Kite.
- The 1954 film On the Waterfront, directed by Elia Kazan, concerns union violence among longshoremen.
- Other documentaries: Made in L.A. (2007); American Standoff (2002); The Fight in the Fields (1997); With Babies and Banners: Story of the Women's Emergency Brigade (1979); Harlan County, USA (1976); The Inheritance (1964).
- Other dramatizations: 10,000 Black Men Named George (2002); American Playhouse – "The Killing Floor" (1985); Salt of the Earth (1954); The Grapes of Wrath (1940); Black Fury (1935); Metello (1970).
- The 2019 film The Irishman, directed by Martin Scorsese and starring Robert De Niro, Al Pacino and Joe Pesci, is based on the 2004 non-fiction book I Heard You Paint Houses by Charles Brandt.

See also

- Labor federation competition in the United States
- Labor Management Reporting and Disclosure Act
- List of trade unions
- New Unionism
- Project Labor Agreement
- Professional association
- Salt (union organizing)
- Textile and clothing trade unions
- Union busting

Notes and references

- Frost, Daniel (1 April 1967). "Labor's Antitrust Exemption". California Law Review. Archived from the original on 16 December 2017. Retrieved 15 December 2017. "...the United States Supreme Court again undertook the delicate task of defining the antitrust exemption granted labor unions by section six of the Clayton Act."
- Webb, Sidney; Webb, Beatrice (1920). History of Trade Unionism. London: Longmans and Co. Ch. I.
- Poole, M. (1986). Industrial Relations: Origins and Patterns of National Diversity. London: Routledge.
- OECD. Retrieved 1 December 2017.
- "Industrial relations". ILOSTAT. Retrieved 9 October 2018.
- "Trade Union Census". Australian Bureau of Statistics. Retrieved 27 July 2011.
- (1928). The Guild and the Trade Union. The Age.
- Kautsky, Karl (April 1901). "Trades Unions and Socialism". International Socialist Review. 1 (10). Retrieved 27 July 2011.
- G. D. H. Cole (2010). Attempts at General Union. Taylor & Francis. p. 3. ISBN 9781136885167.
- Mill, John Stuart. Principles of Political Economy (1871), Book V, Ch. 10, para. 5.
- "Trade union". Encyclopædia Britannica.
- "Industrial relations" (PDF). International Labour Organisation. Retrieved 9 October 2018.
- History of the ACTU. Archived 21 November 2008 at the Wayback Machine. Australian Council of Trade Unions.
- Markey, Raymond (1 January 1996). "Race and organized labor in Australia, 1850–1901". Highbeam Research. Archived from the original on 19 October 2017. Retrieved 14 June 2006.
- Griffiths, Phil (4 July 2002). "Towards White Australia: The shadow of Mill and the spectre of slavery in the 1880s debates on Chinese immigration" (RTF). 11th Biennial National Conference of the Australian Historical Association. Retrieved 14 June 2006.
- Dvorak, J., Karnite, R., Guogis, A. (2018). "The Characteristic Features of Social Dialogue in the Baltics".
STEPP: Socialinė teorija, empirija, politika ir praktika, Nr. 16, pp. 26–36.
- Dvorak, J., Civinskas, R. (2018). "The Determinants of Cooperation and the Need for Better Communication between Stakeholders in EU Countries: The Case of Posted Workers". Polish Journal of Management Studies, Vol. 18 (1), pp. 94–106. https://pjms.zim.pcz.pl/resources/html/article/details?id=183839
- "Aantal leden christelijke vakbond neemt jaar na jaar toe". Retrieved 16 January 2018.
- "130 jaar ACV-geschiedenis". Retrieved 16 January 2018.
- "Hoeveel leden telt het ABVV? - Vlaams ABVV - Socialistische vakbond in Vlaanderen - Algemeen Belgisch Vakverbond ABVV". www.vlaamsabvv.be. Retrieved 16 January 2018.
- "Structuur en kerncijfers van de ACLVB". 12 October 2015. Retrieved 16 January 2018.
- "Geschiedenis van de ACLVB". 12 October 2015. Retrieved 16 January 2018.
- "For Whom The Bells Toll". Hatheway Labour Exhibit Center. Retrieved 6 May 2017.
- "Archived copy". Archived from the original on 27 July 2013. Retrieved 15 July 2013.
- American Center for International Labor Solidarity (2006). Justice For All: The Struggle for Worker Rights in Colombia. Archived 17 July 2010 at the Wayback Machine. p. 11.
- An ILO mission in 2000 reported that "the number of assassinations, abductions, death threats and other violent assaults on trade union leaders and unionized workers in Colombia is without historical precedent. According to the Colombian Government, during the period 1991–99 there were 593 assassinations of trade union leaders and unionized workers, while the National Trade Union School holds that 1,336 union members were assassinated." – ILO, 16 June 2000, Special ILO Representative for cooperation with Colombia to be appointed by Director-General.
- "By the 1990s, Colombia had become the most dangerous country in the world for unionists" – Chomsky, Aviva (2008). Linked Labor Histories: New England, Colombia, and the Making of a Global Working Class. Duke University Press. p. 11.
- "Colombia has the world's worst record on these assassinations..." – 20 November 2008, Colombia: Not Time for a Trade Deal.
- International Trade Union Confederation (11 June 2010). ITUC responds to the press release issued by the Colombian Interior Ministry concerning its survey.
- International Trade Union Confederation (2010). Annual Survey of Violations of Trade Union Rights: Colombia.
- "Historia del Sindicalismo". SITRAPEQUIA website (in Spanish). San José: Sindicato de Trabajadores(as) Petroléros Químicos y Afines. 2014. Archived from the original on 5 May 2014. Retrieved 4 May 2014.
- Herrera, Manuel (30 April 2014). "Sindicatos alzarán la voz contra modelo neoliberal en celebraciones del 1° de mayo". La Nación (in Spanish). San José. Retrieved 7 May 2014.
- Conradt, David. "Social Democratic Party of Germany". Encyclopædia Britannica, 2017.
- Fulton, L. (2015). "Trade Unions. Worker Participation. SEEurope Network". Worker-Participation.eu. Retrieved 15 November 2017.
- Sengupta, Meghna. "Trade Unions in India". Pocket Lawyer. Retrieved 15 November 2017.
- Datta, Rekha. "From Development to Empowerment: The Self-Employed Women's Association in India". International Journal of Politics, Culture, and Society.
- Chand, Smriti (17 February 2014). "6 Major Central Trade Unions of India". Your Article Library. Retrieved 15 November 2017.
- Nimura, K.
The Formation of Japanese Labor Movement: 1868–1914 (translated by Terry Boardman). Retrieved 11 June 2011.
- Cross Currents. Labor unions in Japan. CULCON. Retrieved 11 June 2011.
- Weathers, C. (2009). "Business and Labor". In William M. Tsutsui (ed.), A Companion to Japanese History (pp. 493–510). Chichester, UK: Blackwell Publishing Ltd.
- Jung, L. (30 March 2011). National Labour Law Profile: Japan. ILO. Retrieved 10 June 2011.
- Japan Institute for Labour Policy and Training. Labor Situation in Japan and Analysis: 2009/2010. Archived 27 September 2011 at the Wayback Machine. Retrieved 10 June 2011.
- Dolan, R. E. & Worden, R. L. (eds.). Japan: A Country Study. Labor Unions, Employment and Labor Relations. Washington: GPO for the Library of Congress, 1994. Retrieved 12 June 2011.
- Dan La Botz, U.S.-supported Economics Spurred Mexican Emigration, pt. 1, interview at The Real News, 1 May 2010.
- Juan Montes; José de Córdoba (21 December 2012). "Mexico Takes On Teachers Over School Control". Wall Street Journal.
- Anders Bruhn, Anders Kjellberg and Åke Sandberg (2013). "A New World of Work Challenging Swedish Unions". In Åke Sandberg (ed.), Nordic Lights. Work, Management and Welfare in Scandinavia. Stockholm: SNS. pp. 155–160.
- "Trade Union Density". OECD. Accessed 6 October 2019.
- Anders Kjellberg (2019). Kollektivavtalens täckningsgrad samt organisationsgraden hos arbetsgivarförbund och fackförbund. Department of Sociology, Lund University. Studies in Social Policy, Industrial Relations, Working Life and Mobility. Research Reports 2019:1, Appendix 3 (in English), Table A.
- Anders Kjellberg (2011). "The Decline in Swedish Union Density since 2007". Nordic Journal of Working Life Studies (NJWLS), Vol. 1, No. 1 (August 2011), pp. 67–93.
- Schifferes, Steve (8 March 2004). "The trade unions' long decline". BBC News. Retrieved 16 January 2014.
- "United Kingdom: Industrial relations profile". EUROPA. 15 April 2013. Archived from the original on 3 December 2013. Retrieved 16 January 2014.
- Kazin, Michael (1995). The Populist Persuasion. BasicBooks. p. 154.
- Trade Union Density. OECD StatExtracts. Retrieved 17 November 2011.
- Union Members Summary. Bureau of Labor Statistics, 27 January 2012. Retrieved 26 February 2012.
- "Not With a Bang, But a Whimper: The Long, Slow Death Spiral of America's Labor Movement". The New Republic. 6 June 2012. Retrieved 16 January 2018.
- Mayer, Gerald. Union Membership Trends in the United States. Congressional Research Service, 31 August 2004.
- Doree Armstrong (12 February 2014). Jake Rosenfeld explores the sharp decline of union membership, influence. UW Today. Retrieved 6 March 2015. See also: Jake Rosenfeld (2014). What Unions No Longer Do. Harvard University Press. ISBN 0674725115.
- Keith Naughton, Lynn Doan and Jeffrey Green (20 February 2015). As the Rich Get Richer, Unions Are Poised for Comeback. Bloomberg. Retrieved 6 March 2015.
- "A 2011 study drew a link between the decline in union membership since 1973 and expanding wage disparity. Those trends have since continued, said Bruce Western, a professor of sociology at Harvard University who co-authored the study."
- Stiglitz, Joseph E. (4 June 2012). The Price of Inequality: How Today's Divided Society Endangers Our Future (Kindle locations 1148–1149). Norton. Kindle Edition.
- Barry T. Hirsch, David A. Macpherson, and Wayne G.
Vroman, "Estimates of Union Density by State". Monthly Labor Review, Vol. 124, No. 7, July 2001.
- "The 10 Biggest Strikes in American History". Fox Business, 9 August 2011. Archived 2 December 2013 at the Wayback Machine.
- Amnesty International report, 23 September 2005 – fear for safety of SINALTRAINAL member José Onofre Esquivel Luna.
- See the website of the Danish discount union "Det faglige Hus" at http://www.detfagligehus.dk/ (website in Danish).
- Tittenbrun, Jacek (Poznan University). "The economic and social processes that led to the revolt of the Polish workers in the early eighties". www.marxist.com. Retrieved 16 January 2018.
- Solidarność popiera Kaczyńskiego jak kiedyś Wałęsę at news.money.pl (in Polish).
- See E. McGaughey, "Democracy or Oligarchy? Models of Union Governance in the UK, Germany and US" (2017), ssrn.com.
- "Australian Centre for Industrial Relations Research and Training report" (PDF). Acirrt.com. Archived from the original (PDF) on 22 July 2011. Retrieved 27 July 2011.
- Eurofound website, "Freedom of Association/Trade Union Freedom". Archived from the original on 17 April 2011. Retrieved 3 March 2012.
- Eurofound, http://www.eurofound.europa.eu/eiro/2006/01/feature/dk0601104f.htm
- Bamberg, Ulrich (June 2004). "The role of German trade unions in the national and European standardisation process" (PDF). TUTB Newsletter, 24–25. Archived from the original (PDF) on 26 July 2011. Retrieved 27 July 2011.
- Gold, M. (1993). The Social Dimension – Employment Policy in the European Community. Basingstoke, UK: Macmillan Publishing.
- Hall, M. (1994). "Industrial Relations and the Social Dimension of European Integration: Before and after Maastricht", pp. 281–331 in Hyman, R. & Ferner, A. (eds.), New Frontiers in European Industrial Relations. Basil Blackwell Publishing.
- Wagtmann, M.A. (2010). Module 3, Maritime & Port Wages, Benefits, Labour Relations. International Maritime Human Resource Management textbook modules. Available at: https://skydrive.live.com/?cid=f90c069a3e6bb729&id=F90C069A3E6BB729%21107#cid=F90C069A3E6BB729&id=F90C069A3E6BB729%21182
- Kramarz, Francis (19 October 2006). "Outsourcing, Unions, and Wages: Evidence from data matching imports, firms, and workers" (PDF). Retrieved 22 January 2007.
- Friedman, Milton (2007). Price Theory (new ed., 3rd printing). New Brunswick, NJ: Transaction Publishers. ISBN 978-0-202-30969-9.
- Vlandas, Timothee; Benassi, Chiara (2016). "Union inclusiveness and temporary agency workers" (PDF). European Journal of Industrial Relations. 22 (1): 5–22. doi:10.1177/0959680115589485.
- Sherk, James. "What Unions Do: How Labor Unions Affect Jobs and the Economy". The Heritage Foundation, 21/05/19, accessed 01/03/19.

Further reading

Britain

- Aldcroft, D. H. and Oliver, M. J., eds. Trade Unions and the Economy, 1870–2000 (2000).
- Campbell, A., Fishman, N., and McIlroy, J., eds. British Trade Unions and Industrial Politics: The Post-War Compromise 1945–64 (1999).
- Clegg, H. A., et al. A History of British Trade Unions Since 1889 (1964); A History of British Trade Unions Since 1889: vol. 2, 1911–1933 (1985); A History of British Trade Unionism Since 1889, vol. 3: 1934–51 (1994). The major scholarly history; highly detailed.
- Davies, A. J. To Build a New Jerusalem: Labour Movement from the 1890s to the 1990s (1996).
- Laybourn, Keith. A History of British Trade Unionism c. 1770–1990 (1992).
- Minkin, Lewis.
The Contentious Alliance: Trade Unions and the Labour Party (1991), 708 pp.
- Pelling, Henry. A History of British Trade Unionism (1987).
- Wrigley, Chris, ed. British Trade Unions, 1945–1995 (Manchester University Press, 1997).
- Zeitlin, Jonathan. "From labour history to the history of industrial relations". Economic History Review 40.2 (1987): 159–184. Historiography.
- Directory of Employer's Associations, Trade Unions, Joint Organisations. HMSO (Her Majesty's Stationery Office), 1986. ISBN 0-11-361250-8.

United States

- Arnesen, Eric, ed. Encyclopedia of U.S. Labor and Working-Class History (2006), 3 vols., 2064 pp.; 650 articles by experts.
- Beik, Millie, ed. Labor Relations: Major Issues in American History (2005); over 100 annotated primary documents.
- Boris, Eileen, and Nelson Lichtenstein, eds. Major Problems in the History of American Workers: Documents and Essays (2002).
- Brody, David. In Labor's Cause: Main Themes on the History of the American Worker (1993).
- Dubofsky, Melvyn, and Foster Rhea Dulles. Labor in America: A History (2004); textbook, based on earlier textbooks by Dulles.
- Taylor, Paul F. The ABC-CLIO Companion to the American Labor Movement (1993), 237 pp.; short encyclopedia.
- Zieger, Robert H., and Gilbert J. Gall. American Workers, American Unions: The Twentieth Century (3rd ed., 2002).

Other

- Berghahn, Volker R., and Detlev Karsten. Industrial Relations in West Germany (Bloomsbury Academic, 1988).
- European Commission, Directorate General for Employment, Social Affairs & Inclusion: Industrial Relations in Europe 2010.
- Gumbrell-McCormick, Rebecca, and Richard Hyman. Trade Unions in Western Europe: Hard Times, Hard Choices (Oxford UP, 2013).
- Hodder, A. and L. Kretsos, eds. Young Workers and Trade Unions: A Global View (Palgrave-Macmillan, 2015).
- Kester, Gérard. Trade Unions and Workplace Democracy in Africa (Routledge, 2016).
- Kjellberg, Anders. "The Decline in Swedish Union Density since 2007". Nordic Journal of Working Life Studies (NJWLS), Vol. 1, No. 1 (August 2011), pp. 67–93.
- Kjellberg, Anders (2017). The Membership Development of Swedish Trade Unions and Union Confederations Since the End of the Nineteenth Century (Studies in Social Policy, Industrial Relations, Working Life and Mobility). Research Reports 2017:2. Lund: Department of Sociology, Lund University.
- Lipton, Charles (1967). The Trade Union Movement of Canada: 1827–1959 (3rd ed., Toronto: New Canada Publications, 1973).
- Markovits, Andrei. The Politics of West German Trade Unions: Strategies of Class and Interest Representation in Growth and Crisis (Routledge, 2016).
- McGaughey, Ewan. "Democracy or Oligarchy? Models of Union Governance in the UK, Germany and US" (2017), ssrn.com.
- Misner, Paul. Catholic Labor Movements in Europe: Social Thought and Action, 1914–1965 (2015).
- Mommsen, Wolfgang J., and Hans-Gerhard Husung, eds. The Development of Trade Unionism in Great Britain and Germany, 1880–1914 (Taylor & Francis, 1985).
- Orr, Charles A. "Trade Unionism in Colonial Africa". Journal of Modern African Studies, 4 (1966), pp. 65–81.
- Panitch, Leo & Swartz, Donald (2003). From Consent to Coercion: The Assault on Trade Union Freedoms (3rd ed.). Ontario: Garamond Press.
- Ribeiro, Ana Teresa. "Recent Trends in Collective Bargaining in Europe". E-Journal of International and Comparative Labour Studies 5.1 (2016).
- Taylor, Andrew.
Trade Unions and Politics: A Comparative Introduction (Macmillan, 1989).
- Upchurch, Martin, and Graham Taylor. The Crisis of Social Democratic Trade Unionism in Western Europe: The Search for Alternatives (Routledge, 2016).
- Visser, Jelle. "Union membership statistics in 24 countries". Monthly Labor Review 129 (2006): 38+.
- Visser, Jelle. "ICTWSS: Database on institutional characteristics of trade unions, wage setting, state intervention and social pacts in 34 countries between 1960 and 2007". Institute for Advanced Labour Studies, AIAS, University of Amsterdam (2011).

External links

- LabourStart international trade union news service
- New Unionism Network
- Younionize Global Union Directory
- Australian Council of Trade Unions (ACTU)
- Trade union membership 1993–2003 – European Industrial Relations Observatory report on membership trends in 26 European countries
- Trade union membership 2003–2008 – European Industrial Relations Observatory report on membership trends in 28 European countries
- Trade Union Ancestors – Listing of 5,000 UK trade unions, with histories of main organizations, trade union "family trees" and details of union membership and strikes since 1900
- TUC History online – History of the British union movement
- Trade EU – European Trade Directory
- Short history of the UGT in Catalonia
- Benjamin Brown, "Trade Unions, Strikes, and the Renewal of Halakhic Labor Law: Ideologies in the Rulings of Rabbis Kook, Uziel, and Feinstein" – on trade unions and strikes in Jewish law (Halakhah)
Seeing deep into space requires large telescopes. The larger the telescope, the more light it collects and the sharper the image it provides. For example, NASA's Kepler space observatory, with a mirror diameter of under one meter, is searching for exoplanets orbiting stars up to 3,000 light-years away. By contrast, the Hubble Space Telescope, with a 2.4-meter mirror, has studied stars more than 10 billion light-years away. Now Caltech's Sergio Pellegrino and colleagues are proposing a space observatory that would have a primary mirror with a diameter of 100 meters, roughly 40 times larger than Hubble's. Space telescopes, which provide some of the clearest images of the universe, are typically limited in size by the difficulty and expense of sending large items into space. Pellegrino's team would circumvent that issue by shipping the mirror up as separate components to be assembled, in space, by robots. Their design calls for more than 300 deployable truss modules that could be unfolded to form a scaffolding upon which a commensurate number of small mirror plates could be placed to create a large segmented mirror. The assembly of the scaffolding and the attachment of the many mirrors is a task well suited to robots, Pellegrino and his colleagues say. In their concept, a spider-like, six-armed "hexbot" would assemble the trusswork and then crawl across the structure to build the mirror atop it. It was modeled on the JPL RoboSimian system, which in 2015 completed the DARPA Robotics Challenge, a federal competition aimed at spurring the development of robots that could perform complicated tasks that would be dangerous for humans. The hexbot would run on electrical power from the telescope's solar grid. It would use four of its arms to walk, with one leg moving at any given time while the three others remain securely attached to the structure; the two remaining arms would be free to assemble the trusses and mirrors. The team opted to pursue an ambitious 100-meter design. "We wanted to study how different kinds of architectures perform as the diameter is increased," says Pellegrino, Joyce and Kent Kresa Professor of Aeronautics and Professor of Civil Engineering in Caltech's Division of Engineering and Applied Science, and Jet Propulsion Laboratory Senior Research Scientist. "We found that far away from the Earth, a structurally connected telescope is much heavier than an architecture based on separate spacecraft for the primary mirror, the optics, and the instrumentation." The realization of such an assembly is still decades away. However, Pellegrino and his colleagues are already working on the various technologies that will be needed to make it possible. The entire space observatory would be composed of the fully assembled mirror-and-truss structure and three other parts, flying in formation. An optics and instrumentation unit would be located about 400 meters from the mirror; a control unit, stationed about 400 meters beyond that, would align the system and keep it working properly; and a thin shade, roughly 20 meters in diameter, would shield the mirror from the sun to keep its temperature stable and consistent across its diameter. The four-part assembly would be stationed at one of the sun–earth Lagrange points, locations where the combined gravitational pull of the sun and the earth allows a spacecraft to hold a fixed position relative to both bodies. There, the space observatory could peer deep into space without drifting out of place.
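To put the aperture comparison at the top of this article in numbers: a mirror's light-collecting power scales with its area (diameter squared), and its diffraction-limited angular resolution scales as roughly 1.22 λ/D. The following is a minimal sketch of both relations; the mirror diameters come from the article, while the observing wavelength of 550 nm (visible light) is an assumption made here for illustration:

```python
import math

def area_ratio(d1_m: float, d2_m: float) -> float:
    """Relative light-collecting power of two circular mirrors (scales as D squared)."""
    return (d1_m / d2_m) ** 2

def diffraction_limit_arcsec(wavelength_m: float, diameter_m: float) -> float:
    """Diffraction-limited angular resolution, theta ~ 1.22 * lambda / D, in arcseconds."""
    theta_rad = 1.22 * wavelength_m / diameter_m
    return math.degrees(theta_rad) * 3600.0

hubble_d, proposed_d = 2.4, 100.0   # mirror diameters in meters (from the article)
wavelength = 550e-9                 # assumed visible-light wavelength in meters

print(f"Collecting-area ratio: {area_ratio(proposed_d, hubble_d):.0f}x")  # ~1736x
print(f"2.4 m mirror limit:  {diffraction_limit_arcsec(wavelength, hubble_d):.4f} arcsec")
print(f"100 m mirror limit:  {diffraction_limit_arcsec(wavelength, proposed_d):.4f} arcsec")
```

The roughly 1,700-fold jump in collecting area, alongside a 40-fold gain in angular resolution, is what motivates accepting the complexity of in-space robotic assembly.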
Astronomers have detected carbon smog permeating the interstellar atmospheres of early galaxies, helping confirm that these ancient galaxies were mostly dust free. The discovery sheds light on how some of the first galaxies to form in the universe grew and evolved, researchers said. Gas and dust are the main components of the interstellar medium, the matter in galaxies that constitutes the building blocks of stars and planets. The gases hydrogen and helium make up 98 percent of all "normal" (i.e., not dark) matter in the universe. The other 2 percent — any elements heavier than helium, making up everything from dust to planets — was created from the fusion of hydrogen and helium atoms in the hearts of stars. Since dust formed only after the birth of the first stars, scientists expected that galaxies began nearly dustless, getting dustier as they evolved. However, confirming what the interstellar medium in early galaxies was like has been a challenge for researchers. If there was less dust, it would make starlight from early galaxies bluer, but there are many other effects that could have made this light bluer as well, said study lead author Peter Capak, an astronomer at the California Institute of Technology in Pasadena. Now, using the powerful Atacama Large Millimeter/submillimeter Array (ALMA) observatory in Chile, astronomers have confirmed that early galaxies apparently have less dust than more evolved galaxies. "There is little dust in early galaxies about 1 billion years old, in the epoch of the first galaxies," Capak told Space.com. "We are starting to see the point where galaxies are truly primordial, some of the first to form stars." The researchers analyzed nine star-forming galaxies located about 13 billion light-years away from Earth. The scientists therefore viewed the galaxies when the universe was 1 billion years old or so, or only about seven percent of its current age. The team focused on the faint glow of ionized carbon. This element can become ionized, or electrically charged, due to the powerful ultraviolet radiation emitted by bright, massive stars. In the process, it will give off specific frequencies of radio waves. Since carbon has a strong chemical affinity for other elements, binding to make simple and complex organic molecules, it does not remain in an unbound, ionized state for very long. This means the radio glow of ionized carbon is probably a good marker for an early galaxy, which would possess much lower concentrations of the heavy elements that ionized carbon would bind to in later galaxies to form dust grains. Prior attempts to detect the radio glow of ionized carbon in early galaxies had failed for nearly 30 years. Some researchers had speculated that a few billion years more were needed for stars to manufacture sufficient quantities of carbon to be seen across vast cosmic distances. However, ALMA readily detected the haze of ionized carbon in these early galaxies. In comparison, similar galaxies existing about 2 billion years later had much less ionized carbon. Prior studies failed to detect this ancient radio glow, researchers said, because these studies focused on atypical galaxies undergoing mergers — dramatic activity that may have masked the faint signal from ionized carbon. By analyzing how ionized carbon is moving in these early galaxies, astronomers can learn details about star formation at the time, Capak added.
This, in turn, could help solve the mystery of how galaxies were able to reach the massive sizes they attained early in the universe's history, he said. The scientists detailed their findings online today (June 24) in the journal Nature.
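The ages quoted above can be roughly reproduced from a standard cosmology. The sketch below is a minimal flat Lambda-CDM calculation, assuming round textbook parameter values (H0 = 70 km/s/Mpc, Omega_m = 0.3, Omega_Lambda = 0.7) and a hypothetical redshift of about z = 5.5 for the sample; none of these numbers come from the article itself.

    #include <cmath>
    #include <cstdio>

    // Assumed flat Lambda-CDM parameters (round textbook values, not from the article):
    const double HUBBLE_TIME_GYR = 13.97;  // 1/H0 in Gyr for H0 = 70 km/s/Mpc
    const double OMEGA_M = 0.3;            // matter density parameter
    const double OMEGA_L = 0.7;            // dark-energy density parameter

    // Dimensionless expansion rate E(z) = H(z)/H0 for a flat universe.
    double expansion(double z) { return std::sqrt(OMEGA_M * std::pow(1.0 + z, 3.0) + OMEGA_L); }

    // Age of the universe at redshift z:
    //   t(z) = (1/H0) * integral from z to infinity of dz' / ((1 + z') * E(z')),
    // evaluated with a simple midpoint rule; z = 1000 stands in for infinity.
    double age_gyr(double z) {
        const double dz = 0.001;
        double sum = 0.0;
        for (double zp = z; zp < 1000.0; zp += dz) {
            const double zm = zp + 0.5 * dz;
            sum += dz / ((1.0 + zm) * expansion(zm));
        }
        return HUBBLE_TIME_GYR * sum;
    }

    int main() {
        const double z = 5.5;  // hypothetical redshift, of the order implied by the quoted distances
        std::printf("Age at z = %.1f: %.2f Gyr\n", z, age_gyr(z));   // ~1.0 Gyr
        std::printf("Age today:      %.2f Gyr\n", age_gyr(0.0));     // ~13.5 Gyr
        std::printf("Fraction of current age: %.0f%%\n",
                    100.0 * age_gyr(z) / age_gyr(0.0));              // ~8%
        return 0;
    }

Under these assumptions the universe is roughly 1 Gyr old at z = 5.5 and about 13.5 Gyr old today, consistent with the roughly seven percent figure quoted in the article.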
How do you sum a number in C++?
- Step 1: Get the number from the user.
- Step 2: Get the modulus/remainder of the number (its last digit).
- Step 3: Add the remainder to a running sum.
- Step 4: Divide the number by 10.
- Step 5: Repeat from step 2 while the number is greater than 0.
Herein, how do you sum a number in C++? C++ Program to Display the Sum of the Digits of a given Number:
    // C++ program to display the sum of the digits of a given number.
    #include <iostream>
    using namespace std;
    int main() {
        int val, num, sum = 0;
        cout << "Enter the number : ";
        cin >> val;
        num = val;
        while (num != 0) {
            sum += num % 10;  // add the last digit to the sum
            num /= 10;        // drop the last digit
        }
        cout << "Sum of the digits of " << val << " is " << sum << endl;
    }
Likewise, how do you sum a number in a for loop? Getting the sum using a for loop implies that you should:
- Create an array of numbers, in the example int values.
- Create a for statement, with an int variable from 0 up to the length of the array, incremented by one each time in the loop.
- In the for statement, add each of the array's elements to an int sum.
Besides the above, how do you find the digit sum of a number? The digit sum of a number, say 152, is just the sum of the digits, 1+5+2=8. If the sum of the digits is greater than nine, then the process is repeated. For example, the sum of the digits for 786 is 7+8+6=21, and the sum of the digits for 21 is 2+1=3, so the digit sum of 786 is 3.
What does += mean in C++? += is the add AND assignment operator: it adds the right operand to the left operand and assigns the result to the left operand, so C += A is equivalent to C = C + A. Similarly, -= is the subtract AND assignment operator: it subtracts the right operand from the left operand and assigns the result to the left operand.
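The for-loop answer above is described only in prose; a minimal, self-contained C++ sketch of that loop, together with the repeated digit-sum (digital-root) idea from the third answer, might look like the following. The array contents, the digitalRoot name, and the sample values are illustrative, not taken from the original answers.

    #include <iostream>
    using namespace std;

    // Digital root: repeat the digit sum until a single digit remains,
    // e.g. 786 -> 7+8+6 = 21 -> 2+1 = 3 (matching the worked example above).
    int digitalRoot(int n) {
        while (n > 9) {
            int digitSum = 0;
            for (int m = n; m > 0; m /= 10)
                digitSum += m % 10;  // accumulate one digit per iteration
            n = digitSum;
        }
        return n;
    }

    int main() {
        // Sum an array with a for loop, following the steps described above.
        int values[] = {152, 786, 21};  // illustrative data, not from the original answer
        int sum = 0;
        for (int i = 0; i < 3; i++)
            sum += values[i];           // add each element to the running sum
        cout << "Array sum: " << sum << endl;                          // prints 959
        cout << "Digital root of 786: " << digitalRoot(786) << endl;   // prints 3
        return 0;
    }

For this sample data the program prints an array sum of 959 and a digital root of 3, matching the 786 worked example above.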
Logic & Reasoning
The Logic & Reasoning domain focuses on children developing the ability to think, reason, and use information to make good decisions and understand their surroundings. Children need to develop skills in logic and reasoning to ensure that they can seek solutions to problems and that they are not taken advantage of by other people or companies when they are older. As you choose curricula and assessments for your program, keep in mind the two domain elements of Logic & Reasoning and the accompanying examples of skills 3 to 5 year olds should be developing in this area:
- Reasoning & Problem Solving
- Children should recognize and understand cause and effect relationships.
- Using past knowledge to classify, compare, and contrast events, objects, and experiences is also an important skill for children to develop.
- Seeking multiple solutions to a problem or task is another sign that children are growing in this area of the Framework.
- Symbolic Representation
- Children should participate and act out roles in dramatic play.
- An ability to recognize the difference between pretend situations and reality is another skill children need to develop.
- It is also important for children to begin to understand that symbols or objects can represent something else at this stage in their development.
For more in-depth information about this domain or additional resources, visit the Cognition & General Knowledge page on the Early Childhood Learning & Knowledge Center's (ECLKC) website. Kaplan offers a variety of interactive games, books, activity guides, and dramatic play items to help promote logic and reasoning skills in children.
Transparent Marble Run
- Item Number: 84211
- In Stock
3 years & up. Roll your way into a good time with this marbulous marble run! The durable, transparent pieces allow children to visually track the entire course of their marble as it makes its journey through several fun twists and turns. 48 transparent pieces, plus 16 marbles.
Promo Pricing: $29.95
- Item Number: 32682
- In Stock
3 years & up. Reach inside and identify mysterious objects by touch! Builds tactile, communication, matching, and memory skills. With openings on both sides, two children can play together and develop cooperative play skills. Hardwood and vinyl box measures 8"H x 10 3/4"W x 6"D. Colors may vary.
- Item Number: 30614
- In Stock
2 years & up. Balance, stack, nest, and create a wide variety of patterns and shapes with 12 hexagonal pieces. Activity guide includes over 100 examples from easy to more complex designs.
Gears! Gears! Gears!® (95 Pieces)
- Item Number: 30994
- In Stock
3 years & up. Challenge patterning and problem-solving skills with this exciting set of gears, cranks, connectors, pillars, and interlocking plates! Includes 95 pieces in one size for hundreds of 3-dimensional possibilities.
Mathematics Knowledge & Skills
This domain refers to children understanding the concepts of numbers, shapes, and patterns, among other math subjects. Young children need to develop early math skills to help them connect ideas and to question and analyze the world around them. The mathematic skills 3 to 5 year olds develop will be the foundation of how they solve more complex mathematical concepts and equations later in life.
Remember the following elements and examples of mathematical skills 3 to 5 year olds should be developing as you create lesson plans based on suggested guidelines from the Head Start Child Development and Learning Framework:
- Number Concepts & Quantities
- Children should understand that numbers represent quantities and that they go in a certain order.
- Associating numbers and quantities with written numerals is another skill they should be developing at this stage.
- Children should also learn how to determine quantity by using one-to-one counting or by identifying the number of items without counting.
- Number Relationships & Operations
- Using a range of strategies to compare the quantity of two objects is one example of the skills children should be developing in this domain element.
- Children should also be able to recognize that numbers can be used to make another number.
- Identifying a new number when others are combined or separated is another skill 3 to 5 year olds need to develop.
- Geometry & Spatial Sense
- Children should be able to understand shapes, their attributes, and how they are related to one another.
- Comparing an object based on size and shape is another important skill for children to learn.
- Directions and the position of objects are other important concepts children need to develop in this area.
- Patterns
- Children should recognize patterns and how they are sequenced.
- Duplicating and extending simple patterns are other skills children need to acquire in this stage of development.
- Creating their own patterns is a great indicator of how children's skills are growing in this domain element.
- Measurement & Comparison
- Comparing objects by their length, weight, and size are necessary skills 3 to 5 year olds should learn.
- Using size or length to organize objects into a certain order is another example of what skills children should be developing in this area of the Framework.
- Using tools and techniques to measure and compare objects also demonstrates the knowledge and skills children are acquiring at this stage in their development.
For more in-depth information about this domain, visit the Mathematics Knowledge & Skills page on ECLKC's website. Kaplan offers a variety of math materials that give children the opportunity to explore various mathematical concepts. Children can learn about measurements, quantity, shape, patterns, and more with our educational math games, number blocks, books, charts, and other math items.
See Saw Sorter
- Item Number: 91038
- In Stock
1 year & up. Stack the shapes on opposite ends of the see-saw to create a balanced platform. Shapes fit snugly in routed holes. Promotes color and shape recognition, balancing, color-matching, and sequencing. Sturdy wooden construction with rounded corners and smooth edges. Safe non-toxic paints with a clear finish.
Montessori Sorting Box
- Item Number: 1470
- In Stock
3 years & up. What an inventive way to experience colors, shapes, and numerals! 18" box has 10 compartments and a lid with corresponding slots. Follow one of the included header strips and slip tiles through appropriate slots. Includes 4 number sequence strips, 2 color sequence strips, 50 counting tiles, 20 color tiles, and a wooden storage box. Measures 18"L x 2 1/2"H x 6 1/4"W.
The Big Tape
- Item Number: 15834
- In Stock
3 years & up. The Big Tape Measure is just like a carpenter's, except that it is kid-friendly. Measure everything in centimeters or inches using this retractable tape measure.
The Big Tape Measure is big and chunky, easy for little hands to use, and helps children understand size relationships. 36"/100 cm long.
Science Knowledge & Skills
The Science Knowledge & Skills domain concerns the emerging ability of children to gather information about their surroundings and then organize that information into theories and knowledge. Children are very curious during the early childhood years, and encouraging children to explore and learn about the world around them will help them develop an interest in the environment and other science subjects. As you design your Head Start program to meet school readiness goals, consider these elements and examples of the Science Knowledge & Skills domain:
- Scientific Skills & Method
- Children should use their senses and other tools to observe and collect information and then use that information to ask questions and investigate science processes and relationships.
- Children need to be able to observe and discuss similarities, differences, and comparisons among objects at this stage in their development.
- Another indicator of development in this area is whether children can use past experiences and what they are currently learning to make predictions and discuss explanations and generalizations in the science world.
- Conceptual Knowledge of the Natural & Physical World
- Observing, discussing, and describing living things and natural processes are a few skills 3 to 5 year olds need to develop in this area.
- Children should also be able to make observations, describe, and discuss material properties and how substances transform.
- Acquiring knowledge of the concepts and facts of science and then using that knowledge to understand relationships in nature and the environment is another skill children need to learn at this stage in their development.
Visit the Science Knowledge & Skills page on ECLKC's website for more in-depth information and resources about this domain. Be sure to check out Kaplan's science book sets, aquariums, magnets, microscopes, scales, bug and animal sets, and our educational light table. Our science materials and supplies make great additions to the classroom and can help your Head Start program meet requirements and guidelines.
Root Vue Farm Set
- Item Number: 31325
- In Stock
4 years & up. Durable unit with acrylic viewing window, built-in water basin and drainage reservoir, light shield that keeps plants growing but can be removed for viewing roots, 8 super-expanding grow mix wafers, 3 packets of seeds, identification labels, and a 16-page booklet with complete instructions and experiments. Measures 9 1/4"H x 16"W x 5 3/4"D.
Science Habitat Center - Blue
- Item Number: 91188
- Ships From Manufacturer
This one-of-a-kind center allows you to create a garden, a butterfly house, a ladybug habitat, or whatever science project you wish. A spacious learning environment for your entire classroom. A 5" diameter screw-on cap provides easy access. Comes in blue frame only, with a Mega-Tray, vent plug, and super plug. Remove the dome and turn the center into a Sand & Water Center. Easy to move on (4) 3" locking casters. 2 snap-on caddies are included. Some assembly required. 33"H x 21"W x 21"D.
Life Cycle Puzzles (Set of 4)
- Item Number: 71413
- In Stock
3 years & up. Four colorful puzzles represent the life cycles of a frog, ant, chicken, and oak tree. Each highly detailed, photographic image has a matching picture beneath the easy-to-grasp piece. Each puzzle has 4 pieces.
Social Studies Knowledge & Skills
The Social Studies Knowledge & Skills domain refers to children understanding how people relate to one another and the world around them. The social studies skills children develop at this stage in their development will help them learn about themselves, their family, community, nation, and the world. Keep in mind the following domain elements and skill examples when you are choosing assessments and curricula for your Head Start program:
- Self, Family, & Community
- Children should be able to identify their personal and family structure and understand and respect the similarities and differences among people.
- Being able to recognize the work associated with a variety of jobs is another skill children should be developing at this stage.
- Understanding why there are rules at home and in the classroom and laws in the community is also an important concept 3 to 5 year olds need to develop.
- People & the Environment
- Children should be able to recognize different aspects of the environment, such as land formations, roads, buildings, trees, and bodies of water.
- They should also recognize that the environment is shared by people, plants, and animals.
- Understanding that activities, such as recycling or turning off the water when they brush their teeth, can help take care of the environment is another important concept children need to learn at this stage in their development.
- History & Events
- Children should be able to differentiate between the past, present, and future.
- The ability to recognize what events happened in the past is an important skill for 3 to 5 year olds to develop.
- Understanding how people live and that what they do changes over time will help children be successful now and in the future.
For more in-depth information about this domain, visit the Social Studies Knowledge & Skills page on ECLKC's website. Kaplan offers a wide variety of globes, multi-cultural items and activities, character education book sets, maps, activity books, and much more. Our social studies products will help children learn about diversity, traditions, history, and other cultures.
- Item Number: 70482
- In Stock
3 years & up. Encourage children to protect the earth by increasing environmental awareness. One large board and 20 tiles. Match the recyclable item with the correct recycling bin. Activity guide included.
Friends At Work and Play Paperback Book and Poster Set
- Item Number: 47871
- In Stock
1 - 5 years. The message is simple. Everyone's contribution counts. Each one of us has an important role to play in helping to make our communities better places to live, regardless of our background or ability. The 11" x 17" posters show how young children and community helpers from different backgrounds and diverse abilities work and learn together. Set includes "Friends at Work and Play" book, twelve posters, and a teacher's guide. Posters are printed on heavy stock with a…
Physical Development & Health
This domain of the Head Start Child Development and Learning Framework focuses on children's physical well-being, which refers to nutrition, exercise, personal hygiene, safety practices, and development of gross and fine motor skills. Learning good health habits early in life will help build a strong foundation for lifelong healthy living. Consider these four main elements and examples for Physical Development and Health when you are planning your Head Start program's curricula and assessments:
- Physical Health Status
- Children should have good health overall.
- Children need to get a sufficient amount of rest and exercise for their age.
- Maintaining physical growth within the appropriate ranges of weight, height, and age is also important in determining how a child develops mentally, physically, and emotionally during this stage of their life.
- Health Knowledge & Practice
- Children should understand and practice healthy and safe habits.
- Completing personal care tasks independently is another indicator of how children are growing in this area of the Framework.
- Children cooperating during doctor or dental visits and recognizing the importance of those visits shows a major step they have taken in the developmental process.
- Gross Motor Skills
- Understanding how the body moves and how to control their own body is an indicator of growth in this area of the Framework.
- Balanced movements and coordination for a range of activities are also skills 3 to 5 year olds should be developing.
- Fine Motor Skills
- Children should be developing increased hand strength, dexterity, and eye-hand coordination.
- Manipulating a range of objects, such as blocks or art supplies, is another indicator of the skills children need to acquire at this stage in their development.
For more in-depth information about this domain, visit the Physical Development & Health page on ECLKC's website. Kaplan's selection of playground, nutrition, and active play items would make great additions to your Head Start program's lesson plans and curricula.
Learning About Nutrition Through Activities Preschool Program (5 Students)
- Item Number: 89301
- Ships From Manufacturer
3 years & up. This research-based, 24-week nutrition curriculum provides teachers with everything they need to promote healthy eating -- in just 20 minutes a day! With a focus on helping children learn to taste, eat, and enjoy fruits and vegetables, this curriculum incorporates both classroom and parent activities with a developmentally appropriate approach. This kit has enough materials for 5 students.
Health and Your Body Book Set (Set of 4)
- Item Number: 62422
- In Stock
4 years & up. Inspire good habits with these easy-to-read introductions to healthy living. Whether learning about proper nutrition or understanding how our bodies work, kids will get the facts they need to help keep them in good health.
Social / Emotional
The Social & Emotional Development domain concerns the social and emotional skills that help children have and maintain healthy relationships, develop a healthy personal identity, regulate behavior and emotions, and foster secure attachment to adults. The emotional and social well-being of children will indicate the type of relationships they have as adults and how they will adjust to new environments in the future. As you consider how to promote continuous quality improvement in your Head Start program, remember the following elements and examples of the Social & Emotional Development domain:
- Social Relationships
- Children should interact and have healthy relationships with adults and their peers.
- Communicating with others and accepting or requesting guidance also indicates that 3 to 5 year olds are developing good social skills.
- Recognizing and labeling the emotions of others and using socially appropriate behavior with adults and peers are other examples of how children should be developing emotionally and socially.
- Self-Concept & Self-Efficacy
- Children need to identify their own personal characteristics, preferences, thoughts, and feelings.
- Showing confidence in accomplishing tasks, meeting goals, and making decisions indicates that 3 to 5 year olds are developing increasing skills in this area of the Framework.
- Demonstrating age-appropriate independence in a range of activities and in decision making should also be skills children are developing.
- Self-Regulation
- Children recognizing and regulating their own emotions and behaviors is a sign that they are developing at the right pace for this age group.
- Following rules, routines, and directions with minimal input from adults also indicates that children are growing in this area of the Framework.
- Children should also be learning to handle impulses or distractions independently from adults.
- Emotional & Behavioral Health
- Expressing a healthy range of emotions and being open to learning positive alternatives to negative behaviors shows that children are developing good emotional and behavioral habits.
- Adapting to new environments with age-appropriate emotions and behaviors also indicates that children are growing in this area of the Framework.
Visit the Social & Emotional Development page on ECLKC's website for more in-depth information and resources about this domain. Kaplan's selection of social and emotional books, CDs, assessments, and other products and resources are valuable tools you can use to create a Head Start program that can help children meet school readiness goals.
Bilingual Feeling Buddies® Self-Regulation Toolkit - English/Spanish
- Item Number: 27717
- Temporarily Out of Stock
4 - 7 years. Your number 1 curriculum for school success, Pre-K to 2nd grade! The year-long bilingual Feeling Buddies® Curriculum trains teachers while developing emotional intelligence, self-regulation, and life-readiness skills in children. It incorporates literacy, music, and movement.
Socially Strong, Emotionally Secure
- Item Number: 20225
- In Stock
Now more than ever, adults must help children develop the skills necessary to navigate through life successfully. By focusing on building social and emotional strength, we increase children's resilience and prepare them to handle the challenges in life. The strategies and activities in "Socially Strong, Emotionally Secure" help children become socially and emotionally healthy for life. Organized into five chapters, the activities support and build resilience in children ages 3 to 8. 160 pages. Paperback.
DECA Preschool Program, 2nd Edition
- Item Number: 29026
- In Stock
This innovative program allows early childhood professionals and families to screen, assess, support, and evaluate outcomes in order to promote resilience and healthy social and emotional development in preschoolers. Kit includes: Preschool Record Forms (set of 40); User's Guide and Technical Manual; Strategy Guide; For Now and Forever Family Guides (set of 20); Building Your Bounce (2 copies); and FLIP IT!®
Emotions Felt Set
- Item Number: 62572
- In Stock
3 years & up. Teaching emotional literacy helps children develop social skills by recognizing and responding to social cues appropriately. Children can interpret their own emotions by matching "feeling" words with pictures. Actual photographs printed on felt. 12 pictures, 12 feeling words, 6-page lesson guide. Average figure 6" tall.
Creative Arts Expression
The Creative Arts Expression domain refers to children participating in a range of activities for creative and imaginative expression.
Creative arts, such as music, art, and drama, help engage children's minds, bodies, and senses and help them develop positive forms of self-expression. The following domain elements and examples for Creative Arts Expression are important to remember as you make decisions for your Head Start center:
- Music
- Children should be participating in music activities or experimenting with musical instruments.
- Creative Movement & Dance
- Using their body to dance to different beats in music or to express their ideas or feelings is one example of how 3 to 5 year olds should be developing at this stage of their life.
- Art
- Children should use a range of materials and techniques to make art.
- Discussing their own projects and other people's art projects indicates growth and development in this area of the Framework.
- Children should also reflect their thoughts and feelings in their art.
- Drama
- Children should portray events, characters, or stories through dialogue, actions, or objects.
- Using their imagination and creativity to assume roles in dramatic play is a great example of how children should be growing in the Creative Arts Expression domain.
For more in-depth information about this domain, visit the Creative Arts Expression page on ECLKC's website. Kaplan offers a wide variety of arts and crafts, dramatic play items, and music and dance materials. Any of these creative expression items would make great additions to your Head Start program.
Music and Movement Activity Kit
- Item Number: 88742
- In Stock
3 years & up. Creative movements and musical fun make up this kit, along with scarf and bean bag activities. Children will learn to identify body parts through upbeat music and locomotor and non-locomotor movement challenges. Includes 16 scarves, 12 beanbags, and an award-winning CD with a song to cool down and 2 instrumentals, 11 songs in all.
15-Player Rhythm Band Kit
- Item Number: 30397
- Temporarily Out of Stock
Includes: 3 8" rhythm sticks, 1 pair of maracas, 1 crow sounder, 1 pair of sand blocks, 1 pair of tap-tap blocks, 1 frame drum, 1 cymbal, 1 triangle, 2 wrist bells, 3 jingle sticks, and an instructional DVD. Please note: If an instrument becomes unavailable, a suitable substitute may be found as a replacement to complete the set.
Big Box of Art Materials
Promo Pricing: $59.95
- Item Number: 34055
- In Stock
4 years & up. Little artists can't resist this colorful collection of collage materials: buttons, sticky shapes, beads, assorted paper, macaroni, burlap, sponges, felt, craft sticks, mini-frames, yarn, wood shapes, and more. Also includes a teaching guide.
Tabletop Puppet Theater
- Item Number: 62623
- In Stock
3 years & up. Compact theater sets up in minutes! With chalkboard marquee and closeable curtain. Folds flat for easy storage. Some assembly required.
Approaches to Learning
The Approaches to Learning domain refers to certain behaviors you can observe that indicate the ways in which children become engaged in learning experiences and social interactions. Observing these behaviors will help you develop programs that accommodate children's approaches to learning, which will then contribute to their learning in other domains and in their success in school and life. Remember the domain elements and examples of this domain as you work to put together a Head Start program that meets requirements and helps children with school readiness goals:
- Initiative & Curiosity
- Children should be asking questions and seeking new independence at this stage in their development.
They should also show an interest in a variety of topics and activities and have a desire to learn and be creative.
- Children who demonstrate flexibility, inventiveness, and imagination in their approach to tasks and activities are also showing their growth in this domain of the Framework.
- An eagerness to learn and discuss a variety of ideas and topics is an excellent example of how children need to develop in this area.
- Persistence & Attentiveness
- Children's attention spans should be growing during this stage in their development. They should be able to begin and maintain projects and activities with interest and persistence.
- Setting goals, developing, and following through on plans for projects and activities is another indicator of how children are gaining skills in this domain of the Framework.
- Children need to learn to resist distractions, maintain attention, and continue with the task or activity they are working on even if they find it to be frustrating or challenging.
- Cooperation
- Planning, initiating, and completing learning activities with peers are great indicators of children's interest in group experiences.
- Cooperative play with others and inviting peers to play are other examples of how children should be developing in this domain of the Framework.
- Children should also be willing to help, share, and cooperate in a group at this stage in their development.
Visit the Approaches to Learning page on ECLKC's website for more in-depth information and resources about this domain. Kaplan's selection of differentiated instruction and inclusion products will help your program teach children effectively. We offer a variety of lesson plans, study cards, reading strips, books, classroom labels, and more for you to incorporate into your Head Start program.
- Item Number: 89312
- In Stock
PreK & up. Children can explore decomposition, composting, life cycles, and environmental education with ease. Three separate, aerated compartments enable children to view the entire decomposition process clearly and make side-by-side comparisons between different materials. Teacher guide included.
Language Development
The Language Development domain focuses on children developing abilities in receptive and expressive language and often includes understanding and using at least one or more languages. Language skills are important for children to develop because they dictate how children will communicate now and in the future. Keep in mind the following elements and examples of the Language Development domain as you decide what curricula best serve the needs of the children your Head Start program serves:
- Receptive Language
- Children should be able to comprehend different forms of language, different rules for using language, and increasingly complex vocabulary during this stage of their development.
- Paying attention to language in learning experiences, such as conversations, stories, and songs, also indicates how children are growing in this area of the Framework.
- Expressive Language
- Children should be able to use different forms of language to express ideas and needs, use different grammar rules for various purposes, and use a more complex and varied vocabulary.
- Engaging in communication and storytelling with peers and adults is another indicator that children are able to use language efficiently.
For more in-depth information about this domain, visit the Language Development page on ECLKC's website.
Kaplan offers a variety of language and literacy materials to help children increase their vocabulary and learn about the writing process.
Sounds At Home Listening Lotto
- Item Number: 22948
- In Stock
4 years & up. Can you recognize a whistling teapot, a ringing telephone, or a door opening? Develop listening skills while having fun hearing sounds around the house. Includes 12 photographic game boards, 120 tokens, and an audio CD featuring common sounds heard at home. 1-12 players.
Kaplan Kids Puppets - Set of 7
- Item Number: 46713
- In Stock
2 years & up. Come join the fun as you learn with the Kaplan Kids. Add excitement to circle time, storytelling, language lessons, and more. Our adorable puppets have their own personalities, plus they reflect diversity and inclusion. Each puppet is 14" long with a working mouth and moveable arms and legs. They have permanently stitched features and are made of soft, plush, durable polyester that is easy to clean and maintain.
Oral Language Builders
- Item Number: 61305
- In Stock
A picture can paint a thousand words, and using these photos in a variety of activities can help stimulate the development of both oral and written language skills. Each set has 68 photos. 6" x 8" cards.
Literacy Knowledge & Skills
The Literacy Knowledge & Skills domain refers to the basic concepts and skills that help create a strong foundation for reading and writing. Knowing how to read and write will help children gain knowledge in other areas and encourage them to succeed in school. Remember the following domain elements and examples as you make decisions for your Head Start program:
- Book Appreciation & Knowledge
- Children should recognize how books are read and the common basic characteristics of books at this stage in their development.
- Showing interest in shared reading experiences, looking at books independently, asking and answering questions about materials, and being interested in different genres of books are all great examples of how children gain book appreciation and knowledge.
- Children retelling stories from books through conversation and creative outlets is another indicator of the skills 3 to 5 year olds are developing.
- Phonological Awareness
- Children identifying and discriminating between words in language, separate syllables in words, and sounds and phonemes in language are great indicators of the skills they have gained in this domain element.
- Alphabet Knowledge
- Children should learn to recognize the letters of the alphabet and recognize that each letter has a distinct sound (or sounds) associated with it.
- Learning to identify alphabet letters and associate the correct sounds with those letters should also be a skill children learn during this stage in their development.
- Print Concepts & Conventions
- Children should be able to recognize print in everyday life and understand that print conveys meaning, which they must then work to understand.
- Understanding the conventions of print and recognizing the relationship between spoken and written words are also important concepts children need to learn.
- Early Writing
- Children need to recognize that writing is a way of communicating and start experimenting with writing tools and materials.
- Copying, tracing, or independently writing letters or words shows that children are beginning to learn how to write.
Visit the Literacy Knowledge & Skills page on ECLKC's website for more in-depth information and resources about this domain.
Kaplan's selection of language and literacy materials can help children learn the alphabet, phonics, and more.
I Went Walking Story Set and Book
- Item Number: 52334
- In Stock
2 years & up. This timeless story will come to life with the 14"-high soft puppet and 6 animal finger puppets to share this delightful book. Includes 7 story props and a paperback book. Surface wash.
A-Z Magnatab Uppercase
- Item Number: 62326
- In Stock
3 years & up. Learning to write the curves and lines of the alphabet just got easier with this sensory-driven activity. Follow the directional arrows and trace each letter with the magnetic stylus to pull the beads up into the holes. Lightly trace over the letters again with the tip of a finger and the metal balls drop back into the base. 11 1/2"L x 9 1/4"W x 1/2"H.
English Language Development
The English Language Development domain only applies to children who are dual language learners (DLLs). The development of English language skills for children who speak a different first language is important because the ability to understand and speak at least some English will help them in their studies and in everyday life. Be sure to remember these domain elements and examples when you are choosing curricula and assessments for your Head Start program:
- Receptive English Language Skills
- Nonverbally responding to or acknowledging common words or phrases and participating with movement or gestures while others dance and sing in English are good indicators that DLLs have developed the ability to understand English.
- Following directions in English with minimal assistance or nonverbally responding to questions are also examples of behaviors DLLs may show as they are learning English.
- Expressive English Language Skills
- Repeating words or phrases or requesting simple items in English shows that DLLs have started to develop an ability to speak English.
- Children who use an increasingly complex English vocabulary or construct simple sentences are also showing that they have learned skills in how to use English to communicate.
- Engagement in English Literacy Activities
- Eagerness to participate in songs or stories in English and repeating part of a poem or song are examples of behaviors that show DLLs understand and can respond to items presented in English.
- Talking with others about a story read in English or telling a story in English indicates that DLLs have grown in their engagement with the English language.
For more in-depth information about this domain, visit the English Language Development page on ECLKC's website. Kaplan offers a variety of bilingual and English language learner materials to help children learn and become fluent in the English language. These items would be great additions to your Head Start program, especially if you have DLLs enrolled in your program.
Putumayo Kids Global CD Collection (Set of 4)
- Item Number: 88806
- In Stock
Take a cultural journey around the world! This cohesive collection is a well-rounded way to learn about the world, with upbeat music that will spark cultural curiosity in young learners. These Playground CDs include Latin, French, African, and World music. Each CD focuses on a unique topic and will delight both children and adults. Includes 4 CDs.
- Item Number: 83136
- In Stock
Grade K & up. The WhisperPhone® Solo is a hands-free, acoustical voice-feedback headset that enables learners to focus and hear themselves ten times more clearly as they learn by processing language aloud.
Provide your students with this valuable reinforcement. This unit is fund-certified for Title 1 and Reading First, dishwasher safe, reversible for either ear, and battery free.
Basic Spanish Bingo Game
- Item Number: 21037
- In Stock
4 years & up. Nurture Spanish skills with the popular Bingo game featuring 50 basic Spanish picture words with photos! Includes a resource guide and directions in English and Spanish. For 3 to 36 players.
Geology of New Zealand
The geology of New Zealand is noted for its volcanic activity, earthquakes, and geothermal areas because of its position on the boundary of the Australian and Pacific Plates. New Zealand is part of Zealandia, a microcontinent nearly half the size of Australia that broke away from the Gondwanan supercontinent about 83 million years ago. New Zealand's early separation from other landmasses and subsequent evolution have created a unique fossil record and modern ecology.
New Zealand's geology can be simplified into three phases. First, the basement rocks of New Zealand formed. These rocks were once part of the supercontinent of Gondwana, along with South America, Africa, Madagascar, India, Antarctica, and Australia. The rocks that now form the mostly submerged continent of Zealandia were then nestled between eastern Australia and western Antarctica. Secondly, New Zealand drifted away from Gondwana and many sedimentary basins formed, which later became the sedimentary rocks covering the geological basement. The final phase is represented by the uplift of the Southern Alps and the eruptions of the Taupo Volcanic Zone.
- 1 Basement rocks (Cambrian-Cretaceous)
- 2 Separation from Gondwana (Cretaceous-Eocene)
- 3 Sedimentary basins and allochthons (Cretaceous-Recent)
- 4 Volcanic activity
- 5 Modern tectonic setting and earthquakes
- 6 Paleoclimate of New Zealand
- 7 Geological hazards
- 8 Geological resources
- 9 History of New Zealand geology
- 10 See also
- 11 References
- 12 Further reading
- 13 External links
Basement rocks (Cambrian-Cretaceous)
New Zealand's basement rocks range in age from mid-Cambrian in north-west Nelson to Cretaceous near Kaikoura. These rocks were formed in a marine environment before New Zealand separated from Gondwana. They are divided into the "Western Province", consisting mainly of greywacke, granite, and gneiss, and an "Eastern Province", consisting mainly of greywacke and schist. The provinces are further divided into terranes – large slices of crust with different geological histories that have been brought together by tectonic activity (subduction and strike-slip faulting) to form New Zealand.
The Western Province is older than the Eastern Province and outcrops along the west coast of the South Island from Nelson to Fiordland. The Western Province is divided into the Buller and Takaka terranes, which formed in mid-Cambrian to Devonian time (510–400 Ma). This includes New Zealand's oldest rocks, trilobite-containing greywacke, which are found in the Cobb Valley in north-west Nelson. Large sections of the Western Province have been intruded by plutonic rocks or metamorphosed to gneiss. These plutonic basement rocks are subdivided into the Hohonu, Karamea, Median, and Paparoa batholiths. These rocks form the foundations beneath offshore Taranaki, and much of the West Coast, Buller, north-west Nelson, Fiordland, and Stewart Island. Most of these plutonic rocks were formed in Devonian-Carboniferous time (380–335 Ma) and Jurassic-Cretaceous time (155–100 Ma). The Median Batholith represents a long-lived batholith dividing the Western and Eastern Provinces. Before Zealandia's separation from Gondwana it stretched from Queensland, through what is now New Zealand, into West Antarctica. It marks the site of a former subduction zone on the edge of Gondwana.
The Eastern Province underlies more of New Zealand than the Western Province, including the greywacke and schist of the Southern Alps and all of the basement rocks of the North Island.
The Eastern Province contains seven main terranes: the Drumduan, Brook Street, Murihiku, Dun Mountain-Maitai, Caples, Torlesse Composite (Rakaia, Aspiring, and Pahau terranes), and Waipapa Composite (Morrinsville and Hunua terranes). They are mostly composed of greywacke together with argillite, except for the Brook Street and Dun Mountain-Maitai terranes, which have significant igneous components (see Dun Mountain Ophiolite Belt). New Zealand's greywacke is mostly from the Caples, Torlesse Composite (Rakaia and Pahau), and Waipapa Composite (Morrinsville and Hunua) terranes, formed in Carboniferous-Cretaceous time (330–120 Ma). Many of these rocks were deposited as submarine fans. They have different origins, as shown by different chemical compositions and different fossils. In general, the sedimentary basement terranes become younger from west to east across the country, as the newer terranes were scraped off the subducting paleo-Pacific Plate and accreted to the boundary of Gondwana over hundreds of millions of years.
Many rocks in the Eastern Province have been metamorphosed into the Haast Schist, due to exposure to high pressures and temperatures. Rocks grade continuously from greywacke (e.g., in Canterbury) to high-grade schist (e.g., around the Caples-Torlesse boundary in Otago and Marlborough, and Torlesse rocks just to the east of the Alpine Fault). The Alpine Fault that corresponds to the line of the Southern Alps has separated the basement rocks that used to be adjacent by about 480 km.
Separation from Gondwana (Cretaceous-Eocene)
The Australia-New Zealand continental fragment of Gondwana split from the rest of Gondwana in late Cretaceous time (95–90 Ma). Then, around 83 Ma, Zealandia started to separate from Australia, forming the Tasman Sea, initially separating from the south. By 75 Ma, Zealandia was essentially separate from Australia and Antarctica, although only shallow seas might have separated Zealandia and Australia in the north. Dinosaurs continued to live in New Zealand after it separated from Gondwana, as shown by sauropod footprints from 70 million years ago in Nelson; this meant that dinosaurs had about 20 million years to evolve unique New Zealand species before the end-Cretaceous extinction. During the Cretaceous extension, large normal faults formed throughout New Zealand. The Hawks Crag Breccia formed next to their scarps, and it has become New Zealand's best uranium mineral deposit.
Currently, New Zealand has no native snakes or land mammals (other than bats). Neither marsupials nor placental mammals evolved and reached Australia in time to be on New Zealand when it drifted away about 85 million years ago. The evolution and dispersal of snakes is less certain, but there is no hard evidence of them being in Australia before the opening of the Tasman Sea. The multituberculates, another type of mammal which is now extinct, may have been in time to cross the land bridge to New Zealand.
The landmasses continued to separate until early Eocene times (53 Ma). The Tasman Sea and part of Zealandia then locked together with Australia to form the Australian Plate (40 Ma), and a new plate boundary was created between the Australian Plate and the Pacific Plate. Zealandia ended up at a pivot point between the Pacific and Australian Plates, with spreading in the south and convergence in the north, where the Pacific Plate was subducted beneath the Australian Plate. A precursor to the Kermadec Arc was created.
The convergent part of the plate boundary propagated through Zealandia from the north, eventually forming a proto-Alpine Fault in Miocene times (23 Ma). The various ridges and basins north of New Zealand relate to previous positions of the plate boundary.
Sedimentary basins and allochthons (Cretaceous-Recent)
Erosion and deposition have led to much of Zealandia now being covered in sedimentary rocks that formed in swamps and marine sedimentary basins. Much of New Zealand was low-lying around Mid Eocene-Oligocene times (40–23 Ma). Swamps became widespread, forming coal. The land subsided further, and marine organisms produced limestone deposits. Limestone of Oligocene-Early Miocene age formed in many areas, including the King Country, known for the Waitomo Glowworm Cave. In the South Island, limestone formed in Buller, Nelson, and the West Coast, including the Pancake Rocks at Punakaiki, in Oligocene-Early Miocene times (34–15 Ma). It is debated whether all of New Zealand was submerged at this time or if small islands remained as "arks" preserving fauna and flora.
An allochthon is land that formed elsewhere and slid on top of other land (in other words, the material of an enormous landslide). Much of the land of Northland and East Cape was created in this manner. Around 25–22 Ma, Northland and East Cape were adjacent, with East Cape near Whangarei. Northland-East Cape was an undersea basin. Much of the land that now forms Northland-East Cape was higher land to the northeast (composed of rocks formed 90–25 Ma). The Pacific-Australian plate boundary was further to the northeast, with the Pacific Plate subducting under the Australian Plate. Layers of rocks were peeled off the higher land, from the top down, and slid southwest under the influence of gravity, to be stacked the right way up, but in reverse order. Most of the material to slide was sedimentary rock; however, the last rocks to slide across were slabs of oceanic crust (ophiolites), mainly basalt. Widespread volcanic activity also occurred (23–15 Ma) and is intermixed with the foreign rocks. Sedimentary basins formed on the allochthons while they were moving. East Cape was later separated from Northland and moved further south and east to its present position.
Volcanic activity
Volcanism is recorded in New Zealand throughout its whole geological history. Most volcanism in New Zealand, both modern and ancient, has been caused by the subduction of one tectonic plate under another; this causes melting in the mantle, the layer of the earth below the crust. This produces a volcanic arc composed of mainly basaltic, andesitic, and rhyolitic rocks. Basaltic eruptions tend to be fairly placid, producing scoria cones and lava flows, such as the volcanic cones in the Auckland volcanic field, although Mount Tarawera's violent 1886 eruption was an exception. Andesitic eruptions tend to form steep stratovolcanoes, including mountains such as Ruapehu, Tongariro, and Taranaki, islands such as Little Barrier, White, and Raoul Islands, or submarine seamounts like Monowai Seamount. Rhyolitic eruptions with large amounts of water tend to cause violent eruptions, producing calderas, such as Lake Taupo and Lake Rotorua. New Zealand also has many volcanoes which are not clearly related to plate subduction, including the extinct Dunedin Volcano and Banks Peninsula, and the dormant Auckland Volcanic Field. The South Island has no currently active volcanoes.
However, in the late Cretaceous (100–65 Ma), there was widespread volcanic activity in Marlborough, the West Coast, Canterbury, and Otago; and in Eocene times (40 Ma), there was volcanic activity in Oamaru. The best-known Miocene volcanic centres are the intra-plate Dunedin Volcano and Banks Peninsula. The Dunedin Volcano, which later eroded to form Otago Peninsula near Dunedin, was built up by a series of mainly basaltic intra-plate volcanic eruptions in Miocene times (16–10 Ma). Banks Peninsula near Christchurch was built from two mainly basaltic intra-plate volcanoes in Miocene times (12–6 Ma and 9.5–7.5 Ma), corresponding to the Lyttelton and Akaroa Harbours. Southland's Solander Islands were active around 1 to 2 million years ago. There are also minor volcanic deposits from a similar time period throughout Canterbury and Otago, and also on the Chatham Islands.
Intra-plate basaltic volcanic eruptions also occurred in the North Island, near the Bay of Islands in Northland, in the Late Miocene (10 Ma), and again more recently (0.5 Ma). The South Auckland volcanic field was active in Pleistocene times (1.5–0.5 Ma). The Auckland volcanic field started erupting around 250,000 years ago. It includes around 50 distinct eruptions, with most of the prominent cones formed in the last 30,000 years, and the most recent eruption, which formed Rangitoto Island, around 600 years ago. The field is currently dormant and further eruptions are expected. Over time the volcanic field has slowly been drifting northwards.
Volcanism in the North Island has been dominated by a series of volcanic arcs which have evolved into the still-active Taupo Volcanic Zone. Over time, volcanic activity has moved south and east, as the plate boundary moved eastward. This started in Miocene times (23 Ma), when a volcanic arc became active to the west of Northland and gradually moved south down to New Plymouth, where Taranaki is still active. It produced mainly andesitic stratovolcanoes. The Northland volcanoes include the volcanoes that produced the Waipoua Plateau (site of Waipoua Forest, with large kauri trees) and Kaipara volcano. The Waitakere volcano (22–16 Ma) has mainly been eroded, but conglomerate from the volcano forms the Waitakere Ranges, and it produced most of the material that makes up the Waitemata sandstones and mudstones. Lahars produced the coarser Parnell Grit. Notable visible volcanoes in the Waikato include Karioi and Pirongia (2.5 Ma). The volcanoes off the west coast of the North Island, together with Taranaki and the Tongariro Volcanic Centre, are responsible for the black iron sand on many of the beaches between Taranaki and Auckland.
Shortly after (18 Ma), a volcanic arc developed further east to create the Coromandel Ranges and the undersea Colville Ridge. The initial activity was andesitic but later became rhyolitic (12 Ma). In the Kauaeranga Valley, volcanic plugs remain, as does a lava lake that now forms the top of Table Mountain. Active geothermal systems, similar to those that now exist near Rotorua, were present around 6 Ma, and produced the gold and silver deposits that were later mined in the Coromandel gold rush. Later (5–2 Ma), volcanic activity moved further south to form the Kaimai Range.
Active volcanoes and geothermal areas
After this, activity shifted further east to the Taupo Volcanic Zone, which runs from the Tongariro Volcanic Centre (Ruapehu and Tongariro), through Taupo, Rotorua, and out to sea to form the Kermadec Ridge. Activity was initiated around 2 Ma, and continues to this day.
The Tongariro Volcanic Centre is composed of andesitic volcanoes, while the areas around Taupo and Rotorua are largely rhyolitic with minor basalt. Early eruptions between Taupo and Rotorua, around 1.25 Ma and 1 Ma, were large enough to produce an ignimbrite sheet that reached Auckland, Napier, and Gisborne. Vast pumice deposits generated by eruptions in the Taupo Volcanic Zone occur throughout the central North Island, Bay of Plenty, Waikato, King Country and Wanganui regions. Every so often, there are swarms of earthquakes within an area of the Taupo Volcanic Zone, which can last for years. These earthquake swarms indicate that some movement of magma is occurring below the surface. While they have not resulted in an eruption in recent times, there is always the potential for a new volcano to be created, or for a dormant volcano to come to life. The Tongariro Volcanic Centre developed over the last 275,000 years and contains the active andesitic volcanic cones of Ruapehu, Tongariro, and Ngauruhoe (in fact a side cone of Tongariro). Ruapehu erupts about once a decade, and while the eruptions cause havoc for skiers, plane flights and hydroelectric dams, they are relatively minor. However, the sudden collapse of a crater wall caused major problems when it generated a lahar in 1953 that destroyed a rail bridge and caused 151 deaths at Tangiwai. The last significant eruption was in 1995–96. Ngauruhoe last erupted in 1973–75. Taranaki is a perfectly formed andesitic stratovolcano that last erupted in 1755. Lake Taupo, the largest lake in the North Island, is a volcanic caldera responsible for rhyolitic eruptions about once every 1,000 years. The largest eruption in the last 65,000 years was the cataclysmic Oruanui eruption 26,500 years ago, which produced 530 cubic kilometres of magma. The most recent eruption, around 233 AD, was also a major event, the biggest eruption worldwide in the last 5,000 years. It caused a pyroclastic flow that devastated the land from Waiouru to Rotorua in 10 minutes. The Okataina volcanic centre, to the east of Rotorua, is also responsible for major cataclysmic rhyolitic eruptions. The most recent, the eruption of Tarawera and Lake Rotomahana in 1886, was comparatively minor; it was long thought to have destroyed the famous Pink and White Terraces, and it covered much of the surrounding countryside in ash, killing over 100 people. In 2017 researchers rediscovered the locations of the Pink and White Terraces using a forgotten survey from 1859. Many lakes around Rotorua are calderas from rhyolitic eruptions; Lake Rotorua, for example, erupted around 13,500 years ago. A line of undersea volcanoes extends out along the Kermadec Ridge. White Island, in the Bay of Plenty, represents the southern end of this chain and is a very active andesitic volcano, erupting with great frequency. It has the potential to cause a tsunami in the Bay of Plenty, as does the dormant Mayor Island volcano. The Taupo Volcanic Zone is known for its geothermal activity. Rotorua and the surrounding area, for example, have many sites with geysers, silica terraces, fumaroles, mud pools and hot springs. Notable geothermal areas include Whakarewarewa, Tikitere, Waimangu, Waiotapu, Craters of the Moon and Orakei Korako. Geothermal energy is used to generate electricity at several places in the Taupo Volcanic Zone, including Wairakei, near Taupo. Hot pools abound throughout New Zealand.
Modern tectonic setting and earthquakes New Zealand currently sits astride the convergent boundary between the Pacific and Australian Plates. Over time, the relative motion of the plates has altered, and the current configuration is geologically recent. Currently the Pacific Plate is subducting beneath the Australian Plate from around Tonga in the north, through the Tonga Trench, Kermadec Trench, and Hikurangi Trough to the east of the North Island of New Zealand, down to Cook Strait. Through most of the South Island, the plates slide past each other along the Alpine Fault, with slight obduction of the Pacific Plate over the Australian Plate, forming the Southern Alps. From Fiordland south, the Australian Plate subducts under the Pacific Plate, forming the Puysegur Trench. This configuration has led to volcanism and extension in the North Island, forming the Taupo Volcanic Zone, and uplift in the South Island, forming the Southern Alps. The Pacific Plate is colliding with the Australian Plate at a rate of about 40 mm/yr. The east coast of the North Island is being compressed and lifted by this collision, producing the North Island and Marlborough Fault Systems. The east coast of the North Island is also rotating clockwise, relative to Northland, Auckland and Taranaki, stretching the Bay of Plenty and producing the Hauraki Rift (Hauraki Plains and Hauraki Gulf) and the Taupo Volcanic Zone. The east coast of the South Island is sliding obliquely towards the Alpine Fault, relative to Westland, causing the Southern Alps to rise about 10 mm/yr (although they are also worn down at a similar rate). The Hauraki Plains, Hamilton, Bay of Plenty, Marlborough Sounds, and Christchurch are sinking. The Marlborough Sounds are known for their sunken mountain ranges. As Wellington rises and Marlborough sinks, Cook Strait is being shifted further south. Great stress builds up in the Earth's crust due to the constant movement of the tectonic plates. This stress is released by earthquakes, which can occur on the plate boundary or on any of thousands of smaller faults throughout New Zealand. Because the Pacific Plate is subducting under the eastern side of the North Island, there are frequent deep earthquakes east of a line from the Bay of Plenty to Nelson (the approximate edge of the subducted plate), with the earthquakes being deeper to the west and shallower to the east. Because the Australian Plate is subducting under the Pacific Plate in Fiordland, there are frequent deep earthquakes near Fiordland, with the earthquakes being deeper to the east and shallower to the west. Shallow earthquakes are more widespread, occurring almost everywhere throughout New Zealand (especially the Bay of Plenty, East Cape to Marlborough, and the Alpine Fault). However, Northland, Waikato, and Otago are relatively stable. Canterbury had been without a major earthquake in recorded history until the Mw 7.1 Canterbury earthquake of 4 September 2010. The volcanic activity in the central North Island also creates many shallow earthquakes. Paleoclimate of New Zealand Since Zealandia separated from Gondwana (80 mya) in the Cretaceous, the climate has typically been far warmer than today. However, since the onset of Quaternary glaciation (2.9 mya), Zealandia has experienced a climate either cooler than or only slightly warmer than today's. In the Cretaceous, New Zealand was positioned at 80 degrees south, at the boundary between Antarctica and Australia, but it was covered in trees, as the climate 90 million years ago was much warmer and wetter than today's.
During the warm Eocene period, vast swamps covered New Zealand; these became the coal seams of Southland and Waikato. From the Miocene there are paleontological records of warm lakes in Central Otago, with palm trees and small land mammals. Over the past 30,000 years, three major climate events are recorded in New Zealand: the coldest period of the last glacial maximum, from 28,000 to 18,000 years ago; a transitional period from 18,000 to 11,000 years ago; and the Holocene interglacial, which has persisted for the past 11,000 years. Throughout the last glacial maximum, global sea levels were about 130 metres (430 feet) lower than present levels. At this time the North Island, South Island, and Stewart Island were joined together. Temperatures dropped by about 4–5 °C. Much of the Southern Alps and Fiordland were glaciated, but the rest of New Zealand was largely ice-free. The land to the north of Hamilton was forested, but much of the rest of New Zealand was covered in grass or shrubs, due to the cold and dry climate. This lack of vegetation cover led to greater wind erosion and the deposition of loess (windblown dust). The study of New Zealand's paleoclimate has settled some of the debate regarding links between the Little Ice Age (LIA) in the Northern Hemisphere and the climate in New Zealand at the same time. The key finding is that New Zealand did experience a noticeably cooler climate, but at a slightly later date than in the Northern Hemisphere. The largest earthquake in New Zealand was an M8.2 event in the Wairarapa in 1855, and the most deaths (261) occurred in an M7.8 earthquake in Hawke's Bay in 1931. Widespread property damage was caused by the 2010 Canterbury earthquake, which measured 7.1; the M6.3 aftershock of 22 February 2011 (the 2011 Canterbury earthquake) resulted in 185 fatalities. Most recently, the M7.8 Kaikoura earthquake struck just after midnight on 14 November 2016, killing two people in the remote Kaikoura area northeast of Christchurch. Numerous aftershocks of M5.0 or greater have been spread over a large area between Wellington and Culverden. New Zealand is at risk from tsunamis generated by both local and distant faults. The eastern coast of New Zealand is most at risk, as the Pacific Ocean is more tectonically active than the Tasman Sea. Locally, the faults along the North Island's east coast pose the greatest risk. Minor tsunamis have occurred in New Zealand from earthquakes in Chile, Alaska and Japan. There are many potentially dangerous volcanoes in the Taupo Volcanic Zone. The most severe volcanic eruption since the arrival of Europeans was the Tarawera eruption of 1886. A lahar from Mount Ruapehu derailed a train in 1953, killing 151 people. Even a minor eruption at Ruapehu could cause the loss of electricity for Auckland, due to ash on the power lines and in the Waikato River (stopping the generation of hydroelectric power). Many parts of New Zealand are susceptible to landslides, particularly due to deforestation and the high earthquake risk. Much of the North Island is steep and composed of soft mudstone, known locally as papa, that easily generates landslides. New Zealand's main geological resources are coal, gold, and oil and natural gas. Coal has been mined in Northland, the Waikato, Taranaki, Nelson and Westland, Canterbury, Otago, and Southland. The West Coast contains some of New Zealand's best bituminous coal.
The largest coal deposits occur in Southland. Gold has been mined in the Coromandel and Kaimai Ranges (especially the Martha Mine at Waihi), Westland, Central Otago, and Eastern Otago (especially Macraes Mine), and on the west coast of the South Island. The only area in New Zealand with significant known oil and gas deposits is Taranaki, but many other offshore areas have the potential for deposits. Iron sand is also plentiful on the west coast from Taranaki to Auckland. Jade (pounamu in Māori) from South Island ophiolites continues to be extracted, mostly from alluvium, and worked for sale. Groundwater is extracted throughout the country and is particularly valuable in the drier eastern regions of both the North and South Islands. History of New Zealand geology The detailed study of New Zealand's geology began with Julius von Haast and Ferdinand von Hochstetter, who created numerous regional geological maps of the country during resource exploration in the mid-to-late 1800s. In 1865 James Hector was appointed to found the Geological Survey of New Zealand. Patrick Marshall coined the terms andesite line and ignimbrite in the early 1900s while working in the Taupo Volcanic Zone. Harold Wellman discovered the Alpine Fault and its 480 km offset in 1941. Even though Wellman showed that large blocks of land could move considerable distances, the New Zealand Geological Survey was a late adopter of plate tectonics. Charles Cotton became an international authority on geomorphology, using New Zealand's active tectonics and variable climate to derive universally applicable rules; his major works became standard textbooks in New Zealand and overseas. Charles Fleming established the Wanganui Basin as a classic site for studying past sea levels and climates. In 1975 the palaeontologist Joan Wiffen discovered the first dinosaur fossils in New Zealand. The Geological Survey of New Zealand, now known as GNS Science, has carried out extensive mapping throughout New Zealand at 1:250,000 and 1:50,000 scales. The most modern map series are the "QMAPs" at 1:250,000. New Zealand's geological research is published by GNS Science, in the New Zealand Journal of Geology and Geophysics, and internationally. A map showing the distribution of earthquakes in New Zealand can be obtained from Te Ara: The Encyclopedia of New Zealand. Current earthquake and volcanic activity can be obtained from the GeoNet website. The universities of Auckland, Canterbury, Massey, Otago, Victoria and Waikato are actively engaged in geological research in New Zealand, Antarctica, the wider South Pacific and elsewhere. - Alpine Fault - Geography of New Zealand - Hikurangi Trench - Indo-Australian Plate - Kaikoura Canyon - List of dinosaurs of New Zealand - List of earthquakes in New Zealand - List of rock formations in New Zealand - Marlborough Fault System - New Zealand geologic time scale - North Island Fault System - Pacific Plate - Stratigraphy of New Zealand - Zealandia (continent) - Geology of the Northland Region - Geology of the Auckland Region - Geology of the Waikato-King Country Region - Taupo Volcanic Zone - Geology of the Raukumara Region - Geology of Taranaki - Geology of the Wellington Region - Geology of the Tasman District - Geology of Canterbury, New Zealand - Geology of the Otago Region - Wallis, G. P.; Trewick, S. A. (2009). "New Zealand phylogeography: evolution on a small continent". Molecular Ecology. 18 (17): 3548–3580. doi:10.1111/j.1365-294X.2009.04294.x. PMID 19674312.
- New Zealand within Gondwana from Te Ara: The Encyclopedia of New Zealand - "New Zealand Geology: an illustrated guide" (PDF). www.geotrips.org.nz. - Mortimer, N. (2004). "New Zealand's Geological Foundations". Gondwana Research. 7 (1): 261–272. doi:10.1016/S1342-937X(05)70324-5. ISSN 1342-937X. - "New Zealand Stratigraphic Lexicon". GNS Science. - Mortimer, N.; Rattenbury, M. S.; King, P. R.; Bland, K. J.; Barrell, D. J. A.; Bache, F.; Begg, J. G. "High-level stratigraphic scheme for New Zealand rocks". New Zealand Journal of Geology and Geophysics. 57 (4): 402–419. - "Dinosaur footprints found in Nelson on show in Lower Hutt". www.stuff.co.nz. - Beck, A. C.; Reed, J. J.; Willett, R. W. (1958). "Uranium mineralization in the Hawks Crag Breccia of the Lower Buller Gorge Region, South Island, New Zealand". New Zealand Journal of Geology and Geophysics. 1 (3): 432–450. doi:10.1080/00288306.1958.10422773. ISSN 0028-8306. - Scanlon, John D.; Lee, Michael S.Y.; Archer, Michael (2003). "Mid-Tertiary elapid snakes (Squamata, Colubroidea) from Riversleigh, northern Australia: early steps in a continent-wide adaptive radiation". Geobios. 36 (5): 573–601. doi:10.1016/S0016-6995(03)00056-1. ISSN 0016-6995. - Yuan, C.-X.; Ji, Q.; Meng, Q.-J.; Tabrum, A. R.; Luo, Z.-X. (2013). "Earliest Evolution of Multituberculate Mammals Revealed by a New Jurassic Fossil". Science. 341 (6147): 779–783. doi:10.1126/science.1237970. ISSN 0036-8075. - New Zealand splits from Gondwana from Te Ara: The Encyclopedia of New Zealand - Knapp, Michael; Mudaliar, Ragini; Havell, David; Wagstaff, Steven J.; Lockhart, Peter J.; Paterson, Adrian (2007). "The Drowning of New Zealand and the Problem of Agathis". Systematic Biology. 56 (5): 862–870. doi:10.1080/10635150701636412. ISSN 1076-836X. - Jiao, Ruohong; Seward, Diane; Little, Timothy A.; Herman, Frédéric; Kohn, Barry P. (2017). "Constraining provenance, thickness and erosion of nappes using low-temperature thermochronology: the Northland Allochthon, New Zealand". Basin Research. 29 (1): 81–95. doi:10.1111/bre.12166. ISSN 0950-091X. - "Eruptions and deposition of volcaniclastic rocks in the Dunedin Volcanic Complex, Otago Peninsula, New Zealand", Ulrike Martin - Molloy, Catherine; Shane, Phil; Augustinus, Paul (2009). "Eruption recurrence rates in a basaltic volcanic field based on tephra layers in maar sediments: Implications for hazards in the Auckland volcanic field". Geological Society of America Bulletin. 121 (11–12): 1666–1677. doi:10.1130/B26447.1. ISSN 0016-7606. - Hayward, Bruce W. (1979). "Eruptive history of the early to mid miocene Waitakere volcanic arc, and palaeogeography of the Waitemata Basin, Northern New Zealand". Journal of the Royal Society of New Zealand. 9 (3): 297–320. doi:10.1080/03036758.1979.10419410. ISSN 0303-6758. - "Taupo – Eruptive History". Global Volcanism Program. Smithsonian Institution. Retrieved 2008-03-16. - Bunn, Rex; Nolden, Sascha (2017-06-07). "Forensic cartography with Hochstetter's 1859 Pink and White Terraces survey: Te Otukapuarangi and Te Tarata". Journal of the Royal Society of New Zealand. 0 (0): 1–18. doi:10.1080/03036758.2017.1329748. ISSN 0303-6758. - Bunn, Rex; Nolden, Sascha (December 2016). "Te Tarata and Te Otukapuarangi: Reverse engineering Hochstetter's Lake Rotomahana Survey to map the Pink and White Terrace locations". Journal of New Zealand Studies. NS23: 37–53.
- Matthew Hall (2004) Existing and Potential Geothermal Resource for Electricity Generation. Ministry for Economic Development. - Diagram showing the Australian-Pacific Plate Boundary - DeMets, C.; Gordon, R. G.; Argus, D. F.; Stein, S. (1990). "Current plate motions". Geophysical Journal International. 101 (2): 425–478. doi:10.1111/j.1365-246X.1990.tb06579.x. ISSN 0956-540X. - Little, Timothy A.; Cox, Simon; Vry, Julie K.; Batt, Geoffrey (2005). "Variations in exhumation level and uplift rate along the oblique-slip Alpine fault, central Southern Alps, New Zealand". Geological Society of America Bulletin. 117 (5): 707. doi:10.1130/B25500.1. ISSN 0016-7606. - New Zealand uplift and sinking from Te Ara: The Encyclopedia of New Zealand - McGlone, MS; Buitenwerf, R; Richardson, SJ (2016). "The formation of the oceanic temperate forests of New Zealand". New Zealand Journal of Botany. 54 (2): 128–155. doi:10.1080/0028825X.2016.1158196. ISSN 0028-825X. - Worthy, T. H.; Tennyson, A. J. D.; Archer, M.; Musser, A. M.; Hand, S. J.; Jones, C.; Douglas, B. J.; McNamara, J. A.; Beck, R. M. D. (2006). "Miocene mammal reveals a Mesozoic ghost lineage on insular New Zealand, southwest Pacific". Proceedings of the National Academy of Sciences. 103 (51): 19419–19423. doi:10.1073/pnas.0605684103. ISSN 0027-8424. - Alloway, Brent V.; Lowe, David J.; Barrell, David J. A.; Newnham, Rewi M.; Almond, Peter C.; Augustinus, Paul C.; Bertler, Nancy A. N.; Carter, Lionel; Litchfield, Nicola J.; McGlone, Matt S.; Shulmeister, Jamie; Vandergoes, Marcus J.; Williams, Paul W. (2007). "Towards a climate event stratigraphy for New Zealand over the past 30 000 years (NZ-INTIMATE project)". Journal of Quaternary Science. 22 (1): 9–35. doi:10.1002/jqs.1079. ISSN 0267-8179. - "NZ paleoclimate poster". www.gns.cri.nz. GNS Science. - Williams, Paul W.; McGlone, Matt; Neil, Helen; Zhao, Jian-Xin (2015). "A review of New Zealand palaeoclimate from the Last Interglacial to the global Last Glacial Maximum". Quaternary Science Reviews. 110: 92–106. doi:10.1016/j.quascirev.2014.12.017. ISSN 0277-3791. - New Zealand during the last glacial maximum from Te Ara: The Encyclopedia of New Zealand - David Wratt; Jim Salinger; Rob Bell; Drew Lorrey & Brett Mullan. "Past climate variations over New Zealand". NIWA. Retrieved 5 June 2014. - Science, GNS. "Where were NZs largest earthquakes? / New Zealand Earthquakes / Earthquakes / Science Topics / Learning / Home - GNS Science". www.gns.cri.nz. Retrieved 2018-11-27. - Nathan, Simon (2 March 2009). "Rock and mineral names – Local names". Te Ara – the Encyclopedia of New Zealand. Retrieved 28 December 2011. - Nathan, Simon (24 September 2011). "Rock and mineral names". Te Ara – the Encyclopedia of New Zealand. Retrieved 28 December 2011. - "Coal Overview". Crown Minerals, Ministry of Economic Development. 17 December 2008. - "NZ Gold History Archived 5 February 2014 at Archive.is," New Zealand Gold Merchants Ltd., retrieved 5 February 2014. - "Petroleum Overview". Crown Minerals, Ministry of Economic Development. 26 June 2008. - "Huge ironsands expansion - Quarrying & Mining Magazine". Quarrying & Mining Magazine. 2014-11-11. Retrieved 2018-02-01. - Johnston, M. R.; Nineteenth-century observations of the Dun Mountain Ophiolite Belt, Nelson, New Zealand and trans-Tasman correlations, Geological Society, London, Special Publications 2007, v. 287, p. 375-387 - Suggate, Richard Patrick; Punga, Martin Theodore Te (1978). The Geology of New Zealand. E.C. Keating, Government Printer. ISBN 9780477010573.
- Grapes, R. H. (2008). History of Geomorphology and Quaternary Geology. Geological Society of London. p. 295. ISBN 9781862392557. - Cotton, C. A. (2018-02-07). Geomorphology of New Zealand, Vol. 1: An Introduction to the Study of Land-Forms (Classic Reprint). Fb&c Limited. ISBN 9780267981571. - GNS Science - Map showing the distribution of earthquakes in New Zealand from Te Ara: The Encyclopedia of New Zealand. - Geonet Archived January 8, 2008, at the Wayback Machine. – Current New Zealand Earthquake and Volcanic Activity. - Graham, Ian J. et al.; A continent on the move : New Zealand geoscience into the 21st century – The Geological Society of New Zealand in association with GNS Science, 2008. ISBN 978-1-877480-00-3 - Campbell, Hamish; Hutching, Gerard; In Search of Ancient New Zealand, Penguin Books in association with GNS Science, 2007, ISBN 978-0-14-302088-2 - Te Ara: The Encyclopedia of New Zealand An Overview of New Zealand Geology - Hot Stuff to Cold Stone – Aitken, Jefley; GNS Science, 1997. ISBN 0-478-09602-X. - Rocked and Ruptured – Aitken, Jefley; Reed Books, in association with GNS Science, 1999. ISBN 0-7900-0720-7. - The Rise and Fall of the Southern Alps – Coates, Glenn; Canterbury University Press, 2002. ISBN 0-908812-93-0. - Plate Tectonics for Curious Kiwis – Aitken, Jefley; GNS Science, 1996. ISBN 0-478-09555-4. - Lava and Strata: A guide to the volcanoes and rock formations of Auckland – Homer, Lloyd; Moore, Phil & Kermode, Les; Landscape Publications and the Institute of Geological and Nuclear Sciences, 2000. ISBN 0-908800-02-9. - Vanishing volcanoes : a guide to the landforms and rock formations of Coromandel Peninsula – Homer, Lloyd; Moore, Phil; Landscape Publications and the Institute of Geological and Nuclear Sciences, 1992. ISBN 0-908800-01-0. - Reading the rocks : a guide to geological features of the Wairarapa Coast – Homer, Lloyd; Moore, Phil & Kermode, Les; Landscape Publications and the Institute of Geological and Nuclear Sciences, 1989. ISBN 0-908800-00-2 - Paleogeographic Maps of New Zealand from late Cretaceous time from GNS Science - Geological Society of New Zealand - New Zealand Journal of Geology and Geophysics - A simple geological map of New Zealand from Te Ara: The Encyclopedia of New Zealand
Speed of sound The speed of sound is the distance travelled per unit time by a sound wave as it propagates through an elastic medium. In dry air at 20 °C (68 °F), the speed of sound is 343 metres per second (1,125 ft/s; 1,235 km/h; 767 mph; 667 kn), or a kilometre in 2.91 s or a mile in 4.69 s.
|Quantity||Symbols|
|Sound pressure||p, SPL|
|Particle velocity||v, SVL|
|Sound intensity||I, SIL|
|Sound power||P, SWL|
|Sound energy density||w|
|Sound exposure||E, SEL|
|Speed of sound||c|
The speed of sound in an ideal gas depends only on its temperature and composition. In ordinary air the speed has a weak dependence on frequency and pressure, deviating slightly from ideal behavior. In common everyday speech, speed of sound refers to the speed of sound waves in air. However, the speed of sound varies from substance to substance: sound travels most slowly in gases; it travels faster in liquids; and faster still in solids. For example, as noted above, sound travels at 343 m/s in air; it travels at 1,484 m/s in water (4.3 times as fast as in air) and at 5,120 m/s in iron. In an exceptionally stiff material such as diamond, sound travels at 12,000 m/s, which is around the maximum speed that sound will travel under normal conditions. Sound waves in solids are composed of compression waves (just as in gases and liquids) and a different type of sound wave called a shear wave, which occurs only in solids. Shear waves in solids usually travel at different speeds from compression waves, as exhibited in seismology. The speed of compression waves in solids is determined by the medium's compressibility, shear modulus and density. The speed of shear waves is determined only by the solid material's shear modulus and density. In fluid dynamics, the speed of sound in a fluid medium (gas or liquid) is used as a relative measure for the speed of an object moving through the medium. The ratio of the speed of an object to the speed of sound in the fluid is called the object's Mach number. Objects moving at speeds greater than Mach 1 are said to be traveling at supersonic speeds. Sir Isaac Newton computed the speed of sound in air as 979 feet per second (298 m/s), which is too low by about 15%, because he neglected the effect of the temperature fluctuations within the sound wave; the error was later rectified by Laplace. During the 17th century, there were several attempts to measure the speed of sound accurately, including attempts by Marin Mersenne in 1630 (1,380 Parisian feet per second), Pierre Gassendi in 1635 (1,473 Parisian feet per second) and Robert Boyle (1,125 Parisian feet per second). In 1709, the Reverend William Derham, Rector of Upminster, published a more accurate measure of the speed of sound, at 1,072 Parisian feet per second. Derham used a telescope from the tower of the church of St Laurence, Upminster, to observe the flash of a distant shotgun being fired, and then measured the time until he heard the gunshot with a half-second pendulum. Measurements were made of gunshots from a number of local landmarks, including North Ockendon church. The distance was known by triangulation, and thus the speed that the sound had travelled was calculated. The transmission of sound can be illustrated by using a model consisting of an array of balls interconnected by springs. For a real material, the balls represent molecules and the springs represent the bonds between them. Sound passes through the model by compressing and expanding the springs, transmitting energy to neighbouring balls, which transmit energy to their springs, and so on.
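This ball-and-spring picture can be sketched numerically. The snippet below assumes a one-dimensional chain of identical balls of mass m joined by springs of stiffness k at spacing a; for such a chain the long-wavelength wave speed is a·√(k/m). All numbers are invented for illustration and are not material data.

```python
import math

def chain_wave_speed(k, m, a):
    """Long-wavelength wave speed of a 1-D chain of balls (mass m, kg)
    joined by springs (stiffness k, N/m) spaced a metres apart."""
    return a * math.sqrt(k / m)

# Illustrative numbers only: stiffer springs -> faster sound,
# heavier balls -> slower sound.
print(chain_wave_speed(k=100.0, m=0.01, a=0.05))  # baseline: 5.0 m/s
print(chain_wave_speed(k=400.0, m=0.01, a=0.05))  # 4x stiffness -> double speed
print(chain_wave_speed(k=100.0, m=0.04, a=0.05))  # 4x mass -> half speed
```

The doubled and halved speeds in the output anticipate the point made next: stiffness speeds sound up, while mass (density) slows it down.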
The speed of sound through the model depends on the stiffness of the springs and the mass of the balls. As long as the spacing of the balls remains constant, stiffer springs transmit energy more quickly, and more massive balls transmit energy more slowly. Effects like dispersion and reflection can also be understood using this model. In a real material, the stiffness of the springs is called the elastic modulus, and the mass corresponds to the density. All other things being equal (ceteris paribus), sound will travel more slowly in spongy materials and faster in stiffer ones. For instance, sound will travel 1.59 times faster in nickel than in bronze, due to the greater stiffness of nickel at about the same density. Similarly, sound travels about 1.41 times faster in light hydrogen (protium) gas than in heavy hydrogen (deuterium) gas, since deuterium has similar properties but twice the density. At the same time, "compression-type" sound will travel faster in solids than in liquids, and faster in liquids than in gases, because solids are more difficult to compress than liquids, while liquids in turn are more difficult to compress than gases. Some textbooks mistakenly state that the speed of sound increases with increasing density. This is usually illustrated by presenting data for three materials, such as air, water and steel, which also have vastly different compressibilities, more than making up for the density differences. An illustrative example of the two effects is that sound travels only 4.3 times faster in water than in air, despite enormous differences in the compressibility of the two media. The reason is that the larger density of water, which works to slow sound in water relative to air, nearly makes up for the compressibility differences between the two media. A practical example can be observed in Edinburgh when the "One o'Clock Gun" is fired at the eastern end of Edinburgh Castle. Standing at the base of the western end of the Castle Rock, the sound of the Gun can be heard through the rock slightly before it arrives by the air route, which is partly delayed by being slightly longer. It is particularly effective if a multi-gun salute, such as for "The Queen's Birthday", is being fired. Compression and shear waves In a gas or liquid, sound consists of compression waves. In solids, waves propagate as two different types. A longitudinal wave is associated with compression and decompression in the direction of travel, and is the same process in gases and liquids, with an analogous compression-type wave in solids. Only compression waves are supported in gases and liquids. An additional type of wave, the transverse wave, also called a shear wave, occurs only in solids, because only solids support elastic deformations. It is due to elastic deformation of the medium perpendicular to the direction of wave travel; the direction of shear deformation is called the "polarization" of this type of wave. In general, transverse waves occur as a pair of orthogonal polarizations. These different waves (compression waves and the different polarizations of shear waves) may have different speeds at the same frequency. Therefore, they arrive at an observer at different times, an extreme example being an earthquake, where sharp compression waves arrive first and rocking transverse waves seconds later. The speed of a compression wave in a fluid is determined by the medium's compressibility and density.
In solids, the compression waves are analogous to those in fluids, depending on compressibility, density, and the additional factor of shear modulus. The speed of shear waves, which can occur only in solids, is determined simply by the solid material's shear modulus and density. The speed of sound in mathematical notation is conventionally represented by c, from the Latin celeritas meaning "velocity". In general, the speed of sound c is given by the Newton–Laplace equation

c = √(Ks/ρ),

where
- Ks is a coefficient of stiffness, the isentropic bulk modulus (or the modulus of bulk elasticity for gases);
- ρ is the density.
Thus the speed of sound increases with the stiffness (the resistance of an elastic body to deformation by an applied force) of the material and decreases with increasing density. For ideal gases, the bulk modulus K is simply the gas pressure multiplied by the dimensionless adiabatic index, which is about 1.4 for air under normal conditions of pressure and temperature. Equivalently, c² = (∂p/∂ρ)s, where
- p is the pressure;
- ρ is the density, and the derivative is taken isentropically, that is, at constant entropy s.
In a non-dispersive medium, the speed of sound is independent of sound frequency, so the speeds of energy transport and sound propagation are the same for all frequencies. Air, a mixture of oxygen and nitrogen, constitutes a non-dispersive medium. However, air does contain a small amount of CO2, which is a dispersive medium and introduces dispersion in air at ultrasonic frequencies (> 28 kHz). In a dispersive medium, the speed of sound is a function of sound frequency, through the dispersion relation. Each frequency component propagates at its own speed, called the phase velocity, while the energy of the disturbance propagates at the group velocity. The same phenomenon occurs with light waves; see optical dispersion for a description. Dependence on the properties of the medium The speed of sound is variable and depends on the properties of the substance through which the wave is travelling. In solids, the speed of transverse (or shear) waves depends on the shear deformation under shear stress (called the shear modulus) and the density of the medium. Longitudinal (or compression) waves in solids depend on the same two factors, with the addition of a dependence on compressibility. In fluids, only the medium's compressibility and density are the important factors, since fluids do not transmit shear stresses. In heterogeneous fluids, such as a liquid filled with gas bubbles, the density of the liquid and the compressibility of the gas affect the speed of sound in an additive manner, as demonstrated in the hot chocolate effect. In gases, adiabatic compressibility is directly related to pressure through the heat capacity ratio (adiabatic index), while pressure and density are inversely related to the temperature and molecular weight, thus making only the completely independent properties of temperature and molecular structure important (heat capacity ratio may be determined by temperature and molecular structure, but simple molecular weight is not sufficient to determine it). In low-molecular-weight gases such as helium, sound propagates faster than in heavier gases such as xenon. For monatomic gases, the speed of sound is about 75% of the mean speed that the atoms move in that gas. For a given ideal gas the molecular composition is fixed, and thus the speed of sound depends only on its temperature.
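The Newton–Laplace relation above is straightforward to evaluate numerically. In the sketch below, the bulk-modulus and density figures for water and for air are typical textbook values assumed for illustration, not quoted from this article:

```python
import math

def speed_of_sound(K_s, rho):
    """Newton-Laplace equation: c = sqrt(K_s / rho), with K_s the isentropic
    bulk modulus (Pa) and rho the density (kg/m^3)."""
    return math.sqrt(K_s / rho)

# Water: K_s ~ 2.2 GPa, rho ~ 998 kg/m^3  -> roughly 1.48 km/s (assumed values)
print("water:", round(speed_of_sound(2.2e9, 998.0)))
# Air at 20 C: for an ideal gas K_s = gamma * p ~ 1.4 * 101325 Pa, rho ~ 1.204 kg/m^3
print("air:  ", round(speed_of_sound(1.4 * 101325, 1.204)))  # ~343 m/s
```

The air line uses the ideal-gas substitution K = γp mentioned in the text, which is developed further below.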
At a constant temperature, the gas pressure has no effect on the speed of sound, because pressure and density (which is proportional to pressure) have equal but opposite effects on the speed of sound, and the two contributions cancel out exactly. In a similar way, compression waves in solids depend both on compressibility and density, just as in liquids, but in gases the density contributes to the compressibility in such a way that some part of each attribute factors out, leaving only a dependence on temperature, molecular weight, and heat capacity ratio, which can be independently derived from temperature and molecular composition (see derivations below). Thus, for a single given gas (assuming the molecular weight does not change) and over a small temperature range (for which the heat capacity is relatively constant), the speed of sound becomes dependent on only the temperature of the gas. In the regime of non-ideal gas behaviour, for which the van der Waals gas equation would be used, the proportionality is not exact, and there is a slight dependence of sound velocity on the gas pressure. Humidity has a small but measurable effect on the speed of sound (causing it to increase by about 0.1%–0.6%), because oxygen and nitrogen molecules of the air are replaced by lighter molecules of water. This is a simple mixing effect. Altitude variation and implications for atmospheric acoustics In the Earth's atmosphere, the chief factor affecting the speed of sound is the temperature. For a given ideal gas with constant heat capacity and composition, the speed of sound is dependent solely upon temperature; see the details below. In such an ideal case, the effects of decreased density and decreased pressure at altitude cancel each other out, save for the residual effect of temperature. Since temperature (and thus the speed of sound) decreases with increasing altitude up to 11 km, sound is refracted upward, away from listeners on the ground, creating an acoustic shadow at some distance from the source. The decrease of the speed of sound with height is referred to as a negative sound speed gradient. However, there are variations in this trend above 11 km. In particular, in the stratosphere above about 20 km, the speed of sound increases with height, due to an increase in temperature from heating within the ozone layer. This produces a positive sound speed gradient in this region. Still another region of positive gradient occurs at very high altitudes, in the aptly named thermosphere above 90 km. Practical formula for dry air The approximate speed of sound in dry (0% humidity) air, in metres per second, at temperatures near 0 °C, can be calculated from

c_air = (331.3 + 0.606·θ) m/s,

where θ is the temperature in degrees Celsius (°C). This equation is derived from the first two terms of the Taylor expansion of the following more accurate equation:

c_air = 331.3 √(1 + θ/273.15) m/s.

Dividing the first part, and multiplying the second part, on the right-hand side, by √273.15 gives the exactly equivalent form

c_air = 20.05 √(θ + 273.15) m/s.

The value of 331.3 m/s, which represents the speed at 0 °C (or 273.15 K), is based on theoretical (and some measured) values of the heat capacity ratio, γ, as well as on the fact that at 1 atm real air is very well described by the ideal gas approximation. Commonly found values for the speed of sound at 0 °C may vary from 331.2 to 331.6 m/s due to the assumptions made when it is calculated. If the ideal gas γ is assumed to be 7/5 = 1.4 exactly, the 0 °C speed is calculated (see the section below) to be 331.3 m/s, the coefficient used above.
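A short sketch comparing the two dry-air formulas reconstructed above shows how closely the two-term approximation tracks the square-root form near 0 °C:

```python
import math

def c_air_linear(theta):
    """Two-term Taylor approximation: 331.3 + 0.606 * theta (theta in deg C)."""
    return 331.3 + 0.606 * theta

def c_air_sqrt(theta):
    """More accurate form: c = 331.3 * sqrt(1 + theta / 273.15)."""
    return 331.3 * math.sqrt(1.0 + theta / 273.15)

for theta in (-25.0, 0.0, 20.0, 35.0):
    print(theta, round(c_air_linear(theta), 1), round(c_air_sqrt(theta), 1))
# At 20 C both give about 343 m/s; the forms diverge as theta moves away from 0 C.
```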
The square-root form is correct over a much wider temperature range, but it still depends on the approximation that the heat capacity ratio is independent of temperature, and for this reason it will fail, particularly at higher temperatures. It gives good predictions in relatively dry, cold, low-pressure conditions, such as the Earth's stratosphere. The equation fails at extremely low pressures and short wavelengths, since it depends on the assumption that the wavelength of the sound in the gas is much longer than the average mean free path between gas molecule collisions. A derivation of these equations will be given in the following section. A graph comparing the results of the two equations, using the slightly different value of 331.5 m/s for the speed of sound at 0 °C, shows them agreeing closely near 0 °C. Speed of sound in ideal gases and air For an ideal gas, K (the bulk modulus in the equations above, equivalent to C, the coefficient of stiffness in solids) is given by

K = γ·p;

thus, from the Newton–Laplace equation above, the speed of sound in an ideal gas is given by

c = √(γ·p/ρ),

where
- γ is the adiabatic index, also known as the isentropic expansion factor. It is the ratio of the specific heat of a gas at constant pressure to that of a gas at constant volume (Cp/Cv), and arises because a classical sound wave induces an adiabatic compression, in which the heat of the compression does not have enough time to escape the pressure pulse, and thus contributes to the pressure induced by the compression;
- p is the pressure;
- ρ is the density.
Using the ideal gas law to replace p with nRT/V, and replacing ρ with nM/V, the equation for an ideal gas becomes

c_ideal = √(γ·R·T/M) = √(γ·k·T/m),

where
- c_ideal is the speed of sound in an ideal gas;
- R (approximately 8.3145 J·mol−1·K−1) is the molar gas constant (universal gas constant);
- k is the Boltzmann constant;
- γ (gamma) is the adiabatic index. At room temperature, where thermal energy is fully partitioned into rotation (rotations are fully excited) but quantum effects prevent the excitation of vibrational modes, the value is 7/5 = 1.400 for diatomic molecules, according to kinetic theory. Gamma is actually experimentally measured over a range from 1.3991 to 1.403 at 0 °C for air. Gamma is exactly 5/3 = 1.6667 for monatomic gases such as the noble gases;
- T is the absolute temperature;
- M is the molar mass of the gas. The mean molar mass for dry air is about 0.0289645 kg/mol;
- n is the number of moles;
- m is the mass of a single molecule.
This equation applies only when the sound wave is a small perturbation on the ambient conditions and the certain other noted conditions are fulfilled, as noted below. Calculated values for c_air have been found to vary slightly from experimentally determined values. Newton famously considered the speed of sound before most of the development of thermodynamics and so incorrectly used isothermal calculations instead of adiabatic ones. His result was missing the factor of γ (under the square root) but was otherwise correct. Numerical substitution of the above values gives the ideal gas approximation of sound velocity for gases, which is accurate at relatively low gas pressures and densities (for air, this includes standard Earth sea-level conditions).
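Carrying out that numerical substitution is a one-liner. The air values below are the ones quoted above; the helium case uses an assumed molar mass of 4.0026 g/mol for illustration:

```python
import math

R = 8.3145         # molar gas constant, J/(mol*K)
M_AIR = 0.0289645  # mean molar mass of dry air, kg/mol
GAMMA = 1.4        # adiabatic index for diatomic gases

def c_ideal(T, gamma=GAMMA, M=M_AIR):
    """Speed of sound in an ideal gas: c = sqrt(gamma * R * T / M), T in kelvin."""
    return math.sqrt(gamma * R * T / M)

print(round(c_ideal(273.15), 1))  # ~331.3 m/s at 0 C
print(round(c_ideal(293.15), 1))  # ~343.2 m/s at 20 C
# Helium (monatomic, gamma = 5/3, assumed M = 4.0026e-3 kg/mol) -> roughly 1 km/s
print(round(c_ideal(293.15, gamma=5 / 3, M=4.0026e-3)))
```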
Also, for diatomic gases the use of γ = 1.4000 requires that the gas exists in a temperature range high enough that rotational heat capacity is fully excited (i.e., molecular rotation is fully used as a heat energy "partition" or reservoir), but at the same time the temperature must be low enough that molecular vibrational modes contribute no heat capacity (i.e., insignificant heat goes into vibration, as all vibrational quantum modes above the minimum-energy mode have energies too high to be populated by a significant number of molecules at this temperature). For air, these conditions are fulfilled at room temperature, and also at temperatures considerably below room temperature (see the tables below). See the section on gases in specific heat capacity for a more complete discussion of this phenomenon. For air, we use the simplified symbol c_air. Additionally, if temperatures in degrees Celsius (°C) are to be used to calculate the air speed in the region near 273 K, then the Celsius temperature θ = T − 273.15 may be used. Making the numerical substitutions R = 8.3145 J·mol−1·K−1 (the molar gas constant) and M = 0.0289645 kg/mol (the mean molar mass of air), and using the ideal diatomic gas value of γ = 1.4000, gives, for dry air,

c_air = 331.3 √(1 + θ/273.15) m/s,

where θ (theta) is the temperature in degrees Celsius (°C). Using the first two terms of the Taylor expansion then gives

c_air = (331.3 + 0.606·θ) m/s.

This derivation includes the first two equations given in the "Practical formula for dry air" section above. Effects due to wind shear The speed of sound varies with temperature. Since temperature and sound velocity normally decrease with increasing altitude, sound is refracted upward, away from listeners on the ground, creating an acoustic shadow at some distance from the source. Wind shear of 4 m/(s·km) can produce refraction equal to a typical temperature lapse rate of 7.5 °C/km. Higher values of wind gradient will refract sound downward toward the surface in the downwind direction, eliminating the acoustic shadow on the downwind side. This will increase the audibility of sounds downwind. This downwind refraction effect occurs because there is a wind gradient; the sound is not being carried along by the wind. For sound propagation, the variation of wind speed with height can be described by a power-law profile of the form

U(h) = U(0)·h^ζ,  with gradient dU/dH(h) = ζ·U(h)/h,

where
- U(h) is the speed of the wind at height h;
- ζ is the exponential coefficient based on ground surface roughness, typically between 0.08 and 0.52;
- dU/dH(h) is the expected wind gradient at height h.
In the 1862 American Civil War Battle of Iuka, an acoustic shadow, believed to have been enhanced by a northeast wind, kept two divisions of Union soldiers out of the battle, because they could not hear the sounds of battle only 10 km (six miles) downwind. In the standard atmosphere:
- T0 is 273.15 K (= 0 °C = 32 °F), giving a theoretical value of 331.3 m/s (= 1086.9 ft/s = 1193 km/h = 741.1 mph = 644.0 kn); values ranging from 331.3 to 331.6 m/s may be found in the reference literature, however;
- T20 is 293.15 K (= 20 °C = 68 °F), giving a value of 343.2 m/s (= 1126.0 ft/s = 1236 km/h = 767.8 mph = 667.2 kn);
- T25 is 298.15 K (= 25 °C = 77 °F), giving a value of 346.1 m/s (= 1135.6 ft/s = 1246 km/h = 774.3 mph = 672.8 kn).
In fact, assuming an ideal gas, the speed of sound c depends on temperature only, not on the pressure or density (since these change in lockstep for a given temperature and cancel out). Air is almost an ideal gas.
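The standard-atmosphere values above, and the altitude table that follows, quote each speed in several units. The conversions involve only standard unit definitions (1 mph = 0.44704 m/s, 1 kn = 0.514444 m/s), so a quick sketch can reproduce the rows:

```python
def in_common_units(c_ms):
    """Express a speed given in m/s in the other units used in this article."""
    return {
        "m/s": round(c_ms, 1),
        "km/h": round(c_ms * 3.6),          # exact definition
        "mph": round(c_ms / 0.44704, 1),    # 1 mph = 0.44704 m/s exactly
        "kn": round(c_ms / 0.514444, 1),    # 1 kn = 1852/3600 m/s
    }

print(in_common_units(331.3))  # ~1193 km/h, ~741.1 mph, ~644.0 kn (T0 row above)
print(in_common_units(343.2))  # ~1236 km/h, ~767.8 mph, ~667.1 kn (T20 row)
```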
The temperature of the air varies with altitude, giving the following variations in the speed of sound using the standard atmosphere (actual conditions may vary). (Table of temperature effects: speed of sound, density of air, and characteristic specific acoustic impedance.) Given normal atmospheric conditions, the temperature, and thus the speed of sound, varies with altitude:
|Altitude||Temperature||m/s||km/h||mph||kn|
|Sea level||15 °C (59 °F)||340||1,225||761||661|
|11,000 m−20,000 m (cruising altitude of commercial jets, and first supersonic flight)||−57 °C (−70 °F)||295||1,062||660||573|
|29,000 m (flight of X-43A)||−48 °C (−53 °F)||301||1,083||673||585|
Effect of frequency and gas composition General physical considerations The medium in which a sound wave is travelling does not always respond adiabatically, and as a result the speed of sound can vary with frequency. The limitations of the concept of speed of sound due to extreme attenuation are also of concern. The attenuation which exists at sea level for high frequencies applies to successively lower frequencies as atmospheric pressure decreases, or as the mean free path increases. For this reason, the concept of speed of sound (except for frequencies approaching zero) progressively loses its range of applicability at high altitudes. The standard equations for the speed of sound apply with reasonable accuracy only to situations in which the wavelength of the sound wave is considerably longer than the mean free path of molecules in a gas. The molecular composition of the gas contributes both as the mass (M) of the molecules and as their heat capacities, and so both have an influence on the speed of sound. In general, at the same molecular mass, monatomic gases have a slightly higher speed of sound (over 9% higher) because they have a higher γ (5/3 = 1.66…) than diatomics do (7/5 = 1.4). Thus, at the same molecular mass, the speed of sound of a monatomic gas goes up by a factor of

√((5/3)/(7/5)) = √(25/21) ≈ 1.091.

This gives the 9% difference, and would be a typical ratio for speeds of sound at room temperature in helium vs. deuterium, each with a molecular weight of 4. Sound travels faster in helium than in deuterium because adiabatic compression heats helium more, since the helium molecules can store heat energy from compression only in translation, but not rotation. Thus helium molecules (monatomic molecules) travel faster in a sound wave and transmit sound faster. (Sound travels at about 70% of the mean molecular speed in gases; the figure is 75% in monatomic gases and 68% in diatomic gases.) Note that in this example we have assumed that the temperature is low enough that heat capacities are not influenced by molecular vibration (see heat capacity). However, vibrational modes simply cause gammas which decrease toward 1, since vibrational modes in a polyatomic gas give the gas additional ways to store heat which do not affect temperature, and thus do not affect molecular velocity and sound velocity. Thus, the effect of higher temperatures and vibrational heat capacity acts to increase the difference between the speed of sound in monatomic vs. polyatomic molecules, with the speed remaining greater in monatomics. Practical application to air By far the most important factor influencing the speed of sound in air is temperature. The speed is proportional to the square root of the absolute temperature, giving an increase of about 0.6 m/s per degree Celsius. For this reason, the pitch of a musical wind instrument increases as its temperature increases. The speed of sound is raised by humidity but decreased by carbon dioxide.
The difference between 0% and 100% humidity is about 1.5 m/s at standard pressure and temperature, but the size of the humidity effect increases dramatically with temperature. The carbon dioxide content of air is not fixed, due to both carbon pollution and human breath (e.g., in the air blown through wind instruments). The dependence on frequency and pressure is normally insignificant in practical applications. In dry air, the speed of sound increases by about 0.1 m/s as the frequency rises from 10 Hz to 100 Hz. For audible frequencies above 100 Hz it is relatively constant. Standard values of the speed of sound are quoted in the limit of low frequencies, where the wavelength is large compared to the mean free path. Mach number, a useful quantity in aerodynamics, is the ratio of air speed to the local speed of sound. At altitude, for the reasons explained, Mach number is a function of temperature. Aircraft flight instruments, however, operate using pressure differential to compute Mach number, not temperature. The assumption is that a particular pressure represents a particular altitude and, therefore, a standard temperature. Aircraft flight instruments need to operate this way because the stagnation pressure sensed by a Pitot tube is dependent on altitude as well as speed. A range of different methods exists for the measurement of sound in air. The earliest reasonably accurate estimate of the speed of sound in air was made by William Derham and acknowledged by Isaac Newton. Derham had a telescope at the top of the tower of the Church of St Laurence in Upminster, England. On a calm day, a synchronized pocket watch would be given to an assistant, who would fire a shotgun at a pre-determined time from a conspicuous point some miles away, across the countryside. This could be confirmed by telescope. Derham then measured the interval between seeing the gunsmoke and the arrival of the sound using a half-second pendulum. The distance from where the gun was fired was found by triangulation, and simple division (distance/time) provided the velocity. Lastly, by making many observations, using a range of different distances, the inaccuracy of the half-second pendulum could be averaged out, giving his final estimate of the speed of sound. Modern stopwatches enable this method to be used today over distances as short as 200–400 metres, without needing something as loud as a shotgun. Single-shot timing methods If a sound source and two microphones are arranged in a straight line, with the sound source at one end, then the following can be measured:
- the distance between the microphones (x), called the microphone basis;
- the time of arrival between the signals (delay) reaching the different microphones (t).
Then v = x/t. Kundt's tube is an example of an experiment which can be used to measure the speed of sound in a small volume. It has the advantage of being able to measure the speed of sound in any gas. This method uses a powder to make the nodes and antinodes visible to the human eye. This is an example of a compact experimental setup. A tuning fork can be held near the mouth of a long pipe which is dipping into a barrel of water. In this system, the pipe can be brought to resonance if the length of the air column in the pipe is equal to (1 + 2n)λ/4, where n is an integer. As the antinodal point for the pipe at the open end is slightly outside the mouth of the pipe, it is best to find two or more points of resonance and then measure half a wavelength between these. Here v = fλ.
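Both classroom methods above reduce to one-line formulas. A minimal sketch, with all measurement values invented for illustration:

```python
def speed_from_timing(x, t):
    """Single-shot timing: microphone spacing x (m) divided by signal delay t (s)."""
    return x / t

def speed_from_resonance(f, half_wavelength):
    """Resonance-tube method: v = f * lambda, where lambda is twice the distance
    between two adjacent resonance points of the air column."""
    return f * 2.0 * half_wavelength

print(speed_from_timing(1.715, 0.005))    # 343.0 m/s from a 1.715 m microphone basis
print(speed_from_resonance(440.0, 0.39))  # ~343 m/s for a 440 Hz tuning fork
```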
High-precision measurements in air The effect of impurities can be significant when making high-precision measurements. Chemical desiccants can be used to dry the air, but will in turn contaminate the sample. The air can be dried cryogenically, but this has the effect of removing the carbon dioxide as well; therefore many high-precision measurements are performed with air free of carbon dioxide rather than with natural air. A 2002 review found that a 1963 measurement by Smith and Harlow using a cylindrical resonator gave "the most probable value of the standard speed of sound to date." The experiment was done with air from which the carbon dioxide had been removed, but the result was then corrected for this effect so as to be applicable to real air. The experiments were done at 30 °C but corrected for temperature in order to report them at 0 °C. The result was 331.45 ± 0.01 m/s for dry air at STP, for frequencies from 93 Hz to 1,500 Hz. Speed of sound in solids In a solid, there is a non-zero stiffness both for volumetric deformations and shear deformations. Hence, it is possible to generate sound waves with different velocities, depending on the deformation mode. Sound waves generating volumetric deformations (compression) and shear deformations (shearing) are called pressure waves (longitudinal waves) and shear waves (transverse waves), respectively. In earthquakes, the corresponding seismic waves are called P-waves (primary waves) and S-waves (secondary waves), respectively. The sound velocities of these two types of waves propagating in a homogeneous 3-dimensional solid are respectively given by

c_solid,p = √((K + 4G/3)/ρ) = √(E(1 − ν)/(ρ(1 + ν)(1 − 2ν))),  c_solid,s = √(G/ρ),

where
- K is the bulk modulus of the elastic material;
- G is the shear modulus of the elastic material;
- E is the Young's modulus;
- ρ is the density;
- ν is Poisson's ratio.
The last quantity is not an independent one, as E = 3K(1 − 2ν). Note that the speed of pressure waves depends both on the pressure and shear resistance properties of the material, while the speed of shear waves depends on the shear properties only. Typically, pressure waves travel faster in materials than do shear waves, and in earthquakes this is the reason that the onset of an earthquake is often preceded by a quick upward-downward shock, before the arrival of waves that produce a side-to-side motion. For example, for a typical steel alloy, K = 170 GPa, G = 80 GPa and ρ = 7,700 kg/m3, yielding a compressional speed c_solid,p of 6,000 m/s. This is in reasonable agreement with c_solid,p measured experimentally at 5,930 m/s for a (possibly different) type of steel. The shear speed c_solid,s is estimated at 3,200 m/s using the same numbers; a numerical check follows below. The speed of sound for pressure waves in stiff materials such as metals is sometimes given for "long rods" of the material in question, in which the speed is easier to measure. In rods whose diameter is shorter than a wavelength, the speed of pure pressure waves may be simplified and is given by

c_solid = √(E/ρ),

where E is the Young's modulus. This is similar to the expression for shear waves, save that Young's modulus replaces the shear modulus. This speed of sound for pressure waves in long rods will always be slightly less than the same speed in homogeneous 3-dimensional solids, and the ratio of the speeds in the two different types of objects depends on Poisson's ratio for the material. Speed of sound in liquids In a fluid, the only non-zero stiffness is to volumetric deformation (a fluid does not sustain shear forces).
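The solid-wave formulas can be checked against the steel figures quoted above; the water line uses an assumed bulk modulus of about 2.2 GPa, which is not taken from this article. Setting G = 0 recovers the fluid case discussed next:

```python
import math

def c_pressure(K, G, rho):
    """Pressure (P) wave speed in a homogeneous solid: sqrt((K + 4G/3) / rho)."""
    return math.sqrt((K + 4.0 * G / 3.0) / rho)

def c_shear(G, rho):
    """Shear (S) wave speed: sqrt(G / rho)."""
    return math.sqrt(G / rho)

# Typical steel alloy from the text: K = 170 GPa, G = 80 GPa, rho = 7,700 kg/m^3.
print(round(c_pressure(170e9, 80e9, 7700)))  # ~6,000 m/s
print(round(c_shear(80e9, 7700)))            # ~3,200 m/s
# A fluid is the G = 0 case, which reduces to c = sqrt(K / rho):
print(round(c_pressure(2.2e9, 0.0, 998)))    # ~1,480 m/s for water (assumed K)
```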
Hence the speed of sound in a fluid is given by

c_fluid = √(K/ρ),

where K is the bulk modulus of the fluid. In fresh water, sound travels at about 1481 m/s at 20 °C (see the External links section below for online calculators). Applications of underwater sound can be found in sonar, acoustic communication and acoustical oceanography. In salt water that is free of air bubbles or suspended sediment, sound travels at about 1500 m/s (1500.235 m/s at 1000 kilopascals, 10 °C and 3% salinity by one method). The speed of sound in seawater depends on pressure (hence depth), temperature (a change of 1 °C ~ 4 m/s), and salinity (a change of 1‰ ~ 1 m/s), and empirical equations have been derived to calculate the speed of sound accurately from these variables. Other factors affecting the speed of sound are minor. Since temperature decreases with depth while pressure and, generally, salinity increase, the profile of the speed of sound with depth generally shows a characteristic curve which decreases to a minimum at a depth of several hundred metres, then increases again with increasing depth. For more information see Dushaw et al. A simple empirical equation for the speed of sound in sea water with reasonable accuracy for the world's oceans is due to Mackenzie:

c(T, S, z) = a₁ + a₂T + a₃T² + a₄T³ + a₅(S − 35) + a₆z + a₇z² + a₈T(S − 35) + a₉Tz³,

where
- T is the temperature in degrees Celsius;
- S is the salinity in parts per thousand;
- z is the depth in metres.
The constants a₁, a₂, …, a₉ are empirically determined; with them the equation gives a check value of 1550.744 m/s for T = 25 °C, S = 35 parts per thousand, z = 1,000 m (a worked sketch of this equation, with commonly reproduced coefficient values, appears at the end of this article). This equation has a standard error of 0.070 m/s for salinity between 25 and 40 ppt. See Technical Guides. Speed of Sound in Sea-Water for an online calculator. Other equations for the speed of sound in sea water are accurate over a wide range of conditions, but are far more complicated, e.g., that by V. A. Del Grosso and the Chen-Millero-Li Equation. Speed of sound in plasma The speed of ion sound waves in a plasma is given (in one common formulation) by

c_s = √(γ·Z·k·Te/mi),

where
- mi is the ion mass;
- μ is the ratio of ion mass to proton mass, μ = mi/mp;
- Te is the electron temperature;
- Z is the charge state;
- k is the Boltzmann constant;
- γ is the adiabatic index.
In contrast to a gas, the pressure and the density are provided by separate species: the pressure by the electrons and the density by the ions. The two are coupled through a fluctuating electric field. When sound spreads out evenly in all directions in three dimensions, the intensity drops in proportion to the inverse square of the distance. However, in the ocean there is a layer called the 'deep sound channel' or SOFAR channel which can confine sound waves at a particular depth. In the SOFAR channel, the speed of sound is lower than that in the layers above and below. Just as light waves will refract towards a region of higher index, sound waves will refract towards a region where their speed is reduced. The result is that sound gets confined in the layer, much the way light can be confined in a sheet of glass or optical fiber. Thus, the sound is confined in essentially two dimensions. In two dimensions the intensity drops in proportion to only the inverse of the distance. This allows waves to travel much further before becoming undetectably faint. - Speed of Sound - "The Speed of Sound". mathpages.com. Retrieved 3 May 2015. - Bannon, Mike; Kaputa, Frank. "The Newton–Laplace Equation and Speed of Sound". Thermal Jackets. Retrieved 3 May 2015. - Murdin, Paul (25 December 2008). Full Meridian of Glory: Perilous Adventures in the Competition to Measure the Earth. Springer Science & Business Media. pp. 35–36. ISBN 9780387755342. - Fox, Tony (2003). Essex Journal.
Other equations for the speed of sound in sea water are accurate over a wide range of conditions, but are far more complicated, e.g., that by V. A. Del Grosso and the Chen-Millero-Li Equation.

Speed of sound in plasma

In a plasma, for the common case that the electrons are hotter than the ions (but not too much hotter), the speed of sound is given by the ion sound speed

cs = √(γZkTe / mi) ≈ 9.79 × 10³ √(γZTe / μ) m/s

where
- mi is the ion mass;
- μ is the ratio of ion mass to proton mass, μ = mi/mp;
- Te is the electron temperature (expressed in eV in the second form);
- Z is the charge state;
- k is Boltzmann constant;
- γ is the adiabatic index.

In contrast to a gas, the pressure and the density are provided by separate species, the pressure by the electrons and the density by the ions. The two are coupled through a fluctuating electric field.

Gradients

When sound spreads out evenly in all directions in three dimensions, the intensity drops in proportion to the inverse square of the distance. However, in the ocean there is a layer called the 'deep sound channel' or SOFAR channel which can confine sound waves at a particular depth. In the SOFAR channel, the speed of sound is lower than that in the layers above and below. Just as light waves will refract towards a region of higher index, sound waves will refract towards a region where their speed is reduced. The result is that sound gets confined in the layer, much the way light can be confined in a sheet of glass or optical fiber. Thus, the sound is confined in essentially two dimensions. In two dimensions the intensity drops in proportion to only the inverse of the distance. This allows waves to travel much further before being undetectably faint.

References

- "The Speed of Sound". mathpages.com. Retrieved 3 May 2015.
- Bannon, Mike; Kaputa, Frank. "The Newton–Laplace Equation and Speed of Sound". Thermal Jackets. Retrieved 3 May 2015.
- Murdin, Paul (25 December 2008). Full Meridian of Glory: Perilous Adventures in the Competition to Measure the Earth. Springer Science & Business Media. pp. 35–36. ISBN 9780387755342.
- Fox, Tony (2003). Essex Journal. Essex Arch & Hist Soc. pp. 12–16.
- Dean, E. A. (August 1979). Atmospheric Effects on the Speed of Sound. Technical report, Defense Technical Information Center.
- Everest, F. (2001). The Master Handbook of Acoustics. New York: McGraw-Hill. pp. 262–263. ISBN 0-07-136097-2.
- "CODATA Value: molar gas constant". Physics.nist.gov. Retrieved 24 October 2010.
- U.S. Standard Atmosphere, 1976. U.S. Government Printing Office, Washington, D.C., 1976.
- Uman, Martin (1984). Lightning. New York: Dover Publications. ISBN 0-486-64575-4.
- Volland, Hans (1995). Handbook of Atmospheric Electrodynamics. Boca Raton: CRC Press. p. 22. ISBN 0-8493-8647-0.
- Singal, S. (2005). Noise Pollution and Control Strategy. Oxford: Alpha Science International. p. 7. ISBN 1-84265-237-0. "It may be seen that refraction effects occur only because there is a wind gradient and it is not due to the result of sound being convected along by the wind."
- Bies, David (2004). Engineering Noise Control, Theory and Practice. London: Spon Press. p. 235. ISBN 0-415-26713-7. "As wind speed generally increases with altitude, wind blowing towards the listener from the source will refract sound waves downwards, resulting in increased noise levels."
- Cornwall, Sir (1996). Grant as Military Commander. New York: Barnes & Noble. p. 92. ISBN 1-56619-913-1.
- Cozens, Peter (2006). The Darkest Days of the War: the Battles of Iuka and Corinth. Chapel Hill: The University of North Carolina Press. ISBN 0-8078-5783-1.
- Wood, A. B. (1946). A Textbook of Sound. London: Bell.
- "Speed of Sound in Air". Phy.mtu.edu. Retrieved 13 June 2014.
- Nemiroff, R.; Bonnell, J., eds. (19 August 2007). "A Sonic Boom". Astronomy Picture of the Day. NASA. Retrieved 24 October 2010.
- Zuckerwar. Handbook of the Speed of Sound in Real Gases. p. 52.
- Kinsler, L. E.; et al. (2000). Fundamentals of Acoustics (4th ed.). New York: John Wiley and Sons.
- Krautkrämer, J.; Krautkrämer, H. (1990). Ultrasonic Testing of Materials (4th fully revised ed.). Berlin: Springer-Verlag. p. 497.
- "Speed of Sound in Water at Temperatures between 32–212 °F (0–100 °C) — imperial and SI units". The Engineering Toolbox.
- Wong, George S. K.; Zhu, Shi-ming (1995). "Speed of sound in seawater as a function of salinity, temperature, and pressure". The Journal of the Acoustical Society of America. 97 (3): 1732. doi:10.1121/1.413048.
- APL-UW TR 9407: High-Frequency Ocean Environmental Acoustic Models Handbook. pp. I1–I2.
- Robinson, Stephen (22 September 2005). "Technical Guides — Speed of Sound in Sea-Water". National Physical Laboratory. Retrieved 7 December 2016.
- "How Fast Does Sound Travel?". Discovery of Sound in the Sea. University of Rhode Island. Retrieved 30 November 2010.
- Dushaw, Brian D.; Worcester, P. F.; Cornuelle, B. D.; Howe, B. M. (1993). "On Equations for the Speed of Sound in Seawater". Journal of the Acoustical Society of America. 93 (1): 255–275. Bibcode:1993ASAJ...93..255D. doi:10.1121/1.405660.
- Mackenzie, Kenneth V. (1981). "Discussion of sea-water sound-speed determinations". Journal of the Acoustical Society of America. 70 (3): 801–806. Bibcode:1981ASAJ...70..801M. doi:10.1121/1.386919.
- Del Grosso, V. A. (1974). "New equation for speed of sound in natural waters (with comparisons to other equations)". Journal of the Acoustical Society of America. 56 (4): 1084–1091. Bibcode:1974ASAJ...56.1084D. doi:10.1121/1.1903388.
"Further Evidence that the Sound-Speed Algorithm of Del Grosso Is More Accurate Than that of Chen and Millero". Journal of the Acoustical Society of America. 102 (4): 2058–2062. Bibcode:1997ASAJ..102.2058M. doi:10.1121/1.419655. - Calculation: Speed of Sound in Air and the Temperature - Speed of sound: Temperature Matters, Not Air Pressure - Properties of the U.S. Standard Atmosphere 1976 - The Speed of Sound - How to Measure the Speed of Sound in a Laboratory - Teaching Resource for 14-16 Years on Sound Including Speed of Sound - Technical Guides. Speed of Sound in Pure Water - Technical Guides. Speed of Sound in Sea-Water - Did Sound Once Travel at Light Speed? - Acoustic Properties of Various Materials Including the Speed of Sound - Technical Guides - Speed of Sound in Pure Water (provides a calculator for the speed in water) - Discovery of Sound in the Sea (uses of sound by humans and other animals)
What Are Equivalent Fractions? Explained For Primary School

Equivalent fractions come up a lot in KS2 maths, and some children, parents, and even teachers at primary school can be a little unsure as to what they are and how to find them. This article aims to make things a little clearer. This blog is part of our series of blogs designed for teachers, schools and parents supporting home learning.

- What are equivalent fractions?
- To understand equivalent fractions, make sure you know the basics of fractions
- Examples of equivalent fractions
- How to work out equivalent fractions
- When do children learn about equivalent fractions in primary school?
- How do equivalent fractions relate to other areas of maths?
- Equivalent fractions questions
- Equivalent fractions resources

What are equivalent fractions?

Equivalent fractions are two or more fractions that are all equal, even though they have different numerators and denominators. For example, the fraction 1/2 is equivalent to (or the same as) 25/50 or 500/1000. In each case the fraction in its simplest form is 'one half'.

Remember, a fraction is a part of a whole: the denominator (bottom number) represents how many equal parts the whole is split into; the numerator (top number) represents the amount of those parts.

To understand equivalent fractions, make sure you know the basics of fractions

If the concept of equivalent fractions already sounds a bit confusing and you're not yet clear on the difference between whole numbers, the denominator of a fraction and different numerators, you may want to loop back to our fractions for kids article. This breaks down the first fraction steps that Key Stage 1 and Key Stage 2 children must take at school, together with clear examples of how to find the value of a fraction using concrete resources, maths manipulatives, pictorial representations and number lines; the difference between unit fractions and non-unit fractions; all the way up to proper and improper fractions. It's been written as a guide for children and parents to work through together in clear, digestible chunks.

Examples of equivalent fractions

Here are some examples of equivalent fractions using a bar model, showing the 'parts' each numerator refers to out of the 'whole', i.e. the denominator.

4/6 = four out of six parts, shown as a bar model.

Although 8/12 may look like a different fraction, it is actually equivalent to 4/6, because eight out of 12 parts is the same as four out of six parts.

2/3, or two out of three, is another fraction equivalent to both 4/6 and 8/12. The three fractions 2/3, 4/6 and 8/12 can be shown together in a fraction wall to demonstrate their equivalence.

How to work out equivalent fractions

To work out equivalent fractions, both the numerator and denominator of a fraction must be multiplied by the same number. Because the top and bottom are multiplied by the same number, you are in effect multiplying by 1, and multiplying by 1 doesn't change the original number, so the new fraction is equivalent. For example, you can multiply by 2/2 or 6/6 and you're still multiplying by 1.

Equivalent fractions to 3/5:
3/5 x 2/2 = 6/10
3/5 x 3/3 = 9/15
3/5 x 4/4 = 12/20

So, 3/5 = 6/10 = 9/15 = 12/20.
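For teachers or parents who want a quick way to verify equivalence, here is a small optional Python sketch. Python's built-in fractions module automatically reduces each fraction to its simplest form, so equivalent fractions compare as equal:

```python
from fractions import Fraction

# Equal values compare equal, whatever numerators and denominators they use
print(Fraction(3, 5) == Fraction(6, 10) == Fraction(9, 15) == Fraction(12, 20))  # True

# Multiplying numerator and denominator by the same number keeps the value
for n in (2, 3, 4):
    print(Fraction(3 * n, 5 * n))  # printed as 3/5 every time
```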
Another way to find equivalent fractions is to divide both the numerator and the denominator of the fraction by the same number; this is called simplifying fractions, because both the numerator and denominator digits will get smaller. For example, to simplify the fraction 9/12, find a number that both the numerator and denominator can be divided by (also known as a 'common factor'), such as 3. 9/12 ÷ 3/3 = 3/4, so 9/12 and 3/4 are equivalent fractions, with 3/4 being the fraction in its simplest form.

When do children learn about equivalent fractions in primary school?

Equivalent fractions KS2

The concept of equivalent fractions isn't introduced until Year 3, where children recognise and show, using diagrams, equivalent fractions with small denominators. In Year 4, they will recognise and show, using diagrams, families of common equivalent fractions. The National Curriculum's non-statutory guidance also advises that pupils use factors and multiples to recognise equivalent fractions and simplify where appropriate (for example, 6/9 = 2/3 or 1/4 = 2/8).

In Year 5, pupils are taught to identify, name and write equivalent fractions of a given fraction, represented visually, including tenths and hundredths. In Year 6, they will begin adding and subtracting fractions with different denominators and mixed numbers, using the concept of equivalent fractions. Non-statutory guidance for Year 6 suggests that common factors can be related to finding equivalent fractions and that children practise calculations with simple fractions, including listing equivalent fractions to identify fractions with common denominators.

How do equivalent fractions relate to other areas of maths?

Children will need a strong knowledge of equivalent fractions to be able to convert between fractions, decimals and percentages. Knowledge of times tables, the lowest common multiple and the highest common factor is also important for equivalent fractions.

Wondering how to explain other key maths vocabulary to your children? Check out our Primary Maths Dictionary, or try these other terms related to equivalent fractions:

- What Is A Unit Fraction?
- What Is BODMAS (and BIDMAS)?
- Properties of shape
- What are 2D shapes?
- What are 3D shapes?

Equivalent fractions questions

1. Write the missing values: 3/4 = 9/? = ?/24 (Answer: 12, 18)
2. Circle the two fractions that have the same value. (Answer: 1/2 and 5/10)
3. Tick two shapes that have 3/4 shaded. (Answer: top left (6/8) and bottom right (12/16), as both = 3/4)
4. Shade 1/4 of this shape. (Answer: any 9 triangles shaded)
5. Ahmed says, 'One-third of this shape is shaded.' Is he correct? Explain how you know. (Answer: yes; it would be 2/6 (imagine the middle square split into halves too), which = 1/3)

In maths, 'equivalent' means that two (or more) values or quantities are the same. Equivalent fractions are fractions that may look different but actually represent the same quantity; 2/3 and 6/9 are examples of equivalent fractions. They can be explained as fractions that have different numerators and denominators but represent the same value.
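The simplifying method described above, dividing the numerator and denominator by a common factor, can also be sketched in a few lines of Python; the function name is mine, and the highest common factor comes from math.gcd:

```python
from math import gcd

def simplify(numerator, denominator):
    """Divide top and bottom by their highest common factor."""
    factor = gcd(numerator, denominator)
    return numerator // factor, denominator // factor

print(simplify(9, 12))  # (3, 4): 9/12 in its simplest form is 3/4
```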
Equivalent fractions resources

- Year 3 Equivalent Fractions Worksheet
- Year 6 Equivalent Fractions, Decimals and Percentages Worksheet
- Tarsia Puzzle Equivalent Fractions and Decimals (Year 5)
- Printable Maths Resources Fraction Walls
Neural networks, the machine learning algorithm based on the human brain

In the brain, a neural network is a circuit of neurons linked through chemical and/or electrical impulses. Neurons use these signals to communicate with each other in order to perform a certain function or action, for example, carrying out a cognitive task such as thinking, remembering, and learning.

The neuron sends out an electrical signal through its axon, or nerve fiber. The end of the axon branches into many terminals, which face the dendrites of neighboring cells across tiny gaps. When the signal reaches these terminals, chemicals called neurotransmitters are released into the gap between cells. The cells on the other side of the gap contain receptors where the neurotransmitters bind to trigger changes in the cells. Sometimes neurotransmitters cause an electrical signal to be transmitted down the receiving cell. Others can block the signal from continuing, preventing the message from being carried on to other nerve cells. In this way, large numbers of neurons can communicate with each other, forming large-scale brain networks.

Now, this is how biological neural networks work. We describe them because it's important to understand the basic functioning of biological neural networks in order to explain the origin and functioning of artificial neural networks: node-based computing systems that somewhat imitate the neurons in the human brain to help machines learn.

So what is a neural network in machine learning?

If you programmed a computer to do something, the computer would always do the same thing. It would react to certain situations the way you "told" it to. This is what an algorithm is: a set of instructions to solve a certain kind of problem. But there are limits to the instructions that humans can write down in code. We can't use a simple program to teach a computer how to interpret natural language or how to make predictions, in effect, how to "think" for itself. This is because no hand-written code can be large enough to cover all possible situations, such as all of the decisions we make when we drive, like predicting what other drivers will do and deciding what we will do based on that. A conventionally programmed computer cannot react correctly to these special conditions because it simply does not (and cannot) have pre-configured responses to all of them. But what if it could figure them out by itself? This is what machine learning is for: to "train" computers to learn from data and develop predictive capacities and decision-making abilities.

Neural networks, also known as artificial neural networks (ANNs) or simulated neural networks (SNNs), are a type of machine learning. Their design is inspired by the way biological neurons signal to one another. In artificial neural networks, the counterparts of biological neurons are layers of interconnected nodes that transmit signals to other nodes, using information from the analysis of data to give an output.

Artificial neural networks have three types of layers:

- Input layers, where the input data is placed;
- Hidden layers, where processing occurs through weighted connections;
- Output layers, where the response to the "stimuli" is delivered.

Each individual node takes in data and assigns a weight to it, giving it more or less importance. Data that is weighted more heavily contributes more to the output compared with other data. If a node's weighted sum exceeds a given threshold, the node "fires", passing its data to the next layer in the network.
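To make the three kinds of layers concrete, here is a minimal sketch of a single forward pass in Python with NumPy. The layer sizes, random weights, and sigmoid activation are illustrative assumptions, not a reference implementation of any particular network:

```python
import numpy as np

def sigmoid(x):
    """Squash a weighted sum into the range (0, 1)."""
    return 1.0 / (1.0 + np.exp(-x))

rng = np.random.default_rng(0)

x = rng.normal(size=3)         # input layer: 3 input values
W1 = rng.normal(size=(4, 3))   # weights into a hidden layer of 4 nodes
b1 = np.zeros(4)
W2 = rng.normal(size=(1, 4))   # weights into a single output node
b2 = np.zeros(1)

hidden = sigmoid(W1 @ x + b1)       # each hidden node weighs its inputs and "fires"
output = sigmoid(W2 @ hidden + b2)  # the output layer delivers the response
print(output)
```

In a trained network the weights W1 and W2 would be learned from data rather than drawn at random.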
Deep neural networks and deep learning

When neural networks have more than one hidden layer to process the input data, they can learn more complex tasks, because they have more "neurons" processing that data across all the hidden layers combined. These multi-layered neural networks are called deep neural networks, and what they do is called deep learning.

We can compare it to what the brain of a 3-year-old child knows versus what the brain of a 30-year-old adult knows. The toddler may be just as smart as the adult but is not as experienced (doesn't have as much data), and therefore doesn't have as much information, or information-processing ability, as the adult when trying to solve problems. This is precisely why neural networks need to be trained: they need to be fed large data sets so the network can find the appropriate weights with which to map inputs to outputs. Neural networks do this by applying optimization algorithms, most commonly gradient descent with backpropagation of errors. In this way, deep learning can even surpass human-level accuracy, because it can sift through and sort huge amounts of data.

Just as we've organized our schools into different grades according to the student's level of knowledge at each stage, deep neural networks build different levels of hierarchical knowledge in their layers. For example, they can store information about basic shapes in their initial layers and end up completely recognizing an object and its characteristics in the output layer. Deep neural networks and deep learning are both subsets of machine learning.

What are Convolutional Neural Networks?

Convolutional neural networks (CNNs) are a class of artificial neural networks that use connectivity patterns to process pixel data. In fact, convolutional neural networks are mainly used for image recognition and classification tasks because their architecture makes them especially good at these. They normally employ matrix multiplication to recognize patterns within images, for which they require a lot of computing power and training. They have three kinds of layers:

- Convolutional layer, which performs the convolution: the search for specific features in the input image through feature detectors called filters.
- Pooling layer, where the dimensions of the feature maps are reduced in size while their important characteristics are preserved. This reduces the number of parameters and calculations that need to be made, improving efficiency.
- Fully-connected layer, where all the nodes and inputs from all layers are connected, weighted, and activated, and where the classification occurs. This may be preceded by (or include) a rectified linear unit (ReLU) layer, which replaces all negative input values with zeros and acts as an activation function.
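As a toy illustration of the three stages just listed, the NumPy sketch below runs one filter over a single-channel image; the hand-written 3×3 vertical-edge filter and the 2×2 pooling window are illustrative choices, and a real CNN would learn its filters during training:

```python
import numpy as np

def conv2d(image, kernel):
    """'Valid' 2-D convolution of one channel (no padding, stride 1)."""
    h, w = kernel.shape
    rows, cols = image.shape[0] - h + 1, image.shape[1] - w + 1
    out = np.empty((rows, cols))
    for i in range(rows):
        for j in range(cols):
            out[i, j] = np.sum(image[i:i+h, j:j+w] * kernel)
    return out

image = np.random.default_rng(1).random((6, 6))   # toy 6x6 grayscale image
vertical_edge = np.array([[1, 0, -1],
                          [1, 0, -1],
                          [1, 0, -1]])            # one hand-written filter

feature_map = conv2d(image, vertical_edge)               # convolutional layer (4x4)
activated = np.maximum(feature_map, 0)                   # ReLU: negatives become zero
pooled = activated.reshape(2, 2, 2, 2).max(axis=(1, 3))  # 2x2 max pooling (2x2 result)
print(pooled)
```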
What are Recurrent Neural Networks?

Recurrent neural networks (RNNs) are a kind of artificial neural network specialized in processing sequential, time-series data. Their deep learning algorithms are designed to solve temporal problems like those present in speech recognition, sales forecasting, and automatic image captioning. Recurrent neural networks take information from prior inputs and apply it to the current inputs and outputs. While traditional neural networks have inputs and outputs that are independent of each other, the output of a recurrent neural network depends on the other elements in the sequence. This approach is used in speech recognition because human languages work with sequences of words, not individual words.

So, in order to interpret speech, recurrent neural networks need to "understand" whole sentences and not only individual words. For example, in order for the idiomatic expression "give someone the cold shoulder" to make sense, each word needs to be expressed in a specific order. For a recurrent network to accurately interpret this idiom, it needs to account for the position of each word and then use that information to predict the next word in the sequence.

Why are neural networks important?

All types of neural networks can boost artificial intelligence's performance to the next level in their own way. In general, they are important because they have applications in many areas. For example, in the aerospace industry, they are used to improve fault diagnosis and autopilot in aircraft and spacecraft. In medicine, convolutional neural networks can help with medical diagnosis through the processing and comparison of medical imaging data (such as X-ray, CT scan, or ultrasound). Neural networks also have applications in security systems, for example, face recognition systems (which compare a detected face with the ones present in the database to identify the individual) and signature verification (mainly used to avoid forgeries in banks and other financial institutions). They allow self-driving cars to navigate the roads, detect pedestrians and other vehicles, and make decisions. Because of their predictive abilities, neural networks are also used in weather forecasting and stock market predictions.

But neural networks can also be found in the most basic, everyday technology we have today. For example, Google Translate uses a neural machine translation system in order to process and translate whole sentences with increasing accuracy. Apple's Siri uses a deep neural network to recognize the voice command that activates it ("Hey Siri"), as well as the speech that follows.

Neural networks are useful because of their efficiency. Plus, they bear great technological potential as they grow in size and in their problem-solving capabilities.
Slavery in Kentucky, 1792 to 1865

Development and General Status of Slavery

It is impossible to understand slavery in Kentucky without some knowledge of the method by which the land was settled in the latter part of the eighteenth century. Between 1782 and 1802 the seven States which had interest in western lands ceded their rights to the United States, and all that territory, with the exception of Kentucky and the Connecticut Reserve in Ohio, was made a part of the public domain. Hence, one of the distinguishing features of the settlement of Kentucky as compared with Ohio was that in the latter State the land was sold by the Federal Government to settlers coming from all parts of the country, but particularly from the northeastern section. The result of this was that few citizens of Ohio held more than 640 acres. Kentucky had been reserved by Virginia, and consequently the method of settlement was purely a matter governed by that State, separate and apart from the system which was employed by the United States Government. Furthermore, Kentucky lands were all given out by 1790, just one year after the beginning of our national period; the federal land policy was at that time just beginning.

Virginia gave out the lands in Kentucky by what is known as the patent system, and all the settlers in Kentucky held their lands by one of three different kinds of rights. In the first place there were those who were given tracts in the new territory as a reward for military services which had been rendered in the Revolution. This had been provided for by the legislature of Virginia as early as December, 1778. No land north of the Ohio River was to be granted out as a military bounty until all the "good lands" in the Kentucky region had been exhausted. The size of these tracts was to be governed by the official status of the recipient in the late war, and the bounties finally granted by Virginia ranged all the way from one hundred to fifteen thousand acres.

The Virginia legislature of 1779 found it necessary to establish a second method of settlement in Kentucky in response to the demands of the large number of people who were migrating to the west of the Alleghenies. Provision was made for the granting of preemption rights to new settlers and also for the introduction of a very generous system of settlement rights. These settlement and preemption rights were almost inseparable, as the latter was dependent upon the former. It was provided that four hundred acres of land would be given to every person or family who had settled in the region before the first of January, 1778. The word "settlement" was stated to mean either a residence of one year in the territory or the raising of a crop of corn. In addition to the above grant, every man who had built only a cabin or made any improvement on the land was entitled to a preemption of one thousand acres, providing such improvements had been made prior to January 1, 1778. Preference in the grants was to be given to the early settlers, and even the most famous heroes of the Revolution were not allowed to interfere with the rights of those who held a certificate of settlement.

Thus far provision had been made only for those who had settled before 1778. To them was given the best of the land. Thereafter all settlement and preemption rights ceased, and the further distribution of land in Kentucky was by means of treasury warrants.
A person desiring land in Kentucky would appear at one of the Virginia land offices, make an entry, and pay a fee amounting to about two cents per acre. The paper he would receive would give the approximate location of the tract, and the recipient would proceed to have the land surveyed at his pleasure. Within three months after the survey had been made he was to appear at the land office and have the same recorded. A copy of this record was to be taken to the assistant register of the land office in Kentucky, and there it was to remain six months in order to give prior settlers, if any, the right to prove their claims to the property. No such evidence being produced, a final record of the patent was to be made and a copy given to the original grantee.

An interesting example of this method of settlement is shown by the experience of Abraham Lincoln, the grandfather of President Lincoln. On March 4, 1780, soon after the establishment of the new system, he appeared at the land office in Richmond, Virginia, and was given three treasury warrants, each for four hundred acres of land in Kentucky. The first and third of these warrants were not returned for the final recording until May 16, 1787, at which time Beverly Randolph, Governor of Virginia, issued a final deed of 800 acres of land in Lincoln County, Kentucky, to Abraham Lincoln. The second treasury warrant was not returned until July 2, 1798, more than a decade after the death of Abraham Lincoln and six years after Kentucky had become a State. At that time the warrant was presented with a record of the survey by Mordecai Lincoln, the eldest son of Abraham. After some period of investigation the deed for the four hundred acres in Jefferson County was turned over to Mordecai Lincoln on April 26, 1799.

The result of this method of granting land was that Kentucky was settled by a comparatively few men who rented their property to tenants. A large number of the military bounties were never settled by the original owners but were farmed by the later incoming tenant class. George Washington had been given five thousand acres, and this land was actually settled by the poorer white element. In the case of the land warrant property it was true that it was usually granted to the poorer class of early settlers, but as in the instance of the Lincoln family the land soon passed into the hands of the wealthier settlers, either by purchase or through law suits. It is commonly stated that Daniel Boone thus became landless and was forced to migrate to Missouri.

Thus we see that Kentucky was distinctly different from all the other settlements to the west of the Alleghenies in the original system of land tenure, and she further inherited from her mother State of Virginia the ancient theory of a landed aristocracy which was based upon tenantry. The early inhabitants of Kentucky can be easily divided into three classes: the landed proprietors, their slaves, and the tenant class of whites. The second and third classes tended to keep alive the status of the former and led to the perpetuation of the landed aristocracy. In Kentucky, however, the laws of descent were always against primogeniture, and this resulted in the division of the lands of the wealthier class with each new generation.

The institution of slavery in Kentucky, as in every other State, depended for the most part upon the existence of large plantations. The only reason Kentucky had such large estates was because of the method by which the land was given out by the mother State.
Economically Kentucky was not adapted to plantation life. The greater part of the State required then, as it still does, the personal care and supervision of the owner or tenant. The original distribution of land made this impossible, and there grew up a large class of landholders who seldom labored with their hands, because of the traditional system. A large number of inhabitants as early as 1805, Michaux found, were cultivating their lands themselves, but those who could do so had all the work done by Negro slaves.

With passing years, while Kentucky maintained slavery, it came to have a social system not like that in the South but one more like the typical structure of the middle nineteenth century West. There were several reasons for this. In the first place, the absence of the policy of primogeniture in time came to distribute the lands over a much larger population. In the second place, while all the land in Kentucky had been granted by the year 1790, the patrician landholding element was completely submerged by the flood of so-called plebeians who came in soon after Kentucky became a State. In 1790 there were only 61,133 white people in Kentucky, and although all the land had been granted, the white population in the next decade nearly tripled, reaching 179,871 in 1800, and this increase, at a slightly smaller rate, continued down to about 1820. Still further, the nature of the soil made it more profitable for the wealthier landed class to let out their holdings to the incoming whites, who did their own work and in time came to own the property. "Each year increased this element of the state at the expense of the larger properties."

A study of the growth of the slave and white population of Kentucky from 1790 to 1860 is necessary to an adequate understanding of the slave problem.

[Table: Population of Kentucky, 1790 to 1860, with per cent rates of increase]

It will be found advantageous to deal with two sets of figures—one relating to the slave population within the State and the other with the slave increase in Kentucky as compared with the general increase throughout the United States. It would not be of any value to compare the figures for Kentucky with those of any other State, for that would involve the discussion of local factors which are beyond the scope of this investigation. First of all we shall take the census statistics for the State for all eight of the enumerations which were taken during the slavery era. The figures for the year 1790 were originally taken when Kentucky was a part of the State of Virginia, but they are included, since Kentucky became a State before the census was published. Furthermore they furnish an interesting light upon the growth of the slave population during the first decade of the new commonwealth. The important part of this table is in the increases, on a percentage basis, in the slave and white populations.

Another viewpoint of the growth of the slave population may be seen in this little table:

[Table: Ratio of Slaves to the Total Population]

Here it will be seen that the proportion of slaves increased down to 1830 and then began to decline. Most authorities are agreed that this was in a large measure due to the enactment of the law of 1833 forbidding the importation of slaves into Kentucky.

[Table: Free Negro and Slave Population of the United States, 1790 to 1860, with Rates of Increase]
But before dealing with that question it would be well to have before us the figures for the whole country at the same period. The facts seem more significant if we compare the slave increase in Kentucky with that of the Negroes in the country as a whole. Bearing in mind that Kentucky was a comparatively new region when it became a State and that at that time slavery was firmly established along the seaboard, we are not surprised to find that the slave increase in Kentucky was much more rapid for the first three or four decades than it was in the nation as a whole. After the year 1830 the increase in the United States, on a percentage basis, was much greater than in Kentucky. It seems that the institution started in with a boom and then eventually died down in Kentucky.

There were several reasons for this fact. A glance at the increase of whites in Kentucky for the last three decades will show that they were forging ahead while the slaves were relatively declining. This was due to a large amount of immigration of that class of white people who were not slaveholding. A second factor was the non-importation act of 1833. About the same time there came to be a conviction among a large portion of the population that slavery in Kentucky was economically unprofitable. There is abundant ground for the position that the law of 1833 was passed because of a firm conviction that there were enough slaves in the State. The only ones who could profit by any amount of importation were the slave dealers, and beyond a certain point even their trade would prove unprofitable. If there was ever a single slaveholder who defended importation on the ground that more slaves were needed in Kentucky, he never spoke out in public and gave his reasons for such a position.

Unfortunately there are few statistics concerning the number of slaveholders in Kentucky. Cassius M. Clay in his appeal to the people in 1845 stated that there were 31,495 owners of slaves in the State. The same year the auditor's tax books showed that there were 176,107 slaves in Kentucky. This would mean an average of 5.5 slaves for each owner. The accuracy of these figures is substantiated by those for the census of 1850, which gave 210,981 slaves held by 38,456 slaveholders, or an average of 5.4 to each owner. These holders were classified according to the number of slaves held as follows:

[Table: Classification of Kentucky slaveholders by number of slaves held, census of 1850]

This distribution shows that, although the average number of slaves held may have been 5.4 for each slaveholder, 21,528 or 56 per cent of them held less than five slaves each, and that 34,129 or 88 per cent held less than 20 each. Of the 132,920 free families in the State only 28 per cent held any slaves at all. This was somewhat below the average for the whole South. The total number of families holding slaves in the United States, by the census of 1850, was 347,525. With an average of 5.7 persons to each family there were about 2,000,000 persons in the relation of slave owners, or about one third of the whole white population of the slave States. In South Carolina, Alabama, Mississippi, and Louisiana about one half of the white population was thus classified. As stated above, this percentage in Kentucky was only twenty-eight.

This comparison can be more clearly shown by a table of the slave States from the census of 1850 showing the number of white people, the slaveholders, slaves, and the average number of slaves for each slaveholder.

[Table: Slave States by the census of 1850, showing whites, slaveholders, slaves, and average slaves per holder]
Among the fourteen real slaveholding States of the Union, Kentucky stood ninth in the number of slaves in 1850, but was third in the number of slave owners, and with the exception of Missouri had fewer slaves for each owner than any other State. From the third column of this table, however, we are rather surprised to find that not only in Missouri but in Arkansas, Maryland and Tennessee the number of slaveholders was smaller in proportion to the total white population than in Kentucky.

Helper in his Impending Crisis made the following interesting table from the census figures for 1850. He set a perfectly arbitrary valuation of $400 on each slave, but, if one takes into account the infants and the aged unable to work, his general appraisement of the slave group is fair enough for the time and for a basis of comparison.

[Table: Helper's valuation of property in the slave States, census of 1850]

It will be seen at a glance that after taking out the value of the slaves in all the States, Kentucky was the richest southern commonwealth. From the three preceding tables it is apparent that while the Kentucky slaveholders represented about 28 per cent of the white population of the State, on the average they held fewer slaves than in the other Southern States. Slave property in Kentucky was a much smaller part of the wealth of the commonwealth than in the States to the south. The relatively large number of holders is to be explained by the type of slavery which existed in the State. Many persons held a few servants in bondage, and those who held many slaves were very few in number.

The question of the sale of slaves from Kentucky into the southern market presents a much more formidable problem. The chief charge that the anti-slavery people made against Kentucky was that the State regularly bred and reared slaves for the market in the lower South. What was the attitude of the Kentucky slaveholder and the people in general on the question of the domestic slave trade? There is no doubt that in the later years of slavery there were sold in the State many slaves who ultimately found their way into the southern market, notwithstanding the contempt of the average Kentucky slaveholder for the slave trade. This trend of opinion will be seen as we proceed. If the sentiment was decidedly against such human commerce, how did so many slaves become victims of the slave trader?

There were five general causes which led to the sale of slaves in Kentucky: (1) when they became so unruly that the master was forced to sell; (2) when their sale was necessary to settle an estate; (3) when the master was reduced to the need of the money value in preference to the labor; (4) when captured runaways were unclaimed after one year; and (5) when the profit alone was desired by unscrupulous masters. Many other reasons have been given, but a careful investigation of all available material confines practically every known case of sale to one of the above classifications.

Mrs. Stowe in her Key to Uncle Tom's Cabin maintained that the prevalence of the slave trade in Kentucky was due to the impoverishment of the soil beyond recovery and the decrease in the economic value of the slave to its owner. This argument is fallacious, for the very blue-grass region which held most of the slaves is today the most fertile section of the State. As long as a slave conducted himself in accordance with the spirit of the slave code there was little chance of his owner selling him against his will.
The president of the Constitutional Convention of 1849 stated that in the interior of the State, where slaves were the most numerous, very few Negroes were sold out of the State, and that they were mostly those whose bad and ungovernable disposition was such that their owners could no longer control them. A true picture of the average master's attitude has been given us by Prof. N. S. Shaler. "What negroes there were," said he, "belonged to a good class. The greater number of them were from families which had been owned by the ancestors of their masters in Virginia. In my grandfather's household and those of his children there were some two dozen of these blacks. They were well cared for; none of them were ever sold, though there was the common threat that 'if you don't behave, you will be sold South.' One of the commonest bits of instruction my grandfather gave me was to remember that my people had in a century never bought or sold a slave except to keep families together. By that he meant that a gentleman of his station should not run any risk of appearing as a 'negro trader,' the last word of opprobrium to be slung at a man. So far as I can remember, this rule was well kept and social ostracism was likely to be visited on any one who was fairly suspected of buying or selling slaves for profit. This state of opinion was, I believe, very general among the better class of slave owners in Kentucky. When negroes were sold it was because they were vicious and intractable. Yet there were exceptions to this high-minded humor."

When a master had a bad Negro, about the only thing that could be done for the sake of discipline was to sell him. If the owner kept the slave, the latter would corrupt his fellows, and if he were set free, the master would reward where he ought to punish. The human interest which the owner took in his servant when the demands of the institution necessitated his sale is shown in the case of the Negro Frank, owned by A. Barnett, of Greensburg. Witness these words of the master in a runaway advertisement: "His transgressions impelled me, some years since, to take him to New Orleans and sell him, where he became the property of a Spaniard, who branded him on each cheek thus, and which is plain to be seen when said negro is newly shaved. I went to New Orleans again last May, where, having my feelings excited by the tale Frank told me, I purchased him again." After the master had gone to all this trouble in the interest of the slave, the latter ran away shortly after his return to Kentucky.

It was often necessary to sell slaves in order to settle an estate. It was seldom possible for a man to will his property in Negroes without some divisions becoming necessary at the hands of the executor in the just interest of the heirs. These public auctions usually took place on court day, at the courthouse door, and were conducted by the master commissioner of the circuit court. The following advertisement reveals the necessity and the procedure:

SALE OF NEGROES

By virtue of a decree of the Fayette Circuit, the undersigned will, as Commissioner to carry into effect said decree, sell to the highest bidder, on the public square in the city of Lexington, on Monday the 10th of March next, being county court day, the following slaves, to wit: Keiser, Carr, Sally, Bob, Susan, Sam, Sarah and Ben; belonging to the estate of Alexander Culbertson, deceased. The sale to be on a credit of three months, the purchaser to give bond with approved security.
The sale to take place between the hours of 11 o'clock in the morning and 3 o'clock in the evening.

February 26, 1834. John Clark, Commissioner

On the same day the sheriff of the county might appear at the courthouse door in accordance with a previous announcement and auction off any unclaimed runaway that had been lodged in the county jail or hired out under his authority for a period of a year or more. The slaves thus sold were usually fugitives from the lower South who had been apprehended on their way to Ohio or Indiana. Although the utmost publicity would have been given to their capture, in accordance with the law, few of the planters of the far South seem ever to have claimed their property. The usual legal code in this matter is shown by the notice below:

NOTICE: Agreeably to an act of the General Assembly, passed January 11, 1845, I will, on the first Monday of May, 1846, before the Court House door, in the city of Louisville, sell to the highest bidder, on a credit of six months, the purchaser giving bond with good security, having the force and effect of a replevin bond, JOHN, a runaway slave, 18 or 19 years of age, 5 feet 3 or 4 inches high, a rather heavy built, supposed to be the property of Daniel McCaleb or Calip, residing on the coast some twenty miles below New Orleans.

Feb. 25, 1846. F. S. J. Ronald, Deputy Sheriff, for James Harrison, Sheriff, Jefferson Co.

Under the three causes of sale thus far cited the blame would not be placed upon the master. In the case of the unruly Negro the owner was, according to the ethics of that day, not at fault. In the settlement of an estate the slaveholder was no longer a factor, for his demise alone had brought the sale. In the case of the runaway the owner was unknown. Mrs. Stowe probably showed the attitude of the average Kentucky master when she pictured Uncle Tom as being sold for the southern market only because of the economic necessities of the owner. When in such a position the master felt called upon to explain the necessities of the case. He was very careful not to be cast under the suspicion of public opinion as a "slave trader," which, as Shaler has said, was the "last word of opprobrium." Witness a few instances in evidence:

NEGROES FOR SALE

A yellow negro woman of fine constitution, and two children, from the country, and sold for no fault but to raise money. Will not be sold to go down the river. Her husband, a fine man, can be had also. Apply at the store of Jarvis and Trabue—3rd & Main.

The editor of the Lexington Reporter was very careful not to get under the ban of his constituents when he was forced to sell a farm hand and his wife:

A negro man, a first rate farm hand, about 27 years of age; and a very likely woman, the wife of the man, about 22 years of age, a good house servant. They will not be sold separately, or to any person wishing to take them out of the State. Enquire at this office.

In 1834 Thomas J. Allen, a citizen of Louisville, desired to exchange his property in the city for 40 or 50 slaves, but he specifically stated that they were to be for his own use and that he wanted them to be "in families." The same attitude appears in the case of a house servant for sale, with the reasons specifically stated:

July 9, 1834. I wish to sell a negro woman, who has been accustomed to house work. She is an excellent cook, washes and scours, and is, in every respect, an active and intelligent servant. I do not require her services, which is my only reason for wishing to dispose of her.
The prevalence of statements giving the reasons for and the restrictions upon these sales should show beyond any reasonable doubt that public opinion would not tolerate any suspicion of a heartless traffic in slaves. These sentiments were especially prevalent in the central portion of the State. The only case known to the writer where a large number of slaves were sold without any qualification was near Harrodsburg in August, 1845; but in this instance all the man's property, including 450 acres of land, was sold at the same time.

There were, naturally, some unscrupulous masters who cared little for the fate of their slaves when sold. They placed no restrictions upon the sale, either in destination or in the break-up of family ties. We will cite only two, one for the earlier and one for the later period, noticeable chiefly for the lack of regard for Negro family life:

NEGROES FOR SALE

The subscriber has for sale a negro man and woman, each about 24 years of age, both are excellent plantation hands, together with two children. They will be sold separately or altogether.

I wish to sell a negro woman and four children. The woman is 22 years old, of good character, a good cook and washer. The children are very likely, from 6 years down to 1½. I will sell them together or separately to suit purchasers. J. T. Underwood.

The aggregate of all these causes was sufficient to bring about a supply for the southern market. The question now arises as to how the demand was met commercially. To what extent were there slave traders in Kentucky? George Prentice, the famous editor of the Louisville Journal, himself a loyal exponent of slavery, early pointed out that Kentucky had an ample supply of Negroes and that they were being sent south in large numbers. He further stated that any one who wanted slaves could always purchase them by leaving an order in Louisville. This opinion was expressed at a time when the non-importation act of 1833 had been in force for sixteen years, which meant that Kentucky was producing slaves faster than she needed them. It was only two months after this that Richard Henry Collins in an editorial in the Maysville Eagle gave a flagrant example of a slave trader in Kentucky who violated the spirit as well as the letter of the law.

But the sentiment of the people on the slave dealer had been expressed much earlier. In 1833 a Lexington editor felt exasperated because of the appearance of a large group of slaves in the streets of the city on their way to be sold south. When another trader appeared with his Negro slaves held together with a chain he voiced his wrath in this fashion: "A few weeks ago we gave an account of a company of men, women and children, part of them manacled, passing through the streets. Last week, a number of slaves were driven through the main street of our city, among them were a number manacled together, two abreast, all connected by, and supporting, a heavy iron chain, which extended the whole length of the line."

About the same time a citizen of Danville sold a Negro woman to a regular slave trader. The news spread around the town rapidly, and to save himself from the threats of the gathering mob the owner was compelled for his own safety to follow the slave dealer and repurchase the woman at a decided increase in price. It is very difficult to find out how many slave dealers there were in the State, for few of them ever came out in the open and advertised their trade.
As would be expected from its size and situation, Louisville was the place where the dealer could ply his trade to the best advantage. It was the central business point and the port from which most slaves from Kentucky were shipped down the Ohio and Mississippi. There is no mention in the newspapers of any dealers there before the year 1845. Thereafter there were several who advertised for any number of slaves and made no secret of the purpose of purchase. In the Journal for October 20, 1845, William Kelly called for all persons who had slaves to sell to see him and offered them the highest prices. He further stated that he had slaves for sale. His name does not often appear in succeeding years.

During the next decade there were four regular dealers who apparently did considerable business: T. Arterburn, J. Arterburn, William F. Talbott, and Thomas Powell. Later John Mattingly came upon the scene, presumably from St. Louis. In July, 1845, the Arterburn brothers began a series of advertisements which ran for several years: "We wish to purchase 100 negroes for the Southern market, for which we will pay the highest prices in cash." Talbott began his publicity in 1848 with these words: "The subscriber wishes to purchase 100 negroes, for which he will pay the highest cash prices. Can always be found at the Louisville Hotel." Two years later he was still advertising, but had ceased placing any limit on the number to be bought and had moved his quarters to the Hotel O'Kain. Thomas Powell also began in 1848 with this stock phraseology: "Persons having negroes for sale can find a purchaser at the highest cash prices by calling on the subscriber, on Sixth Street, between Main and Market, adjoining H. Duncan's stable." This advertisement ran continually for a period of two years. John Mattingly evidently came from Missouri in the same year, and remained until 1852, when he returned to St. Louis to ply his trade. While he was in Louisville he ran an advertisement in the Journal after this fashion: "The undersigned wishes to purchase 100 negroes, both men and women, for which he will pay the highest cash prices. Those who have negroes for sale would do well to call on him at the Galt House."

It is noticeable that none of the Louisville directories for this period mention any slave dealers. This failure may have been due merely to the fact that there were so few traders in the city and that they were more or less transient residents. On the other hand, public opinion apparently never acknowledged that there were any real citizens of the city engaged in the slave trade.

Beginning in 1840 the Louisville Journal published a weekly paper called the Louisville Prices Current. In 1855 this was succeeded by the Commercial Review and Louisville Prices Current, which was published by the Louisville Chamber of Commerce. These two papers devoted themselves exclusively to the commercial transactions of the city and gave price quotations weekly for every conceivable kind of goods in the market, together with the volume of sales. Strange to say, there has not been found a single issue of either of these papers which mentions the selling price of slaves or any transaction in Negroes. If there was a trade in slaves which was regarded purely as a commercial enterprise, as some would have us think, then it is very hard to understand why these splendid trade papers did not contain any account of the business.
There were some Louisville business men who bought and sold slaves as only one of the branches of their commercial activities. This would account to some extent for the failure to list traders in the local directories, for it is noticeable that such men never called themselves slave dealers. As early as the year 1825 John Stickney established the Louisville Intelligence Office on Main Street, which was a sort of labor and real estate exchange. He advertised that he sold books; had money to loan; houses for rent and sale; horses and Negroes for sale and hire; carriages for sale; conducted a labor exchange, and recommended the best boarding houses. A year later J. C. Gentry opened the "Western Horse Market" at the corner of Market and Fourth Streets. He advertised that he conducted a livery stable, and also sold on commission, at public or private sale, horses, carriages, cattle, wagons and slaves, and that he would conduct an auction on Wednesdays and Saturdays. A similar case was that of A. C. Scott, who in 1854 opened a real estate and land office but who stated in the press that he not only bought and sold land and rented houses but that he would sell and hire slaves. Consequently Scott was listed as a real estate and land agent in the local directories. It is impossible to determine how many of these occasional slave dealers there were, but in so far as available material shows these three were the only ones to announce their trade publicly.

It would appear from all the evidence at hand that while Kentucky furnished many slaves for the southern market, there was no general internal slave trade as a commercial enterprise. There were in Louisville, however, a few heartless business men who took advantage of the decreasing value of slave labor in Kentucky and the rising prices of slaves in the far South. In this respect, Kentucky became a field of supply for the slave markets of the lower South.

Unfortunately there are no statistics available by which the number of slaves sent south can be computed. The most comprehensive anti-slavery publication on the internal slave trade was unable to decide with certainty what proportion of slaves for the southern market was furnished by each of the so-called breeding States. The author of Slavery and Internal Slave Trade in the United States estimated that 80,000 slaves were annually exported from seven States to the South. He gave no figures that were not his own estimates. He ranked the seven States, however, in the order of the number of slaves which he thought they furnished, as follows: Virginia, Maryland, North Carolina, Kentucky, Tennessee, Missouri and Delaware. Martin estimates that Kentucky sent on the average about 5,000 slaves to the southern market. Again this must be considered purely conjectural.

It is reasonable to suppose that during the last two decades of the slavery era there were few slaves imported into Kentucky that were intended for the purely Kentucky market. What Negroes came into Kentucky were for the most part on their way to the more profitable southern trade. The average death rate among the slaves during this period was 1.9 per one hundred and the birth rate was 3.2, or an excess of births over deaths of 1.1 per hundred. This would make the annual natural increase among the slave population about 2,000 per year. Comparing this with the growth of the slave group from 1840 to 1850 we find that the increase of slaves was much more.
But it was during the next decade that the slave trade reached its height, and here we find that the slave population increased 14,302, whereas the natural increase during that period should have been 23,190. Hence the slaves failed to reach even their natural increase by a deficiency of 8,888. Taken literally that would mean that during the ten-year period that number of slaves were exported from Kentucky. But it is reasonable to suppose that many more than that were sent to the South. With the exception of the last decade, however, the slave population of Kentucky increased faster than the mere natural increase of the Negroes. The law would not permit of any importation of slaves intended for Kentucky, so the export of purely Kentucky slaves appears never to have been prominent except during the decade from 1850 to 1860.

The selling price of slaves naturally presents itself at this point. In Kentucky these records are very few because the tax books in practically all the counties of the State have been destroyed. We have no accurate statements extant before about the year 1855. The prices which we have obtained are quotations from the auction of slaves of estates to settle the interests of the heirs. On January court day, in 1855, there were sold in the settlement of estates in Bourbon, Fayette, Clark and Franklin Counties Negro men who brought $1,260, $1,175, $1,070, $1,378, $1,295, $1,015 and $1,505. The county commissioner of Harrison auctioned the slaves of the deceased George Kirkpatrick with the following prices received:

[Table: Prices received at the Kirkpatrick sale]

The county commissioner at Henderson received the following prices for slaves in the settlement of several estates on January 28, 1858:

[Table: Prices received at the Henderson sales of January 28, 1858]

This sale is most significant for the cases of "Delphy," 80 years old, and "Cupid," 85 years of age. It is difficult to account for such a sale in any discussion of the slave trade, but it does show the humanitarian side of Kentucky slavery. Negroes at such an age had no economic value even if they were given away, because the expense of their maintenance was more than the value of any possible labor they could perform. At Georgetown in December of the same year we have this record:

[Table: Prices received at the Georgetown sale, December 1858]

The auction of the slaves of the estate of Spencer C. Graves at Lexington in April, 1859, brought these prices:

[Table: Prices received at the Graves sale, April 1859]

There can be little doubt that the value of slaves was determined entirely by the increasing demand for slaves in the lower South and was in no way an indication of the value of slave labor within Kentucky. As was pointed out earlier in this chapter, the labor value of an agricultural slave in the State steadily decreased after about the year 1830.

Was slavery profitable to the Kentucky planters? In the many debates on the slavery question which took place after 1830 no one ever stood out in the affirmative. The only ones to discuss the economic side of the issue were those in opposition to slavery. As has often been said of the Kentucky situation, "the program was to use negroes to raise corn to feed hogs to feed negroes, who raised more corn to feed more hogs." Tobacco was the largest crop raised in the State and corn came next. Neither proved to be peculiarly adapted to slave labor. There were few large plantations in the State where it could be made advantageous. What Negro work there was to be done was never confined to any particular kind of cultivation but was used in the manner of farm labor today in the State. Squire Turner, of Madison County, in the Constitutional Convention of 1849 made a careful summary of the existing economic problems of slavery.
"There are," said he, "about $61,000,000 worth of slave property in the state, which produces less than three per cent profit on the capital invested, or about half as much as the moneyed capital would yield. There are about 200,000 slaves in Kentucky. Of these about seventy-five per cent are superannuated, sick, women in unfit condition for labor, and infants unable to work, who yield no profit. Show me a man that has forty or fifty slaves on his estate, and if there are ten out of that number who are available and valuable, it is as much as you can expect. But my calculation allows you to have seventy-five per cent who are barely able to maintain themselves, to pay for their own clothing, fuel, house room and doctor's bills. Is there any gentleman who has a large number of slaves, who will say that they are any more profitable than that?"

No one in the convention answered the last question put by Squire Turner. But regardless of such an economic condition, not a single piece of remedial legislation was passed, and the members of the Constitutional Convention added a provision to the Bill of Rights which rooted the slavery system more firmly than ever.

That most admirable of all southern characters, and at the same time the most difficult to understand, the Kentucky master, took little heed of a question of dollars and cents when it interfered with his moral and humanitarian sentiments. He had inherited, in most cases, the slaves that were his. He knew well enough that the system did not pay; but supposing that he should turn his slaves loose, what would become of them? What could they do for a living? The experience of later years proved that his apparently obstinate temperament was mixed with a good deal of wisdom, for once the slaves were set free their status was not to any great extent ameliorated if they went abroad from the plantation where they had lived from childhood. There was a certain amount of profit in the labor of able-bodied slaves, but they represented only a fraction of the Negroes whom the master was called upon to support. The law compelled the owner to maintain his old and helpless slaves, and this represented the spirit of the large majority of the slaveholders. Those were rare cases indeed when an owner was haled into court for failing to provide for an infirm member of his slave household. The true Kentuckian never begrudged the expense that such support incurred. One of the ablest lawyers of the State, Benjamin Hardin, made the statement that "if it were not for supporting my slaves, I would never go near a courthouse."

Rev. Stuart Robinson, speaking before the Kentucky Colonization Society in 1849, gave another viewpoint of the economic value of the slave. "The increase of slaves in Kentucky," said he, "has hardly reached three thousand annually for eighteen years past. The increase since 1840 has been 27,653—the increase for the year just closed 2,921. In twenty-six counties, embracing one fourth of the slave population—some of them the largest slave-holding counties—there has been an actual decrease in the last year of 881 slaves. In twelve other counties the increase has been only twenty-three. There are ten counties in the State, which contain one third of all the slave population of Kentucky; in these ten counties, the increase of slaves for five years past has been 2,728—an increase of less than one per cent per annum.
Nor is this slow increase of slavery to be attributed to any stagnation or decline of public prosperity, for in the meantime the state has been growing in population and wealth as heretofore. During these five years the taxable property of the Commonwealth has increased in value more than seventy-six millions. Now this decrease of slaves while the other property of the commonwealth is increasing must arise from one of three causes—and in either case the inference is the same as to the fate of slavery in Kentucky. (1) Is it because the climate is unhealthy to the African? If so, then African labor cannot continue. (2) Is it owing to emigration? Then something is wrong in the system of labor, that causes the emigration of our people—for no finer soil—no more desirable residence can be found in the world. (3) Or is it owing to the domestic slave trade? Then for some reason slave labor is less profitable here than elsewhere, and must soon be given up."

The figures quoted by the speaker on the slave population, year by year, are available in the auditor's tax books for the years 1840 to 1859: The very small growth shown here would barely account for the natural increase among the slaves by virtue of the high birth rate. The mortality rates were about the same for slaves as for whites. The relative decline was undoubtedly due to the rising prices for slaves which were sent to the South and the consequent decreasing value of a slave's labor to the Kentuckian. He knew beyond a doubt that the time would eventually come when he would have to part with his slave, and that portion of the holders who were not averse to selling their chattels did so during this period.
For centuries people have speculated about the possibility of life on Mars owing to the planet's proximity and similarity to Earth. Serious searches for evidence of life began in the 19th century and continue today via telescopic investigations and landed missions. While early work focused on phenomenology and bordered on fantasy, modern scientific inquiry has emphasized the search for water, chemical biosignatures in the soil and rocks at the planet's surface, and biomarker gases in the atmosphere.

Mars is of particular interest for the study of the origins of life because of its similarity to the early Earth. This is especially so because Mars has a cold climate and lacks plate tectonics or continental drift, and so has remained almost unchanged since the end of the Hesperian period. At least two thirds of Mars' surface is more than 3.5 billion years old, and Mars may thus hold the best record of the prebiotic conditions leading to abiogenesis, even if life does not or has never existed there. It remains an open question whether life currently exists on Mars or has existed there in the past, and fictional Martians have been a recurring feature of popular entertainment of the 20th and 21st centuries.

Mars' polar ice caps were observed as early as the mid-17th century, and they were first proven to grow and shrink alternately, in the summer and winter of each hemisphere, by William Herschel in the latter part of the 18th century. By the mid-19th century, astronomers knew that Mars had certain other similarities to Earth, for example that the length of a day on Mars was almost the same as a day on Earth. They also knew that its axial tilt was similar to Earth's, which meant it experienced seasons just as Earth does, though of nearly double the length owing to its much longer year. These observations fueled speculation that the darker albedo features were water and the brighter ones land, and it was therefore natural to suppose that Mars might be inhabited by some form of life.

In 1854, William Whewell, a fellow of Trinity College, Cambridge, who popularized the word scientist, theorized that Mars had seas, land and possibly life forms. Speculation about life on Mars exploded in the late 19th century, following telescopic observation by some observers of apparent Martian canals, which were later found to be optical illusions. Despite this, in 1895, American astronomer Percival Lowell published his book Mars, followed by Mars and its Canals in 1906, proposing that the canals were the work of a long-gone civilization. This idea led British writer H. G. Wells to write The War of the Worlds in 1897, telling of an invasion by aliens from Mars who were fleeing the planet's desiccation. Spectroscopic analysis of Mars' atmosphere began in earnest in 1894, when U.S. astronomer William Wallace Campbell showed that neither water nor oxygen were present in the Martian atmosphere. By 1909, better telescopes and the best perihelic opposition of Mars since 1877 conclusively put an end to the canal hypothesis.

Chemical, physical, geological and geographic attributes shape the environments on Mars. Isolated measurements of these factors may be insufficient to deem an environment habitable, but the sum of measurements can help predict locations with greater or lesser habitability potential.
The two current ecological approaches for predicting the potential habitability of the Martian surface use 19 or 20 environmental factors, with emphasis on water availability, temperature, presence of nutrients, an energy source, and protection from solar ultraviolet and galactic cosmic radiation. Scientists do not know the minimum number of parameters needed to determine habitability potential, but they are certain it is greater than one or two of the factors in the list below. Similarly, the habitability threshold for each group of parameters remains to be determined. Laboratory simulations show that whenever multiple lethal factors are combined, survival rates plummet quickly (a toy sketch of this multiplicative effect appears at the end of this passage). No full-Mars simulation including all of the biocidal factors combined has yet been published.

· Temperature
· Extreme diurnal temperature fluctuations
· Low pressure (Is there a low-pressure threshold for terrestrial anaerobes?)
· Strong ultraviolet germicidal irradiation
· Galactic cosmic radiation and solar particle events (long-term accumulated effects)
· Solar UV-induced volatile oxidants, e.g., O2−, O−, H2O2, O3
· Climate/variability (geography, seasons, diurnal, and eventually, obliquity variations)
· Substrate (soil processes, rock microenvironments, dust composition, shielding)
· High CO2 concentrations in the global atmosphere
· Transport (aeolian, ground water flow, surface water, glacial)

The loss of the Martian magnetic field strongly affected surface environments through atmospheric loss and increased radiation; this change significantly degraded surface habitability. When there was a magnetic field, the atmosphere would have been protected from erosion by the solar wind, which would have ensured the maintenance of a dense atmosphere, necessary for liquid water to exist on the surface of Mars.

Soil and rock samples studied in 2013 by NASA's Curiosity rover's onboard instruments yielded additional information on several habitability factors. The rover team identified some of the key chemical ingredients for life in this soil, including sulfur, nitrogen, hydrogen, oxygen, phosphorus and possibly carbon, as well as clay minerals, suggesting a long-ago aqueous environment, perhaps a lake or an ancient streambed, that was neutral and not too salty. On December 9, 2013, NASA reported that, based on evidence from Curiosity studying Aeolis Palus, Gale Crater contained an ancient freshwater lake which could have been a hospitable environment for microbial life.

The confirmation that liquid water once flowed on Mars, the existence of nutrients, and the previous discovery of a past magnetic field that protected the planet from cosmic and solar radiation together strongly suggest that Mars could have had the environmental factors to support life. However, the assessment of past habitability is not in itself evidence that Martian life has ever actually existed. If it did, it was probably microbial, existing communally in fluids or on sediments, either free-living or as biofilms, respectively. No definitive evidence of biosignatures or organics of Martian origin has been identified, and assessment will continue not only through the Martian seasons, but also back in time as the Curiosity rover studies what is recorded in the depositional history of the rocks in Gale Crater.
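To make the quoted point about combined stressors concrete, here is a toy model of why several moderate lethal factors are far deadlier together than any one of them alone. The factor names and per-sol survival fractions below are invented for the example, not measured values; the only idea being illustrated is that independent survival probabilities multiply.

```python
# Toy model: independent per-sol survival fractions multiply.
# All values are illustrative assumptions, not laboratory data.
survival_per_sol = {
    "uv_irradiation": 0.20,       # assumed fraction of cells surviving one sol of UV
    "desiccation": 0.80,
    "oxidants": 0.90,
    "freeze_thaw_cycling": 0.85,
}

combined = 1.0
for factor, fraction in survival_per_sol.items():
    combined *= fraction

print(f"harshest single factor: {min(survival_per_sol.values()):.2f} per sol")
print(f"all factors combined:   {combined:.3f} per sol")  # ~0.122, well below any single factor
```

Even though the milder factors each allow most cells to survive a sol, the product falls well below the harshest single factor, which is the qualitative behavior the laboratory simulations report.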
Although Martian soils are likely not overtly toxic to terrestrial microorganisms, life on the surface of Mars is extremely unlikely because the surface is bathed in radiation and completely frozen. Therefore, the best potential locations for discovering life on Mars may be in subsurface environments that have not yet been studied. The extensive volcanism in the past possibly created subsurface cracks and caves within different strata where liquid water could have been stored, forming large aquifers with deposits of saline liquid water, minerals, organic molecules, and geothermal heat, potentially providing a habitable environment away from the harsh surface conditions.

Although liquid water does not appear at the surface of Mars, several modeling studies suggest that thin films of salty brine or perchlorate solution may form near the surface, and that these could provide a niche for terrestrial salt- and cold-loving microorganisms (halophiles and psychrophiles). Various salts present in the Martian soil may act as an antifreeze and could keep water liquid well below its normal freezing point, if water were present at certain favorable locations; a simple colligative sketch of this effect follows at the end of this passage. Astrobiologists are keen to find out more, as little is known about these brines at present. The briny water may or may not be habitable to microbes from Earth or Mars. Another researcher argues that although chemically important, thin films of transient liquid water are unlikely to provide suitable sites for life. In addition, an astrobiology team has asserted that the water activity of salty films, the temperature, or both are below biological thresholds across the entire Martian surface and shallow subsurface.

The damaging effect of ionizing radiation on cellular structure is one of the prime limiting factors on the survival of life in potential astrobiological habitats. Even at a depth of 2 meters beneath the surface, any microbes would probably be dormant, cryopreserved by the current freezing conditions, and so metabolically inactive and unable to repair cellular degradation as it occurs. Solar ultraviolet (UV) radiation has also proved particularly devastating for the survival of cold-resistant microbes under simulated Martian surface conditions, as UV radiation readily penetrated the salt-organic matrix in which the bacterial cells were embedded. In addition, NASA's Mars Exploration Program states that life on the surface of Mars is unlikely, given the presence of superoxides that break down the organic (carbon-based) molecules on which life is based.

In 1965, the Mariner 4 probe discovered that Mars had no global magnetic field to protect the planet from potentially life-threatening cosmic and solar radiation; observations made in the late 1990s by the Mars Global Surveyor confirmed this discovery. Scientists speculate that the lack of magnetic shielding helped the solar wind blow away much of Mars' atmosphere over the course of several billion years. As a result, the planet has been vulnerable to radiation from space for about 4 billion years. Currently, ionizing radiation on Mars is typically two orders of magnitude (about 100 times) higher than on Earth. Even the hardiest cells known could not possibly survive the cosmic radiation near the surface of Mars for that long.
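As a minimal sketch of the antifreeze effect mentioned above: for dilute, ideal solutions the freezing-point depression is ΔTf = i · Kf · m. Real Martian perchlorate brines are far from ideal and reach eutectic temperatures well below what this formula predicts (around −70 °C for magnesium perchlorate), so the code below, with its assumed molality and van 't Hoff factor, only illustrates the direction and rough scale of the effect.

```python
# Ideal-solution freezing-point depression: delta_Tf = i * Kf * m.
# Kf is water's cryoscopic constant; the molality and van 't Hoff
# factor below are assumptions chosen for illustration only.
KF_WATER = 1.86  # K * kg / mol

def freezing_point_celsius(molality_mol_per_kg: float, vant_hoff_i: float) -> float:
    """Freezing point (deg C) of an aqueous solution under the ideal colligative law."""
    return 0.0 - vant_hoff_i * KF_WATER * molality_mol_per_kg

# A hypothetical 3 mol/kg magnesium perchlorate brine dissociating into ~3 ions:
print(f"{freezing_point_celsius(3.0, 3.0):.1f} °C")  # about -16.7 °C
```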
After mapping cosmic radiation levels at various depths on Mars, researchers have concluded that any life within the first several meters of the planet's surface would be killed by lethal doses of cosmic radiation. The team calculated that the cumulative damage to DNA and RNA by cosmic radiation would limit retrieving viable dormant cells on Mars to depths greater than 7.5 metres below the planet's surface. Even the most radiation-tolerant terrestrial bacteria would survive in a dormant spore state for only 18,000 years at the surface; at 2 meters, the greatest depth the ExoMars drill will be able to reach, survival time would be 90,000 years to half a million years, depending on the type of rock. (A back-of-the-envelope sketch of this dose-accumulation argument follows at the end of this passage.)

The Radiation Assessment Detector (RAD) on board the Curiosity rover is currently quantifying the flux of biologically hazardous radiation at the surface of Mars, and will help determine how these fluxes vary on diurnal, seasonal, solar-cycle and episodic (flare, storm) timescales. These measurements will allow calculation of the depth in rock or soil at which this flux, integrated over long timescales, delivers a lethal dose to known terrestrial organisms. Research published in January 2014 on data collected by the RAD instrument revealed that the absorbed dose measured at the surface is 76 mGy/year, and that "ionizing radiation strongly influences chemical compositions and structures, especially for water, salts, and redox-sensitive components such as organic matter." Regardless of the source of Martian organic matter (meteoritic, geological, or biological), its carbon bonds are susceptible to breaking and reconfiguring with surrounding elements under ionizing charged-particle radiation. These improved subsurface radiation estimates give insight into the potential for the preservation of possible organic biosignatures as a function of depth, as well as the survival times of possible microbial or bacterial life forms left dormant beneath the surface. The report concludes that the in situ "surface measurements —and subsurface estimates— constrain the preservation window for Martian organic matter following exhumation and exposure to ionizing radiation in the top few meters of the Martian surface."

After carbon, nitrogen is arguably the most important element needed for life. Thus, measurements of nitrate over the range of 0.1% to 5% are required to address the question of its occurrence and distribution. There is nitrogen (as N2) in the atmosphere at low levels, but this is not adequate to support nitrogen fixation for biological incorporation. Nitrogen in the form of nitrate, if present, could be a resource for human exploration, both as a nutrient for plant growth and for use in chemical processes. On Earth, nitrates correlate with perchlorates in desert environments, and this may also be true on Mars. Nitrate is expected to be stable on Mars and to have formed in shock and electrical processes. Currently there are no data on its availability.

Further complicating estimates of the habitability of the Martian surface is the fact that very little is known about the growth of microorganisms at pressures close to those found on the surface of Mars. Some teams have determined that some bacteria may be capable of cellular replication down to 25 mbar, but that is still above the atmospheric pressures found on Mars (range 1–14 mbar).
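The survival-time figures above can be reproduced to order of magnitude with a crude dose-accumulation model. The sketch below is not the published calculation (the cited studies used full particle-transport simulations); the lethal dose, the exponential shielding law, and the e-folding depth are all illustrative assumptions, chosen only to show the shape of the argument: a dormant cell accumulates dose at a depth-dependent rate, and its survivable time is the lethal dose divided by that rate.

```python
import math

# All numbers below are illustrative assumptions, except the surface dose
# rate, which is the ~76 mGy/yr figure reported by Curiosity's RAD.
SURFACE_DOSE_GY_PER_YEAR = 0.076
ATTENUATION_DEPTH_M = 1.0        # assumed e-folding depth of regolith shielding
LETHAL_DOSE_GY = 1.5e3           # assumed sterilizing dose for a very radioresistant microbe

def dose_rate_gy_per_year(depth_m: float) -> float:
    """Dose rate at depth under the assumed exponential shielding law."""
    return SURFACE_DOSE_GY_PER_YEAR * math.exp(-depth_m / ATTENUATION_DEPTH_M)

def dormant_survival_years(depth_m: float) -> float:
    """Years for a dormant, non-repairing cell to accumulate the lethal dose."""
    return LETHAL_DOSE_GY / dose_rate_gy_per_year(depth_m)

for depth in (0.0, 0.3, 2.0, 7.5):
    print(f"{depth:4.1f} m: ~{dormant_survival_years(depth):,.0f} years")
```

With these assumptions the surface figure lands near the quoted 18,000-year scale; the published results differ in detail because cosmic-ray dose does not fall off as a simple exponential in the top meters (secondary particles can even raise it slightly before shielding wins).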
In another study, twenty-six strains of bacteria were chosen based on their recovery from spacecraft assembly facilities, and only Serratia liquefaciens strain ATCC 27592 exhibited growth at 7 mbar, 0 °C, and a CO2-enriched anoxic atmosphere.

A series of artist's conceptions of past water coverage on Mars.

Liquid water, necessary for life as we know it, cannot exist on the surface of Mars except at the lowest elevations, and then only for minutes or hours. Liquid water does not appear at the surface itself, but it could form in minuscule amounts around dust particles in snow heated by the Sun. Also, the ancient equatorial ice sheets beneath the ground may slowly sublimate or melt, accessible from the surface via caves. Water on Mars exists almost exclusively as water ice, located in the Martian polar ice caps and under the shallow Martian surface, even at more temperate latitudes. A small amount of water vapor is present in the atmosphere. There are no bodies of liquid water on the Martian surface because the atmospheric pressure at the surface averages 600 pascals (0.087 psi), about 0.6% of Earth's mean sea level pressure, and because the temperature is far too low (210 K, or −63 °C), leading to immediate freezing; a minimal pressure-and-temperature check of this claim follows at the end of this passage. Despite this, about 3.8 billion years ago there was a denser atmosphere and higher temperatures, and vast amounts of liquid water flowed on the surface, including large oceans. It has been estimated that the primordial oceans on Mars would have covered between 36% and 75% of the planet. Analysis of Martian sandstones, using data obtained from orbital spectrometry, suggests that the waters that previously existed on the surface of Mars would have had too high a salinity to support most Earth-like life. Tosca et al. found that the Martian water in the locations they studied all had a water activity of aw ≤ 0.78 to 0.86, a level fatal to most terrestrial life. Haloarchaea, however, are able to live in hypersaline solutions, up to the saturation point.

In June 2000, possible evidence for current liquid water flowing at the surface of Mars was discovered in the form of flood-like gullies. Additional similar images, taken by the Mars Global Surveyor, were published in 2006, suggesting that water occasionally flows on the surface of Mars. The images did not actually show flowing water; rather, they showed changes in steep crater walls and sediment deposits, providing the strongest evidence yet that water coursed through them as recently as several years earlier. There is disagreement in the scientific community as to whether or not the recent gully streaks were formed by liquid water. Some suggest the flows were merely dry sand flows; others suggest it may be liquid brine near the surface, but the exact source of the water and the mechanism behind its motion are not understood.

In May 2007, the Spirit rover disturbed a patch of ground with its inoperative wheel, uncovering an area extremely rich in silica (90%). The feature is reminiscent of the effect of hot spring water or steam coming into contact with volcanic rocks. Scientists consider this as evidence of a past environment that may have been favorable for microbial life, and theorize that the silica may have been produced by the interaction of soil with acid vapors produced by volcanic activity in the presence of water.

Trace amounts of methane in the atmosphere of Mars were discovered in 2003 and verified in 2004.
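The pressure-and-temperature argument above compresses into a two-condition check. This is a deliberately minimal sketch: it tests only whether the ambient pressure exceeds water's triple-point pressure and whether the temperature is above the melting point, and it ignores the boiling-point ceiling and the brine chemistry discussed earlier.

```python
# Bulk liquid water requires (at minimum) pressure above water's triple point
# and temperature above the melting point. Constants are standard values;
# the two-condition pass/fail logic is a simplification for illustration.
TRIPLE_POINT_PA = 611.7
MELTING_POINT_K = 273.15

def bulk_liquid_water_possible(pressure_pa: float, temperature_k: float) -> bool:
    return pressure_pa > TRIPLE_POINT_PA and temperature_k > MELTING_POINT_K

print(bulk_liquid_water_possible(600.0, 210.0))      # mean Mars surface -> False
print(bulk_liquid_water_possible(101_325.0, 288.0))  # Earth sea level -> True
```

Mars fails both conditions at its mean surface values (600 Pa, 210 K), which is why the statement above is about averages; low-elevation, warm-season spots can briefly clear both thresholds, matching the "minutes or hours at the lowest elevations" caveat.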
As methane is an unstable gas, its presence indicates that there must be an active source on the planet to maintain such levels in the atmosphere. It is estimated that Mars must produce 270 tons of methane per year, but asteroid impacts account for only 0.8% of the total methane production. Although geologic sources of methane such as serpentinization are possible, the lack of current volcanism, hydrothermal activity or hotspots does not favor a geologic origin. It has been suggested that the methane was produced by chemical reactions in meteorites, driven by the intense heat during entry through the atmosphere. Although research published in December 2009 ruled out this possibility, research published in 2012 suggests that a source may be organic compounds on meteorites that are converted to methane by ultraviolet radiation.

Distribution of methane in the atmosphere of Mars in the Northern Hemisphere during summer.

The existence of life in the form of microorganisms such as methanogens is among the possible, but as yet unproven, sources. If microscopic Martian life is producing the methane, it probably resides far below the surface, where it is still warm enough for liquid water to exist. Since the 2003 discovery of methane in the atmosphere, some scientists have been designing models and in vitro experiments testing the growth of methanogenic bacteria on simulated Martian soil, in which all four methanogen strains tested produced substantial levels of methane, even in the presence of 1.0 wt% perchlorate salt. The results reported indicate that the perchlorates discovered by the Phoenix lander would not rule out the possible presence of methanogens on Mars. A team led by Levin suggested that both phenomena—methane production and degradation—could be accounted for by an ecology of methane-producing and methane-consuming microorganisms.

In June 2012, scientists reported that measuring the ratio of hydrogen to methane levels on Mars may help determine the likelihood of life on Mars. According to the scientists, "...low H2/CH4 ratios (less than approximately 40) indicate that life is likely present and active" (a toy version of this decision rule appears at the end of this passage). Other scientists have recently reported methods of detecting hydrogen and methane in extraterrestrial atmospheres.

In contrast to the findings described above, studies by Kevin Zahnle, a planetary scientist at NASA's Ames Research Center, and two colleagues conclude that "there is as yet no compelling evidence for methane on Mars". They argue that the strongest reported observations of the gas to date have been taken at frequencies where interference from methane in Earth's atmosphere is particularly difficult to remove, and are thus unreliable. Additionally, they claim that the published observations most favorable to interpretation as indicative of Martian methane are also consistent with no methane being present on Mars.

The Curiosity rover, which landed on Mars in August 2012, is able to make measurements that distinguish between different isotopologues of methane; but even if the mission determines that microscopic Martian life is the seasonal source of the methane, the life forms probably reside far below the surface, outside the rover's reach. The first measurements with the Tunable Laser Spectrometer (TLS) on the Curiosity rover indicated that there was less than 5 ppb of methane at the landing site at the time of the measurement.
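The quoted H2/CH4 decision rule is simple enough to state as code. The threshold (~40) is the one proposed in the cited 2012 report; the function name and the sample mixing ratios are illustrative assumptions, and a real analysis would propagate measurement uncertainties rather than return a bare boolean.

```python
# Toy restatement of the proposed biosignature rule: an H2/CH4 mixing-ratio
# quotient below ~40 is taken as suggestive of active methanogenic life.
# The ~40 threshold comes from the cited report; everything else here is
# an illustrative assumption.

def h2_ch4_suggests_life(h2_ppb: float, ch4_ppb: float, threshold: float = 40.0) -> bool:
    """True if the H2/CH4 ratio falls below the proposed threshold."""
    if ch4_ppb <= 0:
        raise ValueError("no methane detected; the ratio test does not apply")
    return (h2_ppb / ch4_ppb) < threshold

print(h2_ch4_suggests_life(h2_ppb=300.0, ch4_ppb=10.0))  # ratio 30 -> True (hypothetical values)
```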
On July 19, 2013, NASA scientists published the results of a new analysis of the atmosphere of Mars, reporting a lack of methane around the landing site of the Curiosity rover. On September 19, 2013, NASA again reported no detection of atmospheric methane, with a measured value of 0.18 ± 0.67 ppbv corresponding to an upper limit of only 1.3 ppbv (a 95% confidence limit; treating the quoted uncertainty as a one-sided Gaussian bound, 0.18 + 1.645 × 0.67 ≈ 1.3), and, as a result, concluded that the probability of current methanogenic microbial activity on Mars is reduced. India's Mars Orbiter Mission, launched on November 5, 2013, will search for methane in the atmosphere of Mars using its Methane Sensor for Mars (MSM); the orbiter is scheduled to arrive at Mars on September 24, 2014. The Mars Trace Gas Mission orbiter, planned for launch in 2016, would further study the methane, if present, as well as its decomposition products such as formaldehyde and methanol.

In February 2005, it was announced that the Planetary Fourier Spectrometer (PFS) on the European Space Agency's Mars Express orbiter had detected traces of formaldehyde in the atmosphere of Mars. Vittorio Formisano, the director of the PFS, has speculated that the formaldehyde could be the byproduct of the oxidation of methane and, according to him, would provide evidence that Mars is either extremely geologically active or harbouring colonies of microbial life. NASA scientists consider the preliminary findings well worth a follow-up, but have also rejected the claims of life.

NASA maintains a catalog of 34 Mars meteorites. These assets are highly valuable, since they are the only physical samples of Mars available. Studies conducted by NASA's Johnson Space Center show that at least three of the meteorites contain potential evidence of past life on Mars, in the form of microscopic structures resembling fossilized bacteria (so-called biomorphs). Although the scientific evidence collected is reliable, its interpretation varies. To date, none of the original lines of scientific evidence for the hypothesis that the biomorphs are of exobiological origin (the so-called biogenic hypothesis) have been either discredited or positively ascribed to non-biological explanations.

Over the past few decades, seven criteria have been established for the recognition of past life within terrestrial geologic samples. Those criteria are:
(1) Is the geologic context of the sample compatible with past life?
(2) Is the age of the sample and its stratigraphic location compatible with possible life?
(3) Does the sample contain evidence of cellular morphology and colonies?
(4) Is there any evidence of biominerals showing chemical or mineral disequilibria?
(5) Is there any evidence of stable isotope patterns unique to biology?
(6) Are there any organic biomarkers present?
(7) Are the features indigenous to the sample?
For general acceptance of past life in a geologic sample, essentially most or all of these criteria must be met. No Martian sample has yet met all seven criteria, but investigations are continuing. As of 2010, reexaminations of the biomorphs found in the three Martian meteorites are underway with more advanced analytical instruments than previously available.

An electron microscope reveals bacteria-like structures in meteorite fragment ALH84001.

The ALH84001 meteorite was found in December 1984 in Antarctica by members of the ANSMET project; it weighs 1.93 kilograms (4.3 lb). The sample was ejected from Mars about 17 million years ago and spent 11,000 years in or on the Antarctic ice sheets.
Composition analysis by NASA revealed a kind of magnetite that, on Earth, is found only in association with certain microorganisms. Then, in August 2002, another NASA team led by Thomas-Keprta published a study indicating that 25% of the magnetite in ALH 84001 occurs as small, uniform-sized crystals that, on Earth, are associated only with biologic activity, and that the remainder of the material appears to be normal inorganic magnetite. The extraction technique did not permit determination as to whether the possibly biological magnetite was organized into chains, as would be expected. The meteorite displays indications of relatively low-temperature secondary mineralization by water and shows evidence of preterrestrial aqueous alteration. Evidence of polycyclic aromatic hydrocarbons (PAHs) has been identified, with the levels increasing away from the surface. Some structures resembling the mineralized casts of terrestrial bacteria and their appendages (fibrils) or by-products (extracellular polymeric substances) occur in the rims of carbonate globules and preterrestrial aqueous alteration regions. The size and shape of the objects is consistent with Earthly fossilized nanobacteria, but the existence of nanobacteria itself is controversial. In November 2009, NASA scientists reported, after more detailed analyses, that a biogenic explanation is a more viable hypothesis for the origin of the magnetites in the meteorite.

In 1998, a team from NASA's Johnson Space Center obtained a small sample of the Nakhla meteorite for analysis. Researchers found preterrestrial aqueous alteration phases and objects of a size and shape consistent with Earthly fossilized nanobacteria, but the existence of nanobacteria itself is controversial. Analysis with gas chromatography and mass spectrometry (GC-MS) studied its high-molecular-weight polycyclic aromatic hydrocarbons in 2000, and NASA scientists concluded that as much as 75% of the organic matter in Nakhla "may not be recent terrestrial contamination". This caused additional interest in this meteorite, so in 2006 NASA managed to obtain an additional and larger sample from the London Natural History Museum. In this second sample, a large dendritic carbon content was observed. When the results and evidence were published in 2006, some independent researchers claimed that the carbon deposits are of biologic origin. However, it was remarked that since carbon is the fourth most abundant element in the Universe, finding it in curious patterns is not indicative or suggestive of biological origin.

The Shergotty meteorite, a 4 kg Martian meteorite, fell to Earth at Shergotty, India, on August 25, 1865, and was retrieved by witnesses almost immediately. This meteorite is relatively young, calculated to have formed on Mars only 165 million years ago, and is of volcanic origin. It is composed mostly of pyroxene and is thought to have undergone preterrestrial aqueous alteration for several centuries. Certain features in its interior suggest remnants of a biofilm and its associated microbial communities. Work is in progress on searching for magnetites within alteration phases.

Geysers on Mars

Artist's concept showing sand-laden jets erupting from geysers on Mars. Close-up of dark dune spots, probably created by cold geyser-like eruptions.

The seasonal frosting and defrosting of the southern ice cap results in the formation of spider-like radial channels carved in 1-meter-thick ice by sunlight.
Then sublimed CO2, and probably water, increase the pressure in the channels' interior, producing geyser-like eruptions of cold fluids often mixed with dark basaltic sand or mud. This process is rapid, observed to happen in the space of a few days, weeks or months, a growth rate rather unusual in geology, especially for Mars.

A team of Hungarian scientists proposes that the geysers' most visible features, dark dune spots and spider channels, may be colonies of photosynthetic Martian microorganisms which over-winter beneath the ice cap; as sunlight returns to the pole during early spring, light penetrates the ice, the microorganisms photosynthesize, and they heat their immediate surroundings. A pocket of liquid water, which would normally evaporate instantly in the thin Martian atmosphere, is trapped around them by the overlying ice. As this ice layer thins, the microorganisms show through as grey; when the layer has melted completely, they rapidly desiccate and turn black, surrounded by a grey aureole. The Hungarian scientists believe that even a complex sublimation process is insufficient to explain the formation and evolution of the dark dune spots in space and time. Since their discovery, fiction writer Arthur C. Clarke promoted these formations as deserving of study from an astrobiological perspective.

A multinational European team suggests that if liquid water is present in the spiders' channels during their annual defrost cycle, they might provide a niche where certain microscopic life forms could have retreated and adapted while sheltered from solar radiation. A British team also considers the possibility that organic matter, microbes, or even simple plants might co-exist with these inorganic formations, especially if the mechanism includes liquid water and a geothermal energy source. However, they also remark that the majority of geological structures may be accounted for without invoking any organic "life on Mars" hypothesis. It has been proposed to develop the Mars Geyser Hopper lander to study the geysers up close.

Planetary protection of Mars aims to prevent biological contamination of the planet. A major goal is to preserve the planetary record of natural processes by preventing human-caused microbial introductions, also called forward contamination. There is abundant evidence as to what can happen when organisms from regions on Earth that have been isolated from one another for significant periods of time are introduced into each other's environment. Species that are constrained in one environment can thrive, often out of control, in another environment, much to the detriment of the original species present. In some ways this problem could be compounded if life forms from one planet were introduced into the totally alien ecology of another world.

The prime concern regarding hardware contaminating Mars derives from incomplete spacecraft sterilization of some hardy terrestrial bacteria (extremophiles), despite best efforts. Hardware includes landers, crashed probes, end-of-mission disposal of hardware, and hard landings of entry, descent, and landing systems. This has prompted research on the survival rates of radiation-resistant microorganisms, including the genera Brevundimonas, Rhodococcus and Pseudomonas as well as Deinococcus radiodurans, under simulated Martian conditions. Results from one of these irradiation experiments, combined with previous radiation modeling, indicate that Brevundimonas sp.
MV.7 emplaced only 30 cm deep in Martian dust could survive the cosmic radiation for up to 100,000 years before suffering a 10⁶-fold population reduction. Surprisingly, diurnal Mars-like cycles in temperature and relative humidity affected the viability of Deinococcus radiodurans cells quite severely. In other simulations, Deinococcus radiodurans also failed to grow under low atmospheric pressure, below 0 °C, or in the absence of oxygen.

The Mariner 4 probe performed the first successful flyby of the planet Mars, returning the first pictures of the Martian surface in 1965. The photographs showed an arid Mars without rivers, oceans, or any signs of life. Further, they revealed that the surface (at least the parts photographed) was covered in craters, indicating a lack of plate tectonics and weathering of any kind for the last 4 billion years. The probe also found that Mars has no global magnetic field that would protect the planet from potentially life-threatening cosmic rays. The probe was able to calculate the atmospheric pressure on the planet to be about 0.6 kPa (compared with Earth's 101.3 kPa), meaning that liquid water could not exist on the planet's surface. After Mariner 4, the search for life on Mars changed to a search for bacteria-like living organisms rather than for multicellular organisms, as the environment was clearly too harsh for the latter.

Liquid water is necessary for known life and metabolism, so whether water was present on Mars bears directly on whether the planet could ever have supported life. The Viking orbiters found evidence of possible river valleys in many areas, erosion and, in the southern hemisphere, branched streams.

Carl Sagan poses next to a replica of a Viking lander.

The primary mission of the Viking probes of the mid-1970s was to carry out experiments designed to detect microorganisms in Martian soil, because the favorable conditions for the evolution of multicellular organisms had ceased some four billion years earlier on Mars. The tests were formulated to look for microbial life similar to that found on Earth. Of the four experiments, only the Labeled Release (LR) experiment returned a positive result, showing increased 14CO2 production on first exposure of soil to water and nutrients. All scientists agree on two points from the Viking missions: that radiolabeled 14CO2 was evolved in the Labeled Release experiment, and that the GCMS detected no organic molecules. However, there are vastly different interpretations of what those results imply. A 2011 astrobiology textbook notes that the GCMS was the decisive factor due to which "For most of the Viking scientists, the final conclusion was that the Viking missions failed to detect life in the Martian soil."

One of the designers of the Labeled Release experiment, Gilbert Levin, believes his results are a definitive diagnostic for life on Mars. Levin's interpretation is disputed by many scientists. A 2006 astrobiology textbook noted that "With unsterilized Terrestrial samples, though, the addition of more nutrients after the initial incubation would then produce still more radioactive gas as the dormant bacteria sprang into action to consume the new dose of food. This was not true of the Martian soil; on Mars, the second and third nutrient injections did not produce any further release of labeled gas." Other scientists argue that superoxides in the soil could have produced this effect without life being present.
An almost general consensus discarded the Labeled Release data as evidence of life, because the gas chromatograph–mass spectrometer, designed to identify natural organic matter, did not detect organic molecules. The results of the Viking mission concerning life are considered by the general expert community, at best, inconclusive.

In 2007, during a seminar of the Geophysical Laboratory of the Carnegie Institution (Washington, D.C.), Gilbert Levin's investigation was assessed once more. Levin still maintains that his original data were correct, as the positive and negative control experiments were in order. Moreover, on 12 April 2012 Levin's team reported a statistical speculation, based on old data from the Labeled Release experiments reinterpreted mathematically through cluster analysis, that may suggest evidence of "extant microbial life on Mars." Critics counter that the method has not yet been proven effective for differentiating between biological and non-biological processes on Earth, so it is premature to draw any conclusions.

A research team from the National Autonomous University of Mexico, headed by Rafael Navarro-González, concluded that the GCMS equipment (TV-GC-MS) used by the Viking program to search for organic molecules may not be sensitive enough to detect low levels of organics. Klaus Biemann, the principal investigator of the GCMS experiment on Viking, wrote a rebuttal. Because of the simplicity of sample handling, TV-GC-MS is still considered the standard method for organic detection on future Mars missions, so Navarro-González suggests that the design of future organic instruments for Mars should include other methods of detection. After the discovery of perchlorates on Mars by the Phoenix lander, practically the same Navarro-González team published a paper arguing that the Viking GCMS results were compromised by the presence of perchlorates. A 2011 astrobiology textbook notes that "while perchlorate is too poor an oxidizer to reproduce the LR results (under the conditions of that experiment perchlorate does not oxidize organics), it does oxidize, and thus destroy, organics at the higher temperatures used in the Viking GCMS experiment." Biemann has written a commentary critical of this Navarro-González paper as well, to which the latter replied; the exchange was published in December 2011.

The claim for life on Mars, in the form of Gillevinia straata, is based on old data reinterpreted as sufficient evidence of life, mainly by Gilbert Levin. The evidence supporting the existence of Gillevinia straata microorganisms relies on the data collected by the two Viking landers that searched for biosignatures of life, but the analytical results were, officially, inconclusive. Under the proposal, the hypothetical Gillevinia straata would not be a bacterium (a terrestrial taxon), but a member of the kingdom 'Jakobia' in the biosphere 'Marciana' of the 'Solaria' system. The intended effect of the new nomenclature was to reverse the burden of proof concerning the life issue, but the taxonomy proposed by Crocco has not been accepted by the scientific community, and the name is considered a nomen nudum. Further, no Mars mission has found traces of biomolecules.

The Phoenix mission landed a robotic spacecraft in the polar region of Mars on May 25, 2008, and it operated until November 10, 2008.
One of the mission's two primary objectives was to search for a "habitable zone" in the Martian regolith where microbial life could exist; the other main goal was to study the geological history of water on Mars. The lander had a 2.5-meter robotic arm capable of digging shallow trenches in the regolith. There was an electrochemistry experiment which analysed the ions in the regolith and the amount and type of antioxidants on Mars. The Viking program data indicate that oxidants on Mars may vary with latitude, Viking 2 having seen fewer oxidants than Viking 1 in its more northerly position; Phoenix landed further north still. Phoenix's preliminary data revealed that Martian soil contains perchlorate, and thus may not be as life-friendly as thought earlier. The pH and salinity level were viewed as benign from the standpoint of biology. The analysers also indicated the presence of bound water and CO2.

ExoMars is a European-led multi-spacecraft programme currently under development by the European Space Agency (ESA) and the Russian Federal Space Agency for launch in 2016 and 2018. Its primary scientific mission will be to search for possible biosignatures on Mars, past or present. A rover with a 2-metre (6.6 ft) core drill will be used to sample various depths beneath the surface, where liquid water may be found and where microorganisms might survive cosmic radiation.

Mars Sample Return Mission

The best life-detection experiment proposed is the examination on Earth of a soil sample from Mars. However, the difficulty of providing and maintaining life support over the months of transit from Mars to Earth remains to be solved. Providing for still unknown environmental and nutritional requirements is daunting. Should dead organisms be found in a sample, it would be difficult to conclude that those organisms were alive when obtained.

References

Wallace, Alfred Russel (1907). Is Mars Habitable?: A Critical Examination of Professor Percival Lowell's Book 'Mars and its Canals,' with an Alternative Explanation. London: Macmillan. OCLC 263175453.
Conrad, P. G.; Archer, D.; Coll, P.; De La Torre, M.; Edgett, K.; Eigenbrode, J. L.; Fisk, M.; Freissenet, C.; et al. (2013). "Habitability Assessment at Gale Crater: Implications from Initial Results". 44th Lunar and Planetary Science Conference 1719: 2185. Bibcode:2013LPICo1719.2185C.
Schuerger, Andrew C.; Golden, D. C.; Ming, Doug W. (2012). "Biotoxicity of Mars soils: 1. Dry deposition of analog soils on microbial colonies and survival under Martian conditions". Planetary and Space Science 72 (1): 91–101. Bibcode:2012P&SS...72...91S. doi:10.1016/j.pss.2012.07.026.
"Mars Contamination Dust-Up". Astrobiology Magazine. 17 May 2010. Retrieved 2013-07-04. "Whenever multiple biocidal factors are combined, the survival rates plummet quickly."
Summons, Roger E.; Amend, Jan P.; Bish, David; Buick, Roger; Cody, George D.; Des Marais, David J.; Dromart, Gilles; Eigenbrode, Jennifer L.; et al. (2011). "Preservation of Martian Organic and Environmental Records: Final Report of the Mars Biosignature Working Group". Astrobiology 11 (2): 157–81. Bibcode:2011AsBio..11..157S. doi:10.1089/ast.2010.0506. PMID 21417945. "There is general consensus that extant microbial life on Mars would probably exist (if at all) in the subsurface and at low abundance."
Dehant, V.; Lammer, H.; Kulikov, Y. N.; Grießmeier, J.-M.; Breuer, D.; Verhoeven, O.; Karatekin, Ö.; Hoolst, T.; et al. (2007).
"Planetary Magnetic Dynamo Effect on Atmospheric Protection of Early Earth and Mars". Geology and Habitability of Terrestrial Planets. Space Sciences Series of ISSI 24. pp. 279–300. doi:10.1007/978-0-387-74288-5_10. ISBN978-0-387-74287-8.|displayauthors= suggested (help) ^ abc"Study: Surface of Mars Devoid of Life". Space.com. 29 January 2007. Retrieved 28 May 2013. "After mapping cosmic radiation levels at various depths on Mars, researchers have concluded that any life within the first several yards of the planet's surface would be killed by lethal doses of cosmic radiation." ^ abcDartnell, L. R.; Desorgher, L.; Ward, J. M.; Coates, A. J. (2007). "Modelling the surface and subsurface Martian radiation environment: Implications for astrobiology". Geophysical Research Letters34 (2): L02207. Bibcode:2007GeoRL..34.2207D. doi:10.1029/2006GL027494. "Bacteria or spores held dormant by freezing conditions cannot metabolise and become inactivated by accumulating radiation damage. We find that at 2 m depth, the reach of the ExoMars drill, a population of radioresistant cells would need to have reanimated within the last 450,000 years to still be viable. Recovery of viable cells cryopreserved within the putative Cerberus pack-ice requires a drill depth of at least 7.5 m." ^ abcRichard A. Lovet (February 2, 2007). "Mars Life May Be Too Deep to Find, Experts Conclude". National Geographic News. "That's because any bacteria that may once have lived on the surface have long since been exterminated by cosmic radiation sleeting through the thin Martian atmosphere." ^ abDartnell, L. R.; Desorgher, L.; Ward, J. M.; Coates, A. J. (2007). "Modelling the surface and subsurface Martian radiation environment: Implications for astrobiology". Geophysical Research Letters34 (2). Bibcode:2007GeoRL..3402207D. doi:10.1029/2006GL027494. "The damaging effect of ionising radiation on cellular structure is one of the prime limiting factors on the survival of life in potential astrobiological habitats." ^ abcDartnell, L. R.; Desorgher, L.; Ward, J. M.; Coates, A. J. (2007). "Martian sub-surface ionising radiation: biosignatures and geology". Biogeosciences4 (4): 545–558. Bibcode:2007BGeo....4..545D. doi:10.5194/bg-4-545-2007. "This ionising radiation field is deleterious to the survival of dormant cells or spores and the persistence of molecular biomarkers in the subsurface, and so its characterisation. [..] Even at a depth of 2 meters beneath the surface, any microbes would probably be dormant, cryopreserved by the current freezing conditions, and so metabolically inactive and unable to repair cellular degradation as it occurs." ^ abc"Scientists find evidence Mars subsurface could hold life". Digital Journal – Science. 21 January. Retrieved 2013-06-05. "There can be no life on the surface of Mars because it is bathed in radiation and it's completely frozen. However, life in the subsurface would be protected from that. - Prof. Parnell."Check date values in: |date= (help) ^ abSteigerwald, Bill (January 15, 2009). "Martian Methane Reveals the Red Planet is not a Dead Planet". NASA's Goddard Space Flight Center (NASA). Retrieved June 6 24, 2013. "If microscopic Martian life is producing the methane, it probably resides far below the surface, where it's still warm enough for liquid water to exist"Check date values in: |accessdate= (help) ^Michalski, Joseph R.; Cuadros, Javier; Niles, Paul B.; Parnell, John; Deanne Rogers, A.; Wright, Shawn P. (2013). "Groundwater activity on Mars and implications for a deep biosphere". 
Nature Geoscience 6 (2): 133–8. Bibcode:2013NatGe...6..133M. doi:10.1038/ngeo1706.
De Morais, A. (2012). "A Possible Biogeochemical Model for Mars". 43rd Lunar and Planetary Science Conference 43: 2943. Bibcode:2012LPI....43.2943D. "The extensive volcanism at that time much possibly created subsurface cracks and caves within different strata, and the liquid water could have been stored in these subterraneous places, forming large aquifers with deposits of saline liquid water, minerals, organic molecules, and geothermal heat – ingredients for life as we know on Earth."
Hecht, Michael H.; Vasavada, Ashwin R. (2006). "Transient liquid water near an artificial heat source on Mars". International Journal of Mars Science and Exploration 2: 83–96. Bibcode:2006IJMSE...2...83H. doi:10.1555/mars.2006.0006. "In summary, on present-day Mars, liquid water is unlikely except as the result of a quick and dramatic change in environmental conditions such as from a landslide that exposes buried ice to sunlight (Costard et al. 2002), or from the introduction of an artificial heat source."
Haberle, Robert M.; McKay, Christopher P.; Schaeffer, James; Cabrol, Nathalie A.; Grin, Edmon A.; Zent, Aaron P.; Quinn, Richard (2001). "On the possibility of liquid water on present-day Mars". Journal of Geophysical Research: Planets 106 (E10): 23317–26. Bibcode:2001JGR...10623317H. doi:10.1029/2000JE001360. "Introduction: The mean annual surface pressure and temperature on present-day Mars do not allow for the stability of liquid water on the surface. [...] Conclusion: It is possible, even likely, that solar-heated liquid water never forms on present-day Mars."
Hassler, Donald M.; Zeitlin, Cary; Wimmer-Schweingruber, Robert F.; Ehresmann, Bent; Rafkin, Scot; Martin, Cesar; Boettcher, Stephan; Koehler, Jan; et al. (2013). "The Radiation Environment on the Martian Surface and during MSL's Cruise to Mars". EGU General Assembly 2013 15: 12596. Bibcode:2013EGUGA..1512596H.
Heldmann, Jennifer L.; Toon, Owen B.; Pollard, Wayne H.; Mellon, Michael T.; Pitlick, John; McKay, Christopher P.; Andersen, Dale T. (2005). "Formation of Martian gullies by the action of liquid water flowing under current Martian environmental conditions". Journal of Geophysical Research 110 (E5): E05004. Bibcode:2005JGRE..11005004H. doi:10.1029/2004JE002261.
Kostama, V.-P.; Kreslavsky, M. A.; Head, J. W. (2006). "Recent high-latitude icy mantle in the northern plains of Mars: Characteristics and ages of emplacement". Geophysical Research Letters 33 (11): 11201. Bibcode:2006GeoRL..3311201K. doi:10.1029/2006GL025946.
Baker, V. R.; Strom, R. G.; Gulick, V. C.; Kargel, J. S.; Komatsu, G.; Kale, V. S. (1991). "Ancient oceans, ice sheets and the hydrological cycle on Mars". Nature 352 (6336): 589. Bibcode:1991Natur.352..589B. doi:10.1038/352589a0.
Allen, Carlton C.; Albert, Fred G.; Chafetz, Henry S.; Combie, Joan; Graham, Catherine R.; Kieft, Thomas L.; Kivett, Steven J.; McKay, David S.; et al. (2000). "Microscopic Physical Biomarkers in Carbonate Hot Springs: Implications in the Search for Life on Mars". Icarus 147 (1): 49–67. Bibcode:2000Icar..147...49A. doi:10.1006/icar.2000.6435. PMID 11543582.
Wade, Manson L.; Agresti, David G.; Wdowiak, Thomas J.; Armendarez, Lawrence P.; Farmer, Jack D. (1999). "A Mössbauer investigation of iron-rich terrestrial hydrothermal vent systems: Lessons for Mars exploration". Journal of Geophysical Research 104 (E4): 8489–507. Bibcode:1999JGR...104.8489W.
doi:10.1029/1998JE900049. PMID 11542933.
Agresti, D. G.; Wdowiak, T. J.; Wade, M. L.; Armendarez, L. P.; Farmer, J. D. (1995). "A Mossbauer Investigation of Hot Springs Iron Deposits". Abstracts of the Lunar and Planetary Science Conference 26: 7. Bibcode:1995LPI....26....7A.
Agresti, D. G.; Wdowiak, T. J.; Wade, M. L.; Armendarez, L. P. (1997). "Mössbauer Spectroscopy of Thermal Springs Iron Deposits as Martian Analogs". Early Mars: Geologic and Hydrologic Evolution 916: 1. Bibcode:1997LPICo.916....1A.
Mumma, M. J.; Novak, R. E.; Disanti, M. A.; Bonev, B. P. (2003). "A Sensitive Search for Methane on Mars". American Astronomical Society 35: 937. Bibcode:2003DPS....35.1418M.
Kral, T. A.; Goodhart, T.; Howe, K. L.; Gavin, P. (2009). "Can Methanogens Grow in a Perchlorate Environment on Mars?". 72nd Annual Meeting of the Meteoritical Society 72: 5136. Bibcode:2009M&PSA..72.5136K.
Howe, K. L.; Gavin, P.; Goodhart, T.; Kral, T. A. (2009). "Methane Production by Methanogens in Perchlorate-supplemented Media". 40th Lunar and Planetary Science Conference 40: 1287. Bibcode:2009LPI....40.1287H.
Gibson, E. K. Jr.; Westall, F.; McKay, D. S.; Thomas-Keprta, K.; Wentworth, S.; Romanek, C. S. "Evidence for ancient Martian life". Mail Code SN2, NASA Johnson Space Center, Houston TX 77058, USA.
McKay, David S.; Gibson, Everett K.; Thomas-Keprta, Kathie L.; Vali, Hojatollah; Romanek, Christopher S.; Clemett, Simon J.; Chillier, Xavier D. F.; Maechling, Claude R.; et al. (1996). "Search for Past Life on Mars: Possible Relic Biogenic Activity in Martian Meteorite ALH84001". Science 273 (5277): 924–30. Bibcode:1996Sci...273..924M. doi:10.1126/science.273.5277.924. PMID 8688069.
Kieffer, H. H. (2000). "Annual Punctuated CO2 Slab-Ice and Jets on Mars". International Conference on Mars Polar Science and Exploration: 93. Bibcode:2000mpse.conf...93K.
Portyankina, G.; Markiewicz, W. J.; Garcia-Comas, M.; Keller, H. U.; Bibring, J.-P.; Neukum, G. (2006). "Simulations of Geyser-type Eruptions in Cryptic Region of Martian South Polar Cap". Fourth International Conference on Mars Polar Science and Exploration 1323: 8040. Bibcode:2006LPICo1323.8040P.
Horváth, A.; Gánti, T.; Gesztesi, A.; Bérczi, Sz.; Szathmáry, E. (2001). "Probable Evidences of Recent Biological Activity on Mars: Appearance and Growing of Dark Dune Spots in the South Polar Region". 32nd Annual Lunar and Planetary Science Conference 32: 1543. Bibcode:2001LPI....32.1543H.
Pócs, T.; Horváth, A.; Gánti, T.; Bérczi, Sz.; Szathmáry, E. (2004). "Possible crypto-biotic crust on Mars?". Proceedings of the Third European Workshop on Exo-Astrobiology 545: 265–6. Bibcode:2004eab..conf..265P.
Gánti, Tibor; Horváth, András; Bérczi, Szaniszló; Gesztesi, Albert; Szathmáry, Eörs (2003). "Dark Dune Spots: Possible Biomarkers on Mars?". Origins of Life and Evolution of the Biosphere 33 (4/5): 515–57. doi:10.1023/A:1025705828948.
Horváth, A.; Gánti, T.; Bérczi, Sz.; Gesztesi, A.; Szathmáry, E. (2002). "Morphological Analysis of the Dark Dune Spots on Mars: New Aspects in Biological Interpretation". 33rd Annual Lunar and Planetary Science Conference 33: 1108. Bibcode:2002LPI....33.1108H.
Orme, Greg M.; Ness, Peter K. (June 9, 2003). "Martian Spiders". Marsbugs 10 (23): 5–7. Archived from the original on September 27, 2007. Retrieved September 6, 2009.
Manrubia, S. C.; Prieto Ballesteros, O.; González Kessler, C.; Fernández Remolar, D.; Córdoba-Jabonero, C.; Selsis, F.; Bérczi, S.; Gánti, T.; et al. (2004).
"Comparative analysis of geological features and seasonal processes in 'Inca City' and 'Pityusa Patera' regions on Mars". Proceedings of the Third European Workshop on Exo-Astrobiology545: 77–80. Bibcode:2004eab..conf...77M. ISBN92-9092-856-5.|displayauthors= suggested (help) ^ abNess, Peter K.; Orme, Greg M. (2002). "Spider-Ravine Models and Plant-Like Features on Mars – Possible Geophysical and Biogeophysical Modes of Origin". Journal of the British Interplanetary Society55 (3/4): 85–108. Bibcode:2002JBIS...55...85N. ^de Vera, Jean-Pierre; Möhlmann, Diedrich; Butina, Frederike; Lorek, Andreas; Wernecke, Roland; Ott, Sieglinde (2010). "Survival Potential and Photosynthetic Activity of Lichens Under Mars-Like Conditions: A Laboratory Study". Astrobiology10 (2): 215–27. Bibcode:2010AsBio..10..215D. doi:10.1089/ast.2009.0362. PMID20402583. ^de Vera, J.-P. P.; Schulze-Makuch, D.; Khan, A.; Lorek, A.; Koncz, A.; Möhlmann, D.; Spohn, T. (2012). "The adaptation potential of extremophiles to Martian surface conditions and its implication for the habitability of Mars". EGU General Assembly 201214: 2113. Bibcode:2012EGUGA..14.2113D. ^Sánchez, F. J.; Mateo-Martí, E.; Raggio, J.; Meeßen, J.; Martínez-Frías, J.; Sancho, L. G.; Ott, S.; de la Torre, R. (2012). "The resistance of the lichen Circinaria gyrosa (nom. Provis.) towards simulated Mars conditions—a model test for the survival capacity of an eukaryotic extremophile". Planetary and Space Science72 (1): 102–10. Bibcode:2012P&SS...72..102S. doi:10.1016/j.pss.2012.08.005. ^Strom, R.G., Steven K. Croft, and Nadine G. Barlow, "The Martian Impact Cratering Record," Mars, University of Arizona Press, ISBN 0-8165-1257-4, 1992.[page needed] ^Raeburn, P. 1998. Uncovering the Secrets of the Red Planet Mars. National Geographic Society. Washington D.C.[page needed] ^Moore, P. et al. 1990. The Atlas of the Solar System. Mitchell Beazley Publishers NY, NY.[page needed] ^"Astrobiology". Biology Cabinet. September 26, 2006. Retrieved 2011-01-17. ^ abBianciardi, Giorgio; Miller, Joseph D.; Straat, Patricia Ann; Levin, Gilbert V. (2012). "Complexity Analysis of the Viking Labeled Release Experiments". International Journal of Aeronautical and Space Sciences13 (1): 14–26. Bibcode:2012IJASS..13...14B. doi:10.5139/IJASS.2012.13.1.14.
Microplastics are tiny fragments of degraded plastic, no greater than 5 mm in diameter. They are oceanic pollutants that can drift thousands of miles in the surface layers of the open sea and can also find their way down the water column to various depths. While studies to measure and monitor the presence of microplastics in regions of the world's oceans have been conducted for the past 50 years, they have used disparate methods of collection and analysis, meaning that the data could not be easily combined or compared. Large datasets for following trends in microplastic pollution have thus not been generally available to researchers. This is what prompted a global team of oceanographers, led by researchers from Kyushu University, to review the data from previous published and unpublished expeditions to sample microplastics in the oceans. They calibrated and processed these data in order to build a publicly available dataset for assessing trends in the abundance of microplastics more accurately. "Although the observation of microplastics dates back to the 1970s, standardized data spanning the globe is still limited," explained Atsuhiko Isobe, professor at Kyushu University's Research Institute for Applied Mechanics. To create the new dataset, a total of 8,218 pelagic microplastic samples, collected from oceans around the world, were synthesized and standardized. The dataset contains raw, calibrated, processed, and gridded data that is now comparable. Samples were adjusted for different types of collection, as well as for conditions of ocean turbulence and wind, as these factors affect estimates of abundance. "We collected published and unpublished data on microplastic distribution from around the world and calibrated to account for differences such as in collection method and wave height to create standardized, state-of-the-art 2D maps of microplastic abundance," explained Professor Isobe. The researchers estimated that there were 24.4 trillion pieces of microplastics in the world's upper ocean layer, which equates to somewhere between 82,000 and 578,000 tons of plastic, or roughly 30 billion 500 ml plastic water bottles. "Our dataset provides realistic amounts of microplastics in the wild to help researchers trying to assess the true impact they are having on aquatic organisms and the environment," said Professor Isobe. "While this work improves our grasp of the actual situation, the total amount of microplastics is still likely to be much greater since this is just what we can estimate on the surface. For us to get a clearer picture, we must develop 3D maps probing the depths of the oceans and continue to fill the gaps within our dataset." "Though we are making progress, we still have much to learn to get a complete picture of the fate of plastic debris and the effect it is having on the environment." The research is published in the journal Microplastics and Nanoplastics.
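As a rough sanity check, the mass range and the bottle comparison are mutually consistent. The following back-of-the-envelope sketch is our own illustration, not part of the study; the assumed 19 g mass of an empty 500 ml PET bottle is our own illustrative figure:

    # Back-of-the-envelope check of the "30 billion bottles" comparison.
    # Assumption (ours, not the study's): an empty 500 ml PET bottle
    # weighs roughly 19 g.
    BOTTLE_MASS_KG = 0.019

    low_tons, high_tons = 82_000, 578_000        # study's estimated mass range
    low_kg, high_kg = low_tons * 1000, high_tons * 1000

    print(f"Bottle equivalents: {low_kg / BOTTLE_MASS_KG:.2e} "
          f"to {high_kg / BOTTLE_MASS_KG:.2e}")
    # The upper bound works out to ~3.0e10, i.e. about 30 billion bottles,
    # matching the comparison quoted in the article.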
A hexagon is a closed 2D shape that is made up of straight lines. It is a two-dimensional shape with six sides, six vertices, and six edges. The name comes from hex, which means six, and gonia, which means corners. A hexagon is a two-dimensional geometrical shape made of six sides, which may have the same or different lengths. Some real-life examples of the hexagon are a hexagonal floor tile, pencil, clock, a honeycomb, etc. A hexagon is either regular (with 6 equal side lengths and angles) or irregular (with 6 unequal side lengths and angles).
Types of Hexagon
Hexagons can be classified based on their side lengths and internal angles. Considering the sides and angles of a hexagon, the types of hexagon are:
- Regular Hexagon: A regular hexagon is one that has equal sides and angles. All the internal angles of a regular hexagon are 120°. The exterior angles measure 60°. The sum of the interior angles of a regular hexagon is 6 times 120°, which is equal to 720°. The sum of the exterior angles is equal to 6 times 60°, which is equal to 360°.
- Irregular Hexagon: An irregular hexagon has sides and angles of different measurements. All the internal angles are not equal to 120°. But the sum of all interior angles is the same, i.e., 720°.
- Convex Hexagon: A convex hexagon is one in which all the interior angles measure less than 180°. Convex hexagons can be regular or irregular, which means they can have equal or unequal side lengths and angles. All the vertices of the convex hexagon are pointed outwards.
- Concave Hexagon: A concave hexagon is one in which at least one of the interior angles is greater than 180°. There is at least one vertex that points inwards.
Properties of a Hexagon
A hexagon is a flat two-dimensional shape with six sides. It may or may not have equal sides and angles. Based on these facts, the important properties of a hexagon are as follows.
- It has six sides, six edges, and six vertices
- Its side lengths may be equal or unequal in measurement
- All the internal angles are equal to 120° in a regular hexagon
- The sum of the internal angles is always equal to 720°
- All the external angles are equal to 60° in a regular hexagon
- The sum of the exterior angles is equal to 360° in a hexagon
- The number of diagonals (a line segment joining two vertices of a polygon) that can be drawn is 9
- A regular hexagon is also a convex hexagon, since all its internal angles are less than 180°
- A regular hexagon can be split into six equilateral triangles
- A regular hexagon is symmetrical, as each of its side lengths is equal
- The opposite sides of a regular hexagon are always parallel to each other.
As with any polygon, a regular hexagon has formulas to calculate the area, perimeter, and the number of diagonals. Let us look into each one of them.
Diagonals of a Hexagon
A diagonal is a line segment that connects any two non-adjacent vertices of a polygon. The number of diagonals of a polygon is given by n(n-3)/2, where 'n' is the number of sides of the polygon. The number of diagonals in a hexagon is 6(6-3)/2 = 6(3)/2, which is 9. Out of the 9 diagonals, 3 of them pass through the center of the hexagon.
Sum of Interior Angles of a Hexagon
The sum of the internal angles of a regular hexagon is 720° (because each angle is 120° and there are 6 such angles, adding up to 720°).
It is given by the general formula for a polygon, (n-2) × 180°, where n is the number of sides, which is 6 for a hexagon. Therefore, (6-2) × 180° = 720°.
Area of a Regular Hexagon
The area of a regular hexagon is the space or the region occupied by the shape. It is measured in square units. Let us divide the hexagon into 6 equilateral triangles, calculate the area of one triangle, and multiply it by 6 to get the entire area of the hexagon. The area of one equilateral triangle is √3a²/4 square units. Hence, the area of a regular hexagon formed by combining 6 such triangles is 6 × √3a²/4 = 3√3a²/2 square units. Therefore, the formula for the area of a regular hexagon is 3√3a²/2 square units.
Perimeter of a Hexagon
Perimeter is the total length of the boundary or the outline of a shape. Taking the side of a regular hexagon as 'a' units, the perimeter of a regular hexagon is found by summing up the lengths of all the sides, which is equal to 6a units. Therefore, the perimeter of a regular hexagon = 6a units, and the perimeter of an irregular hexagon = (a + b + c + d + e + f) units, where a, b, c, d, e, and f are the side lengths of the hexagon.
Example 1: What is the area of a regular hexagon with sides equal to 3 units?
Area of a regular hexagon = 3√3a²/2 square units. Given side 'a' = 3 units. Therefore, area = 3√3(3²)/2 = (3 × √3 × 9)/2 = (27√3)/2 ≈ 23.38 square units.
Example 2: Find the length of each side of a regular hexagon, if the hexagon's area is 150√3 square units. Use the length of the sides to find the perimeter of the hexagon.
Applying the formula for the area of a regular hexagon: 3√3a²/2 = 150√3, so 3√3a² = 300√3. Cancelling √3 on both sides, a² = 300/3 = 100, so a = √100 = 10 units. Therefore, the length of each side of the hexagon = 10 units.
Perimeter of a regular hexagon = 6a units, with a = 10 units. Therefore, the perimeter of the given regular hexagon = 6 × 10 = 60 units.
FAQs on Hexagon
What is a Hexagon?
A hexagon is a two-dimensional flat shape that has six angles, six edges, and six vertices. A hexagon can have equal or unequal sides and interior angles. It is a 6-sided polygon of two types - regular hexagon and irregular hexagon.
Are all 6-sided Shapes Hexagons?
A hexagon is a two-dimensional closed shape having 6 sides, which may be equal or unequal. Therefore, all six-sided closed shapes are hexagons.
What are the Three Attributes of a Hexagon?
The three attributes of a hexagon are:
- It has 6 sides
- It has 6 angles
- It has 6 corners
Does a Hexagon Always Have Equal Sides?
A hexagon need not have all sides equal; it can have sides of variable lengths too. A hexagon with equal sides is called a regular hexagon, and one with unequal sides is called an irregular hexagon.
How are Hexagons Classified?
A hexagon is classified based on its side lengths and angles. On this basis, hexagons are classified into regular (equal side lengths and angles) and irregular (unequal side lengths and angles) hexagons. Convex hexagons are those in which all the interior angles are less than 180°, and concave hexagons are those in which at least one of the interior angles is greater than 180°.
What is the Sum of Interior Angles of a Hexagon?
In a hexagon, the sum of all 6 interior angles is always 720°.
The sum of the interior angles of a polygon is calculated using the formula (n-2) × 180°, where 'n' is the number of sides of the polygon. Since a hexagon has 6 sides, taking 'n' as 6 we get (6-2) × 180° = 720°.
What is the Value of an Angle in a Regular Hexagon?
The measure of each interior angle in a regular hexagon is 120°.
How Many Diagonals Can be Drawn in a Regular Hexagon?
The formula to calculate the number of diagonals of a polygon is n(n-3)/2, where 'n' is the number of sides of the polygon. A hexagon has 6 sides; therefore, the number of diagonals is 6(6-3)/2, which is equal to 9.
How Many Lines of Symmetry are there in a Regular Hexagon?
For all regular polygons, the number of lines of symmetry is equal to the number of sides. Thus, for a regular hexagon, there are six lines of symmetry.
What is the Formula to Calculate the Area of a Regular Hexagon?
The formula to calculate the area of a regular hexagon is 3√3a²/2 square units, where 'a' is the side length of the regular hexagon.
How to Calculate the Area of a Hexagon?
We can determine the area of a regular hexagon by identifying the length of its side. To find the area we use the formula A = 3√3a²/2. Always write the final answer for an area in square units.
What is the Formula to Calculate the Perimeter of a Hexagon?
The formula to calculate the perimeter of a regular hexagon is 6a units, where 'a' is the side length of the hexagon. In the case of an irregular hexagon, we add the side lengths; mathematically, Perimeter of Hexagon = (a + b + c + d + e + f) units.
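The hexagon formulas above are easy to verify numerically. Here is a minimal Python sketch (our own illustration; the function names are hypothetical) that reproduces the worked examples:

    import math

    def hexagon_area(a: float) -> float:
        """Area of a regular hexagon with side length a: 3*sqrt(3)*a^2/2."""
        return 3 * math.sqrt(3) * a**2 / 2

    def hexagon_perimeter(a: float) -> float:
        """Perimeter of a regular hexagon with side length a: 6a."""
        return 6 * a

    def polygon_diagonals(n: int) -> int:
        """Number of diagonals of an n-sided polygon: n(n-3)/2."""
        return n * (n - 3) // 2

    print(hexagon_area(3))        # ~23.38, matching Example 1
    print(hexagon_perimeter(10))  # 60, matching Example 2
    print(polygon_diagonals(6))   # 9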
General Science & Ability Notes for CSS
The Solar System (CSS 2008/2009)
Our solar system consists of the sun, planets, dwarf planets (or plutoids), moons, an asteroid belt, comets, meteors, and other objects. The sun is the center of our solar system; the planets, their many moons, the asteroids, comets, meteoroids and other rocks and gas all orbit the Sun. Our solar system is always in motion. Eight known planets and their moons, along with comets, asteroids, and other space objects orbit the Sun. The Sun is the biggest object in our solar system. It contains more than 99% of the solar system's mass. Astronomers think the solar system is more than 4.5 billion years old. The sun lies at the heart of the solar system, where it is by far the largest object. It holds 99.8 percent of the solar system's mass and is roughly 109 times the diameter of the Earth — about one million Earths could fit inside the sun.
Structure of the Sun
The Core
The Sun's core has a tremendously high temperature and pressure. The temperature is roughly 15 million °C. At this temperature, nuclear fusion occurs, turning four hydrogen nuclei into a single helium nucleus plus a LOT of energy. This "hydrogen burning" releases gamma rays (high-energy photons) and neutrinos (particles with no charge and almost no mass).
The Photosphere
This is the lower atmosphere of the Sun and the part that we see (since it emits light at visible wavelengths). This layer is about 300 miles (500 km) thick. The temperature is about 5,500 °C.
The Chromosphere
This reddish layer is an area of rising temperatures. The temperature ranges from 6,000 °C (at lower altitudes) to 50,000 °C (at higher altitudes). This layer is a few thousand miles (or kilometers) thick. It appears red because hydrogen atoms are in an excited state and emit radiation near the red part of the visible spectrum. The Chromosphere is visible during solar eclipses (when the moon blocks the Photosphere).
The Corona
This is the outer layer of the Sun's atmosphere. The corona extends for millions of miles and the temperatures are tremendous, reaching one million °C.
The Solar Wind
The solar wind is a continuous stream of ions (electrically charged particles) that are given off by magnetic anomalies on the Sun. The solar wind is emitted where the Sun's magnetic field loops out into space instead of looping back into the Sun. These magnetic anomalies in the Sun's corona are called coronal holes. In X-ray photographs of the Sun, coronal holes are black areas. It takes the solar wind about 4.5 days to reach Earth; it has a velocity of about 250 miles/sec (400 km/sec). Since the particles are emitted from the Sun as the Sun rotates, the solar wind blows in a pinwheel pattern through the solar system. The solar wind affects the entire Solar System, including buffeting comets' tails away from the Sun, causing auroras on Earth (and some other planets), disrupting electronic communications on Earth, pushing spacecraft around, etc.
Sunspots
Sunspots are relatively cool, dark patches on the sun's surface. They come in many shapes and sizes; they often appear in groups. These spots are much bigger than the Earth; they can be over 10 times the diameter of the Earth. Temperatures in sunspots are much cooler than elsewhere on the Sun. Usually the Sun's surface temperature is about 5,500 °C (9,900 °F), but in sunspots the temperature is between 2,700 °C and 4,200 °C (4,900 °F to 7,600 °F). The Sun, like many objects in the Solar System (including Earth), has a magnetic field.
A magnetic field is basically invisible magnetic lines (flux) travelling into and out of the Sun, produced by electrically charged particles. These lines would usually travel through the Sun regularly, or, in other words, following the route they took when they entered it. However, as the Sun is a ball of gas, it spins faster towards its equator than towards its poles. This muddled way of spinning (known as differential rotation) disturbs the route of the magnetic field, and causes some of the lines of magnetism to warp and twist. Sometimes they become so distorted that they "snap" like elastic bands and pop out of the Sun's surface. The intense magnetism in these magnetic field lines is powerful enough to push back some of the hot gases travelling outwards from the Sun. This prevents some of the heat from reaching the surface and causes the area to be cooler. And as heat energy also produces light, areas – or spots – where the Sun is cooler are also darker than other areas; these are known as sunspots. The sunspot cycle was discovered by S. Heinrich Schwabe in 1843 (he started his observations in 1826).
Explosions in the Corona: Solar Flares and Coronal Mass Ejections
Studies of the corona reveal dramatic, violent events called solar flares and coronal mass ejections (CMEs). Solar flares release energy from magnetic loops in the corona, heating the gases of the corona and sending particles and radiation out into the solar system. A coronal mass ejection occurs when an explosion in the corona pushes millions or billions of metric tons of material out into space. The frequency of occurrence of both solar flares and CMEs follows the pattern of the 11-year sunspot cycle (as the number of sunspots increases, so does the number of solar flares and CMEs). Both kinds of solar explosions seem to result from the sudden release of energy stored in coronal magnetic fields.
Important Features of the Sun (CSS 2009)
- Mass: 1.98892 × 10³⁰ kg
- The diameter of the Sun is 1.391 million kilometers (864,000 miles, or 1,391,000,000 meters) — roughly 109 times the diameter of the Earth, so about one million Earths could fit inside the sun.
- The circumference of the Sun is 4,379,000 km.
- The sun lies at the heart of the solar system, where it is by far the largest object, holding 99.8 percent of the solar system's mass.
- The visible part of the sun is roughly 10,000 degrees F (5,500 degrees C), while temperatures in the core reach more than 27 million degrees F (15 million degrees C), driven by nuclear reactions.
- One would need to explode 100 billion tons of dynamite every second to match the energy produced by the Sun.
- The radius of the Sun, the measurement from the exact center of the Sun out to its surface, is 695,500 kilometers (432,200 miles, or 695,500,000 meters) — about 109 Earth radii.
- The density of the Sun is 1.4 grams per cubic centimeter.
- The volume of the Sun is 1.412 × 10¹⁸ km³.
- The sun is one of more than 100 billion stars in the Milky Way.
- It orbits some 25,000 light years from the galactic core, completing a revolution once every 250 million years or so.
- The sun is relatively young, part of a generation of stars known as Population I, which are relatively rich in elements heavier than helium.
- The sun was born roughly 4.6 billion years ago.
- The sun has enough nuclear fuel to stay much as it is now for another 5 billion years.
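Two of the figures in these notes can be cross-checked with quick arithmetic. The following Python sketch (our own check, not part of the notes) does so:

    # Cross-checks for two claims in the notes above (illustrative only).

    # 1. "About one million Earths could fit inside the sun": if the Sun's
    #    diameter is ~109 Earth diameters, the volume ratio is 109 cubed.
    volume_ratio = 109 ** 3
    print(f"Volume ratio Sun/Earth: {volume_ratio:,}")   # ~1,295,029

    # 2. "It takes the solar wind about 4.5 days to reach Earth at ~400 km/s":
    #    the distance covered should be roughly one astronomical unit (~1.5e8 km).
    distance_km = 400 * 4.5 * 24 * 3600
    print(f"Distance covered: {distance_km:.3e} km")     # ~1.6e8 km, about 1 AU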
Chapter 13 Temperature, Kinetic Theory, and the Gas Laws
- State the ideal gas law in terms of molecules and in terms of moles.
- Use the ideal gas law to calculate pressure change, temperature change, volume change, or the number of molecules or moles in a given volume.
- Use Avogadro's number to convert between number of molecules and number of moles.
In this section, we continue to explore the thermal behavior of gases. In particular, we examine the characteristics of atoms and molecules that compose gases. (Most gases, for example nitrogen, N₂, and oxygen, O₂, are composed of two or more atoms. We will primarily use the term "molecule" in discussing a gas because the term can also be applied to monatomic gases, such as helium.)
Gases are easily compressed. We can see evidence of this in [link], where you will note that gases have the largest coefficients of volume expansion. The large coefficients mean that gases expand and contract very rapidly with temperature changes. In addition, you will note that most gases expand at the same rate, or have the same coefficient of volume expansion, β. This raises the question as to why gases should all act in nearly the same way, when liquids and solids have widely varying expansion rates. The answer lies in the large separation of atoms and molecules in gases, compared to their sizes, as illustrated in Figure 2. Because atoms and molecules have large separations, forces between them can be ignored, except when the particles collide with each other. The motion of atoms and molecules (at temperatures well above the boiling temperature) is fast, such that the gas occupies all of the accessible volume and the expansion of gases is rapid. In contrast, in liquids and solids, atoms and molecules are closer together and are quite sensitive to the forces between them.
To get some idea of how pressure, temperature, and volume of a gas are related to one another, consider what happens when you pump air into an initially deflated tire. The tire's volume first increases in direct proportion to the amount of air injected, without much increase in the tire pressure. Once the tire has expanded to nearly its full size, the walls limit volume expansion. If we continue to pump air into it, the pressure increases. The pressure will further increase when the car is driven and the tires move. Most manufacturers specify optimal tire pressure for cold tires. (See Figure 3.)
At room temperatures, collisions between atoms and molecules can be ignored. In this case, the gas is called an ideal gas, in which case the relationship between the pressure, volume, and temperature is given by the equation of state called the ideal gas law.
IDEAL GAS LAW
The ideal gas law states that
PV = NkT,
where P is the absolute pressure of a gas, V is the volume it occupies, N is the number of atoms and molecules in the gas, and T is its absolute temperature. The constant k is called the Boltzmann constant in honor of Austrian physicist Ludwig Boltzmann (1844–1906) and has the value
k = 1.38 × 10⁻²³ J/K.
The ideal gas law can be derived from basic principles, but was originally deduced from experimental measurements of Charles' law (that volume occupied by a gas is proportional to temperature at a fixed pressure) and from Boyle's law (that for a fixed temperature, the product PV is a constant). In the ideal gas model, the volume occupied by its atoms and molecules is a negligible fraction of V. The ideal gas law describes the behavior of real gases under most conditions. (Note, for example, that N is the total number of atoms and molecules, independent of the type of gas.)
Let us see how the ideal gas law is consistent with the behavior of filling the tire when it is pumped slowly and the temperature is constant. At first, the pressure P is essentially equal to atmospheric pressure, and the volume V increases in direct proportion to the number of atoms and molecules N put into the tire. Once the volume of the tire is constant, the equation PV = NkT predicts that the pressure should increase in proportion to the number N of atoms and molecules.
Example 1: Calculating Pressure Changes Due to Temperature Changes: Tire Pressure
Suppose your bicycle tire is fully inflated, with an absolute pressure of 7.00 × 10⁵ Pa (a gauge pressure of just under 90.0 lb/in²) at a temperature of 18.0 °C. What is the pressure after its temperature has risen to 35.0 °C? Assume that there are no appreciable leaks or changes in volume.
The pressure in the tire is changing only because of changes in temperature. First we need to identify what we know and what we want to know, and then identify an equation to solve for the unknown. We know the initial pressure P₀ = 7.00 × 10⁵ Pa, the initial temperature T₀ = 18.0 °C, and the final temperature Tf = 35.0 °C. We must find the final pressure Pf. How can we use the equation PV = NkT? At first, it may seem that not enough information is given, because the volume V and number of atoms N are not specified. What we can do is use the equation twice: P₀V₀ = NkT₀ and PfVf = NkTf. If we divide PfVf by P₀V₀ we can come up with an equation that allows us to solve for Pf:
PfVf / (P₀V₀) = Tf / T₀.
Since the volume is constant, V₀ and Vf are the same and they cancel out. The same is true for N₀ and Nf, and for k, which is a constant. Therefore,
Pf / P₀ = Tf / T₀.
We can then rearrange this to solve for
Pf = P₀ (Tf / T₀),
where the temperature must be in units of kelvins, because T₀ and Tf are absolute temperatures.
1. Convert temperatures from Celsius to Kelvin: T₀ = (18.0 + 273) K = 291 K, Tf = (35.0 + 273) K = 308 K.
2. Substitute the known values into the equation:
Pf = P₀ (Tf / T₀) = (7.00 × 10⁵ Pa)(308 K / 291 K) = 7.41 × 10⁵ Pa.
The final temperature is about 6% greater than the original temperature, so the final pressure is about 6% greater as well. Note that absolute pressure and absolute temperature must be used in the ideal gas law.
MAKING CONNECTIONS: TAKE-HOME EXPERIMENT—REFRIGERATING A BALLOON
Inflate a balloon at room temperature. Leave the inflated balloon in the refrigerator overnight. What happens to the balloon, and why?
Example 2: Calculating the Number of Molecules in a Cubic Meter of Gas
How many molecules are in a typical object, such as gas in a tire or water in a drink? We can use the ideal gas law to give us an idea of how large N typically is. Calculate the number of molecules in a cubic meter of gas at standard temperature and pressure (STP), which is defined to be 0 °C and atmospheric pressure. Because pressure, volume, and temperature are all specified, we can use the ideal gas law PV = NkT to find N.
1. Identify the knowns: T = 0 °C = 273 K, P = 1.01 × 10⁵ Pa, V = 1.00 m³.
2. Identify the unknown: number of molecules, N.
3. Rearrange the ideal gas law to solve for N: N = PV / kT.
4. Substitute the known values into the equation and solve for N:
N = (1.01 × 10⁵ Pa)(1.00 m³) / ((1.38 × 10⁻²³ J/K)(273 K)) = 2.68 × 10²⁵ molecules.
This number is undeniably large, considering that a gas is mostly empty space. N is huge, even in small volumes. For example, 1 cm³ of a gas at STP has 2.68 × 10¹⁹ molecules in it. Once again, note that N is the same for all types or mixtures of gases.
Moles and Avogadro's Number
It is sometimes convenient to work with a unit other than molecules when measuring the amount of substance. A mole (abbreviated mol) is defined to be the amount of a substance that contains as many atoms or molecules as there are atoms in exactly 12 grams (0.012 kg) of carbon-12. The actual number of atoms or molecules in one mole is called Avogadro's number, N_A, in recognition of Italian scientist Amedeo Avogadro (1776–1856).
He developed the concept of the mole, based on the hypothesis that equal volumes of gas, at the same pressure and temperature, contain equal numbers of molecules. That is, the number is independent of the type of gas. This hypothesis has been confirmed, and the value of Avogadro's number is
N_A = 6.02 × 10²³ mol⁻¹.
One mole always contains 6.02 × 10²³ particles (atoms or molecules), independent of the element or substance. A mole of any substance has a mass in grams equal to its molecular mass, which can be calculated from the atomic masses given in the periodic table of elements.
Check Your Understanding 1
The active ingredient in a Tylenol pill is 325 mg of acetaminophen (C₈H₉NO₂). Find the number of active molecules of acetaminophen in a single pill.
Example 3: Calculating Moles per Cubic Meter and Liters per Mole
Calculate: (a) the number of moles in 1.00 m³ of gas at STP, and (b) the number of liters of gas per mole.
Strategy and Solution
(a) We are asked to find the number of moles per cubic meter, and we know from Example 2 that the number of molecules per cubic meter at STP is 2.68 × 10²⁵. The number of moles can be found by dividing the number of molecules by Avogadro's number. We let n stand for the number of moles:
n = N / N_A = (2.68 × 10²⁵ molecules/m³) / (6.02 × 10²³ molecules/mol) = 44.5 mol/m³.
(b) Using the value obtained for the number of moles in a cubic meter, and converting cubic meters to liters, we obtain
(10³ L/m³) / (44.5 mol/m³) = 22.5 L/mol.
This value is very close to the accepted value of 22.4 L/mol. The slight difference is due to rounding errors caused by using three-digit input. Again this number is the same for all gases. In other words, it is independent of the gas.
The (average) molar weight of air (approximately 80% N₂ and 20% O₂) is 28.8 g/mol. Thus the mass of one cubic meter of air is 1.28 kg. If a living room has dimensions 5 m × 5 m × 3 m, the mass of air inside the room is 96 kg, which is the typical mass of a human.
Check Your Understanding 2
The density of air at standard conditions (P = 1.00 atm and T = 20 °C) is 1.28 kg/m³. At what pressure is the density 0.64 kg/m³ if the temperature and number of molecules are kept constant?
The Ideal Gas Law Restated Using Moles
A very common expression of the ideal gas law uses the number of moles, n, rather than the number of atoms and molecules, N. We start from the ideal gas law, PV = NkT, and multiply and divide the equation by Avogadro's number N_A. This gives
PV = (N / N_A) N_A kT.
Note that n = N / N_A is the number of moles. We define the universal gas constant R = N_A k, and obtain the ideal gas law in terms of moles.
IDEAL GAS LAW (IN TERMS OF MOLES)
The ideal gas law (in terms of moles) is
PV = nRT.
The numerical value of R in SI units is
R = N_A k = (6.02 × 10²³ mol⁻¹)(1.38 × 10⁻²³ J/K) = 8.31 J/(mol·K).
In other units,
R = 1.99 cal/(mol·K) = 0.0821 (L·atm)/(mol·K).
You can use whichever value of R is most convenient for a particular problem.
Example 4: Calculating Number of Moles: Gas in a Bike Tire
How many moles of gas are in a bike tire with a volume of 2.00 × 10⁻³ m³ (2.00 L), a pressure of 7.00 × 10⁵ Pa (a gauge pressure of just under 90.0 lb/in²), and at a temperature of 18.0 °C?
Identify the knowns and unknowns, and choose an equation to solve for the unknown. In this case, we solve the ideal gas law, PV = nRT, for the number of moles n.
1. Identify the knowns: P = 7.00 × 10⁵ Pa, V = 2.00 × 10⁻³ m³, T = 18.0 °C = 291 K, R = 8.31 J/(mol·K).
2. Rearrange the equation to solve for n and substitute known values:
n = PV / RT = (7.00 × 10⁵ Pa)(2.00 × 10⁻³ m³) / ((8.31 J/(mol·K))(291 K)) = 0.579 mol.
The most convenient choice for R in this case is 8.31 J/(mol·K), because our known quantities are in SI units. The pressure and temperature are obtained from the initial conditions in Example 1, but we would get the same answer if we used the final values.
The ideal gas law can be considered to be another manifestation of the law of conservation of energy (see Chapter 7.6 Conservation of Energy). Work done on a gas results in an increase in its energy, increasing pressure and/or temperature, or decreasing volume. This increased energy can also be viewed as increased internal kinetic energy, given the gas's atoms and molecules.
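The worked examples above reduce to one-line calculations. Here is a minimal Python sketch (our own illustration, using the values as reconstructed above, not part of the textbook):

    k_B = 1.38e-23      # Boltzmann constant, J/K
    R   = 8.31          # universal gas constant, J/(mol*K)

    # Example 1: tire pressure rise at constant volume, P_f = P_0 * T_f / T_0
    P0, T0, Tf = 7.00e5, 18.0 + 273.15, 35.0 + 273.15
    print(f"P_f = {P0 * Tf / T0:.3e} Pa")      # ~7.41e5 Pa

    # Example 2: molecules in 1 m^3 at STP, N = PV / (kT)
    P, V, T = 1.01e5, 1.00, 273.15
    print(f"N = {P * V / (k_B * T):.3e} molecules")   # ~2.68e25

    # Example 4: moles in a 2.00 L bike tire, n = PV / (RT)
    n = 7.00e5 * 2.00e-3 / (R * (18.0 + 273.15))
    print(f"n = {n:.3f} mol")                  # ~0.58 mol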
The Ideal Gas Law and Energy
Let us now examine the role of energy in the behavior of gases. When you inflate a bike tire by hand, you do work by repeatedly exerting a force through a distance. This energy goes into increasing the pressure of air inside the tire and increasing the temperature of the pump and the air. The ideal gas law is closely related to energy: the units on both sides are joules. The right-hand side of the ideal gas law PV = NkT is NkT. This term is roughly the amount of translational kinetic energy of N atoms or molecules at an absolute temperature T, as we shall see formally in Chapter 13.4 Kinetic Theory: Atomic and Molecular Explanation of Pressure and Temperature. The left-hand side of the ideal gas law is PV, which also has the units of joules. We know from our study of fluids that pressure is one type of potential energy per unit volume, so pressure multiplied by volume is energy. The important point is that there is energy in a gas related to both its pressure and its volume. The energy can be changed when the gas is doing work as it expands—something we explore in Chapter 14 Heat and Heat Transfer Methods—similar to what occurs in gasoline or steam engines and turbines.
PROBLEM-SOLVING STRATEGY: THE IDEAL GAS LAW
Step 1 Examine the situation to determine that an ideal gas is involved. Most gases are nearly ideal.
Step 2 Make a list of what quantities are given, or can be inferred from the problem as stated (identify the known quantities). Convert known values into proper SI units (K for temperature, Pa for pressure, m³ for volume, molecules for N, and moles for n).
Step 3 Identify exactly what needs to be determined in the problem (identify the unknown quantities). A written list is useful.
Step 4 Determine whether the number of molecules or the number of moles is known, in order to decide which form of the ideal gas law to use. The first form is PV = NkT and involves N, the number of atoms or molecules. The second form is PV = nRT and involves n, the number of moles.
Step 5 Solve the ideal gas law for the quantity to be determined (the unknown quantity). You may need to take a ratio of final states to initial states to eliminate the unknown quantities that are kept fixed.
Step 6 Substitute the known quantities, along with their units, into the appropriate equation, and obtain numerical solutions complete with units. Be certain to use absolute temperature and absolute pressure.
Step 7 Check the answer to see if it is reasonable: Does it make sense?
Check Your Understanding 3
Liquids and solids have densities about 1000 times greater than gases. Explain how this implies that the distances between atoms and molecules in gases are about 10 times greater than the size of their atoms and molecules.
- The ideal gas law relates the pressure and volume of a gas to the number of gas molecules and the temperature of the gas.
- The ideal gas law can be written in terms of the number of molecules of gas: PV = NkT, where P is pressure, V is volume, T is temperature, N is number of molecules, and k is the Boltzmann constant, k = 1.38 × 10⁻²³ J/K.
- A mole is the number of atoms in a 12-g sample of carbon-12.
- The number of molecules in a mole is called Avogadro's number, N_A = 6.02 × 10²³ mol⁻¹.
- A mole of any substance has a mass in grams equal to its molecular weight, which can be determined from the periodic table of elements.
- The ideal gas law can also be written and solved in terms of the number of moles of gas: PV = nRT, where n is the number of moles and R is the universal gas constant, R = 8.31 J/(mol·K).
- The ideal gas law is generally valid at temperatures well above the boiling temperature.
1: Find out the human population of Earth.
Is there a mole of people inhabiting Earth? If the average mass of a person is 60 kg, calculate the mass of a mole of people. How does the mass of a mole of people compare with the mass of Earth?
2: Under what circumstances would you expect a gas to behave significantly differently than predicted by the ideal gas law?
3: A constant-volume gas thermometer contains a fixed amount of gas. What property of the gas is measured to indicate its temperature?
Problems & Exercises
1: The gauge pressure in your car tires is 2.50 × 10⁵ N/m² at a temperature of 35.0 °C when you drive it onto a ferry boat to Alaska. What is their gauge pressure later, when their temperature has dropped to −40.0 °C?
2: Convert an absolute pressure of 7.00 × 10⁵ N/m² to gauge pressure in lb/in². (This value was stated to be just less than 90.0 lb/in² in Example 4. Is it?)
3: Suppose a gas-filled incandescent light bulb is manufactured so that the gas inside the bulb is at atmospheric pressure when the bulb has a temperature of 20.0 °C. (a) Find the gauge pressure inside such a bulb when it is hot, assuming its average temperature is 60.0 °C (an approximation) and neglecting any change in volume due to thermal expansion or gas leaks. (b) The actual final pressure for the light bulb will be less than calculated in part (a) because the glass bulb will expand. What will the actual final pressure be, taking this into account? Is this a negligible difference?
4: Large helium-filled balloons are used to lift scientific equipment to high altitudes. (a) What is the pressure inside such a balloon if it starts out at sea level with a temperature of 10.0 °C and rises to an altitude where its volume is twenty times the original volume and its temperature is −50.0 °C? (b) What is the gauge pressure? (Assume atmospheric pressure is constant.)
5: Confirm that the units of nRT are those of energy for each value of R: (a) 8.31 J/(mol·K), (b) 1.99 cal/(mol·K), and (c) 0.0821 (L·atm)/(mol·K).
6: In the text, it was shown that N/V = 2.68 × 10²⁵ m⁻³ for gas at STP. (a) Show that this quantity is equivalent to 2.68 × 10¹⁹ cm⁻³, as stated. (b) About how many atoms are there in one μm³ (a cubic micrometer) at STP? (c) What does your answer to part (b) imply about the separation of atoms and molecules?
7: Calculate the number of moles in the 2.00-L volume of air in the lungs of the average person. Note that the air is at 37.0 °C (body temperature).
8: An airplane passenger has 100 cm³ of air in his stomach just before the plane takes off from a sea-level airport. What volume will the air have at cruising altitude if cabin pressure drops to 7.50 × 10⁴ N/m²?
9: (a) What is the volume (in km³) of Avogadro's number of sand grains if each grain is a cube and has sides that are 1.0 mm long? (b) How many kilometers of beaches in length would this cover if the beach averages 100 m in width and 10.0 m in depth? Neglect air spaces between grains.
10: An expensive vacuum system can achieve a pressure as low as 1.00 × 10⁻⁷ N/m² at 20 °C. How many atoms are there in a cubic centimeter at this pressure and temperature?
11: The number density of gas atoms at a certain location in the space above our planet is about 1.00 × 10¹¹ m⁻³, and the pressure is 2.75 × 10⁻¹⁰ N/m² in this space. What is the temperature there?
12: A bicycle tire has a pressure of 7.00 × 10⁵ N/m² at a temperature of 18.0 °C and contains 2.00 L of gas. What will its pressure be if you let out an amount of air that has a volume of 100 cm³ at atmospheric pressure? Assume tire temperature and volume remain constant.
13: A high-pressure gas cylinder contains 50.0 L of toxic gas at a pressure of 1.40 × 10⁷ N/m² and a temperature of 25.0 °C. Its valve leaks after the cylinder is dropped. The cylinder is cooled to dry ice temperature (−78.5 °C) to reduce the leak rate and pressure so that it can be safely repaired.
(a) What is the final pressure in the tank, assuming a negligible amount of gas leaks while being cooled and that there is no phase change? (b) What is the final pressure if one-tenth of the gas escapes? (c) To what temperature must the tank be cooled to reduce the pressure to 1.00 atm (assuming the gas does not change phase and that there is no leakage during cooling)? (d) Does cooling the tank appear to be a practical solution?
14: Find the number of moles in 2.00 L of gas at 35.0 °C and under 7.41 × 10⁶ N/m² of pressure.
15: Calculate the depth to which Avogadro's number of table tennis balls would cover Earth. Each ball has a diameter of 3.75 cm. Assume the space between balls adds an extra 25.0% to their volume and assume they are not crushed by their own weight.
16: (a) What is the gauge pressure in a 25.0 °C car tire containing 3.60 mol of gas in a 30.0 L volume? (b) What will its gauge pressure be if you add 1.00 L of gas originally at atmospheric pressure and 25.0 °C? Assume the temperature returns to 25.0 °C and the volume remains constant.
17: (a) In the deep space between galaxies, the density of atoms is as low as 10⁶ atoms/m³, and the temperature is a frigid 2.7 K. What is the pressure? (b) What volume (in m³) is occupied by 1 mol of gas? (c) If this volume is a cube, what is the length of its sides in kilometers?
- ideal gas law - the physical law that relates the pressure and volume of a gas to the number of gas molecules or number of moles of gas and the temperature of the gas
- Boltzmann constant - a physical constant that relates energy to temperature; k = 1.38 × 10⁻²³ J/K
- Avogadro's number - the number of molecules or atoms in one mole of a substance; N_A = 6.02 × 10²³ mol⁻¹
- mole - the quantity of a substance whose mass (in grams) is equal to its molecular mass
Check Your Understanding 1
We first need to calculate the molar mass (the mass of one mole) of acetaminophen. To do this, we need to multiply the number of atoms of each element by the element's atomic mass. Then we need to calculate the number of moles in 325 mg. Then use Avogadro's number to calculate the number of molecules.
Check Your Understanding 2
The best way to approach this question is to think about what is happening. If the density drops to half its original value and no molecules are lost, then the volume must double. If we look at the equation PV = NkT, we see that when the temperature is constant, the pressure is inversely proportional to volume. Therefore, if the volume doubles, the pressure must drop to half its original value, to 0.50 atm.
Check Your Understanding 3
Atoms and molecules are close together in solids and liquids. In gases they are separated by empty space. Thus gases have lower densities than liquids and solids. Density is mass per unit volume, and volume is related to the size of a body (such as a sphere) cubed. So if the distance between atoms and molecules increases by a factor of 10, then the volume occupied increases by a factor of 1000, and the density decreases by a factor of 1000.
Problems & Exercises
3: (a) 0.136 atm. (b) 0.135 atm. The difference between this value and the value from part (a) is negligible.
13: (c) 2.16 K. (d) No. The final temperature needed is much too low to be easily achieved for a large object.
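As a cross-check, a short script (again our own, assuming the problem values as reconstructed above) reproduces the two listed answers:

    # Verify answers 3(a) and 13(c) from the reconstructed problem values.

    # Problem 3(a): bulb sealed at 1 atm and 20.0 C, heated to 60.0 C.
    P_hot = 1.0 * (60.0 + 273.15) / (20.0 + 273.15)   # atm, constant volume
    print(f"Gauge pressure when hot: {P_hot - 1.0:.3f} atm")   # ~0.136 atm

    # Problem 13(c): cylinder at 1.40e7 N/m^2 and 25.0 C; temperature needed
    # to bring the pressure down to 1.00 atm at constant volume.
    T_needed = (25.0 + 273.15) * 1.013e5 / 1.40e7
    print(f"Required temperature: {T_needed:.2f} K")           # ~2.16 K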
Carathéodory: Conformal representation
1. By an isogonal (winkeltreu) representation of two areas on one another we mean a one-one, continuous, and continuously differentiable representation of the areas, which is such that two curves of the first area which intersect at an angle α are transformed into two curves intersecting at the same angle α. If the sense of rotation of a tangent is preserved, an isogonal transformation is called conformal. Disregarding as trivial the Euclidean magnification (Ähnlichkeitstransformation) of the plane, we may say that the oldest known transformation of this kind is the stereographic projection of the sphere, which was used by Ptolemy (flourished in the second quarter of the second century; died after A.D. 161) for the representation of the celestial sphere; it transforms the sphere conformally into a plane. A quite different conformal representation of the sphere on a plane area is given by Mercator's Projection; in this the spherical earth, cut along a meridian circle, is conformally represented on a plane strip. The first map constructed by this transformation was published by Mercator (1512-1594) in 1569, and the method has been universally adopted for the construction of sea-maps.
2. A comparison of two maps of the same country, one constructed by stereographic projection of the spherical earth and the other by Mercator's Projection, will show that conformal transformation does not imply similarity of corresponding figures. Other non-trivial conformal representations of a plane area on a second plane area are obtained by comparing the various stereographic projections of the spherical earth which correspond to different positions of the centre of projection on the earth's surface. It was considerations such as these which led Lagrange (1736-1813) in 1779 to obtain all conformal representations of a portion of the earth's surface on a plane area wherein all circles of latitude and of longitude are represented by circular arcs.
3. In 1822 Gauss (1777-1855) stated and completely solved the general problem of finding all conformal transformations which transform a sufficiently small neighbourhood of a point on an arbitrary analytic surface into a plane area. This work of Gauss appeared to give the whole inquiry its final solution; actually it left unanswered the much more difficult question whether and in what way a given finite portion of the surface can be represented on a portion of the plane. This was first pointed out by Riemann (1826-1866), whose Dissertation (1851) marks a turning-point in the history of the problem which has been decisive for its whole later development; Riemann not only introduced all the ideas which have been at the basis of all subsequent investigation of the problem of conformal representation, but also showed that the problem itself is of fundamental importance for the theory of functions.
4. Riemann enunciated, among other results, the theorem that every simply-connected plane area which does not comprise the whole plane can be represented conformally on the interior of a circle. In the proof of this theorem, which forms the foundation of the whole theory, he assumes as obvious that a certain problem in the calculus of variations possesses a solution, and this assumption, as Weierstrass (1815-1897) first pointed out, invalidates his proof. Quite simple, analytic, and in every way regular problems in the calculus of variations are now known which do not always possess solutions.
Nevertheless, about fifty years after Riemann, Hilbert was able to prove rigorously that the particular problem which arose in Riemann's work does possess a solution; this theorem is known as Dirichlet's Principle. Meanwhile, however, the truth of Riemann's conclusions had been established in a rigorous manner by Carl Neumann and, in particular, by H A Schwarz. The theory which Schwarz created for this purpose is particularly elegant, interesting and instructive; it is, however, somewhat intricate, and uses a number of theorems from the theory of the logarithmic potential, proofs of which must be included in any complete account of the method. During the present century the work of a number of mathematicians has created new methods which make possible a very simple treatment of our problem; it is the purpose of the following pages to give an account of these methods which, while as short as possible, shall yet be essentially complete.
Co-ordinate Geometry is a relatively modern and immensely useful branch of mathematics. The idea of giving points in the plane co-ordinates makes it much easier to deal with many properties of geometry that had previously been tackled using so-called Euclidean geometry (i.e., theorems). Coordinates are pairs of numbers that are used to determine points in a plane, relative to a special point called the origin. The origin has coordinates (0, 0). One fundamental idea in co-ordinate geometry is that of the equation of a line. In this topic, we examine the idea of the equation of a line and its properties, e.g., slope. We also consider other basic concepts such as distance, midpoint and the area of a triangle. It is also important to note that many of the ideas from this topic come into many others, e.g., the circle, graphs and linear programming. There is also a very close link between the Argand diagram in Complex Numbers and co-ordinate geometry. For this course the study of the Line can be divided into the following sections:
Distance and Midpoint
The distance between two points (x₁, y₁) and (x₂, y₂) is √((x₂ − x₁)² + (y₂ − y₁)²), and their midpoint is ((x₁ + x₂)/2, (y₁ + y₂)/2). If you draw this out on a graph so that a right-angled triangle is formed by the line segment and lines parallel to the X and Y axes through the points at each end of the line segment, you can see that the distance formula is actually an application of Pythagoras' theorem.
Slope of a Line
The slope of the line through (x₁, y₁) and (x₂, y₂) is m = (y₂ − y₁)/(x₂ − x₁). If lines are parallel, their slopes are the same.
Equation of a Line
A line with slope m through the point (x₁, y₁) has equation y − y₁ = m(x − x₁).
Area of a Triangle
The area of the triangle with vertices (x₁, y₁), (x₂, y₂) and (x₃, y₃) is ½|x₁(y₂ − y₃) + x₂(y₃ − y₁) + x₃(y₁ − y₂)|.
This site includes some material relevant to coordinate geometry. Each link brings you through to a number of questions on that topic, and by clicking on the question number, you are shown a worked solution of that question.
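The four formulas above can be bundled into a small Python sketch (our own illustration; the helper names are hypothetical):

    import math

    def distance(p, q):
        """Distance between points p and q via Pythagoras' theorem."""
        return math.hypot(q[0] - p[0], q[1] - p[1])

    def midpoint(p, q):
        """Midpoint of the segment pq."""
        return ((p[0] + q[0]) / 2, (p[1] + q[1]) / 2)

    def slope(p, q):
        """Slope of the line through p and q (requires q[0] != p[0])."""
        return (q[1] - p[1]) / (q[0] - p[0])

    def triangle_area(a, b, c):
        """Area of triangle abc using the coordinate formula."""
        return abs(a[0]*(b[1]-c[1]) + b[0]*(c[1]-a[1]) + c[0]*(a[1]-b[1])) / 2

    print(distance((0, 0), (3, 4)))               # 5.0
    print(midpoint((0, 0), (4, 6)))               # (2.0, 3.0)
    print(slope((1, 1), (3, 5)))                  # 2.0
    print(triangle_area((0, 0), (4, 0), (0, 3)))  # 6.0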
Sediment transport is the movement of solid particles (sediment), typically due to gravity acting on the sediment and/or the movement of the fluid in which the sediment is entrained. Sediment transport occurs in natural systems where the particles are clastic rocks (sand, gravel, boulders, etc.), mud, or clay; the fluid is air, water, or ice; and the force of gravity acts to move the particles along the sloping surface on which they are resting. Sediment transport due to fluid motion occurs in rivers, oceans, lakes, seas, and other bodies of water due to currents and tides. Transport is also caused by glaciers as they flow, and on terrestrial surfaces under the influence of wind. Sediment transport due only to gravity can occur on sloping surfaces in general, including hillslopes, scarps, cliffs, and the continental shelf—continental slope boundary.
Sediment transport is important in the fields of sedimentary geology, geomorphology, civil engineering, hydraulic engineering and environmental engineering (see applications, below). Knowledge of sediment transport is most often used to determine whether erosion or deposition will occur, the magnitude of this erosion or deposition, and the time and distance over which it will occur.
Aeolian
Aeolian or eolian (depending on the parsing of æ) is the term for sediment transport by wind. This process results in the formation of ripples and sand dunes. Typically, the size of the transported sediment is fine sand (<1 mm) and smaller, because air is a fluid with low density and viscosity, and can therefore not exert very much shear on its bed. Aeolian sediment transport is common on beaches and in the arid regions of the world, because it is in these environments that vegetation does not prevent the presence and motion of fields of sand. Wind-blown very fine-grained dust is capable of entering the upper atmosphere and moving across the globe. Dust from the Sahara deposits on the Canary Islands and islands in the Caribbean, and dust from the Gobi desert has been deposited on the western United States. This sediment is important to the soil budget and ecology of several islands.
Fluvial
In geology, physical geography, and sediment transport, fluvial processes relate to flowing water in natural systems. This encompasses rivers, streams, periglacial flows, flash floods and glacial lake outburst floods. Sediment moved by water can be larger than sediment moved by air because water has both a higher density and viscosity. In typical rivers the largest carried sediment is of sand and gravel size, but larger floods can carry cobbles and even boulders. Fluvial sediment transport can result in the formation of ripples and dunes, in fractal-shaped patterns of erosion, in complex patterns of natural river systems, and in the development of floodplains.
Coastal
Coastal sediment transport takes place in near-shore environments due to the motions of waves and currents. At the mouths of rivers, coastal sediment and fluvial sediment transport processes mesh to create river deltas.
Glacial
As glaciers move over their beds, they entrain and move material of all sizes. Glaciers can carry the largest sediment, and areas of glacial deposition often contain a large number of glacial erratics, many of which are several metres in diameter. Glaciers also pulverize rock into "glacial flour", which is so fine that it is often carried away by winds to create loess deposits thousands of kilometres afield.
Sediment entrained in glaciers often moves approximately along the glacial flowlines, causing it to appear at the surface in the ablation zone.
Hillslope
In hillslope sediment transport, a variety of processes move regolith downslope. These include:
- Soil creep
- Tree throw
- Movement of soil by burrowing animals
- Slumping and landsliding of the hillslope
These processes generally combine to give the hillslope a profile that looks like a solution to the diffusion equation, where the diffusivity is a parameter that relates to the ease of sediment transport on the particular hillslope. For this reason, the tops of hills generally have a parabolic convex-up profile, which grades into a concave-up profile around valleys. As hillslopes steepen, however, they become more prone to episodic landslides and other mass wasting events. Therefore, hillslope processes are better described by a nonlinear diffusion equation in which classic diffusion dominates for shallow slopes and erosion rates go to infinity as the hillslope reaches a critical angle of repose.
Debris flow
Large masses of material are moved in debris flows, hyperconcentrated mixtures of mud, clasts that range up to boulder-size, and water. Debris flows move as granular flows down steep mountain valleys and washes. Because they transport sediment as a granular mixture, their transport mechanisms and capacities scale differently from those of fluvial systems.
Applications
Sediment transport is applied to solve many environmental, geotechnical, and geological problems. Measuring or quantifying sediment transport or erosion is therefore important for coastal engineering. Several sediment erosion devices have been designed in order to quantify sediment erosion (e.g., Particle Erosion Simulator (PES)). One such device, also referred to as the BEAST (Benthic Environmental Assessment Sediment Tool), has been calibrated in order to quantify rates of sediment erosion.
Movement of sediment is important in providing habitat for fish and other organisms in rivers. Therefore, managers of highly regulated rivers, which are often sediment-starved due to dams, are often advised to stage short floods to refresh the bed material and rebuild bars. This is also important, for example, in the Grand Canyon of the Colorado River, to rebuild shoreline habitats also used as campsites.
Sediment discharge into a reservoir formed by a dam forms a reservoir delta. This delta will fill the basin, and eventually either the reservoir will need to be dredged or the dam will need to be removed. Knowledge of sediment transport can be used to properly plan to extend the life of a dam.
Geologists can use inverse solutions of transport relationships to understand flow depth, velocity, and direction from sedimentary rocks and young deposits of alluvial materials.
Flow in culverts, over dams, and around bridge piers can cause erosion of the bed. This erosion can damage the environment and expose or unsettle the foundations of the structure. Therefore, good knowledge of the mechanics of sediment transport in a built environment is important for civil and hydraulic engineers.
When suspended sediment transport is increased due to human activities, causing environmental problems including the filling of channels, it is called siltation, after the grain-size fraction dominating the process.
Initiation of motion
For a fluid to begin transporting sediment that is currently at rest on a surface, the boundary (or bed) shear stress τ_b exerted by the fluid must exceed the critical shear stress τ_c for the initiation of motion of grains at the bed. This basic criterion for the initiation of motion can be written as:
τ_b > τ_c.
This is typically represented by a comparison between a dimensionless shear stress τ* and a dimensionless critical shear stress τ*_c. The nondimensionalization is in order to compare the driving forces of particle motion (shear stress) to the resisting forces that would make it stationary (particle density and size). This dimensionless shear stress, τ*, is called the Shields parameter and is defined as:
τ* = τ_b / ((ρ_s − ρ) g D),
where ρ_s is the particle density, ρ is the fluid density, g is the acceleration due to gravity, and D is the grain diameter. And the new equation to solve becomes:
τ* > τ*_c.
The equations included here describe sediment transport for clastic, or granular, sediment. They do not work for clays and muds because these types of floccular sediments do not fit the geometric simplifications in these equations, and also interact through electrostatic forces. The equations were also designed for fluvial sediment transport of particles carried along in a liquid flow, such as that in a river, canal, or other open channel.
Only one size of particle is considered in this equation. However, river beds are often formed by a mixture of sediment of various sizes. In case of partial motion, where only a part of the sediment mixture moves, the river bed becomes enriched in large gravel as the smaller sediments are washed away. The smaller sediments present under this layer of large gravel have a lower probability of movement, and total sediment transport decreases. This is called the armouring effect. Other forms of armouring of sediment or decreasing rates of sediment erosion can be caused by carpets of microbial mats under conditions of high organic loading.
Critical shear stress
The Shields diagram empirically shows how the dimensionless critical shear stress (i.e. the dimensionless shear stress required for the initiation of motion) is a function of a particular form of the particle Reynolds number, or Reynolds number related to the particle. This allows the criterion for the initiation of motion to be rewritten in terms of a solution for a specific version of the particle Reynolds number, called Re*_p. This can then be solved by using the empirically derived Shields curve to find τ*_c as a function of a specific form of the particle Reynolds number called the boundary Reynolds number. The mathematical solution of the equation was given by Dey.
Particle Reynolds number
In general, a particle Reynolds number has the form:
Re_p = U D / ν,
where U is a characteristic particle velocity, D is the grain diameter (a characteristic particle size), and ν is the kinematic viscosity, which is given by the dynamic viscosity, μ, divided by the fluid density, ρ.
The specific particle Reynolds number of interest is called the boundary Reynolds number, and it is formed by replacing the velocity term in the particle Reynolds number by the shear velocity, u*, which is a way of rewriting shear stress in terms of velocity:
u* = √(τ_b / ρ),
where τ_b is the bed shear stress (described below); κ, the von Kármán constant, appears again below in the Rouse number. The particle Reynolds number is therefore given by:
Re*_p = u* D / ν.
Bed shear stress
The boundary Reynolds number can be used with the Shields diagram to empirically solve the equation
τ*_c = f(Re*_p),
which solves the right-hand side of the equation
τ* > τ*_c.
In order to solve the left-hand side, expanded as
τ* = τ_b / ((ρ_s − ρ) g D),
the bed shear stress, τ_b, needs to be found. There are several ways to solve for the bed shear stress.
The simplest approach is to assume the flow is steady and uniform, using the reach-averaged depth and slope. Because it is difficult to measure shear stress in situ, this method is also one of the most commonly used. It is known as the depth-slope product. For a river undergoing approximately steady, uniform equilibrium flow, of approximately constant depth h and slope angle θ over the reach of interest, and whose width is much greater than its depth, the bed shear stress is given by momentum considerations stating that the gravity force component in the flow direction exactly equals the friction force. For a wide channel, this yields:

τ_b = ρ g h sin(θ)

For the shallow slope angles found in almost all natural lowland streams, the small-angle formula shows that sin(θ) is approximately equal to tan(θ), which is given by S, the slope. Rewritten with this:

τ_b = ρ g h S

Shear velocity, velocity, and friction factor

For the steady case, by extrapolating the depth-slope product and the equation for shear velocity:

τ_b = ρ g h S
u_* = √(τ_b / ρ)

the depth-slope product can be rewritten as:

u_* = √(g h S)

u_* is related to the mean flow velocity ū through the generalized Darcy–Weisbach friction factor C_f, which is equal to the Darcy–Weisbach friction factor divided by 8 (for mathematical convenience). Inserting this friction factor gives:

τ_b = ρ C_f ū²

For all flows that cannot be simplified as a single-slope infinite channel (as in the depth-slope product, above), the bed shear stress can be found locally by applying the Saint-Venant equations for continuity, which consider accelerations within the flow.

The criterion for the initiation of motion, established earlier, states that

τ* > τ*_c

In this equation, τ* = τ_b / ((ρ_s − ρ) g D), and τ*_c is a function of the boundary Reynolds number, a specific type of particle Reynolds number. For a particular particle Reynolds number, τ*_c will be an empirical constant given by the Shields curve or by another set of empirical data (depending on whether or not the grain size is uniform). Therefore, the final equation to solve is:

τ_b / ((ρ_s − ρ) g D) > τ*_c

Some assumptions allow the solution of the above equation. The first assumption is that a good approximation of reach-averaged shear stress is given by the depth-slope product. The equation then can be rewritten as:

ρ g h S / ((ρ_s − ρ) g D) > τ*_c

Moving and re-combining the terms produces:

h S > τ*_c R D

where R = (ρ_s − ρ)/ρ is the submerged specific gravity of the sediment.

The second assumption is that the particle Reynolds number is high. This typically applies to particles of gravel size or larger in a stream, and means that the critical shear stress is constant. The Shields curve shows that for a bed with a uniform grain size,

τ*_c = 0.06

Later researchers have shown that this value is closer to

τ*_c = 0.03

for more uniformly sorted beds. Retaining the generic symbol τ*_c so that either value can be inserted at the end, the equation now reads:

h S = τ*_c R D

This final expression shows that the product of the channel depth and slope is equal to the Shields criterion times the submerged specific gravity of the particles times the particle diameter. For a typical situation, such as quartz-rich sediment (ρ_s = 2650 kg/m³) in water (ρ = 1000 kg/m³), the submerged specific gravity is equal to 1.65. Plugging this into the equation above gives:

h S = 1.65 τ*_c D

For the Shields criterion of τ*_c = 0.06, 0.06 × 1.65 = 0.099, which is well within standard margins of error of 0.1. Therefore, for a uniform bed,

h S ≈ 0.1 D

For these situations, the product of the depth and slope of the flow should be about 10% of the grain diameter.
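As a check on this arithmetic, a short Python sketch of the depth-slope product and the resulting uniform-bed criterion; the threshold follows the Shields value of 0.06 above, while the flow numbers are illustrative assumptions:

```python
RHO_W, G = 1000.0, 9.81
R = 1.65  # submerged specific gravity of quartz in water

def depth_slope_shear_stress(h, S):
    """Reach-averaged bed shear stress from the depth-slope product: tau_b = rho g h S."""
    return RHO_W * G * h * S

def exceeds_uniform_bed_threshold(h, S, D, tau_star_c=0.06):
    """Initiation-of-motion check h S > tau*_c R D for a uniform bed."""
    return h * S > tau_star_c * R * D

# Illustrative example: 1 m deep flow on a 0.1% slope over 5 mm gravel.
h, S, D = 1.0, 0.001, 0.005
print(depth_slope_shear_stress(h, S))          # ~9.8 Pa
print(exceeds_uniform_bed_threshold(h, S, D))  # True: 1e-3 > 0.06 * 1.65 * 0.005
```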
The mixed-grain-size bed value of τ*_c = 0.03 is supported by more recent research as being more broadly applicable, because most natural streams have mixed grain sizes. If this value is used, and D is changed to D_50 ("50" for the 50th percentile, or the median grain size, as an appropriate value for a mixed-grain-size bed), the equation becomes:

h S ≈ 0.05 D_50

which means that the depth times the slope should be about 5% of the median grain diameter in the case of a mixed-grain-size bed.

Modes of entrainment

The sediments entrained in a flow can be transported along the bed as bed load, in the form of sliding and rolling grains, or in suspension as suspended load advected by the main flow. Some sediment materials may also come from the upstream reaches and be carried downstream in the form of wash load. The part of the flow in which a sediment particle is carried is determined by the Rouse number, which combines the density ρ_s and diameter d of the sediment particle with the density ρ and kinematic viscosity ν of the fluid. Here, the Rouse number is given by P:

P = w_s / (κ u_*)

The term in the numerator is the (downwards) sediment settling velocity w_s, which is discussed below. The upwards velocity scale on the grain is given as the product of the von Kármán constant, κ = 0.4, and the shear velocity, u_*.

|Mode of Transport||Rouse Number|
|Initiation of motion||>7.5|
|Bed load||>2.5, <7.5|
|Suspended load: 50% Suspended||>1.2, <2.5|
|Suspended load: 100% Suspended||>0.8, <1.2|
|Wash load||<0.8|

Settling velocity

The settling velocity (also called the "fall velocity" or "terminal velocity") is a function of the particle Reynolds number. Generally, for small particles (laminar approximation), it can be calculated with Stokes' law. For larger particles (turbulent particle Reynolds numbers), fall velocity is calculated with the turbulent drag law. Dietrich (1982) compiled a large amount of published data to which he empirically fit settling velocity curves. Ferguson and Church (2004) analytically combined the expressions for Stokes flow and a turbulent drag law into a single equation that works for all sizes of sediment, and successfully tested it against the data of Dietrich. Their equation is:

w_s = R g D² / (C₁ ν + √(0.75 C₂ R g D³))

In this equation, w_s is the sediment settling velocity, g is the acceleration due to gravity, and D is the mean sediment diameter. ν is the kinematic viscosity of water, which is approximately 1.0 × 10⁻⁶ m²/s for water at 20 °C. C₁ and C₂ are constants related to the shape and smoothness of the grains:

|Constant||Smooth Spheres||Natural Grains: Sieve Diameters||Natural Grains: Nominal Diameters||Limit for Ultra-Angular Grains|
|C₁||18||18||20||24|
|C₂||0.4||1.0||1.1||1.2|

The expression for fall velocity can be simplified so that it can be solved only in terms of D. We use the sieve diameters for natural grains, g = 9.8 m/s², and the values given above for ν, R, C₁, and C₂. From these parameters, the fall velocity is given by the expression:

w_s = 1.65 g D² / (1.8 × 10⁻⁵ + √(1.2375 g D³))
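A minimal Python sketch of the Ferguson and Church equation, assuming the natural-grain (sieve diameter) constants above; the example grain sizes are illustrative:

```python
import math

G = 9.81      # gravitational acceleration [m/s^2]
NU = 1.0e-6   # kinematic viscosity of water at 20 C [m^2/s]
R = 1.65      # submerged specific gravity of quartz (assumed)

def settling_velocity(D, C1=18.0, C2=1.0):
    """Ferguson & Church (2004) settling velocity [m/s] for grain diameter D [m].

    ws = R g D^2 / (C1 nu + sqrt(0.75 C2 R g D^3)); the single expression
    recovers Stokes' law for small D and a turbulent drag law for large D.
    C1 = 18 and C2 = 1.0 are the sieve-diameter constants for natural grains.
    """
    return R * G * D ** 2 / (C1 * NU + math.sqrt(0.75 * C2 * R * G * D ** 3))

for D in (1e-4, 1e-3, 1e-2):  # 0.1 mm sand up to 10 mm gravel
    print(f"D = {1000 * D:.1f} mm -> ws = {settling_velocity(D):.4f} m/s")
```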
In 1935, Filip Hjulström created the Hjulström curve, a graph showing the relationship between the size of sediment and the velocity required to erode (lift), transport, or deposit it. The graph is logarithmic. Åke Sundborg later modified the Hjulström curve to show separate curves for the movement threshold corresponding to several water depths, as is necessary if the flow velocity rather than the boundary shear stress (as in the Shields diagram) is used for the flow strength.

Today this curve has mainly historical value, although its simplicity is still attractive. Among its drawbacks are that it does not take the water depth into account and, more importantly, that it does not show that sedimentation is caused by flow-velocity deceleration and erosion by flow acceleration. The dimensionless Shields diagram is now widely accepted for the initiation of sediment motion in rivers.

Transport rate

Formulas to calculate the sediment transport rate exist for sediment moving in several different parts of the flow. These formulas are often segregated into bed load, suspended load, and wash load. They may sometimes also be segregated into bed material load and wash load.

Bed load

Bed load moves by rolling, sliding, and hopping (or saltating) over the bed, and moves at a small fraction of the fluid flow velocity. Bed load is generally thought to constitute 5–10% of the total sediment load in a stream, making it less important in terms of mass balance. However, the bed material load (the bed load plus the portion of the suspended load that comprises material derived from the bed) is often dominated by bed load, especially in gravel-bed rivers. This bed material load is the only part of the sediment load that actively interacts with the bed, and as the bed load is an important component of it, bed load plays a major role in controlling the morphology of the channel.

Bed load transport rates are usually expressed as being related to excess dimensionless shear stress, (τ* − τ*_c), raised to some power. Excess dimensionless shear stress is a nondimensional measure of bed shear stress about the threshold for motion. Bed load transport rates may also be given by a ratio of bed shear stress to critical shear stress, which is equivalent in both the dimensional and nondimensional cases. This ratio, called the "transport stage" φ, is important in that it expresses bed shear stress as a multiple of the value of the criterion for the initiation of motion. When used in sediment transport formulae, this ratio is typically raised to a power.

The majority of the published relations for bed load transport are given in dry sediment weight per unit channel width b ("breadth"):

q_s = Q_s / b

Due to the difficulty of estimating bed load transport rates, these equations are typically only suitable for the situations for which they were designed.

Notable bed load transport formulae

Meyer-Peter Müller and derivatives

The transport formula of Meyer-Peter and Müller, originally developed in 1948, was designed for well-sorted fine gravel at a transport stage of about 8. The formula uses the above nondimensionalization for shear stress,

τ* = τ_b / ((ρ_s − ρ) g D)

and the Einstein nondimensionalization for sediment volumetric discharge per unit width,

q_s* = q_s / (D √(R g D))

Their formula reads:

q_s* = 8 (τ* − τ*_c)^(3/2)

Their experimentally determined value for τ*_c is 0.047, and this is the third commonly used value (in addition to Parker's 0.03 and Shields' 0.06).
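A sketch of the Meyer-Peter and Müller relation in Python, using their τ*_c = 0.047 and the Einstein nondimensionalization above to return a dimensional transport rate; the example stress and grain size are illustrative assumptions:

```python
import math

RHO_W, RHO_S, G = 1000.0, 2650.0, 9.81
R = (RHO_S - RHO_W) / RHO_W  # submerged specific gravity, 1.65

def mpm_bed_load(tau_b, D, tau_star_c=0.047):
    """Meyer-Peter & Mueller (1948) bed load per unit width [m^2/s].

    q_s* = 8 (tau* - tau*_c)^(3/2), dimensionalized via q_s = q_s* D sqrt(R g D).
    Returns zero when the bed shear stress is below the threshold of motion.
    """
    tau_star = tau_b / ((RHO_S - RHO_W) * G * D)
    excess = max(tau_star - tau_star_c, 0.0)
    return 8.0 * excess ** 1.5 * D * math.sqrt(R * G * D)

# Illustrative example: 8 Pa of bed shear stress over 5 mm fine gravel.
print(mpm_bed_load(8.0, 0.005))  # m^2/s of sediment per metre of width
```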
The "hiding function" takes into account the fact that, while small grains are inherently more mobile than large grains, on a mixed-grain-size bed, they may be trapped in deep pockets between large grains. Likewise, a large grain on a bed of small particles will be stuck in a much smaller pocket than if it were on a bed of grains of the same size. In gravel-bed rivers, this can cause "equal mobility", in which small grains can move just as easily as large ones. As sand is added to the system, it moves away from the "equal mobility" portion of the hiding function to one in which grain size again matters. Their model is based on the transport stage, or ratio of bed shear stress to critical shear stress for the initiation of grain motion. Because their formula works with several grain sizes simultaneously, they define the critical shear stress for each grain size class, , to be equal to a "reference shear stress", . They express their equations in terms of a dimensionless transport parameter, (with the "" indicating nondimensionality and the "" indicating that it is a function of grain size): is the volumetric bed load transport rate of size class per unit channel width . is the proportion of size class that is present on the bed. They came up with two equations, depending on the transport stage, . For : and for : This equation asymptotically reaches a constant value of as becomes large. Wilcock and Kenworthy In 2002, Peter Wilcock and Kenworthy T.A. , following Peter Wilcock (1998), published a sediment bed-load transport formula that works with only two sediments fractions, i.e. sand and gravel fractions. Peter Wilcock and Kenworthy T.A. in their article recognized that a mixed-sized sediment bed-load transport model using only two fractions offers practical advantages in terms of both computational and conceptual modeling by taking into account the nonlinear effects of sand presence in gravel beds on bed-load transport rate of both fractions. In fact, in the two-fraction bed load formula appears a new ingredient with respect to that of Meyer-Peter and Müller that is the proportion of fraction on the bed surface where the subscript represents either the sand (s) or gravel (g) fraction. The proportion , as a function of sand content , physically represents the relative influence of the mechanisms controlling sand and gravel transport, associated with the change from a clast-supported to matrix-supported gravel bed. Moreover, since spans between 0 and 1, phenomena that vary with include the relative size effects producing ‘‘hiding’’ of fine grains and ‘‘exposure’’ of coarse grains. The ‘‘hiding’’ effect takes into account the fact that, while small grains are inherently more mobile than large grains, on a mixed-grain-size bed, they may be trapped in deep pockets between large grains. Likewise, a large grain on a bed of small particles will be stuck in a much smaller pocket than if it were on a bed of grains of the same size, which the Meyer-Peter and Müller formula refers to. In gravel-bed rivers, this can cause ‘‘equal mobility", in which small grains can move just as easily as large ones. As sand is added to the system, it moves away from the ‘‘equal mobility’’ portion of the hiding function to one in which grain size again matters. Their model is based on the transport stage,i.e. , or ratio of bed shear stress to critical shear stress for the initiation of grain motion. 
Wilcock and Kenworthy

In 2002, Peter Wilcock and T. A. Kenworthy, following Peter Wilcock (1998), published a sediment bed-load transport formula that works with only two sediment fractions: sand and gravel. They recognized that a mixed-size sediment bed-load transport model using only two fractions offers practical advantages in both computational and conceptual modeling, by taking into account the nonlinear effects of sand presence in gravel beds on the bed-load transport rate of both fractions. Relative to the formula of Meyer-Peter and Müller, the two-fraction bed load formula contains a new ingredient: the proportion F_i of fraction i on the bed surface, where the subscript i represents either the sand (s) or gravel (g) fraction. The proportion F_s, as a function of sand content f_s, physically represents the relative influence of the mechanisms controlling sand and gravel transport, associated with the change from a clast-supported to a matrix-supported gravel bed. Moreover, since f_s spans the range between 0 and 1, the phenomena that vary with f_s include the relative size effects producing "hiding" of fine grains and "exposure" of coarse grains. The "hiding" effect takes into account the fact that, while small grains are inherently more mobile than large grains, on a mixed-grain-size bed they may be trapped in deep pockets between large grains. Likewise, a large grain on a bed of small particles will be stuck in a much smaller pocket than if it were on a bed of grains of the same size, the case to which the Meyer-Peter and Müller formula refers. In gravel-bed rivers, this can cause "equal mobility", in which small grains can move just as easily as large ones. As sand is added to the system, it moves away from the "equal mobility" portion of the hiding function to one in which grain size again matters.

Their model is based on the transport stage φ, i.e. the ratio of bed shear stress to critical shear stress for the initiation of grain motion. Because their formula works with only two fractions simultaneously, they define the critical shear stress for each of the two grain size classes, τ_ri, where i represents either the sand (s) or gravel (g) fraction. The critical shear stress that represents the incipient motion for each of the two fractions is consistent with established values in the limit of pure sand and gravel beds, and shows a sharp change with increasing sand content over the transition from a clast- to matrix-supported bed.

They express their equations in terms of a dimensionless transport parameter, W*_i (with the "*" indicating nondimensionality and the "i" indicating that it is a function of grain size):

W*_i = R g q_bi / (F_i u_*³)

q_bi is the volumetric bed load transport rate of size class i per unit channel width b, and F_i is the proportion of size class i that is present on the bed.

As in the Wilcock and Crowe relation, they came up with two equations, depending on the transport stage φ, with the transport parameter asymptotically reaching a constant value as φ becomes large; the fitted constants differ between the sand and gravel fractions.

In order to apply the above formulation, it is necessary to specify the characteristic grain sizes D_s for the sand portion and D_g for the gravel portion of the surface layer, the fractions F_s and F_g of sand and gravel, respectively, in the surface layer, the submerged specific gravity of the sediment R, and the shear velocity associated with skin friction u_*.

Kuhnle et al.

For the case in which the sand fraction is transported by the current over and through an immobile gravel bed, Kuhnle et al. (2013), following the theoretical analysis of Pellachini (2011), provide a new relationship for the bed load transport of the sand fraction when the gravel particles remain at rest. Kuhnle et al. (2013) applied the Wilcock and Kenworthy (2002) formula to their experimental data and found that predicted bed load rates of the sand fraction were about 10 times greater than those measured, with the ratio approaching 1 as the sand elevation neared the top of the gravel layer. They hypothesized that this mismatch arises because the bed shear stress used in the Wilcock and Kenworthy (2002) formula was larger than that actually available for transport within the gravel bed, owing to the sheltering effect of the gravel particles. To overcome this mismatch, following Pellachini (2011), they assumed that the variability of the bed shear stress available for the sand to be transported by the current would be some function of the so-called "Roughness Geometry Function" (RGF), which represents the distribution of gravel bed elevations. The resulting sand bed load formula is written in terms of the sand fraction, the ratio of sand density to water density, the RGF as a function of the sand level within the gravel bed, the bed shear stress available for sand transport, and the critical shear stress for incipient motion of the sand fraction, which was calculated graphically using the updated Shields-type relation of Miller et al. (1977).

Suspended load

Suspended load is carried in the lower to middle parts of the flow, and moves at a large fraction of the mean flow velocity in the stream.

A common characterization of suspended sediment concentration in a flow is given by the Rouse profile. This characterization works for the situation in which the sediment concentration at one particular elevation above the bed, z₀, can be quantified. It is given by the expression:

c_s(z) / c_s(z₀) = [ z₀ (h − z) / (z (h − z₀)) ]^P

Here, z is the elevation above the bed, c_s is the concentration of suspended sediment at that elevation, h is the flow depth, and P is the Rouse number,

P = w_s / (β κ u_*)

where β relates the eddy viscosity for momentum to the eddy diffusivity for sediment, and is approximately equal to one. Experimental work has shown that β ranges from 0.93 to 1.10 for sands and silts.

The Rouse profile characterizes sediment concentrations because the Rouse number includes both turbulent mixing and settling under the weight of the particles. Turbulent mixing results in the net motion of particles from regions of high concentrations to low concentrations. Because particles settle downward, for all cases where the particles are not neutrally buoyant or sufficiently light that this settling velocity is negligible, there is a net negative concentration gradient as one goes upward in the flow. The Rouse profile therefore gives the concentration profile that provides a balance between turbulent mixing (net upwards) of sediment and the downwards settling velocity of each particle.
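A short Python sketch of the Rouse profile, normalized by a reference concentration measured at elevation z₀ above the bed; the depth and Rouse numbers below are illustrative assumptions:

```python
def rouse_profile(z, h, c_ref, z_ref, P):
    """Suspended sediment concentration at elevation z from the Rouse profile:
    c(z) = c_ref * [ z_ref (h - z) / (z (h - z_ref)) ]**P,
    where P = ws / (beta kappa u*) is the Rouse number."""
    return c_ref * (z_ref * (h - z) / (z * (h - z_ref))) ** P

# Illustrative example: 2 m deep flow, reference concentration at 0.1 m above the bed.
h, z_ref, c_ref = 2.0, 0.1, 1.0
for P in (0.5, 1.0, 2.5):  # smaller P -> more uniform, washload-like profile
    profile = [round(rouse_profile(z, h, c_ref, z_ref, P), 4)
               for z in (0.2, 0.5, 1.0, 1.8)]
    print(P, profile)
```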
Bed material load

Bed material load comprises the bed load and the portion of the suspended load that is sourced from the bed.

Three common bed material transport relations are the Ackers–White, Engelund–Hansen, and Yang formulae. The first is for sand to granule-size gravel, and the second and third are for sand, though Yang later expanded his formula to include fine gravel. That all of these formulae cover the sand-size range, and that two of them are exclusively for sand, reflects the fact that sediment in sand-bed rivers is commonly moved simultaneously as bed and suspended load.

The bed material load formula of Engelund and Hansen is the only one of these to include no critical value for the initiation of sediment transport. It reads:

q_s* = (0.05 / C_f) τ*^(5/2)

where q_s* is the Einstein nondimensionalization for sediment volumetric discharge per unit width, C_f is a friction factor, and τ* is the Shields stress. It is thus one of the few sediment transport formulae in which a threshold "critical shear stress" is absent.
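A minimal Python sketch of the Engelund–Hansen relation, again dimensionalized with the Einstein number; the friction factor and grain size below are illustrative assumptions:

```python
import math

def engelund_hansen(tau_star, c_f, D, R=1.65, g=9.81):
    """Engelund-Hansen bed material load per unit width [m^2/s].

    q_s* = (0.05 / c_f) * tau*^(5/2), dimensionalized via
    q_s = q_s* * D * sqrt(R g D). Note the absence of any
    critical shear stress threshold.
    """
    q_star = (0.05 / c_f) * tau_star ** 2.5
    return q_star * D * math.sqrt(R * g * D)

# Illustrative sand-bed example: tau* = 1.0, friction factor 0.01, D = 0.3 mm.
print(engelund_hansen(1.0, 0.01, 3e-4))
```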
Wash load

Wash load is carried within the water column as part of the flow, and therefore moves with the mean velocity of the main stream. Wash load concentrations are approximately uniform in the water column. This is described by the endmember case in which the Rouse number is equal to 0 (i.e., the settling velocity is far less than the turbulent mixing velocity), which leads to a prediction of a perfectly uniform vertical concentration profile of material.

Total load

Some authors have attempted formulations for the total sediment load carried in water. These formulas are designed largely for sand, since (depending on flow conditions) sand can often be carried as both bed load and suspended load in the same stream or shoreface.

Bed Load Sediment Mitigation at Intake Structures

Riverside intake structures used in water supply, canal diversions, and water cooling can experience entrainment of bed load (sand-size) sediments. These entrained sediments produce multiple deleterious effects, such as reduction or blockage of intake capacity, damage to or vibration of feedwater pump impellers, and sediment deposition in downstream pipelines and canals. Structures that modify local near-field secondary currents are useful to mitigate these effects and limit or prevent bed load sediment entry.

- Sedimentology – The study of natural sediments and of the processes by which they are formed
- Exner equation – Law of sediment aggradation
- Hydrology – Science of the movement, distribution, and quality of water on Earth and other planets
- Stream capacity – Total amount of sediment a stream can transport
- Anderson, R (1990). "Eolian ripples as examples of self-organization in geomorphological systems". Earth-Science Reviews. 29 (1–4): 77. doi:10.1016/0012-8252(90)90029-U.
- Kocurek, Gary; Ewing, Ryan C. (2005). "Aeolian dune field self-organization – implications for the formation of simple versus complex dune-field patterns". Geomorphology. 72 (1–4): 94. Bibcode:2005Geomo..72...94K. doi:10.1016/j.geomorph.2005.05.005.
- Goudie, A; Middleton, N.J. (2001). "Saharan dust storms: nature and consequences". Earth-Science Reviews. 56 (1–4): 179. Bibcode:2001ESRv...56..179G. doi:10.1016/S0012-8252(01)00067-8.
- "Dust Storm Spreads Out of Gobi Desert". Earthobservatory.nasa.gov. 13 April 2006. Retrieved 2022-05-08.
- Ashton, Andrew; Murray, A. Brad; Arnault, Olivier (2001). "Formation of coastline features by large-scale instabilities induced by high-angle waves". Nature. 414 (6861): 296–300. Bibcode:2001Natur.414..296A. doi:10.1038/35104541. PMID 11713526. S2CID 205023325.
- Roering, Joshua J.; Kirchner, James W.; Dietrich, William E. (1999). "Evidence for nonlinear, diffusive sediment transport on hillslopes and implications for landscape morphology". Water Resources Research. 35 (3): 853. Bibcode:1999WRR....35..853R. doi:10.1029/1998WR900090.
- Grant, J.; Walker, T.R.; Hill, P.S.; Lintern, D.G. (2013). "BEAST-A portable device for quantification of erosion in intact sediment cores". Methods in Oceanography. 5: 39–55. doi:10.1016/j.mio.2013.03.001.
- Shields, A. (1936). Anwendung der Ähnlichkeitsmechanik und der Turbulenzforschung auf die Geschiebebewegung. In Mitteilungen der Preussischen Versuchsanstalt für Wasserbau und Schiffbau, Heft 26 (online PDF, 3.8 MB).
- Sharmeen, Saniya; Willgoose, Garry R. (2006). "The interaction between armouring and particle weathering for eroding landscapes". Earth Surface Processes and Landforms. 31 (10): 1195–1210. Bibcode:2006ESPL...31.1195S. doi:10.1002/esp.1397. S2CID 91175516.
- Walker, T.R.; Grant, J. (2009). "Quantifying erosion rates and stability of bottom sediments at mussel aquaculture sites in Prince Edward Island, Canada". Journal of Marine Systems. 75 (1–2): 46–55. Bibcode:2009JMS....75...46W. doi:10.1016/j.jmarsys.2008.07.009.
- Dey, S. (1999). "Sediment threshold". Applied Mathematical Modelling. Elsevier. 23 (5): 399–417.
- Hubert Chanson (2004). The Hydraulics of Open Channel Flow: An Introduction. Butterworth-Heinemann, 2nd edition, Oxford, UK, 630 pages. ISBN 978-0-7506-5978-9.
- Whipple, Kelin (2004). "Hydraulic Roughness" (PDF). 12.163: Surface processes and landscape evolution. MIT OCW. Retrieved 2009-03-27.
- Parker, G (1990). "Surface-based bedload transport relation for gravel rivers". Journal of Hydraulic Research. 28 (4): 417–436. doi:10.1080/00221689009499058.
- Whipple, Kelin (September 2004). "IV. Essentials of Sediment Transport" (PDF). 12.163/12.463 Surface Processes and Landscape Evolution: Course Notes. MIT OpenCourseWare. Retrieved 2009-10-11.
- Moore, Andrew. "Lecture 20—Some Loose Ends" (PDF). Lecture Notes: Fluvial Sediment Transport. Kent State. Retrieved 23 December 2009.
- Dietrich, W. E. (1982). "Settling Velocity of Natural Particles" (PDF). Water Resources Research. 18 (6): 1615–1626.
Bibcode:1982WRR....18.1615D. doi:10.1029/WR018i006p01615.
- Ferguson, R. I.; Church, M. (2004). "A Simple Universal Equation for Grain Settling Velocity". Journal of Sedimentary Research. 74 (6): 933–937. doi:10.1306/051204740933.
- The long profile – changing processes: types of erosion, transportation and deposition, types of load; the Hjulstrom curve. coolgeography.co.uk. Last accessed 26 Dec 2011.
- Special Topics: An Introduction to Fluid Motions, Sediment Transport, and Current-generated Sedimentary Structures; As taught in: Fall 2006. Massachusetts Institute of Technology. 2006. Last accessed 26 Dec 2011.
- Meyer-Peter, E; Müller, R. (1948). Formulas for bed-load transport. Proceedings of the 2nd Meeting of the International Association for Hydraulic Structures Research. pp. 39–64.
- Fernandez-Luque, R; van Beek, R (1976). "Erosion and transport of bedload sediment". J. Hydrol. Res. 14 (2): 127–144. doi:10.1080/00221687609499677.
- Cheng, Nian-Sheng (2002). "Exponential Formula for Bedload Transport". Journal of Hydraulic Engineering. 128 (10): 942. doi:10.1061/(ASCE)0733-9429(2002)128:10(942). hdl:10356/83917.
- Wilson, K. C. (1966). "Bed-load transport at high shear stress". J. Hydraul. Div. ASCE. 92 (6): 49–59. doi:10.1061/JYCEAJ.0001562.
- Wiberg, Patricia L.; Dungan Smith, J. (1989). "Model for Calculating Bed Load Transport of Sediment". Journal of Hydraulic Engineering. 115: 101. doi:10.1061/(ASCE)0733-9429(1989)115:1(101).
- Wilcock, Peter R.; Crowe, Joanna C. (2003). "Surface-based Transport Model for Mixed-Size Sediment". Journal of Hydraulic Engineering. 129 (2): 120. doi:10.1061/(ASCE)0733-9429(2003)129:2(120).
- Parker, G.; Klingeman, P. C.; McLean, D. G. (1982). "Bedload and Size Distribution in Paved Gravel-Bed Streams". Journal of the Hydraulics Division. ASCE. 108 (4): 544–571. doi:10.1061/JYCEAJ.0005854.
- Wilcock, P. R. (1998). "Two-fraction model of initial sediment motion in gravel-bed rivers". Science. 280 (5362): 410–412. Bibcode:1998Sci...280..410W. doi:10.1126/science.280.5362.410. PMID 9545213.
- Wilcock, Peter R.; Kenworthy, T. (2002). "A two-fraction model for the transport of sand/gravel mixtures". Water Resour. Res. 38 (10): 1194. Bibcode:2002WRR....38.1194W. doi:10.1029/2001WR000684.
- Kuhnle, R. A.; Wren, D. G.; Langendoen, E. J.; Rigby, J. R. (2013). "Sand Transport over an Immobile Gravel Substrate". Journal of Hydraulic Engineering. 139 (2): 167–176. doi:10.1061/(ASCE)HY.1943-7900.0000615.
- Pellachini, Corrado (2011). Modelling fine sediment transport over an immobile gravel bed (PhD thesis). Trento: Unitn-eprints.
- Nikora, V; Goring, D; McEwan, I; Griffiths, G (2001). "Spatially averaged open-channel flow over rough bed". J. Hydraul. Eng. 127 (2): 123–133. doi:10.1061/(ASCE)0733-9429(2001)127:2(123).
- Miller, M.C.; McCave, I.N.; Komar, P.D. (1977). "Threshold of sediment motion under unidirectional currents". Sedimentology. 24 (4): 507–527. Bibcode:1977Sedim..24..507M. doi:10.1111/j.1365-3091.1977.tb00136.x.
- Harris, Courtney K. (March 18, 2003). "Lecture 9: Suspended Sediment Transport II" (PDF). Sediment transport processes in coastal environments. Virginia Institute of Marine Science. Archived from the original (PDF) on 28 May 2010. Retrieved 23 December 2009.
- Moore, Andrew. "Lecture 21—Suspended Sediment Transport" (PDF). Lecture Notes: Fluvial Sediment Transport. Kent State. Retrieved 25 December 2009.
- Ackers, P.; White, W.R. (1973). "Sediment Transport: New Approach and Analysis". Journal of the Hydraulics Division. ASCE.
99 (11): 2041–2060. doi:10.1061/JYCEAJ.0003791.
- Ariffin, J.; A.A. Ghani; N.A. Zakaria; A.H. Yahya (14–16 October 2002). "Evaluation of equations on total bed material load" (PDF). International Conference on Urban Hydrology for the 21st Century. Kuala Lumpur.
- Yang, C (1979). "Unit stream power equations for total load". Journal of Hydrology. 40 (1–2): 123. Bibcode:1979JHyd...40..123Y. doi:10.1016/0022-1694(79)90092-1.
- Bailard, James A. (1981). "An Energetics Total Load Sediment Transport Model For a Plane Sloping Beach". Journal of Geophysical Research. 86 (C11): 10938. Bibcode:1981JGR....8610938B. doi:10.1029/JC086iC11p10938.
- Nakato, T.; Ogden, F.L. (1998). "Sediment control at water intakes along sand-bed rivers". Journal of Hydraulic Engineering. 124 (6): 589–596. doi:10.1061/(ASCE)0733-9429(1998)124:6(589).
- Liu, Z. (2001), Sediment Transport.
- Moore, A. Fluvial sediment transport lecture notes, Kent State.
- Wilcock, P. Sediment Transport Seminar, January 26–28, 2004, University of California at Berkeley.
- Southard, J. B. (2007), Sediment Transport and Sedimentary Structures.
- Linwood, J.G. Suspended Sediment Concentration and Discharge in a West London River.