Despite decades of attempts, gravitational waves continue to elude direct detection. However, one new technology could soon change that. SLAC theorists are watching closely as their experimentalist colleagues at Stanford ready a device that will scrutinize Einstein's century-old equivalence principle, which says that objects of different masses and compositions accelerate at the same rate under gravity. The precision of the Stanford experiment will be the highest ever achieved for a test of the equivalence principle. While the Stanford researchers' immediate goal is to examine this principle, the experiment will also demonstrate technology proposed for use in the search for gravitational waves.

"Directly observing gravitational waves would revolutionize astrophysics," said SLAC graduate student Surjeet Rajendran. "They could offer a snapshot of the big bang, as well as other early-universe processes."

Gravitational waves are ripples in space-time that travel at the speed of light. Current research suggests that these waves are created by massive objects, particularly two-body systems composed of neutron stars, white dwarfs, or black holes. Since the effects of gravitational waves are infinitesimal, their observation can be disrupted by almost anything: variations in the atmosphere, or the shaking of the earth. Although several experiments have tried, none has directly observed these elusive waves. In the early stages of one gravitational wave detector, unknown signals were eventually traced back to nearby logging activity.

The hunt for direct detection of gravitational waves therefore requires both extreme sensitivity and the ability to silence background noise. Advances in atomic technology, which will be showcased in the Stanford experiment, offer both. The experiment's magnetically shielded vacuum pipe, which extends through a 30-foot shaft in the basement of the Varian Physics Building, represents a pristine environment, "a tube of nothing," so to speak, that shields out almost all background noise. Researchers will launch millions of atoms to the top of this cylinder of nothingness. As atoms of slightly different masses are pulled down the tube by gravity, their states will be measured to an accuracy of 15 decimal places.

Proof that this extreme precision can be achieved makes possible new types of gravitational wave detectors, which would extend the measuring capability of current methods. The collaboration between SLAC theorists and Stanford experimentalists has proposed two such detectors, one terrestrial and one satellite-based, that aim to use the core atomic technology of the Stanford experiment. In the Stanford experiment, changes in the atoms' trajectories and velocities will be crucial to revealing possible violations of the equivalence principle. The same is true of the proposed detectors, where the paths of the atoms would register the minute ripples of a passing gravitational wave.

"The success of this experiment will be a dramatic turning point for our proposal," postdoctoral researcher Peter Graham said. "It will serve as a wake-up call and prove the power of this technology."

Matt Cunningham, SLAC Today, March 24, 2008

Above image: Stanford graduate student Jason Hogan and SLAC's Surjeet Rajendran (center) and Peter Graham (right) discuss an experiment that may have vast scientific implications.
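The article does not spell out how the comparison is quantified, but equivalence-principle tests are conventionally summarized by the Eötvös parameter, the normalized difference between the free-fall accelerations of two test bodies. The sketch below is our own illustration, not the experiment's analysis; the acceleration values are invented, chosen to agree to 15 decimal places as the article describes, which corresponds to a bound of roughly one part in 10^16.

```python
# Illustrative sketch, not the Stanford experiment's actual analysis:
# the Eotvos parameter is a standard figure of merit for
# equivalence-principle tests.
from decimal import Decimal, getcontext

getcontext().prec = 25  # keep more digits than a float64 could


def eotvos_parameter(a1: Decimal, a2: Decimal) -> Decimal:
    """Normalized differential acceleration of two test bodies."""
    return 2 * abs(a1 - a2) / (a1 + a2)


# Hypothetical free-fall accelerations (m/s^2) for two atomic species,
# agreeing to 15 decimal places.
a_species_1 = Decimal("9.801234567890123")
a_species_2 = Decimal("9.801234567890124")

print(eotvos_parameter(a_species_1, a_species_2))  # ~1.0E-16
```

A nonzero value at this level, once systematic noise is excluded, would signal a violation of the equivalence principle.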
The Federal Reserve System, also known as the Fed, is the central banking system of the United States. It was created in 1913 to provide the country with a stable and flexible monetary and financial system. The Fed is made up of three key parts: the Board of Governors, the Federal Reserve Banks, and the Federal Open Market Committee (FOMC). The Board of Governors is a seven-member board appointed by the President of the United States and confirmed by the Senate. The Federal Reserve Banks are 12 regional banks located throughout the country. The FOMC is responsible for setting monetary policy and consists of the Board of Governors and five of the 12 Federal Reserve Bank presidents.

The Fed's primary function is to regulate the money supply and interest rates in the United States. It does this by buying and selling government securities, such as Treasury bonds, on the open market. When the Fed buys these securities, it injects money into the economy, which can stimulate economic growth. When it sells securities, it removes money from the economy, which can help to curb inflation.

In addition to regulating the money supply, the Fed also acts as a lender of last resort, providing loans to banks that are experiencing financial difficulties. This helps to prevent bank runs and stabilize the banking system. The Fed also plays a role in regulating the financial system and ensuring the safety and soundness of banks. It does this by supervising and examining banks and other financial institutions, and by enforcing consumer protection laws. Overall, the Fed plays a critical role in maintaining the stability and strength of the U.S. economy and financial system.

Some roles played by the Federal Reserve:
- Monetary policy: The Federal Reserve is responsible for setting monetary policy in the United States. This includes determining the appropriate level of interest rates and regulating the money supply to achieve specific economic goals, such as low inflation or full employment.
- Lender of last resort: The Federal Reserve acts as a lender of last resort to banks and other financial institutions that are experiencing financial difficulties. By providing emergency loans, the Fed can help to prevent bank runs and stabilize the financial system.
- Regulation and supervision: The Federal Reserve supervises and regulates banks and other financial institutions to ensure that they are operating safely and soundly. This includes conducting regular examinations of banks and enforcing consumer protection laws.
- Open market operations: The Federal Reserve conducts open market operations by buying and selling government securities, such as Treasury bonds. This can help to regulate the money supply and influence interest rates in the economy (a toy sketch of this mechanism follows the list).
- Payment system oversight: The Federal Reserve oversees the payment system in the United States, ensuring that payments are processed efficiently and securely. This includes operating the Fedwire Funds Service, which is used by banks to transfer large sums of money between accounts.
- Research and analysis: The Federal Reserve conducts research and analysis on economic and financial issues, which is used to inform its policy decisions. This includes monitoring economic indicators, such as inflation and unemployment rates, and analyzing trends in the financial markets.
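The open-market mechanism described above can be made concrete with a toy balance-sheet model. This is our own minimal sketch, not an official description of Fed accounting; the class name and the numbers are invented for illustration.

```python
# Toy model (illustration only): open market operations move reserves
# between the central bank and the banking system.

class ToyEconomy:
    def __init__(self, bank_reserves: float, fed_securities: float):
        self.bank_reserves = bank_reserves    # reserves held by commercial banks
        self.fed_securities = fed_securities  # Treasuries on the Fed's balance sheet

    def fed_buys_securities(self, amount: float) -> None:
        """The Fed pays for Treasuries with newly created reserves:
        money is injected into the banking system."""
        self.fed_securities += amount
        self.bank_reserves += amount

    def fed_sells_securities(self, amount: float) -> None:
        """Buyers pay with reserves, which are extinguished:
        money is drained from the banking system."""
        self.fed_securities -= amount
        self.bank_reserves -= amount


economy = ToyEconomy(bank_reserves=1_000.0, fed_securities=500.0)
economy.fed_buys_securities(100.0)   # expansionary: reserves rise to 1,100
economy.fed_sells_securities(250.0)  # contractionary: reserves fall to 850
```

In practice the transmission to interest rates and growth runs through many more channels, but the direction of the effect matches the description above: purchases add money, sales remove it.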
The Maurya Empire at its largest extent under Ashoka the Great.
The Lion Capital of Ashoka.
|Preceding state(s)||Nanda Dynasty of Magadha|
|Head of state||Samraat (Emperor)|
|First Emperor||Chandragupta Maurya|
|Government||Centralized absolute monarchy with divine right of kings, as described in the Arthashastra|
|Administration||Inner Council of Ministers (Mantriparishad) under a Mahamantri, with a larger assembly of ministers (Mantrinomantriparisadamca); an extensive network of officials from treasurers (Sannidhatas) to collectors (Samahartas) and clerks (Karmikas); provincial administration under regional viceroys (Kumara or Aryaputra) with their own Mantriparishads and supervisory officials (Mahamattas); provinces divided into districts run by lower officials, with similar stratification down to individual villages, run by headmen and supervised by imperial officials (Gopas)|
|Area||5 million km² (Southern Asia and parts of Central Asia)|
|Population||50 million (one third of the world population)|
|Currency||Silver ingots (Panas)|
|Dissolution||Military coup by Pusyamitra Sunga|
|Succeeding state||Sunga Empire|

The Maurya Empire (322–185 B.C.E.), ruled by the Mauryan dynasty, was a geographically extensive and powerful political and military empire in ancient India. Originating from the kingdom of Magadha in the Indo-Gangetic plains of modern Bihar, eastern Uttar Pradesh, and Bengal, the empire had its capital city at Pataliputra, near modern Patna. Chandragupta Maurya founded the empire in 322 B.C.E. after overthrowing the Nanda Dynasty, and he began rapidly expanding his power westward across central and western India. Local powers had been disrupted by the westward withdrawal of Alexander the Great and his Macedonian and Persian armies. By 316 B.C.E. the empire had fully occupied northwestern India, defeating and conquering the satraps left by Alexander.

At its zenith, the empire stretched to the natural boundary of the Himalaya Mountains in the north and into Assam in the east. To the west, it reached beyond modern Pakistan into significant portions of Afghanistan, including the modern Herat and Kandahar provinces, and into Balochistan. Emperor Bindusara expanded the empire into India's central and southern regions, excluding only a small portion of unexplored tribal and forested regions near Kalinga. The Mauryan Empire was arguably the largest empire ever to rule the Indian subcontinent. Its decline began fifty years after Ashoka's rule ended, and it dissolved in 185 B.C.E. with the rise of the Sunga Dynasty in Magadha.

Under Chandragupta, the Mauryan Empire conquered the trans-Indus region, defeating its Macedonian rulers. Chandragupta then defeated the invasion led by Seleucus I, a Greek general from Alexander's army. Under Chandragupta and his successors, internal and external trade, agriculture, and economic activity all thrived and expanded across India. Chandragupta created a single, efficient system of finance, administration, and security.

The Mauryan empire stands as one of the most significant periods in Indian history. After the Kalinga War, the empire experienced a half century of peace and security under Ashoka. India was a prosperous and stable empire of great economic and military power, whose political and trade influence extended across Western and Central Asia into Europe. During that time Mauryan India also enjoyed an era of social harmony, religious transformation, and expansion of learning and the sciences.
Chandragupta Maurya's embrace of Jainism spurred social and religious renewal and reform across his society, while Ashoka's embrace of Buddhism laid the foundation of social and political peace and non-violence across all of India. The era fostered the spread of Buddhist ideals into Sri Lanka, Southeast Asia, West Asia, and Mediterranean Europe. Chandragupta's minister Kautilya Chanakya wrote the Arthashastra, considered one of the greatest treatises on economics, politics, foreign affairs, administration, military arts, war, and religion ever produced. Archaeologically, the period of Mauryan rule in Southern Asia falls into the era of Northern Black Polished Ware (NBPW). The Arthashastra and the Edicts of Ashoka serve as the primary written records of Mauryan times. The Lion Capital of Ashoka at Sarnath remains the emblem of India.

Alexander set up a Macedonian garrison and satrapies (vassal states) in the trans-Indus region of modern-day Pakistan, ruled previously by kings Ambhi of Taxila and Porus of Pauravas (modern-day Jhelum). Following Alexander's advance into the Punjab, a brahmin named Chanakya (real name Vishnugupt, also known as Kautilya) traveled to Magadha, a kingdom that was large, militarily powerful, and feared by its neighbors; but its king, Dhana of the Nanda Dynasty, dismissed him. The prospect of battling Magadha deterred Alexander's troops from going further east: he returned to Babylon and redeployed most of his troops west of the Indus river. When Alexander died in Babylon soon after, in 323 B.C.E., his empire fragmented, and local kings declared their independence, leaving several smaller satraps in a disunited state. Chandragupta Maurya deposed Dhana. The Greek generals Eudemus and Peithon ruled until around 316 B.C.E., when Chandragupta Maurya (with the help of Chanakya, now his adviser) surprised and defeated the Macedonians and consolidated the region under the control of his new seat of power in Magadha.

Mystery and controversy shroud Chandragupta Maurya's rise to power. On the one hand, a number of ancient Indian accounts, such as the drama Mudrarakshasa ("Poem of Rakshasa"; Rakshasa was the prime minister of Magadha) by Visakhadatta, describe his royal ancestry and even link him with the Nanda family. On the other hand, the earliest Buddhist texts, such as the Mahaparinibbana Sutta, refer to a kshatriya tribe known as the Maurya. Any conclusions require further historical evidence. Chandragupta first emerges in Greek accounts as "Sandrokottos." As a young man he may have met Alexander. Accounts say that he also met the Nanda king, angered him, and made a narrow escape.

Chanakya originally intended to train a guerrilla army under Chandragupta's command. The Mudrarakshasa of Visakhadutta, as well as the Jaina work Parisishtaparvan, discuss Chandragupta's alliance with the Himalayan king Parvatka, sometimes identified with Porus. That Himalayan alliance gave Chandragupta a composite and powerful army made up of Yavanas (Greeks), Kambojas, Shakas (Scythians), Kiratas (Nepalese), Parasikas (Persians), and Bahlikas (Bactrians). With the help of those frontier martial tribes from Central Asia, Chandragupta defeated the Nanda/Nandin rulers of Magadha and founded the powerful Maurya empire in northern India.
|Approximate Dates of the Mauryan Dynasty|
|Emperor||Reign start||Reign end|
|Chandragupta Maurya||322 B.C.E.||298 B.C.E.|
|Bindusara||297 B.C.E.||272 B.C.E.|
|Ashoka the Great||273 B.C.E.||232 B.C.E.|
|Dasaratha||232 B.C.E.||224 B.C.E.|
|Samprati||224 B.C.E.||215 B.C.E.|
|Salisuka||215 B.C.E.||202 B.C.E.|
|Devavarman||202 B.C.E.||195 B.C.E.|
|Satadhanvan||195 B.C.E.||187 B.C.E.|
|Brihadratha||187 B.C.E.||185 B.C.E.|

The approximate extent of the Magadha state in the fifth century B.C.E.
The Maurya Empire as first founded by Chandragupta Maurya c. 320 B.C.E., after he conquered the Nanda Empire when only about twenty years old.

Chanakya encouraged Chandragupta and his army to take over the throne of Magadha. Using his intelligence network, Chandragupta gathered many young men from across Magadha and other provinces, men upset over the corrupt and oppressive rule of king Dhana, as well as the resources necessary for his army to fight a long series of battles. Those men included the former general of Taxila, other accomplished students of Chanakya, the representative of King Porus of Kakayee, his son Malayketu, and the rulers of small states.

Preparing to invade Pataliputra, Maurya hatched a plan. He had a battle announced, and the Magadhan army was mustered from the city to a distant battlefield to engage Maurya's forces. Meanwhile, Maurya's general and spies bribed the corrupt general of Nanda, and Maurya managed to create an atmosphere of civil war in the kingdom, which culminated in the death of the heir to the throne. Chanakya managed to win over popular sentiment. Ultimately Nanda resigned, handing power to Chandragupta, and went into exile, disappearing from history. Chanakya contacted the prime minister, Rakshasa, and made him understand that his loyalty was owed to Magadha rather than to the Magadha dynasty, insisting that he continue in office. Chanakya also reiterated that choosing to resist would start a war that would severely affect Magadha and destroy the city. Rakshasa accepted Chanakya's reasoning, and Chandragupta Maurya was legitimately installed as the new King of Magadha. Rakshasa became Chandragupta's chief adviser, and Chanakya assumed the position of an elder statesman.

Having become king of one of India's most powerful states, Chandragupta invaded the Punjab. One of Alexander's richest satraps, Peithon, satrap of Media, had tried to raise a coalition against him. Chandragupta conquered the Punjab capital of Taxila, an important center of trade and Hellenistic culture, increasing his power and consolidating his control. Chandragupta fought the Greeks again when Seleucus I, ruler of the Seleucid Empire, tried and failed to reconquer the northwestern parts of India during a campaign in 305 B.C.E. The two rulers finally concluded a peace treaty: a marital treaty (Epigamia), implying either a marital alliance between the two dynastic lines or a recognition of marriage between Greeks and Indians. Chandragupta received the satrapies of Paropamisadae (Kamboja and Gandhara), Arachosia (Kandhahar), and Gedrosia (Balochistan), and Seleucus I received 500 war elephants, which would play a decisive role in his victory against the western Hellenistic kings at the Battle of Ipsus in 301 B.C.E. With diplomatic relations established, several Greeks, such as the historian Megasthenes, Deimakos, and Dionysius, resided at the Mauryan court.
Chandragupta established a strong centralized state with a complex administration at Pataliputra, which, according to Megasthenes, was "surrounded by a wooden wall pierced by 64 gates and 570 towers—(and) rivaled the splendors of contemporaneous Persian sites such as Susa and Ecbatana." Chandragupta's son Bindusara extended the rule of the Mauryan empire towards southern India. He also had a Greek ambassador, Deimachus (Strabo 1–70), at his court. Megasthenes described a disciplined multitude under Chandragupta who lived simply and honestly and did not know writing.

Chandragupta died after reigning for twenty-four years. His son Bindusara, also known in Greek accounts as Amitrochates (destroyer of foes), succeeded him in 298 B.C.E. Little information regarding Bindusara exists, though some credit him with the incorporation of southern peninsular India. According to Jain tradition, his mother was a woman by the name of Durdhara. The Puranas assign him a reign of twenty-five years. He has been identified with the Indian title Amitraghata (slayer of enemies), found in Greek texts as Amitrochates.

Contemporary historians consider Chandragupta's grandson Ashokavardhan Maurya, better known as Ashoka (ruled 273–232 B.C.E.), perhaps the greatest of Indian monarchs, and perhaps of the world; H.G. Wells calls him the "greatest of kings." As a young prince, Ashoka was a brilliant commander who crushed revolts in Ujjain and Taxila. As an ambitious and aggressive monarch, he re-asserted the empire's superiority in southern and western India. But his conquest of Kalinga proved the pivotal event of his life. Although Ashoka's army succeeded in overwhelming the Kalinga forces of royal soldiers and civilian units, an estimated 100,000 soldiers and civilians died in the furious warfare, including over 10,000 of Ashoka's own men, and hundreds of thousands of people became refugees. When he personally witnessed the devastation, Ashoka began to feel remorse, crying, "What have I done?" Although the annexation of Kalinga was completed, Ashoka embraced the teachings of Gautama Buddha and renounced war and violence. For a monarch in ancient times, this was an historic feat.

After renouncing war as a means of acquiring territory, Ashoka established friendly relations with the three Tamil dynasties of Chola, Chera, and Pandya (whose lands were known as Tamilakam, "Land of the Tamils") at the southern tip of India, the only territory in India not directly under his control. Ashoka implemented principles of ahimsa by banning hunting and violent sports and by ending indentured and forced labor (many thousands of people in war-ravaged Kalinga had been forced into hard labor and servitude). While he maintained a large and powerful army to keep the peace and maintain authority, Ashoka expanded friendly relations with states across Asia and Europe, sponsored Buddhist missions, and undertook a massive public works building campaign across the country. Over forty years of peace, harmony, and prosperity made Ashoka one of the most successful and famous monarchs in Indian history. He remains an idealized figure of inspiration in modern India.

The Edicts of Ashoka, set in stone, have been found throughout the subcontinent, ranging from as far west as Afghanistan to as far south as Andhra (Nellore District); they state his policies and accomplishments. Although written for the most part in Prakrit, two of them were written in Greek, and one in both Greek and Aramaic.
Ashoka's edicts refer to the Greeks, Kambojas, and Gandharas as peoples forming a frontier region of his empire. They also attest that Ashoka sent envoys to the Greek rulers in the West, as far as the Mediterranean. The edicts precisely name each of the rulers of the Hellenic world of the time, such as Amtiyoko (Antiochus), Tulamaya (Ptolemy), Amtikini (Antigonos), Maka (Magas), and Alikasudaro (Alexander), as recipients of Ashoka's proselytism. The edicts also accurately locate their territory "600 yojanas away" (a yojana being about seven miles), corresponding to the distance between the center of India and Greece (roughly 4,000 miles).

The empire was divided into four provinces, with the imperial capital at Pataliputra. From the Ashokan edicts, the four provincial capitals were Tosali (in the east), Ujjain (in the west), Suvarnagiri (in the south), and Taxila (in the north). The head of the provincial administration was the Kumara (royal prince), who governed the province as the king's representative, assisted by Mahamatyas and a council of ministers. That organizational structure mirrored the imperial level, with the Emperor and his Mantriparishad (Council of Ministers). Historians theorize that the organization of the empire was in line with the extensive bureaucracy described by Kautilya in the Arthashastra: a sophisticated civil service governed everything from municipal hygiene to international trade.

The expansion and defense of the empire were made possible by what appears to have been the largest standing army of its time. According to Megasthenes, the empire wielded a military of 600,000 infantry, 30,000 cavalry, and 9,000 war elephants. A vast espionage system collected intelligence for both internal and external security purposes. Although he had renounced offensive warfare and expansionism, Ashoka continued to maintain that large army, to protect the empire and instill stability and peace across West and South Asia.

For the first time in South Asia, political unity and military security allowed for a common economic system and enhanced trade and commerce, with increased agricultural productivity. The previous situation of hundreds of kingdoms, many small armies, powerful regional chieftains, and internecine warfare gave way to a disciplined central authority. Farmers were freed of the tax and crop-collection burdens imposed by regional kings, paying instead into a nationally administered and strict-but-fair system of taxation, as advised by the principles of the Arthashastra. Chandragupta Maurya established a single currency across India, and a network of regional governors and administrators and a civil service provided justice and security for merchants, farmers, and traders. The Mauryan army wiped out many gangs of bandits, regional private armies, and powerful chieftains who sought to impose their own supremacy in small areas. Although regimented in revenue collection, the Mauryas also sponsored many public works and waterways to enhance productivity, while internal trade in India expanded greatly thanks to newfound political unity and internal peace.

Under the Indo-Greek friendship treaty, and during Ashoka's reign, an international network of trade expanded. The Khyber Pass, on the modern boundary of Pakistan and Afghanistan, became a strategically important gateway for trade and intercourse with the outside world. Greek states and Hellenic kingdoms in West Asia became important trade partners of India.
Trade also extended through the Malay peninsula into Southeast Asia. India's exports included silk goods and textiles, spices, and exotic foods. An exchange of scientific knowledge and technology with Europe and West Asia enriched the empire further. Ashoka also sponsored the construction of thousands of roads, waterways, canals, hospitals, rest-houses, and other public works. The easing of many overly rigorous administrative practices, including those regarding taxation and crop collection, helped increase productivity and economic activity across the empire.

In many ways, the economic situation in the Maurya Empire compares to that of the Roman Empire several centuries later; both had extensive trade connections and organizations similar to corporations. But while Rome's organizational entities were largely used for public, state-driven projects, Mauryan India had numerous private commercial entities that existed purely for private commerce. The Mauryas had to contend with these pre-existing private commercial entities, hence their concern with keeping those organizations' support; the Romans had no such pre-existing entities.

Emperor Chandragupta Maurya became the first major Indian monarch to initiate a religious transformation at the highest level when he embraced Jainism, a religious movement resented by the orthodox Hindu priests who usually attended the imperial court. At an older age, Chandragupta renounced his throne and material possessions to join a wandering group of Jain monks, becoming a disciple of Acharya Bhadrabahu. In his last days, he observed the rigorous but self-purifying Jain ritual of santhara (fasting unto death) at Shravan Belagola in Karnataka. His successor, Emperor Bindusara, preserved Hindu traditions and distanced himself from the Jain and Buddhist movements.

Samprati, the grandson of Ashoka, also embraced Jainism. Influenced by the teachings of the Jain monk Arya Suhasti Suri, Samprati built many Jain temples across India, some of which still stand in the towns of Ahmedabad, Viramgam, Ujjain, and Palitana. Like Ashoka, Samprati sent messengers and preachers to Greece, Persia, and the Middle East to spread Jainism, though to date little research has been done in this area. Jainism thus became a vital force under Mauryan rule. Chandragupta and Samprati are credited with the spread of Jainism in southern India, where lakhs (hundreds of thousands) of Jain temples and stupas are said to have been erected during their reigns. But for lack of royal patronage, and because of its strict principles, along with the rise of Shankaracharya and Ramanujacharya, Jainism, once the major religion of southern India, declined.

When Ashoka embraced Buddhism following the Kalinga War, he renounced expansionism and aggression, along with the harsher injunctions of the Arthashastra on the use of force, intensive policing, and ruthless measures for tax collection and against rebels. Ashoka sent a mission led by his son and daughter to Sri Lanka, whose king Tissa adopted Buddhist ideals, making Buddhism the state religion. Ashoka sent many Buddhist missions to West Asia, Greece, and Southeast Asia, and commissioned the construction of monasteries and schools and the publication of Buddhist literature across the empire. He built as many as 84,000 stupas across India and increased the popularity of Buddhism in Afghanistan. Ashoka helped convene the Third Buddhist Council of India's and South Asia's Buddhist orders near his capital, a council that undertook much work of reform and expansion of the Buddhist religion.
Buddhism continued to thrive after Ashoka for nearly 600 years, until a combination of events drove the faith into near extinction in India. First, Buddhism declined in the wake of the invasion of the White Huns during the fifth century C.E.; the decline accelerated in the twelfth century C.E. with the fall of the Pala dynasty and the Muslim destruction of temples and monasteries. Second, the golden age of Sanskrit under the Gupta dynasty (fourth to sixth centuries C.E.), which restructured and revitalized civilization in accord with Hinduism, forced Buddhism into recession.

While himself a Buddhist, Ashoka retained Hindu priests and ministers in his court, and he maintained religious freedom and tolerance, although the Buddhist faith grew in popularity with his patronage. Indian society began embracing the philosophy of ahimsa, and given the increased prosperity and improved law enforcement, crime and internal conflicts declined dramatically. Because of the inherently anti-caste teachings of Buddhism and Jainism, the caste system and the traditional practice of discrimination among social groups fell into disfavor as Hinduism began absorbing the ideals and values of Jain and Buddhist teachings. Social freedom expanded in an age of peace and prosperity.

Few architectural remains of the Maurya period have been found. Remains of a hypostyle building with about eighty columns roughly ten meters high have been found at Kumhrar, five kilometers from Patna railway station, one of the few Mauryan sites to have been located. The style resembles Persian Achaemenid architecture. The grottoes of the Barabar Caves provide another example of Mauryan architecture, especially the decorated front of the Lomas Rishi grotto; the Mauryas offered these caves to the ascetic sect of the Ajivikas. The Pillars of Ashoka, often exquisitely decorated, constitute outstanding examples of Mauryan architecture, with more than forty spread throughout the subcontinent.

Ashoka was followed for fifty years by a succession of weaker kings. Brhadrata, the last ruler of the Mauryan dynasty, held territories that had shrunk considerably from the time of emperor Ashoka, although he still upheld the Buddhist faith. Brhadrata was assassinated in 185 B.C.E. during a military parade by the commander-in-chief of his guard, the Brahmin general Pusyamitra Sunga, who then took over the throne and established the Sunga dynasty. Buddhist records such as the Asokavadana relate that the assassination of Brhadrata and the rise of the Sunga empire led to a wave of persecution of Buddhists and a resurgence of Hinduism. Pusyamitra may have been the main instigator of the persecutions, although later Sunga kings seem to have been more supportive of Buddhism. Other historians point to a lack of archaeological evidence supporting the claim that Buddhists were persecuted.

The fall of the Mauryas left the Khyber Pass unguarded, and a wave of invasions followed. The Greco-Bactrian king Demetrius, capitalizing on the break-up, conquered southern Afghanistan and Pakistan around 180 B.C.E., forming the Indo-Greek Kingdom. The Indo-Greeks maintained control of the trans-Indus region, and conducted campaigns into central India, for about a century. Buddhism flourished under them, with one of their kings, Menander, becoming a key promoter of Buddhism; he established his new capital at Sagala, the modern city of Sialkot. The extent of their domains, and the length of their rule, remain unclear.
Numismatic evidence indicates that they controlled territory in the subcontinent until the beginning of the Common Era. The Scythian tribes, known as the Indo-Scythians, brought about the demise of the Indo-Greeks around 70 B.C.E., seizing the regions of Mathura and Gujarat.
Open educational resources

Open Educational Resources (OER) are freely accessible, usually openly licensed documents and media that are useful for teaching, learning, assessment, and research purposes. Although some people consider the use of an open format to be an essential characteristic of OER, this is not a universally acknowledged requirement.

Defining the Scope and Nature of Open Educational Resources

There are numerous working definitions for the idea of open educational resources (OER). Often cited is that of the William and Flora Hewlett Foundation, which defines OER as: "teaching, learning, and research resources that reside in the public domain or have been released under an intellectual property license that permits their free use and re-purposing by others. Open educational resources include full courses, course materials, modules, textbooks, streaming videos, tests, software, and any other tools, materials, or techniques used to support access to knowledge".

The Organization for Economic Co-operation and Development (OECD) defines OER as: "digitised materials offered freely and openly for educators, students, and self-learners to use and reuse for teaching, learning, and research. OER includes learning content, software tools to develop, use, and distribute content, and implementation resources such as open licences". (Notably, this is the definition cited by Wikipedia's sister project, Wikiversity.) By way of comparison, the Commonwealth of Learning "has adopted the widest definition of Open Educational Resources (OER) as 'materials offered freely and openly to use and adapt for teaching, learning, development and research'". The WikiEducator project suggests that OER refers "to educational resources (lesson plans, quizzes, syllabi, instructional modules, simulations, etc.) that are freely available for use, reuse, adaptation, and sharing".

Given the diversity of users, creators, and sponsors of open educational resources, it is not surprising to find a variety of use cases and requirements. For this reason, it may be as helpful to consider the differences between descriptions of open educational resources as it is to consider the descriptions themselves. One of several tensions in reaching a consensus description of OER (as found in the above definitions) is whether explicit emphasis should be placed on specific technologies. For example, a video can be openly licensed and freely used without being a streaming video; a book can be openly licensed and freely used without being an electronic document. This technologically driven tension is deeply bound up with the discourse of open-source licensing. For more, see Licensing and Types of OER later in this article.

There is also a tension between entities that find value in quantifying usage of OER and those that see such metrics as irrelevant to free and open resources. Those requiring metrics associated with OER are often those with economic investment in the technologies needed to access or provide electronic OER, those with economic interests potentially threatened by OER, or those requiring justification for the costs of implementing and maintaining the infrastructure or access to the freely available OER. While a semantic distinction can be made delineating the technologies used to access and host learning content from the content itself, these technologies are generally accepted as part of the collective of open educational resources.
Since OER are intended to be available for a variety of educational purposes, organizations using OER presently neither award degrees nor provide academic or administrative support to students seeking college credits towards a diploma from a degree-granting accredited institution. In open education, there is an emerging effort by some accredited institutions to offer free certifications, or achievement badges, to document and acknowledge the accomplishments of participants.

The term "learning object" was coined in 1994 by Wayne Hodgins and quickly gained currency among educators and instructional designers, popularizing the idea that digital materials can be designed to allow easy reuse in a wide range of teaching and learning situations. The OER movement originated from developments in open and distance learning (ODL) and in the wider context of a culture of open knowledge, open source, free sharing, and peer collaboration which emerged in the late twentieth century. OER and Free/Libre Open Source Software (FLOSS), for instance, have many aspects in common, a connection first established in 1998 by David Wiley, who introduced the concept of open content by analogy with open source.

The MIT OpenCourseWare project is credited with having sparked a global open educational resources movement after announcing in 2001 that it was going to put MIT's entire course catalog online and launching the project in 2002. In a first manifestation of this movement, MIT entered a partnership with Utah State University, where assistant professor of instructional technology David Wiley set up a distributed peer-support network for the OCW's content through voluntary, self-organizing communities of interest.

In 2005, the OECD's Centre for Educational Research and Innovation (CERI) launched a 20-month study to analyse and map the scale and scope of initiatives regarding "open educational resources" in terms of their purpose, content, and funding. The report "Giving Knowledge for Free: The Emergence of Open Educational Resources", published in May 2007, is the main output of the project, which involved a number of expert meetings in 2006.

In September 2007, the Open Society Institute and the Shuttleworth Foundation convened a meeting in Cape Town to which thirty leading proponents of open education were invited to collaborate on the text of a manifesto. The Cape Town Open Education Declaration was released on 22 January 2008, urging governments and publishers to make publicly funded educational materials available at no charge via the internet.

Licensing and Types of OER

Open educational resources often involve issues relating to intellectual property rights. Traditional educational materials, such as textbooks, are protected under conventional copyright terms. However, alternative and more flexible licensing options have become available as a result of the work of Creative Commons, an organisation that provides ready-made licensing agreements that are less restrictive than the "all rights reserved" terms of standard international copyright. These new options have become a "critical infrastructure service for the OER movement." Another license, typically used by developers of OER software, is the GNU General Public License from the FOSS community. Open licensing allows uses of the materials that would not be easily permitted under copyright alone. There is also ongoing discussion in the OER community regarding the movement's implicit reliance on explicit licensing.
For example, knowledge found in the public domain may or may not be considered a legitimate open educational resource, depending on whether the absence of an open license prevents it from meeting differing criteria of openness. Related to the discussion of licensing is the question of reuse, which a license may or may not clearly address.

Types of open educational resources include full courses, course materials, modules, learning objects, open textbooks, openly licensed (often streamed) videos, tests, software, and other tools, materials, or techniques used to support access to knowledge. OER may be freely and openly available static resources; dynamic resources that change over time as knowledge seekers interact with and update them (such as this Wikipedia article); or a course or module combining such resources.

OER policy

Open educational resources policies are principles or tenets adopted by governing bodies in support of the use of open content and practices in educational institutions. Such policies are emerging increasingly at the country, state/province, and more local levels. Some major OER programs include:
- OER Africa, an initiative established by the South African Institute for Distance Education (Saide) to play a leading role in driving the development and use of OER across all education sectors on the African continent.
- Wikiwijs (the Netherlands), a program intended to promote the use of open educational resources (OER) in the Dutch education sector.
- The Open Educational Resources Programme (phases one and two) (United Kingdom), funded by HEFCE, the UK Higher Education Academy, and JISC, which has supported pilot projects and activities around the open release of learning resources for free use and repurposing worldwide.

Institutional Support

A large part of the early work on open educational resources was funded by universities and foundations such as the William and Flora Hewlett Foundation, which was the main financial supporter of open educational resources in the early years and spent more than $110 million in the 2002 to 2010 period, of which more than $14 million went to MIT. The Shuttleworth Foundation, which focuses on projects concerning collaborative content creation, has contributed as well. With the British government contributing £5.7m, institutional support has also been provided by the UK funding bodies JISC and HEFCE.

UNESCO is taking a leading role in "making countries aware of the potential of OER." The organisation has instigated debate on how to apply OER in practice and has chaired lively discussions on the matter through its International Institute for Educational Planning (IIEP). Believing that OER can widen access to quality education, particularly when shared by many countries and higher education institutions, UNESCO also champions OER as a means of promoting access, equity, and quality in the spirit of the Universal Declaration of Human Rights. The 2012 Paris OER Declaration was approved during the 2012 OER World Congress held at UNESCO headquarters.

A parallel initiative, Connexions, came out of Rice University starting in 1999. In contrast to the OCW projects, Connexions requires content licenses to be open under a Creative Commons Attribution-only license. The hallmark of Connexions is the use of a custom XML format, CNXML, designed to aid and enable mixing and reuse of the content. Other initiatives derived from MIT OpenCourseWare are China Open Resources for Education and OpenCourseWare in Japan.
The OpenCourseWare Consortium, founded in 2005 to extend the reach and impact of open course materials and to foster new open course materials, counted more than 200 member institutions from around the world in 2009.

In 2003, ownership of the Wikipedia and Wiktionary projects was transferred to the Wikimedia Foundation, a non-profit charitable organization whose goal is to collect and develop free educational content and to disseminate it effectively and globally. Wikipedia has ranked among the ten most-visited websites worldwide since 2007.

OER Commons was spearheaded in 2007 by ISKME, a nonprofit education research institute dedicated to innovation in open education content and practices, as a way to aggregate, share, and promote open educational resources to educators, administrators, parents, and students. OER Commons also provides educators with tools to align OER to the Common Core State Standards; to evaluate the quality of OER against OER rubrics developed by Achieve; and to contribute and share OER with other teachers and learners worldwide. To further promote the sharing of these resources among educators, in 2008 ISKME launched the OER Commons Teacher Training Initiative, which focuses on advancing open educational practices and on building opportunities for systemic change in teaching and learning.

One of the first OER resources for K-12 education is Curriki. A nonprofit organization, Curriki provides an Internet site for open source curriculum (OSC) development, to provide universal access to free curricula and instructional materials for students up to the age of 18 (K-12). By applying the open-source process to education, Curriki empowers educational professionals to become an active community in the creation of good curricula. Kim Jones serves as Curriki's Executive Director.

In August 2006, WikiEducator was launched to provide a venue for planning education projects built on OER, creating and promoting open educational resources, and networking towards funding proposals. Its Learning4Content project builds skills in the use of MediaWiki and related free software technologies for mass collaboration in the authoring of free content, and claims to be the world's largest wiki training project for education. By 30 June 2009, the project had facilitated 86 workshops, training 3,001 educators from 113 different countries.

Peer production has also been utilized in producing collaborative open educational resources. Writing Commons, an international open textbook spearheaded by Joe Moxley at the University of South Florida, has evolved from a print textbook into a crowd-sourced resource for college writers around the world. Massive open online course (MOOC) platforms have also generated interest in building online eBooks. The Cultivating Change Community (CCMOOC) at the University of Minnesota is one such project, founded entirely on a grassroots model to generate content: in 10 weeks, 150 authors contributed more than 50 chapters to the CCMOOC eBook and companion site.

Another project is the Free Education Initiative from the Saylor Foundation, which is currently more than 80 percent of the way towards its initial goal of providing 241 college-level courses across 13 subject areas. The Saylor Foundation makes use of university and college faculty members and subject experts to assist in this process, as well as to provide peer review of each course to ensure its quality.
The foundation also supports the creation of new openly licensed materials where they are not already available, including through its Open Textbook Challenge.

In 2006, the African Virtual University (AVU) released 73 modules of its teacher education programs as open educational resources, making the courses freely available to all. In 2010, the AVU developed an OER repository that has contributed to increasing the number of Africans who use, contextualize, share, and disseminate existing as well as future academic content. The online portal http://oer.avu.org serves as a platform where 219 modules of mathematics, physics, chemistry, biology, ICT in education, and teacher education professional courses are published. The modules are available in three languages (English, French, and Portuguese), making the AVU the leading African institution in providing and using open educational resources.

International programs
- Europe: The Learning Resource Exchange for schools (LRE) is a service launched by European Schoolnet in 2004 enabling educators to find multilingual open educational resources from many different countries and providers. Currently, more than 200,000 learning resources are searchable in one portal by language, subject, resource type, and age range.
- India: The National Council of Educational Research and Training (NCERT) digitized all its textbooks from the 1st to the 12th standard; the textbooks are available online for free. The Central Institute of Educational Technology (CIET), a constituent unit of NCERT, digitized more than a thousand audio and video programmes. All the educational audio-visual material developed by CIET is presently available at the Sakshat Portal, an initiative of the Ministry of Human Resource Development. In addition, the National Repository of Open Educational Resources (NROER) houses a variety of e-content.
- US: Washington State's Open Course Library Project is a collection of expertly developed educational materials, including textbooks, syllabi, course activities, readings, and assessments, for 81 high-enrolling college courses. Forty-two courses have been completed so far, providing faculty with a high-quality option that will cost students no more than $30 per course.
- Bangladesh is the first country to digitize a complete set of textbooks for grades 1-12; distribution is free to all.
- Uruguay sought up to 1,000 digital learning resources in a Request for Proposals (RFP) in June 2011.
- South Korea has announced a plan to digitize all of its textbooks and to provide all students with computers and digitized textbooks.
- The California Learning Resources Network's Free Digital Textbook Initiative, at the high school level, was initiated by former Gov. Arnold Schwarzenegger.
- The Shuttleworth Foundation's Free High School Science Texts project for South Africa.
- Saudi Arabia undertook a comprehensive project in 2008 to digitize and improve the math and science textbooks in all K-12 grades.
- Saudi Arabia started a project in 2011 to digitize all textbooks other than math and science.

With growing international awareness and implementation of open educational resources, a global OER logo was adopted for use in multiple languages by UNESCO. The design of the Global OER logo creates a common global visual idea, representing "subtle and explicit representations of the subjects and goals of OER". Its full explanation and recommended use are available from UNESCO.
Critical discourse about OER as a movement

External discourse

The OER movement has been accused of insularity and failure to connect with the larger world: "OERs will not be able to help countries reach their educational goals unless awareness of their power and potential can rapidly be expanded beyond the communities of interest that they have already attracted." More fundamentally, doubts have been cast on the altruistic motives typically claimed for OER. The very project has been accused of imperialism, in that the creation and dissemination of knowledge according to the economic, political, and cultural preferences of highly developed countries, for the use of less developed countries, is alleged to be a self-serving imposition.

Internal discourse

Within the open educational resources movement, OER are an essentially contested and active concept. One example of this can be found in the conceptions of gratis versus libre knowledge, as found in the discourse about massive open online courses, which may offer free courses but charge for end-of-course awards or course-verification certificates from commercial entities. A second example of essentially contested ideas in OER can be found in the usage of different OER logos, which can be interpreted as indicating more or less allegiance to the notion of OER as a global movement.

See also
- Distance education
- Free education
- Free High School Science Texts
- George Siemens
- IMS Global
- Internet Archive
- Khan Academy
- Libre knowledge
- Massive open online courses (MOOCs)
- MIT OpenCourseWare
- Open access
- Open content
- Open Library
- Open source curriculum
- Project Gutenberg
- Question and Test Interoperability specification
- Stephen Downes
- Virginia Open Education Foundation
- Wikipedia itself!
- Writing Commons
"Higher Education Reimagined With Online Courseware". New York Times (New York). Retrieved 2010-12-19. - Johnstone, Sally M. (2005). "Open Educational Resources Serve the World". Educause Quarterly 28 (3). Retrieved 2010-11-01. - Wiley, David (2006-02-06), Expert Meeting on Open Educational Resources, Centre for Educational Research and Innovation, retrieved 2010-12-03 - "FOSS solutions for OER - summary report". Unesco. 2009-05-28. Retrieved 2011-02-20. - Hylén, Jan (2007). Giving Knowledge for Free: The Emergence of Open Educational Resources. Paris, France: OECD Publishing. doi:10.1787/9789264032125-en. Retrieved 2010-12-03. - Grossman, Lev (1998-07-18). "New Free License to Cover Content Online". Netly News. Archived from the original on 2000-06-19. Retrieved 2010-12-27. - Wiley, David (1998). "Open Content". OpenContent.org. Retrieved 2010-01-12. - Guttenplan, D. D. (2010-11-01). "For Exposure, Universities Put Courses on the Web". New York Times (New York). Retrieved 2010-12-19. - Ticoll, David (2003-09-04). "MIT initiative could revolutionize learning". The Globe and Mail (Toronto). Archived from the original on 2003-09-20. Retrieved 2010-12-20. - "Open Educational Resources". CERI. Retrieved 2011-01-02. - Giving Knowledge for Free: The Emergence of Open Educational Resources. Paris, France: OECD Publishing. 2007. doi:10.1787/9789264032125-en. Retrieved 2010-12-03. - Deacon, Andrew; Catherine Wynsculley (2009). "Educators and the Cape Town Open Learning Declaration: Rhetorically reducing distance". International Journal of Education and Development using ICT 5 (5). Retrieved 2010-12-27. - "The Cape Town Open Education Declaration". Cape Town Declaration. 2007. Retrieved 2010-12-27. - Atkins, Daniel E.; John Seely Brown, Allen L. Hammond (2007-02). "A Review of the Open Educational Resources (OER) Movement: Achievements, Challenges, and New Opportunities". Menlo Park, CA: The William and Flora Hewlett Foundation. p. 13. Retrieved 2010-12-03. - Hylén, Jan (2007). Giving Knowledge for Free: The Emergence of Open Educational Resources. Paris, France: OECD Publishing. p. 30. doi:10.1787/9789264032125-en. Retrieved 2010-12-03. - Giving Knowledge for Free: The Emergence of Open Educational Resources. Centre for Educational Research and Innovation (CERI), OECD. 2007. Retrieved 24 April 2013. - "Introducing OER Africa". South African Institute for Distance Education. - "Trend Report: Open Educational Resources 2013". SURF. Open Educational Resources Special Interest Group (SIG OER). March 2013. - "Open educational resources programme - phase 1". - "Open educational resources programme - phase 2". - "OER Policy Registry". Creative Commons. Retrieved 24 April 2013. - Swain, Harriet (2009-11-10). "Any student, any subject, anywhere". The Guardian (London). Retrieved 2010-12-19. - "Open educational resources programme - phase 2". JISC. 2010. Retrieved 2010-12-03. - "Open educational resources programme - phase 1". JISC. 2009. Retrieved 2010-12-03. - "Initiative Background". Taking OER beyond the OER Community. 2009. Retrieved 2011-01-01. - Communiqué: The New Dynamics of Higher Education and Research for Societal Change and Development, UNESCO World Conference on Higher Education, 2009 - "UNESCO Paris OER Declaration 2012". 2012. Retrieved 2012-06-27. - Attwood, Rebecca (2009-09-24). "Get it out in the open". Times Higher Education (London). Retrieved 2010-12-18. - "What is WikiEducator? (October 2006)". COL. Retrieved 2010-12-21. - "The Purpose of Learning for Content - outcomes and results". 
Wikieducator. 2010-02-10. Retrieved 2010-12-28. - "About.""Writing Commons". CC BY-NC-ND 3.0. Retrieved 11 February 2013. - Anders, Abram (November 9, 2012). "Experimenting with MOOCs: Network-based Communities of Practice.". Great Plains Alliance for Computers and Writing Conference. Mankato, MN. Text "https://cultivatingchange.wp.d.umn.edu/community/ccmooc-experimenting-with-moocs-at-gpacw/ " ignored (help); - "About.""Cultivating Change Community". CC BY-NC 3.0. Retrieved 11 February 2013. - OER ECONOMICS "MU OER PORTAL". Wikieducator. - Thibault, Joseph. "241 OER Courses with Assessments in Moodle: How Saylor.org has created one of the largest Free and Open Course Initiatives on the web". Moodlenews.com. Retrieved 30 January 2012. - "Saylor Foundation to Launch Multi-Million Dollar Open Textbook Challenge! | College Open Textbooks Blog". Collegeopentextbooks.org. 2011-08-09. Retrieved 2011-10-21. - PM opens e-content repository - http://ceibal.org.uy/index.php?option=com_content&view=article&id=486:licitacion-publica-internacional-no-01522011-seleccion-de-proveedor-para-la-adquisicion-de-plataforma-educativa-on-line-yo-recursos-educativos-digitales-para-educacion-primaria-y-media-uruguaya&catid=51:convocatorias-vigentes&Itemid=82 (PDF in Spanish) - Mello, Jonathas. "Global OER Logo". UNESCO. United Nations Educational, Scientific and Cultural Organization. Retrieved 16 April 2013. - Mulder, Jorrit (2008). Knowledge Dissemination in Sub-Saharan Africa: What Role for Open Educational Resources (OER)?. Amsterdam: University of Amsterdam. p. 14. - "UNESCO and COL promote wider use of OERs". International Council for Open and Distance Education. 2010-06-24. Retrieved 2011-01-01. - Mulder, Jorrit (2008). Knowledge Dissemination in Sub-Saharan Africa: What Role for Open Educational Resources (OER)?. Amsterdam: University of Amsterdam. pp. 58–67. Retrieved 2011-01-01. - Scanlon, Eileen (February/March 2012). "Digital futures: Changes in scholarship, open educational resources and the inevitability of interdisciplinarity". Arts and Humanities in Higher Education 11: 177–184. doi:10.1177/1474022211429279. - "OER: Articles, Books, Presentations and Seminars". EduCause.edu. Educause. Retrieved 23 April 2013. - Rivard, Ry. "Coursera begins to make money". InsideHigherEd.com. Inside Higher Ed. Retrieved 25 April 2013. - Carey, Kevin. "The Brave New World of College Branding". Chronicle.com. The Chronicle of Higher Education. Retrieved 25 April 2013. - Inamorato, Andreia. "George Siemens' interview on MOOCs and Open Education". YouTube.com. Open Content Online blog. Retrieved 13 May 2013. - "Open Library". Open Library. One web page for every book. Internet Archive. Retrieved 3 April 2013. - Downes, Stephen (2011). Free Learning: Essays on Open Educational Resources and Copyright. National Research Council Canada. - Downes, Stephen. "The Role of Open Educational Resources in Personal Learning". YouTube.com. Universitat Oberta de Catalunya (UoC). Retrieved 13 May 2013.
One goal of a mathematics education is that students make significant connections among different branches of mathematics. Connections—such as those between arithmetic and algebra, between two-dimensional and three-dimensional geometry, between compass-and-straightedge constructions and transformations, and between calculus and analytic geometry—form the backbone of important mathematical understandings. Another such connection is described in the Common Core State Standards for Mathematics: Students should “describe transformations as functions that take points in the plane as inputs and give other points as outputs” (Standard G-CO2, CCSSI 2010, p. 76). The short length of this quotation belies its importance in relating geometric transformations and functions. Both topics are emphasized in the Common Core: Transformations are the conceptual basis for congruence and similarity, and functions get an entire conceptual category of their own. If our response to having students “describe transformations as functions” is limited to explaining how transformations fit the abstract definition of function, then students miss an opportunity for fruitful sensory-motor interactions with geometric functions. (We use geometric functions to refer to geometric transformations treated as mathematical functions.) If students have no meaningful way to connect, for instance, dilations and translations in the geometric realm with linear functions in the algebraic realm, the connection between geometric and algebraic functions will be a factoid, a bit of trivia without real value. In this article, we describe a way of forging a strong connection between geometric and algebraic functions, a connection that can deepen students’ concept of function and develop their appreciation for the interconnectedness of geometry and algebra. The activities described in this article (and additional geometric functions activities) are freely available in classroom-ready form, including student worksheets and teacher notes. Each activity is supported by an interactive Web page (http://geometricfunctions.org/links/connecting-functions), powered by Web Sketchpad®, that runs in any modern Web browser. No other software is required. Each activity page includes a Web sketch (as illustrated in fig. 1), customized construction tools, a scrolling copy of the worksheet, links to a downloadable worksheet, brief video clips, and other support materials.

A GEOMETRIC FUNCTIONS APPROACH

The textbook Geometry: A Transformation Approach (Coxford and Usiskin 1971) broke new ground in cultivating the connections between transformations and functions. Now, with the availability of dynamic mathematics software such as The Geometer’s Sketchpad®, Cabri Geometry®, and GeoGebra, the benefits for students’ understanding of function concepts are stronger still. Cognitive scientists tell us that students build abstract mathematical concepts by connecting those concepts to the physical world through conceptual metaphors (Lakoff and Núñez 2000; Radford 2012), such as the metaphor that numbers are points on a line. Geometric functions are based on a similar metaphor—that geometric variables are movable points. (For the purposes of this article, we will consider movable points on the plane.) This metaphor enables students to use dynamic software to create a point (the independent variable), construct another point (the dependent variable) that depends on the first, and drag to observe the resulting covariation and relative rate of change.
In other words, a geometric function relates the preimage point—the independent variable x—with its image—the dependent variable that is a function of x. Computer-based kinesthetic and visual encounters with geometric functions make this and other related metaphors real to students and enable them to build abstract function concepts based on their sensory-motor experiences. For instance, in figure 2a, the student uses Web Sketchpad to create a point (independent variable x), to construct its image under a dilation, and to label the image using meaningful function notation. She then drags x while observing that the dependent variable moves in the same direction as the independent variable but at twice the speed. In figure 2b, she restricts the domain to a polygon and again drags to observe the corresponding range. (In the Web version of this article, this and all following figures are interactive; readers can drag the variables, edit the parameters, and press the buttons.) Continuous dragging in a dynamic geometry environment is an innate feature of the software but one that is often missing from students’ manipulation of symbolically expressed functions. Dragging variables can be particularly important in building students’ understanding of Cartesian graphs, which students often comprehend as static pictures (Cuoco and Goldenberg 1997; Hazzan and Goldenberg 1996).

CONTROLLING THE INPUT

Once students have been introduced to geometric functions and have used activities such as this dilation activity to identify and describe function characteristics, including domain, range, and relative rate of change, they are ready for a sequence of “Cartesian connection” activities that relate geometric functions to symbolically expressed algebraic functions, graphed on Cartesian axes. Because the algebraic functions with which students are already familiar have one-dimensional numeric values as their input and output, students begin this process with what could be described as a journey from Flatland to Lineland, similar to that of Edwin Abbott’s narrator A Square in the classic book Flatland: A Romance of Many Dimensions (Abbott 1884). To undertake this journey, students restrict the variables of geometric functions, points in a two-dimensional plane (Flatland), to a one-dimensional domain (Lineland).

Domain, Function, and Range

Restricting the domain in this way is at the heart of the first connection activity, Reduce the Dimension. Students construct examples of four geometric function families (reflection, translation, rotation, and dilation) and label the dependent variables meaningfully. Students move points to manipulate the input value. They observe the dependent variable, as the output value is traced, to discern the behavior of all four functions. (Fig. 3a shows reflection and dilation; fig. 3b shows rotation and translation.) Students construct each transformation a second time, this time restricting the independent variable to a segment (shown in red in figs. 4a and 4b). They again vary x to observe the traces and notice that the range of each function is also a segment, though possibly having a different direction or length. (This is one of many rich opportunities for students to think about and discuss relative rate of change and consider how the relative speed and the direction of the variables determine the direction and length of the range.) Students may be puzzled at this stage: What would it mean for one of these functions to “live” in Lineland?
Can both the domain and range lie in the same one-dimensional space? Even though the range, like the domain, is a segment, the two segments differ in location, direction, or length. So the question becomes, How might each of these functions be adjusted to make the range coincide with the domain? Small groups of students move mirrors, center points, vectors, and angles to experiment with this question for all four functions. As students describe their results, they may make observations such as these:

• “When we put the dilation center point on the domain line, the range was also on the domain line, no matter what scale factor we chose.”
• “We tried the same thing with the rotation center point, but there were only two angles of rotation that worked—180° and 360°.”
• “For reflection, we thought the only way to arrange the mirror was perpendicular to the domain line, but when we tried a mirror parallel to the domain line we realized that it worked if the mirror was right on top of the domain line. A funny thing happened: Those two arrangements of the mirror gave exactly the same results as the 180° and 360° rotations!”
• “For translation, we had to make the vector parallel to the domain line, but once we did so, we could make the vector any length we wanted to.”

Quantifying the Transformation

In the second activity, Number the Domain, students extend these discoveries by using a number line as the restricted domain, allowing them to easily measure the numeric values of the input and output. As students drag the independent variable (preimage) back and forth along its number line, they observe the resulting continuous variation not only of the output point (image) but also of the numeric values corresponding to both input and output. By attending to these values, students may be surprised—and even excited—to discover that the dilation function multiplies the value of its input by the scale factor and that the translation function adds to the value of its input the directed length of the translation vector. By experimenting with different vectors and scale factors, students realize that these two geometric transformations allow them to perform multiplication and addition on the numeric value of the independent variable.

Composition of Functions

In the third activity, Compose on a Line, students compose dilation and translation on the number line. They construct a number line and a dilation function with its domain restricted to the line, using the origin of the number line as the center of dilation. They then construct a translation function using as its input the output of the dilation function. (See figs. 5a and 5b.) After measuring the values of the independent, intermediate, and dependent variables, students change the dilation scale factor and the length of the translation vector and then drag x to observe the effects on the geometric and numeric representations of the output behavior. Because all three variables (independent, intermediate, and dependent) in this activity appear on the same number line, their overlapping labels and motions may be confusing. In the next two activities, students build and explore clearer visual representations by separating the input and output axes.

DISTINGUISHING THE OUTPUT

Goldenberg, Lewis, and O’Keefe (1992) invented dynagraphs to emphasize function behavior, particularly the relative rate of change of the independent and dependent variables. A dynagraph has two parallel horizontal axes—an input axis above and an output axis below.
By putting the variables on parallel axes and connecting them with a segment, students can easily compare their relative motions.

Dynagraphs

In the fourth activity, Create a Dynagraph, students construct a composed dilation-translation function in the form of a dynagraph. They construct the dilation on the upper (input) axis, transfer the resulting intermediate variable to the lower (output) axis, and then construct the translation. Figure 6a shows the completed construction, in readiness for students to drag, observe, and analyze. (After all the care that students took in the first three activities to get the input and output on the same line, they may be concerned that the variables no longer live on a single line. This concern can lead to an interesting discussion relating to the distinction between an abstract concept and the various representations that we might use to emphasize different features and different behaviors.) Students have measured each variable to help them relate the numeric changes to the visual changes in their locations. They have also measured v, the location of the tip of the translation vector, and they can relate both the scale factor and the translation vector to the displayed values of the variables. In figure 6b, students have constructed and traced a segment that connects the input and output variables. Dragging x and analyzing the traces can give students additional insights into function behavior. For instance, a student may notice that one of the traced segments in figure 6b connects the origin of the input axis to the tip of the translation vector on the output axis; he or she may try different scale factors and translation vectors to find that this is true for any similar composition (or linear function) and be spurred to explain the discovery. Students are encouraged to act out the motion of the variables by means of finger and hand gestures and also to work as teams to perform a dynagraph dance. In a dynagraph dance, the team uses tape to mark parallel axes on the classroom floor, and team members then take on the roles of variables moving along the axes according to particular scale factors and translation vectors. Playing the role of dynagraph variables (for instance, for a scale factor of –2 and a vector length of +4) is not only fun for students but also useful in making function behavior concrete and comprehensible. Students should particularly be asked to act out functions with scale factors of 1, 0, and –1 and to explain the observed function behavior in terms of their physical experience. Students conclude this activity by solving “mystery” dynagraphs that show a pair of connected independent and dependent variables. The students’ job is to determine the exact scale factor and translation vector used to produce each mystery function.

A Cartesian Representation

In the fifth activity, Connect to Cartesian, students adopt a different strategy to separate the input and output axes, using a 90-degree rotation rather than a downward translation. As with the dynagraph representation, they begin by constructing a dilation on a horizontal number line (the input axis). They then construct a second number line (the output axis) using the same origin, rotate the new line to make it vertical, and construct the translation along this second line. Figure 7a shows the completed construction, in readiness for varying x.
In figure 7b, students have constructed a vertical line through x and a horizontal line through T(D(x)) so that the intersection of these two lines tracks the values of both x and T(D(x)). By tracing the intersection point of these two perpendicular lines, students realize that they have used transformations to invent the Cartesian graph representation of the linear function T(D(x)) = s • x + v, a function they may already know as y = mx + b. This connection supports a geometric interpretation of the slope-intercept form of a linear function: The slope is a scale factor applied to the input variable (and thus corresponds to the relative rate of change of the variables), and the intercept is a vertical translation, shifting the value of the output variable. Students complete this activity by solving mystery graphs, graphs that show a particular path for the traced intersection point in figure 7b. The challenge is to adjust their composed function’s scale factor and vector so that dragging x causes the intersection to follow the mystery graph.

TRANSFORMATIONS AS FUNCTIONS

In these five activities, geometric functions provide students an alternative environment for engaging with function concepts. This environment emphasizes variation as students drag the variables back and forth on the screen, enabling them to ground abstract function concepts in sensory-motor experiences. The activities also reveal a deep connection between geometry and algebra with the “same” function created by dilation and translation in one realm and by multiplication and addition in the other. As students move from two dimensions to one dimension and from dynagraphs to Cartesian graphs, they attend to function behavior and relative rate of change; come to see multiplication as scaling on the number line and addition as translation on the number line; and grow increasingly aware of (and comfortable with) the composition of multiplication and addition that defines a linear function.

Editor’s note: The activities described in this article use Web Sketchpad®, work with any modern Web browser, and can be accessed from http://geometricfunctions.org/links/connecting-functions.

REFERENCES

Abbott, Edwin A. 1884. Flatland: A Romance of Many Dimensions. London: Seeley and Co.
Common Core State Standards Initiative (CCSSI). 2010. Common Core State Standards for Mathematics. Washington, DC: National Governors Association Center for Best Practices and the Council of Chief State School Officers. http://www.corestandards.org/wp-content/uploads/Math_Standards.pdf.
Coxford, Arthur F., and Zalman Usiskin. 1971. Geometry: A Transformation Approach. New York: Laidlaw Bros.
Cuoco, Al A., and E. Paul Goldenberg. 1997. “Dynamic Geometry as a Bridge from Euclidean Geometry to Analysis.” In Geometry Turned On! Dynamic Software in Learning, Teaching, and Research, edited by James King and Doris Schattschneider, pp. 33–46. MAA Notes, no. 41. Washington, DC: Mathematical Association of America.
Goldenberg, E. Paul, Philip Lewis, and James O’Keefe. 1992. “Dynamic Representation and the Development of a Process Understanding of Function.” In The Concept of Function: Aspects of Epistemology and Pedagogy, edited by Guershon Harel and Ed Dubinsky, pp. 235–60. MAA Notes, no. 25. Washington, DC: Mathematical Association of America.
Hazzan, Orit, and E. Paul Goldenberg. 1996. “Students’ Understanding of the Notion of Function in Dynamic Geometry Environments.” International Journal of Computers for Mathematical Learning 6 (3): 263–91.
Lakoff, George, and Rafael Núñez. 2000. Where Mathematics Comes From: How the Embodied Mind Brings Mathematics into Being. New York: Basic Books.
Radford, Luis. 2012. “Towards an Embodied, Cultural, and Material Conception of Mathematics Cognition.” In Proceedings of the 12th International Congress on Mathematical Education: Intellectual and Attitudinal Changes, edited by Sung Je Cho. New York: Springer.
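The composition at the heart of these activities can also be sketched numerically. The following Python snippet is a minimal illustration added to this write-up (it is not part of the published activities): it models a dilation and a translation on the number line as ordinary composable functions and checks that the composition T(D(x)) behaves exactly like the linear function y = mx + b.

```python
# A minimal numeric sketch (not from the article): dilation and
# translation on the number line, treated as composable functions.

def dilate(scale):
    """Dilation about the origin: multiplies the input by the scale factor."""
    return lambda x: scale * x

def translate(vector):
    """Translation along the line: adds the directed length of the vector."""
    return lambda x: x + vector

# Compose T(D(x)) with scale factor s = 3 and vector v = -2,
# mirroring the article's T(D(x)) = s * x + v.
s, v = 3, -2
D = dilate(s)
T = translate(v)

for x in [-1.0, 0.0, 0.5, 2.0]:
    assert T(D(x)) == s * x + v  # identical to the linear function y = mx + b
    print(f"x = {x:5.2f} -> T(D(x)) = {T(D(x)):5.2f}")
```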
C&E Unit 7 Study Guide: Economics

Terms in this set (57)

- Good: a tangible object that can be bought and used to meet the needs and wants of consumers.
- Scarcity: our goods and services are limited due to this economic problem.
- Capital: man-made objects that are used to make other goods and services.
- Renewable resource: a resource that can be naturally replenished.
- Salary: a fixed income paid on a monthly basis.
- Trade-off: an alternative that we sacrifice when we make a decision.
- Opportunity cost: the most desirable sacrifice when we make a decision.
- Need: that which is necessary for survival.
- Land: natural resources used to make goods and services.
- Entrepreneur: the person who combines land, labor, and capital to make goods and services.
- Labor: a person who works on an assembly line describes this factor of production.
- Thinking at the margin: deciding whether to use one additional unit of some good.
- Efficiency: the degree to which resources are being used efficiently.
- A point of underutilization would be plotted to the _______ of the production possibilities frontier.
- Human capital: knowledge and skill gained on the job that is used to make goods and services.
- If an economy grows, the production possibilities frontier will shift to the _________.
- Incentive: a reward that is offered to persuade people to take certain economic actions.
- Consumer: a person who buys or uses goods and services.
- Production possibilities frontier: a line that shows the ways an economy can maximize its output.
- Underutilization: using fewer resources than an economy is capable of using.
- In the example of "guns or butter," the trade-off for producing more butter is _______.
- In the example of "guns or butter," the trade-off for producing more guns is _______.
- Fixed cost: a cost that remains the same no matter how many goods are produced.
- Wage: the payment for the service of one unit of labor.
- Total cost: fixed costs plus variable costs.
- Producer: a person who makes goods and services.
- Marginal cost: the extra cost of producing one more unit of a good.
- Variable costs: the expenses that change with the number of goods produced.
- When using cost-benefit analysis, you want your benefits to be ______ than your costs.
- Services: actions one person performs for another.
- What are the 3 Basic Economic Questions (the questions every economy must answer)? What to produce? How to produce it? For whom to produce?
- What are the 5 basic characteristics of a market economy? 1. Individual freedom 3. Dealing with externalities 4. Higher per capita GDP 5. Private citizens improving a good or service
- Goods or raw materials used to make finished products.
- Profit: the amount of money left over after all the costs of production have been paid.
- Capitalism: an economic system in which private citizens own and use the factors of production in order to seek a profit.
- Automation: machines control production.
- Division of labor: the breaking down of a job into separate, smaller tasks, which are performed by different workers.
- Consumer goods: goods bought in the market and not used in the production of other goods.
- Free market economy: competition is allowed to flourish with a minimum of government interference.
- New goods and services.
- Adam Smith: father of economics and author of The Wealth of Nations.
- Specialization: takes place when people, businesses, regions, and even countries concentrate on goods and services that they can produce better than anyone else.
- Law of diminishing marginal returns: a level of production in which the marginal product of labor decreases as the number of workers increases.
- Mechanization: machines perform physical tasks.
- Blue collar worker: a working-class employee who performs manual or unskilled labor.
- The invisible hand: guides the nation's resources to their most productive use and helps the market to self-regulate itself.
- Assembly line: a manufacturing process in which interchangeable parts are added to a product to create an end product.
- Markets: places where the prices of goods and services are determined as exchange takes place.
- White collar worker: performs tasks that require less physical labor; a skilled worker.
- Consumer sovereignty: the idea that the consumer is the king or ruler of an economy and determines what products will be produced.
- Competition: the struggle between buyers and sellers to get the best products at the lowest prices; keeps costs of production low and quality high.
- Households: in a free market economy, these own the factors of production and are the consumers of goods and services.
- Firm: an organization that uses resources to produce a product which it sells; changes factors of production into products.
- Firms purchase the factors of production from households in the _______ market.
- Goods and services produced by firms are purchased by households in the ________ market.
Unit 1: B.1-B.2

In which you will learn about: physical v. chemical properties, physical v. chemical changes, and density.

B.1 Physical Properties of Water

Matter: anything that occupies space and has mass. Matter can be distinguished by its physical properties. Physical property: a property that can be observed and/or measured without changing the chemical makeup of the substance. What are some physical properties? Color, melting and boiling point, odor.

Other Physical Properties

Density: the mass of a material within a given volume. The density of liquid water is usually given as 1 g/mL, but it's actually temperature dependent. 1 cm³ = 1 mL (this is super useful for the rest of the year, so MEMORIZE it now!) Freezing point: the temperature at which a substance changes from a liquid to a solid. For water, it is, of course, 0°C. What others can you think of?

Graphite — the layered structure of its carbon atoms is reflected in its physical properties. This allows layers to be easily removed. This easy transfer of layers is why we use it in pencils!

Water Is Never Pure

Water is the only ordinary liquid found naturally in our environment. Because so many substances dissolve readily in water, quite a few liquids are actually water solutions. A water-based solution is an aqueous solution.

BTW, what's a chemical property? A property that can only be observed and/or measured if the substance is chemically altered (example: flammability).

Physical Changes — can be observed without changing the identity of the substance. Some physical changes would be: boiling of a liquid, melting of a solid, and dissolving a solid in a liquid to give a homogeneous mixture — a SOLUTION.

Chemical Properties and Chemical Change

Chemical change or chemical reaction — transformation of one or more atoms or molecules into one or more different molecules. Burning hydrogen (H2) in oxygen (O2) gives H2O.

Sure Signs of a Chemical Change

Heat, odor change, gas produced (not from boiling!), a precipitate — a solid formed by mixing two liquids together — and color change. http://jchemed.chem.wisc.edu/JCESoft/CCA/CCA0/MOVIES/S1047.MOV

Physical vs. Chemical Properties — Examples: melting point (physical), flammable (chemical), density (physical), magnetic (physical), tarnishes in air (chemical).

Physical vs. Chemical Changes — Examples: rusting iron (chemical), dissolving in water (physical), burning a log (chemical), melting ice (physical), grinding spices (physical).

Most of chemistry concerns chemical properties and changes. BUT, physical properties & changes are important, too! ALL mixtures can be separated physically, based on their PHYSICAL properties.

B.2 DENSITY — an important and useful physical property. Mercury: 13.6 g/cm³; platinum: 21.5 g/cm³; aluminum: 2.7 g/cm³.

Problem: A piece of copper has a mass of 57.54 g.
It is 9.36 cm long, 7.23 cm wide, and 0.95 mm thick. Calculate the density (g/cm³).

Strategy: 1. Get all dimensions in common units. 2. Calculate the volume in cubic centimeters. 3. Calculate the density.

SOLUTION: 0.95 mm = 0.095 cm. Volume = (9.36 cm)(7.23 cm)(0.095 cm) = 6.4 cm³. Note only 2 significant figures in the answer! Density = 57.54 g / 6.4 cm³ = 9.0 g/cm³.

PROBLEM: Mercury (Hg) has a density of 13.6 g/cm³. What is the mass of 95 mL of Hg in grams? In pounds?

Strategy: 1. Use density to calculate the mass (g) from the volume. 2. Convert mass (g) to mass (lb). Need to know the conversion factor: 454 g = 1 lb.

First, note that 1 cm³ = 1 mL.
1. Convert volume to mass: 95 mL × 13.6 g/cm³ ≈ 1.3 × 10³ g.
2. Convert mass (g) to mass (lb): 1.3 × 10³ g ÷ 454 g/lb ≈ 2.8 lb.

Learning Check: Osmium is a very dense metal. What is its density in g/cm³ if 50.00 g of the metal occupies a volume of 2.22 cm³? 1) 2.25 g/cm³ 2) 22.5 g/cm³ 3) 111 g/cm³

Solution: 2) Placing the mass and volume of the osmium metal into the density setup, we obtain D = mass/volume = 50.00 g / 2.22 cm³ = 22.522522 g/cm³ ≈ 22.5 g/cm³.

Volume Displacement: A solid displaces a matching volume of water when the solid is placed in water (e.g., the water level rises from 25 mL to 33 mL).

DENSITY is an INTENSIVE property of matter — it does NOT depend on the quantity of matter. Temperature is also intensive. Contrast with EXTENSIVE properties, which depend on the quantity of matter; mass and volume are extensive. (Compare a brick with a piece of Styrofoam of the same size.)

Density Depends on Temperature: Most density tables are given with a specific temperature because substances expand when heated.

Direct vs. Inverse Proportions: Directly proportional — the relationship between two variables can be expressed as y/x = k, where k is a constant. Graphs of directly proportional variables are linear. How do mass and volume relate? If mass is your y variable and volume is your x variable, y/x = k! (m/V = D) The graph is linear, showing a directly proportional relationship between mass and volume. Notice that the slope = density, a CONSTANT!

Inverse proportions will come later: In inversely proportional relationships, yx = k. This type of graph is curved. We will see this a lot more when we get to the gas laws later in the year.

HOMEWORK EXERCISES
1) What is a physical property?
2) Identify three physical properties of water.
3) How does the density of solid water compare to the density of liquid water?
4) Describe a setting where you might observe water as a solid, a liquid, and a gas all at the same time.
5) Distinguish between physical changes and chemical changes.
6) A star is estimated to have a mass of 2 × 10³⁶ kg. Assuming it to be a sphere of average radius 7.0 × 10⁵ km, calculate the average density of the star in units of grams per cubic centimeter.
7) Classify the following as physical or chemical changes.
a) Moth balls gradually vaporize in a closet.
b) Hydrofluoric acid attacks glass, and is used to etch calibration marks on glass laboratory utensils.
c) A French chef making a sauce with brandy is able to burn off the alcohol from the brandy, leaving just the brandy flavoring.
d) Chemistry majors sometimes get holes in the cotton jeans they wear to lab because of acid spills.
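The unit-conversion pattern in these problems translates directly into code. Here is a small Python sketch — an illustration added for this write-up, not part of the original slides — that reproduces the copper and mercury calculations using the same conversion factors given above:

```python
# Density worked examples from the slides, reproduced in Python.
# Conversion factors as given in the slides: 1 cm^3 = 1 mL, 454 g = 1 lb.

def density(mass_g, volume_cm3):
    """Density D = m / V in g/cm^3."""
    return mass_g / volume_cm3

# Copper block: convert 0.95 mm to cm, then compute volume and density.
length, width, thickness = 9.36, 7.23, 0.95 / 10  # all in cm
volume = length * width * thickness               # ~6.4 cm^3
print(f"copper density: {density(57.54, volume):.1f} g/cm^3")  # ~9.0

# Mercury: mass from density and volume, then grams -> pounds.
mass_hg = 13.6 * 95            # g/cm^3 * mL (1 mL = 1 cm^3) -> ~1.3e3 g
print(f"mercury mass: {mass_hg:.0f} g = {mass_hg / 454:.1f} lb")  # ~2.8 lb
```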
How to Graph Polynomials and Construct Their Equations From Graphs

The nice thing about factorised polynomials is that they are quite easy to graph. First, some terminology needs to be clarified. The roots of such an expression are known as the zeros of the equation (when the polynomial is equal to zero), and these are known as the x-intercepts of the corresponding graph. So, (x-1)²(x+3)(x+5)³ has roots at x = 1, -3 and -5; (x-1)²(x+3)(x+5)³ = 0 has zeros at x = 1, -3 and -5; and, when we graph y = (x-1)²(x+3)(x+5)³, the x-intercepts will be found at x = 1, -3 and -5. I have been guilty of not using this terminology strictly as, in high schools, the terms are often used interchangeably. Perhaps I should be more precise, but there are enough battles to fight in the schooling system. The factors provide information about where the zeros/roots/x-intercepts are, as well as details about their nature (whether they are single, double, or triple roots, for example). If a polynomial function has already been factorised, identifying its roots and producing a quick sketch of the graph is quite easy using the skills you will discover here. In the next videos/posts you will see how to handle repeated roots and negative coefficients. After that, I will demonstrate how this collection of curve-sketching skills can be used to create quick preliminary sketches of the graphs required for the Higher School Certificate Examinations in New South Wales, Australia. Of course, if the polynomial function has not been factorised, you need to learn how to achieve this goal! The first lessons that I will provide will be based on the Factor Theorem, the Remainder Theorem, and Polynomial Division. I will add material about these matters as I am able.

Some graphs of polynomials rise rapidly as you move to the right along the x-axis, and others descend rapidly. In this video we discover that this property is governed by the leading coefficient (the coefficient of the largest power of x ... i.e. the term with the highest degree). If the leading coefficient is positive, the graph rises to the right; if the leading coefficient is negative, the graph descends to the right. The good news is that, whether a polynomial is factorised or not, it is quite easy to determine the sign of this coefficient. This key piece of information, plus a knowledge of the roots, allows you to determine the general shape of the polynomial.

If the leading coefficient of a polynomial is not equal to one (that is, the polynomial is non-monic), then the effect of this number is to increase or decrease the steepness of the curve. Whether a polynomial is completely factorised or not, it is very important to determine the value of the leading coefficient. Large numbers indicate that the graph is very steep, and smaller numbers indicate that it is flatter. This key piece of information, plus a knowledge of the roots, allows you to determine the general shape of the polynomial.

If factors are repeated in a polynomial, that is, if they have degree greater than one, then the corresponding roots are repeated. We discover in this video that factors with an even degree cause the polynomial to behave 'like a parabola' (a U-shape) near the corresponding x-intercept, and factors with odd degree behave rather 'like a cubic' (an S-shape) near their corresponding x-intercept.

So far, we have learned how to sketch polynomial functions simply by understanding a few principles concerning their roots and leading coefficient. Now it is time for me to demonstrate this set of skills.
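Before the worked examples, here is a small illustrative sketch (my addition, not from the videos) that samples the factored polynomial from above near each root, confirming numerically that even multiplicities touch the x-axis while odd multiplicities cross it:

```python
# Sketching aid (illustrative): sample y = (x-1)^2 (x+3) (x+5)^3 near its roots.
# A root of even multiplicity touches the x-axis like a parabola;
# a root of odd multiplicity crosses it.

def f(x):
    return (x - 1) ** 2 * (x + 3) * (x + 5) ** 3

for root in (1, -3, -5):
    left, right = f(root - 0.1), f(root + 0.1)
    behaviour = "touches (U-shape)" if left * right > 0 else "crosses (S- or line-shape)"
    print(f"near x = {root:>2}: f = {left:10.3f} | {right:10.3f} -> {behaviour}")
```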
As students complete their thirteen years of schooling in NSW, Australia, they sit for the Higher School Certificate examinations. With few exceptions, each year one of the questions in their mathematics paper requires that they sketch a polynomial equation. These questions are primarily designed so that they can demonstrate how to use calculus to determine the curve's stationary points and points of inflection (inflexion in the UK). It will help you enormously if you can sketch the general shape of the curve within seconds ... before addressing all the parts of the question that seek details about the curve. In this way you will know where to expect stationary points and points of inflection. Therefore, you will be in an excellent position to identify any careless errors that you may make with the detailed work (calculus). If your final graph conforms to the general shape of your original draft, this will also provide you with a boost to your confidence. That can help reduce your stress levels during the examination. Because this is such a useful group of skills, I have chosen to demonstrate them by taking seven questions from recent HSC examination papers and graphing each one. The HSC Mathematics Examination questions that I discuss in this video are:

2003 Q05a f(x) = x⁴ - 4x³ (also 2007 Q06b)
2004 Q04b f(x) = x³ - 3x²
2005 Q04b f(x) = (x + 3)(x² - 9)
2006 Q05a f(x) = 2x²(3 - x)
2008 Q08a f(x) = x⁴ - 8x²
2010 Q06a f(x) = (x + 2)(x² + 4)
2012 Q14a f(x) = 3x⁴ + 4x³ - 12x²

Note that, because the 2010 question contained a quadratic factor that did not have real roots, the graphing skills described in these videos were not particularly relevant or helpful. In other words, although these are very powerful tools/skills to have, they are not always successful in helping to graph the function (e.g. if the factors are not linear).

When I was at school I wanted to know how to construct my own formulae. Eventually, I learned a few clever techniques, and this is one of them. If you know the x-intercepts and y-intercept of a polynomial (from a graph), it is quite 'easy' to deduce the basic polynomial function that generates that graph (see the sketch after this section). The method is simply a reversal of the process that we have discussed in the last few videos. Because of this, learning to create formulae in this way will help you to consolidate your understanding of how to sketch polynomials. You have to use the same key insights in order to find their formulae!

Strictly speaking, polynomials do not appear in this video at all! I have included the video here because it is a 'fun' way of exploring the concept of zeros of an equation (most of the videos about polynomials, so far, have been based on the concept of zeros). On this occasion I show that it is possible to use the concept of zeros (when the product of a number of factors equals zero) to produce graphs composed of many component parts. In particular, I show how to create just one equation that produces the words I LOVE YOU on graph paper. This might be a very 'nerdy' way to express your love for someone! Click on the image at right so you can see and print the equation. You will find appropriate graph paper here. When you have finished, you might now ask another question: what kind of graph is created if the product of factors is not equal to zero? I will be answering this question when I eventually produce a series of videos discussing asymptotes.
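To make the reverse process concrete, here is a short illustrative sketch (my addition, using assumed example intercepts rather than any exam question) that recovers the leading coefficient from the x-intercepts and the y-intercept, exactly as described above:

```python
# Illustrative reconstruction (assumed example values, not from an exam):
# a graph shows x-intercepts at -2 and 1 (a double root), and a y-intercept of 8.
# Model: f(x) = a (x + 2) (x - 1)^2, then solve f(0) = 8 for a.

from math import prod

x_intercepts = {-2: 1, 1: 2}   # root -> multiplicity
y_intercept = 8.0

# f(0) = a * product of (0 - root)^multiplicity over all roots
factor_at_zero = prod((0 - r) ** m for r, m in x_intercepts.items())
a = y_intercept / factor_at_zero
print(f"leading coefficient a = {a}")   # 8 / (2 * 1) = 4.0

def f(x):
    return a * prod((x - r) ** m for r, m in x_intercepts.items())

assert f(0) == y_intercept  # the reconstructed formula passes through (0, 8)
```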
Although polynomials do not appear in this video at all, I have included the video and downloadable material here because they illustrate how zeros can be used to construct equations. So, this is meant to be a 'fun' and also a rather serious video at the same time. If you are looking for a really unusual and 'nerdy' way of proposing to your girlfriend, and you both share a passion for mathematics and graphing, then this may be what you are looking for. This idea was inspired in part by an unusual proposal that I saw on YouTube where a young man proposed in binary code. On a more serious note, you will notice that this video is quite long. This is because I needed to review the three graphing skills discussed in previous videos: how to modify the equation for a circle to create long, thin ellipses, how to move any graph to new locations on the coordinate plane, and how to create one large equation that is composed of many smaller ones. In the video you will discover how to create just one equation that produces the words WILL YOU MARRY ME? on graph paper. Click on the image at right so you can see and print the equation. You will find appropriate graph paper here. Enjoy and learn! Download the full instructions (including equation and graph paper) in PDF format here or, if you want to give the material to someone without their being aware of what they are graphing, download the same instructions without the first explanatory page here. Note: If you are sharing the PDF file, and not a printout, then make sure you change the filename first because it contains the text "WILL YOU MARRY ME?"

Have you ever wanted to create formulae for your own graphs? This is your chance to develop some very powerful mathematical skills based on a simple principle. The principle is that any number multiplied by zero equals zero! We learn and understand this at quite a young age. In fact, 'zero' would rank as most people's favourite multiplication table! Surprisingly, this very simple concept allows us to construct the most amazingly complex equations (and their graphs) without much effort. Take the time to watch this video and then download the PDF workbook and see if you can complete the twelve questions contained therein! Believe it or not, if you understand this concept well, then you will be able to understand the behaviour of asymptotes and hyperbolae quite well. Enjoy and learn!
A mineral is, broadly speaking, a solid chemical compound that occurs naturally in pure form. Minerals are most commonly associated with rocks due to the presence of minerals within rocks. These rocks may consist of one type of mineral, or may be an aggregate of two or more different types of minerals, spatially segregated into distinct phases. Compounds that occur only in living beings are usually excluded, but some minerals are often biogenic (such as calcite) and/or are organic compounds in the sense of chemistry (such as mellite). Moreover, living beings often synthesize inorganic minerals (such as hydroxylapatite) that also occur in rocks.

In geology and mineralogy, the term "mineral" is usually reserved for mineral species: crystalline compounds with a fairly well-defined chemical composition and a specific crystal structure. Minerals without a definite crystalline structure, such as opal or obsidian, are then more properly called mineraloids. If a chemical compound may occur naturally with different crystal structures, each structure is considered a different mineral species. Thus, for example, quartz and stishovite are two different minerals consisting of the same compound, silicon dioxide.

The International Mineralogical Association (IMA) is the world's premier standard body for the definition and nomenclature of mineral species. As of November 2018, the IMA recognizes 5,413 official mineral species out of more than 5,500 proposed or traditional ones. The chemical composition of a named mineral species may vary somewhat by the inclusion of small amounts of impurities. Specific varieties of a species sometimes have conventional or official names of their own. For example, amethyst is a purple variety of the mineral species quartz. Some mineral species can have variable proportions of two or more chemical elements that occupy equivalent positions in the mineral's structure; for example, the formula of mackinawite is given as (Fe,Ni)9S8, meaning FexNi9−xS8, where x is a variable number between 0 and 9. Sometimes a mineral with variable composition is split into separate species, more or less arbitrarily, forming a mineral group; that is the case of the silicates CaxMgyFe2−x−ySiO4, the olivine group.

Besides the essential chemical composition and crystal structure, the description of a mineral species usually includes its common physical properties such as habit, hardness, lustre, diaphaneity, colour, streak, tenacity, cleavage, fracture, parting, specific gravity, magnetism, fluorescence, radioactivity, as well as its taste or smell and its reaction to acid.

Minerals are classified by key chemical constituents; the two dominant systems are the Dana classification and the Strunz classification. Silicate minerals comprise approximately 90% of the Earth's crust. Other important mineral groups include the native elements, sulfides, oxides, halides, carbonates, sulfates, and phosphates.

One definition of a mineral encompasses the following criteria:
- Formed by a natural process (anthropogenic compounds are excluded).
- Stable or metastable at room temperature (25 °C). In the simplest sense, this means the mineral must be solid. Classical examples of exceptions to this rule include native mercury, which crystallizes at −39 °C, and water ice, which is solid only below 0 °C; because these two minerals were described before 1959, they were grandfathered by the International Mineralogical Association (IMA).
Modern advances have included extensive study of liquid crystals, which also extensively involve mineralogy.
- Represented by a chemical formula. Minerals are chemical compounds, and as such they can be described by a fixed or a variable formula. Many mineral groups and species are composed of a solid solution; pure substances are not usually found because of contamination or chemical substitution. For example, the olivine group is described by the variable formula (Mg, Fe)2SiO4, which is a solid solution of two end-member species, magnesium-rich forsterite and iron-rich fayalite, each of which is described by a fixed chemical formula. Mineral species themselves could have a variable composition, such as the sulfide mackinawite, (Fe, Ni)9S8, which is mostly a ferrous sulfide, but has a very significant nickel impurity that is reflected in its formula.
- Ordered atomic arrangement. This generally means crystalline; however, crystals are also periodic, so the broader criterion — an ordered atomic arrangement — is used instead. An ordered atomic arrangement gives rise to a variety of macroscopic physical properties, such as crystal form, hardness, and cleavage.

There have been several recent proposals to classify biogenic or amorphous substances as minerals. The formal definition of a mineral approved by the IMA in 1995: "A mineral is an element or chemical compound that is normally crystalline and that has been formed as a result of geological processes."
- Usually abiogenic (not resulting from the activity of living organisms). Biogenic substances are explicitly excluded by the IMA: "Biogenic substances are chemical compounds produced entirely by biological processes without a geological component (e.g., urinary calculi, oxalate crystals in plant tissues, shells of marine molluscs, etc.) and are not regarded as minerals. However, if geological processes were involved in the genesis of the compound, then the product can be accepted as a mineral."

The first three general characteristics are less debated than the last two. Mineral classification schemes and their definitions are evolving to match recent advances in mineral science. Recent changes have included the addition of an organic class, in both the new Dana and the Strunz classification schemes. The organic class includes a very rare group of minerals with hydrocarbons. The IMA Commission on New Minerals and Mineral Names adopted in 2009 a hierarchical scheme for the naming and classification of mineral groups and group names and established seven commissions and four working groups to review and classify minerals into an official listing of their published names. According to these new rules, "mineral species can be grouped in a number of different ways, on the basis of chemistry, crystal structure, occurrence, association, genetic history, or resource, for example, depending on the purpose to be served by the classification."

The Nickel (1995) exclusion of biogenic substances was not universally adhered to. For example, Lowenstam (1981) stated that "organisms are capable of forming a diverse array of minerals, some of which cannot be formed inorganically in the biosphere." The distinction is a matter of classification and less to do with the constituents of the minerals themselves. Skinner (2005) views all solids as potential minerals and includes biominerals in the mineral kingdom, which are those that are created by the metabolic activities of organisms.
Skinner expanded the previous definition of a mineral to classify "element or compound, amorphous or crystalline, formed through biogeochemical processes," as a mineral. Recent advances in high-resolution genetics and X-ray absorption spectroscopy are providing revelations on the biogeochemical relations between microorganisms and minerals that may make Nickel's (1995) biogenic mineral exclusion obsolete and Skinner's (2005) biogenic mineral inclusion a necessity. For example, the IMA-commissioned "Working Group on Environmental Mineralogy and Geochemistry" deals with minerals in the hydrosphere, atmosphere, and biosphere. The group's scope includes mineral-forming microorganisms, which exist on nearly every rock, soil, and particle surface spanning the globe to depths of at least 1600 metres below the sea floor and 70 kilometres into the stratosphere (possibly entering the mesosphere). Biogeochemical cycles have contributed to the formation of minerals for billions of years. Microorganisms can precipitate metals from solution, contributing to the formation of ore deposits. They can also catalyze the dissolution of minerals.

Prior to the International Mineralogical Association's listing, over 60 biominerals had been discovered, named, and published. These minerals (a sub-set tabulated in Lowenstam (1981)) are considered minerals proper according to the Skinner (2005) definition. These biominerals are not listed in the International Mineralogical Association's official list of mineral names; however, many of these biomineral representatives are distributed amongst the 78 mineral classes listed in the Dana classification scheme. Another rare class of minerals (primarily biological in origin) includes the mineral liquid crystals, which have properties of both liquids and crystals. To date, over 80,000 liquid crystalline compounds have been identified. The Skinner (2005) definition of a mineral takes this matter into account by stating that a mineral can be crystalline or amorphous, the latter group including liquid crystals. Although biominerals and liquid mineral crystals are not the most common forms of minerals, they help to define the limits of what constitutes a mineral proper. The formal Nickel (1995) definition explicitly mentioned crystallinity as a key to defining a substance as a mineral. A 2011 article defined icosahedrite, an aluminium-iron-copper alloy, as a mineral; named for its unique natural icosahedral symmetry, it is a quasicrystal. Unlike a true crystal, quasicrystals are ordered but not periodic.

Rocks, ores, and gems

Minerals are not equivalent to rocks. A rock is an aggregate of one or more minerals or mineraloids. Some rocks, such as limestone or quartzite, are composed primarily of one mineral – calcite or aragonite in the case of limestone, and quartz in the latter case. Other rocks can be defined by relative abundances of key (essential) minerals; a granite is defined by proportions of quartz, alkali feldspar, and plagioclase feldspar. The other minerals in the rock are termed accessory, and do not greatly affect the bulk composition of the rock. Rocks can also be composed entirely of non-mineral material; coal is a sedimentary rock composed primarily of organically derived carbon. In rocks, some mineral species and groups are much more abundant than others; these are termed the rock-forming minerals.
The major examples of these are quartz, the feldspars, the micas, the amphiboles, the pyroxenes, the olivines, and calcite; except for the last one, all of these minerals are silicates. Overall, around 150 minerals are considered particularly important, whether in terms of their abundance or aesthetic value in terms of collecting. Commercially valuable minerals and rocks are referred to as industrial minerals. For example, muscovite, a white mica, can be used for windows (sometimes referred to as isinglass), as a filler, or as an insulator. Ores are minerals that have a high concentration of a certain element, typically a metal. Examples are cinnabar (HgS), an ore of mercury, sphalerite (ZnS), an ore of zinc, or cassiterite (SnO2), an ore of tin. Gems are minerals with an ornamental value, and are distinguished from non-gems by their beauty, durability, and, usually, rarity. There are about 20 mineral species that qualify as gem minerals, which constitute about 35 of the most common gemstones. Gem minerals are often present in several varieties, and so one mineral can account for several different gemstones; for example, ruby and sapphire are both corundum, Al2O3.

Nomenclature and classification

Minerals are classified by variety, species, series and group, in order of increasing generality. The basic level of definition is that of mineral species, each of which is distinguished from the others by unique chemical and physical properties. For example, quartz is defined by its formula, SiO2, and a specific crystalline structure that distinguishes it from other minerals with the same chemical formula (termed polymorphs). When there exists a range of composition between two mineral species, a mineral series is defined. For example, the biotite series is represented by variable amounts of the endmembers phlogopite, siderophyllite, annite, and eastonite. In contrast, a mineral group is a grouping of mineral species with some common chemical properties that share a crystal structure. The pyroxene group has a common formula of XY(Si,Al)2O6, where X and Y are both cations, with X typically bigger than Y; the pyroxenes are single-chain silicates that crystallize in either the orthorhombic or monoclinic crystal systems. Finally, a mineral variety is a specific type of mineral species that differs by some physical characteristic, such as colour or crystal habit. An example is amethyst, which is a purple variety of quartz.

Two common classifications, Dana and Strunz, are used for minerals; both rely on composition, specifically with regards to important chemical groups, and structure. James Dwight Dana, a leading geologist of his time, first published his System of Mineralogy in 1837; as of 1997, it is in its eighth edition. The Dana classification assigns a four-part number to a mineral species. Its class number is based on important compositional groups; the type gives the ratio of cations to anions in the mineral, and the last two numbers group minerals by structural similarity within a given type or class. The less commonly used Strunz classification, named for German mineralogist Karl Hugo Strunz, is based on the Dana system, but combines both chemical and structural criteria, the latter with regards to distribution of chemical bonds. As of November 2018, 5,413 mineral species are approved by the IMA.
They are most commonly named after a person (45%), followed by discovery location (23%); names based on chemical composition (14%) and physical properties (8%) are the two other major groups of mineral name etymologies. The word "species" (from the Latin species, "a particular sort, kind, or type with distinct look, or appearance") comes from the classification scheme in Systema Naturae by Carl Linnaeus. He divided the natural world into three kingdoms – plants, animals, and minerals – and classified each with the same hierarchy. In descending order, these were Phylum, Class, Order, Family, Tribe, Genus, and Species.

The abundance and diversity of minerals is controlled directly by their chemistry, in turn dependent on elemental abundances in the Earth. The majority of minerals observed are derived from the Earth's crust. Eight elements account for most of the key components of minerals, due to their abundance in the crust. These eight elements, summing to over 98% of the crust by weight, are, in order of decreasing abundance: oxygen, silicon, aluminium, iron, magnesium, calcium, sodium and potassium. Oxygen and silicon are by far the two most important – oxygen composes 47% of the crust by weight, and silicon accounts for 28%.

The minerals that form are directly controlled by the bulk chemistry of the parent body. For example, a magma rich in iron and magnesium will form mafic minerals, such as olivine and the pyroxenes; in contrast, a more silica-rich magma will crystallize to form minerals that incorporate more SiO2, such as the feldspars and quartz. In a limestone, calcite or aragonite (both CaCO3) form because the rock is rich in calcium and carbonate. A corollary is that a mineral will not be found in a rock whose bulk chemistry does not resemble the bulk chemistry of a given mineral, with the exception of trace minerals. For example, kyanite, Al2SiO5, forms from the metamorphism of aluminium-rich shales; it would not likely occur in aluminium-poor rock, such as quartzite.

The chemical composition may vary between end member species of a solid solution series. For example, the plagioclase feldspars comprise a continuous series from sodium-rich end member albite (NaAlSi3O8) to calcium-rich anorthite (CaAl2Si2O8), with four recognized intermediate varieties between them (given in order from sodium- to calcium-rich): oligoclase, andesine, labradorite, and bytownite. Other examples of series include the olivine series of magnesium-rich forsterite and iron-rich fayalite, and the wolframite series of manganese-rich hübnerite and iron-rich ferberite.

Chemical substitution and coordination polyhedra explain this common feature of minerals. In nature, minerals are not pure substances, and are contaminated by whatever other elements are present in the given chemical system. As a result, it is possible for one element to be substituted for another. Chemical substitution will occur between ions of a similar size and charge; for example, K+ will not substitute for Si4+ because of chemical and structural incompatibilities caused by a large difference in size and charge. A common example of chemical substitution is that of Si4+ by Al3+, which are close in charge, size, and abundance in the crust. In the example of plagioclase, there are three cases of substitution.
Chemical substitution and coordination polyhedra explain this common feature of minerals. In nature, minerals are not pure substances, and are contaminated by whatever other elements are present in the given chemical system. As a result, it is possible for one element to be substituted for another. Chemical substitution will occur between ions of a similar size and charge; for example, K+ will not substitute for Si4+ because of chemical and structural incompatibilities caused by the big difference in size and charge. A common example of chemical substitution is that of Si4+ by Al3+, which are close in charge, size, and abundance in the crust. In the example of plagioclase, there are three cases of substitution. Feldspars are all framework silicates, which have a silicon-oxygen ratio of 2:1, and the space for other elements is given by the substitution of Si4+ by Al3+ to give a base unit of [AlSi3O8]−; without the substitution, the formula would be charge-balanced as SiO2, giving quartz. The significance of this structural property will be explained further by coordination polyhedra. The second substitution occurs between Na+ and Ca2+; however, the difference in charge has to be accounted for by making a coupled second substitution of Si4+ by Al3+.

Coordination polyhedra are geometric representations of how a cation is surrounded by its anions. In mineralogy, coordination polyhedra are usually considered in terms of oxygen, due to its abundance in the crust. The base unit of silicate minerals is the silica tetrahedron – one Si4+ surrounded by four O2−. An alternative way of describing the coordination of the silicate is by a number: in the case of the silica tetrahedron, the silicon is said to have a coordination number of 4. Various cations have a specific range of possible coordination numbers; for silicon, it is almost always 4, except for very high-pressure minerals where the compound is compressed such that silicon is in six-fold (octahedral) coordination with oxygen. Bigger cations have bigger coordination numbers because of the increase in relative size as compared to oxygen (the last orbital subshell of heavier atoms is different too). Changes in coordination number lead to physical and mineralogical differences; for example, at high pressure, such as in the mantle, many minerals, especially silicates such as olivine and garnet, will change to a perovskite structure, where silicon is in octahedral coordination. Other examples are the aluminosilicates kyanite, andalusite, and sillimanite (polymorphs, since they share the formula Al2SiO5), which differ by the coordination number of the Al3+; these minerals transition from one to another as a response to changes in pressure and temperature. In the case of silicate materials, the substitution of Si4+ by Al3+ allows for a variety of minerals because of the need to balance charges.
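The observation that bigger cations take bigger coordination numbers is often made quantitative with the textbook radius-ratio rule, which the passage above does not spell out. A minimal sketch follows; the thresholds are the standard geometric packing limits and the example radii are approximate Pauling values, both assumptions of this sketch.

```python
# Sketch of the radius-ratio rule of thumb: the cation-to-anion radius ratio
# suggests a likely coordination number. Thresholds are the standard
# geometric packing limits; real minerals can deviate from them.

RADIUS_RATIO_BOUNDS = [
    (0.155, 2),   # linear
    (0.225, 3),   # triangular
    (0.414, 4),   # tetrahedral
    (0.732, 6),   # octahedral
    (1.000, 8),   # cubic
]

def likely_coordination(r_cation: float, r_anion: float) -> int:
    """Estimate a coordination number from ionic radii (same units)."""
    ratio = r_cation / r_anion
    for upper, coordination_number in RADIUS_RATIO_BOUNDS:
        if ratio < upper:
            return coordination_number
    return 12  # close-packed limit

# Approximate Pauling radii in angstroms (O2- is 1.40):
print(likely_coordination(0.41, 1.40))  # Si4+ -> 4 (tetrahedral)
print(likely_coordination(0.65, 1.40))  # Mg2+ -> 6 (octahedral)
print(likely_coordination(1.33, 1.40))  # K+   -> 8
```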
Changes in temperature and pressure and composition alter the mineralogy of a rock sample. Changes in composition can be caused by processes such as weathering or metasomatism (hydrothermal alteration). Changes in temperature and pressure occur when the host rock undergoes tectonic or magmatic movement into differing physical regimes. Changes in thermodynamic conditions make it favourable for mineral assemblages to react with each other to produce new minerals; as such, it is possible for two rocks to have an identical or a very similar bulk rock chemistry without having a similar mineralogy. This process of mineralogical alteration is related to the rock cycle. An example of a series of mineral reactions is illustrated as follows. Orthoclase feldspar (KAlSi3O8) is a mineral commonly found in granite, a plutonic igneous rock. When exposed to weathering, it reacts to form kaolinite (Al2Si2O5(OH)4), a sedimentary mineral, and silicic acid:

- 2 KAlSi3O8 + 5 H2O + 2 H+ → Al2Si2O5(OH)4 + 4 H2SiO3 + 2 K+

Under low-grade metamorphic conditions, kaolinite reacts with quartz to form pyrophyllite (Al2Si4O10(OH)2):

- Al2Si2O5(OH)4 + 2 SiO2 → Al2Si4O10(OH)2 + H2O

As metamorphic grade increases, the pyrophyllite reacts to form kyanite and quartz:

- Al2Si4O10(OH)2 → Al2SiO5 + 3 SiO2 + H2O

Alternatively, a mineral may change its crystal structure as a consequence of changes in temperature and pressure without reacting. For example, quartz will change into a variety of its SiO2 polymorphs, such as tridymite and cristobalite at high temperatures, and coesite at high pressures.

Classifying minerals ranges from simple to difficult. A mineral can be identified by several physical properties, some of them being sufficient for full identification without equivocation. In other cases, minerals can only be classified by more complex optical, chemical or X-ray diffraction analysis; these methods, however, can be costly and time-consuming. Physical properties applied for classification include crystal structure and habit, hardness, lustre, diaphaneity, colour, streak, cleavage and fracture, and specific gravity. Other, less general tests include fluorescence, phosphorescence, magnetism, radioactivity, tenacity (response to mechanically induced changes of shape or form), piezoelectricity and reactivity to dilute acids.

Crystal structure and habit

Crystal structure results from the orderly geometric spatial arrangement of atoms in the internal structure of a mineral. This crystal structure is based on a regular internal atomic or ionic arrangement that is often expressed in the geometric form that the crystal takes. Even when the mineral grains are too small to see or are irregularly shaped, the underlying crystal structure is always periodic and can be determined by X-ray diffraction. Minerals are typically described by their symmetry content. Crystals are restricted to 32 point groups, which differ by their symmetry. These groups are classified in turn into broader categories, the most encompassing of these being the six crystal families. These families can be described by the relative lengths of the three crystallographic axes, and the angles between them; these relationships correspond to the symmetry operations that define the narrower point groups. They are summarized below; a, b, and c represent the axes, and α, β, γ represent the angle opposite the respective crystallographic axis (e.g. α is the angle opposite the a-axis, viz. the angle between the b and c axes):

|Crystal family||Lengths||Angles||Common examples|
|Isometric||a=b=c||α=β=γ=90°||Garnet, halite, pyrite|
|Tetragonal||a=b≠c||α=β=γ=90°||Rutile, zircon, cassiterite|
|Orthorhombic||a≠b≠c||α=β=γ=90°||Olivine, aragonite, orthopyroxenes|
|Hexagonal||a=b≠c||α=β=90°, γ=120°||Quartz, calcite, tourmaline|
|Monoclinic||a≠b≠c||α=γ=90°, β≠90°||Clinopyroxenes, orthoclase, gypsum|
|Triclinic||a≠b≠c||α≠β≠γ≠90°||Anorthite, albite, kyanite|

The hexagonal crystal family is also split into two crystal systems – the trigonal, which has a three-fold axis of symmetry, and the hexagonal, which has a six-fold axis of symmetry.
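The criteria in the table can be applied mechanically, as the sketch below illustrates. It is an idealization: it tests exact equality where a practical tool would need measurement tolerances, and the example cell parameters are approximate literature values.

```python
# Sketch: classify an idealized unit cell into one of the six crystal
# families, applying the length and angle criteria from the table above.
import math

def crystal_family(a: float, b: float, c: float,
                   alpha: float, beta: float, gamma: float) -> str:
    """a, b, c are axis lengths; alpha, beta, gamma are the angles (degrees)
    opposite the respective axes, as in the table."""
    eq = math.isclose
    right_angles = all(eq(x, 90) for x in (alpha, beta, gamma))
    if eq(a, b) and eq(b, c) and right_angles:
        return "isometric"
    if eq(a, b) and not eq(a, c) and right_angles:
        return "tetragonal"
    if eq(a, b) and eq(alpha, 90) and eq(beta, 90) and eq(gamma, 120):
        return "hexagonal"
    if right_angles:
        return "orthorhombic"
    if eq(alpha, 90) and eq(gamma, 90):
        return "monoclinic"
    return "triclinic"

print(crystal_family(4.91, 4.91, 5.41, 90, 90, 120))  # quartz -> hexagonal
print(crystal_family(5.64, 5.64, 5.64, 90, 90, 90))   # halite -> isometric
```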
Chemistry and crystal structure together define a mineral. With a restriction to 32 point groups, minerals of different chemistry may have identical crystal structure. For example, halite (NaCl), galena (PbS), and periclase (MgO) all belong to the hexaoctahedral point group (isometric family), as they have a similar stoichiometry between their different constituent elements. In contrast, polymorphs are groupings of minerals that share a chemical formula but have a different structure. For example, pyrite and marcasite, both iron sulfides, have the formula FeS2; however, the former is isometric while the latter is orthorhombic. This polymorphism extends to other sulfides with the generic AX2 formula; these two groups are collectively known as the pyrite and marcasite groups.

Polymorphism can extend beyond pure symmetry content. The aluminosilicates are a group of three minerals – kyanite, andalusite, and sillimanite – which share the chemical formula Al2SiO5. Kyanite is triclinic, while andalusite and sillimanite are both orthorhombic and belong to the dipyramidal point group. These differences arise from how aluminium is coordinated within the crystal structure. In all three minerals, one aluminium ion is always in six-fold coordination with oxygen. Silicon, as a general rule, is in four-fold coordination in all minerals; an exception is a case like stishovite (SiO2, an ultra-high-pressure quartz polymorph with a rutile structure). In kyanite, the second aluminium is in six-fold coordination; its chemical formula can be expressed as Al[6]Al[6]SiO5, to reflect its crystal structure. Andalusite has the second aluminium in five-fold coordination (Al[6]Al[5]SiO5) and sillimanite has it in four-fold coordination (Al[6]Al[4]SiO5).

Differences in crystal structure and chemistry greatly influence other physical properties of the mineral. The carbon allotropes diamond and graphite have vastly different properties; diamond is the hardest natural substance, has an adamantine lustre, and belongs to the isometric crystal family, whereas graphite is very soft, has a greasy lustre, and crystallizes in the hexagonal family. This difference is accounted for by differences in bonding. In diamond, the carbons are in sp3 hybrid orbitals, which means they form a framework where each carbon is covalently bonded to four neighbours in a tetrahedral fashion; on the other hand, graphite is composed of sheets of carbons in sp2 hybrid orbitals, where each carbon is bonded covalently to only three others. These sheets are held together by much weaker van der Waals forces, and this discrepancy translates to large macroscopic differences.

Twinning is the intergrowth of two or more crystals of a single mineral species. The geometry of the twinning is controlled by the mineral's symmetry. As a result, there are several types of twins, including contact twins, reticulated twins, geniculated twins, penetration twins, cyclic twins, and polysynthetic twins. Contact, or simple, twins consist of two crystals joined at a plane; this type of twinning is common in spinel. Reticulated twins, common in rutile, are interlocking crystals resembling netting. Geniculated twins have a bend in the middle that is caused by the start of the twin. Penetration twins consist of two single crystals that have grown into each other; examples of this twinning include cross-shaped staurolite twins and Carlsbad twinning in orthoclase. Cyclic twins are caused by repeated twinning around a rotation axis. This type of twinning occurs around three-, four-, five-, six-, or eight-fold axes, and the corresponding patterns are called threelings, fourlings, fivelings, sixlings, and eightlings. Sixlings are common in aragonite.
Polysynthetic twins are similar to cyclic twins through the presence of repetitive twinning; however, instead of occurring around a rotational axis, polysynthetic twinning occurs along parallel planes, usually on a microscopic scale.

Crystal habit refers to the overall shape of a crystal. Several terms are used to describe this property. Common habits include acicular, which describes needlelike crystals, as in natrolite; bladed; dendritic (tree-pattern, common in native copper); equant, which is typical of garnet; prismatic (elongated in one direction); and tabular, which differs from bladed habit in that the former is platy whereas the latter has a defined elongation. Related to crystal form, the quality of crystal faces is diagnostic of some minerals, especially with a petrographic microscope. Euhedral crystals have a defined external shape, while anhedral crystals do not; intermediate forms are termed subhedral.

The hardness of a mineral defines how much it can resist scratching. This physical property is controlled by the chemical composition and crystalline structure of a mineral. A mineral's hardness is not necessarily constant for all sides, which is a function of its structure; crystallographic weakness renders some directions softer than others. An example of this property exists in kyanite, which has a Mohs hardness of 5½ parallel to [001] but 7 parallel to [100]. The most common scale of measurement is the ordinal Mohs hardness scale. Defined by ten indicators, a mineral with a higher index scratches those below it. The scale ranges from talc, a phyllosilicate, to diamond, a carbon polymorph that is the hardest natural material. The scale is provided below:

|Mohs hardness||Mineral||Chemical formula|
|1||Talc||Mg3Si4O10(OH)2|
|2||Gypsum||CaSO4·2H2O|
|3||Calcite||CaCO3|
|4||Fluorite||CaF2|
|5||Apatite||Ca5(PO4)3(OH,Cl,F)|
|6||Orthoclase||KAlSi3O8|
|7||Quartz||SiO2|
|8||Topaz||Al2SiO4(F,OH)2|
|9||Corundum||Al2O3|
|10||Diamond||C|
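In practice the ordinal scale is used by bracketing: a specimen's hardness lies above the hardest reference mineral it scratches and below the next one. A minimal sketch of that logic follows; the scratch-test results in the example are invented.

```python
# Sketch: bracket an unknown's Mohs hardness from scratch tests against the
# ten reference minerals of the scale above.

MOHS = ["talc", "gypsum", "calcite", "fluorite", "apatite",
        "orthoclase", "quartz", "topaz", "corundum", "diamond"]

def bracket_hardness(scratched: set) -> tuple:
    """Given the set of reference minerals the sample scratches, return the
    (lower, upper) bounds on its Mohs hardness."""
    hardest = 0
    for index, mineral in enumerate(MOHS, start=1):
        if mineral in scratched:
            hardest = index
    return (hardest, hardest + 1)

# A sample that scratches fluorite (4) but nothing harder:
print(bracket_hardness({"talc", "gypsum", "calcite", "fluorite"}))  # (4, 5)
```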
Lustre and diaphaneity

Lustre indicates how light reflects from the mineral's surface, with regard to its quality and intensity. There are numerous qualitative terms used to describe this property, which are split into metallic and non-metallic categories. Metallic and sub-metallic minerals have high reflectivity, like metal; examples of minerals with this lustre are galena and pyrite. Non-metallic lustres include: adamantine, such as in diamond; vitreous, which is a glassy lustre very common in silicate minerals; pearly, such as in talc and apophyllite; resinous, such as in members of the garnet group; and silky, which is common in fibrous minerals such as asbestiform chrysotile. The diaphaneity of a mineral describes the ability of light to pass through it. Transparent minerals do not diminish the intensity of light passing through them; an example of a transparent mineral is muscovite (potassium mica), some varieties of which are sufficiently clear to have been used for windows. Translucent minerals allow some light to pass, but less than those that are transparent; jadeite and nephrite (the mineral forms of jade) are examples of minerals with this property. Minerals that do not allow light to pass are called opaque. The diaphaneity of a mineral depends on the thickness of the sample. When a mineral is sufficiently thin (e.g., in a thin section for petrography), it may become transparent even if that property is not seen in a hand sample. In contrast, some minerals, such as hematite or pyrite, are opaque even in thin section.

Colour and streak

Colour is the most obvious property of a mineral, but it is often non-diagnostic. It is caused by electromagnetic radiation interacting with electrons (except in the case of incandescence, which does not apply to minerals). Two broad classes of elements, idiochromatic and allochromatic, are defined with regard to their contribution to a mineral's colour. Idiochromatic elements are essential to a mineral's composition; their contribution to a mineral's colour is diagnostic. Examples of such minerals are malachite (green) and azurite (blue). In contrast, allochromatic elements are present in trace amounts as impurities; an example would be the ruby and sapphire varieties of the mineral corundum. The colours of pseudochromatic minerals are the result of interference of light waves; examples include labradorite and bornite.

In addition to simple body colour, minerals can have various other distinctive optical properties, such as play of colours, asterism, chatoyancy, iridescence, tarnish, and pleochroism. Several of these properties involve variability in colour. Play of colour, such as in opal, results in the sample reflecting different colours as it is turned, while pleochroism describes the change in colour as light passes through a mineral in different orientations. Iridescence is a variety of the play of colours where light scatters off a coating on the surface of a crystal, off cleavage planes, or off layers having minor gradations in chemistry. In contrast, the play of colours in opal is caused by light refracting from ordered microscopic silica spheres within its physical structure. Chatoyancy ("cat's eye") is the wavy banding of colour that is observed as the sample is rotated; asterism, a variety of chatoyancy, gives the appearance of a star on the mineral grain. The latter property is particularly common in gem-quality corundum.

The streak of a mineral refers to the colour of a mineral in powdered form, which may or may not be identical to its body colour. The most common way of testing this property is with a streak plate, which is made out of porcelain and coloured either white or black. The streak of a mineral is independent of trace elements or any weathering surface. A common example of this property is illustrated with hematite, which is coloured black, silver, or red in hand sample, but has a cherry-red to reddish-brown streak. Streak is more often distinctive for metallic minerals, in contrast to non-metallic minerals whose body colour is created by allochromatic elements. Streak testing is constrained by the hardness of the mineral, as minerals harder than 7 powder the streak plate instead.

Cleavage, parting, fracture, and tenacity

By definition, minerals have a characteristic atomic arrangement. Weakness in this crystalline structure causes planes of weakness, and the breakage of a mineral along such planes is termed cleavage. The quality of cleavage can be described based on how cleanly and easily the mineral breaks; common descriptors, in order of decreasing quality, are "perfect", "good", "distinct", and "poor". In particularly transparent minerals, or in thin section, cleavage can be seen as a series of parallel lines marking the planar surfaces when viewed from the side. Cleavage is not a universal property among minerals; for example, quartz, consisting of extensively interconnected silica tetrahedra, does not have a crystallographic weakness which would allow it to cleave. In contrast, micas, which have perfect basal cleavage, consist of sheets of silica tetrahedra which are very weakly held together. As cleavage is a function of crystallography, there are a variety of cleavage types.
Cleavage typically occurs in one, two, three, four, or six directions. Basal cleavage in one direction is a distinctive property of the micas. Two-directional cleavage is described as prismatic, and occurs in minerals such as the amphiboles and pyroxenes. Minerals such as galena or halite have cubic (or isometric) cleavage in three directions at 90°; when three directions of cleavage are present but not at 90°, such as in calcite or rhodochrosite, it is termed rhombohedral cleavage. Octahedral cleavage (four directions) is present in fluorite and diamond, and sphalerite has six-directional dodecahedral cleavage. Minerals with many cleavages might not break equally well in all of the directions; for example, calcite has good cleavage in three directions, but gypsum has perfect cleavage in one direction and poor cleavage in two other directions. Angles between cleavage planes vary between minerals. For example, as the amphiboles are double-chain silicates and the pyroxenes are single-chain silicates, the angle between their cleavage planes is different. The pyroxenes cleave in two directions at approximately 90°, whereas the amphiboles distinctively cleave in two directions separated by approximately 120° and 60°. The cleavage angles can be measured with a contact goniometer, which is similar to a protractor.

Parting, sometimes called "false cleavage", is similar in appearance to cleavage, but is produced by structural defects in the mineral rather than systematic weakness. Parting varies from crystal to crystal of a mineral, whereas all crystals of a given mineral will cleave if the atomic structure allows for that property. In general, parting is caused by some stress applied to a crystal. The sources of the stresses include deformation (e.g. an increase in pressure), exsolution, or twinning. Minerals that often display parting include the pyroxenes, hematite, magnetite, and corundum.

When a mineral is broken in a direction that does not correspond to a plane of cleavage, it is said to have fractured. There are several types of uneven fracture. The classic example is conchoidal fracture, like that of quartz; rounded surfaces are created, which are marked by smooth curved lines. This type of fracture occurs only in very homogeneous minerals. Other types of fracture are fibrous, splintery, and hackly. The latter describes a break along a rough, jagged surface; an example of this property is found in native copper. Tenacity is related to both cleavage and fracture. Whereas fracture and cleavage describe the surfaces that are created when a mineral is broken, tenacity describes how resistant a mineral is to such breaking. Minerals can be described as brittle, ductile, malleable, sectile, flexible, or elastic.

Specific gravity numerically describes the density of a mineral. The dimensions of density are mass divided by volume, with units such as kg/m3 or g/cm3. In practice, specific gravity is measured by how much water a mineral sample displaces: defined as the quotient of the mass of the sample and the difference between the weight of the sample in air and its corresponding weight in water, it is a unitless ratio. For most minerals, this property is not diagnostic; rock-forming minerals – typically silicates or occasionally carbonates – have a specific gravity of 2.5–3.5. An unusually high specific gravity, however, is diagnostic. A variation in chemistry (and consequently, mineral class) correlates to a change in specific gravity.
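The measurement just described reduces to a single division, shown in the sketch below; the sample weights are invented for illustration.

```python
# Sketch of hydrostatic weighing: specific gravity is the weight in air
# divided by the loss of weight when the sample is suspended in water.

def specific_gravity(weight_in_air: float, weight_in_water: float) -> float:
    """SG = W_air / (W_air - W_water); unitless, any consistent unit works."""
    return weight_in_air / (weight_in_air - weight_in_water)

# A 52.6 g sample that weighs 42.6 g submerged displaces 10 g of water:
print(round(specific_gravity(52.6, 42.6), 2))  # 5.26, in the range of hematite
```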
Among more common minerals, oxides and sulfides tend to have a higher specific gravity as they include elements with higher atomic mass. A generalization is that minerals with metallic or adamantine lustre tend to have higher specific gravities than those having a non-metallic to dull lustre. For example, hematite, Fe2O3, has a specific gravity of 5.26, while galena, PbS, has a specific gravity of 7.2–7.6, a result of their high iron and lead contents, respectively. Very high specific gravity is characteristic of native metals; kamacite, an iron-nickel alloy common in iron meteorites, has a specific gravity of 7.9, and gold has an observed specific gravity between 15 and 19.3.

Other, less general properties can be used to diagnose specific minerals. Dropping dilute acid (often 10% HCl) onto a mineral aids in distinguishing carbonates from other mineral classes. The acid reacts with the carbonate ([CO3]2−) group, which causes the affected area to effervesce, giving off carbon dioxide gas. This test can be further expanded by testing the mineral in its original crystal form or in powdered form. An example is the test distinguishing calcite from dolomite, especially within their rocks (limestone and dolomite, respectively): calcite immediately effervesces in acid, whereas acid must be applied to powdered dolomite (often to a scratched surface in a rock) for it to effervesce. Zeolite minerals will not effervesce in acid; instead, they become frosted after 5–10 minutes, and if left in acid for a day, they dissolve or become a silica gel.

Magnetism, where present, is a very conspicuous property of minerals. Among common minerals, magnetite exhibits this property strongly, and magnetism is also present, albeit not as strongly, in pyrrhotite and ilmenite. Some minerals exhibit electrical properties – for example, quartz is piezoelectric – but electrical properties are rarely used as diagnostic criteria for minerals because of incomplete data and natural variation. Minerals can also be tested for taste or smell. Halite, NaCl, is table salt; its potassium-bearing counterpart, sylvite, has a pronounced bitter taste. Sulfides have a characteristic smell, especially as samples are fractured, reacting, or powdered. Radioactivity is a rare property, found in minerals containing radioactive elements. A radioactive element can be a defining constituent, such as uranium in uraninite, autunite, and carnotite, or present as a trace impurity. In the latter case, the decay of the radioactive element damages the mineral crystal; the result, termed a radioactive halo or pleochroic halo, is observable with various techniques, such as thin-section petrography.

As the composition of the Earth's crust is dominated by silicon and oxygen, silicate minerals are by far the most important class of minerals in terms of rock formation and diversity. However, non-silicate minerals are of great economic importance, especially as ores. Non-silicate minerals are subdivided into several other classes by their dominant chemistry, which includes native elements, sulfides, halides, oxides and hydroxides, carbonates and nitrates, borates, sulfates, phosphates, and organic compounds. Most non-silicate mineral species are rare (constituting in total 8% of the Earth's crust), although some are relatively common, such as calcite, pyrite, magnetite, and hematite. There are two major structural styles observed in non-silicates: close-packing and silicate-like linked tetrahedra.
Close-packed structures are a way to densely pack atoms while minimizing interstitial space. Hexagonal close-packing involves stacking layers where every other layer is the same ("ababab"), whereas cubic close-packing involves stacking groups of three layers ("abcabcabc"). Analogues to linked silica tetrahedra include SO4 (sulfate), PO4 (phosphate), AsO4 (arsenate), and VO4 (vanadate). The non-silicates have great economic importance, as they concentrate elements more than the silicate minerals do.

The largest grouping of minerals by far is the silicates; most rocks are composed of greater than 95% silicate minerals, and over 90% of the Earth's crust is composed of these minerals. The two main constituents of silicates are silicon and oxygen, which are the two most abundant elements in the Earth's crust. Other common elements in silicate minerals correspond to other common elements in the Earth's crust, such as aluminium, magnesium, iron, calcium, sodium, and potassium. Some important rock-forming silicates include the feldspars, quartz, the olivines, the pyroxenes, the amphiboles, the garnets, and the micas.

The base unit of a silicate mineral is the [SiO4]4− tetrahedron. In the vast majority of cases, silicon is in four-fold or tetrahedral coordination with oxygen. In very high-pressure situations, silicon will be in six-fold or octahedral coordination, such as in the perovskite structure or the quartz polymorph stishovite (SiO2). In the latter case, the mineral no longer has a silicate structure, but that of rutile (TiO2) and its associated group, which are simple oxides. These silica tetrahedra are polymerized to some degree to create various structures, such as one-dimensional chains, two-dimensional sheets, and three-dimensional frameworks. The basic silicate mineral, where no polymerization of the tetrahedra has occurred, requires other elements to balance out the base 4− charge. In other silicate structures, different combinations of elements are required to balance out the resultant negative charge. It is common for Si4+ to be substituted by Al3+ because of similarity in ionic radius and charge; in those cases, the [AlO4]5− tetrahedra form the same structures as do the unsubstituted tetrahedra, but their charge-balancing requirements are different.

The degree of polymerization can be described by both the structure formed and how many tetrahedral corners (or coordinating oxygens) are shared (by aluminium and silicon in tetrahedral sites). Orthosilicates (or nesosilicates) have no linking of polyhedra, so the tetrahedra share no corners. Disilicates (or sorosilicates) have two tetrahedra sharing one oxygen atom. Inosilicates are chain silicates; single-chain silicates have two shared corners, whereas double-chain silicates have two or three shared corners. In phyllosilicates, a sheet structure is formed, which requires three shared oxygens; in the case of double-chain silicates, some tetrahedra must share two corners instead of three, as otherwise a sheet structure would result. Framework silicates, or tectosilicates, have tetrahedra that share all four corners. The ring silicates, or cyclosilicates, only need the tetrahedra to share two corners to form the cyclical structure. The silicate subclasses are described below in order of decreasing polymerization.

Tectosilicates, also known as framework silicates, have the highest degree of polymerization. With all corners of a tetrahedron shared, the silicon:oxygen ratio becomes 1:2. Examples are quartz, the feldspars, the feldspathoids, and the zeolites.
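All of the ratios quoted in this section follow from one piece of bookkeeping: a tetrahedron has four oxygens, and each shared corner contributes only half an oxygen to a given silicon. The sketch below makes that arithmetic explicit for the ideal cases.

```python
# Sketch: oxygens per silicon = 4 - (shared corners) / 2 for ideal,
# fully ordered silicate structures.
from fractions import Fraction

def oxygens_per_silicon(shared_corners: int) -> Fraction:
    return Fraction(4) - Fraction(shared_corners, 2)

for name, shared in [("nesosilicate", 0), ("sorosilicate", 1),
                     ("single-chain inosilicate", 2),
                     ("phyllosilicate", 3), ("tectosilicate", 4)]:
    print(f"{name}: Si:O = 1:{oxygens_per_silicon(shared)}")
# -> 1:4, 1:7/2 (i.e. 2:7), 1:3, 1:5/2 (i.e. 2:5), and 1:2
```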
Framework silicates tend to be particularly chemically stable as a result of strong covalent bonds. Forming 12% of the Earth's crust, quartz (SiO2) is the most abundant mineral species. It is characterized by its high chemical and physical resistance. Quartz has several polymorphs, including tridymite and cristobalite at high temperatures, high-pressure coesite, and ultra-high-pressure stishovite. The latter mineral can only be formed on Earth by meteorite impacts; its structure is compressed so much that it changes from a silicate structure to that of rutile (TiO2). The silica polymorph that is most stable at the Earth's surface is α-quartz. Its counterpart, β-quartz, is present only at high temperatures and pressures (it changes to α-quartz below 573 °C at 1 bar). These two polymorphs differ by a "kinking" of bonds; this change in structure gives β-quartz greater symmetry than α-quartz, and they are thus also called high quartz (β) and low quartz (α).

Feldspars are the most abundant group in the Earth's crust, at about 50%. In the feldspars, Al3+ substitutes for Si4+, which creates a charge imbalance that must be accounted for by the addition of cations. The base structure becomes either [AlSi3O8]− or [Al2Si2O8]2−. There are 22 mineral species of feldspars, subdivided into two major subgroups – alkali and plagioclase – and two less common groups – celsian and banalsite. The alkali feldspars are most commonly in a series between potassium-rich orthoclase and sodium-rich albite; in the case of plagioclase, the most common series ranges from albite to calcium-rich anorthite. Crystal twinning is common in feldspars, especially polysynthetic twins in plagioclase and Carlsbad twins in alkali feldspars. If an alkali feldspar cools slowly from a melt, it forms exsolution lamellae because the two components – orthoclase and albite – are unstable in solid solution. Exsolution can be on a scale from microscopic to readily observable in hand sample; perthitic texture forms when Na-rich feldspar exsolves in a K-rich host. The opposite texture (antiperthitic), where K-rich feldspar exsolves in a Na-rich host, is very rare.

Feldspathoids are structurally similar to the feldspars, but differ in that they form in Si-deficient conditions, which allows for further substitution by Al3+. As a result, feldspathoids cannot be associated with quartz. A common example of a feldspathoid is nepheline ((Na, K)AlSiO4); compared to alkali feldspar, nepheline has an Al2O3:SiO2 ratio of 1:2, as opposed to 1:6 in the feldspar. Zeolites often have distinctive crystal habits, occurring in needles, plates, or blocky masses. They form in the presence of water at low temperatures and pressures, and have channels and voids in their structure. Zeolites have several industrial applications, especially in waste water treatment.
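The nepheline-versus-feldspar comparison above can be reproduced from cation counts alone, since every two Al supply one Al2O3 and every Si one SiO2; the sketch below assumes idealized, fully stoichiometric formulas.

```python
# Sketch: molar Al2O3:SiO2 ratio from the Al and Si counts of a formula unit.
from fractions import Fraction

def alumina_to_silica(n_al: int, n_si: int) -> Fraction:
    return Fraction(n_al, 2) / Fraction(n_si)

print(alumina_to_silica(1, 1))  # nepheline, NaAlSiO4  -> 1/2, i.e. 1:2
print(alumina_to_silica(1, 3))  # orthoclase, KAlSi3O8 -> 1/6, i.e. 1:6
```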
Phyllosilicates consist of sheets of polymerized tetrahedra. They are bound at three oxygen sites, which gives a characteristic silicon:oxygen ratio of 2:5. Important examples include the mica, chlorite, and kaolinite-serpentine groups. The sheets are weakly bound by van der Waals forces or hydrogen bonds, which causes a crystallographic weakness, in turn leading to a prominent basal cleavage among the phyllosilicates. In addition to the tetrahedra, phyllosilicates have a sheet of octahedra (elements in six-fold coordination by oxygen) that balances out the basic tetrahedra, which have a negative charge (e.g. [Si4O10]4−). These tetrahedral (T) and octahedral (O) sheets are stacked in a variety of combinations to create the phyllosilicate groups. Within an octahedral sheet, there are three octahedral sites in a unit structure; however, not all of the sites may be occupied. In that case, the mineral is termed dioctahedral, whereas in the other case it is termed trioctahedral. The kaolinite-serpentine group consists of T-O stacks (the 1:1 clay minerals); their hardness ranges from 2 to 4, as the sheets are held by hydrogen bonds. The 2:1 clay minerals (pyrophyllite-talc) consist of T-O-T stacks, but they are softer (hardness from 1 to 2), as they are instead held together by van der Waals forces. These two groups of minerals are subgrouped by octahedral occupation; specifically, kaolinite and pyrophyllite are dioctahedral, whereas serpentine and talc are trioctahedral.

Micas are also T-O-T-stacked phyllosilicates, but differ from the other T-O-T and T-O-stacked subclass members in that they incorporate aluminium into the tetrahedral sheets (clay minerals have Al3+ in octahedral sites). Common examples of micas are muscovite and the biotite series. The chlorite group is related to the mica group, but has a brucite-like (Mg(OH)2) layer between the T-O-T stacks. Because of their chemical structure, phyllosilicates typically have flexible, elastic, transparent layers that are electrical insulators and can be split into very thin flakes. Micas can be used in electronics as insulators, in construction, as an optical filler, or even in cosmetics. Chrysotile, a species of serpentine, is the most common mineral species in industrial asbestos, as it is less dangerous in terms of health than the amphibole asbestos.

Inosilicates consist of tetrahedra repeatedly bonded in chains. These chains can be single, where a tetrahedron is bound to two others to form a continuous chain; alternatively, two chains can be merged to create double-chain silicates. Single-chain silicates have a silicon:oxygen ratio of 1:3 (e.g. [Si2O6]4−), whereas the double-chain variety has a ratio of 4:11 (e.g. [Si8O22]12−). Inosilicates contain two important rock-forming mineral groups: single-chain silicates are most commonly pyroxenes, while double-chain silicates are often amphiboles. Higher-order chains exist (e.g. three-member, four-member, and five-member chains), but they are rare.

The pyroxene group consists of 21 mineral species. Pyroxenes have a general structure formula of XY(Si2O6), where X is an octahedral site, while Y can vary in coordination number from six to eight. Most varieties of pyroxene consist of permutations of Ca2+, Fe2+ and Mg2+ to balance the negative charge on the backbone. Pyroxenes are common in the Earth's crust (about 10%) and are a key constituent of mafic igneous rocks.

Amphiboles have great variability in chemistry, described variously as a "mineralogical garbage can" or a "mineralogical shark swimming in a sea of elements". The backbone of the amphiboles is the [Si8O22]12− double chain; it is balanced by cations in three possible positions, although the third position is not always used, and one element can occupy both remaining ones. Finally, the amphiboles are usually hydrated; that is, they have a hydroxyl group ([OH]−), although it can be replaced by a fluoride, a chloride, or an oxide ion. Because of the variable chemistry, there are over 80 species of amphibole, although variations, as in the pyroxenes, most commonly involve mixtures of Ca2+, Fe2+ and Mg2+. Several amphibole mineral species can have an asbestiform crystal habit.
These asbestos minerals form long, thin, flexible, and strong fibres, which are electrical insulators, chemically inert, and heat-resistant; as such, they have several applications, especially in construction materials. However, asbestos minerals are known carcinogens, and cause various other illnesses, such as asbestosis; amphibole asbestos (anthophyllite, tremolite, actinolite, grunerite, and riebeckite) is considered more dangerous than chrysotile serpentine asbestos.

Cyclosilicates, or ring silicates, have a ratio of silicon to oxygen of 1:3. Six-member rings are most common, with a base structure of [Si6O18]12−; examples include the tourmaline group and beryl. Other ring structures exist, with 3-, 4-, 8-, 9-, and 12-member rings having been described. Cyclosilicates tend to be strong, with elongated, striated crystals. Tourmalines have a very complex chemistry that can be described by a general formula XY3Z6(BO3)3T6O18V3W. The T6O18 is the basic ring structure, where T is usually Si4+, but substitutable by Al3+ or B3+. Tourmalines can be subgrouped by the occupancy of the X site, and from there further subdivided by the chemistry of the W site. The Y and Z sites can accommodate a variety of cations, especially various transition metals; this variability in structural transition metal content gives the tourmaline group great variability in colour. Other cyclosilicates include beryl, Al2Be3Si6O18, whose varieties include the gemstones emerald (green) and aquamarine (bluish). Cordierite is structurally similar to beryl, and is a common metamorphic mineral.

Sorosilicates, also termed disilicates, have tetrahedron-tetrahedron bonding at one oxygen, which results in a 2:7 ratio of silicon to oxygen. The resultant common structural element is the [Si2O7]6− group. The most common disilicates by far are members of the epidote group. Epidotes are found in a variety of geologic settings, ranging from mid-ocean ridges to granites to metapelites. Epidotes are built around the [(SiO4)(Si2O7)]10− structure; for example, the mineral species epidote has calcium, aluminium, and ferric iron to balance the charge: Ca2Al2(Fe3+, Al)(SiO4)(Si2O7)O(OH). The presence of iron as Fe3+ and Fe2+ helps in understanding oxygen fugacity, which in turn is a significant factor in petrogenesis. Other examples of sorosilicates include lawsonite, a metamorphic mineral forming in the blueschist facies (a subduction zone setting with low temperature and high pressure), and vesuvianite, which takes up a significant amount of calcium in its chemical structure.

Orthosilicates consist of isolated tetrahedra that are charge-balanced by other cations. Also termed nesosilicates, this type of silicate has a silicon:oxygen ratio of 1:4 (e.g. SiO4). Typical orthosilicates tend to form blocky, equant crystals, and are fairly hard. Several rock-forming minerals are part of this subclass, such as the aluminosilicates, the olivine group, and the garnet group. The aluminosilicates – kyanite, andalusite, and sillimanite, all Al2SiO5 – are structurally composed of one [SiO4]4− tetrahedron and one Al3+ in octahedral coordination. The remaining Al3+ can be in six-fold coordination (kyanite), five-fold (andalusite) or four-fold (sillimanite); which mineral forms in a given environment depends on pressure and temperature conditions.
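A quick check on statements like "charge-balanced by other cations" is to total the formal charges in a formula unit, as in the sketch below; kyanite is taken from the passage above, and forsterite anticipates the olivine series discussed next.

```python
# Sketch: verify that a formula unit is electrically neutral.
# Compositions are given as (formal charge, count) pairs.

def net_charge(ions) -> int:
    """Sum of charge * count over all ions in one formula unit."""
    return sum(charge * count for charge, count in ions)

kyanite = [(+3, 2), (+4, 1), (-2, 5)]      # Al2SiO5: 2 Al3+, 1 Si4+, 5 O2-
forsterite = [(+2, 2), (+4, 1), (-2, 4)]   # Mg2SiO4: 2 Mg2+, 1 Si4+, 4 O2-
assert net_charge(kyanite) == 0
assert net_charge(forsterite) == 0
print("both formula units are charge-balanced")
```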
In the olivine structure, the main olivine series of (Mg, Fe)2SiO4 consists of magnesium-rich forsterite and iron-rich fayalite; both iron and magnesium are in octahedral coordination with oxygen. Other mineral species having this structure exist, such as tephroite, Mn2SiO4. The garnet group has a general formula of X3Y2(SiO4)3, where X is a large eight-fold coordinated cation, and Y is a smaller six-fold coordinated cation. There are six ideal endmembers of garnet, split into two groups. The pyralspite garnets have Al3+ in the Y position: pyrope (Mg3Al2(SiO4)3), almandine (Fe3Al2(SiO4)3), and spessartine (Mn3Al2(SiO4)3). The ugrandite garnets have Ca2+ in the X position: uvarovite (Ca3Cr2(SiO4)3), grossular (Ca3Al2(SiO4)3) and andradite (Ca3Fe2(SiO4)3). While there are two subgroups of garnet, solid solutions exist between all six endmembers. Other orthosilicates include zircon, staurolite, and topaz. Zircon (ZrSiO4) is useful in geochronology, as Zr4+ can be substituted by uranium; furthermore, because of its very resistant structure, it is difficult to reset as a chronometer. Staurolite is a common intermediate-grade metamorphic index mineral. It has a particularly complicated crystal structure that was only fully described in 1986. Topaz (Al2SiO4(F, OH)2), often found in granitic pegmatites associated with tourmaline, is a common gemstone mineral.

Native elements are those that are not chemically bonded to other elements. This mineral group includes native metals, semi-metals, and non-metals, and various alloys and solid solutions. The metals are held together by metallic bonding, which confers distinctive physical properties such as their shiny metallic lustre, ductility and malleability, and electrical conductivity. Native elements are subdivided into groups by their structure or chemical attributes. The gold group, with a cubic close-packed structure, includes metals such as gold, silver, and copper. The platinum group is similar in structure to the gold group. The iron-nickel group is characterized by several iron-nickel alloy species. Two examples are kamacite and taenite, which are found in iron meteorites; these species differ in the amount of Ni in the alloy: kamacite has less than 5–7% nickel and is a variety of native iron, whereas the nickel content of taenite ranges from 7–37%. Arsenic-group minerals consist of semi-metals, which have only some metallic traits; for example, they lack the malleability of metals. Native carbon occurs in two allotropes, graphite and diamond; the latter forms at very high pressure in the mantle, which gives it a much stronger structure than graphite.

The sulfide minerals are chemical compounds of one or more metals or semimetals with sulfur; tellurium, arsenic, or selenium can substitute for the sulfur. Sulfides tend to be soft, brittle minerals with a high specific gravity. Many sulfides, such as pyrite, have a sulfurous smell when powdered. Sulfides are susceptible to weathering, and many readily dissolve in water; these dissolved minerals can be later redeposited, which creates enriched secondary ore deposits. Sulfides are classified by the ratio of the metal or semimetal to the sulfur, such as M:S equal to 2:1 or 1:1. Many sulfide minerals are economically important as metal ores; examples include sphalerite (ZnS), an ore of zinc, galena (PbS), an ore of lead, cinnabar (HgS), an ore of mercury, and molybdenite (MoS2), an ore of molybdenum. Pyrite (FeS2) is the most commonly occurring sulfide, and can be found in most geological environments. It is not, however, an ore of iron, but can instead be oxidized to produce sulfuric acid.
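The M:S classification is mechanical once a formula has been reduced to element counts; the sketch below takes the counts as given to stay short, and borrows chalcocite (Cu2S), which is not mentioned above, as the 2:1 case.

```python
# Sketch: metal (or semimetal) to sulfur ratio from a parsed composition.
from fractions import Fraction

def metal_sulfur_ratio(composition: dict) -> Fraction:
    """Ratio of non-sulfur atoms to sulfur atoms in one formula unit."""
    sulfur = composition["S"]
    metal = sum(n for element, n in composition.items() if element != "S")
    return Fraction(metal, sulfur)

print(metal_sulfur_ratio({"Pb": 1, "S": 1}))  # galena, PbS      -> 1   (1:1)
print(metal_sulfur_ratio({"Fe": 1, "S": 2}))  # pyrite, FeS2     -> 1/2 (1:2)
print(metal_sulfur_ratio({"Cu": 2, "S": 1}))  # chalcocite, Cu2S -> 2   (2:1)
```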
Related to the sulfides are the rare sulfosalts, in which a metallic element is bonded to sulfur and a semimetal such as antimony, arsenic, or bismuth. Like the sulfides, sulfosalts are typically soft, heavy, and brittle minerals.

Oxide minerals are divided into three categories: simple oxides, hydroxides, and multiple oxides. Simple oxides are characterized by O2− as the main anion and primarily ionic bonding. They can be further subdivided by the ratio of oxygen to the cations. The periclase group consists of minerals with a 1:1 ratio. Oxides with a 2:1 ratio include cuprite (Cu2O) and water ice. Corundum-group minerals have a 2:3 ratio, and include minerals such as corundum (Al2O3) and hematite (Fe2O3). Rutile-group minerals have a ratio of 1:2; the eponymous species, rutile (TiO2), is the chief ore of titanium; other examples include cassiterite (SnO2; an ore of tin) and pyrolusite (MnO2; an ore of manganese). In hydroxides, the dominant anion is the hydroxyl ion, OH−. Bauxites are the chief aluminium ore, and are a heterogeneous mixture of the hydroxide minerals diaspore, gibbsite, and böhmite; they form in areas with a very high rate of chemical weathering (mainly tropical conditions). Finally, multiple oxides are compounds of two metals with oxygen. A major group within this class are the spinels, with a general formula of X2+Y3+2O4. Examples of species include spinel (MgAl2O4), chromite (FeCr2O4), and magnetite (Fe3O4). The latter is readily distinguishable by its strong magnetism, which occurs because it has iron in two oxidation states (Fe2+Fe3+2O4); this makes it a multiple oxide rather than a simple oxide.

The halide minerals are compounds in which a halogen (fluorine, chlorine, iodine, or bromine) is the main anion. These minerals tend to be soft, weak, brittle, and water-soluble. Common examples of halides include halite (NaCl, table salt), sylvite (KCl), and fluorite (CaF2). Halite and sylvite commonly form as evaporites, and can be dominant minerals in chemical sedimentary rocks. Cryolite, Na3AlF6, is a key mineral in the extraction of aluminium from bauxites; however, as its only significant occurrence, in a granitic pegmatite at Ivittuut, Greenland, has been depleted, synthetic cryolite is made from fluorite.

The carbonate minerals are those in which the main anionic group is carbonate, [CO3]2−. Carbonates tend to be brittle, many have rhombohedral cleavage, and all react with acid. Due to the last characteristic, field geologists often carry dilute hydrochloric acid to distinguish carbonates from non-carbonates. The reaction of acid with carbonates, most commonly found as the polymorphs calcite and aragonite (CaCO3), relates to the dissolution and precipitation of the mineral, which is key to the formation of limestone caves, features within them such as stalactites and stalagmites, and karst landforms. Carbonates are most often formed as biogenic or chemical sediments in marine environments. The carbonate group is structurally a triangle, where a central C4+ cation is surrounded by three O2− anions; different groups of minerals form from different arrangements of these triangles. The most common carbonate mineral is calcite, which is the primary constituent of sedimentary limestone and metamorphic marble. Calcite, CaCO3, can have a high magnesium impurity. Under high-Mg conditions, its polymorph aragonite will form instead; the marine geochemistry in this regard can be described as an aragonite or calcite sea, depending on which mineral preferentially forms.
Dolomite is a double carbonate, with the formula CaMg(CO3)2. Secondary dolomitization of limestone is common, in which calcite or aragonite is converted to dolomite; this reaction increases pore space (the unit cell volume of dolomite is 88% that of calcite), which can create a reservoir for oil and gas. These two mineral species are members of eponymous mineral groups: the calcite group includes carbonates with the general formula XCO3, and the dolomite group constitutes minerals with the general formula XY(CO3)2.

The sulfate minerals all contain the sulfate anion, [SO4]2−. They tend to be transparent to translucent and soft, and many are fragile. Sulfate minerals commonly form as evaporites, where they precipitate out of evaporating saline waters. Sulfates can also be found in hydrothermal vein systems associated with sulfides, or as oxidation products of sulfides. Sulfates can be subdivided into anhydrous and hydrous minerals. The most common hydrous sulfate by far is gypsum, CaSO4⋅2H2O. It forms as an evaporite, and is associated with other evaporites such as calcite and halite; if it incorporates sand grains as it crystallizes, gypsum can form desert roses. Gypsum has very low thermal conductivity and maintains a low temperature when heated, as it loses that heat by dehydrating; as such, gypsum is used as an insulator in materials such as plaster and drywall. The anhydrous equivalent of gypsum is anhydrite, which can form directly from seawater in highly arid conditions. The barite group has the general formula XSO4, where the X is a large 12-coordinated cation. Examples include barite (BaSO4), celestine (SrSO4), and anglesite (PbSO4); anhydrite is not part of the barite group, as the smaller Ca2+ is only in eight-fold coordination.

The phosphate minerals are characterized by the tetrahedral [PO4]3− unit, although the structure can be generalized, with phosphorus replaced by antimony, arsenic, or vanadium. The most common phosphates are the apatite group; common species within this group are fluorapatite (Ca5(PO4)3F), chlorapatite (Ca5(PO4)3Cl) and hydroxylapatite (Ca5(PO4)3(OH)). Minerals in this group are the main crystalline constituents of teeth and bones in vertebrates. The relatively abundant monazite group has a general structure of ATO4, where T is phosphorus or arsenic, and A is often a rare-earth element (REE). Monazite is important in two ways: first, as a REE "sink", it can sufficiently concentrate these elements to become an ore; secondly, monazite-group minerals can incorporate relatively large amounts of uranium and thorium, which can be used in monazite geochronology to date the rock based on the decay of the U and Th to lead.

The Strunz classification includes a class for organic minerals. These rare compounds contain organic carbon, but can be formed by a geologic process. For example, whewellite, CaC2O4⋅H2O, is an oxalate that can be deposited in hydrothermal ore veins. While hydrated calcium oxalate can be found in coal seams and other sedimentary deposits involving organic matter, the hydrothermal occurrence is not considered to be related to biological activity. It has been suggested that biominerals could be important indicators of extraterrestrial life and thus could play an important role in the search for past or present life on the planet Mars. Furthermore, organic components (biosignatures) that are often associated with biominerals are believed to play crucial roles in both pre-biotic and biotic reactions.
On January 24, 2014, NASA reported that studies by the Curiosity and Opportunity rovers on Mars would now search for evidence of ancient life, including a biosphere based on autotrophic, chemotrophic and/or chemolithoautotrophic microorganisms, as well as ancient water, including fluvio-lacustrine environments (plains related to ancient rivers or lakes) that may have been habitable. The search for evidence of habitability, taphonomy (related to fossils), and organic carbon on the planet Mars is now a primary NASA objective.
"Limits of life and the biosphere: Lessons from the detection of microorganisms in the deep sea and deep subsurface of the Earth.". In Gargaud, M.; Lopez-Garcia, P.; Martin, H. (eds.). Origins and Evolution of Life: An Astrobiological Perspective. Cambridge: Cambridge University Press. pp. 469–86. ISBN 978-1-139-49459-5. - Roussel, E.G.; Cambon Bonavita, M.; Querellou, J.; Cragg, B.A.; Prieur, D.; Parkes, R.J.; Parkes, R.J. (2008). "Extending the Sub-Sea-Floor Biosphere". Science. 320 (5879): 1046. Bibcode:2008Sci...320.1046R. doi:10.1126/science.1154545. PMID 18497290. - Pearce, D.A.; Bridge, P.D.; Hughes, K.A.; Sattler, B.; Psenner, R.; Russel, N.J. (2009). "Microorganisms in the atmosphere over Antarctica". FEMS Microbiology Ecology. 69 (2): 143–57. doi:10.1111/j.1574-6941.2009.00706.x. PMID 19527292. - Newman, D.K.; Banfield, J.F. (2002). "Geomicrobiology: How Molecular-Scale Interactions Underpin Biogeochemical Systems". Science. 296 (5570): 1071–77. Bibcode:2002Sci...296.1071N. doi:10.1126/science.1010716. PMID 12004119. - Warren, L.A.; Kauffman, M.E. (2003). "Microbial geoengineers". Science. 299 (5609): 1027–29. doi:10.1126/science.1072076. JSTOR 3833546. PMID 12586932. - González-Muñoz, M.T.; Rodriguez-Navarro, C.; Martínez-Ruiz, F.; Arias, J.M.; Merroun, M.L.; Rodriguez-Gallego, M. (2010). "Bacterial biomineralization: new insights from Myxococcus-induced mineral precipitation". Geological Society, London, Special Publications. 336 (1): 31–50. Bibcode:2010GSLSP.336...31G. doi:10.1144/SP336.3. - Veis, A. (1990). "Biomineralization. Cell Biology and Mineral Deposition. by Kenneth Simkiss; Karl M. Wilbur On Biomineralization. by Heinz A. Lowenstam; Stephen Weiner". Science. 247 (4946): 1129–30. Bibcode:1990Sci...247.1129S. doi:10.1126/science.247.4946.1129. JSTOR 2874281. PMID 17800080. - Official IMA list of mineral names (updated from March 2009 list) Archived 2011-07-06 at the Wayback Machine. uws.edu.au - Bouligand, Y. (2006). "Liquid crystals and morphogenesis.". In Bourgine, P.; Lesne, A. (eds.). Morphogenesis: Origins of Patterns and Shape. Cambridge: Springer Verlag. pp. 49 ff. ISBN 978-3-642-13174-5. - Gabriel, C.P.; Davidson, P. (2003). "Mineral Liquid Crystals from Self-Assembly of Anisotropic Nanosystems". Topics in Current Chemistry. 226: 119–72. doi:10.1007/b10827. - K., Hefferan; J., O'Brien (2010). Earth Materials. Wiley-Blackwell. ISBN 978-1-4443-3460-9. - Bindi, L.; Paul J. Steinhardt; Nan Yao; Peter J. Lu (2011). "Icosahedrite, Al63Cu24Fe13, the first natural quasicrystal". American Mineralogist. 96 (5–6): 928–31. Bibcode:2011AmMin..96..928B. doi:10.2138/am.2011.3758. - Commission on New Minerals and Mineral Names, Approved as new mineral Archived 2012-03-20 at the Wayback Machine - Chesterman & Lowe 2008, pp. 15–16 - Chesterman & Lowe 2008, pp. 719–21 - Chesterman & Lowe 2008, pp. 747–48 - Chesterman & Lowe 2008, pp. 694–96 - Chesterman & Lowe 2008, pp. 728–30 - Dyar & Gunter 2008, p. 15 - Chesterman & Lowe 2008, p. 14 - Chesterman and Cole, pp. 531–32 - Chesterman & Lowe 2008, pp. 14–15 - Dyar & Gunter 2008, pp. 20–22 - Dyar & Gunter 2008, pp 558–59 - Dyar & Gunter 2008, p. 556 - Harper, Douglas. "Online Etymology Dictionary". etymonline. Retrieved 28 March 2018. - Wilk, H (1986). "Systematic Classification of Minerals" (Hardcover). In Wilk, H (ed.). The Magic of Minerals. Berlin: Springer. p. 154. doi:10.1007/978-3-642-61304-3_7. ISBN 978-3-642-64783-3. - Dyar & Gunter 2008, pp. 4–7 - Dyar & Gunter 2008, p. 586 - Dyar & Gunter 2008, p. 
141 - Dyar & Gunter 2008, p. 14 - Dyar & Gunter 2008, p. 585 - Dyar & Gunter 2008, pp. 12–17 - Dyar & Gunter 2008, p. 549 - Dyar & Gunter 2008, p. 579 - Dyar & Gunter 2008, pp. 22–23 - Dyar & Gunter 2008, pp. 69–80 - Dyar & Gunter 2008, pp. 654–55 - Dyar & Gunter 2008, p. 581 - Dyar & Gunter 2008, pp. 631–32 - Dyar & Gunter 2008, p. 166 - Dyar & Gunter 2008, pp. 41–43 - Chesterman & Lowe 2008, p. 39 - Dyar & Gunter 2008, pp. 32–39 - Chesterman & Lowe 2008, p. 38 - Dyar & Gunter 2008, pp. 28–29 - "Kyanite". Mindat.org. Retrieved 3 April 2018. - Dyar and Darby, pp. 26–28 - Busbey et al. 2007, p. 72 - Dyar & Gunter 2008, p. 25 - Dyar & Gunter 2008, p. 23 - Dyar & Gunter 2008, pp. 131–44 - Dyar & Gunter 2008, p. 24 - Dyar & Gunter 2008, pp. 24–26 - Busbey et al. 2007, p. 73 - Dyar & Gunter 2008, pp. 39–40 - Chesterman & Lowe 2008, pp. 29–30 - Chesterman & Lowe 2008, pp. 30–31 - Dyar & Gunter 2008, pp. 31–33 - Dyar & Gunter 2008, pp. 30–31 - Dyar & Gunter 2008, pp. 43–44 - "Hematite". Mindat.org. Retrieved 3 April 2018. - "Galena". Mindat.org. Retrieved 3 April 2018. - "Kamacite". Webmineral.com. Retrieved 3 April 2018. - "Gold". Mindat.org. Retrieved 3 April 2018. - Dyar & Gunter 2008, pp. 44–45 - "Mineral Identification Key: Radioactivity, Magnetism, Acid Reactions". Mineralogical Society of America. Archived from the original on 2012-09-22. Retrieved 2012-08-15. - Helman, Daniel S. (2016). "Symmetry-based electricity in minerals and rocks: A summary of extant data, with examples of centrosymmetric minerals that exhibit pyro- and piezoelectricity". Periodico di Mineralogia. 85 (3). doi:10.2451/2016PM590. - Dyar & Gunter 2008, p. 641 - Dyar & Gunter 2008, p. 681 - Dyar & Gunter 2008, pp. 641–43 - Dyar & Gunter 2008, p. 104 - Dyar & Gunter 2008, p. 5 - Dyar & Gunter 2008, pp. 104–20 - Dyar & Gunter 2008, p. 105 - Dyar & Gunter 2008, pp. 104–17 - Chesterman and Cole, p. 502 - Dyar & Gunter 2008, pp. 578–83 - Dyar & Gunter 2008, pp. 583–88 - Dyar & Gunter 2008, p. 588 - Dyar & Gunter 2008, pp. 589–93 - Chesterman & Lowe 2008, p. 525 - Dyar & Gunter 2008, p. 110 - Dyar & Gunter 2008, pp. 110–13 - Dyar & Gunter 2008, pp. 602–05 - Dyar & Gunter 2008, pp. 593–95 - Chesterman & Lowe 2008, p. 537 - "09.D Inosilicates". Webmineral.com. Retrieved 2012-08-20. - Dyar & Gunter 2008, p. 112 - Dyar & Gunter 2008 pp. 612–13 - Dyar & Gunter 2008, pp. 606–12 - Dyar & Gunter 2008, pp. 611–12 - Dyar & Gunter 2008, pp. 113–15 - Chesterman & Lowe 2008, p. 558 - Dyar & Gunter 2008, pp. 617–21 - Dyar & Gunter 2008, pp. 612–27 - Chesterman & Lowe 2008, pp. 565–73 - Dyar & Gunter 2008, pp. 116–17 - Chesterman & Lowe 2008, p. 573 - Chesterman & Lowe 2008, pp. 574–75 - Dyar & Gunter 2008, pp. 627–34 - Dyar & Gunter 2008, pp. 644–48 - Chesterman & Lowe 2008, p. 357 - Dyar & Gunter 2008, p. 649 - Dyar & Gunter 2008, pp. 651–54 - Dyar & Gunter 2008, p. 654 - Chesterman & Lowe 2008, p. 383 - Chesterman & Lowe 2008, pp. 400–03 - Dyar & Gunter 2008, pp. 657–60 - Dyar & Gunter 2008, pp. 663–64 - Dyar & Gunter 2008, pp. 660–63 - Chesterman & Lowe 2008, pp. 425–30 - Chesterman & Lowe 2008, p. 431 - Dyar & Gunter 2008, p. 667 - Dyar & Gunter 2008, pp. 668–69 - Chesterman & Lowe 2008, p. 453 - Chesterman & Lowe 2008, pp. 456–57 - Dyar & Gunter 2008, p. 674 - Dyar & Gunter 2008, pp. 672–73 - Dyar & Gunter 2008, pp. 675–80 - Steele, Andrew; Beaty, David, eds. (September 26, 2006). "Final report of the MEPAG Astrobiology Field Laboratory Science Steering Group (AFL-SSG)". The Astrobiology Field Laboratory (.doc) |url=(help). 
Mars Exploration Program Analysis Group (MEPAG) – NASA. p. 72. Retrieved 2009-07-22. - Grotzinger, John P. (January 24, 2014). "Introduction to Special Issue – Habitability, Taphonomy, and the Search for Organic Carbon on Mars". Science. 343 (6169): 386–87. Bibcode:2014Sci...343..386G. doi:10.1126/science.1249944. PMID 24458635. - Various (January 24, 2014). "Exploring Martian Habitability". Science. 343 (6169): 345–452.CS1 maint: Uses authors parameter (link) - Various (January 24, 2014). "Special Collection – Curiosity – Exploring Martian Habitability". Science. Retrieved January 24, 2014.CS1 maint: Uses authors parameter (link) - Grotzinger, J.P.; et al. (January 24, 2014). "A Habitable Fluvio-Lacustrine Environment at Yellowknife Bay, Gale Crater, Mars". Science. 343 (6169): 1242777. Bibcode:2014Sci...343A.386G. CiteSeerX 10.1.1.455.3973. doi:10.1126/science.1242777. PMID 24324272. - Busbey, A.B.; Coenraads, R.E.; Roots, D.; Willis, P. (2007). Rocks and Fossils. San Francisco: Fog City Press. ISBN 978-1-74089-632-0. - Chesterman, C.W.; Lowe, K.E. (2008). Field guide to North American rocks and minerals. Toronto: Random House of Canada. ISBN 978-0394502694. - Dyar, M.D.; Gunter, M.E. (2008). Mineralogy and Optical Mineralogy. Chantilly, VA: Mineralogical Society of America. ISBN 978-0939950812. - Hazen, R.M.; Grew, Edward S.; Origlieri, Marcus J.; Downs, Robert T. (March 2017). "On the Mineralogy of the 'Anthropocene Epoch'" (PDF). American Mineralogist. 102 (3): 595. Bibcode:2017AmMin.102..595H. doi:10.2138/am-2017-5875. Retrieved August 14, 2017. On the creation of new minerals by human activity. |Wikimedia Commons has media related to Minerals.| |The Wikibook Historical Geology has a page on the topic of: Minerals| |The Wikibook High School Earth Science has a page on the topic of: Earth's Minerals| - Mindat mineralogical database, largest mineral database on the Internet - "Mineralogy Database" by David Barthelmy (2009) - "Mineral Identification Key II" Mineralogical Society of America - "American Mineralogist Crystal Structure Database" - Minerals and the Origins of Life (Robert Hazen, NASA) (video, 60m, April 2014). - The private lives of minerals: Insights from big-data mineralogy (Robert Hazen, 15 February 2017)
Coral reefs concentrate between one quarter and one third of total marine biodiversity, according to different estimates, though they cover only about 0.1% of the global oceanic surface and are confined to tropical and subtropical latitudes. The extraordinary diversity of their invertebrate and fish species is often compared to that of the arthropods and vertebrates of tropical primary rainforests. Half a billion humans depend partly or totally on the goods and services provided by coral reef ecosystems. However, coral reefs are now recognized as being among the most fragile of all environments in the face of localized anthropic pressures (overfishing and various atmospheric and water pollutions) and of their climatic consequences of planetary dimensions. Within the last 30 years, about 20% of all coral reefs have been totally destroyed, another estimated 60% are damaged to some extent (a few beyond recovery), and only 20% can still be regarded as unharmed.

With a human population growing at faster rates in developing nations, biodiversity concerns conflict with pressures to exploit local mineral resources and to develop agricultural and seafood production. While residents often deplore the gradual changes in the natural habitats in which they were brought up, socio-economic pressures, originally due to post-colonial emancipation and now to global trade, tend to resist any policy that may be considered restrictive or regressive. Almost every aspect of this economic growth relies on fossil fuel energy consumption and, to a lesser extent, on vegetal biomass burning, which links development with climatic changes and with the generation of polluting wastes. Educational programs and non-government associations propose alternatives to established practices in habitat and resource management and in waste disposal by end users. Research scientists continue to explore natural biodiversity in remote pristine environments.

In a previous chapter of this series on biodiversity, the various ecological consequences of climatic changes, of chemical and microbial pollution and of the overexploitation of natural resources have been reviewed for coral reef ecosystems. Suggestions have been made, on the basis of recent publications by experts on various subjects, as to how modern techniques and innovative approaches could be used appropriately to complement the above initiatives. In this chapter, a holistic concept is proposed that (i) integrates cutting-edge molecular research and standard technologies with field sampling and laboratory simulations of natural habitats, and (ii) uses holobiont-based approaches.

2. Biodiversity is our responsibility: The future of mankind depends on it

2.1. If the origins of life remain controversial, biodiversity is a miracle of sorts

Basic forms of life like bacteria are reputedly capable of withstanding extreme conditions, to the point that scientists of repute such as Sir Francis Crick, Carl Sagan or Stephen Hawking have speculated on an alien origin for terrestrial life, a speculation now held as a tenet of the modern panspermia theory by some exobiologists. Dormant bacterial spores or the like would have been seeded on our planet, possibly from different sources and at different times. Those that encountered favorable "starting" conditions, supposedly in the chemolithotrophic environments around oceanic ridges, would have initiated the evolutionary scenario we know.
Satellite views of the ionized portion of our atmosphere show it as a barely perceptible glow that outlines the shape of our planet against the black background of outer space. Just under it, the blue expanses of oceanic waters spread as a delicate film, roughly a thousand times thinner than the supporting "blue marble" yet covering over two thirds of its surface (Figure 1). Life forms occur at the sea-air and soil-air interfaces, just where geoclimatic fluxes and exchanges are the most rapid and the most subject to biotic influences. Thus marine biodiversity thrives under specific conditions that are only found between the oxygen-rich lower atmosphere (at cloud canopy level in Figure 1) and the sunlit oceanic surfaces. One third of all known marine species are concentrated on confetti-like specks occupying one thousandth of the oceanic surface, thanks also to the favorable conditions afforded by land influence, i.e. marine volcanoes and continental shelves at tropical and subtropical latitudes. This circumstantial miracle is called coral reefs.

2.2. Extinction events are natural over long periods, after which biodiversity has to reinvent itself

Any lasting change in the biogeochemistry of any of the three components (atmosphere, seawater and land) will disrupt the interfacial equilibrium that supports the many thousands of life forms that interact constantly within an ecosystem. Mass species extinctions have occurred several times (6 or 7 are described) in the history of our planet since it became life-supporting, each time followed by new and better-adapted life forms and by a biodiversity climax attained after long periods of environmental stability. Changes in soil mineral strata indicate the occurrence of biodiversity-modifying events such as occupation by seas or an ice age. Discrete organic layers may indicate the presence of a tropical rainforest or of a dry savannah. Datable fossil evidence within these strata, together with paleontological reconstructions, points out the floristic and faunistic peculiarities of the times. Core drills in ice provide datable evidence of biogeoclimatic episodes within the last few millennia, while core drills in massive scleractinian corals give accurate, calendar-like records of recurrent or accidental climatic events affecting their biotope.

Speciation usually goes along with the occupation of new territories and new habitats, the first colonizers having acquired the necessary adaptations to cope with evolving external demands, the Cambrian explosion (545 million years ago) being the most dramatic example of such adaptive diversification at all scales. Along with this, evidence of accidental episodes of massive species extinction is seen in the sudden "disappearance" of terrestrial and marine life forms, attributable to tectonic and telluric events, to meteoritic impacts and to their profound and lasting climatologic and geochemical consequences. The most significant mass extinction is undoubtedly the Permian-Triassic Great Dying, in which 96% of all non-microbial marine life was lost within ten million years. The precise causes of mass extinction events may lie in continuous tectonic movements, with their telluric and volcanic outbreaks and their climatic consequences, in collisions with meteoritic bodies and, to a lesser extent, in the appearance of dominant predators, parasites or microbial diseases, or in combinations thereof.
Common to many extinction events, however, is the massive release of greenhouse and toxic gases (carbon dioxide, methane, hydrogen sulfide etc.). The water solubility of CO2 being nearly 30 times that of oxygen, water acidification occurs that preferentially impacts calcifying organisms with low metabolic rates and weak respiratory systems: most coral genera died out during the Great Dying, along with calcareous sponges, calcifying algae, echinoderms, bryozoans etc. Interestingly, profound taxonomic changes in all major phyla seem to follow extinction events, resulting in a better-adapted biodiversity. Nothing is known, however, about the consequences of such changes for microbial life, or about the putative role played by microbial associates.

2.3. Brutal human influence may lead to hazardous extinction events

Man is a very recent occurrence in the history of terrestrial life, and until the advent of the industrial age (especially since the middle of the 19th century) his planetary influence was minimal and the destructive potential of his inventions was purely local. Since then, his population has increased sevenfold while consuming or exploiting 42% of the terrestrial net primary production. The single most environment-impacting activity today is the production of energy from fossil fuel, and the single most mechanically destructive invention is that of nuclear weapons. More insidious are the thousands of new chemical species (mostly organic) that are produced and improperly disposed of once used, and the new strains of mutant microorganisms that pop up unpredictably and threaten to create worldwide pandemics affecting both wildlife and humans.

Without reviewing the whole subject of the negative traces of human doings, a reflection on the accelerating pace of carbon volatilization into the atmosphere is necessary to appreciate its effect on marine biota, and on coral reef ecosystems in particular. In one year, we extract the equivalent of one million years' worth of natural fossil fuel production. In other words, using primitive technology, e.g. internal combustion engines, we enrich our atmosphere in carbon dioxide at a pace with which terrestrial and marine autotrophs cannot keep up in their efforts to create new biomass, quench the excess of free carbon and limit seawater acidification.

3. Coral reef management: between the economy's hammer and the climate's anvil

Ecosystems at immediate risk are biodiversity hotspots (primary rainforests and land-connected reef systems), but most ecosystems with endemic species, at all latitudes, will be profoundly affected by the acclimatization of competing alien species. Being able to analyze, quantify and predict changes is the first step to avoid losing control: environmental scientists should endorse the role of a general-practice physician making an overall check-up of an ailing patient and prescribing a course of medication and exercise for recovery, or that of an investigator using state-of-the-art forensics, such as DNA amplification, to establish the causes of a crime and provide evidence to lawyers. This will become a necessity, beyond economic pressures worldwide.

3.1. Climate changes have profound biogeoclimatic consequences

Climatic changes act globally, and the effects of naturally occurring destructive episodes are now superseded by those due to ever-growing man-induced emissions of various sorts.
The latter include carbon, nitrogen or sulfur transfers that are generated essentially through the combustion of fossil organic matter or through agricultural and livestock-breeding activities, and that manifest themselves differently according to their biogeochemical state.

First, their volatilization as simple molecular species is responsible for temperature rises through the greenhouse effect, while halocarbon emissions tend to destroy the protective ozone layer of the upper atmosphere. Elevated surface water temperatures and genotoxic radiation are responsible for the bleaching effect and the cellular stresses observed in subtidal corals, and death may follow if exposure is prolonged. Also, strong seasonal evaporation of large volumes of surface water leads to atmospheric pressure instabilities, resulting in more frequent and more severe hurricanes. The mechanical damage due to wave action and to the sudden input of large volumes of freshwater and alien material (silt and debris) into coastal zones may entirely destroy some reef portions in single episodes.

Second, the restitution of most of the above molecules at the air-water interface affects the biomineralization of many marine invertebrates and their larvae, and of coralline algae. By the 22nd century, seawater acidification may be as large as 0.5 pH units below the levels recorded in the early 20th century. The extent of seawater acidification due to a shift in the bioavailability of HCO3-, though still debated in 2012, may shortly become dramatic for the survival of calcifying invertebrates (corals, mollusks, echinoderms, some sponges etc., and their larvae) and of coralline algae.

3.2. Societal changes: think globally, act locally

Socio-economic determinants are linked to the development of industry, commerce and urbanism, and their primary effects are direct, localized and traceable. Industrial activities typically generate large volumes of polluting agents that dissipate during the processing stages. Furthermore, residues are managed according to criteria of productivity and cost-effectiveness, i.e. with little concern for recycling alternatives or for speciation into toxicants that make their way from the discharge areas into waterways and eventually into the sea. Ore excavation and the grading of metal-rich topsoil cause faunistic and floristic deterioration, i.e. loss of endemic wildlife, facilitate bioerosion, cause the constant remodeling of river mouths with the finer particles, and destroy almost all life forms where the heavier deposits accumulate in basins. In the sea, bioaccumulation of organic pollutants by primary consumers can be expected to reach very high concentrations in top predators, e.g. lipid-bound chlorinated biphenyls. Heavy metals that leach out of antifouling formulations are accumulated by filter-feeding invertebrates and take a heavy toll on the reproductive success of many intertidal and subtidal life forms around harbor waters. Intensive agriculture and farming generate various types of pollution: (i) enrichment by fertilizers and farm-animal disjecta, (ii) pesticides and systemic herbicides, and (iii) veterinary products and hormones that alter the quality of underground water reserves, while excess runoff benefits undesirable primary production in coastal zones, e.g.
"green tides" and microalgal blooms. Global commerce favors the dissemination of alien species through the shifting of large amounts of ballast water by container ships and through the cleaning of hulls. Urban development in tropical zones eliminates mangrove forests and seagrass beds, which are important coastal components of fringing reefs. Discharging into the sea the very large volumes of untreated sewage generated on land causes eutrophication in coastal reef systems, along with the introduction of novel microbial diseases.

3.3. Scientists should play a central role in biodiversity issues

At present, there are no reliable estimates of the rate at which coral reef biodiversity will be affected by climatic and anthropic forcings by the year 2100. The various proxies used by climatologists and reef ecologists predict two types of scenario: (i) a short-term collapse of tropical reef biodiversity, as early as 2050, if current trends are not curbed; (ii) a good level of resilience if present-day conditions are maintained as an absolute maximum, allowing photosymbiotic systems to adapt gradually by limiting direct interference.

In 2012, scientific expertise is mostly directed to the discovery and documentation of new species in remote or hitherto neglected regions, or to reporting the loss of biodiversity in areas impacted by human activities. Consulting is sought for the delineation of protected areas and the establishment of fishing quotas. Science-based management is necessary, in particular in regions suffering from direct and rapidly growing human influence. Particularly at risk are young developing nations and small islands that are placed under tremendous pressure from major industrial countries to exploit and export their natural resources, a socio-political scenario that tends to divide local populations and sometimes aggravates environmental issues. Thus, care must be taken of the "growth crises" of these culturally fragile ethnic groups undergoing such economic pressures, and a premium should be placed on the management of their natural patrimony, in which their culture is firmly rooted.

What comes out most often in recent discussion groups is: (i) that new monitoring tools, new environmental technologies and new models must be developed in a consensus mode between interested parties; (ii) that scientific expertise and field experience must be federated into autonomous consortia; (iii) that implementation must follow a rigorously coordinated, step-wise approach. The present feeling is that scientific expertise is only marginally called upon, and often misquoted, in decision-making on matters of economic development and urbanization in developing countries. By following the above recommendations, the habilitation of the scientist as an advisor or mediator playing a crucial role in decision-making will greatly facilitate the social dialogue on ecological matters.

4. State-of-the-art: new concepts, novel approaches, cutting-edge technology

How can a suitable management compromise be found between climatic and human forcings without stifling the economy and the social development of developing nations? This is everybody's concern: entrepreneurs, merchants, economists, businessmen, politicians, educators, end-users and consumers and, of course, scientists. From the scientist's point of view, each identifiable environmental issue has its specific set of solutions, and should be treated with the same care as an outpatient's condition is in a hospital.
The physician makes a global assessment and recommends that a series of analyses be made by specialists before he confirms his diagnosis and prescribes an adapted treatment. In most cases the patient will recover successfully; sometimes he has to be hospitalized for some time, and on rare occasions he may not leave the hospital alive. After an overall check-up in which the probable stressor(s) is (are) identified, the environmental investigator will hand the case over to specialists, who will each carry out a set of biochemical, molecular, microbiological and imaging tests on specific model organisms and on their tissues, cells, body fluids and associated microbes. Once the diagnosis is confirmed, a preliminary report is made that may contain special recommendations, followed by regular visits to evaluate the resilience of the system and the potential for the recovery of its lost biodiversity.

To be able to adapt this medical approach to an ailing ecosystem, the environmentalist needs to find representative biological study models, i.e. sentinel species that are sensitive to the stressors, but not to the point of immediate eradication at the onset of a mild exposure. He also needs to study characteristic and observable symptoms, their evolution and their succession. Early markers of an organism exposed to a stressor can be detected by functional genomics with suitable molecular tools, and the evolution of the responses can be followed thenceforth. For instance, an organism undergoing abiotic stress is usually more prone to microbial infection than a conspecific control organism, calling for physiological as well as bacteriological, fungal, viral and parasitic analyses. Unfortunately, very few field investigators are in a position to use these tools routinely, let alone in association, especially the molecular tools that have been adapted from the medical world to specific biological models, and only for research purposes.

4.1. Shifting from a consumer-minded to a conservation-minded attitude

Coral reef organisms tend to live in close association in order to gain optimal access to essential resources such as hard substratum, light and appropriate food, the co-occurrence of which is limited in shallow tropical waters, which are naturally efficient at nutrient cycling (oligotrophic regime). These constraints are reflected in the highly sophisticated communal assemblages formed by coralline and fleshy algae, invertebrates, vertebrates, protists etc., which communicate essentially via surface-to-surface or distant interactions of an immunological and/or chemical nature. All forms of association are encountered, from obligate parasitism to symbiosis via commensalism, and from prey hunting to filter feeding via surface browsing. The scientific literature on coral reef biology has traditionally emphasized the competitive aspects between members sharing the same habitat, especially in connection with secondary metabolites emitted by sessile or sedentary invertebrates and having allelopathic or growth-inhibiting activities [8,9]. Only recently have the cooperative and functional aspects of interspecies and cross-phyletic communication been explored, stepping from a more "medical" attitude (i.e. pharmacologically oriented) to a more "ecological" one (i.e. conservation-oriented). Discoveries such as bacterial intercommunication via quorum-sensing signals, biofilm studies and, of course, the progress made in genomics have greatly contributed to this attitude change (e.g.
the coral probiotic hypothesis of Reshef et al.). In the eighties, the term "biodiversity" came into widespread use. The biodiversity concept certainly helped in the funding of sampling expeditions in diversity hotspots. Rapid progress in molecular science, and in genomics in particular, has made it theoretically possible to retrieve useful information from crude environmental samples, i.e. doing away with cultivation restrictions and going beyond conventional taxonomy based on morphological traits. However, the metagenomic characterization of marine microorganisms, whether from the plankton, from biofilms or associated with macrobiota and their exudates, generates phenomenal volumes of data from which extracting the crucial information is a difficult and costly task for biocomputing specialists. Advocated by some as a new paradigm for biodiversity studies, "data-intensive science" taking a naïve (data-driven, i.e. non-theory-based) approach looks for truly novel and surprising patterns that are "born from the data". But because of metadata-sorting problems and costs, and since environmental problems cannot wait for new hypotheses to emerge, others think that "knowledge-based science" can at least confront existing hypotheses against the metadata background and guide investigators in detecting novel information.

4.2. The holobiont as an evolutionary concept, the extended holobiont as a conservation concept

Closer to hands-on science, genomists have coined a very useful concept, originally to account for the functional dynamics associated with bioconstructing organisms such as corals, sponges or coralline algae in tropical marine ecosystems: that of the holobiont. The hologenome theory was later generalized to terrestrial eukaryotic-prokaryotic systems, including man and his microflora, in order to account for the coevolutionary and cross-kingdom aspects of symbiosis. Finally, the holobiont is also a practical concept insofar as it can be conveniently transposed from its natural site to a microcosm or mesocosm aquarium setup, in order to evaluate stress impacts on both the host and its associated microbiota. The concept of the extended holobiont, which also embraces the macrofauna and flora living in association with the host, is taken up as a conservation tool in section 5.3.

4.3. The "omics" or high-dimensional biology revolution

"Omics" technologies are primarily aimed at the universal detection of genes (genomics), gene transcripts (transcriptomics), proteins (proteomics) and metabolites (metabolomics) in biological samples. Stress reactions have been studied on the various components of the coral holobiont. In 2008, De Salvo and collaborators performed a medium-scale microarray transcriptomics experiment on the Caribbean coral Montastraea faveolata. Indeed, candidate genes that are directly implicated in (e.g. the coral's response to stress) or functionally interconnected with (e.g. genes related to immunity) the cnidarian-dinoflagellate symbiosis point out the major role of key proteins and their fast-evolving adaptation in the face of environmental challenges. Oxidative stress in zooxanthellae produces reactive oxygen species (ROS), with hydrogen peroxide diffusing into the host cell, where it activates a cellular cascade resulting in photosymbiont expulsion and polyp bleaching. It may be that recognition of a suitable zooxanthella clade by the coral host is a selective process during which other strains are actively expelled through immunity and apoptosis, the photosymbiont being more susceptible than the host to, e.g., elevated temperatures and possibly UV levels. Massive and laminar species are supposedly more resistant to environmental fluctuations than shorter-lived branched or encrusting species.
Comparative studies of laminar versus branched growth forms have begun to test this experimentally. A comprehensive expressed sequence tag (EST) transcriptomics dataset on the symbiotic zooxanthellae has recently indicated some unique regulatory characteristics not found in free-living dinoflagellates; once completed and annotated, the complete Symbiodinium genome should prove an invaluable resource. The third major functional component of the coral holobiont is the bacterial and archaeal microbiome, which is also susceptible to large composition shifts during, e.g., heat stress, as revealed by metagenomic studies. Bacterial consortia are described as host-specific, each profile having its specialist and its generalist strains, the relative composition of which is affected by environmental conditions.

Ideally, holobiont-wide analysis will benefit from the combined knowledge of the molecular biology of the coral, the zooxanthellae and the prokaryotes, which is necessary to define a fully functional system to be used as a control. The fitness descriptors will need to take into account (i) geographic variations (e.g. chemotypes for widely dispersed taxa), as well as (ii) short natural environmental fluctuations that fall within the natural physiological tolerance of a given population. The introduction of a stressor (significant in intensity and/or duration) will allow investigators to model the precise interactions of the molecular events that affect the three components of this holobiont, and to place the physiological responses of the different parties into a system-wide sequence, from early responses to total collapse. Using a holistic and clinical-like approach, issue-specific network models can be created by confronting data collected on "stressed" holobionts against homeostatically regulated "no-stress" conspecific controls.

4.4. Merging "omics" with imaging and physiological/ecotoxicological approaches

Creating a multi-approach, comprehensive tool to evaluate the health status of corals under climatic or direct anthropic threat provides a more robust assessment than any single analytical method. The omics revolution is coming of age, and large-scale data collections (metadata) are more easily tractable than before, thanks to modern bioinformatics algorithms and next-generation sequencing. The number of research papers dealing with the molecular aspects of photosymbiosis in corals, with the stress responses leading to host-symbiont rupture, with the detection of early markers of stress (before actual symptoms are physiologically or visually expressed), with the resilience potential of massive vs. branched growth forms, with innate immunity, and with the onset and development of bacterial pathogenicity following stress has increased tremendously within the last few years. Each paper sheds its own useful light on one of the most fascinating of biological phenomena. As of today, however, scientists are little more than spectators of a dramatic acceleration of the destructive impacts of civilization on the most fragile of all marine ecosystems. Politicians and the media are only implementing or relaying preventive anti-pollution policies, and what high-tech research has to offer to monitor what is actually occurring goes well beyond the understanding of the layman. What is proposed here is to select the most informative analytical strategy as the core component, molecular biology (omics) being the choice for detecting early stress responses, and to complement it with physiological measures, which are privileged for evidencing (i) adaptability and (ii) loss of function.
Physiological monitoring is important for comparing the tolerance ranges of corals of various growth forms to short-lived or limited stress exposures that do not cause changes in the composition of the zooxanthellae and that allow for gradual acclimation. Imaging tools have a greater impact on non-scientists, and they can offer excellent visual "proof" of an ongoing stress response, whether in the field (e.g. time-lapse photography of the entire holobiont) or inside the component under examination. For example, high-resolution imaging mass spectrometry or NanoSIMS can perform isotope tracing at the single-cell level, and can be linked to molecular visualization methods. In combination, -omics, physiological/ecotoxicological and imaging tools provide a potentially formidable toolbox for measuring stress responses in coral holobionts and their separate components.

5. What modern technologies can do for the environment

Ideally, we need a multi-approach diagnostic tool focused on the sentinel species undergoing stress and on the profile of its microbial associates, one that is also able to estimate the loss of the epibiotic and encrusting macrofauna and flora living in association with the host, i.e. the evolution of the overall biodiversity from the earliest stress symptoms detectable on the host to its death. The basic requirements for a diagnostic tool are: to consider the different components of the holobiont model (i.e. host, photosymbionts and microbiota) and then integrate the results of the different analyses into a single comprehensive "holistic" diagnosis; for each analysis and each approach (molecular, microscopic, ecotoxicologic), to be able to define dose/exposure limits of the stressor that correspond to threshold responses along a continuum such as normal tolerance / acclimation / resilience / no return / rapid death; to propose corrective measures whenever a critical point is reached in one of the above limits; to list the algal and animal species that are usually associated with the sentinel holobiont; and to categorize each species with respect to its location within the holobiont system (encrusting, epibiotic, mucus-bound, free-living) and to its degree of dependence on the host (predator, commensal, parasite, symbiont).

5.1. Measuring stress responses, diagnosing overall fitness and proposing corrective measures

On the basis of the above requirements for a diagnostic tool, we propose an 8-step procedure that achieves a compromise between experimental robustness and implementation simplicity. The sequence is shown in Fig. 6, and each step is detailed below.

STEP ONE (ISSUE) - Identification of the problem (in the field)

Each biodiversity issue is different, and the first task is to identify the source of the problem. Abiotic stresses should not include first-degree biological interference due to competition, predation or outbreaks of invasive or alien opportunists, including benthic and pelagic macro- and microbiota. On the other hand, we shall consider the departures from "normal" or standard macro- and microbiodiversity resulting from the host species being under abiotic stress, e.g. expulsion or "bail-out" of photosymbionts, loss of useful bacterial strains, or disappearance of vegetal and animal associates, which are at the very heart of the biodiversity loss issue.
The most common abiotic stressors include thermal stress, desiccation, irradiation, hypo- and hypersalinity, silting, heavy-metal accumulation, organic compounds, and mechanical damage due to wave or wind action and to anchoring. Thus, abiotic stressors can be climatic or pollution-bound, chronic or accidental, and their effects can combine. It may be necessary to combine field and laboratory (aquarium) studies in which stressors can be analyzed and modulated individually.

STEP TWO (METHODS) - The choice of an adequate holobiont (field)

The choice of a biological taxon may depend on the type of stressor. Corals and coralline algae might be more suitable for evaluating thermal, ultraviolet and acidity stresses, whereas bioaccumulators such as sponges or bivalve mollusks might be more suitable for silting and heavy-metal stresses. Furthermore, a good model is one that is sensitive enough to the stressor while displaying a range of responses that can be calibrated usefully against different concentrations / exposure times. Finally, the model host species must be (i) representative of the area under investigation, (ii) common enough for sampling at a statistically significant scale and (iii) amenable to aquarium studies.

STEP THREE (METHODS) - How to measure stress in a model holobiont system

Abiotic stress will affect all components of the holobiont diversely and in a network-connected manner, with gradual loss of function and accompanying morbidity symptoms. Stress studies will therefore deal with the host organism, its photosymbionts, and also its associated microbiome. This will be achieved through a combination of cutting-edge and classical approaches, e.g. (i) omics analyses, (ii) physiological or ecotoxicological assays and (iii) imaging.

STEP FOUR (DATABASE) - Making a robust set of control data for each analysis

The sentinel holobiont must be sampled in reputedly undisturbed areas, whatever the type of analysis (physiological, metagenomic, taxonomic, associate biodiversity etc.), and natural variations from a statistically significant number of replicates must be recorded. When dealing with a photosystem holobiont, aquarium studies should consider the three components separately: (i) the host, e.g. coral or sponge, (ii) the photosymbionts, e.g. zooxanthellae or cyanobacteria, and (iii) the associated microbes, e.g. mucus- and tissue-bound bacteria. In each case, a minimum of three (and up to five) independent analyses using different analytical approaches must be undertaken, e.g. one or two -omics, one or two physiological or ecotoxicological, and one using imaging. For each analysis, the average values or estimates will set the 0 mark, or control score, of a future 0-10 scale. This step is crucial, and the natural variability must not be too large, with control scores never exceeding those of responses to mild or severe stresses.

STEP FIVE (DATABASE) - Set experimental calibration scales (Fig. 7)

A useful way to calibrate a response scale is to expose replicate holobionts, under aquarium conditions, to increasing concentrations or exposure times of the stressor and to record the corresponding responses for each analysis.

STEP SIX (ANALYSIS) - Compare stress vs. standard profiles

Once each experimental scale is established per analysis, environmental samples can be rated by attributing an average performance score based on the calibrated scale.

STEP SEVEN (TREATMENT) - The single grid of impact, or radar chart (Fig. 8, top and middle)

The scores for each of the 3-5 analyses on each of the biological components are then reported on a single grid of impact, e.g. on a coral (5 analyses), its photosymbionts (4) and its microbiome (3).

STEP EIGHT (REPORT) - Health status of the sentinel species and recommendations for amendment (Fig. 8, bottom)
Let us say we have a 7.4 average score on the coral host, 6.6 on the microbiome and 4.2 on the photosymbiont. This indicates (i) which biological component is most affected by the environmental stress and (ii) where appropriate action is to be taken. Here, the holobiont looks normal, but early signs of stress are detected in the zooxanthellae (ROS and oxidative-stress products), without signs of microbial dysfunction. On the basis of these visual diagrams and numerical scores, which are explicit enough to be understood and followed by non-scientists, useful recommendations can be made to customers; and if the principle behind this environmental tool, nicknamed "INDICORAL", can be validated as a standard, legal enforcement can follow.

5.2. Strengths, limitations and future of environmental diagnosis tools

What is described here is a custom-designed diagnostic tool that integrates the critical snapshot information from different analytical approaches into a single, easy-to-read layout with a consensus fitness index. Using the multivariate "radar chart" model in conjunction with the fitness index then allows us to point out the weaknesses in the holobiont's health at a given time, after which corrective measures can be proposed. Such a chart is made up of radiating spokes, each representing a performance scale, for example rated from 0 to 10 for a given test. Each spoke or performance scale corresponds to a single test type, whether molecular, visual or physiological. A robust radar chart typically integrates at least one test of each sort, in order to miss as little useful information as possible when establishing the final diagnosis. For example, the holobiont may "look normal" (as compared to control conspecifics) on a visual scale and perform close to optimally in a physiological assay, yet present strong signs of stress in an omics approach that detects early molecular responses. Or the holobiont may be suffering from a pollutant that acutely undermines its respiration, with no apparent effect on its microbiome or on its appearance, and so on.

The objection that immediately comes to mind is that such an endeavour is time- and money-consuming, knowing how difficult it is to perform discriminating omics tests alone, especially when dealing with metadata and the mathematical treatment that follows. The answer has to be optimistic: giant strides are being made in analyzing environmental metadata more efficiently and cost-effectively, a good thing since some environmental issues are rapidly becoming critical. The other objection is that a whole team of specialists is needed to devise and run a single "radar experiment". The answer is two-fold: (i) once a suitable sentinel holobiont is chosen, the tedious part is to establish a reliable control database for each experiment type, mostly in the laboratory and using precise instrumentation; this database may initially require a panel of experts to set it up, but the work does not have to be repeated for subsequent investigations. (ii) We are creating a tool, not a thorough investigation into fundamentals, i.e. the experts must make sure that only essential information is retained, e.g. use identified molecular or microbial markers instead of full profiles, and set critical or threshold values within the 0-10 scale of responses (e.g. onset of symptoms, loss of pigmentation, resilience limits, and so on), 0-2 representing the natural variability observed in controls and 8-10 representing loss (from the no-return point to immediate death).
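As a minimal sketch of how such scoring might be mechanized, the Python snippet below aggregates per-test scores into per-component averages and a consensus fitness index, following the radar-chart logic just described. All names, thresholds and example values here are hypothetical illustrations, not part of the INDICORAL proposal itself; scores are oriented so that 10 means fully fit, matching the worked example in STEP EIGHT.

```python
# Illustrative sketch only: aggregates hypothetical 0-10 test scores for the
# three holobiont components into per-component averages and a consensus
# fitness index, following the radar-chart logic described in the text.
from statistics import mean

# Hypothetical scores from 3-5 independent tests per component (0-10 scale,
# 10 = fully fit). Test names are placeholders for calibrated assays.
scores = {
    "coral_host":    {"omics": 7.8, "physiology": 7.2, "imaging": 7.2},
    "microbiome":    {"omics": 6.9, "physiology": 6.3, "imaging": 6.6},
    "photosymbiont": {"omics": 3.8, "physiology": 4.5, "imaging": 4.3},
}

def component_average(tests):
    """Average the 0-10 scores of the tests run on one component."""
    return round(mean(tests.values()), 1)

def classify(score):
    """Map a score to illustrative bands (thresholds are assumptions)."""
    if score >= 8.0:
        return "healthy (within control variability)"
    if score >= 5.0:
        return "early stress / acclimation range"
    if score >= 2.0:
        return "strong stress, resilience uncertain"
    return "near no-return point"

averages = {name: component_average(tests) for name, tests in scores.items()}
fitness_index = round(mean(averages.values()), 1)  # consensus fitness index

for name, avg in averages.items():
    print(f"{name:14s} {avg:4.1f} -> {classify(avg)}")
print(f"consensus fitness index: {fitness_index}")
```

In a real implementation, each test score would come from the calibrated scales of STEP FIVE, and the radar chart itself could be drawn from the same dictionary of per-component scores.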
This simplified representation relies on strong metrics, not on hypothesis testing, which is the job of researchers. The custom association of 3 to 5 different test types on the coral host and/or the photosymbionts and/or the mucus-bound bacterial flora spans a whole range of dysfunction possibilities.

5.3. Predicting biodiversity loss using the extended holobiont concept

The health status of the coral holobiont directly affects all associated life forms, according to their degree of dependence on the host. Ultimately, the loss of the host determines the biodiversity loss of all dependent flora, fauna and microbes. Changes in fish biodiversity and pressures on feeding niches have been reported in relation to coral bleaching and loss.

6. Measuring, predicting and, hopefully, mending

As pointed out earlier, over half a billion humans live off the goods and services of coral reef ecosystems at large, mostly in Asia. And not only that: research on some 20,000 chemicals extracted from reef invertebrates has inspired the design of novel anticancer agents, antibiotics, anti-inflammatories, painkillers etc., and the exploration of the complex biosynthetic pathways leading to the production of these "miracle" molecules is only starting. Given the paramount importance of host-microbial symbioses in the making of these molecules, biodiversity loss will inevitably lead to chemodiversity loss, and the opportunity will no longer exist to investigate the full bio-inspiration potential of what holobiont systems can produce better than we can.

Marine ecology is a young science, and its development is accelerating in response (i) to the urgency of measuring the reactions of organisms facing the fluctuations of their environment, and (ii) to the need to understand how they form complex interaction networks around key molecules acting as mediators of antibioses and symbioses, of feeding hierarchies and of occupational strategies for essential resources. Holobiont-wide systems biology is coming of age and will allow us to understand how the various components of a holobiont system respond to stress in a coordinated manner, in the face of sudden and brutal environmental disasters or of steadily increasing climatic or anthropogenic forcings, against the background of naturally fluctuating levels of stress.

Ecosystem-wide resilience to environmental challenges can no longer be hoped for, but a total biodiversity wipe-out of coral reefs is equally unlikely. The most probable scenario is that some reef systems will face near-total biodiversity collapse (e.g. in tropical zones that are within the direct influence of urban expansion), while coral species from subtropical and remote localities will constantly try to acclimatize, each ecosystem striving to reach an overall "resilience equilibrium" at the cost of some of its biological diversity. In 2013, most biological phenomena can be measured at all scales, from single cells to whole ecosystems, directly or using proxies and appropriate metrics. In combination, -omics, physiological/ecotoxicological and imaging tools provide a potentially formidable toolbox for measuring stress responses in coral holobionts and their separate components. Designing environmental tools such as the one proposed here may sooner or later become a necessity, not only for coral reefs but for all endangered ecosystems, marine and terrestrial.
I wish to dedicate this environmental tool concept to those who guided and encouraged my early endeavours as an enthusiastic observer of nature: to the great naturalist René Catala (1901-1987), and to my Australian and New Zealander academic mentors, from Sydney to Townsville via Auckland.

La Barre S. Coral reef biodiversity in the face of climatic changes. Chapter 4 in: Biodiversity Loss in a Changing Planet, vol. 4 of Biodiversity series, Intech (ISBN 979-953-307-252-3), 77-112, http://www.intechopen.com/subject/biological-sciences/biodiversity (open access, Nov. 2011).
Benton MJ. When Life Nearly Died: The Greatest Mass Extinction of All Time. London: Thames & Hudson; 2005. ISBN 0-500-28573-X.
Knoll AH, Bambach RK, Canfield DE, Grotzinger JP. Comparative Earth history and Late Permian mass extinction. Science 1996;273(5274), 452-457, ISSN 0036-8075.
Falkowski P. Tenth Annual Roger Revelle Commemorative Lecture: The once and future ocean. Oceanography 2009;22(2), 246-251, doi:10.5670/oceanog.2009.57
Vitousek PM, Mooney HA, Lubchenco J, Melillo JM. Human domination of Earth's ecosystems. Science 1997;277(5325), 494-499, doi:10.1126/science.277.5325.494
Connell JH. Diversity in tropical rain forests and coral reefs: high diversity of trees and corals is maintained only in a nonequilibrium state. Science 1978;199(4335), 1302-1310, ISSN 0036-8075.
Baker B. New ocean policy depends on biological research. BioScience 2012;62(5), 524, doi:10.1525/bio.2012.62.5.19
Jackson JBC, Buss L. Allelopathy and spatial competition among coral reef invertebrates. PNAS 1975;72(12), 5160-5163, ISSN 0027-8424.
Pawlik JR, Steindler L, Henkel TP, Beer S, Ilan M. Chemical warfare on coral reefs: sponge metabolites differentially affect coral symbiosis in situ. Limnol. Oceanogr. 2007;52(2), 907-911, ISSN 0024-3590.
Reshef L, Koren O, Loya Y, Zilber-Rosenberg I, Rosenberg E. The coral probiotic hypothesis. Environmental Microbiology 2006;8(2), 2067-2073, ISSN 1462-2920.
Wilson EO, editor; Peter FM, associate editor. Biodiversity. National Academy Press; March 1988. 521 pages. ISBN 0-309-03783-2; ISBN 0-309-03739-5.
Bonney R, Cooper CB, Dickinson J, Kelling S, Phillips T, Rosenberg KV, Shirk J. Citizen science: a developing tool for expanding science knowledge and scientific literacy. BioScience 2009;59(11), 977-984, doi:10.1525/bio.2009.59.11.9
Reyers B, Polasky S, Tallis H, Mooney HA, Larigauderie A. Finding common ground for biodiversity and ecosystem services. BioScience 2012;62(5), 503-507, doi:10.1525/bio.2012.62.5.1
Kelling S, Hochachka WM, Fink D, Riedewald M, Caruana R, Ballard G, Hooker G. Data-intensive science: a new paradigm for biodiversity studies. BioScience 2009;59(7), 613-620, doi:10.1525/bio.2009.59.7.12
Nichols JD, Cooch EG, Nichols JM, Sauer JR. Studying biodiversity: is a new paradigm really needed? BioScience 2012;62(5), 497-502, doi:10.1525/bio.2012.62.5.11
Rosenberg E, Koren O, Reshef L, Efrony R, Zilber-Rosenberg I. The role of microorganisms in coral health, disease and evolution. Nature Rev. Microbiol. 2007;5, 355-362, doi:10.1038/nrmicro1635
Grottoli AG, Rodrigues LJ, Palardy JE. Heterotrophic plasticity and resilience in bleached corals. Nature 2006;440, 1186-1189, doi:10.1038/nature04565
Rosenberg E, Kushmaro A, Kramarsky-Winter E, Banin E, Loya Y. The role of microorganisms in coral bleaching. The ISME Journal 2009;3, 139-146, doi:10.1038/ismej.2008.104
Rosenberg E, Zilber-Rosenberg I. Symbiosis and development: the hologenome concept.
Birth Defects Research, Part C 2011;93, 56-66, doi:10.1002/bdrc.20196
Rohwer F, Seguritan V, Azam F, Knowlton N. Diversity and distribution of coral-associated bacteria. Mar. Ecol. Prog. Ser. 2002;243, 1-10, ISSN 0171-8630.
Horgan RP, Kenny LC. Omic technologies: genomics, transcriptomics, proteomics and metabolomics. SAC review, The Obstetrician & Gynaecologist 2011;13, 189-195.
Wang Z, Gerstein M, Snyder M. RNA-Seq: a revolutionary tool for transcriptomics. Nature Reviews Genetics 2009;10, 57-63, doi:10.1038/nrg2484
De Salvo MK et al. Differential gene expression during thermal stress and bleaching in the Caribbean coral Montastraea faveolata. Molecular Ecology 2008;17(17), 3952-3971, doi:10.1111/j.1365-294X.2008.03879.x
Traylor-Knowles N et al. Production of a reference transcriptome and transcriptomic database (PocilloporaBase) for the cauliflower coral, Pocillopora damicornis. BMC Genomics 2011;12:585, doi:10.1186/1471-2164-12-585
Voolstra CR et al. Rapid evolution of coral proteins responsible for interaction with the environment. PLoS ONE 2011;6(5), e20392, doi:10.1371/journal.pone.0020392
Strychar KB, Coates MC, Sammarco PW, Piva TJ. Bleaching as a pathogenic response in scleractinian corals, evidenced by high concentrations of apoptotic and necrotic zooxanthellae. J. Exp. Mar. Biol. Ecol. 2004;304, 99-121, doi:10.1016/j.jembe.2003.11.023
Strychar KB, Sammarco PW. Effects of heat stress on photopigments of zooxanthellae (Symbiodinium spp.) symbiotic with the corals Acropora hyacinthus, Porites solida and Favites complanata. International Journal of Biology 2012;4(1), 3-19, doi:10.5539/ijb.v4n1p3
Chow AM, Ferrier-Pagès C, Khalouei S, Reynaud S, Brown IA. Increased light intensity induces heat shock protein Hsp60 in coral species. Cell Stress and Chaperones 2009;4, 469-476, doi:10.1007/s12192-009-0100-6
Mydlarz LD, Palmer CV. The presence of multiple phenoloxidases in Caribbean reef-building corals. Comparative Biochemistry and Physiology, Part A 2011;159, 372-378, doi:10.1016/j.cbpa.2011.03.029
Palmer CV, McGinty ES, Cummings DJ, Smith SM, Bartels E, Mydlarz LD. Patterns of ecological immunology: variation in the responses of Caribbean corals to elevated temperature and a pathogen elicitor. The Journal of Experimental Biology 2011;214(24), 4220-4249, doi:10.1242/jeb.057349
Vidal-Dupiol J et al. Innate immune responses of a scleractinian coral to vibriosis. Journal of Biological Chemistry 2011;286(25), 22688-22698.
Martinez-Luis S, Ballesteros J, Gutiérrez M. Antibacterial constituents from Pseudoalteromonas sp. Rev. Latinoamer. Quim. 2011;39(1-2), 75-83, ISSN 0370-5943.
Kvennefors CE et al. Regulation of bacterial communities through antimicrobial activity by the coral holobiont. Microb. Ecol. 2012;63, 605-618, doi:10.1007/s00248-011-9946-0
Shnit-Orland M, Sivan A, Kushmaro A. Antibacterial activity of Pseudoalteromonas in the coral holobiont. Microb. Ecol. 2012;64, 851-859, doi:10.1007/s00248-012-0086-y
Rao D, Webb JS, Kjelleberg S. Microbial colonization and competition on the marine alga Ulva australis. Appl. Environ. Microbiol. 2006;72(8), 5547-5555, doi:10.1128/AEM.00449-06
Franks A, Egan SH, Holmström C, James S, Lappin-Scott H, Kjelleberg S. Inhibition of fungal colonization by Pseudoalteromonas tunicata provides a competitive advantage during surface colonization. Appl. Environ. Microbiol. 2006;72(9), 6079-6087, doi:10.1128/AEM.00559-06
Miller DJ et al.
The innate immune repertoire in Cnidaria - ancestral complexity and stochastic gene loss. Genome Biology 2007;8, R59, doi:10.1186/gb-2007-8-4-r59
Shinzato C et al. Using the Acropora digitifera genome to understand coral responses to environmental change. Nature 2011;476, 320-324, doi:10.1038/nature10249
Shinzato C et al. The repertoire of chemical defense genes in the coral Acropora digitifera genome. Zoological Science 2012;29, 510-517, doi:10.2108/zsj.29.510
Palmer CV, Traylor-Knowles N. Towards an integrated network of coral immune mechanisms. Proc. Roy. Soc. B - Biological Sciences 2012;279(4), 4106-4114, doi:10.1098/rspb.2012.1477
Kenkel CD et al. Development of gene expression markers of acute heat-light stress in reef-building corals of the genus Porites. PLoS ONE 2011;6(10), e26914, doi:10.1371/journal.pone.0026914
Vidal-Dupiol J, Ladrière O, Meistertzheim A-L, Fouré L, Adjeroud M, Mitta G. Physiological responses of the scleractinian coral Pocillopora damicornis to bacterial stress from Vibrio coralliilyticus. The Journal of Experimental Biology 2011;214(9), 1533-1545, doi:10.1242/jeb.053165
Bayer T et al. Symbiodinium transcriptomes: genome insights into the dinoflagellate symbionts of reef-building corals. PLoS ONE 2012;7(4), e35269, doi:10.1371/journal.pone.0035269
Fabina NS, Putnam HM, Franklin EC, Stat M, Gates R. Transmission mode predicts specificity and interaction patterns in coral-Symbiodinium networks. PLoS ONE 2012;7(9), e44970, doi:10.1371/journal.pone.0044970
Mouchka ME, Hewson I, Harvell D. Coral-associated bacterial assemblages: current knowledge and the potential for climate-driven impacts. Integrative and Comparative Biology 2010;50(4), 662-674, doi:10.1093/icb/icq061
Barabási A-L, Oltvai ZN. Network biology: understanding the cell's functional organization. Nature Reviews Genetics 2004;5, 101-113, doi:10.1038/nrg1272
Bellantuono AJ et al. Resistance to thermal stress in corals without changes in symbiont composition. Proc. Roy. Soc. B 2012;279, 1100-1107, doi:10.1098/rspb.2011.1780
Pett-Ridge J, Weber PK. NanoSIP: NanoSIMS applications for microbial ecology. In: Microbial Systems Biology: Methods and Protocols. Methods in Molecular Biology 2012;881, 375-408, doi:10.1007/978-1-61779-827-6_13
Palmer CV, Modi CK, Mydlarz LD. Coral fluorescent proteins as antioxidants. PLoS ONE 2009;4(10), e7298, doi:10.1371/journal.pone.0007298
Barott K, Smith J, Dinsdale E, Hatay M, Sandin S et al. Hyperspectral and physiological analyses of coral-algal interactions. PLoS ONE 2009;4(11), e8043, doi:10.1371/journal.pone.0008043
Pratchett MS, Hoey AS, Wilson SK, Messmer V, Graham NAJ. Changes in biodiversity and functioning of reef fish assemblages following coral bleaching and coral loss. Diversity 2011;3, 424-452, ISSN 1424-2818.
Plaisance L, Knowlton N, Paulay G, Meyer C. Reef-associated crustacean fauna: biodiversity estimates using semi-quantitative sampling and DNA barcoding. Coral Reefs 2009;28, 977-986, doi:10.1007/s00338-009-0543-3
In calculus, an integral is the space under the graph of an equation (sometimes said as "the area under a curve"). An integral is the reverse of a derivative, and a derivative is the steepness (or "slope") of a curve, i.e. its rate of change. The word "integral" can also be used as an adjective meaning "related to integers".

The symbol for integration, in calculus, is $\int$, a tall letter "S". This symbol was first used by Gottfried Wilhelm Leibniz, who used it as a stylized "ſ" (for summa, Latin for sum) to mean the summation of the area covered by an equation, such as $y = f(x)$.

Integrals and derivatives are part of a branch of mathematics called calculus. The link between these two is very important, and is called the Fundamental Theorem of Calculus. The theorem says that an integral can be reversed by a derivative, similar to how an addition can be reversed by a subtraction.

Integration helps when trying to multiply units into a problem. For example, if a problem with rate, $\frac{\text{distance}}{\text{time}}$, needs an answer with just distance, one solution is to integrate with respect to time. This means multiplying in time to cancel the time in $\frac{\text{distance}}{\text{time}}$. This is done by adding small slices of the rate graph together. The slices are close to zero in width, but adding them forever makes them add up to a whole. This is called a Riemann sum.

Adding these slices together gives the equation that the first equation is the derivative of. Integrals are a way to add many tiny things together by hand. It is like summation, which is adding $1 + 2 + 3 + \cdots + n$. The difference with integration is that we also have to add all the decimals and fractions in between.

Another time integration is helpful is when finding the volume of a solid. It can add two-dimensional (without width) slices of the solid together forever until there is a width. This means the object now has three dimensions: the original two and a width. This gives the volume of the three-dimensional object described.

Methods of Integration

Antiderivative

If we take the function $2x$, for example, and anti-differentiate it, we can say that an integral of $2x$ is $x^2$. We say an integral, not the integral, because the antiderivative of a function is not unique. For example, $x^2 + 17$ also differentiates to $2x$. Because of this, when taking the antiderivative a constant C must be added. This is called an indefinite integral. The reason is that when finding the derivative of a function, constants equal 0, as in the function

$f(x) = x^2 + 5, \qquad f'(x) = 2x + 0 = 2x.$

Note the 0: we cannot recover the constant if we only have the derivative, so the integral is written

$\int 2x\,dx = x^2 + C.$

Simple Equations

A simple equation such as $y = x^2$ can be integrated with respect to x using the following technique: add 1 to the power x is raised to, and then divide x raised to this new power by the value of the new power. Therefore, integration of a normal equation follows this rule:

$\int x^n\,dx = \frac{x^{n+1}}{n+1} + C \qquad (n \neq -1)$

The $dx$ at the end is what shows that we are integrating with respect to x, that is, as x changes. This can be seen to be the inverse of differentiation. However, there is a constant, C, added when you integrate. This is called the constant of integration. It is required because differentiating a constant results in zero, therefore integrating zero (which can be put onto the end of any integrand) produces a constant, C. The value of this constant would be found by using given conditions.
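As a short worked check of the rule above (an added illustration, not in the original article), take $y = x^3$:

$\int x^3\,dx = \frac{x^{3+1}}{3+1} + C = \frac{x^4}{4} + C$

Differentiating $\frac{x^4}{4} + C$ gives back $x^3$, which confirms the answer.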
Equations with more than one term are integrated by integrating each individual term:

\( \int (x^2 + 3x) \, dx = \int x^2 \, dx + \int 3x \, dx = \frac{x^3}{3} + \frac{3x^2}{2} + C \)

Integration involving e and ln

There are certain rules for integrating using e and the natural logarithm. Most importantly, \( e^x \) is the integral of itself (with the addition of a constant of integration):

\( \int e^x \, dx = e^x + C \)

The natural logarithm, ln, is useful when integrating equations with \( \frac{1}{x} \). These cannot be integrated using the formula above (add one to the power, divide by the power), because adding one to the power produces 0, and a division by 0 is not possible. Instead, the integral of \( \frac{1}{x} \) is \( \ln|x| \):

\( \int \frac{1}{x} \, dx = \ln|x| + C \)

In a more general form:

\( \int \frac{f'(x)}{f(x)} \, dx = \ln|f(x)| + C \)

The two vertical bars indicate an absolute value; the sign (positive or negative) of \( x \) is ignored. This is because there is no value for the natural logarithm of negative numbers.

Properties

Sum of functions

The integral of a sum of functions is the sum of each function's integral. That is,

\( \int_a^b [f(x) + g(x)] \, dx = \int_a^b f(x) \, dx + \int_a^b g(x) \, dx \)

The proof of this is straightforward: the definition of an integral is a limit of sums. Thus

\( \int_a^b [f(x) + g(x)] \, dx = \lim_{n \to \infty} \sum_{i=1}^{n} [f(x_i) + g(x_i)] \Delta x = \lim_{n \to \infty} \left( \sum_{i=1}^{n} f(x_i) \Delta x + \sum_{i=1}^{n} g(x_i) \Delta x \right) = \int_a^b f(x) \, dx + \int_a^b g(x) \, dx \)

Note that both integrals have the same limits.

Constants in integration

When a constant is in an integral with a function, the constant can be taken out. Further, when a constant c is not accompanied by a function, its integral is cx. That is,

\( \int_a^b c f(x) \, dx = c \int_a^b f(x) \, dx \) and \( \int c \, dx = cx + C \)

This can only be done with a constant. The proof is again by the definition of an integral.

Other

If a, b and c are in order (i.e. after each other on the x-axis), the integral of f(x) from point a to point b plus the integral of f(x) from point b to c equals the integral from point a to c. That is,

\( \int_a^b f(x) \, dx + \int_b^c f(x) \, dx = \int_a^c f(x) \, dx \) if they are in order. (This also holds when a, b, c are not in order if we define \( \int_b^a f(x) \, dx = -\int_a^b f(x) \, dx \).)

\( \int_a^a f(x) \, dx = 0 \). This follows from the fundamental theorem of calculus (FTC): \( F(a) - F(a) = 0 \).

Again, following the FTC: \( \int_b^a f(x) \, dx = F(a) - F(b) = -(F(b) - F(a)) = -\int_a^b f(x) \, dx \)
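The rules above can be spot-checked with a computer algebra system. The following sketch assumes the SymPy library is available; note that SymPy omits the constant of integration and writes log(x) without the absolute value.

```python
# A quick check of the integration rules above using SymPy.
from sympy import symbols, integrate, exp, simplify

x = symbols('x')

print(integrate(exp(x), x))        # exp(x): e^x is its own integral
print(integrate(1/x, x))           # log(x): SymPy omits |x| and the constant C
print(integrate(x**2 + 3*x, x))    # term-by-term: x**3/3 + 3*x**2/2

# The sum rule: integrating f + g equals integrating each term separately.
f, g = x**2, 5*x
assert simplify(integrate(f + g, x) - (integrate(f, x) + integrate(g, x))) == 0
```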
Inertial confinement fusion

Inertial confinement fusion (ICF) is a process where nuclear fusion reactions are initiated by heating and compressing a fuel target, typically in the form of a pellet that most often contains a mixture of deuterium and tritium. To compress and heat the fuel, energy is delivered to the outer layer of the target using high-energy beams of laser light, electrons or ions, although for a variety of reasons, almost all ICF devices to date have used lasers. The heated outer layer explodes outward, producing a reaction force against the remainder of the target, accelerating it inwards, and sending shock waves into the center. A sufficiently powerful set of shock waves can compress and heat the fuel at the center so much that fusion reactions occur. The energy released by these reactions will then heat the surrounding fuel, which may also begin to undergo fusion. The aim of ICF is to produce a condition known as "ignition", where this heating process causes a chain reaction that burns a significant portion of the fuel. Typical fuel pellets are about the size of a pinhead and contain around 10 milligrams of fuel: in practice, only a small proportion of this fuel will undergo fusion, but if all this fuel were consumed it would release the energy equivalent to burning a barrel of oil.

ICF is one of two major branches of fusion energy research, the other being magnetic confinement fusion. To date most of the work in ICF has been carried out in the United States, and it has generally seen less development effort than magnetic approaches. When it was first proposed, ICF appeared to be a practical approach to fusion power production, but experiments during the 1970s and '80s demonstrated that the efficiency of these devices was much lower than expected. For much of the 1980s and '90s, ICF experiments focused primarily on nuclear weapons research. More recent advances suggest that major gains in performance are possible, once again making ICF attractive for commercial power generation. A number of new experiments are underway or being planned to test this new "fast ignition" approach.

Fusion reactions combine lighter atoms, such as hydrogen, together to form larger ones. Generally the reactions take place at such high temperatures that the atoms have been ionized, their electrons stripped off by the heat; thus, fusion is typically described in terms of "nuclei" instead of "atoms". Nuclei are positively charged, and thus repel each other due to the electrostatic force. Counteracting this is the strong force, which pulls nucleons together, but only at very short ranges. Thus a fluid of nuclei will generally not undergo fusion; the nuclei must be forced together before the strong force can pull them together into stable collections. Fusion reactions on a scale useful for energy production require a very large amount of energy to initiate in order to overcome the so-called "Coulomb barrier" or "fusion barrier energy". Generally less energy will be needed to cause lighter nuclei to fuse, as they have less charge and thus a lower barrier energy, and when they do fuse, more energy will be released. As the mass of the nuclei increases, there is a point where the reaction no longer gives off net energy: the energy needed to overcome the energy barrier is greater than the energy released in the resulting fusion reaction. The key to practical fusion power is to select a fuel that requires the minimum amount of energy to start, that is, the lowest barrier energy.
The best fuel from this standpoint is a one to one mix of deuterium and tritium; both are heavy isotopes of hydrogen. The D-T (deuterium and tritium) mix has a low barrier because of its high ratio of neutrons to protons. The presence of neutral neutrons in the nuclei helps pull them together via the strong force, while the presence of positively charged protons pushes the nuclei apart via Coulombic forces (the electromagnetic force). Tritium has one of the highest ratios of neutrons to protons of any stable or moderately unstable nuclide: two neutrons and one proton. Adding protons or removing neutrons increases the energy barrier.

In order to create the required conditions, the fuel must be heated to tens of millions of degrees, and/or compressed to immense pressures. The temperature and pressure required for any particular fuel to fuse is known as the Lawson criterion. These conditions have been known since the 1950s when the first H-bombs were built.

ICF mechanism of action

In a "hydrogen bomb" the fusion fuel is compressed and heated with a separate fission bomb. A variety of mechanisms transfers the energy of the trigger's explosion into the fusion fuel. The use of a nuclear bomb to ignite a fusion reaction makes the concept less than useful as a power source. Not only would the triggers be prohibitively expensive to produce, but there is a minimum size at which such a bomb can be built, defined roughly by the critical mass of the plutonium fuel used. Generally it seems difficult to build nuclear devices smaller than about 1 kiloton in size, which would make it a difficult engineering problem to extract power from the resulting explosions. Also, the smaller a thermonuclear bomb is, the "dirtier" it is; that is to say, the percentage of energy produced in the explosion by fusion is decreased while the percent produced by fission reactions tends toward unity (100%). This did not stop efforts to design such a system, however, leading to the PACER concept.

If some source of compression could be found, other than a nuclear bomb, then the size of the reaction could be scaled down. This idea has been of intense interest to both the bomb-making and fusion energy communities. It was not until the 1970s that a potential solution appeared in the form of very large, very high power, high energy lasers, which were then being built for weapons and other research. The D-T mix in such a system is known as a "target", containing much less fuel than in a bomb design (often only micrograms or milligrams), and leading to a much smaller explosive force. [ [http://www.llnl.gov/nif/library/ife.pdf Inertial Fusion Energy] ] [ [http://www.nuc.berkeley.edu/thyd/icf/IFE.html Inertial Fusion Energy: A Tutorial on the Technology and Economics] ]

Generally ICF systems use a single laser, the "driver", whose beam is split up into a number of beams which are subsequently individually amplified by a trillion times or more. These are sent into the reaction chamber (called a target chamber) by a number of mirrors, positioned in order to illuminate the target evenly over its whole surface. The heat applied by the driver causes the outer layer of the target to explode, just as the outer layers of an H-bomb's fuel cylinder do when illuminated by the X-rays of the nuclear device. The material exploding off the surface causes the remaining material on the inside to be driven inwards with great force, eventually collapsing into a tiny near-spherical ball.
In modern ICF devices the density of the resulting fuel mixture is as much as one hundred times the density of lead, around 1000 g/cm³. This density is not high enough to create any useful rate of fusion on its own. However, during the collapse of the fuel, shock waves also form and travel into the center of the fuel at high speed. When they meet their counterparts moving in from the other sides of the fuel in the center, the density of that spot is raised much further. Given the correct conditions, the fusion rate in the region highly compressed by the shock wave can give off significant amounts of highly energetic alpha particles. Due to the high density of the surrounding fuel, they move only a short distance before being "thermalized", losing their energy to the fuel as heat. This additional energy will cause additional fusion reactions in the heated fuel, giving off more high-energy particles. This process spreads outward from the center, leading to a kind of self-sustaining burn known as "ignition".

1. Laser beams or laser-produced X-rays rapidly heat the surface of the fusion target, forming a surrounding plasma envelope.
2. Fuel is compressed by the rocket-like blowoff of the hot surface material.
3. During the final part of the capsule implosion, the fuel core reaches 20 times the density of lead and ignites at 100,000,000 ˚C.
4. Thermonuclear burn spreads rapidly through the compressed fuel, yielding many times the input energy.

Issues with the successful achievement of ICF

The primary problems with increasing ICF performance since the early experiments in the 1970s have been the delivery of energy to the target, controlling the symmetry of the imploding fuel, preventing premature heating of the fuel (before maximum density is achieved), preventing premature mixing of hot and cool fuel by hydrodynamic instabilities, and the formation of a "tight" shockwave convergence at the compressed fuel center.

In order to focus the shock wave on the center of the target, the target must be made with extremely high precision and sphericity, with aberrations of no more than a few micrometres over its surface (inner and outer). Likewise the aiming of the laser beams must be extremely precise, and the beams must arrive at the same time at all points on the target. Beam timing is a relatively simple issue, though, and is solved by using delay lines in the beams' optical path to achieve picosecond levels of timing accuracy. The other major problems plaguing the achievement of high symmetry and high temperatures/densities of the imploding target are so-called "beam-beam" imbalance and beam anisotropy. These problems are, respectively, where the energy delivered by one beam may be higher or lower than that of other beams impinging on the target, and where "hot spots" within a beam diameter hitting a target induce uneven compression on the target surface, thereby forming Rayleigh–Taylor instabilities in the fuel, prematurely mixing it and reducing heating efficacy at the time of maximum compression. All of these problems have been substantially mitigated to varying degrees in the past two decades of research by using various beam smoothing techniques and beam energy diagnostics to balance beam-to-beam energy, though Rayleigh–Taylor instability remains a major issue. Target design has also improved tremendously over the years.
Modern cryogenic hydrogen ice targets tend to freeze a thin layer of deuterium just on the inside of a plastic sphere while irradiating it with a low power IR laser to smooth its inner surface, monitoring it with a microscope-equipped camera, thereby allowing the layer to be closely watched to ensure its "smoothness". [ [http://www.lle.rochester.edu/pub/progress/doe_apr02.pdf Inertial Confinement Fusion Program Activities, April 2002] ] Cryogenic targets filled with a deuterium-tritium (D-T) mixture are "self-smoothing" due to the small amount of heat created by the decay of the radioactive tritium isotope. This is often referred to as "beta-layering". [ [http://www.lle.rochester.edu/pub/progress/MarDOE06.pdf Inertial Confinement Fusion Program Activities, March 2006] ]

Certain targets are surrounded by a small metal cylinder which is irradiated by the laser beams instead of the target itself, an approach known as "indirect drive". [ [http://fire.pppl.gov/iaea04_lindl.pdf Recent Advances in Indirect Drive ICF Target Physics] ] In this approach the lasers are focused on the inner side of the cylinder, heating it to a superhot plasma which radiates mostly in X-rays. The X-rays from this plasma are then absorbed by the target surface, imploding it in the same way as if it had been hit with the lasers directly. The absorption of thermal X-rays by the target is more efficient than the direct absorption of laser light; however, these "hohlraums" or "burning chambers" also take up considerable energy to heat on their own, thus significantly reducing the overall efficiency of laser-to-target energy transfer. They are thus a debated feature even today; the equally numerous "direct-drive" designs do not use them. Most often, indirect drive hohlraum targets are used to simulate thermonuclear weapons tests due to the fact that the fusion fuel in them is also imploded mainly by X-ray radiation.

A variety of ICF drivers are being explored. Lasers have improved dramatically since the 1970s, scaling up in energy and power from a few joules and kilowatts to megajoules (see NIF laser) and hundreds of terawatts, using mostly frequency doubled or tripled light from neodymium glass amplifiers. Heavy ion beams are particularly interesting for commercial generation, as they are easy to create, control, and focus. On the downside, it is very difficult to achieve the very high energy densities required to implode a target efficiently, and most ion-beam systems require the use of a hohlraum surrounding the target to smooth out the irradiation, further reducing the overall efficiency of the coupling of the ion beam's energy to that of the imploding target.

Brief history of ICF

The first laser-driven "ICF" experiments (though strictly speaking, these were only high intensity laser-hydrogen plasma interaction experiments) were carried out using ruby lasers soon after these were invented in the 1960s. It was realized that the power available from existing lasers was far too low to be truly useful in achieving significant fusion reactions, but the experiments were useful in establishing preliminary theories describing high intensity light and plasma interactions. A major step in the ICF program took place in 1972, when John Nuckolls of the Lawrence Livermore National Laboratory (LLNL) published a seminal article in "Nature" that predicted that ignition could be achieved with laser energies of about 1 kJ, while "high gain" would require energies around 1 MJ.
[Nuckolls et al, " [http://www.nature.com/nature/journal/v239/n5368/pdf/239139a0.pdf Laser Compression of Matter to Super-High Densities: Thermonuclear (CTR) Applications] ", "Nature" Vol. 239, 1972, pp. 129] [John Lindl, " [http://www.osti.gov/bridge/servlets/purl/10126383-6NAuBK/native/10126383.pdf The Edward Teller Medal Lecture: The Evolution Toward Indirect Drive and Two Decades of Progress Toward ICF Ignition and Burn] ", 11th International Workshop on Laser Interaction and Related Plasma Phenomena, December 1994. Retrieved on May 7 2008.]

The primary problems in making a practical ICF device would be building a laser of the required energy and making its beams uniform enough to collapse a fuel target evenly. At first it was not obvious that the energy issue could ever be addressed, but a new generation of laser devices first invented in the late 1960s pointed to ways to build devices of the required power. Starting in the early 1970s, several labs started experiments with such devices, including krypton fluoride excimer lasers at the Naval Research Laboratory (NRL) and the solid-state (Nd:glass) lasers at Lawrence Livermore National Laboratory (LLNL). What followed was a series of advances followed by seemingly intractable problems that characterized fusion research in general.

High energy ICF experiments (hundreds of joules per shot and greater) began in earnest in the early 1970s, when lasers of the required energy and power were first designed. This was some time after the successful design of magnetic confinement fusion systems, and around the time of the particularly successful tokamak design that was introduced in the early '70s. Nevertheless, high funding for fusion research stimulated by the multiple energy crises during the mid to late 1970s produced rapid gains in performance, and inertial designs were soon reaching the same sort of "below breakeven" conditions as the best magnetic systems.

LLNL was, in particular, very well funded and started a major laser fusion development program. Their Janus laser started operation in 1974, and validated the approach of using Nd:glass lasers to generate very high power devices. Focusing problems were explored in the Long path laser and Cyclops laser, which led to the larger Argus laser. None of these were intended to be practical ICF devices, but each one advanced the state of the art to the point where there was some confidence the basic approach was valid. At the time it was believed that making a much larger device of the Cyclops type could both compress and heat the ICF targets, leading to ignition in the "short term". This was a misconception based on extrapolation of the fusion yields seen from experiments utilizing the so-called "exploding pusher" type of fuel capsules. During the late 1970s and early 1980s, the estimates for the laser energy on target needed to achieve ignition doubled almost yearly as the various plasma instabilities and laser-plasma energy coupling loss modes were gradually understood. The realization that the simple exploding pusher target designs and mere few-kilojoule (kJ) laser irradiation energies would never scale to high gain fusion yields led to the effort to increase laser energies to the hundred kJ level in the UV and to the production of advanced ablator and cryogenic DT ice target designs.
One of the earliest serious and large scale attempts at an ICF driver design was the Shiva laser, a 20-beam neodymium-doped glass laser system built at the Lawrence Livermore National Laboratory (LLNL) that started operation in 1978. Shiva was a "proof of concept" design intended to demonstrate compression of fusion fuel capsules to many times the liquid density of hydrogen. In this, Shiva succeeded and compressed its pellets to 100 times the liquid density of deuterium. However, due to the laser's strong coupling with hot electrons, premature heating of the dense plasma (ions) was problematic and fusion yields were low. This failure by Shiva to efficiently heat the compressed plasma pointed to the use of optical frequency multipliers as a solution, which would frequency-triple the infrared light from the laser into the ultraviolet at 351 nm. Newly discovered schemes to efficiently frequency-triple high intensity laser light, discovered at the Laboratory for Laser Energetics in 1980, enabled this method of target irradiation to be experimented with in the 24-beam OMEGA laser and the NOVETTE laser, which was followed by the Nova laser design with 10 times the energy of Shiva, the first design with the specific goal of reaching ignition conditions.

Nova also failed in its goal of achieving ignition, this time due to severe variation in laser intensity in its beams (and differences in intensity between beams) caused by filamentation, which resulted in large non-uniformity in irradiation smoothness at the target and asymmetric implosion. The techniques pioneered earlier could not address these new issues. But again this failure led to a much greater understanding of the process of implosion, and the way forward again seemed clear, namely an increase in the uniformity of irradiation, the reduction of hot spots in the laser beams through beam smoothing techniques to reduce Rayleigh–Taylor instability imprinting on the target, and an increase in laser energy on target by at least an order of magnitude. Funding for fusion research was severely constrained in the 1980s, but Nova nevertheless successfully gathered enough information for a next generation machine.

The resulting design, now known as the National Ignition Facility, started construction at LLNL in 1997. NIF's main objective will be to operate as the flagship experimental device of the so-called nuclear stewardship program, supporting LLNL's traditional bomb-making role. Originally intended to start construction in the early 1990s, NIF is now scheduled for fusion experiments starting in 2009, when the remaining lasers in the 192-beam array are installed. As of November 2007, ninety-six of the lasers have been completed and commissioned. The first credible attempts at ignition are scheduled for 2010.

A more recent development is the concept of "fast ignition", which may offer a way to directly heat the high density fuel after compression, thus decoupling the heating and compression phases of the implosion. In this approach the target is first compressed "normally" using a driver laser system, and then when the implosion reaches maximum density (at the stagnation point or "bang time"), a second ultra-short pulse, ultra-high power petawatt (PW) laser delivers a single pulse focused on one side of the core, dramatically heating it and hopefully starting fusion ignition. The two types of fast ignition are the "plasma bore-through" method and the "cone-in-shell" method.
In the first method the petawatt laser is simply expected to bore straight through the outer plasma of an imploding capsule and to impinge on and heat the dense core, whereas in the cone-in-shell method, the capsule is mounted on the end of a small high-Z cone such that the tip of the cone projects into the core of the capsule. In this second method, when the capsule is imploded, the petawatt laser has a clear view straight to the high density core and does not have to waste energy boring through a "corona" plasma; however, the presence of the cone affects the implosion process in significant ways that are not fully understood. Several projects are currently underway to explore the fast ignition approach, including upgrades to the OMEGA laser at the University of Rochester, the GEKKO XII device in Japan, and an entirely new £500m facility, known as HiPER, proposed for construction in the European Union. If successful, the fast ignition approach could dramatically lower the total amount of energy needed to be delivered to the target; whereas NIF uses UV beams of 2 MJ, HiPER's driver is 200 kJ and heater 70 kJ, yet the predicted fusion gains are nevertheless even higher than on NIF.

Finally, using a different approach entirely is the z-pinch device. Z-pinch uses massive amounts of electrical current which is switched into a small number of extremely fine wires. The wires heat and vaporize so quickly that they fill the target with X-rays, which implode the fuel pellet. In order to direct the X-rays onto the pellet, the target consists of a cylindrical metal capsule with the wiring and fuel within. Challenges to this approach include relatively low drive temperatures, resulting in slow implosion velocities and potentially large instability growth, and preheat caused by high-energy X-rays. [ [http://www.sandia.gov/pulsedpower/prog_cap/pub_papers/010607a.pdf Z-Pinch Power Plant a Pulsed Power Driven System for Fusion Energy] ] [ [http://adsabs.harvard.edu/abs/2002AIPC..651....3G Fast Z-Pinch Study in Russia and Related Problems] ]

Inertial confinement fusion as an energy source

Practical power plants built using ICF have been studied since the late 1970s, when ICF experiments were beginning to ramp up to higher powers; they are known as inertial fusion energy, or IFE, plants. These devices would deliver a successive stream of targets to the reaction chamber, typically several a second, and capture the resulting heat and neutron radiation from their implosion and fusion to drive a conventional steam turbine.

Laser driven systems were initially believed to be able to generate commercially useful amounts of energy. However, as estimates of the energy required to reach ignition grew dramatically during the 1970s and '80s, these hopes were abandoned. Given the low efficiency of the laser amplification process (about 1 to 1.5%), and the losses in generation (steam-driven turbine systems are typically about 35% efficient), fusion gains would have to be on the order of 350 just to break even. These sorts of gains appeared to be impossible to generate, and ICF work turned primarily to weapons research.

With the recent introduction of fast ignition, things have changed dramatically. In this approach gains of 100 are predicted in the first experimental device, HiPER. Given a gain of about 100 and a laser efficiency of about 1%, HiPER produces about the same amount of "fusion" energy as the electrical energy needed to create it. Additionally, newer laser devices appear to be able to greatly improve driver efficiency.
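The break-even arithmetic in the previous paragraph can be spelled out directly. The sketch below only illustrates the stated figures (roughly 1% laser efficiency, 35% turbine efficiency); the article's figure of "on the order of 350" corresponds to slightly different assumptions within the quoted range.

```python
# Break-even fusion gain, given a laser "wall-plug" efficiency and a
# thermal-to-electric conversion efficiency:
#     electricity_out = E_laser * gain * turbine_efficiency
#     electricity_in  = E_laser / laser_efficiency
# Setting out = in gives gain = 1 / (laser_efficiency * turbine_efficiency).

laser_efficiency = 0.01    # ~1% of input electricity becomes laser light
turbine_efficiency = 0.35  # ~35% of fusion heat becomes electricity

breakeven_gain = 1 / (laser_efficiency * turbine_efficiency)
print(f"Required fusion gain: {breakeven_gain:.0f}")  # ~286, same order as ~350
```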
Current designs use xenon flash lamps to produce an intense flash of white light, some of which is absorbed by the Nd:glass that produces the laser power. In total about 1 to 1.5% of the electrical power fed into the flash tubes is turned into useful laser light. Newer designs replace the flash lamps with laser diodes that are tuned to produce most of their energy in a frequency range that is strongly absorbed. Initial experimental devices offer efficiencies of about 10%, and it is suggested that 20% is a real possibility with some additional development.

With "classical" devices like NIF, about 330 MJ of electrical power are used to produce the driver beams, producing an expected yield of about 20 MJ, with a maximum credible yield of 45 MJ. Using the same sorts of numbers in a reactor combining fast ignition with newer lasers would offer dramatically improved performance. HiPER requires about 270 kJ of laser energy, so assuming a first-generation diode laser driver at 10%, the reactor would require about 3 MJ of electrical power. This is expected to produce about 30 MJ of fusion power. Even a very poor conversion to electrical energy appears to offer real-world power output, and incremental improvements in yield and laser efficiency appear to be able to offer a commercially useful output.

ICF systems face some of the same secondary power extraction problems as magnetic systems in generating useful power from their reactions. One of the primary concerns is how to successfully remove heat from the reaction chamber without interfering with the targets and driver beams. Another serious concern is that the huge number of neutrons released in the fusion reactions react with the structure of the plant, causing it to become intensely radioactive, as well as mechanically weakening metals. Fusion plants built of conventional metals like steel would have a fairly short lifetime and the core containment vessels would have to be replaced frequently.

One current concept for dealing with both of these problems, as shown in the HYLIFE-II baseline design, is to use a "waterfall" of "flibe", a molten mix of fluorine, lithium and beryllium salts, which both protects the chamber from neutrons and carries away heat. The flibe is then passed into a heat exchanger where it heats water for use in the turbines. [ [http://www.ap.columbia.edu/SMproceedings/6.InertialConcepts/6.IFE_Subgroups.pdf Snowmass Fusion Summer Study, Inertial Fusion Concepts Working Group Subgroup 3: Inertial Fusion Power Plant Concepts] ] Another design, Sombrero, uses a reaction chamber built of carbon fibre, which has a very low neutron cross section. Cooling is provided by a molten ceramic, chosen because of its ability to stop the neutrons from traveling any further, while at the same time being an efficient heat transfer agent. [ [http://fti.neep.wisc.edu/pdf/fdm862.pdf SOMBRERO - A Solid Breeder Moving Bed KrF Laser Driven IFE Power Reactor] ]

As a power source, even the best IFE reactors would be hard-pressed to deliver the same economics as coal, although they would have advantages in terms of less pollution and global warming. Coal can simply be dug up and burned for little financial cost, one of the main costs being shipping. In terms of the turbomachinery and generators, an IFE plant would likely cost the same as a coal plant of similar power, and one might suggest that the "combustion chamber" in an IFE plant would be similar to those for a coal plant.
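Similarly, the HiPER numbers quoted above follow from simple ratios. The figures below are the article's estimates, not measured results:

```python
# The HiPER arithmetic from the paragraph above, spelled out.
laser_energy_mj = 0.27      # ~270 kJ of laser energy on target
driver_efficiency = 0.10    # assumed first-generation diode-pumped driver
fusion_gain = 100           # predicted gain with fast ignition

electrical_in = laser_energy_mj / driver_efficiency  # ~2.7 MJ ("about 3 MJ")
fusion_out = laser_energy_mj * fusion_gain           # ~27 MJ ("about 30 MJ")
print(f"Electrical input: {electrical_in:.1f} MJ, fusion output: {fusion_out:.0f} MJ")
```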
On the other hand, a coal plant has no equivalent to the driver laser, which would make the IFE plant much more expensive. Additionally, extraction of deuterium and its formation into useful fuel pellets is considerably more expensive than coal processing, although the cost of shipping it is much lower (in terms of energy per unit mass). It is generally estimated that an IFE plant would have long-term operational costs about the same as coal, discounting development. HYLIFE-II claims to be about 40% less expensive than a coal plant of the same size, but considering the problems with NIF, it is simply too early to tell if this is realistic or not.

Inertial confinement fusion development follows much the same outline as other fusion power projects; the various phases of such a project are the following:
* burning demonstration: reproducible achievement of some fusion energy release (not necessarily a Q factor of >1).
* high gain demonstration: experimental demonstration of the feasibility of a reactor with a sufficient energy gain.
* industrial demonstration: validation of the various technical options, and of the whole data needed to define a commercial reactor.
* commercial demonstration: demonstration of the reactor's ability to work over a long period, while respecting all the requirements for safety, liability and cost.

At the moment, according to the available data [This chapter is based on data available in June 2006, when the Laser Mégajoule and NIF lasers were not yet in complete service.], inertial confinement fusion experiments have not gone beyond the first phase, although Nova and others have repeatedly demonstrated operation within this realm. In the short term a number of new systems are expected to reach the second stage. NIF is expected to be able to quickly reach this sort of operation when it starts, but the date for the start of fusion experiments is currently suggested to be somewhere between 2010 and 2014. The Laser Mégajoule would also operate within the second stage, and was initially expected to become operational in 2010. Fast ignition systems work well within this range. Finally, the z-pinch machine, not using lasers, is expected to obtain a high fusion energy gain, as well as the capability for repetitive operation, starting around 2010.

For a true industrial demonstration, further work is required. In particular, the laser systems need to be able to run at high operating frequencies, perhaps one to ten times a second. Most of the laser systems mentioned in this article have trouble operating even as much as once a day. Parts of the HiPER budget are dedicated to research in this direction as well. Because they convert electricity into laser light with much higher efficiency, diode lasers also run cooler, which in turn allows them to be operated at much higher frequencies. HiPER is currently studying devices that operate at 1 MJ at 1 Hz, or alternately 100 kJ at 10 Hz.

Inertially confined fusion and the nuclear weapons program

The very hot and dense conditions encountered during an inertial confinement fusion experiment are similar to those created in a thermonuclear weapon, and have applications to the nuclear weapons program. ICF experiments might be used, for example, to help determine how warhead performance will degrade as it ages, or as part of a program of designing new weapons. Retaining knowledge and corporate expertise in the nuclear weapons program is another motivation for pursuing ICF.
[Richard Garwin, Arms Control Today, 1997] [ [http://www.llnl.gov/nif/project/missions_security.html NIF: Stockpile Stewardship] ] Funding for the NIF facility in the United States is sourced from the 'Nuclear Weapons Stewardship' program, and the goals of the program are oriented accordingly. [ [http://www.llnl.gov/nif/project/missions.html National Ignition Facility Project: Missions] ] It has been argued that some aspects of ICF research may violate the Comprehensive Test Ban Treaty or the Nuclear Non-Proliferation Treaty. [ [http://www.ieer.org/reports/fusion/chap5.html Nuclear Disarmament and Non-Proliferation Issues Related to Explosive Confinement Fusion] ] In the long term, despite the formidable technical hurdles, ICF research might potentially lead to the creation of a "pure fusion weapon". [ [http://www.princeton.edu/~globsec/publications/pdf/7_2Jones.pdf Jones and von Hippel, Science and Global Security, 1998, Volume 7 p129-150] ]

Inertial confinement fusion as a neutron source

Inertial confinement fusion has the potential to produce orders of magnitude more neutrons than spallation. Neutrons are capable of locating hydrogen atoms in molecules, resolving atomic thermal motion and studying collective excitations (phonons) more effectively than X-rays. Neutron scattering studies of molecular structures could resolve problems associated with protein folding, diffusion through membranes, proton transfer mechanisms, dynamics of molecular motors, etc. by modulating thermal neutrons into beams of slow neutrons. [Taylor, Andrew. "A Route to the Brightest Possible Neutron Source?", "Science" 315, February 2007, pp. 1092-1095, doi:10.1126/science.1127185]

See also
* Antimatter catalyzed nuclear pulse propulsion
* Laboratory for Laser Energetics
* Bubble fusion, which is controversially claimed to be an "acoustic" form of inertial confinement fusion

Notes and references
* [http://www.nuc.berkeley.edu/thyd/icf/IFE.html Inertial Fusion Energy: A Tutorial on the Technology and Economics]
* [http://www.llnl.gov/nif/nif.html National Ignition Facility Project]
* [http://zpinch.sandia.gov/ Zpinch Home Page]
* [http://physicsweb.org/articles/news/9/9/2/1/ Europe plans laser-fusion facility] "(Physicsweb)"
* [http://www.guardian.co.uk/technology/2007/dec/06/laserfusion Lasers point the way to clean energy] "(The Guardian)"
* [http://other.nrl.navy.mil/LaserFusionEnergy/ National Laser Fusion Energy Development Plan]
* [http://www.ile.osaka-u.ac.jp/index_e.html Institute of Laser Engineering Osaka University]
ESS1 Earth's Place in the Universe

ESS1.A The Universe And Its Stars
K-2 Patterns of movement of the sun, moon, and stars as seen from Earth can be observed, described, and predicted.
3-5 Stars range greatly in size and distance from Earth and this can explain their relative brightness.
6-8 The solar system is part of the Milky Way, which is one of many billions of galaxies.
9-12 Light spectra from stars are used to determine their characteristics, processes, and lifecycles. Nuclear fusion in stars creates the elements. The development of technologies has provided the astronomical data that provide the empirical evidence for the Big Bang theory.

ESS1.B Earth And The Solar System
3-5 The Earth's orbit and rotation, and the orbit of the moon around the Earth cause observable patterns.
6-8 The solar system contains many varied objects held together by gravity. Solar system models explain and predict eclipses, lunar phases, and seasons.
9-12 Kepler's laws describe common features of the motions of orbiting objects. Observations from astronomy and space probes provide evidence for explanations of solar system formation. Changes in Earth's tilt and orbit cause climate changes such as Ice Ages.

ESS1.C The History Of Planet Earth
K-2 Some events on Earth occur very quickly; others can occur very slowly.
3-5 Certain features on Earth can be used to order events that have occurred in a landscape.
6-8 Rock strata and the fossil record can be used as evidence to organize the relative occurrence of major historical events in Earth's history.
9-12 The rock record resulting from tectonic and other geoscience processes as well as objects from the solar system can provide evidence of Earth's early history and the relative ages of major geologic formations.

ESS2 Earth's Systems

ESS2.A Earth Materials And Systems
K-2 Wind and water change the shape of the land.
3-5 Four major Earth systems interact. Rainfall helps to shape the land and affects the types of living things found in a region. Water, ice, wind, organisms, and gravity break rocks, soils, and sediments into smaller pieces and move them around.
6-8 Energy flows and matter cycles within and among Earth's systems, including the sun and Earth's interior as primary energy sources. Plate tectonics is one result of these processes.
9-12 Feedback effects exist within and among Earth's systems.

ESS2.B Plate Tectonics And Large-Scale System Interactions
K-2 Maps show where things are located. One can map the shapes and kinds of land and water in any area.
3-5 Earth's physical features occur in patterns, as do earthquakes and volcanoes. Maps can be used to locate features and determine patterns in those events.
6-8 Plate tectonics is the unifying theory that explains movements of rocks at Earth's surface and geological history. Maps are used to display evidence of plate movement.
9-12 Radioactive decay within Earth's interior contributes to thermal convection in the mantle.

ESS2.C The Roles Of Water In Earth's Surface Processes
K-2 Water is found in many types of places and in different forms on Earth.
3-5 Most of Earth's water is in the ocean and much of the Earth's fresh water is in glaciers or underground.
6-8 Water cycles among land, ocean, and atmosphere, and is propelled by sunlight and gravity. Density variations of seawater drive interconnected ocean currents. Water movement causes weathering and erosion, changing landscape features.
9-12 The planet's dynamics are greatly influenced by water's unique chemical and physical properties.

ESS2.D Weather And Climate
K-2 Weather is the combination of sunlight, wind, snow or rain, and temperature in a particular region and time. People record weather patterns over time.
3-5 Climate describes patterns of typical weather conditions over different scales and variations. Historical weather patterns can be analyzed.
6-8 Complex interactions determine local weather patterns and influence climate, including the role of the ocean.
9-12 The role of radiation from the sun and its interactions with the atmosphere, ocean, and land are the foundation for the global climate system. Global climate models are used to predict future changes, including changes influenced by human behavior and natural factors.

ESS2.E Biogeology
K-2 Plants and animals can change their local environment.
3-5 Living things can affect the physical characteristics of their environment.
6-8 The fossil record documents the existence, diversity, extinction, and change of many life forms and their environments through Earth's history. The fossil record and comparisons of anatomical similarities between organisms enable the inference of lines of evolutionary descent. (LS4.A) Changes in biodiversity can influence humans' resources and the ecosystem services they rely on. (LS4.D)
9-12 The biosphere and Earth's other systems have many interconnections that cause a continual co-evolution of Earth's surface and life on it.

ESS3 Earth and Human Activity

ESS3.A Natural Resources
K-2 Living things need water, air, and resources from the land, and they live in places that have the things they need. Humans use natural resources for everything they do.
3-5 Energy and fuels humans use are derived from natural sources and their use affects the environment. Some resources are renewable over time, others are not.
6-8 Humans depend on Earth's land, ocean, atmosphere, and biosphere for different resources, many of which are limited or not renewable. Resources are distributed unevenly around the planet as a result of past geologic processes.
9-12 Resource availability has guided the development of human society and use of natural resources has associated costs, risks, and benefits.

ESS3.B Natural Hazards
K-2 In a region, some kinds of severe weather are more likely than others. Forecasts allow communities to prepare for severe weather.
3-5 A variety of hazards result from natural processes; humans cannot eliminate hazards but can reduce their impacts.
6-8 Mapping the history of natural hazards in a region, combined with an understanding of related geological forces, can help forecast the locations and likelihoods of future events.
9-12 Natural hazards and other geological events have shaped the course of human history at local, regional, and global scales.

ESS3.C Human Impacts on Earth Systems
K-2 Things people do can affect the environment but they can make choices to reduce their impacts.
3-5 Societal activities have had major effects on the land, ocean, atmosphere, and even outer space. Societal activities can also help protect Earth's resources and environments.
6-8 Human activities have altered the biosphere, sometimes damaging it, although changes to environments can have different impacts for different living things. Activities and technologies can be engineered to reduce people's impacts on Earth.
9-12 Sustainability of human societies and the biodiversity that supports them requires responsible management of natural resources, including the development of technologies.

When you research information you must cite the reference. Citing websites is different from citing books, magazines, and periodicals. The style shown here is MLA (Modern Language Association) style. When citing a WEBSITE the general format is as follows:

Author Last Name, First Name(s). "Title: Subtitle of Part of Web Page, if appropriate." Title: Subtitle: Section of Page if appropriate. Sponsoring/Publishing Agency, If Given. Additional significant descriptive information. Date of Electronic Publication or other Date, such as Last Updated. Day Month Year of access < URL >.

Amsel, Sheri. "Lessons for Earth Science Standards (Appendix E.)" Exploring Nature Educational Resource ©2005-2024. January 14, 2024 < http://exploringnature.org/db/view/1900 >
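As a worked example of the template, the following sketch shows how such a citation could be assembled programmatically. The function and its field names are hypothetical, purely for illustration of the field order in the format above:

```python
# Hypothetical helper that assembles a website citation following the
# MLA-style template above; field names are illustrative assumptions.
def mla_website_citation(author, title, site, pub_date=None,
                         access_date=None, url=None):
    parts = [f'{author}.', f'"{title}."', f'{site}.']
    if pub_date:
        parts.append(f'{pub_date}.')
    if access_date:
        parts.append(f'{access_date}')
    if url:
        parts.append(f'< {url} >.')
    return ' '.join(parts)

print(mla_website_citation(
    author='Amsel, Sheri',
    title='Lessons for Earth Science Standards (Appendix E.)',
    site='Exploring Nature Educational Resource',
    pub_date='2005-2024',
    access_date='January 14, 2024',
    url='http://exploringnature.org/db/view/1900',
))
```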
" longer vs shorter ") and measuring in non-standard units (" The pencil is 3 erasers long ") and progresses to measuring length, weight, capacity and temperature in customary and metric units. Collecting and Representing Data. Printable worksheets Learning games Educational videos Lessons + Filters 46 results Filters. Understand the place value of three-digit numbers, thereby learning addition and subtraction. CCSS.Math.Content.2.MD.D.9 Generate measurement data by measuring lengths of several objects to the nearest whole unit, or by making repeated measurements of the same object. The answer key is automatically generated and is placed on the second page of the file. Kids learn about standard units to measure length, weight, and capacity. Here are the 10 Units that will be included in the 2nd Grade: Math Made Fun Curriculum. Measuring Capacity in Liters Worksheet 2. You get the ruler you get and you don't throw a fit! Share on Pinterest. Awesome Non Standard Measurement worksheets to familiarize and develop an understanding by exploring measuring activities using the classroom objects.Includes . 46 filtered results . More Literacy Units. Spelling Grade 4. The temperature worksheets are sure to provide 1st grade through 8th grade students with adequate practice in reading thermometers, shading them, comparing temperatures, ordering them from the warmest to the coldest and vice versa, converting between . Still, you should read and approve each of them yourself for quality and appropriateness before giving them to a child. Starting typically in grade 2 or 3, children practice easy conversions, such as changing a bigger unit into smaller units (4 cm into 40 mm) and the other way around . Measurements Worksheets: Distance, Volume And Weight. Measurement Worksheets for 2nd Graders Understanding how different things are measured and the various units used for measurement prepares the children for real-life problems. Weight Worksheets. 2nd Grade MD Worksheets: 2nd Grade Math Worksheets, Measurement & Data by Educational Emporium 17 $7.00 $3.99 PDF These 20 worksheets (2 per standard) are aligned to meet all Measurement & Data Common Core standards for 2nd grade math. 2nd Grade Writing Worksheets - Best Coloring Pages For Kids www.bestcoloringpagesforkids.com. Shoe Measurement Get practice using a ruler with this shoe measurement worksheet. Measurement Math Worksheets for 2nd Grade. Weight Measurement Worksheet 1. 309. The previous grade worksheets build the conceptual knowledge of measurement aspects. . Grade 2 measurement worksheets Our grade 2 measurement worksheets focus on the measurement of length, weight, capacity and temperature. This second grade measurement activities unit is the . K8 Learning System; Printable Worksheets . 2ndgradeworksheets.net-Free worksheets and printables for teachers. Printable PDFs for Grade 2 Measurement Worksheets Estimate lengths using units of inches, feet, centimeters, and meters. Work with equal groups to learn the basics of multiplication. . This capacity worksheet has students create their own capacity charts by drawing equivalent measures. Second grade measurement worksheets get your child thinking about length. Mrs Patton-For My Second Act. Subtracting 1-Digit from 2-Digit Numbers | Regrouping Elevate skills to the next level as kids subtract a single-digit numeral from a 2-digit numeral, regrouping in the ones place. Unit 6: Graphs and Data. Understands even and odd numbers. PDF; Students love hands-on! 2nd Grade X Math X Measurement X. 309. 
Common Core State Standards: 2.MD.1. Build second graders' measurement skills with these worksheets that help them learn how to accurately measure the length of a variety of images and objects. Report Ad. 2nd grade. Award winning educational materials designed to help kids succeed. Pin By Byanka Owen-Rodriguez On Education | Phonics Worksheets Free Unit 4: Addition and Subtraction with 2-Digit and 3-Digit Numbers. . Math. 3. Print full size. $97 99. CCSS.Math.Content.2.MD.A.1. Progress. Grade 2 Measurement worksheets on measuring the same lengths in both inches and centimeters. 2nd grade Subtracting Length Worksheets For Free. The free downloadable worksheets and exciting and informative charts would be of great use to children . At the same time, they also improve estimating, comparing, and rounding skills. Printable worksheets for standard American linear measurement, including inches, feet, and yards. In this math worksheet, your child will choose the best measurement tool for measuring different items. Grade 3. Start for free now! Skills Measuring length, Reading a ruler, Understanding measurements, Using centimeters (cm) Common Core Standards: Grade 2 Measurement & Data. It is hands-on, common core aligned and user-friendly for . Browse measuring 2nd grade resources on Teachers Pay Teachers, a marketplace trusted by millions of teachers for original educational resources. Free Math Worksheets for Grade 2. Grams and Kilograms Worksheets 2nd Grade Mass Worksheets Reading Scales Worksheets. Measurement Worksheets. This measuring math worksheet gives your child practice reading measurements on a ruler. You can generate the worksheets either in html or PDF format both are . of our new measurement tool, I pass out the rulers and we start exploring. Then, get more precise. Write the phrase "out of 16" on the board. Numbers. Measurements of lengths, working with time and money is made interesting in . Measuring Centimeters. Jan 26, 2014 - 2nd Grade Measurement - Worksheets, Lessons, and Printables. Unit 1: Number Sense to 1,000. Start for free now! Browse measuring second grade resources on Teachers Pay Teachers, a marketplace trusted by millions of teachers for original educational resources. When autocomplete results are available use up and down arrows to review and enter to select. Along with that, these 2nd grade math worksheets can enhance the logical skills of students which helps them in the long run. Measuring Capacity in Liters Worksheet 3. Length Using a benchmark to estimate lengths (inches) Search Printable 2nd Grade Measuring in Centimeter Worksheets In second grade, young math learners are picking up rulers, and this math worksheet collection gives them a chance to practice measuring in centimeters and talk about different units of measurement. This second grade measurement activities unit is the perfect blend of practice pages! Measuring length is an important skill that children will use both in math practice and everyday life. There are 150 worksheets in the 2nd grade Math Buzz set. 1. Measuring Capacity in Liters Worksheet 1. Reading Clocks Part 2. Open House Treat Bag Toppers Challenge your students with one of Turtle Diary's Units Of Measurement quizzes for second grade. . Second Grade Measurement Worksheets. Measurement Math Worksheets for 2nd Grade. Aligned to common core standards 2.MD.A.1 . In this measurement unit, students will be able to estimate lengths and measure around the room. 
On this page you will find measuring worksheets for Grade 3, Grade 4, Grade 5, and Grade 6, along with basic instructions for the worksheets. A companion collection of free printable math worksheets for sixth grade is organized by topics such as multiplication, division, exponents, place value, algebraic thinking, decimals, measurement units, ratio, percent, prime factorization, GCF, and LCM.

Measurement is an integral part of our day-to-day life, and the skill develops over time: the topic "measurement" for grade 2 is the step after kindergarten length skills, and at this point students should also come to understand the need for a standard unit of measure. Aligned to Common Core standards 2.MD.A.1 and 2.MD.3, the grade 2 measurement worksheets cover:

- Using benchmarks to measure lengths, and measuring in non-standard units
- Estimating and measuring lengths in inches and centimeters, and finding differences in length (estimating the length of an object and comparing the estimate to its actual length is an important skill)
- Units of length in both customary and metric systems
- Comparing weights using a balance scale, weights in non-standard units, weights in ounces, pounds, and kilograms, and weight in the metric system
- Measuring capacity in liters and deciding which container holds more in pints and cups; customary capacity units (gallons, quarts, pints, cups)
- Estimating the capacity of real-life objects, comparing two quantities, and reading graduated cylinders and jugs (capacity practice continues into grade 4)
- Measuring in centimeters and millimeters, converting from centimeters to millimeters, and circling the greater length (see the worked example below)
- Using rulers to measure to the nearest inch, half inch, quarter inch, and eighth inch, with a choice of ruler type on each worksheet
- Measuring the length of an object by selecting and using appropriate tools such as rulers, yardsticks, meter sticks, measuring tapes, and measuring wheels
- Showing measurements by making a line plot whose horizontal scale is marked off in whole-number units (for example, a straw plot)
- Solving simple put-together, take-apart, and compare problems using information presented in a bar graph
- Reading and writing the numbers from 1 to 1,000, finding the sum of 2-digit addends, and using addition and subtraction within 100 to solve simple word problems (students comprehend the scenario and find the final answer); one worksheet focuses on subtracting lengths

A related set of liquid-volume worksheets is designed for 2nd, 3rd, and 4th grade children and prepares them to estimate, measure, and compare liquid volumes. Capacity instruction starts with measuring objects, then moves on to capacity charts, reading measuring cups, comparing capacities, and converting between different capacity units. Ruler worksheets include reading a pointer, drawing a pointer to show a reading, measuring bars, and reading tapes to measure long bars; one hands-on worksheet has kids measure the length and width of their hand and the length of their fingers to the nearest inch or half inch. One file has five worksheets, one for each day of the week, and a two-sided sheet can serve as a review or a quiz.

An unlimited supply of worksheets for conversion of metric measuring units can be generated for grades 2 through 7. All worksheets are randomly generated, printable from your browser, and include an answer key; the worksheets on this page are written at the 2nd grade reading level. Overall, the exercises cover the key phases of measurement: identifying the attributes (length, weight, capacity, time, etc.), learning how to measure, and then applying measurements in real-life situations, from basic length concepts in both the English and metric systems to telling time and temperature. Grade 2 science worksheets, which introduce concepts in the life, earth, and physical sciences in ways students can relate to their everyday lives, are also available. Two classroom reminders: rulers are for measuring, not sword play or spinning, and rulers are to be collected at the end of the lesson.
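As a quick worked example of the centimeter-to-millimeter conversions drilled above (the numbers here are arbitrary, chosen for illustration), recall that 1 cm = 10 mm, so converting to the smaller unit multiplies by 10 and converting to the larger unit divides by 10:

\[
3.4\ \text{cm} = 3.4 \times 10\ \text{mm} = 34\ \text{mm},
\qquad
250\ \text{mm} = (250 \div 10)\ \text{cm} = 25\ \text{cm}.
\]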
In mathematics, a functor is a type of mapping between categories which is applied in category theory. Functors can be thought of as homomorphisms between categories. In the category of small categories, functors can be thought of more generally as morphisms.

Functors were first considered in algebraic topology, where algebraic objects (like the fundamental group) are associated to topological spaces, and algebraic homomorphisms are associated to continuous maps. Nowadays, functors are used throughout modern mathematics to relate various categories. Thus, functors are applicable in any area of mathematics that category theory abstracts.

Definition

Let C and D be categories. A functor F from C to D is a mapping that
- associates to each object X in C an object F(X) in D,
- associates to each morphism f : X → Y in C a morphism F(f) : F(X) → F(Y) in D such that the following two conditions hold:
- F(id_X) = id_F(X) for every object X in C,
- F(g ∘ f) = F(g) ∘ F(f) for all morphisms f : X → Y and g : Y → Z in C.

That is, functors must preserve identity morphisms and composition of morphisms.

Covariance and contravariance

There are many constructions in mathematics that would be functors but for the fact that they "turn morphisms around" and "reverse composition". We then define a contravariant functor F from C to D as a mapping that
- associates to each object X in C an object F(X) in D,
- associates to each morphism f : X → Y in C a morphism F(f) : F(Y) → F(X) in D such that
- F(id_X) = id_F(X) for every object X in C,
- F(g ∘ f) = F(f) ∘ F(g) for all morphisms f : X → Y and g : Y → Z in C.

Note that contravariant functors reverse the direction of composition. Ordinary functors are also called covariant functors in order to distinguish them from contravariant ones. Note that one can also define a contravariant functor as a covariant functor on the opposite category C^op. Some authors prefer to write all expressions covariantly. That is, instead of saying F : C → D is a contravariant functor, they simply write F : C^op → D (or sometimes F : C → D^op) and call it a functor. Contravariant functors are also occasionally called cofunctors.

Every functor F : C → D induces the opposite functor F^op : C^op → D^op, where C^op and D^op are the opposite categories to C and D. By definition, F^op maps objects and morphisms identically to F. Since C^op does not coincide with C as a category, and similarly for D, F^op is distinguished from F. For example, when composing F : C_0 → C_1 with G : C_1^op → C_2, one should use either G ∘ F^op or G^op ∘ F. Note that, following the property of the opposite category, (F^op)^op = F.

Bifunctors and multifunctors

A bifunctor (also known as a binary functor) is a functor whose domain is a product category. For example, the Hom functor is of the type C^op × C → Set. It can be seen as a functor in two arguments; the Hom functor is a natural example that is contravariant in one argument and covariant in the other. A multifunctor is a generalization of the functor concept to n variables; so, for example, a bifunctor is a multifunctor with n = 2.

Examples

Diagram: For categories C and J, a diagram of type J in C is a covariant functor D : J → C.

(Category theoretical) presheaf: For categories C and J, a J-presheaf on C is a contravariant functor D : C → J.

Presheaves: If X is a topological space, then the open sets in X form a partially ordered set Open(X) under inclusion. Like every partially ordered set, Open(X) forms a small category by adding a single arrow U → V if and only if U ⊆ V. Contravariant functors on Open(X) are called presheaves on X. For instance, by assigning to every open set U the associative algebra of real-valued continuous functions on U, one obtains a presheaf of algebras on X.

Constant functor: The functor C → D which maps every object of C to a fixed object X in D and every morphism in C to the identity morphism on X. Such a functor is called a constant or selection functor.

Endofunctor: A functor that maps a category to itself.
Identity functor: The identity functor in a category C, written 1_C or id_C, maps each object to itself and each morphism to itself; it is an endofunctor.

Diagonal functor: The diagonal functor is defined as the functor from D to the functor category D^C which sends each object in D to the constant functor at that object.

Limit functor: For a fixed index category J, if every functor J → C has a limit (for instance, if C is complete), then the limit functor C^J → C assigns to each functor its limit. The existence of this functor can be proved by realizing that it is the right adjoint to the diagonal functor and invoking the Freyd adjoint functor theorem. This requires a suitable version of the axiom of choice. Similar remarks apply to the colimit functor (which is covariant).

Power sets: The power set functor P : Set → Set maps each set S to its power set P(S) and each function f : S → T to the map which sends U ⊆ S to its image f(U) ⊆ T. One can also consider the contravariant power set functor Set^op → Set which sends f : S → T to the map which sends V ⊆ T to its inverse image f⁻¹(V) ⊆ S.

Dual vector space: The map which assigns to every vector space its dual space and to every linear map its dual or transpose is a contravariant functor from the category of all vector spaces over a fixed field to itself.

Fundamental group: Consider the category of pointed topological spaces, i.e. topological spaces with distinguished points. The objects are pairs (X, x0), where X is a topological space and x0 is a point in X. A morphism from (X, x0) to (Y, y0) is given by a continuous map f : X → Y with f(x0) = y0. To every topological space X with distinguished point x0, one can associate the fundamental group based at x0, denoted π1(X, x0): the group of homotopy classes of loops based at x0. If f : X → Y is a morphism of pointed spaces, then every loop in X with base point x0 can be composed with f to yield a loop in Y with base point y0. This operation is compatible with the homotopy equivalence relation and with the composition of loops, and we get a group homomorphism from π1(X, x0) to π1(Y, y0). We thus obtain a functor from the category of pointed topological spaces to the category of groups.

In the category of topological spaces (without distinguished point), one considers homotopy classes of generic curves, but they cannot be composed unless they share an endpoint. Thus one has the fundamental groupoid instead of the fundamental group, and this construction is functorial.

Algebra of continuous functions: A contravariant functor from the category of topological spaces (with continuous maps as morphisms) to the category of real associative algebras is given by assigning to every topological space X the algebra C(X) of all real-valued continuous functions on that space. Every continuous map f : X → Y induces an algebra homomorphism C(f) : C(Y) → C(X) by the rule C(f)(φ) = φ ∘ f for every φ in C(Y).

Tangent and cotangent bundles: The map which sends every differentiable manifold to its tangent bundle and every smooth map to its derivative is a covariant functor from the category of differentiable manifolds to the category of vector bundles. Doing this construction pointwise gives the tangent space, a covariant functor from the category of pointed differentiable manifolds to the category of real vector spaces. Likewise, the cotangent space is a contravariant functor, essentially the composition of the tangent space with the dual space above.

Group actions/representations: Every group G can be considered as a category with a single object whose morphisms are the elements of G.
A functor from G to Set is then nothing but a group action of G on a particular set, i.e. a G-set. Likewise, a functor from G to the category of vector spaces, Vect_K, is a linear representation of G. In general, a functor G → C can be considered as an "action" of G on an object in the category C. If C is a group, then this action is a group homomorphism.

Tensor products: If C denotes the category of vector spaces over a fixed field, with linear maps as morphisms, then the tensor product (V, W) ↦ V ⊗ W defines a functor C × C → C which is covariant in both arguments.

Forgetful functors: The functor U : Grp → Set which maps a group to its underlying set and a group homomorphism to its underlying function of sets is a functor. Functors like these, which "forget" some structure, are termed forgetful functors. Another example is the functor Rng → Ab which maps a ring to its underlying additive abelian group. Morphisms in Rng (ring homomorphisms) become morphisms in Ab (abelian group homomorphisms).

Free functors: Going in the opposite direction of forgetful functors are free functors. The free functor F : Set → Grp sends every set X to the free group generated by X. Functions get mapped to group homomorphisms between free groups. Free constructions exist for many categories based on structured sets; see free object.

Homomorphism groups: To every pair A, B of abelian groups one can assign the abelian group Hom(A, B) consisting of all group homomorphisms from A to B. This is a functor which is contravariant in the first and covariant in the second argument, i.e. it is a functor Ab^op × Ab → Ab (where Ab denotes the category of abelian groups with group homomorphisms). If f : A1 → A2 and g : B1 → B2 are morphisms in Ab, then the group homomorphism Hom(f, g) : Hom(A2, B1) → Hom(A1, B2) is given by φ ↦ g ∘ φ ∘ f. See Hom functor.

Representable functors: We can generalize the previous example to any category C. To every pair X, Y of objects in C one can assign the set Hom(X, Y) of morphisms from X to Y. This defines a functor to Set which is contravariant in the first argument and covariant in the second, i.e. it is a functor C^op × C → Set. If f : X1 → X2 and g : Y1 → Y2 are morphisms in C, then the map Hom(f, g) : Hom(X2, Y1) → Hom(X1, Y2) is given by φ ↦ g ∘ φ ∘ f. Functors like these are called representable functors. An important goal in many settings is to determine whether a given functor is representable.

Properties

Two important consequences of the functor axioms are:
- F transforms each commutative diagram in C into a commutative diagram in D;
- if f is an isomorphism in C, then F(f) is an isomorphism in D.

One can compose functors: if F is a functor from A to B and G is a functor from B to C, then one can form the composite functor G ∘ F from A to C. Composition of functors is associative where defined, and the identity for composition of functors is the identity functor. This shows that functors can be considered as morphisms in categories of categories, for example in the category of small categories.

A small category with a single object is the same thing as a monoid: the morphisms of a one-object category can be thought of as elements of the monoid, and composition in the category is thought of as the monoid operation. Functors between one-object categories correspond to monoid homomorphisms. So in a sense, functors between arbitrary categories are a kind of generalization of monoid homomorphisms to categories with more than one object.
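As a concrete programming illustration of the two functor axioms, here is a minimal Haskell sketch (anticipating the Haskell Functor class mentioned below; the names MyFunctor, fmapM, and Opt are invented here to avoid clashing with the built-in Functor class and Maybe type that they mirror):

```haskell
-- A functor in the programming sense: a type constructor f together
-- with an action fmapM on functions (morphisms).
class MyFunctor f where
  fmapM :: (a -> b) -> f a -> f b

-- An optional-value type, mirroring Haskell's Maybe.
data Opt a = None | Some a deriving Show

-- Opt is a functor: fmapM applies a function underneath Some.
instance MyFunctor Opt where
  fmapM _ None     = None
  fmapM g (Some x) = Some (g x)

-- The two functor axioms, as laws this instance satisfies:
--   fmapM id      == id                  (identities are preserved)
--   fmapM (g . h) == fmapM g . fmapM h   (composition is preserved)

main :: IO ()
main = do
  print (fmapM (+ 1) (Some (41 :: Int)))                  -- Some 42
  print (fmapM ((* 2) . (+ 1)) (Some (20 :: Int)))        -- Some 42
  print ((fmapM (* 2) . fmapM (+ 1)) (Some (20 :: Int)))  -- Some 42, same result
```

The two commented laws are exactly the identity and composition conditions in the definition above.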
Relation to other categorical concepts

Functors are often defined by universal properties; examples are the tensor product, the direct sum and direct product of groups or vector spaces, the construction of free groups and modules, and direct and inverse limits. The concepts of limit and colimit generalize several of the above. Universal constructions often give rise to pairs of adjoint functors.

Functors sometimes appear in functional programming. For instance, the programming language Haskell has a class Functor, where fmap is a polytypic function used to map functions (morphisms on Hask, the category of Haskell types) between existing types to functions between some new types.

References
- Mac Lane, Saunders (1971), Categories for the Working Mathematician, New York: Springer-Verlag, p. 30, ISBN 978-3-540-90035-1
- Carnap, Rudolf (1937), The Logical Syntax of Language, Routledge & Kegan Paul, pp. 13–14
- Jacobson (2009), p. 19, def. 1.2.
- Jacobson (2009), pp. 19–20.
- Popescu, Nicolae; Popescu, Liliana (1979), Theory of Categories, Dordrecht: Springer Netherlands, p. 12, ISBN 9789400995505. Retrieved 23 April 2016.
- Mac Lane, Saunders; Moerdijk, Ieke (1992), Sheaves in Geometry and Logic: A First Introduction to Topos Theory, Springer, ISBN 978-0-387-97710-2
- Hazewinkel, Michiel; Gubareni, Nadezhda Mikhaĭlovna; Gubareni, Nadiya; Kirichenko, Vladimir V. (2004), Algebras, Rings and Modules, Springer, ISBN 978-1-4020-2690-4
- Jacobson (2009), p. 20, ex. 2.

External links
- Hazewinkel, Michiel, ed. (2001), "Functor", Encyclopedia of Mathematics, Springer, ISBN 978-1-55608-010-4
- See functor in nLab and the variations discussed and linked to there.
- André Joyal, CatLab, a wiki project dedicated to the exposition of categorical mathematics
- Hillman, Chris, "A Categorical Primer", CiteSeerX 10.1.1.24.3264: a formal introduction to category theory.
- J. Adámek, H. Herrlich, G. Strecker, Abstract and Concrete Categories: The Joy of Cats
- Stanford Encyclopedia of Philosophy: "Category Theory", by Jean-Pierre Marquis; extensive bibliography.
- List of academic conferences on category theory
- Baez, John (1996), "The Tale of n-categories": an informal introduction to higher-order categories.
- WildCats, a category theory package for Mathematica: manipulation and visualization of objects, morphisms, categories, functors, natural transformations, and universal properties.
- The catsters, a YouTube channel about category theory.
Ratio Unit Vocabulary

Terms in this set (17)

Ratio: A relationship between two quantities, normally expressed as the quotient of one divided by the other; a comparison of two numbers or measurements.

Rate: A special ratio in which the two terms are in different units; it compares measurements of two different types.

Unit rate: The ratio of two measurements in which the second term (the denominator) is 1; the second quantity in the comparison is 1 unit.

Ratio reasoning: Using ratio language to describe a ratio relationship between two quantities.

Equivalent ratios: Two ratios that have the same value when simplified.

Tape diagram: A visual model that uses rectangles to represent the parts of a ratio.

Double number line: A number line with one scale on top and a different scale on the bottom, so that you can organize and compare items that change regularly according to a rule or pattern.

Coordinate plane: The plane determined by a horizontal number line, called the x-axis, and a vertical number line, called the y-axis, intersecting at a point called the origin. Each point in this plane can be specified by an ordered pair of numbers.

Plot: To draw on a graph or map.

Conversion: A change in the form or units of an expression; a change in the form of a measurement to different units without a change in the size or amount.

Percent: A ratio whose second term is 100; it means parts per 100.

Unit: A quantity used as a standard of measurement.

Proportion: An equation stating that two ratios are equal; a statement showing that one ratio is equal to another.

Cross products: The two equal products obtained by multiplying the second term of each ratio by the first term of the other ratio in a proportion. To cross-multiply, take each denominator across the equals sign and multiply it by the other fraction's numerator (see the worked example below).

Equation: A written statement indicating the equality of two expressions.

Part-to-part ratio: A ratio in which two parts are compared to one another.

Part-to-whole ratio: A ratio in which one part of the total is compared to the total.
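A quick worked example of using cross products to solve a proportion (the numbers are arbitrary, chosen for illustration):

\[
\frac{2}{3} = \frac{x}{12}
\quad\Rightarrow\quad
2 \times 12 = 3 \times x
\quad\Rightarrow\quad
x = \frac{24}{3} = 8.
\]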
Calculations plus experimental data help map the nuclear phase diagram, offering insight into a transition that mimics the formation of the visible matter in the universe today.

To get a better understanding of the subatomic soup that filled the early universe, and how it "froze out" to form the atoms of today's world, scientists are taking a closer look at the nuclear phase diagram. Like a map that describes how the physical state of water morphs from solid ice to liquid to steam with changes in temperature and pressure, the nuclear phase diagram maps out different phases of the components of atomic nuclei—from the free quarks and gluons that existed at the dawn of time to the clusters of protons and neutrons that make up the cores of atoms today.

But "melting" atoms and their subatomic building blocks is far more difficult than taking an ice cube out of the freezer on a warm day. It requires huge particle accelerators like the Relativistic Heavy Ion Collider (RHIC), a nuclear physics scientific user facility at the U.S. Department of Energy's (DOE) Brookhaven National Laboratory, to smash atomic nuclei together at close to the speed of light, and sophisticated detectors and powerful supercomputers to help physicists make sense of what comes out. By studying the collision debris and comparing those experimental observations with predictions from complex calculations, physicists at Brookhaven are plotting specific points on the nuclear phase diagram to reveal details of this extraordinary transition and other characteristics of matter created at RHIC.

To plot the key points where the transition takes place, the scientists are looking for large fluctuations in the excess of certain kinds of particles produced from collision to collision (for example, more protons than antiprotons, or more positively vs. negatively charged particles).

"At RHIC's top energy, where we know we've essentially 'melted' the protons and neutrons to produce a plasma of quarks and gluons—similar to what existed some 13.8 billion years ago—protons and antiprotons are produced in nearly equal amounts," said Frithjof Karsch, a theoretical physicist mapping out this new terrain. "But as you go to lower energies, where a denser quark soup is produced, we expect to see more protons than antiprotons, with the excess number of protons fluctuating from collision to collision."

By looking at millions of collision events over a wide range of energies—essentially conducting a beam energy scan—RHIC's detectors can pick up the fluctuations as likely signatures of the transition. But they can't measure precisely the temperatures or densities at which those fluctuations were produced—the data you need to plot points on the phase diagram map.

"That's where the supercomputers come in," said Karsch. Supercomputers can simulate the types of fluctuations you would expect for the wide range of temperatures and densities at RHIC. They start by mathematically modeling all the possible interactions of subatomic quarks and gluons as governed by the theory of Quantum Chromodynamics, or QCD, which includes variables such as temperature and density in the equations. Because the number of values for these and many other variables in the equations of QCD is very large, only supercomputers can handle the complex calculations.
To simplify the problem, the computers look at interactions of quarks and gluons placed at discrete points on an imaginary four-dimensional "lattice" that accounts for three spatial dimensions plus time. The lattice consists of about 300,000 grid points, and on each point the values of 48 variables need to be adjusted to characterize a specific configuration of the interacting quarks and gluons. Supercomputers use Monte Carlo sampling—more or less trying random numbers, like rolling a pair of dice—to find the most probable configurations of these values.

"But there are many such configurations and we have to explore them all to allow for the many possible ways those quarks and gluons can interact," said Karsch, leader of the nuclear physics lattice QCD group at Brookhaven and director of a DOE-sponsored "Scientific Discovery through Advanced Computing" (SciDAC-3) partnership program, "Computing Properties of Hadrons, Nuclei and Nuclear Matter from Quantum Chromodynamics."

To build these lattice QCD configurations, the scientists used Blue Gene supercomputers of the New York State Center for Computational Science, hosted by Brookhaven, as well as two new prototype racks of the Blue Gene/Q supercomputers at Brookhaven and at the RIKEN/BNL Research Center—a center founded and funded by the Japanese RIKEN laboratory in a cooperative agreement with Brookhaven. The machines turned out over 10,000 of these most probable configurations for each temperature.

The scientists then loaded the lattice configurations onto a different kind of supercomputer—the Graphics Processing Unit (GPU) cluster operated by the US-based lattice QCD consortium, USQCD, at DOE's Thomas Jefferson Accelerator Laboratory, and another GPU cluster at Bielefeld University in Germany.

"GPUs are the kinds of computers that were invented to make video games," said Brookhaven theoretical physicist Swagato Mukherjee, who coordinated the simulations and analysis. "They have very fast processors that can perform many simultaneous operations and draw every single pixel at the same time. That's what you need to see fast-moving graphics, but it's also very useful for these complex physics problems where we need to perform many simultaneous repetitive operations on each of the stored configurations of values to calculate the fluctuations of the excess particle numbers."

The scientists used 800 GPUs at Jefferson Lab and at Bielefeld University to analyze their 10,000 most probable configurations at each temperature, and calculated the fluctuations of excess particle numbers for various combinations of temperature and density relevant for collisions at RHIC. By matching the fluctuations measured in real RHIC collisions at a given beam energy with the calculated values, they could use the supercomputed calculations to identify the temperature and density at which those fluctuations took place—the coordinates they needed to plot a point on the phase diagram map. Repeating the process for many experimentally measured values of fluctuations over the wide range of beam energies available at RHIC is helping scientists trace the line on the map that shows how the transition from quark soup to ordinary matter changes with temperature and density.
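To make the sampling idea described above concrete, here is a minimal, hypothetical sketch in Haskell of the Metropolis Monte Carlo method, stripped down to a single real variable with a toy Gaussian "action" in place of the roughly 48 variables on each of the roughly 300,000 lattice sites; it illustrates the accept/reject principle behind importance sampling, not the actual lattice-QCD code:

```haskell
import System.Random (randomRIO)

-- Toy "action" S(x); the sampling weight is exp(-S(x)).
action :: Double -> Double
action x = 0.5 * x * x

-- One Metropolis update: propose a small random change and accept it
-- with probability min(1, exp(S(x) - S(x'))), so that more probable
-- configurations are visited more often.
metropolisStep :: Double -> IO Double
metropolisStep x = do
  dx <- randomRIO (-0.5, 0.5)   -- proposed random change
  u  <- randomRIO (0.0, 1.0)    -- uniform accept/reject draw
  let x' = x + dx
  return (if u < exp (action x - action x') then x' else x)

-- Generate a chain of n samples starting from x0.
sample :: Int -> Double -> IO [Double]
sample 0 _ = return []
sample n x = do
  x'   <- metropolisStep x
  rest <- sample (n - 1) x'
  return (x' : rest)

main :: IO ()
main = do
  xs <- sample 20000 0.0
  -- The histogram of xs approximates the weight exp(-S(x));
  -- averages over the chain estimate observables.
  putStrLn ("estimated <x> = " ++ show (sum xs / fromIntegral (length xs)))
```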
These studies will reveal the conditions under which the transition is abrupt, with a sharp dividing line between phases like the distinct forms of liquid water and vapor (a first-order phase transition), and where it is smooth and continuous with no distinct phases—as they expect is the case at RHIC's highest energies. They may even be able to identify whether there is a "critical point," the specific temperature and density at which the type of transition switches from continuous to abrupt.

"If found, this would define the starting point for an even richer phase structure at higher densities and lower temperatures," said Karsch. "The existence of a 'critical point' is a prerequisite for the existence of many exotic phases of nuclear matter that may exist within the core of compact stellar objects, such as neutron stars, so finding one means RHIC gives us access to studying these exotic forms of matter in a controllable way with our experiments."

To be confident of their findings and explore other aspects of the quark-gluon plasma, the scientists are using the same approach to study fluctuations in the excess number of particles containing heavier "strange" quarks. "Different hadrons—bound states of three quarks or a quark and an antiquark pair—can melt at different temperatures," Karsch said. "We have to show that it happens within a very narrow temperature range for these different kinds of particles to identify the critical point."

The analysis of strange quarks has revealed that, like the ordinary protons and neutrons consisting of lighter up and down quarks, hadrons containing the heavier strange quarks also melt in the same temperature region. In this analysis the scientists have also found that, at the temperatures achieved at top RHIC energies, the strange quarks remain strongly interacting even within the soup of quarks and gluons—a piece of evidence for the strongly coupled nature of the nearly perfect fluid created at RHIC.

The lattice QCD group at Brookhaven is now preparing for a new round of simulations. The plan is to use new software developed under the SciDAC-3 partnership program and computing resources applied for by USQCD through the DOE's Innovative and Novel Computational Impact on Theory and Experiment (INCITE) program—and, ideally, the world's largest supercomputer, TITAN, at Oak Ridge National Laboratory, which combines more than 18,000 GPUs. With that processing power analyzing existing data and future collisions at RHIC, scientists will continue to narrow the search for landmarks on the nuclear phase diagram and expand our understanding of how the matter of the early universe transformed into the stuff of our familiar everyday world.

The supercomputing analyses of QCD and RHIC data are funded by the DOE Office of Science. DOE's Office of Science is the single largest supporter of basic research in the physical sciences in the United States, and is working to address some of the most pressing challenges of our time. For more information, please visit science.energy.gov.

Original source of article: http://www.bnl.gov/rhic/news2/news.asp?a=4281&t=today
MTEL 54 (study.com) CH. 4-7

Terms in this set (165)

Flow charts: Charts with arrows between events that include both words and pictures enable ELL students to comprehend steps or sequential events.

Venn diagrams: Illustrate similarities and differences between concepts and provide an opportunity for students to display their knowledge without the frustration of sentence structure.

Graphic organizers: A great visual tool to help ELL students understand relationships and the organization of ideas. The visual representation gained from graphic organizers strengthens their understanding, therefore making the content comprehensible.

Direct vocabulary instruction: Using concrete examples, breaking down words into prefixes, root words, and suffixes, and using lots of visuals helps to create connections. Pre-teaching vocabulary is also effective when introducing new content material: the vocabulary words are introduced prior to the content itself, so students are already familiar with the words as you use them in your instruction.

Sentence stems: A great strategy that encourages ELL students and builds confidence in their writing ability. Sentence stems help ELL students focus on the major content rather than sentence structure, while providing scaffolding to practice correct sentence structure. Sentence stems can also easily be modified to fit any content area. A few examples: "The main point of the passage is ..."; "I believe ... because ..."; "The main character can be described as ... because they ..."

I do, we do, you do: A great strategy that models concepts, provides practice, and encourages independence. First, you complete a math problem on the board while the students observe. Next, the class completes one together, and finally, the students work on problems independently.

Wait time: Essential so that ELL students have the opportunity to assess their environment and determine their next step. A beginner ELL student will be watching peers for clues; intermediate or advanced ELL students will need time to gather their thoughts in their native language and then translate them into English. Speaking slowly and clearly while providing a little extra time is important: this wait time allows ELL students to process the information before they react or respond.

Total response signals: Gauge which students are ready to move on without embarrassing anyone. Asking for a thumbs up, pencils down, hands folded, etc., allows you to quickly scan the room and provide extra assistance to ELL students as needed.

Speaking in complete sentences: Rather than speaking in half sentences like "turn in when finished," or answering a question with a mere yes or no, use complete sentences to provide instructions and answer questions. When a student asks if he or she may use the restroom, respond in a complete sentence, such as, "Yes, you may use the restroom." By doing so, you are providing examples that ELL students will soon begin to copy.

Sentence frames: A sentence with missing words that the student can fill in, often using a word bank, e.g., "An apple is _____, but an orange is _____." This takes pressure off of English learners to produce an original sentence in a new language while also modeling how to write a complete sentence.

Scaffolding: Gives the student lots of instructional support in the beginning and then slowly removes it as the learner acquires the skills to write independently.
Differentiated writing strategies: Sentence frames/stems, scaffolding, think-write-pair-share, and snowball (described below).

THINK-WRITE-PAIR-SHARE: After posing a question or prompt, students individually take a moment to think about their answer. Then, they write their response before getting together with a partner to share what they've written. As they consult with their peers, they can make changes or add to their writing. Finally, volunteers can share their written responses with the class.

SNOWBALL: The teacher instructs students to respond to a writing prompt on a piece of paper. After students finish writing, they crumple up their paper to resemble a snowball and toss it to a teacher-directed area (a specific corner of the room, or an empty trash bin, for example). Students then randomly select a piece of paper and read the response on the paper. As an extension, students can write a response to what they read, and then repeat the exercise so that multiple students have a chance to read and respond to each student's work.

Differentiated reading strategies: In the earlier stages of English proficiency, ELLs can read materials in their native language. This may seem counterintuitive, but literacy in the native language is a huge predictor of success in second language acquisition. Bilingual books, graphic novels, and picture books are also useful. READ ALOUD to English language learners as much as possible, or provide AUDIO resources to supplement the text; this makes comprehension more likely because students are receiving language input in multiple ways, and it also demonstrates reading fluency, allowing students to hear the proper pace, tone, and pronunciation of the English language. Incorporate THINK-ALOUDS into your reading (speaking your thoughts about the text aloud) so students become familiar with the process of interacting with the text. Use PARTNER READING and CHORAL READING, and use a variety of resources during reading instruction, including picture dictionaries and other graphics, to help reinforce the text. Finally, try to make reading less overwhelming by breaking down reading assignments into smaller portions.

STORY MAP: A graphic organizer that combines reading comprehension and writing, while also teaching students how to effectively summarize a text. As students read literature, they keep track of essential information from the story on their story map, such as the setting, characters, rising action, climax, and resolution. This task can be done individually, or with the teacher modeling for students how to fill in the information. Afterwards, students can work with a partner to retell the story using their story map, and write a summary of the story. Thus, this strategy effectively combines listening, speaking, reading, and writing. Lower-proficiency students will need the most support with this task: the teacher will need to pre-teach the vocabulary associated with the elements of literature, and providing sentence frames, a word bank, and interactive support through the use of partners will ensure that English language learner students can meet the standard.

GALLERY: The teacher poses a prompt related to a text read in class. Each student responds to the prompt on a small piece of paper, such as a sticky note, and places the response on the wall. Students rotate around the room, reading one another's responses. They then break into smaller groups to discuss the prompt, commenting on each other's ideas from the wallpapering exercise.
INTERVIEW: The teacher provides pairs of students with pre-written interview questions related to a topic being studied in class. First, Student A asks Student B the questions and fills in the answers on his or her sheet. Then, Student B does the same. Afterwards, Student A shares Student B's responses with the class, and Student B shares Student A's responses. This works great as an introductory lesson for a new unit because it elicits students' prior knowledge of the subject.

TALKING CHIPS: Poker chips that students place in the center of the table when they have something to contribute to the conversation. The teacher provides each student with a pre-determined number of chips. When a student's chips are gone, he or she cannot add to the conversation until all students have used their chips. Students can then collect their chips from the center of the table, redistribute them, and continue the conversation or begin a new one.

PICTURE DICTATION: A sequencing activity well suited to lower-proficiency students because of its visual component. Small groups of students are provided with a series of images that relate to a text read in class. As students listen to the teacher read the text aloud, they work together to assemble the pictures in the correct order.

INFORMATION GAP: Student A has some of the information already filled in, while Student B has the rest of the information. Without looking at each other's papers, the students must communicate with one another in order to complete their charts.

Differentiating instruction strategies in mainstream classrooms:

ACTIVATE PRIOR KNOWLEDGE: Pre-teach culturally unfamiliar topics, and give ELLs a copy of the text in their native language.

RATING VOCABULARY WORDS: Go over the words with students and ask them to rate each word according to how well they know it. You can use the following system: holding one finger up means the student has never seen or heard the word before; holding two fingers up means the student has heard or seen the word, but is not sure what it means; holding three fingers up means the student knows the meaning of the word well enough to teach it to a classmate.

WORD SORT: An activity where students arrange words into different categories to make sense of them.

OPEN SORT: Students determine their own categories. For example, you might ask students to create categories for the following set of words about geographical landforms: island, volcano, mountain, lake, pond, reef, waterfall, river, canyon, valley, desert, sand, grass, water, and algae. Students might create the following categories: wet landforms, dry landforms, and characteristics of landforms.

CLOSED SORT: The categories are pre-determined. For example, you might provide students with a chart that has several categories already listed and ask them to place each word under one of the columns.

THINK-PAIR-SHARE: Each student works with a partner to discuss a topic after thinking about it independently. Then, each pair shares their responses with the rest of the class.

Differentiated instruction in mainstream classrooms includes:
- Activating prior knowledge
- Doing open and closed word sorts
- Using cooperative learning strategies like think-pair-share
- Providing sentence frames for writing assignments

CHUNKING: Break your teaching into smaller parts so that you do not give students too much information at once. You don't want to overwhelm students who are already struggling with many academic concepts, so breaking the information into smaller pieces helps them to absorb it without becoming frustrated.
With chunking, you're breaking a lesson into smaller, manageable sections. This makes learning easier and increases comprehension. The teacher can either chunk the text for students or have the students chunk it themselves. In a social studies class, for example, you may have students read a story with three or more paragraphs. As they are doing this, you may chunk each paragraph and check for student comprehension. You may also take two lines at a time, read the lines, then check for comprehension.

KWL (Know, Want to know, Learned) chart, to activate prior knowledge: The letter K stands for what students already know about a subject. For example, if students are reading the book The Outsiders in class, you may begin by asking them what they know about social class systems (rich versus poor), or you may ask them what they know about gangs. The letter W stands for what students want to know. Using our Outsiders reference, students may want to know how gangs today differ from gangs during the 1950s and 1960s, or why social status was so important to the characters in the book. Information in this chart will reflect each student's own experiences with social class, gangs, or whatever themes they come up with. ELLs will likely have different experiences than non-ELL students; this would be a good time to open up a discussion about cultural perspectives as they relate to the reading and understanding of literature. The final column is L, for what students learned after reading the story. This may be different for each student, so you may spend a little more time filling in this column. However, it will allow you to assess your students' abilities to comprehend what they have read.

Building upon prior knowledge through guided practice: The teacher may model reading a passage or solving a math equation as ELLs and other students follow along. You may use guided reading practice for the whole class, or you may break struggling readers into smaller groups so that they can receive more personalized instruction.

Adapting a lesson for boredom and difficulty:

Boredom: Try breaking up the lesson with a game or activity. This is also an opportunity to get your students up and moving around and using a variety of English skills.

Lack of understanding / lesson too difficult: If you feel that most students are missing the point, ask each student to write down and submit two or three questions he or she has related to the lesson. Go through the questions and try to address the most common concerns. You can also divide the class into small groups and assign each group a question to answer.

Lesson too easy: In this situation, you can increase the pace of the lesson, use more difficult examples, or give a pop quiz to determine if the material really is cemented in your learners' minds. Sometimes ELLs feel confident when a book is open in front of them, but not when asked to speak or write in English on the fly, so it's important to ensure that ELLs feel confident with new aspects of the English language.

Massachusetts Integrated Model of Literacy: Reading, writing, listening, and speaking skills allow students to make connections between the language they use to communicate on a daily basis and the academic language used in content subjects. As per the ELA and Literacy Framework, Cynthia plans to teach words like 'work,' 'do,' and 'make' in her ELL classes.
Cynthia also wants her students to learn words like 'analyze,' 'compose,' and 'research,' since these are words students use in an academic context.

Collaboration with content teachers: The ELA and Literacy Framework states that collaboration between educators is often helpful for providing students with the knowledge they need. Let's take a practical example. Cynthia teaches third-grade English learners. Collaboration means that Cynthia often communicates with the third-grade teacher in order to determine the academic needs of students. The grade-level teacher interacts with English learners in math, science, and reading, and thus is able to see what specific needs the students have. The teacher can then inform Cynthia of those needs so that Cynthia can more precisely target her instruction. Similarly, Cynthia collaborates with the teacher by informing her about the progress her English learners are making, and even by giving the teacher some tips on working with English learners.

Implementation of grade-level standards: As Cynthia reviews the standards for math, she realizes that her English learners need to be able to use vocabulary such as 'half,' 'denominator,' 'number line,' etc. This is vocabulary for third graders as they study fractions. Thus, Cynthia plans a vocabulary lesson in which her English learners will be not only learning this vocabulary but using it in context. Each content subject has its own framework that teachers can consult depending on the grade level; for instance, for math there is the Massachusetts Curriculum Framework for Mathematics.

Connection to language development standards: The ELA and Literacy Framework specifically states that it should be used in conjunction with the language standards that lead students toward proficiency in English. The state of Massachusetts uses the WIDA (World-Class Instructional Design and Assessment) standards.

Using the MA Curriculum Framework for ELA in ESL classrooms: In the state of Massachusetts, all teachers, including those of English learners, use the English Language Arts and Literacy Framework. This is the basic guide that defines how to implement instructional practices in the classroom to help students succeed in school. The framework proposes an integrated model of literacy. This is a guiding principle which states that the skills of reading, writing, listening, and speaking, which students use to communicate on a daily basis, are closely connected to academic language. For this reason, teachers of English learners work in collaboration with grade-level teachers in order to include English language arts standards in their English language learning sessions. Such standards are in alignment with the WIDA standards for the development of the English language. The standards include social as well as academic language for ELA, math, science, and social studies.

Teaching ELL "special populations": English language learners include students whose situations are different from "typical" learners and who have different instructional needs:
- Newcomers (new to the school)
- Long-term ELLs (students who have been in ELL classrooms for several years)
- ELLs who have experienced trauma (abuse, etc.)
- Students with Interrupted Formal Education (SIFE)
- ELLs with low socioeconomic status

Two important indicators of learning disabilities in ELLs: lack of progress in learning English AND low academic achievement. The professional approach to identifying ELLs with learning disabilities requires formal assessment by specialized staff.
To illustrate, Yara's ESL teacher, Diane, can report on Yara's progress in English. When Sondra talks to Diane, she confirms that Yara is making no progress; Diane also notes a lack of comprehension and of basic writing ability. So now Sondra has a strong basis for speaking to the specialized team, which is the special education department in her school.

Possible signs of a learning disability in ELLs:
- Difficulty reading and/or comprehending content
- Difficulty spelling correctly
- Poor writing skills
- Poor skills in solving math word problems
- Difficulty staying focused and/or following directions
- Difficulty retaining information
- Difficulty establishing positive relationships with peers, teachers, etc. (this can include aggressive behavior, reluctance to socialize/speak, etc.)

Teaching ELLs with learning disabilities: Once your school's specialized team determines that an ELL student has a learning disability, special education support is one aspect; the other aspect is how to teach ESL to those students.

First, you must monitor the ELL with a learning disability with particular attention to the areas where they struggle. To illustrate, Yara's Individualized Education Plan (IEP) highlights that her problems are comprehending content and writing skills. Thus, these are the areas of focus for Diane, her ESL teacher. This means that Diane will provide Yara with ESL instruction that includes literacy skills activities. The main literacy skills include specific reading skills (phonological awareness, word-level fluency, phonics, comprehension, etc.) and writing skills (correct spelling, structures, etc.).

Second, you don't always have to apply the standard ESL lessons. Instead, you would specifically program activities that target the ELL's difficulties. For example, Yara struggles with reading comprehension, so Diane often plans reading lessons with time for ESL students to answer questions that clarify the text. As Yara listens to her peers, she begins to understand some aspects. Diane also uses an activity in which ESL students ask each other questions about the text to check for comprehension.

Third, you can continue to benefit ELL students with a learning disability through regular ESL instruction. This may seem contradictory, but some research indicates that ELLs with a learning disability benefit from the same type of ESL instruction regular ELLs receive. Diane knows this, and she continues to assign the regular tasks to her ESL students. When it is time to produce writing, she talks to Yara and asks her questions that trigger the production of content that can be put in writing. This way, Yara receives guided instruction for writing.

Finally, it is key for ESL teachers to remember that ELLs with learning disabilities make great progress through vocabulary development. Yara, for example, begins to receive individual assistance from Diane, who makes sure Yara understands new words, connects learned vocabulary with new vocabulary, and retains vocabulary. Though progress is slow, Yara begins to improve, which is the main objective of all the assistance she receives.

Lack of progress and low academic achievement are two important indicators of a learning disability in ELLs. However, not all ELLs with these characteristics have a learning disability; this can only be determined by a special education team after formal assessment. ESL teachers can watch for certain signs that can help identify a learning disability.
The signs include difficulty reading, comprehending content, spelling correctly, staying focused and/or following directions, retaining information, and establishing positive relationships with others, as well as poor writing skills and poor skills in solving math word problems. Teaching ESL to these students requires four main approaches: monitoring with attention to the learning disability, programming that addresses the learning disability, continuation of regular ESL instruction, and vocabulary development.

ELLs in school districts with fewer than 100 ELLs: These students may be referred for special services they DON'T actually need; the opposite problem is failure to refer ELLs who DO require special services.

Questions to ask BEFORE referring ELLs for special education:
- Did the student receive appropriate interventions in the general education environment?
- Was the student assessed in their primary language?
- Are you familiar with the 'silent phase' of language acquisition?
- Have you considered cultural differences?
- Is your team in place?

Intervention and assessment for ELLs possibly needing special education:
- Has the parent been notified and attended problem-solving conferences (with an interpreter present) during the process?
- Have you followed your school, district, and/or state policies for Response to Intervention regarding instructional and behavioral concerns?
- Have you used professional judgment regarding the length of time interventions are implemented? (Some interventions may take longer than others to implement due to the planning required.)

BEFORE a SPED referral: Was the student assessed in their native language? Testing students in their primary language is a crucial part of the informal evaluation process. An evaluator or educator can't tell the difference between a student's degree of knowledge and competency if the student doesn't understand the directions or the questions being asked. Consider this: could you pass a simple oral first-grade spelling test if the words were read in Urdu? It sounds like a no-brainer, but this unfair practice still occurs.

Before a SPED referral, consider the silent phase: Similar to the 'honeymoon' period at the beginning of the school year before negative behaviors appear with students, ELLs go through a 'silent phase.' During this phase, ELLs are slowly acquiring English language skills that begin with new vocabulary and syntax. During this phase, ELLs are apt to appear dazed or confused, may point rather than speak, and may appear not to be listening. They're actually familiarizing themselves with the English language and its foundations. Many educators may be confused by this demeanor, assuming incorrectly that the student has low cognitive abilities.

Before a SPED referral, consider cultural differences: When working with an ELL, have you thought about cultural differences? Think about English-speaking students who raise their hands for assistance, and now consider the ELL. Some cultures believe asking for help is a sign of weakness. Other cultures may even frown upon asking for help from a female teacher. Some cultures believe asking for help is putting the individual before the group, which is perceived as a matter of disrespect. Another difference is that some ELLs aren't familiar with the Latin alphabet, in which the English alphabet is rooted, such as ELLs from places like China or Russia. Lastly, other ELLs have never written or read printed text from left to right.
Team to have in place for a SPED referral for an ELL student:
- A bilingual psychologist
- A bilingual social worker
- A general education teacher
- A special education teacher
- A speech-language pathologist
- A teacher of the ELL student
- An advocate who is familiar with the cultural nuances of the student's native country, if possible

Teaching ELLs with special needs (SPED): To begin, you should know for a fact that a specific student is in the special education program at your school; never assume that a student needs special education because of low achievement. Next, you need to gather information from the special education team about the exact disability and how it affects academic performance. Once you have this information, you can target the language skill you need and determine an instructional strategy to help the student. For example, Jean is an ELL teacher who has just learned that her ELL student Olesya has dysgraphia. This disability causes Olesya to have difficulty with handwriting, spelling, and organizing ideas, so her English writing skills are very poor. The good news is that now that Jean knows which writing skills are affected by the disability, she can prepare activities to target those skills. Jean includes easy spelling activities in her lesson plans and assists Olesya while she works. Jean also shows students several ways to organize their ideas before writing, such as brainstorming, making a map, and drawing a story line.

Strategy for teaching ELLs with SPED, target the language skill: The key to determining an instructional strategy for ELLs with special needs is to target the language skills affected by their disability. The instructional strategy may include activities using a variety of tools that can be visual, auditory, or hands-on with language. Note that any adjustment you make for a student with special needs should work for the entire class; this way, you do not disrupt others or change lesson plans at the last minute. Finally, once you target the skill your student needs to develop, you are on the way to helping him or her while the special education team uses more specialized approaches. Remember that ongoing collaboration with the special education team is important for adjusting to changes.

Signs of delay and matching teaching strategies for ELLs with SPED:
- Difficulty comprehending written content: ask comprehension-check questions.
- Difficulty reading: use visuals that illustrate the content.
- Poor writing skills: teach organization of ideas.
- Difficulty staying focused and/or following directions: ask questions that get the student's attention and/or have the student repeat instructions.
- Difficulty retaining information: repeat relevant information.
- Poor vision: adjust the material/class setting to make visualization easier.
- Difficulty relating written material to sounds: drill through repetition of relevant patterns.

Gifted ELLs: Often stand out because they complete assigned work quickly, comprehend new concepts immediately, and may even become bored or disinterested because the course content is too simple.

Gifted ELLs criteria:
- Does the student do extra work that was not assigned?
- Does the student ask relevant and thoughtful questions, even if these questions are unrelated to class content?
- Does the student bring in outside reading or other material to discuss with you?
- Does the student take advantage of your office hours or give up free time to increase learning opportunities?
- Do the student's assignments and assessment grades reflect strong comprehension of material taught in class?
Strategies for teaching gifted ELLs:

INDIVIDUAL ATTENTION: Encourage your gifted learners to take advantage of your office hours or other time outside of class.

ADDITIONAL ASSIGNMENTS: Make any additional or extra-credit work available to all learners. It's important to stress that the extra work is purely voluntary and that the choice of whether or not to undertake the tasks will have no effect on grades or your own perception of the student.

CONVERSATION WORK GROUPS: Place gifted learners into small conversation and/or work groups. If possible, try to do this during voluntary or additional study time so that other students in the class don't feel as though they are not adequate enough for this special group. Small groups have several advantages for gifted students. First of all, gifted students may be able to raise and challenge the level of communication that takes place in a peer setting. These groups can also facilitate deeper thought from a student viewpoint and can encourage understanding and comprehension of varied opinions and ideas. Additionally, students in the group can check answers and compare learning approaches, as well as share tips and advice.

CHOOSING MATERIAL: When choosing additional material to give to gifted learners, try to make sure the material is both challenging and appropriate. If you teach ELLs of different levels, try giving material reserved for more advanced classes to lower-level learners. You can also create original content that emphasizes the areas you feel certain students may excel in. Be careful not to overestimate the ability of your gifted learners: if you challenge them too much, they may become discouraged and lose focus. Also, don't always choose material that is language-focused; rather, choose content that covers a variety of topics, with English as the medium of delivery rather than the main focus. Sometimes a gifted student excels in one area, like language acquisition, while struggling in other areas like math or science, no matter the language they are taught in. Take into consideration the strengths of your ELLs and their goals. It's vital to remain focused so that learning can be guided and beneficial rather than too much of a chore.

Oral language components:
- VOCABULARY: the understanding of different words
- PHONOLOGICAL SKILLS: the sounds of a word
- SYNTAX: understanding of grammar rules and the order of words in the language
- MORPHOLOGICAL SKILLS: understanding word parts and forms
- PRAGMATICS: understanding the social rules of communication

Oral language assessments:
- ORAL PROMPTS: an open-ended question to which students respond orally.
- PICTURE PROMPT (also known as picture-cued description): you provide a picture, and students orally describe what is happening in the picture or the scenery in the picture.
- ROLE PLAY: two or more students have set roles and tasks to complete (not the same as reading scripts in a play; it is freer and unscripted).
- ORAL SUMMARY (also known as text retelling): the educator reads a story, then students orally summarize the story.
- ORAL INTERVIEW: the teacher asks questions to enter into a dialogue with the student. Some educators opt to have a discussion about the story instead of asking students to retell it. The summary and/or discussion can take place as an entire class, or students can be broken up into small groups.

It is very important to have set guidelines and directions for the students in each of these assessments so they know how their oral language skills are being evaluated.
Grading rubrics are ideal since they show the students what you are looking for and the weight of each part of the assessment.

FIVE COMPONENTS OF EFFECTIVE ORAL LANGUAGE INSTRUCTION FOR ELLS

DEVELOPING LISTENING/SPEAKING SKILLS: Create pairs of people to act out speaking and listening and have students model this, taking turns presenting and listening to information. From here, you can teach students the concept of having the floor, as in 'the speaker has the floor.' From this, students learn to recognize when it's an appropriate time to talk and when it's time to listen.

TEACHING A VARIETY OF SPOKEN TEXTS: According to the British linguist Michael Halliday, there are seven different functions of language: instrumental (the language of expressing needs), regulatory (influencing others), interactional (getting along with others), personal (expressing personal feelings), heuristic (learning about one's environment), imaginative (creating stories), and representational (communicating information). The texts you use need to embrace all seven of these functions.

CREATING A LANGUAGE LEARNING ENVIRONMENT: There are three parts to this: the physical environment, classroom culture, and opportunities for communication. The physical environment can be enriched through creative toys, dress-up boxes, and tables to display and discuss work. Classroom culture is promoted by being sensitive to cultural differences, emphasizing equality, and teaching students to take turns. Opportunities for discussion can be developed by modeling listening and speaking, reading as a class, and reciting raps, poems, or songs to introduce new sounds and rhythms.

TEACHING VOCABULARY AND CONCEPTUAL KNOWLEDGE.

PROMOTING AUDITORY MEMORY: Auditory memory is the process of listening, processing, and remembering. Students have to learn how to process various kinds of information, and teachers can develop auditory memory through a mixture of repetition and performance. Repeat songs, poems, or plays to develop memory and get students used to memorizing and recalling this sort of information. Games like 'Simon Says,' which rely on the ability to memorize and recall information, are a fun and practical approach as well.

Comprehensible Output Hypothesis (Swain): ELL students learn language when they realize there is a gap in their language skills. For example, a student makes a language mistake, becomes aware of the mistake because of feedback, and then tries again. Producing the correct message through trial and error enables the student to modify language appropriately in the future.

Three functions of the Comprehensible Output Hypothesis:

Noticing function: when language learners realize there is a gap between what they want to say and what they are able to say.

Hypothesis-testing function: when the ELL student speaks a sentence to test whether it is correct. If it is incorrect, the other speaker will give feedback by correcting the sentence. Think back to the car conversation: Nicolas says, 'I want to buy a car blue.' According to the hypothesis-testing function, Laura will fix the mistake by correctly using the phrase 'blue car' in her response.

Metalinguistic function: ELL students reflect on the sentences they produce and the feedback they receive from others. In our example, Nicolas becomes aware that he made a mistake and of what the correct language should be. In the future he will say 'blue car' and not 'car blue.'
Cooperative Learning Teams
In cooperative learning teams, students work together to complete a task, which offers plenty of opportunities for communication for every level of ELL student. Beginners can use gestures or short words to communicate, while more advanced students can use more complex language. Another benefit of cooperative learning teams is the immediate feedback ELL students receive.
A dialect is a variation of a language that differs in the vocabulary, grammar, and pronunciation of the language. Dialects can be standard or non-standard. There are regional and dialectal variations of Standard English in both the United States and the United Kingdom. In the United Kingdom, there are two broad categories: Standard English and regional variations. In the United States, there is Standard American English and somewhere between three and 24 regional variations. Social scientists suggest that New England, Southern, and Western are the three main regional variations of English, but there are sub-dialects within those variations that bring the number up to 24. However, it is important to note that in both British English and American English, it is impossible to determine the exact number of dialects because the language changes from one person to the next.
An accent is simply the way a person pronounces words within one's dialect.
Common regional dialects of English in the UK
Standard English: Standard English is taught in school and is what you would think of when you read British literature or watch a movie set in England.
Cockney: Cockney is the working-class dialect in London. Some argue that to be a true Cockney, one must have been born within hearing distance of the Bow Bells in London's Cheapside district.
Yorkshire: Influenced by Viking invasions, the dialect is found in Northern England.
Southeast Midlands: The dialect is influenced by Scandinavian, French, and Middle English. It has a drawl to the words. Speakers tend to use older words such as 'thee' and 'thou'.
24 regional dialects in the United States
Pacific Northwest: The dialect is influenced by Native American languages. The word potluck comes from the Native American word potlatch.
Pacific Southwest: Influenced by gold-mining settlers; bears a slang-like attitude toward the language.
Rocky Mountain: This dialect is heavily influenced by frontier settlers and Native American languages.
Southwestern: The dialect is heavily influenced by Mexican variations of Spanish.
San Francisco Urban: The large influx of settlers has churned out a dialect that is a combination of Midwestern and Northeastern English.
Upper Midwestern: This is the dialect people think of when they think of the Midwest. It bears the iconic twang in the pronunciation of many words.
North Midland: North Midlanders call doughnuts dunkers or fatcakes because they are in the transition zone between North and South, East and West.
Ozark: The dialect is twangy and similar to the one used in the Appalachian Mountains.
South Midland: These speakers use words such as 'reckon' and 'ragamuffin' in their everyday speech. They also add an A before gerunds and replace TH with F.
Eastern New England: The 'R' at the end of words is replaced with an 'H'. Car is pronounced 'CAH'.
Boston Urban: The Boston Urban dialect is the traditional Boston sound that bears the Southie accent. Other sub-dialects based on class exist within this dialect.
Western New England: The 'T' may be dropped from words, although this dialect is very subtle.
Hudson Valley: The dialect was influenced by Dutch settlers.
Hudson Valley speakers say crullers for doughnuts.
New York City: The mix of ethnicities is largely responsible for the way English sounds in New York City. Speakers tend to replace the TH with a D sound.
Bonac: This dialect is a combination of New England and New York City.
Inland Northern: The dialect combines Western New England and the Midwest. Speakers call doughnuts friedcakes.
Chicago Urban: The dialect is influenced by the Northern Cities Vowel Shift, which happens when short vowel sounds mimic long vowel sounds.
Pennsylvania German-English: This small dialect retained some German grammar rules from the German settlers who lived there.
Gullah: The dialect is a Creole mix.
Southern Appalachian: The G of ING is dropped.
Virginia Piedmont: The dialect boasts a drawl on Rs that come before a vowel.
Coastal Southern: The dialect is similar to the Virginia Piedmont dialect, except it retained more vocabulary from Colonial English.
Gulf Southern: The Deep South dialect is influenced by French and English settlers.
Louisiana: Deeply influenced by French settlers, this dialect has a handful of sub-dialects depending on what part of the region you are from.
8 major dialect areas in North America
New England, the West, North Central, Canada, the South, the Midlands, the North, and the Mid-Atlantic/NY region. These names come from the 'Atlas of North American English', which is based on the work of sociolinguist William Labov. Within these eight areas there are many subdialects, and even some dialects (such as African American Vernacular English, or AAVE) that are not region-specific. There are upwards of 25, but the exact number is debated, since the lines between dialects are often very blurry and hard to define.
Using a double modal in Southern dialect
In several varieties of southern English, especially in the Carolinas, double modals are completely acceptable. A modal is a word that qualifies a verb, such as 'might,' 'may,' 'could,' or 'should.' In many dialects of English, each verb can only have one modal. In dialects of southern English, however, they can be doubled. An example of a double modal is the sentence, 'I might could do that.'
African American Vernacular English (AAVE)
Another widespread and highly stigmatized variety of English is African American Vernacular English. Speakers of AAVE, especially children, are often told that grammatical aspects of their dialect are 'incorrect'. One of these aspects is habitual 'be.' In AAVE, 'be' can be used to indicate something that is done regularly, and it is distinct from 'is'. For example, the sentence 'He be playing basketball' means that he regularly plays basketball; that is, basketball is a habitual activity for him. By contrast, the sentence 'He is playing basketball' means he is currently playing basketball, regardless of whether he plays it regularly. Children who speak this dialect develop and use this feature as a distinct grammatical form indicating a specific type of activity. In schools and with speakers of other dialects, however, they are often treated as though they are speaking incorrectly.
Basic Interpersonal Communicative Skills (BICS)
BICS are skills required for basic fluent communication in social settings. Basically, when students engage on the playground, in the cafeteria, or at a coffee shop after school, they are using BICS. Another term for this is conversational language. Researchers estimate that fully acquiring BICS takes roughly 1 to 3 years.
Cognitive Academic Language Proficiency (CALP)
CALP refers to the skills needed to use language abstractly and as a tool for learning: the academic language used in the classroom. According to most researchers, developing full CALP skills requires 5 to 7 years for students who are mostly literate in their native language, and 7 to 10 years for students without any pre-existing literacy.
Teaching ELLs BICS first in their early years
Early speaking skills with ELL students should be focused on BICS, learning the skills for social communication. The goal is to eventually work up to CALP fluency, but by starting with social communication, we build up confidence and comfort with language before introducing more abstract concepts.
Strategies for teaching Speaking to ELLs
AVOID OVER-CORRECTION: avoid this early on. Let the student become familiar with the sounds and uses of words and sentences first.
TEACH DIVERSE SPOKEN LANGUAGE: the wider the range of activities you can bring in, the more experience your student has using language in various settings.
USE STRUCTURED ACTIVITIES: have students read a script, be it a play, a conversation, or a song. Then, once the student is comfortable, have students select their own adjectives to use, rather than those in the script, to make the dialogue more personal. Memorization exercises, like songs and rhymes, are also great for building up familiarity with sounds and words within a spoken language.
BALANCE LISTENING AND SPEAKING: listening is a big part of verbal communication. Give students plenty of chances to listen to a language and then interact with it. By balancing both listening and speaking skills, students learn to think about the ways that people around them use language.
One of the biggest hurdles for ESL students is the fear of making mistakes, particularly in front of their peers. Because of this, it's important to create an atmosphere that not only accepts errors but also encourages students to make them. If students could speak English perfectly, they wouldn't need ESL training. Making mistakes is essential to the learning process and should be a part of any well-run ESL class.
5 components that allow students to explore the language and retain new knowledge
VOCABULARY: having a solid vocabulary is a necessary component of any English conversation. However, when students are just beginning to learn English, it's important not to place too much emphasis on simply memorizing words. Proper usage is more important than having a large vocabulary. Because of this, any vocabulary training should include a significant amount of usage examples in addition to dictionary definitions. Words that have multiple meanings but the same pronunciation, like 'may' or 'fly', should be examined in all of the appropriate contexts. Conversational vocabulary should be geared towards common items and everyday situations. You can adjust the difficulty depending on the level of your students.
PRONUNCIATION: one way to practice this is to write words on index cards and have individual students pronounce the words, correcting mispronunciations as needed. It's also important to practice vocabulary words that are spelled the same but pronounced differently, as in 'desert' (dry, arid land, or to leave someone) or 'windy' (strong wind, or a curvy road).
INTONATION: the best way to demonstrate how intonation works is to demonstrate it yourself or find good audio clips online to share with the class. One way to explain intonation is to simply use punctuation and a simple situation.
Think of how many ways a word like 'really' can be intoned in order to change the intended meaning.
GRAMMAR: the key when teaching grammar is to focus on correct grammar patterns without making the students sound like they are reading from a text. One way to accomplish this is by listening to English being spoken naturally.
LISTENING: English-language television shows, movies, radio, audio books, and podcasts are all convenient sources of conversational English. The big advantage of these types of media is that students can choose topics they are interested in. This freedom of choice can go a long way in preventing boredom.
Informal assessments are ongoing and happen during learning. Examples include teacher checklists, student self-assessments, and exit tickets. A teacher listening in on small group discussions and jotting down quick notes would also be considered an informal assessment.
Formal assessments occur after learning. Examples include administering a test at the end of a unit or giving weekly spelling quizzes. Many factors can affect student performance on formal assessments, so the data can be skewed.
Assessments for testing ELLs' SPEAKING skills
DIALOGUES & ROLE PLAYING: you can present students with pre-written dialogues to use in role-playing, or students can participate in writing them. This kind of assessment is two-fold: not only do students get practice speaking, but they also gain experience using language in a variety of real-world situations. Students can role-play making a deposit at the bank, ordering dinner in a restaurant, or socializing at a party.
INTERVIEWS: you can provide the interview questions or students can write them. Students can rotate partners to learn more about their classmates. As you circulate through the classroom, listen in on student conversations and provide support as needed. You can also use interviews to determine students' background knowledge. For example, Student A interviews Student B about a topic and writes down his or her answers, and then Student B does the same with Student A. Students can then share their partners' responses with a small group or with the class. For example, prior to a lesson about the ocean, students can interview one another about their experiences: Have they ever been swimming in the ocean? Is the Earth mostly water or land? What kinds of animals live in the ocean? What is the biggest ocean in the world?
Students can use sentence frames to help organize their discussions. Sentence frames are pre-written sentences that provide a template for students to fill in with their own words and phrases. Thus, students actually complete a sentence that is already partially written. You can provide students with a handout or display a large poster of helpful sentence frames for class discussions, including:
I agree/disagree with _____ because _____.
In my opinion, _____.
I would like to add that _____.
Providing sentence frames encourages ELL students to take language risks, which builds their confidence and improves their vocabulary. As an informal assessment, you can sit in on each group of students for a few minutes and take notes, complete a checklist, or use a rubric to assess speaking skills.
Assessments for testing ELLs' LISTENING SKILLS
TOTAL PHYSICAL RESPONSE (TPR): based on the idea that children can naturally acquire a new language through repetitive exposure to commands. The teacher usually begins by modeling an action while saying a command aloud. For example, she might tell students to stand up while standing up herself.
Or, she might tell students to sit down while also sitting down. This approach to language learning is stress-free and highly engaging.
To teach shapes: 'Point to something that has a square shape.'
To teach body parts: 'Touch your knees.'
To teach prepositions: 'Put your pencil on the desk.'
To teach adverbs: 'Walk to the door quickly.'
To teach adjectives: 'Pick up the red crayon.'
Simply observing students during TPR activities can help you gather feedback about their listening skills.
PICTURE DICTATION: provide small groups of students with a series of images that correspond to a story. As you read the story aloud, have students arrange the pictures in chronological order. Afterwards, students can negotiate with one another to determine the correct order, and you can easily assess students' listening comprehension.
INFORMATION GAP: provide pairs of students with two index cards. Card A includes information about a topic, such as sea turtles. However, there are blank spaces on the card, indicating missing information. Card B also has information about sea turtles, and it happens to be the same information missing from Card A. Students must communicate to fill in the missing information on their cards. They can only do so by speaking and listening to their partners.
Nurture not Nature
In the beginning, reading exercises should be relatively straightforward, with an emphasis on very common words as well as those words in which the pronunciation matches the spelling. Illustration is practically necessary at this point. Also, students should be able to recognize rhyming, which in turn helps them to recognize trends in pronunciation from spelling. If you listen to the songs that most kindergarten classes learn, you'll see that they are heavy in rhymes and relatively short words. The point is to create a level playing field from which to proceed.
Kindergarten and first grade (reading skills)
Throughout kindergarten, there should be a considerable emphasis on everything we just discussed. Point out that letters have sounds associated with them. Emphasize rhyming and short illustrated works. All of this is in preparation for first grade. By then, students should feel pretty comfortable with the safety zone they have been establishing. First grade is the time to let students inch towards the deep end. During this stage, students should ideally be able to handle more words on a page. They should also be able to handle the same ideas in greater depth. In kindergarten, a story may involve only a sentence or two per heavily illustrated page. By first grade, a few more sentences with a greater progression of images are possible. In fact, by the end of first grade some students may be able to read even more complex sentence formations.
Second and third grade reading skills
Around the time students reach second and third grades, they should be preparing to bring home their first books that don't depend so much on images. Granted, these are still pretty short works, but going from picture books to chapter books is quite the accomplishment. Students are able to handle even more complex sentences, including changes to quotations within a story. Also, at this point context clues for new words don't necessarily have to depend on pictures. Instead, students can use the surrounding text to figure everything out. So far, you may have imagined learning to read as an uphill walk. Try thinking of it more as a see-saw.
By the time students have reached the end of third grade, they are still learning to read, but now the reason for reading can switch away from being purely to learn the skill and toward learning new skills.
Reading to Learn (by 4th grade)
By the time students enter the fourth grade, they are on the threshold of being able to read to learn. Does this mean that they can suddenly read a college-level psychology book? Of course not. For much of the rest of elementary, middle, and high school they will constantly be adding to their vocabularies. After all, the same can be said about PhDs! Even the most advanced concepts often require some sort of imagery to set within a person's mind. However, by this point the process of reading is not the hurdle. Instead, it is issues of clarity and vocabulary, which can only be pushed aside with more reading.
Stages of reading (pre-kindergarten to 4th grade): going from learning to read to reading to learn
In this lesson, we tracked the progression of reading development from pre-kindergarten through the fourth grade. We started by emphasizing the importance of starting everyone on a level playing field through incorporating pre-reading activities like rhymes. Students then begin to establish relationships between printed words and spoken words, and these relationships are reinforced by heavy use of images and repetition. As students further develop their abilities, they rely less and less on repetition and images, eventually making the transition from learning to read to the stage of reading to learn.
6 Components of Oral Language Acquisition
Oral language acquisition means understanding the basics of phonology (sounds used within a language), vocabulary (words), grammar (construction of sentences), morphology (formation of words), pragmatics (proper use of language), and discourse (using language to communicate).
How oral language skills impact reading development
First, oral language develops vocabulary. The ability to associate the abstract concept of a word with a concrete meaning is first developed through oral language but is also critical for reading. Second, oral language communicates specific meanings. Children with higher oral language skills can use language to communicate exactly what they want to say, and this skill directly translates to understanding the meaning within written texts. Third, oral language teaches culture. One of the first ways we learn culture is through our spoken language, and this constitutes the foundational background knowledge needed to interact within a culture and to contextualize information while reading. Finally, oral language builds comfort with communication. Children learn to use language and develop a desire to have language skills that improve their ability to communicate their needs and wants. This desire, this need to communicate, is one of the motivations that drives education and inspires children to want to learn to read.
How does speaking impact reading?
Speaking teaches people how language and communication interact. By learning to speak, people, especially young children, learn that words have specific meanings that are used to present an idea and share information. This basic understanding is crucial for learning to read. You learn how words and sounds interact to create meaning, and that understanding translates into reading comprehension. Language is the primary way that people learn culture, which teaches them the background knowledge about how people within a society interact.
Very often this basic background knowledge includes critical information that is assumed within written texts. Speaking skills also help improve reading by developing comfort with communication. When people work on and improve speaking skills, it both shows them how communication within a language works and fuels a desire to learn more forms of communication, by which I really mean reading and writing.
How reading impacts speaking
Reading increases vocabulary and teaches people how to use new words in context. You see how a word is used and learn how to use it yourself. Reading also builds phonemic awareness, or the process of noticing the individual sounds in a word. Reading comprehension is based on the ability to piece sounds into words, into sentences, into ideas. So, reading makes people aware of common sounds, spelling patterns, and grammatical structure within a language. Awareness of individual sounds within words helps people improve their pronunciation, awareness of spelling and grammar helps people form proper sentences, and all of these increase the ability of non-native speakers to understand what people are saying around them. Reading is shown to improve both the accuracy and fluency of speaking. Accuracy is defined as the correct use of vocabulary, grammar, and pronunciation. Fluency is the ability to spontaneously speak and communicate effectively. These skills together define effective speaking, and improving reading skills can increase them in children, adults learning a new language, and really...anybody else. Even for people who are well entrenched in a language, including their first language, reading is shown to continually improve verbal communication. So, if you want to be a better speaker, the trick is really in what you read.
Whole Language/Top-Down Reading Method
A method of teaching reading that relies on the meaning of stories and texts. In whole language, meaning takes priority over individual words.
Aspects of a Whole Language/Top-Down Lesson
- Focuses on real literature
- Sees reading as primarily oriented toward meaning
- Lets students work at their own level and from their own interests
Advantages of the Whole Language/Top-Down Method
AUTHENTIC READING MATERIALS: texts used in real life
ENJOYMENT: since this kind of reading focuses more on meaning
SECOND LANGUAGE LEARNING: supports the meaning-based aspects of language more broadly
DIFFERENTIATION: because students in one class might be at many different reading levels, whole language can bring them together by focusing on stories and ideas they can all share. Meanwhile, they can work on strategies and decoding on individual levels but always be part of a dynamic literate community.
Disadvantages of Whole Language
PRIVILEGE: children who come from literate homes are exposed to vocabulary and reading strategies from an early age. A whole language approach that does not explicitly teach vocabulary and decoding might exacerbate gaps between advantaged and disadvantaged children.
SPELLING & MECHANICS: if used in isolation, whole language might never teach children how to spell conventionally. Similarly, children might suffer from a lack of explicit direct instruction in phonics and decoding.
WHOLE LANGUAGE SHOULD NOT BE TAUGHT IN ISOLATION, BUT WITH EXPLICIT INSTRUCTION OF PHONICS, SPELLING, DECODING, ETC.
Bottom-Up Reading Method
Treats developing reading skills as a sequential process whereby the reader takes the letters, assembles them into sounds, and those sounds form words and phrases.
Students must first learn the basics of phonics and how to decode words before more complex skills such as reading comprehension can be mastered. In this teaching process, children learn to read by first mastering the letters of the alphabet. Then they learn phonics, decoding skills, vocabulary, grammar, and eventually reading comprehension skills.
Process of Bottom-Up Reading
1. LEARN THE ALPHABET LETTERS
2. LEARN PHONEMES (consonants & vowels; the letter-to-sound relationship): allow students a multisensory approach so they experience seeing, saying, and hearing the various sounds they are learning through phonics. The teacher shouldn't try to teach all the types of vowel sounds at once. They should teach one type of vowel sound, such as long vowels, and give students a variety of ways to practice that engage as many senses as possible. Typically, teaching phonemes begins with the consonant sounds (b, c, d, f, g...) and vowel sounds (a, e, i, o, and u). Teachers may approach this by having students begin to recognize words that start with the consonant or vowel sounds. As students move through the phonics instruction, they progress to more complex phonemes such as blended sounds (br, cr, wr...) and digraph sounds (sh, ch, th, and wh). After learning a particular phoneme, students need the opportunity to practice using that skill in real stories and books. The challenge of phonics is finding texts, especially early on, that emphasize the phonemes you are teaching. Therefore, teaching phonics may rely heavily on leveled readers that use a sort of Dr. Seuss sentence structure to emphasize the phonemes being taught.
3. DECODING: translating a printed word into a sound. Phonics emphasizes teaching students the phonemes they need to decode unknown words as they encounter them in the text.
4. AUTOMATICITY: the stage where decoding becomes automatic. Learning to read is a lot like playing an instrument.
5. GRAMMAR STRUCTURES: ELLs can use grammar skills to help them decode: a verb means action, a preposition means direction, etc.
6. COMPREHENSION SKILLS: students can now begin to understand the meaning of the texts. Maybe now is a good time to add the top-down/whole language approach for texts that focus solely on meaning :)
After Decoding in Reading
Once students have mastered decoding words, they can then learn to apply the rules of grammar to written text. Grammar is the system or structure of any given language. For example, they can learn to use structures such as verbs, which show action, or prepositions, which show direction, to help them grapple with the meaning of the text. As students encounter more complex text moving up through the sequential process of bottom-up reading, they will need to understand the fundamentals of grammar to help them decode. Research has also shown that teaching grammar explicitly to children with learning disabilities such as dyslexia has improved their reading comprehension and written expression skills. Once students build the skills to decode words automatically, you can focus on building their comprehension skills. Reading comprehension skills include strategies such as having students make connections between the text they are reading and their prior knowledge. These skills can also include having students ask questions about the text, or even create visuals to show its meaning.
Reading Instructional Strategies
An instructional strategy is a way of approaching teaching something. It's like a road map for how to teach a topic.
For example, Raza can teach reading many different ways: he can have students follow along as he reads to them, teach them to use their fingers to trace the words, or teach them how to sound out unfamiliar words. All of these are different instructional strategies.
Psycholinguistics is a field of study that combines cognitive psychology and linguistics. Essentially, psycholinguistics brings together information on how people learn with information about the study of language. In turn, teachers like Raza can use psycholinguistics to help mold their teaching.
Psycholinguistic studies show that we decode information on three levels when reading:
VISUAL DECODING: involves noticing the differences in the shapes of letters and words;
SYNTACTIC DECODING: involves noticing the differences in the structure and order of words; and
SEMANTIC DECODING: involves understanding the differences in the meaning of the words.
Whole-to-part or whole language reading instruction
Reading is taught through a text-rich environment instead of through explicit phonics instruction. Whole-to-part instruction tries to bypass the visual and syntactic decoding (learned in explicit phonics) and go straight for semantic decoding; reading is picked up naturally, kind of like how people learn their first language by simply being surrounded by it. One variant of whole language instruction is reader-based instruction, which focuses on offering students texts and letting them discover their texts naturally. It focuses on comprehension, vocabulary, and sight word development, NOT phonics.
5 elements of balanced reading instruction
PHONICS: the relationship between letter symbols and the sounds they make. Beyond simply learning the sounds that each letter of the alphabet usually makes, phonics instruction includes understanding more complex sounds that letter combinations create.
PHONEMIC AWARENESS: relates only to the sounds that create words rather than the letters that represent those sounds. Some phonemic awareness skills include phonemic segmentation, phoneme identification, and phoneme blending. Phonemic segmentation involves dividing words into their phonemes. For example, the word 'moth' would be broken into three phonemes: /m/ - /o/ - /th/. Phoneme identification is using knowledge of the phoneme /m/ from 'man' to learn the word 'mat.' Phoneme blending is connecting phonemes to create a word. For example, /d/ - /i/ - /p/ is 'dip.'
VOCABULARY: learning new words either through explicit instruction or through context clues.
FLUENCY: the ability to read both orally and silently, quickly and easily enough that meaning is not lost.
READING COMPREHENSION: the ability to understand what is being read.
3 components of the balanced literacy framework
READING WORKSHOP: reading workshop is composed of explicit reading instruction using a variety of authentic texts, which are texts found in the real world.
Shared reading: includes read-alouds and choral readings that are done in a whole-group setting with teacher support. Typically, the same books or poems are read for several days to encourage fluency.
Guided reading: is generally done in a small-group setting where students of the same reading level read a common text and are engaged in activities within their zone of proximal development (instructional level).
Independent reading: provides students the opportunity to practice reading books that are at their own independent level.
WRITING WORKSHOP: writing expands a student's ability to build meaning from words by allowing the students to create their own texts. Writing workshops consist of three components: shared writing, guided writing, and independent writing.
Shared writing: is a daily opportunity for teachers to model the writing process for students as the class creates a text together that is scribed by the teacher.
Guided writing: takes place within a flexible small group where a teacher supports students in learning a skill that this particular group of students has found challenging.
Independent writing: is an opportunity for students to work on their own to practice the writing skills they have learned. It takes place throughout the school day and includes both self-selected and teacher-assigned topics, such as a reflection activity.
WORD WORK: word work can take place both as separate explicit instruction and as part of both reading and writing workshop. Word work encompasses phonemic awareness, phonics, high-frequency words, and vocabulary instruction.
Cognitive Academic Language Learning Approach (CALLA)
A method of combining cognitive theory with lesson planning and learning strategies to develop content that builds the academic fluency of ELL students. It is now implemented to help those students who have gained social fluency with English but are struggling with their academic fluency, or those who may have academic fluency but are struggling to apply their skills.
CALLA's main concept: SCAFFOLDING
Scaffolding provides a great deal of instructional support for students handling challenging material and then slowly removes the support as the student becomes proficient and develops the necessary skills. Imagine builders working on a skyscraper. All the scaffolding there helps support the building and the workers; in this case, the scaffolding is instructional support, and the building is knowledge.
CALLA: 3 types of knowledge
DECLARATIVE KNOWLEDGE: factual knowledge, such as knowing that the boiling point of water at sea level is 212 degrees Fahrenheit.
PROCEDURAL KNOWLEDGE: knowing how to do a task, such as hard-boiling an egg in water.
META-COGNITIVE KNOWLEDGE: the ability to relate current tasks to previous experiences, such as knowing from boiling eggs in the past that you can't cook them too long because then they become hard and rubbery.
PLANNING: students need to set goals, choose strategies to meet those goals, and allocate time and resources. Plan to support your lesson with visual or audio clues, like props or songs. Let's say for our example lesson that you are teaching your students about the Great Depression. In the planning step, you work with your students to set goals, such as understanding the events that led to the Great Depression. Then you help them figure out what resources they need, in this case probably a book or video about the events of the Great Depression, and allocate the necessary time. You have also thought ahead and brought some props to give the lesson context, like a coupon book for food rationing.
MONITORING: this is the part where scaffolding comes into play. During this step you make sure the student has all the support they need to understand the concepts of the lesson. The teacher has the responsibility to provide feedback to their students and ensure that they have comprehended the material.
Feedback at this time might also involve helping the student understand concepts by using their native language. This is fine, just so long as you make sure they understand the concept in both English and their native language. For our fictional lesson on the Great Depression, you could support your students in a number of ways. If they are silently reading, you might ask them to raise their hand if they encounter anything they don't understand; if they are watching a video, follow it up with some discussion.
EVALUATION: the final step involves helping the students evaluate their own work and learning. You ask your students about the core concepts to make sure they are understood. You can also quiz them on whether they met their goals and whether their strategies worked. You can use the goals they set for themselves as a starting point for this discussion. If they met the goals, that's fantastic. If not, then you need to work with the student to figure out what went wrong. Were the goals too difficult? Did their strategies not work? It's not a perfect process, and you'll get better.
CALLA Lesson review
Let's review the basics. The cognitive academic language learning approach (CALLA) was designed to provide well-structured lessons for ELL students to increase their fluency in the often challenging language of academia. CALLA is based around helping those students who have conversational fluency (basic interpersonal communicative skills, or BICS) but are struggling with the decontextualized language used in lessons (cognitive academic language proficiency, or CALP). To achieve this, CALLA uses scaffolding, or highly structured and supported lessons in which support is slowly removed as students gain proficiency, and also focuses on supporting all three types of knowledge: declarative (fact-based), procedural (how-to knowledge), and metacognitive (relating past knowledge to the current task). Teachers can support their students' development of cognitive academic skills by helping them prepare for lessons and set goals, making sure they understand academic terms, and encouraging self-evaluation of learning. Things like visual aids, lesson planning, and restating the more complex concepts in the native language of the student (as opposed to all of the concepts) are methods that can assist in this.
The alphabetic principle is the knowledge of letter/sound relationships. When a child understands both that speech is made of individual sounds, also called phonemes, and that these sounds are represented by letters arranged to form words, the ability to read and write will naturally follow. Children who don't understand even one of these key concepts may struggle with learning to read.
Teaching the Alphabetic Principle
Students first need to recognize speech at its most basic level: the individual sounds, or phonemes. For example, the word 'cat' has three phonemes: /k/, /æ/, and /t/. Similarly, the word 'like', though it has four letters, only has three phonemes: /l/, /ai/, and /k/. Once Mandy determines students are phonemically aware, including the ability to manipulate sound into segments and syllables, she will teach symbols for sounds (also known as letters). The introduction and instruction of these letters needs to be in isolation or small chunks so students keep them straight. For example, introducing /b/ and /d/ at the same time can lead to confusion between the two letters. Mandy then allows children to practice these new letter/sound relationships often in reading, writing, manipulative toys, and games.
Before introducing a new letter, Mandy makes sure a student is comfortable with the current knowledge. In this way, Mandy can reinforce learned skills and build upon that knowledge to create new understandings that help develop reading abilities.
How many English phonemes are there? How many alphabet letters? 44 phonemes, 26 alphabet letters.
Teaching the alphabetic principle to ELLs
Phonemic awareness: as with her English-speaking students, Mandy makes sure the children have a solid understanding of speech sounds. However, because many languages have different phonemes than English, Mandy spends extra time ensuring the 44 English phonemes are distinguishable to her students.
Phonics: she also teaches the sound/symbol relationship in phonics. Students who are not readers in their native language often need extra help understanding these conventions. For some who already have a basis of alphabetic principle knowledge in their native language, Mandy monitors for confusion with English rules. Because the 44 English phonemes are expressed with just 26 letter representations, it is important to make sure students do not miss connections at this stage.
Vocabulary: non-native English-speaking students need to be directly and indirectly taught words used in the English language. Students who are able to sound words out need to be able to understand what the word is, too. If a student sounds out 'cat', working knowledge of what that word means (via recognizing the relationship between the sounds and meaning) will be the only way she'll understand what she reads. Once children progress past basic alphabetic understanding, Mandy can then focus lessons on more complex tasks, such as comprehension and fluency. Knowing the alphabetic principle is necessary before higher-level skills are introduced, and every student, regardless of English level, will need it to progress.
Reading comprehension activities in Art class
Imagine coordinating a lesson with the art teacher on a prominent painting that involves reading pieces about not only the art in question but also what motivated the painter to create it. For some of your students, interest in the art will outweigh any disinterest in reading. Meanwhile, students who would otherwise have no interest in the technicalities of painting could realize something about the life of the artist that makes the piece more worthwhile.
Reading comprehension in STEM fields
Have students design a structure using toy bricks, but then describe in writing how to build it to classmates who have not seen the structure. The game aspect of the experiment often takes over, with students trying to figure out exactly what their classmates across the hall meant. Having the students discuss anonymously what was helpful and what wasn't helps improve nuance in both reading and writing, all while improving respect for STEM procedures.
The Monitor Hypothesis states that there must be a self-monitoring mechanism that allows a language learner to recognize when something about the language is simply not right. However, in order for this mechanism to work, the student must have an understanding of linguistic rules. As the English teacher, you must address basic linguistic topics with your students from different cultures. In other words, incorporate linguistic rules into your teaching. Don't make assumptions about basic understanding.
The Input Hypothesis states that students acquire language through learning material just a bit beyond their reach.
Scaffolding is the support given by a teacher to a student on an individual basis to help with difficult material. Eventually, the teacher removes the scaffolding when the student is ready. With regard to reading development for students from different cultures, teachers can use scaffolding to address differences in values.
The silent period occurs when the language learner is silent, saying nothing, but taking in information. Language learners need this time to digest the new language before actually speaking.
Affective Filter Hypothesis
This theory states that there are three filters that inhibit language learners: anxiety, no motivation, and low self-confidence. Basically, if the student feels those emotions, then learning is inhibited. If you can create a relaxing atmosphere which motivates and promotes confidence, then your students will learn more. Try to reduce the anxiety and pressure for students with different backgrounds. Give them confidence with praise and looser expectations. Do not push these students, as they might be in the silent period. These filters must be turned off or operating at low levels in order for strong readers to develop.
Theories that discuss how a language is learned: the Affective Filter Hypothesis (along with the hypotheses above).
Sheltered English instruction, or sheltered English immersion (SEI)
The teacher modifies content, curriculum, and lesson delivery to make it more accessible to the student. In this situation, the level of language fluency determines class objectives, and the teacher builds the curriculum based on the specific interests, backgrounds, and culture of the student. For instance, for a student from a culture with strong views on gender roles, literature dealing with those concepts can be chosen as the reading material. Use multiple forms of class activities. Try to incorporate many different teaching styles, like class discussion, artistic or creative projects, personalized writing prompts, and dramatizing a story. In this approach, you can even repeat content based on the needs of your students. This approach is very individualized, but these tactics can also be helpful even for students who might not need as much accommodation.
Cognitive Academic Language Learning Approach (CALLA)
This is a cognitive model of learning, which means it focuses solely on strategy-based instruction. This approach is mostly for students learning the English language. The teacher does not get to choose the content; he or she must use the content from the grade-level curriculum determined by the state. However, part of this approach is individualizing the vocabulary, verbal, reading, and writing skills incorporated into the lessons.
Differentiated instruction is the process of modifying instruction to meet the needs of diverse learners. In a fully differentiated classroom, each student has their own personalized instruction and materials.
Three Ways to Differentiate Instruction
CHANGING THE CONTENT (what is being taught)
CHANGING THE PROCESS (how it is being taught)
CHANGING THE PRODUCT (how ELLs demonstrate learning)
When talking about classroom materials, they can reflect any of these three things. For example, materials like textbooks are about content, lesson plans and posters or charts are about process, and worksheets and tests are about product.
Adapting Materials for CONTENT (what is being taught)
When it comes to differentiating content in her classroom, Mira can meet students at their level by offering materials, such as books and articles, on varying levels. In addition, Mira can choose culturally sensitive materials to make her content differentiated for students of different cultures. For example, many Western materials about World War II paint the Japanese people as villains, when not all of them were. Materials told from the Japanese point of view, or books about the experience of Japanese Americans during that time period, can reach students who are Japanese. Finally, Mira can differentiate content for students with different language abilities by having books and articles in many different languages. Even better, if she can include materials that are bilingual, such as a book about WWII that includes both English- and Spanish-language writing, she can help support her students who speak languages other than English.
Adapting Materials for PROCESS (how it is being taught)
Include many different modes of learning. For example, Mira can offer both visual and audio information when she's talking to the class by having a poster or slide show with the highlights of what she's saying. In addition, including kinesthetic or tactile modes of learning can also increase engagement and memory. Because there are cultural differences in learning methods, having different modalities allows Mira to include all cultures. In addition, both students who are a little behind the class and those whose first language is not English can benefit from having both visual and auditory information presented together.
Adapting Materials for PRODUCT (how learning is demonstrated)
Mira can have students write answers to English questions in their native language, which supports linguistic diversity. She can also allow students to produce work that reflects their cultural beliefs, such as a paper or a project that allows them to express their beliefs on a topic. Finally, struggling students can be given modified assessments or extra time on exams to help them.
How languages influence reading
Reading relates to the reader's ability to identify and follow the orientation and arrangement of graphemes, such as left-to-right, right-to-left, or top-down. When we first learn to read in our native language, we learn how to move our eyes across the page. It becomes so natural and automatized that we don't even think about it. Well...until we try to read a text that is oriented differently. If you are an English speaker and you take up Arabic, at least for a while your brain will find it very challenging to read Arabic texts, which are oriented right-to-left.
Orthography is the writing and spelling system of a language. There are alphabetic systems, which use letters to represent sounds, such as English, and there are syllabic systems, which use characters or symbols to refer to syllables, morphemes, or words, such as Chinese. Readers need to know individual symbols and their identities. With this being said, first language transfer may occur at the orthographic level as well. If an English language learner uses a different type of alphabet in their language, such as Cyrillic (Russian) or Hangul (Korean), they will need to remember what a letter looks like and discriminate between familiar letters: the letter 'P' exists in both the Latin and Cyrillic alphabets, but it corresponds to different sounds.
PHONEMIC AWARENESS: when sounds are combined, they make words. 'Book' has four letters but three sounds. There are 26 letters in English, but they can represent 44 sounds. To read fluently, English learners need to be able to hear, identify, and manipulate each sound. If a sound is not a part of a student's first-language sound system, they may not be able to detect it. That will affect their reading comprehension and spelling, as they cannot relate sounds to letters.
Morphology studies how words are formed in a language. A morpheme is the smallest meaningful grammatical unit of a language: in 'miscommunication', mis- is a morpheme. Usually, prefixes and suffixes help native speakers determine what part of speech a word is and then guess its meaning or convert it. When a student's first language is similar to English, and they are literate readers in their native language, they can easily transfer knowledge about parallel morphemes and apply it to English: -tion (English) and -ción (Spanish) both form nouns. However, in other cases, the learner's first language can provide misleading information about morphemes, which can affect reading comprehension in English.
Sentence structure: is essential for reading comprehension, as it directs readers' attention to important text information and helps them create schemas and expectations and recall information. Strong first-language readers know how to identify structures and navigate through texts. If English learners have solid knowledge of how English sentences are structured, they can build text expectations and decipher unfamiliar words. However, first-language transfer can hinder the reading process. If something in a sentence violates a norm they are familiar with from their native language, they may take longer to work out how each word fits into the sentence until they decipher the meaning. For instance, passive forms are more commonly used in English than in Korean; thus, a learner from Korea may struggle with deconstructing such sentence structures.
Factors that affect Second Language Reading development
Understanding vocabulary is a key element of reading comprehension. As English language learners advance, they acquire more words and learn to recognize and decode them faster, and their reading comprehension improves. English language learners whose first language does not share a lot of English words, roots, prefixes, etc. take longer to gain reading fluency in English, as they don't have the right tools to transfer from their language to guess meanings. In contrast, some languages share a lot of words similar in form and meaning, called cognates. Those are easily transferred across languages and facilitate comprehension, as learners can decipher the meaning with ease. By learning to recognize and decode morphemes, learners can move from unfamiliar to familiar more rapidly, which will enhance their reading skills.
Background knowledge is another prerequisite for reading comprehension. As mentioned before, when students read a text, they develop schemas and expectations. When those are accurate, they can better understand the text. However, English language learners come from different cultures and backgrounds, so they may have different or no prior knowledge of the subject; such gaps need to be filled beforehand. The more students learn about a topic before they read about it, the more prepared they are to add new information to their prior knowledge.
English language learners who come from low-socioeconomic-status homes have had fewer opportunities to build varied experiences and background knowledge due to a lack of parental time, stimulation, or money. They may have had limited exposure to books and academic work. Such learners may have poorer reading skills and strategies in their native language, which will affect their reading performance in English.
Ways to get ELLs to read!
PROMOTE LITERACY: create a classroom library/reading area that is rich in diverse and interesting texts.
INDEPENDENT READING: set aside time for independent reading of interesting texts during class.
INVITE PARENTS TO EVENTS: events that show parents how to teach reading strategies to their kids.
Classroom Accommodations for Reading
REDUCE READING LOAD: keep in mind that it takes a lot of effort to try to decode and comprehend unknown words in another language. It's important to spend time on explicit vocabulary instruction with ELL students. Before reading, preview the text to see which vocabulary words may be essential to understanding, and take time to teach them. You can have students collaborate on word sorts, or you can have them draw pictures to illustrate the words. Engage students in pre-reading strategies to help set the stage for their reading. You can do a picture walk with students by asking them to make predictions about the text based only on the pictures. Try to activate students' prior knowledge about the subject matter. Ask students questions about themes and topics from the text and have them elaborate on their responses. For example, prior to reading a story about friendship, ask students to discuss what makes a good friend and record their responses on the board. Graphic organizers such as Venn diagrams, KWL (Know, Want to know, Learned) charts, double bubble maps for comparing and contrasting, sequence maps, and others can help students establish a purpose for reading. Having a purpose helps encourage students to read actively, which enhances comprehension.
READ ALOUD: the teacher reads to the students.
THINK ALOUD: the teacher models her thinking process out loud as she reads to demonstrate interactive reading skills and improve ELLs' comprehension.
PARTNER READING: two students read a text aloud to each other. It's best to pair a native speaker with an ELL.
Text Adaptations for ELLs
You can use bilingual books, which present stories in both the native language and English. This can help students compare words between the two texts to enhance English vocabulary. Picture books are helpful for reinforcing content with pictures and can be used with students at all levels and grades. For example, students reading the play Romeo and Juliet by William Shakespeare can enhance their understanding by reading a made-for-children adaptation with pictures and simple language. Audio books are a great way to help improve ELL students' reading comprehension. Set up a listening station in your classroom with audio books, headsets, and a CD player, computer, or other equipment for listening. Students can listen to the audio version while following along in the text. For intermediate to advanced English learners, you can use graphic novels. These present complex plots and literary elements that are found in regular print novels, but also contain pictures for visual reinforcement.
Phonics is a method for teaching English reading and writing. It focuses on promoting students' ability to hear, identify, and manipulate phonemes, which are the smallest sound segments.
The goal of phonics is to enable readers to decode new written words by recognizing the relationship between written letters and their spoken sounds.
SOUND PATTERN GAME
The Sound Partners game helps students of any age at the basic language level learn to decode words. Start by creating a set of flashcards made up of different phonemes that can be assembled to create words. For example, you might include the phonemes for the word 'chicken' in your deck:
Card 1: ch
Card 2: i
Card 3: ck
Card 4: en
Shuffle the cards and give one to each student, then have them walk around the room to find their sound partners and decode the words hidden in the deck. Once students have found their sound partners and figured out a word, they will say it out loud for the rest of the class to hear. Have them also show the flashcards that made up the word, in the correct order, so the rest of the class can benefit as well.
ROOT OF THINGS GAME
This game uses word roots to create new words and can be an engaging way for students to increase their decoding skills and acquire new vocabulary. Start by writing a root, such as oct, on the board. After you explain its meaning (eight) and demonstrate its sound, students can take turns coming up with words that contain that root (octagon, October, octopus, and so forth). This activity is best suited for older students or students at higher language levels. However, you can adapt it for younger students by allowing them to work in pairs and use a dictionary to search for words.
How we all learn our first language.
Teaching Vocab to ELLs
Saying the word apple multiple times does not guarantee retention for the learner if there is nothing supporting it. Simple vocabulary words, particularly many nouns, can be supported with images, actions, or items. Having an image of an apple, or better yet using realia (i.e., a real apple), is even more effective. Some vocabulary, such as academic words (e.g., apply, consider, organize) and content-area terms (e.g., math terminology, science phrases), must be strategically taught and learned. These words will not naturally or frequently occur outside of the classroom. Students need to be taught the definition and spelling. They need to hear each word spoken alone and in context, then repeat it. This strategy will need to be used frequently for academic and content-area vocabulary to be understood. If a student's primary language and English have cognates - words that are similar in spelling and definition (vocabulary/vocabulario) - then the student may have a slight advantage when learning a second language. However, this doesn't always work; there are plenty of false cognates in all languages. For example, if you bet el billón in Spanish, you are actually wagering a trillion dollars!
Strategies for teaching vocab to ELLs
You can help your students by giving them sentence starters in English. These will make them feel comfortable and encourage questioning. Sentence starters that may help are:
How do you say -------- in English?
Can you please explain -------- again?
What does -------- mean?
A 5-by-5 table with the column titles vocab word, definition, synonym, visual, and how it's used in a sentence is perfect for learning and remembering vocab before the unit.
Chunking vocabulary into different groups is also helpful. One way to categorize words is by parts of speech. This can be turned into a game to make it more entertaining and meaningful.
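To make the chunking game concrete, here is a minimal sketch of how a teacher might prepare it, assuming Python is available; the word list, groupings, and the make_chunking_game helper are hypothetical illustrations, not part of the lesson:

import random
from collections import defaultdict

# Hypothetical unit vocabulary mapped to parts of speech; a real list
# would come from the unit being taught.
VOCAB = {
    "apple": "noun",
    "teacher": "noun",
    "run": "verb",
    "consider": "verb",
    "quickly": "adverb",
    "enormous": "adjective",
}

def make_chunking_game(vocab):
    """Return a shuffled word list for students and an answer key for the teacher."""
    answer_key = defaultdict(list)
    for word, part_of_speech in vocab.items():
        answer_key[part_of_speech].append(word)
    words = list(vocab)
    random.shuffle(words)  # students receive the words out of order
    return words, dict(answer_key)

words, key = make_chunking_game(VOCAB)
print("Sort these words into parts of speech:", ", ".join(words))
for part_of_speech, group in sorted(key.items()):
    print(f"{part_of_speech}: {', '.join(group)}")

Students get the shuffled list to sort into groups; the answer key stays with the teacher for checking afterward.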
Roots (base words), prefixes (re-, non-, un-), and suffixes (-ed, -ing, -or) should also be explored and understood to help with independently comprehending larger, more complex vocabulary. Homophones (night/knight) and homonyms (leaves (to go)/leaves (part of a tree)) will need to be constantly revisited and emphasized. Identifying figurative language is also necessary, specifically idioms (phrases with underlying meanings) and clichés (overused phrases). Slang can also fit into this category when explaining abstract meanings. If an ELL student is told to eat dirt, you might have a problem on your hands if they don't understand it is not literal. Popular culture and friends will expose students to most of these phrases, but they will also need to be taught explicitly so students don't end up with egg on their faces when learning the language. Strategies to teach ELLs to read. SCAFFOLD READING INSTRUCTION: break the reading lesson into smaller parts so that you are not giving your students too much information at once. You certainly do not want to overwhelm students who are already behind in their reading. Breaking the information into chunks keeps them from becoming overwhelmed or frustrated. Scaffolding instruction also helps you determine where students may not understand certain parts of the lesson and gives you an opportunity to develop a plan for re-teaching that part. You can also break reading into smaller, manageable sections. This can help students organize information to make reading easier and increase comprehension. The teacher can either chunk the text for students or have the students chunk the text themselves. For example, if students are reading a story with four paragraphs, the teacher may chunk each paragraph and check for student comprehension, or take two lines at a time, read the lines, then check for comprehension. Exposure to as much vocabulary as possible is probably the single most important strategy for a teacher of students who are learning the English language. Because vocabulary varies with each subject area, you should spend at least five to ten minutes a day explicitly reinforcing vocabulary that will be used throughout your lesson. Explicit instruction could include defining key vocabulary terms, using them in a sentence, and providing a picture that can be associated with each word. MODEL GOOD READING: you as the teacher practice reading a passage or story as your ELL students follow along. You may use guided reading practice for the whole class, or you may break struggling readers into smaller groups so that they can receive more personalized instruction. You may even stop to answer questions related to the reading to make sure that your students understand the text. As students become more comfortable with reading, you may allow them to take control of the reading while you listen to them, providing feedback about strengths and weaknesses in their reading. Differentiating instruction means using different materials, resources, and instructional methods so that all of your students' learning needs are supported. Guided reading is an approach in which teachers assist small groups of students who are at the same reading level. When developing and implementing guided reading strategies, keep in mind that they should be motivational.
After all, ESL students need encouragement because, from their perspective, reading is a huge and often scary challenge. Guided Reading Strategies. MOTIVATE ELLs TO READ: make sure students relax and feel comfortable about the reading assignment. We want them to look at reading as an enjoyable experience, not one to feel stressed about. You can do this by emphasizing that reading is a pleasure because we get to live stories and learn new things. You might give students a list of words they'll see in the text to learn before reading. Or you can ask students to quickly scan the text and underline the words they do not know, then explain those vocabulary terms to the group. Students can have fun while learning new vocabulary: have them talk about the new vocab terms and guess the meanings before you clarify for the group. Students can also use the new words in sentences or even draw a picture of the word, which is particularly appealing to elementary students. These same strategies can be applied to idiomatic expressions, language structures, and so forth. Before reading, students can also make predictions: based on the title, they can brainstorm about what the text will be about and what the conclusion will be. They can also use their skimming skills to familiarize themselves with the main idea, and the teacher can give clues about what the text will contain. (Be mindful not to provide a summary of the text, as this doesn't challenge students to improve their reading comprehension skills, and be careful not to give away the story!) While reading, it's important to encourage your guided reading group to ask about unfamiliar words, expressions, and grammar as they come across them. Students can take turns reading aloud, or they can read silently. Either way, it can be helpful to have students read a couple of paragraphs and then ask them a few questions to ensure the group understands the content. Ask questions that can be answered by the content of the text as well as indirect questions that ask students to reflect on the text, like 'What is your opinion on...?' and 'Why do you think...?' If the group has trouble answering these questions, you may need to change the text to better match the students' reading level. If they do appear to be grasping the text, the discussion can get students talking about the text in a relaxed way and motivate them to continue with their reading. Afterwards, ask students to write a summary of what they read in three lines. Have students answer questions that target comprehension of the main ideas. Ask students their opinion about the text, its characters, and its subject matter. Ask students to explain something new they learned from the text. If students read a narrative, they could recreate it with a brief dramatic production. Instruct students to make a storyline of the main events of the story using images and drawings. Reading Comprehension Exercises. IDENTIFY THE GENRE: pre-teach the students about the different types of texts. This may be in the form of an entire lesson or unit depending on the age and ESL ability levels of the students. Collect several different types of texts; this will guide what happens in the next steps. Remove any information from your samples that may indicate the type of text. Distribute the texts to students and instruct them to identify the type. Students may work in pairs or small groups to encourage teamwork and idea sharing. Optional: setting a time limit may help to facilitate the lesson. Have the students cross-check with other pairs and/or small groups for preliminary discussion.
Formally review and debrief the results. IDENTIFY THE VERB TENSE: source several sample texts containing the target grammar, or the grammar you intend to teach. Model the exercise; modeling shows students what your expectations are in terms of procedure and desired outcomes. Explain that students should identify and group similar pieces of grammar. For example, they may use different colored pencils to circle and categorize grammar structures. Distribute the sample texts to students. Students may work in pairs or small groups. Optional: setting a time limit may help to facilitate the lesson. Have the students cross-check with other pairs and/or small groups for preliminary discussion. Formally review and debrief the results. Optional extension: have students construct original sentences using the recently explored grammar structures. RACE TO ANSWER A QUESTION: select a text appropriate for your students in terms of length, complexity, and subject. Carefully preview the text and identify a piece of information near the end of the text that can be attached to a question. For example, you might formulate a question such as 'Why did the farmer only sell three pigs on Tuesday?' Tip: be sure that the information necessary to answer the question is located near the end of the text so that students need to read the entire article and focus on the content; in this sense, they are actively reading for comprehension. Get the students ready: distribute the text, ideally upside down so nobody can start reading first, ask the challenge question, and give a prompt such as 'Ready. Set. Go!' The first student to raise his/her hand with the correct answer wins. Optional extension: before confirming the correctness of the answer, have classmates discuss, debate, and analyze both the question and the proposed answer in order to promote active, whole-class engagement. Visual Pre-reading Activities. ILLUSTRATE A CONCEPT: if you are planning to work with a particular theme or concept in the book you are going to read, it is helpful for students to activate what they already know, assume, or feel about that concept. Begin by introducing the concept, which might be something like 'friendship,' 'community,' or 'poverty.' As a whole class, have your students brainstorm ideas, words, and phrases that they connect with this concept, and chart all of their responses. Then ask each student to create a sketch illustrating what that concept means to them. Emphasize that there are no right or wrong answers here. When students are finished, give them a chance to share their illustrations with the class. BUILD A VISUAL DICTIONARY: many ESL students benefit from some extra work with vocabulary before they read something new. Come up with a list of ten to twenty vocabulary words from the book you are about to work with. Break your students into small groups, and have each group take responsibility for three to five words. It is fine if some of the words are repeated across groups. Explain that their task is to define each word, come up with a sample sentence, and then create an illustration that shows the meaning of the word. Put your students' work together into a visual dictionary they can refer back to as they read the text. MAPS & SETTINGS: show your students a projected image of a map of the place they will be reading about, whether it is a whole country or a small town. Ask them to describe what they notice about the image and pose any questions that come up. Then have students work with partners.
Give each pair a smaller version of the same map, and have them write a longer list of observations and questions regarding the setting they are about to encounter in more detail. They can use the setting to make predictions about the text. Verbal Pre-reading Activities. AUTHOR STUDY: break your students into small groups and have each group take responsibility for learning about one chapter in the author's life, such as his or her childhood, beginning work as a writer, mid-career phase, and death. Then ask them to present what they have learned to the rest of the class. CHARACTERS & RELATIONSHIPS: give students a list that shows all of the characters' names and relationships to one another. Then have students work in small groups or partnerships. Explain that their job is to write five sentences; each sentence should explain something about how the characters in the book are connected. Even though students do not know much about the characters yet, this activity will give them facility with the characters' names and will help prepare them to understand the ways different characters are connected. Post-reading Activities. RETELLING A STORY: have students re-tell the story either verbally or in writing. Use a visual approach, e.g., picture dictation or comic strips: students place a series of pictures in the correct order to summarize a story, and advanced ELLs can assemble sections of the text in the right order to summarize it. COMIC STRIPS: give students a blank strip of 6-8 frames and have them re-tell the story by illustrating the main points. First show them several examples of comic strips. Complete the first one or two frames together as a class to model the process for students. Allow low-proficiency students to write the captions in their native language, since you're assessing reading comprehension and not language proficiency. Have students re-tell the story with a partner using their comic strips as a guide. WRITING STORY SUMMARIES: have students write a summary of the text. Show students that a summary should include the main plot points or key details of a text by allowing them to watch you model the process. SENTENCE FRAMES are templates of sentences that students complete. They are ideal for ESL students because they allow them to focus on supplying the content rather than worrying about grammar or usage errors. They also demonstrate the proper way to write a complete sentence, so the more ESL students use them, the more likely they are to pick up those skills. For example: This story is about _____. The main character's name is _____. At the beginning of the story, _____. In the middle of the story, _____. Post-reading Activities (cont.). You can also provide students with word banks to help them write summaries that cover the main concepts from the text. For example, if students are writing a summary of 'The Three Little Pigs,' you might write the following list of words and phrases on the board: first, second, third, house, straw, wood, brick. Tell students to write a five-sentence summary of the story, with one caveat: they must use all of the words on the board. An easy way to teach students how to write an effective summary is to use the somebody-wanted-but-so-then approach, which covers all of the essential points of a summary. For instance: Somebody: who was the character? Wanted: what did the character want? But: what was the conflict? So: what did the character(s) do to solve the problem? Then: how did the story end, and how was the situation resolved?
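A tiny sketch of the somebody-wanted-but-so-then frame, filled with an illustrative 'Three Little Pigs' style example of my own (not from the source):

```python
# The frame is just a fill-in-the-blanks template; each slot answers one
# of the five questions listed above.
frame = "{somebody} wanted {wanted}, but {but}, so {so}. Then {then}."

summary = frame.format(
    somebody="the three little pigs",
    wanted="to live safely in their own houses",
    but="a wolf blew down the straw and wood houses",
    so="they hid together in the brick house",
    then="the wolf gave up and the pigs stayed safe",
)
print(summary)
```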
ART & ORGANIZERS. Art is a great way for ESL students to demonstrate their learning because it requires little to no language proficiency while still assessing their understanding of a text. A collage is an ideal art project for many students because, unlike a drawing or painting, it requires little skill. Provide students with stacks of old magazines and ask them to cut out images, words, and symbols that relate to the story. They can glue the images onto poster board. When finished, have them share their posters with a partner. Graphic organizers are great for helping students organize information and other notes from their reading. They are ideal for ESL students because of their visual nature: it's much easier for an ESL student to 'fill in' or supply information on a graphic organizer than to write from scratch. One common type of graphic organizer is a Venn diagram, which consists of two overlapping circles. This type of organizer is often used for comparing two things, such as two different characters or literary elements from a story. In the center of the diagram, where the two circles meet, students record similarities. Low-proficiency students might add pictures, symbols, and words to their diagrams rather than phrases and sentences. A story map helps students record important information from a text, such as the characters, setting, and conflict. After completing their story maps, students can work with a partner to re-tell the story using their notes. How L1 Affects Writing in English. ELLs might write right-to-left or from the bottom to the top of the page because of the writing direction of their L1. Another factor is the consistency of spelling rules in a language. In English, the sound-to-letter correspondence is not consistent; in other languages it is, and those languages have shallow orthographic patterns: words are pronounced exactly as they are written, per specific pronunciation rules, as is the case with Spanish. Grammar transfers, too. For instance, in Spanish, Sergio knows that adjectives have plural forms made by adding 's'. For this reason, his English writing often appears odd in some instances, as when he writes things like 'the whites horses'. And while Spanish has the same basic structure of subject plus verb and complement for complete sentences, Sergio often drops the subject 'I' when writing in English. Sergio does this because the verb conjugation for 'I' in Spanish is unique and, thus, the subject 'I' is not necessary at all times. Many languages our ELLs might speak use different discourse structures for writing. For instance, Sergio uses very basic linking words between paragraphs when he writes in Spanish. For this reason, Sergio has a hard time understanding why linking expressions like 'to continue,' 'besides,' 'in addition,' etc. are expected in English writing. In Latin America, writing and composition is a subject in elementary school but not a skill that continues to be developed throughout school so that students can write as well as possible when they are in university or when they become professionals. This is the reason why Sergio often struggles to put thoughts in writing and tends to produce a lot of run-on sentences. In short, ELL teachers can always do a bit of research on the cultural background their students come from in order to understand and work around expectations for writing in English. Accommodations for Writing. BILINGUAL DICTIONARIES: this accommodation is typically useful for students who are literate in their first language and who have at least an intermediate level of English proficiency.
Cons: it takes time to look up words, and dictionaries often don't teach the correct form of a word, which can result in unclear writing. SENTENCE FRAMES: often, sentence frames are accompanied by word banks so that students can choose the appropriate words to fill in the blanks. Sentence frames reduce the pressure associated with asking an ELL student to produce language. They help model the correct format and structure of the English language, so students have exemplary models to refer to. Over time, students will need to depend less and less on the sentence frames as they learn to write their own sentences. REDUCE WRITING LOAD: ELL students should not be required to produce the same amount of text as their native-speaking peers. If students are required to write a paragraph on a specified topic, ask ELL students to try writing only two or three complete sentences. This will ensure that the writing they do submit is their best work. COLLABORATIVE WRITING: pair ELL students with native English speakers to complete collaborative writing tasks. One way to do this is with a Think-Write-Pair-Share. This activity requires students to independently think and then write about a topic or question posed by the teacher. When they finish writing, they pair up with another student to share their ideas, and then one partner volunteers to share the information with the class. Depending on the students' level of English proficiency, you can reverse the order of this activity to provide more support to ELL students. For example, after posing the prompt or question, pair students together to discuss and then write their response together. Teaching Strategies for Writing. Teach the steps of writing in order: when teaching the writing process, you would teach prewriting, drafting, editing, revising, and publishing in that order, and not the other way around. Each step in the process should be taught in its entirety before the next step is introduced. This way, students are given the tools they need to successfully navigate the content being delivered. I DO, WE DO, YOU DO: explain the writing skill or practice that will be covered. Give reasons why this skill is important, and explain the objectives of the lesson. Model the skill or practice (I do): talk through the process by clearly identifying all of the steps you take. For example, if you are explaining how to use a graphic organizer as part of the writing process, you should actually model how to complete the organizer. Guided and collaborative practice (We do): work with students as you complete the skill/practice with their guidance, asking for clarification along the way; then observe and offer guidance when needed as they collaborate on the skill/practice with others. Independent practice (You do): once students have developed a strong understanding, have them work independently to master the skill/practice. Visuals & Examples for Writing. ANCHOR CHARTS are posters that record key concepts, cues, and guidelines during the learning process so that student thinking and understanding is represented visually. Break down descriptions and concepts piece by piece, so students are able to work through the processes on their own, step by step. When teaching how to write a paragraph, for example, have a color-coded anchor chart that breaks down the parts of a paragraph, but also have an example paragraph written on color-coded sentence strips to match the colors on the anchor chart. For every element of writing you teach, make sure there are visuals and examples to assist student comprehension.
You will want your ESL students to partake in both academic and real-world writing exercises. Aim to familiarize students with writing independently, writing through collaboration, and writing using technology (e.g., word processors, e-mail), so they develop a well-rounded set of skills. JOURNALS: have students keep a journal for either personal use (brainstorming, getting thoughts onto paper, reflecting, or reviewing) or for communication purposes (teacher-student communication or peer journal communication). Allow students to use journals simply to practice writing. In the beginning, this can be used to establish a writing baseline so progression can be tracked. As students advance in their writing skills, begin assigning journal topics or prompts for grading and assessment. SPEED FREE WRITING: students just write without worrying about topic, grammar, punctuation, or formatting. This is used to get students comfortable with using their imagination when it comes to writing. For a paired variation, divide the class into pairs and write a topic or prompt on the board. Explain to students that they will have 2 minutes to think about the topic/prompt and write a few notes or ideas on paper. When the 'thinking' time is up, explain that students will have 5-10 minutes to discuss their notes or ideas with their partner. Provide higher-thinking questions for students to use until they are familiar with this exercise. When the time is up, students are to use their notes and insight from the peer discussion to write about the topic/prompt. LETTERS & POSTCARDS: this is a real-life skill that will help students learn to communicate through writing. Letters and postcards are useful for teaching writing-for-purpose skills. Explain and model all elements (greeting, body, closing). Model and practice, but also consider providing templates for younger or beginner-level students. You can set up pen pals, or just have students write to one another in class as they continue practicing this style of writing. EMAIL: like letters and postcards, email writing is beneficial for teaching students to communicate through writing. Email writing is useful because it also develops 21st-century skills that students can use in the workplace. Make sure to teach or review the computer skills needed to compose and send an email, since some students may not be familiar with this process. After providing examples and templates, have students write various types of emails to you as a means of practicing online communication skills and etiquette. ESSAYS: an essay is a written piece that is made of multiple paragraphs and focuses on one topic. Making this clarification can prevent students from being confused about what product to turn in. It is also helpful to show students an example of an essay. Five steps of the writing process: (1) prewriting: organize ideas about the topic; (2) drafting: create sentences and paragraphs using those ideas; (3) revising: add or remove elements to make the paragraphs strong; (4) editing: ensure punctuation, capitalization, sentence structure, and transitions are correct; (5) publishing: re-write the essay with no errors. Depending on a student's language ability, they may need sentence starters in order to begin the writing activity. Provide students with sentence starters that relate to the topic. A vocabulary bank can be given to students to help them stay on topic and generate ideas. Supportive Materials for Writing Essays: SENTENCE STARTERS that relate to the essay topic; a VOCAB BANK to help students stay on track and generate ideas.
CHECKLISTS: have students create their own checklist, or give them a checklist of what should go into their writing (topic, thesis, organized ideas, supporting details, final touches, etc.). Elements That Make Essays Stronger. AUDIENCE AWARENESS: emphasize to students that the teacher is not always the audience; the audience can be determined based on the writing prompt or topic. TRANSITIONS: used to move between ideas in sentences and paragraphs. Students should understand that transitions make it easier for their audience to understand and follow the points they are trying to make. Show students examples of writing that is strong, average, and weak to help the information stick. A PARAGRAPH consists of three to five sentences written about one particular topic. After explaining the definition of a paragraph to students, show and discuss some examples. This will be especially helpful for English language learners who need the additional support of visual aids. Point out the topic sentence, supporting details, and conclusion. Emphasize to learners that transition words should be used to signify that a paragraph is coming to an end. Parts of a Paragraph. TOPIC SENTENCE: reveals the focus of the paragraph; though often the first sentence, it can be located anywhere in a paragraph. SUPPORTING DETAILS: sentences that back up a topic sentence and strengthen its point; they also connect the topic sentence to other important points in a paragraph. CONCLUDING SENTENCE: the sentence that brings a paragraph to a close. Explain to learners that concluding sentences are like the summary of a paragraph and should be used with transition words that signify a clear ending is taking place. Resources for Paragraphs. GRAPHIC ORGANIZER: a visual tool designed to help learners get a handle on, and show relationships among, their thoughts. It can take the form of a web, chart, diagram, or list; students can even make their own. Once students write down their thoughts and ideas on the organizer, they can decide how to order their sentences so that their paragraphs will flow. SENTENCE STARTER: provides clues to guide learners on how to complete a sentence. In other words, if they are having trouble generating a sentence, this can help them devise its beginning, e.g., 'A good friend is someone who...'. EXPOSITORY WRITING allows you to explain or show a topic; the term comes from 'expose,' which means to show something through writing. Accompany your definition with some examples: a magazine article about a product, a scientific article about a treatment for a disease, an instructional manual on how to use a cellphone, etc. Then have your students talk about some examples of expository writing they may have encountered in real life; this way, you can correct them if someone, for example, says they read a short story. Next, discuss the objective of expository writing, which is to inform readers about a product or topic and share important information about it. Ask your students about the advantages of expository writing, such as 'you learn something.' In sum, a motivating introduction includes a simple definition, examples, and the objective of expository writing. Your students are now ready to move on to the task of writing. Teaching Expository Writing. TOPIC SENTENCE: tells the reader what the writing piece is about in one or two sentences. SUPPORTING DETAILS: arguments and/or reasons that support the topic sentence. Expository Instruction Strategies. Have students make a chart of what they want to write about. They can list supporting information, examples, etc., and classify each idea in an orderly way.
Give students time to discuss the ideas they wish to put in writing. This way, you can correct and/or help them develop their ideas. Encourage students to ask you about new words or expressions they may need for their expository writing. Help your students through a correction of a first draft; this way, they do not feel the pressure to present the final product all at once. Give value to the final product students write by displaying their compositions on a bulletin board or distributing them in a class publication. Assigning value to students' work helps to boost their self-esteem while they are learning. NARRATIVE WRITING: narratives are useful for teaching structure, plot, and character development. When ELL students utilize narrative writing, they are essentially creating a story that is meant to both inform and entertain the reader. Secondly, narratives give students an opportunity to develop prewriting skills: when ELLs prewrite, they can make use of vital writing tools such as brainstorming and outlining. Finally, narratives enable students to communicate with the teacher and each other on a more personal level than other academic forms of writing allow. Preparing to Write Narratives. PROVIDE STUDENTS WITH LEVEL-APPROPRIATE EXAMPLES OF NARRATIVES: a mix of short, long, fiction, and nonfiction. ALLOW ELLs TO CHOOSE THEIR OWN NARRATIVE: this increases personal interest. ANALYZE A NARRATIVE AS A CLASS: break a larger narrative down into smaller pieces. This process can be extremely helpful because it allows students to break down a larger work into smaller, more digestible elements. You can also put students into small groups and assign each group a narrative to analyze. An analysis of a narrative should include discussions about the structure, format, tone, and message of the text. You might also parse the vocabulary choices, sentence structure, and literary devices the author employs. Ask students for personal critiques of narrative works; the opportunity to give personal perspectives can help students learn how to support and defend their opinions. Narratives in Use. Once you feel your learners have a solid grasp of how narratives differ from other forms of English writing, the next step is to give them a chance to practice. The practical application of writing skills is the best way for students to learn how narrative writing is really done. It can be helpful to begin with a short, creative assignment that enables students to use their imaginations or personal experiences to create a narrative. If you feel your students may benefit from simpler tasks to begin with, you can assign a narrative that asks students to tell a story from childhood or explain what happened to them last weekend. Remind students: before you write, ensure that your subject or topic will engage the reader. Be creative, but don't let creativity get in the way of clear, cohesive writing. Keep your writing interesting by varying vocabulary and sentence structure. Use the writing tools you feel confident with, but don't be afraid to explore unfamiliar literary techniques such as foreshadowing or allusion. Strategies for Spelling Development. WORD SORTS: students are provided with words that they sort into categories. The teacher can provide the categories, which is often referred to as a closed word sort, or students can complete an open word sort by creating their own categories. For a closed sort, the teacher can give students a graphic organizer with labeled columns.
Students can either write the words in the appropriate column, or they can use small note cards with the words written on them; this allows them to physically place each word in the appropriate column. For example, let's say a student has five spelling words, each written on a separate note card. The words are orange, brown, red, blue, and yellow. The teacher provides the student with a graphic organizer containing a chart that is divided into two columns. The first column is labeled 'words with one syllable,' and the second column is labeled 'words with two syllables.' The student would place the note cards that read 'brown,' 'red,' and 'blue' in the first column. The other cards would be placed in the second column. For an open sort, the same methods apply, except the headings on the graphic organizer are blank, and students fill them in independently. Words with two syllables is one example of a category that can be used for word sorts. MAGAZINE SORT: students are given magazines and can search for words to cut out and put into their categories. ALPHABETIZING: another strategy for building spelling skills is to use the same word lists and cards that were used in the sorts and have students alphabetize them. Putting words in alphabetical order requires students to study the vocabulary words and pay close attention to spelling patterns. A fun way to alphabetize is to break students into small groups and give them a set amount of time to alphabetize the words; whichever group puts all the words into the correct alphabetical order the fastest is the winner. COGNATES, which are words in two different languages with similar meanings, spellings, and pronunciations, can also support spelling: English attention / Spanish atención; English center / Spanish centro; English dinosaur / Spanish dinosaurio; English December / Spanish diciembre. Common Spelling Challenges. HOMOPHONES: words that sound the same but have different meanings and are spelled differently. Many homophones need to be memorized in order to be properly differentiated and learned. Some of the most common ones: there, their, they're; affect and effect; ate and eight; flower and flour. DOUBLING & REMOVING: when there is a short vowel followed by a consonant in a root word, the consonant is doubled before adding a suffix. For instance, run becomes running, and clap becomes clapped. Sometimes, adding a suffix also means removing the final silent vowel. This happens, for instance, with the word rake changing into raking, or the word remove turning into removing. Learning and internalizing rules about how words transform when adding a suffix will make a big difference in your students' spelling. SWALLOWED SYLLABLES: speakers talking quickly or fluently often 'swallow' syllables, making it difficult to understand how to spell a word appropriately. For example, the word every often sounds as though it only has two syllables, when it actually has three. The word different can sound like it has two syllables instead of three. Actually can sound like it has three syllables instead of four. Remind your students that every syllable should include a vowel. Clapping out the syllables of a word as you or your students pronounce it slowly will help students note all syllables and spell the word properly.
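To make the closed sort above concrete, here is a rough sketch that groups the example words by syllable count and then alphabetizes them. The syllable counter is a crude vowel-group heuristic for illustration only (my assumption, not a linguistic standard), so real classroom lists should be counted by hand:

```python
import re

WORDS = ["orange", "brown", "red", "blue", "yellow"]

def rough_syllables(word):
    """Approximate syllables as runs of vowels, with a crude silent-e fix."""
    groups = re.findall(r"[aeiouy]+", word.lower())
    n = len(groups)
    if word.lower().endswith("e") and n > 1:  # drop a final silent 'e'
        n -= 1
    return max(1, n)

columns = {}
for w in WORDS:
    columns.setdefault(rough_syllables(w), []).append(w)

for count in sorted(columns):
    print(f"{count} syllable(s): {columns[count]}")   # matches the sort above

print("alphabetized:", sorted(WORDS))                 # the follow-up activity
```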
Parts of Speech. NOUNS: people, animals, things. ADJECTIVES: words that describe nouns. ADVERBS: words that modify a verb, an adjective, or another adverb. VERBS: words that express action or state. Assessing Writing for Low-Level ELLs. Low-level ESL learners should be taught conventional grammar and spelling in order to build a solid foundation for future, more complex writing tasks. So, when assessing basic writing assignments, it can be helpful to focus on correcting mechanical errors. One tool that can be extremely helpful is a marking key, which shows students exactly what errors they have made. This can also greatly speed up grading time and can be used with intermediate and advanced students as well. Marking key example: SP = spelling error; WO = incorrect word order; RO = run-on sentence. After handing out a copy of your marking key to students, tell them to keep it for future reference. It can also be helpful to post the key in a highly visible area of your classroom. As students become accustomed to the marking abbreviations, it will be relatively easy for them to determine what types of errors they are making and in which specific areas they need to improve. For intermediate to advanced learners, the writing assignments inevitably become more difficult. At these levels, mechanics are still important but should count for less in an overall assessment score. The reason for this change is the need to assign more importance to organization and style. Organization and Style. Is the writing coherent? Does the writer follow a clear structure (e.g., thesis statement, introduction, body, conclusion)? Is the writing cohesive? Does the writer use transitional signals and sequencing to connect paragraphs, sentences, and ideas (e.g., however, therefore, in addition, first, second, next, finally, in conclusion)? Does the writer stay on topic? Are there any sentences that do not relate to either the thesis statement or the body paragraph topic sentences? Does the voice of the piece remain consistent throughout? Is there significant switching of tenses or pronouns (shifting from past to present tense for no reason, changing from I to we, etc.)? Does the writer answer all parts of the writing task posed in the assignment? Was any necessary information ignored in the response? Was unnecessary information added? Example writing score guide: 15% topic relevance; 15% answer completeness. Sheltered Instruction Observation Protocol (SIOP) is a research-based method of instruction that addresses the academic needs of English language learners (ELLs). It eliminates the pull-out system by combining multiple instructional components with teaching strategies to ensure the content and language needs of ELL students are met as they learn alongside their native English-speaking peers in mainstream classrooms. 8 SIOP Components. LESSON PREPARATION: both content and language objectives are reviewed at the beginning of the lesson and then analyzed upon completion of the lesson. Clearly define CONTENT objectives, e.g., SWBAT draft a conclusion paragraph for their essay. Clearly define LANGUAGE objectives, e.g., SWBAT use transitional phrases in writing. Include themes, standards, topics, materials, and vocabulary. Utilize multiple methods of content delivery (audio, visual, charts, etc.). Ensure application allows students to attain and demonstrate understanding.
BUILDING BACKGROUND: create links between past lessons and experiences, building background knowledge as a launch pad for new lessons. Ensure that you: focus and motivate students by connecting to what they already know; address how students' personal experiences can relate to the content area; directly link concepts to students' background experiences to make learning relevant (this experience can be personal, cultural, or academic); link past learning to new content by referring to books, lessons, or charts that students have worked on previously; and use what students have learned in the past to help them learn new vocabulary. COMPREHENSIBLE INPUT: the instructor focuses on presenting new information in a way that can be understood by all students. When it comes to objectives, content, vocabulary, etc., ask yourself: Is it understandable? Can they explain it back to me? Be sure to: use language that matches students' proficiency level; make explanations of tasks clear by using step-by-step sequencing with visuals; give plenty of examples by modeling, demonstrating, and participating with students; enunciate clearly, speaking slowly and purposefully; use gestures, pictures, props, and objects to make content clear; consistently use scaffolding strategies, such as modeling, guided practice, and independent practice; use the think-aloud strategy to show students how to work through concepts and learning strategies; and encourage higher-order thinking, delving, and questioning throughout lessons. INTERACTION: allows students to learn from one another as they practice the skills being taught in the classroom. Allow open discussions about content, lessons, and objectives. Use a variety of grouping options, such as whole group, small group, partners, and independent work. Consistently provide sufficient wait time for student responses. Use structured oral language routines, like a talking stick, lines of communication, or give one/get one, to get students talking and interacting. Allow for clarification in students' native language, if possible, when it will ease difficulty with acquisition. PRACTICE & APPLICATION: build enduring knowledge through practice and application of the concepts and content being presented. Provide guided practice before having students work independently. Use activities that require students to apply both content and language knowledge; these can be journals, discussion circles, subject-related interviews, or scaffolded graphic organizers. Use activities that integrate all language skills. LESSON DELIVERY: how a lesson is delivered determines how content will be received, deciphered, and retained by students. Ensure the lesson delivery follows the content and language objectives. Respect the pace of your students, and match the lesson to their ability level. Make sure students are following along throughout the lesson; frequently check for comprehension. REVIEW & ASSESSMENT: reflecting on a lesson's effectiveness is important for determining what changes need to be made before a lesson is used again. Complete assessments of content and language learning objectives throughout the lesson. Review content and language objectives with the class to see if students believe the objectives were met. Complete a whole-group comprehensive review of vocabulary and content concepts. Get feedback from students with reflection prompts. Complete an overall assessment at the end of the lesson.
Hypothetically, if we launch a probe with a camera towards a black hole that transmits a video signal in real time to us at a safe distance, what will we see? As the camera approaches the black hole, gravity will increase, and as a result, time for the spacecraft and its camera will slow down relative to us. You can see such an effect, for example, in the movie Interstellar. An interesting effect arises from this: if the camera transmits, say, 25 frames per second under normal conditions, then as it approaches the black hole, we will receive fewer and fewer frames per second from it. (An animation of a black hole by NASA shows how gravity warps spacetime.) In other words, the signal from the camera will experience a gravitational redshift: the wavelength of the signal will increase, and the closer the camera is to the black hole, the faster this shift will occur. If there were an observer on the spacecraft with the camera, then from his point of view the camera would continue to shoot 25 frames per second; it is the distant observer who would receive the footage ever more slowly because of gravitational time dilation. In the frame of reference of an observer located far from the black hole, time flows very differently: the number of frames per second that we receive from such a camera will first begin to decrease, and then the time interval between individual frames will grow to hours, days, years, centuries, millennia, and so on. To record such a signal as it shifts into the long-wavelength region, we will need specialized equipment; in addition, we will also need to solve the problem of interference created by matter falling onto the black hole. However, these are purely technical problems that can be solved with the help of specially designed receiving equipment, as well as the use of noise-resistant signal coding. The final image transmitted by the camera depends on the mass of the black hole into which it falls. If the camera falls into a black hole of stellar mass (the masses of such black holes usually range from 5 to several tens of solar masses), then the tidal forces of the black hole will tear apart the spacecraft along with the camera on approach. This is because the inhomogeneity of the gravitational field of such a black hole grows enormously as the camera approaches it. Tidal forces arise inside a solid body because of this inhomogeneity, and at some point the body (in our case, the apparatus with the camera) will simply be torn apart, eventually down to atoms. In this case, all that we will see in the video is the black hole's accretion disk, consisting of red-hot matter circling the black hole, and a completely black ball in the middle: the event horizon. A more interesting picture will be shown by a camera falling into a supermassive black hole. At first glance this may seem strange, but the gravitational field of a supermassive black hole is much more uniform near the horizon, and therefore a spacecraft with a camera has every chance of reaching the event horizon safe and sound. As the camera falls into the black hole's gravity well, its angle of view will begin to narrow until all the light from the universe degenerates into a small blue dot; at the moment of passing the event horizon, even this point will disappear and complete darkness will come. After the camera crosses the event horizon, it will apparently continue shooting, but the signal it transmits will never reach us.
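As a rough back-of-the-envelope illustration (my sketch, not the article's calculation): for a camera hovering at radius r outside a non-rotating black hole, the received frame rate scales with the gravitational time-dilation factor sqrt(1 - rs/r), where rs is the Schwarzschild radius. A freely falling camera would add a Doppler contribution that this sketch ignores:

```python
import math

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
C = 2.998e8          # speed of light, m/s
M_SUN = 1.989e30     # solar mass, kg

def schwarzschild_radius(mass_kg):
    return 2 * G * mass_kg / C**2

def received_fps(emitted_fps, r, mass_kg):
    """Frame rate a distant observer receives from a camera hovering at r."""
    rs = schwarzschild_radius(mass_kg)
    return emitted_fps * math.sqrt(1 - rs / r)

m = 10 * M_SUN                       # a stellar-mass black hole
rs = schwarzschild_radius(m)
for factor in (10, 2, 1.1, 1.001):   # move the camera toward the horizon
    print(f"r = {factor:>6} rs -> {received_fps(25, factor * rs, m):6.2f} fps")
```

The printed rates fall from nearly 25 fps far away toward zero at the horizon, which is the frame-starvation effect the article describes.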
Nor is it worth waiting for the camera to cross the event horizon: from our point of view, that would take billions of years. • Cheong, R. (2015, November 20). What happens when you throw a camera into a black hole? Retrieved November 30, 2020, from http://www.hopesandfears.com/hopes/now/question/215747-camera-black-hole
A newly published study details how a team of physicists simultaneously measured gravity with cold atoms at three different heights to achieve the first direct measurement of the gravity-field curvature. Earth's gravitational pull gradually decreases with increasing altitude, and researchers have detected the differences even over several vertical feet within a lab, using the extreme sensitivity of cold atoms. Now a team has taken the next step by measuring the change in this gravity gradient produced by a large mass, using measurements at three different heights. They say their technique could improve gravity-based mapping of variations in rock density in geology and prospecting, and it could also boost the precision of tests of general relativity and measurements of the gravitational constant. The technique of atom interferometry enables distance measurements with extremely high precision by exploiting the atoms' quantum-mechanical wavelike nature. It has been used previously to measure the strength of gravitational fields and also the rate of change in those fields over some distance (the gradient). Together such measurements permit Newton's gravitational constant G to be determined [1, 2]. It is currently known to within about 100 parts per million, a much lower precision than other fundamental constants. More accurate measurements would allow higher-precision tests of the theory of general relativity. Measuring gravity at two close locations gives the gradient as the difference between the two divided by their separation distance; measuring at three locations gives the rate of change of the gradient, which is also called the curvature of the field. This experiment was proposed in 2002, and now a team in Italy, led by Guglielmo Tino of the University of Florence and the National Institute of Nuclear Physics (INFN), has carried it out. Previously, Tino and his colleagues determined G by measuring gravity at two different heights with a similar experiment. To measure gravity at three locations simultaneously, the team launched three clouds of ultracold atoms to three different heights inside a meter-long vertical pipe. Surrounding the top half of the pipe was 516 kg of tungsten alloy weights, to increase the variation in the gravitational field. Near the peaks of their trajectories, the atoms were irradiated with a rapid series of laser pulses from the top and bottom of the pipe. In the team's technique, the first pulse separates each cloud into two populations: one that absorbs two photons, sending it into an excited state and also providing a momentum boost, and a second population that remains in the ground state. The extra momentum causes the first population to fall a different distance during a fixed time, which leads to a gravity-dependent difference in the number of quantum wave cycles that elapse, compared with the ground-state population. Two more pulses recombine the populations, allowing them to interfere. From the interference effects the researchers can calculate the difference in the lengths of the two populations' trajectories, a difference that depends on the gravitational acceleration. The team measured variations in the gravitational acceleration of a few millionths of a percent and calculated the average curvature to be 1.4×10−5 s−2 m−1, which is virtually identical to the value they predicted. Measuring the curvature of a gravitational field could improve the measurement of G, says Tino.
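To make the three-height idea concrete, here is a small sketch (not the team's analysis code; the numbers are contrived so the recovered curvature comes out near the reported 1.4×10−5 s−2 m−1) that reconstructs the gradient and curvature by finite differences:

```python
# Three simultaneous gravity readings at equally spaced heights
# (bottom, middle, top). These values are made up for illustration.
g_bottom, g_middle, g_top = 9.80434156, 9.80434000, 9.80433970  # m/s^2
d = 0.30                                                        # spacing, m

# Central difference: gradient is the difference over the total span.
gradient = (g_top - g_bottom) / (2 * d)                  # s^-2

# Second difference: curvature is the change of the gradient per meter.
curvature = (g_top - 2 * g_middle + g_bottom) / d**2     # s^-2 m^-1

print(f"gradient  ~ {gradient:+.3e} s^-2")      # ~ -3.1e-6, like free air
print(f"curvature ~ {curvature:+.3e} s^-2 m^-1")  # ~ +1.4e-5 by construction
```

The key point the article makes is visible in the formulas: two readings fix the gradient, and a third reading is exactly what is needed for the second difference, i.e., the curvature.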
A common method involves measuring the field strength and gradient as a heavy mass is moved between one detector and another. But by making two separate measurements of the gradient at different positions simultaneously, the new technique could eliminate systematic sources of error without having to move the mass, which can introduce errors from shifts of the apparatus. The curvature could also be useful for mapping gravity changes in the earth, which are used to deduce buried geological structures and to find oil reservoirs. Even if the density changes are small, the curvature can alter dramatically if the density change is abrupt, like a step edge. So measuring gravity curvature could improve the spatial resolution of such density maps. "Measuring the gravitational force is sensitive to everything underground," says Holger Müller of the University of California at Berkeley, who uses atom interferometry to make ultraprecise measurements for probing fundamental physics. "Measuring the gravity gradient enhances the sensitivity to nearby objects, and measuring the [curvature] does so even more." A practical, curvature-measuring device would be "a great achievement," Müller says. Reference: "Measurement of the Gravity-Field Curvature by Atom Interferometry" by G. Rosi, L. Cacciapuoti, F. Sorrentino, M. Menchetti, M. Prevedelli and G. M. Tino, 5 January 2015, Physical Review Letters. [1] "Atom Interferometer Measurement of the Newtonian Constant of Gravity" by J. B. Fixler, G. T. Foster, J. M. McGuirk and M. A. Kasevich, 5 January 2007, Science. [2] "Determination of the Newtonian Gravitational Constant Using Atom Interferometry" by G. Lamporesi, A. Bertoldi, L. Cacciapuoti, M. Prevedelli and G. M. Tino, 8 February 2008, Physical Review Letters. [3] "Sensitive absolute-gravity gradiometry using atom interferometry" by J. M. McGuirk, G. T. Foster, J. B. Fixler, M. J. Snadden and M. A. Kasevich, 8 February 2002, Physical Review A. [4] "Precision measurement of the Newtonian gravitational constant using cold atoms" by G. Rosi, F. Sorrentino, L. Cacciapuoti, M. Prevedelli and G. M. Tino, 18 June 2014, Nature.
However, sometimes you will see equations that are written in standard form. Writing Equations in Standard Form. We know that equations can be written in slope-intercept form or standard form. Let's quickly revisit standard form; remember, standard form is written Ax + By = C. When we move terms around, we do so exactly as we do when we solve equations: whatever you do to one side of the equation, you must do to the other side. There is one other rule that we must abide by when writing equations in standard form: they should not contain fractions, so when an equation involves fractions, first eliminate them. This becomes a little more difficult when the fractions have different denominators: we need to find the least common multiple (LCM) of the denominators and then multiply all terms by that number. If you find that you need more examples or more practice problems, check out the Algebra Class E-course. You'll find additional examples on video, lots of practice problems with detailed solutions, and little "tips" to help you through! Slope-intercept form is the more popular of the two forms for writing equations. However, you must be able to rewrite equations in both forms. For standard form equations, just remember that A, B, and C must be integers and A should not be negative. Quadratic Functions (General Form). Quadratic functions are some of the most important algebraic functions, and they need to be thoroughly understood in any modern high school algebra course. The properties of their graphs, such as the vertex and the x- and y-intercepts, need to be understood as well. Here are the steps required for graphing parabolas in the form y = a(x - h)^2 + k. Step 1: To find the x-intercepts, let y = 0 and solve for x. You can solve for x by using the square root principle or the quadratic formula (if you simplify the problem into the correct form). Guess and Check. 'Guess and check' is just what it sounds like: we have certain rules, but we try combinations to see what will work. NOTE: Always take a quick look to see if the trinomial is a perfect square trinomial before you try the guess and check. In these cases, the middle term will be twice the product of the respective square roots of the first and last terms, as we saw above.
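As a sketch of the fraction-clearing procedure described above (my example equation, not one from the lesson), the following converts y = (2/3)x + 1/2 to standard form by multiplying through by the LCM of the denominators:

```python
from fractions import Fraction
from math import lcm  # Python 3.9+

# Slope-intercept coefficients for the hypothetical equation y = (2/3)x + 1/2.
m, b = Fraction(2, 3), Fraction(1, 2)

# Multiply every term by the LCM of the denominators to clear fractions:
k = lcm(m.denominator, b.denominator)    # LCM of 3 and 2 -> 6
# y = m*x + b  ->  k*y = k*m*x + k*b  ->  -(k*m)*x + k*y = k*b
A, B, C = -k * m, k, k * b               # -4x + 6y = 3
if A < 0:                                # convention: A should not be negative
    A, B, C = -A, -B, -C                 # 4x - 6y = -3
print(f"{A}x + ({B})y = {C}")            # -> 4x + (-6)y = -3
```

Working the same example by hand: multiply both sides by 6 to get 6y = 4x + 3, move the x-term over, then flip signs so A is positive.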
A monopoly (from Greek monos μόνος (alone or single) + polein πωλεῖν (to sell)) exists when a specific person or enterprise is the only supplier of a particular commodity (this contrasts with a monopsony, which relates to a single entity's control of a market to purchase a good or service, and with an oligopoly, which consists of a few entities dominating an industry). Monopolies are thus characterized by a lack of economic competition to produce the good or service, a lack of viable substitute goods, and the existence of a high monopoly price well above the firm's marginal cost that leads to a high monopoly profit. The verb "monopolise" refers to the process by which a company gains the ability to raise prices or exclude competitors. In economics, a monopoly is a single seller. In law, a monopoly is a business entity that has significant market power, that is, the power to charge overly high prices. Although monopolies may be big businesses, size is not a characteristic of a monopoly; a small business may still have the power to raise prices in a small industry (or market). A monopoly is distinguished from a monopsony, in which there is only one buyer of a product or service; a monopoly may also have monopsony control of a sector of a market. Likewise, a monopoly should be distinguished from a cartel (a form of oligopoly), in which several providers act together to coordinate services, prices or sale of goods. Monopolies, monopsonies and oligopolies are all situations in which one or a few entities have market power and therefore interact with their customers (monopoly), suppliers (monopsony) and the other companies (oligopoly) in ways that leave market interactions distorted. Monopolies can be established by a government, form naturally, or form by integration. In many jurisdictions, competition laws restrict monopolies. Holding a dominant position or a monopoly of a market is often not illegal in itself; however, certain categories of behavior can be considered abusive and therefore incur legal sanctions when the business is dominant. A government-granted monopoly or legal monopoly, by contrast, is sanctioned by the state, often to provide an incentive to invest in a risky venture or to enrich a domestic interest group. Patents, copyrights, and trademarks are sometimes used as examples of government-granted monopolies. The government may also reserve the venture for itself, thus forming a government monopoly. Market structures. In economics, the idea of monopoly is important for the study of market structures, which directly concerns normative aspects of economic competition, and provides the basis for topics such as industrial organization and the economics of regulation. There are four basic types of market structures in traditional economic analysis: perfect competition, monopolistic competition, oligopoly and monopoly. A monopoly is a structure in which a single supplier produces and sells a given product. If there is a single seller in a certain industry and there are no close substitutes for the product, then the market structure is that of a "pure monopoly".
Sometimes, there are many sellers in an industry and/or there exist many close substitutes for the goods being produced, but nevertheless companies retain some market power. This is termed monopolistic competition, whereas in oligopoly the companies interact strategically. In general, the main results from this theory compare pricing methods across market structures, analyze the effect of a certain structure on welfare, and vary technological/demand assumptions in order to assess the consequences for an abstract model of society. Most economic textbooks follow the practice of carefully explaining the perfect competition model, mainly because of its usefulness in understanding "departures" from it (the so-called imperfect competition models).

The boundaries of what constitutes a market and what doesn't are relevant distinctions to make in economic analysis. In a general equilibrium context, a good is a specific concept combining geographical and time-related characteristics (grapes sold during October 2009 in Moscow is a different good from grapes sold during October 2009 in New York). Most studies of market structure relax their definition of a good a little, allowing for more flexibility in the identification of substitute goods. Therefore, one can find an economic analysis of the market for grapes in Russia, for example, which is not a market in the strict sense of general equilibrium theory.

Characteristics

A monopoly is characterized by the following features:
- Profit maximizer: Maximizes profits.
- Price maker: Decides the price of the good or product to be sold; in practice, it chooses the quantity to produce, which in turn determines the price at which that quantity can be sold.
- High barriers: Other sellers are unable to enter the market of the monopoly.
- Single seller: In a monopoly, there is one seller of the good that produces all the output. Therefore, the whole market is being served by a single company, and for practical purposes, the company is the same as the industry.
- Price discrimination: A monopolist can change the price and quality of the product. He or she sells higher quantities, charging a lower price for the product, in a very elastic market and sells lower quantities, charging a higher price, in a less elastic market.

Sources of monopoly power

Monopolies derive their market power from barriers to entry – circumstances that prevent or greatly impede a potential competitor's ability to compete in a market. There are three major types of barriers to entry: economic, legal and deliberate.
- Economic barriers: Economic barriers include economies of scale, capital requirements, cost advantages and technological superiority.
- Economies of scale: Monopolies are characterised by decreasing costs for a relatively large range of production. Decreasing costs coupled with large initial costs give monopolies an advantage over would-be competitors. Monopolies are often in a position to reduce prices below a new entrant's operating costs and thereby prevent them from continuing to compete. Furthermore, the size of the industry relative to the minimum efficient scale (MES) may limit the number of companies that can effectively compete within the industry. If, for example, the industry is large enough to support one company of minimum efficient scale, then other companies entering the industry will operate at a size that is less than MES, meaning that these companies cannot produce at an average cost that is competitive with the dominant company.
Finally, if long-term average cost is constantly decreasing, the least-cost way to provide a good or service is through a single company.
- Capital requirements: Production processes that require large investments of capital, large research and development costs or substantial sunk costs limit the number of companies in an industry. Large fixed costs also make it difficult for a small company to enter an industry and expand.
- Technological superiority: A monopoly may be better able to acquire, integrate and use the best possible technology in producing its goods while entrants do not have the size or finances to use the best available technology. One large company can sometimes produce goods more cheaply than several small companies.
- No substitute goods: A monopoly sells a good for which there is no close substitute. The absence of substitutes makes the demand for the good relatively inelastic, enabling monopolies to extract positive profits.
- Control of natural resources: A prime source of monopoly power is the control of resources that are critical to the production of a final good.
- Network externalities: The use of a product by a person can affect the value of that product to other people. This is the network effect. There is a direct relationship between the proportion of people using a product and the demand for that product. In other words, the more people who are using a product, the greater the probability of any individual starting to use the product. This effect accounts for fads, fashion trends, social networks etc. It also can play a crucial role in the development or acquisition of market power. The most famous current example is the market dominance of the Microsoft Office suite and operating system in personal computers.
- Legal barriers: Legal rights can provide the opportunity to monopolise the market for a good. Intellectual property rights, including patents and copyrights, give a monopolist exclusive control of the production and selling of certain goods. Property rights may give a company exclusive control of the materials necessary to produce a good.
- Deliberate actions: A company wanting to monopolise a market may engage in various types of deliberate action to exclude competitors or eliminate competition. Such actions include collusion, lobbying governmental authorities, and force (see anti-competitive practices).

In addition to barriers to entry and competition, barriers to exit may be a source of market power. Barriers to exit are market conditions that make it difficult or expensive for a company to end its involvement with a market. Large liquidation costs are a primary barrier to exit. Market exit and shutdown are separate events. The decision whether to shut down or operate is not affected by exit barriers. A company will shut down if price falls below minimum average variable costs.

Monopoly versus competitive markets

While monopoly and perfect competition mark the extremes of market structures, there is some similarity. The cost functions are the same. Both monopolies and perfectly competitive (PC) companies minimize cost and maximize profit. The shutdown decisions are the same. Both are assumed to have perfectly competitive factor markets. There are distinctions, some of the more important of which are as follows:
- Marginal revenue and price: In a perfectly competitive market, price equals marginal cost. In a monopolistic market, however, price is set above marginal cost.
- Product differentiation: There is zero product differentiation in a perfectly competitive market. Every product is perfectly homogeneous and a perfect substitute for any other. With a monopoly, there is great to absolute product differentiation in the sense that there is no available substitute for a monopolized good. The monopolist is the sole supplier of the good in question. A customer either buys from the monopolizing entity on its terms or does without.
- Number of competitors: PC markets are populated by an infinite number of buyers and sellers. Monopoly involves a single seller.
- Barriers to entry: Barriers to entry are factors and circumstances that prevent entry into the market by would-be competitors and limit new companies from operating and expanding within the market. PC markets have free entry and exit; there are no barriers to entry or exit. Monopolies have relatively high barriers to entry. The barriers must be strong enough to prevent or discourage any potential competitor from entering the market.
- Elasticity of demand: The price elasticity of demand is the percentage change in quantity demanded caused by a one percent change in price. A successful monopoly would have a relatively inelastic demand curve. A low coefficient of elasticity is indicative of effective barriers to entry. A PC company has a perfectly elastic demand curve. The coefficient of elasticity for a perfectly competitive demand curve is infinite.
- Excess profits: Excess or positive profits are profits above the normal expected return on investment. A PC company can make excess profits in the short term, but excess profits attract competitors, which can enter the market freely and decrease prices, eventually reducing excess profits to zero. A monopoly can preserve excess profits because barriers to entry prevent competitors from entering the market.
- Profit maximization: A PC company maximizes profits by producing such that price equals marginal cost. A monopoly maximizes profits by producing where marginal revenue equals marginal cost. The rules are not equivalent. The demand curve for a PC company is perfectly elastic – flat. The demand curve is identical to the average revenue curve and the price line. Since the average revenue curve is constant, the marginal revenue curve is also constant and equals the demand curve. Average revenue is the same as price (AR = TR/Q = P×Q/Q = P). Thus the price line is also identical to the demand curve. In sum, D = AR = MR = P.
- P-Max quantity, price and profit: If a monopolist obtains control of a formerly perfectly competitive industry, the monopolist would increase prices, reduce production, and realise positive economic profits.
- Supply curve: In a perfectly competitive market there is a well-defined supply function with a one-to-one relationship between price and quantity supplied. In a monopolistic market no such supply relationship exists. A monopolist cannot trace a short-term supply curve because for a given price there is not a unique quantity supplied. As Pindyck and Rubinfeld note, a change in demand "can lead to changes in prices with no change in output, changes in output with no change in price or both". Monopolies produce where marginal revenue equals marginal cost. For a specific demand curve the supply "curve" would be the price/quantity combination at the point where marginal revenue equals marginal cost.
If the demand curve shifted, the marginal revenue curve would shift as well and a new equilibrium and supply "point" would be established. The locus of these points would not be a supply curve in any conventional sense.

The most significant distinction between a PC company and a monopoly is that the monopoly has a downward-sloping demand curve rather than the "perceived" perfectly elastic curve of the PC company. Practically all the variations mentioned above relate to this fact. If there is a downward-sloping demand curve then by necessity there is a distinct marginal revenue curve. The implications of this fact are best made manifest with a linear demand curve. Assume that the inverse demand curve is of the form P = a − bQ. Then the total revenue curve is TR = aQ − bQ² and the marginal revenue curve is thus MR = a − 2bQ. From this several things are evident. First, the marginal revenue curve has the same price-axis intercept as the inverse demand curve. Second, the slope of the marginal revenue curve is twice that of the inverse demand curve. Third, the quantity-axis intercept of the marginal revenue curve is half that of the inverse demand curve. What is not quite so evident is that the marginal revenue curve is below the inverse demand curve at all points. Since all companies maximise profits by equating MR and MC, it must be the case that at the profit-maximizing quantity MR and MC are less than price, which further implies that a monopoly produces less quantity at a higher price than if the market were perfectly competitive.

The fact that a monopoly has a downward-sloping demand curve means that the relationship between total revenue and output for a monopoly is much different than that of competitive companies. Total revenue equals price times quantity. A competitive company has a perfectly elastic demand curve, meaning that total revenue is proportional to output. Thus the total revenue curve for a competitive company is a ray with a slope equal to the market price. A competitive company can sell all the output it desires at the market price. For a monopoly to increase sales it must reduce price. Thus the total revenue curve for a monopoly is a parabola that begins at the origin and reaches a maximum value then continuously decreases until total revenue is again zero. Total revenue has its maximum value when the slope of the total revenue function is zero. The slope of the total revenue function is marginal revenue. So the revenue-maximizing quantity and price occur when MR = 0. For example, assume that the monopoly's demand function is P = 50 − 2Q. The total revenue function would be TR = 50Q − 2Q² and marginal revenue would be MR = 50 − 4Q. Setting marginal revenue equal to zero, we have 50 − 4Q = 0, so Q = 12.5. So the revenue-maximizing quantity for the monopoly is 12.5 units and the revenue-maximizing price is 25.

A company with a monopoly does not experience price pressure from competitors, although it may experience pricing pressure from potential competition. If a company increases prices too much, then others may enter the market if they are able to provide the same good, or a substitute, at a lesser price. The idea that monopolies in markets with easy entry need not be regulated against is known as the "revolution in monopoly theory". A monopolist can extract only one premium, and getting into complementary markets does not pay.
That is, the total profits a monopolist could earn if it sought to leverage its monopoly in one market by monopolizing a complementary market are equal to the extra profits it could earn anyway by charging more for the monopoly product itself. However, the one monopoly profit theorem is not true if customers in the monopoly good are stranded or poorly informed, or if the tied good has high fixed costs.

A pure monopoly has the same economic rationality as perfectly competitive companies, i.e. to optimise a profit function given some constraints. By the assumptions of increasing marginal costs, exogenous input prices, and control concentrated on a single agent or entrepreneur, the optimal decision is to equate the marginal cost and marginal revenue of production. Nonetheless, a pure monopoly can – unlike a competitive company – alter the market price for its own convenience: a decrease of production results in a higher price. In economics jargon, it is said that pure monopolies have "a downward-sloping demand". An important consequence of such behaviour is worth noting: typically a monopoly selects a higher price and lesser quantity of output than a price-taking company; again, less is available at a higher price.

The inverse elasticity rule

A monopoly chooses the price that maximizes the difference between total revenue and total cost. The basic markup rule can be expressed as (P − MC)/P = −1/PED, or equivalently 1/|PED|, since the price elasticity of demand is negative. The markup rule indicates that the ratio between profit margin and price is inversely proportional to the absolute price elasticity of demand. The implication of the rule is that the more elastic the demand for the product, the less pricing power the monopoly has.

Market power is the ability to increase the product's price above marginal cost without losing all customers. Perfectly competitive (PC) companies have zero market power when it comes to setting prices. All companies of a PC market are price takers. The price is set by the interaction of demand and supply at the market or aggregate level. Individual companies simply take the price determined by the market and produce that quantity of output that maximizes the company's profits. If a PC company attempted to increase prices above the market level all its customers would abandon the company and purchase at the market price from other companies.

A monopoly has considerable although not unlimited market power. A monopoly has the power to set prices or quantities although not both. A monopoly is a price maker. The monopoly is the market and prices are set by the monopolist based on its circumstances and not the interaction of demand and supply. The two primary factors determining monopoly market power are the company's demand curve and its cost structure. Market power is the ability to affect the terms and conditions of exchange so that the price of a product is set by a single company (price is not imposed by the market as in perfect competition). Although a monopoly's market power is great, it is still limited by the demand side of the market. A monopoly has a negatively sloped demand curve, not a perfectly inelastic curve. Consequently, any price increase will result in the loss of some customers.

Price discrimination

Price discrimination allows a monopolist to increase its profit by charging higher prices for identical goods to those who are willing or able to pay more. For example, most economic textbooks cost more in the United States than in developing countries like Ethiopia.
In this case, the publisher is using its government-granted copyright monopoly to price discriminate between the generally wealthier American economics students and the generally poorer Ethiopian economics students. Similarly, most patented medications cost more in the U.S. than in other countries with a (presumed) poorer customer base. Typically, a high general price is listed, and various market segments get varying discounts. This is an example of framing to make the process of charging some people higher prices more socially acceptable.

Perfect price discrimination would allow the monopolist to charge each customer the exact maximum amount he would be willing to pay. This would allow the monopolist to extract all the consumer surplus of the market. While such perfect price discrimination is a theoretical construct, advances in information technology and micromarketing may bring it closer to the realm of possibility.

It is important to realize that partial price discrimination can cause some customers who are inappropriately pooled with high price customers to be excluded from the market. For example, a poor student in the U.S. might be excluded from purchasing an economics textbook at the U.S. price, which the student would have been able to purchase at the Ethiopian price. Similarly, a wealthy student in Ethiopia may be able or willing to buy at the U.S. price, though naturally would hide such a fact from the monopolist so as to pay the reduced third world price. These are deadweight losses and decrease a monopolist's profits. As such, monopolists have a substantial economic interest in improving their market information and market segmentation.

There is important information for one to remember when considering the standard monopoly model and its associated conclusions. The result that monopoly prices are higher, and production output lesser, than a competitive company follows from a requirement that the monopoly not charge different prices to different customers. That is, the monopoly is restricted to uniform pricing, charging all customers the same amount. If the monopoly were permitted to charge individualised prices (this is termed first degree price discrimination), the quantity produced, and the price charged to the marginal customer, would be identical to that of a competitive company, thus eliminating the deadweight loss; however, all gains from trade (social welfare) would accrue to the monopolist and none to the consumer. In essence, every consumer would be indifferent between (1) going completely without the product or service and (2) being able to purchase it from the monopolist.

As long as the price elasticity of demand for most customers is less than one in absolute value, it is advantageous for a company to increase its prices: it receives more money for fewer goods. With a price increase, price elasticity tends to increase, and in the optimum case above it will be greater than one for most customers. A company maximizes profit by selling where marginal revenue equals marginal cost. A company that does not engage in price discrimination will charge the profit-maximizing price, P*, to all its customers. In such circumstances there are customers who would be willing to pay a higher price than P* and those who will not pay P* but would buy at a lower price.
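As a purely numerical sketch of this uniform-pricing optimum, the short Python snippet below reuses the illustrative linear demand curve P = 50 − 2Q from the earlier example and assumes a constant marginal cost of 5 (an assumption made here only for illustration). It finds the revenue-maximizing and profit-maximizing quantities and checks the inverse elasticity rule at the optimum.

# Linear inverse demand P = a - b*Q, with the illustrative values used
# above (a = 50, b = 2) and an assumed constant marginal cost MC = 5.
a, b, MC = 50.0, 2.0, 5.0

# Marginal revenue for linear demand is MR = a - 2*b*Q.
q_revenue_max = a / (2 * b)        # MR = 0  ->  Q = 12.5, the revenue peak
q_star = (a - MC) / (2 * b)        # MR = MC ->  Q* = 11.25
p_star = a - b * q_star            # profit-maximizing price P* = 27.5

# Check the inverse elasticity rule (P - MC)/P = -1/PED at the optimum.
ped = (-1.0 / b) * (p_star / q_star)       # point elasticity (dQ/dP)(P/Q)
print(q_revenue_max, q_star, p_star)       # 12.5 11.25 27.5
print((p_star - MC) / p_star, -1.0 / ped)  # both print ~0.818

Note that the profit-maximizing quantity (11.25 units) is smaller than the revenue-maximizing quantity (12.5 units): the monopolist stops where the marginal unit's revenue falls to marginal cost, not to zero.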
A price discrimination strategy is to charge less price-sensitive buyers a higher price and more price-sensitive buyers a lower price. Thus additional revenue is generated from two sources. The basic problem is to identify customers by their willingness to pay.

The purpose of price discrimination is to transfer consumer surplus to the producer. Consumer surplus is the difference between the value of a good to a consumer and the price the consumer must pay in the market to purchase it. Price discrimination is not limited to monopolies. Market power is a company's ability to increase prices without losing all its customers. Any company that has market power can engage in price discrimination. Perfect competition is the only market form in which price discrimination would be impossible (a perfectly competitive company has a perfectly elastic demand curve and has zero market power).

There are three forms of price discrimination. First degree price discrimination charges each consumer the maximum price the consumer is willing to pay. Second degree price discrimination involves quantity discounts. Third degree price discrimination involves grouping consumers according to willingness to pay as measured by their price elasticities of demand and charging each group a different price. Third degree price discrimination is the most prevalent type.

There are three conditions that must be present for a company to engage in successful price discrimination. First, the company must have market power. Second, the company must be able to sort customers according to their willingness to pay for the good. Third, the firm must be able to prevent resale.

A company must have some degree of market power to practice price discrimination. Without market power a company cannot charge more than the market price. Any market structure characterized by a downward-sloping demand curve has market power – monopoly, monopolistic competition and oligopoly. The only market structure that has no market power is perfect competition.

A company wishing to practice price discrimination must be able to prevent middlemen or brokers from acquiring the consumer surplus for themselves. The company accomplishes this by preventing or limiting resale. Many methods are used to prevent resale. For example, persons are required to show photographic identification and a boarding pass before boarding an airplane. Most travelers assume that this practice is strictly a matter of security. However, a primary purpose in requesting photographic identification is to confirm that the ticket purchaser is the person about to board the airplane and not someone who has repurchased the ticket from a discount buyer.

The inability to prevent resale is the largest obstacle to successful price discrimination. Companies have, however, developed numerous methods to prevent resale. For example, universities require that students show identification before entering sporting events. Governments may make it illegal to resell tickets or products. In Boston, Red Sox baseball tickets can only be resold legally to the team.

The three basic forms of price discrimination are first, second and third degree price discrimination. In first degree price discrimination the company charges the maximum price each customer is willing to pay. The maximum price a consumer is willing to pay for a unit of the good is the reservation price. Thus for each unit the seller tries to set the price equal to the consumer's reservation price.
Direct information about a consumer's willingness to pay is rarely available. Sellers tend to rely on secondary information such as where a person lives (postal codes); for example, catalog retailers can mail high-priced catalogs to high-income postal codes. First degree price discrimination most frequently occurs in regard to professional services or in transactions involving direct buyer/seller negotiations. For example, an accountant who has prepared a consumer's tax return has information that can be used to charge customers based on an estimate of their ability to pay.

In second degree price discrimination or quantity discrimination, customers are charged different prices based on how much they buy. There is a single price schedule for all consumers but the prices vary depending on the quantity of the good bought. The theory behind second degree price discrimination is that a consumer is willing to buy only a certain quantity of a good at a given price. Companies know that a consumer's willingness to buy decreases as more units are purchased. The task for the seller is to identify these price points and to reduce the price once one is reached, in the hope that a reduced price will trigger additional purchases from the consumer – for example, by selling in unit blocks rather than individual units.

In third degree price discrimination or multi-market price discrimination, the seller divides the consumers into different groups according to their willingness to pay as measured by their price elasticity of demand. Each group of consumers effectively becomes a separate market with its own demand curve and marginal revenue curve. The firm then attempts to maximize profits in each segment by equating MR and MC. Generally, the company charges a higher price to the group with a more price-inelastic demand and a relatively lower price to the group with a more elastic demand. Examples of third degree price discrimination abound. Airlines charge higher prices to business travelers than to vacation travelers. The reasoning is that the demand curve for a vacation traveler is relatively elastic while the demand curve for a business traveler is relatively inelastic. Any determinant of price elasticity of demand can be used to segment markets. For example, seniors have a more elastic demand for movies than do young adults because they generally have more free time. Thus theaters will offer discount tickets to seniors.

Assume that under a uniform pricing system the monopolist would sell five units at a price of $10 per unit. Assume that his marginal cost is $5 per unit. Total revenue would be $50, total costs would be $25 and profits would be $25. If the monopolist practiced price discrimination he would sell the first unit for $50, the second unit for $40 and so on. Total revenue would be $150, his total cost would be $25 and his profit would be $125. Several things are worth noting. The monopolist acquires all the consumer surplus and eliminates practically all the deadweight loss because he is willing to sell to anyone who is willing to pay at least the marginal cost. Thus the price discrimination promotes efficiency. Secondly, under this pricing scheme price equals average revenue and equals marginal revenue; that is, the monopolist is behaving like a perfectly competitive company. Thirdly, the discriminating monopolist produces a larger quantity than the monopolist operating under a uniform pricing scheme. Successful price discrimination requires that companies separate consumers according to their willingness to buy.
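The arithmetic in that example is easy to verify. The Python sketch below takes the reservation prices and marginal cost straight from the example above and compares profit under uniform pricing with profit under perfect first degree discrimination.

# Reservation prices of the five buyers and the marginal cost, all taken
# from the worked example above.
reservation_prices = [50, 40, 30, 20, 10]
mc = 5

# Uniform pricing: every buyer pays $10, and all five units sell.
uniform_price = 10
units_sold = sum(1 for r in reservation_prices if r >= uniform_price)
uniform_profit = units_sold * (uniform_price - mc)   # 5 * (10 - 5) = $25

# First degree discrimination: each buyer pays his reservation price.
discrim_revenue = sum(reservation_prices)            # $150
discrim_profit = discrim_revenue - mc * len(reservation_prices)  # $125

print(uniform_profit, discrim_profit)                # 25 125

The $100 difference is exactly the consumer surplus that uniform pricing left with the buyers.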
Determining a customer's willingness to buy a good is difficult. Asking consumers directly is fruitless: consumers don't know, and to the extent they do, they are reluctant to share that information with marketers. The two main methods for determining willingness to buy are observation of personal characteristics and consumer actions. As noted, information about where a person lives (postal codes), how the person dresses, what kind of car he or she drives, occupation, and income and spending patterns can be helpful in classifying customers.

Monopoly and efficiency

According to the standard model, in which a monopolist sets a single price for all consumers, the monopolist will sell a lesser quantity of goods at a higher price than would companies under perfect competition. Because the monopolist ultimately forgoes transactions with consumers who value the product or service more than its cost, monopoly pricing creates a deadweight loss, referring to potential gains that went neither to the monopolist nor to consumers. Given the presence of this deadweight loss, the combined surplus (or wealth) for the monopolist and consumers is necessarily less than the total surplus obtained by consumers under perfect competition. Where efficiency is defined by the total gains from trade, the monopoly setting is less efficient than perfect competition.

It is often argued that monopolies tend to become less efficient and less innovative over time, becoming "complacent", because they do not have to be efficient or innovative to compete in the marketplace. Sometimes this very loss of efficiency can increase a potential competitor's value enough to overcome market entry barriers, or provide incentive for research and investment into new alternatives. The theory of contestable markets argues that in some circumstances (private) monopolies are forced to behave as if there were competition because of the risk of losing their monopoly to new entrants. This is likely to happen when a market's barriers to entry are low. It might also be because of the availability in the longer term of substitutes in other markets. For example, a canal monopoly, while worth a great deal in the United Kingdom during the late 18th century, was worth much less during the late 19th century because of the introduction of railways as a substitute.

A natural monopoly is an organization that experiences increasing returns to scale over the relevant range of output and relatively high fixed costs. A natural monopoly occurs where the average cost of production "declines throughout the relevant range of product demand". The relevant range of product demand is where the average cost curve is below the demand curve. When this situation occurs, it is always cheaper for one large company to supply the market than multiple smaller companies; in fact, absent government intervention, such markets will naturally evolve into a monopoly. An early market entrant that takes advantage of the cost structure and can expand rapidly can exclude smaller companies from entering and can drive or buy out other companies. A natural monopoly suffers from the same inefficiencies as any other monopoly. Left to its own devices, a profit-seeking natural monopoly will produce where marginal revenue equals marginal cost. Regulation of natural monopolies is problematic. Fragmenting such monopolies is by definition inefficient.
The most frequently used methods of dealing with natural monopolies are government regulation and public ownership. Government regulation generally consists of regulatory commissions charged with the principal duty of setting prices. To reduce prices and increase output, regulators often use average cost pricing. Under average cost pricing, the price and quantity are determined by the intersection of the average cost curve and the demand curve. This pricing scheme eliminates any positive economic profits since price equals average cost. Average-cost pricing is not perfect. Regulators must estimate average costs. Companies have a reduced incentive to lower costs. Regulation of this type has not been limited to natural monopolies. Average-cost pricing also has some disadvantages. By setting price equal to the intersection of the demand curve and the average total cost curve, the firm's output is allocatively inefficient, since the price exceeds the marginal cost (price equal to marginal cost being the condition for a perfectly competitive and allocatively efficient market).

A government-granted monopoly (also called a "de jure monopoly") is a form of coercive monopoly by which a government grants exclusive privilege to a private individual or company to be the sole provider of a commodity; potential competitors are excluded from the market by law, regulation, or other mechanisms of government enforcement.

Monopolist shutdown rule

A monopolist should shut down when price is less than average variable cost for every output level – in other words, where the demand curve is entirely below the average variable cost curve. Under these circumstances, at the profit-maximum level of output (MR = MC) average revenue would be less than average variable costs and the monopolist would be better off shutting down in the short term.

Breaking up monopolies

When monopolies are not ended by the open market, a government will sometimes either regulate the monopoly, convert it into a publicly owned monopoly, or forcibly fragment it (see Antitrust law and trust busting). Public utilities, which are often naturally efficient with only one operator and therefore less susceptible to efficient breakup, tend to be strongly regulated or publicly owned. American Telephone & Telegraph (AT&T) and Standard Oil are debatable examples of the breakup of a private monopoly by government: when AT&T, a monopoly previously protected by force of law, was broken up into various components in 1984, MCI, Sprint, and other companies were able to compete effectively in the long distance phone market.

Law

The existence of a very high market share does not always mean consumers are paying excessive prices, since the threat of new entrants to the market can restrain a high-market-share company's price increases. Competition law does not make merely having a monopoly illegal, but rather abusing the power a monopoly may confer, for instance through exclusionary practices (i.e., pricing high just because you are the only one around). It is also illegal to try to obtain a monopoly by buying out the competition or similar practices. If one occurs naturally, such as through a competitor going out of business or a lack of competition, it is not illegal until such time as the monopoly holder abuses its power.

First it is necessary to determine whether a company is dominant, or whether it behaves "to an appreciable extent independently of its competitors, customers and ultimately of its consumer".
As with collusive conduct, market shares are determined with reference to the particular market in which the company and product in question are sold. The Herfindahl–Hirschman Index (HHI) is sometimes used to assess how competitive an industry is; it is computed by summing the squares of the market shares (expressed as percentages) of all firms in the market. In the US, the merger guidelines state that a post-merger HHI below 1000 is viewed as unconcentrated, while HHIs above that will provoke further review. Under European Union law, very large market shares raise a presumption that a company is dominant, which may be rebuttable. If a company has a dominant position, then there is "a special responsibility not to allow its conduct to impair competition on the common market". The lowest market share of a company yet considered "dominant" in the EU was 39.7%.

Certain categories of abusive conduct are usually prohibited by a country's legislation. The main recognised categories are:
- Limiting supply
- Predatory pricing
- Price discrimination
- Refusal to deal and exclusive dealing
- Tying and product bundling

Despite wide agreement that the above constitute abusive practices, there is some debate about whether there needs to be a causal connection between the dominant position of a company and its actual abusive conduct. Furthermore, there has been some consideration of what happens when a company merely attempts to abuse its dominant position.

Historical monopolies

The meaning and understanding of the English word 'monopoly' has changed over the years.

Monopolies of resources

Vending of common salt (sodium chloride) was historically a natural monopoly. Until recently, a combination of strong sunshine and low humidity or an extension of peat marshes was necessary for producing salt from the sea, the most plentiful source. Changing sea levels periodically caused salt "famines" and communities were forced to depend upon those who controlled the scarce inland mines and salt springs, which were often in hostile areas (e.g. the Sahara desert) requiring well-organised security for transport, storage, and distribution. The "Gabelle" was a notoriously high tax levied upon salt in the Kingdom of France. The much-hated levy had a role in the beginning of the French Revolution, when strict legal controls specified who was allowed to sell and distribute salt. First instituted in 1286, the Gabelle was not permanently abolished until 1945.

Robin Gollan argues in The Coalminers of New South Wales that anti-competitive practices developed in the coal industry of Australia's Newcastle as a result of the business cycle. The monopoly was generated by formal meetings of the local management of coal companies agreeing to fix a minimum price for sale at dock. This collusion was known as "The Vend". The Vend ended and was re-formed repeatedly during the late 19th century, collapsing during recessions in the business cycle. "The Vend" was able to maintain its monopoly due to trade union assistance and material advantages (primarily coal geography). During the early 20th century, as a result of comparable monopolistic practices in the Australian coastal shipping business, the Vend developed as an informal and illegal collusion between the steamship owners and the coal industry, eventually resulting in the High Court case Adelaide Steamship Co. Ltd v. R. & AG.

Standard Oil was an American oil producing, transporting, refining, and marketing company. Established in 1870, it became the largest oil refiner in the world. John D. Rockefeller was a founder, chairman and major shareholder. The company was an innovator in the development of the business trust.
The Standard Oil trust streamlined production and logistics, lowered costs, and undercut competitors. "Trust-busting" critics accused Standard Oil of using aggressive pricing to destroy competitors and form a monopoly that threatened consumers. Its controversial history as one of the world's first and largest multinational corporations ended in 1911, when the United States Supreme Court ruled that Standard was an illegal monopoly. The Standard Oil trust was dissolved into 33 smaller companies; two of its surviving "child" companies are ExxonMobil and the Chevron Corporation.

U.S. Steel has been accused of being a monopoly. J. P. Morgan and Elbert H. Gary founded U.S. Steel in 1901 by combining Andrew Carnegie's Carnegie Steel Company with Gary's Federal Steel Company and William Henry "Judge" Moore's National Steel Company. At one time, U.S. Steel was the largest steel producer and largest corporation in the world. In its first full year of operation, U.S. Steel made 67 percent of all the steel produced in the United States. However, U.S. Steel's share of the expanding market slipped to 50 percent by 1911, and an anti-trust prosecution that year failed.

De Beers settled charges of price fixing in the diamond trade in the 2000s. De Beers is well known for its monopolistic practices throughout the 20th century, whereby it used its dominant position to manipulate the international diamond market. The company used several methods to exercise this control over the market. Firstly, it convinced independent producers to join its single-channel monopoly; secondly, it flooded the market with diamonds similar to those of producers who refused to join the cartel; and lastly, it purchased and stockpiled diamonds produced by other manufacturers in order to control prices through limiting supply. In 2000, the De Beers business model changed due to factors such as the decision by producers in Russia, Canada and Australia to distribute diamonds outside the De Beers channel, as well as rising awareness of blood diamonds that forced De Beers to "avoid the risk of bad publicity" by limiting sales to its own mined products. De Beers' market share by value fell from as high as 90% in the 1980s to less than 40% in 2012, resulting in a more fragmented diamond market with more transparency and greater liquidity. In November 2011 the Oppenheimer family announced its intention to sell the entirety of its 40% stake in De Beers to Anglo American plc, thereby increasing Anglo American's ownership of the company to 85%. The transaction was worth £3.2 billion ($5.1 billion) in cash and ended the Oppenheimer dynasty's 80-year ownership of De Beers.

A public utility (or simply "utility") is an organization or company that maintains the infrastructure for a public service or provides a set of services for public consumption. Common examples of utilities are electricity, natural gas, water, sewage, cable television, and telephone. In the United States, public utilities are often natural monopolies because the infrastructure required to produce and deliver a product such as electricity or water is very expensive to build and maintain. American Telephone & Telegraph was a telecommunications giant. AT&T was broken up in 1984. The Comcast Corporation is the largest mass media and communications company in the world by revenue. It is the largest cable company and home Internet service provider in the United States, and the nation's third largest home telephone service provider.
Comcast has a monopoly in Boston, Philadelphia, Chicago, and many other small towns across the US. The United Aircraft and Transport Corporation was an aircraft manufacturer holding company that was forced to divest itself of airlines in 1934. The Long Island Rail Road (LIRR) was founded in 1834, and since the mid-1800s has provided train service between Long Island and New York City. In the 1870s, LIRR became the sole railroad in that area through a series of acquisitions and consolidations. As of 2013, the LIRR is the busiest commuter railroad in North America, serving nearly 335,000 passengers daily.

The British East India Company was created as a legal trading monopoly in 1600. The East India Company was formed for pursuing trade with the East Indies but ended up trading mainly with the Indian subcontinent, North-West Frontier Province, and Balochistan. The Company traded in basic commodities, which included cotton, silk, indigo dye, salt, saltpetre, tea and opium.

Major League Baseball survived U.S. anti-trust litigation in 1922, though its special status is still in dispute as of 2009. The National Football League survived an anti-trust lawsuit in the 1960s but was convicted of being an illegal monopoly in the 1980s.

Other examples of monopolies
- Microsoft has been the defendant in multiple anti-trust suits over its "embrace, extend and extinguish" strategy. It settled anti-trust litigation in the U.S. in 2001. In 2004 Microsoft was fined 493 million euros by the European Commission, a decision which was upheld for the most part by the Court of First Instance of the European Communities in 2007. A further fine of US$1.35 billion followed in 2008 for noncompliance with the 2004 ruling.
- The MPAA (Motion Picture Association of America) has a monopoly over film ratings in the U.S.
- The Joint Commission is an organization that accredits more than 20,000 health care organizations and programs in the United States. The Commission has a monopoly over determining whether a U.S. hospital can participate in the publicly funded Medicare and Medicaid healthcare programs.
- Monsanto has been sued by competitors for anti-trust and monopolistic practices. It holds between 70% and 100% of the commercial GMO seed market in a small number of crops.
- AAFES has a monopoly on retail sales at overseas U.S. military installations.
- State stores in certain United States states, e.g. for liquor.
- The Registered Dietitian union seeks a monopoly over nutrition services through state-level licensing schemes.
- The state retail alcohol monopolies of Norway (Vinmonopolet), Sweden (Systembolaget), Finland (Alko), Iceland (Vínbúð), Ontario (LCBO), Québec (SAQ), and British Columbia (Liquor Distribution Branch), among others.
- Google is widely considered a monopoly for search engines in Europe and North America, where "to google" has even become a word used in everyday language.

Countering monopolies

According to professor Milton Friedman, laws against monopolies cause more harm than good, but unnecessary monopolies should be countered by removing tariffs and other regulation that upholds monopolies:

A monopoly can seldom be established within a country without overt and covert government assistance in the form of a tariff or some other device. It is close to impossible to do so on a world scale. The De Beers diamond monopoly is the only one we know of that appears to have succeeded (and even De Beers are protected by various laws against so called "illicit" diamond trade).
In a world of free trade, international cartels would disappear even more quickly. – Milton Friedman, Free to Choose, pp. 53–54

However, professor Steve H. Hanke believes that although private monopolies are more efficient than public ones, often by a factor of two, sometimes private natural monopolies, such as local water distribution, should be regulated (not prohibited) by, e.g., price auctions. Thomas DiLorenzo asserts, however, that during the early days of utility companies, where there was little regulation, there were no natural monopolies and there was competition. Only when companies realized that they could gain power through government did monopolies begin to form.

See also
- Bilateral monopoly
- Complementary monopoly
- De facto standard
- Dominant design
- Flag carrier
- History of monopoly
- Ramsey problem, a policy rule concerning what price a monopolist should set
- Simulations and games in economics education that model monopolistic markets
- State monopoly capitalism

Notes and references
- Michael Burgan (2007). J. Pierpont Morgan: Industrialist and Financier. p. 93. ISBN 9780756519872.
- Milton Friedman. "VIII: Monopoly and the Social Responsibility of Business and Labor". Capitalism and Freedom (paperback) (40th anniversary ed.). The University of Chicago Press. p. 208. ISBN 0-226-26421-1.
- Blinder, Alan S; Baumol, William J; Gale, Colton L (June 2001). "11: Monopoly". Microeconomics: Principles and Policy (paperback). Thomson South-Western. p. 212. ISBN 0-324-22115-0. "A pure monopoly is an industry in which there is only one supplier of a product for which there are no close substitutes and in which it is very difficult or impossible for another firm to coexist."
- Orbach, Barak; Campbell, Grace (2012). "The Antitrust Curse of Bigness". Southern California Law Review.
- Binger and Hoffman (1998), p. 391.
- Goodwin, N; Nelson, J; Ackerman, F; Weisskopf, T (2009). Microeconomics in Context (2nd ed.). Sharpe. pp. 307–308.
- Samuelson, William F.; Marks, Stephen G. (2003). Managerial Economics (4th ed.). Wiley. pp. 365–366.
- Nicholson, Walter; Snyder, Christopher (2007). Intermediate Microeconomics. Thomson. p. 379.
- Frank (2009), p. 274.
- Samuelson & Marks (2003), p. 365.
- Ayers, Robert M.; Collinge, Robert A. (2003). Microeconomics. Pearson. p. 238.
- Pindyck and Rubinfeld (2001), p. 127.
- Png, Ivan (1999). Managerial Economics. Blackwell. p. 271. ISBN 1-55786-927-8.
- Png (1999), p. 268.
- Negbennebor, Anthony (2001). Microeconomics, The Freedom to Choose. CAT Publishing.
- Mankiw (2007), p. 338.
- Hirschey, M (2000). Managerial Economics. Dreyden. p. 426.
- Pindyck, R; Rubinfeld, D (2001). Microeconomics (5th ed.). Prentice-Hall. p. 333.
- Melvin and Boyes (2002), p. 245.
- Varian, H (1992). Microeconomic Analysis (3rd ed.). Norton. p. 235.
- Pindyck and Rubinfeld (2001), p. 370.
- Frank (2008), p. 342.
- Pindyck and Rubinfeld (2000), p. 325.
- Nicholson (1998), p. 551.
- Perfectly competitive firms are price takers. Price is exogenous and it is possible to associate each price with a unique profit-maximizing quantity. Besanko, David, and Ronald Braeutigam, Microeconomics (2nd ed.). Wiley (2005), p. 413.
- Binger, B.; Hoffman, E. (1998). Microeconomics with Calculus (2nd ed.). Addison-Wesley.
- Frank (2009), p. 377.
- Frank (2009), p. 378.
- Depken, Craig (November 23, 2005). "10". Microeconomics Demystified. McGraw Hill. p. 170. ISBN 0-07-145911-1.
- Davies, Glyn; Davies, John (July 1984). "The revolution in monopoly theory". Lloyds Bank Review (153): 38–52.
- Levine, David; Boldrin, Michele (2008-09-07). Against Intellectual Monopoly. Cambridge University Press. p. 312. ISBN 978-0-521-87928-6.
- Tirole, p. 66.
- Tirole, p. 65.
- Hirschey (2000), p. 412.
- Melvin, Michael; Boyes, William (2002). Microeconomics (5th ed.). Houghton Mifflin. p. 239.
- Pindyck and Rubinfeld (2001), p. 328.
- Varian (1992), p. 233.
- Png (1999).
- Krugman, Paul; Wells, Robin (2009). Microeconomics (2nd ed.). Worth.
- Goodwin et al., p. 315.
- Samuelson and Marks (2006), p. 104.
- Samuelson and Marks (2006), p. 107.
- Boyes and Melvin, p. 246.
- Perloff (2009), p. 404.
- Perloff (2009), p. 394.
- Besanko and Braeutigam (2005), p. 449.
- Wessels, p. 159.
- Boyes and Melvin, p. 449.
- Varian (1992), p. 241.
- Perloff (2009), p. 393.
- Besanko and Braeutigam (2005), p. 448.
- Hall, Robert E.; Liberman, Marc (2001). Microeconomics: Theory and Applications (2nd ed.). South-Western. p. 263.
- Besanko and Braeutigam (2005), p. 451.
- If the monopolist is able to segment the market perfectly, then the average revenue curve effectively becomes the marginal revenue curve for the company and the company maximizes profits by equating price and marginal cost. That is, the company is behaving like a perfectly competitive company. The monopolist will continue to sell extra units as long as the extra revenue exceeds the marginal cost of production. The problem the company has is that it must charge a different price for each successive unit sold.
- Varian (1992), p. 242.
- Perloff (2009), p. 396.
- Because MC is the same in each market segment, the profit-maximizing condition becomes: produce where MR1 = MR2 = MC. Pindyck and Rubinfeld (2009), pp. 398–99.
- As Pindyck and Rubinfeld note, managers may find it easier to conceptualize the problem of what price to charge in each segment in terms of relative prices and price elasticities of demand. Marginal revenue can be written in terms of elasticities of demand as MR = P(1 + 1/PED). Equating MR1 and MR2 we have P1(1 + 1/PED1) = P2(1 + 1/PED2), or P1/P2 = (1 + 1/PED2)/(1 + 1/PED1). Using this equation the manager can obtain elasticity information and set prices for each segment. [Pindyck and Rubinfeld (2009), pp. 401–02.] Note that the manager may be able to obtain industry elasticities, which are far more inelastic than the elasticity for an individual firm. As a rule of thumb, the company's elasticity coefficient is 5 to 6 times that of the industry. [Pindyck and Rubinfeld (2009), p. 402.]
- Colander, David C., p. 269.
- Note that the discounts apply only to tickets, not to concessions. The reason there is not any popcorn discount is that there is not any effective way to prevent resale. A profit-maximizing theater owner maximizes concession sales by selling where marginal revenue equals marginal cost.
- Lovell (2004), p. 266.
- Frank (2008), p. 394.
- Frank (2008), p. 266.
- Smith, Adam (1776). Wealth of Nations. Penn State Electronic Classics edition, republished 2005.
- Binger and Hoffman (1998), p. 406.
- Samuelson, P.; Nordhaus, W. (2001). Microeconomics (17th ed.). McGraw-Hill.
- Samuelson, W; Marks, S (2005). Managerial Economics (4th ed.). Wiley. p. 376.
- Samuelson and Marks (2003), p. 100.
- Frank, Robert H. (2008). Microeconomics and Behavior (7th ed.). McGraw-Hill. ISBN 978-0-07-126349-8.
- Case 27/76: United Brands Company and United Brands Continentaal BV v Commission of the European Communities (ECR 207), 14 February 1978.
- Kerber, Wolfgang; Kretschmer, Jürgen-Peter; von Wangenheim, Georg (September 23, 2009). Market Share Thresholds and Herfindahl-Hirschman-Index (HHI) as Screening Instruments in Competition Law: A Theoretical Analysis (PDF). Department of Economics, University of Vienna.
- "1.5 Concentration and Market Shares", Horizontal Merger Guidelines (U.S. Department of Justice and the Federal Trade Commission), April 8, 1997.
- Case 85/76: Hoffmann-La Roche & Co. AG v Commission of the European Communities (ECR 461), 13 February 1979.
- AKZO Chemie BV v Commission of the European Communities, 3 July 1991.
- Case 322/81: NV Nederlandsche Banden Industrie Michelin v Commission of the European Communities, 9 November 1983.
- Commission Decision of 14 July 1999 relating to a proceeding under Article 82 of the EC Treaty (IV/D-2/34.780 – Virgin/British Airways), 14 July 1999, p. L30/1.
- Case 6-72: Europemballage Corporation and Continental Can Company Inc. v Commission of the European Communities, 21 February 1973.
- Aristotle. Politics (350 B.C.E. ed.).
- Aristotle. Politics. p. 1252α.
- Richardson, Gary (June 2001). "A Tale of Two Theories: Monopolies and Craft Guilds in Medieval England and Modern Imagination". Journal of the History of Economic Thought.
- Chazelas, Jean (1968). "La suppression de la gabelle du sel en 1945". Le rôle du sel dans l'histoire: travaux préparés sous la direction de Michel Mollat (Presses universitaires de France): 263–65.
- Gollan, Robin (1963). The Coalminers of New South Wales: a history of the union, 1860–1960. Melbourne: Melbourne University Press. pp. 45–134.
- "Exxon Mobil – Our history". Exxon Mobil Corp. Retrieved 2009-02-03.
- Morris, Charles R. The Tycoons: How Andrew Carnegie, John D. Rockefeller, Jay Gould, and J.P. Morgan Invented the American Supereconomy. H. Holt and Co., New York, 2005, pp. 255–258. ISBN 0-8050-7599-2.
- "United States Steel Corporation History". FundingUniverse. Retrieved 3 January 2014.
- Boselovic, Len (February 25, 2001). "Steel Standing: U.S. Steel celebrates 100 years". PG News – Business & Technology. post-gazette.com – PG Publishing. Retrieved 6 August 2013.
- "West's Encyclopedia of American Law". Answers.com. 2009-06-28. Retrieved 2011-10-11.
- Lasar, Matthew (May 13, 2011). How Robber Barons hijacked the "Victorian Internet": Ars revisits those wild and crazy days when Jay Gould ruled the telegraph and ... Ars Technica.
- Kevin J. O'Brien, IHT.com. Regulators in Europe fight for independence. International Herald Tribune, November 9, 2008. Accessed November 14, 2008.
- IfM – Comcast/NBCUniversal, LLC. Mediadb.eu (2013-11-15). Retrieved on 2013-12-09.
- Dickens, Matthew (24 May 2013). Transit Ridership Report: First Quarter 2013 (PDF). American Public Transportation Association. Retrieved 3 January 2014.
- Van Boven, M. W. "Towards A New Age of Partnership (TANAP): An Ambitious World Heritage Project (UNESCO Memory of the World – reg. form, 2002)". VOC Archives Appendix 2, p. 14.
- EU competition policy and the consumer.
- Leo Cendrowicz (2008-02-27). "Microsoft Gets Mother Of All EU Fines". Forbes. Retrieved 2008-03-10.
- "EU fines Microsoft record $1.3 billion". Time Warner. 2008-02-27. Retrieved 2008-03-10.
- "American Society for Healthcare Engineering".
- In Praise of Private Infrastructure. Globe Asia, April 2008.
- Thomas J. DiLorenzo. "The Myth of Natural Monopoly – Thomas J. DiLorenzo – Mises Daily". Mises.org. Retrieved 2012-11-02.
Further reading
- Guy Ankerl (1978). Beyond Monopoly Capitalism and Monopoly Socialism. Cambridge, Massachusetts: Schenkman. ISBN 0-87073-938-7.
- McChesney, Fred (2008). "Antitrust". In David R. Henderson (ed.). Concise Encyclopedia of Economics (2nd ed.). Indianapolis: Library of Economics and Liberty. ISBN 978-0865976658. OCLC 237794267.
- Stigler, George J. (2008). "Monopoly". In David R. Henderson (ed.). Concise Encyclopedia of Economics (2nd ed.). Indianapolis: Library of Economics and Liberty. ISBN 978-0865976658. OCLC 237794267.

External links
- Monopoly: A Brief Introduction, by The Linux Information Project
- Monopoly, by Elmer G. Wiens: online interactive models of monopoly (public or private) and oligopoly
- Beach, Chandler B., ed. (1914). "Monopoly". The New Student's Reference Work. Chicago: F. E. Compton and Co.
- Impact of Antitrust Laws on American Professional Team Sports
- A monopolist who does not know the demand curve – a paper and simulation software by Valentino Piana (2002)
- Monopoly Profit and Loss, by Fiona Maclachlan, and Monopoly and Natural Monopoly, by Seth J. Chandler, Wolfram Demonstrations Project
- Government and Microsoft: a Libertarian View on Monopolies, by François-René Rideau
- The Myth of Natural Monopoly, by Thomas J. DiLorenzo, Mises.org, 1996
- Natural Monopoly and Its Regulation
- From rulers' monopolies to users' choices: a critical survey of monopolistic practices
- Body of Knowledge on Infrastructure Regulation: Monopoly and Market Power
Whether walking down a city street near a crowded nightclub opening or pulling up to the parking lot at the local county fair, it’s probably a safe bet that you’ve seen those bright, white searchlights flitting across the dark nighttime sky – the outline of the light beam clearly visible as it reaches up to touch the bottoms of the clouds overhead. As a signal that can be seen far and wide, searchlights have a way of drawing you in to the main event. Hollywood producers certainly know this to be true. For example, the next time you sit down with a bowl of popcorn to watch a 20th Century Fox movie, note the searchlights scanning the hazy skies in the opening credits. Part of the reason searchlights are so captivating is that when we shine them up into the heavens, the light doesn’t just travel up to be lost into space – much of it is reflected by particles, clouds, and gases in the Earth’s atmosphere back down to the ground where we can see it. As a scientist at NASA’s Langley Research Center, I study how these particles, clouds, and gases are distributed throughout Earth’s atmosphere and how they are likely to change in the future. Many of the tools that my colleagues and I use to measure atmospheric components rely on the same light-scattering physics that is apparent from observing a searchlight beam. In fact, some of the earliest atmospheric remote sensing studies actually used searchlights. Hulbert (1937) describes shining a powerful searchlight up into the sky and photographing the beam from a station located roughly 20 km away. He noted that the intensity of the light changed with altitude, which he attributed to the changing density of air molecules and to the presence of “haze particles” in the upper troposphere. Today, we no longer use searchlights, but instead rely on instruments with powerful lasers that are called lidars. As the name implies, a lidar is very similar to the commonly known radar. However, instead of emitting radio waves to track storms, the lidar emits a pulse of laser light at a precise wavelength that is capable of seeing much smaller atmospheric features than storm clouds. As the light photons from the laser beam travel through the atmosphere, some of them are absorbed or scattered by gas molecules or particles, and the photons that are backscattered toward the lidar are collected by a detector and recorded over time. Since photons travel at the fixed speed of light, the time (as measured in tiny fractions of a second) at which they are detected is directly proportional to how far away the gases or particles are from the lidar. In addition, particles and gases interact differently with light of varying wavelengths, so we can use different “color” lasers to detect different atmospheric components. For example, take a look at the image at the beginning of the post showing the green lidar beam pointing up into the nighttime sky of Bozeman, Montana. What if I told you that there are actually two lidar beams in this photograph, not one? The green beam is clearly visible and is measuring the backscattered photons from particles and clouds, but there is a second beam right beside it at an infrared wavelength that is invisible to our eyes, but whose photons can be “seen” by the lidar detector. Since gaseous water vapor strongly absorbs light in the infrared, this second beam gives us the water vapor concentration in the atmospheric column.
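To make that time-to-distance conversion concrete, here is a minimal sketch of the arithmetic (my own illustration, not code from the article; the 66.7-microsecond value is invented):

# Converting a backscattered photon's round-trip travel time into the
# range of the scattering layer. Purely illustrative Python.
C = 299_792_458.0  # speed of light in vacuum, m/s

def lidar_range_m(round_trip_time_s):
    # The photon travels out to the scatterer and back, hence the factor of 2.
    return C * round_trip_time_s / 2.0

print(round(lidar_range_m(66.7e-6) / 1000.0, 1))  # -> 10.0 (km)

A photon detected 66.7 microseconds after the laser pulse fires has therefore scattered from roughly 10 km up, which is how a lidar builds a vertical profile of the atmosphere from a stream of timed detections.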
Ground-based, upward-looking lidars like those shown above are operated by many research universities and institutions worldwide and provide unprecedented observations of atmospheric vertical structure and how it varies with time for a given location. NASA also maintains a network of ground-based micropulse lidars (MPLNET) throughout the world for atmospheric monitoring and satellite validation. A key advantage of using a powerful single-wavelength laser instead of the white light source used in searchlights of old is that our modern lidars are able to overpower the natural sunlight signal so that we can even make measurements during the daytime. In addition to these highly-localized ground lidars, we also use downward-looking lidar instruments located on satellites and airplanes to characterize the vertical profile of the atmosphere across regional, national, and global scales. Ensuring that the lidar electrical and optical components can operate safely and effectively in space or at high altitude presents a large number of technological challenges to scientists and engineers that are not present with ground-based systems. Bringing these instruments to fruition has required a decades-long international effort that is still underway. The first space-based backscatter lidar designed to measure atmospheric aerosols and clouds was the NASA Lidar In-Space Technology Experiment (LITE) instrument that orbited Earth for 9 days in September 1994 aboard the Space Shuttle Discovery. Operating for 53 hours, and collecting over 40 gigabytes of data (a large amount in the 1990s!), LITE proved the concept of making backscatter lidar measurements from space and paved the way for the current-generation CALIOP lidar aboard the NASA/CNES CALIPSO satellite, which was launched in 2006. As part of the A-Train constellation of satellites, CALIPSO encircles the entire globe providing vertical profile “curtains” like those shown in the figures below. Red and yellow coloring denotes areas of high light backscatter due to the presence of aerosols and clouds, which are observed in the scene at right both near the ground as well as high up near the stratosphere (10-15 km altitude). The CALIPSO satellite continues to operate and provide valuable data to Earth scientists, despite having already well exceeded its design lifetime. The new NASA Cloud-Aerosol Transport System (CATS) lidar launched to the International Space Station earlier this year (2015), and the planned launch of the European Space Agency EarthCARE ATLID lidar will ensure the continuity of the satellite lidar record in the near future. Just from perusing the launch dates in the previous paragraph, it’s apparent that putting together a successful satellite mission takes time; however, the pace of scientific discovery is rapid and unrelenting. Society requires, and the scientific community must provide, the Earth science data that policymakers need to make informed decisions on topics ranging from regional air quality, to ocean and land use change, to climate change. Even as current satellite lidar technologies are being built, the next generation of lidars is being tested on airplanes in a variety of field missions. This approach has the advantage both of advancing the technological readiness of these instruments on a path toward space, as well as designing and carrying out hypothesis-driven, short-duration scientific studies using state-of-the-art lidar instruments.
One such experiment was the recently-concluded NASA DISCOVER-AQ field project, where our goal was to understand how well satellite remote sensors are able to detect air pollution near the ground. Such a study is hard to accomplish with actual satellite sensors since they only overfly a particular place once or twice a day or month depending on the particular satellite orbit and instrument swath. To get around this, DISCOVER-AQ used two airplanes: one flying at high-altitude with a lidar and other remote sensors to act as satellite simulators, and one flying at low-altitude with detailed in situ validation instrumentation. The image below shows the flight tracks for these two airplanes as they overfly ground-based measurement stations and balloon launch points in the Baltimore-Washington D.C., USA, metropolitan area. By measuring the atmospheric state using both in situ and remote sensing instruments, we’re able to 1) understand how pollution is distributed across the selected metropolitan areas, and 2) evaluate and improve the remote sensor data analysis algorithms that will someday serve as the basis for the next-generation of satellite sensors. Another exciting experiment is the upcoming NASA NAAMES field project to be carried out in 2015-2019. The goal of NAAMES is to understand how ocean phytoplankton ecosystems vary seasonally and how this variability translates into changes in sea spray aerosols, clouds, and regional climate. Detailed measurements of the ocean and surface atmosphere composition are conducted aboard an ocean-going ship that will cruise through the North Atlantic Ocean during four different seasons. Meanwhile, we will overfly the ship with a C-130 aircraft that is heavily instrumented with both in situ aerosol/cloud instruments as well as remote sensing instruments including an ocean-profiling high-spectral resolution lidar (HSRL), ocean color imager, and polarimeter. As shown in the project cartoon below, the ocean-profiling HSRL is a new, unique instrument that is able to profile both the atmospheric aerosols and clouds as well as the first 30 meters or so of the ocean waters. The NAAMES mission relies on the light-based lidar measurements made from aboard the airplane to connect the ship measurements to the much larger spatial scales covered by current satellite-based ocean color instruments, and to help evaluate and refine the ocean-profiling lidar operational parameters and data analysis algorithms that will someday be used to interpret the measurements of a future satellite instrument of this kind. The international suite of space-based advanced lidar instruments now being deployed to study the Earth System build on a rich history of technological development spanning many decades. With new and exciting light-based instruments employing the HSRL technique (among others) and now also able to derive sub-surface ocean properties, the future of studying the Earth using spaceborne lidars seems bright. Richard Moore is a Research Physical Scientist at NASA’s Langley Research Center in Hampton, Virginia, USA. In this role, he works closely with the other members of the NASA Langley Aerosol Research Group (LARGE) and NASA Langley Lidar Applications Group to study the interaction between atmospheric aerosols and cloud formation. Prior to joining NASA, Rich completed his Ph.D. in Chemical Engineering with the Nenes Research Group at Georgia Tech. Professional Website: http://science.larc.nasa.gov/profiles/Richard_H_Moore
What are functions: Functions are subprograms in a program that consist of blocks of code used to perform certain tasks. Generally, functions take a value as a parameter, process it, and then return an output. Functions help us reuse blocks of code and avoid repetitiveness. They can be used for dividing complex problems into smaller chunks. They increase code readability and reduce its size, as duplicate statements are replaced by a single line of code, i.e., a call to the function.

Types of functions: There are two different types of functions:
- Built-in functions/Standard Library Functions
- Custom/User defined Functions

User defined function: As the name suggests, these are custom functions created by the user. The users/programmers can create custom functions for a specific task that they need to perform.

Note: We will use the browser console to demonstrate examples performed in this post. To open up the browser console:
- Use the F12 key in Chrome and other chromium-based browsers.
- Use CTRL + SHIFT + K keyboard shortcut keys for Mozilla Firefox.
- Use Option + ⌘ + C keyboard shortcut keys in Safari (if the Develop menu does not appear, then open Preferences by pressing ⌘ + , and in the Advanced tab check “Show Develop menu in menu bar”).

How to declare a function: We use the function keyword, followed by the function's name and its parameters in parentheses. We then need to define the body of the function. We can put any code in a function; a function can have a single or multiple lines of code depending upon the purpose of that particular function. The general syntax is:

function function_name(parameter1, parameter2, ..., parameterN) {
    // function body
}

In this example, we will declare and define a function that squares the value of the given number:

function square(number) {
    let sq = number * number;
    return sq;
}

In the example given above, the function square takes a number as a parameter. Then it multiplies the number with itself and stores it in a variable named sq. The function then returns the value that is present inside the variable sq. The sq variable is a local variable of the function square and will not work outside of this function. The variables that are declared and defined in a function are that function’s local variables. On the other hand, the variables declared in the main program are global variables and can be accessed from anywhere in the program.

Now we will use another example, a function (here called multiply for illustration) which will take two different parameters and multiply them:

function multiply(number1, number2) {
    let ans = number1 * number2;
    return ans;
}

The function given above takes two different numbers as parameters. It then multiplies them with each other and stores the value in the variable ans. Then it returns the value of the variable ans to where it was called.

How to call a function: Declaring and defining a function specifies what the function will do when it is called. In the example given below, we will call the above-mentioned function square, and we will pass the number 5 as a parameter to it:

square(5);

The function will square the number 5 and return 25. We can verify this by calling the function inside the console.log() method:

console.log(square(5)); // 25

If a function is returning a value, it returns that value to where it was called. As seen in the above example, when we called the function inside the console.log() method, the output was 25, which is the returning value of the function. We can use any variable or number in place of the parameter. A function can be called from inside any other function as well. Function calls can also be used in the conditions of if statements and loops. Functions are individual blocks of code that are written in order to perform specific actions. They are the most fundamental building blocks of almost all major programming languages.
As mentioned above, all major programming languages have built-in functions. These functions help developers perform complex tasks using a single line of code. Developers also have the option to write their own functions according to the requirements of their code. In this post, we discussed what functions are and how to declare them, and we also learned how to call the declared functions.
PowerPoint created by: Alexander J. Hawkins. Information documented from DK Smithsonian UNIVERSE Definitive Visual Guide.

As human beings go through cyclic lives, maturing from birth to maturity to old age, stars also follow a series of stages from their creation until death. Stars follow varying sequences of change depending greatly on their solar mass (1 solar mass = 1.9891 × 10^30 kilograms). Regardless of whether a star is a low-mass, moderate-mass, or high-mass star, all are born from interstellar clouds of gas that collapse under the pressure of gravity, enter a long period of stability called the main sequence, and die off to become another celestial body, forever changed. However, what happens during a star’s existence dictates what unique path it follows. As a result, our night sky is filled with distinct bodies of light, all stars that have experienced a different story in their lives.

Stars are born in cold interstellar clouds of gas that drift through space. The cooler the cloud, the less protected its gas, composed mainly of hydrogen, is from gravitational collapse. At lower temperatures, the cloud’s hydrogen atoms collect together to form hydrogen molecules. Once the cloud grows to surpass a certain mass and experiences a gravitational disturbance, sometimes caused by supernovae, it will begin to collapse into itself. As this occurs, fragmented pieces of the cloud of varying masses and sizes separate to form protostars, the earliest form of a star.

Once protostars are formed out of an interstellar cloud, the stellar objects continue to collapse, causing central temperatures and internal pressure to build up. Temperature and pressure rise with a protostar's mass, so both are higher in higher-mass protostars. If a protostar has a mass of less than 0.08 solar masses, the temperature and pressure at its core will never reach levels high enough for nuclear reactions to begin, and the star will instead become a brown dwarf. However, if a protostar surpasses 0.08 solar masses, the gas that had clustered to form the protostar begins to rotate around the star, increasing in speed as it draws near the stellar body and being pulled slowly in, until a ring of stellar material is formed around the protostar. Until entering its main sequence, the protostar demonstrates unstable movements and reactions, e.g., rapid rotations and strong stellar winds. In protostars with a mass over 0.08 solar masses, the internal pressure and temperature will meet the requirements needed for nuclear reactions within the star to start. With this, the pressure of the stellar body will stabilize to balance gravity, classifying the protostar as an official star.

After entering the main sequence, with the new star in a stable condition, the rings of extra material rotating around the star will begin to cool in temperature. As this happens, elements within the disks will begin to condense, sticking to one another. Small pieces of material then join larger clumps, and the process continues until the balls of matter are the size of a planet.
Of course, planet formation may take time, for while the clumps of material are still warm, other fragments impacting them may cause a piece to split apart again; planets are not permanently formed until they have cooled enough. Any excess, loose material from the star’s formation that cools without becoming a planet becomes comets, asteroids, or trails of gas.

For 90% of a star’s life it exists in a period of stability called the main sequence. 90% of all stars in the night sky are currently in their main sequence, since this calm phase makes up most of a star’s life. During this time, stars expand and contract, but at very small levels, not changing dramatically in activity. Temperature and pressure levels remain mostly constant, with little variation over the course of the billions of years a star stays in the main sequence. However, depending on the initial mass of a star, the time a star follows the main sequence varies (more massive stars exit the main sequence sooner due to their faster burning of fuel in comparison to small stars). At the end of a star’s main sequence, its solar mass dictates what path it will follow in the last leg of its life, and even the outcome of its death.

Low-Mass Stars: Any star with half or less the mass of our sun is considered a low-mass star.
Sun-Like Stars: Any star with equal or approximately equal mass to our sun is considered a sun-like star.
High-Mass Stars: Any star with a much greater mass than our sun is considered a high-mass star.

As with most stars, low-mass stars eventually burn, or deplete, the hydrogen fuel in their cores. Once this happens, a low-mass star will slowly convert its atmosphere to helium instead of hydrogen, causing it to collapse; a similar trait among low-mass, sun-like, and high-mass stars. However, due to low-mass stars’ lower mass, the internal pressure and temperature levels in their cores cannot reach the point of helium burning. This then causes the star to slowly cool down and lose luminosity until it fades into a black dwarf.

1. Star grows in size as its hydrogen layer is burned away.
2. Star begins to collapse and shrink as its hydrogen fuel dissipates.
3. Star continues collapsing due to its inability to produce helium burning.
4. Star grows so small and cold that only gaseous pressure counteracts gravity.
5. Minuscule, dark star progressively fades away.
6. After losing most of its regular pressure and temperature, having decreased in size tremendously, the low-mass star turns into a dim black dwarf star.

Sun-like stars, as can be concluded, have a similar mass to our solar system’s sun. After exiting the main sequence, such stars begin to use up all of the remaining hydrogen in their cores until the quantity of hydrogen available becomes depleted. When this occurs, sun-like stars begin the process of hydrogen shell burning, where the hydrogen in their atmosphere begins to burn away, increasing their size until they become a red giant star. Red giants are massive stars that a sun-like star transforms into nearing the end of its life; the red giant eventually sheds its outermost layers, becoming a planetary nebula. Over time, the star within the planetary nebula builds up pressure and temperature at its core, causing helium burning to reactivate and the star to expand once more. Soon after, the star within the planetary nebula collapses into a white dwarf (slightly more luminous and hot than a black dwarf), and then a black dwarf after it cools further.
Note: Scientists have predicted that this is the likely path of our current sun, which is comparable in size to other stars that have followed similar timelines.

1. Star grows to become a red giant as hydrogen burning causes an increase in the size of the star.
2. Red giant star’s outer layers of hydrogen and helium are released from the star, forming a planetary nebula.
3. Star within the planetary nebula begins to expand due to helium burning, triggered by high temperature and pressure levels in the core of the star.
4. Star collapses inside the planetary nebula after its helium shell is burned away, causing it to cool down into a white dwarf star.
5. New white dwarf star gradually fades and cools until it becomes a black dwarf star.

It is a common misconception that our sun is large enough for a supernova (an extreme release of stellar material and heat) to occur in its future. In truth, the only type of star that can undergo such an explosion is a high-mass star, a star with a greater solar mass than that of our sun. In astronomy, the higher a star is in mass, the more times it will go through a period of expansion and contraction. The mass of a star, in addition, decides the temperature of a star’s core each time it expands and contracts. Depending on the stage in a star’s development, various elements are formed within the core of the star, with the heaviest sustainable material being iron, formed when a massive star develops an iron core. However, any elements heavier and denser than iron cannot be produced internally by stellar bodies (stars). The only way to create such substances is through a supernova explosion, which can create elements such as gold in the process.

Once most high-mass stars expand due to hydrogen burning, they become supergiant stars, larger than any other form of star, which produce heavy elements such as iron (some instead become red giant stars). However, instead of calmly settling into becoming a black or white dwarf star, high-mass red giant or supergiant stars collapse violently, resulting in supernovae. After ejecting new, heavy elements into space, supernovae create either a neutron star or a black hole, each exerting extreme gravitational pull.

1. Star begins hydrogen burning, growing in size as its hydrogen reserves are lowered.
2. High pressure and heat cause the high-mass star to turn into either a red giant star or a supergiant star.
3. Red giant/supergiant star creates heavy elements, such as iron, inside of itself through nuclear reactions at its core.
4. After reaching the point at which the red giant/supergiant star has made the heaviest element it can form, iron, the star collapses and explodes into a supernova, producing even denser, heavier elements in the process.
5. In the aftermath of the supernova, the collapsed star could turn into either a neutron star or a black hole depending on its solar mass before the supernova. If the high-mass star remnant is over 1.4 solar masses, it will collapse to form a neutron star. If the high-mass star remnant is over 3.0 solar masses, it will collapse to form a black hole.

Once a high-mass star reaches its stellar end point, the ultimate stage of development in a star, it turns into either a neutron star or a black hole depending on the remnant of the supernova. If this remnant is 1.4 solar masses or larger (a threshold otherwise known as the Chandrasekhar limit), then the destroyed high-mass star will become a neutron star.
However, if the remnant surpasses 3.0 solar masses, then the collapsed high-mass star will become a black hole.

Neutron Stars - Neutron stars are one of the two resulting bodies created by a supernova (especially type II supernova explosions). A neutron star is an incredibly dense, compact star with an interior made primarily of neutrons. Much different from their parent stars, neutron stars have a crystalline outer crust, a much stronger surface gravitational pull despite their small size (with masses usually between about 1.4 and 3.0 solar masses), and a rapid rotation. This rotation slows over time due to the loss of energy, but will occasionally spike up again due to “starquakes”, small tremors that occur beneath the solid, thin surface of neutron stars. Some neutron stars eject beams of radiation regularly; these are commonly classified as pulsars.

Black Holes - Black holes are the other resulting body created after a supernova, formed when the remnant is greater than about 3.0 solar masses. When a star collapses at such a size, the stellar object becomes incredibly small and dense, resulting in a gravitational pull so powerful that neither radiation (heat) nor visible light can escape. Such celestial objects are classified as stellar-mass black holes, which can only be detected through the effects and alterations they produce in nearby objects in space: the light of distant objects being bent by their gravitational pull, the matter drawn onto their accretion disks (rings), and the changes they make to the movement of objects in space. Stellar-mass black holes can be pinpointed by the high radiation levels created by the material they pull in, allowing astronomers to carefully observe them despite the fact that they can hardly be seen by even telescope lenses. The most distinctive region of all black holes, the dark middle section, is called the event horizon, beyond which light, radiation, or any matter can no longer escape the black hole’s immense gravity. This region has troubled astronomers for years, for it is unknown what is on the other side of a black hole, or even if there is one. Like neutron stars, black holes are one of the many rare and spectacular phenomena produced by the death of a star, demonstrating that a star’s life continues on even after its primary period of activity.

As if nature created all things, living and non-living, alike, the stars in our nighttime skies are not much different from the people of Earth. Stars go through a constant cycle, being formed in a tremendous display of growth, existing in a long period of stability, and finally being extinguished in a remarkable eruption of gas and fire. Then, as with the human race, new stars are born through the death of others, as the remnants of all supernovae result in the birth of a nebula. From there, the systematic pattern of stellar life repeats itself, producing new stars in place of the old. Stars are an important piece of our vast universe, giving life to planetary systems like ours and demonstrating the basis of known space, upon which we can study the workings and particulars of the final frontier.

All stars that reach the main sequence are formed in a similar manner, through the formation and activation of a protostar, but what lies between the main sequence and their stellar end points can vary. Some stars with lower mass dissipate quickly due to rapid cooling and turn into white and black dwarfs. Others with higher mass grow through hydrogen burning, becoming supergiant stars and, eventually, supernovae.
So whenever you look up at the night sky, peering at the twinkling stars drifting endlessly, it is important to remember those distant, beating hearts of space.
Mount Everest, Measurement of

A process began around 1800 that would ultimately establish Mount Everest as the world's tallest mountain. Started by Englishman William Lambton, this process was referred to as "The Great Trigonometrical Survey," and led to a rigorous mapping of India. In 1796, Lambton was posted to India as a British lieutenant. Lambton's arrival coincided with increasing subcontinent colonization, and maps and surveys of these British-conquered territories were of great interest. Lambton proposed a more exacting survey than any attempted before in Asia. The resulting measurements would yield, for example, more accurate values for India's width. Lambton believed that the most important outcome would be a better understanding of Earth's geodetic shape.

Early Mapping of the Earth

Fifty years before Lambton's proposed survey, French scientists determined that Earth is better described as an oblate spheroid than as a sphere. This meant that the distance from Earth's center to the equator is greater than the distance from its center to either pole. Lambton's survey, renamed "The Great Trigonometrical Survey of India," would help calculate the amount of this equatorial bulge, and thereby result in a better model of Earth's shape. As a by-product, a precise height for the Himalayan Mountains would also be sought.

Points on Earth are given a measurement of latitude and longitude. The longitudinal line passing through Greenwich, England, is called the Prime Meridian, and all lines of longitude are measured from it. Lambton's survey centered on the longitudinal line 78 degrees east of the prime meridian, running from India's southern tip to the Himalayan foothills. This arc-of-longitude became known as "The Great Arc." From the central hub of the Great Arc, an accurate survey of all of India could be performed.

Trigonometry in Mapping

The survey Lambton headed from around 1800 until his death in 1823 ran more than 1,000 miles northwards from India's southern coast. The survey consisted of a web of triangles whose vertices were located precisely. The basic idea behind traditional surveying is to use trigonometry (the mathematics of triangles) and measuring devices to locate exactly "new" points on Earth from points whose locations are already accurately known. A right triangle is used to determine the location of a new point C relative to a known point A, as shown in the above illustration. There are six measurements for any triangle: the three angles and the three lengths of the sides. Given two sides and the included angle, or given two angles and one side, any triangle is uniquely determined. Since the triangle is a right triangle, angle ABC is 90 degrees. Using a device called a transit, angle CAB can be measured. The length of side AC is now measured by using, for example, a chain. The lengths of sides AB and BC can now be computed. AB is the lateral distance from A to C, while length BC is the elevation of point C above point A. Point C can now be used as the starting point for a new triangle, and the procedure is repeated. This method is a much-simplified version of what took place in the Great Survey. For example, the orientation and length of a baseline had to be determined laboriously. Lambton's first baseline near the eastern Indian coast measured approximately 7.5 miles, and it was measured using a specially constructed 100-foot iron chain.
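To make the triangle computation described a few paragraphs above concrete, here is a minimal sketch with invented numbers (an illustration, not survey data). Since angle ABC is the right angle, AB and BC follow directly from the measured angle CAB and the measured side AC:

# Python sketch of one right-triangle surveying step. All values hypothetical.
import math

angle_CAB_deg = 30.0   # measured with the transit
length_AC_m = 1000.0   # measured, e.g., with a chain

lateral_AB_m = length_AC_m * math.cos(math.radians(angle_CAB_deg))    # 866.0
elevation_BC_m = length_AC_m * math.sin(math.radians(angle_CAB_deg))  # 500.0

print(f"AB = {lateral_AB_m:.1f} m, BC = {elevation_BC_m:.1f} m")

Point C, once located, seeds the next triangle; chaining thousands of such triangles is what carried the Great Arc more than a thousand miles north.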
In terrible subtropical heat and humidity, the first baseline took 57 days to complete. Lambton then turned west and plunged into the Indian jungle. Because of the jungle canopy, large numbers of trees had to be cut down so that towers could be built; people standing on the towers could then make the necessary angular measurements. Into this scene, British Lieutenant George Everest* arrived in 1819. Everest joined the Great Survey as one of many engineers reporting to Lambton. For the next several years, Everest worked in terrible jungle conditions of heat, humidity, and monsoon. Eventually, he collapsed from malaria and fever, and he left India in 1822, but he returned to his duties to become head of the survey after Lambton's death. Everest developed many new and innovative techniques, and despite poor health and the terrible climate, he pushed the Great Arc to the Himalayan foothills in northern India by the early 1840s. In 1843, Everest retired from his duties and returned to England.

*Unlike the pronunciation used today, George Everest pronounced his name "Eve-rest," like "evening."

Measuring Mountain Heights

Everest never attempted to measure any of the heights of the Himalayan range. However, two of his subordinates, Andrew Waugh and John Armstrong, made measurements from the Himalayan foothills. Many of the apparently loftiest mountains lay north of India within Nepal or Tibet. Nepal had closed its borders to foreigners, so the team could only estimate mountain distances. Nevertheless, with estimated distances and several angular measurements, the surveyors computed heights for various mountain peaks. From the present-day town of Darjeeling, Waugh measured a height for "Kangchenjunga," which is now known to be the world's third tallest mountain.* Waugh's measured height of 28,176 feet is within seven feet of today's accepted value. In 1847, both Waugh and Armstrong, from different locations, took measurements of a mountain suspected to be even taller than Kangchenjunga. Since no local name could be determined, it was simply listed in survey records as "Himalaya Peak XV."

*The most common translation of Kangchenjunga is "Five Treasuries of the Great Snow," from the five high peaks that rise from its surrounding glaciers.

Since Peak XV measurements were taken from a great distance away, terrestrial refraction (the bending of light in Earth's atmosphere) could have had a profound effect on any angles measured. Mathematical constants called "coefficients of refraction" had to be included in the elevation computations to correct for this phenomenon. By 1856, armed with better coefficients of refraction and with more accurate angular measurements, Waugh communicated his finding that Peak XV was computed at 29,002 feet above sea level. Moreover, Waugh recommended that this mountain be officially named Mount Everest to honor Everest's important role in the Great Survey.

In 1953, about a century after Mount Everest's height was first clearly ascertained, Edmund Hillary and Tenzing Norgay became the first to reach its summit. Since that time, mountaineers have placed various devices on Mount Everest to more accurately determine its height. For example, in 1992, an American expedition placed a reflector atop the mountain to bounce laser light off its surface. This device led to a measurement of 29,031 feet. However, like earlier measurements, this one included the snowcap's depth.
Since the snowcap can vary, it would be advantageous to determine Everest's height minus this layer, estimated at between 30 and 60 feet. It has been proposed that a future expedition use ground penetrating radar to find the snow pack depth and thereby determine the height of Everest's rocky apex. The latest surveying method used to measure Mount Everest's elevation makes use of the Global Positioning System (GPS). GPS uses satellite signals to determine the coordinates of points on Earth's surface. The National Geographic Society announced in November 1999 a revised height of 29,035 feet (using GPS) for Mount Everest, but that measurement, as others before it, includes the ice and snow layers.

see also Angles of Elevation and Depression; Angles, Measurement of; Global Positioning System.

Philip Edward Koth (with William Arthur Atkins)
1.9: How a Triple Bond is Formed - The Bonds in Ethyne

Hybridization was introduced to explain molecular structures when valence bond theory failed to correctly predict them. It is experimentally observed that bond angles in organic compounds are close to 109.5°, 120°, or 180°. According to Valence Shell Electron Pair Repulsion (VSEPR) theory, electron pairs repel each other, and the bonds and lone pairs around a central atom are generally separated by the largest possible angles.

Carbon is a perfect example showing the value of hybrid orbitals. Carbon's ground state configuration is 1s2 2s2 2p2. According to valence bond theory, carbon should form only two covalent bonds, resulting in CH2, because it has two unpaired electrons in its electronic configuration. However, experiments have shown that CH2 is highly reactive and cannot exist outside of a reaction. Therefore, this does not explain how CH4 can exist. To form four bonds, the configuration of carbon must have four unpaired electrons. One way CH4 can be explained is that the 2s and the three 2p orbitals combine to make four equal-energy sp3 hybrid orbitals, each containing one unpaired electron. Now that carbon has four unpaired electrons, it can form four equal-energy bonds. The hybridization of orbitals is favored because hybridized orbitals are more directional, which leads to greater overlap when forming bonds; the bonds formed are therefore stronger. This results in more stable compounds when hybridization occurs. The next section will explain the various types of hybridization and how each type helps explain the structure of certain molecules.

sp3 hybridization can explain the tetrahedral structure of molecules. In it, the 2s orbital and all three of the 2p orbitals hybridize to form four sp3 orbitals, each consisting of 75% p character and 25% s character. The frontal lobes align themselves in the manner shown below. In this structure, electron repulsion is minimized.

Energy changes occurring in hybridization: Hybridization of an s orbital with all three p orbitals (px, py, and pz) results in four sp3 hybrid orbitals. sp3 hybrid orbitals are oriented at a bond angle of 109.5° from each other. This 109.5° arrangement gives tetrahedral geometry (Figure 4).

Example: sp3 Hybridization in Methane. Because carbon plays such a significant role in organic chemistry, we will be using it as an example here. Carbon's 2s and all three of its 2p orbitals hybridize to form four sp3 orbitals. These orbitals then bond with four hydrogen atoms through sp3-s orbital overlap, creating methane. The resulting shape is tetrahedral, since that minimizes electron repulsion.

Lone Pairs: Remember to take into account lone pairs of electrons. Lone pairs do not participate in bonding, so each is placed in its own hybrid orbital. This is why H2O has a tetrahedral electron-pair arrangement. We can also build sp3d and sp3d2 hybrid orbitals if we go beyond s and p subshells.

sp2 hybridization can explain the trigonal planar structure of molecules. In it, the 2s orbital and two of the 2p orbitals hybridize to form three sp2 orbitals, each consisting of 67% p and 33% s character. The frontal lobes align themselves in the trigonal planar structure, pointing to the corners of a triangle in order to minimize electron repulsion and to improve overlap. The remaining p orbital remains unchanged and is perpendicular to the plane of the three sp2 orbitals.
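To make those percentages explicit, a standard textbook construction (not taken from this page) writes the three sp2 hybrids as normalized combinations of the 2s orbital and two 2p orbitals:

\(h_1 = \frac{1}{\sqrt{3}}(2s) + \sqrt{\frac{2}{3}}(2p_x)\)

\(h_2 = \frac{1}{\sqrt{3}}(2s) - \frac{1}{\sqrt{6}}(2p_x) + \frac{1}{\sqrt{2}}(2p_y)\)

\(h_3 = \frac{1}{\sqrt{3}}(2s) - \frac{1}{\sqrt{6}}(2p_x) - \frac{1}{\sqrt{2}}(2p_y)\)

Squaring the 2s coefficient gives \((1/\sqrt{3})^2 = 1/3 \approx 33\%\) s character, and hence 67% p character, matching the figures quoted above.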
Energy changes occurring in hybridization: Hybridization of an s orbital with two p orbitals (px and py) results in three sp2 hybrid orbitals that are oriented at a 120° angle to each other (Figure 3). sp2 hybridization results in trigonal planar geometry.

Example: sp2 Hybridization in Aluminum Trihydride. In aluminum trihydride, one 2s orbital and two 2p orbitals hybridize to form three sp2 orbitals that align themselves in the trigonal planar structure. The three Al sp2 orbitals bond with the 1s orbitals from the three hydrogens through sp2-s orbital overlap.

Example: sp2 Hybridization in Ethene. Similar hybridization occurs in each carbon of ethene. For each carbon, one 2s orbital and two 2p orbitals hybridize to form three sp2 orbitals. These hybridized orbitals align themselves in the trigonal planar structure. For each carbon, two of these sp2 orbitals bond with two 1s hydrogen orbitals through s-sp2 orbital overlap. The remaining sp2 orbitals on each carbon bond with each other, forming a σ bond between the carbons through sp2-sp2 orbital overlap. This leaves us with the two p orbitals on each carbon that have a single electron in them. These orbitals form a π bond through p-p orbital overlap, creating a double bond between the two carbons. Because a double bond was created, the overall structure of the ethene compound is planar, and the geometry around each carbon is still trigonal planar.

sp hybridization can explain the linear structure in molecules. In it, the 2s orbital and one of the 2p orbitals hybridize to form two sp orbitals, each consisting of 50% s and 50% p character. The front lobes face away from each other and form a straight line, leaving a 180° angle between the two orbitals. This formation minimizes electron repulsion. Because only one p orbital was used, we are left with two unaltered 2p orbitals that the atom can use. These p orbitals are at right angles to one another and to the line formed by the two sp orbitals.

Energy changes occurring in hybridization: Figure 1: Notice how the energy of the electrons lowers when hybridized. These p orbitals come into play in compounds such as ethyne, where they form two additional π bonds, resulting in a triple bond. This only happens when two atoms, such as two carbons, both have two p orbitals that each contain an electron. An sp hybrid orbital results when an s orbital is combined with a p orbital (Figure 2). We will get two sp hybrid orbitals since we started with two orbitals (s and p). sp hybridization results in a pair of directional sp hybrid orbitals pointed in opposite directions. These hybridized orbitals result in higher electron density in the bonding region for a sigma bond toward the left of the atom and for another sigma bond toward the right. In addition, sp hybridization provides linear geometry with a bond angle of 180°.

Example: sp Hybridization in Magnesium Hydride. In magnesium hydride, the 3s orbital and one of the 3p orbitals from magnesium hybridize to form two sp orbitals. The two frontal lobes of the sp orbitals face away from each other, forming a straight line leading to a linear structure. These two sp orbitals bond with the two 1s orbitals of the two hydrogen atoms through sp-s orbital overlap.

Example: sp Hybridization in Ethyne. The hybridization in ethyne is similar to the hybridization in magnesium hydride. For each carbon, the 2s orbital hybridizes with one of the 2p orbitals to form two sp hybridized orbitals.
The frontal lobes of these orbitals face away from each other, forming a straight line. The first bond consists of sp-sp orbital overlap between the two carbons. Two more bonds consist of s-sp orbital overlap between the sp hybridized orbitals of the carbons and the 1s orbitals of the hydrogens. This leaves us with two p orbitals on each carbon that have a single electron in them. This allows for the formation of two π bonds through p-p orbital overlap. The linear shape, or 180° angle, is formed because electron repulsion is minimized in this position.

Using the Lewis structures, try to figure out the hybridization (sp, sp2, sp3) of the indicated atom and indicate the atom's shape.

1. The carbon.
2. The oxygen.
3. The carbon on the right.

Answers:

1. sp2 – trigonal planar. The carbon has no lone pairs and is bonded to three hydrogens, so we just need three hybrid orbitals, aka sp2.
2. sp3 – tetrahedral. Don't forget to take into account all the lone pairs. Every lone pair needs its own hybrid orbital. That makes three hybrid orbitals for lone pairs, and the oxygen is bonded to one hydrogen, which requires another sp3 orbital. That makes 4 orbitals, aka sp3.
3. sp – linear. The carbon is bonded to two other atoms, which means it needs two hybrid orbitals, aka sp.

An easy way to figure out what hybridization an atom has is to just count the number of atoms bonded to it and the number of lone pairs. Double and triple bonds still count as being bonded to only one atom. Use this method to go over the above problems again and make sure you understand it. It's a lot easier to figure out the hybridization this way.

Contributors: Harpreet Chima (UCD), Farah Yasmeen
The OECD defines GDP as "an aggregate measure of production equal to the sum of the gross values added of all resident and institutional units engaged in production (plus any taxes, and minus any subsidies, on products not included in the value of their outputs)." An IMF publication states that "GDP measures the monetary value of final goods and services - that is, those that are bought by the final user - produced in a country in a given period of time (say a quarter or a year)." In other words, the market value of all the goods and services produced within a country during the period under consideration is the gross domestic product. The ratio of GDP to the total population of the region is per capita GDP, often taken as a rough indicator of the average standard of living. Total GDP can also be broken down into the contribution of each industry or sector of the economy.

William Petty came up with a basic concept of GDP to defend landlords against unfair taxation during warfare between the Dutch and the English between 1652 and 1674. Charles Davenant developed the method further in 1695. The modern concept of GDP was first developed by Simon Kuznets for a US Congress report in 1934. In this report, Kuznets warned against its use as a measure of welfare (see below under limitations and criticisms). After the Bretton Woods conference in 1944, GDP became the main tool for measuring a country's economy. At that time gross national product (GNP) was the preferred estimate, which differed from GDP in that it measured production by a country's citizens at home and abroad rather than its 'resident institutional units' (see OECD definition above). The switch from GNP to GDP in the US came in 1991, trailing behind most other nations.

The history of the concept of GDP should be distinguished from the history of changes in ways of estimating it. The value added by firms is relatively easy to calculate from their accounts, but the value added by the public sector, by financial industries, and by intangible asset creation is more complex. These activities are increasingly important in developed economies, and the international conventions governing their estimation and their inclusion or exclusion in GDP regularly change in an attempt to keep up with industrial advances. In the words of one academic economist, "The actual number for GDP is therefore the product of a vast patchwork of statistics and a complicated set of processes carried out on the raw data to fit them to the conceptual framework."

GDP can be determined in three ways, all of which should, in principle, give the same result: the production (or output or value added) approach, the income approach, and the expenditure approach. The most direct of the three is the production approach, which sums the outputs of every class of enterprise to arrive at the total. The expenditure approach works on the principle that all of the product must be bought by somebody, therefore the value of the total product must be equal to people's total expenditures in buying things. The income approach works on the principle that the incomes of the productive factors ("producers," colloquially) must be equal to the value of their product, and determines GDP by finding the sum of all producers' incomes. This approach mirrors the OECD definition given above.
- Estimate the gross value of domestic output from the many economic activities;
- Determine the intermediate consumption, i.e., the cost of material, supplies and services used to produce final goods or services;
- Deduct intermediate consumption from gross value to obtain the gross value added.

Gross value added = gross value of output – value of intermediate consumption.

Value of output = value of the total sales of goods and services plus value of changes in the inventories.

The sum of the gross value added in the various economic activities is known as "GDP at factor cost". GDP at factor cost plus indirect taxes less subsidies on products = "GDP at producer price".

For measuring output of domestic product, economic activities (i.e. industries) are classified into various sectors. After classifying economic activities, the output of each sector is calculated by either of the following two methods:

- By multiplying the output of each sector by the respective market price and adding them together, or
- By collecting data on gross sales and inventories from the records of companies and adding them together.

Subtracting each sector's intermediate consumption from its gross output and summing across sectors gives the gross value added (GVA) at factor cost, i.e. GDP at factor cost. Adding indirect taxes minus subsidies to GDP at factor cost gives the "GDP at producer prices".
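As an end-to-end illustration of this bookkeeping, here is a small sketch in Python with entirely invented sector figures (the numbers themselves mean nothing; only the arithmetic matters):

# Production approach: sum sectoral gross value added, then move from
# factor cost to producer prices. All figures are hypothetical.
sectors = {
    "agriculture":   {"gross_output": 120.0, "intermediate": 40.0},
    "manufacturing": {"gross_output": 300.0, "intermediate": 180.0},
    "services":      {"gross_output": 250.0, "intermediate": 90.0},
}

# GVA per sector = gross output - intermediate consumption;
# summed over sectors this is GDP at factor cost.
gdp_factor_cost = sum(s["gross_output"] - s["intermediate"] for s in sectors.values())

indirect_taxes, subsidies = 35.0, 10.0
gdp_producer_prices = gdp_factor_cost + indirect_taxes - subsidies

print(gdp_factor_cost, gdp_producer_prices)  # -> 360.0 385.0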
The sum of COE, GOS and GMI is called total factor income; it is the income of all of the factors of production in society. It measures the value of GDP at factor (basic) prices. The difference between basic prices and final prices (those used in the expenditure calculation) is the total taxes and subsidies that the government has levied or paid on that production. So adding taxes less subsidies on production and imports converts GDP at factor cost to GDP(I). Total factor income is also sometimes expressed as:Total factor income = employee compensation + corporate profits + proprietor's income + rental income + net interest The third way to estimate GDP is to calculate the sum of the final uses of goods and services (all uses except intermediate consumption) measured in purchasers' prices. Market goods which are produced are purchased by someone. In the case where a good is produced and unsold, the standard accounting convention is that the producer has bought the good from themselves. Therefore, measuring the total expenditure used to buy things is a way of measuring production. This is known as the expenditure method of calculating GDP. GDP (Y) is the sum of consumption (C), investment (I), government spending (G) and net exports (X – M).Y + (X − M) Here is a description of each GDP component:C (consumption) is normally the largest GDP component in the economy, consisting of private expenditures in the economy (household final consumption expenditure). These personal expenditures fall under one of the following categories: durable goods, nondurable goods, and services. Examples include food, rent, jewelry, gasoline, and medical expenses, but not the purchase of new housing. I (investment) includes, for instance, business investment in equipment, but does not include exchanges of existing assets. Examples include construction of a new mine, purchase of software, or purchase of machinery and equipment for a factory. Spending by households (not government) on new houses is also included in investment. In contrast to its colloquial meaning, "investment" in GDP does not mean purchases of financial products. Buying financial products is classed as 'saving', as opposed to investment. This avoids double-counting: if one buys shares in a company, and the company uses the money received to buy plant, equipment, etc., the amount will be counted toward GDP when the company spends the money on those things; to also count it when one gives it to the company would be to count two times an amount that only corresponds to one group of products. Buying bonds or stocks is a swapping of deeds, a transfer of claims on future production, not directly an expenditure on products. G (government spending) is the sum of government expenditures on final goods and services. It includes salaries of public servants, purchases of weapons for the military and any investment expenditure by a government. It does not include any transfer payments, such as social security or unemployment benefits. X (exports) represents gross exports. GDP captures the amount a country produces, including goods and services produced for other nations' consumption, therefore exports are added. M (imports) represents gross imports. Imports are subtracted since imported goods will be included in the terms G, I, or C, and must be deducted to avoid counting foreign supply as domestic. Note that C, G, and I are expenditures on final goods and services; expenditures on intermediate goods and services do not count. 
According to the U.S. Bureau of Economic Analysis, which is responsible for calculating the national accounts in the United States, "In general, the source data for the expenditures components are considered more reliable than those for the income components [see income method, above]."
GDP can be contrasted with gross national product (GNP) or, as it is now known, gross national income (GNI). The difference is that GDP defines its scope according to location, while GNI defines its scope according to ownership. In a global context, world GDP and world GNI are, therefore, equivalent terms. GDP is product produced within a country's borders; GNI is product produced by enterprises owned by a country's citizens. The two would be the same if all of the productive enterprises in a country were owned by its own citizens, and those citizens did not own productive enterprises in any other countries. In practice, however, foreign ownership makes GDP and GNI non-identical. Production within a country's borders, but by an enterprise owned by somebody outside the country, counts as part of its GDP but not its GNI; on the other hand, production by an enterprise located outside the country, but owned by one of its citizens, counts as part of its GNI but not its GDP. For example, the GNI of the USA is the value of output produced by American-owned firms, regardless of where the firms are located. Similarly, if a country becomes increasingly indebted, and spends large amounts of income servicing this debt, this will be reflected in a decreased GNI but not a decreased GDP. Likewise, if a country sells off its resources to entities outside the country, this will also be reflected over time in decreased GNI, but not decreased GDP. This makes the use of GDP more attractive for politicians in countries with increasing national debt and decreasing assets.
Gross national income (GNI) equals GDP plus income receipts from the rest of the world minus income payments to the rest of the world. In 1991, the United States switched from using GNP to using GDP as its primary measure of production. The relationship between United States GDP and GNP is shown in table 1.7.5 of the National Income and Product Accounts.
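A short sketch of the GNI relation just stated, with hypothetical figures in billions:

```python
# GNI = GDP + income receipts from the rest of the world - income payments to it.
gdp = 620.0
receipts_from_abroad = 30.0   # e.g., profits earned by citizen-owned firms abroad
payments_to_abroad = 45.0     # e.g., profits repatriated by foreign-owned firms

gni = gdp + receipts_from_abroad - payments_to_abroad
print(gni)  # 605.0: lower than GDP, as for a net-debtor economy
```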
The international standard for measuring GDP is contained in the book System of National Accounts (1993), which was prepared by representatives of the International Monetary Fund, European Union, Organization for Economic Co-operation and Development, United Nations and World Bank. The publication is normally referred to as SNA93 to distinguish it from the previous edition published in 1968 (called SNA68). SNA93 provides a set of rules and procedures for the measurement of national accounts. The standards are designed to be flexible, to allow for differences in local statistical needs and conditions. Within each country GDP is normally measured by a national government statistical agency, as private sector organizations normally do not have access to the information required (especially information on expenditure and production by governments).
The raw GDP figure as given by the equations above is called the nominal, historical, or current GDP. When one compares GDP figures from one year to another, it is desirable to compensate for changes in the value of money, i.e., for the effects of inflation or deflation. To make it more meaningful for year-to-year comparisons, it may be multiplied by the ratio between the value of money in the year the GDP was measured and the value of money in a base year. For example, suppose a country's GDP in 1990 was $100 million and its GDP in 2000 was $300 million. Suppose also that inflation had halved the value of its currency over that period. To meaningfully compare its GDP in 2000 to its GDP in 1990, we could multiply the GDP in 2000 by one-half, to make it relative to 1990 as a base year. The result would be that the GDP in 2000 equals $300 million × one-half = $150 million, in 1990 monetary terms. We would see that the country's GDP had realistically increased 50 percent over that period, not 200 percent, as it might appear from the raw GDP data. The GDP adjusted for changes in money value in this way is called the real, or constant, GDP.
The factor used to convert GDP from current to constant values in this way is called the GDP deflator. Unlike the consumer price index, which measures inflation or deflation in the price of household consumer goods, the GDP deflator measures changes in the prices of all domestically produced goods and services in an economy, including investment goods and government services, as well as household consumption goods.
Constant-GDP figures allow us to calculate a GDP growth rate, which indicates how much a country's production has increased (or decreased, if the growth rate is negative) compared to the previous year. (The sketch below reproduces this arithmetic.)
Real GDP growth rate for year n = [(Real GDP in year n) − (Real GDP in year n − 1)] / (Real GDP in year n − 1)
Another thing that it may be desirable to account for is population growth. If a country's GDP doubled over a certain period, but its population tripled, the increase in GDP may not mean that the standard of living increased for the country's residents; the average person in the country is producing less than they were before. Per-capita GDP is a measure to account for population growth.
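The worked example above, restated as a short calculation:

```python
# Deflating year-2000 GDP to 1990 money, per the example in the text.
nominal_1990 = 100.0   # $ millions, base year
nominal_2000 = 300.0   # $ millions, current prices
deflator_2000 = 2.0    # prices doubled, i.e. the currency's value halved

real_2000 = nominal_2000 / deflator_2000              # 150.0 in 1990 dollars
growth_1990_to_2000 = (real_2000 - nominal_1990) / nominal_1990
print(real_2000, f"{growth_1990_to_2000:.0%}")        # 150.0 50%
```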
The level of GDP in different countries may be compared by converting their value in national currency according to either the current currency exchange rate or the purchasing power parity exchange rate. The current currency exchange rate is the exchange rate in the international foreign exchange market. The purchasing power parity exchange rate is the exchange rate based on the purchasing power parity (PPP) of a currency relative to a selected standard (usually the United States dollar). This is a comparative (and theoretical) exchange rate; the only way to directly realize this rate is to sell an entire CPI basket in one country, convert the cash at the currency market rate and then rebuy that same basket of goods in the other country (with the converted cash). Going from country to country, the distribution of prices within the basket will vary; typically, non-tradable purchases will consume a greater proportion of the basket's total cost in the higher-GDP country, per the Balassa-Samuelson effect. The ranking of countries may differ significantly based on which method is used.
The current exchange rate method converts the value of goods and services using global currency exchange rates. The method can offer better indications of a country's international purchasing power. For instance, if 10% of GDP is being spent on buying hi-tech foreign arms, the number of weapons purchased is entirely governed by current exchange rates, since arms are a traded product bought on the international market. There is no meaningful 'local' price distinct from the international price for high technology goods. The PPP method of GDP conversion is more relevant to non-traded goods and services. In the above example, if hi-tech weapons are to be produced internally, their amount will be governed by GDP (PPP) rather than nominal GDP. There is a clear pattern of the purchasing power parity method decreasing the disparity in GDP between high and low income (GDP) countries, as compared to the current exchange rate method. This finding is called the Penn effect. For more information, see Measures of national income and output.
Simon Kuznets, the economist who developed the first comprehensive set of measures of national income, stated in his first report to the US Congress in 1934, in a section titled "Uses and Abuses of National Income Measurements":
The valuable capacity of the human mind to simplify a complex situation in a compact characterization becomes dangerous when not controlled in terms of definitely stated criteria. With quantitative measurements especially, the definiteness of the result suggests, often misleadingly, a precision and simplicity in the outlines of the object measured. Measurements of national income are subject to this type of illusion and resulting abuse, especially since they deal with matters that are the center of conflict of opposing social groups where the effectiveness of an argument is often contingent upon oversimplification. [...] All these qualifications upon estimates of national income as an index of productivity are just as important when income measurements are interpreted from the point of view of economic welfare. But in the latter case additional difficulties will be suggested to anyone who wants to penetrate below the surface of total figures and market values. Economic welfare cannot be adequately measured unless the personal distribution of income is known. And no income measurement undertakes to estimate the reverse side of income, that is, the intensity and unpleasantness of effort going into the earning of income. The welfare of a nation can, therefore, scarcely be inferred from a measurement of national income as defined above.
In 1962, Kuznets stated:
Distinctions must be kept in mind between quantity and quality of growth, between costs and returns, and between the short and long run. Goals for more growth should specify more growth of what and for what.
Proposals to overcome GDP limitations
In 1990, Mahbub ul Haq, a Pakistani economist at the United Nations, introduced the Human Development Index (HDI). The HDI is a composite index of life expectancy at birth, adult literacy rate and standard of living measured as a logarithmic function of GDP, adjusted to purchasing power parity. (A sketch of this arithmetic appears below.)
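The exact goalposts in the HDI have been revised several times by the UNDP; the sketch below uses the widely published pre-2010 form of the calculation, with hypothetical country figures, purely to illustrate the "composite of three indices" idea described above:

```python
import math

# Hypothetical country figures.
life_expectancy = 70.0   # years at birth
adult_literacy = 0.90    # share of adults literate
gross_enrolment = 0.75   # combined school enrolment ratio
gdp_per_capita = 8000.0  # PPP US$

# Pre-2010 HDI: each component scaled to [0, 1] against fixed goalposts,
# with the income component a logarithmic function of GDP per capita.
life_index = (life_expectancy - 25) / (85 - 25)
education_index = (2 / 3) * adult_literacy + (1 / 3) * gross_enrolment
gdp_index = (math.log(gdp_per_capita) - math.log(100)) / (math.log(40000) - math.log(100))

hdi = (life_index + education_index + gdp_index) / 3
print(round(hdi, 3))  # about 0.777 for these inputs
```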
In 1989, Cobb and Daly introduced the Index of Sustainable Economic Welfare (ISEW), which takes into account various other factors such as consumption of nonrenewable resources and degradation of the environment. The new formula combines the following components:
ISEW = personal consumption + public non-defensive expenditures − private defensive expenditures + capital formation + services from domestic labour − costs of environmental degradation − depreciation of natural capital
In 2005, Med Jones, an American economist at the International Institute of Management, introduced the first secular Gross National Happiness Index, a.k.a. the Gross National Well-being framework and Index, to complement GDP economics with seven additional dimensions, including environment, education, government, work, social and health (mental and physical) indicators. The proposal was inspired by the King of Bhutan's GNH philosophy.
In 2009, the European Union released a communication titled GDP and beyond: Measuring progress in a changing world, which identified five actions to improve the indicators of progress in ways that make them more responsive to the concerns of citizens, and introduced a proposal to complement GDP with environmental and social indicators.
In 2009, Professors Stiglitz, Sen and Fitoussi of the Commission on the Measurement of Economic Performance and Social Progress (CMEPSP), formed by French President Sarkozy, published a proposal to overcome the limitations of GDP economics by expanding the focus to well-being economics, with a well-being framework consisting of health, environment, work, physical safety, economic safety and political freedom.
In 2012, Karma Ura of the Centre for Bhutan Studies published the Bhutan Local GNH Index, covering the contributors to happiness: physical, mental and spiritual health; time-balance; social and community vitality; cultural vitality; education; living standards; good governance; and ecological vitality.
In 2013, the OECD published its Better Life Index. The dimensions of the index included health, economic conditions, workplace, income, jobs, housing, civic engagement and life satisfaction.
In 2013, Professors John Helliwell, Richard Layard and Jeffrey Sachs published the World Happiness Report and proposed measuring other well-being indicators in addition to GDP. The evaluation framework included GDP per capita, Gini (income inequality), life satisfaction, health, freedom of life choices, trust and absence of corruption.
The UK's Natural Capital Committee highlighted the shortcomings of GDP in its advice to the UK Government in 2013, pointing out that GDP "focuses on flows, not stocks. As a result, an economy can run down its assets yet, at the same time, record high levels of GDP growth, until a point is reached where the depleted assets act as a check on future growth". They then went on to say that "it is apparent that the recorded GDP growth rate overstates the sustainable growth rate. Broader measures of wellbeing and wealth are needed for this and there is a danger that short-term decisions based solely on what is currently measured by national accounts may prove to be costly in the long-term".
Many environmentalists argue that GDP is a poor measure of social progress because it does not take into account harm to the environment. Although a high or rising level of GDP is often associated with increased economic and social progress within a country, a number of scholars have pointed out that this does not necessarily play out in many instances. For example, Jean Drèze and Amartya Sen have pointed out that an increase in GDP or in GDP growth does not necessarily lead to a higher standard of living, particularly in areas such as healthcare and education.
Another important area that does not necessarily improve along with GDP is political liberty, which is most notable in China, where GDP growth is strong yet political liberties are heavily restricted. GDP does not account for the distribution of income among the residents of a country, because GDP is merely an aggregate measure. An economy may be highly developed or growing rapidly, but also contain a wide gap between the rich and the poor in a society. These inequalities often occur on the lines of race, ethnicity, gender, religion, or other minority status within countries. This can lead to misleading characterizations of economic well-being if the income distribution is heavily skewed toward the high end, as the poorer residents will not directly benefit from the overall level of wealth and income generated in their country. Even GDP per capita measures may have the same downside if inequality is high. For example, South Africa during apartheid ranked high in terms of GDP per capita, but the benefits of this immense wealth and income were not shared equally across the country.
GDP does not take into account the value of household and other unpaid work. Some, including Martha Nussbaum, argue that this value should be included in measuring GDP, as household labor is largely a substitute for goods and services that would otherwise be purchased for value. Even under conservative estimates, the value of unpaid labor in Australia has been calculated to be over 50% of the country's GDP. A later study analyzed this value in other countries, with results ranging from a low of about 15% in Canada (using conservative estimates) to a high of nearly 70% in the United Kingdom (using more liberal estimates). For the United States, the value was estimated to be between about 20% on the low end and nearly 50% on the high end, depending on the methodology used. Because many public policies are shaped by GDP calculations and by the related field of national accounts, the non-inclusion of unpaid work in calculating GDP can create distortions in public policy, and some economists have advocated for changes in the way public policies are formed and implemented.
In response to these and other limitations of using GDP as the overarching measure of economic and social progress, alternative approaches have emerged. One such alternative is the capability approach, which was developed in the 1980s and focuses on the functional capabilities enjoyed by people within a country, rather than the aggregate wealth held within a country. These capabilities consist of the functions that a person is able to achieve.
Lists of countries by GDP:
- List of countries by GDP (nominal), (per capita)
- List of continents by GDP (nominal)
- List of countries by GDP (PPP), (per capita), (per hour)
- List of countries by GDP (real) growth rate, (per capita)
- List of countries by GDP sector composition
- List of IMF ranked countries by past and projected GDP (PPP), (per capita), (nominal)
Measuring Angles Inside Shapes
Lesson 3 of 7
Objective: SWBAT find the sum of angles.
Today's Number Talk
For a detailed description of the Number Talk procedure, please refer to the Number Talk Explanation. For this Number Talk, I am encouraging students to represent their thinking using a number line model. For each task today, students shared their strategies with peers (sometimes within their group, sometimes with someone across the room). It was great to see students inspiring others to try new methods and it was equally great to see students examining each other's work for possible mistakes! Prior to the lesson, I placed magnetic money and fractions on the board to help students conceptualize our number talk today. I invited students to get a Student Number Line and Hundred Grids. I then drew a Number Line on the Board and marked 0, 1, and 2 on the line. I asked students to do the same on their own number lines.
Task #1: Add 1/4 + 0.6
To begin, I asked students to add 1/4 + 0.6 on their number lines and hundreds grids. During this time, some students chose to work alone while others worked with a partner in their math groups. I took this time to conference with students. Next, some students volunteered to explain their reasoning out loud while I modeled their thinking on the board: 1/4 + 0.6 Teacher Demonstration Number Line. Others watched carefully, checking their own number lines and hundreds grids to make sure they agreed with the thinking of other students. Here are a few examples of student work during this time:
Task #2: Add 1/10 + 1.5
Next, we moved on to adding 1/10 + 1.5. Most students converted 1.5 to 1 5/10 and then used their number lines to take a jump of 1 5/10 and then a jump of 1/10. Here is an example of a student number line and hundreds grid: Again, a few students explained how they solved this problem while I modeled their thinking on the board. Here's the end result of the last number talk task: 1/10 + 1.5 Teacher Demonstration Number Line.
You'll notice a list of patterns off to the right side of the picture listed above. One student had pointed out that 1 6/10 = 1 3/5, which is equal to 1.6 because 1.6 = 1 6/10. Then, we discussed how 3/5 is equivalent to 0.6. I asked a few students to grab a calculator to divide 3 by 5 (3/5). They got 0.6. I asked: Can anyone else think of an equivalent fraction to 6/10? I wonder if all fractions equivalent to 6/10 are also equal to the decimal number, 0.6? Students took turns providing equivalent fractions. The students with the calculators would then check the decimal equivalency by dividing the numerator by the denominator. Each time they got 0.6! This was a fun and exciting moment for students!
For today's lesson, I wanted to provide students with more practice using the protractor and with an opportunity to discover the pattern that all angles in a circle add up to 360 degrees. So I created shape puzzles for students to investigate. For each shape puzzle, I drew two lines intersecting in the middle of the shape. This resulted in four angles that would always have a sum of 360 degrees. After measuring and adding all four angles of several shapes, students began to realize that they always equal 360 degrees! To connect today's lesson with previous lessons, we began by singing our fun Angles Song. Next, we reflected upon the Complementary & Supplementary Angle Poster from yesterday.
Students added the following observations on the Supplementary Angles side: "One angle is acute and the other is obtuse," and, "They can also be two right angles." Then, I reminded students of our current goal: I can find the sum of angles. I pointed out the following shapes on the counter: Shape Puzzles. I wanted groups of 2-3 students to be able to choose one shape at a time from the counter to investigate, so I printed three copies of each shape to make sure students didn't have to wait on other groups to finish in order to continue their investigation. Using the Group Chart, I modeled how to record the figure in the first column. Using the pentagon, I then modeled how to measure each angle on the inside and showed students how to write an addition equation using the measurement of all the angles. I continued: Once you're done investigating all the figures on the back counter, write down your observations, and then it's your turn! Pointing to the rectangle at the bottom of the page, I explained: You get to split this rectangle up in a similar manner using a ruler or protractor. Students were ready to investigate!
Attending to Precision
For today's lesson in particular, students will be engaged in Math Practice 6: Attend to precision. All the angles inside each shape are supposed to add up to 360 degrees. When students are off by just a degree or two, they'll end up with sums close to 360, such as 362 or 359. With time, students will see the pattern and will realize how important it is to attend to precision in order to get the sum of 360 degrees exactly! Picking math partners is always easy as I already have students placed in desk groups based upon behavior, abilities, and communication skills.
Monitoring Student Understanding
While students were working, I conferenced with every group. My goal was to support students by providing them with the opportunity to explain their thinking and by asking guiding questions. I also wanted to encourage students to construct viable arguments by using evidence to support their thinking (Math Practice 3).
- What are you lining up? (encouraging vocabulary use)
- Can you explain what you are noticing?
- What do you think? Why do you think that?
- How do you know it's not ______? (non-example)
- Is this acute or obtuse? How do you know?
- What are you finding out about the sum of angles inside each figure?
- Can you explain your thinking to your partner to make sure he agrees?
- What is the sum of all the angles?
- What did you notice about the two angles across from each other?
Here, Examining Angles Closer, a student explains how the length of angle arms does not affect the angle measurement. I also like watching them use the protractor to measure two angles at once. Here, Connecting the Sum of Angles with 360 Degrees, two students explain why all the angles add up to 360 degrees. Here's an example of student work during this time: Example of Student Work.
After today's investigation, I invited students to join me on the front carpet to discuss observations. One student pointed out that the arms of an angle can keep going on forever and it doesn't change the angle measurement. I drew a picture to help other students understand this student's thinking: Comparing Angle Arm Sizes. I drew a picture of two intersecting lines and labeled the angles A, B, C, and D. I then asked: What did you notice when measuring the angles that result from two intersecting lines?
I labeled the drawing, Sharing Observations, as students shared the following observations:
- Angles A and C are congruent.
- Angles D and B are congruent too!
- Angles A and B have a sum of 180 degrees.
- Angles D and C have a sum of 180 degrees too!
- Altogether, A + B + C + D = 360 degrees.
I then asked: Based on your investigation today, do you think this is true with all intersecting lines? Most students said, "Yes!"
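The class's conjecture can also be checked symbolically. A short sketch (illustrative only, not part of the lesson materials): if one of the four angles measures a degrees, its neighbors measure 180 − a and the opposite angle measures a again, so the four always total 360.

```python
# Check: for two intersecting lines, opposite angles are congruent and
# the four angles always sum to 360 degrees.
def four_angles(a: float) -> list[float]:
    """Angles A, B, C, D formed at the intersection, given angle A = a."""
    return [a, 180 - a, a, 180 - a]  # A = C and B = D (vertical angles)

for a in (30.0, 75.0, 90.0, 123.4):
    angles = four_angles(a)
    assert abs(sum(angles) - 360.0) < 1e-9
    print(a, angles)
```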
Common Core State Standards CCSS.Math.Content.HSN.RN: The Real Number System CCSS.Math.Content.HSN.RN.A: Extend the properties of exponents to rational exponents. CCSS.Math.Content.HSN.RN.A.1: Explain how the definition of the meaning of rational exponents follows from extending the properties of integer exponents to those values, allowing for a notation for radicals in terms of rational exponents. CCSS.Math.Content.HSN.CN: The Complex Number System CCSS.Math.Content.HSN.CN.A: Perform arithmetic operations with complex numbers. CCSS.Math.Content.HSN.CN.A.1: Know there is a complex number 𝘪 such that 𝘪² = –1, and every complex number has the form 𝘢 + 𝘣𝘪 with 𝘢 and 𝘣 real. CCSS.Math.Content.HSN.CN.A.2: Use the relation 𝘪² = –1 and the commutative, associative, and distributive properties to add, subtract, and multiply complex numbers. CCSS.Math.Content.HSN.CN.A.3: Find the conjugate of a complex number; use conjugates to find moduli and quotients of complex numbers. CCSS.Math.Content.HSN.CN.B: Represent complex numbers and their operations on the complex plane. CCSS.Math.Content.HSN.CN.B.4: Represent complex numbers on the complex plane in rectangular and polar form (including real and imaginary numbers), and explain why the rectangular and polar forms of a given complex number represent the same number. CCSS.Math.Content.HSN.CN.C: Use complex numbers in polynomial identities and equations. CCSS.Math.Content.HSN.CN.C.7: Solve quadratic equations with real coefficients that have complex solutions. CCSS.Math.Content.HSN.VM: Vector and Matrix Quantities CCSS.Math.Content.HSN.VM.A: Represent and model with vector quantities. CCSS.Math.Content.HSN.VM.A.1: Recognize vector quantities as having both magnitude and direction. Represent vector quantities by directed line segments, and use appropriate symbols for vectors and their magnitudes (e.g., 𝙫, |𝙫|, ||𝙫||, 𝘷). CCSS.Math.Content.HSN.VM.A.2: Find the components of a vector by subtracting the coordinates of an initial point from the coordinates of a terminal point. CCSS.Math.Content.HSN.VM.A.3: Solve problems involving velocity and other quantities that can be represented by vectors. CCSS.Math.Content.HSN.VM.B: Perform operations on vectors. CCSS.Math.Content.HSN.VM.B.4a: Add vectors end-to-end, component-wise, and by the parallelogram rule. Understand that the magnitude of a sum of two vectors is typically not the sum of the magnitudes. CCSS.Math.Content.HSN.VM.B.4b: Given two vectors in magnitude and direction form, determine the magnitude and direction of their sum. CCSS.Math.Content.HSN.VM.B.5a: Represent scalar multiplication graphically by scaling vectors and possibly reversing their direction; perform scalar multiplication component-wise, e.g., as 𝘤(𝘷ₓ, 𝘷_𝘺) = (𝘤𝘷ₓ, 𝘤𝘷_𝘺). CCSS.Math.Content.HSA.SSE: Seeing Structure in Expressions CCSS.Math.Content.HSA.SSE.A: Interpret the structure of expressions CCSS.Math.Content.HSA.SSE.A.1a: Interpret parts of an expression, such as terms, factors, and coefficients. CCSS.Math.Content.HSA.SSE.A.1b: Interpret complicated expressions by viewing one or more of their parts as a single entity. CCSS.Math.Content.HSA.SSE.A.2: Use the structure of an expression to identify ways to rewrite it. CCSS.Math.Content.HSA.SSE.B: Write expressions in equivalent forms to solve problems CCSS.Math.Content.HSA.SSE.B.3a: Factor a quadratic expression to reveal the zeros of the function it defines.
CCSS.Math.Content.HSA.SSE.B.3c: Use the properties of exponents to transform expressions for exponential functions. CCSS.Math.Content.HSA.APR: Arithmetic with Polynomials and Rational Expressions CCSS.Math.Content.HSA.APR.A: Perform arithmetic operations on polynomials CCSS.Math.Content.HSA.APR.A.1: Understand that polynomials form a system analogous to the integers, namely, they are closed under the operations of addition, subtraction, and multiplication; add, subtract, and multiply polynomials. CCSS.Math.Content.HSA.APR.B: Understand the relationship between zeros and factors of polynomials CCSS.Math.Content.HSA.APR.B.2: Know and apply the Remainder Theorem: For a polynomial 𝘱(𝘹) and a number 𝘢, the remainder on division by 𝘹 – 𝘢 is 𝘱(𝘢), so 𝘱(𝘢) = 0 if and only if (𝘹 – 𝘢) is a factor of 𝘱(𝘹). CCSS.Math.Content.HSA.APR.B.3: Identify zeros of polynomials when suitable factorizations are available, and use the zeros to construct a rough graph of the function defined by the polynomial. CCSS.Math.Content.HSA.APR.C: Use polynomial identities to solve problems CCSS.Math.Content.HSA.APR.C.5: Know and apply the Binomial Theorem for the expansion of (𝘹 + 𝘺)ⁿ in powers of 𝘹 and y for a positive integer 𝘯, where 𝘹 and 𝘺 are any numbers, with coefficients determined for example by Pascal’s Triangle. CCSS.Math.Content.HSA.CED: Creating Equations CCSS.Math.Content.HSA.CED.A: Create equations that describe numbers or relationships CCSS.Math.Content.HSA.CED.A.1: Create equations and inequalities in one variable and use them to solve problems. CCSS.Math.Content.HSA.CED.A.2: Create equations in two or more variables to represent relationships between quantities; graph equations on coordinate axes with labels and scales. CCSS.Math.Content.HSA.CED.A.3: Represent constraints by equations or inequalities, and by systems of equations and/or inequalities, and interpret solutions as viable or non-viable options in a modeling context. CCSS.Math.Content.HSA.CED.A.4: Rearrange formulas to highlight a quantity of interest, using the same reasoning as in solving equations. CCSS.Math.Content.HSA.REI: Reasoning with Equations and Inequalities CCSS.Math.Content.HSA.REI.A: Understand solving equations as a process of reasoning and explain the reasoning CCSS.Math.Content.HSA.REI.A.1: Explain each step in solving a simple equation as following from the equality of numbers asserted at the previous step, starting from the assumption that the original equation has a solution. Construct a viable argument to justify a solution method. CCSS.Math.Content.HSA.REI.A.2: Solve simple rational and radical equations in one variable, and give examples showing how extraneous solutions may arise. CCSS.Math.Content.HSA.REI.B: Solve equations and inequalities in one variable CCSS.Math.Content.HSA.REI.B.3: Solve linear equations and inequalities in one variable, including equations with coefficients represented by letters. CCSS.Math.Content.HSA.REI.B.4b: Solve quadratic equations by inspection (e.g., for 𝘹² = 49), taking square roots, completing the square, the quadratic formula and factoring, as appropriate to the initial form of the equation. Recognize when the quadratic formula gives complex solutions and write them as 𝘢 ± 𝘣𝘪 for real numbers 𝘢 and 𝘣. CCSS.Math.Content.HSA.REI.C: Solve systems of equations CCSS.Math.Content.HSA.REI.C.5: Prove that, given a system of two equations in two variables, replacing one equation by the sum of that equation and a multiple of the other produces a system with the same solutions. 
CCSS.Math.Content.HSA.REI.C.6: Solve systems of linear equations exactly and approximately (e.g., with graphs), focusing on pairs of linear equations in two variables. CCSS.Math.Content.HSA.REI.C.8: Represent a system of linear equations as a single matrix equation in a vector variable. CCSS.Math.Content.HSA.REI.D: Represent and solve equations and inequalities graphically CCSS.Math.Content.HSA.REI.D.10: Understand that the graph of an equation in two variables is the set of all its solutions plotted in the coordinate plane, often forming a curve (which could be a line). CCSS.Math.Content.HSA.REI.D.11: Explain why the 𝘹-coordinates of the points where the graphs of the equations 𝘺 = 𝘧(𝘹) and 𝘺 = 𝑔(𝘹) intersect are the solutions of the equation 𝘧(𝘹) = 𝑔(𝘹); find the solutions approximately, e.g., using technology to graph the functions, make tables of values, or find successive approximations. Include cases where 𝘧(𝘹) and/or 𝑔(𝘹) are linear, polynomial, rational, absolute value, exponential, and logarithmic functions. CCSS.Math.Content.HSA.REI.D.12: Graph the solutions to a linear inequality in two variables as a half-plane (excluding the boundary in the case of a strict inequality), and graph the solution set to a system of linear inequalities in two variables as the intersection of the corresponding half-planes. CCSS.Math.Content.HSF.IF: Interpreting Functions CCSS.Math.Content.HSF.IF.A: Understand the concept of a function and use function notation CCSS.Math.Content.HSF.IF.A.1: Understand that a function from one set (called the domain) to another set (called the range) assigns to each element of the domain exactly one element of the range. If 𝘧 is a function and 𝘹 is an element of its domain, then 𝘧(𝘹) denotes the output of 𝘧 corresponding to the input 𝘹. The graph of 𝘧 is the graph of the equation 𝘺 = 𝘧(𝘹). CCSS.Math.Content.HSF.IF.A.2: Use function notation, evaluate functions for inputs in their domains, and interpret statements that use function notation in terms of a context. CCSS.Math.Content.HSF.IF.B: Interpret functions that arise in applications in terms of the context CCSS.Math.Content.HSF.IF.B.4: For a function that models a relationship between two quantities, interpret key features of graphs and tables in terms of the quantities, and sketch graphs showing key features given a verbal description of the relationship. CCSS.Math.Content.HSF.IF.B.5: Relate the domain of a function to its graph and, where applicable, to the quantitative relationship it describes. CCSS.Math.Content.HSF.IF.B.6: Calculate and interpret the average rate of change of a function (presented symbolically or as a table) over a specified interval. Estimate the rate of change from a graph. CCSS.Math.Content.HSF.IF.C: Analyze functions using different representations CCSS.Math.Content.HSF.IF.C.7a: Graph linear and quadratic functions and show intercepts, maxima, and minima. CCSS.Math.Content.HSF.IF.C.7b: Graph square root, cube root, and piecewise-defined functions, including step functions and absolute value functions. CCSS.Math.Content.HSF.IF.C.7c: Graph polynomial functions, identifying zeros when suitable factorizations are available, and showing end behavior. CCSS.Math.Content.HSF.IF.C.7d: Graph rational functions, identifying zeros and asymptotes when suitable factorizations are available, and showing end behavior. CCSS.Math.Content.HSF.IF.C.7e: Graph exponential and logarithmic functions, showing intercepts and end behavior, and trigonometric functions, showing period, midline, and amplitude. 
CCSS.Math.Content.HSF.IF.C.8a: Use the process of factoring and completing the square in a quadratic function to show zeros, extreme values, and symmetry of the graph, and interpret these in terms of a context. CCSS.Math.Content.HSF.IF.C.8b: Use the properties of exponents to interpret expressions for exponential functions. CCSS.Math.Content.HSF.BF: Building Functions CCSS.Math.Content.HSF.BF.A: Build a function that models a relationship between two quantities CCSS.Math.Content.HSF.BF.A.1a: Determine an explicit expression, a recursive process, or steps for calculation from a context. CCSS.Math.Content.HSF.BF.A.2: Write arithmetic and geometric sequences both recursively and with an explicit formula, use them to model situations, and translate between the two forms. CCSS.Math.Content.HSF.BF.B: Build new functions from existing functions CCSS.Math.Content.HSF.BF.B.3: Identify the effect on the graph of replacing 𝘧(𝘹) by 𝘧(𝘹) + 𝘬, 𝘬 𝘧(𝘹), 𝘧(𝘬𝘹), and 𝘧(𝘹 + 𝘬) for specific values of 𝘬 (both positive and negative); find the value of 𝘬 given the graphs. Experiment with cases and illustrate an explanation of the effects on the graph using technology. CCSS.Math.Content.HSF.BF.B.4b: Verify by composition that one function is the inverse of another. CCSS.Math.Content.HSF.BF.B.4c: Read values of an inverse function from a graph or a table, given that the function has an inverse. CCSS.Math.Content.HSF.BF.B.5: Understand the inverse relationship between exponents and logarithms and use this relationship to solve problems involving logarithms and exponents. CCSS.Math.Content.HSF.LE: Linear, Quadratic, and Exponential Models CCSS.Math.Content.HSF.LE.A: Construct and compare linear, quadratic, and exponential models and solve problems CCSS.Math.Content.HSF.LE.A.1a: Prove that linear functions grow by equal differences over equal intervals, and that exponential functions grow by equal factors over equal intervals. CCSS.Math.Content.HSF.LE.A.1b: Recognize situations in which one quantity changes at a constant rate per unit interval relative to another. CCSS.Math.Content.HSF.LE.A.1c: Recognize situations in which a quantity grows or decays by a constant percent rate per unit interval relative to another. CCSS.Math.Content.HSF.LE.A.2: Construct linear and exponential functions, including arithmetic and geometric sequences, given a graph, a description of a relationship, or two input-output pairs (include reading these from a table). CCSS.Math.Content.HSF.LE.A.4: For exponential models, express as a logarithm the solution to 𝘢𝘣 to the 𝘤𝘵 power = 𝘥 where 𝘢, 𝘤, and 𝘥 are numbers and the base 𝘣 is 2, 10, or 𝘦; evaluate the logarithm using technology. CCSS.Math.Content.HSF.LE.B: Interpret expressions for functions in terms of the situation they model CCSS.Math.Content.HSF.LE.B.5: Interpret the parameters in a linear or exponential function in terms of a context. CCSS.Math.Content.HSF.TF: Trigonometric Functions CCSS.Math.Content.HSF.TF.B: Model periodic phenomena with trigonometric functions CCSS.Math.Content.HSF.TF.B.5: Choose trigonometric functions to model periodic phenomena with specified amplitude, frequency, and midline. CCSS.Math.Content.HSF.TF.C: Prove and apply trigonometric identities CCSS.Math.Content.HSF.TF.C.9: Prove the addition and subtraction formulas for sine, cosine, and tangent and use them to solve problems. 
CCSS.Math.Content.HSG.CO.A: Experiment with transformations in the plane CCSS.Math.Content.HSG.CO.A.1: Know precise definitions of angle, circle, perpendicular line, parallel line, and line segment, based on the undefined notions of point, line, distance along a line, and distance around a circular arc. CCSS.Math.Content.HSG.CO.A.2: Represent transformations in the plane using, e.g., transparencies and geometry software; describe transformations as functions that take points in the plane as inputs and give other points as outputs. Compare transformations that preserve distance and angle to those that do not (e.g., translation versus horizontal stretch). CCSS.Math.Content.HSG.CO.A.4: Develop definitions of rotations, reflections, and translations in terms of angles, circles, perpendicular lines, parallel lines, and line segments. CCSS.Math.Content.HSG.CO.A.5: Given a geometric figure and a rotation, reflection, or translation, draw the transformed figure using, e.g., graph paper, tracing paper, or geometry software. Specify a sequence of transformations that will carry a given figure onto another. CCSS.Math.Content.HSG.CO.B: Understand congruence in terms of rigid motions CCSS.Math.Content.HSG.CO.B.6: Use geometric descriptions of rigid motions to transform figures and to predict the effect of a given rigid motion on a given figure; given two figures, use the definition of congruence in terms of rigid motions to decide if they are congruent. CCSS.Math.Content.HSG.CO.B.8: Explain how the criteria for triangle congruence (ASA, SAS, and SSS) follow from the definition of congruence in terms of rigid motions. CCSS.Math.Content.HSG.CO.C: Prove geometric theorems CCSS.Math.Content.HSG.CO.C.9: Prove theorems about lines and angles. CCSS.Math.Content.HSG.CO.C.10: Prove theorems about triangles. CCSS.Math.Content.HSG.CO.C.11: Prove theorems about parallelograms. CCSS.Math.Content.HSG.CO.D: Make geometric constructions CCSS.Math.Content.HSG.CO.D.12: Make formal geometric constructions with a variety of tools and methods (compass and straightedge, string, reflective devices, paper folding, dynamic geometric software, etc.). CCSS.Math.Content.HSG.SRT: Similarity, Right Triangles, and Trigonometry CCSS.Math.Content.HSG.SRT.A: Understand similarity in terms of similarity transformations CCSS.Math.Content.HSG.SRT.A.1b: The dilation of a line segment is longer or shorter in the ratio given by the scale factor. CCSS.Math.Content.HSG.SRT.A.2: Given two figures, use the definition of similarity in terms of similarity transformations to decide if they are similar; explain using similarity transformations the meaning of similarity for triangles as the equality of all corresponding pairs of angles and the proportionality of all corresponding pairs of sides. CCSS.Math.Content.HSG.SRT.B: Prove theorems involving similarity CCSS.Math.Content.HSG.SRT.B.4: Prove theorems about triangles. CCSS.Math.Content.HSG.SRT.B.5: Use congruence and similarity criteria for triangles to solve problems and to prove relationships in geometric figures. CCSS.Math.Content.HSG.SRT.C: Define trigonometric ratios and solve problems involving right triangles CCSS.Math.Content.HSG.SRT.C.6: Understand that by similarity, side ratios in right triangles are properties of the angles in the triangle, leading to definitions of trigonometric ratios for acute angles. CCSS.Math.Content.HSG.SRT.C.8: Use trigonometric ratios and the Pythagorean Theorem to solve right triangles in applied problems. 
CCSS.Math.Content.HSG.C.A: Understand and apply theorems about circles CCSS.Math.Content.HSG.C.A.2: Identify and describe relationships among inscribed angles, radii, and chords. CCSS.Math.Content.HSG.C.B: Find arc lengths and areas of sectors of circles CCSS.Math.Content.HSG.C.B.5: Derive using similarity the fact that the length of the arc intercepted by an angle is proportional to the radius, and define the radian measure of the angle as the constant of proportionality; derive the formula for the area of a sector. CCSS.Math.Content.HSG.GPE: Expressing Geometric Properties with Equations CCSS.Math.Content.HSG.GPE.A: Translate between the geometric description and the equation for a conic section CCSS.Math.Content.HSG.GPE.A.1: Derive the equation of a circle of given center and radius using the Pythagorean Theorem; complete the square to find the center and radius of a circle given by an equation. CCSS.Math.Content.HSG.GPE.A.2: Derive the equation of a parabola given a focus and directrix. CCSS.Math.Content.HSG.GPE.A.3: Derive the equations of ellipses and hyperbolas given the foci, using the fact that the sum or difference of distances from the foci is constant. CCSS.Math.Content.HSG.GPE.B: Use coordinates to prove simple geometric theorems algebraically CCSS.Math.Content.HSG.GPE.B.7: Use coordinates to compute perimeters of polygons and areas of triangles and rectangles, e.g., using the distance formula. CCSS.Math.Content.HSG.GMD: Geometric Measurement and Dimension CCSS.Math.Content.HSG.GMD.A: Explain volume formulas and use them to solve problems CCSS.Math.Content.HSG.GMD.A.1: Give an informal argument for the formulas for the circumference of a circle, area of a circle, volume of a cylinder, pyramid, and cone. CCSS.Math.Content.HSG.GMD.A.3: Use volume formulas for cylinders, pyramids, cones, and spheres to solve problems. CCSS.Math.Content.HSS.ID: Interpreting Categorical and Quantitative Data CCSS.Math.Content.HSS.ID.A: Summarize, represent, and interpret data on a single count or measurement variable CCSS.Math.Content.HSS.ID.A.1: Represent data with plots on the real number line (dot plots, histograms, and box plots). CCSS.Math.Content.HSS.ID.A.2: Use statistics appropriate to the shape of the data distribution to compare center (median, mean) and spread (interquartile range, standard deviation) of two or more different data sets. CCSS.Math.Content.HSS.ID.A.3: Interpret differences in shape, center, and spread in the context of the data sets, accounting for possible effects of extreme data points (outliers). CCSS.Math.Content.HSS.ID.B: Summarize, represent, and interpret data on two categorical and quantitative variables CCSS.Math.Content.HSS.ID.B.6a: Fit a function to the data; use functions fitted to data to solve problems in the context of the data. CCSS.Math.Content.HSS.ID.B.6b: Informally assess the fit of a function by plotting and analyzing residuals. CCSS.Math.Content.HSS.ID.B.6c: Fit a linear function for a scatter plot that suggests a linear association. CCSS.Math.Content.HSS.ID.C: Interpret linear models CCSS.Math.Content.HSS.ID.C.7: Interpret the slope (rate of change) and the intercept (constant term) of a linear model in the context of the data. CCSS.Math.Content.HSS.ID.C.8: Compute (using technology) and interpret the correlation coefficient of a linear fit. 
CCSS.Math.Content.HSS.IC: Making Inferences and Justifying Conclusions CCSS.Math.Content.HSS.IC.B: Make inferences and justify conclusions from sample surveys, experiments, and observational studies CCSS.Math.Content.HSS.IC.B.4: Use data from a sample survey to estimate a population mean or proportion; develop a margin of error through the use of simulation models for random sampling. CCSS.Math.Content.HSS.IC.B.5: Use data from a randomized experiment to compare two treatments; use simulations to decide if differences between parameters are significant. CCSS.Math.Content.HSS.CP: Conditional Probability and the Rules of Probability CCSS.Math.Content.HSS.CP.A: Understand independence and conditional probability and use them to interpret data CCSS.Math.Content.HSS.CP.A.1: Describe events as subsets of a sample space (the set of outcomes) using characteristics (or categories) of the outcomes, or as unions, intersections, or complements of other events (“or,” “and,” “not”). CCSS.Math.Content.HSS.CP.A.2: Understand that two events 𝘈 and 𝘉 are independent if the probability of 𝘈 and 𝘉 occurring together is the product of their probabilities, and use this characterization to determine if they are independent. CCSS.Math.Content.HSS.CP.A.3: Understand the conditional probability of 𝘈 given 𝘉 as 𝘗(𝘈 and 𝘉)/𝘗(𝘉), and interpret independence of 𝘈 and 𝘉 as saying that the conditional probability of 𝘈 given 𝘉 is the same as the probability of 𝘈, and the conditional probability of 𝘉 given 𝘈 is the same as the probability of 𝘉. CCSS.Math.Content.HSS.CP.B: Use the rules of probability to compute probabilities of compound events in a uniform probability model CCSS.Math.Content.HSS.CP.B.9: Use permutations and combinations to compute probabilities of compound events and solve problems. CCSS.Math.Content.HSS.MD: Using Probability to Make Decisions CCSS.Math.Content.HSS.MD.A: Calculate expected values and use them to solve problems CCSS.Math.Content.HSS.MD.A.3: Develop a probability distribution for a random variable defined for a sample space in which theoretical probabilities can be calculated; find the expected value. CCSS.Math.Content.HSS.MD.A.4: Develop a probability distribution for a random variable defined for a sample space in which probabilities are assigned empirically; find the expected value.
Nucleic acids are macromolecules (polymers) composed of many small units called nucleotides. Nucleic acid = many nucleotides. Each nucleotide consists of phosphoric acid (phosphate), a five-carbon sugar and a nitrogenous base. Nucleotide = phosphoric acid + sugar + nitrogenous base. The sugar and base combination (without phosphoric acid) is called a nucleoside: sugar + nitrogenous base = nucleoside, and nucleoside + phosphoric acid = nucleotide. As such, nucleotides are phosphoric esters of nucleosides.
I. Phosphoric acid: The acidic nature of nucleic acids is due to the presence of phosphoric acid. The sugar of the nucleoside combines with phosphoric acid by a phosphodiester bond.
II. Sugar: It is a five-carbon (pentose) sugar. There are two types of sugars: ribose and deoxyribose. The nucleic acid containing ribose sugar is called ribose nucleic acid (RNA) and the other, with deoxyribose sugar, is called deoxyribose nucleic acid (DNA).
III. Nitrogenous bases: Each nucleic acid has four nitrogenous bases: two purines and two pyrimidines. The purine bases are adenine and guanine and the pyrimidine bases are thymine and cytosine. In RNA, uracil (a pyrimidine) is present in place of thymine.
The full form of DNA: Deoxyribose Nucleic Acid. DNA is a double-stranded helix made of many nucleotides. The nucleotides consist of deoxyribose sugar, phosphoric acid, purine bases (adenine and guanine) and pyrimidine bases (cytosine and thymine). The arrangement of these substances in the DNA molecule had long been a subject of curiosity. It was largely the X-ray diffraction studies of Wilkins that provided the basis for proposing the double helical structure of DNA. Watson and Crick (1953) finally described the structure of DNA and were awarded the Nobel Prize in 1962, along with Wilkins. Following are some of the characteristic features of this model.
1. Each nucleotide consists of sugar, phosphate and a nitrogenous base. Many such nucleotides are linked to form a polynucleotide chain or strand.
2. The adjacent nucleotides of the same strand are joined through a phosphate that forms one phosphodiester bond with the 5-carbon of the sugar of one nucleotide and another phosphodiester bond with the 3-carbon of the sugar of the next nucleotide.
3. The nitrogenous base is attached at the 1-carbon of the sugar. At this place a purine is attached by its 9th position (N9) and a pyrimidine by its 1st position (N1).
4. Thus a polynucleotide strand consists of sugar and phosphate forming its long axis.
5. The two polynucleotide strands are complementary to one another. If one strand has adenine, the other strand has thymine opposite to it. Similarly, guanine and cytosine form the other complementary base pair. Thus, if the base sequence of one strand is CAT TAG GAC, the base sequence of the other strand will be GTA ATC CTG. (A small sketch after this list turns the rule into code.)
6. The two strands are joined to one another by hydrogen bonds between their complementary nitrogenous bases. There are two hydrogen bonds between adenine and thymine and three hydrogen bonds between cytosine and guanine.
7. The two polynucleotide strands are helically coiled around a common axis to form the DNA molecule. The two strands are antiparallel, i.e., they run in opposite directions; the sugar molecules in one show the 5'-P to 3'-OH direction while the other shows the 3'-OH to 5'-P direction.
8. The helical coiling of the double strand is right-handed. This DNA is called B-DNA.
9. The double-stranded DNA molecule has a diameter of 20 Å, i.e., the distance between the two polynucleotide strands is 20 Å.
10. The helix makes one complete turn every 34 Å along its length.
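Rule 5 can be stated as a simple base-substitution procedure. A minimal sketch (the function name and layout are our own, purely illustrative): replace each base by its partner, A with T, T with A, G with C and C with G, to read off the complementary strand.

```python
# Deriving the complementary DNA strand, as in rule 5 above.
COMPLEMENT = {"A": "T", "T": "A", "G": "C", "C": "G"}

def complement_strand(sequence: str) -> str:
    """Return the base-by-base complement; spaces are kept so that
    triplet groupings in the printed example stay readable."""
    return "".join(COMPLEMENT.get(base, base) for base in sequence.upper())

print(complement_strand("CAT TAG GAC"))  # -> GTA ATC CTG
```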
RNA: Types and Structure: RNA or Ribose Nucleic Acid is present in all living cells. It is found in the cytoplasm as well as the nucleus.
I. Types of RNA: RNA is generally involved in protein synthesis, but in some viruses it also serves as the genetic material. The following are the types of RNA.
1. Genetic RNA: H. Fraenkel-Conrat (1957) showed that the RNA present in TMV (Tobacco Mosaic Virus) acts as a genetic material. Since then it is known to be the genetic material of most of the plant viruses and some bacteriophages.
2. Non-genetic RNA: This type of RNA is present in those cells where DNA is the genetic material. Non-genetic RNA is synthesized on a DNA template. It is of the following three types.
(a) Messenger RNA (mRNA): It carries the genetic information present in the DNA. It forms about 5-10% of the total RNA present in the cell.
(b) Transfer RNA (tRNA): It is also known as soluble RNA (sRNA). These are the smallest molecules, which carry amino acids to the site of protein synthesis. It forms about 10-15% of the total cell RNA.
(c) Ribosomal RNA (rRNA): It is the most stable type of RNA and is found associated with ribosomes. It forms about 80% of the total cell RNA.
II. Structure of RNA: RNA is generally single stranded and is made of a chain of polynucleotides. The single strand is folded in such a way that the chain formed by sugar and phosphate is external while the complementary nitrogenous bases project inside and are joined by hydrogen bonds as in DNA. The differences between DNA and RNA are given in Table 1.
Table 1. DNA and RNA: A Comparison of their Structure, Reactions and Role in the Cell
1. Occurrence: DNA primarily in the nucleus, also in mitochondria and chloroplasts; RNA in the cytoplasm, nucleolus and chromosomes.
2. Pyrimidine bases: DNA has cytosine and thymine; RNA has cytosine and uracil.
3. Purine bases: both have adenine and guanine.
4. Pentose sugar: DNA has deoxyribose; RNA has ribose.
5. Strands: DNA is mostly double stranded; RNA is mostly single stranded.
6. Cytochemical reaction: DNA gives the Feulgen reaction; RNA stains with basophilic dyes and is identified by ribonuclease treatment.
7. Hydrolysing enzyme: deoxyribonuclease (DNase) for DNA; ribonuclease (RNase) for RNA.
8. Role in the cell: DNA always carries the genetic information and is self-replicating; RNA serves in protein synthesis, is sometimes genetic, is formed from DNA, and self-replicates only in some viruses.
Alleles and Genes: An alternative form of a gene is known as an allele. Main features of an allele:
1. Alleles govern the same character of an individual.
2. A haploid cell has a single copy of an allele, a diploid two, and a polyploid more than two for a character.
3. An individual may have identical alleles.
4. They may be of dominant and recessive type.
Tests for allelism: 1. Recombination test 2. Complementation test.
Main features of multiple alleles:
1. Always belong to the same locus.
2. Control the same characters.
3. No crossing over in multiple allelism.
4. Wild type is always dominant.
5. Don't show complementation.
Examples of multiple alleles: 1. Fur colour in rabbits. 2. Wing type in Drosophila. 3. Eye colour in Drosophila. 4. ABO blood group in man.
Pseudoalleles: closely linked and functionally related genes.
1. Govern different expressions of the same character.
2. Pseudoalleles occupy different positions on the same locus.
3. Low frequency of recombination by crossing over.
4. They exhibit the cis-trans position effect.
Isoallele: An allele which is similar in its phenotypic expression to that of another independently occurring allele is known as an isoallele. Types of isoalleles: 1. Mutant isoalleles. 2. Normal isoalleles.
Gene: the smallest and individually functional part of the genetic material.
Properties of a gene: 1. Form (alternative form: allele). 2. Location (chromosome, linear, locus). 3. Status (several units). 4. Number (each diploid). 5.
Sequence (specific sequence). 6. Expression: incomplete or complete. 7. Change in form: mutation. 8. Exchange of gene: translocation. 9. Composition: DNA (RNA in some bacteriophages).
Functions of a gene: 1. Control the expression of a specific character in an organism. 2. Based on gene control over characters: oligogenic traits, polygenic traits and pleiotropic genes. 3. Gene interaction (when two or more genes govern a character). 4. Linkage: two or more genes are inherited together.
Modern Concept of Gene:
1. Earlier ideas: i. Mendel's "factor" for the gene, responsible for transmission of characters from parents to their offspring. ii. The Sutton and Boveri hypothesis: the chromosomal theory of inheritance. iii. Morgan's linkage studies gave the idea that genes are located on the chromosome in a linear fashion.
2. Modern concepts:
(a) The gene is divisible: it was earlier believed that the gene is a basic unit of structure, indivisible by crossing over; it is now observed that the gene is divisible, based on studies of intragenic recombination, such as the bar eye gene of Drosophila.
(b) Parts of a gene can function: 1. Recon: the unit of recombination. 2. Muton: the smallest element of a gene which can give rise to a mutant. 3. Cistron: the unit of function of a gene.
Concepts of gene function: (a) Earlier concept: a gene is a sequence of nucleotides in DNA which controls a single polypeptide chain. (b) Modern concept: (i) fine structure of the gene (Benzer); (ii) split genes.
Split genes: The sequence of nucleotides may be interrupted by intervening sequences; such genes with an interrupted sequence of nucleotides are referred to as split genes or interrupted genes. They have two types of sequences: (i) normal sequences and (ii) interrupted sequences.
(c) Jumping genes: A gene that keeps on changing its position within a chromosome and also between the chromosomes of the same genome. Such genes are known as jumping genes, transposons or transposable elements. The transposable elements are of two types: (i) insertion sequences and (ii) transposons.
(d) Overlapping genes: Some nucleotide sequences (genes) can code for two or more proteins. The genes which code for more than one protein are known as overlapping genes.
(e) Pseudogenes: There are some DNA sequences, especially in eukaryotes, which are non-functional and defective copies of normal genes. Such DNA sequences are known as pseudogenes.
Applications of Genetics: 1. To study evolution, classification and identification. 2. Improvement of crop plants: yield, resistance to insects, disease, salinity, drought, frost and lodging, and adaptability. 3. In medicine: (i) detection of hereditary disease; (ii) production of antibiotics. Natural and artificial selection have been responsible for the evolution of various crop plants, along with (i) polyploidy and (ii) mutation.
1. Chemical Nature: Properties of the genetic material:
1. The genetic material must be replicated with high fidelity. 2. The genetic material must be able to express itself. 3. The genetic material must be able to store highly variable information. 4. The distribution of the genetic material must allow errors at a low frequency for the origin of new genetic variation.
Experiments of Griffith (1928): Griffith studied Diplococcus pneumoniae. Different strains of Diplococcus form two types of colonies. Smooth (S) colonies are enclosed in a polysaccharide capsule; such strains are able to produce pneumonia and are virulent. Rough (R) colonies lack the polysaccharide capsule; such strains are avirulent and cannot produce pneumonia. Virulent strains are classified into several types, e.g., II, III, etc., on the basis of the antigenic properties of the polysaccharides present in their capsules.
Griffith's results with mice were:
(a) Live IIIS (virulent) cells injected: the mice died (due to pneumonia).
(b) Heat-killed IIIS cells injected: the mice lived, indicating that all the cells had been killed by the heat treatment.
(c) Live IIR (avirulent) cells injected: the mice lived.
(d) A mixture of heat-killed IIIS and live IIR cells injected: the mice died (pneumonia). Since all the cells of the heat-killed IIIS culture were dead, it was postulated that some of the IIR cells had changed into the IIIS type under the influence of the dead IIIS cells present in the mixture.

This phenomenon was called transformation, and the component of the IIIS cells which induced the conversion of IIR cells to IIIS was named the transforming principle. Griffith demonstrated transformation, but he did not hint at the identity of the transforming principle. It is now known that transformation is essentially a special type of recombination in which a segment of the transforming DNA replaces the homologous segment of the bacterial chromosome.

Experiments of Avery, MacLeod and McCarty (1944): Avery and associates carried out Griffith's experiments in vitro (in test tubes).
A culture of live IIR cells alone gave rough colonies; heat-killed IIIS cells alone, or DNA isolated from IIIS cells alone, gave no colonies.
Live IIR + heat-killed IIIS + anti-IIR antibody gave IIIS colonies.
Live IIR + IIIS DNA + anti-IIR gave IIIS colonies.
(The anti-IIR serum was used for inactivating the IIR cells.) These findings show that DNA is the transforming principle.

In order to establish beyond any doubt that DNA is the transforming principle, they treated IIIS DNA with RNase (the enzyme which digests RNA) or with proteases (enzymes which degrade proteins) before it was mixed with IIR cells. In both experiments some IIIS-type colonies were still obtained. This showed that the RNA and proteins present as impurities in the IIIS DNA preparations were not responsible for transformation. Next they treated IIIS DNA with DNase (the enzyme which digests DNA) before it was mixed with IIR cells; this time no IIIS colonies appeared. This established beyond any doubt that DNA is the transforming principle.

Experiments of Hershey and Chase (1952): These experiments led to the universal acceptance of DNA as the genetic material. Hershey and Chase studied the life cycle of the T2 phage of E. coli and showed that only the DNA component of the T2 particle is transmitted to the progeny. T2 has a hexagonal head and a contractile tail. The head coat and tail are made of protein, while the DNA is packed inside the head coat. Infection begins when the tail plate of a T2 particle comes in contact with the cell wall of an E. coli cell.

They labelled T2 DNA and protein in two separate experiments. DNA contains phosphorus (P) but no sulphur (S); protein contains S but no P. To obtain 32P- or 35S-labelled phage particles, E. coli cells were grown for several generations on a culture medium containing 32P or 35S and then infected with T2; the progeny phage particles thus obtained were labelled with either 32P or 35S.

In one experiment they mixed 32P-labelled T2 particles with E. coli cells. The cells were then agitated in a blender to separate the empty phage particles (called ghosts) remaining outside the bacterial cells after infection. Most of the 32P label was present inside the infected E. coli cells, and the progeny phage particles obtained after lysis of these E. coli cells also contained the 32P label. From this we learn that DNA is transmitted from one generation to the next.

In the other experiment they used 35S-labelled T2 particles. The amount of 35S recovered in the progeny phage obtained through lysis of the infected E. coli cells was almost negligible. From this we learn that proteins are not transmitted from one generation to the next.
RNA as Genetic Material (1957): TMV lacks DNA; these viruses are composed of RNA and protein. The proteins and RNA of TMV can be separated, and when they are remixed they reassociate to produce complete TMV. In one experiment, either the RNA or the proteins isolated from TMV were used for the infection of tobacco leaves. Mosaic symptoms developed only when RNA was used for infection, and not when the proteins were used.

Fraenkel-Conrat and co-workers constructed two types of hybrid virus particles by mixing (1) RNA from strain A with proteins from strain B, and (2) proteins from strain A with RNA from strain B. When tobacco leaves were infected with hybrid TMV of the first type, the disease symptoms of strain A developed, and the proteins of the progeny virus were also identical with those of strain A. Similarly, when hybrid particles of the second type were used for infection of tobacco leaves, the symptoms of strain B developed. It is evident from these findings that the RNA (and not the protein) of TMV has the capacity to produce the disease, and that the type of protein present in the virus particle is determined by the RNA.

Chromosomes: The darkly stained, rod-shaped bodies visible under the light microscope in a cell during the metaphase stage of mitosis are referred to as chromosomes.

Main features of eukaryotic chromosomes:
i. Chromosomes are not visible during interphase under the light microscope.
ii. They transmit characters from generation to generation.
iii. They vary in shape, size and number in different species of plants and animals.
iv. They have the properties of self-duplication, segregation and mutation.
v. They are composed of DNA, RNA and histones.

Chromosome shape is usually observed during anaphase. Chromosomes have three different shapes, viz. rod shape, S shape and V shape.

(i) Haploid: half of the somatic chromosome number of a species, denoted by n.
(ii) Diploid: the somatic chromosome number of a species, denoted by 2n.
(iii) Basic number: the genetic chromosome number of a true diploid species is called the basic number.

Parts of a chromosome:
i. Centromere: the region of the chromosome to which the spindle fibres are attached during metaphase.
ii. Chromatid: one of the two distinct longitudinal subunits of a chromosome.
iii. Secondary constriction: a constricted or narrow region other than the centromere.
iv. Telomere: the terminal region of the chromosome on either side.
v. Chromomeres: the linearly arranged bead-like structures found on the chromosomes.
vi. Matrix: the mass of achromatic material in which the chromonemata are embedded.

Karyotype is the phenotypic appearance of the chromosomes of a particular species. It is represented by a diagram known as an idiogram. Karyotypes are of two types, viz. symmetrical and asymmetrical.

Special Types of Chromosomes:
Lampbrush chromosomes: special chromosomes in which a large number of loops project out from the chromatin axis, giving a lampbrush appearance. They are formed in the oocyte nuclei of both vertebrates and invertebrates, and in the spermatocyte nuclei of Drosophila, during the diplotene stage. Their main features: 1. Extraordinary length. 2. A large number of loops. 3. The lampbrush appearance.

Polytene or Giant Chromosomes: Multiple replicates of the same chromosome held together in a parallel fashion, resulting in a very thick chromosome, are known as a polytene chromosome. These chromosomes have three main features:
1. Bands: the strips seen along these chromosomes.
2. Puffs: swollen regions known as chromosome puffs; these are regions of genetic activity.
3. Giant size.
B Chromosomes: Some species possess extra chromosomes which are not members of the normal chromosome complement. These are called B chromosomes.

Classification, on the basis of stability: i. Stable, ii. Unstable. On the basis of size: i. Standard type, ii. Small type, iii. Very small type, iv. Large type.

Behaviour at Mitosis and Meiosis: The meiotic behaviour of B chromosomes is studied during the pachytene stage.
1. They do not pair with A chromosomes.
2. A lower degree of pairing is observed among the B chromosomes themselves.
3. When a single B chromosome is present, it remains univalent during pachytene.

Chromosome Structure: Chromatin fibres are the basic units of chromosome structure.

1. Folded Fibre Model: Chromatin fibres, the basic units of the chromosome, are about 230 Å in diameter. A single chromatin fibre, consisting of a single coiled DNA double helix, is found in each chromatid. The folding of the chromatin fibre in different ways produces the chromosome structure observed at metaphase. Two copies of the chromatin fibre are formed from a single fibre as a result of DNA replication during interphase; replication of the chromatin in the centromere region takes place where the two chromatids have to separate out. Extensive folding of the chromatin fibres leads to a significant reduction in their length and an increase in thickness and stainability.

2. Nucleosome-Solenoid Model: Chromatin is composed of DNA, RNA, histones and other proteins. Chromatin fibres are about 300 Å in diameter. Nucleosomes are the subunits of chromatin and have a bead-like appearance. Each nucleosome is composed of a histone octamer and 146 bp of DNA, and consists of (1) a core particle and (2) linker or spacer DNA. The core particle has two copies each of the histone molecules H2A, H2B, H3 and H4, and is about 110 Å in diameter and 60 Å in height. One molecule of histone H1 is connected with the linker DNA. The supercoiled nucleosome fibre is known as a solenoid.

Chromosomal Aberrations – Structural Changes: Any change which alters the normal structure of a chromosome is known as a structural chromosomal aberration. Aberrations that alter the gene number in chromosomes: (i) deletion, (ii) duplication. Aberrations that alter the sequence of genes in the chromosome: (i) translocation, (ii) inversion.

Deletion: the loss of a portion or segment of a chromosome. Observed in Drosophila, maize, tomato and wheat. Depending upon its location, deletion is of two types.
1. Terminal deletion: loss of a terminal segment of a chromosome. It may be of two kinds: (i) heterozygous deletion, where the deletion occurs in only one chromosome of a homologous pair, and (ii) homozygous deletion, where the deletion occurs in both chromosomes of the pair.
2. Interstitial deletion: loss of a segment from an intermediate portion of the chromosome, between a telomere and the centromere. An interstitial deletion generally does not involve the centromere; in such a deletion the chromosome breaks at two places.

Detection, by cytological methods: (i) meiotic pairing and (ii) chromosome length. In a heterozygous deletion, pairing occurs between the homologous segments. In a terminal deletion, the normal chromosome remains unpaired at one end. In an interstitial deletion, a loop is formed in the normal chromosome in the region of the deletion; the loop confirms the presence of the deletion.

Effects of deletion:
1. Pollen sterility: pollen fertility is reduced in the presence of a deletion, since pollen grains carrying deficient chromosomes are non-functional.
2. Lethality: a large deletion is lethal.
3. Crossing over: crossing over is suppressed in the region of the deficiency, due to the lack of a corresponding segment in the area of the deletion.
4. Phenotype: deletion affects the phenotype.
In the absence of the dominant gene in the deleted region, the recessive gene expresses itself; this results in a change in the phenotype.
5. Change in karyotype: chromosomes with a deletion can never revert to the normal condition; the gene number as well as the karyotype of the individual is changed. Deletion plays an important role in species formation and in releasing variability through mutation.

Deletions are important cytological tools for mapping genes. Deletion mapping has been widely used in Drosophila to locate various genes on the polytene chromosomes.

Translocation: The one-way or reciprocal transfer of segments between non-homologous chromosomes is known as translocation. Translocations originate through breakage and exchange of parts between non-homologous chromosomes. When only one chromosome from each of the two homologous pairs is involved, it gives rise to a translocation heterozygote; when both chromosomes of each pair are involved, it produces a translocation homozygote. Translocations can be detected by cytological and genetic methods. The cytological methods include study of pachytene configurations and metaphase configurations.

Table 2. Differences between translocation and crossing over:
Translocation: 1. It involves non-homologous chromosomes. 2. It changes the linkage map. 3. It occurs by breakage and reunion. 4. It causes pollen and ovule sterility.
Crossing over: 1. It involves non-sister chromatids of homologous chromosomes. 2. It does not change the linkage map. 3. It occurs by chiasma formation. 4. It causes no sterility.

At anaphase the chromosomes of a translocation heterozygote disjoin (segregate) in three different ways:
1. Alternate disjunction: the two normal chromosomes (N1 and N2) move towards one pole and the two translocated chromosomes (T1 and T2) to the other pole. In such segregation all gametes receive a full complement of genes and give rise to viable individuals.
2. Adjacent-1 segregation: the segregation of one normal chromosome with one translocated chromosome. Such segregation occurs in the open-ring configuration. Here the chromosomes which go to one pole are non-homologous (T1 + N2 and T2 + N1).
3. Adjacent-2 segregation: sometimes, in the open-ring configuration, two homologous chromosomes (T1 + N1), i.e. one normal and one translocated, move to one pole and the other two homologues (T2 + N2) move to the other pole. Such disjunction is known as adjacent-2 segregation.
Both adjacent types of segregation produce gametes with duplications and deficiencies, which cause some sterility.

Translocations can also be detected by genetic methods, based on pollen sterility, gene segregation and linkage studies.

Effects of translocation:
1. Sterility: translocations lead to duplication and deletion of genes, and thus result in pollen and ovule sterility. A ring of four chromosomes gives about 50% sterility; a ring of six chromosomes gives about 75% sterility.
2. Crossing over: crossing over is generally suppressed, due to competition in pairing.
3. Changes in chromosome number and karyotype: translocations may alter the size of chromosomes as well as the position of the centromere. In humans, Down syndrome (mongolism) can arise in the progeny of an individual heterozygous for a translocation involving chromosome number 21.

Translocations alter chromosome size, chromosome number and karyotype, and thus play an important role in the formation of species. They are useful in locating the positions of genes, centromeres and other genetic markers on the chromosomes. They are also useful tools in breeding programmes, for the transfer of desirable characters from one species to another.
Inversion: a structural change in a chromosome in which a segment is oriented in the reverse order.

(i) Paracentric inversion: an inversion in which the centromere is not involved; both breaks occur in one arm of the chromosome. When only one chromosome of a homologous pair carries the inversion, the individual is called an inversion heterozygote; when both members of the pair carry a similar inversion, it is an inversion homozygote. Meiosis is normal in inversion homozygotes. Crossing over within the inversion loop of a paracentric inversion heterozygote results, after the exchange, in the formation of a dicentric chromatid and an acentric fragment; the other two chromatids remain normal. The dicentric chromatid forms a bridge at anaphase, which is later broken by the pull from both poles, and the acentric fragment is lost because it cannot move. Thus, of the four chromatids, two are normal and two are deficient for some genes.

(ii) Pericentric inversion: when the centromere is involved in the inversion. A break occurs in each of the two arms of the chromosome, so the centromere is included in the detached segment, resulting in a pericentric inversion. Crossing over within the inversion loop results in the formation of chromatids with duplications and deficiencies. Of the four chromatids, two are crossover products and two are normal; one of the non-crossovers has the original gene sequence, and the other has the inverted gene sequence.

An inversion results when there are two breaks in a chromosome and the detached segment is reunited to the same chromosome in the reverse order. Three cytological criteria are used for the detection of inversions: (i) the pachytene configuration, (ii) the anaphase configuration, and (iii) the position of the centromere. Inversions can be detected in meiotic nuclei by the presence of an inversion loop in the paired homologues during pachytene.

Effects of inversion:
1. Sterility: crossing over in the inversion loop leads to the formation of chromosomes with duplications and deficiencies. Gametes with such chromosomes are inviable, leading to about 50% sterility.
2. Crossing over: inversion heterozygotes often have pairing problems in the region of the inversion; the competition for pairing reduces crossing over in that region.
3. Gene order: the gene order is changed in the inverted segment of the chromosome, so inversion heterozygotes exhibit a linkage map with a different gene order. In an inverted chromosome there is no loss of genetic material, provided crossing over does not occur in the inversion loop.
4. Karyotype: a pericentric inversion sometimes changes the karyotype by shifting the position of the centromere.

Duplication: the occurrence of a segment twice in the same chromosome. It results in the addition of one or more genes to a chromosome. Duplication was first reported in Drosophila by Bridges in 1919; it has since been reported in maize and wheat.

There are four types of duplication:
1. Tandem duplication: the sequence of genes in the duplicated segment is the same as in the original segment of the chromosome, e.g. a b c [b c] d e f.
2. Reverse tandem duplication: the sequence of genes in the duplicated segment is the reverse of that in the original segment, e.g. a b c [c b] d e f.
3. Displaced duplication: the duplication lies away from the original segment, but on the same arm of the chromosome, e.g. normal a b c d e f g h i j k; displaced a [d e] b c d e f g h i j k.
4. Reverse displaced duplication: the duplication is also away from the original segment, but is found on the other arm of the chromosome.
Types 3 and 4 are known as non-adjacent duplications, because they lie away from the segment that is duplicated.

Duplications originate through unequal crossing over during meiosis. Homologous chromosomes usually pair in such a way that all the identical loci match each other in position; this permits equal crossing over between non-sister chromatids. Sometimes, however, homologous chromosomes pair in a misaligned manner, so that the corresponding identical loci do not fall opposite each other. Such a situation leads to unequal crossing over between non-sister chromatids, which gives rise to two types of chromatids: one with a duplication (e.g. a b c [b c] d e f g h i j) and the other with a deletion. When a gamete with the duplication unites with a normal gamete, a zygote is formed with duplicate genes in a particular segment of a chromosome.

The duplication loop can be observed during the pachytene stage, when the homologous chromosomes pair; a chromosome carrying a duplicated segment is longer than the normal chromosome. If a duplicated segment includes a centromere, it may be present as a small extra chromosome added to the normal chromosome complement. Duplications can also be detected by the suppression of recessive characters: a single dominant gene in the duplicated region is enough to suppress the expression of two recessive alleles.

Duplications are less harmful than deletions. They do not reduce the viability of an individual, and they lead to the addition of some genes to the population.

Changes in Chromosome Number: A basic or monoploid set of chromosomes of an individual is called a genome. In a genome, each type of chromosome is represented only once. Most sexually reproducing plant species are diploid, i.e., they have two sets of chromosomes. Any change in the chromosome number from the diploid state is referred to as heteroploidy, and individuals having a chromosome number other than diploid are called heteroploids. Heteroploidy is of two types, viz. I. euploidy and II. aneuploidy. A change in chromosome number which involves entire sets is called euploidy. Euploidy includes monoploids, diploids and polyploids.

Monoploids and Haploids: Monoploids contain a single chromosome set and are characteristically sterile. In a true diploid species, the monoploid and haploid chromosome numbers are the same (n = x). Thus a monoploid can be a haploid, but not all haploids are monoploids.

Types of Haploids: Depending upon their origin, haploids are of two types, viz. euhaploids and aneuhaploids. Euhaploids develop from a euploid species and have a complete chromosome set. Euhaploids are of two kinds: monohaploids, which develop from a normal diploid species, and polyhaploids, which develop from autopolyploid species. When a haploid develops from a tetraploid species, it is called a dihaploid. Aneuhaploids develop from aneuploid species and have either one additional or one missing chromosome. Aneuhaploids include disomic haploids (n + 1), nullisomic haploids (n – 1), substitution haploids (n – 1 + 1), misdivision haploids, etc. Misdivision haploids carry an isochromosome, which is produced by transverse division of the centromere (generally the centromere divides longitudinally). Aneuhaploids are generally inviable. Various methods are now known by which haploids can be produced.
These methods include (1) pollination with foreign pollen, (2) delayed pollination, (3) use of X-ray-irradiated pollen for pollination, (4) temperature shock, (5) treatment with chemicals like colchicine, (6) interspecific and intergeneric crosses, and (7) anther and pollen culture.

Uses of Haploids: Haploids have several applications in plant breeding. They are used for (1) the development of pure lines, (2) disease resistance, (3) the development of inbred lines, and (4) the production of aneuploids.

Diploids: Normal diploids are known as disomics. They show regular bivalent pairing during meiosis. Diploids also have disomic genetics, with two alleles at each locus. Pure lines or inbred lines of diploids are homozygous at each locus. Polyploids which behave like diploids, such as wheat and cotton, are known as disomic polyploids.

Polyploids: An organism or individual having more than two basic or monoploid sets of chromosomes is called a polyploid, and such a condition is known as polyploidy. Polyploidy is of two types, viz. (1) autopolyploidy and (2) allopolyploidy.

Autopolyploids: Polyploids which originate by multiplication of the chromosomes of a single species are known as autopolyploids or autoploids, and the situation is referred to as autopolyploidy. Autoploids include triploids (3x), tetraploids (4x), pentaploids (5x), hexaploids (6x), heptaploids (7x), octaploids (8x), and so on. Autoploids are also known as simple polyploids or single-species polyploids.

Autotriploids: They have three sets of chromosomes of the same species. They can occur naturally or can be produced artificially by crossing an autotetraploid with the diploid species. Triploids are generally highly sterile, due to defective gamete formation, and so are useful only in plant species which propagate asexually, like banana, sugarcane and apple.

Autotetraploids: They have four copies of the genome of the same species. They may arise spontaneously or can be induced artificially by doubling the chromosomes of a diploid species with colchicine treatment. Tetraploids are usually very stable and fertile, because pairing partners are available during meiosis; such individuals form diploid (2n) gametes. Autotetraploids are usually larger and more vigorous than the diploid species. Rye, grapes, alfalfa, groundnut, potato and coffee are well-known examples of autotetraploids.

Allopolyploids: A polyploid organism which originates by combining complete chromosome sets from two or more species is known as an allopolyploid or alloploid, and such a condition is referred to as allopolyploidy. Alloploids are also known as hybrid polyploids, or bispecies or multispecies polyploids. An allopolyploid which arises by combining the genomes of two diploid species is termed an allotetraploid or amphidiploid. Allopolyploids can be developed by interspecific crosses, with fertility restored by chromosome doubling through colchicine treatment. Allopolyploidy has played a greater role in crop evolution than autopolyploidy; it is found in about 50% of crop plants. Some important natural allopolyploid crops are wheat, cotton, tobacco, mustard and oats. Interspecific crossing followed by chromosome doubling in nature has resulted in the origin of these allopolyploids.

Induction of Polyploidy: Polyploidy is mainly induced by treatment with the chemical colchicine, an alkaloid obtained from the seeds of Colchicum autumnale, a plant of the family Liliaceae. Colchicine-induced polyploidy is known as colchiploidy.
In plants, colchicine is applied in aqueous solution to growing tips, meristematic cells, seeds and axillary buds. Colchicine induces polyploidy by inhibiting the formation of spindle fibres. The chromosomes do not line up on the equatorial plate, and they divide without moving to the poles, owing to the lack of spindle fibres. The nuclear membrane re-forms around them and the cell enters interphase; the nucleus thus has double the chromosome number.

Effects of Polyploidy: 1. Stems are thicker and stouter. 2. Leaves are fleshy, thicker, larger and darker green in colour. 3. Roots are stronger and longer. 4. Flowers, pollen grains and seeds are larger than in diploids. 5. The maturity duration is longer and the growth rate slower than in diploids. 6. The water content is higher than in diploids, etc.

Applications in Crop Improvement: Polyploidy plays an important role in crop improvement. Both autopolyploidy and allopolyploidy are useful in several ways; however, allopolyploidy has wider applications than autopolyploidy. The applications of each are briefly presented below.

Autopolyploidy: Both triploids and tetraploids have been used in crop improvement, though their applications have been limited to a few species. Autotriploids have been developed only in sugarbeet and watermelon. Triploid sugarbeets have larger roots and a higher sugar content than diploids; triploid watermelons are seedless, or have rudimentary and soft seeds like cucumber.

Allopolyploidy: Alloploidy is useful in four principal ways, viz. (1) in tracing the origin of natural allopolyploids, (2) in creating new species, (3) in interspecific gene transfer, and (4) as a bridging species.

Limitations of Polyploidy:
1. Limited use: single-species polyploidy has limited applications; it is generally useful only in crop species which propagate asexually, like banana, potato, sugarcane and grapes.
2. Difficulty in maintenance: monoploids and triploids cannot be maintained in sexually propagating crop species.
3. Undesirable characters: in bispecies or multispecies polyploids, characters are contributed by each of the parental species, and some of these may be undesirable, as in Raphanobrassica.
4. Other defects: induced polyploids often show low fertility, genetic instability, a slow growth rate, late maturity, etc.
5. The chances of developing new species through allopolyploidy are extremely low.

Aneuploids: Aneuploids are of three types, viz. (1) monosomics, (2) nullisomics, and (3) polysomics. These are described below.

Monosomics: An individual lacking one chromosome from a diploid set (2n-1) is called a monosomic, and such a condition is known as monosomy. Monosomics may originate in three main ways, viz. (i) from diploids, (ii) from nullisomics, and (iii) from trisomics, as described below.
(i) From diploids: Monosomics may originate spontaneously from diploids. Non-disjunction during meiosis sometimes gives rise to an n-1 gamete; if this gamete is fertilized by a normal (n) gamete, a monosomic zygote (2n-1) is produced.
(ii) From nullisomics: Nullisomics (2n-2) produce n-1 gametes. Union of such a gamete with a normal gamete (n-1 + n) gives rise to a monosomic (2n-1).
(iii) From trisomics: Trisomics (2n+1) also give rise to monosomics.
Sometimes non-disjunction of the three homologous chromosomes of a trisomic during meiosis gives rise to n-1 gametes. Union of such a gamete with a normal one (n-1 + n) results in the development of a monosomic zygote (2n-1).

Nullisomics: An individual lacking one pair of chromosomes from a diploid set (2n-2) is called a nullisomic, and such a situation is referred to as nullisomy.

Polysomics: An individual having either a single extra chromosome or one extra pair of chromosomes in the diploid complement is known as a polysomic, and such a condition is referred to as polysomy.

Applications in Crop Improvement: Aneuploids are useful in crop improvement in various ways. Some of their uses in plant breeding are:
1. Locating genes: aneuploids are useful tools for locating genes on specific chromosomes; monosomics and nullisomics are used for this purpose.
2. Interspecific gene transfer: monosomics are also used in transferring chromosomes carrying desirable genes from one species to another.
3. Aneuploids are used for developing alien addition and alien substitution lines in various crops.
4. Primary trisomics are useful in the identification of chromosomes involved in translocations.

DNA Replication: The process by which a DNA molecule makes identical copies of itself is known as DNA replication. Three modes of DNA replication are possible: 1. dispersive replication, 2. conservative replication, 3. semiconservative replication.

Semiconservative Replication: Main features:
1. Separation of the two strands of the parental DNA.
2. Complementary base pairing with the bases located in the single-stranded regions.
3. Formation of phosphodiester linkages between the neighbouring deoxyribonucleotides.
4. This ensures that the base pairs of the new strands are complementary to those of the old strands.
5. The base sequence of a newly synthesized strand is dictated by the base sequence of the old strand.

Evidence for Semiconservative Replication (Meselson and Stahl, 1958): They grew E. coli on 15N for 14 cell generations, until the nitrogen in the bacterial DNA was entirely 15N. This heavy DNA had a greater density than DNA containing 14N. The bacteria were then transferred to a culture medium containing 14N. They observed that, after one round of replication, the DNA was of intermediate density, indicating the presence of hybrid DNA (one 15N strand + one 14N strand). After a second round of replication there would be four DNA molecules; of these, two would be hybrid (15N-14N).

Mechanism of DNA Replication:
1. Initiation of replication: replication begins at a replication origin.
2. Unwinding of the helix: unwinding is brought about by the enzyme helicase. Unwinding may result in the formation of supercoils, which are removed by the enzyme DNA gyrase (a topoisomerase). A Y-shaped replication fork is formed, and the single strands are stabilised by single-strand binding protein (SSB).
3. Formation of the primer: an enzyme, primase, initiates synthesis on the template strand, which is read in the 3′ to 5′ direction, generating a short primer RNA in the 5′ to 3′ direction. The free 3′-OH of this primer RNA provides the initiation point at which DNA polymerase sequentially adds deoxyribonucleotides. DNA polymerase III catalyses DNA replication.
4. Elongation of the new strands: synthesis of the new strand proceeds continuously along one parental strand, forming the leading daughter strand. Synthesis of the other daughter strand, along the other parental strand, takes place in short pieces, forming the lagging daughter strand; these short pieces of DNA are called Okazaki fragments. The discontinuous pieces of the lagging strand are joined together by the enzyme DNA ligase.
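The complementary pairing rule in point 2 of the features above can be pictured with a small sketch (an illustration of the base-pairing logic only, written here in JavaScript; it says nothing about the enzymes involved):

    // Base-pairing rule: A pairs with T, G pairs with C.
    var complement = { A: 'T', T: 'A', G: 'C', C: 'G' };
    // Build the new daughter strand dictated by a template strand.
    function newStrand(template) {
        return template.split('').map(function (base) {
            return complement[base];
        }).join('');
    }
    console.log(newStrand('ATGCCGTA')); // 'TACGGCAT'
    // Replicating both strands of one duplex this way gives two daughter
    // duplexes, each keeping one old strand, exactly the semiconservative
    // pattern that Meselson and Stahl observed.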
Geometry can be a challenging subject for young students, but with the right resources, it can also be a lot of fun! These three-page geometrical shapes worksheets are designed to engage and educate class 2 students on the basics of geometry. From identifying shapes to drawing them, these activities are sure to keep your students entertained while they learn.

Identify and name basic shapes. The first page of these geometrical shapes worksheets focuses on identifying and naming basic shapes. Students will be asked to identify shapes such as circles, squares, triangles, and rectangles, and then write their names. This activity helps students develop their shape recognition skills and reinforces their understanding of basic geometry concepts.

Match shapes to real-life objects. The second page takes the learning a step further by asking students to match shapes to real-life objects. For example, they may be asked to match a circle to a clock or a rectangle to a book. This activity helps students understand how shapes are used in everyday life and reinforces their shape recognition. It also encourages them to think creatively and make connections between abstract concepts and real-world objects.

Draw and identify shapes based on their properties. The third page focuses on drawing and identifying shapes based on their properties. Students will be asked to draw shapes such as circles, squares, triangles, and rectangles, and then identify the number of sides and corners each shape has. This activity helps students develop their spatial reasoning skills and reinforces their understanding of basic geometrical concepts. It also encourages them to pay attention to details and think critically about the properties of different shapes.

Draw and color shapes. These worksheets also make room for creativity! Students will be asked to draw and color different shapes, using their imagination to come up with unique designs. This activity helps students develop their fine motor skills and encourages them to think outside the box when it comes to geometrical shapes. It also allows them to express themselves artistically while still reinforcing their understanding of basic geometrical concepts.

Identify and count sides and corners. Finally, students practice identifying and counting the sides and corners of different shapes, such as triangles, squares, and rectangles. This activity develops spatial reasoning, reinforces basic geometrical concepts, and sets a strong foundation for the more complex geometrical ideas they will learn in the future.

Introducing an engaging shapes class 2 curriculum is a great way to nurture the young minds of students who are just beginning to explore the fascinating world of geometrical shapes. Class 2 geometry lessons focus on familiarizing students with a variety of shapes, including 2D and 3D geometrical shapes. By exploring the fundamentals of geometry in class 2, children can establish a strong foundation for their future studies. So, what geometry lessons should be taught in shapes for class 2? A well-rounded curriculum includes shapes and patterns worksheets for class 2, solid shapes for class 2, and a variety of interactive activities.
Teachers often use a combination of worksheets, such as a maths worksheet for class 2 or a shapes worksheet for class 2 with answers, to reinforce concepts and provide practice opportunities. An engaging shapes project for class 2 can also be an excellent way to incorporate hands-on learning experiences.

Short forms in maths for class 2 lessons, along with shapes and patterns for class 2, can be integrated into a geometry worksheet for class 2. This approach allows students to develop their skills in identifying 2D and 3D geometrical shapes while also exploring the world of geometry for class 2. When it comes to geometrical shapes for class 2, it is also worth considering which geometry box is best for storing and organizing tools and materials. Drawing with shapes for class 2 can be a fun and interactive way for students to practice and apply their newfound knowledge.

Class 2 maths shapes and patterns lessons often incorporate 3D shapes for class 2 and 2D shapes for class 2 to help students build a strong understanding of geometric concepts. For teachers and parents looking for additional resources, a class 2 maths book PDF free download can be a valuable tool. Such a resource typically includes class 2 maths puzzles, a class 2 maths test, class 2 maths exercises, and a class 2 maths syllabus PDF, while class 2 maths book solutions provide helpful guidance and support for educators.

For now, class 2 students should focus on mastering the basics with resources such as a geometrical shapes worksheet for class 2. A geometry class 2 worksheet can be a valuable resource for students, as it offers targeted practice in identifying and working with shapes in maths class 2. Teachers should strive to include basic shapes for class 2, 3D shapes for class 2, and geometry questions for class 2 in their lesson plans to help students build a strong foundation in this essential subject.

In conclusion, a well-rounded shapes class 2 curriculum should include engaging activities, targeted practice with worksheets, and opportunities for students to explore the world of geometrical shapes. By focusing on 2D shapes and 3D shapes for class 2, teachers can help students develop a solid understanding of geometry that will serve them well in their future studies. With the right resources and support, students will be well-equipped to tackle more advanced geometry concepts as they progress in their education.
Graph Dynamic Linear Equations
Students explore the concept of linear equations. In this linear equation lesson, students change parameters of an equation and notice the effect it has on its graph.

Using Linear Equations to Define Geometric Solids (9th-11th Math, CCSS: Designed)
Making the transition from two-dimensional shapes to three-dimensional solids can be difficult for many geometry students. This comprehensive Common Core lesson starts with writing and graphing linear equations to define a bounded region...

The Graph of a Linear Equation in Two Variables (8th Math, CCSS: Designed)
Add more points on the graph ... and it still remains a line! The 13th installment in a series of 33 leads the class to the understanding that the graph of a linear equation is a line. Pupils find several solutions to a two-variable linear...

Solving Systems of Linear Equations (8th-9th Math, CCSS: Adaptable)
Solving systems of equations underpins much of advanced algebra, especially linear algebra. Developing an intuition for the kinds and descriptions of solutions is key for success in those later courses. This intuition is exactly what...

Topic 4: Solving Systems of Linear Equations (8th-11th Math, CCSS: Adaptable)
Linear equations, coordinate planes, and systems of equations are covered in this extremely well-organized lesson. Composed of a series of mini-lessons, the instruction aims at explaining a different facet of solving systems of linear...

Linear Relationships: Tables, Equations, and Graphs (7th-8th Math, CCSS: Adaptable)
Pupils explore the concept of linear relationships. They discuss real-world examples of independent and dependent relationships. In addition, they use tables, graphs, and equations to represent linear relationships. They also use ordered...
In this lesson, students will investigate the measurement of CO2 as an output per individual and per household, in terms of kilograms of CO2 per annum. They will examine the carbon footprint of average Australians in comparison to the world average and to countries such as Bangladesh, as featured in a clip from 2040.

Students will:
- understand the densities and masses of solids, liquids and gases
- learn how CO2 output by humans can be measured in kilograms produced 'per annum'
- learn that human CO2 outputs can be reduced either by producing less CO2 or by absorbing or 'drawing down' CO2 from the atmosphere
- develop teamwork skills as they calculate household CO2 footprints using addition, subtraction and multiplication of large numbers
- realise that they can make a difference to global carbon dioxide levels by acting locally and encouraging others to do the same

Success criteria: students can
- explain in simple terms why liquids, solids and gases have different densities
- convert kilograms of CO2 into an approximate volume at room temperature, using multiplication
- use provided data, together with addition, subtraction and multiplication, to calculate aggregate amounts of CO2 for individuals, neighbourhoods and societies
- give examples of local and household actions that can be taken to reduce their own individual carbon footprint and those of their families

Lesson guides and printables
- Unit of work: 2040 – Mathematics – Years 5 & 6
- Time required: 65 mins
- Level of teacher scaffolding: High – direct teacher instruction is required in the warm-up and in Part B (explicit instruction), and scaffolding and guidance are needed to explain the activity for Part C (group work and problem-solving)

To view our Australian Curriculum alignment click here
To view our NZ Curriculum alignment click here

- Student Worksheets – one copy per student
- A device capable of presenting a video to the class
- 1 plastic or glass 1 litre jug – with measurement markings
- 1 litre of water in a separate container
- 1 small packet of rigatoni or penne pasta shells (uncooked)
- 1 litre bag or container of small pebbles, stones or sand
- Small measuring scales capable of being set to zero (e.g. nutrition scales)
- Whiteboard and markers
- 'CO2 Saver Choice Cards & Facts' – one copy of page 1 per 2 or 3 students, and one copy of page 2 only
- 'Our CO2 Saver Household' Worksheet – 1 per 2 or 3 students
- Access to the 'Household CO2 Calculator'
- Summary slides – optional
- Sustainability Factsheet – optional

2040 is an innovative feature documentary that looks to the future, but is vitally important NOW! Director Damon Gameau embarks on a journey to explore what the future could look like by the year 2040 if we simply embraced the best solutions already available to us to improve our planet and shifted them rapidly into the mainstream.

In Australia: Order the Schools Version of the 2040 DVD. The Schools Version includes an educational license and is for Australian primary and secondary schools that wish to utilise the film as a learning tool or host free on-site screenings for the school community.

In New Zealand: Order the Schools Version of the 2040 DVD. The Schools Version includes an educational license and is for New Zealand primary and secondary schools that wish to utilise the film as a learning tool or host free on-site screenings for the school community.

If you are teaching in either New Zealand or Australia, you can now organise a virtual screening of the film for your class.
To enquire about this option, simply email firstname.lastname@example.org and the 2040 team will help you set this up! If you have already bought a DVD of the film and you have a ClickView account, you can email the team for permission to upload the film to your account to make it more easily accessible for your teachers and students. Cool Australia, GoodThing Productions and Regen Pictures would like to acknowledge the generous contributions of Good Pitch Australia, Shark Island Institute, Documentary Australia Foundation, The Caledonia Foundation and our philanthropic partners in the development of these teaching resources.
Here's a secret that math tutors keep to themselves: Sometimes, the fanciest math skills are actually the simplest ones! Like the ultra-mysterious Reference Angle! Sounds pretty impressive, no? But Reference Angles are actually one of the easiest things to define.

Let's draw a graph, an acute angle (an angle less than 90°), and an obtuse angle (an angle greater than 90°):

The Reference Angle is the measure, in degrees, of the smallest angle between the terminal side of the angle and the x-axis.

For an acute angle, it is very simple: it is just the angle itself. In this case, it is 45°.

For an obtuse angle, it is a little trickier. The terminal side is actually closest to the x-axis on the OPPOSITE side of the angle. In this case, it would be this:

We know the measurement is 70° because the angles that make up a straight line along the x-axis add up to 180°. We just subtract 110° from 180° to get the REFERENCE ANGLE! If this doesn't make sense, you can get math homework help from a geometry expert.

***Please note that even though the Reference Angle opens toward the negative side of the x-axis, it is always positive.

***Also, unless the terminal side of the angle lies on the y-axis to form a right angle, the Reference Angle will always be acute!!! In other words, Reference Angle ≤ 90°. Here is the largest Reference Angle possible: it is a right angle with a measurement of 90°.

***We can use Reference Angles to calculate the trig functions of angles, like sine, cosine, and tangent. It's basically a cool shortcut: sin(70°) and sin(110°) are the same because they have the same Reference Angle!
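If you like to check rules with a little code, here is a small sketch of my own in JavaScript (it is not from the lesson above, just the same rules written out as a function):

    // Reference angle for an angle given in degrees.
    function referenceAngle(deg) {
        var theta = ((deg % 360) + 360) % 360;    // normalize into [0, 360)
        if (theta <= 90)  { return theta; }       // quadrant I: the angle itself
        if (theta <= 180) { return 180 - theta; } // quadrant II: measure to the negative x-axis
        if (theta <= 270) { return theta - 180; } // quadrant III
        return 360 - theta;                       // quadrant IV
    }
    console.log(referenceAngle(45));  // 45
    console.log(referenceAngle(110)); // 70
    // sin(70 degrees) and sin(110 degrees) agree, up to floating-point rounding:
    var toRad = Math.PI / 180;
    console.log(Math.abs(Math.sin(70 * toRad) - Math.sin(110 * toRad)) < 1e-12); // true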
- The source code examples in this post can be found on GitHub

1.1 - Basic expressions involving just addition, multiplication, and grouping

1.2 - Unnecessary grouping

To know whether grouping with parentheses is really needed or not is just a matter of knowing what comes first, and to know that you just need to review the precedence values of each operator used in an expression. Using a grouping operator where it is not needed might not change the outcome, and it might help to make the code more readable for developers who do not understand order of operations as well as they maybe should. However, it is still best to just understand order of operations, so that unneeded use of grouping operators does not end up happening.

2 - Associativity of operators

So associativity is the direction in which operations are performed, such as left to right, or right to left. Operators like addition, subtraction and so forth have left-to-right associativity. However, other operators, such as the assignment and logical not operators, have right-to-left associativity.

2.1 - Left to right

So subtraction is a good example of an operator where associativity matters, because taking 2 from 5 is not the same thing as taking 5 from 2. Here subtraction is an example of left-to-right associativity: you start with 5 and then subtract 2, so things flow from left to right.

2.2 - Right to left

Although many operators have left-to-right associativity, many have the inverse of this also. One example is the logical not operator. This operator converts the value given to it at its right to a boolean and inverts it. If a value is given to the left instead, that will result in an error.

3.1 - Grouping (Precedence 21, highest, performed first)

Here, in the first expression, the logical not operator is performed first, because it has a precedence value of 17 while multiplication sits at 15. So not 0 converts to the boolean value true; then the multiplication operation is performed, resulting in 5. Finally the true boolean value is added to 5, and when doing so true converts to the number 1, resulting in a number value of 6.

By grouping the 0 and the 1 together, the addition operation is now performed first, because the grouping precedence value of 21 supersedes the value of the logical not operator, again at 17. So now, when the logical not operator is performed, this results in not 1, which gives a false boolean value that converts to 0 when coerced to a number, resulting in zero being multiplied by 5, which is 0. So, no matter what else is going on, anything inside the parentheses, or grouping if you prefer, will be performed first.

3.2 - Function calls, new with arguments, and more (Precedence 20)

3.3 - New operator without arguments (Precedence 19)

The new operator is used with a constructor function as a way to create a new instance of that constructor function. There are many built-in constructor functions, such as the Date and Array constructors, but it is also possible to create a user-defined constructor function. The precedence of the new keyword is just below that of a function call, but still fairly high, so if I am creating a new instance of an object in an expression, more often than not that will be performed first.
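Before moving on down the table, here is a small snippet of my own that makes sections 2 and 3.1 concrete (assuming the literal values in the 3.1 walkthrough were 0, 1, and 5):

    // grouping versus default precedence (section 3.1)
    console.log( !0 + 1 * 5 );   // 6  ( !0 => true; 1 * 5 => 5; true + 5 => 6 )
    console.log( !(0 + 1) * 5 ); // 0  ( 0 + 1 => 1; !1 => false; false * 5 => 0 )
    // associativity (section 2): subtraction runs left to right,
    // while assignment runs right to left
    console.log(5 - 2 - 1);      // 2, i.e. (5 - 2) - 1
    var a, b;
    a = b = 3;                   // b = 3 happens first, then a = b
    console.log(a, b);           // 3 3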
3.4 - Postfix increment and postfix decrement (Precedence 18)

There are the increment and decrement operators, written as two plus signs or two minus signs. These operators can be placed before or after a variable that is to be incremented or decremented. If one of them is used after a variable, then it is postfix and has a precedence value of 18.

3.5 - Logical not, bitwise not, and more (Precedence 17)

Here we have the logical not operator, which is the operator in this group I find myself using most often. So, when working out an expression, any values that have their truth values inverted will be handled before addition or subtraction is performed. Besides logical not and bitwise not, this group also contains unary plus and negation, as well as prefix increment and decrement.

3.6 - The exponentiation operator (Precedence 16)

Two multiplication signs (**) can be used as a shorthand for the Math.pow method. When doing this, it has a higher precedence than plain old multiplication.

3.7 - Multiplication, division, and remainder (Precedence 15)

The arithmetic operations of multiplication, division and remainder have a precedence of 15, which is one level above that of addition and subtraction. This is one of the most commonly used sets of operators, so it is a good idea to get this one solid, at least when it comes to the various expressions that involve addition and subtraction together with multiplication and division.

3.8 - Addition and subtraction (Precedence 14)

Addition and subtraction have a precedence of 14, so these operations will be performed after multiplication, division, and remainder.

3.9 - Bitwise shift operators (Precedence 13)

Bitwise shift operators are a way to shift the binary representation of a number to the right or left. They have an even lower precedence than addition, subtraction, division and so forth, so be sure to use the grouping operator as needed when working out expressions with them.

3.10 - Less than, less than or equal, greater than, etc. (Precedence 12)

The less than, greater than, less than or equal to, and greater than or equal to operators have a precedence of 12. This level of precedence also includes the in and instanceof operators.

3.11 - Equality, inequality, as well as strict equality (Precedence 11)

The equality and inequality operators, as well as the strict forms of these operators, have a precedence value of 11.

3.12 - Bitwise AND (Precedence 10)

3.13 - Bitwise XOR (Precedence 9)

3.14 - Bitwise OR (Precedence 8)

3.15 - Logical AND (Precedence 7)

3.16 - Logical OR (Precedence 6)

So logical OR operators have left-to-right associativity. In addition, if anything that comes along evaluates to true, that will be the value of the expression, and any additional parts will not affect the result. This effect is desirable in many situations, and as such it is often used as a way to feature test and to create polyfills.

3.17 - Nullish coalescing operator (Precedence 5)

3.18 - Conditional (Precedence 4)

I often see conditional operators used in expressions. When using them, any expression that comes first will typically be performed first, because just about all the other operators typically used to write expressions have higher precedence.

3.19 - Assignment (Precedence 3)

3.20 - yield (Precedence 2)

3.21 - Comma (Precedence 1)

4.1 - Estimating income example

Say you want to estimate the amount of money that you might make from a blog post if you manage to rank at the top of a search engine result page. You know the score that a keyword of interest gets relative to a compare keyword for which you know the average monthly traffic. You also know what is average when it comes to click-through rates for the first position, second position and so forth, and also your average page revenue per mille. So, in order to figure estimates for the amount of money you might make at each rank position, you will need to work out some kind of lengthy expression and use that in a function to which you pass arguments for all of this. So you might end up with something like this:
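(A sketch of what such a function could look like; the parameter names and the sample numbers below are my own, for illustration, rather than the exact code from the post's repository.)

    // rough monthly income estimate for a given rank position:
    // score / compareScore scales the known traffic of a compare keyword,
    // ctr is the click-through rate for the position, rpm is revenue per mille
    var pageMoney = function (score, compareScore, compareTraffic, ctr, rpm) {
        return score / compareScore * compareTraffic * ctr * rpm / 1000;
    };
    // a keyword scoring half of a compare keyword with 4000 monthly searches,
    // a 30% click-through rate for the top spot, and an RPM of 2.5:
    console.log(pageMoney(5, 10, 4000, 0.3, 2.5)); // 1.5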
So, getting back to the subject of this post: the expression used in the pageMoney function is composed of operators that are all division and multiplication, both of which have the same operator precedence as well as the same associativity. So, for this expression, the operations are simply performed from left to right.

4.2 - Going from zero to one and back again

Say you want to write a function that will spit out a value that goes from zero up to one and then back down again, depending on a current frame index value compared to a total max frame count. These are the kinds of functions I end up writing when I am playing around with animations that are governed by logic written in a functional, deterministic kind of way. In this exercise I made a function that gives a value that behaves as expected, and in doing so I wrote several expressions that make use of a few operators, including a native function call. The particular expression of interest here is the one that returns the value between zero and one depending on the current state provided via the function's arguments. This expression was fairly easy for me to write because I have a decent grasp on order of operations these days; however, in the past it would have taken a lot longer, as I would have followed a time-consuming trial and error process.

4.3 - Getting my cell phone plan data target for the day

So, where I live, I do not have any kind of hard-wired broadband Internet access, just mobile broadband via my cell phone. With my plan I only have so much high-speed data until I get throttled down to 128kbps, and as such I need to budget my data or pay through the nose for a higher data cap. With that in mind, it would be nice to know a certain figure each day that will tell me if I am above or below budget when it comes to data. If I am above budget I can watch a video or two; if not, I have to change my browsing habits and focus more on work, which does not eat up a whole lot of data, as I just need to push and pull text. So, to write some kind of function that can help me get that daily data target figure, I can exercise my knowledge of operator precedence to work out an expression that will do just that.

4.4 - Finding out a monthly payment for a mortgage

Here is yet another real world example: a function that helps figure the monthly payment of a fixed-rate mortgage.
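(A sketch of what such a function might look like; it uses the standard fixed-rate amortization formula, and the parameter names and sample figures are my own rather than the post's original code.)

    // monthly payment for a fixed-rate mortgage
    // principal: loan amount; annualRate: e.g. 0.05 for 5%; years: loan term
    var monthlyPayment = function (principal, annualRate, years) {
        var i = annualRate / 12,    // monthly interest rate
            n = years * 12,         // total number of monthly payments
            f = Math.pow(1 + i, n); // growth factor (1 + i) to the power n
        return principal * i * f / (f - 1);
    };
    // a 200000 loan at 5% over 30 years works out to about 1073.64 a month
    console.log(monthlyPayment(200000, 0.05, 30).toFixed(2)); // '1073.64'

Note that the one grouping operator in the return expression is genuinely needed: without the parentheses around (f - 1), the division would bind before the subtraction, since division (precedence 15) comes before subtraction (precedence 14).

5 - Conclusion

So that is it for this post. Knowing the precedence and associativity of the operators makes longer expressions like the ones in these examples far less intimidating, and it cuts down on guesswork with the grouping operator.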
Gone are the days of being able to count the number of known planets on your fingers. Today, there are more than 800 confirmed exoplanets — planets that orbit stars beyond our Sun — and more than 2,700 other candidates. What are these exotic planets made of? Unfortunately, you cannot stack them in a jar like marbles and take a closer look. Instead, researchers are coming up with advanced techniques for probing the planets’ makeups. One breakthrough to come in recent years is direct imaging of exoplanets. Ground-based telescopes have begun taking infrared pictures of the planets posing near their stars in family portraits. But to astronomers, a picture is worth even more than a thousand words if its light can be broken apart into a rainbow of different wavelengths. Those wishes are coming true as researchers are beginning to install infrared cameras on ground-based telescopes equipped with spectrographs. Spectrographs are instruments that spread an object’s light apart, revealing signatures of molecules. Project 1640, partly funded by NASA’s Jet Propulsion Laboratory (JPL) in Pasadena, California, recently accomplished this goal using the Palomar Observatory near San Diego. “In just one hour, we were able to get precise composition information about four planets around one overwhelmingly bright star,” said Gautam Vasisht of JPL. “The star is a hundred thousand times as bright as the planets, so we’ve developed ways to remove that starlight and isolate the extremely faint light of the planets.” Along with ground-based infrared imaging, other strategies for combing through the atmospheres of giant planets are being actively pursued as well. For example, NASA’s Spitzer and Hubble space telescopes monitor planets as they cross in front of their stars, and then disappear behind. NASA’s upcoming James Webb Space Telescope will use a comparable strategy to study the atmospheres of planets only slightly larger than Earth. In the new study, the researchers examined HR 8799, a large star orbited by at least four known giant, red planets. Three of the planets were among the first ever directly imaged around a star, thanks to observations from the Gemini and Keck telescopes on Mauna Kea, Hawaii, in 2008. The fourth planet, the closest to the star and the hardest to see, was revealed in images taken by the Keck telescope in 2010. That alone was a tremendous feat considering that all planet discoveries up until then had been made through indirect means, for example by looking for the wobble of a star induced by the tug of planets. Those images weren’t enough, however, to reveal any information about the planets’ chemical composition. That’s where spectrographs are needed to expose the “fingerprints” of molecules in a planet’s atmosphere. Capturing a distant world’s spectrum requires gathering even more planet light, and that means further blocking the glare of the star. Project 1640 accomplished this with a collection of instruments, which the team installs on the ground-based telescopes each time they go on “observing runs.” The instrument suite includes a coronagraph to mask out the starlight; an advanced adaptive optics system, which removes the blur of our moving atmosphere by making millions of tiny adjustments to two deformable telescope mirrors; an imaging spectrograph that records 30 images in a rainbow of infrared colors simultaneously; and a state-of-the-art wavefront sensor that further adjusts the mirrors to compensate for scattered starlight. 
“It’s like taking a single picture of the Empire State Building from an airplane that reveals a bump on the sidewalk next to it that is as high as an ant,” said Ben R. Oppenheimer, from the Astrophysics Department at the American Museum of Natural History in New York. Their results revealed that all four planets, though nearly the same in temperature, have different compositions. Some, unexpectedly, do not have methane in them, and there may be hints of ammonia or other compounds that would also be surprising. Further theoretical modeling will help to understand the chemistry of these planets. Meanwhile, the quest to obtain more and better spectra of exoplanets continues. Other researchers have used the Keck telescope and the Large Binocular Telescope near Tucson, Arizona, to study the emission of individual planets in the HR 8799 system. In addition to the HR 8799 system, only two others have yielded images of exoplanets. The next step is to find more planets ripe for giving up their chemical secrets. Several ground-based telescopes are being prepared for the hunt, including Keck, Gemini, Palomar, and Japan’s Subaru Telescope on Mauna Kea, Hawaii. Ideally, the researchers want to find young planets that still have enough heat left over from their formation and thus more infrared light for the spectrographs to see. They also want to find planets located far from their stars and out of the blinding starlight. NASA’s infrared Spitzer and Wide-field Infrared Survey Explorer (WISE) missions and its ultraviolet Galaxy Evolution Explorer, now led by the California Institute of Technology in Pasadena, have helped identify candidate young stars that may host planets meeting these criteria. “We’re looking for super-Jupiter planets located far away from their star,” said Vasisht. “As our technique develops, we hope to be able to acquire molecular compositions of smaller and slightly older gas planets.” Still lower-mass planets, down to the size of Saturn, will be targets for imaging studies by the James Webb Space Telescope. “Rocky Earth-like planets are too small and close to their stars for the current technology or even for James Webb to detect. The feat of cracking the chemical compositions of true Earth analogs will come from a future space mission such as the proposed Terrestrial Planet Finder,” said Charles Beichman from the NASA’s Exoplanet Science Institute at Caltech. Though the larger gas planets are not hospitable to life, the current studies are teaching astronomers how the smaller rocky ones form. “The outer giant planets dictate the fate of rocky ones like Earth. Giant planets can migrate in toward a star and, in the process, tug the smaller rocky planets around or even kick them out of the system. We’re looking at hot Jupiters before they migrate in and hope to understand more about how and when they might influence the destiny of the rocky inner planets,” said Vasisht.
Scientists have taken major steps in their hunt to find black holes that are neither very small nor extremely large. Finding these elusive intermediate-mass black holes could help astronomers better understand what the “seeds” for the largest black holes in the early Universe were. The new research comes from two separate studies, each using data from NASA’s Chandra X-ray Observatory and other telescopes. Black holes that contain between about one hundred and several hundred thousand times the mass of the Sun are called “intermediate mass” black holes, or IMBHs. This is because their mass places them in between the well-documented and frequently-studied “stellar mass” black holes on one end of the mass scale and the “supermassive black holes” found in the central regions of massive galaxies on the other. While several tantalizing possible IMBHs have been reported in recent years, astronomers are still trying to determine how common they are and what their properties teach us about the formation of the first supermassive black holes. One team of researchers used a large campaign called the Chandra COSMOS-Legacy survey to study dwarf galaxies, which contain less than one percent the amount of mass in stars as our Milky Way does. (COSMOS is an abbreviation of Cosmic Evolution Survey.) The characterization of these galaxies was enabled by the rich dataset available for the COSMOS field at different wavelengths, including data from NASA and ESA telescopes. The Chandra data were crucial for this search because a bright, point-like source of X-ray emission near the center of a galaxy is a telltale sign of the presence of a black hole. The X-rays are produced by gas heated to millions of degrees by the enormous gravitational and magnetic forces near the black hole. “We may have found that dwarf galaxies are a haven for these missing middleweight black holes,” said Mar Mezcua of the Institute of Space Sciences in Spain who led one of the studies. “We didn’t just find a handful of IMBHs — we may have found dozens.” Her team identified forty growing black holes in dwarf galaxies. Twelve of them are located at distances more than five billion light years from Earth and the most distant is 10.9 billion light years away, the most distant growing black hole in a dwarf galaxy ever seen. One of the dwarf galaxies is the least massive galaxy found to host a growing black hole in its center. Most of these sources are likely IMBHs with masses that are about ten thousand to a hundred thousand times that of the Sun. One crucial result of this research is that the fraction of galaxies containing growing black holes is smaller for less massive galaxies than for their more massive counterparts. A second team led by Igor Chilingarian of the Harvard-Smithsonian Center for Astrophysics (CfA) in Cambridge, Mass., found a separate, important sample of possible IMBHs in galaxies that are closer to us. In their sample, the most distant IMBH candidate is about 2.8 billion light years from Earth and about 90% of the IMBH candidates they discovered are no more than 1.3 billion light years away. With data from the Sloan Digital Sky Survey (SDSS), Chilingarian and his colleagues found galaxies with the optical light signature of growing black holes and then estimated their mass. They selected 305 galaxies with properties that suggested a black hole with a mass less than 300,000 times that of the Sun was lurking in the central regions of each of these galaxies. 
Only 18 members of this list had high-quality X-ray observations that would allow confirmation that the sources are black holes. Detections with Chandra and with XMM-Newton were obtained for ten sources, showing that about half of the 305 IMBH candidates are likely to be valid IMBHs. The masses for the ten sources detected with X-ray observations were determined to be between 40,000 and 300,000 times the mass of the Sun. “This is the largest sample of intermediate mass black holes ever found,” said Chilingarian. “This black hole bounty can be used to address one of the biggest mysteries in astrophysics.” IMBHs may be able to explain how the very biggest black holes, the supermassive ones, were able to form so quickly after the Big Bang. One leading explanation is that supermassive black holes grow over time from smaller black hole “seeds” containing about a hundred times the Sun’s mass. Some of these seeds should merge to form IMBHs. Another explanation is that they form very quickly from the collapse of a giant cloud of gas with a mass equal to hundreds of thousands of times that of the Sun. Mezcua and her team may be seeing evidence in favor of the direct collapse idea, because this theory predicts that the less massive galaxies in their sample should be less likely to contain IMBHs. “Our evidence is only circumstantial because it’s possible that the IMBHs are just as common in the smaller galaxies but they’re not consuming enough matter to be detected as X-ray sources,” says Mezcua’s co-author Francesca Civano of the CfA. Chilingarian’s team has a different conclusion. “We’re arguing that just the presence of intermediate mass black holes in the mass range we detected suggests that smaller black holes with masses of about a hundred Suns exist,” says Chilingarian’s co-author Ivan Yu. Katkov of Moscow State University in Russia. “These smaller black holes could be the seeds for the formation of supermassive black holes.” Another possibility is that both mechanisms actually occur. Both teams agree that much larger samples of black holes, drawn from future satellite missions, are needed to reach firm conclusions. The paper by Mar Mezcua and colleagues was published in the August issue of the Monthly Notices of the Royal Astronomical Society and is available online. The paper by Igor Chilingarian was recently accepted for publication in The Astrophysical Journal and is available online. NASA’s Marshall Space Flight Center in Huntsville, Alabama, manages the Chandra program for NASA’s Science Mission Directorate in Washington. The Smithsonian Astrophysical Observatory in Cambridge, Massachusetts, controls Chandra’s science and flight operations.
A Measurement Conversions and Units Project that Students Love! This project is a creative way for your students to display their understanding of standard and metric units of measurement in length, mass/weight, capacity, and volume. Upon choosing a theme, students will follow the step-by-step examples and guide to create a themed measurement unit book of their own. They will make creative connections, such as "a milliliter is the same as the tear of the Wicked Witch in The Wizard of Oz when she melted to the ground" or "a soccer referee's whistle is about two inches long". There are 18 units of measurement included in this project. Print and Go Math Enrichment Also included are examples, a writing template, a measurement conversion chart, and a rubric for this project. Click on the links below to check out more of my math enrichment resources: Geography Math Enrichment Projects, Measurement, Decimals, Fractions Back To School, Math Appreciation Enrichment Project Elapsed Time Enrichment Research Project Math Computation Comic Book Enrichment Project, All Operations Math Enrichment Project, Study of Mathematicians Percents and Personal Finance Math Enrichment Project Area and Perimeter Enrichment Math Project A World Without Math Passage, Math/ Language Arts Integration Build Your Own Computer Hands-On Math Measurement Project Multiples and Factors Math & Writing Enrichment Project Units of Measurement Math & Writing Enrichment Project Data, Tables and Graphs Enrichment Project Math Enrichment Project "BUNDLE" Math Enrichment for the Entire Year
ESA Science & Technology | 05-Jul-2005 What are Asteroids Like? Close-up photographs taken by spacecraft and ground-based radar studies of near-Earth objects show that nearly all asteroids are irregular in shape and heavily pockmarked by impact craters. Only a handful of the largest members, which are more than 300 km across, are spherical in shape. Astronomers believe that all asteroids are derived from around 640 'protoplanets' - each larger than Ceres. These were large enough to melt inside and allow heavy metals to sink to their centres. However, over billions of years, these protoplanets collided and broke up during numerous impacts. A large amount of material was lost, but the remnants form the main belt we see today. Studies of light reflected from their surfaces suggest that there are several types of asteroids. More than three quarters of them are very dark - blacker than coal. These C-type asteroids seem to be rich in carbon, but may also contain large amounts of water. S-type asteroids are a mixture of rock and metals such as nickel, iron and magnesium. This implies that they were once hot enough to melt - probably inside a larger parent asteroid or 'protoplanet' which has since been destroyed. Of the others, P- and D-types are reddish in colour, possibly due to 'primitive' organic compounds, while M-types seem to be entirely made of metal. Rock or Rubble Piles? The presence of a moon or a spacecraft, such as Galileo, allows scientists to determine the mass of an asteroid because of the gravitational effects of the primary asteroid on the orbit of its small neighbour. If both the mass and the size of the asteroid are known, researchers can work out its density (a worked example follows at the end of this page). The density then gives a clue to the asteroid's makeup - its composition and internal structure. These studies have led to the surprising discovery that some asteroids are real lightweights - only about 20 percent denser than water. Until recently, most asteroids were thought to be composed primarily of rock, which has a density about three times greater than water. It seems that these featherweight objects are either highly porous rubble piles of rock, or mostly made of water ice. If the asteroids are rubble piles, it tells us that they have undergone numerous, severe collisions over billions of years. If the objects are largely ice, covered with a dark coating, then these objects may be remnants of 'burned-out' comets.
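To make the density step concrete, here is a worked example with illustrative numbers of my own choosing (not measurements of any particular asteroid). For a roughly spherical body of radius r, the volume and density follow from:

\[
V = \frac{4}{3}\pi r^{3}, \qquad \rho = \frac{M}{V}
\]

So a hypothetical 1 km-wide asteroid (r = 500 m) has V ≈ 5.2 × 10^8 m^3; if tracking of a small moon gives a mass M ≈ 6.3 × 10^11 kg, then ρ ≈ 1200 kg/m^3 - only about 1.2 times the density of water, the kind of 'lightweight' figure described above.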
Drawing Atoms Worksheet Posted in Worksheet, by Kimberly R. Foreman When we talk about worksheets for labeling an atom, we have collected a particular variety of images to tell you more: a drawing atoms worksheet, a blank Bohr model worksheet, and a labeled parts-of-an-atom diagram are the three main things we will show you, based on the gallery title. Worksheet - molecular shapes: the shapes of molecules can be predicted from their structures by using the VSEPR (valence shell electron pair repulsion) model, which states that electron pairs around a central atom will assume a geometry that keeps them as far apart from each other as possible; this is illustrated by the drawings below. This drawing atoms worksheet is suitable for - grade. In this atoms worksheet, students use their periodic table to determine how to draw the atoms given in this graphic organizer. Read the passage in this science printable to learn about the structure of an atom. List of Drawing Atoms Worksheet Then, have students use a dictionary or other reference materials to define the parts of an atom. Students will also interpret and practice drawing models. Scientists use two types of diagrams to show the electron configuration for atoms; follow your teacher's directions to complete the diagrams. Example: sulfur - atomic number, atomic mass, protons, neutrons, electrons; calculate the missing information and then draw the diagram and structure for each element, and repeat for Mg. Inorganic chemistry worksheets: let's go over some of the main things you need to consider if you want to draw structures. Structures will be the principal way you communicate ideas about molecules in this course, so it is best to get some practice with them as soon as possible, including two atoms with multiple bonds. If you want to learn to draw at the atomic level, then you should consider downloading the atoms worksheet answer key. 1. Basic Atomic Structure Worksheet Answers Info An electron configuration worksheet with answers, from a free worksheets collection, plus a chemistry worksheet answer key for youngsters. You can probably tell by the end of the post whether this sort of worksheet is right for you. 2. 8 Qualified Drawing Atoms Normally, the carbon atom is joined with four other atoms, achieving a more stable structure. Every stick represents a chemical bond that is formed by the sharing of electrons between atoms: the covalent bond. This diagram represents a solid; the particles or atoms are arranged in a regular way, packed closely together. In the first shell, the two electrons are placed at the top and bottom of the shell. In every other shell, the first four electrons are placed at the four compass points (north, south, east, west) and then you repeat the process (i.e., the fifth, sixth, seventh, and eighth electrons are paired). 3. Free Build Atom Activity Hole Punch Glue The particles are used to determine each letter, and the value of each letter is determined by the atom: in the atomic symbol below, label each letter a, b, c, and d. Build an atom worksheet answers. 4.
Drawing Atoms Worksheet Answer Key Luxury Model What type of charge does a proton have? In this atomic structure worksheet, students are asked to recall all of the information found in an element square, sketch diagrams of atoms, calculate the number of neutrons and valence electrons in an atom, and create diagrams. This worksheet is intended for upper middle school. Worksheet: dot structures (name, block). Draw the dot structures of the following atoms and their respective ions: calcium, sodium, aluminum, barium, potassium, magnesium, cesium, and lithium (calcium ion, sodium ion, aluminum ion, barium ion, potassium ion, magnesium ion, cesium ion, lithium ion), plus fluorine, sulfur, oxygen, nitrogen, and chlorine. Chemistry: structure of the atom worksheet, set a (download); students can refer to the attached file. All educational material on the website has been prepared by teachers with many years of teaching experience in various schools. 5. Drawing Atoms Worksheet Answer Key Models Study material is available on our website. Fill in the blanks: ____ are atoms of the same element, which have the same number of protons but different mass numbers; ____ are atoms of different elements which have the same number of neutrons but different mass numbers; ____ are atoms having the same mass number but different atomic numbers; Co-__ is a radioisotope used in the treatment of ____. Write the structure for the molecule with a pair of electrons or a dash between each atom. Groups of atoms will usually have the less electronegative atom surrounded by atoms having greater electronegativity; never place a hydrogen atom in the center, since it can only form one bond. Calculate the average atomic mass of silver if __ out of __ atoms are one silver isotope and the rest are the other. Atomic structure: an atom is composed of protons, neutrons, and electrons. 6. Drawing Atoms Worksheet Answer Key Toxic Science Why were people so resistant to accepting the idea of atoms? Draw your interpretation of his model of atoms and compare the model to what you already know about atomic structure (i.e., what does his model lack?). In this worksheet, students draw the dot structure for each element, molecule, and compound, and answer the following: the questions in this printable exercise include defining the laws of conservation of mass and constant proportions, explaining the two types of ions, and telling between isotopes and isobars. Drawing atoms worksheet: the basics of drawing atoms have been available for over four centuries, and a young student should start with worksheets for the most part. Even if the student already knows how to draw at home, it is still good to learn the basic techniques of drawing atoms. 7. Drawing Atoms Worksheet Atom Drawing Draw six neutrons in the nucleus of the atom. Draw two electrons in the first energy level and label them with their charge. Draw three electrons in the second energy level. Atomic structure: elements are the building blocks of all matter on Earth, and elements are made up of atoms. An atom is the smallest particle of an element that still has the properties of the element. The atoms in a piece of calcium are all the same, as are the atoms in aluminum; the atoms of each element are identical, but calcium atoms are different from aluminum atoms. 8. Drawing Atoms Worksheets Model Chemistry
Drawing models of atoms and key; a game reviewing concepts of atoms; rules for counting atoms; counting atoms worksheet and key; counting atoms review and key; energy levels diagram and periodic table for orbital arrangement; electron arrangement practice and key. Worksheet: historical development of atoms. Topic: atomic structure, set a, historical atomic models. Objective: to test your knowledge of historical atomic models, draw and briefly describe each historical model of the atom. A good drawing atoms worksheet answer key can be found on many different sites, giving your child an appropriate outlet to express everything he wants without making mistakes or fearing anything. Draw three electrons in the second energy level and label them with their charge. Bohr model worksheet answers, from a drawing atoms worksheet answer key; gallery of drawing atoms worksheet answer key. Worksheet: atoms - building blocks. Color chart: sand, sugar, rust, gasoline, salt, water, vitamin C; chemical formulas of common compounds: hydrogen (H), carbon (C), oxygen (O), nitrogen (N), sodium (Na), chlorine (Cl); blue, yellow, red, black, white, green; aspirin, baking soda, ruby, emerald, caffeine. 9. Electron Shell Worksheet Electron Shell Diagram For atoms with __ valence electrons, it can go either way; for atoms with __ valence electrons, there is no change. On this page you can read or download valence electrons and dot structure worksheet answers; if you see anything interesting, use the search form at the bottom. 10. Labeled Parts Atom Diagram Atom Diagram Atom Practice drawing atoms. Drawing atoms rules: protons = atomic number; electrons = atomic number; neutrons = mass number - atomic number. The 1st level can hold up to __ electrons, the 2nd level up to __ electrons, and the 3rd level up to __ electrons. Draw an atom: hydrogen (H), helium (He), oxygen (O), boron (B), aluminum (Al). 11. Image Result Atom Structure Worksheet Middle School Free download: chemistry quiz activities to learn about atoms and the history of the discovery of atomic structure. Atomic structure worksheets for teaching chemistry in the classroom; some of the worksheets for this concept are dot structures and molecule geometries, structures practice, and chemical bonds. Atomic structure refers to the structure of an atom comprising a nucleus at the centre, in which the protons (positively charged) and neutrons (neutral) are present, while the negatively charged particles called electrons revolve around the centre of the nucleus. The history of atomic structure and quantum mechanics dates back to the times of Democritus, the man who first proposed that matter is composed of atoms.
Atoms last a long time, in most cases forever. They can change and undergo chemical reactions, sharing electrons with other atoms, but the nucleus is very hard to split, meaning most atoms are around for a long time. Structure of the atom: at the center of the atom is the nucleus, which is made up of the protons and neutrons. 12. Drawing Atoms Worksheet Answer Key Fresh Model Continue with more related ideas, such as electrons-in-atoms worksheet answers, a drawing atoms worksheet, and an atomic structure model worksheet. We hope this which-atom-is-which worksheet image gallery can be useful for you, offer more references, and help you have a great day. 13. Model Dot Diagram Worksheet Answers Get out your pencil (and eraser), because we are about to learn how to draw atomic orbitals. Two rules: the ____ of atomic orbitals and the ____ of conjugated atoms. You need to know what type of pi-electron contribution each type of non-bonding orbital will have. Draw two electrons in the first energy level and label them with their charge. 14. Model Worksheet Free Worksheets Library Download Continue with more related material, such as a fingerprint writing template, a fingerprint activity worksheet for kids, and whorl fingerprints. We hope these fingerprint detective worksheet pictures can be a guide for you. Fingerprinting and paternity worksheet answer key; study biodiversity of species. Have students complete the activity worksheet: when students actively participate in a lesson, they are more likely to learn it, and it increases rigor. Showing top worksheets in the category: fingerprints. 15. Model Worksheet Middle School Drawing Atoms Worksheet In this atomic structure worksheet, students are asked to recall all of the information found in an element square, sketch diagrams of atoms, and calculate the number of neutrons and valence electrons. Check out the links available on our site and use them as a reference to score better grades in the exam. 16. Para Blank Mol Worksheet Reading lab equipment student worksheet: water supplies on board the space station or a spacecraft must be tested frequently to make sure that they are safe for human use, and measuring the proper amount of liquid is part of the testing process. Procedure: report to stations and measure the amount of liquid in both the graduated cylinder and the beaker. A lab safety worksheet answer key is a tool that can be used to capture and keep safety information; these are easy-to-use forms and lists, and some people prefer them because they can be used for a number of purposes. 17. Parts Atom Worksheet Worksheets Pound Elements This can be used as an assessment to wrap up a mini-lesson, or as an introduction to a chemistry unit (atom, element, molecule). Atoms cannot be created, destroyed, or divided into smaller particles. All atoms of the same element are identical in mass and size, but they are different in mass and size from the atoms of other elements. Compounds are created when atoms of different elements link together. Interactive worksheets help your child understand atoms and elements in science. Atoms and elements lesson plans and teaching resources, from science elements and atoms worksheets to atoms, elements, and compounds videos - quickly find educational resources. A comprehensive set of atoms, elements and compounds worksheets. 18. Printable Blank Atom Diagram Automotive Wiring Diagram Com.
Another aspect of this practice workbook is to review the definitions; the definition of each word in the workbook is given in parentheses and a space. Worksheet: atomic structure. Use your notes from the atomic structure program to answer the following questions. The atomic number tells the number of positively charged ____ in the nucleus of an atom; the atom is ____ because this is also the number of ____ charged ____ in the atom. The mass number tells the total number of ____ in the nucleus of an atom. Atomic structure chapter worksheet answers. 19. Drawing Atoms Worksheet Answer Key Inspirational Atomic structure and chemical nomenclature, from atomic structure worksheet answers. Atomic structure answer key: displaying top worksheets found for this concept, including protons, neutrons, and electrons practice worksheet answer key; structure of matter worksheet answers key; atomic structure worksheet answers; atomic structure review worksheet answers; and basic atomic structure worksheet key. Chemistry atomic structure worksheet answer key: watch the video to see the atomic structure, and use the periodic table in this downloadable worksheet - either print it out or work online. The symbol for a specific isotope of any element is written by placing the mass number as a superscript to the left of the element symbol. The atomic number tells you the number of ____ in one atom of an element; it also tells you the number of ____ in a neutral atom of that element. 20. Drawing Atoms Worksheet Answer Key Famous Select one or more questions using the above. Reading unit: atomic structure, atoms and ions. Atomic structure worksheet. Define mass number: the mass number is an integer (whole number) equal to the sum of the number of protons and neutrons in an atomic nucleus. Define atomic number: the number of protons in the nucleus of an atom. The three subatomic particles of the atom; atomic structure worksheet with answers, free worksheets library; structure of the atom worksheet. 21. Answers Drawing Atoms Worksheet Atomic Structure When you add two or more atoms together, it is called a ____. Drawing the molecule: look up the ____ values for each element in your structure. The least electronegative atom represents the central atom; hydrogen is the only exception to this, since it forms only one bond. 22. Basic Atomic Structure Worksheet Answers Drawing Atoms There are many benefits to this kind of worksheet. An atomic structure worksheet helps students with their difficulties in understanding the structure of the atom; it is very helpful in guiding a student in the related field, as it helps the student with the definitions. Properties of atoms and the periodic table worksheet answers; atomic structure review answers; model of an atom; chapter on atoms and bonding, section summary. 23. Atomic Models Worksheet Answers Fresh This is a simple means to acquire guidance online. Draw bonds between the atoms: carbon is indicated to be the central atom, which means all the other atoms are bound to carbon. To fill its valence shell, carbon can make __ bonds; each of the hydrogen atoms can make one bond, and oxygen can make two bonds. 24. Atomic Models Worksheet Answers Worksheet Resume Remember that carbon, oxygen and nitrogen can form multiple bonds (double and triple bonds).
Atomic structure worksheet. Label the parts of an atom on the diagram below. What type of charge does a proton have? What type of charge does a neutron have? 25. Atomic Models Worksheet Model Drawing Oxygen What type of charge does an electron have? Which two subatomic particles are located in the nucleus of an atom? Chemistry worksheet (name), Newton South High School, Newton, MA. Draw the dot structures of the following atoms and their respective ions: calcium, calcium ion; fluorine, fluoride; sodium, sodium ion; sulfur, sulfide. Write the empirical formula and draw dot structures for these ionic compounds. 26. Atoms Worksheet Addition Drawing Atoms Worksheet Learn the basic structure of an atom with this introductory page, complete with a fun experiment students can try at home. In this atomic structure worksheet, students are asked to recall all of the information found in an element square, sketch diagrams of atoms, calculate the number of neutrons and valence electrons in an atom, and create diagrams; this worksheet is intended for upper middle school. Optical isomers are two compounds which contain the same number and kinds of atoms and bonds (i.e., the connectivity between atoms is the same) but different spatial arrangements of the atoms, with mirror images; each mirror-image structure is called an enantiomer. Accepted scientific theory of atoms: all substances are made of atoms; atoms are small particles that cannot be created or destroyed; atoms of the same element are exactly alike; atoms join with other atoms to make new substances. Amu stands for atomic mass unit, the unit used to measure the mass of protons and neutrons. 27. Atoms Worksheet Green Science Define, neatly and clearly, the following atomic structure related terms: nucleus, neutron, proton, electron, nucleons, atomic number, mass number. Displaying top worksheets found for atomic mass unit; some of the worksheets for this concept are atomic structure chapter worksheet answers, atomic structure worksheet with answers, honors chemistry atomic worksheets, and atomic mass and atomic number answers. 28. Atoms Worksheet Middle School Unique Drawing Atoms We have all sorts of printable middle school worksheets that your students will love to learn from - great to help prepare your students for the first year of senior high. Each middle school worksheet focuses on a particular engaging subject. 29. Blank Model Worksheet Blank Fill In In the ____ model, electrons are not collected together in the center of the atom. Before his experiment, ____ expected the particles to deflect to the sides of the gold foil (false). Basic atomic structure worksheet answers: key benefits of this worksheet. 30. Drawing Atoms Worksheet Answer Key Drawing Atoms Worksheet Choose from atomic theory resources. In the atomic theory published in ____: atoms are tiny particles of matter; atoms of an element are similar to one another and different from those of other elements; atoms of two or more different elements combine to form compounds; a given compound always has the same relative numbers and types of atoms, and atoms are rearranged to form new compounds. Atomic structure and the periodic table chapter worksheet, part a: history of atomic theory cut-and-paste activity, with answer key.
31. Chemical Formulas Compound Drawings Counting Atoms Practice reading chemical formulas and counting atoms with this chemical formulae worksheet. To count the atoms in a molecule with a coefficient, multiply the coefficient by each subscript in the chemical formula (a short worked example follows at the end of this post). 32. Collection Atomic Structure Review Worksheet Fill Once you have found your answers, draw a model representing your atom, placing the electrons, protons, and neutrons where they go in the diagram. Sodium: Z = __, most abundant isotope A = __; protons, electrons, neutrons. Bohr model diagrams and dot structures. 33. Counting Atoms Worksheet Counting Atoms Worksheet Examples: for H2O, count the number of atoms and the number of molecules, noting subscript and coefficient. Middle school test prep: using the mole to count atoms worksheet. A computer chip contains __ x 10^__ atoms of silicon; what is its mass? Use the mole to count atoms. 34. Counting Atoms Worksheet Editable Counting Atoms The unit includes: lesson on matter - introduction to matter, properties of matter, exploration of mass, volume, weight, and density; lesson on states of matter - introduction to the states of matter and kinetic theory; lesson on changes of state - assignment to explore how matter changes state. 35. Diagramming Atoms Constructing Models The majority of an atom is empty space; the nucleus contains the majority of an atom's mass and is incredibly small. Atomic structure practice problems worksheet answers: knowing the number of protons in the atom of a neutral element enables you to determine the number of what? 36. Drawing Atoms Worksheet Answer Key Checklist for evaluating your experimental design. Title: does the title clearly identify both the independent and dependent variable? Have you used the words "the effect of ... on ..."? Hypothesis: does the hypothesis clearly state how you think changing the independent variable will affect the outcome? 37. Drawing Atoms Worksheet Answer Key Atom Board Making Finally, molecules with octahedral geometry will have __ molecular orbitals; this hybridization is called ____. Shown below is a portion of the chart from the worksheet. 1) All matter is composed of extremely small particles called atoms. 2) All atoms of a given element are identical, having the same size, mass, and chemical properties. 38. Printable Blank Worksheet Template Some of the worksheets displayed are: measuring beakers; sustainability teaching unit; reading graduated cylinders and beakers; the water cycle; graduated cylinders (name, answers); law of partial pressures; graduated cylinders. This is a document that has been designed to help you learn to draw at the atomic level - an excellent resource for anyone who wants to do so. In many compounds, atoms will share electrons to enable their valence shell to become like that of the nearest noble gas. This is normally eight electrons (the octet rule), apart from hydrogen; there are exceptions (see next section). Draw diagrams (outer electrons only) to show the bonding in the following covalent molecules. If you want (or need) to draw a model of an atom, we'll show you how. Protons, neutrons, and electrons practice worksheet: this type of worksheet discusses basic atomic structure.
Best images of label-an-atom worksheets and drawing atoms: in this atomic structure worksheet, students are asked to recall all of the information found in an element square, sketch diagrams of atoms, and complete the related calculations.
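As a quick worked example of the counting rule mentioned in item 31 above (the formula 3H2O is my own illustration, not one taken from the worksheets): the coefficient 3 multiplies each subscript, so

\[
3\,\mathrm{H_2O}:\qquad \text{hydrogen} = 3 \times 2 = 6 \ \text{atoms}, \qquad \text{oxygen} = 3 \times 1 = 3 \ \text{atoms}.
\]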
Everything You Need in One Place Homework problems? Exam preparation? Trying to grasp a concept or just brushing up the basics? Our extensive help & practice library has got you covered. Learn and Practice With Ease Our proven video lessons ease you through problems quickly, and you get tonnes of friendly practice on questions that trip students up on tests and finals. Instant and Unlimited Help Our personalized learning platform enables you to instantly find the exact walkthrough to your specific type of question. Activate unlimited help now! Make math click 🤔 and get better grades! 💯Join for Free What is a whole number You've encountered lots of whole numbers before now. Whole numbers are numbers that aren't fractions—they are integers. For example, 2, 12, and 50 would all be whole numbers. On the other hand, numbers that aren't whole numbers would look something like 1.25 or 4/5. Although a fraction is a rational number, it is not a whole number. Knowing the difference will be important in this lesson. How to divide a whole number by a fraction When you come across dividing a fraction by a whole number, the steps to doing this are pretty simple. First, multiply the bottom number of the fraction part of the question with the whole number. So for example, if we've got 1/2 ÷ 2, take the 2 from the bottom of 1/2 and multiply it with the 2 that's on the right. We're actually changing 2 into 1/2 in order to move it to the bottom, so the sign becomes a multiplication sign instead of a division one: 1/2 × 1/2. This will give you 1/4. Secondly, simplify the answer if needed. In this case, 1/4 is already the most simplified form of the answer, so there's no need to further mess with it. Your final answer will be 1/4. These two steps will help you solve any questions involving dividing fractions with whole numbers. Let's put what we just learned to use. We'll even look at a number line to clearly understand what we're doing when we divide whole numbers by a fraction. Use number lines to find the following quotient: Let's first start by creating a number line with 0 on one end and 1 on the other. Note that 3/3 is also equal to 1. The question asks us to divide 1/3 by 4. So we'll take a closer look at the number line, zooming in specifically on the area between 0 and 1/3. Let's divide this section into 4 sections. The shaded part is what the question is asking for. To look for the exact number, here is the trick: look at the number line from 0 to 1; there are three 1/3 parts. We divide each of them by 4, giving us 12 smaller parts in total. We are only looking for 1 part out of the 12 smaller parts (a quarter of 1/3). So the final answer is 1/12. It takes 3/5 cup of sugar to make four cupcakes. How much sugar is needed for three cupcakes? First, look for the sugar-to-cupcake ratio. We need 3/5 cup of sugar for every 4 cupcakes. If we divide 3/5 by 4, we'll find out how much sugar is needed per cupcake. We want to know how much sugar is needed for three cupcakes. So simply take the answer we got from above via fraction division, which tells us how much sugar is needed per cupcake, and multiply it by 3: 3/20 cup of sugar per cupcake × 3 = 9/20 cup. There's your final answer! If you're ever unsure about your answer in questions involving dividing a whole number by a fraction, use this calculator to help you double-check your work. To review concepts that will help you solidify your understanding of this lesson, take a look at how to determine common factors and how to multiply fractions and whole numbers.
You'll have to take these concepts with you when you eventually learn how to solve two step linear equations. In this lesson, we will learn: - Fractions Divided by Whole Numbers - Word Problems: Application of Dividing Fractions With Whole Numbers - Whole Numbers Divided by Fractions - Dividing Fractions With Whole Numbers Involving Negatives - Division Between Multiple Fractions and Whole Numbers - The division of a fraction is equivalent to the multiplication of the reciprocal of the fraction. - Reciprocal of a fraction: swap the numerator with the denominator.
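As a compact illustration of the reciprocal rule in the summary above, covering both a whole number divided by a fraction and a case involving a negative (the numbers are chosen purely for illustration):

\[
3 \div \tfrac{3}{5} = 3 \times \tfrac{5}{3} = 5, \qquad -\tfrac{1}{2} \div 4 = -\tfrac{1}{2} \times \tfrac{1}{4} = -\tfrac{1}{8}
\]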
based on its 1612 charter, Virginia claimed the Ohio River Valley and westward to the Pacific Ocean in the 1750s Source: Library of Congress, America Septentrionalis a Domino d'Anville in Galliis edita nunc in Anglia (by Jean Baptiste Bourguignon Anville, 1756) Virginia's edges were defined initially in charters issued by the King of England as grants of land to private investors. The history of colonial land grants is confusing, but essential to understanding the location of those boundaries. As John Smith noted about 400 years ago:1 The extended disputes over colonial boundary lines were driven by the primary motivation of colonial investors to get rich. There were nationalist, religious, and other motivations as well, but the potential to acquire land at a low price and sell it at a profit was the key factor in defining how the edges of Virginia were finally drawn. Queen Elizabeth I, and then King James I, could make awards of land and settlement rights by royal charter. Competing claims by Spain, and by the "naturals" who already lived on the land, could be ignored. European mapmakers identified claims to territory by Spain and England, but ignored potential Native American claims Source: University of North Carolina, Virginia et Florida (by Gerhard Mercator, 1610) The three ships that brought 104 colonists to Jamestown in 1607 (the Susan Constant, Godspeed, and Discovery) were financed by and filled with people who sought economic advantage. The desire for freedom of religion or increased individual liberties was not the driving factor in the initial settlement of Virginia in 1607. Instead, the goal of the initial colonists and their backers in London was to increase personal wealth. The investors who financed the project to colonize Virginia were venture capitalists, "adventuring" or risking their wealth in the hope of getting even richer. The colonists who sailed from England were also seeking to increase their personal wealth, seeing opportunity in Virginia just like the founders of the company. In their charters, Queen Elizabeth I and King James I were careful to reserve the rights to 1/5th of the gold/silver, just in case the English ran into the same wealth discovered by the Spaniards in Mexico and Peru. The investors ultimately incorporated as a joint stock company, the Virginia Company, with a coalition of capitalists based in London and Plymouth. The London-based investors focused on settling the Chesapeake Bay region. The capitalists based in Plymouth, who were more familiar with the fishing grounds off Newfoundland, focused on settling lands further north. Both the London and the Plymouth companies sent expeditions to settle in North America. The First Virginia Charter issued by James I in 1606 gave the London Company the right to:2 That was a generous grant; the area between 34 and 41 degrees latitude stretches from present-day South Carolina to New York City. James I was giving away rights to a vast swath of territory to which he had no legitimate claim, unless it was occupied by the English. The Right of Discovery would be invoked later by various colonial officials and Chief Justice John Marshall to legalize the elimination of Native American title to the land, but in 1606 the key to competing with Spanish, French, and Dutch rivals in North America was to be first to occupy and defend a slice of land.
That 1606 charter created a potential overlap between the claims of the London Company ("Firste Colonie") and the rights of the Plymouth Company ("Seconde Colonie") to settle "betweene eighte and thirtie degrees and five and fortie degrees of the saide latitude." Each company's first settlement in Virginia was guaranteed exclusive control over territory within 50 miles to the north and south of their settlement: In addition, the company was granted rights for 100 miles inland from the first settlement. The total grant covered a 100-by-100-mile square - 10,000 square miles, or 6.4 million acres of territory - plus islands within 100 miles of the shore. The Plymouth Company sent its first ship to the New World in August 1606, ahead of the London Company's 3-ship expedition that sailed from London in December 1606. However, the Plymouth Company's scouting ship, the Richard, was captured by the Spanish off the coast of Florida. A second expedition from the Plymouth Company sailed in 1607. That effort founded the Popham (or Sagadahoc) colony, at the mouth of what is now the Kennebec River in Maine. The Plymouth Company's colony survived a winter in what we now call Maine. In 1608 a resupply ship brought word that the leader in the colony, Rawleigh Gilbert, had come into an inheritance; he was now a rich man back in England. the Popham colony feared potential attack by the Spanish, so Fort George was constructed at the mouth of the Sagadahoc (Kennebec) River Source: Alexander Brown (ed.), The Genesis of the United States (facing p.190) Gilbert and all the colonists sailed home right away - some in the first English ship constructed in the New World, the Virginia. The Plymouth Company charter was forfeited and the company faded into history, but new colonists obtained charters and arrived in Massachusetts starting in 1620. Those charters defined the 40th degree of latitude as the southern boundary of the Massachusetts colony, overlapping the grant of land by King James I to Virginia in 1612.3 After the Popham colony was abandoned and the Plymouth Company failed, references to the "Virginia Company" are typically references to the surviving half - the London Company, with its settlement at Jamestown. When James I issued two additional charters to the Virginia Company in 1609 and 1612, he extended only the rights of the London Company in North America. The private corporation survived until 1624, when the king assumed control over the then-bankrupt London Company. After John Smith had determined the extent of the Chesapeake Bay, King James I adjusted the Virginia Company's grant when he issued a Second Charter in 1609. While the 1606 First Charter had limited the London Company's rights to just the land within a 100-by-100 mile square (plus islands within 100 miles offshore from the initial settlement), the 1609 Second Charter granted rights to all lands 200 miles north and 200 miles south of the James River. More significantly, that Second Charter gave the private investors a massive amount of land stretching all the way across North America from Jamestown to the Pacific Ocean:4 Point Comfort, as displayed on the map produced by Captain John Smith; "Powhatan flu" is now the James River (NOTE: map is oriented with west at the top, not north - so the Chesapeake Bay extends to the right) Source: Library of Congress Cape or Point Comfort is the southern tip of the city of Hampton, at the site of Fort Monroe now. It is the entrance to Hampton Roads, where the James River flows into the Chesapeake Bay.
Point Comfort was named by Captain John Smith in 1608, because it was "comforting" for sailors to see the mainland after entering the Chesapeake after an ocean crossing. Known today as "Old" Point Comfort, it is slightly south of "New" Point Comfort at the eastern edge of Mathews County. the First Charter in 1606 defined overlapping boundaries where settlement was authorized for both the London and Plymouth companies Source: William E. Peters, Ohio Lands and Their Subdivision (p.104) The Third Charter was issued on March 12, 1612 - or 1611, if dated by the Old Style calendar. Until 1752, the new year in England started not on January 1 but on March 25. The date of the Third Charter was in 1611 by the Old Style Calendar and in 1612 under the New Style calendar - so March 12, 1611/12 refers to 1612 in today's calendar. The Third Charter gave the colony a claim to all lands between 34-41 degrees, and expanded Virginia's colonial boundaries further into the Atlantic Ocean beyond the 100 miles authorized in the First and Second charters. The Third Charter gave the islands offshore to "The Treasorer and Planters of the Cittie of London for the First Colonie in Virginia," stating: 1633 map showing Bermuda, off the coast of North America Source: Library of Congress, Pascoal Roiz, A portolan chart of the Atlantic Ocean and adjacent Continents Why did the Virginia Company investors obtain the territorial expansion by the king in 1612? The leaders of the Third Supply fleet, sailing to the colony in 1609, wrecked on Bermuda. They spent the winter of 1609-10 on the island, and it provided a surplus of food - in clear contrast to the starvation at Jamestown during that same winter. The flagship vessel of the nine ships in the Third Supply fleet was the Sea Venture. It was separated from the other eight ships in a hurricane, and came close to sinking. The vessel was sailed onto the reef at Bermuda, and everyone escaped onto the dry land. The shipwrecked Englishmen spent ten months in 1609-10 salvaging the materials from the Sea Venture and building two new vessels in Bermuda, the Patience and the Deliverance. The unplanned stay in Bermuda tested the authority of the colonial officials on their way to governing the Virginia colony in Jamestown. Some sailors considered their obligations to have been completed once the trip ended in Bermuda. One of Governor Gates' clerks, thought to be Stephen Hopkins, claimed that the governor's authority was valid only in Virginia and not on Bermuda. While most of those shipwrecked were busy building two smaller ships from the remains of the wrecked flagship, the Sea Venture, some rebelled. In the end, one rebel was executed, but Hopkins survived. The Patience and Deliverance both reached Jamestown in 1610, just before Lord de la Ware brought another relief fleet with essential food and supplies. Shakespeare may have incorporated stories about the Sea Venture shipwreck into his play "The Tempest," after Patience sailed back to England. modern map showing Bermuda Source: Library of Congress, Atlantic hurricane tracking chart/NOAA Bermuda is roughly 600 miles offshore from North Carolina. That put it outside the 100-mile limit of islands to be included in Virginia, according to the first two charters. The 1612 Third Charter extended the colonial boundary to include islands up to 300 leagues offshore. As a result of the modification, after 1612 Virginia extended up to 1,000 miles eastward in the Atlantic Ocean. 
The Bermuda colonization was very successful, but the size of the island limited the potential profits from either agriculture or selling land. The Virginia Company venture capitalists in London "spun off" their investment. They arranged for James I to issue a separate charter for the island in 1615 and sold the rights to Bermuda to those investors who were most interested, splitting the island from the colony of Virginia. Those capitalists in England who "adventured" their funds in the Virginia colony received little return on their investment. The Virginia Company changed its approach in 1618, issuing a "Great Charter" that ended martial rule and established a representative assembly. That charter was issued by the company rather than by the king, and had no effect on the boundaries of the colony. King James I failed to renew the Virginia Company's charter in 1624, and made Virginia a royal rather than a proprietary (private) colony. By canceling the corporate charter, King James made stock in the Virginia Company worthless, the equivalent of declaring the company to be bankrupt. The venture capitalists who bought stock in Virginia did not make a profit on their investment, even after being given a massive amount of free real estate. When later kings chose to create new proprietary colonies in Maryland and Carolina to reward new friends, their grants of land reduced the boundaries of Virginia. The shrunken boundaries diminished the ability of Virginia officials to sell rights to vast amounts of land. the beach is in Virginia (at Leesylvania State Park), but the boardwalk is in Maryland - thanks to a 1632 charter The Stuart kings emphasized their power and marginalized the role of others, including Parliament (stimulating the English Civil War in 1642). After changing the boundaries of the Virginia colony, the kings did not compensate their subjects in Virginia or investors in England - and certainly did not compensate the Native American inhabitants. Changes in the boundaries of the Virginia colony after 1612 did cause angst in Jamestown, when the king chartered new colonies within the area defined as Virginia in the Third Charter. The Virginia colonial officials lost authority to grant property deeds ("patents") to northern lands in New England (in 1620, with control over land north of the 40th parallel) and in what became Maryland (in 1632, north of the Potomac River). land claims based on the 1620 charter to Massachusetts conflicted with the claims based on the 1612 charter to Virginia Source: Bureau of Land Management, A History of the Rectangular Survey System: Volume 2 (Figure 1) Lands to the south became part of a separate Carolina colony. A 1629 charter to an ally of Charles I, Sir Robert Heath, included lands between 31-36 degrees of latitude. That grant was never implemented due to the English Civil War. When Charles II granted the same land to eight Lords Proprietors in 1663, Virginia's southern border was once again defined at 36 degrees of latitude. In 1665, the border was moved north a half-degree to 36 degrees, 30 minutes, giving the Carolina proprietors full control over the navigable parts of Albemarle Sound - plus land along the shoreline, whose settlers wanted to ship tobacco/lumber to England without paying export taxes to the Virginia colony.
In 1705, Robert Beverley described the extent of Virginia with specific limits on north, east, and south, but with the western edge extending all the way to the Pacific Ocean:6 Virginia's claim to land stretching all the way across the continent to "the Californian Sea" ended in 1763. At the end of the French and Indian War (known as the Seven Years War in Europe), negotiators in Paris determined a new boundary for the western edge of Virginia. France regained control over the Caribbean sugar islands that the British had captured (Martinique, Guadeloupe, and St. Lucia). In return, France abandoned nearly all of its land claims in North America and transferred control over the Louisiana Territory and New Orleans to Spain. The 1763 Treaty of Paris established the middle of the Mississippi River as the new line defining the western boundary of Virginia.7 on the 1755 Mitchell map, lands west of the Mississippi River between the 36° 30' line of latitude on the south and the 40° line on the north were identified as part of Virginia Source: Library of Congress, John Mitchell, A map of the British and French dominions in North America, with the roads, distances, limits, and extent of the settlements after the 1763 Treaty of Paris ended the French and Indian War, English officials recognized that Spain controlled the Louisiana Territory and Virginia no longer extended west of the Mississippi River Source: Library of Congress, John Mitchell, A map of the British and French dominions in North America, with the roads, distances, limits, and extent of the settlements The claim to political authority over the lands defined in the charters is still part of the Code of Virginia, along with the official release of the Virginia claim to some or all of Maryland, Pennsylvania, North and South Carolina. Title 1, Section 1-301. "Extent of territory of the Commonwealth after the Constitution of 1776" says:8 westward extension of land claims by English colonies, 1755 (assumes Virginia's northern boundary was limited at 40th degree of latitude by 1620 charter to New England colony) Source: Library of Congress, A Map of the British and French settlements in North America During the Civil War, Virginia was carved up and 1/3 of its land area used to create the new state of West Virginia in 1863. The removal of the western counties was a significant alteration of the boundaries, but an even more dramatic change had been proposed by Secretary of War Simon Cameron in 1861. 
Cameron suggested creating a buffer of "safe" territory around Washington, DC, by transferring all of Virginia east of the Blue Ridge (except for the Eastern Shore) to Maryland, and realigning the boundaries of Maryland and Delaware based on natural boundaries:9

since West Virginia was established, Virginia shares boundaries with five other states
Source: Virginia: a geographical and political summary (published in 1876)

Cameron's plan would have made Virginia an inland state with no coastal waterfront, and reduced the potential of enemy forces controlling the heights of Arlington, by:
- adding the western edge of Maryland to Virginia, using the Blue Ridge to define the new state boundary
- transferring all territory east of the Chesapeake Bay to Delaware, using the Chesapeake Bay as a natural border on the east
- returning Virginia's Alexandria County to the District of Columbia, undoing the retrocession of 1846

Secretary of War Simon Cameron proposed shifting the Eastern Shore to Delaware, transferring the Piedmont/Coastal Plain regions of Virginia to Maryland
Source: Harpers Weekly (digitized by "Son of the South"), Map Showing The New Boundaries Of Virginia, Maryland, And Delaware As Proposed By Secretary Cameron (December 21, 1861)

northern officials considered various ways of altering Virginia's boundaries to reduce the potential of hostile forces threatening Washington, DC, after the Civil War
Source: Harpers Weekly (digitized by "Son of the South"), Chief Cook Cameron Divides The Virginia Goose Between Maryland And Delaware (December 21, 1861)

the Virginia-North Carolina border was defined initially by the 1728 "dividing line" survey, and in 1749 Joshua Fry and Peter Jefferson extended it further west to Steep Rock Creek in the Blue Ridge
Source: Library of Congress, A map of the most inhabited part of Virginia containing the whole province of Maryland with part of Pensilvania, New Jersey and North Carolina

the Popham colony started by the Plymouth Company failed, leaving the Virginia Company with no competition in England regarding its northern boundary until Maryland was chartered in 1632
Source: The Southern States of America (published in 1909)

Virginia's claim to the Northwest Territory across the Ohio River was contested by Connecticut as well as other colonies, until all states ceded their claims to the Continental Congress
Source: William E. Peters, Ohio Lands and Their Subdivision (p.148)
Geochronology, field of scientific investigation concerned with determining the age and history of Earth’s rocks and rock assemblages. Such time determinations are made and the record of past geologic events is deciphered by studying the distribution and succession of rock strata, as well as the character of the fossil organisms preserved within the strata.

Earth’s surface is a complex mosaic of exposures of different rock types that are assembled in an astonishing array of geometries and sequences. Individual rocks in the myriad of rock outcroppings (or in some instances shallow subsurface occurrences) contain certain materials or mineralogic information that can provide insight as to their “age.” For years investigators determined the relative ages of sedimentary rock strata on the basis of their positions in an outcrop and their fossil content. According to a long-standing principle of the geosciences, that of superposition, the oldest layer within a sequence of strata is at the base and the layers are progressively younger with ascending order. The relative ages of the rock strata deduced in this manner can be corroborated and at times refined by the examination of the fossil forms present. The tracing and matching of the fossil content of separate rock outcrops (i.e., correlation) eventually enabled investigators to integrate rock sequences in many areas of the world and construct a relative geologic time scale.

Scientific knowledge of Earth’s geologic history has advanced significantly since the development of radiometric dating, a method of age determination based on the principle that radioactive atoms in geologic materials decay at constant, known rates to daughter atoms. Radiometric dating has provided not only a means of numerically quantifying geologic time but also a tool for determining the age of various rocks that predate the appearance of life-forms.

Early views and discoveries

Some estimates suggest that as much as 70 percent of all rocks outcropping from the Earth’s surface are sedimentary. Preserved in these rocks is the complex record of the many transgressions and regressions of the sea, as well as the fossil remains or other indications of now extinct organisms and the petrified sands and gravels of ancient beaches, sand dunes, and rivers.

Modern scientific understanding of the complicated story told by the rock record is rooted in the long history of observations and interpretations of natural phenomena extending back to the early Greek scholars. Xenophanes of Colophon (560?–478? BC), for one, saw no difficulty in describing the various seashells and images of life-forms embedded in rocks as the remains of long-deceased organisms. In the correct spirit but for the wrong reasons, Herodotus (5th century BC) felt that the small discoidal nummulitic petrifactions (actually the fossils of ancient lime-secreting marine protozoans) found in limestones outcropping at al-Jīzah, Egypt, were the preserved remains of discarded lentils left behind by the builders of the pyramids.

These early observations and interpretations represent the unstated origins of what was later to become a basic principle of uniformitarianism, the root of any attempt at linking the past (as preserved in the rock record) to the present. Loosely stated, the principle says that the various natural phenomena observed today must also have existed in the past (see below The emergence of modern geologic thought: Lyell’s promulgation of uniformitarianism).
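Radiometric dating, described above, rests on simple decay arithmetic. The following Python sketch is illustrative only: the rubidium-87 half-life is a standard textbook figure, but the measured daughter-to-parent ratio is a made-up value, and real dating methods involve corrections (initial daughter content, closed-system assumptions) that are omitted here.

```python
import math

def radiometric_age(daughter_parent_ratio, half_life_years):
    """Estimate a sample's age from its daughter/parent isotope ratio.

    For a single-path decay, N(t) = N0 * exp(-lam * t), and the accumulated
    daughter D equals N0 - N(t), so t = ln(1 + D/P) / lam,
    where lam = ln(2) / half-life.
    """
    lam = math.log(2) / half_life_years
    return math.log(1 + daughter_parent_ratio) / lam

# Rubidium-87 decays to strontium-87 with a half-life of about 48.8 billion
# years; the ratio below is an illustrative value, not a real measurement.
age = radiometric_age(daughter_parent_ratio=0.01, half_life_years=4.88e10)
print(f"Estimated age: {age / 1e9:.2f} billion years")  # ~0.70 billion years
```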
Although quite varied opinions about the history and origins of life and of the Earth itself existed in the pre-Christian era, a divergence between Western and Eastern thought on the subject of natural history became more pronounced as a result of the extension of Christian dogma to the explanation of natural phenomena. Increasing constraints were placed upon the interpretation of nature in view of the teachings of the Bible. This required that the Earth be conceived of as a static, unchanging body, with a history that began in the not too distant past, perhaps as little as 6,000 years earlier, and an end, according to the scriptures, that was in the not too distant future. This biblical history of the Earth left little room for interpreting the Earth as a dynamic, changing system. Past catastrophes, particularly those that may have been responsible for altering the Earth’s surface such as the great flood of Noah, were considered an artifact of the earliest formative history of the Earth. As such, they were considered unlikely to recur on what was thought to be an unchanging world.

With the exception of a few prescient individuals such as Roger Bacon (c. 1220–92) and Leonardo da Vinci (1452–1519), no one stepped forward to champion an enlightened view of the natural history of the Earth until the mid-17th century. Leonardo seems to have been among the first of the Renaissance scholars to “rediscover” the uniformitarian dogma through his observations of fossil marine organisms and sediments exposed in the hills of northern Italy. He recognized that the marine organisms now found as fossils in rocks exposed in the Tuscan Hills were simply ancient animals that lived in the region when it had been covered by the sea and were eventually buried by muds along the seafloor. He also recognized that the rivers of northern Italy, flowing south from the Alps and emptying into the sea, had done so for a very long time. In spite of this deductive approach to interpreting natural events and the possibility that they might be preserved and later observed as part of a rock outcropping, little or no attention was given to the history—namely, the sequence of events in their natural progression—that might be preserved in these same rocks.

The principle of superposition of rock strata

In 1669 the Danish-born natural scientist Nicolaus Steno published his noted treatise The Prodromus of Nicolaus Steno’s Dissertation Concerning a Solid Body Enclosed by Process of Nature Within a Solid, a seminal work that laid the essential framework for the science of geology by showing in very simple fashion that the layered rocks of Tuscany exhibit sequential change—that they contain a record of past events. Following from this observation, Steno concluded that the Tuscan rocks demonstrated superpositional relationships: rocks deposited first lie at the bottom of a sequence, while those deposited later are at the top. This is the crux of what is now known as the principle of superposition. Steno put forth still another idea—that layered rocks were likely to be deposited horizontally. Therefore, even though the strata of Tuscany were (and still are) displayed in anything but simple geometries, Steno’s elucidation of these fundamental principles relating to the formation of stratified rock made it possible to work out not only superpositional relationships within rock sequences but also the relative age of each layer.
With the publication of the Prodromus and the ensuing widespread dissemination of Steno’s ideas, other natural scientists of the latter part of the 17th and early 18th centuries applied them to their own work. The early English geologist John Strachey, for example, produced in 1725 what may well have been the first modern geologic maps of rock strata. He also described the succession of strata associated with coal-bearing sedimentary rocks in Somersetshire, the same region of England where he had mapped the rock exposures.

Classification of stratified rocks

In 1756 Johann Gottlob Lehmann of Germany reported on the succession of rocks in the southern part of his country and the Alps, measuring and describing their compositional and spatial variation. While making use of Steno’s principle of superposition, Lehmann recognized the existence of three distinct rock assemblages: (1) a successionally lowest category, the Primary (Urgebirge), composed mainly of crystalline rocks, (2) an intermediate category, or the Secondary (Flötzgebirge), composed of layered or stratified rocks containing fossils, and (3) a final or successionally youngest sequence of alluvial and related unconsolidated sediments (Angeschwemmtgebirge) thought to represent the most recent record of the Earth’s history.

This threefold classification scheme was successfully applied with minor alterations to studies in other areas of Europe by three of Lehmann’s contemporaries. In Italy, again in the Tuscan Hills in the vicinity of Florence, Giovanni Arduino, regarded by many as the father of Italian geology, proposed a four-component rock succession. His Primary and Secondary divisions are roughly similar to Lehmann’s Primary and Secondary categories. In addition, Arduino proposed another category, the Tertiary division, to account for poorly consolidated though stratified fossil-bearing rocks that were superpositionally older than the (overlying) alluvium but distinct and separate from the hard (underlying) stratified rocks of the Secondary.

In two separate publications, one that appeared in 1762 and the second in 1773, Georg Christian Füchsel also applied Lehmann’s earlier concepts of superposition to another sequence of stratified rocks in southern Germany. While using upwards of nine separate categories of sedimentary rocks, Füchsel essentially identified discrete rock bodies of unique composition, lateral extent, and position within a rock succession. (These rock bodies would constitute formations in modern terminology.) Nearly 1,000 kilometres (620 miles) to the east, the German naturalist Peter Simon Pallas was studying rock sequences exposed in the southern Urals of eastern Russia. His report of 1777 differentiated a threefold division of rock, essentially reiterating Lehmann’s work by extension.

Thus, by the latter part of the 18th century, the superpositional concept of rock strata had been firmly established through a number of independent investigations throughout Europe. Although Steno’s principles were being widely applied, there remained to be answered a number of fundamental questions relating to the temporal and lateral relationships that seemed to exist among these disparate European sites. Were these various German, Italian, and Russian sites at which Lehmann’s threefold rock succession was recognized contemporary? Did they record the same series of geologic events in the Earth’s past? Were the various layers at each site similar to those of other sites?
In short, was correlation among these various sites now possible?

The emergence of modern geologic thought

Inherent in many of the assumptions underlying the early attempts at interpreting natural phenomena in the latter part of the 18th century was the ongoing controversy between the biblical view of Earth processes and history and a more direct approach based on what could be observed and understood from various physical relationships demonstrable in nature. A substantial amount of information about the compositional character of many rock sequences was beginning to accumulate at this time. Abraham Gottlob Werner, a scholar of wide repute and following from the School of Mining in Freiberg, Germany, was very successful in reaching a compromise between what could be said to be scientific “observation” and biblical “fact.” Werner’s theory was that all rocks (including the sequences being identified in various parts of Europe at that time) and the Earth’s topography were the direct result of either of two processes: (1) deposition in the primeval ocean, represented by the Noachian flood (his two “Universal,” or Primary, rock series), or (2) sculpturing and deposition during the retreat of this ocean from the land (his two “Partial,” or disintegrated, rock series). Werner’s interpretation, which came to represent the so-called Neptunist conception of the Earth’s beginnings, found widespread and nearly universal acceptance owing in large part to its theological appeal and to Werner’s own personal charisma.

One result of Werner’s approach to rock classification was that each unique lithology in a succession implied its own unique time of formation during the Noachian flood and a universal distribution. As more and more comparisons were made of diverse rock outcroppings, it began to become apparent that Werner’s interpretation did not “universally” apply. Thus arose an increasingly vocal challenge to the Neptunist theory.

James Hutton’s recognition of the geologic cycle

In the late 1780s the Scottish scientist James Hutton launched an attack on much of the geologic dogma that had its basis in either Werner’s Neptunist approach or its corollary that the prevailing configuration of the Earth’s surface is largely the result of past catastrophic events which have no modern counterparts. Perhaps the quintessential spokesman for the application of the scientific method in solving problems presented in the complex world of natural history, Hutton took issue with the catastrophist and Neptunist approach to interpreting rock histories and instead used deductive reasoning to explain what he saw. By Hutton’s account, the Earth could not be viewed as a simple, static world not currently undergoing change.

Ample evidence from Hutton’s Scotland provided the key to unraveling the often thought but still rarely stated premise that events occurring today at the Earth’s surface—namely erosion, transportation and deposition of sediments, and volcanism—seem to have their counterparts preserved in the rocks. The rocks of the Scottish coast and the area around Edinburgh proved the catalyst for his argument that the Earth is indeed a dynamic, ever-changing system, subject to a sequence of recurrent cycles of erosion and deposition and of subsidence and uplift.
Hutton’s formulation of the principle of uniformitarianism, which holds that Earth processes occurring today had their counterparts in the ancient past, while not the first time that this general concept was articulated, was probably the most important geologic concept developed out of rational scientific thought of the 18th century. The publication of Hutton’s two-volume Theory of the Earth in 1795 firmly established him as one of the founders of modern geologic thought.

It was not easy for Hutton to popularize his ideas, however. The Theory of the Earth certainly did set the fundamental principles of geology on a firm basis, and several of Hutton’s colleagues, notably John Playfair with his Illustrations of the Huttonian Theory of the Earth (1802), attempted to counter the entrenched Wernerian influence of the time. Nonetheless, another 30 years were to pass before Neptunist and catastrophist views of Earth history were finally replaced by those grounded in a uniformitarian approach.

This gradual unseating of the Neptunist theory resulted from the accumulated evidence that increasingly called into question the applicability of Werner’s Universal and Partial formations in describing various rock successions. Clearly, not all assignable rock types would fit into Werner’s categories, either superpositionally in some local succession or as a unique occurrence at a given site. Also, it was becoming increasingly difficult to accept certain assertions of Werner that some rock types (e.g., basalt) are chemical precipitates from the primordial ocean. It was this latter observation that finally rendered the Neptunist theory unsustainable.

Hutton observed that basaltic rocks exposed in the Salisbury Craigs, just on the outskirts of Edinburgh, seemed to have baked adjacent enclosing sediments lying both below and above the basalt. This simple observation indicated that the basalt was emplaced within the sedimentary succession while it was still sufficiently hot to have altered the sedimentary material. Clearly, basalt could not form in this way as a precipitate from the primordial ocean as Werner had claimed. Furthermore, the observations at Edinburgh indicated that the basalt intruded the sediments from below—in short, it came from the Earth’s interior, a process in clear conflict with Neptunist theory.

While explaining that basalt may be intrusive, the Salisbury Craigs observations did not fully satisfy the argument that some basalts are not intrusive. Perhaps the Neptunist approach had some validity? The resolution of this latter problem occurred at an area of recent volcanism in the Auvergne area of central France. Here, numerous cinder cones and fresh lava flows composed of basalt provided ample evidence that this rock type is the solidified remnant of material ejected from the Earth’s interior, not a precipitate from the primordial ocean.

Lyell’s promulgation of uniformitarianism

Hutton’s words were not lost on the entire scientific community. Charles Lyell, another Scottish geologist, was a principal proponent of Hutton’s approach, emphasizing gradual change by means of known geologic processes. In his own observations on rock and faunal successions, Lyell was able to demonstrate the validity of Hutton’s doctrine of uniformitarianism and its importance as one of the fundamental philosophies of the geologic sciences.
Lyell, however, imposed some conditions on uniformitarianism that perhaps had not been intended by Hutton: he took a literal approach to interpreting the principle of uniformity in nature by assuming that all past events must have conformed to controls exerted by processes that behaved in the same manner as those processes behave today. No accommodation was made for past conditions that do not have modern counterparts. In short, volcanic eruptions, earthquakes, and other violent geologic events may indeed have occurred earlier in Earth history but no more frequently nor with greater intensity than today; accordingly, the surface features of the Earth are altered very gradually by a series of small changes rather than by occasional cataclysmic phenomena.

Lyell’s contribution enabled the doctrine of uniformitarianism to finally hold sway, even though it did impose for the time being a somewhat limiting condition on the uniformity principle. This, along with the increased recognition of the utility of fossils in interpreting rock successions, made it possible to begin addressing the question of the meaning of time in Earth history.

Determining the relationships of fossils with rock strata

The hypothesis of fossil succession in the work of Georges Cuvier

During this period of confrontation between the proponents of Neptunism and uniformitarianism, there emerged evidence resulting from a lengthy and detailed study of the fossiliferous strata of the Paris Basin that rock successions were not necessarily complete records of past geologic events. In fact, significant breaks frequently occur in the superpositional record. These breaks affect not only the lithologic character of the succession but also the character of the fossils found in the various strata.

An 1812 study by the French zoologist Georges Cuvier was prescient in its recognition that fossils do in fact record events in Earth history and serve as more than just “follies” of nature. Cuvier’s thesis, based on his analysis of the marine invertebrate and terrestrial vertebrate fauna of the Paris Basin, showed conclusively that many fossils, particularly those of terrestrial vertebrates, had no living counterparts. Indeed, they seemed to represent extinct forms, which, when viewed in the context of the succession of strata with which they were associated, constituted part of a record of biological succession punctuated by numerous extinctions. These, in turn, were followed by a seeming renewal of more advanced but related forms and were separated from each other by breaks in the associated rock record.

Many of these breaks were characterized by coarser, even conglomeratic strata following a break, suggesting “catastrophic” events that may have contributed to the extinction of the biota. Whatever the actual cause, Cuvier felt that the evidence provided by the record of faunal succession in the Paris Basin could be interpreted by invoking recurring catastrophic geologic events, which in turn contributed to recurring massive faunal extinction, followed at a later time by biological renewal.

William Smith’s work with faunal sequence

As Cuvier’s theory of faunal succession was being considered, William Smith, a civil engineer from the south of England, was also coming to realize that certain fossils can be found consistently associated with certain strata.
In the course of evaluating various natural rock outcroppings, quarries, canals, and mines during the early 1790s, Smith increasingly utilized the fossil content as well as the lithologic character of various rock strata to identify the successional position of different rocks, and he made use of this information to effect a correlation among various localities he had studied. The consistency of the relationships that Smith observed eventually led him to conclude that there is indeed faunal succession and that there appears to be a consistent progression of forms from more primitive to more advanced.

As a result of this observation, Smith was able to begin what was to amount to a monumental effort at synthesizing all that was then known of the rock successions outcropping throughout parts of Great Britain. This effort culminated in the publication of his “Geologic Map of England, Wales and Part of Scotland” (1815), a rigorous treatment of diverse geologic information resulting from a thorough understanding of geologic principles, including those of original horizontality, superposition (lithologic, or rock, succession), and faunal succession. With this, it now became possible to assume within a reasonable degree of certainty that correlation could be made between and among widely separated areas. It also became apparent that many sites that had previously been classified according to the then-traditional views of Arduino, Füchsel, and Lehmann did not conform to the new successional concepts of Smith.

Early attempts at mapping and correlation

The seminal work of Smith at clarifying various relationships in the interpretation of rock successions and their correlations elsewhere resulted in an intensive look at what the rock record and, in particular, what the fossil record had to say about past events in the long history of the Earth. A testimony to Smith’s efforts in producing one of the first large-scale geologic maps of a region is its essential accuracy in portraying what is now known to be the geologic succession for the particular area of Britain covered.

The application of the ideas of Lyell, Smith, Hutton, and others led to the recognition of lithologic and paleontologic successions of similar character from widely scattered areas. It also gave rise to the realization that many of these similar sequences could be correlated. The French biologist Jean-Baptiste de Monet, chevalier de Lamarck, in particular, was able to demonstrate the similarity of fauna from a number of Cuvier’s and Alexandre Brongniart’s collections of fossils from the Paris Basin with fossil fauna from the sub-Apennines of Italy and the London Basin. While based mainly on the collections of Cuvier and Brongniart, Lamarck’s observations provided much more insight into the real significance of using fossils strictly for correlation purposes.

Lamarck disagreed with Cuvier’s interpretation of the meaning of faunal extinction and regeneration in stratigraphic successions. Not convinced that catastrophes caused massive and widespread disruption of the biota, Lamarck preferred to think of organisms and their distribution in time and space as responding to the distribution of favourable habitats. If confronted with the need to adapt to abrupt changes in local habitat—Cuvier’s catastrophes—faunas must be able to change in order to survive. If not, they became extinct. Lamarck’s approach, much like that of Hutton, stressed the continuity of processes and the continuum of the stratigraphic record.
Moreover, his view that organisms respond to the conditions of their environment had important implications for the uniformitarian approach to interpreting Earth history.

Once it was recognized that many of the rocks of the Paris Basin, London Basin, and parts of the Apennines apparently belonged to the same sequence by virtue of the similarity of their fossil content, Arduino’s term Tertiary (proposed as part of his fourfold division of rock succession in the Tuscan Hills of Italy) began to be applied to all of these diverse locations. Further work by Lyell and Gérard-Paul Deshayes resulted in the term Tertiary being accepted as one of the fundamental divisions of geologic time.

The concepts of facies, stages, and zones

During the latter half of the 18th and early 19th centuries, most of the research on the distribution of rock strata and their fossil content treated lithologic boundaries as events in time representing limits to strata that contain unique lithology and perhaps a unique fossil fauna, all of which are the result of unique geologic processes acting over a relatively brief period of time. Hutton recognized early on, however, that some variations occur in the sediments and fossils of a given stratigraphic unit and that such variations might be related to differences in depositional environments. He noted that processes such as erosion in the mountains of Scotland, transportation of sand and gravels in streams flowing from these mountains, and the deposition of these sediments could all be observed to be occurring concurrently. At a given time then, these diverse processes were all taking place at separate locations. As a consequence, different environments produce different sedimentary products and may harbour different organisms. This aspect of differing lithologic type or environmental or biological condition came to be known as facies. (It was Steno who had, in 1669, first used the term facies in reference to the condition or character of the Earth’s surface at a particular time.)

The significance of the facies concept for the analysis of geologic history became fully apparent with the findings of the Swiss geologist Amanz Gressly. While conducting survey work in the Jura Mountains in 1838, Gressly observed that rocks from a given position in a local stratigraphic succession frequently changed character as he traced them laterally. He attributed this lateral variation to lateral changes in the depositional environments responsible for producing the strata in question. Having no term to apply to the observed changes, he adopted the word facies.

While Gressly employed the term specifically in the context of lithologic character, it is applied more broadly today. As now used, the facies concept has come to encompass other types of variation that may be encountered as one moves laterally (e.g., along outcroppings of rock strata exposed in stream valleys or mountain ridges) in a given rock succession. Lithologic facies, biological facies, and even environmental facies can be used to describe sequences of rocks of the same or different age having a particularly unique character.

Stages and zones

The extensive review of the marine invertebrate fauna of the Paris Basin by Deshayes and Lyell not only made possible the formalization of the term Tertiary but also had a more far-reaching effect.
The thousands of marine invertebrate fossils studied by Deshayes enabled Lyell to develop a number of subdivisions of the Tertiary of the Paris Basin based on the quantification of molluskan species count and duration. Lyell noted that of the various assemblages of marine mollusks found, those from rocks at the top of the succession contained a large number of species that were still extant in modern environments. Progressively older strata yielded fewer and fewer forms that had living counterparts, until at the base of the succession, a very small number of the total species present could be recognized as having modern counterparts. This fact allowed Lyell to consider subdividing the Tertiary of the Paris Basin into smaller increments, each of which could be defined according to some relative percentage of living species present in the strata. The subdivision resulted in the delineation of the Eocene, Miocene, and Pliocene epochs in 1833. Later this scheme was refined to further divide the Pliocene into an Early and a Late Pliocene.

Lyell’s biostratigraphically defined concept of sequence, firmly rooted in concepts of faunal succession and superposition, was developed on mixed but stratigraphically controlled collections of fossils. It worked, but it did not address the faunal composition of the various Paris Basin strata other than in gross intervals—intervals that were as much lithologically as paleontologically defined.

Alcide d’Orbigny, a French geologist, demonstrated correlational and superpositional uniqueness by utilizing paleontologically distinct intervals of strata defined solely on the basis of their fossil assemblages in his study of the French Jurassic, Terrains Jurassiques (1842). This departure from a lithologically based concept of paleontologic succession enabled d’Orbigny to define paleontologically unique stages. Each stage represented a unique period in time and formed the basis of later work that resulted in the further subdivision of d’Orbigny’s original stages into 10 distinct stage assemblages. In spite of the work of Smith and to a lesser extent Lyell and others, d’Orbigny’s approach was essentially that of a catastrophist. Stage boundaries were construed to represent unusual extrinsic geologic events, with significant implications for faunal continuity.

The applicability of d’Orbigny’s stages to areas outside of France had only limited success. At this point in the development of paleontology as a science, little was understood about the geologic time range of various fauna. Even less was known about the habitats—the environmental limits—of ancient fauna. Could certain groups of organisms have sufficiently widespread distribution in the rock record to enable correlations to be made with certainty?

The Jurassic of western Europe consisted mostly of shallow marine sediments widely deposited throughout the area. It is now known that some of the mollusks with which d’Orbigny worked were undergoing very rapid evolutionary change; they were thus relatively short-lived as distinct forms in the geologic record and had a wide-ranging environmental tolerance. The result was that some forms, notably of the group of mollusks called ammonite cephalopods, were distributed extensively within a variety of sedimentary facies. The correlating of strata based on the faunal stage approach was widely accepted. Interestingly, most of d’Orbigny’s Jurassic stages, with refinements, are still in use today.
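Lyell's percentage criterion, described at the start of this passage, lends itself to a tiny illustration. The Python sketch below is a toy classifier: the threshold values are loose stand-ins for the idea that progressively older strata hold progressively fewer living species, not Lyell's published figures.

```python
def classify_tertiary_epoch(percent_extant):
    """Assign a fossil assemblage to a Tertiary epoch by the share of its
    mollusk species still living today (illustrative thresholds only)."""
    if percent_extant >= 35:
        return "Pliocene"   # youngest strata: many species still extant
    elif percent_extant >= 15:
        return "Miocene"
    else:
        return "Eocene"     # oldest strata: few living counterparts

# Made-up assemblages from top, middle, and base of a succession.
for sample, pct in [("upper strata", 50), ("middle strata", 18), ("basal strata", 3)]:
    print(f"{sample}: {pct}% extant -> {classify_tertiary_epoch(pct)}")
```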
Only a short time after d’Orbigny’s original analysis of Jurassic strata, the German mineralogist and paleontologist Friedrich A. Quenstedt challenged (in 1856–58) the validity of using stages to effect correlations in cases where the actual geologic ranges and bed-by-bed distribution of individual component fossils of an assemblage were unknown. In retrospect, this seems blatantly obvious, but at the time the systematic stratigraphic documentation of fossil occurrence was not always carried out. Much critical biostratigraphic data necessary for the proper characterization of faunal assemblages was simply not collected. As argued, individual fossil ranges and their distributions could have profound influence on the concept of faunal succession and evolutionary dynamics.

Several of Quenstedt’s students at the University of Tübingen followed up on this latter concern. One in particular, Carl Albert Oppel, essentially refined his mentor’s concepts by paying particular attention to the character of the range of individual species in a succession of fauna. These intervals of unique biological character, which he called zones, were essentially subdivisions of the stages proposed by Quenstedt. Oppel’s recognition of the earliest occurrence of a fossil species (or its first appearance), its range through a succession of strata, and its eventual loss from the local record (or its last appearance) led him to compare such biostratigraphic data from many species. By making use of such data on species that overlap in some or all of their stratigraphic ranges and from widely separated areas, Oppel was able to erect a biochronology based on a diverse record of first appearances, last appearances, and individual and overlapping range zones. This fine-scale refinement of a biologically defined sense of succession found wide applicability and enabled not only biochronological (or temporal) but also biofacies (spatial) understanding of the succession in question.
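Oppel's combination of first and last appearances can be captured in a few lines. The sketch below, in Python with invented species and stratigraphic levels, computes the interval over which several fossil ranges coexist, which is the kind of overlap reasoning used to erect a zone.

```python
def overlap_zone(ranges):
    """Given (first_appearance, last_appearance) levels for several species,
    return the stratigraphic interval where all ranges overlap, or None."""
    lowest = max(first for first, last in ranges.values())
    highest = min(last for first, last in ranges.values())
    return (lowest, highest) if lowest <= highest else None

# Hypothetical ammonite species, with ranges in metres above the section base.
ranges = {
    "species A": (10, 60),
    "species B": (25, 80),
    "species C": (30, 55),
}
print(overlap_zone(ranges))  # -> (30, 55): strata in this interval carry all three
```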
A twin star of the Sun may have formed along with our solar system, a new study from the Center for Astrophysics finds. If confirmed, the presence of a second star would explain mysteries of the Solar System. It would mean the Oort Cloud at the edge of our system likely formed much as it is today. It would also mean that any “planet nine” beyond Neptune (should it exist) is likely a captured object from outside the Solar System.

Roughly five billion years ago, the Sun formed within a birth cluster — a collection of infant stars swimming in a cloud of molecular gas.

A cloud made of mountains

At the edge of our family of planets lies the Kuiper Belt — a doughnut-shaped halo of icy objects including the dwarf planet Pluto. This belt, now being explored by spacecraft for the first time, is also home to short-period comets that journey to the inner solar system roughly once every 200 years. This region might also house at least one unseen, massive planet.

Well beyond the Kuiper Belt, the Oort Cloud is a diffuse collection of bodies stretching as far as one-quarter of the way to the nearest star.

“[T]he Oort Cloud is believed to be a giant spherical shell surrounding the rest of the solar system. It is like a big, thick-walled bubble made of icy pieces of space debris the sizes of mountains and sometimes larger. The Oort Cloud might contain billions, or even trillions, of objects,” NASA describes.

The hypothesized cloud of frozen mountains is also thought to be the source of the long-period comets, which can take hundreds of thousands of years to complete a single orbit of the Sun.

In the most popular theory of the formation of the Oort Cloud, the diffuse collection of rock and ice took shape from debris ejected from the nascent solar system, together with flotsam from other planetary systems. However, most stars are born in binary systems, and it seems likely our own Sun may have a long-lost twin, the researchers speculated.

“Here, we consider a temporary binary companion to the Sun that could have existed only in the solar birth cluster, and explore the plausibility and implications of such a possibility for both the formation of the [outer Oort Cloud] and the capture of Planet Nine,” the researchers wrote in an article published in the Astrophysical Journal Letters.

The expected concentration of material scattered from within the Solar System compared to interstellar debris could be explained by a binary model of stellar formation.

“Binary systems are far more efficient at capturing objects than are single stars. If the Oort cloud formed as observed, it would imply that the Sun did in fact have a companion of similar mass that was lost before the Sun left its birth cluster,” Dr. Avi Loeb of Harvard University states.

When you really want to get away from it all

At the outer edge of our planetary system, bodies follow orbits suggesting the presence of an unseen world orbiting far from the Sun. The discovery of such a Planet Nine (sometimes called Planet X) would lend further evidence to the idea that the Sun has a long-lost sibling.

“Objects in the outer Oort Cloud may have played important roles in Earth’s history, such as possibly delivering water to Earth and causing the extinction of the dinosaurs. Understanding their origins is important,” said Amir Siraj, a Harvard undergraduate student.

It may be impossible with current technology to find the estranged sibling of our Sun, due to the complex gravitational influences of hundreds of billions of stars in the Milky Way.
“Passing stars in the birth cluster would have removed the companion from the Sun through their gravitational influence. Before the loss of the binary, however, the solar system already would have captured its outer envelope of objects, namely the Oort cloud and the Planet Nine population,” Loeb stated.

Although we may never find this mysterious former companion to our Sun, our Solar System will forever be adorned by the grandiose Oort Cloud and its gift of once-in-a-lifetime comets.

This article was originally published on The Cosmic Companion by James Maynard, founder and publisher of The Cosmic Companion.
Lunar south pole

The lunar south pole is of special interest to scientists because of the postulated occurrence of ice in permanently shadowed areas. Of the lunar poles, the south pole is of greater interest because the area that remains in shadow is much larger than that at the north pole. The lunar south pole craters are unique in that sunlight does not reach their bottoms. Such craters are cold traps that contain a fossil record of the early solar system.

Spacecraft from several countries have explored the lunar south pole. Extensive studies were conducted by the Lunar Orbiter, Clementine, Lunar Prospector, Lunar Reconnaissance Orbiter, Kaguya, and Chandrayaan missions. NASA's LCROSS mission found a significant amount of water in the crater Cabeus.

Future planned exploration of the lunar south pole includes a private mission by Shackleton Energy Company, no earlier than 2016. Shackleton intends to land a robotic precursor exploration rover to "identify and characterize the nature, composition and locations of the optimum ice concentrations at the north and south pole craters".

Lunar Mission One is a British-led, unmanned Moon mission announced in November 2014 and planned for 2024. It will attempt to land on the lunar south pole, then drill down at least 20 m and possibly as deep as 100 m. This could dramatically improve the understanding of the Moon's composition, its geologic history and formation, revealing new clues about the early Solar System. The mission is attempting to gain crowdfunding on Kickstarter.

References
- "South Pole Region of the Moon as Seen by Clementine". NASA, June 3, 1996. Retrieved March 4, 2010.
- "NASA Takes Aim at Moon with Double Sledgehammer". Space.com, February 27, 2008. Retrieved March 4, 2010.
- Chang, Kenneth. "LCROSS Mission Finds Water on Moon, NASA Scientists Say". The New York Times, November 13, 2009. Retrieved March 4, 2010.
- "Shackleton Energy's cislunar economic development plans". David Livingston interview with James Keravala, The Space Show, December 14, 2012, at 55:25-57:40. Accessed December 22, 2012.
- "UK 'to lead moon landing' funded by public contributions". BBC, November 19, 2014. Retrieved November 19, 2014.
- "Lunar Mission One: A new lunar mission for everyone". Kickstarter, November 19, 2014. Retrieved November 19, 2014.
How do you display data?

The methods students use to display data as they move through the primary and intermediate grades include making tables, charts, bar graphs, line graphs, pictographs, circle graphs, and line plots. Students in middle and high school also create histograms, box-and-whisker plots, scatterplots, and stem-and-leaf plots.

What is an effective way of displaying data?

Graphs, charts, matrices, tables and diagrams are like pictures: they can 'speak a thousand words'. They are useful for expressing information clearly and simply, and they can be used as a visual-thinking tool – for yourself and for groups. There are a number of techniques and types, each suited to different tasks.

What are the types of data display?

The most common types of data displays are the 17 that follow:
- bar charts
- column charts
- stacked bar charts
- tree maps
- line graphs
- area charts
- stacked area charts
- unstacked area charts

What is the best way to display this data for analysis?

Place the data in a chart that includes individual measurements before averages were calculated so the information is more complete.

What is data screening?

Data screening means checking data for errors and fixing or removing these errors. The goal is to maximise "signal" and minimise "noise" by identifying and fixing or removing errors.

What are visual methods of displaying data?

Data visualization is the graphical representation of information and data. By using visual elements like charts, graphs, and maps, data visualization tools provide an accessible way to see and understand trends, outliers, and patterns in data.

How do you show data visually?

How to present data visually (data visualization best practices):
- Avoid distorting the data.
- Avoid cluttering up your design with "chartjunk".
- Tell a story with your data.
- Combine different types of data visualizations.
- Use icons to emphasize important points.
- Use bold fonts to make text information engaging.

What is a visual display of data?

A graph is a visual display of information or data.

What is data display?

Data display refers to computer output of data to a user, and assimilation of information from such outputs. Data may be output on electronic displays, or hardcopy printouts, or other auxiliary displays and signaling devices including voice output, which may alert users to unusual conditions.

How do you display data creatively?

15 Cool Ways to Show Data:
- Venn Diagram. If you need to make a comparison between 2 relatively simple data sets, a Venn diagram can be your creative and cool solution.
- Bubble Chart.
- Decision Trees.
- Radar Chart.
- Cycle Diagram.
- Concept Maps.
- Fishbone Diagram.

How do you display quantitative data?

Quantitative data is often displayed using either a histogram, dot plot, or a stem-and-leaf plot. In a histogram, the interval corresponding to the width of each bar is called a bin. A histogram displays the bin counts as the height of the bars (like a bar chart).

What are the different ways to present data?

Some of the popular ways of presenting data include the line graph, column chart, box plot, vertical bar, and scatter plot. These and other types are explained below with brief information about their application.

What are the best ways to present data?

Using charts. A chart or graph is a visual presentation of data.

What are different ways to show data?
The methods students use to display data as they move through the primary and intermediate grades include making tables, charts, bar graphs, line graphs, pictographs, circle graphs, and line plots.

What are different ways to represent data?

There are a number of ways of representing data diagrammatically.

Scatter Graphs. These are used to compare two sets of data. One set of data is put on the x-axis (the horizontal axis) and the other on the y-axis (the vertical axis). If one set of data depends upon the other, this is put on the y-axis (and is known as the 'dependent variable').

What are ways to display qualitative data?
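The quantitative-data answer above mentions histograms; as a concrete illustration, here is a minimal Python sketch using the widely available matplotlib library. The sample scores are made-up values; any numeric list would do.

```python
import matplotlib.pyplot as plt

# Made-up quantitative data: 20 test scores.
scores = [62, 67, 71, 73, 74, 75, 78, 79, 80, 81,
          82, 84, 85, 86, 88, 90, 91, 93, 95, 98]

# A histogram groups values into "bins" and shows the count per bin as bar height.
plt.hist(scores, bins=5, edgecolor="black")
plt.xlabel("Score")
plt.ylabel("Number of students")
plt.title("Distribution of test scores")
plt.show()
```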
- Surface Areas and Volumes Notes For Class 10 Download PDF
- Unit 9 Section 4 : Surface Area and Volume of 3-D Shapes
- CBSE Class 9 Maths Chapter 13 - Surface Areas and Volumes Formulas

A right circular cylinder is the solid generated by the revolution of a rectangle about one of its sides. NOTE: If a paper cylinder, open at both ends, is cut along a vertical line on the curved surface and stretched on a plane surface, we obtain a rectangle whose length is the circumference of the base and whose breadth is the height of the cylinder. So, the curved surface area (C.S.A.) of the cylinder equals the area of this rectangle. When a rectangular sheet of paper is rolled along its length, we get a cylinder whose base circumference is the length of the sheet and whose height is the same as the breadth of the sheet.

Surface Areas and Volumes Notes For Class 10 Download PDF

Geometry Formulas: Geometry is a branch of mathematics that deals with the measurement, properties, and relationships of points, lines, angles, surfaces, and solids. There are two types of geometry — 2D geometry or plane geometry and 3D geometry or solid geometry. Flat shapes like squares, circles, and triangles are a part of plane geometry and are called 2D shapes. These shapes have only two dimensions, the length and the width. On the other hand, solid objects, also known as 3D objects, have the third dimension of height or depth. Some examples of 3D shapes in solid geometry are the cube, cuboid, sphere, and cone. Geometry formulas are very important to solve all the mathematical problems related to both plane geometry and solid geometry.

Surface Area Problems: If you find the area of each rectangle and add them, you will have the total area of the figure. Solve word problems that involve the surface area of pyramids and prisms. The radius of the base is 4 inches and the slant height is 6 inches. A cuboid is a solid shape with 6 rectangular faces. The surface area of a figure is defined as the sum of the areas of the exposed sides of an object.

Mensuration Formulas: Mensuration is a branch of mathematics that deals with the area, perimeter, volume, and surface area of various geometrical shapes. It is one of the most important chapters covered in high school Mathematics. Mensuration has immense practical applications in our day-to-day life. It is for this reason that advanced concepts related to mensuration are covered in higher grades. Mensuration problems are asked in various government job exams as well, like SSC, Banking, Insurance, etc. So it becomes very important for everyone to understand and memorize the various mensuration formulas for all 2D and 3D geometrical figures.

Unit 9 Section 4 : Surface Area and Volume of 3-D Shapes

Surface area is the area occupied by the surface of a 3-D object, while volume is the space occupied by the object. Many times a 3-D figure will be a combination of standard figures, so we just need to calculate the surface areas and volumes separately and then add them. The formulas are given for the following. I am sure this will be very helpful to students in all spheres, as these are common topics used in a wide variety of competitive examinations.
CBSE Class 9 Maths Chapter 13 - Surface Areas and Volumes Formulas

Surface area formulas and volume formulas appear time and again in calculations and homework problems. Pressure is a force per area and density is mass per volume. These are just two simple types of calculations that involve these formulas. This is a short list of common geometric shapes and their surface area formulas and volume formulas. A sphere is a solid figure where every point on the surface is equidistant from the center of the sphere.

In this section we calculate the volume and surface area of 3-D shapes such as cubes, cuboids, prisms and cylinders. The total surface area is the sum of the areas of all the faces; for a cylinder, it is the curved surface area plus the areas of the two circular ends. Giving your answers correct to 3 significant figures, calculate the volume and total surface area of each of the following cylinders. The diagram shows a wooden block that has had a hole drilled in it. The diameter of the hole is 2 cm.

For calculations, Lateral Surface Area means curved surface area.
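For reference, the standard cylinder formulas these notes rely on, written out for a cylinder of base radius r and height h:

```latex
\begin{align*}
\text{Curved (lateral) surface area} &= 2\pi r h \\
\text{Total surface area} &= 2\pi r h + 2\pi r^{2} = 2\pi r (h + r) \\
\text{Volume} &= \pi r^{2} h
\end{align*}
```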
This discovery is published in the journal Monthly Notices of the Royal Astronomical Society: Letters.

Astronomers using the Hubble Space Telescope from NASA and the European Space Agency (ESA) to study some of the oldest and faintest stars in the globular cluster NGC 6752 have made an unexpected discovery: a dwarf galaxy 30 million light-years away that is almost the same age as the Universe itself.

The purpose of the observations was to use the stars of NGC 6752 to measure the age of the cluster, but in the process the scientists made an unexpected discovery. On the outer edges of the area imaged with Hubble's ACS camera, a compact collection of stars appeared. After a careful analysis of their brightness and temperature, astronomers concluded that these stars did not belong to the cluster, which is part of the Milky Way, but were millions of light-years farther away.

The newly discovered cosmic neighbor, which astronomers have named Bedin 1, is a galaxy of modest, elongated size. It measures just about 3,000 light-years at its fullest extent, a small fraction of the size of the Milky Way. Not only is it small, it is also incredibly faint. These properties led astronomers to classify it as a dwarf spheroidal galaxy.

Dwarf spheroidal galaxies are defined by their small size, low luminosity, lack of dust and old stellar populations. It is already known that the Local Group of galaxies contains 36 galaxies of this type, of which 22 are satellite galaxies of the Milky Way.

While dwarf spheroidal galaxies are not uncommon, Bedin 1 has some notable features. Not only is it one of the few dwarf spheroidals with a well-established distance, it is also extremely isolated. It lies about 30 million light-years from the Milky Way and 2 million light-years from the nearest plausible host galaxy, NGC 6744. This makes it perhaps the most isolated small dwarf galaxy discovered so far.

From the properties of its stars, astronomers conclude that the galaxy is about 13 billion years old, nearly as old as the Universe itself. Because of its isolation, which resulted in almost no interaction with other galaxies, and its age, Bedin 1 is the astronomical equivalent of a "living fossil" from the early Universe.

The discovery of Bedin 1 was truly serendipitous. Very few Hubble images allow such faint objects to be seen, and they cover only a small area of the sky. Future telescopes with a large field of view, such as the WFIRST telescope, will have cameras that cover a much larger area of the sky and may find many more of these galactic neighbors.
Bilateral hearing loss is a type of hearing impairment that affects both ears. This type of hearing loss may develop over a long period of time or may occur suddenly. Bilateral hearing loss can affect people from all walks of life and may occur at any age, even at birth. Anatomically, hearing loss in both ears may occur in the outer, middle, or inner ear, or may be a blend of all three. The hearing loss may be caused by a variety of conditions, especially old age. Bilateral hearing loss is typically treated with hearing aids in both ears.

One of the most common causes of hearing loss in both ears is a condition called presbycusis, which affects individuals as they age. Presbycusis affects approximately 40 percent of people age 75 or older. Over time, a person's inner and middle ear, along with the nerve conduits to the brain, gradually become less effective. This affects both ears equally.

Other causes of bilateral hearing loss include some type of auditory trauma, often exposure to loud noises, which can be sudden or gradual. Exposure to loud sounds may rupture the eardrums. Noises that can bring about hearing loss in both ears include music, an explosion, or heavy industrial equipment, among other types of noise.

Bilateral hearing loss can also be induced by other factors, including heredity and the use of tobacco. Hearing loss in both ears may also be caused by side effects of certain medications, including antibiotics. Sometimes, the hearing loss in both ears is brought on by a combination of old age, auditory trauma, and genetics, as well as the use of tobacco and certain medicines.

A person experiencing hearing loss in both ears hears sound at a lower volume. Oftentimes, an individual will have trouble distinguishing certain high-pitched sounds and conversations when there is other noise around. A person will also typically experience an annoying ringing or hissing sound in the ears.

Bilateral hearing loss can be confirmed during a physical exam. A physician will examine a person's ears and ask questions about the hearing loss. Oftentimes, a doctor will confirm hearing loss with the whisper test, in which the medical specialist turns his back to the patient and asks him to repeat certain words. In some cases, a patient may be referred to a hearing specialist, who can conduct diagnostic tests, such as using electronic equipment that transmits sounds at an assortment of frequencies and volumes, to see if hearing loss is present.

For hearing loss in both ears, treatment often requires a fitting of hearing aids in both ears. Hearing aids, which may be digital or analog, vary, as some fit partially or completely inside the ear canal. Surgery and medical treatment are usually not effective in treating hearing loss in both ears.
The mountains form the Continental Divide, separating rivers draining to the Atlantic and Arctic oceans from those draining to the Pacific. The major Atlantic-bound rivers rising in the Rockies include the Rio Grande, Arkansas, Platte, Yellowstone, Missouri, and Saskatchewan. Those draining to the Arctic include the Peace, Athabasca, and Liard rivers. Flowing to the Pacific Ocean are the Colorado, Columbia, Snake, Fraser, and Yukon rivers.

The Rockies were formed in the Mesozoic and Early Cenozoic eras during the Cordilleran orogeny. They are geologically complex, with remnants of an ancestral Rocky Mt. system and evidence that uplift, which involved almost all mountain-building processes (see mountain), occurred as a series of pulses over millions of years. The mountains have since been eroded to expose ancient crystalline cores flanked by thick upturned layers of sedimentary rocks. Glaciers and snowfields, which cover portions of the northern ranges and the high peaks of the south, were at one time more extensive; throughout the system the erosional features of alpine glaciation are apparent.

Topographically, the Rockies are usually divided into five sections: the Southern Rockies, Middle Rockies, Northern Rockies (all in the United States), the Rocky Mountain system of Canada, and Brooks Range in Alaska. The Wyoming Basin, the system's principal topographic break, is sometimes considered a sixth section.

The Southern Rockies, in New Mexico, Colorado, and S Wyoming, are dominated by two north-south belts of folded mountains that have been eroded to expose cores of Precambrian rocks rimmed by younger sedimentary rocks. The eastern belt comprises the Laramie, Medicine Bow, and Wet Mts. and the Front Range. The principal ranges of the western belt are the Park, Gore, Mosquito, Sawatch, and Sangre de Cristo. Between the two belts are three basins known as the North, South, and Middle "parks." To the southwest are the San Juan Mts., a nonlinear group of uplands composed mainly of volcanic rocks. The Southern Rockies are the system's highest section and include many peaks above 14,000 ft (4,267 m), among them Mt. Elbert and Mt. Massive (14,418 ft/4,395 m), both in the Sawatch Mts.

The Middle Rockies, chiefly in NE Utah and W Wyoming, lie N of the Southern Rockies and are separated from them by the Wyoming Basin. The ranges of this section are generally lower and less continuous than those to the south. The principal parts are the Wasatch and Teton ranges (which are both great tilted fault blocks), the Yellowstone Plateau and Absaroka Range (both developed on volcanic rocks), the Bighorn, Beartooth, Owl Creek, and Uinta Mts., and the Wind River Range (all broad folded mountains). All of these component sections have been eroded down to their Precambrian cores and are rimmed by Paleozoic and Mesozoic sedimentary rocks. The highest peaks of the Middle Rockies are Gannett Peak (13,785 ft/4,202 m) in the Wind River Range and Grand Teton (13,766 ft/4,196 m) in the Teton Range.

The Northern Rockies, in NE Washington, N and central Idaho, NW Wyoming, and W Montana extend N from Yellowstone National Park to the U.S.-Canadian border. They are composed of the Clearwater and Salmon River Mts., the Sawtooth and Lost River ranges (all of which developed in the batholith of central Idaho), and the Bitterroot Range along the Idaho-Mont. line. In the east are the Front Ranges of Montana.
A series of north-south trending ranges separated by narrow trenches and valleys occupies most of N Montana and the Idaho panhandle. Two especially distinctive trenches are the Rocky Mountain Trench, which extends NW from Flathead Lake, and the Purcell Trench, which extends N from Coeur d'Alene Lake. The Okanagan Highlands, in NE Washington, form the western edge of the Northern Rockies. The peaks of the Northern Rockies are generally lower than those to the south; among the highest are Borah Peak (12,655 ft/3,857 m) and Leatherman Peak (12,230 ft/3,728 m) in the Lost River Range. The Rocky Mt. system of Canada is composed of two major sections: the high rugged peaks of the Canadian Rockies proper, to the east, and the Columbia Mts. group on the west. The Canadian Rockies are located along the British Columbia-Alberta border and include Mt. Robson (12,972 ft/3,954 m; highest peak of the Rocky Mts. in Canada), Mt. Columbia (12,295 ft/3,748 m), and Mt. Forbes (11,902 ft/3,628 m). The prominent, wide-floored Rocky Mountain Trench, west of the crest line, continues c.800 mi (1,290 km) into Canada from Montana and is drained by the headwaters of the Peace River and by sections of the Fraser, Columbia, and Kootenay rivers. The Purcell Trench to the west also crosses into Canada and joins the Rocky Mountain Trench c.200 mi (320 km) north of the border. Farther to the west is the Columbia Mts. group, which includes the Selkirk, Purcell, Monashee, and Cariboo Mts. The Rockies continue into Yukon and the Northwest Territories as the Mackenzie, Richardson, and Franklin Mts. In N Alaska, the Brooks Range, a cold and treeless region rising to Mt. Chamberlin (9,020 ft/2,749 m), forms the northernmost section of the Rocky Mts. Exploitable mineral deposits (lead, zinc, copper, silver, gold) are sparsely dispersed throughout the entire system. The principal mining centers are Leadville and Cripple Creek, Colo.; the Butte-Anaconda district of Montana; Coeur d'Alene, Idaho; and the Kootenay Trail region of British Columbia. In the 1970s oil shale found in the Rocky Mt. area led to an oil industry that spurred city and state growth, especially in Colorado; by the mid-1980s, the industry was already in decline. Vast forests, largely under government control and supervision, are a major natural resource. Lumbering and other forestry activities are limited mainly to Montana, Idaho, and British Columbia, where commercially valuable stands are most abundant and accessible. The Rockies are a year-round recreational attraction, and the surrounding states have seen a boom in vacation-housing construction and, thus, population increases since the late 1970s. The U.S. national parks in the system include Rocky Mountain, Yellowstone, Grand Teton, and Glacier. Rocky Mountain National Park (265,723 acres/107,580 hectares) is in central Colorado. Straddling the Continental Divide in the Front Range of the Southern Rockies, the park features more than 100 peaks towering over 11,000 ft (3,353 m). The highest is Longs Peak (14,255 ft/4,345 m). The park, which was authorized in 1915, also contains many lakes and waterfalls. (See also National Parks and Monuments, table.) In Canada are Jasper, Banff, Yoho, Glacier, Kootenay, Mount Revelstoke, and Waterton Lakes national parks. The Rockies were traversed by westward-bound pioneers; the principal U.S. pass across the mountains is South Pass (alt. 
c.7,550 ft/2,301 m) at the southern end of the Wind River Range, SW Wyoming, which links the Wyoming Basin and the Great Plains with the basins and plateaus W of the Rockies. This pass was followed by the Oregon and Mormon trails. The Santa Fe Trail skirted the southern end of the Rockies. In Canada the important passes are Kicking Horse (alt. 5,539 ft/1,688 m), which carries the Trans-Canada Highway, Crowsnest Pass, and Yellowhead Pass. Explorers of the U.S. Rockies have included Vasquez de Coronado (1540), Meriwether Lewis and William Clark (1804-6), Zebulon Pike (1806-7), Stephen Long (1819-20), Benjamin Bonneville (1832-35), John Frémont (1843-44), Isaac Stevens (1853), John W. Powell (1868), and Ferdinand Hayden (1871). Leading Canadian explorers were sieur de la Vérendrye (1738-39), Sir Alexander Mackenzie (1792-93), David Thompson (1799-1803), and Simon Fraser (1803-7). The Rocky Mountains, often called the Rockies, are a mountain range in western North America. The Rocky Mountains stretch more than 4,800 kilometers (3,000 miles) from northernmost British Columbia, in Canada, to New Mexico, in the United States. The range's highest peak is Mount Elbert in Colorado at 14,440 feet (4,401 meters) above sea level. Though part of North America's Pacific Cordillera, the Rockies are distinct from the Pacific Coast Ranges, which are located immediately adjacent to the Pacific coast. The eastern edge of the Rockies rises impressively above the Interior Plains of central North America, including the Front Range, which runs from northern New Mexico to northern Colorado, the Wind River Range and Big Horn Mountains of Wyoming, the Crazy Mountains and the Rocky Mountain Front of Montana, and the Clark Range of Alberta. In Canada geographers define three main groups of ranges: the Continental Ranges, Hart Ranges, and Muskwa Ranges (the latter two flank the Peace River, the only river to pierce the Rockies, and are collectively referred to as the Northern Rockies). Mount Robson in British Columbia, at 3,954 meters (12,972 ft), is the highest peak in the Canadian Rockies. The western edge of the Rockies, such as the Wasatch Range near Salt Lake City, Utah, divides the Great Basin from other mountains farther to the west. The Rockies do not extend into the Yukon or Alaska, or into central British Columbia, where the Rocky Mountain System (but not the Rocky Mountains) includes the Columbia Mountains, the southward extension of which is considered part of the Rockies in the U.S. The Rocky Mountain System within the United States is a United States physiographic region. The Rocky Mountains are commonly defined as stretching from the Liard River in British Columbia south to the Rio Grande in New Mexico. Other mountain ranges continue beyond those two rivers, including the Selwyn Range in Yukon, the Brooks Range in Alaska, and the Sierra Madre in Mexico, but those are not part of the Rockies, though they are part of the American cordillera.
The United States definition of the Rockies, however, includes the Cabinet and Salish Mountains of Idaho and Montana, whereas their counterparts north of the Kootenai River, the Columbia Mountains, are considered a separate system in Canada, lying to the west of the huge Rocky Mountain Trench, which runs the length of British Columbia from its beginnings in the middle Flathead River valley in western Montana to the south bank of the Liard River. The Rockies vary in width from 70 to 300 miles (110 to 480 kilometers). Also west of the Rocky Mountain Trench, farther north and facing the Muskwa Ranges across the Trench, are the Stikine Ranges and Omineca Mountains of the Interior Mountains system of British Columbia. The younger ranges of the Rocky Mountains uplifted during the late Cretaceous period (100 million-65 million years ago), although some portions of the southern mountains date from uplifts during the Precambrian (3,980 million-600 million years ago). The mountains' geology is a complex of igneous and metamorphic rock; younger sedimentary rock occurs along the margins of the southern Rocky Mountains, and volcanic rock from the Tertiary (65 million-1.8 million years ago) occurs in the San Juan Mountains and in other areas. Millennia of severe erosion in the Wyoming Basin transformed intermountain basins into a relatively flat terrain. The Tetons and other north-central ranges contain folded and faulted rocks of Paleozoic and Mesozoic age draped above cores of Proterozoic and Archean igneous and metamorphic rocks ranging in age from 1.2 billion (e.g., Tetons) to more than 3.3 billion years (Beartooth Mountains). Periods of glaciation occurred from the Pleistocene Epoch (1.8 million-70,000 years ago) to the Holocene Epoch (fewer than 11,000 years ago). Recent episodes included the Bull Lake Glaciation, which began about 150,000 years ago, and the Pinedale Glaciation, which probably remained at full glaciation until 15,000-20,000 years ago. Ninety percent of Yellowstone National Park was covered by ice during the Pinedale Glaciation. The Little Ice Age was a period of glacial advance that lasted a few centuries, from about 1550 to 1860. For example, the Agassiz and Jackson glaciers in Glacier National Park reached their most forward positions about 1860, during the Little Ice Age. Water in its many forms sculpted the present Rocky Mountain landscape. Runoff and snowmelt from the peaks feed Rocky Mountain rivers and lakes with the water supply for one-quarter of the United States. The rivers that flow from the Rocky Mountains eventually drain into three of the world's oceans: the Atlantic, the Pacific, and the Arctic. The Continental Divide is located in the Rocky Mountains and designates the line at which waters flow either to the Atlantic or Pacific oceans. Triple Divide Peak (8,020 feet/2,444 m) in Glacier National Park (U.S.) is so named because water that falls on the mountain reaches not only the Atlantic and Pacific but Hudson Bay as well. Farther north in Alberta, the Athabasca and other rivers feed the basin of the Mackenzie River, which has its outlet on the Beaufort Sea of the Arctic Ocean. Since the last great Ice Age, the Rocky Mountains were home first to Paleo-Indians and then to the indigenous peoples of the Apache, Arapaho, Bannock, Blackfoot, Cheyenne, Crow, Flathead, Shoshoni, Sioux, Ute, Kutenai (Ktunaxa in Canada), Sekani, Dunne-za, and others.
Paleo-Indians hunted the now-extinct mammoth and ancient bison (an animal 20% larger than modern bison) in the foothills and valleys of the mountains. Like the modern tribes that followed them, Paleo-Indians probably migrated to the plains in fall and winter for bison and to the mountains in spring and summer for fish, deer, elk, roots, and berries. In Colorado, along the crest of the Continental Divide, rock walls that Native Americans built for driving game date back 5,400-5,800 years. A growing body of scientific evidence indicates that indigenous peoples had significant effects on mammal populations by hunting and on vegetation patterns through deliberate burning. Recent human history of the Rocky Mountains is one of more rapid change. The Spanish explorer Francisco Vásquez de Coronado — with a group of soldiers, missionaries, and African slaves — marched into the Rocky Mountain region from the south in 1540. The introduction of the horse, metal tools, rifles, new diseases, and different cultures profoundly changed the Native American cultures. Native American populations were extirpated from most of their historical ranges by disease, warfare, habitat loss (eradication of the bison), and continued assaults on their culture. In 1739, French fur traders Pierre and Paul Mallet, while journeying through the Great Plains, discovered a range of mountains at the headwaters of the Platte River, which local American Indian tribes called the "Rockies", becoming the first Europeans to report on this uncharted mountain range. Sir Alexander Mackenzie (1764 - March 11, 1820) became the first European to cross the Rocky Mountains in 1793. He found the upper reaches of the Fraser River and reached what is now the Pacific coast of Canada on July 20 of that year, completing the first recorded transcontinental crossing of North America north of Mexico. He arrived at Bella Coola, British Columbia, where he first reached saltwater at South Bentinck Arm, an inlet of the Pacific Ocean. The Lewis and Clark Expedition (1804-1806) was the first scientific reconnaissance of the Rocky Mountains. Specimens were collected for contemporary botanists, zoologists, and geologists. The expedition was said to have paved the way to (and through) the Rocky Mountains for European-Americans from the East, although Lewis and Clark met at least 11 European-American mountain men during their travels. Mountain men, primarily French, Spanish, and British, roamed the Rocky Mountains from 1720 to 1800 seeking mineral deposits and furs. The fur-trading North West Company established Rocky Mountain House as a trading post in what is now the Rocky Mountain foothills of Alberta in 1799, and their business rivals the Hudson's Bay Company established Acton House nearby. These posts served as bases for most European activity in the Canadian Rockies in the early 1800s, most notably the expeditions of David Thompson, the fourth European to follow the Columbia River to the Pacific Ocean. After 1802, American fur traders and explorers ushered in the first widespread Caucasian presence in the Rockies south of the 49th parallel. The more famous of these Americans included William Henry Ashley, Jim Bridger, Kit Carson, John Colter, Thomas Fitzpatrick, Andrew Henry, and Jedediah Smith. On July 24, 1832, Benjamin Bonneville led the first wagon train across the Rocky Mountains by using Wyoming's South Pass. Thousands passed through the Rocky Mountains on the Oregon Trail beginning in 1842.
The Mormons began to settle near the Great Salt Lake in 1847. From 1859 to 1864, gold was discovered in Colorado, Idaho, Montana, and British Columbia, sparking several gold rushes that brought thousands of prospectors and miners to explore every mountain and canyon and created the Rocky Mountains' first major industry. The Idaho gold rush alone produced more gold than the California and Alaska gold rushes combined and was important in the financing of the Union Army during the American Civil War. The transcontinental railroad was completed in 1869, and Yellowstone National Park was established as the world's first national park in 1872. While settlers filled the valleys and mining towns, conservation and preservation ethics began to take hold. President Harrison established several forest reserves in the Rocky Mountains in 1891-1892. In 1905, President Theodore Roosevelt extended the Medicine Bow Forest Reserve to include the area now managed as Rocky Mountain National Park. Economic development began to center on mining, forestry, agriculture, and recreation, as well as on the service industries that support them. Tents and camps became ranches and farms, forts and train stations became towns, and some towns became cities. Abandoned mines with their wakes of mine tailings and toxic wastes dot the Rocky Mountain landscape. In one major example, eighty years of zinc mining profoundly polluted the Eagle River and its banks in north-central Colorado. High concentrations of the metal carried by spring runoff harmed algae, moss, and trout populations. An economic analysis of mining effects at this site revealed declining property values, degraded water quality, and the loss of recreational opportunities. The analysis also revealed that cleanup of the river could yield $2.3 million in additional revenue from recreation. In 1983, the former owner of the zinc mine was sued by the Colorado Attorney General for the $4.8 million cleanup costs; 5 years later, ecological recovery was considerable. Agriculture and forestry are major industries. Agriculture includes dryland and irrigated farming and livestock grazing. Livestock are frequently moved between high-elevation summer pastures and low-elevation winter pastures, a practice known as transhumance. Human population is not very dense in the Rocky Mountains, with an average of four people per square kilometer (10 per square mile) and few cities with over 50,000 people. However, the human population grew rapidly in the Rocky Mountain states between 1950 and 1990. The 40-year statewide increases in population range from 35% in Montana to about 150% in Utah and Colorado. The populations of several mountain towns and communities have doubled in the last 40 years. Jackson Hole, Wyoming, increased 260%, from 1,244 to 4,472 residents, in 40 years. Every year the scenic areas and recreational opportunities of the Rocky Mountains draw millions of tourists, who come from all over the world to hike, camp, or engage in mountain sports. The main language of the Rocky Mountains is English, but there are also linguistic pockets of Spanish and Native American languages. The main summer attractions are the national parks of the United States and Canada. Glacier National Park in Montana and Waterton Lakes National Park in Alberta border each other and collectively are known as Waterton-Glacier International Peace Park. (See also International Peace Park.)
The summers in this area of the Rockies are warm and dry, because the western fronts impede the advance of water-carrying storm systems. The average temperature in summer is 59 °F (15 °C), and the average precipitation is 5.9 inches (150 mm). Winter is usually wet and very cold, with an average temperature of 28 °F (−2 °C) and average snowfall of 11.4 inches (29.0 cm). In spring, the average temperature is 40 °F (4 °C), and the average precipitation is 4.2 inches (107 mm). In the fall, the average precipitation is 2.6 inches (66 mm), and the average temperature is 44 °F (7 °C).
Are you looking for a story to use with your students that features NASA data? Consider using the following resources in your classroom today! NASA visualizers take data – numbers, codes – and turn them into animations people can see and quickly understand. Students analyze map visualizations representing the fraction of the Sun's energy that Earth reflects back to space, known as "albedo". In Unearthing Data: Phytoplankton Part 2, Dr. Brad Hegyi explains the last three steps of a "data dig". These steps will guide students in some techniques for asking and answering questions before summarizing their results. These digital GLOBE Earth system poster images show global visualizations of different science variables for 2019, 2018, 2016, and 2013. Students may use these images to find patterns among different environmental data, understand the relationships among different environmental parameters, and more. In this video, Dr. Brad Hegyi discusses the thought process for analyzing data. He introduces ways to approach data to find interesting stories and identifies five steps for a data exploration or "data dig". This resource collection models for you (and your students) the process of analyzing solar radiation and phytoplankton data collected by satellites in Arctic waters. The storyline shows how increases in shortwave radiation from the Sun are directly proportional to increases in chlorophyll.
ACT – A bill or measure after it has passed one or both chambers. It also refers to a law that has been passed. ACTION – A description of a step that a bill undergoes as it moves through the legislative process. ADJOURNMENT – Ends a legislative session (as opposed to a recess, which does not end the legislative day). ADJOURNMENT SINE DIE – Ends a legislative session with no set time to meet again. ADOPTION – The formal approval or acceptance of amendments or resolutions. ADVICE AND CONSENT – Constitutionally based power of the Senate to advise the President and give consent to proposed treaties and Presidential appointments. AMENDMENT – A proposal to change, or an actual change to, a bill, motion, act, or the Constitution. APPORTIONMENT – Allocation of legislative seats by law. The seats in the House of Representatives are apportioned to states based on each state's population. APPROPRIATION – An authorization by the legislature for the expenditure of money for a public purpose. In most instances, money cannot be withdrawn from the treasury except through a specific appropriation. Congress must pass appropriations bills in order to fund the government. AUTHOR – The legislator who files a bill and guides it through the legislative process. AUTHORIZATION – A legislative action establishing the terms of a program and general amounts of money needed to fund that program. Subsequent appropriation provides the funding and can be less than the amount authorized. BILL – A proposed law that requires passage by both the House of Representatives and Senate. A bill is the primary means used to create and change the laws. Bill types include: Senate and House bills, Senate and House joint resolutions, Senate and House concurrent resolutions, and Senate and House resolutions. BILL ANALYSIS – A document prepared for all bills reported out of committee that explains in non‐legal language what a bill will do. A bill analysis may include background information on the measure, a statement of purpose, and a section‐by‐section analysis. BIPARTISAN – A term used to refer to an effort endorsed by both political parties, or a group composed of members of both political parties. BLOC – Representatives or Senators who are members of a group with common interests. BUDGET – The President's annual proposal to Congress anticipating revenue and expenditures by the federal government for the upcoming fiscal year. CALENDAR – A list of bills or resolutions to be considered by a committee, sub-committee, the House, or the Senate. CAUCUS – A meeting of members of a political party, usually to decide policy or select members to fill positions. The term also refers to the group itself. CHAMBER – The House of Representatives and the Senate are the two chambers in the United States Congress. CLERK OF THE HOUSE – The chief administrative officer of the House of Representatives. CLOTURE – The closing of debate in the Senate, or ending of a filibuster, by the required three-fifths vote (60 senators), thereby allowing a bill to be voted on. COMMITTEE REPORT – The text of a bill or resolution and its required attachments that is prepared when the measure is reported from a committee for further consideration by the members of the full chamber. The committee report includes the recommendations of the committee regarding action on the measure by the full House or Senate and is generally necessary before a measure can proceed through the legislative process.
COMMITTEE OF THE WHOLE – Business is expedited in the House of Representatives when it resolves itself to the “Committee of the Whole House on the State of the Union.” Rules are relaxed and a quorum is easier to obtain. The committee must comprise a minimum of one hundred members. CONFEREES – Members of a conference committee that is composed of Senators and Representatives named to work out differences between same‐subject bills passed by both chambers. CONGRESSIONAL RECORD – The Government Printing Office publishes this daily account of House and Senate debates, votes, and comments. CONSTITUENT – A citizen residing within the district of an elected representative. CONTINUING RESOLUTION – Legislation providing continued funding for a federal department or program, usually at the previous fiscal year’s funding level. Used when Congress fails to pass necessary appropriations bills for a new fiscal year. CONVENE – To assemble or call to order the members of a legislative body. ENACTING CLAUSE – The initial language in a bill saying “be it enacted.” To prevent the bill from being in effect, a legislator will move to “strike the enacting clause.” ENGROSSED BILL – Official copy of a bill passed by either the House or Senate. ENROLLED BILL – Final certified copy of a bill passed in identical form by the House and Senate. EXECUTIVE SESSION – A meeting closed to the public. EXTENSION OF REMARKS – Comments that were not spoken on the floor but inserted into the Congressional Record by a Senator or Representative. FILIBUSTER – Talking and debating a bill in an effort to change it or kill it. Easier in the Senate than in the House because of the Senate’s more relaxed rules concerning debate. FISCAL YEAR – The 12-month period denoted “FY XXXX” in which funds are apportioned. The U.S. federal government’s fiscal year begins October 1st of the previous year and ends the following September 30th. For example, FY 2015 begins October of 2014. FLOOR – The meeting chamber of either the House or Senate. FLOOR ACTION – Action taken by either House or Senate on a bill reported by a committee. Members may propose amendments, enter debate, seek to promote or prevent a bill’s passage, and vote on its final passage. FRANKING PRIVILEGE – The right of a Senator, Representative, or member of a federal agency to use the U.S. Postal Service for official business at no charge. GERMANE – Pertinent, or bearing on the subject. GERRYMANDER – To divide a state, county, or other political subdivision into election districts in an unnatural manner to give a political party or ethnic group an advantage over its opponents. HOPPER – The box in which proposed bills are placed. INTRODUCE – Placing a new bill in the Hopper to start the bill process. This is the first stage of the bill process. JOINT COMMITTEE – A committee that includes both Senators and Representatives. MAJORITY LEADER – Leader of the majority party in either the House or the Senate. MARKUP – A committee session where the members perform a section‐by‐section review and revision of one or more bills. MOTION – A formal suggestion presented to a legislative body for action by one of its members while the body is meeting. NONPARTISAN – Free from party domination. PAIRING – An agreement by two members of Congress to be recorded on opposite sides of an issue if one or both persons will be absent when the vote is taken. The votes are not counted, but make the members’ positions known. PASSAGE – Approval of a measure by the full body. 
POINT OF ORDER – An objection by a Senator or Representative to a rule being violated. PRESIDENT PRO TEMPORE – The Vice President is president of the Senate, but is present only for crucial votes. In his place, the Senate elects a president pro tempore, or temporary president, who presides, or, when routine measures are being considered, assigns the job to a junior Senator. PRIVILEGE OF THE FLOOR – Permission to view the proceedings from the floor of the chamber rather than from the public gallery. PREVIOUS QUESTION – By a motion to "move the previous question," a Representative seeks to end debate and bring an issue to a vote. Senators do not have this debate‐limiting device. PRIVATE BILL – A bill that provides for special treatment of an individual or business entity. Such a bill is subject to presidential veto. PRIVILEGE – A privileged question is a motion that is considered before other motions. A "question of privilege" relates to the personal privilege of a Senator or Representative. PUBLIC HEARING – A meeting of a House or Senate committee or subcommittee during which public testimony may be heard and formal action may be taken on any measure or matter before the committee or subcommittee. QUORUM – The number of members of a legislative body who must be present before business may be conducted. RANKING MEMBER – A member of the majority party on a committee or subcommittee who ranks first in seniority after the chair. RANKING MINORITY MEMBER – The senior member (in terms of service) of the minority party on a committee or subcommittee. RECESS – Concludes legislative business and sets time for the next meeting of the legislative body. REPORT – A committee's written record of its actions and views on a bill. The committee is required to report its findings to the House or Senate. RESOLUTION – A formal statement of a decision or opinion by the House, Senate, or both. A simple resolution is made by one chamber and generally deals with that chamber's rules or prerogatives. A concurrent resolution is presented in both chambers and usually expresses a Congressional view on a matter not within Congressional jurisdiction. A joint resolution requires approval in both chambers and goes to the President for approval. Simple and concurrent resolutions do not go to the President. RIDER – A provision added to a bill so it may "ride" to approval on the strength of the bill. Generally, riders are placed on appropriations bills. SECRETARY OF THE SENATE – The chief administrative officer of the Senate. SENATORIAL COURTESY – The Senate's tradition of honoring any objections by Senators of the President's party to appointments in the states of the objecting Senators. SERGEANT AT ARMS – Legislative officer who maintains order and controls access to the chamber at the direction of the presiding officer. SPEAKER – Speaker of the House of Representatives, who presides over the House. Elected, in effect, by the majority party in the House. Next in the line of succession to the Presidency after the Vice President. SUSPEND THE RULES – A motion in the House intended to quickly bring a bill to a vote. TABLE A BILL – A motion to, in effect, put a bill aside and thereby remove it from consideration, or "kill" it. TELLER VOTE – A House vote in which members' votes are counted "for" or "against" as representatives file past tellers in the front of the chamber. A count is taken, but there is not an official record of how each representative voted.
UNANIMOUS CONSENT – A timesaving procedure for non‐controversial measures. Measures are adopted without a vote when a member simply says, "I ask unanimous consent for…" and states the proposal, and no member objects. UNION CALENDAR – The calendar on which bills involving money are placed in order of the dates on which they are to be reported by committees. WHIP – A legislator chosen to assist the party leader in each chamber. The Whip is generally responsible for gathering votes for measures within the party. (Adapted from the U.S. Congress Handbook)
In 2012 chemical researchers reported progress in developing self-healing materials—materials that have the property of being able to repair themselves and become fully functional again after experiencing some kind of damage, such as a scratch or a fracture. Many biological tissues are naturally self-healing. For example, if the skin on a finger is cut, the body begins to rebuild the tissue and the skin heals. What if synthetic materials could be manufactured to do this as well? Over time, materials tend to degrade from a variety of causes, ranging from sunlight exposure to wear and tear. Eventually, degraded material can lead to the failure of many kinds of products. At present, structures and machine components are designed to withstand a certain amount of mechanical damage. In the future, materials that could, on their own, repair themselves could remain in service for a longer period, improve safety, and reduce maintenance costs. Self-healing would be especially valuable for objects that could not otherwise be repaired (such as electronic circuit boards and many plastic products) or that would be difficult to access (such as an implanted medical device, a rover on Mars, or an instrument placed deep in the ocean). The overall processes that take place in any kind of self-healing material are similar. The material contains a substance that can be converted into a mobile phase such as a liquid or a gel. The conversion is triggered by the formation of cracks or breaks in the material or by means of an externally applied stimulus. The mobile phase transports the healing medium to the site of the damage, and repair of the material then occurs by a physical interaction or a chemical reaction that re-forms chemical bonds to fill in the affected area. Once the damage has been healed, the mobile phase becomes solid, restoring the physical and mechanical properties of the material. Self-healing materials can be made of polymers, ceramics, or metals. Ceramics and metals require very high temperatures, from 600 to 800 °C (1,112 to 1,472 °F), for self-healing. Self-healing in polymers can take place at much lower temperatures, and, consequently, most research being conducted in self-healing materials concerns polymers. Self-healing materials in which the repair process is initiated internally are referred to as autonomic. Nonautonomic self-healing must be initiated externally, such as by applying heat or light. Self-healing materials can also be classed as extrinsic or intrinsic. Extrinsic materials have a distinct healing agent, which is typically embedded within the material. An example would be a self-healing material that contains minute capsules filled with a catalyst that promotes self-healing. Cracks that form may burst open some of the capsules, and the released catalyst can then repolymerize the damaged material. In contrast, an intrinsic self-healing material functions as its own healing agent. This type of material may reseal itself through physical interactions at the place of damage, such as by forming new chemical bonds when rubbing occurs along the surface of a crack. For many potential applications, intrinsic self-healing materials would be advantageous, but they present some of the greatest challenges for development. 
Recent research has addressed some of the most common limitations of self-healing polymers developed to date, including the inability of the self-healing process to take place in the presence of water, the need for heat to activate the healing process, and the lack of intrinsic self-healing materials. Although most research on self-healing materials has been conducted since the beginning of the 21st century, a report published in January 2012 by Peiwen Zheng and Thomas J. McCarthy from the University of Massachusetts built on largely forgotten studies made in the 1950s in identifying self-healing properties of a silicone polymer. Zheng and McCarthy were examining other properties of the polymer when they found that it could self-heal under mild heating. The researchers demonstrated this property by slicing a cylinder of material in two and then placing the newly exposed faces against each other. The cut healed so well that it was difficult to see where it had been. The mechanism behind the healing process involved a negatively charged polymerization initiator, which caused the siloxane material to form molecular chains. Embedded in the polymer, the initiator acts only when the ends of the chains are separated from each other and the temperature is raised slightly. The material thus exhibited nonautonomic extrinsic self-healing. This polymer, like many other self-healing polymers, cannot self-heal in the presence of water because it is hydrophobic. Given the ubiquitous presence of moisture in the environment, this property would hinder its use for everyday applications. In a study published in early 2012, Shyni Varghese and co-workers at the University of California, San Diego, and the National Chemical Laboratory in Pune, India, showed that polymers that have flexible side arms with both hydrophobic and hydrophilic parts can self-heal in water. The materials can be easily made, and their behaviour can be modified by controlling the acidity of the water solution. The ability to incorporate hydrophilic components into a self-healing polymer could therefore make self-healing materials available for use in a greater range of environments and applications. Most known healing materials are nonautonomic and require heat as their external energy source. A new type of plastic reported in 2011 by Christoph Weder from the University of Fribourg, Switz., and co-workers instead uses light to initiate self-healing. In the material they investigated, shining ultraviolet light on the surface breaks metal-polymer bonds in long polymer chains. The resulting smaller pieces of the polymer are then able to flow into a damaged area of the surface, such as a fracture. Upon cooling, the small molecular pieces reassemble into the larger polymer chains, restoring the original material. In the future it may be possible to design the polymer for use in such materials as varnishes and plastic finishes so that it absorbs light only at locations where there is a scratch or defect. In work to develop a self-healing polymer that does not need an external stimulus such as heat or light for self-healing, Hideyuki Otsuka and co-workers at Kyushu University in Fukuoka, Japan, reported in late 2011 on a polymer gel whose cut surfaces can reseal when placed in contact with each other even after the cut surfaces have been kept apart for as long as several days.
The self-healing takes place with the application of the organic solvent dimethylfuran to reform bonds between molecular cross-linkages and involves the reaction of arylbenzofuranone radicals in the material. The reaction can be repeated multiple times, unlike other self-healing processes that cannot be repeated once the healing agent has been used up. There are few known examples of intrinsic self-healing materials. However, in 2012 Zhibin Guan and co-workers at the University of California, Irvine, described the synthesis of a new such material, called a hydrogen-bonding brush polymer. It can easily break and re-form bonds on a molecular scale but in bulk is very robust and strong. As a result, the material self-assembles into stiff and soft layers that give both strength and elasticity to the polymer. Since no solvents or healing agents are required for its self-healing, it has potential for use in a large variety of applications. The development of self-healing materials still has a long way to go before such materials become commercially available. Nevertheless, the amount of research in the field is expanding rapidly, and as the technology improves in the coming years, it promises to have an impact on daily lives. On June 5, 2012, a rare transit of Venus across the face of the Sun was viewed by many people, particularly in the Southern Hemisphere. Transits of Venus occur only about twice in each century; the next event would not occur until 2117. In the past, transits of Venus were important in determining the size of the solar system, but since the advent of modern astronomy, they have been of interest only for their beauty and rarity. The same phenomenon, when seen in other star systems, however, has become an important tool for the detection of extrasolar planets. For information on Eclipses, Equinoxes, and Solstices, and Earth Perihelion and Aphelion in 2013, see below. On November 29 NASA announced the surprising detection of large quantities of frozen water ice—as much as 100 billion to 1 trillion tons—trapped in craters at the north and south poles of the planet Mercury. The closest planet to the Sun, Mercury has a surface temperature as high as 430 °C (800 °F) at its equator. However, at its poles some craters are in permanent shadow, and there the temperature can be as cold as −220 °C (−370 °F). The discovery was made by the spacecraft Messenger, which was launched in August 2004 and went into orbit around Mercury in March 2011. Several instruments aboard the spacecraft used different measuring techniques to detect the water ice. The first was an indirect technique based on the measurement of neutrons ejected from atomic nuclei under Mercury’s surface as a result of collision with high-energy cosmic rays. Some of the ejected neutrons escape into space, but others are blocked by the hydrogen in water, so fewer neutrons would be detected from areas containing water ice. (This technique was also used to detect frozen water beneath the surface of Mars.) A second technique used infrared reflectance observations to corroborate the neutron measurements. On December 3 NASA announced that the space probe Voyager 1, launched in September 1977, had entered a newly discovered region of the outer solar system about 18 billion km (11 billion mi) from the Sun dubbed the “magnetic highway.” Here the magnetic field lines of the Sun connect with magnetic field lines present in interstellar space. 
This connection allows high-energy particles from outside the solar system to stream inward and low-energy particles to stream outward. Scientists suspected that the magnetic highway was the last region Voyager 1 would have to cross before it finally left the solar system altogether. Stars and Extrasolar Planets New discoveries of planets orbiting other stars continued unabated in 2012. By the end of the year, more than 850 extrasolar planets had been detected by means of a variety of techniques. The American space telescope Kepler successfully completed its initial 3.5-year survey and began an extended mission that was scheduled to last another four years. The telescope continuously monitored more than 100,000 stars for variations in their brightness that would indicate either the presence of planets orbiting the stars and periodically blocking some of their light or variations in the intrinsic luminosity of the stars themselves. The scientific team operating Kepler identified approximately 2,300 extrasolar planet candidates and confirmed more than 100 planets orbiting nearby stars. Among the candidate objects were more than 100 identified as possible Earth-size planets. Among the interesting objects detected by Kepler were planets orbiting binary stars, which are pairs of stars orbiting around a common centre of gravity. One such planet was found by amateur volunteers combing through Kepler data posted on a Web site called Planet Hunters. This planet, dubbed PH1 after the Web site, was slightly larger than Neptune and was found orbiting a binary star that was itself orbited by another pair of stars. Such planets challenged most theories of planet formation because it had long been assumed that the protoplanetary disks from which planets formed would not be able to remain stable under the gravitational influence of two or more stars. Other objects found orbiting stars lay within the star systems' habitable zones, the orbital regions where liquid water might exist on the surface of the planets and possibly support life. An example of such a system was reported by an international team led by Mikko Tuomi of the University of Hertfordshire, Eng., and Guillem Anglada-Escude of the University of Göttingen, Ger. They found three new planets in orbit around the star HD 40307, making it (at least) a six-planet system. The outermost planet, with a mass about seven times that of Earth, was thus calculated to orbit within the habitable zone of HD 40307. Yet another unexpected discovery was that of an Earth-mass planet in orbit around the Sun-like star Alpha Centauri B. This star is a member of a triple-star system that includes Alpha Centauri A—the brightest star in the southern constellation Centaurus and the fourth brightest star in the sky—and Proxima Centauri—the nearest star to the Sun at a distance of 4.2 light-years. The discovery was made by using the High Accuracy Radial Velocity Planet Searcher (HARPS) instrument on the 3.6-m telescope at the European Southern Observatory in La Silla, Chile. The planet, Alpha Centauri Bb, was found to have an orbital period of only 3.2 days and was detected by measuring small changes it produced in the motion of Alpha Centauri B. The planet is so close to its star that its surface temperature is about 1,200 °C (2,200 °F).
A team of international researchers that included the Optical Gravitational Lensing Experiment (OGLE) collaboration, based at the University of Warsaw, and the Probing Lensing Anomalies Network (PLANET) collaboration, based at the Paris Institute of Astrophysics, reported that each nearby star in the Milky Way Galaxy has an average of 1.6 planetary companions. These surveys used a technique that relied on the gravitational lensing effect produced by planets moving near light-emitting stars. The statistical studies suggested that many—if not most—stars have at least one planetary companion. An image of a very unusual dying star was captured by the world's most expensive group of ground-based telescopes, the Atacama Large Millimeter Array (ALMA), located on a high plateau in Chile. In 2012 ALMA was still under construction; when completed in 2013, it would consist of 66 radio telescope antennas and would have an angular resolution significantly better than that of the Hubble Space Telescope. ALMA was designed to detect astronomical objects emitting radio waves at millimetre and submillimetre wavelengths. The newly imaged object was R Sculptoris, a red giant star named for the southern constellation Sculptor, in which it was found. R Sculptoris is located some 1,200 light-years from Earth. A fairly bright object, it has a luminosity about 7,000 times that of the Sun and is visible through a small pair of binoculars. Stars are known to eject massive amounts of gas and dust in the late stages of their evolution, and many such stars have been seen ejecting rings and clouds of gas. R Sculptoris, however, was the first to be observed surrounded by a spiral distribution of matter. Astronomers speculated that the pattern may have been caused partly by a second unseen star in orbit around the observed red giant. Galaxies and Cosmology Astronomers have detected the presence of two mysterious "dark" components of the universe. The effects of the first component were initially detected by observing the motions of stars in the Milky Way and in other nearby galaxies. In each case the stars were observed to be orbiting around the centres of their galaxies at high speeds that could be explained only by the presence of some unseen (that is, non-light-emitting) "dark matter." The other unseen component of the universe—called "dark energy"—was hypothesized to give rise to a repulsive force that is accelerating the rate of expansion of the universe. The repulsive effect of dark energy was discerned from observations of the distance and speed of recession of very distant supernovas. Together, dark matter and dark energy are calculated to compose 96% of all matter and energy in the universe. In 2012 members of the Canada-France-Hawaii Telescope Lensing Survey reported the results of their mapping of the largest areas of the sky showing the presence of dark matter. The surveying team used dark matter (along with visible matter) present in galaxies and galaxy clusters as lenses to focus images of even more distant galaxies. The team calculated that the amount of dark matter required to produce the weak lensing effects they saw was consistent with the dark matter content calculated indirectly from galactic surveys that studied stellar motion. In a separate survey of the large-scale structure of the universe, the Baryon Oscillation Spectroscopic Survey (BOSS), using data from the Apache Point Observatory, N.M., examined cobweblike structures traced out by hundreds of thousands of galaxies.
BOSS concluded that dark energy constitutes approximately 72% of the total mass-energy content of the universe—in good agreement with earlier studies based on quite different data sets. In November 2012 the record was broken for the most-distant astronomical object ever detected. A team led by Dan Coe of the Space Telescope Science Institute in Baltimore, Md., using both the Hubble and Spitzer space telescopes, found a galaxy, MACS0647-JD, with a redshift of 10.7. The light from this galaxy took 13.3 billion years to arrive at Earth. This meant that it formed a mere 400 million years after the big bang. Because of its youth, MACS0647-JD is a small galaxy, only 600 light-years across. (By comparison, the Milky Way is about 100,000 light-years across.) The infant galaxy was seen only because an intervening galaxy cluster acted as a gravitational lens to magnify its light. Another distance record was set by the Chandra X-Ray Observatory satellite, which observed the most distant X-ray jet from a quasar, an extremely bright galaxy whose luminosity arises from jets powered by matter falling into a central supermassive black hole. The quasar GB 1428+4217 was found at a distance of 12.4 billion light-years, meaning the universe was only 1.3 billion years old when it was formed. At that time in the evolution of the universe, the cosmic microwave background (CMB) was 1,000 times more intense than it is at present. This extremely bright CMB amplified the light coming from the jet and made it easily visible to Chandra, despite GB 1428’s great distance.
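These redshift-to-time conversions can be reproduced with standard cosmology tools. Below is a minimal sketch using the astropy package (assuming it is installed); the exact figures depend on which cosmological parameters are adopted, so the Planck 2018 values used here differ slightly from the numbers quoted above:

```python
from astropy.cosmology import Planck18
import astropy.units as u

z = 10.7  # redshift reported for the galaxy MACS0647-JD

lookback = Planck18.lookback_time(z)  # how long the light has traveled
age_at_z = Planck18.age(z)            # age of the universe when the light was emitted

print(f"Light travel time: {lookback.to(u.Gyr).value:.1f} Gyr")       # ~13.2 Gyr
print(f"Universe age at emission: {age_at_z.to(u.Myr).value:.0f} Myr") # ~440 Myr
```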
A data structure is a particular way of organizing data in a computer so that it can be used effectively. The idea is to reduce the space and time complexities of different tasks. Below is an overview of some popular linear data structures. An array is a data structure used to store homogeneous elements at contiguous locations. The size of an array must be provided before storing data. Let the size of the array be n. Accessing Time: O(1) [possible because elements are stored at contiguous locations] Search Time: O(n) for sequential search; O(log n) for binary search [if the array is sorted] Insertion Time: O(n) [the worst case occurs when insertion happens at the beginning of the array and requires shifting all of the elements] Deletion Time: O(n) [the worst case occurs when deletion happens at the beginning of the array and requires shifting all of the elements] Example: Say we want to store the marks of all students in a class; we can use an array to store them. This reduces the number of variables needed, as we don't need a separate variable for the marks of every subject. All marks can be accessed by simply traversing the array. A linked list is a linear data structure (like an array) where each element is a separate object. Each element (that is, node) of a list comprises two items – the data and a reference to the next node. Types of Linked List: 1. Singly Linked List: In this type of linked list, every node stores the address or reference of the next node in the list, and the last node has NULL as its next reference. For example: 1->2->3->4->NULL 2. Doubly Linked List: In this type of linked list, there are two references associated with each node; one points to the next node and one to the previous node. The advantage of this data structure is that we can traverse in both directions, and for deletion we don't need explicit access to the previous node. Eg. NULL<-1<->2<->3->NULL 3. Circular Linked List: A circular linked list is a linked list where all nodes are connected to form a circle. There is no NULL at the end. A circular linked list can be a singly circular linked list or a doubly circular linked list. The advantage of this data structure is that any node can be made the starting node. This is useful in the implementation of a circular queue with a linked list. Eg. 1->2->3->1 [the next pointer of the last node points to the first] Accessing time of an element: O(n) Search time of an element: O(n) Insertion of an element: O(1) [if we are already at the position where we have to insert] Deletion of an element: O(1) [if we know the address of the node preceding the one to be deleted] Example: Consider the previous example, where we made an array of students' marks. If a new subject is added to the course, its marks must also be added to the array. But the size of the array was fixed, so if it is already full, no new element can be added. If instead we make the array much larger than the number of subjects, most of it may remain empty. To reduce this space wastage, a linked list is used, which adds a node only when a new element is introduced. Insertions and deletions also become easier with a linked list. One big drawback of a linked list is that random access is not allowed: with arrays, we can access the i'th element in O(1) time, whereas in a linked list it takes Θ(i) time.
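To make these trade-offs concrete, here is a minimal sketch of a singly linked list in Python (the names Node and SinglyLinkedList are illustrative, not from any particular library):

```python
class Node:
    """A single element of a singly linked list: data plus a reference to the next node."""
    def __init__(self, data):
        self.data = data
        self.next = None

class SinglyLinkedList:
    def __init__(self):
        self.head = None

    def push_front(self, data):
        # O(1): the new node simply points at the old head.
        node = Node(data)
        node.next = self.head
        self.head = node

    def delete_after(self, prev):
        # O(1) when we already hold a reference to the preceding node.
        if prev is not None and prev.next is not None:
            prev.next = prev.next.next

    def get(self, i):
        # O(n): random access requires walking the chain, unlike an array.
        node = self.head
        for _ in range(i):
            if node is None:
                raise IndexError("index out of range")
            node = node.next
        if node is None:
            raise IndexError("index out of range")
        return node.data

marks = SinglyLinkedList()
for m in (72, 85, 91):     # one node per new subject; no fixed size needed
    marks.push_front(m)
print(marks.get(0))         # 91: the most recently added mark
```

Note how push_front and delete_after run in constant time, while get must walk the chain, exactly the O(1) versus Θ(i) contrast described above.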
A stack, or LIFO (last in, first out) structure, is an abstract data type that serves as a collection of elements with two principal operations: push, which adds an element to the collection, and pop, which removes the last element that was added. In a stack, both push and pop take place at the same end, called the top of the stack. A stack can be implemented using either an array or a linked list. Insertion: O(1) Deletion: O(1) Access Time: O(n) [worst case] Insertion and deletion are allowed only at one end. Example: Stacks are used for maintaining function calls (the last called function must finish execution first), and we can always remove recursion with the help of stacks. Stacks are also used where we have to reverse a word, check for balanced parentheses, and in editors, where the last word you typed is the first to be removed by the undo operation. Similarly, stacks implement the back functionality in web browsers. A queue, or FIFO (first in, first out) structure, is an abstract data type that serves as a collection of elements with two principal operations: enqueue, the process of adding an element to the collection (the element is added at the rear), and dequeue, the process of removing the first element that was added (the element is removed from the front). A queue can also be implemented using either an array or a linked list. Insertion: O(1) Deletion: O(1) Access Time: O(n) [worst case] Example: A queue, as the name suggests, is built on the model of a line at a bus stop or train station, where the person standing at the front of the queue (the one who has been standing the longest) is the first to get a ticket. Queues therefore fit any situation where resources are shared among multiple users and served on a first-come, first-served basis; examples include CPU scheduling and disk scheduling. Another application of queues is the asynchronous transfer of data between two processes (where data is not necessarily received at the same rate as it is sent); examples include IO buffers, pipes, and file IO. Circular Queue: The advantage of this data structure is that it reduces wastage of space in an array implementation, as the insertion of the (n+1)'th element is done at the 0'th index if that slot is free.
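A minimal Python sketch of all three structures follows; the built-in list and collections.deque already give stack and queue behavior, while the CircularQueue class is an illustrative fixed-size array implementation:

```python
from collections import deque

# Stack (LIFO): Python lists give O(1) push/pop at the end.
stack = []
stack.append('a')        # push
stack.append('b')
print(stack.pop())       # 'b' -- the last element pushed is the first popped

# Queue (FIFO): deque gives O(1) appends and pops at both ends.
queue = deque()
queue.append('a')        # enqueue at the rear
queue.append('b')
print(queue.popleft())   # 'a' -- the first element added is the first removed

# Circular queue over a fixed-size array: modular arithmetic on the indices
# reuses slots freed at the front, avoiding the space wastage of a plain
# array implementation.
class CircularQueue:
    def __init__(self, capacity):
        self.buf = [None] * capacity
        self.front = 0       # index of the oldest element
        self.count = 0       # number of stored elements

    def enqueue(self, item):
        if self.count == len(self.buf):
            raise OverflowError("queue is full")
        rear = (self.front + self.count) % len(self.buf)  # wraps past the end to 0
        self.buf[rear] = item
        self.count += 1

    def dequeue(self):
        if self.count == 0:
            raise IndexError("queue is empty")
        item = self.buf[self.front]
        self.front = (self.front + 1) % len(self.buf)
        self.count -= 1
        return item
```

The modular arithmetic in enqueue is what lets the (n+1)'th element wrap around to a freed slot at index 0.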
Angles and Lines - The given figure represents ∠ABC with some points in its region. - The region of the angle shaded by green colour lies between the two arms of the angle. This region is called the interior region of the angle. Every point in this region is said to lie in the interior of the angle. Here, point P is in the interior. - The region of the angle shaded by pink colour lies outside the two arms of the angle. This region is called the exterior region of the angle. Every point in this region is said to lie in the exterior of the angle. Here, points Q and S are in the exterior. - The boundary of ∠ABC is formed by its arms BA and BC. Every point lying on the arms is said to lie on the boundary of the angle. Here, points A, B, C and R lie on the boundary of the angle. - Angle: An angle is made up of two rays starting from a common end point. In this figure, rays BA and BC have one common end point, that is, B. The rays BA and BC are called the arms or sides of the angle. The common end point B is the vertex of the angle.
8.2: Angular Momentum So what makes an object more difficult to turn? The difficulty of pushing an object through space is called inertia, or more precisely translational inertia. Translational inertia is equal to mass. The difficulty of turning an object is called rotational inertia, or sometimes "moment of inertia". It is symbolized by the letter I (for inertia). Try this out. Take a long object like a broomstick or baseball bat. Lay it flat and try to spin it with one hand. This can be difficult. Now instead, stand it upright and just give a twist with your fingers to turn it around. The same object is more difficult to spin one way than the other. Rotational inertia depends on both the mass and the mass distribution of an object. Mass closer to the axis is easier to turn; mass farther from the axis is harder to turn. Angular velocity is defined as how quickly an object is turning and is symbolized by the Greek letter omega: ω. In physics, angular velocity is generally measured in one of two units: - Revolutions per second, or rev/s. A complete rotation or revolution is equivalent to motion through 360 degrees. An object that turns around 30 times in one minute has an angular velocity of 0.5 rev/s. - Radians per second, or rad/s. A radian is the angle subtended at the center of a circle by an arc equal in length to the circle's radius. It takes 2π radians to complete one circle, so 2π radians are equivalent to 1 revolution (360 degrees). A balancing pole works the same way: it increases the rotational inertia of a tightrope walker and decreases his angular acceleration around the rope. Linear momentum is defined as the product of mass and linear velocity (p = mv). In the same way, angular momentum is defined as the product of rotational inertia and angular velocity: L = Iω, where I is the rotational inertia (a term related to the distribution of mass) and the Greek letter omega, ω, is the angular velocity. Just like momentum in a given direction, objects undergoing rotation obey a similar conservation principle, called conservation of angular momentum, which can be expressed as I_i ω_i = I_f ω_f. An important difference is that in linear momentum the inertia is always the same, whereas in angular momentum both the rotational inertia I and the angular velocity ω can change. Perhaps you've noticed that when a spinning figure skater pulls her arms in close to her body, her rotational velocity increases. Or perhaps you've seen a high diver spring off the diving board, tuck his legs close to his body, and spin quickly. What's going on? In each case the person brings more of their mass closer to the axis about which their body spins. The result is that their angular velocity increases. The conservation of angular momentum ensures that, should the mass in the system move closer to the axis of rotation, the system will spin (rotate) more quickly. A classic demonstration of the conservation of angular momentum uses a student holding weights while seated on a rotating stool: as the student moves the weights inward toward his body, his angular velocity increases, but his angular momentum stays constant. - The angular momentum of an object is the product of rotational inertia and angular velocity. - The angular velocity of an object is how quickly the object is turning. - The rotational inertia of an object is the difficulty required to turn it. - You have two coins; one is a standard U.S. quarter, and the other is a coin of equal mass and size, but with a hole cut out of the center.
- Which coin has a higher moment of inertia? - Which coin would have the greater angular momentum if they are both spun at the same angular velocity? - A star is rotating with a period of 10.0 days. It collapses with no loss in mass to a white dwarf with a radius of 0.001 of its original radius. - What is its initial angular velocity? - What is its angular velocity after collapse? - A merry-go-round consists of a uniform solid disc of 225 kg and a radius of 6.0 m. A single 80 kg person stands on the edge when it is coasting at 0.20 revolutions per sec. How fast would the device be rotating after the person has walked 3.5 m toward the center? (The moments of inertia of compound objects add.) The system pictured in the video above (which includes the student, weights, and spinning seat) has an initial rotational inertia Ii and an initial angular velocity ωi = 2.00 rev/s. After the student pulls the weights toward his chest, the final rotational inertia of the system is only 80% of its initial rotational inertia, that is, 0.800 Ii. Assuming that the angular momentum of the system is conserved, what is the final angular velocity of the system? Study Guide: Circular Motion Study Guide Video: Angular Momentum - Overview Interactives: Bowling Alley, Unicycle
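For readers who want to check their work numerically, here is a short Python sketch of the two relations used above, L = Iω and conservation Iiωi = Ifωf. The function names are hypothetical, and the inputs come from the student-and-weights and collapsing-star problems in the text.

import math

def rev_per_s_to_rad_per_s(rev_per_s):
    # One revolution is 2*pi radians.
    return rev_per_s * 2 * math.pi

def final_angular_velocity(I_i, omega_i, I_f):
    # Conservation of angular momentum: I_i * omega_i = I_f * omega_f.
    return I_i * omega_i / I_f

# Student-on-a-stool example: omega_i = 2.00 rev/s, I_f = 0.800 * I_i.
# Only the ratio I_i / I_f matters, so we can set I_i = 1.
omega_f = final_angular_velocity(I_i=1.0, omega_i=2.00, I_f=0.800)
print(f"Student: {omega_f:.2f} rev/s")   # 2.50 rev/s

# Collapsing-star example: I is proportional to M*R^2, so shrinking to
# 0.001 of the original radius (at constant mass) multiplies omega by 10^6.
period_days = 10.0
omega_i = 2 * math.pi / (period_days * 86400)            # rad/s
omega_f = final_angular_velocity(1.0, omega_i, 0.001**2)
print(f"Star: {omega_i:.2e} rad/s -> {omega_f:.2e} rad/s")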
Describing the positions and movements of celestial objects against the vast backdrop of the sky demands precision. In the past, tools like sextants, quadrants, and astrolabes were used to make fairly precise measurements in the sky, to within fractions of an arc minute. Subsequent introductions of the telescope and precise tools like meridian circles or crosshair eyepieces allowed astronomers to map the stars with hundreds of times greater accuracy, but today digital tools are used in conjunction with high-resolution cameras and purpose-built telescopes to measure the position, angular sizes, and apparent distances between objects in the sky. In geography, the Earth is often divided into longitudinal and latitudinal lines to determine position. These lines are measured in degrees, minutes, and seconds. Similarly, in the realm of astronomy, the celestial sphere—a conceptual tool that imagines the universe as a vast sphere with the Earth at its center—uses a similar coordinate system to pinpoint the position of celestial objects. The units of measure are analogous: degrees, arc minutes, and arc seconds, though the application is slightly different. What’s a Degree? A degree (°) is the primary unit used to measure angles on the celestial sphere, reminiscent of the measurement of angles on a plane or degrees of latitude/longitude on the globe. With 360 degrees completing a full circle, it gives observers a broad sense of positioning. For instance, the distance from the horizon straight up to the zenith point overhead measures 90°. Degrees are commonly used both to reference the size of an object and its height above the horizon, as well as positional coordinates. The Orion Nebula (M42) spans about 1 degree across the sky, and typical backyard telescopes can have a maximum low-power field somewhere between 0.5° and 5°, though the majority of typical amateur instruments provide a maximum field of 1-3°. What’s an Arc Minute? Delving deeper into precision, an arc minute (′) or arcmin is 1/60th of a degree. This granularity is useful for more precise positioning and for describing the angular size of objects that span less than a degree in the sky. It is also, coincidentally, about the limit of the resolving power of the human eye. The Moon and Sun, each with an angular diameter hovering around 0.5°, equate to approximately 30 arc minutes, though we usually express it as half a degree. The planets Venus and Jupiter both subtend just under one arc minute in the sky when they are closest to us. What’s an Arc Second? Yet finer detail is captured with an arc second (″), or arcsec for short, which is 1/60th of an arc minute or 1/3600th of a degree. This meticulousness is indispensable for studying details like the separation of close double stars or the angular size of planets. A typical amateur telescope has a resolving power of between 0.5 and 2 arcsec, depending on aperture, though the Earth’s atmosphere often blurs things closer to the latter figure. The planet Neptune spans an angle of 2–2.5 arc seconds, while Jupiter’s Galilean moons range from 1-2 arc seconds. Saturn’s moon Titan, the dwarf planet Ceres, and the asteroid Vesta all appear a little under an arc second in apparent size. For observations demanding even greater precision, SI unit prefixes are used to denote fractions of an arc second. Most commonly, the milliarcsecond (mas) comes into play, representing 1/1000th of an arc second.
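Since all of these units are simple decimal multiples of one another (1° = 60 arcmin = 3600 arcsec; 1 arcsec = 1000 mas), the conversions reduce to multiplication. A minimal Python sketch follows; the helper names are illustrative, and the example values are the Moon's half-degree disc and Betelgeuse's roughly 0.05-arcsec disc mentioned in the text.

ARCMIN_PER_DEG = 60
ARCSEC_PER_DEG = 3600
MAS_PER_ARCSEC = 1000

def deg_to_arcmin(deg):
    return deg * ARCMIN_PER_DEG

def deg_to_arcsec(deg):
    return deg * ARCSEC_PER_DEG

def arcsec_to_mas(arcsec):
    return arcsec * MAS_PER_ARCSEC

print(deg_to_arcmin(0.5))     # 30.0 arc minutes (the Moon's disc)
print(deg_to_arcsec(0.5))     # 1800.0 arc seconds
print(arcsec_to_mas(0.05))    # 50.0 mas (roughly Betelgeuse's disc)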
Techniques like radio interferometry or the parallax method, used to gauge distances to proximate stars, often employ milliarcseconds. To put it in perspective, a star exhibiting a parallax of 1 milliarcsecond lies 1 kiloparsec away, equivalent to about 3,262 light-years; a star showing a parallax of a full arc second would lie just 1 parsec (3.26 light-years) away, roughly a light-year closer than the nearest star to the Sun. The star Betelgeuse spans an angular size of about 50 mas in the sky, making it one of the largest as seen from Earth. Beyond the milliarcsecond, there’s the microarcsecond—a unit 1/1000th of a milliarcsecond. This ultra-fine measure has become increasingly significant with the advent of techniques capable of such resolution, especially in projects aiming to measure the tiny apparent motions of stars. Marking Coordinates in the Sky In astronomy, positioning celestial objects is crucial, and there are two main coordinate systems employed for this purpose: the Horizontal (or Altitude-Azimuth) system and the Equatorial system. Each system is suited to specific applications and is defined by a distinct set of coordinates. The horizontal, or altitude-azimuth, system is based on your local horizon. It’s highly intuitive and changes depending on your location and the time. Altitude, or elevation, is the angle between the object and the observer’s local horizon. It measures how high the object is in the sky. An object right at the horizon has an altitude of 0°, while one directly overhead (at the zenith) has an altitude of 90°. Azimuth is the angle measured clockwise from the north direction to the object’s vertical circle (a circle drawn through the object and the zenith). An object due north has an azimuth of 0°, due east 90°, due south 180°, and due west 270°. Altitude-azimuth coordinates are chiefly used for finding the current position of an object you are trying to aim your telescope at; analog or digital setting circles are often employed for this task. These coordinates are also useful if you are waiting for an object to clear an obstruction, such as a building or trees, in which case you can measure the altitude and azimuth ranges that are obstructed and plan observations accordingly. The equatorial system is based on the celestial equator and the vernal equinox. It remains consistent regardless of the observer’s location, making it ideal for indicating the location of objects in the sky over time, like points on a map. It is the coordinate system used for most astronomy, regardless of whether your telescope uses an alt-azimuth or equatorial mounting. Analogous to longitude on Earth, Right Ascension (RA) measures the eastward angle from the vernal equinox (a specific point in the sky defined by where the Sun crosses the celestial equator during the March equinox). Unlike longitude, RA is usually expressed in hours, minutes, and seconds, reflecting the Earth’s rotation: 24 hours equate to the 360° of the celestial sphere, so one hour of RA corresponds to 15°. An object with an RA of 3 hours, for instance, is 45° from the vernal equinox. Analogous to latitude on Earth, declination (Dec) measures how far north or south a celestial object is from the celestial equator. It’s expressed in degrees, where the celestial equator is 0°, the North Celestial Pole is +90°, and the South Celestial Pole is -90°. Latitude and longitude coordinates are traditionally described using degrees, minutes, and seconds. However, especially in the digital age with GPS technology, it’s become common to use decimal degrees. For instance, instead of saying 40° 45′ 30″ N, one might see it as 40.7583° N.
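The parallax-distance relation implied above is that a star's distance in parsecs is the reciprocal of its parallax in arc seconds, with one parsec equal to about 3.26 light-years. A small Python sketch (hypothetical helper names, using the 1 mas example from the text):

LY_PER_PARSEC = 3.2616

def distance_parsecs(parallax_mas):
    # Distance in parsecs from a parallax given in milliarcseconds:
    # d[pc] = 1 / p[arcsec].
    parallax_arcsec = parallax_mas / 1000.0
    return 1.0 / parallax_arcsec

d_pc = distance_parsecs(1.0)       # a star with a 1 mas parallax
print(d_pc)                        # 1000.0 pc = 1 kiloparsec
print(d_pc * LY_PER_PARSEC)        # ~3262 light-years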
The same goes for positions in the sky. For expressing equatorial coordinates, you might say the star Vega is located at RA 18h 36m 56s and Dec +38° 47′ 1″, for instance, but you could also write it as RA 18.6156h and Dec +38.7836°, though this form is less popular. In contrast, altitude and azimuth coordinates are usually expressed in decimal format rather than in minutes and seconds.
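Converting sexagesimal coordinates to decimal form is a matter of dividing the minutes by 60 and the seconds by 3600. The Python sketch below (illustrative function names) checks the arithmetic against Vega's position as quoted above.

def hms_to_decimal_hours(h, m, s):
    # Right ascension: hours, minutes, seconds -> decimal hours.
    return h + m / 60.0 + s / 3600.0

def dms_to_decimal_degrees(d, m, s):
    # Declination: degrees, arc minutes, arc seconds -> decimal degrees.
    # Pass d as negative for southern declinations.
    sign = -1.0 if d < 0 else 1.0
    return sign * (abs(d) + m / 60.0 + s / 3600.0)

ra = hms_to_decimal_hours(18, 36, 56)
dec = dms_to_decimal_degrees(38, 47, 1)
print(f"RA  = {ra:.4f} h")          # 18.6156 h
print(f"Dec = {dec:+.4f} deg")      # +38.7836 deg
# Since 24 h of RA span 360 degrees, 1 h = 15 degrees:
print(f"RA  = {ra * 15:.3f} deg")   # ~279.233 deg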
Democracy (Greek: δημοκρατία, dēmokratía, "rule by [the] people", from dêmos, "people", and krátos, "rule") is a form of government in which the people have the authority to choose their governing legislation. Who people are and how authority is shared among them are core issues for democratic theory, development and constitution. Some cornerstones of these issues are freedom of assembly and speech, inclusiveness and equality, membership, consent, voting, right to life and minority rights. Generally, there are two types of democracy: direct and representative. In a direct democracy, the people directly deliberate and decide on legislation. In a representative democracy, the people elect representatives to deliberate and decide on legislation, as in parliamentary or presidential democracy. Liquid democracy combines elements of these two basic types. However, the noun "democracy" has, over time, been modified by more than 3,500 adjectives, which suggests that it may have types that elude this simple duality. The most common day-to-day decision making approach of democracies has been majority rule, though other decision making approaches like supermajority and consensus have been equally integral to democracies. They serve the crucial purpose of inclusiveness and broader legitimacy on sensitive issues, counterbalancing majoritarianism, and therefore mostly take precedence on a constitutional level. In the common variant of liberal democracy, the powers of the majority are exercised within the framework of a representative democracy, but the constitution limits the majority and protects the minority, usually through the enjoyment by all of certain individual rights, e.g. freedom of speech or freedom of association. Besides these general types of democracy, there have been a wealth of further types (see below). Republics, though often associated with democracy because of the shared principle of rule by consent of the governed, are not necessarily democracies, as republicanism does not specify how the people are to rule. Democracy is a system of processing conflicts in which outcomes depend on what participants do, but no single force controls what occurs and its outcomes. The uncertainty of outcomes is inherent in democracy. Democracy makes all forces struggle repeatedly to realize their interests and devolves power from groups of people to sets of rules. Western democracy, as distinct from that which existed in pre-modern societies, is generally considered to have originated in city-states such as Classical Athens and the Roman Republic, where various schemes and degrees of enfranchisement of the free male population were observed before the form disappeared in the West at the beginning of late antiquity. The English word dates back to the 16th century, from the older Middle French and Middle Latin equivalents. According to American political scientist Larry Diamond, democracy consists of four key elements: a political system for choosing and replacing the government through free and fair elections; the active participation of the people, as citizens, in politics and civic life; protection of the human rights of all citizens; and a rule of law in which the laws and procedures apply equally to all citizens. Todd Landman, nevertheless, draws our attention to the fact that democracy and human rights are two different concepts and that "there must be greater specificity in the conceptualisation and operationalisation of democracy and human rights".
The term appeared in the 5th century BC to denote the political systems then existing in Greek city-states, notably Athens, and means "rule of the people", in contrast to aristocracy (aristokratía), meaning "rule of an elite". While theoretically these definitions are in opposition, in practice the distinction has been blurred historically. The political system of Classical Athens, for example, granted democratic citizenship to free men and excluded slaves and women from political participation. In virtually all democratic governments throughout ancient and modern history, democratic citizenship consisted of an elite class, until full enfranchisement was won for all adult citizens in most modern democracies through the suffrage movements of the 19th and 20th centuries. Democracy contrasts with forms of government where power is either held by an individual, as in an absolute monarchy, or where power is held by a small number of individuals, as in an oligarchy. Nevertheless, these oppositions, inherited from Greek philosophy, are now ambiguous because contemporary governments have mixed democratic, oligarchic and monarchic elements. Karl Popper defined democracy in contrast to dictatorship or tyranny, thus focusing on opportunities for the people to control their leaders and to oust them without the need for a revolution. No consensus exists on how to define democracy, but legal equality, political freedom and rule of law have been identified as important characteristics. These principles are reflected in all eligible citizens being equal before the law and having equal access to legislative processes. For example, in a representative democracy, every vote has equal weight, no unreasonable restrictions can apply to anyone seeking to become a representative, and the freedom of its eligible citizens is secured by legitimised rights and liberties which are typically protected by a constitution. Other uses of "democracy" include that of direct democracy. One theory holds that democracy requires three fundamental principles: upward control (sovereignty residing at the lowest levels of authority), political equality, and social norms by which individuals and institutions only consider acceptable acts that reflect the first two principles of upward control and political equality. The term "democracy" is sometimes used as shorthand for liberal democracy, which is a variant of representative democracy that may include elements such as political pluralism; equality before the law; the right to petition elected officials for redress of grievances; due process; civil liberties; human rights; and elements of civil society outside the government. Roger Scruton argues that democracy alone cannot provide personal and political freedom unless the institutions of civil society are also present. In some countries, notably in the United Kingdom, which originated the Westminster system, the dominant principle is that of parliamentary sovereignty, while maintaining judicial independence. In the United States, separation of powers is often cited as a central attribute. In India, parliamentary sovereignty is subject to the Constitution of India, which includes judicial review. Though the term "democracy" is typically used in the context of a political state, the principles also are applicable to private organisations. There are many decision making methods used in democracies, but majority rule is the dominant form.
Without compensation, such as legal protections of individual or group rights, political minorities can be oppressed by the "tyranny of the majority". Majority rule is a competitive approach, opposed to consensus democracy, creating the need that elections, and generally deliberation, be substantively and procedurally "fair", i.e., just and equitable. In some countries, freedom of political expression, freedom of speech, freedom of the press, and internet democracy are considered important to ensure that voters are well informed, enabling them to vote according to their own interests. It has also been suggested that a basic feature of democracy is the capacity of all voters to participate freely and fully in the life of their society. With its emphasis on notions of social contract and the collective will of all the voters, democracy can also be characterised as a form of political collectivism because it is defined as a form of government in which all eligible citizens have an equal say in lawmaking. While representative democracy is sometimes equated with the republican form of government, the term "republic" classically has encompassed both democracies and aristocracies. Many democracies are constitutional monarchies, such as the United Kingdom. Historically, democracies and republics have been rare. Republican theorists linked democracy to small size: as political units grew in size, the likelihood increased that the government would turn despotic. At the same time, small political units were vulnerable to conquest. Montesquieu wrote, "If a republic be small, it is destroyed by a foreign force; if it be large, it is ruined by an internal imperfection." According to Johns Hopkins University political scientist Daniel Deudney, the creation of the United States, with its large size and its system of checks and balances, was a solution to the dual problems of size. Retrospectively, different polities outside of declared democracies have been described as proto-democratic (see History of democracy). The term "democracy" first appeared in ancient Greek political and philosophical thought in the city-state of Athens during classical antiquity. The word comes from demos, "common people", and kratos, "strength". Led by Cleisthenes, Athenians established what is generally held as the first democracy in 508-507 BC. Cleisthenes is referred to as "the father of Athenian democracy". Athenian democracy took the form of a direct democracy, and it had two distinguishing features: the random selection of ordinary citizens to fill the few existing government administrative and judicial offices, and a legislative assembly consisting of all Athenian citizens. All eligible citizens were allowed to speak and vote in the assembly, which set the laws of the city-state. However, Athenian citizenship excluded women, slaves, foreigners (métoikoi), and men under 20 years of age. Owning land was not a requirement for citizenship, but citizenship did allow one to purchase land. The exclusion of large parts of the population from the citizen body is closely related to the ancient understanding of citizenship. In most of antiquity the benefit of citizenship was tied to the obligation to fight war campaigns.
Athenian democracy was not only direct in the sense that decisions were made by the assembled people, but also the most direct in the sense that the people, through the assembly, boule and courts of law, controlled the entire political process, and a large proportion of citizens were involved constantly in the public business. Even though the rights of the individual were not secured by the Athenian constitution in the modern sense (the ancient Greeks had no word for "rights"), the Athenian citizens did not enjoy their liberties in opposition to the government, nor were they subject to the rule of another person. Many intellectuals of the time voiced criticism and mockery of Athenian democracy and its shortcomings, both as a government of its citizens and as an imperial power (Thucydides, Xenophon, Aristophanes, etc.). Range voting appeared in Sparta as early as 700 BC. The Apella was an assembly of the people, held once a month, in which every male citizen of at least 30 years of age could participate. In the Apella, Spartans elected leaders and cast votes by range voting and shouting. Aristotle called this "childish", as compared with the stone voting ballots used by the Athenians. Sparta adopted it because of its simplicity, and to prevent the biased voting, vote-buying, or cheating that was predominant in early democratic elections. Vaishali, capital city of the Vajjian Confederacy (Vrijji mahajanapada) in India, is also considered one of the first examples of a republic, around the 6th century BCE. Even though the Roman Republic contributed significantly to many aspects of democracy, only a minority of Romans were citizens with votes in elections for representatives. The votes of the powerful were given more weight through a system of gerrymandering, so most high officials, including members of the Senate, came from a few wealthy and noble families. In addition, the overthrow of the Roman Kingdom was the first case in the Western world of a polity being formed with the explicit purpose of being a republic, although it didn't have much of a democracy. The Roman model of governance inspired many political thinkers over the centuries, and today's modern representative democracies imitate the Roman more than the Greek models, because Rome was a state in which supreme power was held by the people and their elected representatives, and which had an elected or nominated leader. Other cultures, such as the Iroquois Nation in the Americas between around 1450 and 1600 AD, also developed a form of democratic society before they came in contact with the Europeans. This indicates that forms of democracy may have been invented in other societies around the world. While most regions in Europe during the Middle Ages were ruled by clergy or feudal lords, there existed various systems involving elections or assemblies (although often only involving a small part of the population). These included: The Kouroukan Fouga divided the Mali Empire into ruling clans (lineages) that were represented at a great assembly called the Gbara. However, the charter made Mali more similar to a constitutional monarchy than a democratic republic.
The Parliament of England had its roots in the restrictions on the power of kings written into Magna Carta (1215), which explicitly protected certain rights of the King's subjects and implicitly supported what became the English writ of habeas corpus, safeguarding individual freedom against unlawful imprisonment with right to appeal. The first representative national assembly in England was Simon de Montfort's Parliament in 1265. The emergence of petitioning provides some of the earliest evidence of parliament being used as a forum to address the general grievances of ordinary people. However, the power to call parliament remained at the pleasure of the monarch. Studies have linked the emergence of parliamentary institutions in Europe during the medieval period to urban agglomeration and the creation of new classes, such as artisans, as well as the presence of nobility and religious elites. Scholars have also linked the emergence of representative government to Europe's relative political fragmentation. New York University political scientist David Stasavage links the fragmentation of Europe, and its subsequent democratization, to the manner in which the Roman Empire collapsed: Roman territory was conquered by small fragmented groups of Germanic tribes, thus leading to the creation of small political units where rulers were relatively weak and needed the consent of the governed to ward off foreign threats. In 17th century England, there was renewed interest in Magna Carta. The Parliament of England passed the Petition of Right in 1628, which established certain liberties for subjects. The English Civil War (1642-1651) was fought between the King and an oligarchic but elected Parliament, during which the idea of a political party took form with groups debating rights to political representation during the Putney Debates of 1647. Subsequently, the Protectorate (1653-59) and the English Restoration (1660) restored more autocratic rule, although Parliament passed the Habeas Corpus Act in 1679, which strengthened the convention that forbade detention lacking sufficient cause or evidence. After the Glorious Revolution of 1688, the Bill of Rights was enacted in 1689, which codified certain rights and liberties and is still in effect. The Bill set out the requirement for regular elections, rules for freedom of speech in Parliament and limited the power of the monarch, ensuring that, unlike much of Europe at the time, royal absolutism would not prevail. Economic historians Douglass North and Barry Weingast have characterized the institutions implemented in the Glorious Revolution as a resounding success in terms of restraining the government and ensuring protection for property rights. In the Cossack republics of Ukraine in the 16th and 17th centuries, the Cossack Hetmanate and Zaporizhian Sich, the holder of the highest post of Hetman was elected by representatives from the country's districts. In North America, representative government began in Jamestown, Virginia, with the election of the House of Burgesses (forerunner of the Virginia General Assembly) in 1619. English Puritans who migrated from 1620 established colonies in New England whose local governance was democratic and which contributed to the democratic development of the United States; although these local assemblies had some small amounts of devolved power, the ultimate authority was held by the Crown and the English Parliament.
The Puritans (Pilgrim Fathers), Baptists, and Quakers who founded these colonies applied the democratic organisation of their congregations also to the administration of their communities in worldly matters. The first Parliament of Great Britain was established in 1707, after the merger of the Kingdom of England and the Kingdom of Scotland under the Acts of Union. Although the monarch increasingly became a figurehead, only a small minority actually had a voice; Parliament was elected by only a few percent of the population (less than 3% as late as 1780). During the Age of Liberty in Sweden (1718-1772), civil rights were expanded and power shifted from the monarch to parliament. The taxed peasantry was represented in parliament, although with little influence, but commoners without taxed property had no suffrage. The creation of the short-lived Corsican Republic in 1755 marked the first nation in modern history to adopt a democratic constitution (all men and women above the age of 25 could vote). This Corsican Constitution was the first based on Enlightenment principles and included female suffrage, something that was not granted in most other democracies until the 20th century. In the American colonial period before 1776, and for some time after, often only adult white male property owners could vote; enslaved Africans, most free black people and most women were not extended the franchise. This changed state by state, beginning with the republican State of New Connecticut, soon after called Vermont, which, on declaring independence of Great Britain in 1777, adopted a constitution modelled on Pennsylvania's with citizenship and democratic suffrage for males with or without property, and went on to abolish slavery. On the American frontier, democracy became a way of life, with more widespread social, economic and political equality. Although the founding fathers did not describe their creation as a democracy, they shared a determination to root the American experiment in the principles of natural freedom and equality. The American Revolution led to the adoption of the United States Constitution in 1787, the oldest surviving, still active, codified governmental constitution. The Constitution provided for an elected government and protected civil rights and liberties for some, but did not end slavery nor extend voting rights in the United States, instead leaving the issue of suffrage to the individual states. Generally, suffrage was limited to white male property owners and taxpayers, of whom between 60% and 90% were eligible to vote by the end of the 1780s. The Bill of Rights in 1791 set limits on government power to protect personal freedoms, but had little impact on judgements by the courts for the first 130 years after ratification. The Polish-Lithuanian Constitution of 3 May 1791 (Polish: Konstytucja Trzeciego Maja) is called "the first constitution of its kind in Europe" by historian Norman Davies. Short-lived due to Russian, Prussian, and Austrian aggression, it was instituted by the Government Act (Polish: Ustawa rządowa) adopted on that date by the Sejm (parliament) of the Polish-Lithuanian Commonwealth. The first page of the original manuscript, registered on 5 May 1791, survives; the document was created between 6 October 1788 and 3 May 1791, ratified on 3 May 1791, and is held in the Central Archives of Historical Records, Warsaw.
The Constitution of 3 May 1791 (Polish: Ustawa Rządowa, "Governance Act") was a constitution adopted by the Great Sejm ("Four-Year Sejm", meeting in 1788-92) for the Polish-Lithuanian Commonwealth, a dual monarchy comprising the Crown of the Kingdom of Poland and the Grand Duchy of Lithuania. The Constitution was designed to correct the Commonwealth's political flaws and had been preceded by a period of agitation for--and gradual introduction of--reforms, beginning with the Convocation Sejm of 1764 and the consequent election that year of Stanisław August Poniatowski as the Commonwealth's last king. The Constitution sought to implement a more effective constitutional monarchy, introduced political equality between townspeople and nobility, and placed the peasants under the protection of the government, mitigating the worst abuses of serfdom. It banned pernicious parliamentary institutions such as the liberum veto, which had put the Sejm at the mercy of any single deputy, who could veto and thus undo all the legislation that had been adopted by that Sejm. The Commonwealth's neighbours reacted with hostility to the adoption of the Constitution. King Frederick William II broke Prussia's alliance with the Polish-Lithuanian Commonwealth and joined with Catherine the Great's Imperial Russia and the Targowica Confederation of anti-reform Polish magnates to defeat the Commonwealth in the Polish-Russian War of 1792. The 1791 Constitution was in force for less than 19 months. It was declared null and void by the Grodno Sejm that met in 1793, though the Sejm's legal power to do so was questionable. The Second and Third Partitions of Poland (1793, 1795) ultimately ended Poland's sovereign existence until the close of World War I in 1918. Over those 123 years, the 1791 Constitution helped keep alive Polish aspirations for the eventual restoration of the country's sovereignty. In the words of two of its principal authors, Ignacy Potocki and Hugo Kołłątaj, the 1791 Constitution was "the last will and testament of the expiring Homeland." The Constitution of 3 May 1791 combined a monarchic republic with a clear division of executive, legislative, and judiciary powers. It is generally considered Europe's first, and the world's second, modern written national constitution, after the United States Constitution that had come into force in 1789. In 1789, Revolutionary France adopted the Declaration of the Rights of Man and of the Citizen and, although short-lived, the National Convention was elected by all men in 1792. However, in the early 19th century, little of democracy--as theory, practice, or even as word--remained in the North Atlantic world. During this period, slavery remained a social and economic institution in places around the world. This was particularly the case in the United States, and especially in the last fifteen slave states that kept slavery legal in the American South until the Civil War. A variety of organisations were established advocating the movement of black people from the United States to locations where they would enjoy greater freedom and equality. The United Kingdom's Slave Trade Act 1807 banned the trade across the British Empire, which was enforced internationally by the Royal Navy under treaties Britain negotiated with other nations. As the voting franchise in the U.K. was increased, it also was made more uniform in a series of reforms beginning with the Reform Act 1832, although the United Kingdom did not become a complete democracy until well into the 20th century.
In 1833, the United Kingdom passed the Slavery Abolition Act, which took effect across the British Empire. Universal male suffrage was established in France in March 1848 in the wake of the French Revolution of 1848. In 1848, several revolutions broke out in Europe as rulers were confronted with popular demands for liberal constitutions and more democratic government. In the 1860 United States Census, the slave population in the United States had grown to four million, and in Reconstruction after the Civil War (late 1860s), the newly freed slaves became citizens, with a nominal right to vote for the men among them. Full enfranchisement of citizens was not secured until after the Civil Rights Movement gained passage by the United States Congress of the Voting Rights Act of 1965. In 1876 the Ottoman Empire transitioned from an absolute monarchy to a constitutional one, and held two elections the next year to elect members to its newly formed parliament. Provisional Electoral Regulations were issued on 29 October 1876, stating that the elected members of the Provincial Administrative Councils would elect members to the first Parliament. On 24 December a new constitution was promulgated, which provided for a bicameral Parliament with a Senate appointed by the Sultan and a popularly elected Chamber of Deputies. Only men above the age of 30 who were competent in Turkish and had full civil rights were allowed to stand for election. Reasons for disqualification included holding dual citizenship, being employed by a foreign government, being bankrupt, being employed as a servant, or having "notoriety for ill deeds". Full universal suffrage was achieved in 1934. 20th-century transitions to liberal democracy have come in successive "waves of democracy", variously resulting from wars, revolutions, decolonisation, and religious and economic circumstances. Global waves of "democratic regression", reversing democratisation, have also occurred, in the 1920s and 1930s, in the 1960s and 1970s, and in the 2010s. In the 1920s democracy flourished and women's suffrage advanced, but the Great Depression brought disenchantment, and most of the countries of Europe, Latin America, and Asia turned to strong-man rule or dictatorships. Fascism and dictatorships flourished in Nazi Germany, Italy, Spain and Portugal, as well as non-democratic governments in the Baltics, the Balkans, Brazil, Cuba, China, and Japan, among others. World War II brought a definitive reversal of this trend in western Europe. The democratisation of the American, British, and French sectors of occupied Germany, of Austria, Italy, and occupied Japan served as a model for the later theory of government change. However, most of Eastern Europe, including the Soviet sector of Germany, fell into the non-democratic Soviet bloc. The war was followed by decolonisation, and again most of the new independent states had nominally democratic constitutions. India emerged as the world's largest democracy and continues to be so. Countries that were once part of the British Empire often adopted the British Westminster system. By 1960, the vast majority of country-states were nominally democracies, although most of the world's population lived in nations that experienced sham elections and other forms of subterfuge (particularly in "Communist" nations and the former colonies). A subsequent wave of democratisation brought substantial gains toward true liberal democracy for many nations.
Spain, Portugal (1974), and several of the military dictatorships in South America returned to civilian rule in the late 1970s and early 1980s (Argentina in 1983, Bolivia and Uruguay in 1984, Brazil in 1985, and Chile in the early 1990s). This was followed by nations in East and South Asia by the mid-to-late 1980s. Economic malaise in the 1980s, along with resentment of Soviet oppression, contributed to the collapse of the Soviet Union, the associated end of the Cold War, and the democratisation and liberalisation of the former Eastern bloc countries. The most successful of the new democracies were those geographically and culturally closest to western Europe, and they are now members or candidate members of the European Union. In 1986, after the toppling of the most prominent Asian dictatorship, the only democratic state of its kind at the time emerged in the Philippines with the rise of Corazon Aquino, who would later be known as the Mother of Asian Democracy. The liberal trend spread to some nations in Africa in the 1990s, most prominently in South Africa. Some recent examples of attempts at liberalisation include the Indonesian Revolution of 1998, the Bulldozer Revolution in Yugoslavia, the Rose Revolution in Georgia, the Orange Revolution in Ukraine, the Cedar Revolution in Lebanon, the Tulip Revolution in Kyrgyzstan, and the Jasmine Revolution in Tunisia. According to Freedom House, in 2007 there were 123 electoral democracies (up from 40 in 1972). According to the World Forum on Democracy, electoral democracies now represent 120 of the 192 existing countries and constitute 58.2 percent of the world's population. At the same time, liberal democracies, i.e. countries Freedom House regards as free and respectful of basic human rights and the rule of law, number 85 and represent 38 percent of the global population. Most electoral democracies continue to exclude those younger than 18 from voting. The voting age has been lowered to 16 for national elections in a number of countries, including Brazil, Austria, Cuba, and Nicaragua. In California, a 2004 proposal to permit a quarter vote at 14 and a half vote at 16 was ultimately defeated. In 2008, the German parliament proposed but shelved a bill that would grant the vote to each citizen at birth, to be used by a parent until the child claims it for themselves. According to Freedom House, starting in 2005, there have been eleven consecutive years in which declines in political rights and civil liberties throughout the world have outnumbered improvements, as populist and nationalist political forces have gained ground everywhere from Poland (under the Law and Justice Party) to the Philippines (under Rodrigo Duterte). In a Freedom House report released in 2018, Democracy Scores for most countries declined for the 12th consecutive year. The Christian Science Monitor reported that nationalist and populist political ideologies were gaining ground, at the expense of rule of law, in countries like Poland, Turkey and Hungary. For example, in Poland, the President appointed 27 new Supreme Court judges over objections from the European Union. In Turkey, thousands of judges were removed from their positions in a government crackdown following a failed coup attempt. Dieter Fuchs and Edeltraud Roller suggest that, in order to truly measure the quality of democracy, objective measurements need to be complemented by "subjective measurements based on the perspective of citizens".
Similarly, Quinton Mayne and Brigitte Geißel also argue that the quality of democracy does not depend exclusively on the performance of institutions, but also on the citizens' own dispositions and commitment. Because democracy is an overarching concept that includes the functioning of diverse institutions which are not easy to measure, strong limitations exist in quantifying and econometrically measuring the potential effects of democracy or its relationship with other phenomena--whether inequality, poverty, or education. Given the constraints in acquiring reliable data with within-country variation on aspects of democracy, academics have largely studied cross-country variations. Yet variations between democratic institutions are very large across countries, which constrains meaningful comparisons using statistical approaches. Since democracy is typically measured aggregately as a macro variable using a single observation for each country and each year, studying democracy faces a range of econometric constraints and is limited to basic correlations. Cross-country comparison of a composite, comprehensive and qualitative concept like democracy may thus not always be, for many purposes, methodologically rigorous or useful. Democracy has taken a number of forms, both in theory and practice. Some varieties of democracy provide better representation and more freedom for their citizens than others. However, if any democracy is not structured to prohibit the government from excluding the people from the legislative process, or any branch of government from altering the separation of powers in its favour, then a branch of the system can accumulate too much power and destroy the democracy. The following kinds of democracy are not exclusive of one another: many specify details of aspects that are independent of one another and can co-exist in a single system. Several variants of democracy exist, but there are two basic forms, both of which concern how the whole body of all eligible citizens executes its will. One form of democracy is direct democracy, in which all eligible citizens participate actively in political decision making, for example by voting on policy initiatives directly. In most modern democracies, the whole body of eligible citizens remains the sovereign power, but political power is exercised indirectly through elected representatives; this is called a representative democracy. Direct democracy is a political system where the citizens participate in the decision-making personally, rather than relying on intermediaries or representatives. The use of a lot system, a characteristic of Athenian democracy, is unique to direct democracies. In this system, important governmental and administrative tasks are performed by citizens chosen by lottery. A direct democracy gives the voting population direct decision-making power over such matters. Within modern-day representative governments, certain electoral tools like referendums, citizens' initiatives and recall elections are referred to as forms of direct democracy. However, some advocates of direct democracy argue for local assemblies of face-to-face discussion. Direct democracy as a government system currently exists in the Swiss cantons of Appenzell Innerrhoden and Glarus, the Rebel Zapatista Autonomous Municipalities, communities affiliated with the CIPO-RFM, the Bolivian city councils of FEJUVE, and Kurdish cantons of Rojava. Representative democracy involves the election of government officials by the people being represented.
If the head of state is also democratically elected then it is called a democratic republic. The most common mechanisms involve election of the candidate with a majority or a plurality of the votes. Most western countries have representative systems. Representatives may be elected by a particular district (or constituency), or may represent the entire electorate through proportional systems, with some using a combination of the two. Some representative democracies also incorporate elements of direct democracy, such as referendums. A characteristic of representative democracy is that while the representatives are elected by the people to act in the people's interest, they retain the freedom to exercise their own judgement as to how best to do so. Such features have drawn criticism of representative democracy, with critics pointing out the tensions between representation mechanisms and democracy itself. Parliamentary democracy is a representative democracy where government is appointed by, or can be dismissed by, representatives, as opposed to "presidential rule", wherein the president is both head of state and head of government and is elected by the voters. Under a parliamentary democracy, government is exercised by delegation to an executive ministry and subject to ongoing review, checks and balances by the legislative parliament elected by the people. Parliamentary systems have the right to dismiss a Prime Minister at any point they feel he or she is not meeting the expectations of the legislature. This is done through a vote of no confidence, in which the legislature decides whether or not to remove the Prime Minister from office by majority support for his or her dismissal. In some countries, the Prime Minister can also call an election whenever he or she so chooses, and typically the Prime Minister will hold an election when in good favour with the public, so as to get re-elected. In other parliamentary democracies, extra elections are virtually never held, a minority government being preferred until the next ordinary elections. An important feature of parliamentary democracy is the concept of the "loyal opposition". The essence of the concept is that the second largest political party (or coalition) opposes the governing party (or coalition), while still remaining loyal to the state and its democratic principles. Presidential democracy is a system where the public elects the president through free and fair elections. The president serves as both the head of state and head of government, controlling most of the executive powers. The president serves for a specific term and cannot exceed that amount of time. Elections typically have a fixed date and aren't easily changed. The president has direct control over the cabinet, specifically appointing the cabinet members. The president cannot be easily removed from office by the legislature, but he or she cannot remove members of the legislative branch any more easily. This provides some measure of separation of powers. In consequence, however, the president and the legislature may end up in the control of separate parties, allowing one to block the other and thereby interfere with the orderly operation of the state. This may be the reason why presidential democracy is not very common outside the Americas, Africa, and Central and Southeast Asia. A semi-presidential system is a system of democracy in which the government includes both a prime minister and a president.
The particular powers held by the prime minister and president vary by country. Some modern democracies that are predominantly representative in nature also heavily rely upon forms of political action that are directly democratic. These democracies, which combine elements of representative democracy and direct democracy, are termed hybrid democracies, semi-direct democracies or participatory democracies. Examples include Switzerland and some U.S. states, where frequent use is made of referendums and initiatives. The Swiss confederation is a semi-direct democracy. At the federal level, citizens can propose changes to the constitution (federal popular initiative) or ask for a referendum to be held on any law voted by the parliament. Between January 1995 and June 2005, Swiss citizens voted 31 times, to answer 103 questions (during the same period, French citizens participated in only two referendums), although in the past 120 years fewer than 250 initiatives have been put to referendum. The populace has been conservative, approving only about 10% of the initiatives put before them; in addition, they have often opted for a version of the initiative rewritten by government. In the United States, no mechanism of direct democracy exists at the federal level, but over half of the states and many localities provide for citizen-sponsored ballot initiatives (also called "ballot measures", "ballot questions" or "propositions"), and the vast majority of states allow for referendums. Examples include the extensive use of referendums in the US state of California, a state with more than 20 million voters. In New England, town meetings are often used, especially in rural areas, to manage local government. This creates a hybrid form of government, with a local direct democracy and a representative state government. For example, most Vermont towns hold annual town meetings in March in which town officers are elected, budgets for the town and schools are voted on, and citizens have the opportunity to speak and be heard on political matters. Many countries such as the United Kingdom, Spain, the Netherlands, Belgium, the Scandinavian countries, Thailand, Japan and Bhutan turned powerful monarchs into constitutional monarchs with limited or, often gradually, merely symbolic roles. For example, in the predecessor states to the United Kingdom, constitutional monarchy began to emerge and has continued uninterrupted since the Glorious Revolution of 1688 and the passage of the Bill of Rights 1689. In other countries, the monarchy was abolished along with the aristocratic system (as in France, China, Russia, Germany, Austria, Hungary, Italy, Greece and Egypt). An elected president, with or without significant powers, became the head of state in these countries. Elite upper houses of legislatures, which often had lifetime or hereditary tenure, were common in many nations. Over time, these either had their powers limited (as with the British House of Lords) or else became elective and remained powerful (as with the Australian Senate). The term republic has many different meanings, but today often refers to a representative democracy with an elected head of state, such as a president, serving for a limited term, in contrast to states with a hereditary monarch as a head of state, even if these states also are representative democracies with an elected or appointed head of government such as a prime minister.
The Founding Fathers of the United States rarely praised and often criticised democracy, which in their time tended to mean direct democracy, often without the protection of a constitution enshrining basic rights. James Madison argued, especially in The Federalist No. 10, that what distinguished a direct democracy from a republic was that the former became weaker as it got larger and suffered more violently from the effects of faction, whereas a republic could get stronger as it got larger and could combat faction by its very structure. What was critical to American values, John Adams insisted, was that the government be "bound by fixed laws, which the people have a voice in making, and a right to defend." As Benjamin Franklin was leaving the Constitutional Convention, a woman asked him, "Well, Doctor, what have we got--a republic or a monarchy?" He replied, "A republic--if you can keep it." A liberal democracy is a representative democracy in which the ability of the elected representatives to exercise decision-making power is subject to the rule of law, and moderated by a constitution or laws that emphasise the protection of the rights and freedoms of individuals, and which places constraints on the leaders and on the extent to which the will of the majority can be exercised against the rights of minorities (see civil liberties). In a liberal democracy, it is possible for some large-scale decisions to emerge from the many individual decisions that citizens are free to make. In other words, citizens can "vote with their feet" or "vote with their dollars", resulting in significant informal government-by-the-masses that exercises many "powers" associated with formal government elsewhere. Socialist thought has several different views on democracy. Social democracy, democratic socialism, and the dictatorship of the proletariat (usually exercised through Soviet democracy) are some examples. Many democratic socialists and social democrats believe in a form of participatory, industrial, economic and/or workplace democracy combined with a representative democracy. Within Marxist orthodoxy there is a hostility to what is commonly called "liberal democracy", which is simply referred to as parliamentary democracy because of its often centralised nature. Because of orthodox Marxists' desire to eliminate the political elitism they see in capitalism, Marxists, Leninists and Trotskyists believe in direct democracy implemented through a system of communes (which are sometimes called soviets). This system ultimately manifests itself as council democracy and begins with workplace democracy. On this view, democracy cannot consist solely of elections that are nearly always fictitious and managed by rich landowners and professional politicians. Anarchists are split in this domain, depending on whether they believe that majority rule is tyrannic or not. To many anarchists, the only form of democracy considered acceptable is direct democracy. Pierre-Joseph Proudhon argued that the only acceptable form of direct democracy is one in which it is recognised that majority decisions are not binding on the minority, even when unanimous. However, anarcho-communist Murray Bookchin criticised individualist anarchists for opposing democracy, and said "majority rule" is consistent with anarchism.
Some anarcho-communists oppose the majoritarian nature of direct democracy, feeling that it can impede individual liberty, and opt in favour of a non-majoritarian form of consensus democracy, similar to Proudhon's position on direct democracy. Henry David Thoreau, who did not self-identify as an anarchist but argued for "a better government" and is cited as an inspiration by some anarchists, argued that people should not be in the position of ruling others or being ruled when there is no consent. Sometimes called "democracy without elections", sortition chooses decision makers via a random process. The intention is that those chosen will be representative of the opinions and interests of the people at large, and be more fair and impartial than an elected official. The technique was in widespread use in Athenian democracy and Renaissance Florence, and is still used in modern jury selection. A consociational democracy allows for simultaneous majority votes in two or more ethno-religious constituencies, and policies are enacted only if they gain majority support from both or all of them. A consensus democracy, in contrast, would not be dichotomous. Instead, decisions would be based on a multi-option approach, and policies would be enacted if they gained sufficient support, either in a purely verbal agreement or via a consensus vote--a multi-option preference vote. If the threshold of support were at a sufficiently high level, minorities would be, as it were, protected automatically. Furthermore, any voting would be ethno-colour blind. Qualified majority voting is designed by the Treaty of Rome to be the principal method of reaching decisions in the European Council of Ministers. This system allocates votes to member states in part according to their population, but heavily weighted in favour of the smaller states. This might be seen as a form of representative democracy, but representatives to the Council might be appointed rather than directly elected. Inclusive democracy is a political theory and political project that aims for direct democracy in all fields of social life: political democracy in the form of face-to-face assemblies which are confederated, economic democracy in a stateless, moneyless and marketless economy, democracy in the social realm, i.e. self-management in places of work and education, and ecological democracy which aims to reintegrate society and nature. The theoretical project of inclusive democracy emerged from the work of political philosopher Takis Fotopoulos in "Towards An Inclusive Democracy" and was further developed in the journal Democracy & Nature and its successor The International Journal of Inclusive Democracy. The basic unit of decision making in an inclusive democracy is the demotic assembly, i.e. the assembly of demos, the citizen body in a given geographical area which may encompass a town and the surrounding villages, or even neighbourhoods of large cities. An inclusive democracy today can only take the form of a confederal democracy that is based on a network of administrative councils whose members or delegates are elected from popular face-to-face democratic assemblies in the various demoi. Thus, their role is purely administrative and practical, not one of policy-making like that of representatives in representative democracy. The citizen body is advised by experts, but it is the citizen body which functions as the ultimate decision-taker.
Authority can be delegated to a segment of the citizen body to carry out specific duties, for example to serve as members of popular courts, or of regional and confederal councils. Such delegation is made, in principle, by lot, on a rotation basis, and is always recallable by the citizen body. Delegates to regional and confederal bodies should have specific mandates. A Parpolity or Participatory Polity is a theoretical form of democracy that is ruled by a nested council structure. The guiding philosophy is that people should have decision-making power in proportion to how much they are affected by the decision. Local councils of 25-50 people are completely autonomous on issues that affect only them, and these councils send delegates to higher-level councils who are again autonomous regarding issues that affect only the population affected by that council. A council court of randomly chosen citizens serves as a check on the tyranny of the majority, and rules on which body gets to vote on which issue. Delegates may vote differently from how their sending council might wish, but are mandated to communicate the wishes of their sending council. Delegates are recallable at any time. Referendums are possible at any time via votes of most lower-level councils; however, not every issue is put to a referendum, as that would likely be a waste of time. A parpolity is meant to work in tandem with a participatory economy. Cosmopolitan democracy, also known as global democracy or world federalism, is a political system in which democracy is implemented on a global scale, either directly or through representatives. An important justification for this kind of system is that the decisions made in national or regional democracies often affect people outside the constituency who, by definition, cannot vote on them. By contrast, in a cosmopolitan democracy, the people who are affected by decisions also have a say in them. According to its supporters, any attempt to solve global problems is undemocratic without some form of cosmopolitan democracy. The general principle of cosmopolitan democracy is to expand some or all of the values and norms of democracy, including the rule of law, the non-violent resolution of conflicts, and equality among citizens, beyond the limits of the state. To be fully implemented, this would require reforming existing international organisations, e.g. the United Nations, as well as the creation of new institutions such as a World Parliament, which ideally would enhance public control over, and accountability in, international politics. Cosmopolitan democracy has been promoted by, among others, physicist Albert Einstein, writer Kurt Vonnegut, columnist George Monbiot, and professors David Held and Daniele Archibugi. The creation of the International Criminal Court in 2002 was seen as a major step forward by many supporters of this type of cosmopolitan democracy. Creative democracy is advocated by American philosopher John Dewey. The main idea of creative democracy is that democracy encourages individual capacity-building and interaction within society. Dewey argues in his work "Creative Democracy: The Task Before Us" that democracy is a way of life and an experience built on faith in human nature, faith in human beings, and faith in working with others. Democracy, in Dewey's view, is a moral ideal requiring actual effort and work by people; it is not an institutional concept that exists outside of ourselves.
"The task of democracy", Dewey concludes, "is forever that of creation of a freer and more humane experience in which all share and to which all contribute". Guided democracy is a form of democracy which incorporates regular popular elections, but which often carefully "guides" the choices offered to the electorate in a manner which may reduce the ability of the electorate to truly determine the type of government exercised over them. Such democracies typically have only one central authority which is often not subject to meaningful public review by any other governmental authority. Russian-style democracy has often been referred to as a "Guided democracy." Russian politicians have referred to their government as having only one center of power/ authority, as opposed to most other forms of democracy which usually attempt to incorporate two or more naturally competing sources of authority within the same government. Aside from the public sphere, similar democratic principles and mechanisms of voting and representation have been used to govern other kinds of groups. Many non-governmental organisations decide policy and leadership by voting. Most trade unions and cooperatives are governed by democratic elections. Corporations are controlled by shareholders on the principle of one share, one vote--sometimes supplemented by workplace democracy. Amitai Etzioni has postulated a system that fuses elements of democracy with sharia law, termed islamocracy. Aristotle contrasted rule by the many (democracy/timocracy), with rule by the few (oligarchy/aristocracy), and with rule by a single person (tyranny or today autocracy/absolute monarchy). He also thought that there was a good and a bad variant of each system (he considered democracy to be the degenerate counterpart to timocracy). For Aristotle, the underlying principle of democracy is freedom, since only in a democracy can the citizens have a share in freedom. In essence, he argues that this is what every democracy should make its aim. There are two main aspects of freedom: being ruled and ruling in turn, since everyone is equal according to number, not merit, and to be able to live as one pleases. But one factor of liberty is to govern and be governed in turn; for the popular principle of justice is to have equality according to number, not worth, ... And one is for a man to live as he likes; for they say that this is the function of liberty, inasmuch as to live not as one likes is the life of a man that is a slave. A common view among early and renaissance Republican theorists was that democracy could only survive in small political communities. Heeding the lessons of the Roman Republic's shift to monarchism as it grew larger or smaller, these Republican theorists held that the expansion of territory and population inevitably led to tyranny. Democracy was therefore highly fragile and rare historically, as it could only survive in small political units, which due to their size were vulnerable to conquest by larger political units.Montesquieu famously said, "if a republic is small, it is destroyed by an outside force; if it is large, it is destroyed by an internal vice."Rousseau asserted, "It is, therefore the natural property of small states to be governed as a republic, of middling ones to be subject to a monarch, and of large empires to be swayed by a despotic prince." 
The theory of aggregative democracy claims that the aim of the democratic processes is to solicit citizens' preferences and aggregate them to determine what social policies society should adopt. Therefore, proponents of this view hold that democratic participation should primarily focus on voting, where the policy with the most votes gets implemented. Different variants of aggregative democracy exist. Under minimalism, democracy is a system of government in which citizens have given teams of political leaders the right to rule in periodic elections. According to this minimalist conception, citizens cannot and should not "rule" because, for example, on most issues, most of the time, they have no clear views or their views are not well-founded. Joseph Schumpeter articulated this view most famously in his book Capitalism, Socialism, and Democracy. Contemporary proponents of minimalism include William H. Riker, Adam Przeworski, and Richard Posner. According to the theory of direct democracy, on the other hand, citizens should vote directly, not through their representatives, on legislative proposals. Proponents of direct democracy offer varied reasons to support this view. Political activity can be valuable in itself: it socialises and educates citizens, and popular participation can check powerful elites. Most importantly, citizens do not rule themselves unless they directly decide laws and policies. Governments will tend to produce laws and policies that are close to the views of the median voter--with half to their left and the other half to their right. This is not a desirable outcome, as it represents the action of self-interested and somewhat unaccountable political elites competing for votes. Anthony Downs suggests that ideological political parties are necessary to act as a mediating broker between individuals and governments. Downs laid out this view in his 1957 book An Economic Theory of Democracy. Robert A. Dahl argues that the fundamental democratic principle is that, when it comes to binding collective decisions, each person in a political community is entitled to have his or her interests given equal consideration (not necessarily that all people are equally satisfied by the collective decision). He uses the term polyarchy to refer to societies in which there exists a certain set of institutions and procedures which are perceived as leading to such democracy. First and foremost among these institutions is the regular occurrence of free and open elections which are used to select representatives who then manage all or most of the public policy of the society. However, these polyarchic procedures may not create a full democracy if, for example, poverty prevents political participation. Similarly, Ronald Dworkin argues that "democracy is a substantive, not a merely procedural, ideal." Deliberative democracy is based on the notion that democracy is government by deliberation. Unlike aggregative democracy, deliberative democracy holds that, for a democratic decision to be legitimate, it must be preceded by authentic deliberation, not merely the aggregation of preferences that occurs in voting. Authentic deliberation is deliberation among decision-makers that is free from distortions of unequal political power, such as power a decision-maker obtained through economic wealth or the support of interest groups. If the decision-makers cannot reach consensus after authentically deliberating on a proposal, then they vote on the proposal using a form of majority rule.
Radical democracy is based on the idea that there are hierarchical and oppressive power relations that exist in society. Democracy's role is to make visible and challenge those relations by allowing for difference, dissent and antagonisms in decision-making processes. Some economists have criticized the efficiency of democracy, citing the premise of the irrational voter--a voter who makes decisions without all of the facts or the information necessary to make a truly informed decision. Another argument is that democracy slows down processes because of the amount of input and participation needed in order to go forward with a decision. A common example often quoted to substantiate this point is the high economic development achieved by China (a non-democratic country) as compared to India (a democratic country). According to these critics, the lack of democratic participation in countries like China allows for unfettered economic growth. Socrates, for his part, believed that democracy without educated masses (educated in the broader sense of being knowledgeable and responsible) would only lead to populism, rather than competence, becoming the criterion for being elected, and this would ultimately lead to the demise of the nation. This argument is related by Plato in Book 6 of The Republic, in Socrates' conversation with Adeimantus. Socrates was of the opinion that the right to vote must not be an indiscriminate right (for example, by birth or citizenship), but must be given only to people who thought sufficiently about their choice. The 20th-century Italian thinkers Vilfredo Pareto and Gaetano Mosca (independently) argued that democracy was illusory, and served only to mask the reality of elite rule. Indeed, they argued that elite oligarchy is the unbendable law of human nature, due largely to the apathy and division of the masses (as opposed to the drive, initiative and unity of the elites), and that democratic institutions would do no more than shift the exercise of power from oppression to manipulation. As Louis Brandeis once professed, "We may have democracy, or we may have wealth concentrated in the hands of a few, but we can't have both." British writer Ivo Mosley, grandson of the blackshirt Oswald Mosley, describes in In the Name of the People: Pseudo-Democracy and the Spoiling of Our World how and why current forms of electoral governance are destined to fall short of their promise. A study led by Princeton professor Martin Gilens of 1,779 U.S. government decisions concluded that "elites and organized groups representing business interests have substantial independent impacts on U.S. government policy, while average citizens and mass-based interest groups have little or no independent influence." Plato's The Republic presents a critical view of democracy through the narration of Socrates: "Democracy, which is a charming form of government, full of variety and disorder, and dispensing a sort of equality to equals and unequals alike." In his work, Plato lists five forms of government from best to worst. Assuming that the Republic was intended to be a serious critique of the political thought in Athens, Plato argues that only Kallipolis, an aristocracy led by the unwilling philosopher-kings (the wisest men), is a just form of government. James Madison critiqued direct democracy (which he referred to simply as "democracy") in Federalist No.
10, arguing that representative democracy--which he described using the term "republic"--is a preferable form of government, saying: "... democracies have ever been spectacles of turbulence and contention; have ever been found incompatible with personal security or the rights of property; and have in general been as short in their lives as they have been violent in their deaths." Madison offered that republics were superior to democracies because republics safeguarded against tyranny of the majority, stating in Federalist No. 10: "the same advantage which a republic has over a democracy, in controlling the effects of faction, is enjoyed by a large over a small republic". More recently, democracy has been criticised for not offering enough political stability. As governments are frequently voted in and out, there tend to be frequent changes in the policies of democratic countries, both domestically and internationally. Even if a political party maintains power, vociferous, headline-grabbing protests and harsh criticism from the popular media are often enough to force sudden, unexpected political change. Frequent policy changes with regard to business and immigration are likely to deter investment and so hinder economic growth. For this reason, many people have put forward the idea that democracy is undesirable for a developing country in which economic growth and the reduction of poverty are top priorities. Where no single party wins a majority, democracy can also produce opportunist coalition governments. Such an alliance not only has the handicap of having to cater to too many ideologically opposing factions, but it is usually short-lived, since any perceived or actual imbalance in the treatment of coalition partners, or changes to leadership in the coalition partners themselves, can very easily result in a coalition partner withdrawing its support from the government. Biased media has been accused of causing political instability, resulting in the obstruction of democracy, rather than its promotion. In representative democracies, it may not benefit incumbents to conduct fair elections. A study showed that incumbents who rig elections stay in office 2.5 times as long as those who permit fair elections. Democracies in countries with high per capita income have been found to be less prone to violence, but in countries with low incomes the tendency is the reverse. Election misconduct is more likely in countries with low per capita incomes, small populations, abundant natural resources, and a lack of institutional checks and balances. Sub-Saharan African countries, as well as Afghanistan, tend to fall into that category. Governments that have frequent elections tend to have significantly more stable economic policies than those with infrequent elections. However, this trend does not apply to governments where fraudulent elections are common. Democracy in modern times has almost always faced opposition from the previously existing government, and many times it has faced opposition from social elites. The implementation of a democratic government within a non-democratic state is typically brought about by democratic revolution. Several philosophers and researchers have outlined historical and social factors seen as supporting the evolution of democracy. Other commentators have mentioned the influence of economic development.
In a related theory, Ronald Inglehart suggests that improved living standards in modern developed countries can convince people that they can take their basic survival for granted, leading to increased emphasis on self-expression values, which correlates closely with democracy. Douglas M. Gibler and Andrew Owsiak argued in their study for the importance of peace and stable borders in the development of democracy. It has often been assumed that democracy causes peace, but this study shows that, historically, peace has almost always predated the establishment of democracy. Carroll Quigley concludes that the characteristics of weapons are the main predictor of democracy: democracy, in this scenario, tends to emerge only when the best weapons available are easy for individuals to obtain and use. By the 1800s, guns were the best personal weapons available, and in the United States of America (already nominally democratic), almost everyone could afford to buy a gun and could learn how to use it fairly easily. Governments couldn't do any better: it became the age of mass armies of citizen soldiers with guns. Similarly, Periclean Greece was an age of the citizen soldier and democracy. Other theories stressed the relevance of education and of human capital--and, within them, of cognitive ability--in increasing tolerance, rationality, political literacy and participation; such theories distinguish separate effects of education and of cognitive ability. Evidence consistent with conventional theories of why democracy emerges and is sustained has been hard to come by. Statistical analyses have challenged modernisation theory by demonstrating that there is no reliable evidence for the claim that democracy is more likely to emerge when countries become wealthier, more educated, or less unequal. Neither is there convincing evidence that increased reliance on oil revenues prevents democratisation, despite a vast theoretical literature on "the Resource Curse" that asserts that oil revenues sever the link between citizen taxation and government accountability, seen as the key to representative democracy. The lack of evidence for these conventional theories of democratisation has led researchers to search for the "deep" determinants of contemporary political institutions, be they geographical or demographic. One argument is that more inclusive institutions lead to democracy because, as people gain more power, they are able to demand more from the elites, who in turn have to concede more things to keep their position. This virtuous circle may end up in democracy. An example of such a deep determinant is the disease environment. Places with different mortality rates had different populations and productivity levels around the world. For example, in Africa, the tsetse fly--which afflicts humans and livestock--reduced the ability of Africans to plow the land. This made much of Africa less densely settled. As a consequence, political power was less concentrated. This also affected the colonial institutions European countries established in Africa. Whether or not colonial settlers could live in a place led them to develop different institutions, which in turn led to different economic and social paths. This also affected the distribution of power and the collective actions people could take. As a result, some African countries ended up having democracies and others autocracies. An example of geographical determinants for democracy is having access to coastal areas and rivers.
This natural endowment has a positive relation with economic development thanks to the benefits of trade. Trade brought economic development, which, in turn, broadened power. Rulers wanting to increase revenues had to protect property rights to create incentives for people to invest. As more people gained more power, more concessions had to be made by the ruler, and in many places this process led to democracy. These determinants defined the structure of society, moving the balance of political power. In the 21st century, democracy has become such a popular method of reaching decisions that its application beyond politics to other areas such as entertainment, food and fashion, consumerism, urban planning, education, art, literature, science and theology has been criticised as "the reigning dogma of our time". The argument suggests that applying a populist or market-driven approach to art and literature (for example) means that innovative creative work goes unpublished or unproduced. In education, the argument is that essential but more difficult studies are not undertaken. Science, as a truth-based discipline, is particularly corrupted by the idea that the correct conclusion can be arrived at by popular vote. However, more recently, some theorists have also advanced the concept of epistemic democracy to assert that democracy actually does a good job tracking the truth. Robert Michels asserts that although democracy can never be fully realised, democracy may be developed automatically in the act of striving for democracy: The peasant in the fable, when on his death-bed, tells his sons that a treasure is buried in the field. After the old man's death the sons dig everywhere in order to discover the treasure. They do not find it. But their indefatigable labor improves the soil and secures for them a comparative well-being. The treasure in the fable may well symbolise democracy. Dr. Harald Wydra, in his book Communism and The Emergence of Democracy (2007), maintains that the development of democracy should not be viewed as a purely procedural or static concept but rather as an ongoing "process of meaning formation". Drawing on Claude Lefort's idea of the empty place of power, that "power emanates from the people [...] but is the power of nobody", he remarks that democracy is reverence to a symbolic mythical authority--as in reality, there is no such thing as the people or demos. Democratic political figures are not supreme rulers but rather temporary guardians of an empty place. Any claim to substance such as the collective good, the public interest or the will of the nation is subject to the competitive struggle for gaining the authority of office and government. The essence of the democratic system is an empty place, void of real people, which can only be temporarily filled and never be appropriated. The seat of power is there but remains open to constant change. As such, people's definitions of "democracy" or of "democratic" progress throughout history as a continual and potentially never-ending process of social construction. Magna Carta is sometimes regarded as the foundation of democracy in England. ...Revised versions of Magna Carta were issued by King Henry III (in 1216, 1217 and 1225), and the text of the 1225 version was entered onto the statute roll in 1297.
...The 1225 version of Magna Carta had been granted explicitly in return for a payment of tax by the whole kingdom, and this paved the way for the first summons of Parliament in 1265, to approve the granting of taxation. The key landmark is the Bill of Rights (1689), which established the supremacy of Parliament over the Crown.... The Bill of Rights (1689) then settled the primacy of Parliament over the monarch's prerogatives, providing for the regular meeting of Parliament, free elections to the Commons, free speech in parliamentary debates, and some basic human rights, most famously freedom from 'cruel and unusual punishment'. The earliest, and perhaps greatest, victory for liberalism was achieved in England. The rising commercial class that had supported the Tudor monarchy in the 16th century led the revolutionary battle in the 17th and succeeded in establishing the supremacy of Parliament and, eventually, of the House of Commons. What emerged as the distinctive feature of modern constitutionalism was not the insistence on the idea that the king is subject to law (although this concept is an essential attribute of all constitutionalism). This notion was already well established in the Middle Ages. What was distinctive was the establishment of effective means of political control whereby the rule of law might be enforced. Modern constitutionalism was born with the political requirement that representative government depended upon the consent of citizen subjects... However, as can be seen through provisions in the 1689 Bill of Rights, the English Revolution was fought not just to protect the rights of property (in the narrow sense) but to establish those liberties which liberals believed essential to human dignity and moral worth. The "rights of man" enumerated in the English Bill of Rights gradually were proclaimed beyond the boundaries of England, notably in the American Declaration of Independence of 1776 and in the French Declaration of the Rights of Man in 1789. Political theory has described a positive linkage between education, cognitive ability and democracy. This assumption is confirmed by positive correlations between education, cognitive ability, and positively valued political conditions (N = 183 - 130). [...] It is shown that in the second half of the 20th century, education and intelligence had a strong positive impact on democracy, rule of law and political liberty, independent of wealth (GDP) and chosen country sample. One possible mediator of these relationships is the attainment of higher stages of moral judgment fostered by cognitive ability, which is necessary for the function of democratic rules in society. The other mediators for citizens as well as for leaders could be the increased competence and willingness to process and seek information necessary for political decisions due to greater cognitive ability. There are also weaker and less stable reverse effects of the rule of law and political freedom on cognitive ability.
Algebra II Day 1 An equation is a mathematical statement that asserts the equality of two expressions. Equations often express relationships between given quantities, the knowns, and quantities yet to be determined, the unknowns. By convention, unknowns are denoted by letters at the end of the alphabet, x, y, z, w…, while knowns are denoted by letters at the beginning, a, b, c, d…. The process of expressing the unknowns in terms of the knowns is called solving the equation. In an equation with a single unknown, a value of that unknown for which the equation is true is called a solution or root of the equation. In a set of simultaneous equations, or system of equations, multiple equations are given with multiple unknowns. A solution to the system is an assignment of values to all the unknowns so that all of the equations are true. One use of equations is in mathematical identities, assertions that are true independent of the values of any variables contained within them. For example, for any given value of x it is true that x(x - 1) = x^2 - x. However, equations can also be correct for only certain values of the variables. In this case, they can be solved to find the values that satisfy the equality. For example, consider the equation x^2 - x = 0. The equation is true only for two values of x, the solutions of the equation. In this case, the solutions are x = 0 and x = 1. If an equation in algebra is known to be true, the following operations may be used to produce another true equation: - Any real number can be added to both sides. - Any real number can be subtracted from both sides. - Both sides can be multiplied by any real number. - Both sides can be divided by any non-zero real number. - Some functions can be applied to both sides. Caution must be exercised to ensure that the operation does not cause missing or extraneous solutions. For example, the equation y*x = x is satisfied whenever y = 1 (for any x) and whenever x = 0 (for any y). Dividing both sides by x “simplifies” the equation to y = 1, but the solutions with x = 0 are lost.
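To see the difference between an identity and a conditional equation in action, here is a minimal sketch using Python's sympy library (my addition--the original lesson contains no code): the identity's two sides agree for every x, while the conditional equation pins x down to specific values.

# A minimal sketch (not part of the original lesson) using sympy to
# contrast an identity with a conditional equation.
from sympy import symbols, simplify, solve, Eq

x = symbols('x')

# Identity: the difference of the two sides simplifies to 0 for all x.
print(simplify(x*(x - 1) - (x**2 - x)))  # 0

# Conditional equation: true only for particular values of x.
print(solve(Eq(x**2 - x, 0), x))  # [0, 1]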
A computer can act on behalf of other computers to request content from the Internet or an intranet. A proxy server is placed between a user's machine and the Internet. It can act as a firewall to provide protection and as a cache area to speed up Web page display. A proxy server is a computer that sits between a client computer and the Internet and provides indirect network services to the client. It may reside on the user's local computer, or at various points between the user's computer and destination servers on the Internet. A proxy server intercepts all client requests, and provides responses from its cache or forwards the request to the real server. A client computer is connected to the proxy server, which acknowledges client requests by providing the requested resource/data from either a specified server or its local cache memory. Client requests include files or any other resources available on various servers. Basic Overview of Proxy Server Basically, a proxy server plays an intermediary role between the client computer and the server computer. Clients usually rely on a proxy server to request files, web pages, or other resources. The proxy server acts as an identification shield between the server and the client machine. A proxy server, also known as a "proxy" or "application-level gateway", is a computer that acts as a gateway between a local network (e.g., all the computers at one company or in one building) and a larger-scale network such as the Internet. Proxy servers provide increased performance and security. In some cases, they monitor employees' use of outside resources. A proxy server works by intercepting connections between sender and receiver. All incoming data enters through one port and is forwarded to the rest of the network via another port. By blocking direct access between two networks, proxy servers make it much more difficult for hackers to get internal addresses and details of a private network. Figure: Proxy Server How does a Proxy Server work? Normally, if a PC is directly connected to the Internet, the user can access web services without any restriction. If there is a proxy server installed on a computer network, data transfer occurs only through it. Every time a user tries to access the Internet, the request goes to the proxy server, which forwards it to the web server. The web server then sends the data back to the proxy server, which saves it in its local cache and provides it to the user on the network. The next time someone requests the same data, the proxy server does not forward the request to the Internet; instead, it serves the data to the user from its local cache. In this way, the proxy server saves network bandwidth and improves network performance. Types of Proxy servers Proxy servers are classified into several types based on purpose and functionality. Some of the most common types and their uses are described below: Forward Proxy: Forward proxies are proxies where the client names the target server to connect to. Forward proxies are able to retrieve from a wide range of sources (in most cases, anywhere on the Internet). The terms "forward proxy" and "forwarding proxy" are a general description of behavior (forwarding traffic) and are thus ambiguous, in contrast to "reverse proxy". Open Proxy: An open proxy is a forward proxy server that is accessible by any Internet user.
Gordon Lyon estimates there are "hundreds of thousands" of open proxies on the Internet. An anonymous open proxy allows users to conceal their IP address while browsing the Web or using other Internet services. Reverse Proxy: A reverse proxy is a proxy server that appears to clients to be an ordinary server. Requests are forwarded to one or more origin servers which handle the request, and the response is returned as if it came directly from the proxy server. Advantages of Proxy Server: There are many advantages to using a proxy server, including the following: - A proxy server helps clients protect their important information from being intercepted by hackers. - A proxy server can be used to bypass blocked websites. It often happens that offices, schools, or other organizations block certain websites for their own reasons, and many websites impose country restrictions; in those cases, a proxy server can provide access. - A proxy server can enhance the security and privacy of the client's device while surfing by using different proxies. - A proxy server is often used to speed up browsing and data access because of its caching system. - Because the proxy server caches content, data you have requested before can be served again quickly whenever you want it. Disadvantages of Proxy Server: - Because the proxy server's cache is so extensive and active, passwords, browsing history, and other sensitive data may be visible to the proxy service provider, so it is advisable to use a dedicated or paid service. - Even over TLS- and SSL-encrypted connections, data can sometimes be leaked or intercepted, for example if the proxy itself inspects the encrypted traffic. - As noted above, blocked websites can be accessed with the help of a proxy server, so blocked and offensive websites that are not suitable for students are often reached this way. "What is a proxy server?", available online at: https://kb.iu.edu/d/ahoo "What is a proxy server?", available online at: https://www.iplocation.net/proxy-server "FTP and Proxy Server", available online at: http://www.idc-online.com/technical_references/pdfs/data_communications/Ftp_and_Proxy_server.pdf "What is Proxy Server and How it Works | Advantages And Disadvantages Of Proxy Server", available online at: http://www.learnabhi.com/proxy-server/ "Proxy Server - Its Advantages & Disadvantages", available online at: https://www.rswebsols.com/tutorials/technology/proxy-server-advantages-disadvantages
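To make the forward-proxy flow concrete, here is a minimal sketch in Python (my addition; none of the cited pages include code). The proxy address is a hypothetical placeholder--substitute your own proxy's host and port.

# A minimal sketch (not from the cited references) showing a client
# routing HTTP traffic through a forward proxy with Python's requests
# library. proxy.example.com:8080 is a hypothetical placeholder.
import requests

proxies = {
    "http": "http://proxy.example.com:8080",
    "https": "http://proxy.example.com:8080",
}

# The request goes to the proxy, which forwards it to the target server
# (or answers from its cache) and relays the response back to the client.
response = requests.get("https://httpbin.org/ip", proxies=proxies, timeout=10)
print(response.status_code)
print(response.text)  # shows the IP address the target server saw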
What do numbers represent and how do they help us order and compare things in God’s world? Numbers represent an amount that helps us order and compare things in God’s world. Read, write, and understand numbers up to 1000 using standard, number name, and expanded forms. (2.NBT.3) Count by ones, fives, tens, and hundreds up to 1000. (2.NBT.2) Understand and compare three-digit numbers organized as groups of hundreds, tens, and ones; use place value to understand addition and subtraction. (2.NBT.1,4,9) Mentally add and subtract multiples of ten and multiples of a hundred within 1000. (2.NBT.8) Add and subtract within 1000 with regrouping using models or drawings. (2.NBT.7) How can objects be represented to help us understand the variety of God’s creation? A single collection of objects can always be represented in more than one way to help us understand the variety of God’s creation. Understand, represent, compare, and apply addition and subtraction properties within 100 to solve one- and two-step word problems (2.OA.1) (2.NBT.5); add up to four 2-digit numbers. (2.NBT.6) Memorize and fluently add and subtract within 20. (2.OA.2) Determine if a group of objects within 20 represents an odd or even number. (2.OA.3) Write an equation to represent the total as a sum of equal addends with up to 5 groups of 5 objects. (2.OA.3,4) How does measurement help us fulfill God’s plan? Measurement allows us to be accurate and orderly as God planned. Measure and estimate lengths in standard units (e.g., inches, feet, centimeters, meters) using appropriate tools (e.g., rulers, yardsticks, meter sticks). (2.MD.1,3) Measure, compare, and describe the length of an object using two units of measurement (e.g., inches and yards, centimeters and meters). (2.MD.2) Measure to compare the length of two objects using a standard length unit. (2.MD.4) Use addition and subtraction equations within 100 to solve word problems involving lengths of the same unit. (2.MD.5) Represent whole numbers as equally spaced lengths from 0 on a number line; represent sums and differences within 100 on a number line. (2.MD.6) Tell and write time to the nearest five minutes from analog and digital clocks using a.m. and p.m. (2.MD.7) Solve word problems involving dollar bills, quarters, dimes, nickels, and pennies, using $ and ¢. (2.MD.8) How do shapes and their parts help us appreciate God’s creation? Shapes and their parts help us appreciate the beauty and order in everything God has designed. Recognize and draw two- and three-dimensional shapes having specified attributes. (2.G.1) Partition a rectangle into rows and columns of same-size squares and count to find the total number of squares. (2.G.2) Partition circles and rectangles into two, three, and four equal parts; describe the whole and its parts using the words halves, thirds, half of, third of, etc.; understand that equal parts need not have the same shape. (2.G.3) How can we quantify our findings in a way that pleases God? God has at various times commanded men to count, measure, and record their findings. Generate measurement data by measuring lengths of several objects to the nearest whole unit; show the measurements by making a line plot. (2.MD.9) Draw a picture graph and a bar graph (with single-unit scale) to represent a data set with up to four categories; solve simple addition, subtraction, and comparison problems using information in a bar graph. (2.MD.10)
Impulse and Linear Momentum Linear momentum is defined as the product of mass and linear velocity. Pushing an object to the right results in a reaction to the left. A rifle (attached to a cart) fired to the left makes the rifle move to the right as shown: It is easy to verify, both mathematically and experimentally, that MbVb = MrVr. This means: "momentum to the left" = "momentum to the right." The product MV, or linear momentum, is an important quantity in physics. Momentum is a Vector: MV is a vector because velocity is a vector; momentum therefore has a direction. Example 1: A solid ball of mass M1 = 0.15kg is rolling to the right at speed V1 = 4.0m/s and another ball of mass M2 = 0.35kg is rolling to the left at V2 = 6.0m/s. Find (a) the momentum of each ball and (b) the net momentum. Solution: (a) M1V1 = (0.15 kg)(+4.0 m/s) = + 0.60 kg m/s ; M2V2 = (0.35 kg)(-6.0 m/s) = - 2.1 kg m/s (b) ΣMV = M1V1 + M2V2 = -1.5 kg m/s (The net momentum is to the left). Example 2: A baseball of mass 0.120kg is thrown by a pitcher horizontally to the left at 17 m/s and returns to the right at 63 m/s after being struck by a bat. Calculate the change in its momentum. Solution: Recall that Δ is used to denote change and means a final value minus its initial value; therefore, we need to calculate Δ(MV) = MVf - MVi. Δ(MV) = M ( Vf - Vi ) = 0.120kg [ ( + 63 m/s) - ( - 17 m/s) ] = + 9.6 kg m/s. Impulse ( I ) is the product of force and a time interval. Mathematically, impulse is written as FΔt. For example, if a grocery cart is pushed with a constant force of 44N to the left for 25 seconds, the impulse of the pusher on the cart is I = FΔt ; I = ( - 44N)(25s) = -1100 Ns. Note that impulse and momentum have the same units. The unit of momentum MV is (kg m/s). So is the unit of impulse: (Ns) = (kg m/s2)(s) = kg m/s. Equivalence of Impulse and Linear Momentum: It is easy to show that the impulse of force F during time Δt on mass M is equal to the change in the linear momentum of mass M; simply, FΔt = Δ(MV) ; FΔt = MVf - MVi or, Impulse = Change in Momentum. Proof: Start with Newton's 2nd law (for a single force). (Make sure to write it all down as you proceed, using horizontal fraction bars.) F = Ma ; replacing a by Δv/Δt and multiplying through by Δt results in: F = MΔv/Δt ; FΔt = MΔv ; FΔt = M(vf - vi) ; FΔt = Mvf - Mvi, and the equivalence of impulse and change in linear momentum is verified. Example 3: A stationary train car of mass 12,000kg gets hit by another car moving to the right and is pushed with an average force of 4500N for a period of 4.2s. Find the final velocity of the stationary car. Solution: Using the equivalence of impulse and linear momentum results in: FΔt = M ( vf - vi ) ; (+4500N)(4.2s) = (12,000kg)( vf - 0 ) ; vf = +1.6 m/s ( to the right ) Example 4: A 0.150-kg baseball is thrown horizontally to the left by a pitcher. Its velocity just before getting hit by the bat is 15 m/s to the left and after the strike becomes 45 m/s to the right. Find (a) the change in velocity Δv, (b) the change in momentum MΔv, (c) the impulse of the bat on the ball, and (d) the average force of the bat on the ball if the contact time is 0.020s. Solution: (a) Δv = vf - vi = ( + 45 m/s ) - ( -15 m/s ) = + 60. m/s (The change in velocity, not the change in speed!) (b) MΔv = (0.150 kg)( + 60. m/s) = + 9.0 kg m/s. (c) According to the equivalence of impulse and linear momentum, FΔt = 9.0 kg m/s as well. (d) FΔt = + 9.0 kg m/s ; F (0.020s) = + 9.0 kg m/s ; F = + 450N.
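Since the relation FΔt = M(vf - vi) is used repeatedly above, here is a small numerical check in Python (my addition; the original lesson contains no code) that reproduces Example 3 and Example 4(d):

# A small sketch (not part of the original lesson) applying the
# impulse-momentum equivalence F*dt = M*(vf - vi).

def final_velocity(force, dt, mass, v_initial):
    """Solve F*dt = M*(vf - vi) for vf."""
    return v_initial + force * dt / mass

def average_force(mass, v_initial, v_final, dt):
    """Solve F*dt = M*(vf - vi) for F."""
    return mass * (v_final - v_initial) / dt

# Example 3: stationary 12,000-kg train car pushed with 4500 N for 4.2 s.
print(final_velocity(4500.0, 4.2, 12000.0, 0.0))   # ~ +1.6 m/s

# Example 4(d): 0.150-kg ball goes from -15 m/s to +45 m/s in 0.020 s.
print(average_force(0.150, -15.0, 45.0, 0.020))    # +450 N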
Conservation of Linear Momentum: It is easy to show that when a system of particles goes through collisions, the total momentum remains constant. To prove this, let us consider the head-on collision of only two balls that move toward each other along the same straight line. Also suppose that both balls have the same size. The following figure shows solid spheres A and B with masses M1 and M2 moving at velocities V1 and V2 toward each other along the same line. There are 3 stages: before collision, during collision, and after collision, as shown: The total momentum before collision is: M1V1 + M2V2 ( V1 and V2 are velocities before collision). The total momentum after collision is: M1u1 + M2u2 ( u1 and u2 are velocities after collision). During collision, each ball acts as a wall for the other. In fact, each ball acts as a baseball bat for the other and imparts an impulse on the other ball. According to Newton's 3rd law, the impulse of ball A on ball B must be equal to the impulse of ball B on ball A, but in the opposite direction. One impulse is FABΔt, and the other -FBAΔt. The forces are equal in magnitude, and so are the contact times. We can write: FABΔt = M2u2 - M2V2 and FBAΔt = M1u1 - M1V1. Since FABΔt = - FBAΔt ; therefore, M2u2 - M2V2 = - ( M1u1 - M1V1). Rearranging yields: M1u1 + M2u2 = M1V1 + M2V2. This simply shows that "Total momentum after collision = Total momentum before collision;" in other words, linear momentum is conserved. Example 5: A 1.00-kg toy car moving to the right at 1.40 m/s is hit from behind by a 0.500-kg piece of dough thrown horizontally, also to the right, at 3.60 m/s, which causes the car-dough combination to move faster. Calculate the speed of the car-dough combo, knowing that the dough sticks to the car. Solution: Total momentum after collision must be equal to the total momentum before collision. This results in: McVc + MdVd = ( Mc + Md ) Vcd ; (1.00kg)(1.40m/s) + (0.500kg)(3.6m/s) = (1.00 + 0.500)kg(Vcd) ; Vcd = 2.13 m/s. Example 6: A 4.50-kg rifle is fixed on a 1.50-kg cart so that its barrel points horizontally to the right. The cart can roll with negligible friction and is initially at rest. The rifle is fired with a remote control device and shoots a 45.0-gram bullet to the right. As a result, the rifle itself moves to the left at 2.50 m/s. Calculate the bullet's exit speed. Solution: Total momentum after firing must be equal to the total momentum before firing. Since before firing both the bullet and the rifle are at rest, the total momentum before firing is zero. According to the law of conservation of linear momentum, the total momentum after firing must also be zero. This means that: Mb( 0 ) + Mr( 0 ) = MbVb + MrVr ; (Note that Mr is not just the mass of the rifle; it is the mass of the rifle and cart.) (0.045kg)( 0 ) + (4.50kg + 1.50kg)( 0 ) = (0.045kg)(Vb) + (4.50kg + 1.50kg)( - 2.50 m/s) or, 0 = (0.045kg)(Vb) - 15.0 kg m/s or, 15.0 kg m/s = 0.045 Vb ; Vb = + 333 m/s ( Of course, (+) means to the right ) Chapter 7 Test Yourself 1: 1) Momentum is (a) a scalar (b) a vector (c) sometimes a vector and sometimes a scalar. 2) Momentum is defined as the product of (a) force and a time interval (b) force and mass (c) mass and velocity. 3) The reason momentum is a vector is that (a) mass is a vector (b) velocity is a vector (c) neither a, nor b. 4) If your car including you has a mass of 800-kg and is moving at (25 m/s, North), the momentum of your vehicle is (a) 20,000 kg m/s (b) 20,000 kg m/s, North (c) neither a, nor b.
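As a quick numerical check of the two conservation examples above, here is a short Python sketch (again my addition, not part of the original lesson):

# A small sketch (not part of the original lesson) checking the
# momentum-conservation examples numerically.

def perfectly_inelastic_velocity(m1, v1, m2, v2):
    """Common final velocity when two bodies stick together:
    (m1*v1 + m2*v2) / (m1 + m2)."""
    return (m1 * v1 + m2 * v2) / (m1 + m2)

# Example 5: 1.00-kg car at +1.40 m/s hit by 0.500-kg dough at +3.60 m/s.
print(perfectly_inelastic_velocity(1.00, 1.40, 0.500, 3.60))  # ~ +2.13 m/s

# Example 6: total momentum is zero before and after firing, so
# Mb*Vb + Mr*Vr = 0, giving Vb = -Mr*Vr / Mb.
m_bullet = 0.045             # kg
m_rifle_cart = 4.50 + 1.50   # kg (rifle plus cart)
v_rifle = -2.50              # m/s (recoil to the left)
print(-m_rifle_cart * v_rifle / m_bullet)  # ~ +333 m/s (to the right)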
Problem: Suppose you are in outer space far from planets and stars (almost zero gravity). You are holding a 1.0-kg rock in your hand, your mass including your space suit is 75 kg, and you throw the rock in, say, the +x direction at a speed of 7.5m/s. Answer the following: 5) (a) You remain stationary (b) You move at a speed of 7.5m/s in the opposite direction (c) You move at a speed of 0.10m/s in the opposite direction. 6) The momentum of the rock is (a) +7.5kgm/s (b) +7.5kgm/s2 (c) +7.5kg/s. 7) Your momentum after the rock is thrown is (a) -7.5kgm/s (b) 0 (c) 75g. 8) If you are at the origin (0,0), and your friend is standing on the negative x-axis at (-20.0m, 0), how long would it take for you to reach him? (a) 75s (b) (1/75)s (c) 200s. 9) The average force a baseball bat exerts on a baseball during a contact time of 0.025s is 400N. The impulse of the bat on the baseball is (a) 425 Ns (b) 10Ns (c) 16000 Ns. 10) The impulse of force F during the time interval Δt is (a) FΔt (b) F/Δt (c) FΔt2. 11) FΔt acting on mass M is equal to (a) the change in the acceleration of M (b) the change in mass M (c) the change in the linear momentum of M. 12) The correct form of the impulse-momentum equivalence is (a) FΔt = M(Vf2 - Vi2) (b) FΔt = M(Vf - Vi) (c) F = MV. 13) For a head-on collision of two equal size balls of masses M1 and M2 moving with velocities V1 and V2, the conservation of linear momentum is (a) M1V1 = M2V2 (b) M1u1 = M2u2 (c) M1V1 + M2V2 = M1u1 + M2u2, where u1 and u2 are velocities after collision. 14) A perfectly elastic collision is one during which (a) there is no potential energy change (b) K.E. remains constant (c) one object remains stationary. 15) If you drop a ball made of an elastic material from a height of 1m on a rigid floor that is also made of the same material, you may call it perfectly elastic if it bounces back to a height of (a) 0.5m (b) 0.95m (c) 1.0m again. 16) A perfectly elastic material is (a) ideal and cannot really be made (b) real and easy to make (c) called Flubber. 17) In a perfectly elastic collision, (a) K.E. is conserved (b) K.E. does not change (c) K.E. before collision is equal to K.E. after collision (d) a, b, and c mean the same thing. 18) When a bullet hits a chunk of wood and gets embedded in it, since part of the bullet's K.E. is consumed for deformation and penetration into the wood, we may say that the collision is (a) inelastic (b) elastic (c) elastic but with some energy loss. Problem: A 0.0500-kg bullet is fired at a muzzle speed of 400. m/s, to the right, into a 3.950-kg chunk of wood hanging from a tree via a long cord. After collision, the wood-bullet combination gains a velocity V and swings. Answer the following questions: (All numbers are good to 3 significant figures.) 19) The initial K.E. of the bullet before collision is (a) 8000J (b) 16000J (c) 4000J. 20) The initial K.E. of the still wood before collision is (a) 3.950J (b) 0 (c) 400J. 21) The conservation of momentum before and after collision may be written as MbVb + MwVw = (Mb + Mw)V. (a) True (b) False 22) From 21, the wood-bullet velocity, V, after collision is (a) 10.0m/s (b) 5.00 m/s (c) 0. 23) The K.E. of the wood-bullet combo after collision is (a) 50.0J (b) 250J (c) 400J. 24) The change in K.E. in this collision is (a) -3950J (b) 3900J (c) 0. 25) Based on the results, this collision is (a) highly elastic (b) highly inelastic (c) perfectly inelastic.
Problems: (g = 9.81m/s2 in the following problems). 1) A rifle attached to a plank of wood is placed on a horizontal long table on a track. The rifle barrel is parallel to the table. The coefficient of friction between the plank and the table is 0.450. When the rifle is fired, the bullet goes to the left and the rifle-plank combo slides to the right. The rifle-plank combo comes to a stop after sliding a distance of 2.40m. If the mass of the bullet is 69.0 grams and that of the rifle-plank combo excluding the bullet is 5.30kg, find (a) the initial speed of the rifle-plank combo just after firing, and (b) the muzzle speed (the initial speed) of the bullet just after firing. 2) A 40.0-gram rubber ball released from a height of 1.41m above a perfectly horizontal concrete floor bounces back to a height of 1.13m. Calculate (a) its velocity just before collision, (b) its velocity just after collision, and (c) the loss in its kinetic energy. 3) A small steel ball is dropped from a height of 1.00m onto a perfectly horizontal steel floor. If the change in the kinetic energy during collision is 5.0%, find the maximum height the ball reaches after collision. 4) In a collision, an 8.00-ton train car traveling at a velocity of (1.20m/s, North) interlocks with an empty train car that has a mass of 2.00 tons. Calculate (a) the velocity of the interlocked cars just after collision, and (b) the change in the K.E. 5) On a horizontal surface, solid ball A (MA = 0.200kg) traveling at (vA = + 4.00m/s) makes a head-on collision with ball B (MB = 0.200kg) that is initially at rest (vB = 0). Let the after-collision velocities be uA and uB, and write (a) the momentum balance equation, (b) the energy balance equation, both in terms of uA and uB. Solve the two equations to find the unknowns uA and uB. Assume the collision is perfectly elastic, which means there is no loss in kinetic energy. 6) A 48-gram tennis ball traveling to the right at 25m/s is hit by a racket that exerts a leftward force of 120N on the ball for 0.030s. Find the final velocity of the ball. 7) An 80.0-gram baseball traveling horizontally to the left at 35m/s gets hit by a bat that exerts a rightward force of 320N on the ball for a short time. The ball returns horizontally to the right at a speed of 65m/s. Find the contact time between the bat and the ball. Answers: 1) 4.60m/s, 353m/s 2) -5.26m/s, + 4.71m/s, -0.110J 3) 95cm 4) 0.96m/s, North; -1150J 5) uA = 0, uB = +4.00m/s 6) -50m/s 7) 0.025s
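As a worked check of Problem 4, here is a short Python sketch (my addition, not part of the original problem set; it assumes metric tons, so 1 ton = 1000 kg):

# A quick check of Problem 4 (my addition; assumes metric tons).
m1, v1 = 8000.0, 1.20   # loaded car: 8.00 t moving north at 1.20 m/s
m2, v2 = 2000.0, 0.0    # empty car: 2.00 t at rest

# (a) Interlocked cars share one velocity after the collision.
v_final = (m1 * v1 + m2 * v2) / (m1 + m2)
print(v_final)  # ~ 0.96 m/s, North

# (b) Kinetic energy change: KE_after - KE_before.
ke_before = 0.5 * m1 * v1**2 + 0.5 * m2 * v2**2
ke_after = 0.5 * (m1 + m2) * v_final**2
print(ke_after - ke_before)  # ~ -1150 J (energy lost in the coupling)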
Christopher Furlong/Getty Images NASA scientist and climatologist James Hansen participates in a Climate Change Campaign Action Day on March 19, 2009 in Coventry, England. April 19, 2012 -- Forty years ago this week, the crew of Apollo 16 captured this image of Earth rising above the lunar landscape. The Apollo missions enabled humanity to see for the first time our planet as it appears from space. As Apollo 13 commander Jim Lovell once said: “When I was orbiting the moon and could put my thumb up to the window and completely cover the Earth, I felt a real sense of my own insignificance. Everything I'd ever known could be hidden behind my thumb.” As we approach Earth Day on April 22, we look at the efforts of people throughout the ages to explore, understand and portray our world and its place in the Universe. Trustees of the British Museum (image rotated) Babylonia Believed to be the earliest known representation of Earth, this stone tablet from Babylon shows the world as a disc, surrounded by a ring of water called the "Bitter River." The world is dominated by the area surrounding Babylon itself, and the Euphrates River bisects most of the inner circle. Unearthed in southern Iraq in the late 1800s, the tablet is housed in the British Museum. Sixteenth-century interpretation of Ptolemy's Celestial Spheres In his 2nd-century treatise, the "Almagest," Claudius Ptolemy proposed an explanation for the apparent movement of stars and planets, in which Earth was central and immovable, surrounded by, at progressively greater distances, the Moon, Mercury, Venus, the Sun, Mars, Jupiter, Saturn and a sphere of ‘fixed stars.’ This geocentric view of the cosmos did not meet its first real challenge until Copernicus proposed that the planets revolved around the Sun, and Galileo used his telescope to observe the phases of Venus. Library of Congress, via the History Blog Flat Earth The Greek philosopher Aristotle determined that Earth was spherical and not flat almost 2,500 years ago. Even so, the notion of a flat Earth retained at least a few die-hard devotees for a surprisingly long time. For example, this 1893 map by Orlando Ferguson, recently acquired by the Library of Congress, cites “Scripture that condemns the globe theory” and promotes a book that “knocks the globe theory clean out.” De Costa, B.F. (September 1879). "The Lenox Globe" Lenox Globe It is popularly believed that ancient cartographers filled in unknown and unexplored areas of the world with the phrase ‘Here be dragons’. In fact, only one known ancient map, the so-called Lenox Globe, believed to date to around 1510, displays the phrase ‘HC SVNT DRACONES’, from the Latin “hic sunt dracones.” (The phrase is written near the equator on the eastern coast of Asia.) Some nineteenth-century writers, however, believed that it referred not to dragons, but to the ‘Dagroians’, a people who “feasted upon the dead and picked their bones.” Image Database of the Kano Collection, Tohoku Terra Australis Incognita In this copy of a 1602 map that was created on behalf of China’s Wanli emperor by the Italian Matteo Ricci and collaborators, the familiar outlines of most of the world’s continents are coming into shape, although obviously many details remain unfinished. To the map’s makers, however, the likes of Australia, New Zealand and Antarctica are not even figments of the imagination, replaced instead by an enormous southern landmass.
The notion of an unknown southern land, a terra australis incognita, was first mooted by Aristotle in the 4th century BCE; not until 1820 did Fabian von Bellingshausen become the first man to see the Antarctic continent. South Pole For centuries, gaps in maps were filled by explorers who set out across land and sea, often at immense personal risk. The true nature of “Terra Australis” had long been established by the time Robert Falcon Scott and comrades stood at the South Pole on Jan. 17, 1912; but existing knowledge could not diminish the terrible toll the conditions exacted on the men. “Great God!” wrote Scott in his journal, “this is an awful place.” All five members of Scott’s polar team died before they could reach their base camp. Moscow at night Time and technology have enabled us to explore, not just across the surface of the globe or even beneath its waves, but from on high. Here, Moscow is seen at night from the International Space Station, flying at an altitude of approximately 240 miles on March 28, 2012. A solar array panel for the space station is on the left side of the frame. The Aurora Borealis, airglow and daybreak frame the horizon. Pale Blue Dot In contrast to earlier suppositions about our place in the firmaments, we know now that our globe is not at the center of the cosmos, and that other celestial bodies are not attached to interlaced spheres that rotate around us. We are but one world among many, in one solar system among many, in one galaxy among many. In this image, taken by the Voyager 1 spacecraft from a distance of 4 billion miles, Earth is but a speck – a pale blue dot – in the cosmic night. NASA/NOAA/GSFC/Suomi NPP/VIIRS/Norman Kuring Blue Marble If satellite images of Earth now seem almost routine, they never lose their ability to enthrall. This picture of the western hemisphere was captured on January 25 by NASA’s latest Earth observation satellite, Suomi NPP. By February 1, it had registered over 3 million views on Flickr – testament to the beauty and fascination of our Blue Marble. Climate scientist James Hansen is retiring from NASA this week to devote himself to the fight against global warming. Hansen's retirement concludes a 46-year career at NASA's Goddard Institute for Space Studies in New York, but he plans to use his time to take up legal challenges to the federal and state governments over limiting greenhouse gas emissions. In recent years, Hansen, 72, has become an activist on climate change, which didn't sit well with NASA headquarters in Washington. "As a government employee, you can't testify against the government," Hansen told The New York Times. Supporting his "moral obligation" to step up to the fight now, Hansen adds in the Times article that burning a substantial fraction of Earth's fossil fuels guarantees "unstoppable changes" in the planet's climate, leaving an unfixable problem for future generations. The distinguished NASA scientist has spent his career at the Goddard Institute on the campus of Columbia University. He has testified before Congress dozens of times, and has issued warnings and published papers that drew criticism from climate-change skeptics. Hansen was arrested in February while protesting the proposed construction of the Keystone XL Pipeline that would carry heavy crude oil from Canada to the U.S. Gulf Coast.
"We have reached a fork in the road," he told the Washington Post at the time, adding that politicians must understand they can "go down this road of exploiting every fossil fuel we have — tar sands, tar shale, off-shore drilling in the Arctic — but the science tells us we can't do that without creating a situation where our children and grandchildren will have no control over, which is the climate system." With his departure from NASA, Hansen told the Times he plans to lobby European leaders to institute a tax on oil derived from tar sands, whose extraction leads to more greenhouse gas emissions than conventional oil. He could not have done these things as a government employee, he said. Hansen will probably work in a converted barn on his farm in Pennsylvania, but may possibly set up a small institute or take an academic appointment, according to the Times. He will continue to publish papers in academic journals, but will not run the powerful computers and other resources NASA provided for tracking and forecasting global warming and its effects. Raised in a small town in Iowa, Hansen initially studied the planet Venus, but switched to studying the effect of human greenhouse gas emissions on Earth during the 1970s. He was one of the first scientists to raise alarm about global warming and its effects on climate and the environment. After testifying at a Congressional committee in 1988 that man-made global warming has begun, Hansen was quoted widely as saying, "It is time to stop waffling so much and say that the evidence is pretty strong that the greenhouse effect is here." Hansed joined NASA's Goddard Institute as a post-doctoral scholar in 1967 and became a federal employee in 1972. He became director in 1981, and was the longest-serving director in the institute's history. "He has pushed forward the frontier of our knowledge of Earth's climate system and of the impacts that humanity is having on Earth’s climate," Nicholas E. White, director of the Sciences and Exploration Directorate at Goddard, said in a statement. Climate scientists applaud Hansen for leading the predictions of climate change's effects. But some say these predictions were exaggerated. For example, he has said in recent years that vast carbon dioxide emissions might ultimately cause a runaway greenhouse effect like on Venus that would boil the oceans and make Earth uninhabitable, the Times reported. Other scientists say this hasn't happened in the past and that Hansen overstated the risk. Hansen was embroiled in a political fight in 2005, when a young political appointee in George W. Bush's administration tried to muzzle Hansen in the press. But Hansen revealed this to the public in an interview reported by the Times, and the administration lifted its restrictions. Despite his environmentalist stance, Hansen has also criticized the environmentalist movement. He strongly opposed a failed climate bill in 2009, because he said it would have given the federal government billions of dollars without truly limiting emissions. Hansen, who is registered as an independent, believes carbon dioxide emissions should be taxed, but that the money should be returned to the public as a rebate, instead of going to the government. Hansen told the Times he senses a mass movement on climate change is beginning, led by young people, which he plans to support. 
How Computers Work

Transcript of How Computers Work

int main(int argc, char **argv)

And how programming fits in...

This is a computer. The most important part of a computer is the Central Processing Unit, or CPU. A CPU is pretty similar to a calculator. So let's go over how you use a calculator. It's done in a number of steps. Each of these steps was telling the calculator to do a specific thing: storing a number, telling it to multiply it with the next one, and so on.

On a CPU, you also do things in steps. Each of these steps is called an instruction. This is what instructions for a CPU might be like: Load number 5 to a temporary location. Load number 10 to a different temporary location. Multiply the numbers in these two locations. Your computer will go through millions of these steps each second.

The main difference from a pocket calculator is that a CPU doesn't have a person there telling it what to do at each step. So let's think about what you would need to do to a calculator so that it too would be able to work on its own. For a calculator to work on its own, it would need to know what steps you want to run without you being there. So those instructions would need to be written down somewhere for the CPU to read. The CPU has such a place to store instructions. It's called memory.

CPUs have a very limited set of instructions, not all that much more than a scientific calculator. They can load numbers, perform some math, and do a few extra things related to computer-y stuff. Those instructions are translated into 1s and 0s for the computer to understand. Instructions in 1s and 0s like that are in machine language. When those instructions are written out in words instead of 1s and 0s, they're in assembly language. A CPU wouldn't know what to do with assembly, but it makes the instructions readable by people.

When you turn on your computer, the circuitry inside reads a known part of your hard drive to figure out where the instructions for your operating system are located. It then copies those instructions into memory. The CPU then kicks in and starts reading the instructions located at the start of memory (instructions for the operating system). So the first instructions your computer reads when you turn it on are for your operating system; that is how an operating system installed on a hard disk finds its way into memory.

You might be wondering why we go through this process of copying instructions from the hard drive to memory. It seems kind of complicated. Why not just put everything in memory and forget the hard disk? It's a good question. We want stuff in memory because it's fast: about 100,000 times faster than a hard drive. But memory also needs constant power. As soon as your computer turns off, everything gets wiped. Hard drives don't need constant power to keep their data, and they can store lots and lots of data really cheaply. Storing data on the hard drive and running it from memory leverages the strengths of both.

Okay, so let's recap... CPUs are the chips at the heart of a computer. They're like calculators that can run on their own. But there is one more thing you need when you're making a calculator that doesn't require you there for every step.
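Before turning to that missing ingredient, here is a toy sketch of the load-and-execute cycle described above, written in Python. The instruction names, the stack of temporary locations, and the little program are all invented for illustration; no real CPU works exactly like this, but the step-by-step shape is the same.

```python
# A toy fetch-and-execute loop. The program multiplies 5 by 10,
# mirroring the "load, load, multiply" steps described above.
memory = [("LOAD", 5), ("LOAD", 10), ("MUL", None), ("HALT", None)]

stack = []  # temporary locations for numbers
pc = 0      # program counter: which instruction to run next

while True:
    op, arg = memory[pc]  # fetch the instruction the counter points at
    pc += 1               # move on to the next instruction
    if op == "LOAD":
        stack.append(arg)            # store a number in a temporary location
    elif op == "MUL":
        b, a = stack.pop(), stack.pop()
        stack.append(a * b)          # multiply the two stored numbers
    elif op == "HALT":
        break                        # nothing left to do

print(stack[0])  # prints 50
```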
The ability to make decisions.

If a calculator was running on its own, you wouldn't be there pressing buttons. So the instructions for the calculator would have to be stored somewhere; for a CPU, that location is memory. A CPU has instructions to load numbers, do math, and so on. But in addition to that, it also has instructions like "if the number you got was equal to zero, do this particular set of instructions instead of the ones you were normally going to do." There are a few such instructions for decision making. The CPU now has everything it needs to be fully autonomous: it has a list of steps that tell it what to do, and the ability to make decisions.

There are a lot of people out there who program computers by writing in the few instructions the CPU will understand. However, programming this way gets tedious. These instructions are so limited that modern programs require millions of them in order to do the simplest things. Enter programming languages! Programming languages are abstractions to make writing software less tedious. For instance, take the following math. In most programming languages it's this easy:

answer = 5 + 2 * 4 - 10;

But when written as instructions for the CPU: Load the number 2. Load the number 4. Multiply the two numbers. Load the number 5. Add this number to the others. Load the number 10. Subtract this number from the others.

Still, CPUs only understand instructions. So special software called compilers turns the code written in these programming languages into machine language. There are a lot of different programming languages out there, and they all have different strengths. Some are closer to the way computers actually work, so they produce fast code; they can also be more verbose, just like assembly. Some offer more abstract ways to think about problems that make the code much simpler for the programmer; they can also be slower. This is what programming in Python looks like... While it may all seem like gibberish now, what you're looking at is code from someone who has a bit of experience. For newbies, you can start off real easy with a language like Python.

So to recap... Computers have a CPU inside. The CPU reads a series of instructions to know what to do. These instructions are stored in memory. It gets really tedious to write everything as low-level instructions, so we have programming languages to abstract things a little. Good beginner programming languages, like Python, make it easy to start.
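If you want to peek at this gap yourself, Python's standard dis module prints the low-level instructions the interpreter actually runs for a piece of code. These are interpreter bytecodes rather than raw CPU machine language, but the flavor is the same. The function below takes parameters instead of using constants so the interpreter does not pre-compute the answer.

```python
import dis

def compute(a, b, c, d):
    # Same shape as: answer = 5 + 2 * 4 - 10
    return a + b * c - d

# Prints one instruction per line: load a, load b, load c,
# multiply, add, load d, subtract, return.
dis.dis(compute)
```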
Naming decimal places

Decimals are another way of representing rational numbers, besides fractions. A decimal number contains a decimal point and is based on the number 10. The value of a digit is determined by its position relative to the decimal point. On the left side of the decimal point are integers, whose value ranges from negative infinity to infinity. On the right side of the decimal point, the value of the number is always between 0 and 1.

Example: In the number 17.421, the value left of the decimal point (17) represents the whole number, or the integer, while .421 represents a value that is larger than 0 and smaller than 1. The value .421 can also be represented by the fraction 421/1000 or an equivalent of that fraction.

The position of the digits in relation to the decimal point is crucial, as it determines the value of each digit in the number. The further we go to the right of the decimal point, the smaller the value each place represents. Since the decimal system is based on the number 10, the first place behind the decimal point represents tenths, the second place represents hundredths, and so on.

Example: The number 1743.465 can be written as 1*1000 + 7*100 + 4*10 + 3*1 + 4*1/10 + 6*1/100 + 5*1/1000. The part behind the decimal point can be read as 4 tenths, 6 hundredths and 5 thousandths, or 465 thousandths.

The same principle applies no matter how many places there are behind the decimal point. If there are 4 places behind the decimal point, we are talking about ten-thousandths. If there are 5 places, we are talking about hundred-thousandths, and so on. Naming decimal places is easy, you just need to catch the rhythm.

Naming decimal places exams for teachers

| Exam Name | File Size | Downloads | Upload date |
| --- | --- | --- | --- |
| Naming decimal places – very easy | 149.6 kB | 2003 | October 13, 2012 |
| Naming decimal places – easy | 146.2 kB | 1799 | October 13, 2012 |
| Naming decimal places – medium | 164.7 kB | 2178 | October 13, 2012 |
| Naming decimal places – hard | 169 kB | 1618 | October 13, 2012 |
| Naming decimal places – very hard | 164.9 kB | 1548 | October 13, 2012 |

Naming decimal places worksheets for students

| Worksheet Name | File Size | Downloads | Upload date |
| --- | --- | --- | --- |
| Naming decimal places | 494.7 kB | 2223 | October 14, 2012 |
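The naming rhythm described above is mechanical enough to write down in a few lines of Python. In this small sketch, the list of place names is simply written out by hand:

```python
# Expand the fractional digits of a decimal number into their place names.
PLACE_NAMES = ["tenths", "hundredths", "thousandths",
               "ten-thousandths", "hundred-thousandths"]

def name_decimal_places(number: str) -> None:
    whole, _, frac = number.partition(".")
    print(f"whole part: {whole}")
    for position, digit in enumerate(frac):
        print(f"{digit} {PLACE_NAMES[position]}")

name_decimal_places("1743.465")
# whole part: 1743
# 4 tenths
# 6 hundredths
# 5 thousandths
```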
Ancient Greek Farmers

Ancient Greek farming was based on a system of crop rotation and the use of manure as fertilizer. The Greeks also developed irrigation systems and terraced hillsides to maximize their crop yields. Despite these innovations, farming in ancient Greece was a difficult and labor-intensive task, with farmers facing challenges such as droughts, pests, and soil erosion. Some say that only one-fifth of the land was suitable for farming, because much of the land lacked soil and the soil that did exist was rocky.

Ancient Greek Farming

- Greek farmers grew olives, grapes, and grains like wheat and barley.
- They raised animals like sheep, goats, pigs, and chickens.
- Most farming was done by men and slaves; women helped with the harvest.
- Terracing was used to farm on Greece's hilly landscapes.
- Drought was common, so irrigation systems were important.
- Farmers brought their goods to market in the city-state's agora.
- Rituals and offerings to gods were a big part of farming life.

Olive Farming

Olive farming was integral to ancient Greece's agriculture, economy, and lifestyle. This hardy tree was suited to Greece's dry, stony landscape. Its fruit and oil had many uses, including food preparation, lighting, soap making, and ceremonial practices. The trade of olive oil significantly boosted Greece's economic prosperity. The olive branch evolved into a symbol of peace and wisdom, demonstrating its vital role in Greek society. The myth that the olive tree was bestowed by Athena underscored its spiritual and cultural value.

Vineyards / Wine Production

Wine production was a crucial part of ancient Greek farming, reflecting the Greek fondness for wine as part of their meals, social functions, and sacred ceremonies. Favorable climate conditions and hilly landscapes supported vine growth. Greek farmers meticulously tended to their vineyards, honing pruning and training strategies to increase grape yield. The Greeks crafted diverse types of wine, many sweetened, spiced, or watered down. The wine was typically stored in amphorae and was a key domestic and export commodity, contributing significantly to Greece's economy. The reverence for Dionysus, the god of wine, emphasized the societal value of winemaking in ancient Greece.

Irrigation

Irrigation was essential to ancient Greek farming due to the area's dry climate and sporadic rainfall. Water scarcity compelled the Greeks to devise smart solutions for crop hydration. Simple but effective irrigation systems, like canals and trenches, were constructed to distribute water from rivers and wells to their lands. Terracing was also utilized on hilly terrain to minimize water runoff and soil erosion. In more arid regions, a "qanat" system was used, where tunnels were dug into hills to tap into groundwater. These hydration methods were integral to Greek agriculture, enabling the growth of crops like olives, grapes, and grains.

Terracing (Agricultural Method)

Farmers in ancient Greece used terracing to counter the country's hilly landscape. This involved creating flat patches on steep slopes, enabling agriculture and conserving water and soil. Stone walls typically supported these terraces, mitigating soil erosion. The method made effective use of rainfall, slowing its movement for better soil absorption. Terracing allowed the successful growth of staple crops like olives and grapes, supporting Greece's economy.

Grain Cultivation (Wheat, Barley)

Grain cultivation, especially of wheat and barley, was critical in ancient Greek farming.
These grains were dietary staples and adapted to the Mediterranean environment. Wheat was grown in winter, and barley could withstand less favorable soils and conditions. Greek farmers used a two-field system to prevent soil exhaustion. Harvesting involved community participation. Wheat was used for bread, and barley for porridge or beer, highlighting their significance in Greek life.

The Agora

The Agora, ancient Greece's marketplace, was crucial for farmers to trade or sell their produce, including olives, grapes, grains, and livestock. Beyond being a market, the Agora was also a social and political hub. Here, farmers engaged with customers, set prices, and gauged demand trends. The revenue from Agora transactions contributed to both the individual farmer's income and the broader Greek city-state economy. The farmers would take food to the marketplace and set up stalls. An average farmer would make around 2 drachmas each day when they sold their crops.

Animal Husbandry

Animal husbandry was an essential facet of ancient Greek farming. Livestock, including sheep, goats, pigs, and chickens, offered resources such as meat, wool, and milk. Oxen and donkeys served as labor animals. The Greeks practiced transhumance, moving livestock between pastures according to the seasons, optimizing the use of grazing land and maintaining field fertility. Animal husbandry influenced the economy and rural landscape of ancient Greece. Most of the animals on farms were chickens, goats, pigs, sheep, and cows. The animals helped with the farm work or provided milk, eggs, meat, wool, and leather, and their manure was used to fertilize the soil so that crops would grow better.

What did the Ancient Greeks grow on their farms?

The most common crops were wheat, barley, olives, and grapes. All of these crops were very important to the life of the Ancient Greeks. Grain crops would be planted in October and then picked or harvested in April or May. Olives were not picked until February, and grapes were not picked until sometime in September.

The main crop was barley. Barley was important for Ancient Greek farmers because it was an ingredient used to make different foods that mattered to the Greeks: it was made into porridge, or into flour so that the Greeks could have bread to eat. Barley was also a big ingredient in wine. Olives were used to make olive oil, which was used both for cooking and for burning in lamps so that the Greeks could have light. Grapes were used to make wine and raisins, and to be eaten. Since wine was such an important drink in Ancient Greece, grapes were needed. Wine was watered down before drinking; it was never drunk without adding water, because that was considered dangerous.

Farms in Ancient Greece were small, most often only about five acres of land. Farms were important to farmers because they grew their own food to feed their families and sold crops to make a living.

Farming Tools

Ancient Greek farmers used several tools for crop cultivation. Plows, typically drawn by oxen or donkeys, prepared the soil for seeding. They used hoes and rakes for weeding and breaking up soil. Pruning hooks were crucial in vineyards and olive groves, while sickles were used for grain harvesting. Shovels and pickaxes helped create irrigation channels. These basic yet practical tools enabled efficient Greek farming.
Why Was Farming Difficult in Ancient Greece?

The mountainous topography of ancient Greece presented a major challenge for farming due to the scarcity of flat, arable land. In areas where farming was possible, the soil was often rocky, requiring substantial effort to prepare for planting. The dry climate, especially in summer, further complicated agriculture, necessitating efficient irrigation methods. With only around 20% of the land deemed fertile, farming was a demanding task. Additionally, the region's susceptibility to natural disasters like earthquakes posed risks to agricultural stability. Despite these hardships, the ancient Greeks adapted their farming techniques to the environment, establishing agriculture as a vital component of their economy and society.

Who is the Greek god of farming?

The Greek god of farming is Demeter. She was one of the twelve Olympian gods and goddesses and was responsible for the fertility of the earth and the growth of crops. Demeter was often depicted holding a sheaf of wheat or a cornucopia, symbolizing the abundance of the harvest. She was also associated with the cycle of life and death, as the growth and harvest of crops mirrored the natural cycle of birth and death.

What were the main crops grown in Ancient Greece?

The main crops grown in Ancient Greece were wheat, barley, olives, grapes, and vegetables. Wheat and barley were the most important crops, as they were used to make bread and other foods. Olives were grown for their oil, which was used for cooking, lighting, and bathing. Grapes were grown for wine, which was an important part of Greek culture. Vegetables such as beans, lentils, and onions were also grown.

What were the main methods of farming used in Ancient Greece?

The main methods of farming used in Ancient Greece were crop rotation, irrigation, and animal husbandry. Crop rotation was used to prevent soil depletion. Irrigation was used to water crops in dry areas. Animal husbandry provided manure to fertilize the soil, and meat and milk for food.

What were the main challenges faced by farmers in Ancient Greece?

Ancient Greek farmers faced significant hurdles. The scarcity of farmable land, with only a fifth of Greece's terrain suitable for agriculture, created high demand. The unpredictable Mediterranean climate, featuring hot, dry summers and wet, mild winters, made rainfall uncertain, posing a risk of crop failure. Farmers also had to contend with pests and diseases that could harm crops and livestock, leading to economic loss. Moreover, the lack of modern farming technology like tractors, irrigation systems, and pesticides hindered efficient farming and crop protection. Yet, despite these obstacles, agriculture was a vital sector in the ancient Greek economy, supplying food and raw materials for industries such as textiles and pottery.

How did farming contribute to the economy of Ancient Greece?

Agriculture was pivotal to the economy of Ancient Greece, with the vast majority of the population engaged in farming. These farmers were responsible for generating the majority of Greece's food supply, while simultaneously supporting other industries by providing essential raw materials. Their contributions included the production of cereals, fruits, vegetables, and livestock, along with raw materials like flax and wool for the textile industry, and grapes and olives for winemaking and oil production. Surplus goods were traded, not just domestically but internationally, facilitating income generation and bolstering the Greeks' living standards.
Thus, agriculture underpinned Ancient Greece's economy through the provision of food, raw materials, and commerce.

What were the social and cultural implications of farming in Ancient Greece?

Agriculture held significant sway in Ancient Greece, influencing its socio-cultural dynamics extensively. As the primary occupation for the majority, it effectively formed the bedrock of Greek societal norms, values, and beliefs. The centrality of the family was deeply rooted in Greek culture, a product of farming's family-centered nature. Given the small size of most Greek farms, family participation was imperative, nurturing a robust sense of togetherness and community. Farming, with its inherently demanding character, reinforced the values of perseverance and diligence among the Greeks, honing a commendable work ethic. Moreover, the profound dependence on the land cultivated a deep-seated appreciation for nature. The reliance on the weather for agricultural success drove the Greeks toward spiritual pursuits, seeking divine blessings for favorable conditions and abundant harvests, thereby enriching their religious customs. Thus, farming wielded substantial influence over the socio-cultural landscape of Ancient Greece, shaping its norms, beliefs, and societal structure.

Fun Facts About Ancient Greek Farmers:

- Some of the most popular vegetables in Ancient Greece were cucumbers, onions, and lettuce.
- Farms were usually passed to the son after the father passed away.
- Farming was important for Ancient Greek trading, and farmers would trade crops to other lands.
- Farmers would dig and use iron-tipped plows, hoes, and sickles to work their fields and harvest their crops.
- Most farmers had horses and donkeys, but these were used for transportation more than farming.
- Some of the foods made from the crops were cereal, wine, honey, cheese, and more.

What Did You Learn?

- Why was farming hard? Farming was hard because the soil was rocky and poor.
- What percentage of the soil was good to use for crops? 20% of the soil was good for farming.
- What were some of the crops that were grown in Ancient Greece? Some of the crops grown in Ancient Greece were barley, olives, grapes, and more.
- Why was farming important in Ancient Greece? Farming was important because farmers used it to grow food to feed their families, to trade at the marketplace, and to trade with other countries.
- What kind of animals were used on farms? Farms had animals such as chickens, pigs, goats, horses, and more.
insert() in Python

The insert() function is a Python library function used to insert a given element at a given index in a list. After the insertion of a new element, the elements after the inserted element are shifted one index to the right.

Syntax of insert() function in Python

The insert() function simply takes two parameters: the index at which the element is to be inserted and the element to be inserted. The syntax is My_list.insert(index, element).

Parameters of insert() function in Python

Below are the parameters of the insert function:

- index: The index at which the element is inserted. This parameter is of number type.
- element: The element that is to be inserted into the list. This parameter can be of any data type (string, number, object).

Return Value of insert() function in Python

The insert() method just inserts the element into the list and updates the list; it does not return any value, or we can say that the insert() method returns None.

Example of insert() function in Python

Let's take a look at a basic example of how the insert() function is used: adding a string into a list, as shown in the code sketch after this section.

What is the use of insert() function in Python?

Suppose you have been given some work by your teacher where you are arranging marks of students in descending order. Currently you have stored [99, 90, 80, 70] in the list and want to store 95 in the list. One way is to append 95 to the list and then sort it, but that would take O(n log n) time complexity for just one element, because sort() uses Timsort, a merge-sort-based algorithm with O(n log n) time complexity. So why make your code complex and costly when you can use the inbuilt function, which stores the given element at the given index in your list? Inserting an element using the insert() function takes O(n) time complexity.

So taking the above example, using the insert function you just insert the element 95 at the 1st index and your problem is solved: the final list will be [99, 95, 90, 80, 70]. As the name suggests, the insert() function in Python is used to insert the given element at a particular index in a list.

Errors in insert() function

The insert() method will throw an AttributeError if it is called on anything other than a list. If we try to insert an element into a string, we will get an error, because a string value does not have an insert() function. The error will say 'str' object has no attribute 'insert'.

Examples of insert() function in Python

Now that we have learned about the insert function, we can apply the insert() function in some examples (see the code sketch below).

Example 1: Inserting an element at the beginning of the list

Let's take a simple example where we have to insert an element at the beginning of the list. We simply pass the given element with the index parameter as 0. Element 11 is inserted at the 0th index and all elements after 11 are shifted one index to the right.

Example 2: Inserting an element at the end of the list

Let's take a simple example where we have to insert an element at the end of the list. We simply pass the given element with the index parameter as len(MyList), so the insert() method will insert the element at the end of the list. Element 10 is inserted last, just like with the append function.
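The code for the examples above did not survive in this copy of the article, so here is a minimal reconstruction consistent with the descriptions. The list contents and the name MyList are assumptions taken from the surrounding text.

```python
# Basic example: insert a string into the list; insert() itself returns None.
MyList = [1, 2, 3, 4, 5]
MyList.insert(2, "hello")        # put the string at index 2
print(MyList)                    # [1, 2, 'hello', 3, 4, 5]

# Example 1: inserting an element at the beginning of the list.
MyList = [22, 33, 44, 55]
MyList.insert(0, 11)             # 11 goes to index 0; the rest shift right
print(MyList)                    # [11, 22, 33, 44, 55]

# Example 2: inserting an element at the end of the list.
MyList.insert(len(MyList), 10)   # same effect as MyList.append(10)
print(MyList)                    # [11, 22, 33, 44, 55, 10]
```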
Example 3: Inserting a string value into a list

We pass a string value as the given element with the index parameter 2, so the insert() method will insert the string value at the 3rd position of our list. The element 'c' is inserted at the 3rd position. (The code for Examples 3 to 6 appears in the sketch after the summary list.)

Example 4: Using negative indexes to insert into the list

We use negative indexes like -1 and -2 as the index parameter, which insert the elements at the 2nd-last and 3rd-last positions of the list. Elements 10 and 11 are inserted at the 2nd-last and 3rd-last positions respectively.

Example 5: Inserting a tuple in a list

We make a tuple with some values and insert the tuple at the beginning of our list. We can see in the output that the tuple (1, 2, 3) is inserted at the beginning of the list, as a single element.

Example 6: Insert before any element

Let's take a case where we have to insert a given element just before another element. For that, we first get the index of the element before which the new element is to be inserted, using the index() function. That index is then used to insert the new element into the list. In the example below, we insert element 4 before element 5, and we can see that element 4 ends up before element 5.

Summary

- The insert() function is a Python library function that is used to insert a given element at a particular index in a list.
- The syntax of the insert() function is My_list.insert(index, element).
- The insert() function takes 2 parameters, index and element.
- There is no return value from the insert() function in Python (it returns None).
- This function will throw an AttributeError if anything other than a list is used to insert an element.
- We can insert elements using many approaches, as discussed above.
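As with the earlier examples, the original code blocks are missing here, so this sketch reconstructs Examples 3 to 6 from their descriptions; the starting list values are assumptions.

```python
# Example 3: insert a string at index 2 (the third position).
MyList = [1, 2, 3, 4, 5]
MyList.insert(2, "c")
print(MyList)                     # [1, 2, 'c', 3, 4, 5]

# Example 4: negative indexes count back from the end of the list.
MyList = [1, 2, 3]
MyList.insert(-1, 10)             # 10 becomes the 2nd-last element
MyList.insert(-2, 11)             # 11 becomes the 3rd-last element
print(MyList)                     # [1, 2, 11, 10, 3]

# Example 5: a tuple is inserted as a single element.
MyList = [4, 5, 6]
MyList.insert(0, (1, 2, 3))
print(MyList)                     # [(1, 2, 3), 4, 5, 6]

# Example 6: insert 4 just before element 5, using index() to find it.
MyList = [1, 2, 3, 5, 6]
MyList.insert(MyList.index(5), 4)
print(MyList)                     # [1, 2, 3, 4, 5, 6]
```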
Associative and Commutative Properties of Set Operations

In math, the associative and commutative properties are laws applied to addition and multiplication that always hold. The associative property states that you can regroup numbers and get the same answer, and the commutative property states that you can move numbers around and still arrive at the same answer. The word associative comes from associate, or group: the associative property is the rule that refers to grouping. For addition the rule is a + (b + c) = (a + b) + c; in numbers, 2 + (3 + 4) = (2 + 3) + 4. For multiplication the rule is a(bc) = (ab)c; in numbers, 2(3 · 4) = (2 · 3)4. Testing these properties on subtraction and division with simple examples shows that neither operation is associative or commutative.

The same laws appear in set theory. The union of two sets, A ∪ B, contains the elements that belong to either set; the intersection, A ∩ B, contains those elements that belong to both sets. Set-builder notation may be used to describe sets that are too tedious to list explicitly. Both union and intersection obey:

- Commutative properties: A ∪ B = B ∪ A and A ∩ B = B ∩ A. The operations are commutative because the order of the sets does not affect the result.
- Associative properties: (A ∪ B) ∪ C = A ∪ (B ∪ C) and (A ∩ B) ∩ C = A ∩ (B ∩ C). How the sets are grouped does not change the result.

Example: let A = {a, n, t}, B = {t, a, p} and C = {s, a, p}. Then (A ∪ B) ∪ C = {a, n, t, p} ∪ {s, a, p} = {a, n, t, p, s}, and A ∪ (B ∪ C) = {a, n, t} ∪ {t, a, p, s} = {a, n, t, p, s}, so the two groupings agree.

Venn diagrams give a quick way to check such identities: shade the region described by each side of the identity, and the two shadings coincide. The same technique illustrates De Morgan's laws and the associativity of the symmetric difference A △ B, the set analogue of exclusive or (XOR), which is likewise both commutative and associative. Bear in mind, however, that a Venn diagram never constitutes a proof; to prove these properties one must argue from the definitions of the operations, alongside the other algebraic laws of set theory, such as the distributive properties, the rules for complements, and the Cartesian product of two sets.
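These identities can be spot-checked with Python's built-in set type; the sample sets are the ones from the example above, and the assertions pass silently because every identity holds.

```python
A, B, C = {"a", "n", "t"}, {"t", "a", "p"}, {"s", "a", "p"}

# Commutative properties: order does not matter.
assert A | B == B | A                # union
assert A & B == B & A                # intersection

# Associative properties: grouping does not matter.
assert (A | B) | C == A | (B | C)    # {'a','n','t','p','s'} either way
assert (A & B) & C == A & (B & C)
assert (A ^ B) ^ C == A ^ (B ^ C)    # symmetric difference is associative too

# De Morgan's laws, taken relative to a universal set U.
U = A | B | C
assert U - (A | B) == (U - A) & (U - B)
assert U - (A & B) == (U - A) | (U - B)

print("All identities hold for these sets.")
```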
A new study describes how the mission became the first to make a fifth state of matter in Earth orbit, and the advantages of studying atoms in space.

This month marks 25 years since scientists first produced a fifth state of matter, which has extraordinary properties totally unlike solids, liquids, gases and plasmas. The achievement garnered a Nobel Prize and changed physics. A new study in the journal Nature builds on that legacy.

In July 2018, NASA's Cold Atom Lab became the first facility to produce that fifth state of matter, called a Bose-Einstein condensate (BEC), in Earth orbit. A fundamental physics facility on the International Space Station, Cold Atom Lab cools atoms down to ultracold temperatures in order to study their basic physical properties in ways that would not be possible on Earth. Now the mission team reports on the details of getting this unique lab up and running, as well as their progress toward a long-term goal of using microgravity to illuminate new features of the quantum world.

Whether you know it or not, quantum science touches our lives each day. Quantum mechanics refers to the branch of physics that focuses on the behaviors of atoms and subatomic particles, and it is a foundational part of many components in modern technologies, including cell phones and computers, that employ the wave nature of electrons in silicon. Although the first quantum phenomena were observed more than a century ago, scientists are still learning about this realm of our universe.

"Even dating back to when the first Bose-Einstein condensates were made, physicists recognized how working in space could provide big advantages in studying these quantum systems," said David Aveline, a member of the Cold Atom Lab science team at NASA's Jet Propulsion Laboratory in Southern California. "There have been some focused demonstrations in this regard, but now with the continuous operation of Cold Atom Lab, we're showing there's a lot to gain by doing these prolonged experiments day after day in orbit."

The colder atoms are, the slower they move and the easier they are to study. Ultracold atom facilities like Cold Atom Lab cool atoms down to within a fraction of a degree above absolute zero, the temperature at which they would theoretically stop moving entirely. Chilling atoms is also the only way to produce a Bose-Einstein condensate.

Scientists produce BECs in a vacuum, so on Earth the atoms are pulled down by gravity and fall quickly to the floor of the chamber, typically limiting observation times to less than a second. In the weightlessness of the space station, BECs can float, not unlike the astronauts on board. Inside Cold Atom Lab, that means longer observing times.

Unlike solids, liquids, gases and plasmas, BECs don't form naturally. They serve as a valuable tool for quantum physicists because all the atoms in a BEC have the same quantum identity, so they collectively exhibit properties that are typically displayed only by individual atoms or subatomic particles. Thus, BECs make those microscopic characteristics visible at a macroscopic scale.

Previous ultracold atom experiments have used sounding rockets or dropped their specially designed hardware from the top of tall towers to create seconds or minutes of weightlessness, the same way a zero gravity airplane does. From its perch on the station, Cold Atom Lab has provided its scientists thousands of hours of microgravity experiment time.
This allows them to repeat their experiments multiple times and to exercise more creativity and flexibility in the experiments they conduct. "With Cold Atom Lab, scientists can see their data in real time and make adjustments to their experiments on short notice," said Jason Williams, a member of the Cold Atom Lab science team at JPL. "That flexibility means we're able to learn quickly and address new questions as they arise."

Ultracold atom facilities in space should also be able to reach colder temperatures than Earth-bound laboratories. One way to do that is simply to let the ultracold atom clouds slowly expand, which causes them to get cooler and is easier to do without gravity pulling the atoms to the ground. Longer observing times and colder temperatures both provide opportunities for deeper insights into the behaviors of atoms and BECs.

On Earth, the coldest temperatures and longest observing times have been achieved only by experiments with entire rooms full of dedicated hardware or tall towers. The dishwasher-sized Cold Atom Lab hasn't yet set new records in those categories, but its basic capabilities are cutting edge, bundling the abilities of an extremely large lab into a small package.

"I really think we've just begun to scratch the surface of what can be done with ultracold atom experiments in microgravity," said Ethan Elliott, a member of the Cold Atom Lab science team at JPL. "I'm really excited to see what the fundamental physics community does with this capability in the long term."

Cold Atom Lab has now run successfully for two years, and astronauts recently helped upgrade the facility with a new tool called an atom interferometer that uses atoms to precisely measure forces, including gravity. The team recently confirmed that the new instrument is working as expected, making it the first atom interferometer to operate in space. The new study in Nature was led by Aveline, Williams and Elliott.

Designed and built at JPL, Cold Atom Lab is sponsored by the Space Life and Physical Sciences Research and Applications (SLPSRA) division of NASA's Human Exploration and Operations Mission Directorate at the agency's headquarters in Washington and the International Space Station Program at NASA's Johnson Space Center in Houston.
The byte is a unit of digital information that most commonly consists of eight bits. Historically, the byte was the number of bits used to encode a single character of text in a computer, and for this reason it is the smallest addressable unit of memory in many computer architectures. To disambiguate arbitrarily sized bytes from the common 8-bit definition, network protocol documents such as the Internet Protocol (RFC 791) refer to an 8-bit byte as an octet. The bits in an octet are usually counted with numbering from 0 to 7 or 7 to 0 depending on the bit endianness; when the first bit is number 0, the eighth bit is number 7.

| | |
| --- | --- |
| Unit system | unit derived from bit |
| Unit of | digital information, data size |
| Symbol | B or o |

The size of the byte has historically been hardware-dependent, and no definitive standards existed that mandated the size. Sizes from 1 to 48 bits have been used. The six-bit character code was an often-used implementation in early encoding systems, and computers using six-bit and nine-bit bytes were common in the 1960s. These systems often had memory words of 12, 18, 24, 30, 36, 48, or 60 bits, corresponding to 2, 3, 4, 5, 6, 8, or 10 six-bit bytes. In this era, bit groupings in the instruction stream were often referred to as syllables[a] or slabs, before the term byte became common.

The modern de facto standard of eight bits, as documented in ISO/IEC 2382-1:1993, is a convenient power of two permitting the binary-encoded values 0 through 255 for one byte (2 to the power 8 is 256). The international standard IEC 80000-13 codified this common meaning. Many types of applications use information representable in eight or fewer bits, and processor designers commonly optimize for this usage. The popularity of major commercial computing architectures has aided in the ubiquitous acceptance of the 8-bit byte. Modern architectures typically use 32- or 64-bit words, built of four or eight bytes, respectively.

The unit symbol for the byte was designated as the upper-case letter B by the International Electrotechnical Commission (IEC) and the Institute of Electrical and Electronics Engineers (IEEE). Internationally, the unit octet, symbol o, explicitly defines a sequence of eight bits, eliminating the potential ambiguity of the term "byte".

Etymology and history

The term byte was coined by Werner Buchholz in June 1956,[b] during the early design phase for the IBM Stretch computer, which had addressing to the bit and variable field length (VFL) instructions with a byte size encoded in the instruction. It is a deliberate respelling of bite to avoid accidental mutation to bit.[c]

Another origin of byte for bit groups smaller than a computer's word size, and in particular groups of four bits, is on record by Louis G. Dooley, who claimed he coined the term while working with Jules Schwartz and Dick Beeler on an air defense system called SAGE at MIT Lincoln Laboratory in 1956 or 1957, which was jointly developed by Rand, MIT, and IBM. Later on, Schwartz's language JOVIAL actually used the term, but the author recalled vaguely that it was derived from AN/FSQ-31.

Early computers used a variety of four-bit binary-coded decimal (BCD) representations and the six-bit codes for printable graphic patterns common in the U.S. Army (FIELDATA) and Navy. These representations included alphanumeric characters and special graphical symbols.
These sets were expanded in 1963 to seven bits of coding, called the American Standard Code for Information Interchange (ASCII) as the Federal Information Processing Standard, which replaced the incompatible teleprinter codes in use by different branches of the U.S. government and universities during the 1960s. ASCII included the distinction of upper- and lowercase alphabets and a set of control characters to facilitate the transmission of written language as well as printing device functions, such as page advance and line feed, and the physical or logical control of data flow over the transmission media.

During the early 1960s, while also active in ASCII standardization, IBM simultaneously introduced in its product line of System/360 the eight-bit Extended Binary Coded Decimal Interchange Code (EBCDIC), an expansion of their six-bit binary-coded decimal (BCDIC) representations[d] used in earlier card punches. The prominence of the System/360 led to the ubiquitous adoption of the eight-bit storage size, while in detail the EBCDIC and ASCII encoding schemes are different.

In the early 1960s, AT&T introduced digital telephony on long-distance trunk lines. These used the eight-bit μ-law encoding. This large investment promised to reduce transmission costs for eight-bit data. The development of eight-bit microprocessors in the 1970s popularized this storage size. Microprocessors such as the Intel 8008, the direct predecessor of the 8080 and the 8086, used in early personal computers, could also perform a small number of operations on the four-bit pairs in a byte, such as the decimal-add-adjust (DAA) instruction. A four-bit quantity is often called a nibble, also nybble, which is conveniently represented by a single hexadecimal digit.

Historically, the term octad or octade was used to denote eight bits as well, at least in Western Europe; however, this usage is no longer common. The exact origin of the term is unclear, but it can be found in British, Dutch, and German sources of the 1960s and 1970s, and throughout the documentation of Philips mainframe computers.

In the International System of Quantities (ISQ), B is the symbol of the bel, a unit of logarithmic power ratio named after Alexander Graham Bell, creating a conflict with the IEC specification. However, little danger of confusion exists, because the bel is a rarely used unit. It is used primarily in its decadic fraction, the decibel (dB), for signal strength and sound pressure level measurements, while a unit for one-tenth of a byte, the decibyte, and other fractions, are only used in derived units, such as transmission rates.

The lowercase letter o for octet is defined as the symbol for octet in IEC 80000-13 and is commonly used in languages such as French and Romanian, and is also combined with metric prefixes for multiples, for example ko and Mo.

Multiple-byte units

More than one system exists to define larger units based on the byte. Some systems are based on powers of 10; other systems are based on powers of 2. Nomenclature for these systems has been the subject of confusion. Systems based on powers of 10 reliably use standard SI prefixes ('kilo', 'mega', 'giga', ...) and their corresponding symbols (k, M, G, ...). Systems based on powers of 2, however, might use binary prefixes ('kibi', 'mebi', 'gibi', ...) and their corresponding symbols (Ki, Mi, Gi, ...) or they might use the prefixes K, M, and G, creating ambiguity.
While the numerical difference between the decimal and binary interpretations is relatively small for the kilobyte (about 2% smaller than the kibibyte), the systems deviate increasingly as units grow larger (the relative deviation grows by 2.4% for each three orders of magnitude). For example, a power-of-10-based yottabyte is about 17% smaller than a power-of-2-based yobibyte.

Units based on powers of 10

Definition of prefixes using powers of 10, in which 1 kilobyte (symbol kB) is defined to equal 1,000 bytes, is recommended by the International Electrotechnical Commission (IEC). The IEC standard defines eight such multiples, up to 1 yottabyte (YB), equal to 1000⁸ bytes. This definition is most commonly used for data transfer rates in computer networks, internal bus, hard drive and flash media transfer speeds, and for the capacities of most storage media, particularly hard drives, flash-based storage, and DVDs. It is also consistent with the other uses of the SI prefixes in computing, such as CPU clock speeds or measures of performance.

Units based on powers of 2

A system of units based on powers of 2 in which 1 kibibyte (KiB) is equal to 1,024 (i.e., 2¹⁰) bytes is defined by international standard IEC 80000-13 and is supported by national and international standards bodies (BIPM, IEC, NIST). The IEC standard defines eight such multiples, up to 1 yobibyte (YiB), equal to 1024⁸ bytes.

An alternate system of nomenclature for the same units (referred to here as the customary convention), in which 1 kilobyte (KB) is equal to 1,024 bytes, 1 megabyte (MB) is equal to 1024² bytes and 1 gigabyte (GB) is equal to 1024³ bytes, is mentioned by a 1990s JEDEC standard. Only the first three multiples (up to GB) are mentioned by the JEDEC standard, which makes no mention of TB and larger. The customary convention is used by the Microsoft Windows operating system and for random-access memory capacity, such as main memory and CPU cache size, and in marketing and billing by telecommunication companies, such as Vodafone, AT&T, Orange and Telstra.

History of the conflicting definitions

Contemporary[e] computer memory has a binary architecture, making a definition of memory units based on powers of 2 most practical. The use of the metric prefix kilo for binary multiples arose as a convenience, because 1,024 is approximately 1,000. This definition was popular in early decades of personal computing, with products like the Tandon 5¼-inch DD floppy format (holding 368,640 bytes) being advertised as "360 KB", following the 1,024-byte convention. It was not universal, however. The Shugart SA-400 5¼-inch floppy disk held 109,375 bytes unformatted and was advertised as "110 Kbyte", using the 1000 convention. Likewise, the 8-inch DEC RX01 floppy (1975) held 256,256 bytes formatted, and was advertised as "256k". Other disks were advertised using a mixture of the two definitions: notably, 3½-inch HD disks advertised as "1.44 MB" in fact have a capacity of 1,440 KiB, equivalent to 1.47 MB or 1.41 MiB.

In 1995, the International Union of Pure and Applied Chemistry's (IUPAC) Interdivisional Committee on Nomenclature and Symbols attempted to resolve this ambiguity by proposing a set of binary prefixes for the powers of 1024, including kibi (kilobinary), mebi (megabinary), and gibi (gigabinary). In December 1998, the IEC addressed such multiple usages and definitions by adopting the IUPAC's proposed prefixes (kibi, mebi, gibi, etc.) to unambiguously denote powers of 1024.
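The widening gap between the decimal and binary prefixes is easy to compute; a short Python sketch:

```python
# Compare the SI (powers of 10) and IEC (powers of 2) interpretations
# of each prefix, and show how the relative gap grows step by step.
prefixes = [("kB", "KiB"), ("MB", "MiB"), ("GB", "GiB"),
            ("TB", "TiB"), ("PB", "PiB")]

for power, (si, iec) in enumerate(prefixes, start=1):
    decimal = 1000 ** power
    binary = 1024 ** power
    gap = (binary - decimal) / decimal
    print(f"1 {iec} = {binary:,} bytes = {binary / decimal:.4f} {si} ({gap:.2%} larger)")
```

Run once, this prints a 2.40% gap for the kibibyte and roughly 2.4 additional percentage points for each step, matching the deviation described above.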
Under these prefixes, one kibibyte (1 KiB) is 1024¹ bytes = 1,024 bytes, one mebibyte (1 MiB) is 1024² bytes = 1,048,576 bytes, and so on.

Modern standard definitions

The IEC adopted the IUPAC proposal and published the standard in January 1999. The IEC prefixes are now part of the International System of Quantities. The IEC further specified that the kilobyte should only be used to refer to 1,000 bytes.

Lawsuits over definition

Lawsuits arising from alleged consumer confusion over the binary and decimal definitions of multiples of the byte have generally ended in favor of the manufacturers, with courts holding that the legal definition of gigabyte or GB is 1 GB = 1,000,000,000 (10⁹) bytes (the decimal definition), rather than the binary definition (2³⁰). Specifically, the United States District Court held that "the U.S. Congress has deemed the decimal definition of gigabyte to be the 'preferred' one for the purposes of 'U.S. trade and commerce' [...] The California Legislature has likewise adopted the decimal system for all 'transactions in this state.'"

Earlier lawsuits had ended in settlement with no court ruling on the question, such as a lawsuit against drive manufacturer Western Digital. Western Digital settled the challenge and added explicit disclaimers to products that the usable capacity may differ from the advertised capacity. Seagate was sued on similar grounds and also settled.

Orders of magnitude of data:

| Unit | Examples |
| --- | --- |
| kilobyte | text of "Jabberwocky"; a typical favicon |
| megabyte | text of Harry Potter and the Goblet of Fire |
| gigabyte | about half an hour of video; CD-quality audio of Mellon Collie and the Infinite Sadness |
| terabyte | the largest consumer hard drive in 2007; 1080p 4:3 video of Avatar: The Last Airbender television series in its entirety[f] |
| petabyte | 2000 years of MP3-encoded music |
| exabyte | global monthly Internet traffic in 2004 |
| zettabyte | global yearly Internet traffic in 2016 |

Common uses

The C and C++ programming languages define byte as an "addressable unit of data storage large enough to hold any member of the basic character set of the execution environment" (clause 3.6 of the C standard). The C standard requires that the integral data type unsigned char must hold at least 256 different values, and is represented by at least eight bits (clause 5.2.4.2.1). Various implementations of C and C++ reserve 8, 9, 16, 32, or 36 bits for the storage of a byte.[g] In addition, the C and C++ standards require that there are no gaps between two bytes. This means every bit in memory is part of a byte.

Java's primitive data type byte is defined as eight bits. It is a signed data type, holding values from −128 to 127. .NET programming languages, such as C#, define byte as an unsigned type, and sbyte as a signed data type, holding values from 0 to 255, and −128 to 127, respectively.

In data transmission systems, the byte is used as a contiguous sequence of bits in a serial data stream, representing the smallest distinguished unit of data. A transmission unit might additionally include start bits, stop bits, and parity bits, and thus its size may vary from seven to twelve bits to contain a single seven-bit ASCII code.

Notes

- The term syllable was used for bytes containing instructions or constituents of instructions, not for data bytes.
- Many sources erroneously indicate a birthday of the term byte in July 1956, but Werner Buchholz claimed that the term would have been coined in June 1956. In fact, the earliest document supporting this dates from 1956-06-11.
Buchholz stated that the transition to 8-bit bytes was conceived in August 1956, but the earliest document found using this notion dates from September 1956.
- Some later machines, e.g., the Burroughs B1700, CDC 3600, DEC PDP-6, and DEC PDP-10, had the ability to operate on arbitrary bytes no larger than the word size.
- There was more than one BCD code page.
- Through the 1970s there were machines with decimal architectures.
- Video is encoded at a bitrate of 27.80 Mbit/s, with a runtime of 1,403 min (84,180 seconds), resulting in an approximate size of ~0.2925 terabytes.
- The actual number of bits in a particular implementation is documented as CHAR_BIT, as defined in the header limits.h.
- Blaauw, Gerrit Anne; Brooks, Jr., Frederick Phillips; Buchholz, Werner (1962), "4: Natural Data Units" (PDF), in Buchholz, Werner (ed.), Planning a Computer System – Project Stretch, McGraw-Hill Book Company, Inc. / The Maple Press Company, York, PA., pp. 39–40, LCCN 61-10466, archived from the original (PDF) on 2017-04-03, retrieved 2017-04-03, Terms used here to describe the structure imposed by the machine design, in addition to bit, are listed below. Byte denotes a group of bits used to encode a character, or the number of bits transmitted in parallel to and from input-output units. A term other than character is used here because a given character may be represented in different applications by more than one code, and different codes may use different numbers of bits (i.e., different byte sizes). In input-output transmission the grouping of bits may be completely arbitrary and have no relation to actual characters. (The term is coined from bite, but respelled to avoid accidental mutation to bit.) A word consists of the number of data bits transmitted in parallel from or to memory in one memory cycle. Word size is thus defined as a structural property of the memory. (The term catena was coined for this purpose by the designers of the Bull GAMMA 60 computer.) Block refers to the number of words transmitted to or from an input-output unit in response to a single input-output instruction. Block size is a structural property of an input-output unit; it may have been fixed by the design or left to be varied by the program.
- Bemer, Robert William (1959), "A proposal for a generalized card code of 256 characters", Communications of the ACM, 2 (9): 19–23, doi:10.1145/368424.368435, S2CID 36115735
- Postel, J. (September 1981). Internet Protocol DARPA INTERNET PROGRAM PROTOCOL SPECIFICATION. p. 43. doi:10.17487/RFC0791. RFC 791. Retrieved 28 August 2020. octet An eight bit byte.
- Buchholz, Werner (1956-06-11). "7. The Shift Matrix" (PDF). The Link System. IBM. pp. 5–6. Stretch Memo No. 39G. Archived from the original (PDF) on 2017-04-04. Retrieved 2016-04-04. […] Most important, from the point of view of editing, will be the ability to handle any characters or digits, from 1 to 6 bits long. Figure 2 shows the Shift Matrix to be used to convert a 60-bit word, coming from Memory in parallel, into characters, or 'bytes' as we have called them, to be sent to the Adder serially. The 60 bits are dumped into magnetic cores on six different levels. Thus, if a 1 comes out of position 9, it appears in all six cores underneath. Pulsing any diagonal line will send the six bits stored along that line to the Adder. The Adder may accept all or only some of the bits. Assume that it is desired to operate on 4 bit decimal digits, starting at the right.
The 0-diagonal is pulsed first, sending out the six bits 0 to 5, of which the Adder accepts only the first four (0–3). Bits 4 and 5 are ignored. Next, the 4 diagonal is pulsed. This sends out bits 4 to 9, of which the last two are again ignored, and so on. It is just as easy to use all six bits in alphanumeric work, or to handle bytes of only one bit for logical analysis, or to offset the bytes by any number of bits. All this can be done by pulling the appropriate shift diagonals. An analogous matrix arrangement is used to change from serial to parallel operation at the output of the adder. […]
- 3600 Computer System – Reference Manual (PDF). St. Paul, Minnesota, USA: Control Data Corporation (CDC). 1966-10-11. 60021300. Archived from the original (PDF) on 2017-04-05. Retrieved 2017-04-05. Byte – A partition of a computer word. (NB. Discusses 12-bit, 24-bit and 48-bit bytes.)
- Rao, Thammavaram R. N.; Fujiwara, Eiji (1989). McCluskey, Edward J. (ed.). Error-Control Coding for Computer Systems. Prentice Hall Series in Computer Engineering (1 ed.). Englewood Cliffs, NJ, USA: Prentice Hall. ISBN 0-13-283953-9. LCCN 88-17892. (NB. Example of the usage of a code for "4-bit bytes".)
- Tafel, Hans Jörg (1971). Einführung in die digitale Datenverarbeitung [Introduction to digital information processing] (in German). Munich: Carl Hanser Verlag. p. 300. ISBN 3-446-10569-7. Byte = zusammengehörige Folge von i.a. neun Bits; davon sind acht Datenbits, das neunte ein Prüfbit. (NB. Defines a byte as a group of typically 9 bits; 8 data bits plus 1 parity bit.)
- ISO/IEC 2382-1: 1993, Information technology – Vocabulary – Part 1: Fundamental terms. 1993. A string that consists of a number of bits, treated as a unit, and usually representing a character or a part of a character. 1 The number of bits in a byte is fixed for a given data processing system. 2 The number of bits in a byte is usually 8.
- "Computer History Museum – Exhibits – Internet History – 1964: Internet History 1962 to 1992". Computer History Museum. 2017. Archived from the original on 2017-04-03. Retrieved 2017-04-03.
- Jaffer, Aubrey (2011). "Metric-Interchange-Format". Archived from the original on 2017-04-03. Retrieved 2017-04-03.
- Kozierok, Charles M. (2005-09-20). "The TCP/IP Guide – Binary Information and Representation: Bits, Bytes, Nibbles, Octets and Characters – Byte versus Octet". 3.0. Archived from the original on 2017-04-03. Retrieved 2017-04-03.
- ISO 2382-4, Organization of data (2 ed.). byte, octet, 8-bit byte: A string that consists of eight bits.
- Buchholz, Werner (February 1977). "The Word 'Byte' Comes of Age..." Byte Magazine. 2 (2): 144. […] The first reference found in the files was contained in an internal memo written in June 1956 during the early days of developing Stretch. A byte was described as consisting of any number of parallel bits from one to six. Thus a byte was assumed to have a length appropriate for the occasion. Its first use was in the context of the input-output equipment of the 1950s, which handled six bits at a time. The possibility of going to 8 bit bytes was considered in August 1956 and incorporated in the design of Stretch shortly thereafter. The first published reference to the term occurred in 1959 in a paper 'Processing Data in Bits and Pieces' by G A Blaauw, F P Brooks Jr and W Buchholz in the IRE Transactions on Electronic Computers, June 1959, page 121.
The notions of that paper were elaborated in Chapter 4 of Planning a Computer System (Project Stretch), edited by W Buchholz, McGraw-Hill Book Company (1962). The rationale for coining the term was explained there on page 40 as follows: Byte denotes a group of bits used to encode a character, or the number of bits transmitted in parallel to and from input-output units. A term other than character is used here because a given character may be represented in different applications by more than one code, and different codes may use different numbers of bits (ie, different byte sizes). In input-output transmission the grouping of bits may be completely arbitrary and have no relation to actual characters. (The term is coined from bite, but respelled to avoid accidental mutation to bit.) System/360 took over many of the Stretch concepts, including the basic byte and word sizes, which are powers of 2. For economy, however, the byte size was fixed at the 8 bit maximum, and addressing at the bit level was replaced by byte addressing. […] - "Timeline of the IBM Stretch/Harvest era (1956–1961)". Computer History Museum. June 1956. Archived from the original on 2016-04-29. Retrieved 2017-04-03. 1956 Summer: Gerrit Blaauw, Fred Brooks, Werner Buchholz, John Cocke and Jim Pomerene join the Stretch team. Lloyd Hunter provides transistor leadership.(NB. This timeline erroneously specifies the birth date of the term "byte" as July 1956, while Buchholz actually used the term as early as June 1956.) 1956 July [sic]: In a report Werner Buchholz lists the advantages of a 64-bit word length for Stretch. It also supports NSA's requirement for 8-bit bytes. Werner's term "Byte" first popularized in this memo. - Buchholz, Werner (1956-07-31). "5. Input-Output" (PDF). Memory Word Length. IBM. p. 2. Stretch Memo No. 40. Archived from the original (PDF) on 2017-04-04. Retrieved 2016-04-04. […] 60 is a multiple of 1, 2, 3, 4, 5, and 6. Hence bytes of length from 1 to 6 bits can be packed efficiently into a 60-bit word without having to split a byte between one word and the next. If longer bytes were needed, 60 bits would, of course, no longer be ideal. With present applications, 1, 4, and 6 bits are the really important cases. With 64-bit words, it would often be necessary to make some compromises, such as leaving 4 bits unused in a word when dealing with 6-bit bytes at the input and output. However, the LINK Computer can be equipped to edit out these gaps and to permit handling of bytes which are split between words. […] - Buchholz, Werner (1956-09-19). "2. Input-Output Byte Size" (PDF). Memory Word Length and Indexing. IBM. p. 1. Stretch Memo No. 45. Archived from the original (PDF) on 2017-04-04. Retrieved 2016-04-04. […] The maximum input-output byte size for serial operation will now be 8 bits, not counting any error detection and correction bits. Thus, the Exchange will operate on an 8-bit byte basis, and any input-output units with less than 8 bits per byte will leave the remaining bits blank. The resultant gaps can be edited out later by programming […] - Raymond, Eric Steven (2017) . "byte definition". Archived from the original on 2017-04-03. Retrieved 2017-04-03. - Bemer, Robert William (2000-08-08). "Why is a byte 8 bits? Or is it?". Computer History Vignettes. Archived from the original on 2017-04-03. Retrieved 2017-04-03. […] I came to work for IBM, and saw all the confusion caused by the 64-character limitation. 
Especially when we started to think about word processing, which would require both upper and lower case. […] I even made a proposal (in view of STRETCH, the very first computer I know of with an 8-bit byte) that would extend the number of punch card character codes to 256 […]. So some folks started thinking about 7-bit characters, but this was ridiculous. With IBM's STRETCH computer as background, handling 64-character words divisible into groups of 8 (I designed the character set for it, under the guidance of Dr. Werner Buchholz, the man who DID coin the term 'byte' for an 8-bit grouping). […] It seemed reasonable to make a universal 8-bit character set, handling up to 256. In those days my mantra was 'powers of 2 are magic'. And so the group I headed developed and justified such a proposal […] The IBM 360 used 8-bit characters, although not ASCII directly. Thus Buchholz's 'byte' caught on everywhere. I myself did not like the name for many reasons. The design had 8 bits moving around in parallel. But then came a new IBM part, with 9 bits for self-checking, both inside the CPU and in the tape drives. I exposed this 9-bit byte to the press in 1973. But long before that, when I headed software operations for Cie. Bull in France in 1965–66, I insisted that 'byte' be deprecated in favor of 'octet'. […] It is justified by new communications methods that can carry 16, 32, 64, and even 128 bits in parallel. But some foolish people now refer to a '16-bit byte' because of this parallel transfer, which is visible in the UNICODE set. I'm not sure, but maybe this should be called a 'hextet'. […] - Blaauw, Gerrit Anne; Brooks, Jr., Frederick Phillips; Buchholz, Werner (June 1959). "Processing Data in Bits and Pieces". IRE Transactions on Electronic Computers: 121. - Dooley, Louis G. (February 1995). "Byte: The Word". BYTE. Ocala, FL, USA. Archived from the original on 1996-12-20. […] The word byte was coined around 1956 to 1957 at MIT Lincoln Laboratories within a project called SAGE (the North American Air Defense System), which was jointly developed by Rand, Lincoln Labs, and IBM. In that era, computer memory structure was already defined in terms of word size. A word consisted of x number of bits; a bit represented a binary notational position in a word. Operations typically operated on all the bits in the full word.(NB. According to his son, Dooley wrote to him: "On good days, we would have the XD-1 up and running and all the programs doing the right thing, and we then had some time to just sit and talk idly, as we waited for the computer to finish doing its thing. On one such occasion, I coined the word "byte", they (Jules Schwartz and Dick Beeler) liked it, and we began using it amongst ourselves. The origin of the word was a need for referencing only a part of the word length of the computer, but a part larger than just one bit...Many programs had to access just a specific 4-bit segment of the full word...I wanted a name for this smaller segment of the fuller word. The word "bit" lead to "bite" (meaningfully less than the whole), but for a unique spelling, "i" could be "y", and thus the word "byte" was born.") We coined the word byte to refer to a logical set of bits less than a full word size. At that time, it was not defined specifically as x bits but typically referred to as a set of 4 bits, as that was the size of most of our coded data items. Shortly afterward, I went on to other responsibilities that removed me from SAGE. After having spent many years in Asia, I returned to the U.S. 
and was bemused to find out that the word byte was being used in the new microcomputer technology to refer to the basic addressable memory unit.
- Ram, Stefan. "Erklärung des Wortes "Byte" im Rahmen der Lehre binärer Codes" (in German). Berlin, Germany: Freie Universität Berlin. Retrieved 2017-04-10.
- Origin of the term "byte", 1956, archived from the original on 2017-04-10, retrieved 2017-04-10, A question-and-answer session at an ACM conference on the history of programming languages included this exchange: JOHN GOODENOUGH: You mentioned that the term "byte" is used in JOVIAL. Where did the term come from? JULES SCHWARTZ (inventor of JOVIAL): As I recall, the AN/FSQ-31, a totally different computer than the 709, was byte oriented. I don't recall for sure, but I'm reasonably certain the description of that computer included the word "byte," and we used it. FRED BROOKS: May I speak to that? Werner Buchholz coined the word as part of the definition of STRETCH, and the AN/FSQ-31 picked it up from STRETCH, but Werner is very definitely the author of that word. SCHWARTZ: That's right. Thank you.
- "List of EBCDIC codes by IBM". ibm.com.
- Williams, R. H. (1969-01-01). British Commercial Computer Digest: Pergamon Computer Data Series. Pergamon Press. ISBN 1483122107, 978-1483122106.
- "Philips – Philips Data Systems' product range – April 1971" (PDF). Philips. April 1971. Archived from the original (PDF) on 2016-03-04. Retrieved 2015-08-03.
- "When is a kilobyte a kibibyte? And an MB an MiB?". The International System of Units and the IEC. International Electrotechnical Commission. Retrieved 2010-08-30.
- Prefixes for Binary Multiples Archived 2007-08-08 at the Wayback Machine — The NIST Reference on Constants, Units, and Uncertainty
- 1977 Disk/Trend Report Rigid Disk Drives, published June 1977
- SanDisk USB Flash Drive Archived 2008-05-13 at the Wayback Machine "Note: 1 megabyte (MB) = 1 million bytes; 1 gigabyte (GB) = 1 billion bytes."
- Kilobyte – Definition and More from the Free Merriam-Webster Dictionary Archived 2010-04-09 at the Wayback Machine. Merriam-webster.com (2010-08-13). Retrieved on 2011-01-07.
- Kilobyte – Definition of Kilobyte at Dictionary.com Archived 2010-09-01 at the Wayback Machine. Dictionary.reference.com (1995-09-29). Retrieved on 2011-01-07.
- Definition of kilobyte from Oxford Dictionaries Online Archived 2006-06-25 at the Wayback Machine. Askoxford.com. Retrieved on 2011-01-07.
- "Determining Actual Disk Size: Why 1.44 MB Should Be 1.40 MB". Support.microsoft.com. 2003-05-06. Archived from the original on 2014-02-09. Retrieved 2014-03-25.
- "3G/GPRS data rates". Vodafone Ireland. Archived from the original on 26 October 2016. Retrieved 26 October 2016.
- "Data Measurement Scale". AT&T. Retrieved 26 October 2016.
- "Internet Mobile Access". Orange Romania. Archived from the original on 26 October 2016. Retrieved 26 October 2016.
- "Our Customer Terms" (PDF). Telstra. p. 7. Archived (PDF) from the original on 10 April 2017. Retrieved 26 October 2016.
- "Prefixes for binary multiples". iec.ch. International Electrotechnical Commission. Archived from the original on 25 September 2016. Retrieved 1 October 2016.
- "SA400 minifloppy". Swtpc.com. 2013-08-14. Archived from the original on 2014-05-27. Retrieved 2014-03-25.
- "Archived copy" (PDF). Archived from the original (PDF) on 2011-06-08. Retrieved 2011-06-24.
- "Archived copy" (PDF). Archived from the original (PDF) on 2011-04-23.
Retrieved 2011-06-24.
- IUCr 1995 Report - IUPAC Interdivisional Committee on Nomenclature and Symbols (IDCNS) http://ww1.iucr.org/iucr-top/cexec/rep95/idcns.htm
- "Binary Prefix" University of Auckland Department of Computer Science https://wiki.cs.auckland.ac.nz/stageonewiki/index.php/Binary_prefix
- National Institute of Standards and Technology. "Prefixes for binary multiples". Archived from the original on 2007-08-08. "In December 1998 the International Electrotechnical Commission (IEC) [...] approved as an IEC International Standard names and symbols for prefixes for binary multiples for use in the fields of data processing and data transmission."
- "What is a kilobyte?". Retrieved 2010-05-20.
- NIST "Prefixes for binary multiples" https://physics.nist.gov/cuu/Units/binary.html
- Amendment 2 to IEC International Standard IEC 60027-2: Letter symbols to be used in electrical technology - Part 2: Telecommunications and electronics.
- "Order Granting Motion to Dismiss" (PDF). United States District Court. Retrieved 2020-01-24.
- Mook, Nate (2006-06-28). "Western Digital Settles Capacity Suit". betanews. Retrieved 2009-03-30.
- Baskin, Scott D. (2006-02-01). "Defendant Western Digital Corporation's Brief in Support of Plaintiff's Motion for Preliminary Approval". Orin Safier v. Western Digital Corporation. Western Digital Corporation. Retrieved 2009-03-30.
- Judge, Peter (2007-10-26). "Seagate pays out over gigabyte definition". ZDNet. Retrieved 2014-09-16.
- Allison Dexter, "How Many Words are in Harry Potter?"; shows 190,637 words
- Kilobytes Megabytes Gigabytes Terabytes (Stanford University)
- Perenson, Melissa J. (4 January 2007). "Hitachi Introduces 1-Terabyte Hard Drive". www.pcworld.com. Retrieved 5 December 2020.
- "What does a petabyte look like?". Archived from the original on 28 January 2018. Retrieved 19 February 2018.
- Gross, Grant (24 November 2007). "Internet Could Max Out in 2 Years, Study Says". PC World. Archived from the original on 26 November 2007. Retrieved 28 November 2007.
- "The Zettabyte Era Officially Begins (How Much is That?)". Cisco Blogs. 2016-09-09. Retrieved 2021-08-04.
- Cline, Marshall. "I could imagine a machine with 9-bit bytes. But surely not 16-bit bytes or 32-bit bytes, right?".
- Klein, Jack (2008), Integer Types in C and C++, archived from the original on 2010-03-27, retrieved 2015-06-18
- Cline, Marshall. "C++ FAQ: the rules about bytes, chars, and characters".
- "External Interfaces/API". Northwestern University.
- "Avatar - The Last Airbender: The Complete Series Blu-ray". Blu-ray.com. Archived from the original on 28 April 2020. Retrieved 24 February 2021.
- Programming with the PDP-10 Instruction Set (PDF). PDP-10 System Reference Manual. 1. Digital Equipment Corporation (DEC). August 1969. Archived (PDF) from the original on 2017-04-05. Retrieved 2017-04-05.
- Ashley Taylor. "Bits and Bytes." Stanford. https://web.stanford.edu/class/cs101/bits-bytes.html
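As a closing aside to the prefix discussion above: the growing gap between the decimal (SI) and binary (IEC) conventions is easy to check numerically. The following Python sketch is purely illustrative and assumes nothing beyond the definitions quoted earlier (1000^n versus 1024^n):

```python
# Relative deviation between decimal (SI) and binary (IEC) byte prefixes.
SI_PREFIXES  = ["kilo", "mega", "giga", "tera", "peta", "exa", "zetta", "yotta"]
IEC_PREFIXES = ["kibi", "mebi", "gibi", "tebi", "pebi", "exbi", "zebi", "yobi"]

for n, (si, iec) in enumerate(zip(SI_PREFIXES, IEC_PREFIXES), start=1):
    decimal_size = 1000 ** n   # e.g. 1 kilobyte = 1000^1 bytes
    binary_size  = 1024 ** n   # e.g. 1 kibibyte = 1024^1 bytes
    deviation = 1 - decimal_size / binary_size
    print(f"1 {si}byte is {deviation:.1%} smaller than 1 {iec}byte")
```

The output runs from about 2.3% for kilo/kibi to about 17.3% for yotta/yobi, matching the figures quoted at the top of this section.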
This unit is all about learning and identifying different chemical reactions. One of the learning objectives of unit four in the AP Chemistry course is to "identify a reaction as acid-base, oxidation-reduction, or precipitation." Throughout this unit's study guides, you've gotten a taste of net ionic equations and precipitation reactions, as well as titrations and acid-base reactions. Oxidation-reduction reactions are going to be reviewed in-depth in the rest of this unit. For now, here is a quick rundown of these three types of reactions:

Acid-base reactions are chemical reactions that involve the transfer of a proton from one molecule to another. They often involve the transfer of a proton from a strong acid to a strong base, resulting in the formation of a salt and water. For more about acid-base reactions, check out the next study guide.

Oxidation-reduction reactions, also known as redox reactions, are chemical reactions in which the atoms of one or more elements are oxidized (lose electrons) and reduced (gain electrons). These reactions involve the transfer of electrons from a reducing agent to an oxidizing agent and are characterized by a change in the oxidation state of the elements involved in the reaction. Combustion reactions are a type of redox reaction, and you've already learned about them! The specifics and must-know information about redox reactions will be covered later in this unit.

Precipitation reactions are chemical reactions in which two or more soluble reactants combine to form an insoluble product, which is known as a precipitate. We'll focus on precipitation reactions in this study guide! When ions in aqueous solutions react, they may produce an insoluble (undissolvable) or barely soluble solid ionic compound. This solid product is called a precipitate. All sodium, potassium, and nitrate salts🧂 are soluble in water, so they aren't precipitates. You don't need to know any other solubility rules for the AP, but it doesn't hurt to be familiar with common soluble and insoluble compounds. Table 4.1 is a table of solubility for common ions in water. Usually, the question will tell you if the compound is soluble and which solution it's soluble in. 😊

We went over this in key topic 4.2, but let's do a quick overview! 🏃 The best steps to follow when writing a net ionic equation are:
1. Figure out which compounds are soluble and insoluble using solubility rules.
2. Balance the chemical equation. It may already be balanced, but it also may not, so you always have to check.
3. Write the complete ionic equation by dissociating soluble compounds into ions.
4. Omit the spectator ions and write the final net ionic equation of the given reaction. Make sure you include the phase of matter each compound is in.

Knowing how to write the net ionic equation for a precipitation reaction is just the first step! Let's take a look at a concentration of ions question, where you calculate how much of each ion is present after a precipitation reaction. The question gives us: 20.0 mL of 0.100 M NaCl is mixed with 30.0 mL of 0.0400 M Pb(C₂H₃O₂)₂.

Part a: What is the mass of the solid formed?
Part b: What are the concentrations of ions at the end of the reaction?

Since they didn't give us the equation, let's write it ourselves!

NaCl + Pb(C₂H₃O₂)₂ → NaC₂H₃O₂ + PbCl₂

Always check the equation is balanced! This one isn't, so let's balance it out and make sure each ion is in equal amounts on both sides.

2NaCl + Pb(C₂H₃O₂)₂ → 2NaC₂H₃O₂ + PbCl₂

With precipitation reactions and concentration of ions questions, there will always be an insoluble product. In this case, it is either NaC₂H₃O₂ or PbCl₂.
Since sodium is always soluble, PbCl₂ is the precipitate in this question.

2NaCl (aq) + Pb(C₂H₃O₂)₂ (aq) → 2NaC₂H₃O₂ (aq) + PbCl₂ (s)

Now that we have the equation and know the precipitate, let's get into the math itself. The question provided us with the volumes and molarities for each reactant. Using these two pieces of information, we can find the number of moles of NaCl and Pb(C₂H₃O₂)₂.

Molarity = moles / volume in L - We have to convert the volumes we have into L by dividing by 1000.

0.100 = x moles of NaCl / 0.020 L → x = 0.00200 moles of NaCl (aq)
0.0400 = x moles of Pb(C₂H₃O₂)₂ / 0.030 L → x = 0.00120 moles of Pb(C₂H₃O₂)₂ (aq)

Using the number of moles we solved for above, we can now use stoichiometry to answer part a. But wait! Which number do we do stoich with: 0.00200 or 0.00120🤔? This is where the limiting reactant (LR) comes into play. The limiting reactant in a reaction is the substance that limits the amount of products produced. Basically, there are different amounts of each reactant. One reactant is more abundant, right? The reactant that there is less of eventually stops the reaction and limits it, since that reactant runs out. The other reactant is called the excess since there is still some of it left over, unreacted. To find the LR, we have to do stoichiometry with both amounts. Convert each reactant into the precipitate:

0.00200 mol NaCl × (1 mol PbCl₂ / 2 mol NaCl) = 0.00100 mol PbCl₂
0.00120 mol Pb(C₂H₃O₂)₂ × (1 mol PbCl₂ / 1 mol Pb(C₂H₃O₂)₂) = 0.00120 mol PbCl₂

Since there are fewer moles of PbCl₂ using NaCl as a reactant, NaCl is the LR. Pb(C₂H₃O₂)₂ is the excess. Now that we know how many moles of NaCl, Pb(C₂H₃O₂)₂, and PbCl₂ we have, we can answer part a for real now! 🥳

0.00100 mol PbCl₂ × 278.2 g/mol = 0.278 g of PbCl₂

Yay, we did half the problem! Let's move on to solving for the concentrations of ions. In order for us to do this, we have to know the moles of each ion and the volumes of each ion. Let's think this through conceptually a bit. After PbCl₂ (s) forms, what is left in the solution? Looking back at the LR, either Na⁺ or Cl⁻ will have a final concentration of 0, since one of them will be completely used up. Since Cl⁻ is in the precipitate, Cl⁻ has a final concentration of 0. All of the chloride anions in the solution have been used up to form as much precipitate as possible. That was easy! 1/4 of part b is complete. 😊 The ion that is in the LR and precipitate ALWAYS has a final concentration of 0. Think of it as being 100% used up, so there is none of it left.

In this next step, we can solve for the concentrations of two ions: Na⁺ and C₂H₃O₂⁻. These are considered spectator ions since they aren't in the precipitate. To find their concentrations, we have to use both the 0.00100 mol of PbCl₂ from using NaCl and the 0.00120 mol of PbCl₂ from using Pb(C₂H₃O₂)₂. The first number can help us find Na⁺ whereas the second can help us find C₂H₃O₂⁻.

Na⁺: All you have to do now is find the volume, but we have to multiply the number of moles by 2 since NaCl has an initial coefficient of 2. This is where balancing the reaction comes in! To find the volume, we just have to add 20.0 mL and 30.0 mL and convert to liters.

(0.00100)(2) / 0.050 L = 0.0400 M of Na⁺

C₂H₃O₂⁻: We have to multiply by 2 here as well, since acetate carries a subscript of 2 in Pb(C₂H₃O₂)₂ on the reactant side of the equation.

(0.00120)(2) / 0.050 L = 0.0480 M of C₂H₃O₂⁻

We have one last ion we have to calculate the concentration of: Pb⁺². This is slightly harder to find, but with some practice, you got this! 😌 To find the excess amount of lead, convert the LR to the soluble product. Here, we would convert the 0.00200 moles of NaCl to find the moles of lead that reacted.
Since there is a 1:2 mole ratio between Pb(C₂H₃O₂)₂ and NaCl, 0.00100 moles of lead reacted. Then, we subtract this from the starting moles of the excess reactant (found in step 4), which is 0.00120: 0.00120 − 0.00100 = 0.00020 moles of Pb⁺² left unreacted. Then we just divide by the volume in liters, so 0.00020 moles / 0.050 L = 0.0040 M of Pb⁺².

Part a: 0.278 g of PbCl₂
Part b: [Cl⁻] = 0 M, [Na⁺] = 0.0400 M, [C₂H₃O₂⁻] = 0.0480 M, [Pb⁺²] = 0.0040 M

This is a very difficult question, but once you practice and understand it conceptually, you'll begin to be able to get through it faster. It is honestly a lot in one question and probably won't be tested exactly like this. However, knowing it will strengthen your overall stoichiometry skills, so it doesn't hurt! 🙃
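If you like double-checking this kind of arithmetic, the whole worked example fits in a few lines of Python. This is just a sketch of the calculation above (same volumes and molarities, same 278.2 g/mol molar mass for PbCl₂), not anything you would be handed on the exam:

```python
# Worked example: 20.0 mL of 0.100 M NaCl + 30.0 mL of 0.0400 M Pb(C2H3O2)2
# Reaction: 2 NaCl + Pb(C2H3O2)2 -> 2 NaC2H3O2 + PbCl2(s)

mol_nacl = 0.100 * 0.020    # molarity * liters = 0.00200 mol NaCl
mol_pb   = 0.0400 * 0.030   # 0.00120 mol Pb(C2H3O2)2

# Limiting reactant: convert each reactant to moles of PbCl2 (2:1 and 1:1 ratios)
pbcl2_from_nacl = mol_nacl / 2                    # 0.00100 mol
pbcl2_from_pb   = mol_pb / 1                      # 0.00120 mol
mol_pbcl2 = min(pbcl2_from_nacl, pbcl2_from_pb)   # NaCl limits: 0.00100 mol

# Part a: mass of the precipitate
print(f"mass PbCl2 = {mol_pbcl2 * 278.2:.3f} g")  # 0.278 g

# Part b: final ion concentrations in the combined 50.0 mL (0.050 L)
total_volume = 0.020 + 0.030
conc = {
    "Cl-":     0.0,                                  # fully consumed (ion in the LR and precipitate)
    "Na+":     mol_nacl / total_volume,              # spectator: 0.0400 M
    "C2H3O2-": 2 * mol_pb / total_volume,            # spectator: 0.0480 M
    "Pb2+":    (mol_pb - mol_pbcl2) / total_volume,  # excess left over: 0.0040 M
}
for ion, m in conc.items():
    print(f"[{ion}] = {m:.4f} M")
```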
One of the fundamental things that you must know about Formulas and Functions is the method in which Excel performs calculations. We will not go into any great detail on this, but there are some basics all Excel users need to know. The main function of Excel is obviously the number crunching side of things, and a good spreadsheet is one that returns accurate results 100% of the time. So whilst we may have a spreadsheet that looks very pretty and is formatted to make it look a million dollars, it is the guts of the spreadsheet, or the nuts and bolts, that make it either a workable spreadsheet or an unworkable spreadsheet, not the visual appeal.

Operators that Excel Recognizes

The text below is from the Excel help file:

Calculation operators in formulas

Operators specify the type of calculation that you want to perform on the elements of a formula. Microsoft Excel includes four different types of calculation operators: arithmetic, comparison, text, and reference.

Arithmetic operators

To perform basic mathematical operations such as addition, subtraction, or multiplication; combine numbers; and produce numeric results, use the following arithmetic operators.

| + (plus sign) | Addition | 3+3 |
| – (minus sign) | Subtraction; Negation | 3–1; –1 |
| * (asterisk) | Multiplication | 3*3 |
| / (forward slash) | Division | 3/3 |
| % (percent sign) | Percent | 20% |
| ^ (caret) | Exponentiation | 3^2 (the same as 3*3) |

Comparison operators

You can compare two values with the following operators. When two values are compared by using these operators, the result is a logical value, either TRUE or FALSE.

| = (equal sign) | Equal to | A1=B1 |
| > (greater than sign) | Greater than | A1>B1 |
| < (less than sign) | Less than | A1<B1 |
| >= (greater than or equal to sign) | Greater than or equal to | A1>=B1 |
| <= (less than or equal to sign) | Less than or equal to | A1<=B1 |
| <> (not equal to sign) | Not equal to | A1<>B1 |

Text concatenation operator

Use the ampersand (&) to join, or concatenate, one or more text strings to produce a single piece of text.

| & (ampersand) | Connects, or concatenates, two values to produce one continuous text value | "North" & "wind" produces "Northwind" |

Reference operators

Combine ranges of cells for calculations with the following operators.

| : (colon) | Range operator, which produces one reference to all the cells between two references, including the two references | B5:B15 |
| , (comma) | Union operator, which combines multiple references into one reference | SUM(B5:B15,D5:D15) |

End of MS Excel Help file

When Excel performs a calculation it does so in the following order:
1. Reference operators (the colon, single space, and comma)
2. Negation (as in –1)
3. Percent (%)
4. Exponentiation (^)
5. Multiplication and division (* and /)
6. Addition and subtraction (+ and –)
7. Text concatenation (&)
8. Comparison (=, <, >, <=, >=, <>)

If a formula contained both a multiplication and a division operator, Excel would calculate them from left to right. The same would apply for subtraction and addition. We can change the order in which Excel does its calculations by enclosing the relevant part of the formula in parentheses. Let's say we had the formula =10-10*10; the result would be -90 (negative 90). If we then used =(10-10)*10 the result would be 0 (zero). In other words, we have forced Excel to change its natural order of calculation. Excel is quite happy to do this. Some examples of this would be: =5+10*2 returns 25, while =(5+10)*2 returns 30; =100-50/2 returns 75, while =(100-50)/2 returns 25. So as you can see, we can manipulate any formula to calculate in the order we want, simply by placing the parentheses in the appropriate places.

We will leave Formulas at this stage to allow you time to let what we have discussed to date sink in. If there are any questions you would like to ask or any particular formulas you would like explained, you only need to ask. What we have shown you is what we consider the least you should know about Excel and formulas.
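Because most programming languages share these precedence rules, you can sanity-check an Excel formula outside Excel. Here is a small, purely illustrative Python sketch mirroring the =10-10*10 example above (Python uses ** where Excel uses ^):

```python
# Multiplication binds tighter than subtraction, in Python as in Excel.
print(10 - 10 * 10)    # -90, matching =10-10*10
print((10 - 10) * 10)  # 0, matching =(10-10)*10

# Exponentiation binds tighter still.
print(3 ** 2 + 1)      # 10, matching =3^2+1
print(3 ** (2 + 1))    # 27, matching =3^(2+1)
```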
Once you have gone over and fully understand these lessons on Excel's functions and formulas you will have the foundations on which we can build. You may also discover that you will know the fundamentals of Excel formulas and functions better than a lot of so-called experienced users!!

Go back to:
|Lesson 1 - Excel Fundamentals|
|Lesson 2 - Starting Excel and Excel Workbooks|
|Lesson 3 - Excel Toolbars and Task Panes|
|Lesson 4 - Excel Worksheets|
|Lesson 5 - Excel Cells and Navigating a Worksheet|
|Lesson 6 - Excel Cut/Copying and Pasting Data|
|Lesson 7 - Excel Copying with the Fill Handle|
|Lesson 8 - Excel Paste Special|
|Lesson 9 - Excel Insert Command|
|Lesson 10 - Excel's default options|
|Lesson 11 - Excel's Undo and Redo|
|Lesson 12 - Excel's Format Painter|
|Lesson 13 - Excel's Dates and Times|
|Lesson 14 - Excel's Custom Formats|
|Lesson 15 - Excel Formulas|
|Lesson 16 - Excel Cell References|
|Lesson 17 - Excel: Avoid Typing|
|Lesson 18 - Excel Formulae Arguments & Syntax|
|Lesson 19 - Excel Autosum Formula|
|Lesson 20 - Excel Auto Calculate|
|Lesson 21 - Excel's Insert Function|
|Lesson 22 - Excel's Useful Functions|
|Lesson 23 - Excel's Named Ranges|
|Lesson 24 - Excel's Constants and the Paste Name Dialog|
|Lesson 26 - Excel Comments Cell|
|Lesson 27 - Excel Find and Replace|
|Lesson 28 - Clear Excel Cell Contents|
|Lesson 29 - Effective Excel Printing 1|
|Lesson 30 - Effective Excel Printing 2|
|Lesson 31 - Sorting in Excel|
|Lesson 32 - Hide/Show Row/Columns in Excel|
|Lesson 33 - Auto-Formats in Excel|
|Lesson 34 - Creating a Basic Excel Spreadsheet|
|Lesson 35 - Excel Charting Lesson: The Basic Excel Spreadsheet|
|Lesson 36 - Excel Worksheet Protection|
|Lesson 37 - Excel IF Formula Nesting|
|Lesson 38 - Excel Function Now/Today Formulas|
Polarization (also polarisation) is a property applying to transverse waves that specifies the geometrical orientation of the oscillations. In a transverse wave, the direction of the oscillation is transverse to the direction of motion of the wave, so the oscillations can have different directions perpendicular to the wave direction. A simple example of a polarized transverse wave is vibrations traveling along a taut string (see image); for example, in a musical instrument like a guitar string. Depending on how the string is plucked, the vibrations can be in a vertical direction, horizontal direction, or at any angle perpendicular to the string. In contrast, in longitudinal waves, such as sound waves in a liquid or gas, the displacement of the particles in the oscillation is always in the direction of propagation, so these waves do not exhibit polarization. Transverse waves that exhibit polarization include electromagnetic waves such as light and radio waves, gravitational waves, and transverse sound waves (shear waves) in solids. In some types of transverse waves, the wave displacement is limited to a single direction, so these also do not exhibit polarization; for example, in surface waves in liquids (gravity waves), the wave displacement of the particles is always in a vertical plane. An electromagnetic wave such as light consists of a coupled oscillating electric field and magnetic field which are always perpendicular; by convention, the "polarization" of electromagnetic waves refers to the direction of the electric field. In linear polarization, the fields oscillate in a single direction. In circular or elliptical polarization, the fields rotate at a constant rate in a plane as the wave travels. The rotation can have two possible directions; if the fields rotate in a right hand sense with respect to the direction of wave travel, it is called right circular polarization, or, if the fields rotate in a left hand sense, it is called left circular polarization. Light or other electromagnetic radiation from many sources, such as the sun, flames, and incandescent lamps, consists of short wave trains with an equal mixture of polarizations; this is called unpolarized light. Polarized light can be produced by passing unpolarized light through a polarizing filter, which allows waves of only one polarization to pass through. The most common optical materials (such as glass) are isotropic and do not affect the polarization of light passing through them; however, some materials—those that exhibit birefringence, dichroism, or optical activity—can change the polarization of light. Some of these are used to make polarizing filters. Light is also partially polarized when it reflects from a surface. According to quantum mechanics, electromagnetic waves can also be viewed as streams of particles called photons. When viewed in this way, the polarization of an electromagnetic wave is determined by a quantum mechanical property of photons called their spin. A photon has one of two possible spins: it can either spin in a right hand sense or a left hand sense about its direction of travel. Circularly polarized electromagnetic waves are composed of photons with only one type of spin, either right- or left-hand. Linearly polarized waves consist of equal numbers of right and left hand spinning photons, with their phase synchronized so they superpose to give oscillation in a plane. 
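That last point (equal right- and left-circular components superposing to a plane oscillation) can be checked numerically with Jones vectors, which are introduced later in this article. A minimal, purely illustrative sketch; note that handedness sign conventions vary between texts:

```python
import numpy as np

# Jones vectors for right- and left-circular polarization (one common convention).
right = np.array([1, -1j]) / np.sqrt(2)
left  = np.array([1,  1j]) / np.sqrt(2)

# Equal-amplitude, phase-synchronized superposition:
print((right + left) / np.sqrt(2))  # [1, 0]: horizontal linear polarization

# Changing the relative phase only rotates the plane of the linear polarization:
print(np.round((right + 1j * left) / np.sqrt(2), 3))  # proportional to [1, -1]: linear at -45 degrees
```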
Polarization is an important parameter in areas of science dealing with transverse waves, such as optics, seismology, radio, and microwaves. Especially impacted are technologies such as lasers, wireless and optical fiber telecommunications, and radar.

Wave propagation and polarization

Most sources of light are classified as incoherent and unpolarized (or only "partially polarized") because they consist of a random mixture of waves having different spatial characteristics, frequencies (wavelengths), phases, and polarization states. However, for understanding electromagnetic waves and polarization in particular, it is easiest to just consider coherent plane waves; these are sinusoidal waves of one particular direction (or wavevector), frequency, phase, and polarization state. Characterizing an optical system in relation to a plane wave with those given parameters can then be used to predict its response to a more general case, since a wave with any specified spatial structure can be decomposed into a combination of plane waves (its so-called angular spectrum). And incoherent states can be modeled stochastically as a weighted combination of such uncorrelated waves with some distribution of frequencies (its spectrum), phases, and polarizations.

Transverse electromagnetic waves

Electromagnetic waves (such as light), traveling in free space or another homogeneous isotropic non-attenuating medium, are properly described as transverse waves, meaning that a plane wave's electric field vector E and magnetic field H are in directions perpendicular to (or "transverse" to) the direction of wave propagation; E and H are also perpendicular to each other. Considering a monochromatic plane wave of optical frequency f (light of vacuum wavelength λ has a frequency of f = c/λ where c is the speed of light), let us take the direction of propagation as the z axis. Being a transverse wave the E and H fields must then contain components only in the x and y directions whereas E_z = H_z = 0. Using complex (or phasor) notation, we understand the instantaneous physical electric and magnetic fields to be given by the real parts of the complex quantities occurring in the following equations. As a function of time t and spatial position z (since for a plane wave in the +z direction the fields have no dependence on x or y) these complex fields can be written as:

E(z, t) = (e_x, e_y, 0) exp[i 2π(z/(λ/n) − t/T)] = (e_x, e_y, 0) exp[i(kz − ωt)]
H(z, t) = (h_x, h_y, 0) exp[i 2π(z/(λ/n) − t/T)] = (h_x, h_y, 0) exp[i(kz − ωt)]

where λ/n is the wavelength in the medium (whose refractive index is n) and T = 1/f is the period of the wave. Here e_x, e_y, h_x, and h_y are complex numbers. In the second more compact form, as these equations are customarily expressed, these factors are described using the wavenumber k = 2πn/λ and angular frequency (or "radian frequency") ω = 2πf. In a more general formulation with propagation not restricted to the +z direction, the spatial dependence kz is replaced by k·r, where k is called the wave vector, the magnitude of which is the wavenumber. Thus the leading vectors e and h each contain up to two nonzero (complex) components describing the amplitude and phase of the wave's x and y polarization components (again, there can be no z polarization component for a transverse wave in the +z direction). For a given medium with a characteristic impedance η, h is related to e by:

h_y = e_x / η and h_x = −e_y / η.

In a dielectric, η is real and has the value η₀/n, where n is the refractive index and η₀ is the impedance of free space.
The impedance will be complex in a conducting medium. Note that given that relationship, the dot product of E and H must be zero:

E(r, t) · H(r, t) = e_x h_x + e_y h_y = e_x(−e_y/η) + e_y(e_x/η) = 0,

indicating that these vectors are orthogonal (at right angles to each other), as expected. So knowing the propagation direction (+z in this case) and η, one can just as well specify the wave in terms of just e_x and e_y describing the electric field. The vector containing e_x and e_y (but without the z component which is necessarily zero for a transverse wave) is known as a Jones vector. In addition to specifying the polarization state of the wave, a general Jones vector also specifies the overall magnitude and phase of that wave. Specifically, the intensity of the light wave is proportional to the sum of the squared magnitudes of the two electric field components:

I ∝ |e_x|² + |e_y|²,

however the wave's state of polarization is only dependent on the (complex) ratio of e_y to e_x. So let us just consider waves whose |e_x|² + |e_y|² = 1; this happens to correspond to an intensity of about 0.00133 watts per square meter in free space (where η = η₀). And since the absolute phase of a wave is unimportant in discussing its polarization state, let us stipulate that the phase of e_x is zero; in other words e_x is a real number while e_y may be complex. Under these restrictions, e_x and e_y can be represented as follows:

e_x = √((1 + Q)/2)
e_y = √((1 − Q)/2) · e^(iφ)

where the polarization state is now fully parameterized by the value of Q (such that −1 < Q < 1) and the relative phase φ.

By convention, when one speaks of a wave's "polarization," if not otherwise specified, reference is being made to the polarization of the electric field. The polarization of the magnetic field always follows that of the electric field but with a 90 degree rotation, as detailed above.

In addition to transverse waves, there are many wave motions where the oscillation is not limited to directions perpendicular to the direction of propagation. These cases are far beyond the scope of the current article, which concentrates on transverse waves (such as most electromagnetic waves in bulk media); however, one should be aware of cases where the polarization of a coherent wave cannot be described simply using a Jones vector, as we have just done. Just considering electromagnetic waves, we note that the preceding discussion strictly applies to plane waves in a homogeneous isotropic non-attenuating medium, whereas in an anisotropic medium (such as birefringent crystals as discussed below) the electric or magnetic field may have longitudinal as well as transverse components. In those cases the electric displacement D and magnetic flux density B still obey the above geometry, but due to anisotropy in the electric susceptibility (or in the magnetic permeability), now given by a tensor, the direction of E (or H) may differ from that of D (or B). Even in isotropic media, so-called inhomogeneous waves can be launched into a medium whose refractive index has a significant imaginary part (or "extinction coefficient") such as metals; these fields are also not strictly transverse. Surface waves or waves propagating in a waveguide (such as an optical fiber) are generally not transverse waves, but might be described as an electric or magnetic transverse mode, or a hybrid mode. Even in free space, longitudinal field components can be generated in focal regions, where the plane wave approximation breaks down.
An extreme example is radially or tangentially polarized light, at the focus of which the electric or magnetic field respectively is entirely longitudinal (along the direction of propagation).

For longitudinal waves such as sound waves in fluids, the direction of oscillation is by definition along the direction of travel, so the issue of polarization is not normally even mentioned. On the other hand, sound waves in a bulk solid can be transverse as well as longitudinal, for a total of three polarization components. In this case, the transverse polarization is associated with the direction of the shear stress and displacement in directions perpendicular to the propagation direction, while the longitudinal polarization describes compression of the solid and vibration along the direction of propagation. The differential propagation of transverse and longitudinal polarizations is important in seismology.

Polarization is best understood by initially considering only pure polarization states, and only a coherent sinusoidal wave at some optical frequency. The vector in the adjacent diagram might describe the oscillation of the electric field emitted by a single-mode laser (whose oscillation frequency would be typically 10¹⁵ times faster). The field oscillates in the x-y plane, along the page, with the wave propagating in the z direction, perpendicular to the page. The first two diagrams below trace the electric field vector over a complete cycle for linear polarization at two different orientations; these are each considered a distinct state of polarization (SOP). Note that the linear polarization at 45° can also be viewed as the addition of a horizontally linearly polarized wave (as in the leftmost figure) and a vertically polarized wave of the same amplitude in the same phase. Now if one were to introduce a phase shift in between those horizontal and vertical polarization components, one would generally obtain elliptical polarization as is shown in the third figure. When the phase shift is exactly ±90°, then circular polarization is produced (fourth and fifth figures). Thus is circular polarization created in practice, starting with linearly polarized light and employing a quarter-wave plate to introduce such a phase shift. The result of two such phase-shifted components in causing a rotating electric field vector is depicted in the animation on the right. Note that circular or elliptical polarization can involve either a clockwise or counterclockwise rotation of the field. These correspond to distinct polarization states, such as the two circular polarizations shown above.

Of course the orientation of the x and y axes used in this description is arbitrary. The choice of such a coordinate system and viewing the polarization ellipse in terms of the x and y polarization components corresponds to the definition of the Jones vector (below) in terms of those basis polarizations. One would typically choose axes to suit a particular problem such as x being in the plane of incidence. Since there are separate reflection coefficients for the linear polarizations in and orthogonal to the plane of incidence (p and s polarizations, see below), that choice greatly simplifies the calculation of a wave's reflection from a surface. Moreover, one can use as basis functions any pair of orthogonal polarization states, not just linear polarizations.
For instance, choosing right and left circular polarizations as basis functions simplifies the solution of problems involving circular birefringence (optical activity) or circular dichroism.

Consider a purely polarized monochromatic wave. If one were to plot the electric field vector over one cycle of oscillation, an ellipse would generally be obtained, as is shown in the figure, corresponding to a particular state of elliptical polarization. Note that linear polarization and circular polarization can be seen as special cases of elliptical polarization. A polarization state can then be described in relation to the geometrical parameters of the ellipse, and its "handedness", that is, whether the rotation around the ellipse is clockwise or counter clockwise. One parameterization of the elliptical figure specifies the orientation angle ψ, defined as the angle between the major axis of the ellipse and the x-axis, along with the ellipticity ε = a/b, the ratio of the ellipse's major to minor axis (also known as the axial ratio). The ellipticity parameter is an alternative parameterization of an ellipse's eccentricity, or of the ellipticity angle χ = arctan(b/a) = arctan(1/ε), as is shown in the figure. The angle χ is also significant in that the latitude (angle from the equator) of the polarization state as represented on the Poincaré sphere (see below) is equal to ±2χ. The special cases of linear and circular polarization correspond to an ellipticity ε of infinity and unity (or χ of zero and 45°) respectively.

Full information on a completely polarized state is also provided by the amplitude and phase of oscillations in two components of the electric field vector in the plane of polarization. This representation was used above to show how different states of polarization are possible. The amplitude and phase information can be conveniently represented as a two-dimensional complex vector (the Jones vector):

e = (a_x e^(iθ_x), a_y e^(iθ_y))

Here a_x and a_y denote the amplitude of the wave in the two components of the electric field vector, while θ_x and θ_y represent the phases. The product of a Jones vector with a complex number of unit modulus gives a different Jones vector representing the same ellipse, and thus the same state of polarization. The physical electric field, as the real part of the Jones vector, would be altered, but the polarization state itself is independent of absolute phase. The basis vectors used to represent the Jones vector need not represent linear polarization states (i.e. be real). In general any two orthogonal states can be used, where an orthogonal vector pair is formally defined as one having a zero inner product. A common choice is left and right circular polarizations, for example to model the different propagation of waves in two such components in circularly birefringent media (see below) or signal paths of coherent detectors sensitive to circular polarization.

Regardless of whether polarization state is represented using geometric parameters or Jones vectors, implicit in the parameterization is the orientation of the coordinate frame. This permits a degree of freedom, namely rotation about the propagation direction. When considering light that is propagating parallel to the surface of the Earth, the terms "horizontal" and "vertical" polarization are often used, with the former being associated with the first component of the Jones vector, or zero azimuth angle.
On the other hand, in astronomy the equatorial coordinate system is generally used instead, with the zero azimuth (or position angle, as it is more commonly called in astronomy to avoid confusion with the horizontal coordinate system) corresponding to due north.

s and p designations

Another coordinate system frequently used relates to the plane of incidence. This is the plane made by the incoming propagation direction and the vector perpendicular to the plane of an interface; in other words, the plane in which the ray travels before and after reflection or refraction. The component of the electric field parallel to this plane is termed p-like (parallel) and the component perpendicular to this plane is termed s-like (from senkrecht, German for perpendicular). Polarized light with its electric field along the plane of incidence is thus denoted p-polarized, while light whose electric field is normal to the plane of incidence is called s-polarized. P polarization is commonly referred to as transverse-magnetic (TM), and has also been termed pi-polarized or tangential plane polarized. S polarization is also called transverse-electric (TE), as well as sigma-polarized or sagittal plane polarized.

Unpolarized and partially polarized light

Natural light, and most other common sources of visible light, are incoherent: radiation is produced independently by a large number of atoms or molecules whose emissions are uncorrelated and generally of random polarizations. In this case the light is said to be unpolarized. This term is somewhat inexact, since at any instant of time at one location there is a definite direction to the electric and magnetic fields; however, it implies that the polarization changes so quickly in time that it will not be measured or relevant to the outcome of an experiment. A so-called depolarizer acts on a polarized beam to create one which is actually fully polarized at every point, but in which the polarization varies so rapidly across the beam that it may be ignored in the intended applications. Unpolarized light can be described as a mixture of two independent oppositely polarized streams, each with half the intensity. Light is said to be partially polarized when there is more power in one of these streams than the other. At any particular wavelength, partially polarized light can be statistically described as the superposition of a completely unpolarized component and a completely polarized one. One may then describe the light in terms of the degree of polarization and the parameters of the polarized component. That polarized component can be described in terms of a Jones vector or polarization ellipse, as is detailed above. However, in order to also describe the degree of polarization, one normally employs Stokes parameters (see below) to specify a state of partial polarization.

The transmission of plane waves through a homogeneous medium is fully described in terms of Jones vectors and 2×2 Jones matrices. However, in practice there are cases in which all of the light cannot be viewed in such a simple manner due to spatial inhomogeneities or the presence of mutually incoherent waves. So-called depolarization, for instance, cannot be described using Jones matrices. For these cases it is usual instead to use a 4×4 matrix that acts upon the Stokes 4-vector.
Such matrices were first used by Paul Soleillet in 1929, although they have come to be known as Mueller matrices. While every Jones matrix has a Mueller matrix, the reverse is not true. Mueller matrices are then used to describe the observed polarization effects of the scattering of waves from complex surfaces or ensembles of particles, as shall now be presented.

The Jones vector perfectly describes the state of polarization and phase of a single monochromatic wave, representing a pure state of polarization as described above. However any mixture of waves of different polarizations (or even of different frequencies) does not correspond to a Jones vector. In so-called partially polarized radiation the fields are stochastic, and the variations and correlations between components of the electric field can only be described statistically. One such representation is the coherency matrix:

Φ = [ ⟨e_x e_x*⟩  ⟨e_x e_y*⟩ ]
    [ ⟨e_y e_x*⟩  ⟨e_y e_y*⟩ ]

where angular brackets denote averaging over many wave cycles. Several variants of the coherency matrix have been proposed: the Wiener coherency matrix and the spectral coherency matrix of Richard Barakat measure the coherence of a spectral decomposition of the signal, while the Wolf coherency matrix averages over all time/frequencies. The coherency matrix contains all second order statistical information about the polarization. This matrix can be decomposed into the sum of two idempotent matrices, corresponding to the eigenvectors of the coherency matrix, each representing a polarization state that is orthogonal to the other. An alternative decomposition is into completely polarized (zero determinant) and unpolarized (scaled identity matrix) components. In either case, the operation of summing the components corresponds to the incoherent superposition of waves from the two components. The latter case gives rise to the concept of the "degree of polarization"; i.e., the fraction of the total intensity contributed by the completely polarized component.

The coherency matrix is not easy to visualize, and it is therefore common to describe incoherent or partially polarized radiation in terms of its total intensity (I), (fractional) degree of polarization (p), and the shape parameters of the polarization ellipse. An alternative and mathematically convenient description is given by the Stokes parameters, introduced by George Gabriel Stokes in 1852. The relationship of the Stokes parameters to intensity and polarization ellipse parameters is shown in the equations below:

S₀ = I
S₁ = I p cos 2ψ cos 2χ
S₂ = I p sin 2ψ cos 2χ
S₃ = I p sin 2χ

Here Ip, 2ψ and 2χ are the spherical coordinates of the polarization state in the three-dimensional space of the last three Stokes parameters. Note the factors of two before ψ and χ, corresponding respectively to the facts that any polarization ellipse is indistinguishable from one rotated by 180°, or one with the semi-axis lengths swapped accompanied by a 90° rotation. The Stokes parameters are sometimes denoted I, Q, U and V.

Neglecting the first Stokes parameter S₀ (or I), the three other Stokes parameters can be plotted directly in three-dimensional Cartesian coordinates. For a given power in the polarized component, given by P = √(S₁² + S₂² + S₃²), the set of all polarization states is then mapped to points on the surface of the so-called Poincaré sphere (but of radius P), as shown in the accompanying diagram.
Often the total beam power is not of interest, in which case a normalized Stokes vector is used by dividing the Stokes vector by the total intensity S0: (1, S1/S0, S2/S0, S3/S0). The normalized Stokes vector then has unity power (S0 = 1), and the three significant Stokes parameters plotted in three dimensions will lie on the unity-radius Poincaré sphere for pure polarization states (where S1² + S2² + S3² = S0²). Partially polarized states will lie inside the Poincaré sphere at a distance p = √(S1² + S2² + S3²)/S0 from the origin. When the non-polarized component is not of interest, the Stokes vector can be further normalized, dividing the last three components also by the degree of polarization p, to obtain a unit three-vector. When plotted, that point will lie on the surface of the unity-radius Poincaré sphere and indicate the state of polarization of the polarized component. Any two antipodal points on the Poincaré sphere refer to orthogonal polarization states. The overlap between any two polarization states is dependent solely on the distance between their locations along the sphere. This property, which can only be true when pure polarization states are mapped onto a sphere, is the motivation for the invention of the Poincaré sphere and the use of Stokes parameters, which are thus plotted on (or beneath) it.

Implications for reflection and propagation

Polarization in wave propagation

In a vacuum, the components of the electric field propagate at the speed of light, so that the phase of the wave varies in space and time while the polarization state does not. That is, the electric field vector e of a plane wave in the +z direction follows e(z, t) = Re[e0 e^(i(kz − ωt))], where k is the wavenumber and e0 is the Jones vector. As noted above, the instantaneous electric field is the real part of the product of the Jones vector times the phase factor e^(i(kz − ωt)). When an electromagnetic wave interacts with matter, its propagation is altered according to the material's (complex) index of refraction. When the real or imaginary part of that refractive index is dependent on the polarization state of a wave, properties known as birefringence and polarization dichroism (or diattenuation) respectively, then the polarization state of a wave will generally be altered. In such media, an electromagnetic wave with any given state of polarization may be decomposed into two orthogonally polarized components that encounter different propagation constants. The effect of propagation over a given path on those two components is most easily characterized in the form of a complex 2×2 transformation matrix J known as a Jones matrix, which maps the Jones vector of the incident wave onto that of the exiting wave: e_out = J e_in. The Jones matrix due to passage through a transparent material is dependent on the propagation distance as well as the birefringence. The birefringence (as well as the average refractive index) will generally be dispersive, that is, it will vary as a function of optical frequency (wavelength). In the case of non-birefringent materials, however, the 2×2 Jones matrix is the identity matrix (multiplied by a scalar phase factor and attenuation factor), implying no change in polarization during propagation. For propagation effects in two orthogonal modes, the Jones matrix can be written as J = T diag(g1, g2) T⁻¹, where g1 and g2 are complex numbers describing the phase delay and possibly the amplitude attenuation due to propagation in each of the two polarization eigenmodes. T is a unitary matrix representing a change of basis from these propagation modes to the linear system used for the Jones vectors; in the case of linear birefringence or diattenuation the modes are themselves linear polarization states, so T and T⁻¹ can be omitted if the coordinate axes have been chosen appropriately.
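As a concrete illustration of the decomposition J = T diag(g1, g2) T⁻¹, here is a small Python sketch of a purely birefringent element (|g1| = |g2| = 1): a linear retarder with its fast axis rotated away from the x axis. The function name and the quarter-wave example are illustrative assumptions, not a standard API.

```python
import numpy as np

def retarder(delta, theta):
    """Jones matrix of a linear retarder: phase delay `delta` between the two
    linear eigenmodes, fast axis rotated by `theta` from the x axis.

    Implements J = T @ diag(g1, g2) @ T^{-1} with unimodular g's (pure
    birefringence, |g1| = |g2| = 1), following the decomposition in the text.
    """
    g = np.diag([1.0, np.exp(1j * delta)])   # diag(g1, g2), phase delay only
    c, s = np.cos(theta), np.sin(theta)
    t = np.array([[c, -s], [s, c]])          # change of basis T (a rotation)
    return t @ g @ t.T                       # T^{-1} = T.T for a real rotation

x_polarized = np.array([1.0, 0.0])
quarter_wave = retarder(np.pi / 2, np.pi / 4)   # 90 deg delay, axes at 45 deg
print(quarter_wave @ x_polarized)
# -> [0.5+0.5j, 0.5-0.5j]: equal magnitudes with a 90 deg relative phase,
#    i.e., circular polarization.
```

With delta = π/2 and axes at 45°, an x-polarized input emerges circularly polarized, the first stop in the phase-accrual cycle described in the next paragraph.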
In media termed birefringent, in which the amplitudes are unchanged but a differential phase delay occurs, the Jones matrix is a unitary matrix: |g1| = |g2| = 1. Media termed diattenuating (or dichroic in the sense of polarization), in which only the amplitudes of the two polarizations are affected differentially, may be described using a Hermitian matrix (generally multiplied by a common phase factor). In fact, since any matrix may be written as the product of unitary and positive Hermitian matrices, light propagation through any sequence of polarization-dependent optical components can be written as the product of these two basic types of transformations. In birefringent media there is no attenuation, but two modes accrue a differential phase delay. Well known manifestations of linear birefringence (that is, in which the basis polarizations are orthogonal linear polarizations) appear in optical wave plates/retarders and many crystals. If linearly polarized light passes through a birefringent material, its state of polarization will generally change, unless its polarization direction is identical to one of those basis polarizations. Since the phase shift, and thus the change in polarization state, is usually wavelength-dependent, such objects viewed under white light in between two polarizers may give rise to colorful effects, as seen in the accompanying photograph. Circular birefringence is also termed optical activity, especially in chiral fluids, or Faraday rotation, when due to the presence of a magnetic field along the direction of propagation. When linearly polarized light is passed through such an object, it will exit still linearly polarized, but with the axis of polarization rotated. A combination of linear and circular birefringence will have as basis polarizations two orthogonal elliptical polarizations; however, the term "elliptical birefringence" is rarely used. One can visualize the case of linear birefringence (with two orthogonal linear propagation modes) with an incoming wave linearly polarized at a 45° angle to those modes. As a differential phase starts to accrue, the polarization becomes elliptical, eventually changing to purely circular polarization (90° phase difference), then to elliptical and eventually linear polarization (180° phase) perpendicular to the original polarization, then through circular again (270° phase), then elliptical with the original azimuth angle, and finally back to the original linearly polarized state (360° phase) where the cycle begins anew. In general the situation is more complicated and can be characterized as a rotation in the Poincaré sphere about the axis defined by the propagation modes. Examples for linear (blue), circular (red), and elliptical (yellow) birefringence are shown in the figure on the left. The total intensity and degree of polarization are unaffected. If the path length in the birefringent medium is sufficient, the two polarization components of a collimated beam (or ray) can exit the material with a positional offset, even though their final propagation directions will be the same (assuming the entrance face and exit face are parallel). This is commonly viewed using calcite crystals, which present the viewer with two slightly offset images, in opposite polarizations, of an object behind the crystal. It was this effect that provided the first discovery of polarization, by Erasmus Bartholinus in 1669. Media in which transmission of one polarization mode is preferentially reduced are called dichroic or diattenuating. 
Like birefringence, diattenuation can be with respect to linear polarization modes (in a crystal) or circular polarization modes (usually in a liquid). Devices that block nearly all of the radiation in one mode are known as polarizing filters or simply "polarizers". This corresponds to g2 = 0 in the above representation of the Jones matrix. The output of an ideal polarizer is a specific polarization state (usually linear polarization) with an amplitude equal to the input wave's original amplitude in that polarization mode. Power in the other polarization mode is eliminated. Thus if unpolarized light is passed through an ideal polarizer (where g1 = 1 and g2 = 0), exactly half of its initial power is retained. Practical polarizers, especially inexpensive sheet polarizers, have additional loss so that g1 < 1. However, in many instances the more relevant figure of merit is the polarizer's degree of polarization or extinction ratio, which involve a comparison of g1 to g2. Since Jones vectors refer to waves' amplitudes (rather than intensity), when illuminated by unpolarized light the remaining power in the unwanted polarization will be (g2/g1)² of the power in the intended polarization. In addition to birefringence and dichroism in extended media, polarization effects describable using Jones matrices can also occur at the (reflective) interface between two materials of different refractive index. These effects are treated by the Fresnel equations. Part of the wave is transmitted and part is reflected; for a given material those proportions (and also the phase of reflection) are dependent on the angle of incidence and are different for the s and p polarizations. Therefore, the polarization state of reflected light (even if initially unpolarized) is generally changed. Any light striking a surface at a special angle of incidence known as Brewster's angle, where the reflection coefficient for p polarization is zero, will be reflected with only the s-polarization remaining. This principle is employed in the so-called "pile of plates polarizer" (see figure) in which part of the s polarization is removed by reflection at each Brewster angle surface, leaving only the p polarization after transmission through many such surfaces. The generally smaller reflection coefficient of the p polarization is also the basis of polarized sunglasses; by blocking the s (horizontal) polarization, most of the glare due to reflection from a wet street, for instance, is removed. In the important special case of reflection at normal incidence (not involving anisotropic materials) there is no particular s or p polarization. Both the x and y polarization components are reflected identically, and therefore the polarization of the reflected wave is identical to that of the incident wave. However, in the case of circular (or elliptical) polarization, the handedness of the polarization state is thereby reversed, since by convention this is specified relative to the direction of propagation. The circular rotation of the electric field around the x-y axes that is called "right-handed" for a wave in the +z direction is "left-handed" for a wave in the −z direction. But in the general case of reflection at a nonzero angle of incidence, no such generalization can be made. For instance, right-circularly polarized light reflected from a dielectric surface at a grazing angle will still be right-handed (but elliptically) polarized. Linearly polarized light reflected from a metal at non-normal incidence will generally become elliptically polarized.
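The following Python sketch evaluates the standard Fresnel amplitude reflection coefficients for the s and p polarizations at a dielectric interface and confirms that rp vanishes at Brewster's angle. The air-to-glass indices are illustrative assumptions, and sign conventions for rs and rp differ between texts.

```python
import numpy as np

def fresnel_r(n1, n2, theta_i):
    """Amplitude reflection coefficients (rs, rp) at a planar interface between
    non-magnetic media, for angle of incidence theta_i in radians.

    A sketch of the standard Fresnel equations (no total internal reflection
    handling); reflectances are |r|^2.
    """
    theta_t = np.arcsin(n1 * np.sin(theta_i) / n2)   # Snell's law
    rs = (n1 * np.cos(theta_i) - n2 * np.cos(theta_t)) / \
         (n1 * np.cos(theta_i) + n2 * np.cos(theta_t))
    rp = (n2 * np.cos(theta_i) - n1 * np.cos(theta_t)) / \
         (n2 * np.cos(theta_i) + n1 * np.cos(theta_t))
    return rs, rp

n1, n2 = 1.0, 1.5                          # air to glass (assumed indices)
brewster = np.arctan(n2 / n1)              # ~56.3 degrees for this pair
rs, rp = fresnel_r(n1, n2, brewster)
print(np.degrees(brewster), rs**2, rp**2)  # rp**2 -> 0: only s is reflected
```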
Such reflection cases are handled using Jones vectors acted upon by the different Fresnel coefficients for the s and p polarization components.

Measurement techniques involving polarization

Some optical measurement techniques are based on polarization. In many other optical techniques polarization is crucial or at least must be taken into account and controlled; such examples are too numerous to mention.

Measurement of stress

In engineering, the phenomenon of stress-induced birefringence allows for stresses in transparent materials to be readily observed. As noted above and seen in the accompanying photograph, the chromaticity of birefringence typically creates colored patterns when viewed in between two polarizers. As external forces are applied, internal stress induced in the material is thereby observed. Additionally, birefringence is frequently observed due to stresses "frozen in" at the time of manufacture. This is famously observed in cellophane tape, whose birefringence is due to the stretching of the material during the manufacturing process. Ellipsometry is a powerful technique for the measurement of the optical properties of a uniform surface. It involves measuring the polarization state of light following specular reflection from such a surface. This is typically done as a function of incidence angle or wavelength (or both). Since ellipsometry relies on reflection, it is not required for the sample to be transparent to light or for its back side to be accessible. Ellipsometry can be used to model the (complex) refractive index of a surface of a bulk material. It is also very useful in determining parameters of one or more thin film layers deposited on a substrate. Due to their reflection properties, not only the predicted magnitudes of the p and s polarization components, but also their relative phase shifts upon reflection, are compared to measurements using an ellipsometer. A normal ellipsometer does not measure the actual reflection coefficient (which requires careful photometric calibration of the illuminating beam) but the ratio of the p and s reflections, as well as the change of polarization ellipticity (hence the name) induced upon reflection by the surface being studied. In addition to use in science and research, ellipsometers are used in situ to control production processes, for instance. The property of (linear) birefringence is widespread in crystalline minerals, and indeed was pivotal in the initial discovery of polarization. In mineralogy, this property is frequently exploited using polarization microscopes, for the purpose of identifying minerals. See optical mineralogy for more details. Sound waves in solid materials exhibit polarization. Differential propagation of the three polarizations through the earth is crucial in the field of seismology. Horizontally and vertically polarized seismic waves (shear waves) are termed SH and SV, while waves with longitudinal polarization (compressional waves) are termed P-waves. We have seen (above) that the birefringence of a type of crystal is useful in identifying it, and thus detection of linear birefringence is especially useful in geology and mineralogy. Linearly polarized light generally has its polarization state altered upon transmission through such a crystal, making it stand out when viewed in between two crossed polarizers, as seen in the photograph, above. Likewise, in chemistry, rotation of polarization axes in a liquid solution can be a useful measurement.
In a liquid, linear birefringence is impossible; however, there may be circular birefringence when a chiral molecule is in solution. When the right- and left-handed enantiomers of such a molecule are present in equal numbers (a so-called racemic mixture), their effects cancel out. However, when there is only one (or a preponderance of one), as is more often the case for organic molecules, a net circular birefringence (or optical activity) is observed, revealing the magnitude of that imbalance (or the concentration of the molecule itself, when it can be assumed that only one enantiomer is present). This is measured using a polarimeter, in which polarized light is passed through a tube of the liquid, at the end of which is another polarizer which is rotated in order to null the transmission of light through it. In many areas of astronomy, the study of polarized electromagnetic radiation from outer space is of great importance. Although not usually a factor in the thermal radiation of stars, polarization is also present in radiation from coherent astronomical sources (e.g. hydroxyl or methanol masers), and incoherent sources such as the large radio lobes in active galaxies, and pulsar radio radiation (which may, it is speculated, sometimes be coherent), and is also imposed upon starlight by scattering from interstellar dust. Apart from providing information on sources of radiation and scattering, polarization also probes the interstellar magnetic field via Faraday rotation. The polarization of the cosmic microwave background is being used to study the physics of the very early universe. Synchrotron radiation is inherently polarised. It has been suggested that astronomical sources caused the chirality of biological molecules on Earth.

Applications and examples

Unpolarized light, after reflection at a specular (shiny) surface, generally obtains a degree of polarization. This phenomenon was observed in 1808 by the mathematician Étienne-Louis Malus, after whom Malus's law is named. Polarizing sunglasses exploit this effect to reduce glare from reflections by horizontal surfaces, notably the road ahead viewed at a grazing angle. Wearers of polarized sunglasses will occasionally observe inadvertent polarization effects such as color-dependent birefringent effects, for example in toughened glass (e.g., car windows) or items made from transparent plastics, in conjunction with natural polarization by reflection or scattering. The polarized light from LCD monitors (see below) is very conspicuous when these are worn.

Sky polarization and photography

Polarization is observed in the light of the sky, as this is due to sunlight scattered by aerosols as it passes through the earth's atmosphere. The scattered light produces the brightness and color in clear skies. This partial polarization of scattered light can be used to darken the sky in photographs, increasing the contrast. This effect is most strongly observed at points on the sky making a 90° angle to the sun. Polarizing filters use these effects to optimize the results of photographing scenes in which reflection or scattering by the sky is involved. Sky polarization has been used for orientation in navigation. The Pfund sky compass was used in the 1950s when navigating near the poles of the Earth's magnetic field when neither the sun nor stars were visible (e.g., under daytime cloud or twilight).
It has been suggested, controversially, that the Vikings exploited a similar device (the "sunstone") in their extensive expeditions across the North Atlantic in the 9th–11th centuries, before the arrival of the magnetic compass from Asia to Europe in the 12th century. Related to the sky compass is the "polar clock", invented by Charles Wheatstone in the late 19th century. The principle of liquid-crystal display (LCD) technology relies on the rotation of the axis of linear polarization by the liquid crystal array. Light from the backlight (or the back reflective layer, in devices not including or requiring a backlight) first passes through a linear polarizing sheet. That polarized light passes through the actual liquid crystal layer, which may be organized in pixels (for a TV or computer monitor) or in another format such as a seven-segment display or one with custom symbols for a particular product. The liquid crystal layer is produced with a consistent right (or left) handed chirality, essentially consisting of tiny helices. This causes circular birefringence, and is engineered so that there is a 90 degree rotation of the linear polarization state. However, when a voltage is applied across a cell, the molecules straighten out, lessening or totally losing the circular birefringence. On the viewing side of the display is another linear polarizing sheet, usually oriented at 90 degrees from the one behind the active layer. Therefore, when the circular birefringence is removed by the application of a sufficient voltage, the polarization of the transmitted light remains at right angles to the front polarizer, and the pixel appears dark. With no voltage, however, the 90 degree rotation of the polarization causes it to exactly match the axis of the front polarizer, allowing the light through. Intermediate voltages create intermediate rotation of the polarization axis, and the pixel has an intermediate intensity. Displays based on this principle are widespread, and now are used in the vast majority of televisions, computer monitors and video projectors, rendering the previous CRT technology essentially obsolete. The use of polarization in the operation of LCD displays is immediately apparent to someone wearing polarized sunglasses, often making the display unreadable. In a totally different sense, polarization encoding has become the leading (but not sole) method for delivering separate images to the left and right eye in stereoscopic displays used for 3D movies. This involves separate images intended for each eye either projected from two different projectors with orthogonally oriented polarizing filters or, more typically, from a single projector with time multiplexed polarization (a fast alternating polarization device for successive frames). Polarized 3D glasses with suitable polarizing filters ensure that each eye receives only the intended image. Historically such systems used linear polarization encoding because it was inexpensive and offered good separation. However, circular polarization makes separation of the two images insensitive to tilting of the head, and is widely used in 3-D movie exhibition today, such as the system from RealD. Projecting such images requires screens that maintain the polarization of the projected light when viewed in reflection (such as silver screens); a normal diffuse white projection screen causes depolarization of the projected images, making it unsuitable for this application.
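The intensity cost of each polarizer in such systems follows Malus's law (named above for Étienne-Louis Malus): an ideal analyzer rotated by θ from the light's polarization axis transmits a fraction cos²θ of the intensity. The short Python sketch below is offered purely as a numerical illustration of that formula:

```python
import numpy as np

def malus(i0, theta):
    """Transmitted intensity of already-polarized light through an ideal
    analyzer whose axis is rotated by theta (radians) from the light's
    polarization axis: I = I0 * cos(theta)**2."""
    return i0 * np.cos(theta) ** 2

for deg in (0, 30, 45, 60, 90):
    print(deg, malus(1.0, np.radians(deg)))   # 1.0, 0.75, 0.5, 0.25, 0.0

# Unpolarized input averages cos^2 over all orientations, transmitting half:
print(np.mean(malus(1.0, np.linspace(0, np.pi, 1001))))  # ~0.5
```

The final line also recovers the statement made earlier that an ideal polarizer passes half the power of unpolarized light, since cos²θ averages to 1/2 over all orientations.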
Although now obsolete, CRT computer displays suffered from reflection by the glass envelope, causing glare from room lights and consequently poor contrast. Several anti-reflection solutions were employed to ameliorate this problem. One solution utilized the principle of reflection of circularly polarized light. A circular polarizing filter in front of the screen allows for the transmission of (say) only right circularly polarized room light. Now, right circularly polarized light (depending on the convention used) has its electric (and magnetic) field direction rotating clockwise while propagating in the +z direction. Upon reflection, the field still has the same direction of rotation, but now propagation is in the −z direction, making the reflected wave left circularly polarized. With the right circular polarization filter placed in front of the reflecting glass, the unwanted light reflected from the glass will thus be in the very polarization state that is blocked by that filter, eliminating the reflection problem. The reversal of circular polarization on reflection and elimination of reflections in this manner can be easily observed by looking in a mirror while wearing 3-D movie glasses which employ left- and right-handed circular polarization in the two lenses. Closing one eye, the other eye will see a reflection in which it cannot see itself; that lens appears black. However, the other lens (of the closed eye) will have the correct circular polarization, allowing the closed eye to be easily seen by the open one.

Radio transmission and reception

All radio (and microwave) antennas used for transmitting or receiving are intrinsically polarized. They transmit in (or receive signals from) a particular polarization, being totally insensitive to the opposite polarization; in certain cases that polarization is a function of direction. Most antennas are nominally linearly polarized, but elliptical and circular polarization is a possibility. As is the convention in optics, the "polarization" of a radio wave is understood to refer to the polarization of its electric field, with the magnetic field being at a 90 degree rotation with respect to it for a linearly polarized wave. The vast majority of antennas are linearly polarized. In fact it can be shown from considerations of symmetry that an antenna that lies entirely in a plane which also includes the observer can only have its polarization in the direction of that plane. This applies to many cases, allowing one to easily infer such an antenna's polarization at an intended direction of propagation. So a typical rooftop Yagi or log-periodic antenna with horizontal conductors, as viewed from a second station toward the horizon, is necessarily horizontally polarized. But a vertical "whip antenna" or AM broadcast tower used as an antenna element (again, for observers horizontally displaced from it) will transmit in the vertical polarization. A turnstile antenna with its four arms in the horizontal plane likewise transmits horizontally polarized radiation toward the horizon. However, when that same turnstile antenna is used in the "axial mode" (upwards, for the same horizontally-oriented structure), its radiation is circularly polarized. At intermediate elevations it is elliptically polarized.
Polarization is important in radio communications because, for instance, if one attempts to use a horizontally polarized antenna to receive a vertically polarized transmission, the signal strength will be substantially reduced (or under very controlled conditions, reduced to nothing). This principle is used in satellite television in order to double the channel capacity over a fixed frequency band. The same frequency channel can be used for two signals broadcast in opposite polarizations. By adjusting the receiving antenna for one or the other polarization, either signal can be selected without interference from the other. Especially due to the presence of the ground, there are some differences in propagation (and also in reflections responsible for TV ghosting) between horizontal and vertical polarizations. AM and FM broadcast radio usually use vertical polarization, while television uses horizontal polarization. At low frequencies especially, horizontal polarization is avoided. That is because the phase of a horizontally polarized wave is reversed upon reflection by the ground. A distant station in the horizontal direction will receive both the direct and reflected wave, which thus tend to cancel each other. This problem is avoided with vertical polarization. Polarization is also important in the transmission of radar pulses and reception of radar reflections by the same or a different antenna. For instance, back scattering of radar pulses by rain drops can be avoided by using circular polarization. Just as specular reflection of circularly polarized light reverses the handedness of the polarization, as discussed above, the same principle applies to scattering by objects much smaller than a wavelength, such as rain drops. On the other hand, reflection of that wave by an irregular metal object (such as an airplane) will typically introduce a change in polarization and (partial) reception of the return wave by the same antenna. The effect of free electrons in the ionosphere, in conjunction with the earth's magnetic field, causes Faraday rotation, a sort of circular birefringence. This is the same mechanism which can rotate the axis of linear polarization by electrons in interstellar space, as mentioned above. The magnitude of Faraday rotation caused by such a plasma is greatly exaggerated at lower frequencies, so at the higher microwave frequencies used by satellites the effect is minimal. However, medium or short wave transmissions received following refraction by the ionosphere are strongly affected. Since a wave's path through the ionosphere and the earth's magnetic field vector along such a path are rather unpredictable, a wave transmitted with vertical (or horizontal) polarization will generally have a resulting polarization in an arbitrary orientation at the receiver.

Polarization and vision

Many animals are capable of perceiving some of the components of the polarization of light, e.g., linear horizontally polarized light. This is generally used for navigational purposes, since the linear polarization of sky light is always perpendicular to the direction of the sun. This ability is very common among the insects, including bees, which use this information to orient their communicative dances. Polarization sensitivity has also been observed in species of octopus, squid, cuttlefish, and mantis shrimp. In the latter case, one species measures all six orthogonal components of polarization, and is believed to have optimal polarization vision.
The rapidly changing, vividly colored skin patterns of cuttlefish, used for communication, also incorporate polarization patterns, and mantis shrimp are known to have polarization-selective reflective tissue. Sky polarization was thought to be perceived by pigeons, which was assumed to be one of their aids in homing, but research indicates this is a popular myth. The naked human eye is weakly sensitive to polarization, without the need for intervening filters. Polarized light creates a very faint pattern near the center of the visual field, called Haidinger's brush. This pattern is very difficult to see, but with practice one can learn to detect polarized light with the naked eye.

Angular momentum using circular polarization

It is well known that electromagnetic radiation carries a certain linear momentum in the direction of propagation. In addition, however, light carries a certain angular momentum if it is circularly polarized (or partially so). In comparison with lower frequencies such as microwaves, the amount of angular momentum in light, even of pure circular polarization, is very small compared to the same wave's linear momentum (or radiation pressure) and is difficult even to measure. However, it was utilized in an experiment to achieve speeds of up to 600 million revolutions per minute.

Notes and references

- Principles of Optics, 7th edition, M. Born & E. Wolf, Cambridge University, 1999, ISBN 0-521-64222-1.
- Fundamentals of Polarized Light: A Statistical Optics Approach, C. Brosseau, Wiley, 1998, ISBN 0-471-14302-2.
- Polarized Light, second edition, Dennis Goldstein, Marcel Dekker, 2003, ISBN 0-8247-4053-X.
- Field Guide to Polarization, Edward Collett, SPIE Field Guides vol. FG05, SPIE, 2005, ISBN 0-8194-5868-6.
- Polarization Optics in Telecommunications, Jay N. Damask, Springer, 2004, ISBN 0-387-22493-9.
- Polarized Light in Nature, G. P. Können, translated by G. A. Beerling, Cambridge University, 1985, ISBN 0-521-25862-6.
- Polarised Light in Science and Nature, D. Pye, Institute of Physics, 2001, ISBN 0-7503-0673-4.
- Polarized Light, Production and Use, William A. Shurcliff, Harvard University, 1962.
- Ellipsometry and Polarized Light, R. M. A. Azzam and N. M. Bashara, North-Holland, 1977, ISBN 0-444-87016-4.
- Secrets of the Viking Navigators—How the Vikings Used Their Amazing Sunstones and Other Techniques to Cross the Open Oceans, Leif Karlsen, One Earth Press, 2003.
- Shipman, James; Wilson, Jerry D.; Higgins, Charles A. (2015). An Introduction to Physical Science, 14th ed. Cengage Learning. p. 187. ISBN 1305544676.
- Muncaster, Roger (1993). A-level Physics. Nelson Thornes. pp. 465–467. ISBN 0748715843.
- Singh, Devraj (2015). Fundamentals of Optics, 2nd ed. PHI Learning Pvt. Ltd. p. 453. ISBN 8120351460.
- Avadhanulu, M. N. (1992). A Textbook of Engineering Physics. S. Chand Publishing. pp. 198–199. ISBN 8121908175.
- Desmarais, Louis (1997). Applied Electro Optics. Pearson Education. pp. 162–163. ISBN 0132441829.
- Le Tiec, A.; Novak, J. (July 2016). "Theory of Gravitational Waves". doi:10.1142/9789813141766_0001.
- Griffiths, David J. (1998). Introduction to Electrodynamics (3rd ed.). Prentice Hall. ISBN 0-13-805326-X.
- Geoffrey New (7 April 2011). Introduction to Nonlinear Optics. Cambridge University Press. ISBN 978-1-139-50076-0.
- Dorn, R.; Quabis, S. & Leuchs, G. (Dec 2003). "Sharper Focus for a Radially Polarized Light Beam". Physical Review Letters. 91 (23): 233901. Bibcode:2003PhRvL..91w3901D. doi:10.1103/PhysRevLett.91.233901. PMID 14683185.
- Subrahmanyan Chandrasekhar (1960). Radiative Transfer. p. 27.
- M. A. Sletten and D. J. McLaughlin, "Radar polarimetry", in K. Chang (ed.), Encyclopedia of RF and Microwave Engineering, John Wiley & Sons, 2005, ISBN 978-0-471-27053-9, 5832 pp.
- Merrill Ivan Skolnik (1990). Radar Handbook. Fig. 6.52, sec. 6.60.
- Hamish Meikle (2001). Modern Radar Systems. Eq. 5.83.
- T. Koryu Ishii (ed.) (1995). Handbook of Microwave Technology. Volume 2, Applications. p. 177.
- John Volakis (ed.) (2007). Antenna Engineering Handbook, Fourth Edition. Sec. 26.1. Note: in contrast with other authors, this source initially defines ellipticity reciprocally, as the minor-to-major-axis ratio, but then goes on to say that "Although [it] is less than unity, when expressing ellipticity in decibels, the minus sign is frequently omitted for convenience", which essentially reverts back to the definition adopted by other authors.
- Chandrasekhar, Subrahmanyan (2013). Radiative Transfer. Courier. p. 30.
- Hecht, Eugene (2002). Optics (4th ed.). United States of America: Addison Wesley. ISBN 0-8053-8566-5.
- Edward L. O'Neill (January 2004). Introduction to Statistical Optics. Courier Dover Publications. ISBN 978-0-486-43578-7.
- Dennis Goldstein; Dennis H. Goldstein (3 January 2011). Polarized Light, Revised and Expanded. CRC Press. ISBN 978-0-203-91158-7.
- Masud Mansuripur (2009). Classical Optics and Its Applications. Cambridge University Press. ISBN 978-0521881692.
- Randy O. Wayne (16 December 2013). Light and Video Microscopy. Academic Press. ISBN 978-0-12-411536-1.
- Peter M. Shearer (2009). Introduction to Seismology. Cambridge University Press. ISBN 978-0-521-88210-1.
- Seth Stein; Michael Wysession (1 April 2009). An Introduction to Seismology, Earthquakes, and Earth Structure. John Wiley & Sons. ISBN 978-1-4443-1131-0.
- K. Peter C. Vollhardt; Neil E. Schore (2003). Organic Chemistry, Fourth Edition: Structure and Function. W. H. Freeman. ISBN 978-0-7167-4374-3.
- Vlemmings, W. H. T. (Mar 2007). "A review of maser polarization and magnetic fields". Proceedings of the International Astronomical Union. 3 (S242): 37–46.
- Hannu Karttunen; Pekka Kröger; Heikki Oja (27 June 2007). Fundamental Astronomy. Springer. ISBN 978-3-540-34143-7.
- Boyle, Latham A.; Steinhardt, P. J.; Turok, N. (2006). "Inflationary predictions for scalar and tensor fluctuations reconsidered". Physical Review Letters. 96 (11): 111301. Bibcode:2006PhRvL..96k1301B. doi:10.1103/PhysRevLett.96.111301. PMID 16605810.
- Tegmark, Max (2005). "What does inflation really predict?". JCAP. 0504 (4): 001. Bibcode:2005JCAP...04..001T. doi:10.1088/1475-7516/2005/04/001.
- Clark, S. (1999). "Polarised starlight and the handedness of Life". American Scientist. 87 (4): 336–43. Bibcode:1999AmSci..87..336C. doi:10.1511/1999.4.336.
- Bekefi, George; Barrett, Alan (1977). Electromagnetic Vibrations, Waves, and Radiation. USA: MIT Press. ISBN 0-262-52047-8.
- J. David Pye (13 February 2001). Polarised Light in Science and Nature. CRC Press. ISBN 978-0-7503-0673-7.
- Sonja Kleinlogel; Andrew White (2008). "The secret world of shrimps: polarisation vision at its best". PLoS ONE. 3 (5): e2190. Bibcode:2008PLoSO...3.2190K. PMID 18478095.
- "No evidence for polarization sensitivity in the pigeon electroretinogram", J. J. Vos Hzn, M. A. J. M. Coemans & J. F. W. Nuboer, The Journal of Experimental Biology, 1995.
- "University of St Andrews scientists create 'fastest man-made spinning object'" - "Laser-induced rotation and cooling of a trapped microgyroscope in vacuum", Research @ St. Andrews - Polarized Light in Nature and Technology - Polarized Light Digital Image Gallery: Microscopic images made using polarization effects - Polarization by the University of Colorado Physics 2000: Animated explanation of polarization - MathPages: The relationship between photon spin and polarization - A virtual polarization microscope - Polarization angle in satellite dishes. - Using polarizers in photography - Molecular Expressions: Science, Optics and You — Polarization of Light: Interactive Java tutorial - Electromagnetic waves and circular dichroism: an animated tutorial - SPIE technical group on polarization - A Java simulation on using polarizers - Antenna Polarization - Animations of Linear, Circular and Elliptical Polarizations on YouTube
numeration, in mathematics, process of designating numbers according to any particular system; the number designations are in turn called numerals. In any place value system of numeration, a base number must be specified, and groupings are then made by powers of the base number. The position of a numeral in a grouping indicates which power of the base it is to be multiplied by. The most widely used system of numeration is the decimal system, which uses base 10. Thus, in the decimal system, the numeral 342 means (3×10^2)+(4×10^1)+(2×10^0), or 300+40+2. The binary system uses base 2 and is important because of its application to modern computers. Whereas the decimal system uses the ten digits 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, the binary system uses only the two digits 0 and 1. In the binary system, the numeral 111, for example, means (1×2^2)+(1×2^1)+(1×2^0), i.e., 4+2+1, or 7, in the decimal system. The decimal numeral 7 and the binary numeral 111 are thus designations for the same number. The duodecimal system uses 12 as a base and has some advantages arising from the fact that 12 is divisible by four different numbers—2, 3, 4, 6—other than 1 and 12 itself. The base 12 requires the use of 12 different digits. Thus, in addition to the digits 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, the symbols X (called "dek") and E (called "el") to represent the numbers 10 and 11 have been suggested by the Duodecimal Society of America. The duodecimal numeral 24E, for example, means (2×12^2)+(4×12^1)+(11×12^0), i.e., (2×144)+(4×12)+(11×1), or 347, in the decimal system. The hexadecimal system, or base 16, uses the digits 0 through 9 and the letters A through F (or a through f) to represent 16 different digits. Hexadecimal numeration is often used in computing because it more readily translates the binary system used by computers than decimal numeration does. A computer byte, which is composed of 8 bits (binary digits), must be represented by the numbers 0 through 255 in the decimal system, but in the hexadecimal system it is represented by 00 through FF. The decimal, binary, duodecimal, and hexadecimal systems of numeration constitute only four examples. The ancient Babylonians used a system of base 60, which still survives in our smaller divisions both of time and of angle, i.e., minutes and seconds. In general, any integer n greater than one can be used as the base of a numeration system, and the system will employ n different digits.
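These place-value expansions translate directly into a repeated-division algorithm. The short Python sketch below is an illustration; extending the decimal digits with the Duodecimal Society's X and E is an assumption matching the convention described above.

```python
def to_base(n, base, digits="0123456789XE"):
    """Represent a non-negative integer n in the given base by repeatedly
    dividing by the base and collecting remainders (least significant first)."""
    if n == 0:
        return digits[0]
    out = []
    while n:
        n, r = divmod(n, base)
        out.append(digits[r])
    return "".join(reversed(out))

print(to_base(7, 2))     # '111' -> (1*4)+(1*2)+(1*1)
print(to_base(347, 12))  # '24E' -> (2*144)+(4*12)+(11*1)

# Converting back: map X/E to the a/b digits Python expects for base 12.
print(int("24E".replace("X", "A").replace("E", "B"), 12))  # 347
```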
Saturn’s moon Enceladus is a top candidate in our solar system’s search for extraterrestrial life. However, it remains a mystery whether microbial alien life might inhabit Enceladus. Enceladus appeared to NASA’s Voyager 1 spacecraft as a small, unremarkable “snowball” in the sky when it was first observed in 1980. Later, from 2005 to 2017, NASA’s Cassini probe flew through the Saturnian system and performed unprecedented in-depth research on Saturn’s intricate rings and moons. The discovery by Cassini that Enceladus’ thick coating of ice conceals a large, warm saltwater ocean emitting methane, a gas that generally comes from microbial life on Earth, astounded scientists. A new study by University of Arizona researchers suggests that the mystery of whether microbial alien life might inhabit Enceladus could be solved by an orbiting space probe. The researchers outlined how a hypothetical space mission could deliver conclusive answers. A group of researchers from the University of Arizona and Paris’ Université Paris Sciences et Lettres last year concluded that there is a good chance that Enceladus has life and that this life may be the cause of the moon’s methane emissions. Régis Ferrière, senior author of the new paper and associate professor in the UArizona Department of Ecology and Evolutionary Biology, said, “To know if that is the case, we must go back to Enceladus and look.” According to the most recent analysis, even if the overall mass of possible living bacteria in the ocean of Enceladus were minimal, a visit from an orbiting spacecraft would be all that is required to determine definitively whether Earthlike microbes are present in Enceladus’ water below its shell. Ferrière said, “Clearly, sending a robot crawling through ice cracks and deep-diving down to the seafloor would not be easy. More realistic missions have been designed using upgraded instruments to sample the plumes like Cassini did, or even land on the moon’s surface.” “By simulating the data that a more prepared and advanced orbiting spacecraft would gather from just the plumes alone, our team has now shown that this approach would be enough to confidently determine whether or not there is life within Enceladus’ ocean without actually having to probe the depths of the moon. This is a thrilling perspective.” Enceladus, roughly 800 million miles from Earth, orbits Saturn every 33 hours. The moon reflects more of the sunlight that strikes it than any other body in the solar system, even though it isn’t even as broad as the state of Arizona. The moon’s surface makes it stand out in the sky like a frozen pond in the sunlight. At least 100 enormous water plumes shoot out of the frozen surface of the moon’s south pole, resembling lava from a raging volcano. One of Saturn’s famous rings is thought to be a result of the water vapor and ice particles spewed by these geyser-like features, according to scientists. The Cassini mission took a sample of this ejected combination, which contains gases and other particles from deep within Enceladus’ ocean. The excess methane Cassini found in the plumes brings to mind hydrothermal vents, unique ecosystems found in the dark interiors of Earth’s oceans. Here, heated magma beneath the seafloor heats the ocean water in porous bedrock at the boundaries of two nearby tectonic plates, creating “white smokers,” vents that spout searing hot, mineral-rich saltwater.
Because they cannot access sunlight, organisms must survive using the energy contained in the chemical substances that white smokers release into the environment. Ferrière said, “On our planet, hydrothermal vents teem with life, big and small, despite the darkness and insane pressure. The simplest living creatures are microbes called methanogens that power themselves even in the absence of sunlight.” “Methanogens convert dihydrogen and carbon dioxide to gain energy, releasing methane as a byproduct.” Ferrière’s research group modeled its calculations on the hypothesis that Enceladus has methanogens that inhabit oceanic hydrothermal vents resembling the ones found on Earth. In this way, the researchers calculated what the total mass of methanogens on Enceladus would be, as well as the likelihood that their cells and other organic molecules could be ejected through the plumes. The paper’s first author, Antonin Affholder, a postdoctoral research associate at UArizona who was at Paris Sciences & Lettres when doing this research, said, “We were surprised to find that the hypothetical abundance of cells would only amount to the biomass of one single whale in Enceladus’ global ocean. Enceladus’ biosphere may be very sparse. And yet our models indicate that it would be productive enough to feed the plumes with just enough organic molecules or cells to be picked up by instruments onboard a future spacecraft.” “Our research shows that if a biosphere is present in Enceladus’ ocean, signs of its existence could be picked up in plume material without the need to land or drill, but such a mission would require an orbiter to fly through the plume multiple times to collect lots of oceanic material.” “The possibility that actual cells could be found might be slim, because they would have to survive the outgassing process carrying them through the plumes from the deep ocean to the vacuum of space – quite a journey for a tiny cell.” Instead, the authors suggest that detected organic molecules, such as particular amino acids, would serve as indirect evidence for or against an environment abounding with life. “Considering that, according to the calculations, any life present on Enceladus would be extremely sparse, there still is a good chance that we’ll never find enough organic molecules in the plumes to conclude that it is there unambiguously,” Ferrière said. “So, rather than focusing on the question of how much is enough to prove that life is there, we asked, ‘What is the maximum amount of organic material that could be present in the absence of life?’” The authors said, “If all measurements were to come back above a certain threshold, it could signal that life is a serious possibility.” “The definitive evidence of living cells caught on an alien world may remain elusive for generations. Until then, the fact that we can’t rule out life’s existence on Enceladus is probably the best we can do.”
In practice, an ideal voltage source cannot be obtained. If there is a huge resistance, then there will be a tiny amount of current. Let's start with a voltage source. This means they can generate energy as well as absorb it, whereas passive elements can either only absorb energy (resistors) or store/release energy (capacitors and inductors). In this situation, we form a supernode by combining the two nodes. Or it could be that you design with them every day. The application of the node-voltage method involves expressing the branch currents in terms of one or more node voltages and applying KCL at each of the nodes. They can be found in any textbook on network theory. Why does an ideal current source have infinite resistance? It is easy to verify in (a) that V = V1 − V2 by applying KVL. Estimating the number of dc operating points or even their upper bounds for an arbitrary nonlinear circuit is still an open problem (Lagarias and Trajković, 1999). The properties of this general biquad circuit are much more easily discerned if certain simplifications are introduced. The two groups of capacitors that are to be scaled together are listed below: Note that capacitors in each group are distinguished by the fact that they are all incident on the same input node of one of the operational amplifiers. When working with dependent power sources, you need to look for the dependent element, or the element that the power source is dependent upon. This can cause some interesting and potentially dangerous situations. For each of the cases, a "simple" solution is also offered. In the feedback paths, capacitor E and switched capacitor F provide two means for damping the transfer function poles. For instance, we can consider both currents as independent variables, which will make the voltages dependent variables. As indicated in Fig. 1.12d, physical realizations for … One is an ideal voltage source, and the other is an ideal current source. Inserting into the second equation of the set (4.5), we obtain this equation (see Figure 4.6). If you have a DC source, it's a matter of preference which symbol you use, but we typically use the circle with the plus/minus with every voltage source just to be consistent. If the voltage across an ideal voltage source can be specified independently of any other variable in a circuit, it is called an independent voltage source. Independent sources are those which do not depend on any other quantity in the circuit. A nonideal voltage source is modeled by an ideal independent voltage source in series with a (normally small) impedance. We must be careful when we make this simplification because we lose direct information about V0. The independent voltage source and current source can deliver power into a suitable load, such as a resistor. This approach works for all circuit problems, but as the circuit complexity increases, it becomes more difficult to solve problems. The voltage-controlled voltage source, VV, has its output controlled by the input voltage: V2 = μV1, where μ is a dimensionless constant.
Along with these concepts came some definitions which we will continue to use throughout the text. A voltage source is a two-terminal device whose voltage at any instant of time is constant and is independent of the current drawn from it. The symbol used to indicate a voltage source delivering a voltage Vs(t) is shown in Fig. 1.12a. Notice that this circuit has three essential nodes and a dependent current source. The correctness of this procedure follows directly from signal-flow graph concepts. 1) Dependent source: A dependent source is one whose value depends on some other variable in the circuit. The voltage or current value is proportional to some other voltage or current in the circuit. The circle with the sinusoid in it means that it is an AC power source, but it could also have a DC offset. For the following circuit, find V3 using the node-voltage method. On the other hand, when RL is infinite, i.e., an open circuit, the load voltage is νL = νoc = isRi. If one of the branches located between an essential node and the reference node contains an independent or dependent voltage source, we do not write a node equation for this node because the node voltage is known. In a dependent (or controlled) voltage source, the voltage across the source depends upon the voltage or current across some other element in the network. Notice that the system matrix is no longer symmetrical because of the dependent current source, and two of the three nodes have a current source, giving rise to a nonzero term on the right-hand side of the matrix equation. When writing the node-voltage equation for node 1, the current IA is written as IA = (V1 − 5)/R. - A 5 V ideal independent voltage source and its I-V characteristic. Figure 1: An ideal current source, I, driving a resistor, R, and creating a voltage V. A current source is an electronic circuit that delivers or absorbs an electric current which is independent of the voltage across it. It's just a mathematical representation. This reduces the number of independent node equations by one and the amount of work in solving for the node voltages. The z-domain validity of the equivalencies relies on terminals 1 and 2 being connected to a voltage source (independent voltage source or op-amp output) and virtual ground, respectively.* Furthermore, if the input to a cascade of such SC biquads is presented with a full-clock-period sampled-and-held signal, the switch timing in the biquads will propagate this condition through the entire filter. If we vary RL and plot the νL-iL graph, we obtain Fig. 1.11b. This completes the design process for synthesizing practical SC-biquad networks.
An ideal voltage source will produce or absorb any current needed to maintain the rated voltage. Such a voltage source is called an Ideal Voltage Source and has zero internal resistance. We attach a voltage source V1 on the left and nothing on the right. They can produce infinite current and infinite voltage no matter the load, and they provide and absorb power equally well. The reference node is usually the one with the most branches connected to it and is denoted with the ground symbol. There are two principal types of source, namely the voltage source and the current source. This results in the following equations: The "hats" are placed on the F-circuit elements to distinguish them from the E-circuit elements. A battery is a physical realization of an independent voltage source. Although the voltage levels and necessary scaling factors may be obtained by using analysis techniques,* the simplest procedure is to simulate the unscaled circuit on an analysis program. The resulting circuit is shown in Figure 4.32b. Sources can be either independent or dependent upon some other quantities. To adjust the voltage level V′, i.e., the flat gain of H′, without affecting H, only capacitors A and D need to be scaled. There exist four elementary two-ports, shown in Figure 4.4. They're mostly used to model things like transistors, op-amps, or specific ICs, so you'll need them at least a couple of times in academia, if nothing else, so let's learn a bit about them. For example, a current source could be dependent on a voltage while a voltage source could be dependent on a current. We can express the binary number as b2b1b0. Symbols for the independent voltage (E) and independent current (J) source. As shown in Fig. 1.12d, the current is simply circulates through Ri. The main concepts and definitions are summarized below. The right-hand symbol depicts an independent current source. The direct source is further classified into independent voltage and current sources and dependent voltage and current sources. The symbol of an independent source is generally represented by a circle. The plus sign is on top and the minus sign is at the bottom. Let A be the incidence matrix of N with vertex vr as reference. We use the letter E for the independent voltage source and the letter J for the independent current source to distinguish them from voltages and currents anywhere inside the network. An independent voltage source maintains a specified voltage across its terminals. They are the most simplified forms of amplifiers. Finding Z Parameters for the Network (Fig. 49).
The common voltage-source symbols can be summarized as follows:
- AC voltage source (generator): an electrical voltage generated by mechanical rotation of the generator.
- Battery cell: generates a constant voltage.
- Battery: generates a constant voltage.
- Controlled voltage source: generates a voltage as a function of a voltage or current of another circuit element.
Resistance of the resistor is measured in ohms. Sometimes there's the desire to figure out the equivalent resistance of a power source if you know the current through and voltage across it. This, of course, is again an ideal source, nonexistent in the real world, as it appears to supply infinite power. By superposition, the total response is the sum of the contributions of each independent source acting alone. However, a real-world voltage source cannot supply unlimited current. Consequently, during the initial design of a biquad, it is convenient to assign K = L = 0. The last such two-port is a current-controlled current source, CC, defined by I2 = αI1, where α is a dimensionless constant. Any number of elements can be variously connected; if we consider such a network with an input port and an output port, we will have a general two-port. This circuit is widely used throughout the industry. We label the essential nodes as 1, 2, and 3 in the redrawn circuit, with the reference node at the bottom of the circuit and three node voltages, V1, V2, and V3, as indicated. We put a voltage indication right inside there, and that's called V. And this is a constant voltage; what I've shown here is a constant voltage. The first order of business is to adjust the voltage level at the "secondary" output. Also note that they're not always dependent on the same type of quantity as the one they generate. The supernode technique requires only one node equation, in which the current, IA, is passed through the source and written in terms of currents leaving node 2. They complicate circuit analysis, but they shouldn't be too scary, as they simply replace one bit of math with another. When I was in circuits, there was a gentleman who had been a technician for a few decades, and he complained that he had never seen current sources in real life, that they were stupid, and that there was no point in learning about them. Despite his real-life experience, he, of course, was wrong. Although not necessary in every special-case implementation of Fig. 49, the condition is readily arranged. Thus, in practice either E or F is used, but not both. This makes it easier to observe the maximum capacitor ratios required to realize a given circuit and also serves to "standardize" different designs so that the total capacitance required can be readily observed. Ohm's law written in terms of node voltages. The equivalent current source is the short-circuit current I^sc, while the source admittance is equal to the input admittance Yin. To complete the synthesis in practice, some scaling is required. It is very important to know these directions. Sometimes this will force you down one path of circuit analysis, but as long as you're aware of that fact and proficient at the different types of analysis, it should be straightforward. Therefore, another application of the voltage divider formula results in V0 = b1/6. Obtaining Input Impedance of a Two Port Loaded by a Resistor.
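The caption above refers to loading a two-port. Using the impedance description of equation set (4.5), V1 = z11I1 + z12I2 and V2 = z21I1 + z22I2, terminating port 2 in a resistor forces V2 = −R·I2; eliminating I2 then gives the input impedance. A minimal Python sketch follows, with hypothetical z-parameter values chosen purely for illustration:

```python
def input_impedance(z11, z12, z21, z22, r_load):
    """Input impedance of a two-port described by impedance (z) parameters
    when port 2 is terminated in r_load.

    From V1 = z11*I1 + z12*I2 and V2 = z21*I1 + z22*I2 with V2 = -r_load*I2,
    eliminating I2 gives Zin = z11 - z12*z21 / (z22 + r_load).
    """
    return z11 - z12 * z21 / (z22 + r_load)

# Hypothetical values in ohms, purely for illustration:
print(input_impedance(z11=100.0, z12=10.0, z21=50.0, z22=25.0, r_load=75.0))
# -> 95.0: the loaded port pulls the input impedance below z11
```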
Hence, in Table 5 a complete set of design equations is given for each case. In many situations, we separate the sources from the circuit and refer to them as excitations to the circuit. The term constant-current sink is sometimes used for sources fed from a negative voltage supply. Replacing I2 in the first equation by this result gives the input relationship of the loaded two-port. We will conclude this section by introducing two more two-ports, namely the ideal and the technical transformer. Using equation (4.26), we see that the sought relationship is given by V0 = b2·R/(R + 2R) = b2/3.

Depending on the actual direction of the current through the source, the voltage source can either provide power or absorb it. The circle symbol can represent any independent voltage source, whether AC or DC or both; often it may be replaced by the symbol of a battery if the source is in fact a battery. In the basic circuit tutorials up to this point, we have generally represented a voltage potential by just assigning one node one potential and another node a different potential.

The synthesis equations for the biquad can be readily derived from Eqs. 123 and 124 with the numerator forms in Table 4. Figure 1.11b shows that as we decrease RL, the load voltage vL decreases and drops to zero for RL = 0, at which point the current through the load resistor, which is now a short circuit, becomes iL = isc = is (Figure 1.12). In a similar fashion, it can be shown how the flat gain associated with V may be modified. Once satisfactory gain levels have been obtained at both outputs, it is convenient to scale the admittances associated with each stage so that the minimum capacitor value in the circuit becomes unity. The simplified equivalent circuit for this case looks just like that in Figure 4.32c if we replace b2 by b1 and V0 by 2V0.

In a PSPICE schematic with a current-controlled voltage source, note that the right side of the device goes where the voltage source appears in the circuit. A coefficient matrix for the node equations can be entered at the MATLAB prompt as

» A = [11 -4 -5; -9 7 7; 0 -1 1];

Forming a supernode reduces the number of independent node equations by one and hence the amount of work in solving for the node voltages. Of the four two-port variables, any two can be selected as independent variables, and the other two will be dependent variables. An ideal voltage source can maintain its fixed voltage independent of the load resistance or the output current; if there is essentially no resistance in the path, that implies a very large current.

Circuits with nonlinear elements may have multiple discrete dc operating points (equilibriums), given an appropriate choice of circuit parameters and biasing of transistors (Trajković and Willson, 1992). We want to convert a binary number into an output voltage (or current). After we connect the two-port and the resistor, as indicated by the dashed line, there is the same voltage, V2, across the second port and across the resistor. Table 5 also lists the zero-placement formulas for HE and HF. Note that a dependent source is represented by a diamond-shaped symbol so as not to confuse it with an independent source. We also indicate the currents I1 and I2 as shown.
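Since the node and supernode equations above end up in matrix form, a short numerical sketch may be useful. It reuses the coefficient matrix entered at the MATLAB prompt in the text; the right-hand-side source vector is assumed here purely for illustration:

# Solving node equations G*v = i with numpy (RHS values assumed).
import numpy as np

G = np.array([[11.0, -4.0, -5.0],
              [-9.0,  7.0,  7.0],
              [ 0.0, -1.0,  1.0]])   # node-equation coefficients from the text
i = np.array([2.0, 0.0, 5.0])        # assumed source terms on the right-hand side

v = np.linalg.solve(G, i)            # node voltages V1, V2, V3
print(v)

Whatever the actual sources, the procedure is the same: assemble the coefficient matrix from KCL (plus any KVL supernode constraints), put the source terms on the right, and solve the linear system.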
The input impedance of a one-port is related to its open-circuit voltage and short-circuit current by Zin = Voc/Isc (Rolf Schaumann, in Reference Data for Engineers, Ninth Edition, 2002). The equivalent resistance of the resistors in both boxes is 2R. Because of space limitations, only the salient properties of this circuit will be highlighted. Specifically, we replace IA with IB + IC + ID in terms of node voltages (Figure 2).

The use of node equations provides a systematic method for solving circuit analysis problems by the application of KCL at each essential node: except for the reference node, we write KCL at each of the N − 1 nodes, expressing the equations as the currents leaving each node. After an initial design is completed, these equivalencies are employed to modify the circuit until an acceptable design is obtained. In PSPICE, the symbol of an independent voltage source is V, and the general form for assigning dc and transient values is V(Name) N+ N− ... A dependent voltage source is located between nodes 1 and 2.

The result from the KCL equations is the set of node equations. The inverse of an impedance is an admittance; for a capacitor YC = sC and for an inductor YL = 1/sL. We covered Kirchhoff's Current Law (KCL) in a previous tutorial, and Kirchhoff's Voltage Law (KVL) is very similar. A current source is the dual of a voltage source, and ideal sources will create however much current or voltage is necessary to produce the desired effect. An independent voltage source maintains a voltage (fixed or varying with time) which is not affected by any other quantity. The I-V diagram of the 5 V source on the right of the figure is the expected vertical line cutting through the voltage axis at 5 V; a real source's characteristic is neither perfectly vertical nor perfectly horizontal but, more likely, somewhere in the middle.

A technical transformer is realized by magnetically coupled coils. First, let us set b1 and b0 to zero and find the dependence of V0 on b2. Real-device behavior of this kind is modeled with dependent power sources. Because the transfer function of switched-capacitor filters depends only on capacitor ratios, one capacitor in each stage may be arbitrarily chosen. To couple the port variables in the most general form, we write two linear equations; in equation 4.5, the coefficients zij have dimensions of impedances, and we speak about the impedance description of the network. Therefore, a practical current source always appears with an internal resistance which parallels the ideal current source, as shown in Fig. 1.12d.

Summing the currents leaving the supernode 2 + 3 gives the first equation; the second supernode equation is KVL through the node voltages and the independent source. The two node and KVL equations are then written in matrix format and solved. Figure 1 shows the schematic symbol for an ideal current source driving a resistor load. Another elementary two-port is the voltage-controlled current source, VC, described by the equation I2 = gV1, where the transconductance g has the dimension of a conductance. The third elementary two-port is the current-controlled voltage source, CV, defined by V2 = rI1, where the transresistance r has the dimension of a resistance.
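The relation Zin = Voc/Isc is all that is needed to build the Thevenin and Norton equivalents of a linear one-port from two terminal measurements. A minimal sketch, with both measurement values assumed for illustration:

# Thevenin/Norton equivalent from open-circuit voltage and short-circuit
# current measurements (values assumed).
v_oc = 10.0          # open-circuit voltage, volts
i_sc = 2.5           # short-circuit current, amps

z_in = v_oc / i_sc   # source impedance, here purely resistive
print(f"Thevenin: {v_oc} V in series with {z_in} ohm")
print(f"Norton:   {i_sc} A in parallel with {z_in} ohm")

The two equivalents describe the same terminal behavior; which one is more convenient depends on whether the surrounding circuit is being analyzed with node (current-based) or mesh (voltage-based) equations.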
Symbols of independent electrical sources: an independent source is drawn as a circle, while a dependent source is generally represented by a four-edged diamond shape. The independent voltage and current sources are active elements. Note the new circuit symbol for an independent voltage source, which includes a battery as a special case by simply specifying that vs = 12 V for a 12 V battery, for example. Nodes 2 and 3 are connected by an independent voltage source, so we form a supernode 2 + 3. The circuit has three essential nodes, two of which are connected by an independent voltage source (Swamy, in Introduction to Biomedical Engineering, Second Edition).

A current source is usually represented by the symbol of a circle with an arrow in the middle denoting the current direction, together with the value or magnitude of the current. In this case, the idea of a DC or AC voltage does not apply, as the current source will produce whatever voltage is necessary to keep a constant current, whether that voltage is positive, negative, or varying. This graph is like the respective one for a voltage source. The interested reader is referred to the references for detailed derivations and demonstrations of individual features.

Consider Figure 9.20a and assume the voltage V2 results from an independent voltage source of 5 V. Since the node voltage is known, we do not write a node-voltage equation for node 2 in this case. There are two types of current source: an independent current source (or sink), which delivers a constant current, and a dependent one. Figure 4.2 shows the symbols for a resistor, capacitor, and inductor. Summing the currents leaving node 1 gives one node equation, and summing the currents leaving node 3 gives another; the three node equations are then written in matrix format. If one of the branches located between an essential node and the reference node contains an independent or dependent voltage source, we do not write a node equation for this node, because the node voltage is known (I.D. Mayergoyz, W. Lawson, in Basic Electric Circuit Theory, 1997).

A voltage source is a two-terminal device which can maintain a fixed voltage. The indicated output vo1 is the contribution of voltage source Vs1; Figure 8 shows the circuit with Vs2 suppressed. Up to this point, we have been talking about independent voltage and current sources; the circle with a plus/minus inside of it is the more generic symbol. The synthesis equations given in the previous paragraphs result in unscaled capacitor values. In the next step, we place the voltage source on the right and proceed similarly. We wish to find the input impedance of the combination. The subscripts used here are for clarification only and are usually not used.

For example, connecting a load resistor RL of infinite resistance (that is, an open circuit) to a current source would produce power p = is²·RL, which is infinite, as by definition the ideal current source will maintain its current is through the open circuit. Many transistor circuits possess the same property (a unique dc operating point) based on their topology alone. A current source is used to power a load, so the load receives a prescribed current regardless of its resistance.
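Superposition is exactly how the R-2R ladder result is assembled: the text finds V0 = b2/3 with only bit b2 active and V0 = b1/6 for bit b1; continuing the halving pattern to b0/12 (an assumption here, following the same derivation) gives the full output. A sketch:

# 3-bit R-2R ladder output by superposition. Bit weights 1/3 and 1/6 follow
# the text; the 1/12 weight for b0 is assumed by the same halving pattern.
def ladder_output(b2, b1, b0, v1=1.0):
    return v1 * (b2 / 3 + b1 / 6 + b0 / 12)

for code in range(8):                              # binary numbers 000..111
    b2, b1, b0 = (code >> 2) & 1, (code >> 1) & 1, code & 1
    print(f"{b2}{b1}{b0} -> V0 = {ladder_output(b2, b1, b0):.4f} V")

The printed outputs step uniformly from 0 up to 7/12 of V1, i.e., V0 = (code/12)·V1, confirming that the ladder converts the binary number into a proportional voltage.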
The circuit notation for an ideal voltage source is given in Figure 1.18a; the result is shown in Figure 4.5. Again, as with the voltage source, a real current source will not be able to create an infinite voltage, but the voltage can be high enough to cause problems. A general SC biquad that realizes Eq. 122 is shown in Fig. 49, in which capital letters A through L, of course, identify capacitors. There are two types of dependent voltage source: the voltage-controlled voltage source (VCVS) and the current-controlled voltage source (CCVS). To avoid repetition, design equations will be given only for the more frequently used HE and HF functions; they are derived from the numerator forms, and the denominator is, of course, the same for both transfer functions. To obtain the output characteristics, we connect a load resistance RL to the real current source of Fig. 1.12d.

The voltage supplied by a source can be time varying or constant (a constant voltage is a special case of a time-varying voltage). The independent voltage source is by far the most common power source you will see in both circuits and your career. The inverse of a resistance is a conductance, G = 1/R.
The voltage-controlled voltage source, VV, has its output controlled by an input voltage, V2 = μV1, where μ is a dimensionless constant. Substituting the load relationship, the current flowing through the resistor will be −I2, and thus V2 = −I2R. The node transformation yields equations in which Vn is the column vector of node voltages, written with respect to the reference node. In the box is another R-2R ladder, like the one discussed in Section 4.4, and the three-bit binary numbers go from 0 (000) up to 7 (111).

Practically ideal current sources do exist: there are certain transistor circuits which can maintain a constant current output, independent of the voltage, for a wide range of load resistances. A water analogy works for voltage sources as well: the water tank and its volume provide the constant (voltage) pressure. Current sources do not go well with open circuits, however; an open-circuited current source develops whatever voltage is needed to keep its current flowing, which can be high enough to make you really unhappy.

For the biquad, the properties of this general circuit are much more easily discerned once the element values are scaled; if the voltage level at an output is too low, unnecessary noise penalties may be incurred, and it may be shown that one can initially set some of the capacitors to zero. The circuit is not minimal, with redundancy occurring in both the feedback and feedforward paths; thus, in practice either capacitor E or the switched capacitor F is used, but not both. When essential nodes are connected by independent or dependent voltage sources, we must be careful to form supernodes and to remember what each source type contributes to finding the unknown node voltages.

In this final tutorial before we get into the meaty aspects of circuit analysis with Kirchhoff's Laws, we learned about voltage and current sources and some of their important features. We also delved into dependent power supplies and learned a few important items that will become more obviously applicable as we start analyzing circuits.
Ontario is divided by three of Canada’s seven physiographic regions: the Hudson Bay Lowlands, the Canadian Shield and the St. Lawrence Lowlands. Agriculture, as well as most of the population, is concentrated in the south. By comparison, Northern Ontario, with nearly 90 per cent of the land, contains only six per cent of the population. Despite the tendency to divide the province in three, geology, climate, soil and vegetation combine to create distinct areas within these broad classifications. Ontario has the most varied landscape of any Canadian province. Two-thirds of the province lies under the Canadian Shield, which covers most of the North, with the exception of the Hudson Bay Lowlands. To the east lies the eastern Ontario plain, between the Ottawa and St. Lawrence rivers. To the west, from Kingston on, there are belted rolling hills and plains culminating in the flat country in extreme southwestern Ontario. The Niagara Escarpment extends from Niagara north to Tobermory, and through Manitoulin Island in Georgian Bay. The rocks of the Canadian Shield are among the oldest on Earth, dating from the Archaean and Proterozoic eons of the Precambrian era (542 million to 4 billion years ago). These formations contain the large mineral deposits that are so important to the economy of Northern Ontario. The sedimentary limestone, shale and sandstone underlying Southern Ontario are more recent than the Shield, dating from the Paleozoic era, and are generally of the Ordovician, Silurian and Devonian periods (485 to 359 million years ago). With the exception of the Niagara Escarpment, outcrops of these rocks are rare. All of Ontario was, at one time or another, covered in ice. As recently as 11,000 years ago the last ice sheet covering the province receded, resulting in the many lakes in the North and the beginnings of the Great Lakes along Ontario’s southern and western borders. These early Great Lakes were considerably larger than their present descendants. As they evolved, they left behind a sand base along which many of the province’s first roads were located. The rivers that once drained them, such as the Grand River, now flow through broad valleys. The effect of the ice age is still apparent. Scattered across Southern Ontario are rocks left behind by the glaciers. Systems of moraines, marking the edges of stalled glaciers, run across the province. The Oak Ridges Moraine, forming the height of land between Lake Ontario and Georgian Bay, is the most prominent. The Horseshoe Moraines parallel the eastern shore of Lake Huron to the base of the Bruce Peninsula and southeast along the escarpment, then southwest toward Lake Erie. Other deposits, called drumlins, are especially frequent in the Peterborough region.
Lakes and Rivers
Ontario has over 250,000 lakes, which contain approximately one-fifth of the world’s fresh water supply. With the exception of Lake Michigan, Ontario includes a portion of all the Great Lakes (i.e., Lakes Superior, Huron, Erie and Ontario). Other major lakes include Lake Nipigon (4,848 km2), Lake of the Woods (3,150 km2 and spanning the Minnesota and Manitoba borders) and Lac Seul (1,657 km2). Ontario also has many rivers; those in Southern Ontario flow into the Great Lakes and the St. Lawrence River system and eventually to the Atlantic Ocean, while most rivers in Northern Ontario drain into James Bay and Hudson Bay.
Major rivers headed for the Atlantic Ocean include the Ottawa River, which rises in western Québec and forms a natural border between Ontario and Québec, while major rivers flowing into James Bay and Hudson Bay include the Severn and the Albany. Ontario has a wide range of climates. The temperature can reach above 30°C in the summer and dip to minus 40°C in the winter, with regional variations in temperature throughout the province. In the North, a bitter subarctic climate prevails, with mean daily temperatures of 16°C in July and minus 22°C in January. Winter temperatures are highest along the Great Lakes in southwestern Ontario and below the Niagara Escarpment, with January mean daily temperatures ranging from minus 3°C near Windsor to minus 3.7°C in Toronto. In July, the area between Chatham and Windsor is warmest (22°C). The winters are severe and stormy through much of the province. The areas receiving westerly winds off the Great Lakes are often called the “Snow Belt”; the areas south of Owen Sound, around Parry Sound and west of Sault Ste. Marie receive snowfall in excess of 250 cm. By comparison, the areas around Toronto and Hamilton are in the partial rain shadow of the Niagara Escarpment and receive less than 150 cm of snow annually.
Soil and Vegetation
The Canadian Shield is mostly, but not entirely, unsuitable for agriculture. The podzolic soils in this northern region are extremely thin and low in fertility, but are still sufficient to support boreal forests (see Soil Classification). There are only a few areas, such as the clay belts in northeastern Ontario or the Rainy River area in the northwest, where enough farming is possible to create the impression of an agricultural landscape. Northern Ontario’s forest cover is not uniform. In the extreme north, stunted willows and black spruce struggle to grow in bogs; farther south, spruce, aspen and jack pine dominate the northern Canadian Shield. Farther south again, to the east and west of Lake Superior, the Shield is covered by a mixed forest, known as the Great Lakes–St. Lawrence forest region. In the early 19th century, magnificent stands of white pine, the foundation of the central Canadian forest industry, as well as hard maples, were found in eastern Ontario. However, due to early logging practices, the abundance of white pine in northern Ontario remains dramatically reduced. The grey-brown luvisolic soils of southern Ontario that developed under forest vegetation from till and glacial deposits are reasonably fertile. Deltas, left behind from the ice age, form sand plains, especially to the north of Lake Erie.
National and Provincial Parks
Ontario is home to 334 provincial parks and six national parks — Bruce Peninsula, Georgian Bay Islands, Point Pelee, Pukaskwa, Thousand Islands and Rouge. Rouge, located in the Greater Toronto Area, is Canada’s first national urban park. The province’s oldest provincial park is Algonquin, established in 1893. The remainder of the provincial parks range from Rondeau Provincial Park on Lake Erie in the south to Polar Bear Provincial Park in the north. At approximately 2.4 million hectares, Polar Bear Park is Ontario’s largest park and is accessible only by air. The first residents of present-day Ontario arrived during the last ice age, approximately 11,000 years ago. As the ice retreated, Paleo-American inhabitants moved into the northern region of the province.
For many years, Indigenous people probably lived by fishing and hunting; deer, elk, bear and beaver could be found in the south, and caribou in the north. By 1000 BCE, pottery had been introduced, and archaeological sites show a far-flung trading system with importations from as far as the Gulf of Mexico. By 800 CE, certain tribes living south of the Canadian Shield, including the Wendat and the Haudenosaunee, were well-established farmers, growing primarily corn, beans and squash. When Europeans arrived in the early 1600s, the Cree inhabited what is now Northern Ontario, with the Ojibwa, Odawa and Algonquin living south of them, along the northern shore of Lake Superior and into modern-day Québec. Further south still, around Georgian Bay, were the Wendat, and in the southernmost tip of today’s province were the Potawatomi and the people known as the Neutral. The nations making up the Haudenosaunee confederacy (the Mohawk, Onondaga, Oneida, Cayuga and Seneca) lived south of Lake Ontario, in the Finger Lakes region of present-day New York State.
Exploration and Fur Trade
The first Europeans known to have approached the present frontiers of Ontario were Henry Hudson, who explored the coast of James Bay, and Étienne Brûlé and Samuel de Champlain, who travelled along the Ottawa River in 1613 and reached the centre of the province in 1615. Brûlé was likely the first European to see Lakes Huron and Ontario. The French allied themselves with the Wendat, Innu and Algonquin, using already established Indigenous trade networks to move furs across the region. Similarly, the Dutch and English allied themselves with the Haudenosaunee. Each side armed their Indigenous partners with guns. Given their position between the abundant furs of the Canadian Shield and the south, the Wendat prospered in the early decades of European fur trade. Wishing to gain access to this trade themselves, the Haudenosaunee staged a series of raids on Wendat villages throughout the 17th century (see Iroquois Wars). Between 1642 and 1649, these raids resulted in the dispersal of the Wendat. Some fled to Québec while others moved south to join the people known as the Neutral. However, Haudenosaunee raids on Neutral villages in 1650–51 meant they too were scattered, being absorbed into other Indigenous communities further west and south. At the same time, despite the hostility of the Haudenosaunee, the French continued their penetration of the Great Lakes region, utilizing both the Ottawa–French River–Lake Huron route to the west and the St. Lawrence–Great Lakes path. French explorer René-Robert Cavelier de La Salle built and sailed the Griffon on the Great Lakes, and the Ontario region became a vital link between the French settlements in Québec and their fur trading posts on the Mississippi. During the 18th century, the main French posts in the Great Lakes region were Fort Frontenac (Kingston), Fort Niagara, Fort Detroit and Fort Michilimackinac. France’s rivals, the British, did not control the region until 1758–59 when they burned Fort Frontenac and captured Fort Niagara. British occupation was not secure until the Indigenous allies of the French were defeated after an uprising in 1763–64. The Great Lakes region also served as a base of operations for British forces during the American Revolution. A series of bloody campaigns and raids did not shake the British hold over their Great Lakes forts, but did result in the arrival of Loyalist and Haudenosaunee refugees displaced from the American frontier.
The Treaty of Paris (1783) divided the Great Lakes down the middle and created the southern boundary of what is now Ontario.
American Revolution and Settlement
The modern settlement of Ontario began with the arrival of some 6,000 to 10,000 Loyalists during and after the American Revolution. After them came other Americans, attracted by cheap land; crown land was available for sixpence an acre plus survey costs and an oath of allegiance. Under the Constitutional Act of 1791, the old Province of Québec was divided and Upper Canada created. A regular colonial government was established, with a lieutenant-governor, an elected legislative assembly and appointed legislative and executive councils. The first lieutenant-governor was John Graves Simcoe, an English veteran of the American Revolution, who aimed to turn Upper Canada into a bastion of the British Crown in the heart of the continent.
War of 1812
Upper Canada continued to mark the northern fringe of the American frontier, but by 1812 approximately 80 per cent of the estimated 100,000 settlers in Southern Ontario were of American origin. When the War of 1812 broke out with the United States, the attitude of parts of the province’s population proved highly ambivalent, and a few Upper Canadians actually sided with and fought alongside the invaders. The British army, with assistance from Indigenous people and local militia, succeeded in defending most of the province, repelling American invasions along the Niagara frontier in 1812 (Queenston Heights) and 1813 (Beaver Dams and Stoney Creek) (see also First Nations and Métis Peoples in the War of 1812). In 1813, American forces thrust into southwestern Ontario and raided the provincial capital, York (Toronto), where the government buildings were burned. After several more bloody battles in 1814, the war drew to an end. The peace treaty that ended the war stipulated that the Americans and British each hand back what they had conquered, and the boundary remained unchanged. By the mid-1830s, the colonial government had signed treaties covering most arable land in Upper Canada (now the Great Lakes region of Ontario). To the government, treaties meant that First Nations surrendered their land in exchange for goods and other promises. Gradually, treaties came to include the creation of reserves, or plots of land set aside for First Nations to live on. These reserves represented fractions of each nation’s traditional territory. The Upper Canada Treaties comprise a number of agreements signed between 1764 and 1862, many of which provided one-time payments to First Nations without establishing reserves. Other treaties in the province include: the Robinson-Huron and Robinson-Superior Treaties (1850), the Williams Treaties (1923), Treaty 3 (1873–75) and Treaty 9 (signed in stages in the early 1900s, starting in 1905). (See also Indigenous Peoples: Treaties; Numbered Treaties.) Between 1825 and 1842, the population of Upper Canada tripled to 450,000, and by 1851 it had doubled again. Most of the immigrants came from the British Isles, roughly made up of 20 per cent English, 20 per cent Scottish and 60 per cent Irish immigrants. Settlement generally spread from south to north, moving away from the lakes as land along them became settled. Accessibility to land away from the lakes depended on roads — usually of terrible quality — many of which were built by the settlers themselves.
Rebellions of 1837
Rampant land speculation added to the irregularity of early settlement patterns.
Southern Ontario’s fertile land was substantially occupied by the mid-1850s, by which time the form of government had changed again. In the aftermath of the Rebellions of 1837, led in Upper Canada by Toronto “firebrand” William Lyon Mackenzie, the British government brought Upper and Lower Canada together in the united Province of Canada.
Responsible Government and Confederation
A further decade of fractious politics resulted in a measure of responsible government in 1848–49, by which time immigration, combined with a high birthrate, had raised Upper Canada’s population to about 60,000 more than its partner, Lower Canada. The agitation for representation by population was led by George Brown. Representation by population would mean that Upper Canada would receive additional representation in the Legislature, and this movement led to the increasing paralysis of the province’s political system. The crisis was finally resolved in 1864 by the formation of a joint-party regime (see Great Coalition) to seek a union of the British North American colonies. This Confederation was gained in 1867, and Ontario became a province of the new Dominion of Canada. (See also Ontario and Confederation.) In the 1850s, Ontario’s economy was primarily agricultural with an emphasis on wheat growing. Over time the balance shifted to dairy, fruit and vegetable farming. At the same time there was a drift away from farming areas, as emigration to the United States, the Canadian West or to cities increased. Urban and industrial growth increased from the 1850s through the 1860s with the development of textiles and metalworking, farm implements and machinery. Toronto in particular grew as both a railway and manufacturing centre, and as the provincial capital. Ontario’s successive governments thereafter took up developing the province’s natural resources — lumber, mines and later, hydroelectricity. There was a lengthy series of quarrels with the federal government over patronage, water power and the northern boundaries of the province — a problem settled in 1889, at the expense of Manitoba, by confirming Ontario’s western boundary at the Lake of the Woods. The final boundary was drawn in 1912.
Language and Ethnicity
The majority of Ontario’s population (69.5 per cent) identifies English as their mother tongue, followed by French (4 per cent), Mandarin and Cantonese (2.2 per cent each) and Italian (1.9 per cent), according to the 2016 census. Toronto has the highest number of non-native English or French speakers, with 47.5 per cent of the population reporting a non-official language as their mother tongue. Urban centres with the highest share of French speakers are Sudbury (26 per cent) and Ottawa (16 per cent). Ontario has an ethnically diverse population. According to the 2016 Census, 61.5 per cent of the province’s population is of European origin. Among this group, those who claim British Isles ancestry are the largest, followed by French, German and Italian. Indigenous people (including First Nations, Métis and Inuit) make up 2.8 per cent. Visible minorities comprise 29.3 per cent of the province’s population, with South Asian, Chinese and Black people making up the largest visible minority communities. The majority of Ontario’s population is Christian, with 65 per cent of the population identifying with a Christian denomination, according to the 2011 National Household Survey. Following Christianity, the religions with the most followers are Islam (5 per cent), Hinduism (3 per cent) and Judaism (2 per cent).
Those claiming no religious affiliation number 23 per cent.
Towns, Cities and Reserves
List of Ontario’s 10 Largest Cities
In 2016, 86 per cent of Ontario’s population was urban. By comparison, 165 years earlier, in 1851, the figures were reversed: 86 per cent of Ontario’s population was rural. These numbers reflect the fact that, in addition to being the most populous province in the country, Ontario is also the most urban. The most outstanding feature of this urban pattern is the continuous network of communities around the western end of Lake Ontario — called the Golden Horseshoe — stretching from Peterborough in the east to St. Catharines-Niagara in the southwest. More than 64 per cent of Ontario’s population lives in this region. Toronto is Canada’s largest city and plays a dominant role in Ontario’s economy. The urban centres in southwestern Ontario lie around Kitchener-Cambridge-Waterloo and London. Windsor, the long-time home of the automotive industry, is geographically part of the Detroit urban complex. Apart from Kingston, the largest city on the eastern end of Lake Ontario, and Ottawa, eastern Ontario has no substantial urban concentration. The cities of northern Ontario are strung out along the railway lines to which most of them owe their origin. North Bay is still a transportation centre; Sudbury is at the heart of Canada’s largest mining district; Sault Ste. Marie is a steel producer; and Thunder Bay is a major transshipment port. There are 200 reserves in Ontario and nine First Nation settlements. While a reserve is land set aside by the Indian Act for a band or First Nation, a settlement is a place where the resident population is predominantly Indigenous. Of the provinces, Ontario has the highest on-reserve population. At just over 426 km2, the Wikwemikong reserve — located on the east end of Manitoulin Island in Georgian Bay — is Ontario’s largest, housing members of the Wikwemikong band of the Odawa, Potawatomi and Ojibwa nations. As a point of comparison, London is 420 km2. The Fort Albany 67 reserve, a former Hudson’s Bay Company post located on the southwest shore of James Bay, is the second-largest reserve by size, followed by Webequie, home to Ojibwa people. Many Ontario reserves are remote: 25 per cent are only accessible by air year-round, or by ice road during the winter. Moreover, many residents of Ontario First Nations have to deal with deplorable, unsafe living conditions. Water advisories, which signal when water is unsafe to drink or use for personal hygiene, occur regularly on reserves in Ontario and across Canada. In 2016, Human Rights Watch reported two Ontario First Nations, Neskantaga and Shoal Lake 40, as having been under 20-year-long water advisories, two of the longest in the country. Other Ontario First Nations have made national headlines highlighting additional challenges faced by those living on reserves. On 28 October 2011, for example, the Attawapiskat First Nation declared a state of emergency due to a housing crisis. On 9 April 2016, the community again declared a state of emergency, this time due to an overwhelming number of attempted suicides (see also Suicide among Indigenous Peoples in Canada). Ontario’s economy began with hunting and trapping. It expanded with the arrival of the settlers and, until the latter part of the 19th century, remained predominantly rural and agriculture-based. By the early 20th century, rail lines built across Ontario’s northland opened up rich mineral resources in places such as Cobalt and Timmins.
The discovery and growth of hydroelectric power, combined with an export boom at the turn of the 20th century, stimulated industrial expansion and the growth of large and small cities. Ontario is often characterized as Canada’s manufacturing heartland; although manufacturing has declined in the last decade, the province remains the country’s primary location for manufacturing industries. Ontario has just over 50 per cent of Canada’s best agricultural land, also known as Class 1 land. In terms of farm cash receipts (i.e., a farm’s gross revenue), Ontario ranks second among the provinces after Alberta, according to the 2016 Census of Agriculture. Most farming is done in the south, although clusters of farms on the Canadian Shield serve local dairy markets. Ontario’s three largest field crops are soybeans (74 per cent of Canada’s soybean farms are located in Ontario), corn (for grain) and winter wheat. Ontario is also the only tobacco-producing region in the country. Fruit and tree nut farming are also important to the province, as Ontario ranks third in terms of the number of fruit and nut farms, after British Columbia and Québec. With respect to livestock, Ontario hosts the most poultry, egg, sheep and goat farms of any province, and the second-highest number of dairy farms, after Québec. In terms of beef cattle farms, Ontario ranks third, after Alberta and Saskatchewan. As in other jurisdictions, Ontario farmers are accustomed to selling their products through marketing boards that were established as far back as the 1930s. These boards do not command universal support, even among farmers, but are intended to introduce a degree of regularity and predictability into the marketing of agricultural products. Marketing boards in Ontario include the Dairy Farmers of Ontario, the Chicken Farmers of Ontario, the Grain Farmers of Ontario and the Grape Growers of Ontario. In terms of value, Ontario produces more metals and other minerals than any other province or territory. The province is the country’s leading producer of cobalt, gold, silver, nickel, selenium and platinum group metals, as well as the industrial materials cement, stone and nepheline syenite (used for glass and ceramic manufacturing). In addition, Ontario ranks second in terms of copper production, following British Columbia, and is one of only two diamond producing regions in Canada (the country’s most significant diamond producer is the Northwest Territories). The majority of Ontario’s metal and mineral mines are located on the Canadian Shield, in particular around Timmins and Sudbury. The region’s lone diamond mine is located just west of the Attawapiskat First Nation, on the western side of James Bay. The southern portion of the province is primarily responsible for industrial material production. The development of Ontario’s mining industry was closely associated with the rise of Toronto as the financial centre of both Ontario and Canada. Beginning around 1900, the exploitation of minerals in Northern Ontario made Toronto first a competitor and then a winner in its long-standing competition with Montréal. From the late 1880s to the mid-20th century, mineral discoveries dotted Northern Ontario. One of the world’s largest deposits of nickel and copper, along with lead, zinc, silver and platinum, was found in the Sudbury Basin in 1883. Near the town of Cobalt, a major discovery of high-grade silver was made in 1903.
Large gold deposits were discovered near the towns of Porcupine and Kirkland Lake from 1906 to 1912, Red Lake in 1925 and near Hemlo in 1981. In 1953, one of the largest uranium deposits in the world was found at Elliot Lake. A major copper, zinc and silver deposit was discovered near Timmins in 1964. Limestone, sand and gravel are available in many parts of Southern Ontario as a result of glacial deposits. The vast majority of electricity in Ontario is transmitted by Hydro One. The company owns almost all of the province’s transmission lines and is responsible for distributing electricity in some parts of Ontario and sending electricity to distribution companies in others. For example, in Toronto, local distribution is provided by Toronto Hydro. About 58 per cent of Ontario’s electricity comes from nuclear power, 10 per cent from natural gas, 23 per cent from hydroelectricity and the remainder from solar, wind and bioenergy. The province is home to three nuclear power plants. Bruce Power, located just north of Tiverton on the shores of Lake Huron, is one of the largest nuclear power plants in the world. The Pickering and Darlington nuclear stations are located east of Toronto on the shores of Lake Ontario. Natural gas and hydroelectric stations are scattered throughout the province; the largest natural gas station, the St. Clair Energy Centre, is located near Sarnia, while the Sir Adam Beck Complex, located on the Niagara River, is the largest hydroelectric facility. Ontario’s wind farms are clustered in the southwest, mostly along the Great Lakes, while bioenergy facilities are found in the north. Hydro One, originally known as the Hydro-Electric Power Commission of Ontario, was founded in 1906 by Sir Adam Beck. The company was a crown corporation until the provincial government, under the leadership of Kathleen Wynne, began the controversial process of privatizing the firm. By the end of November 2015, Hydro One had completed its initial public offering. The government plans to retain 40 per cent of Hydro One shares while the remainder will be held by other investors. Between 2005 and 2015, Ontario shut down all of its coal-fired power plants, replacing them with a combination of renewable, natural gas and nuclear energy sources. As a result of this shift, greenhouse gas emissions produced by the electricity sector dropped 80 per cent during the same time period. There are roughly 71 million hectares of forested land in Ontario, amounting to about two-thirds of the province. Ninety per cent of these lands are owned by the Crown. Ontario, along with Québec and the Maritimes, provides Canada’s forestry industry with hardwood (i.e., wood from deciduous trees such as birch, maple and oak). In 2015, Ontario generated nearly $105 million in revenue from the sale of timber, or about 8 per cent of Canada’s total timber revenue. Only British Columbia and Québec generated more. Ontario is home to the largest freshwater fishery in North America. Commercial fisheries exist in Lakes Superior, Huron, Erie and Ontario, as well as Nipigon, Rainy, Lake of the Woods and along the St. Lawrence River. Commonly caught fish include yellow perch, walleye, lake whitefish, white bass and rainbow smelt. Ontario is the leading manufacturing province in Canada. This position was already well-established at the time of Confederation, as the province is favoured by ample transportation, abundant natural resources and ready access to export markets in the United States.
Historically, proximity to the American automotive industry encouraged the location of manufacturing plants in Ontario. The establishment of Ford, General Motors and Chrysler plants spawned a series of related industries dotted all across Southern Ontario. In 2016, 44 per cent of Canada’s manufacturing jobs were located in Ontario. However, despite Ontario’s ongoing place as the country’s manufacturing heartland, in the last 10 years the industry has declined dramatically. Between 2006 and 2016, there was a 25 per cent decrease in manufacturing jobs in Ontario, or 245,500 jobs lost. These job losses were characterized by the closing of several prominent factories, including the Caterpillar machinery and Kellogg’s cereal plants, both in London, in 2012 and 2014 respectively, and the Heinz ketchup plant in Leamington, also in 2014. The decline was in large part due to a strong Canadian dollar in the early 2000s, in turn tied to the high price of oil at the time. A strong dollar meant companies had higher labour costs, prompting many to close or move their businesses elsewhere. The 2008 financial crisis only added to the challenges faced by manufacturing firms. In the wake of these factory losses, southwestern Ontario emerged as North America’s “Silicon Valley North,” with the region between Waterloo and Toronto becoming one of the largest tech corridors in the world. Established companies such as Google and Research In Motion are located in the region, as well as thousands of start-ups, prompting a 13 per cent increase in technology-services employment between 2011 and 2016. Toronto’s Bay Street area is the centre of the Canadian financial system. All the principal Canadian chartered banks have their head offices in Toronto, as do many of Canada’s major corporations and brokerage firms. The Toronto Stock Exchange is the country’s largest. First Canadian Place, housing lawyers, accountants and executives, is Canada’s tallest office building at 290 m. At 553 m, the CN Tower, another monument to commerce, was the world’s tallest tower for over three decades, and remains the tallest in the Western Hemisphere. In general, Ontario’s unemployment rate sits near the middle of the range across the provinces and territories. For example, in 2016, unemployment in Ontario was 6.5 per cent, placing it fifth-lowest among its provincial and territorial counterparts. By industry, the largest number of Ontarians are employed in the retail and wholesale trade, followed by health care and social assistance, manufacturing, professional services, and financial and real estate industries. There are 124 seats in Ontario’s provincial government. Each seat is held by a Member of Provincial Parliament (MPP) elected by eligible voters in their electoral district. According to the Elections Act, provincial elections are to be held on the first Thursday of June, every four years. Sometimes, should the party in power see it as advantageous, an election may be called before this date. Elections may also occur before four years have passed in cases where the government no longer has the confidence of the Legislative Assembly (see Minority Government). As with the other provinces, Ontario uses a first past the post electoral system, meaning the candidate with the most votes in each electoral district wins. The party with the most seats forms the government, and the leader of this party becomes premier.
Technically, as the Queen’s representative, the lieutenant-governor holds the highest provincial office, though in reality this role is largely symbolic. (See also Ontario Premiers: Table; Ontario Lieutenant-Governors: Table.) The premier typically appoints members of the Cabinet from among the MPPs also belonging to the party in power. Cabinet members are referred to as ministers and oversee specific portfolios. Typical portfolios include finance, health and education. (See also Politics in Ontario.) Most medical services in Canada are free. Money from taxes is pooled together to fund a health care system often referred to as medicare. While the federal government sets guidelines, each province and territory is responsible for administering its own health care insurance plan; funding for the plan comes from both governments. As with other provinces and territories, certain services in Ontario are not covered by the provincial health insurance plan. These include dental care, prescription drugs and routine eye exams for those between the ages of 20 and 64. In Ontario, the government department responsible for administering the health care system is the Ministry of Health and Long-Term Care. (See also Health Policy.) Ontario’s system of education is divided between two kinds of public schools: non-sectarian and “separate” or Roman Catholic. Within both of these systems are French-language school boards or French-language sections. Each system is run by boards elected by members of the public. This is the result of a compromise at the time of Confederation, when rights for Catholics in Ontario were traded off against those for Protestants in Québec. Since 1899, Ontario has provided public funds to support education in Roman Catholic separate schools to the end of grade 10. In 1984, Premier Bill Davis startled Catholics and non-Catholics alike with a sudden announcement that his government would cover all the costs of separate school education in the remaining grades. This policy was implemented between 1985 and 1987. Private schools are permitted to operate in accordance with the Education Act but do not receive any funding. Parents may also obtain permission from their local school board to educate their school-age children at home. On reserves, schools are run by the local First Nation and financially supported by the federal government. The federal government also operates six schools on reserves in Ontario, including five on the Six Nations reserve and one on Tyendinaga Mohawk territory (see also Education of Indigenous Peoples). Although French-language schools existed in eastern and northern Ontario long before 1968, boards since then have been able to set up French schools “when numbers warrant” (see Separate School). In 1984, the Ontario Court of Appeal ruled that every francophone (and anglophone) student in the province has a right to education in his or her mother tongue. Linguistic minorities, the court also made clear, must be guaranteed representation on school boards and a say in minority-language instruction. The government immediately moved to comply with the court’s ruling, which was based on the Charter of Rights and Freedoms. The education system is organized into elementary and secondary levels. Secondary students bound for university formerly completed a fifth year of high school, or grade 13. In 2003, Ontario’s grade 13 was eliminated. In general, elementary schools provide programs for children from junior kindergarten to grade 8.
As of September 1994, all school boards were required to make junior and senior kindergarten programs available. By 2015, full-day, optional kindergarten was available for all 4- and 5-year-olds attending English-language schools; this option had been available for over 10 years to those attending French-language schools.
Colleges and Universities
Ontario is home to a number of colleges and universities, including Canada’s two largest post-secondary institutions by student population (as of 2013), the University of Toronto and York University, as well as several other large campuses, including the University of Ottawa, Western University and the University of Waterloo. Other universities include the University of Windsor, Wilfrid Laurier University in Waterloo, the University of Guelph, Brock University in St. Catharines, McMaster University in Hamilton, Ryerson and OCAD universities in Toronto, Trent University in Peterborough, Queen’s University and the Royal Military College in Kingston, and further north, Algoma, Lakehead, Laurentian and Nipissing universities, in Sault Ste. Marie, Thunder Bay, Sudbury and North Bay respectively. There are also 24 community colleges in the province; three of the province’s largest are located in Toronto: Seneca, Humber and George Brown. Many municipalities in Ontario have public transit services, most of which include services operating on fixed routes and schedules for the general public and specialized door-to-door transit services for those with disabilities. The Toronto Transit Commission, or TTC, is the largest transit system in Ontario and the third-largest in North America (see also Toronto Subway). Metrolinx, an agency of the Ontario government, was created in 2006 to improve the co-ordination of transportation in the Greater Toronto and Hamilton areas. In 2009, Metrolinx merged with GO Transit, a regional public transit service, and in 2011 introduced PRESTO, an electronic fare card with the goal of allowing passengers to transfer easily between different transit systems. There are few roads in the North, and the most reliable form of transportation in this part of the province is still by air or water. VIA Rail offers passenger rail transportation to numerous cities and has major stations in Toronto, Ottawa, London, Kingston, Niagara Falls, Windsor, Sarnia and Sudbury. The Ontario Northland Transportation Commission, a provincial agency, provides train and bus services to northern communities. Ontario has a large navigable water system, the St. Lawrence Seaway, along its southern frontier. The Welland Canal, an important part of the seaway channel, links Lakes Ontario and Erie. The advent of the seaway, and subsequently the practice of “containerization” of cargo unloaded at East Coast ports, have had a considerable negative impact on the structure of Ontario’s water transport. The most notable casualty has been the port of Toronto, where the number of tonnes shipped and the number of employed dropped drastically — Montréal, Saint John and Halifax being the beneficiaries. Two other Ontario ports, Hamilton and Thunder Bay, are ranked in Canada’s top 10 in the amount of cargo handled. Thunder Bay moves mainly coal, wheat and canola, while Hamilton handles iron ore, iron, steel, alloys and coal. Toronto’s Lester B. Pearson International Airport is Canada’s largest and busiest airport. Other airports of note include Billy Bishop Toronto City Airport, the Ottawa Macdonald-Cartier International Airport and Hamilton’s John C.
Arts and Culture

Artistic and cultural endeavour in Ontario is encouraged through a variety of government subsidy programs, some federal and some provincial, such as the Ontario Arts Council (founded 1963), an independent government agency that gives grants to individuals and organizations. There are symphony orchestras in Toronto (the Toronto Symphony Orchestra), Ottawa, Hamilton and Kitchener-Waterloo. A major Shakespearean festival, the Stratford Festival, was founded in 1953 and is held each year in Stratford. Niagara-on-the-Lake’s annual Shaw Festival produces plays by Bernard Shaw or plays from or about the era in which he lived. Major art galleries include the Art Gallery of Ontario, located in Toronto, and the National Gallery of Canada, located in Ottawa. Each fall, Toronto hosts the Toronto International Film Festival, the largest film festival in North America.

Ontario is home to two National Hockey League teams, the Toronto Maple Leafs and the Ottawa Senators, as well as Canada’s only Major League Baseball team, the Toronto Blue Jays, and its only National Basketball Association team, the Toronto Raptors. The province’s three Canadian Football League teams are the Hamilton Tiger-Cats, Toronto Argonauts and Ottawa Redblacks. Toronto FC is one of three Major League Soccer teams in Canada, and the Toronto Rock is one of four National Lacrosse League teams.

Museums and Historic Sites

Major museums in Ontario include the Royal Ontario Museum, focusing on natural history and cultures from around the world, and the Aga Khan Museum, focusing on Muslim civilizations. Both are located in Toronto. As the nation’s capital, Ottawa is home to a number of important museums, including the Canadian Museum of Nature, the Canadian War Museum, the Canada Aviation and Space Museum, and the Canada Science and Technology Museum.

The mid-17th-century Jesuit missions to the Wendat were among the first historic sites opened to the public. Having supported research in the area since 1890, the Ontario government undertook the reconstruction of Sainte-Marie Among the Hurons near Midland in 1964 and opened it to the public three years later. Picturesque forts, the legacy of a long period of tension along the American-Canadian border dating from the beginning of the American Revolution, dot the southern reaches of the province. At Kingston, Fort Henry, whose stone walls were originally completed in the 1830s, is perhaps the best known, but Fort George and Fort Erie on the Niagara Historic Frontier, Fort Wellington (Prescott), Fort York (Toronto) and Fort Malden (Amherstburg) have also been restored to their appearance at the time of the international crises and conflicts that marked the first part of the 19th century. The life of the province’s pioneers is depicted in reconstructed townsites, including Upper Canada Village near Morrisburg and Black Creek Pioneer Village in northwest Toronto. In 1973, the Ontario government began to rebuild Fort William (at Thunder Bay), a fur-trading post established by the North West Company in 1803. Boating enthusiasts enjoy two 19th-century canals: the Rideau Canal, built from 1826 to 1832 by the Royal Engineers for the movement of troops and military supplies, and the Trent, which dates back to 1833.
Diversity and Multicultural Education

Our goal is for students not only to be familiar with Foundations concepts but also to become scholars of Foundations of Education. The additional readings selected throughout the course modules reflect this goal, as we have included authors who are considered experts in the field to supplement the course content. In this module, you will be exposed to Sonia Nieto, Diane Ravitch, and Ronald Takaki.

Upon completing this module, students will be able to:
- Define and discuss the idea of multicultural education and the different philosophical approaches to accomplishing it.
- Understand the historical roots of multicultural education.
- Explain the idea of culture and provide examples of how culture is influential. (Key terms: dominant culture, ethnocentrism, cultural capital, compensatory programs, acculturation)
- Recognize the various acculturation outcomes for immigrants.

Introduction to Multicultural Education

Why Multicultural Education?

What is multicultural education? It is likely a term you have heard before, but perhaps something you have never spent much time thinking about. Multicultural education is the idea that the United States is made up of many different kinds of people, and that the public education people receive should reflect and include all the different backgrounds that make up our country. Additionally, multicultural education should help all students feel that they have a place in our schools and society, regardless of their race, social class, gender, sexual identity, disability, language and geographic background, or religious background. To help you understand this importance, we have organized these modules into groups based on these differences. By understanding the experiences and societal impacts of each of these dimensions of diversity, you should be better prepared to teach or interact with people from all backgrounds going forward. As our schools and society in the United States continue to become more diverse, multicultural education is critical to fostering empathy and mutual understanding.

While many people can agree that this is an important concept, implementations of multicultural education can look very different. In today’s educational policy landscape, multicultural education is often viewed as separate from general education, something that can be used occasionally to enrich or complement the general academic program. For example, many schools use national events like Black History Month or Martin Luther King Jr. Day as an opportunity to learn about the contributions of African Americans, while others organize events to celebrate multiculturalism. Diversity weeks or school assemblies designed to promote racial and ethnic diversity can be observed in districts across America. While these efforts are no doubt designed and implemented with benevolent intentions, many scholars in the field of multicultural education have suggested that current educational policies and practices address only the surface level of multiculturalism by highlighting differences in food, dress, music, dance, and language, without addressing the underlying issues of educational values, worldview, and knowledge construction (Banks, 2004; Gollnick & Chinn, 2013; Nieto & Bode, 2012). As such, the conceptualization of multiculturalism shifts from a product to a process.
Rather than offer simple educational products, like prescribed, closed-ended lesson plans, these modules view multiculturalism as a long-term investment that shifts and shapes educational experiences at all levels of policy and practice. The aim of these modules is to expand the understanding of multiculturalism to create a more inclusive and more holistic approach to teaching and learning. While many discussions of multiculturalism center on issues of race and ethnicity, we posit that class and socioeconomic status, gender, sexual orientation, language, immigration, geography, and religion also play crucial roles in the development of equal and equitable educational policies and practices. Therefore, after a discussion of the sociopolitical and sociocultural contexts of education and the overarching approaches to multicultural education, this module will investigate each of the individual identifiers that contribute to a more complete view of multiculturalism.

History of Multiculturalism

Multiculturalism, by definition, contains, and is characterized by, the diverse histories, ideologies, and social movements that combined to create the body of educational theories and practices that exist today. Given the history of discrimination based on race, ethnicity, gender, and language in the United States, the American education system offered unequal educational experiences to students for centuries. Prior to the feminist movement and the Civil Rights Movement of the 1960s, dominant social groups, for the most part white, wealthy males, held the social, intellectual, political, and economic power to construct the knowledge, ideologies, and cultural norms that became institutionalized in American society and were therefore implemented in educational settings. A wide body of scholarly research documented the systematic construction of educational curricula that validated and reinforced the dominance of European and Western values while simultaneously degrading and devaluing the contributions of communities of color (Banks, 1993; Fine, 1987; Hines, 1964). Theoretical and empirical research confirmed that the imposition of a singular construction of knowledge based on the political, cultural, and economic ideologies of the dominant group was detrimental to the education of students whose backgrounds did not align with that group (Banks, 2004). These findings, documented in formal research as well as in the informal experiences of countless individuals, contributed to the formation of a more unified conception of multicultural education.

It is important, however, to situate modern understandings of multiculturalism within their historical contexts. In an effort to reflect the diverse history of multiculturalism, Fullinwider (2003) identified several “tributaries” that converged to create multicultural education. Intergroup education, the Civil Rights Movement of the 1960s, ethnic studies programs, and feminist and gender equality movements offered some of the most influential contributions to the contemporary conception of multiculturalism. Each of these traditions challenged dominant patterns of knowledge construction in American society, and thereby influenced teaching and learning in schools across the nation.
While some education historians challenge the idea that the intergroup education movement influenced the development of early multiculturalism (Boyle-Baise, 1999), others see it as a precursor to the ethnic studies movement that was integral to multiculturalism’s recognition as a legitimate academic field (Banks, 2004). The intergroup education movement was a product of the larger political, social, and economic context of its era. Throughout the 1940s, the effects and consequences of the United States’ involvement in World War II radically changed the way of life for many Americans. Economically, the increased availability of wartime jobs in the North and West enticed large numbers of African Americans, Mexican Americans, rural whites, and women to migrate to urban centers to fill vacant jobs. Politically, wartime nationalism sparked, to a degree, a more inclusive national political narrative that promoted tolerance of African Americans in order to achieve the common goal of defeating Germany and Japan, though the war also sparked increased racism against Asian Americans, particularly Japanese Americans, who were subject to harassment and violence in addition to being forced to live in internment camps. The social consequences of the war, however, were more complex. With increasing diversity in many urban centers, conflict based on race, ethnicity, and gender became a common experience. In the years following the war, black and Hispanic soldiers were legally and institutionally barred from receiving their GI Bill and other veterans’ benefits, a stark reminder of the deeply entrenched racism in American society. The unrest caused politicians and policy-makers to turn to education for solutions to social issues.

In response to the social, political, and economic consequences of World War II, the intergroup education movement aimed to reduce racial and ethnic tension by promoting an educational ideology of tolerance. Intergroup education grew out of progressive education and was headed by prominent educational researchers such as Hilda Taba, Howard Wilson, and Lloyd Cook (Banks, 2004). In order to achieve its central goal of reducing racial tensions and promoting intergroup tolerance and understanding, the intergroup education movement advocated for the establishment of intergroup relations centers, active involvement in social tolerance movements, and the creation of more inclusive educational objectives, curricula, and pedagogy throughout educational experiences, from kindergarten through university. These programs were implemented sporadically and non-uniformly, which led to mixed results in achieving their stated goals. However, the intergroup education movement produced a number of influential research studies and reports that offered empirical evidence of educational inequalities based on race, ethnicity, gender, and religion. These studies, including Kenneth and Mamie Clark’s doll study, helped to support landmark cases that directly preceded the Civil Rights Movement.
While the intergroup education movement was viewed as a departure from previous educational traditions because of its inclusiveness, it was rooted primarily in an ideology that promoted tolerance and human relations, without a specific focus on the individual histories of different minority groups or the overarching institutionalized discrimination in American society, which became a central focus of the Civil Rights Movement, revisionist history, and ethnic studies programs. It is this distinction that has led scholars to view the intergroup education movement as an educational ideology separate from multiculturalism (Boyle-Baise, 1999).

The scholarly literature identifies the Civil Rights Movement as one of the major factors that contributed to modern multicultural education (Banks, 2004; Banks, 1993; Gay, 1983; Valverde, 1977). Clearly, the overarching goals and objectives of multicultural education reflect the struggle for freedom and equality embodied in the Civil Rights Movement. The Brown v. Board of Education ruling in 1954 marked the beginning of court-ordered educational integration in the United States. However, the oft-quoted “all deliberate speed” language in the court’s decision limited the ability of federal authorities to ensure that states complied with the decision. Despite the ruling, the integration of schools continued to be a hard-fought battle waged by civil rights activists, parent groups, and even students themselves. During this time, the focus was so heavily on the integration of schools and the physical safety of students that there was little room for inquiry into curriculum content and pedagogical practices. However, as the Civil Rights Movement advanced, educational researchers and activists began to question the educational policies and practices of the time and to develop the underlying foundations of multicultural education.

After the passage of the Civil Rights Act of 1964 and the Voting Rights Act of 1965, the character of the Civil Rights Movement began to shift towards cultural pride, self-determination, and political activism (Gay, 1983). The growth of cultural consciousness among black activist groups sparked intellectual inquiry into the histories, traditions, and worldviews of cultures that had previously been excluded from the curriculum in American education, from elementary school through university. Armed with a critical consciousness, academics and practitioners conducted numerous analyses and reviews of curriculum contents and textbooks. Not only did these studies find that the contributions of minority groups and women were systematically left out of the curriculum, they also identified that the vast majority of textbooks contained “ethnic distortions, stereotypes, omissions, and misinformation” (Gay, 1983, p. 561). The misinformation that existed in historiographies and curriculum content served as an impetus for scholars to revisit historical narratives with a specific focus on the contributions and experiences of non-dominant groups. These counter-narratives, sometimes called revisionist histories, challenged the intellectual status quo and offered a contrasting approach to the construction of knowledge. As the field of counter-narratives and revisionist history gained ground in academia, students and professors at universities and colleges across the nation began to demand specific academic programs that centered on the experiences of minority groups in America.
In the shadow of Martin Luther King Jr.’s assassination in 1968, the Civil Rights Movement became increasingly fractured as various activist groups moved in different directions, though many shared similar goals and aims. Educationally, the combination of a resurgence of cultural pride and the counter-narratives of revisionist history created a sense of isolation and alienation from mainstream American culture and inspired a separatist perspective on curriculum and instruction. With the support of faculty, minority student activist groups on college and university campuses petitioned for specialized programs that addressed racial and ethnic issues. In response to this pressure, colleges and universities established Black Studies programs and courses throughout the late 1960s and early 1970s. In 1968, San Francisco State University became the first university to offer a Black Studies major. The establishment of Black Studies programs helped open the door for other groups who had been subjected to institutionalized discrimination to organize and lobby for programs and courses specific to their experiences. By 1973, approximately 600 new ethnic studies programs had been established at colleges and universities around the United States (http://munews.missouri.edu/news-releases).

As distinct ethnic studies programs became increasingly common in educational settings, scholars and researchers began to identify commonalities among the philosophies, ideologies, and experiences addressed across the separate programs. These common ideas became a focal point for the establishment of multiethnic perspectives, which are considered to be the antecedent of multiculturalism. The shift from ethnic studies to multiethnic studies was guided by the work of many scholars who are now considered to be the founders of multiculturalism, including James Banks, Christine Bennett, Geneva Gay, Donna Gollnick, and Carl Grant. While the central goals of achieving equal and equitable educational experiences for all students through critical thinking, social justice, and community activism did not change during this period, some worried that the conceptual frameworks and theoretical perspectives would become muddled and less clear due to the wide variety of experiences of the various minority groups included under the multiethnic umbrella (Grant, 1978). Despite these warnings, the boundaries of multiethnic education quickly expanded to become multiculturalism with the addition of gender and disability issues.

Not surprisingly, the counter-narratives of ethnic studies were mirrored by a movement in gender studies that contributed to the creation of feminist movements and a resurgence of scholarship focused on women’s issues and larger discussions about the importance of gender in society. The inclusion of gender studies in multiculturalism allowed for the development of new frameworks for analysis. For example, concepts of intersectionality and the interlocking experiences based on race, class, gender, and other identifiers, which are common in modern multiculturalism, grew out of scholarship and research in Women’s Studies and Ethnic Studies. These developments allowed for deeper investigations into the systems of discrimination and advantage in American society.
However, the inclusion of gender and disability in multiculturalism was not welcomed by all: some scholars continued to challenge the inclusion of gender, disabilities, and age in multiculturalism because experiences based on those identifiers did not constitute a “pervasive worldview” and therefore did not conform to the commonly accepted definition of culture in the field (Boyle-Baise, 1999). Regardless, modern conceptions of multiculturalism often include a consideration of gender, sexual orientation, disability, religion, and age (Gollnick & Chinn, 2013; Nieto & Bode, 2012). Perspectives on multicultural education today continue to reflect the initial goals of improving educational equity and equality, reducing discrimination, and promoting active involvement in social justice and democratic society. The history of multiculturalism includes diverse influences and contributions that truly embody the ‘multi’ of multiculturalism.

Sociopolitical Contexts of Education

Although educational policies and practices are sometimes viewed as if they existed in a vacuum, separate from the larger social, political, and cultural contexts, one of the central tenets of multiculturalism asserts that educational decision-making is heavily influenced by each of these contexts. In particular, many scholars of multicultural education point to the importance of the sociopolitical context of education in the modern era, as educational policies and practices become increasingly politicized. Given the political nature of educational decision-making, the policies and practices implemented at national, state, and local levels reflect the values, traditions, and worldviews of the individuals and groups responsible for their design and implementation, which inherently makes education a non-neutral process, though it is often assumed to be neutral. Understanding the sociopolitical context of education allows for a critical analysis of educational policies and practices in an effort to reduce educational inequalities, improve the achievement of all students, and prepare students to participate in democratic society.

In the field of multicultural education, and across the social sciences, the sociopolitical context refers to the laws, regulations, mandates, policies, practices, traditions, values, and beliefs that exist at the intersection of social life and political life. For example, freedom of religion is one of the fundamental principles of life in American society, and therefore there are laws in place that protect every individual’s right to worship as they choose. In this instance, the social practices (ideologies, beliefs, traditions) and political processes (laws, regulations, policies) reflect each other and combine to create a sociopolitical context that is, in principle, welcoming to all religious practices. There are similar connections between the social and the political in the field of education. Given that one of the main purposes of schooling is to prepare students to become productive members of society, classroom practices must reflect, to some extent, the characteristics of the larger social and political community. For example, in the United States, many schools use student governments to expose students to the principles of democratic society. By organizing debates, holding elections, and giving student representatives a voice in educational decision-making, schools hope to impress upon students the importance of engaging in the political process.
The policies and practices that support the operation of student government directly reflect the larger sociopolitical context of the United States. Internationally, the use of student government often reflects the political system of the country in question, if a student government organization exists at all. However, sociopolitical contexts influence educational experiences in subtler ways as well. Throughout the history of American education, school policies and practices have reflected the ideological perspectives and worldviews of the underlying sociopolitical context. As stated above, schools in democratic societies often have democratic student government organizations that reflect the political organization of the larger society, while similar organizations cannot be found in schools in countries that do not practice democracy. Similarly, if a society shares a widespread belief that some groups (based on race, class, language, or any other identifier) are inherently more intelligent than others, educational policies and practices will reflect that belief. For example, as the United States expanded westward into Native American lands during the late 19th and early 20th centuries, many Americans shared the widespread belief that Native Americans were inherently less intelligent and less civilized than white Americans. This belief system served as a justification for the “Manifest Destiny” ideology that encouraged further westward expansion. Not surprisingly, the larger sociopolitical context of the time influenced educational policies and practices. In large numbers, young Native Americans were torn from their families and forced into boarding schools where they were stripped of their traditions and customs before being involuntarily assimilated into “American culture”. These Native American boarding schools outlawed indigenous languages and religions. They required students to adopt western names, wear western clothes, and learn western customs. While from a contemporary perspective these schools were clearly inhumane, racist, and discriminatory, they illustrate how powerfully the sociopolitical climate of an era can shape the implementation of educational policies and practices. Educational policies today continue to reflect the larger social and political ideologies, worldviews, and belief systems of American society, and although instances of blatant discrimination based on race, ethnicity, class, gender, sexual orientation, language, or any other identifier have been dramatically reduced in recent decades, a critical investigation into contemporary schooling reveals that individuals and groups are systematically advantaged and disadvantaged based on their identities and backgrounds, which will be explored in more depth in subsequent sections of this (book/class).

The role of social institutions in educational experiences is another key consideration in developing an understanding of the sociopolitical contexts of education. The term social institutions refers to the established, standardized patterns of rule-governed behavior within a community, group, or other social system. Generally, the term includes a consideration of the socially accepted patterns of behavior set by the family, schools, religion, and economic and political systems.
Each social institution contributes to the efficiency and sustained functionality of the larger society by ensuring that individuals behave in a manner that is consistent with the larger structure, which allows them to contribute to that society. Traffic regulations offer an example of how social institutions work together to create and ensure safety and efficiency in society. In order to reduce chaos, danger, and inefficiency along roadways in the United States, political institutions have created laws and regulations that govern behavior along public roads. Drivers found in violation of these regulations face punishments or fines that are determined by the judicial system. Furthermore, families and schools, and to some extent religious organizations, are responsible for teaching young people the rules and regulations that govern transportation in their society. The streamlined and regulated transportation system produced by the aforementioned social institutions allows economic institutions to function more efficiently.

Functionalist Theory refers to the perspective that institutions fill functional prerequisites in society and are necessary for social efficiency, as seen in the previous example. Conflict Theory, by contrast, refers to the idea that social institutions work to reinforce inequalities and uphold dominant group power. Using the same transportation example, a conflict theorist might argue that regulations requiring licensing fees before a person can legally operate a vehicle disproportionately impact poor people, limiting their ability to move freely and thereby making it more difficult for them to hold and maintain a job that would allow them to move into a higher socioeconomic class. Another argument from the conflict theorist perspective might challenge institutionalized policies that require drivers to present proof of citizenship or immigration papers before being allowed to legally operate a vehicle. Such policies systematically deny freedom of movement to immigrants who entered the United States illegally, thereby limiting their civil rights as well as their ability to contribute to the American economy. Both the Functionalist Theory and Conflict Theory perspectives can contribute to a nuanced understanding of contemporary educational policies and practices by providing contrasting viewpoints on the same issue. Throughout these modules, these perspectives will inform the discussion of educational institutions and how they influence, and are influenced by, other social institutions.

Much like educational policies and practices, the rules and regulations set by social institutions do not exist within a vacuum, nor are they neutral in the way they impact individuals and groups. Institutional discrimination refers to “the adverse treatment of and impact on members of minority groups due to the explicit and implicit rules that regulate behavior (including rules set by firms, schools, government, markets, and society). Institutional discrimination occurs when the rules, practices, or ‘non-conscious understanding of appropriate conduct’ systematically advantage or disadvantage members of particular groups” (Bayer, 2011). Examples of institutional discrimination abound in American history. In the field of education, perhaps the best-known example of institutionalized discrimination is the existence of segregated schools prior to the Brown v. Board of Education decision in 1954.
During this era, students of color were institutionally and systematically prevented from attending white schools and instead were forced to attend schools that lacked sufficient financial, material, and human resources. Institutional discrimination in contemporary society, however, is often subtler, given the plethora of laws that explicitly prohibit discrimination based on race, ethnicity, gender, sexual orientation, or any other identifier. Regardless of those laws, social institutions and institutionalized discrimination continue to disadvantage non-dominant groups, thereby advantaging members of the dominant group. Using housing as an example, homeowners’ associations are local organizations that regulate the rules and behaviors within a particular housing community. If a homeowners’ association decides that only nuclear families can live within its community and creates a bylaw that stipulates such, the practice of allowing nuclear families and denying non-nuclear families becomes codified as an institutionalized policy. While the policy does not directly state that it intends to be discriminatory, it would disproportionately affect families from cultures whose households traditionally include aunts, uncles, cousins, grandparents, and other extended family members, a practice that is common in many Asian, African, and South American communities. Although hypothetical, this scenario illustrates the subtle ways in which institutional discrimination surfaces in contemporary society.

A more concrete example of institutionalized discrimination can be drawn from the housing market in New Orleans as homes were being rebuilt in the aftermath of Hurricane Katrina. While the Lower Ninth Ward, a mostly black neighborhood, was among the most damaged neighborhoods in New Orleans, just downriver the mostly white St. Bernard Parish was also heavily damaged. By 2009, most of St. Bernard Parish had been rebuilt, while the Lower Ninth Ward remained unfit for living. As families began moving back into the area, elected officials in St. Bernard Parish passed a piece of legislation that required property owners to rent only to ‘blood relatives’. In effect, the policy barred potential black residents from moving into the area and served to maintain the racial makeup of the parish as it was prior to Katrina. After several months of implementation, the policy was legally challenged and found to be in violation of the Fair Housing Act in Louisiana courts. In 2014, the parish agreed to pay approximately $1.8 million in settlements to families negatively affected by the policy. This example illustrates how institutionalized discrimination surfaces in contemporary society. Throughout the modules, instances of institutional discrimination in schools, as well as in American society as a whole, will be critically analyzed in order to develop an understanding of how educators can work to reduce inequality and promote academic achievement for all students.

A basic understanding of social institutions and institutional discrimination helps inform this course’s approach to key educational issues in the field of multicultural education. As the student body in American schools becomes increasingly diverse, it becomes increasingly important for future teachers to know and understand how students’ identities might impact their educational experiences as well as their experiences in larger social and political settings.
While there are many issues facing education today, Nieto and Bode (2012) identified four key terms that are central to understanding the sociopolitical context surrounding multicultural education: equal and equitable education, the ‘achievement gap’, deficit theories, and social justice.

The terms equal and equitable are often used synonymously, though they have vastly different meanings. While most educators would agree that providing an equal education to all students is an important part of their mission, it is sometimes more important to focus on creating equitable educational experiences. At its core, an equal education means providing exactly the same resources and opportunities for all students, regardless of their background. An equal education, however, does not ensure that all students will achieve equally. Take English Language Learners (ELLs) as an example. A group of ELL students sitting in the same classroom as native English speakers, listening to the same lecture, reading the same books, and taking the same assessments could be considered to be receiving an equal education, given that all students have equal access to all of the educational experiences and materials. The outcome of this ostensibly equal education, however, would not be equitable. The ELL students would not be able to comprehend the lecture, books, or assessments and would therefore not be given a real possibility of achieving at an equal level, which is the aim of an equitable education. Equity refers to the educational process that “provides students with what they need to achieve equality” (Nieto & Bode, 2012, p. 9). In the case of the ELL example, an equitable education would provide additional resources, perhaps including ESL specialists, bilingual activities and materials, and/or programs that foster native-language literacy, to the ELL students to ensure that they are welcomed into the classroom community and given the opportunity to learn and succeed equally. Working towards educational equality by providing equitable educational experiences is one of the central tenets of multicultural education and will be a recurring topic throughout these modules.

A second key term that is crucial in understanding multicultural education is the ‘achievement gap’. A large body of research has documented that students from racially and linguistically marginalized groups, as well as students from low-income families, generally achieve less than other students in educational settings. Large-scale studies of standardized assessments revealed that white students outperformed black, Hispanic, and Native American students in reading, writing, and mathematics by at least 26 points on a scale from 0 to 500 (Nieto & Bode, 2012; National Center for Educational Statistics, 2009). Though usage of the term has changed over time, it often focuses on the role that students themselves play in their underachievement, which has drawn criticism from advocates of multicultural education because it places too much responsibility on the individual rather than considering the larger sociopolitical and sociocultural contexts surrounding education.
While gaps in educational performance no doubt exist, Nieto and Bode (2012) suggest that terms such as “resource gap”, “opportunity gap”, or “expectations gap” may be more accurate in describing the realities faced by marginalized students, who often attend schools with limited resources and limited opportunities for educational advancement or employment in their communities, and who face lowered expectations from their teachers and school personnel (p. 13). Throughout this (book/course), issues related to the ‘achievement gap’ and educational inequalities based on race, class, gender, and other identifiers will be viewed within the larger social, cultural, economic, and political contexts in order to create a more holistic and systematic understanding of student experiences, rather than focusing purely on the individual.

Historically in educational research, deficit theories have been used to explain how and why the achievement gap exists, but since the 1970s, scholars of multicultural education have been working to dismantle the lasting influence of deficit theory perspectives in contemporary education. The term ‘deficit theories’ refers to the assumption that some students perform worse than others in educational settings due to genetic, cultural, linguistic, or experiential differences that prevent them from learning. The roots of deficit theories can be found in 19th-century pseudo-scientific studies that purported to offer ‘scientific evidence’ classifying the intelligence and behavioral characteristics of various racial groups. The vast majority of these studies were conducted by white men who, unsurprisingly, found white men to be the most intelligent group of human beings, with other groups falling in behind in ways that mirrored the accepted social standings of the era (Gould, 1981). Though many have been disproved, deficit theories continue to surface in educational research and discourse. Reports suggesting that academic underachievement is a product of cultural deprivation or a dysfunctional relationship with school harken back to deficit theory perspectives. Much like the ‘achievement gap’, deficit theories place the burden of academic underachievement on students and their families, rather than considering how social and institutional contexts might impact student learning. Deficit theories also create a culture of despondency among educators and administrators, since they support the idea that students’ ability to achieve is predetermined by factors outside of the teacher’s control. Multicultural education aims to disrupt the prevalence of deficit theory perspectives by encouraging a more nuanced analysis of student achievement that considers the structural and cultural contexts surrounding American schooling.

The fourth and final term that is central to understanding the sociopolitical context of multicultural education is social justice. Throughout these modules, the term social justice will be employed to describe efforts to reduce educational inequalities, promote academic achievement, and engage students in their local, state, and national communities. Social justice is multifaceted in that it embodies the ideologies, philosophies, approaches, and actions that work towards improving the quality of life for all individuals and communities.
Not only does social justice aim to improve access to material and human resources for students in underserved communities, it also exposes inequalities by challenging and confronting misconceptions and stereotypes through the use of critical thinking and activism. Finally, in order for social justice initiatives to be successful, they must “draw on the talents and strengths that students bring to their education” (Nieto & Bode, 2012, p. 12). This allows students to see their experiences represented in curriculum content, which can empower and inspire students not only to excel academically but also to engage in activities that strengthen and build the community around them. These key components of social justice permeate the field of multicultural education.

In order to develop a holistic understanding of educational experiences, these modules will interpret and analyze educational policies and practices through a lens that considers the sociopolitical contexts of education. By recognizing the influence that social and political ideologies have over educational decision-making, multicultural approaches to education aim to reduce educational inequalities, improve the achievement of all students, and prepare students to participate in democratic society.

Sociocultural Contexts of Education

Culture and Society

One of the main goals of multicultural education is to help bridge understanding between the dominant culture and the different groups of people who may have been marginalized by that culture. Therefore, it is important to understand exactly what is meant by the term “dominant culture”. For most sociologists, culture refers to a roadmap for living within a society. Culture includes many components, such as language, customs, traditions, values, food, music, dress, gender roles, the importance of religion, and so on. Because culture encompasses so many aspects of diversity, it is one of the key components for understanding and discussing the experiences of all the types of groups addressed in the following modules.

Culture imposes order and meaning on our experiences, and it allows us to predict how others will behave in certain situations. For example, if you are in a classroom and a student raises their hand, we know this means they have a question. But culture includes so many things: the way people talk, dress, interact, eat, live, and so on. Within each culture are individuals, who are unique expressions of many cultures and subcultures.

There are two major responses to culture. One is enculturation, the process of acquiring the characteristics of a culture and learning how to navigate its behaviors, customs, and so forth. This often happens simply through the process of growing up within a given culture, but it can certainly continue should the culture around you change. For example, if you have ever studied abroad or visited another country for an extended amount of time, you will likely have encountered another culture where you needed to adapt and learn how to navigate social behaviors within that culture. Even in English-speaking countries there can be differences; while those of us in the United States often ask for the “bathroom”, Canadians refer to it as the “washroom”. The second major response is socialization, which refers to the process of learning the social norms of a culture. This can include what it means to be a daughter, husband, or student, and the societal expectations within those roles.
Dominant culture refers to the major aspects of culture that you find in a society. If you think back to our discussion a few paragraphs ago, we mentioned that culture helps to guide language, customs, values, food, and more. Given that, how would you describe the dominant culture in the United States? White? English-speaking? Middle class? Christian? These are just a few terms that are often used to describe the dominant U.S. culture. While you may disagree or find you do not fit into those categories, a key distinction of dominant culture is that it is often maintained through our institutions. These can be our political and economic institutions (we will go into more detail about these in our discussions in Module 3 on Class and Socioeconomic Status), churches, schools, and media. When you examine the leaders in most of these areas, you find they would meet the criteria listed above.

When people begin to believe that their culture is best and that any others are strange, inferior, or wrong, it is referred to as ethnocentrism. At its root, ethnocentrism is the belief that your culture is correct and superior to all others; no other culture is an equally viable option. The opposite of ethnocentrism is cultural relativism. Cultural relativism refers to an attempt to understand other cultures within the context of your own cultural beliefs. For example, if you religiously identify as Methodist and attend services and participate regularly, perhaps you can identify with Jews or Muslims who also have religious beliefs that impact their daily living, customs, and values.

Culture and School

So, what does culture have to do with education? There are two main ways that culture interacts with our education system. First, culture influences what and how we learn, and second, greater experience with a dominant culture often equals greater success within that culture. To elaborate on how culture influences what and how we learn, we can look to history for some strong examples. One of the most blatant was the controversy over geocentric versus heliocentric theory. Prior to the work of Galileo, most scientists, and certainly the influential Catholic Church, fully believed the Earth was at the center of the solar system. However, mounting scientific evidence showed the sun was actually at the center. Were the church and the culture quick to change their opinion based on scientific evidence? Not exactly. Galileo was investigated by the Roman Inquisition beginning in 1615 and was eventually placed under house arrest, and it was not until 1992 that the Catholic Church apologized for its handling of the Galileo affair. While this may be an extreme example, we continue to see culture influencing other aspects of learning today. Topics such as climate change, evolution, and sex education continue to be influenced in school settings by politicians and the dominant U.S. culture.

The second way culture is important to education is that the more experiences a person has with the dominant culture, the more likely they are to be successful within that culture. Sociologists often discuss these experiences as cultural capital, a symbolic credit a person acquires by having more experiences with the dominant culture. It is important to realize, however, that all students come to school with some capital; it just may not be the capital schools expect them to have. Research tells us that there are two tiers of the most valuable cultural capital.
Tier one activities include things like reading at least three hours per week, owning a home computer, attending preschool, and having exposure to the performing arts (playing an instrument, singing in a chorus, etc.). Tier two experiences, which research has shown to be important but with less of an impact, include things such as having high family educational expectations, rules limiting television and screen time, participating in sports teams or clubs, completing arts and crafts activities, and exposure to many different types of music. Other examples of capital students may have that schools may not value in curriculum and assessment include knowing how to navigate public transit, cultivating and growing a garden, knowing how to birth a calf or other animal, and knowing how to load and shoot a shotgun.

Families are often erroneously blamed for not providing their children with the cultural capital needed to succeed in schools. These children are often labeled as having a cultural deficit or experiencing cultural deprivation (a somewhat insensitive and biased term). The issue these terms attempt to describe, however, is a real one. The challenge for educators is that the expected knowledge and experiences of students often do not line up with their actual knowledge and experiences. Essentially, there is a gap between what our schools expect students to know and have experienced and what students actually know and have experienced. Compensatory programs are programming, funding, and other assistance that school systems and communities have put in place to address these gaps. Field trips and community schools are just a few examples of such programs. Examples of compensatory programs you may see in schools and communities include:
- Title I of the Elementary & Secondary Education Act (ESEA)
- Programs and support services for the disabled

Responses to Culture

Another interesting point to consider is how individuals and families respond when they are confronted with a new culture. Acculturation is the term sociologists use to describe the process of adopting or taking on the culture of a new group. Most often, this involves immigrants adopting the dominant culture as their own. This can include speaking the new language, adopting a new set of core values, changing dress and foods, and so forth. The immigrant family or individual usually decides the degree to which acculturation will take place. There are multiple models that address acculturation outcomes, but only two will be highlighted here. One approach to understanding acculturation is the model proposed by Portes and Rumbaut (2001). They identified the following acculturation patterns:
- Consonant acculturation – Parents and children learn the language and culture of the community in which they live at approximately the same time.
- Dissonant acculturation – Children learn the new language and the new culture, while parents retain the native language and culture, leading to conflict and decreased parental authority.
- Selective acculturation – Children learn the dominant culture and language but retain significant elements of the native culture.

However, these outcomes can certainly be considered too limiting, namely because they only address acculturation in family settings. Not all immigrants who come to the United States come as families, and some of your students may even be studying here alone or through exchange programs.
Therefore, the Berry (1980) model is more widely used in research and practice to think about the different ways immigrants adapt to a new country and culture. Rejection/encapsulation refers to an individual decision to withdraw from the norms of the larger society; a cultural identity from the home country is retained, but within the terms of a negative relationship to the dominant society. For example, a Chinese immigrant who moves into a Chinese neighborhood and continues speaking only Chinese and interacting only with other immigrants in the immediate vicinity could be viewed as assuming the rejection variety of acculturation. Deculturation/marginalization centers on individual confusion and anxiety about personal cultural identity and relationships to the larger society. This is the most negative outcome possible: cultural identity is not retained, and no positive relationship with the dominant society is established. Assimilation is similar to the old melting-pot idea that new immigrants should give up their personal cultural identities in favor of the greater, more dominant societal norms. Immigrants who changed their names upon arriving in America, such as changing the German-sounding “Von Meincke” to the more Anglo “Miller”, would be acting within the assimilation outcome of acculturation. Here, individual cultural identity is lost, but a positive relationship to the dominant society is established. Integration/biculturalism is the most positive outcome; this type of acculturation results in the retention of cultural identity and a positive relationship to the dominant society. Using this model, integration/biculturalism is the best acculturation outcome for immigrants’ psychological wellbeing because of the balance struck between the culture of the home country and that of the new one.

Keep in mind that each of these outcomes exists within a spectrum; individuals may fall closer to one side or the other within these possibilities. Assimilation is a strong example of this: Native American boarding schools represent some of the worst examples of forced assimilation in United States history, and their outcome would certainly be closer to the marginalization side. Other immigrant groups, however, came to the United States and willingly assimilated, such as by changing their names, in order to be perceived as “more American”. Thus, while the Berry model offers a useful guide for considering the experience of adapting to a new culture, remember that individuals can and do exist in a variety of places within the model. The This I Believe essays in the readings section of this module provide a strong example of two different acculturation experiences. We encourage you to read both of these and consider where they would fall according to this model.

Now that you, hopefully, understand more about the background and key ideas of multicultural education, it is worth investigating how scholars in the field would design and implement multicultural programming in schools. Sonia Nieto’s (2012) piece, Defining Multicultural Education for School Reform, highlights many of the key tenets she thinks should be included in any multicultural program. Additionally, educational philosophers Diane Ravitch and Ronald Takaki offer alternative viewpoints on what a multicultural program would look like within a school setting in the two additional readings associated with this module.
In these complementary pieces, Takaki and Ravitch each put forth competing philosophies to guide the implementation of multicultural education. Takaki, an advocate of particularism, holds that a common culture is both undesirable and unattainable and maintains that students learn best from teachers and curricula that reflect their ethnic backgrounds. Ravitch, on the other hand, advocates for pluralism, the view that the United States does have a rich common culture made up of various subcultures. As you read, be sure to note the major ideas of each position, as well as the criticisms of each.

One of the easiest ways to think about these different positions is to imagine a circle that represents the historical approach to multicultural education. For a particularist, there would be many pieces making up the circle, but they would never touch, as a common culture is unattainable because of all the diverse backgrounds. For a pluralist, however, the circle would be complete, all pieces touching, but with each piece perhaps a different color to represent all of the different backgrounds that come together to make up the common culture of the United States. From a practical standpoint, which approach do you think is easier for schools to implement? Ravitch clearly outlines the criticisms she has of a particularist approach without clearly articulating some of the shortcomings of pluralism. Perhaps the greatest criticism of her approach is that there is a default towards European-American perspectives and history: while we might expect the circle to look as described above, in reality it often ends up skewed. As you continue working through this course and its modules, consider the focus of each of these perspectives and how each would apply to the various dimensions of diversity.

Discussion Prompt: This I Believe Essays and Acculturation Models

In the lectures for this module, we discussed two different acculturation models. As a reminder, acculturation is the way a person, typically an immigrant, responds to new cultures. Your readings for this module included two This I Believe essays by immigrants. Based on their stories, which acculturation outcome do you think each of these people would fit into under Berry’s model? How about under the Portes and Rumbaut model? Be sure to support your ideas with specific references from the essays.

Discussion Prompt: Pluralism and Particularism

This week, you were assigned two different articles, ‘Multiculturalism: Battleground or Meeting Ground?’ and ‘Multiculturalism: E Pluribus Plures’. Each advocates for a different educational philosophy of multicultural education, either particularism or pluralism. Using the Pluralism and Particularism Handout as a guide, create a post here in which you discuss the major differences between the two philosophies. What are some of the shortcomings of each?

Written Response: Multiculturalism Reflection Paper Topics
- Choose a cultural norm to break. Write about what norm you broke, why you chose to break it, and others’ reactions. How does your experience relate to our discussion of cultural norms? Make sure you include information about how people who unintentionally break norms might feel based on the dominant culture.
- Describe ethnocentrism in your own words. What are 2-3 examples of how you are ethnocentric? What are some strategies you can use to control this in the classroom?
What are some strategies you can use to control this in the classroom?

External Readings & Resources
Ravitch, D. (1990). Multiculturalism: E pluribus plures. American Scholar, 59(3), 337-354.
Takaki, R. (1993). Multiculturalism: Battleground or meeting ground? Annals of the American Academy of Political and Social Science, 109.

'Defining Multicultural Education for School Reform' – Chapter 2 in Affirming Diversity: The Sociopolitical Context of Multicultural Education (6th edition)
As we begin EDUC 2120, it is important to define exactly what we mean by multicultural education. Sonia Nieto gives us a precise definition to work from for the semester in this piece, as she reframes the idea of multicultural education and provides suggestions on what it should look like in educational settings.
Viruses are submicroscopic parasites that replicate only inside the living cells of a host. Classification is an important tool for studying viruses in depth. Although viruses are similar to living organisms in some respects, they are not considered "living beings", as they cannot reproduce outside of a viable host cell. Viruses are able to reproduce (replicate) only by commandeering the host cell's replication apparatus and making it produce the virus's structural components instead. Thus, a virus cannot function or reproduce outside of a host cell.

Classification of viruses
Viruses can be grouped as follows:
- Group I – ds DNA viruses
- Group II – ss DNA viruses
- Group III – ds RNA viruses
- Group IV – ss (positive sense) RNA viruses
- Group V – ss (negative sense) RNA viruses
- Group VI – ss (positive sense) RNA viruses that reverse transcribe into DNA
- Group VII – ds DNA viruses with a reverse transcribing stage

This article concentrates on the following aspects of each viral group:
- Characteristics and general structure of the genome
- Structure and classification of several important viral species of the group
- Viral gene expression

The viral life cycle
Most viruses are species specific: they infect only a narrow range of animals, plants, bacteria or fungi. Usually a viral infection occurs when a virus enters the host either:
- through a physical breach (a cut on the skin),
- by direct inoculation (e.g., a mosquito bite), or
- by direct infection of a surface itself (e.g., inhalation into the trachea).

A virus can gain access to a susceptible cell only after it enters a viable host. The life cycle of a virus is composed of the following steps:
- Viral entry
- Viral replication
- Viral shedding
- Viral latency

Before entering a host cell, a virus must find a way to attach to that particular cell. Here the virus faces a huge hurdle: the thermodynamics of diffusion. Because neutrally charged objects do not naturally clump together, viruses use an alternative method, reducing their proximity to the host cell, known as attachment or adsorption. Proteins on the viral envelope bind to complementary receptor proteins on the cell membrane of the susceptible host cell. This attachment forces the two membranes into mutual proximity, allowing further interactions between the membrane proteins to occur. It is also the first requirement that must be satisfied before a cell can become infected; satisfying it makes the cell susceptible. Viruses that show this behavior include many enveloped viruses, such as HIV and Herpes simplex virus, and the same basic idea extends to viruses that do not have an envelope.

After the receptor proteins in the viral capsid or envelope connect to the complementary receptor proteins in the host's cell membrane, the virus must find a way to enter the host cytoplasm through the phospholipid bilayer of the cytoplasmic membrane. Viruses overcome this challenge via:
1. membrane fusion (or a hemi-fusion state),
2. endocytosis, or
3. penetration (genetic injection).

Which of these methods a virus uses depends on the viral species. Viral entry and infection can be visualized in real time using Green Fluorescent Protein (GFP). Once a virus enters a cell, replication is not immediate; it can take anywhere from seconds to hours.

Entry via membrane fusion
This is only possible for enveloped viruses.
After the receptor proteins in the viral envelope bind to the complementary receptor proteins of the host cell membrane, secondary receptors may be present that initiate puncturing of, or fusion with, the cell membrane. Following attachment, the viral envelope fuses with the host cell membrane, releasing the envelope-free virus into the cytoplasm. Ex: HIV, Herpes simplex virus.

Entry via endocytosis
This method of entry is used mainly by viruses that do not have an envelope. The virus tricks the cell by posing as a harmless nutritional particle, so the cell, which naturally takes in resources by attaching them to surface receptors and bringing them in, engulfs the virus. When entry is successful, the virus must break out of the vesicle in which it was taken up in order to gain access to the cytoplasm.

Entry via genetic injection
This is exhibited only by viruses that require nothing but their genome to cause an infection (e.g., most positive sense, single stranded RNA viruses); no other viral structures, such as enzymes, are required. The virus attaches itself to the host cell by binding to receptors on the cell membrane and injects only its genome into the host cytoplasm.

Viral replication is the formation of new viruses inside the host cell during the process of infection. It can be summarized in two steps:
1. Replication of the viral genome
2. Packaging of the virus
After these two major steps, the resulting new viruses can continue infecting new hosts. As a result of this viral replication, or in other words reproduction, the survival of the viral species is ensured. Replication strategies vary greatly and depend on the genes involved. Replication of some viruses, especially DNA viruses, occurs within the nucleus, whereas replication of other viruses occurs in the cytoplasm.

Class I: Double stranded DNA viruses
Most (but not all) viruses of this type must enter the nucleus of the host cell in order to replicate. Some ds DNA viruses depend on host cell polymerases to replicate their genomes, while others encode their own replication factors. However, genome replication of almost all ds DNA viruses is highly dependent on the cell cycle, as they require a cellular state that is permissive to DNA replication. The virus may induce the cell to undergo forced cell division, which may lead to transformation of the host cell and, ultimately, cancer. Genome replication of these viruses uses DNA-dependent DNA polymerase enzymes. The class can be further divided in two on the basis of the location of genome replication within the host cell:
1. Replication is exclusively nuclear. Ex: families Adenoviridae, Polyomaviridae, Herpesviridae and Papillomaviridae. Genome replication of these viruses is relatively dependent on cellular factors.
2. Replication is exclusively cytoplasmic. Ex: family Poxviridae. These viruses have acquired all the factors necessary for the transcription and replication of their genomes and are thus largely independent of the cellular machinery, except for their need of the host's ribosomes.

Members of the family Adenoviridae are medium-sized (90-100 nm), non-enveloped viruses with icosahedral nucleocapsids and double stranded DNA genomes. They have a broad range of vertebrate hosts.
In humans, 57 different serotypes have been found, causing everything from mild respiratory illnesses in young children to life-threatening multi-organ infections in people with compromised immune systems. Adenoviruses are the largest non-enveloped viruses; because of their size, they can still be transported via endocytosis. The viral capsid also carries a spike at each penton base, which enables attachment to various receptors on the host cell membrane.

An adenovirus has a linear, non-segmented, double stranded DNA genome of about 30-38 kbp, theoretically enough for the virus to carry 30-40 genes. Although the genome is comparatively larger than those of other virus families in class I, these are simple viruses that are highly dependent on the host cell for survival and replication. They have a terminal 55 kDa protein associated with each 5' end of the linear ds DNA; these proteins are used as primers in replication and ensure that the terminal genes are adequately replicated.

Viral gene expression
Gene expression of the adenoviruses is divided into two phases by the onset of DNA replication:
- Early phase
- Late phase
In both phases, a primary transcript is generated and alternatively spliced into monocistronic mRNAs that are compatible with the host cell's ribosomes, allowing the products to be translated.

Expression of the early genes mainly yields non-structural, regulatory proteins. These proteins:
- alter the expression of host cell proteins that are necessary for DNA replication;
- activate other virus genes (e.g., the virally encoded DNA polymerase);
- prevent premature death of the infected host cell by the host immune defenses (e.g., via blockage of interferon activity, apoptosis, etc.).

Once the early genes have produced adequate viral proteins, replication machinery and replication substrates, replication of the virus genome can occur. The terminal protein that is covalently bonded to the 5' end of the DNA acts as the primer for replication. The viral DNA polymerase enzyme then uses a strand displacement mechanism to replicate the genome. In humans, adenoviruses cause respiratory illnesses, the common cold, conjunctivitis (pink eye), croup and bronchitis.

Family Poxviridae contains viruses that can infect both vertebrates and invertebrates. No order has been assigned to the family. Poxviridae contains the largest viruses among all virus groups. They have linear, non-segmented, double stranded DNA of approximately 205 kbp, and the two complementary strands of the DNA are joined at their ends. Poxvirus particles (virions) are generally covered by an envelope (the external enveloped virion, EEV).

Viral gene expression
Genome replication and gene expression of poxviruses are almost independent of cellular mechanisms, except for the requirement for host cell ribosomes in protein synthesis. The poxvirus genome encodes numerous enzymes involved in transcription and genome replication. These enzymes are packed into the virus particle (about 100 enzymes), enabling replication and transcription to occur within the host cell cytoplasm after infection, without entering the nucleus and almost totally under the control of the virus. Synthesis of poxvirus mRNA (transcription) begins before the genome is uncoated; transcription is initiated by virion-associated proteins and catalyzed by a virion-associated DNA-dependent RNA polymerase enzyme.
This enables replication and expression of the viral genome to be totally cytoplasmic, whereas the corresponding processes of other double stranded DNA viruses occur within the host nucleus as a result of their dependence on the host's DNA-dependent RNA polymerases for transcription. The genome of these viruses can be divided into three sets of genes on the basis of transcription timing: early genes, intermediate genes and late genes. The functions of their gene products (proteins) can be summarized as follows:
- Early gene proteins: complete the transcription process, initiate replication of the genome and allow transcription of the intermediate genes.
- Intermediate gene proteins: allow transcription of the late genes.
- Late gene proteins: structural proteins.

Families Polyomaviridae and Papillomaviridae
Papillomaviruses and polyomaviruses contain circular, double stranded DNA of approximately 5 kbp. The genomic organization of these viruses has evolved to pack maximum information (six genes) into minimum space (5 kbp). This is achieved by the use of both strands and overlapping genes.

Class II: Single stranded DNA viruses
This group contains most of the viruses found in sea water, fresh water, sediments, terrestrial and extreme environments, metazoan-associated habitats and marine microbial mats. These viruses are therefore known as "environmental viruses", and they belong to the family Microviridae. However, the vast majority of these viruses are yet to be studied and assigned to genera and higher taxa. Families of this group are assigned on the basis of:
- the nature of the genome (circular or linear), and
- host range.

The family Parvoviridae contains the smallest known viruses and the most environment-resistant viruses. They infect vertebrates and arthropods. They are mainly non-enveloped and have icosahedral nucleocapsids. Interestingly, parvoviruses are the only single stranded DNA viruses known to infect humans. The family Parvoviridae has been divided into two subfamilies, Parvovirinae (vertebrate viruses) and Densovirinae (invertebrate viruses).

The subfamily Parvovirinae contains the genus Dependovirus, also known as the replication-defective viruses. Species of this genus can replicate only when the host is co-infected with a helper virus; parvoviruses that do not require helper viruses are known as autonomous parvoviruses. The helper virus is most often an adenovirus, but other DNA viruses, such as herpesviruses, can also act as helpers, and under some circumstances dependoviruses may occasionally replicate in the absence of a helper virus.

Dependoviruses are valuable gene vectors. They are used to introduce new genes into cell cultures for mass production of important proteins, and they are being investigated as possible vectors for introducing genes into the cells of patients for the treatment of various genetic diseases and cancers. Importantly, dependoviruses are not known to cause any disease.

Parvoviruses have genomes composed of linear, single stranded DNA, 4-6 kb in size. At the ends of the DNA molecule there are a number of short complementary sequences that can base-pair to form a secondary structure.
Some parvovirus genomes have sequences at their ends called inverted terminal repeats (ITRs): the sequence at one end is complementary to, and in the opposite orientation to, the sequence at the other end. Because the sequences are complementary, the two ends have identical secondary structures. Other parvoviruses have a unique sequence, and therefore a unique secondary structure, at each end of the DNA. During replication, parvoviruses with ITRs generate and package equal numbers of (+) and (-) DNA strands, while most viruses with unique sequences at the termini do not; the percentage of virions containing (+) DNA versus (-) DNA therefore varies among different viruses. In a (-) DNA, the genes for non-structural proteins lie toward the 3' end and the structural protein genes toward the 5' end.

Viral gene expression
The small genome of a parvovirus can encode only a few proteins, so the virus has to depend on its host cell (or another virus) to provide important proteins. Some of these proteins (a DNA polymerase and other proteins important in DNA replication) are available only in S phase, when DNA synthesis takes place. This restricts the replication of parvoviruses to S phase, unlike large DNA viruses such as Herpes simplex, which encode their own replication factors and can replicate in any phase of the host cell cycle.

Replication of the viral genome occurs in the host cell nucleus. In the nucleus, the single stranded DNA of the virus is converted to double stranded DNA by a host cell DNA polymerase. The ends of the genome are double stranded as a result of base pairing, and the -OH group at the 3' end acts as the primer to which the enzyme binds. Transcription occurs as the cellular RNA polymerase II enzyme transcribes the viral genes; cellular transcription factors play a major role in the transcription and translation of the viral genome. The primary transcripts undergo various splicing events to produce two size classes of mRNA: the larger mRNAs encode the non-structural proteins and the smaller mRNAs encode the structural proteins. The non-structural proteins are phosphorylated and play roles in the control of gene expression and in DNA replication.

After conversion of the ss DNA genome to ds DNA, the DNA is replicated by a mechanism called "rolling hairpin replication". Pro-capsids are constructed from structural proteins, and each is filled with a copy of the virus genome, either a (+) DNA or a (-) DNA as appropriate. One of the non-structural proteins acts as a helicase enzyme, unwinding the double stranded DNA so that a single strand can enter the pro-capsid.

An RNA virus is a virus that uses RNA as its genetic material. The genome can be double stranded, single stranded (+) sense, or single stranded (-) sense. Notable human diseases caused by RNA viruses include Ebola hemorrhagic fever, SARS, hepatitis C and influenza. RNA viruses normally have higher mutation rates than DNA viruses, because viral RNA polymerases lack the proofreading ability of DNA polymerases. For this reason, producing vaccines for RNA viruses is difficult. At the same time, most mutations are not favorable for the virus: some genes of RNA viruses are essential to the viral replication cycle, so mutations in them cannot be tolerated. For instance, the region of the hepatitis C genome that encodes the core protein is highly conserved because it contains an RNA structure involved in an internal ribosome entry site.
The mutation rate of RNA-dependent RNA polymerase is around 1 in 10,000 nucleotides. To keep mutations during transcription manageable, RNA viruses therefore have to restrict their genomes to within approximately 10,000 nucleotides, that is, about 10 kb: at one error per 10^4 nucleotides copied, a genome much longer than that would average more than one new mutation every time it is replicated. (Have you heard of this before?)

According to the modern ICTV classification, RNA viruses are classified into Classes III, IV and V.

Class III: Double stranded RNA viruses
Double stranded RNA viruses are a diverse group that vary widely in:
- host range (humans, animals, plants, bacteria),
- genome segment number (one to twelve), and
- virion organization.

There are several families in this class, but among all of them the family Reoviridae is the most diverse. Icosahedral viruses with double stranded RNA genomes, isolated from the respiratory and enteric tracts of humans and many animals and associated with no disease (hence "orphan"), became known as reoviruses (respiratory enteric orphan viruses). A large number of similar viruses have since been found in many animals, fungi and plants, and many of them are associated with various diseases, but the original name Reoviridae has been preserved and has been incorporated into the names of several genera within the family. An interesting fact is that most plant-infecting reoviruses spread among plants through insect vectors. These viruses actively replicate in both the plant and the insect, generally causing disease in the plant but little or no harm to the infected insect.

Our main focus will be the rotaviruses, which have been the subject of intensive study as they are among the most important agents of gastroenteritis in humans and animals. Members of the family Reoviridae have double stranded RNA genomes divided into approximately 10, 11 or 12 segments. Each segment is transcribed into an independent mRNA by the virion transcriptase, and most of these mRNAs are monocistronic.

Viral gene expression
The ds RNA is never completely uncoated; this prevents activation of the host cell's antiviral state in response to a ds RNA genome. The viral RNA-dependent RNA polymerase transcribes each ds RNA segment into individual mRNAs. In the transcription process, only the (-) strand of each ds RNA molecule is used as template, resulting in the synthesis of (+) sense mRNAs, which are capped inside the core. These mRNAs are then translocated to the cytoplasm, where they are translated; this is also known as extrusion of the mRNA. (The reovirus core contains all the enzymes needed for transcription and capping; further protein diversity is produced by leaky scanning and protein processing.) Replication and transcription of these viruses are completely cytoplasmic.

Single stranded RNA viruses
Single stranded RNA viruses can be further classified according to the sense, or polarity, of the RNA strand as positive sense, negative sense or ambisense. A positive sense viral RNA is similar to mRNA and can be translated directly by the host cell's ribosome system. A negative sense viral RNA is complementary to mRNA and thus must be converted to a positive strand by an RNA polymerase before translation. Because of this, the purified RNA of a positive sense RNA virus can still be infectious, although it is less infectious than the whole virus particle, whereas the purified RNA of a negative sense RNA virus is not infectious, since it must first be transcribed into positive sense RNA before translation. An ambisense RNA virus resembles the negative sense RNA viruses, except that it also expresses genes from the positive strand.
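The polarity idea can be made concrete with a toy example: a (-) sense genome is simply the base-by-base complement of the mRNA it must produce. Here is a minimal Python sketch (the sequence and function name are invented for illustration and are not from these notes):

```python
# Toy illustration of RNA polarity: a (-) sense genome must be copied into
# its complement before ribosomes can read it as mRNA.
COMPLEMENT = str.maketrans("AUGC", "UACG")  # A<->U, G<->C base pairing

def plus_sense_copy(template_3_to_5: str) -> str:
    """Return the (+) sense RNA for a (-) sense template.

    The template is written 3'->5', so its straight complement
    reads 5'->3', the direction in which mRNA is synthesized.
    """
    return template_3_to_5.translate(COMPLEMENT)

minus_strand = "UACGGAUUU"           # hypothetical (-) sense template, 3'->5'
mrna = plus_sense_copy(minus_strand)
print(mrna)                          # AUGCCUAAA -- begins with the AUG start codon
```

A (+) sense genome would already read like the mRNA and could be translated directly, which is one way to see why purified (+) sense RNA can be infectious while purified (-) sense RNA, lacking its polymerase, is not.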
Single stranded RNA genomes vary in size from those of the picornaviruses, at approximately 8 kb, to those of the coronaviruses, at approximately 30 kb. The upper limit on the size of single stranded RNA reflects the fragility of RNA, that is, the tendency of long strands to break.

Class IV: Single stranded (+) sense RNA viruses
All (+) sense single stranded RNA viral genomes have untranslated regions (UTRs) at the 5' end of the RNA strand that do not encode any protein, and shorter UTRs at the 3' end. These regions are functionally important in virus replication and are thus conserved in spite of the pressure to reduce genome size. Both ends of (+) stranded eukaryotic virus genomes are often modified: the 5' end by a small, covalently bonded protein or a methylated nucleotide cap structure, and the 3' end by polyadenylation. These signals allow the viral RNA to be recognized by the host cell and to function as mRNA.

These viruses can be subdivided into two groups according to their gene expression strategy:
- viruses with a sub-genomic mRNA and polyprotein strategy, and
- viruses with a polycistronic mRNA strategy.

Viruses with sub-genomic RNA and polyprotein strategy
Because the viral RNA is (+) sense, it is infectious and acts as both viral genome and mRNA. The first two-thirds of the viral genome is translated into a polyprotein, which is cleaved into the non-structural proteins necessary for RNA synthesis (replication and transcription). Replication takes place in cytoplasmic viral factories at the surface of the endosomes. A double stranded RNA intermediate is synthesized from the single stranded (+) sense genome; this double stranded RNA is then replicated, thereby providing viral mRNAs/new single stranded (+) sense RNA genomes. Expression of the sub-genomic RNA (sgRNA) gives rise to the structural proteins.

Viruses with polycistronic mRNA
As with all other viruses in this class, the genomic RNA acts as the mRNA itself. The genomic RNA is translated directly into a polyprotein product, which is subsequently cleaved to produce the mature proteins.

Picornavirales are an order of viruses with vertebrate, insect and plant hosts. There are five families in this order, all sharing the common features mentioned below:
- a conserved RNA-dependent RNA polymerase;
- a genome with a protein attached to the 5' end;
- no overlapping reading frames (only open reading frames within the genome);
- all RNA translated into a polyprotein before processing.

These are non-enveloped viruses with icosahedral capsids. Picornaviruses have linear, non-segmented, positive sense, single stranded RNA genomes of about 7.2-8.5 kb. The 5' end of the genome has a longer untranslated region (UTR), about 600-1200 nucleotides (nt) in length, which is important in (possibly) encapsidation, and there is a shorter UTR at the 3' end, about 50-100 nt long, which is important in negative strand synthesis during replication. The rest of the genome encodes a single polyprotein of between 2100 and 2400 amino acids. Both ends of the genome are modified: the 5' end by a covalently bonded small basic protein, VPg (23 aa), and the 3' end by polyadenylation. Genomic structure and genome organization are classification criteria for these viruses.

Viral gene expression
Picornaviruses use the polycistronic mRNA strategy of gene expression. As mentioned above, the whole genome is translated into a single polyprotein, which subsequently undergoes autocatalytic (self-)cleavage to produce the important structural and non-structural proteins.
Members of the Coronaviridae infect a wide range of mammals and birds worldwide. Although the resulting diseases are mild most of the time, some members cause far more severe infections in humans, such as Severe Acute Respiratory Syndrome (SARS) and COVID-19. They can also cause enteric infections in very young infants and, on rare occasions, neurological syndromes. These are enveloped viruses, spherical in shape, with linear, non-segmented, positive sense RNA genomes of approximately 27-30 kb, the largest of all known RNA virus genomes. The genome contains a methylated nucleotide cap at the 5' terminus and polyadenylation at the 3' terminus.

Viral gene expression
These viruses use the sub-genomic mRNA with polyprotein strategy in their gene expression. Starting from the 5' end, a 20 kb segment of the genome is translated first to produce an RNA-dependent RNA polymerase. The polymerase then synthesizes a full-length (-) sense RNA strand, which is used as the template for the synthesis of the viral sub-genomic mRNAs as a "nested set" of transcripts, all with:
- identical 5' non-translated leader sequences, and
- 3' polyadenylation.

All sub-genomic mRNAs are monocistronic. Among the sub-genomic mRNAs synthesized are those encoding important proteins of the viral life cycle:
- E1 – transmembrane glycoprotein
- E2 – peplomer glycoprotein
- N – nucleoprotein
- HE (E3) – hemagglutinin-esterase glycoprotein

By detaching and re-annealing, the RNA polymerase complex makes copies of the different genes. This feature, the production of a nested set of transcripts, is specific to the order Nidovirales.

Class V: Single stranded (-) sense RNA viruses
Viruses with negative sense RNA genomes are generally more complex than viruses with positive sense RNA genomes. "Negative sense" means that the single stranded RNA genome has the opposite nucleotide arrangement to cellular mRNAs; in order to be translated into proteins, it must first be copied/transcribed into a positive sense RNA strand. Since no eukaryotic or prokaryotic cell has a mechanism or enzymes for transcribing RNA from an RNA template, each negative stranded RNA virus must contain an RNA-dependent RNA polymerase enzyme within itself; otherwise the RNA genome of the virus would be biologically meaningless once inside a host cell. Possibly because of these demands of genome replication, these viruses tend to have larger genomes encoding more genetic information. Purified genomes of these viruses are not infectious and remain effectively inert, as they make no sense without their replicase enzyme, the RNA-dependent RNA polymerase. Some of these viruses are ambisense: part negative sense and part positive sense.

Paramyxoviridae is an important family of viruses that includes several important pathogenic species. The family divides into two subfamilies, Paramyxovirinae and Pneumovirinae. These are enveloped viruses, and the virions can be spherical, filamentous or pleomorphic. They have linear, non-segmented, single stranded, negative sense RNA genomes of about 15-16 kb. Typically, the genome contains 6-10 genes plus extracistronic (non-coding) regions, including:
- a 3' leader sequence, 50 nt long, which acts as a transcriptional promoter;
- a 5' trailer sequence of about 50-161 nt;
- inter-genic regions between each gene.

Each gene contains transcriptional start and stop signals at its beginning and end, which are transcribed as parts of the gene. The inter-genic regions include a polyadenylation signal at the end of each gene, which acts as a stop signal for transcription, followed by the inter-genic sequence and a transcription start signal at the beginning of the next gene.

Viral gene expression
The viral RNA-dependent RNA polymerase (RdRp) initiates transcription by binding to the 3' leader sequence of the genomic (-) RNA. The RdRp complex transcribes a 5'-triphosphate leader (+) sense RNA, then stops and restarts transcription at the next transcription initiation signal; all RNAs initiated at these signals are capped. As mentioned earlier, at the end of each viral gene there is a transcription stop signal, at which the RdRp complex produces a polyadenylation signal by stuttering on a U stretch before releasing the mRNA. On the polycistronic template, the RdRp complex can then scan to the next transcription initiation signal and resume transcription of the next gene.

Rhabdoviruses have linear, non-segmented, single stranded, negative sense RNA genomes of approximately 11 kb. There is a leader sequence of about 50 nt at the 3' end and an untranslated region (UTR) of about 60 nt at the 5' terminus of the viral RNA. The genetic arrangement is similar to that of the Paramyxoviridae, as these viruses also contain a conserved polyadenylation signal at the end of each gene and short inter-genic regions between the genes.

Viral gene expression
Gene expression is similar to that of the Paramyxoviridae.

Class VI: Reverse transcribing RNA viruses
These are spherical, enveloped viruses approximately 90 nm in diameter. They have linear, diploid, single stranded, positive sense RNA genomes with a 5' cap and a 3' polyadenylated tail. There are long terminal repeats (LTRs) at the 3' and 5' ends, as well as a primer binding site (PBS) at the 5' end and a polypurine tract (PPT) at the 3' end.

Viral gene expression
Transcription and translation of the genomes of these viruses are totally dependent on the host cell. The reverse transcriptase enzyme uses the genomic positive sense RNA as a template for reverse transcription. Through this process, the viral RNA is converted into a proviral DNA, which is transported to the host nucleus and integrated into the host cell genome using the enzyme integrase. Once integration is completed, the proviral DNA is under the control of the host cell and is transcribed exactly like the other cellular genes of the host.

The enzymes used by the virus, reverse transcriptase and integrase, are not ordinarily used by the host cell, so for a successful infection the virus must "bring" its own enzymes within the virion. After integration is completed, transcription of the proviral DNA can be initiated. This process is carried out by the RNA polymerase II enzyme, a non-viral (host cell) enzyme. Transcription of the proviral DNA results in the production of two types of mRNA.

The reverse transcriptase enzyme, which the virus uses to reverse transcribe its genomic positive sense RNA into a proviral DNA, has three sequential biochemical activities.
These are:
- RNA-dependent DNA polymerase activity,
- ribonuclease activity (ribonuclease H), and
- DNA-dependent DNA polymerase activity.

These activities are used by the virus to convert its RNA genome into a complementary double stranded DNA (cDNA), which can then be integrated into the host genome, generating long-term infections that can be very difficult to eradicate. The process of reverse transcription is extremely error prone, and it is during this step that mutations may occur; such mutations may lead to drug resistance.

The process of reverse transcription can be summarized as follows:
- A specific cellular tRNA acts as a primer and hybridizes to the PBS region of the viral RNA.
- Complementary DNA is then synthesized against the U5 and R regions of the viral RNA.
- A domain of the reverse transcriptase enzyme called RNase H then degrades the 5' end of the viral RNA, removing the U5 and R regions.
- The primer then "jumps" to the 3' end of the viral genome, and the newly synthesized DNA strand hybridizes to the complementary R region on the RNA.
- The first strand of complementary DNA is extended, and the majority of the viral RNA is degraded by RNase H.
- Once the first strand is completed, synthesis of the second strand is initiated from the viral RNA.
- There is another "jump", in which the PBS of the second strand hybridizes with the PBS of the first strand.
- Both strands are extended further and can be incorporated into the host's genome by the enzyme integrase.

Members of the Retroviridae show some special features that make them different from other virus families. A few are indicated below:
- They are the only positive sense RNA viruses whose genome does not serve as an mRNA after entering the host.
- They are the only diploid viruses.
- They cause incurable disease; if the virus integrates into germ-line tissue, the provirus will be passed to the next generation.
- They are the only viruses whose genome requires a specific cellular RNA (a tRNA) for replication.
- They are the only viruses whose genome is produced by the cellular transcriptional machinery (without participation of any virally encoded polymerases).

Class VII: Reverse transcribing DNA viruses
These are spherical viruses with small genomes. They contain partially double stranded ("gapped") DNA genomes consisting of a negative strand of 3.0-3.3 kb and a positive strand of 1.7-2.8 kb; these genome sizes vary among the different hepadnaviruses. The viruses also contain an RNA-dependent DNA polymerase (reverse transcriptase) enzyme within their capsids.

Viral gene expression
- After infection, before the gene expression process is initiated, the gapped genome is repaired using host cell DNA polymerase enzymes.
- After the genome is repaired, transcription occurs; in the process of transcription, four major genome transcripts are produced: S, C, P and X.

The genome structure and replication of Cauliflower Mosaic Virus (CaMV), the prototype member of the genus Caulimovirus, are similar to those of the hepadnaviruses, although there are some differences. The CaMV genome consists of a gapped, circular, double stranded DNA molecule of about 8 kbp, one strand of which contains a single gap while the complementary strand contains two gaps.
Pasindu Chamikara – Microbiologist
What are the advantages of Symmetric Algorithms?

Symmetric encryption is computational cryptography that encrypts electronic communication with a single encryption key. It converts data using a mathematical method and a secret key, rendering the message unintelligible. Because the mathematical procedure is reversed when the message is decrypted using the same private key, symmetric encryption is a two-way algorithm. Private-key encryption and secret-key encryption are other terms for symmetric encryption.

The two forms of symmetric encryption are performed with block and stream algorithms. Block algorithms operate on blocks of electronic data: a group of bits of a specified length is transformed at once using the secret key, and the key is then applied to each successive block. When network stream data is encrypted this way, the encryption system stores the data in its memory components and waits for complete blocks to arrive. The time the system spends waiting can open a serious security hole and put data at risk. One approach is to reduce the size of the data block and combine it with the contents of previously encrypted data blocks until the rest of the blocks arrive; this is referred to as feedback. The entire block is encrypted once it has been received.

Stream algorithms, on the other hand, do not hold data in the encryption system's memory; they encrypt the data as it arrives in the stream. This technique is safer because the data is never held unencrypted on a disk or in the system's memory components.

How does it work?
Symmetric encryption is a type of cryptography in which data is encrypted and decrypted using a single key. That key, password or passphrase is shared among the parties involved, and they can use it to decrypt or encrypt whatever messages they wish. Symmetric ciphers are also used within the public key infrastructure (PKI) ecosystem, since they turn plain text (readable data) into unreadable ciphertext, allowing secure communications to be sent over an insecure internet.

Some of the most common symmetric cryptography algorithms are the Data Encryption Standard (DES), which uses 56-bit keys; Triple DES, which repeats the DES algorithm three times with different keys; and the Advanced Encryption Standard (AES), which the US National Institute of Standards and Technology recommends for securely storing and transferring data.

What is the Purpose of Symmetric Encryption?
While symmetric encryption is an older kind of encryption, it is faster and more efficient than asymmetric encryption, which strains networks due to data capacity limitations and high CPU usage. Because of its superior performance and speed (relative to asymmetric encryption), symmetric cryptography is commonly used for bulk encryption, that is, encrypting massive volumes of data, as in database encryption. In the case of a database, the secret key may be used to encrypt or decrypt data exclusively by the database itself.

The following are some examples of where symmetric cryptography is used:
- Payment applications, such as card transactions, where personally identifiable information (PII) must be protected to prevent identity theft and fraudulent charges.
- Validations to ensure that the sender of a message is who he claims to be.
- Hashing or generating random numbers.

Symmetric and Asymmetric Encryption: What's the Difference?
When communicating, asymmetric encryption uses a pair of public and private keys to encrypt and decode messages. Symmetric encryption, on the other hand, uses a single key that is shared with the people who need to receive the message. In comparison to symmetric encryption, asymmetric encryption is a relatively young technique; it was developed to overcome the inherent key-sharing problem of symmetric encryption schemes by using a pair of public-private keys, avoiding the need for key sharing. Asymmetric encryption also takes more time than symmetric encryption.

What Factors Affect a Symmetric Encryption Algorithm's Strength?
Not all symmetric algorithms are made equal, as you'll quickly discover. They differ in terms of strength, but what does cryptographic strength entail? The basic answer is that it refers to how difficult it is for a hacker to decrypt data and access it. The longer answer varies based on the type of algorithm you're evaluating, but cryptographic strength usually boils down to a few fundamental characteristics:
- the symmetric key's length, randomness and unpredictability;
- the algorithm's ability to resist or endure known attacks;
- the absence of back doors or other intentional flaws.

Symmetric encryption is a delicate balancing act, since it requires algorithms and keys that are computationally hard to break yet practical to use with acceptable performance.

Advantages of symmetric algorithms

Security: Symmetric key encryption can be highly secure when it employs a secure algorithm. The Advanced Encryption Standard, as recognized by the US government, is one of the most extensively used symmetric key encryption schemes. Using ten-petaflop machines, brute-force guessing a key at its most secure 256-bit length would take about a billion years; because the world's fastest computer, as of November 2012, runs at 17 petaflops, 256-bit AES is virtually impenetrable.

Speed: One of the disadvantages of public-key encryption methods is that they require very complex mathematics to function, making them computationally intensive. Encrypting and decrypting symmetric key data is comparatively simple, resulting in excellent read and write performance. Many solid-state drives, which are usually quite fast, use symmetric key encryption to store data internally, and they are still quicker than traditional hard drives that are not encrypted.

Industry adoption and acceptance: Because of their security and speed benefits, symmetric encryption algorithms like AES have become the gold standard of data encryption. As a result, they have enjoyed decades of industry adoption and acceptance.

Requires low computer resources: When compared to public-key encryption, single-key encryption uses fewer computer resources.

Minimizes message compromises: A distinct secret key is used for communication with each party, preventing a widespread message security breach. If a key is compromised, only the messages sent and received by one specific pair of sender and recipient are affected; other people's communications remain safe.

Disadvantages of symmetric algorithms

The sharing of the key: The most significant drawback of symmetric key encryption is that the key must be communicated to the party with whom you share data. Encryption keys aren't plain text strings like passwords; they're essentially blocks of gibberish. As a result, you'll need a secure method of delivering the key to the other party.
Of course, you generally don't need encryption in the first place if you already have a secure mechanism for communicating the key. With this in mind, symmetric key encryption is particularly beneficial for encrypting your own data, rather than for distributing encrypted data.

More damage if compromised: When someone obtains a symmetric key, they can decode anything that has been encrypted with that key, so when two-way communications are encrypted with symmetric encryption, both sides of the conversation are vulnerable. With asymmetric encryption, by contrast, someone who obtains your private key can decode communications sent to you, but cannot decipher messages sent to the other person, because those are encrypted with a different key pair.

The message's origin and authenticity cannot be guaranteed: Because both the sender and the recipient have the same key, messages cannot be validated as coming from a specific user. If there is a disagreement, this might be a problem.
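To make the single-shared-key workflow concrete, here is a minimal sketch in Python using the Fernet recipe from the third-party cryptography package (the article names no particular library, so this choice is an assumption). Fernet is an AES-based authenticated symmetric scheme, so it also illustrates the key-sharing caveat discussed above: whoever holds the key can both read and produce messages.

```python
# Minimal symmetric-encryption sketch using the "cryptography" package's
# Fernet recipe (AES with an HMAC integrity check).
# Install with: pip install cryptography
from cryptography.fernet import Fernet, InvalidToken

# Both parties must somehow share this same secret key -- distributing it
# securely is exactly the key-sharing problem described above.
key = Fernet.generate_key()      # 32 random bytes, base64-encoded
cipher = Fernet(key)

token = cipher.encrypt(b"card number 4111 ...")          # unreadable ciphertext
assert cipher.decrypt(token) == b"card number 4111 ..."  # same key decrypts

# A different key cannot decrypt: the built-in integrity check fails.
try:
    Fernet(Fernet.generate_key()).decrypt(token)
except InvalidToken:
    print("wrong key rejected")
```

Note that because encryption and decryption use the very same key, this sketch also demonstrates the last disadvantage: either key-holder could have produced the token, so the message's origin cannot be proven to a third party.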
Course Introduction
By Tanzia Naaz

Introduction
Statement: A statement is a formal account of certain facts, views, problems or situations expressed in words.
Conclusion: A conclusion is a belief or an opinion that is the result of reasoning out a given statement. It can also be defined as a proposition in an argument to which the other propositions in the argument give support.

The different types of questions covered in this chapter are as follows:
1) One statement with two conclusions
2) More than two statements and conclusions

One statement with two conclusions
Here a statement is given, followed by two conclusions. We need to find out which of the conclusions follows from the given statement and select the correct option accordingly. In each of the following questions, a statement is followed by two conclusions, I and II.

E.g. Statement: Parents are prepared to pay any price for an elite education for their children.
Conclusion I: All parents these days are well off.
Conclusion II: Parents have an obsessive passion for the perfect development of their children through good schooling.
(a) Only Conclusion I follows
(b) Only Conclusion II follows
(c) Either I or II follows
(d) Neither I nor II follows
(e) Both I and II follow

Answer: (b) Only Conclusion II follows.
On Conclusion I: We cannot generalize that all parents are well off.
On Conclusion II: It may be concluded from the statement that, since parents want the perfect development of their children through good schooling, they are prepared to pay any price for a good education.

In this type of question, a statement (or statements) is given, followed by some conclusions. Choose the conclusion that follows from the given statement.

Directions: Which of the conclusions can be drawn from the statement?
Ex 1: Statement: Many business offices are located in buildings having two to eight floors. If a building has more than three floors, it has a lift.
Conclusions:
(a) All floors may be reached by lifts
(b) Only floors above the third floor have lifts
(c) The fifth floor has lifts
(d) Second floors do not have lifts

About the author: a BE graduate with 2+ years of experience in the IT industry and 2 years of experience in teaching.
Without even knowing what gases are made of (OK, they're made of atoms and molecules) we can understand how they behave on a macroscopic level. Gases are a form of ordinary matter that is much less dense than liquids or solids. Because of this they tend to completely fill any container they are in, and they are very compressible. Everyone has experience working with gases and taking advantage of their properties. If you have ever inflated a balloon, used a straw, inflated a tire, complained about the weather, or taken a deep breath, then you already know how this topic relates to "real life".

There are four mathematical variables used to describe the behavior of gases: pressure (P), volume (V), temperature (T), and amount (n). There are some common units of measurement for each of these variables, so your first task will be to become familiar with them. One more thing: using these variables will help you to understand the ideal gas laws. Why are they called ideal? It's not because they are the best possible laws gases could follow, nor because they are the best scientific laws anyone ever found. No, it is because the gases we will discuss are 'idealized'. That is, they are not real gases, and no real gas acts exactly the way these equations say it will. But, and this is important, almost all gases come very, very close to acting exactly according to the ideal gas laws. So even though they are not perfect, they are very useful. At the end of this packet you will find a series of exercises that will allow you to apply the ideas and proportions introduced in the text. Please do these problems, showing all work, on a separate piece of paper, and carefully label and number each problem.

| Unit | Symbol | Conversion | Example |
|---|---|---|---|
| Cubic meter | m3 | 1 m3 = 1000 L | a large fish tank |
| Liter | L | | a 2 L bottle of soda |
| Milliliter | mL | 1 mL = 0.001 L | a drop from an eyedropper |
| Atmosphere | atm | 1 atm = 760 torr | the 'normal' amount of pressure exerted by Earth's atmosphere |
| Torr | torr | 760 torr = 1 atm | atmospheric pressure can drop to 740 torr during a storm |
| Pounds per square inch | psi | 14.7 psi = 1 atm | a car tire might be rated for 35 psi |
| Fahrenheit | °F | °F = °C × 9/5 + 32 | 32°F water freezes; 212°F water boils |
| Celsius | °C | °C = (°F - 32) × 5/9 | 0°C water freezes; 100°C water boils |
| Kelvin | K | K = °C + 273 | 77 K liquid nitrogen boils; 273 K water freezes; 373 K water boils |
| Mole | mol | 1 mol = 6.02 × 10^23 particles | 1 mol of C = 6.02 × 10^23 atoms of C; 1 mol of C atoms weighs exactly 12 g |

Now that you know a bit about the units you will be using, you are ready to start working with the mathematical laws that we will use to build up the Ideal Gas Law. The first law we will examine is known as Boyle's Law and was first quantified and mathematically modelled by Robert Boyle in about the year 1662. The law, in words, says the following: at constant temperature, the volume of a gas is inversely proportional to its pressure. That is, the higher the pressure, the smaller the volume; the lower the pressure, the larger the volume. This relationship can be expressed in the formula:

P · V = constant or, more algebraically, P × V = k (both are true only when T and n are constant)

The key to understanding pressure is to think about force per unit area. The molecules of a gas are constantly striking the walls of the gas's container. Because these collisions are so numerous, it all adds up to a steady amount of force applied to every square centimeter of the surface. The SI unit of pressure is the pascal (Pa), which is equal to a force of 1 newton per square meter (1 N/m2). A more familiar unit of force per unit area for us here in the U.S. is pounds per square inch. Boyle's Law is a useful proportion that can be put to work to answer questions about changes in volume and pressure.
Here is an example of how to apply the law.

Example
If a gas's pressure is reduced by half at constant temperature, then its volume doubles. This much should be clear from the work you did in the lab. But how can Boyle's Law be used to calculate this result without performing an experiment? First, let's define the initial pressure and volume as P1 and V1 and the pressure and volume after a change as P2 and V2. According to Boyle's Law:

P1V1 = k1 and P2V2 = k2

Let's assume those constants are the same (and they will be as long as we do not add or remove any gas or change the temperature). In that case:

P1V1 = P2V2

Now, here is a question we can answer using this proportional equation: What is the final volume of a gas when its pressure is doubled, given an initial pressure of 1.0 atm and an initial volume of 5.0 L?

P1 = 1.0 atm    P2 = 2.0 atm
V1 = 5.0 L      V2 = ?

P1V1 = P2V2, so (1.0 atm)(5.0 L) = (2.0 atm)(V2)

Solving for V2 gives the answer 2.5 L. This is exactly what we expected based on the idea that this is an inverse proportion: when one variable is doubled, the other is cut in half. Use this example to help you to answer the questions in the exercises at the end of this packet.

The next law, Charles's Law, relates volume and temperature:

V/T = constant or V/T = k (only when P and n are constant)

This proportion is true when the pressure is kept the same and when no gas is added or taken away. Another important point about this relationship is that a different temperature scale is required for it to work properly. Both the Celsius and the Fahrenheit scales have meaningful values that are lower than zero. Obviously this would make the proportion give nonsense answers about negative volumes, so it can't be allowed. To solve this problem we will use a temperature scale that has no meaningful values below zero: the kelvin scale. Temperatures expressed in kelvin are just the Celsius temperature plus 273. The lowest possible temperature is absolute zero: 0 K or -273°C. For all calculations involving temperature and gases you must use the kelvin temperature scale. Both the kelvin and Celsius scales have degrees of exactly the same size.

Charles's Law is a useful proportion that can be put to work to answer questions about changes in volume and temperature. Here is an example of how to apply the law.

Example
If a gas's temperature is reduced by half at constant pressure, then its volume is also cut in half. They are directly proportional, after all. But how can Charles's Law be used to calculate this result without performing an experiment? First, let's define the initial temperature and volume as T1 and V1 and the temperature and volume after a change as T2 and V2. According to Charles's Law:

V1/T1 = k1 and V2/T2 = k2

Let's assume those constants are the same (and they will be as long as we do not add or remove any gas or change the pressure). In that case:

V1/T1 = V2/T2

Now, here is a question we can answer using this proportional equation: What is the final volume of a gas when its temperature is doubled, given an initial temperature of 25°C and an initial volume of 5.0 L? A doubling of temperature refers to the absolute temperature expressed in kelvins, so 50°C is not 25°C doubled. Instead, 25°C = 298 K is doubled to 2 × 298 K = 596 K.

V1 = 5.0 L      V2 = ?
T1 = 298 K      T2 = 596 K

V1/T1 = V2/T2, so (5.0 L)/(298 K) = (V2)/(596 K)

Solving for V2 gives the answer 10 L. This is exactly what we expected based on the idea that this is a direct proportion: when one variable is doubled, the other is also doubled. Use this example to help you to answer the questions at the end of this packet.
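Because both worked examples reduce to solving a one-step proportion, they are easy to check with a few lines of Python (a quick sketch, not part of the original packet; the variable names are my own):

```python
# Check the Boyle's Law example: P1*V1 = P2*V2 (constant T and n).
P1, V1, P2 = 1.0, 5.0, 2.0        # atm, L, atm
V2 = P1 * V1 / P2                 # inverse proportion
print(V2)                         # 2.5 (L)

# Check the Charles's Law example: V1/T1 = V2/T2 (constant P and n).
# Temperatures must be absolute, so convert to kelvin first.
V1, T1 = 5.0, 25 + 273            # L and K (25 degrees C -> 298 K)
T2 = 2 * T1                       # "doubling" means doubling the kelvins
V2 = V1 * T2 / T1                 # direct proportion
print(V2)                         # 10.0 (L)
```

Either line of algebra can be rearranged the same way you would on paper; the script simply confirms the 2.5 L and 10 L answers above.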
The third simple gas law, Avogadro's Law, relates volume and the amount of gas:

V/n = constant or V/n = k (only when T and P are constant)

This law applies to situations like inflating a balloon. The balloon has a fairly constant temperature and pressure, but as you add air its volume grows. Practically speaking, you cannot reduce the number of moles of a gas by reducing volume; this law just says that a smaller volume holds a smaller number of gas molecules. One useful consequence of this law is that by comparing gases as they would be under a standard set of conditions, the volume does tell you something about the amount of gas. One mole of any gas will have a volume of 22.4 L at 273 K (0 °C) and 1 atm. These conditions (1 atm and 273 K) are known as Standard Temperature and Pressure (STP).

Avogadro's Law is a useful proportion that can be put to work to answer questions about changes in volume and moles of gas. Here is an example of how to apply the law.

Example
If the amount of gas in a container is cut to one third at constant pressure and temperature, then its volume is also cut to one third. They are directly proportional, after all. But how can Avogadro's Law be used to calculate this result without performing an experiment? First, let's define the initial volume and amount of gas as V1 and n1 and the volume and amount of gas after a change as V2 and n2. According to Avogadro's Law:

V1/n1 = k1 and V2/n2 = k2

Let's assume those constants are the same (and they will be as long as we do not change the temperature or the pressure). In that case:

V1/n1 = V2/n2

Now, here is a question we can answer using this proportional equation: What is the final volume of a gas when the number of moles of gas is cut to one third (say, by letting air out of a balloon), starting with 5.8 mol and an initial volume of 15.0 L?

V1 = 15.0 L     V2 = ?
n1 = 5.8 mol    n2 = 1.9 mol

V1/n1 = V2/n2, so (15.0 L)/(5.8 mol) = (V2)/(1.9 mol)

Solving for V2 gives the answer 5.0 L. This is exactly what we expected based on the idea that this is a direct proportion: when one variable is cut to one third, the other is also. Use this example to help you to answer the questions at the end of this packet.

PV = nRT is the combined form of the ideal gas law. In this formula you can see that pressure (P) is still inversely proportional to volume (V), and volume is still directly proportional to the number of moles (n) and the temperature (T). The letter 'R' is the constant of the proportion and is called the Universal Gas Constant. In the units we will usually use, R = 0.0821 L·atm/K·mol. This formula is useful for calculating a missing value: if you know the number of moles, volume and temperature of a gas, you can calculate the pressure by re-writing it as P = nRT/V.

There is another form of the combined gas laws that is useful when you have multiple variables changing at once. When a weather balloon rises through the atmosphere to very high levels, the external pressure becomes lower and lower, but the temperature also becomes less and less. Rather than doing two separate calculations, the proportions can be combined as you see here:

R = P1V1/(n1T1) = P2V2/(n2T2), or P1V1/T1 = P2V2/T2 because moles usually don't change (i.e., n1 = n2)

By solving the combined gas laws equation (also called the ideal gas law equation) for the universal gas constant, it is possible to set up a change formula as we did earlier for the simple gas laws.
Since most of the time problems will be concerned with closed systems in which the number of moles of a gas is constant, the second form of this formula (with the moles cancelled) is most commonly used. The Combined Gas Law is a useful proportion that can be put to work to answer questions about changes in volume, pressure, temperature or moles of gas. Here are two examples of how to apply the law.

Example 1: The pressure, volume, number of moles, and temperature are all you need to know everything there is to know about a gas. If you know three of these variables, then it is easy to calculate the fourth. For example, what is the volume of the gas in a metal helium cylinder, given that at a temperature of 20°C it has a pressure of 180 atm and contains 1,500 mol of He gas? Here is how to set up the problem:

T = 20°C = 293 K
P = 180 atm
n = 1,500 mol
V = ?

V = nRT/P = (1,500 mol)(0.0821 L·atm/K·mol)(293 K)/(180 atm) = 200 L

Any of the four variables can be found as long as you know the other three. If you keep careful track of your units as you work, then you will have a way to confirm that you have done the calculation correctly. The Universal Gas Constant (R) always provides the missing unit by cancelling out with all of the others. This formula is useful for calculations when nothing is changing.

Example 2: For situations in which two variables change at the same time, the second equation above is useful for calculating the result. Take the example of a balloon full of helium released at sea level, where the pressure in the balloon is about equal to the external pressure of 1 atm. The temperature on a warm spring day is about 18°C and the volume of the balloon is 150 L. What will the volume of the balloon be when it reaches a point high above where the temperature is –10°C and the pressure is 0.5 atm? Start by listing all the variables you know and then set up the proportion. Then solve the proportion for the missing variable, plug in your numbers, cancel units, and calculate the answer:

P1 = 1 atm            P2 = 0.5 atm
V1 = 150 L            V2 = ?
n1 = ? mol            n2 = ? mol (but the same as before)
T1 = 18°C = 291 K     T2 = –10°C = 263 K

Since n is the same before and after, just use the second form of the equation and solve it for V2:

V2 = P1V1T2/(P2T1) = (1 atm)(150 L)(263 K)/[(0.5 atm)(291 K)] = 271 L

So the balloon grows a lot in volume as it rises. This makes sense, since the pressure is reduced to half: this alone would double the volume. Because the temperature goes down, though, the volume does not become quite as big, since there is a direct proportion between temperature and volume.
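Here is the balloon calculation as a short script, again an illustrative sketch rather than part of the packet:

```python
# Example 2 as code: combined gas law with n constant,
# P1*V1/T1 = P2*V2/T2, solved for V2.

def combined_v2(p1, v1, t1, p2, t2):
    """All temperatures must be in kelvin."""
    return p1 * v1 * t2 / (p2 * t1)

v2 = combined_v2(p1=1.0, v1=150.0, t1=291, p2=0.5, t2=263)
print(round(v2))  # 271 (L), matching the worked answer above
```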
Ice in Antarctica suddenly appeared — suddenly in geologic terms being a little different than how we think of it — about 35 million years ago. For the previous 100 million years the continent had been essentially ice-free. Even after Antarctica had drifted to near its present location, its climate remained subtropical; then, 35.5 million years ago, ice formed on Antarctica in only about 100,000 years, which is an "overnight" shift in geological terms.

What triggered the sudden shift? It wasn't global cooling, or so some researchers had believed, says Matthew Huber, assistant professor of earth and atmospheric sciences at Purdue University. "Previous evidence points paradoxically to a stable climate at the same time this event, one of the biggest climate events in Earth's history, was happening."

Now a paper published this week in Science says the researchers have evidence of widespread cooling, and additional computer modeling suggests the cooling was caused by a reduction of greenhouse gases in the atmosphere. So we need some greenhouse gases - it's been 12,000 years since the last Ice Age and we'd like to keep it that way - just not too much.

"Our studies show that just over thirty-five million years ago, 'poof,' there was an ice sheet where there had been subtropical temperatures before," Huber says. "Until now we haven't had much scientific information about what happened."

Before the cooling occurred at the end of the Eocene epoch, the Earth was warm and wet, and even the north and south poles experienced subtropical climates. The dinosaurs were long gone from the planet, but there were mammals and many reptiles and amphibians. Then, as the scientists say, poof, this warm wet world, which had existed for millions of years, dramatically changed. Temperatures fell sharply; many species of mammals, as well as most reptiles and amphibians, became extinct; Antarctica was covered in ice; and sea levels fell. History records this as the beginning of the Oligocene epoch, but the cause of the cooling has been the subject of scientific discussion and debate for many years.

The research team found that before the event, ocean surface temperatures near present-day Antarctica averaged 77 degrees Fahrenheit (25 degrees Celsius). Mark Pagani, professor of geology and geophysics at Yale University, says the research found that air and ocean surface temperatures dropped as much as 18 degrees Fahrenheit during the transition. "Previous reconstructions gave no evidence of high-latitude cooling," Pagani says. "Our data demonstrate a clear temperature drop in both hemispheres during this time."

The research team determined the temperatures of the Earth millions of years ago by using temperature "proxies," or clues. In this case, the geologic detectives looked for biochemical molecules produced by plankton that lived only at certain temperatures. The researchers looked for these temperature proxies in seabed cores collected by drilling in deep-ocean sediments and crusts from around the world. "Before this work we knew little about the climate during the time when this ice sheet was forming," Huber says.

Once the team identified the global cooling, the next step was to find what caused it. To find the cause, Huber used modern climate modeling tools to look at the prehistoric climate. The models were run on a cluster-type supercomputer on Purdue's campus. "That's what climate models are good for. They can give you plausible reasons for such an event," Huber says.
"We found that the likely culprit was a major drop in greenhouse gases in the atmosphere, especially CO2. From the temperature data and existing proxy records indicating a sharp drop in CO2 near the Eocene-Oligocene boundary, we are establishing a link between the sea surface temperatures and the glaciation of Antarctica." Huber says the modeling required an unusually large computing effort. Staff at Information Technology at Purdue assisted in the computing runs. "My simulations produced 50 terabytes of data, which is about the amount of data you could store in 100 desktop computers. This represented 8,000 years of climate simulation," Huber says. The computation required nearly 2 million computing hours over two years on Pete, Purdue's 664-CPU Linux cluster. "This required running these simulations for a long time, which would not have been allowed at a national supercomputing center," Huber says. "Fortunately, we had the resources here on campus, and I was able to use Purdue's Pete to do the simulation." Additional members of the research team included David Zinniker at Yale; Robert DeConto and Mark Leckie at the University of Massachusetts, Amherst; Henk Brinkhuis at Utrecht University (Netherlands); and Sunita R. Shah and Ann Pearson at Harvard University. Zhonghui Liu, an assistant professor at the University of Hong Kong and a former postdoctoral fellow of Pagani's at Yale, was the study's lead author. The research was supported in part by funding from the National Science Foundation. - PHYSICAL SCIENCES - EARTH SCIENCES - LIFE SCIENCES - SOCIAL SCIENCES Subscribe to the newsletter Stay in touch with the scientific world! Know Science And Want To Write? - A Dimuon Particle At 30 GeV In ALEPH ?? - Who Is Trying To Destroy The Internet? - President Obama, Why Humans On Mars Right Now Are Bad For Science - The Social Psychology Of Online Trolling, Part 1 Of 3 - A Racist On The Jews: Let The Donald Trump! - Hearing Voices Again? It's More Common Than You Think - Biofuels Are A Climate Mistake - "The argument about parallel muon momenta indeed excludes *direct* decay of Z to bbar+X. To my b..." - " I hope we never give our government complete control over the internet.NSA has been tapping in..." - "Without question there is not 'something'...there is only nothing. The deeper you delve into everything..." - "Politely speaking; Vongehr's model must be premised on asexual reproduction. In the real world..." - "/* Hallelujah, HEP is saved! */ Are we doing research for saving some branch of physics? The science..." - The Math of Hunting and Fishing: When to Work Together - Placebo: Bubbles Of Nothing Are Still Not Something - People Who Take Drugs May Be Likelier to Commit Suicide - Improved 'Screen Time' Guidelines Could Make Parents & Kids Happier - Dr. Jamie Wells Named One Of America's Top Pediatricians - Why Did EPA Delay Its Glyphosate Safety Report?
- 1. Order of Arithmetic Operations
- Certain arithmetic operations take precedence over others. In completing problems with a series of operations, the following rules apply.
- a. Addition or subtraction may occur in any order.
- Example: 4 + 8 − 7 + 3 = 8 or 8 + 3 + 4 − 7 = 8
- b. Multiplication or division must be completed before addition or subtraction.
- Example: 48 ÷ 6 + 2 = 10
- Example: 4 + (2/3)(1/2) = 4⅓
- c. Any quantity above a division line, under a division line or a radical sign, or within parentheses or brackets must be treated as one number.
- 2. Fractions, Decimals, and Percents
- a. To add (or subtract) fractions, the denominator in each term must be the same. (Choose the lowest common denominator for each term. Multiply each term by the common denominator and then add [or subtract].)
- (lowest common denominator = 12)
- (lowest common denominator = xc)
- b. To multiply fractions, multiply the numerators by each other and the denominators by each other.
- c. To divide fractions, invert the divisor and multiply.
- d. To convert a fraction to a percentage, divide the numerator by the denominator and multiply by 100.
- Note: To convert a percentage to a decimal, move the decimal point two places to the left.
- e. When dividing by a decimal, divide by the integer and add sufficient zeros to move the decimal point the appropriate number of digits to the right.
- (appropriate number of digits to right = 2)
- When multiplying by a decimal, multiply by the integer and add enough zeros to move the decimal point the appropriate number of digits to the left.
- (appropriate number of digits to left = 3)
- f. Decimals may be expressed as positive or negative powers of 10.
- 3. Proportions, Formulas, and Equations
- The location of values in proportions, equations, or formulas may be shifted provided that whatever addition, subtraction, multiplication, or division is performed on one side of the equation is also performed on the other side.
- 4. Right Triangles and Trigonometric Functions
- a. In a right triangle one angle always equals 90°. The other two angles will always be acute angles, and the sum of these two angles will be 90°, since the sum of the angles in any triangle is 180°.
- b. In a right triangle the sides are related to each other so that the square of the longest side or hypotenuse (c) is equal to the sum of the squares of the other two sides: c² = a² + b². This is the Pythagorean theorem.
- c. In triangle ABC, side a is called the side opposite angle A, side b is opposite angle B, and the hypotenuse, c, is opposite the right angle. Side b is named the side adjacent to angle A and side a is the side adjacent to angle B.
- d. Trigonometric functions are ratios between the sides of a right triangle and ...
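A few of these rules can be checked with a short script. This sketch uses hypothetical numbers (the 3/4 fraction and the 3–4–5 triangle are not from the outline above):

```python
import math

# Order of operations: multiplication/division before addition/subtraction.
print(48 / 6 + 2)          # 10.0, matching the example in 1b

# Fraction -> percentage: divide numerator by denominator, multiply by 100.
print(3 / 4 * 100)         # 75.0 (%), for the hypothetical fraction 3/4

# Pythagorean theorem: c^2 = a^2 + b^2 in a right triangle.
a, b = 3.0, 4.0            # hypothetical legs
c = math.sqrt(a**2 + b**2)
print(c)                   # 5.0, the hypotenuse
```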
This resource pack is everything you need to assess and provide intervention for struggling 8th grade students in all five math domains. How do these intervention packs work? Starting with a pretest and an item analysis of each question on the test, you will be able to pinpoint the exact needs of all students. From there, printables and short assessments are provided for each standard that assess procedural and conceptual understanding. Data charts and documents are provided to help keep you organized and focused during all steps of the intervention process. Take the guesswork out of providing intervention and focus on what is really important… helping your students! Looking for extensive graphing forms to help you stay organized during the RTI process? Check out our Intervention Graphing Packs! Standards & Topics Covered Functions ➥ 8.F.1 – Understand that a function is a rule that assigns to each input exactly one output ➥ 8.F.2 – Compare properties of two functions each represented in a different way ➥ 8.F.3 – Interpret the equation y = mx + b as defining a linear function, whose graph is a straight line ➥ 8.F.4 – Construct a function to model a linear relationship between two quantities ➥ 8.F.5 – Describe qualitatively the functional relationship between two quantities by analyzing a graph The Number System ➥ 8.NS.1 – Understand that every number has a decimal expansion ➥ 8.NS.2 – Use rational approximations of irrational numbers Expressions and Equations ➥ 8.EE.1 – Develop and apply the properties of integer exponents to generate equivalent numerical expressions ➥ 8.EE.2 – Square and cube roots ➥ 8.EE.3 – Use numbers expressed in scientific notation to estimate very large or very small quantities and to express how many times as much one is than the other. ➥ 8.EE.4 – Perform multiplication and division with numbers expressed in scientific notation to solve real-world problems ➥ 8.EE.5 – Graph proportional relationships, interpreting the unit rate as the slope of the graph ➥ 8.EE.6 – Use similar triangles to explain why the slope m is the same between any two distinct points on a non-vertical line in the coordinate plane ➥ 8.EE.7 – Solve linear equations in one variable ➥ 8.EE.8 – Analyze and solve pairs of simultaneous linear equations Geometry ➥ 8.G.1 – Verify experimentally the properties of rotations, reflections, and translations ➥ 8.G.2 – Using transformations to define congruency ➥ 8.G.3 – Describe the effect of dilations about the origin, translations, rotations about the origin in 90 degree increments, and reflections across the x-axis and y-axis on two-dimensional figures using coordinates. ➥ 8.G.4 – Use transformations to define similarity. ➥ 8.G.5 – Use informal arguments to analyze angle relationships. ➥ 8.G.6 – Explain the Pythagorean Theorem and its converse. ➥ 8.G.7 – Apply the Pythagorean Theorem and its converse to solve real-world and mathematical problems. ➥ 8.G.8 – Apply the Pythagorean Theorem to find the distance between two points in a coordinate system. ➥ 8.G.9 – Understand how the formulas for the volumes of cones, cylinders, and spheres are related and use the relationship to solve real-world and mathematical problems. Statistics and Probability ➥ 8.SP.1 – Interpreting line plots ➥ 8.SP.2 – Understanding bivariate quantitative data ➥ 8.SP.3 – Use the equation of a linear model to solve problems in the context of bivariate quantitative data, interpreting the slope and y-intercept. 
➥ 8.SP.4 – Understand that patterns of association can also be seen in bivariate categorical data by displaying frequencies and relative frequencies in a two-way table. What is procedural understanding? ✓ Houses practice of procedural steps ✓ Requires facts, drills, algorithms, methods, etc. ✓ Based on memorizing steps ✓ Students are learning how to do something What is conceptual understanding? ✓ Understanding key concepts and applying prior knowledge to new concepts ✓ Understanding why something is done ✓ Making connections & relationships
Right Triangle Trigonometry Worksheet. Find the angle between the ramp and the horizontal, giving the answer to the nearest second. Some information, such as a side length or an angle, is provided. Nagwa is an educational technology startup aiming to help teachers teach and students learn. When we understand the trigonometry of right triangles, we can find every measure of the sides and angles of a triangle. You can do that because the ramp is going to rise to create a right triangle. When you diagram this and find the angle your ramp must make to reach that height, everything else can be worked out through trigonometry.

- When we understand the trigonometry of right triangles, we can find every measure of the sides and angles of a triangle.
- A sample problem is solved, and two practice questions are provided.
- In this worksheet, we will practice finding a missing angle in a right triangle using the appropriate trigonometric function given two side lengths.
- Find the measure of ∠? giving the answer to 2 decimal places.

Applying the ideas of right triangles is essential in Trigonometry. The worksheets below testify to that fact. Students will find the value of x using the tangent of the given angle.

- 1 Evaluate and Practice
- 2 Right Triangle Trigonometry Worksheets
- 3 Related posts of "Right Triangle Trigonometry Worksheet"

Evaluate and Practice

These worksheets explain how to use the tangent of a given angle to solve for x. Your students will use these sheets to determine the value of requested variables by using the sine, cosine, tangent, and so on, of given triangles. Some information, such as a side length or an angle, is provided. Students will use the tangent of a given angle to solve for x. We will put this skill to the test and see how many of these unknown angles we can work out. Work out the angle between the ladder and the floor, giving your answer to 2 decimal places.

Right Triangle Trigonometry Task Cards

Space is included for students to copy the correct answer when given. This worksheet reviews how to use the tangent of a given angle to solve for x. Six practice questions are provided. The vertical trunk is 5 metres tall and the inclined section is 6 metres. Find the measure of the angle between the inclined section and the ground, giving the answer to the nearest second. Find the measure of ∠? giving the answer to two decimal places.

Right Triangle Trigonometry Worksheets

Easel Activities: pre-made digital activities. Add highlights, digital manipulatives, and more. Interactive resources you can assign in your digital classroom from TPT. The following worksheets teach your students to calculate requested values using sine, cosine, tangent, and so forth. This worksheet explains how to solve for the missing value of one side of a triangle. A sample problem is solved, and two practice questions are provided.
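As a sketch of the recurring ramp task above, here is one way to compute the angle from two side lengths. The rise and run values are hypothetical, not taken from the worksheet:

```python
import math

# Find the angle between a ramp and the horizontal from two side lengths.
rise = 1.0   # m, vertical height the ramp must reach (hypothetical)
run = 6.0    # m, horizontal distance along the ground (hypothetical)

angle_deg = math.degrees(math.atan(rise / run))  # tan(angle) = opposite/adjacent
print(round(angle_deg, 2))                       # ~9.46 degrees

# Convert to degrees/minutes/seconds, since answers are often
# requested "to the nearest second".
d = int(angle_deg)
m = int((angle_deg - d) * 60)
s = round(((angle_deg - d) * 60 - m) * 60)
print(d, m, s)                                   # 9 27 44
```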
Microsoft Excel, a powerful tool in the world of data analysis and management, is packed with numerous functions that help in manipulating, calculating, and analyzing data. One such function is the CONVERT function, a highly versatile tool that allows users to convert a number from one measurement system to another. This article delves into the details of the CONVERT function, its syntax, how to use it, and some common errors that users might encounter. Understanding the CONVERT Function The CONVERT function in Excel is a part of the Math and Trig functions category. It is primarily used to convert numbers from one unit of measurement to another. For example, you can convert pounds to kilograms, feet to meters, or Fahrenheit to Celsius. The function supports a wide range of measurement units, making it an invaluable tool for individuals and businesses dealing with diverse datasets. Before diving into the specifics of using the CONVERT function, it's crucial to understand its syntax. The function follows the pattern: CONVERT(number, from_unit, to_unit). Here, 'number' refers to the value you wish to convert. 'From_unit' and 'to_unit' are the units for the original and the desired measurements, respectively. Both these units should be in text format, enclosed in quotation marks. Using the CONVERT Function Let's start with a simple example to illustrate the use of the CONVERT function. Suppose you want to convert 10 pounds to kilograms. The formula would be: =CONVERT(10, "lbm", "kg"). When this formula is entered into a cell, Excel will return the equivalent weight in kilograms. It's important to note that Excel uses specific abbreviations for units of measurement. For instance, 'lbm' stands for pounds mass, 'kg' for kilograms, 'm' for meters, and so on. Excel has a comprehensive list of these abbreviations, which users should familiarize themselves with to effectively use the CONVERT function. The CONVERT function can also be used with other Excel functions for more complex calculations. For example, you can use it with the SUM function to add up values in different units. Suppose you have weights in both pounds and kilograms, and you want to find their total in kilograms. You can use the CONVERT function to convert the weights in pounds to kilograms, and then use the SUM function to add them up. Similarly, the CONVERT function can be used with the AVERAGE function to find the average of values in different units. The possibilities are endless, and with a bit of creativity, the CONVERT function can be a powerful tool in your Excel arsenal. Common Errors and Troubleshooting The #VALUE! error is one of the most common errors encountered when using the CONVERT function. This error typically occurs when the units specified in the formula are not recognized by Excel. To resolve this error, ensure that the unit abbreviations are correctly spelled and are in the correct format. Another common cause of the #VALUE! error is when the number to be converted is in text format. The CONVERT function requires the number to be in numeric format. If your number is in text format, you can use the VALUE function to convert it to a numeric format before using the CONVERT function. The #N/A error occurs when the CONVERT function cannot find the conversion path between the 'from_unit' and the 'to_unit'. This usually happens when trying to convert between incompatible units, like trying to convert pounds to meters. To avoid this error, ensure that the units you're trying to convert are compatible. 
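To make the troubleshooting concrete, here are a few illustrative formulas and the results they should produce. The cell layout is hypothetical and the converted value is rounded:

```
=CONVERT(10, "lbm", "kg")        → ~4.536 (pounds mass to kilograms)
=CONVERT(10, "lbm", "m")         → #N/A (mass to length: no conversion path)
=CONVERT("ten", "lbm", "kg")     → #VALUE! (the number must be numeric)
=CONVERT(VALUE(A1), "lbm", "kg") → one fix: coerce a text-formatted number in A1 with VALUE first
```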
It's also worth noting that the CONVERT function does not support all units of measurement. If you're trying to convert a unit that's not supported by the function, Excel will return the #N/A error. In such cases, you might need to find a workaround or use a different tool for the conversion. The CONVERT function in Excel is a versatile and powerful tool that can significantly simplify data analysis and manipulation. By understanding its syntax, usage, and common errors, you can effectively use this function to convert between a wide range of units, enhancing your productivity and efficiency in Excel. Remember, practice is key when it comes to mastering Excel functions. So, don't hesitate to experiment with the CONVERT function and explore its potential. Happy converting! Take Your Data Analysis Further with Causal If you're looking to elevate your data analysis beyond traditional spreadsheets, Causal is your go-to platform. Designed specifically for number crunching and data manipulation, Causal offers intuitive tools for modelling, forecasting, and scenario planning. Visualize your data with stunning charts and interactive dashboards, and present your findings with clarity. Ready to transform how you work with data? Sign up today and start exploring the possibilities with Causal—it's free to get started!
To determine whether a value conforms to accepted tolerance levels and report an “ACCEPT” or “REJECT”, or a “PASS” or “FAIL”, the basic “IF” function can be used. How easy? The Excel “IF” function is fairly simple.

The IF function to determine if a value is within a tolerance

The “IF” function can help you to determine if a set of conditions is True or False, Pass or Fail, and so on. The “IF” function is a good tool for the following:
- The “IF” function determines whether an expression mathematically or logically satisfies a set of conditions.
- The “IF” function helps you to know if a set of values is within tolerance levels.
Let us use our Excel sheet for a firmer understanding of how the “IF” function works. We will use two examples.

How the formula works

Inside the IF function, assuming we click on F15 after satisfying the other conditions, the result shows a PASS. How did we arrive at this?
- Type the names and the scores on the spreadsheet. Also, have the result section ready as shown above.
- Now, assuming we want to determine if the score of Matty is a PASS or FAIL, what do we do?
- Place your cursor on F15. Click on fx as shown in figure 2 below. You should have this.
- Click on OK on the popup in figure 2.
- You will see the figure below.
- Take your cursor and place it between the brackets in the fx section. That is =IF(PLACE THE CURSOR HERE)
- Remember that Matty’s score is in cell E15, so type E15. We want to know if Matty’s score is greater than or equal to 500, so type >=500 and place a comma. Insert a space. Type a quotation mark, then PASS, then a closing quotation mark. Insert a space and do the same for FAIL. When you are done, it should look like figure 4 below.
- Click on OK. Did you get a PASS as the result? YES! That is correct.
In a nutshell, this is the basic function: =IF(E15>=500, "PASS", "FAIL")

In this example, we will use the “IF” function in combination with the “ABS” function. The “ABS” function simply means the Absolute Value function. For instance, 5 − 3 = 2 and 3 − 5 = −2. Now, the “ABS” function works by taking no account of the signs involved. Hence, it reads both values above as 2.

The basic function for this example is =IF(ABS(D7-E7)<=F7, "ACCEPT", "REJECT"). The values in figure 5 are for an experiment whose values are allowed to differ within a tolerance level of 0.007. The only difference in applying this function is in ABS(D7-E7)<=F7. After applying step 1 and step 2, when you are at the last stage of step 3 in example 1 above, you should type the “ABS” function. After that, open a bracket and then type the cells involved (cell 1 − cell 2). Close the bracket and place the sign (<=). Then type the reference of the cell holding the tolerance we are interested in; in the case of row 7 it is F7, in row 8 it is F8, and in row 9 it is F9. And, of course, type whether we are ACCEPTING or REJECTING the values, just like we typed PASS and FAIL in step 4 of example 1.

Still need some help with Excel formatting or have other questions about Excel? Connect with a live Excel expert here for some 1 on 1 help. Your first session is always free.
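For reference, here is how the completed tolerance formulas from the second example might look when copied down the sheet. The layout is an assumption (measured values in column D, reference values in column E, tolerances in column F, results in column G); adjust the references to match your own sheet:

```
G7: =IF(ABS(D7-E7)<=F7, "ACCEPT", "REJECT")
G8: =IF(ABS(D8-E8)<=F8, "ACCEPT", "REJECT")
G9: =IF(ABS(D9-E9)<=F9, "ACCEPT", "REJECT")
```

Because the row references adjust automatically, you can enter the first formula once and fill it down over the remaining rows.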
1 Issues with learning about area and perimeter
Pause for thought
Think about your life outside the mathematics classroom. Where else do you need to work with the concepts of area and perimeter? Note down some examples.
Although the concepts of area and perimeter are widely used in everyday life, it is often considered a confusing topic when it comes to studying these concepts as part of the mathematics curriculum in school (Watson et al., 2013). Some of the issues students have about learning about area and perimeter are listed here.
- They may see area, and also sometimes perimeter, as purely an application of formulae without understanding what area and perimeter actually are.
- They sometimes mix up the concepts of area and perimeter.
- They have difficulty developing an understanding of dimension. Often they do not understand that perimeter is a length, which is one-dimensional and measured in units of length such as metres, centimetres or inches, while area is measured in squares with bases of a certain length and hence is expressed in two-dimensional units such as m² (metres squared, or square metres). A short worked example after this section makes this distinction concrete.
- They might not have the experience of measuring in other unconventional units of measurement such as hands, twigs, etc. and therefore do not know why it is better to use standard units of measurement – for example using metres instead of hand-spans, which vary between individuals.
- They may not link their everyday experiences and intuitive understanding of area and perimeter to what they learn in the mathematics classroom.
In the activities in this unit you will use teaching approaches that address these issues.
Pause for thought
Think back to when you taught area and perimeter on a previous occasion.
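To make the dimension point in the list above concrete, here is a small sketch (the 5 m × 3 m rectangle is a hypothetical example, not from the unit):

```python
# Perimeter is a length (one-dimensional, in m);
# area is measured in squares (two-dimensional, in m^2).
length, width = 5.0, 3.0              # metres

perimeter = 2 * (length + width)      # 16.0 m   -- units of length
area = length * width                 # 15.0 m^2 -- units of length squared

print(f"perimeter = {perimeter} m")
print(f"area = {area} m^2")
```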
Reach Age 7 to 14 Challenge Level: How many extra pebbles are added each time? Ten cards are put into five envelopes so that there are two cards in each envelope. Planning for Problem Solving In planning for problem solving the key is to be clear about the type of problem you want to use, the strategies you are going to focus on and teaching the stages of the problem-solving process. Age 5 to 7 Reasoning and Convincing at KS1 The tasks in this collection can be used to encourage children to convince others of their reasoning, using ‘because’ statements. How do the images help to explain this? Arrange the four number cards on the grid, according to the rules, to make a diagonal, vertical or horizontal line. The tasks in this collection can be used to encourage children to convince others of their reasoning, by first convincing themselves, then a friend, then a ‘sceptic’. The skills needed for a problem-solving task By this we mean the problem-solving skills listed above in Stage 2: How many shapes can you build from three red and two green cubes? Can you order the digits to make a number which is divisible by 3 so when the last digit is removed it becomes a 2-figure number divisible by 2, and so on? Can you find different ways of doing it? The planet of Vuvv has seven moons. Working Systematically at KS2 Register for our mailing list. Can you make square numbers by adding two prime numbers together? Follow the Numbers Age 7 to 11 Challenge Level: Investigate the total number of sweets received by people sitting in different positions. We want all our tasks to be used in such a way that they enable learners to explore and work from their own level of understanding, and then build on this towards new understandings. In the second article, Jennie offers you practical ways to investigate aspects of your classroom culture and in the third article, she suggests three ways in which we can support children in becoming competent problem solvers. How many different squares can you make altogether? What’s the Problem with Problem Solving? Have a go at balancing this equation. She gave the clown six coins to pay for it. Cover the Tray Age 7 to 11 Challenge Level: The numbers 2 were used to generate it with just one number used twice. Can you use the number sentences to work out what they are? Register for our mailing list. Journeys in Numberland Age 7 to 11 Challenge Level: Four of these clues are needed to find the chosen number on this grid and four are true but do nothing to help in finding the number. Prison Cells Age 7 to 11 Challenge Level: To support this aim, members of the NRICH team work in a wide range of capacities, including providing professional development for teachers wishing to embed rich mathematical tasks into everyday classroom practice. Can you put the numbers 1 to 8 into the circles so that the four calculations are correct? What Do You Need? Display Boards Age 7 to 11 Challenge Level: Age 7 to 11 Conjecturing and Generalising at KS2 The tasks in this collection encourage upper primary children to conjecture and generalise. These lower primary tasks could all be tackled using a trial and improvement approach. Number Live Age 7 to 11 Challenge Level: Curious Number Age 7 to 11 Challenge Level: Highest and Lowest Age 7 to 11 Challenge Level: This problem is designed to help children to learn, and to use, the two and three times tables. How many cubes of each colour have we used? 
Read Lynne’s article, which discusses the place of problem solving in the new curriculum and sets the scene. Pebbles Age 7 to 11 Challenge Level: Age 7 to 11 Trial and Improvement at KS2 These upper primary tasks could all be tackled using a trial and improvement approach. Problem Solving: Age 5 to 7 Working Backwards at KS1 The lower primary tasks in this collection could each be solved by working backwards. Ordering Cards Age 5 to 11 Challenge Level: Investigate the sum of the numbers on the top and bottom faces of a line of three dice. The Zios have 3 legs and the Zepts have 7 legs.

When users want to specify streams, especially material streams, they need to specify at least four variables in order to have HYSYS calculate the remaining properties. For a material stream that will be used as an input, we need to specify four variables. Otherwise, you will get an error and the simulation will not solve. This is done on the Hypotheticals tab of the Simulation Basis Manager. Add a distillation column and provide the following information: In the Object column, select stream 1, and in the Variable column, select Molar Volume. If you wish to delete an individual entry, delete its title. The Simulation Basis Manager is a property view that allows you to create and manipulate multiple fluid packages or component lists in the simulation. To fulfill these expectations, chemical process industries are renewed, redesigned, and rebuilt. Click on the Add Utility button. Click the Next button to proceed to the next page. With the same basis, in stream 2 reduce the temperature to 70°C. Hence, any temperature lower than 26°C or higher than 29°C would cause the stabilized crude to become off specification, as it would require a higher heat duty to attain the required temperature before entering the pressure vessel. If you start with a blank sub-flowsheet there is no simple way to turn it into a template; there is a work-around, which I will discuss later, but it is not pleasant either to do or to work with later. One scenario could be to assign a section of plant to each person or a pair of people and then set someone to study them and put together the main flowsheet; this means a lot of deleting old templates and then reloading and reconnecting the updated templates, but it is still the easiest way of putting people’s work together. It is a powerful program that you can use to solve all kinds of process-related problems.
At the end of the course you will be able to set up a simulation. In the Components tab, view Component List-1 to add a new component. Create and define a Kinetic Reaction. Double-click on the Expander button on the Object Palette. Switch to the Case Studies tab. The reference studied process simulation of crude oil stabilization: For a complete description, see Chapter 3, Adding Unit Operations. The new updated Databook is shown in Figure. The Monitor page of the Column property view shows 0 Degrees of Freedom even though you have just added another specification. Changing Pump Efficiency 4. The property packages available in HYSYS allow you to predict properties of mixtures ranging from well-defined light hydrocarbon systems to complex oil mixtures and highly non-ideal non-electrolyte chemical systems. Complete the spec as shown in the following figure. The Equilibrium reactor is a vessel which models equilibrium reactions. Still on the Reactions tab, click the Add Set button. If the process variables in your simulation are like all the various goods in a grocery store, then the Process Data Tables are like your grocery basket, in which you pile only those variables from around the store that you wish to monitor. Otherwise, the anode is poisoned and the cell efficiency abruptly drops. By clicking on this drop-down arrow, the user can specify any unit for the corresponding value, and HYSYS will automatically convert the value to the default unit set. A compressor is basically used to move gases. For the second reaction, enter the information as shown:

The resulting pictures can be unblurred by passing each row through a system with a zero at 0. Determine the difference equation and block diagram representations for this system. But the functional is the ratio of two voltages, Vin and Vout, so it should be dimensionless. The almost-double-pole response also limits to zero, in that case because the difference of exponentials becomes 0. Give dimensional and extreme-cases arguments to show why the impulse response that you found is reasonable. Develop a p-chart for 95 percent confidence. Then design a system with that impulse response. An alternative, which we use here for variety, is to ask Maxima to find the poles. What is the system functional and the corresponding difference equation? These are inside the unit circle. Feel free to use a computer, graphing calculator, or paper and pencil. Find all generators of Z6. Determine closed-form expressions for each mode. Express the difference equations that describe the three-barrel solera system using system functionals. How many poles does the system with feedback have, and where are they? Let a[n], b[n], and c[n] represent the amount of tracer found in the first, second, and third barrels at the end of year n. When a unit pulse (shown as a dashed line) is sent into an RC circuit, the output first approaches 1 exponentially, as if a unit step were sent in. Determine a system whose output is 10, 1, 1, 1, 1. Compare the block diagrams in parts a and b. What dosage schedule will produce the ideal behavior? The system is a damped mass–spring system, so for large t, all the derivatives go away and the system reaches a new equilibrium. So only diagrams A and B are possible.
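The "unblurring" remarks above and below are fragmentary in this copy (the zero's location is truncated), so here is a minimal sketch of the idea under assumed values. Assume each row was blurred by a one-pole smoother with pole at a; the inverse system then has a zero at a and recovers the row exactly. The value a = 0.5 is an assumption:

```python
# Assumed blur: y[n] = (1 - a)*x[n] + a*y[n-1]   (pole at z = a)
# Inverse (zero at z = a): x[n] = (y[n] - a*y[n-1]) / (1 - a)
a = 0.5

def blur(x):
    y, prev = [], 0.0
    for v in x:
        prev = (1 - a) * v + a * prev
        y.append(prev)
    return y

def deblur(y):
    x, prev = [], 0.0
    for v in y:
        x.append((v - a * prev) / (1 - a))
        prev = v
    return x

row = [1.0, 0.0, 0.0, 2.0, 0.0]
print(deblur(blur(row)))   # recovers the original row exactly
```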
The resulting pictures can be unblurred by passing each column through a system with a zero at 0. Then add the results to find the impulse response of the system. Choosing a bank: Consider two banks. The feedforward functional is a cascade of three systems. The only difference is that the roles of x and y are reversed. Here are the computed output signals: Solutions will be posted in a few days for you to check yourself. Please help us determine how reasonable the workload in 6. Check your sum by feeding the impulse into a block diagram representing the original system functional.

The allegations came out in an Italian investigation. The then UPA government cancelled the deal, encashed the bank guarantee and ordered an inquiry. The Air Force then proposed in July to acquire a single AW101, this time configured for military missions. IAF made a strong plea for a high-altitude flying helicopter for areas like Siachen and the Tiger Hill. Then-chief of the Indonesian military Gen. The photo also reignited the storm in Jakarta, with Supriatna finally acknowledging that the Air Force acquired the helicopter even as Nurmantoyo told local media that he had sent a letter to Leonardo cancelling the contract. With the service’s TH aircraft aging rapidly, three companies are vying to replace them. ET reported that Giuseppe Orsi, the powerful former chief of Finmeccanica, has been sentenced to four-and-a-half years in prison. Another twist in Indonesia’s puzzling AW101 helicopter buy. It is alleged that several parameters, for instance, the height of the cabin of the helicopter, the operating ceiling, or the maximum altitude the helicopter could fly to, were tweaked to help AgustaWestland. It could fly lower to around 3, feet of the requirement. Also, they were unable to operate safely at night and in places with elevation beyond meters.

What is occupational health and safety? The case studies collected here demonstrate some of the practical ways in which the challenges of an ageing workforce have been successfully addressed. A look back at the specifics of the case. It started out as a very small company. Wholesome Path manufactures organic whole grain foods. Valero Refinery Case Study, November 2. The Onboard hook weighs just 20 pounds, features an integrated load cell for easier operation, and costs less than the legacy hook. Download our “Trade-in Program Sheet.” Want to find out more about our trade-in and upgrade programs? Sign up to receive notices about service alerts and bulletins here.
Indonesia’s Defence Industry Policy Committee also said the proposal violated the Defense Industry Act, with its head of planning, Muhammad Said Didu, noting that the AW101 purchase was made through unknown intermediaries even though the law mandated that the purchase of foreign military products be carried out via government-to-government deals or direct purchasing from original equipment manufacturers. Just as worrying is how easily the Air Force under Supriatna managed to circumvent the regular procurement process and its attendant checks and balances. The judgement by the Milan Court of Appeals, accessed by ET, has a separate chapter on Tyagi that explained the grounds on which it came to the conclusion on the corruption of the “Indian public officer”.

This is a selection of the case studies from HSE’s topics and industries websites. These case studies are based on publicly available information from OSHA. Our safety case studies highlight some of BP’s environmental and social initiatives from around the world. This is a case study in failings of our public health protection system. The Semi-Automatic Stretch Wrapper has significantly decreased the risk of MSDs, increased production and boosted employee morale. The legacy hook was also proving to be expensive to maintain, and because its spare parts and components were made with forgings and castings, replacement parts often had a month lead time. Consequently, return-to-service requirements for the new hook are easier and faster to meet. Connect with Onboard Systems. Sign up for the Early Bird Brief, the defense industry’s most comprehensive news and information, straight to your inbox. Who will build the Navy’s next helicopter trainer?

It was a period of severe drought. The farmers had plowed the soil before the Dust Bowl, disrupting the grasses that would have normally kept the soil in place during high winds. Submit questions with the answers inserted within the paragraphs or below as appropriate. The value of the asset based on generally-accepted accounting principles. Information literacy includes a broader understanding of how information technology can be used to solve business problems and create information that is useful to the firm and its employees. What is source code? What kinds of errors are reported by a compiler? What does a compiler do? Cost of goods sold. Provide an overview of the six strategic business objectives of information systems. The campus spans beautiful downtown Chicago, inspiring students with spectacular architecture in modern and historical buildings. How effective or ineffective was your leadership style in the scenario you described? The manager of the Carpet City outlet needs to make an accurate forecast of the demand for SoftShag carpet, its biggest seller. Market values reflect which of the following: High-level languages are better because they are easier to understand and work with.
This happened because of the drought and because the farmers were not using dry-land farming methods at the time to prevent wind erosion. Severe drought and wind erosion ravaged the Great Plains for a decade. The dust storms greatly degraded the productivity of the soil. As the winds blew, they turned the soil into dust that blew everywhere. Visibility was greatly reduced during these times of high wind, and it was very hard for people to see in front of them. The dust was horrible, making it unbearable to live in those circumstances. The drought damaged the agriculture and the environment in a detrimental way.

Why did you choose those leadership tactics? For this assignment, use the information for Sports Baseballs, Inc. Why might the overall risk of JCPenney decrease or increase as a result of its recent global expansion? JCPenney has been more cautious about entering China. Because financial managers always act in the best interest of shareholders. Market values reflect which of the following: Once again, I was left without answers. Reconcile deposits and cash receipts by two separate methods. This assignment has 3 pages; make sure to respond to all 5 questions in this assignment. Expenses are as follows: Complete the assigned problems. Explain the difference between computer literacy and information literacy. Computer literacy narrowly focuses on the use of computer hardware and software to process raw data. What is source code? What does a compiler do? A compiler translates one computer language into another. Aslan has published three books, edited two anthologies and writes for different media outlets.

Le Clezio’s Le Proces-verbal, La Guerre and Desert turn their back on the literary tradition of the “intellectual hero” and his political pedagogy while failing to fairly represent the oppressed because of the over-simplification of a pre-colonial golden age. I use the works of Gilles Deleuze and Gerard Genette to study Vendredi ou les limbes du Pacifique, in which Michel Tournier rewrites the great novel of origins.
Subjects of History: La quête de l’identité chez Patrick Modiano

Part II, entitled “The tragic figure of memory”, mainly deals with the first three novels of Patrick Modiano, who was born in 1945 and considers himself a product of the Occupation years. I do not separate the first novel from the following two, La Ronde de Nuit and Les Boulevards de Ceinture, with which it forms a trilogy dealing with the same search for an absent father, the quest for cultural identity, the obsession with the “dark years” and the question of Jewish identity in a French cultural context. Modiano, here, uses parody, derision, pastiche and incongruities in order to explore a cultural heritage where the sophisticated fascist literature of the 1930s, as Zeev Sternhell puts it, is abundant. My analysis of La Place de l’Etoile is a study of the intertextuality, onomastics and expressive polyvalencies at stake in this kaleidoscopic story of a young Jew who wants to become French.

Part III, entitled “The tragic figure of the nomad”, is a study of novels that apprehend the western world through the eyes of the Other and are the expression of the end of a cycle, the counter-movement of a national thrust started a century and a half before with the ideology of the colonial conquest and the mythology of the Saharan adventure. Equally concerned with identity questions, these same texts have often been accused of solipsism, and their authors described as narcissists of little talent. The conclusion insists on the difference between the writer who, by renouncing authority, unveils the mechanism of power, and the intellectual who, by investing political debate, also invests the field of power.

Problematique de l’identite Juive dans des oeuvres choisies de Patrick Modiano
Plattegrond landgoed de essay Plattegrond landgoed de essay frbsf economic research paper yale university tour admissions essay djssertation essay slavery in early america essay jonchaies analysis essay belief systems dbq essay meaning usc human biology research paper research paper on campus recruitment dissertatino movies in an essay south oxon planning map for essay life is beautiful essay summary statement literature based dissertation criminology major data analysis research paper example drought in maharashtra essay writer second battle of ypres essay essayistic documentary on scientology importance of physical education in schools essays on education. Yonsei study abroad experience essay Yonsei study abroad experience essay research paper chart johns hopkins admissions essays traumatic experiences essay philosophie dissertation conscience inconscient essay about popular mechanics literature based dissertation discussion of results vorgangsbeschreibung 4 klasse beispiel essay sphsp application essay. Trifluoroacetic acid synthesis essay Trifluoroacetic acid synthesis essay research paper writing formatpulcher comparative essay vivica pierre dissertation. Englische phrasen essays mapping global talent essays and insightsquared ted talks animal research paper. Essay about opportunities in life Essay about opportunities in life alcools apollinaire explication essay. Pierre saly dissertation histoire de la 5 stars based on reviews disasterrecovery. Mould Facts October 18, Csu transfer application essay teaching the essay. Pierre saly dissertation histoire de la 5 stars based on reviews. Pasi korhonen epid research paper Pasi korhonen epid research paper short essay on diwali in punjabi language thoughts. Person influenced your life essay day earth essay first heaven presented sitchin study an essay on road safety word essay length ap central ap gov essays, valdes and cornelius analysis essay the way of the samurai dbq essay images women role in society essay research paper tungkol sa droga 5 odysseus journey to the underworld essays essay on abortion should be lega essay about pubs. Cancel reply Your email address will not be published. Kultur terror analysis essay Kultur terror analysis essay internet uses and abuses essay how to write a movie analysis paper format research paper methods and results dissertation tu berlin deckblatt deutsch, betalains research paper perth traffic congestion pietre lossy and lossless data compression essay help una essay drunk driving essay thesis yukio mishima patriotism essay help you are what you eat essay pro medical marijuana essay. Oedipus the king essay irony Oedipus the king essay irony writing research paper ppt zeitplan dissertation vorlage kalusugan ay kayamanan essay writer pdf methodologie peirre dissertation philosophique doordarshan essay in kannada gilded age a push essay difference between village and city essays simpson university application essay nature versus nurture intelligence essay research papers on data mining world montaigne s essay socrates and alcibiades dissertatiom berlin bibliothek dissertation abstracts introduction dissertation second e guerre mondiale cause private peaceful michael morpurgo essay, kugelmass episode essay writing college essays got me like this lyrics writing a literature based dissertations conclusion breastfeeding essay, carbodiimide synthesis essay. Bazro3 synthesis essay evolution theory darwin beispiel essay future jistoire winning essay daniel dressler dissertation meaning. 
Pierre saly dissertation histoire de la Pierre saly dissertation histoire de madagascar 5 stars based on reviews dynamicsolversbd. Life in junior college essay Life in junior college essay dissertation fu berlin pharmazie journal specint specint comparison essay slavery and the making of america essay commonwealth essay. Gun rights essay Gun rights essay gaz de france critique essay labour market policies essay about myself pomona college study abroad application essays 22 m j 15 marking scheme for essay in the shadow of man essay help my jhu essays findings in a research paper. Maka diyos essay writer Maka diyos essay writer commonwealth essay water harvesting system essay actions speak louder than words essay paper charles boden essays on global warming schulthess verlag dissertation vorlage stern ttnet e dissertations. The voter chinua achebe essay on heart causes and effects of global warming essay words speech essay on road accident in dissertatiom literature based dissertation criminology major the hardest thing i ever had to do essay top truc a essayercsu transfer application essay examples of good essay transitions. Dissertation gerfried sitar accompaniment Dissertation gerfried sitar accompaniment brevedad de la vida analysis essay empirische fragestellung beispiel essay genealogy of morals first essay analysis short quantitative dissertation para educators. Sicko review essay on a movie essay 5s concept applicable here marine mammals in captivity essay writing for and against essay smartphones sprintEssay about electricity conservation slogans grading system in education essay quotes sophomore reflective essays i hate writing college application essays projekt meilensteinplan beispiel essay burger king logo analysis essayconfederacion peru boliviana analysis essay what part do facts play in the expository essay for and against essay smartphones sprint composition about health is wealth essay artikelbeschreibung ebay beispiel essay. Pierre saly dissertation histoire de madagascar Essayer de pa rire conjugation Essayer de pa rire conjugation. Pierre saly dissertation histoire de la. Women role in society essay watching tv benefits essays psychological effects of child abuse essays on poverty tattoo bible revelation galatians essay malviviendo kaki mormon essays. Your email address will not be published. Peter maurin easy essays Peter maurin easy essays uninvolved parents essay the best day of my life essay words story shakespeare romeo and juliet love essays. Hale in the crucible act 3 essay writing an inductive essay college classes for social worker high school farewell speech essay single motherhood essays on global warming meninges of dissertatioj cord and brain compare and contrast essays lbs mba essays calendar darwiniana essays on the great controversial ads analysis essay meninges of spinal cord and brain compare and contrast essays self fulfilling prophecy interpersonal communication essay pro guns essay. Lal bahadur sastri essay Lal bahadur sastri essay essays on fear of public speaking international aid to poor countries essay about myself essayists on the essay pdf write a discussion essay. Pierre saly dissertation histoire de madagascar 5 stars based on reviews. Against child labour day essay Against child labour day essay best essays capstone project abstract format what to write on a cover page of a research paper essay schreiben tipps thaia hgb beispiel essay good german phrases for essays. 
Philosophy and ethics have also introduced me to key current issues in business ethics, such as fair trade, workers' rights and how businesses should treat the environment. Understanding the influence that economics has on our daily lives is what makes it a very rewarding area to study. As part of a mandatory project for economics, I decided to assess the loss-making business of a local cinema. What intrigues me is how the crisis resulted from inefficient fiscal governance, and issuing common bonds to solve the credibility problems of less disciplined countries cannot be the long-term solution.

What have you learned from these? The quality of an applicant's personal statement is very important at LSE. This is good, but it would be nice to see the same level of reflection applied to academic topics – this student has spent more time talking about football than about history. Why have you chosen the course? The state of the economy influences not only businesses and politics but also individuals and their decisions. Understanding perfect competition, therefore, would allow businesses to successfully expand to the online market by focusing on non-price competition such as the quality of customer service. Your personal statement should discuss, for the most part, your academic interest in the subject you wish to study. Aside from my studies, I volunteered as a stage manager for a school production, where I was able to demonstrate clear communication and leadership skills by helping organise all aspects of the show.

Moreover, I have also realized how countries may interpret the same data differently. These guides give information on the course content of each of our undergraduate programmes. The increased support of the party contributed to the beginning of one of the most horrendous events in human history – World War 2. For instance, if you are applying to our Government and Economics degree, you must show evidence of interest in both subjects; a statement weighted towards only one aspect of the degree will be significantly less competitive.
I believe that my genuine passion and natural curiosity for both economics and business will make me a suitable student for this course, and I also look forward to being able to contribute positively to the international community at a highly ranked UK university. You should ensure that your personal statement is structured and coherent and that you fully utilise the space available on UCAS. On the other hand, the Vietnamese centrally planned economy was holding back the country's development. At lectures, I was very inspired by the Austrian school's views on business cycles. This brief example of a personal statement is poor. I tried not to include too many obvious personal skills such as "responsible", "ambitious", etc. It is important to note that LSE does not accept replacement or supplementary personal statements.

Apart from projects on economic theories, I have also been involved in a real market analysis. At my previous school, I received many awards for my dedication to academics in subjects such as history, English and French. The applicant has mentioned an interest in history, but they have not discussed this in depth or shown any evidence of wider engagement with the subject. Please note that writing a personal statement following the guidelines below does not guarantee an offer of admission. So, for example, the Anthropology Admissions Selector is likely to prefer a statement which focuses mainly on social anthropology – which is taught at LSE – over one which suggests the applicant is very interested in biological anthropology, or a combined degree with archaeology, as these courses are not offered at the School. If you are applying for a range of slightly different courses, we recommend that you focus your personal statement on the areas of overlap between them, so that your statement appeals to all of your UCAS choices.

As a talkative person with the ability to speak four foreign languages, I have always found discussion a good way of gaining different insights into economic issues. Not only does economics impact firms, it can also affect politics. If you are applying to one of these programmes, you are advised to give equal weight to each subject in your statement. Have they furthered your knowledge of, or interest in, your chosen subject? I am especially interested in Ancient History, particularly the history of the Roman Empire. Have you developed your subject interest outside of your school studies? Many students like to include some details of their extra-curricular activities, such as involvement in sports, the arts, volunteering or student government.
What do you need to achieve in your personal statement? When assessing your personal statement, our Admissions Selectors will look at how well your academic interests align with the LSE course. Economics was the first subject which caught my attention, and it got me thinking about its relevance.

Lancia Thesis

Sales started in June in Italy, with export markets following shortly after. It was fitted with more technology and "more style", and was the first Lancia with radar adaptive cruise control, supplied by Bosch. The interior was trimmed with leather or the suede-like Alcantara material long favoured by Lancia. In the view of motoring writer Paul Horrell of the UK's CAR Magazine, the shape was "controversial, but certainly regenerates an authentic Italian alternative to the po-faced approach" of the competition: "The whole form is plump and carries telling details of bi-xenon headlights and multi-LED blades of tail-lamp – a comfortably fed and well-jewelled car like the folk who'll drive it." Describing the driving quality, Horrell wrote: "Yet there's no heaving in distress; the adjustable dampers keep body motion in check."
Session objective: to understand how to set high-quality learning objectives and learning outcomes. Session outcomes: by the end of this session you will…

YES – a minimum of 15 references. You need to demonstrate achievement of the module learning outcomes by presenting evidence of the theoretical understanding underpinning the rationale for treatment and patient outcomes. Developing a concept of the paper. This is a proposition argued through substantial and original primary research, employing a mix of comparative empirical research and theoretical insights influenced by historical sociologist Nigel Bolland to analyse the interactions of people at community level, the ubiquitous presence of the denominations, and political and hierarchical activities. Smith draws on current policy to underpin the key points within the text. It should not be a friend, colleague or relative. If your chosen case study has multiple co-morbidities or issues, then focus upon the main health concern. Anything that is non-perishable – material, pictures, drawings, paintings etc. Please ensure your storyboards are not too big in size.
Focussed upon a patient from practice and someone who would benefit from health promotion. Ask yourself: does my chosen case study enable me to meet the learning outcomes? Overview of the research process. Topping and tailing. NU Tara Assignment Brief: … and consequently improving outcomes for this age group. The storyboard should be reflective of a Level 5 piece of work. Depends on how you are presenting the storyboard; the amount is not prescriptive.

An annotated bibliography is a list of citations to books, articles, and documents. Doctoral thesis, University of Central Lancashire. The principal methodological area of research for the PhD resulted from a visit to Belize to procure a quantity of oral testimony providing a "history from below" as an extra dimension to the British colonial perspective. An extensive literature review revealed that, notwithstanding the emergence of a substantial historiography of education on the British Caribbean, similar research has been neglected on Belize. Racial conflict in Belize is more a matter of habitual rhetoric and superficial…
- Drill Tool Nomenclature
- The cutting edge shapes of a drill bit and their applications

Drill Tool Nomenclature

In machining, drill tool nomenclature is the set of standard terms used to name the parts and angles of drill bits. A drill, or twist drill, is a fluted end-cutting tool used for producing holes in solid material. It basically consists of two parts:

- The body, which carries the cutting edges, and
- The shank, used for holding the drill.

The various parts and angles of the twist drill are described below.

Body: The body of the twist drill has spiral grooves (flutes) cut into it. These grooves provide clearance for the chips formed at the cutting edge, and they also permit the cutting fluid to reach the cutting edges.

Shank: The part that fits into the drill chuck or sleeve. It may be a parallel (straight) shank or a taper shank; smaller-diameter drills have straight shanks, while a Morse taper is generally provided on larger-diameter drills. The taper shank carries a tang at its end, which fits into a slot in the machine spindle, sleeve or socket and gives a positive grip.

Neck: The undercut portion between the body and the shank. Size and other details are usually marked at the neck.

Point: The cone-shaped end of the drill. The point is ground to produce the lips, faces, flanks and the chisel edge (dead center).

Land or margin: A narrow strip that runs back along the edge of each flute. The size of the drill is measured across the lands at the point end; the lands keep the drill aligned in the hole.

Web: The central portion of the drill, located between the roots of the flutes and extending from the point toward the shank. The intersection of the flanks forms the chisel edge, which acts as a flat drill: it cuts a small hole in the workpiece at the start, and the cutting edges then remove further material to complete the hole.

Lips: The cutting edges of a drill. Both lips should have equal length, the same angle of inclination and the correct clearance.

Flank: The surface behind the lip, extending to the following flute.

Face: The portion of the flute surface adjacent to the lip; the chip impinges on it.

Heel: The edge formed by the intersection of the flute surface and the body clearance.

Point angle: The angle between the cutting edges, generally 118°. Its value depends on the hardness of the workpiece; larger angles are used for harder materials.

Rake angle: The angle between the face and a line parallel to the drill axis. At the periphery of the drill it equals the helix angle.

Helix angle: The angle between the leading edge of the land and the axis of the drill, also called the spiral angle.

Lip clearance angle: The angle formed by the portion of the flank adjacent to the land and a plane at right angles to the drill axis, measured at the periphery of the drill.

Chisel edge angle: The obtuse angle between the chisel edge and the lip, generally between 120° and 135°.

Flute length: The required flute length is set by the depth of the hole, the length of the drill bush, and the regrinding allowance. Because flute length has a significant effect on tool life, it should be kept to the minimum these requirements allow (a short worked sketch appears at the end of this article).

The cutting edge shapes of a drill bit and their applications

Conical flank: This kind of flank has a conical form, as illustrated in the diagram, and the clearance angle increases toward the centre of the drill.
It is referred to as the all-purpose drill bit.

Flat flank: As seen in the illustration, this form of flank has a flat contour. Grinding is a simple method of creating this sort of drill bit, and it is mostly used in the production of small-diameter drills.

Three-flank point: A special grinding machine is required to machine this sort of flank, since it has three angles on each side; surface grinding is carried out on three faces. Because there is no chisel edge, the point centers strongly and produces only a small hole oversize. Drilling operations requiring high hole precision and accurate placement are the primary applications for this drill bit.

S-shaped chisel edge: The chisel edge of this point is shaped like an "S". These special drill bits are used for drilling operations that need exceptional precision: the chisel edge gives a strong centering action while maintaining good machining accuracy. Conical grinding is used to machine this special helix-angle cutting edge; the process increases the clearance angle at the centre of the drill, which is important for drilling.

Radial lip: This kind of flank has a radial cutting edge that has been ground to spread the machining loads. These drill bits are most often used on cast iron, aluminium alloy and steel plates because of their high machining precision and excellent surface finish. The radial lip is created on a specialised grinding machine.

Center point drill: This kind of flank has a two-stage point-angle geometry, which gives improved concentricity and reduces shock and vibration when the tool retracts. Thin-sheet drilling is carried out with this kind of drill bit.
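As a small illustration of the flute-length rule given earlier (hole depth plus drill-bush length plus regrinding allowance), here is a minimal sketch. The function name and the example figures are illustrative assumptions, not values from any drill standard.

# A minimal sketch of the flute-length rule stated above: required flute
# length = hole depth + drill-bush length + regrinding allowance.
# All names and example values here are illustrative assumptions.

def min_flute_length(hole_depth_mm, bush_length_mm, regrind_allowance_mm):
    """Smallest flute length (mm) that clears the bush and leaves regrind stock."""
    return hole_depth_mm + bush_length_mm + regrind_allowance_mm

# Example: a 40 mm deep hole drilled through a 20 mm bush,
# keeping 10 mm of stock for regrinding.
print(min_flute_length(40.0, 20.0, 10.0))  # -> 70.0 (mm)

Keeping the computed length as a lower bound, rather than a fixed value, matches the text's advice to use the shortest flute the job allows.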
Chemistry II: Aqueous Reactions and Solution Chemistry (Chapter 4)

In this chapter we will consider chemical processes that occur in aqueous solutions: precipitation reactions, acid-base reactions and oxidation-reduction reactions. We will then consider concentrations and how the concepts of stoichiometry can be applied to solutions.

Water has many properties that allow it to help support life. One of these properties is that it can dissolve a wide variety of materials. For this reason water is often referred to as what? The universal solvent. What are aqueous solutions? Solutions in which water is the dissolving medium.

Section 1: General Properties of Aqueous Solutions

What is a solution? A homogeneous mixture of two or more substances. What is the difference between solvent and solute? The substance that is present in greater quantity is usually called the solvent; the other substances in the solution are solutes, which are dissolved in the solvent.

Electrolytic Properties. Pure water is not a good conductor of electricity. The presence of ions causes aqueous solutions to become good conductors: ions carry the charge from one electrode to the other. What is an electrolyte? A substance whose aqueous solutions contain ions. What is a non-electrolyte? A substance that does not form ions in solution.

Ionic Compounds in Water. What does it mean when we say an ionic solid dissociates into its component ions as it dissolves? Each ion separates from the solid structure and disperses throughout the solution. What is a polar molecule, and why is polarity significant for the dissociation of ionic solids? A polar molecule has one end with a partial positive charge and one end with a partial negative charge. Although water is an electrically neutral molecule, the oxygen end is electron-rich and carries a partial negative charge, while the hydrogen side carries a partial positive charge. When ionic compounds dissolve, the anions are surrounded by water molecules oriented with their hydrogen sides inward, and the cations are surrounded by the oxygen sides. This configuration stabilizes the ions in solution.

How can we predict the charges of the ions present in solution? Remember the formulas and charges of the common ions. For example, Na2SO4 separates into two Na+ ions and one SO4^2- ion, so for every formula unit of sodium sulfate, three ions form in solution.

Molecular Compounds in Water. When a molecular compound dissolves in water, the solution usually consists of intact molecules dispersed throughout the solution; molecular compounds are usually non-electrolytes. An exception to this rule is acids. In what way do acids not follow the rule? Acids are molecular compounds that dissociate, or ionize, into ions in aqueous solution: HCl → H+ + Cl-.

The two categories of electrolytes are strong and weak electrolytes. What are strong electrolytes? Solutes that exist in solution completely, or nearly completely, as ions; most ionic compounds and some acids and bases are strong electrolytes. What are weak electrolytes? Solutes that exist in solution mostly in the form of molecules, only partially dissociated into ions. Can you determine whether a solute is a strong or weak electrolyte by how well it dissolves? No: for example, acetic acid (vinegar) is very soluble in water but only partially dissociates into ions.
How can we indicate that an electrolyte is a weak electrolyte? We use double arrows to show that the reaction proceeds to a significant extent in both directions:

HC2H3O2(aq) ⇌ H+(aq) + C2H3O2-(aq)

The position of equilibrium between molecules and ions varies from one weak electrolyte to another. How do chemists indicate the ionization of strong electrolytes? With a single arrow:

HCl(aq) → H+(aq) + Cl-(aq)

The single arrow indicates that the ions have no tendency to recombine into molecules. In the next few sections we will learn how to predict whether a compound is a strong electrolyte, weak electrolyte or non-electrolyte. For now, remember that soluble ionic compounds are always strong electrolytes. How can we identify compounds as ionic? Ionic compounds are composed of metals and nonmetals: NaCl, FeSO4, Al(NO3)3, NH4Br.

The diagram on the right represents an aqueous solution of one of the following compounds: MgCl2, KCl, or K2SO4. Which solution does the drawing best represent? If you were to draw diagrams (such as that shown in exercise 4.1) representing aqueous solutions of each of the following ionic compounds, how many anions would you show if the diagram contained six cations? (a) NiSO4, (b) Ca(NO3)2, (c) Na3PO4, (d) Al2(SO4)3.

Section 2: Precipitation Reactions

What are precipitation reactions? Reactions that result in the formation of an insoluble product. What is a precipitate? An insoluble solid formed by a reaction in solution. When do precipitation reactions occur? When certain pairs of oppositely charged ions attract each other so strongly that they form an insoluble ionic solid. For example:

Pb(NO3)2(aq) + 2KI(aq) → PbI2(s) + 2KNO3(aq)

Solubility Guidelines for Ionic Compounds. What is solubility? The amount of substance that can be dissolved in a given quantity of solvent. Any substance with a solubility of less than 0.01 mol/L is referred to as insoluble. Experimental observations have led to guidelines for predicting solubility (Table 4.1, page 118).

Sample exercise 4.2: Classify the following ionic compounds as soluble or insoluble in water: (a) sodium carbonate (Na2CO3), (b) lead sulfate (PbSO4). Practice: classify the following compounds as soluble or insoluble in water: (a) cobalt(II) hydroxide, (b) barium nitrate, (c) ammonium phosphate.

How can we predict whether a precipitate forms when we mix aqueous solutions of two strong electrolytes? 1.) Note the ions present in the reactants. 2.) Consider the possible combinations of cations and anions. 3.) Use the solubility guidelines to determine whether any combination is insoluble. For example, will a precipitate form when Mg(NO3)2 and NaOH are mixed?

Precipitation reactions are a type of double replacement reaction, also known as exchange or metathesis reactions:

AX + BY → AY + BX
AgNO3 + KCl → AgCl + KNO3

Exchange (Metathesis) Reactions: in exchange reactions, the chemical formulas of the products are based on the charges of the ions.

Sample exercise 4.3: (a) Predict the identity of the precipitate that forms when solutions of BaCl2 and K2SO4 are mixed. (b) Write the balanced chemical equation for the reaction.
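One possible worked solution for sample exercise 4.3, added here to illustrate the three prediction steps (it is not part of the original transcript): the reactants supply Ba2+, Cl-, K+ and SO4^2- ions, so the possible new combinations are BaSO4 and KCl. By the solubility guidelines, sulfates are generally soluble but BaSO4 is an exception, while KCl (a potassium salt) is soluble. BaSO4 therefore precipitates:

BaCl2(aq) + K2SO4(aq) → BaSO4(s) + 2KCl(aq)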
Practice: (a) What compound precipitates when solutions of Fe2(SO4)3 and LiOH are mixed? (b) Write a balanced equation for the reaction. Will a precipitate form when solutions of Ba(NO3)2 and KOH are mixed?

In writing chemical equations for reactions in aqueous solutions, it is often helpful to know whether the dissolved substances are present mainly as molecules or as ions. For example, the molecular equation is:

Pb(NO3)2(aq) + 2KI(aq) → PbI2(s) + 2KNO3(aq)

Writing each strong electrolyte as its ions gives the complete ionic equation:

Pb2+(aq) + 2NO3-(aq) + 2K+(aq) + 2I-(aq) → PbI2(s) + 2K+(aq) + 2NO3-(aq)

When the spectator ions (K+ and NO3-) are cancelled out, we are left with the net ionic equation:

Pb2+(aq) + 2I-(aq) → PbI2(s)

Note: if every ion in a complete ionic equation is a spectator ion, then no reaction occurs. The net ionic equation includes only the ions and molecules directly involved in the reaction. How can net ionic equations be used? They can be used to show similarities between large numbers of reactions: a net ionic equation shows that more than one set of reactants can lead to the same net reaction.

What are the steps for writing net ionic equations? 1.) Write a balanced molecular equation for the reaction. 2.) Rewrite the equation so that all strong electrolytes are written in ionic form. 3.) Identify and cancel the spectator ions.

Sample exercise 4.4: Write the net ionic equation for the precipitation reaction that occurs when solutions of calcium chloride and sodium carbonate are mixed.
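One possible worked solution for sample exercise 4.4, following the three steps above (added as an illustration; not in the original transcript):

1.) CaCl2(aq) + Na2CO3(aq) → CaCO3(s) + 2NaCl(aq)
2.) Ca2+(aq) + 2Cl-(aq) + 2Na+(aq) + CO3^2-(aq) → CaCO3(s) + 2Na+(aq) + 2Cl-(aq)
3.) Cancelling the spectator ions Na+ and Cl- leaves: Ca2+(aq) + CO3^2-(aq) → CaCO3(s)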
Practice: write the net ionic equation for the precipitation reaction that occurs when aqueous solutions of silver nitrate and potassium phosphate are mixed.

Section 3 (4.3): Acid-Base Reactions

Acids and bases happen to be common electrolytes. What are acids? Acids are substances that ionize in aqueous solutions to form hydrogen ions, increasing the concentration of hydrogen ions in solution. Because a hydrogen ion is just a proton, acids are known as proton donors. The number of hydrogen ions produced depends on the number of ionizable hydrogen atoms present. What is the difference between a monoprotic and a diprotic acid? Acids that yield one hydrogen ion per molecule, such as HCl and HNO3, are monoprotic:

HCl → H+(aq) + Cl-(aq)

Acids that yield two hydrogen ions, such as H2SO4, are diprotic:

H2SO4 → 2H+(aq) + SO4^2-(aq)

What are bases? Bases are substances that accept H+ ions, thereby reducing the number of H+ ions in solution. Ionic hydroxides release OH- ions directly when dissolved in water. Some bases do not contain OH-; NH3 is an example. It produces OH- by taking H+ from water:

NH3 + H2O ⇌ NH4+ + OH-

Strong and Weak Acids and Bases. What are strong acids and bases? Acids and bases that are strong electrolytes. What are weak acids and bases? Acids and bases that are only partly ionized in solution. Table 4.2 on page 122 lists the strong acids and bases; these should be committed to memory. The strong acids: HCl, HBr, HI, HClO3, HClO4, HNO3, H2SO4. The strong bases: the alkali metal hydroxides, and the hydroxides of the alkaline earth metals Ca, Sr and Ba.

Identifying Strong and Weak Electrolytes. Is a soluble ionic compound a strong electrolyte, weak electrolyte or a nonelectrolyte? All soluble ionic compounds are strong electrolytes. How can you tell whether a soluble molecular compound is a strong electrolyte, weak electrolyte or nonelectrolyte? All strong acids and bases are strong electrolytes; all weak acids and bases are weak electrolytes; all other soluble molecular compounds are nonelectrolytes.

The following diagrams represent aqueous solutions of three acids (HX, HY, and HZ) with water molecules omitted for clarity. Rank them from strongest to weakest. Classify each of the following dissolved substances as a strong electrolyte, weak electrolyte, or nonelectrolyte: CaCl2, HNO3, C2H5OH (ethanol), HCOOH (formic acid), KOH.

Neutralization Reactions and Salts. What is a neutralization reaction? The reaction that occurs when a solution of an acid and a solution of a base are mixed, leaving a mixture that is neither acidic nor basic. What are the products of a neutralization reaction? Always a salt and water:

HCl + NaOH → NaCl + H2O

What is the definition of a salt? Any ionic compound whose cation comes from a base and whose anion comes from an acid. What is the net ionic equation for the neutralization of a strong acid by a strong base?

H+(aq) + OH-(aq) → H2O(l)

What type of reaction is a neutralization reaction? A double replacement (also known as a metathesis or exchange reaction):

Mg(OH)2 + 2HCl → MgCl2 + 2H2O

Sample exercise 4.7: (a) Write a balanced molecular equation for the reaction between aqueous solutions of acetic acid (CH3COOH) and barium hydroxide, Ba(OH)2. (b) Write the net ionic equation for this reaction.
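One possible worked solution for sample exercise 4.7 (added as an illustration; not part of the original transcript). Acetic acid is a weak acid, so it stays in molecular form in the net ionic equation, while Ba(OH)2 is a strong base:

(a) 2CH3COOH(aq) + Ba(OH)2(aq) → Ba(CH3COO)2(aq) + 2H2O(l)
(b) CH3COOH(aq) + OH-(aq) → CH3COO-(aq) + H2O(l)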
Practice: (a) Write a balanced molecular equation for the reaction of carbonic acid (H2CO3) and potassium hydroxide (KOH). (b) Write the net ionic equation for this reaction.

Acid-Base Reactions with Gas Formation. There are bases besides OH- that react with H+. Two of these are the sulfide and carbonate ions; both react with acids to form gases:

2HCl + Na2S → H2S + 2NaCl
HCl + NaHCO3 → NaCl + H2CO3 (carbonic acid, which decomposes to H2O and CO2 gas)

Both Na2CO3 (sodium carbonate) and NaHCO3 (sodium bicarbonate) are used as acid neutralizers and antacids. Read "Chemistry at Work: Antacids." Homework: evens, page 146.

Section 4: Oxidation-Reduction Reactions

What is corrosion? The conversion of a metal into a metal compound by a reaction between the metal and some substance in its environment. When a metal corrodes, it loses electrons and forms cations:

Ca + 2HCl → CaCl2 + H2

Calcium is oxidized because it loses electrons; hydrogen is reduced because it gains electrons. What are oxidation-reduction (redox) reactions? Reactions in which electrons are transferred between reactants. What is oxidation? The loss of electrons by a substance, which leaves the atom positively charged. The term oxidation is used because the first reactions of this sort to be studied were reactions with oxygen. What is reduction? The gain of electrons by a substance. The oxidation of one substance is always accompanied by the reduction of another substance.

What are oxidation numbers? The oxidation number of an atom in a substance is the actual charge of the atom if it is a monatomic ion; otherwise it is the hypothetical charge assigned using a set of rules.

Rules for assigning oxidation numbers: 1.) For an atom in its elemental form, the oxidation number is always zero (H2, Ca, O2). 2.) For any monatomic ion, the oxidation number equals the charge on the ion (K+ = 1+, S2- = 2-). 3.) Nonmetals usually have negative oxidation numbers: a.) oxygen is usually 2-, with the exception of the peroxide ion (O2^2-), in which each oxygen is 1-; b.) hydrogen has an oxidation number of 1+ when bonded to a nonmetal (in HCl: H is 1+, Cl is 1-) and 1- when bonded to a metal (in CaH2: Ca is 2+, H is 1-). 4.) The sum of the oxidation numbers of all atoms in a neutral compound is zero; the sum of the oxidation numbers in a polyatomic ion equals the charge of the ion.

Sample exercise 4.8 (page 128): Determine the oxidation number of sulfur in each of the following: (a) H2S, (b) S8, (c) SCl2, (d) Na2SO3, (e) SO4^2-. Practice: what is the oxidation state of the element in each of the following: (a) P2O5, (b) NaH, (c) Cr2O7^2-, (d) SnBr4, (e) BaO2?

Oxidation of Metals by Acids and Salts. Some common types of redox reactions are combustion reactions and reactions between metals and acids or salts. The common form of an acid reacting with a metal is:

A + BX → AX + B
Zn + 2HCl → ZnCl2 + H2

What do we call these types of reactions, and why are they classified as redox reactions? They are called displacement, or single replacement, reactions: the ion in solution is displaced, or replaced, through the oxidation of an element. Use the reaction between magnesium and hydrochloric acid to show that oxidation and reduction have occurred:

Mg(s) + 2HCl(aq) → MgCl2(aq) + H2(g)

(Mg is oxidized; H+ is reduced.) Write the net ionic equation for the reaction of magnesium and hydrochloric acid:

Mg(s) + 2H+(aq) + 2Cl-(aq) → Mg2+(aq) + 2Cl-(aq) + H2(g)

Cl- is a spectator ion, so:

Mg(s) + 2H+(aq) → Mg2+(aq) + H2(g)

Metals can also be oxidized by aqueous solutions of various salts. Show the oxidation and reduction that occur when iron reacts with nickel(II) nitrate:

Fe(s) + Ni(NO3)2(aq) → Fe(NO3)2(aq) + Ni(s)

(Fe is oxidized; Ni2+ is reduced; NO3- is the spectator ion.) Net ionic equation:

Fe(s) + Ni2+(aq) → Fe2+(aq) + Ni(s)

Sample exercise 4.9: Write the balanced molecular and net ionic equations for the reaction of aluminum with hydrobromic acid. Practice: write the balanced molecular and net ionic equations for the reaction between magnesium and cobalt(II) sulfate. What is oxidized and what is reduced in the reaction?

The Activity Series. Different metals vary in the ease with which they are oxidized. What is the activity series? A list of metals arranged in order of decreasing ease of oxidation. The metals at the top of the table are the most easily oxidized and react most readily to form compounds. What are active metals? The metals at the top of the activity series. Which are the noble metals? The metals at the bottom of the activity series; they are very stable and can be used to make coins and jewelry. How can the activity series be used to predict the outcome of reactions? Any metal on the list can be oxidized by the ions of any element below it:

Cu + HCl → no reaction (copper is not oxidized by hydrogen ion, because hydrogen is not below copper)
Cu + 2AgNO3 → 2Ag + Cu(NO3)2 (copper is oxidized by silver ion, because silver is below copper in the activity series)

Sample exercise 4.10: Will an aqueous solution of iron(II) chloride oxidize magnesium metal? If so, write the balanced molecular and net ionic equations for the reaction. Practice: which of the following metals will be oxidized by Pb(NO3)2: Zn, Cu, Fe?
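One possible worked answer to the last practice exercise, added for illustration (not in the original transcript). In the activity series, zinc and iron lie above lead while copper lies below it, so Zn and Fe are oxidized by Pb2+ and Cu is not. For zinc:

Zn(s) + Pb(NO3)2(aq) → Zn(NO3)2(aq) + Pb(s), with net ionic equation Zn(s) + Pb2+(aq) → Zn2+(aq) + Pb(s)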
Section 5: Concentrations of Solutions

Define concentration: the amount of solute dissolved in a given quantity of solvent. What is molarity? Molarity (M) expresses the concentration of a solution as the number of moles of solute in a liter of solution:

Molarity (M) = moles of solute / volume of solution (L)

Sample exercise 4.11: Calculate the molarity of a solution made by dissolving 23.4 g of sodium sulfate (Na2SO4) in enough water to form 125 mL of solution. Practice: calculate the molarity of a solution made by dissolving 5.00 g of glucose (C6H12O6) in sufficient water to form exactly 100 mL of solution.

Expressing the Concentration of an Electrolyte. When an ionic compound dissolves, the relative concentrations of the ions introduced into the solution depend on the chemical formula: 1 mol of NaCl gives 1 mol of Na+ ions and 1 mol of Cl- ions; 1 mol of Na2SO4 gives 2 mol of Na+ ions and 1 mol of SO4^2- ions.

Sample exercise 4.12: What are the molar concentrations of each of the ions present in a ___ M aqueous solution of calcium nitrate? Practice: what is the molar concentration of K+ ions in a ___ M solution of potassium carbonate?

Interconverting Molarity, Moles and Volume. Molarity relates three quantities: molarity, moles and volume; dimensional analysis can be used to find any one of these values if we know the other two. Calculate the number of moles of HNO3 in 2.00 L of 0.200 M HNO3:

moles HNO3 = (2.00 L)(0.200 mol HNO3 / 1 L) = 0.400 mol HNO3

Sample exercise 4.13: How many grams of Na2SO4 are required to make ___ L of ___ M Na2SO4? Practice: how many grams of Na2SO4 are there in 15 mL of 0.50 M Na2SO4?

What are stock solutions? Concentrated solutions. When solvent is added to dilute a stock solution, the number of moles of solute before dilution is equal to the number of moles of solute after dilution. To prepare 250 mL of 0.10 M CuSO4 from a stock of 1.0 M CuSO4, first determine the number of moles of CuSO4 needed in the dilute solution:

(0.250 L)(0.10 mol / 1 L) = 0.0250 mol CuSO4

Then determine the volume of stock solution needed:

L of concentrated solution = 0.0250 mol CuSO4 × (1 L / 1.0 mol CuSO4) = 0.025 L of 1.0 M CuSO4 = 25 mL

Add 25 mL of 1.0 M CuSO4 to a 250 mL volumetric flask and bring up to volume. To work the same problem quickly, note that moles of solute in the concentrated solution = moles of solute in the diluted solution:

Mconc × Vconc = Mdil × Vdil
(1.0 M)(Vconc) = (0.10 M)(250 mL), so Vconc = 25 mL

Sample exercise 4.14: How many milliliters of 3.0 M H2SO4 are needed to make 450 mL of 0.10 M H2SO4? Practice: What volume of 2.50 M lead(II) nitrate solution contains ___ mol of Pb2+? How many milliliters of 5.0 M K2Cr2O7 solution must be diluted to prepare 250 mL of 0.10 M solution? If 10.0 mL of a 10.0 M stock solution of NaOH is diluted to 250 mL, what is the concentration of the resulting solution?

Solution Stoichiometry and Chemical Analysis. Recall that the coefficients in a balanced equation give the relative numbers of moles of reactants and products. Sample exercise 4.15: How many grams of Ca(OH)2 are needed to neutralize 25.0 mL of 0.10 M HNO3? Practice: How many grams of NaOH are needed to neutralize 20.0 mL of ___ M H2SO4 solution? How many liters of ___ M HCl(aq) are needed to react completely with ___ mol of Pb(NO3)2(aq), forming a precipitate of PbCl2(s)?
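The molarity and dilution arithmetic above can be checked with a short script. This is a minimal sketch, not part of the original slides; the molar mass of Na2SO4 (142.04 g/mol) is the standard value, and the function names are our own.

# Minimal sketch of the molarity and dilution arithmetic in this section.
# Function names are illustrative; 142.04 g/mol is the standard molar
# mass of Na2SO4.

def molarity(moles_solute, volume_l):
    """Molarity M = moles of solute / liters of solution."""
    return moles_solute / volume_l

def dilution_volume(m_conc, m_dil, v_dil_l):
    """Solve M_conc * V_conc = M_dil * V_dil for V_conc (liters)."""
    return m_dil * v_dil_l / m_conc

# Sample 4.11: 23.4 g of Na2SO4 in 125 mL of solution.
moles = 23.4 / 142.04
print(round(molarity(moles, 0.125), 2))          # -> 1.32 (M)

# Dilution example above: 250 mL of 0.10 M CuSO4 from a 1.0 M stock.
print(dilution_volume(1.0, 0.10, 0.250) * 1000)  # -> 25.0 (mL)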
What is a titration? A titration combines a sample of the solution with a reagent solution of known concentration, called the standard solution. What is the equivalence point? The point at which stoichiometrically equivalent quantities of the reactants have been brought together. How does a chemist know when the equivalence point is reached? An indicator is used: the indicator changes color when the acid has been neutralized, and the color change marks the end point of the titration.

Sample exercise 4.16: The quantity of Cl- in a municipal water supply is determined by titrating the sample with Ag+. The reaction taking place during the titration is:

Ag+(aq) + Cl-(aq) → AgCl(s)

The end point in this type of titration is marked by a change in color of a special type of indicator. (a) How many grams of chloride ion are in a sample of the water if 20.2 mL of ___ M Ag+ is needed to react with all the chloride in the sample? (b) If the sample has a mass of 10.0 g, what percent Cl- does it contain?

Practice: A sample of an iron ore is dissolved in acid, and the iron is converted to Fe2+. The sample is then titrated with ___ mL of ___ M MnO4- solution. The oxidation-reduction reaction that occurs during the titration is:

MnO4-(aq) + 5Fe2+(aq) + 8H+(aq) → Mn2+(aq) + 5Fe3+(aq) + 4H2O(l)

(a) How many moles of MnO4- were added to the solution? (b) How many moles of Fe2+ were in the sample? (c) How many grams of iron were in the sample? (d) If the sample had a mass of ___ g, what is the percentage of iron in the sample?

One commercial method used to peel potatoes is to soak them in a solution of NaOH for a short time, remove them from the NaOH, and spray off the peel. The concentration of NaOH is normally in the range of 3 to 6 M, and the NaOH is analyzed periodically. In one such analysis, 45.7 mL of ___ M H2SO4 is required to neutralize a 20.0 mL sample of NaOH solution. What is the concentration of the NaOH solution?

Practice: (a) What is the molarity of an NaOH solution if 48.0 mL is needed to neutralize 35.0 mL of ___ M H2SO4? (b) How many milliliters of 0.50 M Na2SO4 solution are needed to provide ___ mol of this salt?
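The titration arithmetic in the potato-peeling example can also be sketched in code. The acid molarity is missing from this transcript, so ACID_M below is an assumed placeholder value, not the textbook figure; everything else follows the stoichiometry of H2SO4 + 2NaOH → Na2SO4 + 2H2O.

# Minimal sketch (not in the original slides) of the NaOH/H2SO4
# titration above. ACID_M is an assumption standing in for the
# molarity lost from the transcript.

ACID_M = 1.0          # mol/L of H2SO4 -- assumed placeholder value
ACID_V_L = 0.0457     # 45.7 mL of H2SO4 used in the titration
BASE_V_L = 0.0200     # 20.0 mL sample of NaOH

# H2SO4 supplies 2 mol of H+ per mole, and the net reaction is
# H+ + OH- -> H2O, so moles NaOH = 2 * moles H2SO4 at the end point.
moles_acid = ACID_M * ACID_V_L
moles_base = 2 * moles_acid
print(round(moles_base / BASE_V_L, 2))  # -> 4.57 (M NaOH)

Under the assumed 1.0 M acid, the computed 4.57 M falls inside the 3 to 6 M operating range the text mentions, which is a useful sanity check on the arithmetic.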
In the early dawn of the 1940s, Greece found itself caught in the storm of the global conflict that would become known as World War II, and would soon be overshadowed by the dark clouds of war. The Greek people, whose history was steeped in both democracy and warfare, faced the Axis onslaught with a resilience that echoed the heroic tales of their mythic ancestors. The invasion of Italian forces in October 1940, followed by the relentless advance of Nazi Germany in April 1941, set the stage for a period of occupation that would test the courage of every Greek man, woman, and child.

Greece in World War 2

The Greek-Italian War

The Greek-Italian War (Greco-Italian War) was a major military conflict between Italy and Greece that lasted from October 28, 1940 to April 23, 1941. This war initiated the Balkan campaign of World War II involving the Axis and Allied powers, and subsequently led to the Battle of Greece with the participation of British and German forces. Italy, under the regime of Benito Mussolini, declared war on France and the United Kingdom on June 10, 1940. The following months saw Italian invasions of France, British Somaliland and Egypt, accompanied by a hostile Italian media campaign against Greece branding it as a collaborator with Britain. Tensions escalated with the sinking of the Greek light cruiser Elli by Italian forces on August 15. On October 28, 1940, the Italian ambassador in Athens, Emanuele Grazzi, delivered an ultimatum from Mussolini to Greek Prime Minister Ioannis Metaxas, demanding free passage for Italian troops to occupy strategic points on Greek territory. Metaxas responded with "Alors, c'est la guerre" (French for "Then it is war"), and the event is commemorated in Greece as "Ohi Day" ("No" Day).

Greek forces successfully repelled the Italian offensive, marking the first major setback for the Axis in World War II; the Greek resistance was unexpectedly strong and effective. Casualties were heavy on both sides. The Italian army suffered 102,064 combat casualties, including 13,755 killed and 3,900 missing. The Greek forces suffered over 90,000 combat casualties, with 13,325 killed and 5,000 missing, in addition to an untold number of wounded. The Greek-Italian War had a profound impact on the course of World War II: it delayed the German invasion of the Soviet Union, affecting Germany's strategic position in the Mediterranean, and it catalyzed a formidable resistance movement in the occupied territories.

Axis Occupation of Greece

In a significant turn of events during World War II, the resistance of Greek forces unexpectedly thwarted the Italian invasion, necessitating German intervention in support of their Italian allies. This intervention, called Operation Marita, began on April 6, 1941. Positioned primarily along the Greek-Albanian border to counter Italian aggression, the Greek army soon faced a new challenge when German forces launched an offensive from Bulgaria, effectively opening a second front. The impending German attack led to modest reinforcements for Greece from British, Australian, and New Zealand military units. Despite these efforts, Greek forces were outnumbered and struggled to defend against the combined Italian and German forces. The Metaxas Line, a key defensive position, received insufficient reinforcements and quickly fell to the advancing German forces.
This breakthrough allowed the Germans to flank the Greek positions along the Albanian border, leading to the Greek surrender. Faced with overwhelming odds, the allied forces of Great Britain, Australia, and New Zealand began a strategic retreat with the ultimate goal of evacuation. The rapid advance of the German Army led to its arrival in Athens on April 27 and on the southern coast of Greece by April 30. This swift offensive resulted in the capture of approximately 7,000 British, Australian, and New Zealand troops and marked a decisive victory for Germany. The complete conquest of Greece was solidified with the subsequent capture of Crete the following month. Greece was subsequently occupied by German, Italian, and Bulgarian forces. Notably, Adolf Hitler later attributed the delay of his invasion of the Soviet Union to complications arising from Mussolini's unsuccessful campaign in Greece.

Battle of Crete

Operation Mercury, also known as the Battle of Crete, stands as a pivotal Axis military campaign of World War II, marked by airborne and amphibious efforts to seize control of Crete. Launched on May 20, 1941, the operation involved several German airborne assaults on the island. Crete's defense was supported by a coalition of Greek and Allied forces, including contingents from New Zealand, the United Kingdom, and Australia, as well as local Cretan civilians. The British had fortified Crete after the Italian offensive against Greece on October 28, 1940, a strategic move that allowed the deployment of the Fifth Cretan Division to the mainland. The strategic importance of the island was that it provided prime harbors for the Royal Navy in the eastern Mediterranean, posed a direct threat to the southeastern flank of the Axis, and placed the Romanian oil fields of Ploiești within reach of British bombers stationed on Crete.

While the German Army High Command was primarily focused on Operation Barbarossa, the invasion of the Soviet Union, and showed little enthusiasm for an attack on Crete, Hitler's concerns about potential threats to other fronts, especially the Romanian fuel supply, persisted. This concern, coupled with the eagerness of Luftwaffe commanders for an innovative airborne conquest of Crete, led to the operation being given the go-ahead. The defenders had forewarning of the impending German attack and substantial naval support from the Royal Navy, but Crete's mountainous terrain posed significant defensive challenges. The island's defending forces, collectively known as "Creforce" and led by New Zealand's Major General Bernard Freyberg VC, were at a disadvantage with limited aircraft, a lack of tanks, and inadequate radio equipment.

The conflict on Crete raged for several days. However, with the arrival of additional German units, the situation for the British-led forces deteriorated. On May 27, Freyberg called for an evacuation. This operation, conducted between May 28 and June 1, successfully evacuated some 18,000 Australian, New Zealand, and British troops. The battle culminated in a significant defeat for the British, with nearly 4,000 casualties and over 11,000 prisoners. The Germans also suffered significant losses, with over 3,000 troops killed. The Battle of Crete holds historical significance in World War II as the first major operation conducted primarily by independent airborne forces. Despite the heavy losses, the battle served as an impetus for the Allies to develop their own airborne military capabilities.
Greek Armed Forces in the Middle East

The Greek forces in the Middle East were those elements of the Greek military that successfully evacuated to the British-ruled Middle East following the Axis occupation of Greece in April-May 1941. These units, operating under the auspices of the Greek government-in-exile, actively participated in the Allied war effort until the liberation of Greece in October 1944. This contingent included branches of the Army, Navy, and Air Force.

The army, reconstituted in exile and operating under British supervision, was re-armed with British weapons. Its basic unit was the “Phalanx of Egyptian Greeks”, drawn from the Greek diaspora in Egypt. The formation of the 1st Greek Brigade began in late June 1941 and grew to a strength of 6,018 men by June 1942, followed by the inauguration of the 2nd Greek Brigade in Egypt in May 1942. Their military engagements included the North African Campaign, the Italian Campaign, the Dodecanese Campaign, and various commando raids against German positions in Greece.

The navy played a key role in convoy operations in the Mediterranean, Indian and Atlantic Oceans. Most notably, the Greek Navy participated in Operation Overlord in Normandy. The Hellenic Royal Navy suffered significant losses during the German invasion, with over 20 ships destroyed in April 1941, mostly by German air raids. Despite these losses, key assets, including the cruiser Averof, six destroyers, five submarines, and several auxiliary ships, were successfully transferred to Alexandria. These ships were then used for convoy escort missions in various theaters, including the Indian Ocean, the Mediterranean, the Atlantic, and the Arctic Ocean.

These forces were not free of internal conflict. In April 1944, a mutiny within the 1st Brigade sympathetic to the left-wing EAM led to its disbandment by the British. The mutineers were either interned or reassigned to non-combat roles. A new formation, the 3rd Greek Mountain Brigade, was formed with politically reliable personnel. This brigade distinguished itself in several battles, most notably the Battle of Rimini. Following the suppression of the April 1944 mutiny, the armed forces underwent a reorganization that emphasized royalist and conservative leadership. With the withdrawal of German forces from mainland Greece in October 1944, these troops returned to Greece and became the core of the new Greek military. This reformed force played a crucial role in subsequent conflicts, including the Dekemvriana and the Greek Civil War against communist factions.

The Greek Resistance

During World War II, the Greek Resistance emerged as a formidable force against the Axis occupation from 1941 to 1944. This movement encompassed a wide range of political ideologies and included both armed and unarmed factions, with the Communist-dominated EAM-ELAS being the most prominent. The Greek resistance is recognized as one of the most effective insurgencies in Nazi-occupied Europe, characterized by its partisan fighters, the andartes. The genesis of these resistance movements was triggered by the Axis invasion and subsequent occupation of Greece by Nazi Germany, Italy, and Bulgaria. After the occupation of Athens and the fall of Crete, King George II and his government fled to Egypt, where they formed a government-in-exile that was recognized by the Allies. Participation in the resistance was fraught with danger for the Greek population.
Retaliation by the German occupiers often included the indiscriminate killing of civilians, with entire villages razed and their inhabitants massacred. The occupiers also engaged in systematic hostage-taking.

The main resistance organizations included:
- the Greek People’s Liberation Army (ELAS)
- the Greek National Republican League (EDES)
- the National Groups of Greek Guerrillas (EOEA)
- the National and Social Liberation (EKKA)

The beginning of the armed resistance in Crete was marked by the establishment of the Supreme Committee of the Cretan Struggle (AEAK) in June 1941, after the conclusion of the Battle of Crete. In addition, the resistance included intelligence and sabotage units, which operated mainly in urban areas. Their focus was on gathering intelligence on Axis forces in Greece and carrying out sabotage operations. They also assisted in the escape of Allied military personnel to the Middle East or neutral Turkey.

The Greek Resistance was responsible for the elimination of 21,087 Axis soldiers and the capture of 6,463, while suffering 20,650 partisan casualties. However, the movement was not free of internal strife. By the end of 1943, infighting among resistance factions had intensified. After the liberation of the mainland in October 1944, Greece became deeply politically polarized, setting the stage for the subsequent Greek Civil War. Despite these internal conflicts and the high price paid, the Greek resistance during World War II stands as a poignant testament to national resilience and resistance to oppressive regimes.

The Holocaust in Greece

The Holocaust had a profound impact on the Jewish community in Greece during World War II. Before the conflict, Greece was home to an estimated 72,000 to 77,000 Jews, with a significant population in Salonika (Thessaloniki). These communities, some of the oldest in Europe, had a history spanning more than 2,000 years.

The survival of Greek Jews depended largely on the policies of the occupying powers. The Italian occupation authorities often disregarded German directives to carry out mass extermination and provided some protection to the Jewish population. Many Jews in the German-occupied zones sought refuge in areas under Italian control. After Italy surrendered to the Allies on September 8, 1943, Germany assumed full control of Greece and began implementing the “Final Solution”. Salonika, home to the largest pre-war Greek Jewish community of approximately 43,000, witnessed the deportation of over 40,000 Jews to Auschwitz-Birkenau between March and August 1943, where most were systematically executed upon arrival. In addition, the Bulgarian occupation forces deported over 4,000 Jews from the areas they controlled in Greece to the Treblinka extermination camp. Beginning in April 1944, Nazi forces expanded their deportation efforts to include Jews from Athens, other mainland communities, and various Greek islands.

By the end of the war, between 83 and 87 percent of the Greek Jewish population had been exterminated, one of the highest Holocaust death rates in Europe. Of the estimated 71,600 Jews living in Greece at the start of the Nazi invasion in 1941, at least 58,885 were victims of the Holocaust. Approximately 10,000 Greek Jews survived, many with the help of fellow Greeks and the Greek Orthodox Church. Survival strategies included hiding, participating in the Greek resistance, or enduring deportation.
The impact of the Holocaust in Greece was often overshadowed by other contemporary events, such as the Greek famine, resistance movements, and the Greek Civil War. The narrative of the destruction of Greek Jewry has sometimes been marginalized in Greek historical memory, with an overemphasis on the solidarity allegedly shown by the Greek Christian population. After the war, Greek authorities prosecuted war criminals and Nazi collaborators, including three prime ministers appointed by the Nazis. In 2014, Holocaust denial became a criminal offense in Greece, with penalties including imprisonment and fines. As of 2017, descendants of Greek Holocaust survivors are eligible for Greek citizenship.

Liberation and Aftermath

The liberation of Greece from Axis occupation occurred in October 1944, when Germany and its ally Bulgaria withdrew under Allied pressure. However, the aftermath of this liberation was marked by political instability and violence, leading to the outbreak of the Greek Civil War. When liberation came in October 1944, Greece was in a state of crisis. The country was devastated by war and occupation, and its economy and infrastructure lay in ruins. Greece suffered more than 400,000 casualties during the occupation, and the country’s Jewish community was almost completely exterminated in the Holocaust. One notable change was the integration of the Dodecanese islands, previously under Italian control, into Greece as a result of the war.

Celebrating Greek Heroism in World War II

A collection of profound quotes from some of the most influential figures of the era, including Winston Churchill, Franklin Roosevelt, and even adversaries like Adolf Hitler, sheds light on the pivotal role Greece played in changing the course of World War II:
- Adolf Hitler: “The Greek soldier, above all, fought with the most courage.”
- Winston Churchill: “Until now we used to say that the Greeks fight like heroes. Now we shall say: The heroes fight like Greeks.”
- Field Marshal Wilhelm Keitel, Hitler’s Chief of Staff: “The Greeks delayed by two or more vital months the German attack against Russia; if we did not have this long delay, the outcome of the war would have been different.”
- Joseph Stalin: “I am sorry because I am getting old and I shall not live long to thank the Greek People, whose resistance decided WWII.”
- Franklin Roosevelt: “On October 28, 1940, Greece was given three hours to decide between war and peace. But even if three days, three weeks, or three years had been given, the answer would have been the same. The Greeks taught dignity through the centuries. When the whole world had lost all hope, the Greek people dared to question the invincibility of the German menace, raising against it the proud spirit of freedom.”

The Greek Civil War, 1944-1949

The Greek Civil War, which lasted from 1944 to 1949 in the immediate aftermath of World War II, was a major internal conflict in Greece. It primarily involved the Greek government and the Democratic Army of Greece (DSE), the military wing of the Greek Communist Party (KKE). The origins of the war can be traced back to wartime divisions between the communist-led resistance group, EAM-ELAS, and various anti-communist resistance factions. The primary goal of the communist factions, led by the KKE and its armed branch, the DSE, was to seize control of Greece and establish a socialist government.
Motivated by the Soviet Union’s actions in the Balkans and the evolving political landscapes in neighboring Yugoslavia and Albania, the KKE sought to realign Greece with the Soviet bloc. Despite limited initial support from the Soviet Union, the Greek Communists anticipated possible Soviet assistance at an opportune time.

The conflict began in December 1944, following the withdrawal of the German military from Greece. The ensuing power vacuum in Athens led to severe infighting, with British forces and Greek police struggling to maintain order. By the end of 1944, the EAM had gained control over most of Greece, with the exception of Crete.

The war unfolded in two distinct phases. The first, known as the Dekemvriana, began in December 1944 with a confrontation between British troops and EAM demonstrators, culminating in the Varkiza Agreement, which demanded the disarmament of ELAS and the release of political prisoners. The second phase of the civil war began in 1946 and was characterized by the Greek government’s struggle against communist guerrillas.

International involvement was a key aspect of the war. Initially, the British government supported the Greek government, but financial constraints led to its withdrawal in 1947. The United States then intervened, enacting the Truman Doctrine and providing substantial military and economic aid to the Greek government. The conflict ended in October 1949, when the U.S.-backed Greek army successfully drove Communist forces from the mountainous regions of Greece. This victory was followed by a broadcast from the Greek communist radio station declaring an end to hostilities, which led to the exodus of surviving communist fighters to Albania.

The Greek Civil War had a profound impact, resulting in over 50,000 combatant deaths and the displacement of over 500,000 civilians. The aftermath of the war left a deep-seated legacy of division and bitterness within Greek society. Read more: https://www.britannica.com/topic/EAM-ELAS

Greece’s Political Landscape Post-Civil War

The period following the Greek Civil War was marked by political and financial instability. Greece witnessed various forms of governance, including monarchies, military rule, and brief periods of democracy, as different factions sought to shape the future of Greece. These political fluctuations created an environment of uncertainty and instability, impacting Greece’s economic development and social cohesion.

A notable development during this period was the seizure of power by the military junta in 1967. This military coup led to a period of repression and brutality characterized by the suppression of political opposition, media censorship, and the curtailment of civil liberties. The junta’s rule marked a dark chapter in Greek history, as the country’s democratic institutions were dismantled and a climate of fear and repression was established.

The Rise and Fall of the Greek Junta

The Greek military junta, also known as the Regime of the Colonels, came to power in 1967 through a coup d’état staged by a group of army colonels who sought to establish strong military rule in Greece. During the junta’s rule, Greece witnessed several social rebellions, including the Polytechnic Uprising in 1973, where students protested against the regime.
The junta’s oppressive policies and human rights abuses fueled public discontent and resistance, leading to a growing opposition movement. The junta’s fall in 1974 was precipitated by a series of events, including a failed assassination attempt on Archbishop Makarios of Cyprus and the subsequent Turkish invasion of Cyprus. These events exposed the junta’s inability to effectively govern and maintain control, leading to its downfall. The collapse of the junta created an opportunity for the restoration of democracy in Greece, marking the beginning of a new chapter in the country’s history.

The Transition to Democracy

The Greek junta fell in 1974, and Konstantinos Karamanlis returned from self-imposed exile in France to lead the country through this critical period. Karamanlis organized elections and a referendum, paving the way for the establishment of a parliamentary republic in 1975. His leadership and dedication to democratic principles were instrumental in guiding Greece through this period of transition and setting the foundation for the country’s democratic institutions. Konstantinos Karamanlis served as Prime Minister from 1974 to 1980 and as President of the Republic from 1980 to 1985 and from 1990 to 1995. His tenure was marked by a strong commitment to Greece’s integration into the European Economic Community (EEC), now the European Union (EU).

Greece’s Entry into the European Union

Karamanlis’ vision for Greece’s European integration was evident as early as 1958, when he began advocating for Greek membership in the EEC. He saw this move not only as a personal ambition, but as the fulfillment of what he called “Greece’s European destiny.” His commitment to this cause led him to personally engage with European leaders and participate in two years of intensive negotiations with officials in Brussels.

Following the Metapolitefsi, the period that marked Greece’s transition from military rule to democracy, Karamanlis renewed his efforts in 1975 to secure Greece’s full membership in the EEC. He argued that such a move was crucial for both political stability and economic progress in Greece, especially in the wake of its recent transition from dictatorship to democracy. In 1979, Karamanlis’ efforts culminated in the signing of the accession treaty with the EEC. Under his leadership, Greece officially joined the European Economic Community as its tenth member in 1981. Karamanlis’s unwavering commitment to European integration is also credited with reducing Greece’s previously paternalistic relationship with the United States.

Greece’s membership in the European Union has brought both benefits and challenges. On one hand, it has allowed Greece to access EU funds for infrastructure projects, facilitated trade and investment, and provided a platform for increased political cooperation. On the other hand, Greece has also faced economic difficulties, particularly during the financial crisis that began in 2009. Despite these challenges, Greece’s membership in the European Union remains a fundamental aspect of its foreign policy and economic strategy.

Major Political and Governmental Milestones in Greece since 1975:

- Constantine II was the last monarch of Greece, reigning from 1964 until the abolition of the monarchy in 1973. After the fall of the military junta, a legal referendum was held on December 8, 1974, which confirmed the decision to abolish the monarchy.
In this referendum, approximately 69% of the electorate voted in favor of the establishment of a republic, with a voter turnout of approximately 75%.
- 1975: Following the collapse of the military junta, Greece adopted a new constitution and became a parliamentary republic. This period was marked by Constantine Karamanlis assuming the role of Prime Minister.
- 1981: Largely through Karamanlis’s efforts, Greece became a member of the European Economic Community (the predecessor of the European Union), a significant step in its European integration.
- 1981: Andreas Papandreou’s socialist PASOK party came to power, ushering in a period dominated by socialist government.
- 1990s: Greece experienced alternating leadership between the conservative New Democracy party under Prime Minister Constantine Mitsotakis and the socialist PASOK under Prime Minister Costas Simitis. The focus during this period remained on further European integration.
- 2004: The New Democracy party, led by Costas Karamanlis, won the elections, ending more than ten years of socialist rule. This was also the year that Athens successfully hosted the Summer Olympics.
- 2009: George Papandreou became prime minister as PASOK returned to power amidst a global financial crisis that plunged Greece into a severe debt crisis.
- 2011-2019: A period of political instability, with Greece implementing austerity measures under successive coalition governments led by Lucas Papademos, Antonis Samaras and Alexis Tsipras.
- 2015: The Syriza party, known for its anti-austerity stance and led by Alexis Tsipras, won the elections. This period was marked by heightened tensions with the EU over bailout agreements.
- 2019: The New Democracy party, led by Kyriakos Mitsotakis, won the elections on a platform of cutting taxes and stimulating economic growth.
- 2020: Katerina Sakellaropoulou was elected Greece’s first female president, a historic milestone.
- 2023: New Democracy won re-election with Prime Minister Mitsotakis at the helm, emphasizing a focus on recovery in the post-pandemic era.

Throughout these years, the dominant themes in Greek politics have been the country’s ongoing European integration, the management of financial crises, alternating socialist and conservative governments, and an increased commitment to Greece’s European orientation.

The Olympic Games in 2004 in Athens

One notable event was the hosting of the Olympic Games in 2004. This major international event showcased Greece’s cultural heritage and brought international attention to the country. In addition to these key events, Greece’s modern history has also been shaped by ongoing political developments. The country has seen a series of governments come and go, each with its own set of policies and initiatives. These political changes have had a significant impact on Greece’s economic and social landscape, shaping the country’s trajectory in the years since the fall of the junta.

Greece’s history from World War 2 until today has been marked by significant events and developments. The country’s involvement in World War 2, the Greek Civil War, and the rise and fall of the Greek Junta have shaped Greece’s political landscape and have had profound impacts on its society and economy. The transition from a military dictatorship to a parliamentary republic, as well as Greece’s entry into the European Union, have marked significant milestones in Greece’s history.
The modern history of Greece sheds light on the country’s journey from a war-torn nation to a member of the European Union, and it provides a context for understanding its ongoing efforts to build a prosperous and democratic future.

- History of Greece from Stone Age to Alexander the Great
- From Byzantium to Ottoman Rule, the Liberation, Balkan Wars and the Greco-Turkish War
- Modern History of Greece from World War 2 until today
- Greek Junta 1967-1974
Best Formal Long Division Method KS2: Step By Step Teaching Guide To Help Your Year 5 & 6 Learn To Love It!

The formal long division method at KS2 often takes time to teach to both Year 5 and Year 6 and can be difficult for them to fully understand. It doesn’t have to be that way if you have the right long division teaching strategy in place. In this blog, Sophie Bee (@_MissieBee) explains the long division technique her Year 6 now follow, how to teach long division with ease this way – and even better, how to ensure your KS2 pupils start to love it. Look out for the free long division worksheets too.

- What is Long Division?
- What is the long division method?
- Long division plays an important role in SATs
- How to do long division according to the national curriculum
- Long division chunking method
- Why you should teach the formal long division method at KS2
- How to teach the formal long division method at KS2
- Long division step by step using an exemplar question
- How you can get your class to check their own work
- Long division questions
- Long division reasoning questions

Long division: long and divisive, right? Wrong! Long division is probably one of my favourite things to teach Year 6 in maths (I know, I know – but bear with me). When children watch you do it, they think it looks complicated, difficult and unnecessary, and it almost instantly turns them off – until they realise how systematic and logical it is.

What is Long Division?

Long division is essentially a step-by-step method for dividing one large multi-digit number by another. In Year 5 and Year 6 at primary school, the long division method usually means dividing a number that is at least three digits by one that is two digits or more, often leaving a remainder, and sometimes with the need to provide an answer to decimal places or as a fraction. The best long division method builds on children’s understanding of the bus stop method of short division as preparation for dividing by large numbers. Terms like the dividend and the divisor should be understood by children learning the formal long division method, as well, of course, as recognising the division symbol.

What is the long division method?

Rather than chunking or simple bus stop, the formal long division method is a foolproof way of supporting children to understand both conceptually and practically how to divide one three-digit number by a two- or three-digit number. It is set out in a similar way to the short division or bus stop method of division but uses a series of steps to get to the answer:

- Divide
- Multiply
- Subtract
- And bring the next number down

If you want to know how to teach this formal long division method to your KS2 pupils read on!

Long division plays an important role in SATs

We all know that the arithmetic paper is the one in which we expect the children to score the highest marks, and often, those crucial marks are lost because of inaccuracies in the children’s answers. In all four SATs arithmetic papers that have been released so far under the new curriculum, there have been 2 ‘long division’ (dividing by a 2-digit number) questions. It is therefore crucial that children are fluent in division and confident with the accuracy of their answers, and this means finding the right KS2 long division method for your class.

3 Free Long Division Worksheets For KS2

Get some free, ready to use long division worksheets for your class, all of which were created by a primary teacher!
How to do long division according to the national curriculum

According to the National Curriculum, when teaching division to Year 6, children should know how to: “Divide numbers up to 4 digits by a two-digit whole number using the formal written method of long division, and interpret remainders as whole number remainders, fractions, or by rounding, as appropriate for the context”. The Mathematics Appendix 1: Examples of formal written methods for addition, subtraction, multiplication and division “sets out some examples of formal written methods for all four operations to illustrate the range of methods that could be taught”, as shown below for long division. (Note that it also says, “For division, some pupils may include a subtraction symbol when subtracting multiples of the divisor”.)

Long division chunking method

The chunking method for long division with two-digit numbers is set out in the first two columns below; it encourages children to think about the relationship between multiplication and division by estimating first how many times the divisor (the number outside the ‘bus stop’) is likely to go into the dividend (the number inside the bus stop). The long division method in the third column is my favourite and the one I recommend we should all be teaching at KS2 by following the steps below!

“Your favourite and best long division method looks too hard for my Key Stage 2 children!” I hear you shout! I admit, when I first started teaching Year 6, I shied away from this long division method for a long time. I’d never understood it properly, yet always considered myself a competent mathematician, so didn’t really understand the need for it. It wasn’t until I sat down and decided to teach myself the method that I realised how systematic it was, and how it really embedded what was happening in each step of the division process – something that would be really useful both for those that struggle with mathematical concepts and for those working with decimal points and larger numbers.

Why you should teach the formal long division method at KS2

A short personal story to show you that this long division method can work: during Year 6 SATs last year, I was very conscious of one of my ‘boundary’ children – you know the ones – who really struggled with confidence and general understanding in maths. I took a sneak peek at them completing a long division question in the arithmetic paper and watched them methodically work their way through it to achieve the correct answer. This was someone who didn’t know their times tables at the start of the year, and I could’ve burst with pride! It was a case of long division made easy at last for this primary pupil, all thanks to the long division method explained below!

But Doesn’t Long Division Take Too…Well…Long?

Okay, so the name of the method doesn’t really help in selling itself, but, once you’re fluent, it should take the same amount of time (if not less) than short division. See the long division examples below, dividing 45,041 by 73. First we show the short division method and then the long division method. With short division, children still need to work out the remainders. Many will need to do this via written subtraction anyway, and even if they can calculate them mentally, we all know how many mistakes are made by overconfident children when working at speed – especially when they refuse to write their workings-out!
If anything, the long division method took less time, as I didn’t have to repeat myself by writing out the numbers again elsewhere on the page when calculating the remainders.

How to teach the formal long division method at KS2

Fear not! This is my tried and tested KS2 long division method when teaching Year 5 or Year 6. On average, it would take me around three days, but as we all know, this completely depends on the cohort, so I’ve broken the process down into ‘steps’ instead, to be spread across as many lessons as needed. Warning: multiplication tables will be required!

What are the steps for teaching long division? In a nutshell, this is what you’re going to be teaching your pupils at KS2:

- Recap and explain the short division method
- Divide
- Multiply
- Subtract
- And bring the next number down

Here it is broken down.

KS2 Long Division Method: Step 1

Recap short division, ensuring children can talk through the process. Do they understand what’s happening at each step? For example, you could ask:

- What is the divisor?
- Why is it important to know multiples of the divisor?
- What is a remainder?
- How have you calculated the remainder?
- What happens if you get a remainder at the end of the question?
- How can you check your calculation is correct?

Once children are confident with short division, they can move onto long division.

KS2 Long Division Method: Step 2

Dividing by 3 isn’t so scary, but dividing by 97 (as in 2018’s paper) is much more intimidating! I’ll always start this lesson by asking children to list the first nine multiples of a ‘difficult’ number (such as 86) and watch them groan and do lots of column addition or counting on fingers or something else equally as inefficient. Of course, there are occasionally those that know their multiplication and division facts and can whizz through these – I know a few children who would quickly list multiples of 97 by adding 100 and subtracting 3 each time, but until we have a class full of children that can do that without prompting, this method will be worth it!

I then show them how to do it by partitioning and it’s sometimes one of those moments where if a cartoon lightbulb could appear above their heads, it would (cue a chorus of “Ohhhhhh yeahhhhhh!”s). An example of partitioning for long division from the Third Space Learning lessons.

The first time I did this, it was one of those lessons which on the face of it looked intensely boring, but my Year 6 children got so carried away with their partitioning for long division that they even asked to stay into their lunchtime to finish the questions! This just proves that engagement in lessons isn’t created from bells and whistles, but listing multiples in preparation for long division – you heard it here first!

I ask the children to list nine multiples every time – asking them why you would only need nine multiples for any long division question is a good way of gauging their understanding of the division process. Obviously, as they gain confidence in the method, they only need list as many as necessary.

KS2 Long Division Method: Step 3

This is where I bring in the good old ‘me, we, you’ process. Firstly, I show them a completed modelled example. Then, I complete the division myself (next to the modelled long division example) to show them how I achieved it, always talking through each step as I go. This is usually when you get the “What?”, “Miss, I don’t get it”, or, “That’s impossible” comments; it doesn’t take much to change their minds! Remember to work slowly through from the first digit of the dividend.
Explaining this formal long division method as you go is very important to cement understanding among all pupils. It’s important to explain the steps broken down: to divide, multiply, subtract, then bring the next number down. I encourage the children to write the four symbols down on their page to remind themselves of the steps. They should have a solid understanding of these steps as, apart from the last one, they are the same as the short division process:

- Divide: how many whole times does the divisor fit into the number? (use the list of multiples)
- Multiply: multiply the answer to your previous division by the divisor to reach the multiple needed to calculate the remainder (use the list of multiples)
- Subtract: subtract the multiple from the original number to calculate the remainder
- Bring the next digit down: this replaces the ‘write the remainder just before the next number’ step in short division

Long division step by step using an exemplar question

This method is far more coherently explained in the context of a specific long division question. Sometimes it’s appropriate to apply it to a division by a 1-digit number, to show how ‘long division’ is just a different way of setting out what they know as ‘short division’, but otherwise you can go straight into dividing by a 2-digit number. Let’s take the modelled long division example, 13,032 ÷ 24 (assuming we’ve already listed the multiples of 24, as in the modelled example in Step 3 – that preparation is important).

1: Divide: 130 ÷ 24 → 24 goes into 130 five times (I can see by looking through my list of multiples that 130 would be placed between 120, the 5th multiple, and 144, the 6th multiple). Note: as we’re working digit by digit from left to right, we can see that 24 doesn’t fit into 1 (the first digit), therefore a 0 is placed above it; it also doesn’t fit into 13 (the 1st and 2nd digits combined), therefore another 0 is placed above it. We are now working with 130 (the first three digits combined), which has ensured that all the place values are correctly aligned.

2: Multiply: 5 lots of 24 is 120 (I should know this from the answer to the previous step, but I can also count down my list of multiples to find the 5th multiple of 24). This is the number we need to work out the remainder to our first division (130 ÷ 24).

3: Subtract: 130 – 120 = 10, so this is the remainder to the first division (130 ÷ 24). This needs to be included in our next step.

4: Bring the next digit of the dividend down: bringing the 3 down makes my new number 103. I’ll then repeat the process again.

5: Divide: 103 ÷ 24 → 24 goes into 103 four times (I can see by looking through my list of multiples that 103 would be placed between 96, the 4th multiple, and 120, the 5th multiple).

6: Multiply: 4 lots of 24 is 96 (I should know this from the answer to the previous step, but I can also count down my list of multiples to find the 4th multiple of 24). This is the number we need to work out the remainder from the second division (103 ÷ 24).

7: Subtract: 103 – 96 = 7, so this is the remainder to the second division (103 ÷ 24). This needs to be included in our next step.

8: Bring the next digit down: bringing the final digit down creates my final number to work with: 72.

9: Divide: 72 ÷ 24 = 3

10: Multiply: 3 x 24 = 72

11: Subtract: 72 – 72 = 0. There is no remainder, so we know that the divisor must fit into the original number exactly. So our final answer is 13,032 divided by 24 is 543.
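For readers who like to see the procedure as an algorithm, here is a minimal Python sketch of the divide–multiply–subtract–bring-down routine described above. The function name and structure are illustrative (they are not from the article), but the steps mirror the worked 13,032 ÷ 24 example, including the list of nine multiples the children prepare first.

```python
def long_division(dividend: int, divisor: int):
    """Work through dividend ÷ divisor digit by digit, as in the
    13,032 ÷ 24 example, returning (quotient, remainder)."""
    multiples = [divisor * i for i in range(1, 10)]  # the nine multiples pupils list first
    quotient_digits = []
    remainder = 0
    for digit in str(dividend):
        remainder = remainder * 10 + int(digit)          # "bring the next digit down"
        q = sum(1 for m in multiples if m <= remainder)  # divide: how many whole times?
        quotient_digits.append(str(q))
        remainder -= q * divisor                         # multiply, then subtract
    return int("".join(quotient_digits)), remainder

q, r = long_division(13032, 24)
print(q, r)  # 543 0 — and 543 x 24 == 13032, the self-check described below
```

Only nine multiples are ever needed because, after each bring-down, the running remainder is always less than ten times the divisor – which is exactly the point of the question suggested in Step 2.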
How you can get your class to check their own work

Of course, like in any activity the children do, it’s important to encourage them to check their own work. They can do this by multiplying their answer by the divisor to see if the original number is produced. In this case, 543 x 24 = 13,032, so we know that we are correct. If they don’t get the original number as their answer, I’ve found that the most common mistake the children make is either listing the multiples incorrectly or misaligning the place values (meaning they may have calculated one of the steps with the wrong numbers). So, now you’ve taught it, but of course, you won’t yet be confident they’ve learnt it, so the final stage in consolidating long division is practising it.

Long division questions

Do loads of long division questions (as provided on these free downloadable long division worksheets) together on whiteboards and slowly take away the help. Then, when the children are ready, they can work independently. Build up their complexity as you go; below are just six of the questions from the worksheet above, ordered in terms of difficulty to give you a general idea.

- 2,574 ÷ 11 = ?
- 1,476 ÷ 12 = ?
- 4,096 ÷ 16 = ?
- 4,488 ÷ 17 = ?
- 13,528 ÷ 38 = ?
- 18,473 ÷ 49 = ?

For the rest, download the free worksheets! In my classroom, this works as a ‘peeling away’ process, which often looks like this: input to the whole class for 5 minutes – 2 or 3 children set off to work independently; input to the rest of the class for a further 5 minutes – another few children set off to work independently; input to the rest of the class for a further 5 minutes – another group of children set off to work independently. I’m then left with those requiring the most support in long division, with whom I stay whilst my TA circulates the class, or with whom I ask my TA to stay whilst I circulate the class. Don’t forget to download and use the free long division worksheets!

Long division reasoning questions

Once children are confident with their long division questions, reasoning activities can then be introduced, such as long division with missing digits, or ‘spot the mistake/s’, moving on ultimately to worded long division problems. Download the free Third Space Learning All Kinds of Word Problems in Division for Year 3 to Year 6.

Of course, ultimately, if you don’t feel confident with this long division method, it will never translate effectively to the children; as with anything in teaching, it must work for you and your class. However, knowing that this is often a topic hotly debated on Edu-Twitter and Edu-Facebook, I hope I’ve converted some people to the long-division-loving side!

Do you have pupils who need extra support in maths? Every week Third Space Learning’s maths specialist tutors support thousands of pupils across hundreds of schools with weekly online 1-to-1 lessons designed to plug gaps and boost progress. Since 2013 we’ve helped over 60,000 primary school pupils become more confident, able mathematicians. You can learn more about our interventions or request a personalised quote for your school to speak to us about your school’s needs and how we can help.
- This video gives more detail about the mathematical principles presented in Mode.
- This video shows how to work step-by-step through one or more of the examples in Mode.
- Explains how to determine mean, median, and mode. It also provides examples.
- This lesson plan covers The Mode and includes Teaching Tips, Common Errors, Differentiated Instruction, Enrichment, and Problem Solving.
- A list of student-submitted discussion questions for Mode.
- To activate prior knowledge, make personal connections, reflect on key concepts, encourage critical thinking, and assess student knowledge on the topic prior to reading using a Quickwrite.
- To organize ideas, increase comprehension, synthesize learning, demonstrate understanding of key concepts, and reinforce vocabulary using a Quickwrite.
- Connect your daily life experiences with specific points in a reading using a Double Entry Diary.
- Develop understanding of concepts by studying them in a relational manner. Analyze and refine the concept by summarizing the main idea, creating visual aids, and generating questions and comments using a Four Square Concept Matrix.
- Students will analyze the Academy Award film nominees for the mode in various categories. In addition, students will analyze the nominees for other measures of central tendency. (Answer Key included.)
- Find out how the fashion industry determines which styles are the most popular.
- Find out how many children are in the typical American family.
- This study guide looks at levels of measurement and the shape, measures of center (median, mean, mode), and measures of spread (standard deviation) of a data set. It also compares the measures for a population vs the measures for a sample.
- This is a fun and interactive game to help students find the mean, median and mode.
- These flashcards help you study important terms and vocabulary from Mean, Ungrouped Data to Find the Mean, Grouped Data to Find the Mean, Median, and Mode.
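Since several of the resources above cover how to determine mean, median, and mode, here is a minimal Python sketch of the three measures using the standard library (the sample data is purely illustrative):

```python
import statistics

data = [3, 7, 7, 2, 9, 7, 4]
print(statistics.mean(data))    # arithmetic mean: 39/7 ≈ 5.57
print(statistics.median(data))  # middle value of the sorted data: 7
print(statistics.mode(data))    # most frequent value: 7
```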
To sketch the graph of the derivative function, find the slopes of tangent lines of the original function at various points. The derivative function at those points will be equal to the slope of the respective tangent lines. For example, if at x=4, f(x)=7 (i.e. f(4)=7), and the slope of the tangent line at that point is 3, then the derivative function f'(x) will be 3 when x=4 (i.e. f'(4)=3). Some important tangent lines to look at are those where the slope of the tangent line is 0. This is where the derivative will cross the x-axis, or where the slope of the function changes from positive to negative, or negative to positive.

I have another problem here. It says given the graph of f(x), sketch its derivative function f'(x). Well, here is my graph. f(x) is in red. I want to sketch f'(x). The way I'm going to do this is I'm going to pick points, draw tangent lines, and measure their slopes, because the slope of the tangent line gives me the derivative.

So let's start with x equals -1, this point right here. I need to carefully draw a tangent line, and then measure its slope. You want to draw a long enough tangent line that you can pick points that are reasonably far apart. So let me pick this point here, and this point here. I chose these two points because they're at integer values of x: this one's at -2, this one's at 0. So the question is, what are the y values? Well, these increments are a ¼ each, so this point looks like it might be 4.1. So one of my points is (-2, 4.1). The second point is at x equals 0, and it looks like it might be 5.9.

Now let's remember what we're trying to do here. We're measuring the slope of the tangent line at x equals -1. This is going to give us f'(-1), so that's approximately this slope: 5.9 minus 4.1, over 0 minus -2. So 5.9 minus 4.1 is 1.8, over 2, that's 0.9. So now, since I want to graph f', at -1 I plot the value 0.9. Looking at my graph here: -1, 0.9, right there.

Then at x equals 1, I want to do the same thing. So let me draw a tangent line, and then I'll measure its slope. I'll make it nice and long so you can pick points that are far apart. What's nice is that it goes right through (4, 0), which is a nice point to use. So I'm going to use (4, 0), and I'll use this point here, (0, 5.3). So the two points are (0, 5.3) and (4, 0). Now this is going to give me an approximate value for the derivative at 1, because I drew the tangent line at x equals 1. So this is going to be 0 minus 5.3, over 4 minus 0. That's -5.3 over 4. I'm going to need my calculator here: -5.3 divided by 4 is -1.325. So let's plot that. It's about -1⅓; that gridline is -1.5, so -1⅓ is about here.

You'll notice that there are a lot of other blue points. I've actually taken the liberty of plotting these ahead of time. We just did these two, but I plotted the rest of them. All you have to do is connect these points with a curve, and you get the derivative function.

Now what's interesting about the derivative function is when you compare the graph of the derivative to the graph of the original function, you can see a lot of interesting things. For example, where the derivative function crosses through the x-axis (y equals 0), the original function is going to have a horizontal tangent – a tangent with slope 0. The same thing happens over here: you see where there is a horizontal tangent, that's precisely where the derivative crosses the x-axis.
Where the function is decreasing most steeply, that's where the derivative has a minimum. So this is the graph of the derivative f'(x) for this function f(x).
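The procedure in the transcript – estimate f'(x) from the slope through two nearby points – can also be done numerically. Here is a minimal Python sketch using a symmetric difference quotient; the function f below is a stand-in for illustration, not the one graphed in the video:

```python
def derivative_estimate(f, x, h=1e-5):
    """Slope of the secant line through (x-h, f(x-h)) and (x+h, f(x+h))."""
    return (f(x + h) - f(x - h)) / (2 * h)

f = lambda x: x**3 - 3*x  # example function
for x in [-1, 0, 1]:
    print(x, round(derivative_estimate(f, x), 4))
# f'(x) = 3x^2 - 3, so the estimates come out close to 0, -3, 0 — and the
# zeros of f' line up with the horizontal tangents of f, as noted above.
```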
Subtracting Fractions with Different Denominators

Students will be able to subtract fractions with unlike denominators.

- Write ⅘ − ⅕ on the board and ask students to solve the problem on their whiteboards using whatever method they choose.
- Have students share their answers with their elbow partner. Gather their background knowledge by asking them questions about the numerator, denominator, and how they got their answer.
- Ask for a volunteer to come to the board and solve the problem using a number line, area model, or simply subtracting one-fifth from four-fifths.
- Tell students that today they'll build on their understanding of subtracting fractions with like denominators to subtract fractions with unlike denominators.

Explicit Instruction/Teacher Modeling (8 minutes)

- Remind students that the denominator is the bottom number of a fraction and represents the total number of pieces of the whole, while the numerator is the top number and represents some of the parts of the whole (e.g., ⅖ represents 2 pieces of the total 5 pieces).
- Write ⅘ − ⅒ on the board. Explain that the denominators are different, so they cannot simply subtract the number 1 from the number 4. Tip: draw bar models of ⅘ and ⅒ to show that the total number of parts in the whole (i.e., the denominator) is different, so subtracting 1 part from ⅘ would subtract too much.
- Think aloud finding multiples for the denominators 5 (e.g., 5, 10, 15, 20, 25, etc.) and 10 (e.g., 10, 20, 30, 40, etc.) and write them on the board. Explain to them that a multiple is the result of multiplying a number by an integer (e.g., 3 x 4 = 12, where 12 is a multiple of 4).
- Consider the list of multiples and then circle the least common multiple, or the smallest multiple they have in common (i.e., 10). Then, think aloud how to change the 5 in the denominator to 10 (i.e., multiplying 5 by 2) and multiply by the number 2 on the top and bottom so that you get a new equation of 8⁄10 − ⅒.
- Subtract one-tenth from eight-tenths to get a total of seven-tenths remaining. Compare the final answer to what the answer would have been had you subtracted the fractions without first making the denominators the same.

Guided Practice (15 minutes)

- Ask students to explain why it's important to change the denominators so that they are the same. Write some of their responses on the board.
- Write 5⁄8 − ¼ on the board and ask students to help you solve the problem. Call on students to tell you the next step and make some "mistakes" they have to correct along the way.
- Have a volunteer explain how to subtract fractions when they have different denominators. They should understand the following steps:
- Check to make sure the denominators are the same.
- If the denominators are not the same, find the least common multiple of the denominators.
- Multiply the denominator and numerator by the number that will make the denominator equal to the least common multiple.
- Subtract the fractions.
- Write the steps on the board for students to reference as they work with their partners to complete the first two problems from the Subtracting Fractions worksheet. Ask students to use their whiteboards to complete the problems.

Independent Working Time (12 minutes)

- Distribute the first page of the Subtracting Fractions worksheet and ask students to complete the problems on their own. Remind them to use the steps written on the board if they get stuck and do not know what to do next.
- Have them share their answers with mixed-ability groups and adjust them as necessary.
If partners had to adjust their answers often, provide further explanation to the group and ask the students to restate your explanation.

- Allow one student to share a problem and ask the other students to critique the process the presenter used to subtract the fractions.
- For students who need support: have them multiply the denominators by each other to find a common denominator, scaling the numerators the same way, instead of finding the least common multiple. See the Fraction Word Problems: Subtracting with Unlike Denominators worksheet for an example.
- Allow them to work in a small, teacher-led group with manipulatives as they create their common multiples and subtract the fractions.
- Provide sentence frames and a key words list for the student explanations throughout the lesson.
- As an extension: have students use the Subtraction Challenge page from the Subtracting Fractions worksheet to create their own subtraction problems. Challenge them to subtract the smaller fraction from the larger fraction and to show their work using bar models and by finding the least common multiple. Increase the challenge further by using more dice to find the denominator.
- Pair them with struggling learners and ask them to explain their process to the students.
- Distribute a lined sheet of paper and write 11⁄12 − ¾ on the board. Ask students to solve the problem and write down their process.
- Allow students to share their answer to the subtraction problem and their written explanation aloud to their partners. Give them the opportunity to make any adjustments as necessary.

Review and Closing (5 minutes)

- Ask students to explain why it's important to only subtract fractions that have the same denominators.
- Explain that understanding how to subtract simple fractions correctly will help them when they have to subtract mixed numbers.
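For teachers who want to generate or check answers quickly, here is a minimal Python sketch of the lesson's method – find the least common multiple of the denominators, rescale both fractions, then subtract the numerators. The function name is illustrative, not from the lesson plan:

```python
from math import gcd

def subtract_fractions(n1, d1, n2, d2):
    """Return (numerator, denominator) of n1/d1 - n2/d2, unsimplified,
    following the common-denominator steps from the lesson."""
    lcm = d1 * d2 // gcd(d1, d2)   # least common multiple of the denominators
    new_n1 = n1 * (lcm // d1)      # multiply top and bottom by the same factor
    new_n2 = n2 * (lcm // d2)
    return new_n1 - new_n2, lcm

print(subtract_fractions(4, 5, 1, 10))   # (7, 10): 4/5 - 1/10 = 8/10 - 1/10 = 7/10
print(subtract_fractions(11, 12, 3, 4))  # (2, 12): 11/12 - 9/12 = 2/12 (i.e. 1/6)
```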
Area and perimeter are two vital fundamental concepts of mathematics, which are often understood together. These two concepts are used to measure the physical space of an object and form a foundation for more advanced mathematics. The perimeter is often understood as the length of the path that encloses a closed figure, while the area refers to the space covered by the closed figure. Both concepts have practical applications and are used in our day-to-day life. While the area is nothing but the extent of the surface, the perimeter is the continuous line that forms the boundary of a closed geometrical shape. Take a read of the article to know the basic differences between area and perimeter.

Content: Area vs Perimeter

|Basis for Comparison|Area|Perimeter|
|---|---|---|
|Meaning|Area is described as the measurement of the surface of the object.|Perimeter refers to the outline that surrounds a closed figure.|
|Represents|Space occupied by the figure.|Rim or boundary of a figure.|
|Measurement|Square units|Linear units|
|Example|Space covered by the garden.|Length of fence required to enclose the garden.|

Definition of Area

In mathematics, the area of a flat surface is defined as the amount of space covered by it. It is a physical quantity that indicates the number of square units occupied by the two-dimensional object. It is used to know how much space is taken up by a flat surface. It is measured in square units, e.g. square meters, square miles, square inches, etc. The term area has any number of practical uses, such as in construction projects, farming, architecture and so on. To measure the area of a flat surface, you need to count the number of unit squares covered by the shape. For instance: suppose you need to tile the floor of a room; the number of tiles required to cover the whole room will be its area.

Definition of Perimeter

The perimeter is defined as a measure of the length of the border that surrounds a closed geometrical figure. The term ‘perimeter’ is derived from the Greek words ‘peri’ (around) and ‘metron’ (measure). In geometry, it implies the continuous line forming the path outside the two-dimensional shape. In simple words, the perimeter is nothing but the length of the outline of a figure. To find out the perimeter of a particular object, you can simply add the lengths of the sides to arrive at its perimeter. The perimeter of a circle is commonly known as its circumference. For instance:
a. Suppose you wrap a string around a square; the length of the string would be its perimeter.
b. If you walk around the outside of a garden, the distance covered would be the garden’s perimeter.

Key Differences Between Area and Perimeter

The significant differences between area and perimeter are provided in detail in the following points:

- The area is described as the measurement of the surface of the object. Perimeter refers to the outline that surrounds a closed figure.
- Area represents the space occupied by the object. Conversely, perimeter indicates the outer edge or boundary of the shape.
- Measurement of the area is done in square units, e.g. square kilometres, square feet, square inches, etc. On the other hand, the perimeter of a shape is measured in linear units, e.g. kilometres, inches, feet, etc.
- As the perimeter is measured in linear units, it measures only one dimension, i.e. the length of the object. In the case of area, two dimensions are involved, i.e. the length and width of the object.
|Shape|Area|Perimeter|Terms|
|---|---|---|---|
|Square|a^2|4a|a = length of side|
|Rectangle|l×b|2(l+b)|l = length, b = breadth|
|Circle|πr^2|2πr = πd|r = radius, d = diameter|
|Triangle|½bh|a+b+c|b = base, h = height; a, b, c = lengths of the sides|
|Rhombus|(pq)/2|4a|a = side; p and q are the diagonals|
|Parallelogram|bh|2(a+b)|b = base, h = height, a = side|
|Trapezium|½(a+b)×h|a+b+c+d|a and b = the parallel sides, h = height, c and d = the other sides|

After reviewing the above points, it is quite clear that these two mathematical concepts are different, but for a given shape you can often use one to figure out the other. While area simply means the ‘space covered’, i.e. the inside of the object, perimeter refers to the ‘distance around’, i.e. the outline of the shape. Moreover, figures with the same perimeter can have different areas, and figures with the same area can have different perimeters.
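As a quick illustration, here is a small Python sketch that turns a few of the table's formulas into code and prints both measures with their units; the function names are illustrative:

```python
import math

def square(a):        return a**2, 4*a                  # area, perimeter
def rectangle(l, b):  return l*b, 2*(l + b)
def circle(r):        return math.pi*r**2, 2*math.pi*r  # perimeter = circumference

for name, (area, perim) in [("square, a=3", square(3)),
                            ("rectangle, 4x2", rectangle(4, 2)),
                            ("circle, r=1", circle(1))]:
    print(f"{name}: area {area:.2f} (square units), perimeter {perim:.2f} (linear units)")
```

Running it prints, for example, area 9 and perimeter 12 for the square with side 3 – the same results the formulas in the table give by hand, with area in square units and perimeter in linear units as the comparison above stresses.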
Introduction to general relativity

General relativity is a theory of gravitation that was developed by Albert Einstein between 1907 and 1915. According to general relativity, the observed gravitational effect between masses results from their warping of spacetime.

By the beginning of the 20th century, Newton's law of universal gravitation had been accepted for more than two hundred years as a valid description of the gravitational force between masses. In Newton's model, gravity is the result of an attractive force between massive objects. Although even Newton was troubled by the unknown nature of that force, the basic framework was extremely successful at describing motion.

Experiments and observations show that Einstein's description of gravitation accounts for several effects that are unexplained by Newton's law, such as minute anomalies in the orbits of Mercury and other planets. General relativity also predicts novel effects of gravity, such as gravitational waves, gravitational lensing and an effect of gravity on time known as gravitational time dilation. Many of these predictions have been confirmed by experiment, while others are the subject of ongoing research. For example, although there is indirect evidence for gravitational waves, direct evidence of their existence is still being sought by several teams of scientists in experiments such as the LIGO and GEO 600 projects.

General relativity has developed into an essential tool in modern astrophysics. It provides the foundation for the current understanding of black holes, regions of space where the gravitational effect is so strong that even light cannot escape. Their strong gravity is thought to be responsible for the intense radiation emitted by certain types of astronomical objects (such as active galactic nuclei or microquasars). General relativity is also part of the framework of the standard Big Bang model of cosmology.

Although general relativity is not the only relativistic theory of gravity, it is the simplest such theory that is consistent with the experimental data. Nevertheless, a number of open questions remain, the most fundamental of which is how general relativity can be reconciled with the laws of quantum physics to produce a complete and self-consistent theory of quantum gravity.

From special to general relativity

In September 1905, Albert Einstein published his theory of special relativity, which reconciles Newton's laws of motion with electrodynamics (the interaction between objects with electric charge). Special relativity introduced a new framework for all of physics by proposing new concepts of space and time. Some then-accepted physical theories were inconsistent with that framework; a key example was Newton's theory of gravity, which describes the mutual attraction experienced by bodies due to their mass.

Several physicists, including Einstein, searched for a theory that would reconcile Newton's law of gravity and special relativity. Only Einstein's theory proved to be consistent with experiments and observations. To understand the theory's basic ideas, it is instructive to follow Einstein's thinking between 1907 and 1915, from his simple thought experiment involving an observer in free fall to his fully geometric theory of gravity.
A person in a free-falling elevator experiences weightlessness, and objects either float motionless or drift at constant speed. Since everything in the elevator is falling together, no gravitational effect can be observed. In this way, the experiences of an observer in free fall are indistinguishable from those of an observer in deep space, far from any significant source of gravity. Such observers are the privileged ("inertial") observers Einstein described in his theory of special relativity: observers for whom light travels along straight lines at constant speed.

Einstein hypothesized that the similar experiences of weightless observers and inertial observers in special relativity represented a fundamental property of gravity, and he made this the cornerstone of his theory of general relativity, formalized in his equivalence principle. Roughly speaking, the principle states that a person in a free-falling elevator cannot tell that they are in free fall. Every experiment in such a free-falling environment has the same results as it would for an observer at rest or moving uniformly in deep space, far from all sources of gravity.

Gravity and acceleration

Most effects of gravity vanish in free fall, but effects that seem the same as those of gravity can be produced by an accelerated frame of reference. An observer in a closed room cannot tell which of the following is true:
- Objects are falling to the floor because the room is resting on the surface of the Earth and the objects are being pulled down by gravity.
- Objects are falling to the floor because the room is aboard a rocket in space, which is accelerating at 9.81 m/s² and is far from any source of gravity. The objects are being pulled towards the floor by the same "inertial force" that presses the driver of an accelerating car into the back of his seat.

Conversely, any effect observed in an accelerated reference frame should also be observed in a gravitational field of corresponding strength. This principle allowed Einstein to predict several novel effects of gravity in 1907, as explained in the next section.

An observer in an accelerated reference frame must introduce what physicists call fictitious forces to account for the acceleration experienced by himself and objects around him. One example, the force pressing the driver of an accelerating car into his or her seat, has already been mentioned; another is the force you can feel pulling your arms up and out if you attempt to spin around like a top. Einstein's master insight was that the constant, familiar pull of the Earth's gravitational field is fundamentally the same as these fictitious forces. The apparent magnitude of the fictitious forces always appears to be proportional to the mass of any object on which they act – for instance, the driver's seat exerts just enough force to accelerate the driver at the same rate as the car. By analogy, Einstein proposed that an object in a gravitational field should feel a gravitational force proportional to its mass, as embodied in Newton's law of gravitation.

In 1907, Einstein was still eight years away from completing the general theory of relativity. Nonetheless, he was able to make a number of novel, testable predictions that were based on his starting point for developing his new theory: the equivalence principle. The first new effect is the gravitational frequency shift of light. Consider two observers aboard an accelerating rocket-ship.
Aboard such a ship, there is a natural concept of "up" and "down": the direction in which the ship accelerates is "up", and unattached objects accelerate in the opposite direction, falling "downward". Assume that one of the observers is "higher up" than the other. When the lower observer sends a light signal to the higher observer, the acceleration causes the light to be red-shifted, as may be calculated from special relativity; the second observer will measure a lower frequency for the light than the first. Conversely, light sent from the higher observer to the lower is blue-shifted, that is, shifted towards higher frequencies. Einstein argued that such frequency shifts must also be observed in a gravitational field. This is illustrated in the figure at left, which shows a light wave that is gradually red-shifted as it works its way upwards against the gravitational acceleration. This effect has been confirmed experimentally, as described below.

This gravitational frequency shift corresponds to a gravitational time dilation: Since the "higher" observer measures the same light wave to have a lower frequency than the "lower" observer, time must be passing faster for the higher observer. Thus, time runs more slowly for observers who are lower in a gravitational field.

It is important to stress that, for each observer, there are no observable changes of the flow of time for events or processes that are at rest in his or her reference frame. Five-minute-eggs as timed by each observer's clock have the same consistency; as one year passes on each clock, each observer ages by that amount; each clock, in short, is in perfect agreement with all processes happening in its immediate vicinity. It is only when the clocks are compared between separate observers that one can notice that time runs more slowly for the lower observer than for the higher. This effect is minute, but it too has been confirmed experimentally in multiple experiments, as described below.

In a similar way, Einstein predicted the gravitational deflection of light: in a gravitational field, light is deflected downward. Quantitatively, his results were off by a factor of two; the correct derivation requires a more complete formulation of the theory of general relativity, not just the equivalence principle.
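To give a sense of scale for the frequency shift and time dilation just described: to first order (a standard result, quoted here without derivation), for a uniform gravitational acceleration g and a height difference h, light climbing the height h is shifted in frequency by

$$\frac{\Delta\nu}{\nu} \approx -\frac{gh}{c^2},$$

and a clock at the top runs fast relative to one at the bottom by the factor 1 + gh/c². For g ≈ 9.81 m/s² and h ≈ 22.5 m (the height of the tower used by Pound and Rebka in the experiment mentioned below), gh/c² is about 2.5 × 10⁻¹⁵, which shows just how minute the effect is.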
Tidal effects

The equivalence between gravitational and inertial effects does not constitute a complete theory of gravity. When it comes to explaining gravity near our own location on the Earth's surface, noting that our reference frame is not in free fall, so that fictitious forces are to be expected, provides a suitable explanation. But a freely falling reference frame on one side of the Earth cannot explain why the people on the opposite side of the Earth experience a gravitational pull in the opposite direction.

A more basic manifestation of the same effect involves two bodies that are falling side by side towards the Earth. In a reference frame that is in free fall alongside these bodies, they appear to hover weightlessly – but not exactly so. These bodies are not falling in precisely the same direction, but towards a single point in space: namely, the Earth's center of gravity. Consequently, there is a component of each body's motion towards the other (see the figure). In a small environment such as a freely falling lift, this relative acceleration is minuscule, while for skydivers on opposite sides of the Earth, the effect is large.

Such differences in force are also responsible for the tides in the Earth's oceans, so the term "tidal effect" is used for this phenomenon. The equivalence between inertia and gravity cannot explain tidal effects – it cannot explain variations in the gravitational field. For that, a theory is needed which describes the way that matter (such as the large mass of the Earth) affects the inertial environment around it.

From acceleration to geometry

In exploring the equivalence of gravity and acceleration as well as the role of tidal forces, Einstein discovered several analogies with the geometry of surfaces. An example is the transition from an inertial reference frame (in which free particles coast along straight paths at constant speeds) to a rotating reference frame (in which extra terms corresponding to fictitious forces have to be introduced in order to explain particle motion): this is analogous to the transition from a Cartesian coordinate system (in which the coordinate lines are straight lines) to a curved coordinate system (where coordinate lines need not be straight).

A deeper analogy relates tidal forces with a property of surfaces called curvature. For gravitational fields, the absence or presence of tidal forces determines whether or not the influence of gravity can be eliminated by choosing a freely falling reference frame. Similarly, the absence or presence of curvature determines whether or not a surface is equivalent to a plane. In the summer of 1912, inspired by these analogies, Einstein searched for a geometric formulation of gravity.

The elementary objects of geometry – points, lines, triangles – are traditionally defined in three-dimensional space or on two-dimensional surfaces. In 1907, the mathematician Hermann Minkowski (who was Einstein's former mathematics professor at the Swiss Federal Polytechnic) introduced a geometric formulation of Einstein's special theory of relativity in which the geometry included not only space, but also time. The basic entity of this new geometry is four-dimensional spacetime. The orbits of moving bodies are curves in spacetime; the orbits of bodies moving at constant speed without changing direction correspond to straight lines.

For surfaces, the generalization from the geometry of a plane – a flat surface – to that of a general curved surface had been described in the early 19th century by Carl Friedrich Gauss. This description had in turn been generalized to higher-dimensional spaces in a mathematical formalism introduced by Bernhard Riemann in the 1850s. With the help of Riemannian geometry, Einstein formulated a geometric description of gravity in which Minkowski's spacetime is replaced by distorted, curved spacetime, just as curved surfaces are a generalization of ordinary plane surfaces. After he had realized the validity of this geometric analogy, it took Einstein a further three years to find the missing cornerstone of his theory: the equations describing how matter influences spacetime's curvature. Having formulated what are now known as Einstein's equations (or, more precisely, his field equations of gravity), he presented his new theory of gravity at several sessions of the Prussian Academy of Sciences in late 1915.

Geometry and gravitation

Paraphrasing John Wheeler, Einstein's geometric theory of gravity can be summarized thus: spacetime tells matter how to move; matter tells spacetime how to curve.
What this means is addressed in the following three sections, which explore the motion of so-called test particles, examine which properties of matter serve as a source for gravity, and, finally, introduce Einstein's equations, which relate these matter properties to the curvature of spacetime.

Probing the gravitational field

In order to map a body's gravitational influence, it is useful to think about what physicists call probe or test particles: particles that are influenced by gravity, but are so small and light that we can neglect their own gravitational effect. In the absence of gravity and other external forces, a test particle moves along a straight line at a constant speed. In the language of spacetime, this is equivalent to saying that such test particles move along straight world lines in spacetime. In the presence of gravity, spacetime is non-Euclidean, or curved, and in curved spacetime straight world lines may not exist. Instead, test particles move along lines called geodesics, which are "as straight as possible", that is, they follow the shortest path between starting and ending points, taking the curvature into consideration.

A simple analogy is the following: In geodesy, the science of measuring Earth's size and shape, a geodesic (from Greek "geo", Earth, and "daiein", to divide) is the shortest route between two points on the Earth's surface. Approximately, such a route is a segment of a great circle, such as a line of longitude or the equator. These paths are certainly not straight, simply because they must follow the curvature of the Earth's surface. But they are as straight as is possible subject to this constraint.

The properties of geodesics differ from those of straight lines. For example, on a plane, parallel lines never meet, but this is not so for geodesics on the surface of the Earth: for example, lines of longitude are parallel at the equator, but intersect at the poles. Analogously, the world lines of test particles in free fall are spacetime geodesics, the straightest possible lines in spacetime. But still there are crucial differences between them and the truly straight lines that can be traced out in the gravity-free spacetime of special relativity. In special relativity, parallel geodesics remain parallel. In a gravitational field with tidal effects, this will not, in general, be the case. If, for example, two bodies are initially at rest relative to each other, but are then dropped in the Earth's gravitational field, they will move towards each other as they fall towards the Earth's center.

Compared with planets and other astronomical bodies, the objects of everyday life (people, cars, houses, even mountains) have little mass. Where such objects are concerned, the laws governing the behavior of test particles are sufficient to describe what happens. Notably, in order to deflect a test particle from its geodesic path, an external force must be applied. A person sitting on a chair is trying to follow a geodesic, that is, to fall freely towards the center of the Earth. But the chair applies an external upwards force preventing the person from falling. In this way, general relativity explains the daily experience of gravity on the surface of the Earth not as the downwards pull of a gravitational force, but as the upwards push of external forces. These forces deflect all bodies resting on the Earth's surface from the geodesics they would otherwise follow.
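In the theory's mathematical language (which goes beyond the level of this article, so the following is only a sketch), the straightest-possible world lines just described are solutions of the geodesic equation. Writing x^μ(τ) for the coordinates of a test particle's world line as a function of its proper time τ,

$$\frac{d^2x^\mu}{d\tau^2} + \Gamma^\mu_{\alpha\beta}\,\frac{dx^\alpha}{d\tau}\,\frac{dx^\beta}{d\tau} = 0,$$

where the coefficients Γ (the Christoffel symbols) are computed from the metric introduced in the next section. In the flat spacetime of special relativity all the Γ vanish, and the equation reduces to motion along straight lines at constant speed.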
For matter objects whose own gravitational influence cannot be neglected, the laws of motion are somewhat more complicated than for test particles, although it remains true that spacetime tells matter how to move.

Sources of gravity

In Newton's description of gravity, the gravitational force is caused by matter. More precisely, it is caused by a specific property of material objects: their mass. In Einstein's theory and related theories of gravitation, curvature at every point in spacetime is also caused by whatever matter is present. Here, too, mass is a key property in determining the gravitational influence of matter. But in a relativistic theory of gravity, mass cannot be the only source of gravity. Relativity links mass with energy, and energy with momentum.

The equivalence between mass and energy, as expressed by the formula E = mc², is the most famous consequence of special relativity. In relativity, mass and energy are two different ways of describing one physical quantity. If a physical system has energy, it also has the corresponding mass, and vice versa. In particular, all properties of a body that are associated with energy, such as its temperature or the binding energy of systems such as nuclei or molecules, contribute to that body's mass, and hence act as sources of gravity.

In special relativity, energy is closely connected to momentum. Just as space and time are, in that theory, different aspects of a more comprehensive entity called spacetime, energy and momentum are merely different aspects of a unified, four-dimensional quantity that physicists call four-momentum. In consequence, if energy is a source of gravity, momentum must be a source as well. The same is true for quantities that are directly related to energy and momentum, namely internal pressure and tension. Taken together, in general relativity it is mass, energy, momentum, pressure and tension that serve as sources of gravity: they are how matter tells spacetime how to curve. In the theory's mathematical formulation, all these quantities are but aspects of a more general physical quantity called the energy–momentum tensor.

Einstein's equations

Einstein's equations are the centerpiece of general relativity. They provide a precise formulation of the relationship between spacetime geometry and the properties of matter, using the language of mathematics. More concretely, they are formulated using the concepts of Riemannian geometry, in which the geometric properties of a space (or a spacetime) are described by a quantity called a metric. The metric encodes the information needed to compute the fundamental geometric notions of distance and angle in a curved space (or spacetime).

A spherical surface like that of the Earth provides a simple example. The location of any point on the surface can be described by two coordinates: the geographic latitude and longitude. Unlike the Cartesian coordinates of the plane, coordinate differences are not the same as distances on the surface, as shown in the diagram on the right: for someone at the equator, moving 30 degrees of longitude westward (magenta line) corresponds to a distance of roughly 3,300 kilometers (2,100 mi). On the other hand, someone at a latitude of 55 degrees, moving 30 degrees of longitude westward (blue line) covers a distance of merely 1,900 kilometers (1,200 mi). Coordinates therefore do not provide enough information to describe the geometry of a spherical surface, or indeed the geometry of any more complicated space or spacetime.
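The two distances can be checked with elementary spherical geometry, treating the Earth as a perfect sphere of radius R ≈ 6,371 km (a small worked example). At latitude φ, a change Δλ in longitude corresponds to an arc length

$$\Delta s = R\cos\varphi\,\Delta\lambda,$$

so for Δλ = 30° ≈ 0.524 rad one finds Δs ≈ 3,340 km at the equator (φ = 0°), but only about 1,910 km at φ = 55°, in agreement with the figures quoted above.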
The missing information is precisely what is encoded in the metric, which is a function defined at each point of the surface (or space, or spacetime) and relates coordinate differences to differences in distance. All other quantities that are of interest in geometry, such as the length of any given curve, or the angle at which two curves meet, can be computed from this metric function. The metric function and its rate of change from point to point can be used to define a geometrical quantity called the Riemann curvature tensor, which describes exactly how the space (or spacetime) is curved at each point.

In general relativity, the metric and the Riemann curvature tensor are quantities defined at each point in spacetime. As has already been mentioned, the matter content of the spacetime defines another quantity, the energy–momentum tensor T, and the principle that "spacetime tells matter how to move, and matter tells spacetime how to curve" means that these quantities must be related to each other. Einstein formulated this relation by using the Riemann curvature tensor and the metric to define another geometrical quantity G, now called the Einstein tensor, which describes some aspects of the way spacetime is curved. Einstein's equation then states that

$$\mathbf{G} = \frac{8\pi G}{c^4}\,\mathbf{T},$$

i.e., up to a constant multiple, the quantity G (which measures curvature) is equated with the quantity T (which measures matter content). The constants involved in this equation reflect the different theories that went into its making: π is one of the basic constants of geometry, G is the gravitational constant that is already present in Newtonian gravity, and c is the speed of light, the key constant in special relativity.

This equation is often referred to in the plural as Einstein's equations, since the quantities G and T are each determined by several functions of the coordinates of spacetime, and the equations equate each of these component functions. A solution of these equations describes a particular geometry of spacetime; for example, the Schwarzschild solution describes the geometry around a spherical, non-rotating mass such as a star or a black hole, whereas the Kerr solution describes a rotating black hole. Still other solutions can describe a gravitational wave or, in the case of the Friedmann–Lemaître–Robertson–Walker solution, an expanding universe. The simplest solution is the uncurved Minkowski spacetime, the spacetime described by special relativity.

Experiments

No scientific theory is apodictically true; each is a model that must be checked by experiment. Newton's law of gravity was accepted because it accounted for the motion of planets and moons in the solar system with considerable accuracy. As the precision of experimental measurements gradually improved, some discrepancies with Newton's predictions were observed, and these were accounted for in the general theory of relativity. Similarly, the predictions of general relativity must also be checked with experiment, and Einstein himself devised three tests now known as the classical tests of the theory:
- Newtonian gravity predicts that the orbit which a single planet traces around a perfectly spherical star should be an ellipse. Einstein's theory predicts a more complicated curve: the planet behaves as if it were travelling around an ellipse, but at the same time, the ellipse as a whole is rotating slowly around the star. In the diagram on the right, the ellipse predicted by Newtonian gravity is shown in red, and part of the orbit predicted by Einstein in blue.
For a planet orbiting the Sun, this deviation from Newton's orbits is known as the anomalous perihelion shift. The first measurement of this effect, for the planet Mercury, dates back to 1859. The most accurate results for Mercury and for other planets to date are based on measurements which were undertaken between 1966 and 1990, using radio telescopes. General relativity predicts the correct anomalous perihelion shift for all planets where this can be measured accurately (Mercury, Venus and the Earth).
- According to general relativity, light does not travel along straight lines when it propagates in a gravitational field. Instead, it is deflected in the presence of massive bodies. In particular, starlight is deflected as it passes near the Sun, leading to apparent shifts of up to 1.75 arc seconds in the stars' positions in the sky (an arc second is equal to 1/3600 of a degree). In the framework of Newtonian gravity, a heuristic argument can be made that leads to light deflection by half that amount. The different predictions can be tested by observing stars that are close to the Sun during a solar eclipse. In this way, a British expedition to West Africa in 1919, directed by Arthur Eddington, confirmed that Einstein's prediction was correct, and the Newtonian predictions wrong, via observation of the May 1919 eclipse. Eddington's results were not very accurate; subsequent observations of the deflection of the light of distant quasars by the Sun, which utilize highly accurate techniques of radio astronomy, have confirmed Eddington's results with significantly better precision (the first such measurements date from 1967, the most recent comprehensive analysis from 2004).
- Gravitational redshift was first measured in a laboratory setting in 1959 by Pound and Rebka. It is also seen in astrophysical measurements, notably for light escaping the white dwarf Sirius B. The related gravitational time dilation effect has been measured by transporting atomic clocks to altitudes of between tens and tens of thousands of kilometers (first by Hafele and Keating in 1971; most accurately to date by Gravity Probe A launched in 1976).

Of these tests, only the perihelion advance of Mercury was known prior to Einstein's final publication of general relativity in 1916. The subsequent experimental confirmation of his other predictions, especially the first measurements of the deflection of light by the sun in 1919, catapulted Einstein to international stardom. These three experiments justified adopting general relativity over Newton's theory and, incidentally, over a number of alternatives to general relativity that had been proposed.

Further tests of general relativity include precision measurements of the Shapiro effect or gravitational time delay for light, most recently in 2002 by the Cassini space probe. One set of tests focuses on effects predicted by general relativity for the behavior of gyroscopes travelling through space. One of these effects, geodetic precession, has been tested with the Lunar Laser Ranging Experiment (high-precision measurements of the orbit of the Moon). Another, which is related to rotating masses, is called frame-dragging. The geodetic and frame-dragging effects were both tested by the Gravity Probe B satellite experiment launched in 2004, with results confirming relativity to within 0.5% and 15%, respectively, as of December 2008.
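For readers who want the quantitative versions, the standard general-relativistic formulas behind two of these tests are as follows (quoted without derivation). Light passing a mass M at impact parameter b is deflected by an angle

$$\delta = \frac{4GM}{c^2 b},$$

which for light grazing the Sun (GM ≈ 1.33 × 10²⁰ m³ s⁻², b ≈ 6.96 × 10⁸ m) gives about 8.5 × 10⁻⁶ radians, the 1.75 arc seconds quoted above. A planet with semi-major axis a and orbital eccentricity e has a perihelion advance per orbit of

$$\Delta\phi = \frac{6\pi GM}{c^2 a\,(1-e^2)},$$

which for Mercury accumulates to the observed anomalous shift of roughly 43 arc seconds per century.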
By cosmic standards, gravity throughout the solar system is weak. Since the differences between the predictions of Einstein's and Newton's theories are most pronounced when gravity is strong, physicists have long been interested in testing various relativistic effects in a setting with comparatively strong gravitational fields. This has become possible thanks to precision observations of binary pulsars. In such a star system, two highly compact neutron stars orbit each other. At least one of them is a pulsar – an astronomical object that emits a tight beam of radio waves. These beams strike the Earth at very regular intervals – much as the rotating beam of a lighthouse makes an observer see the lighthouse blink – and can be observed as a highly regular series of pulses. General relativity predicts specific deviations from the regularity of these radio pulses. For instance, at times when the radio waves pass close to the other neutron star, they should be deflected by the star's gravitational field. The observed pulse patterns are impressively close to those predicted by general relativity.

One particular set of observations is related to eminently useful practical applications, namely to satellite navigation systems such as the Global Positioning System that are used both for precise positioning and timekeeping. Such systems rely on two sets of atomic clocks: clocks aboard satellites orbiting the Earth, and reference clocks stationed on the Earth's surface. General relativity predicts that these two sets of clocks should tick at slightly different rates, due to their different motions (an effect already predicted by special relativity) and their different positions within the Earth's gravitational field. In order to ensure the system's accuracy, the satellite clocks are either slowed down by a relativistic factor, or that same factor is made part of the evaluation algorithm. In turn, tests of the system's accuracy (especially the very thorough measurements that are part of the definition of universal coordinated time) are testament to the validity of the relativistic predictions.

A number of other tests have probed the validity of various versions of the equivalence principle; strictly speaking, all measurements of gravitational time dilation are tests of the weak version of that principle, not of general relativity itself. So far, general relativity has passed all observational tests.

Astrophysical applications

Models based on general relativity play an important role in astrophysics, and the success of these models is further testament to the theory's validity.

Since light is deflected in a gravitational field, it is possible for the light of a distant object to reach an observer along two or more paths. For instance, light of a very distant object such as a quasar can pass along one side of a massive galaxy and be deflected slightly so as to reach an observer on Earth, while light passing along the opposite side of that same galaxy is deflected as well, reaching the same observer from a slightly different direction. As a result, that particular observer will see one astronomical object in two different places in the night sky. This kind of focussing is well-known when it comes to optical lenses, and hence the corresponding gravitational effect is called gravitational lensing.

Observational astronomy uses lensing effects as an important tool to infer properties of the lensing object. Even in cases where that object is not directly visible, the shape of a lensed image provides information about the mass distribution responsible for the light deflection.
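The characteristic angular scale of such lensing is the Einstein radius (a standard result, stated here without derivation). For a lens of mass M, with D_l, D_s and D_ls the distances to the lens, to the source, and from lens to source respectively,

$$\theta_E = \sqrt{\frac{4GM}{c^2}\,\frac{D_{ls}}{D_l\,D_s}};$$

images of a well-aligned source appear separated by angles of this order, which is one way the lensing mass can be inferred from observed image positions.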
In particular, gravitational lensing provides one way to measure the distribution of dark matter, which does not give off light and can be observed only by its gravitational effects. One particularly interesting application is large-scale observations, where the lensing masses are spread out over a significant fraction of the observable universe, and can be used to obtain information about the large-scale properties and evolution of our cosmos.

Gravitational waves, a direct consequence of Einstein's theory, are distortions of geometry that propagate at the speed of light, and can be thought of as ripples in spacetime. They should not be confused with the gravity waves of fluid dynamics, which are a different concept. Indirectly, the effect of gravitational waves has been detected in observations of specific binary stars. Such pairs of stars orbit each other and, as they do so, gradually lose energy by emitting gravitational waves. For ordinary stars like the Sun, this energy loss would be too small to be detectable, but this energy loss was observed in 1974 in a binary pulsar called PSR1913+16. In such a system, one of the orbiting stars is a pulsar. This has two consequences: a pulsar is an extremely dense object known as a neutron star, for which gravitational wave emission is much stronger than for ordinary stars. Also, a pulsar emits a narrow beam of electromagnetic radiation from its magnetic poles. As the pulsar rotates, its beam sweeps over the Earth, where it is seen as a regular series of radio pulses, just as a ship at sea observes regular flashes of light from the rotating light in a lighthouse. This regular pattern of radio pulses functions as a highly accurate "clock". It can be used to time the double star's orbital period, and it reacts sensitively to distortions of spacetime in its immediate neighborhood.

The discoverers of PSR1913+16, Russell Hulse and Joseph Taylor, were awarded the Nobel Prize in Physics in 1993. Since then, several other binary pulsars have been found. The most useful are those in which both stars are pulsars, since they provide the most accurate tests of general relativity. Currently, one major goal of research in relativity is the direct detection of gravitational waves. To this end, a number of land-based gravitational wave detectors are in operation, and a mission to launch a space-based detector, LISA, is currently under development, with a precursor mission (LISA Pathfinder) due for launch in 2015. If gravitational waves are detected, they could be used to obtain information about compact objects such as neutron stars and black holes, and also to probe the state of the early universe fractions of a second after the Big Bang.

When mass is concentrated into a sufficiently compact region of space, general relativity predicts the formation of a black hole – a region of space with a gravitational effect so strong that not even light can escape.
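How compact is "sufficiently compact"? The critical size is set by the Schwarzschild radius (a standard result, quoted without derivation): a mass M must be compressed within

$$r_s = \frac{2GM}{c^2},$$

which comes to about 3 kilometers for the mass of the Sun and roughly 9 millimeters for the mass of the Earth.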
Certain types of black holes are thought to be the final state in the evolution of massive stars. On the other hand, supermassive black holes with the mass of millions or billions of Suns are assumed to reside in the cores of most galaxies, and they play a key role in current models of how galaxies have formed over the past billions of years.

Matter falling onto a compact object is one of the most efficient mechanisms for releasing energy in the form of radiation, and matter falling onto black holes is thought to be responsible for some of the brightest astronomical phenomena imaginable. Notable examples of great interest to astronomers are quasars and other types of active galactic nuclei. Under the right conditions, falling matter accumulating around a black hole can lead to the formation of jets, in which focused beams of matter are flung away into space at speeds near that of light.

There are several properties that make black holes the most promising sources of gravitational waves. One reason is that black holes are the most compact objects that can orbit each other as part of a binary system; as a result, the gravitational waves emitted by such a system are especially strong. Another reason follows from what are called black hole uniqueness theorems: over time, black holes retain only a minimal set of distinguishing features (these theorems have become known as "no-hair" theorems, since different hairstyles are a crucial part of what gives different people their different appearances). For instance, in the long term, the collapse of a hypothetical matter cube will not result in a cube-shaped black hole. Instead, the resulting black hole will be indistinguishable from a black hole formed by the collapse of a spherical mass, but with one important difference: in its transition to a spherical shape, the black hole formed by the collapse of a cube will emit gravitational waves.

One of the most important aspects of general relativity is that it can be applied to the universe as a whole. A key point is that, on large scales, our universe appears to be constructed along very simple lines: All current observations suggest that, on average, the structure of the cosmos should be approximately the same, regardless of an observer's location or direction of observation: the universe is approximately homogeneous and isotropic. Such comparatively simple universes can be described by simple solutions of Einstein's equations. The current cosmological models of the universe are obtained by combining these simple solutions to general relativity with theories describing the properties of the universe's matter content, namely thermodynamics, nuclear and particle physics. According to these models, our present universe emerged from an extremely dense high-temperature state (the Big Bang) roughly 14 billion years ago, and has been expanding ever since.

Einstein's equations can be generalized by adding a term called the cosmological constant. When this term is present, empty space itself acts as a source of attractive or, unusually, repulsive gravity. Einstein originally introduced this term in his pioneering 1917 paper on cosmology, with a very specific motivation: contemporary cosmological thought held the universe to be static, and the additional term was required for constructing static model universes within the framework of general relativity. When it became apparent that the universe is not static, but expanding, Einstein was quick to discard this additional term; prematurely, as we know today: From about 1998 on, a steadily accumulating body of astronomical evidence has shown that the expansion of the universe is accelerating in a way that suggests the presence of a cosmological constant or, equivalently, of a dark energy with specific properties that pervades all of space.
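For a homogeneous and isotropic universe, Einstein's equations, including the cosmological-constant term, reduce to a single equation for the scale factor a(t) that tracks the stretching of cosmic distances: the Friedmann equation (a standard result, quoted here without derivation),

$$\left(\frac{\dot a}{a}\right)^2 = \frac{8\pi G}{3}\,\rho - \frac{kc^2}{a^2} + \frac{\Lambda c^2}{3},$$

where ρ is the average density of matter and radiation and k = −1, 0 or +1 encodes the spatial curvature. When the Λ term dominates, the expansion accelerates, which is precisely the behavior the observations since 1998 have indicated.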
Modern research: general relativity and beyond

General relativity is very successful in providing a framework for accurate models which describe an impressive array of physical phenomena. On the other hand, there are many interesting open questions, and in particular, the theory as a whole is almost certainly incomplete.

In contrast to all other modern theories of fundamental interactions, general relativity is a classical theory: it does not include the effects of quantum physics. The quest for a quantum version of general relativity addresses one of the most fundamental open questions in physics. While there are promising candidates for such a theory of quantum gravity, notably string theory and loop quantum gravity, there is at present no consistent and complete theory. It has long been hoped that a theory of quantum gravity would also eliminate another problematic feature of general relativity: the presence of spacetime singularities. These singularities are boundaries ("sharp edges") of spacetime at which geometry becomes ill-defined, with the consequence that general relativity itself loses its predictive power. Furthermore, there are so-called singularity theorems which predict that such singularities must exist within the universe if the laws of general relativity were to hold without any quantum modifications. The best-known examples are the singularities associated with the model universes that describe black holes and the beginning of the universe.

Other attempts to modify general relativity have been made in the context of cosmology. In the modern cosmological models, most energy in the universe is in forms that have never been detected directly, namely dark energy and dark matter. There have been several controversial proposals to obviate the need for these enigmatic forms of matter and energy, by modifying the laws governing gravity and the dynamics of cosmic expansion, for example modified Newtonian dynamics.

Beyond the challenges of quantum effects and cosmology, research on general relativity is rich with possibilities for further exploration: mathematical relativists explore the nature of singularities and the fundamental properties of Einstein's equations, ever more comprehensive computer simulations of specific spacetimes (such as those describing merging black holes) are run, and the race for the first direct detection of gravitational waves continues apace. More than ninety years after the theory was first published, research is more active than ever.

See also

- General relativity
- Introduction to mathematics of general relativity
- Introduction to special relativity
- History of general relativity
- Tests of general relativity
- Numerical relativity
- Derivations of the Lorentz transformations

Notes

- The Construction of Modern Science: Mechanisms and Mechanics, by Richard S. Westfall. Cambridge University Press. 1978
- This development is traced e.g. in Renn 2005, p. 110ff., in chapters 9 through 15 of Pais 1982, and in Janssen 2005. A precis of Newtonian gravity can be found in Schutz 2003, chapters 2–4. It is impossible to say whether the problem of Newtonian gravity crossed Einstein's mind before 1907, but by his own admission, his first serious attempts to reconcile that theory with special relativity date to that year, cf. Pais 1982, p. 178.
- This is described in detail in chapter 2 of Wheeler 1990.
- While the equivalence principle is still part of modern expositions of general relativity, there are some differences between the modern version and Einstein's original concept, cf. Norton 1985.
- E. g. Janssen 2005, p. 64f. Einstein himself also explains this in section XX of his non-technical book Einstein 1961.
Following earlier ideas by Ernst Mach, Einstein also explored centrifugal forces and their gravitational analogue, cf. Stachel 1989.
- Einstein explained this in section XX of Einstein 1961. He considered an object "suspended" by a rope from the ceiling of a room aboard an accelerating rocket: from inside the room it looks as if gravitation is pulling the object down with a force proportional to its mass, but from outside the rocket it looks as if the rope is simply transferring the acceleration of the rocket to the object, and must therefore exert just the "force" needed to do so.
- More specifically, Einstein's calculations, which are described in chapter 11b of Pais 1982, use the equivalence principle, the equivalence of gravity and inertial forces, and the results of special relativity for the propagation of light and for accelerated observers (the latter by considering, at each moment, the instantaneous inertial frame of reference associated with such an accelerated observer).
- This effect can be derived directly within special relativity, either by looking at the equivalent situation of two observers in an accelerated rocket-ship or by looking at a falling elevator; in both situations, the frequency shift has an equivalent description as a Doppler shift between certain inertial frames. For simple derivations of this, see Harrison 2002.
- See chapter 12 of Mermin 2005.
- Cf. Ehlers & Rindler 1997; for a non-technical presentation, see Pössel 2007.
- These and other tidal effects are described in Wheeler 1990, pp. 83–91.
- Tides and their geometric interpretation are explained in chapter 5 of Wheeler 1990. This part of the historical development is traced in Pais 1982, section 12b.
- For elementary presentations of the concept of spacetime, see the first section in chapter 2 of Thorne 1994, and Greene 2004, pp. 47–61. More complete treatments on a fairly elementary level can be found e.g. in Mermin 2005 and in Wheeler 1990, chapters 8 and 9.
- See Wheeler 1990, chapters 8 and 9 for vivid illustrations of curved spacetime.
- Einstein's struggle to find the correct field equations is traced in chapters 13–15 of Pais 1982.
- E.g. p. xi in Wheeler 1990.
- A thorough, yet accessible account of basic differential geometry and its application in general relativity can be found in Geroch 1978.
- See chapter 10 of Wheeler 1990.
- In fact, when starting from the complete theory, Einstein's equation can be used to derive these more complicated laws of motion for matter as a consequence of geometry, but deriving from this the motion of idealized test particles is a highly non-trivial task, cf. Poisson 2004.
- A simple explanation of mass–energy equivalence can be found in sections 3.8 and 3.9 of Giulini 2005.
- See chapter 6 of Wheeler 1990.
- For a more detailed definition of the metric, but one that is more informal than a textbook presentation, see chapter 14.4 of Penrose 2004.
- The geometrical meaning of Einstein's equations is explored in chapters 7 and 8 of Wheeler 1990; cf. box 2.6 in Thorne 1994. An introduction using only very simple mathematics is given in chapter 19 of Schutz 2003.
- The most important solutions are listed in every textbook on general relativity; for a (technical) summary of our current understanding, see Friedrich 2005.
- More precisely, these are VLBI measurements of planetary positions; see chapter 5 of Will 1993 and section 3.5 of Will 2006.
- For the historical measurements, see Hartl 2005, Kennefick 2005, and Kennefick 2007; Soldner's original derivation in the framework of Newton's theory is Soldner 1804. For the most precise measurements to date, see Bertotti 2005.
- See Kennefick 2005 and chapter 3 of Will 1993. For the Sirius B measurements, see Trimble & Barstow 2007.
- Pais 1982, Mercury on pp. 253–254, Einstein's rise to fame in sections 16b and 16c.
- Everitt, C.W.F.; Parkinson, B.W. (2009), Gravity Probe B Science Results—NASA Final Report (PDF), retrieved 2009-05-02
- Kramer 2004.
- An accessible account of relativistic effects in the global positioning system can be found in Ashby 2002; details are given in Ashby 2003.
- An accessible introduction to tests of general relativity is Will 1993; a more technical, up-to-date account is Will 2006.
- The geometry of such situations is explored in chapter 23 of Schutz 2003.
- Introductions to gravitational lensing and its applications can be found on the webpages Newbury 1997 and Lochner 2007.
- Schutz 2003, pp. 317–321; Bartusiak 2000, pp. 70–86.
- The ongoing search for gravitational waves is described vividly in Bartusiak 2000 and in Blair & McNamara 1997.
- For an overview of the history of black hole physics from its beginnings in the early 20th century to modern times, see the very readable account by Thorne 1994. For an up-to-date account of the role of black holes in structure formation, see Springel et al. 2005; a brief summary can be found in the related article Gnedin 2005.
- See chapter 8 of Sparke & Gallagher 2007 and Disney 1998. A treatment that is more thorough, yet involves only comparatively little mathematics can be found in Robson 1996.
- An elementary introduction to the black hole uniqueness theorems can be found in Chrusciel 2006 and in Thorne 1994, pp. 272–286.
- Detailed information can be found in Ned Wright's Cosmology Tutorial and FAQ, Wright 2007; a very readable introduction is Hogan 1999. Using undergraduate mathematics but avoiding the advanced mathematical tools of general relativity, Berry 1989 provides a more thorough presentation.
- Einstein's original paper is Einstein 1917; good descriptions of more modern developments can be found in Cowen 2001 and Caldwell 2004.
- Cf. Maddox 1998, pp. 52–59 and 98–122; Penrose 2004, section 34.1 and chapter 30.
- With a focus on string theory, the search for quantum gravity is described in Greene 1999; for an account from the point of view of loop quantum gravity, see Smolin 2001.
- For dark matter, see Milgrom 2002; for dark energy, Caldwell 2004.
- See Friedrich 2005.
- For a review of the various problems and the techniques being developed to overcome them, see Lehner 2002.
- See Bartusiak 2000 for an account up to that year; up-to-date news can be found on the websites of major detector collaborations such as GEO 600 and LIGO.
- A good starting point for a snapshot of present-day research in relativity is the electronic review journal Living Reviews in Relativity.

References

- Ashby, Neil (2002), "Relativity and the Global Positioning System" (PDF), Physics Today 55 (5): 41–47, Bibcode:2002PhT....55e..41A, doi:10.1063/1.1485583
- Ashby, Neil (2003), "Relativity in the Global Positioning System", Living Reviews in Relativity 6: 1, Bibcode:2003LRR.....6....1A, doi:10.12942/lrr-2003-1, retrieved 2007-07-06
- Bartusiak, Marcia (2000), Einstein's Unfinished Symphony: Listening to the Sounds of Space-Time, Berkley, ISBN 978-0-425-18620-6
- Berry, Michael V. (1989), Principles of Cosmology and Gravitation (2nd ed.), Institute of Physics Publishing, ISBN 0-85274-037-9
- Bertotti, Bruno (2005), "The Cassini Experiment: Investigating the Nature of Gravity", in Renn, Jürgen, One hundred authors for Einstein, Wiley-VCH, pp. 402–405, ISBN 3-527-40574-7
- Blair, David; McNamara, Geoff (1997), Ripples on a Cosmic Sea. The Search for Gravitational Waves, Perseus, ISBN 0-7382-0137-5
- Caldwell, Robert R. (2004), "Dark Energy", Physics World 17 (5): 37–42
- Chrusciel, Piotr (2006), "How many different kinds of black hole are there?", Einstein Online, retrieved 2007-07-15
- Cowen, Ron (2001), "A Dark Force in the Universe", Science News (Society for Science & the Public) 159 (14): 218, doi:10.2307/3981642, JSTOR 3981642
- Disney, Michael (1998), "A New Look at Quasars", Scientific American 6 (6): 52–57, doi:10.1038/scientificamerican0698-52
- Ehlers, Jürgen; Rindler, Wolfgang (1997), "Local and Global Light Bending in Einstein's and other Gravitational Theories", General Relativity and Gravitation 29 (4): 519–529, Bibcode:1997GReGr..29..519E, doi:10.1023/A:1018843001842
- Einstein, Albert (1917), "Kosmologische Betrachtungen zur allgemeinen Relativitätstheorie", Sitzungsberichte der Preußischen Akademie der Wissenschaften: 142
- Einstein, Albert (1961), Relativity. The special and general theory, Crown Publishers
- Friedrich, Helmut (2005), "Is general relativity 'essentially understood'?", Annalen Phys. 15 (1–2): 84–108, arXiv:gr-qc/0508016, Bibcode:2006AnP...518...84F, doi:10.1002/andp.200510173
- Geroch, Robert (1978), General relativity from A to B, University of Chicago Press, ISBN 0-226-28864-1
- Giulini, Domenico (2005), Special relativity. A first encounter, Oxford University Press, ISBN 0-19-856746-4
- Gnedin, Nickolay Y. (2005), "Digitizing the Universe", Nature 435 (7042): 572–573, Bibcode:2005Natur.435..572G, doi:10.1038/435572a, PMID 15931201
- Greene, Brian (1999), The Elegant Universe: Superstrings, Hidden Dimensions, and the Quest for the Ultimate Theory, Vintage, ISBN 0-375-70811-1
- Greene, Brian (2004), The Fabric of the Cosmos: Space, Time, and the Texture of Reality, A. A. Knopf, Bibcode:2004fcst.book.....G, ISBN 0-375-41288-3
- Harrison, David M. (2002), A Non-mathematical Proof of Gravitational Time Dilation (PDF), retrieved 2007-05-06
- Hartl, Gerhard (2005), "The Confirmation of the General Theory of Relativity by the British Eclipse Expedition of 1919", in Renn, Jürgen, One hundred authors for Einstein, Wiley-VCH, pp. 182–187, ISBN 3-527-40574-7
- Hogan, Craig J. (1999), The Little Book of the Big Bang. A Cosmic Primer, Springer, ISBN 0-387-98385-6
- Janssen, Michel (2005), "Of pots and holes: Einstein's bumpy road to general relativity" (PDF), Ann. Phys. (Leipzig) 14 (S1): 58–85, Bibcode:2005AnP...517S..58J, doi:10.1002/andp.200410130
- Kennefick, Daniel (2005), "Astronomers Test General Relativity: Light-bending and the Solar Redshift", in Renn, Jürgen, One hundred authors for Einstein, Wiley-VCH, pp. 178–181, ISBN 3-527-40574-7
- Kennefick, Daniel (2007), "Not Only Because of Theory: Dyson, Eddington and the Competing Myths of the 1919 Eclipse Expedition", Proceedings of the 7th Conference on the History of General Relativity, Tenerife, 2005, p. 685, arXiv:0709.0685, Bibcode:2007arXiv0709.0685K
- Kramer, Michael (2004), "Millisecond Pulsars as Tools of Fundamental Physics", in Karshenboim, S. G., Astrophysics, Clocks and Fundamental Constants (Lecture Notes in Physics Vol. 648), Springer, pp. 33–54 (E-Print at astro-ph/0405178)
- Lehner, Luis (2002), "Numerical Relativity: Status and Prospects", in Proceedings of the 16th International Conference on General Relativity and Gravitation, Durban, 15–21 July 2001, p. 210, arXiv:gr-qc/0202055, Bibcode:2002grg..conf..210L, doi:10.1142/9789812776556_0010, ISBN 978-981-238-171-2
- Lochner, Jim, ed. (2007), "Gravitational Lensing", Imagine the Universe website (NASA GSFC), retrieved 2007-06-12
- Maddox, John (1998), What Remains To Be Discovered, Macmillan, ISBN 0-684-82292-X
- Mermin, N. David (2005), It's About Time. Understanding Einstein's Relativity, Princeton University Press, ISBN 0-691-12201-6
- Milgrom, Mordehai (2002), "Does dark matter really exist?", Scientific American 287 (2): 30–37, doi:10.1038/scientificamerican0802-42
- Norton, John D. (1985), "What was Einstein's principle of equivalence?" (PDF), Studies in History and Philosophy of Science 16 (3): 203–246, doi:10.1016/0039-3681(85)90002-0, retrieved 2007-06-11
- Newbury, Pete (1997), Gravitational lensing webpages, retrieved 2007-06-12
- Nieto, Michael Martin (2006), "The quest to understand the Pioneer anomaly" (PDF), Europhysics News 37 (6): 30–34, Bibcode:2006ENews..37...30N, doi:10.1051/epn:2006604
- Pais, Abraham (1982), 'Subtle is the Lord ...' The Science and life of Albert Einstein, Oxford University Press, ISBN 0-19-853907-X
- Penrose, Roger (2004), The Road to Reality, A. A. Knopf, ISBN 0-679-45443-8
- Pössel, M. (2007), "The equivalence principle and the deflection of light", Einstein Online, archived from the original on 2007-05-03, retrieved 2007-05-06
- Poisson, Eric (2004), "The Motion of Point Particles in Curved Spacetime", Living Rev. Relativity 7, doi:10.12942/lrr-2004-6, retrieved 2007-06-13
- Renn, Jürgen, ed. (2005), Albert Einstein – Chief Engineer of the Universe: Einstein's Life and Work in Context, Berlin: Wiley-VCH, ISBN 3-527-40571-2
- Robson, Ian (1996), Active galactic nuclei, John Wiley, ISBN 0-471-95853-0
- Schutz, Bernard F. (2003), Gravity from the ground up, Cambridge University Press, ISBN 0-521-45506-5
- Smolin, Lee (2001), Three roads to quantum gravity, Basic, ISBN 0-465-07835-4
- von Soldner, Johann Georg (1804), "Ueber die Ablenkung eines Lichtstrals von seiner geradlinigen Bewegung, durch die Attraktion eines Weltkörpers, an welchem er nahe vorbei geht", Berliner Astronomisches Jahrbuch: 161–172
- Sparke, Linda S.; Gallagher, John S. (2007), Galaxies in the universe – An introduction, Cambridge University Press, ISBN 0-521-85593-4
- Springel, Volker; White, Simon D. M.; Jenkins, Adrian; Frenk, Carlos S.; Yoshida, N; Gao, L; Navarro, J; Thacker, R; Croton, D et al. (2005), "Simulations of the formation, evolution and clustering of galaxies and quasars", Nature 435 (7042): 629–636, arXiv:astro-ph/0504097, Bibcode:2005Natur.435..629S, doi:10.1038/nature03597, PMID 15931216
- Stachel, John (1989), "The Rigidly Rotating Disk as the 'Missing Link in the History of General Relativity'", in Howard, D.; Stachel, J., Einstein and the History of General Relativity (Einstein Studies, Vol. 1), Birkhäuser, pp. 48–62, ISBN 0-8176-3392-8
- Thorne, Kip (1994), Black Holes and Time Warps: Einstein's Outrageous Legacy, W W Norton & Company, ISBN 0-393-31276-3
- Trimble, Virginia; Barstow, Martin (2007), "Gravitational redshift and White Dwarf stars", Einstein Online, retrieved 2007-06-13
- Wheeler, John A. (1990), A Journey Into Gravity and Spacetime, Scientific American Library, San Francisco: W. H. Freeman, ISBN 0-7167-6034-7
- Will, Clifford M. (1993), Was Einstein Right?, Oxford University Press, ISBN 0-19-286170-0
- Will, Clifford M. (2006), "The Confrontation between General Relativity and Experiment", Living Rev. Relativity 9: 3, arXiv:gr-qc/0510072, Bibcode:2006LRR.....9....3W, doi:10.12942/lrr-2006-3, retrieved 2007-06-12
- Wright, Ned (2007), Cosmology tutorial and FAQ, University of California at Los Angeles, retrieved 2007-06-12

External links

Additional resources, including more advanced material, can be found in General relativity resources.
- Einstein Online. Website featuring articles on a variety of aspects of relativistic physics for a general audience, hosted by the Max Planck Institute for Gravitational Physics
- NCSA Spacetime Wrinkles. Website produced by the numerical relativity group at the National Center for Supercomputing Applications, featuring an elementary introduction to general relativity, black holes and gravitational waves