The Spanish–American War (Spanish: Guerra hispano-americana or Guerra hispano-estadounidense; Filipino: Digmaang Espanyol-Amerikano) was fought between the United States and Spain in 1898. Hostilities began in the aftermath of the internal explosion of USS Maine in Havana Harbor in Cuba, leading to U.S. intervention in the Cuban War of Independence. U.S. acquisition of Spain's Pacific possessions led to its involvement in the Philippine Revolution and ultimately in the Philippine–American War.
The main issue was Cuban independence. Revolts had been occurring for some years in Cuba against Spanish rule. The U.S. later backed these revolts upon entering the Spanish–American War. There had been war scares before, as in the Virginius Affair in 1873, but in the late 1890s, U.S. public opinion was agitated by anti-Spanish propaganda led by newspaper publishers such as Joseph Pulitzer and William Randolph Hearst which used yellow journalism to call for war. The business community across the United States had just recovered from a deep depression and feared that a war would reverse the gains. It lobbied vigorously against going to war.
The United States Navy armored cruiser USS Maine had mysteriously sunk in Havana Harbor; political pressures from the Democratic Party pushed the administration of Republican President William McKinley into a war that he had wished to avoid.
President McKinley signed a joint Congressional resolution demanding Spanish withdrawal and authorizing the President to use military force to help Cuba gain independence on April 20, 1898. In response, Spain severed diplomatic relations with the United States on April 21. On the same day, the U.S. Navy began a blockade of Cuba. On April 23, Spain stated that it would declare war if U.S. forces invaded its territory. On April 25, Congress declared that a state of war between the U.S. and Spain had de facto existed since April 21, the day the blockade of Cuba had begun. The United States had sent an ultimatum to Spain demanding that it surrender control of Cuba; when Spain did not reply in time, the United States concluded that Spain had ignored the ultimatum and intended to continue occupying Cuba.
The ten-week war was fought in both the Caribbean and the Pacific. As U.S. agitators for war well knew, U.S. naval power proved decisive, allowing expeditionary forces to disembark in Cuba against a Spanish garrison already facing nationwide Cuban insurgent attacks and further wasted by yellow fever. American, Cuban, and Philippine forces obtained the surrender of Santiago de Cuba and Manila despite the good performance of some Spanish infantry units and fierce fighting for positions such as San Juan Hill. Madrid sued for peace after two obsolete Spanish squadrons sank in Santiago de Cuba and Manila Bay and a third, more modern, fleet was recalled home to protect the Spanish coasts.
The result was the 1898 Treaty of Paris, negotiated on terms favorable to the U.S. which allowed it temporary control of Cuba and ceded ownership of Puerto Rico, Guam, and the Philippine islands. The cession of the Philippines involved payment of $20 million ($602,320,000 today) to Spain by the U.S. to cover infrastructure owned by Spain.
The defeat and loss of the last remnants of the Spanish Empire was a profound shock to Spain's national psyche and provoked a thorough philosophical and artistic reevaluation of Spanish society known as the Generation of '98. The United States gained several island possessions spanning the globe and a rancorous new debate over the wisdom of expansionism.
The combined problems arising from the Peninsular War (1807–1814), the loss of most of its colonies in the Americas in the early 19th-century Spanish American wars of independence, and three Carlist Wars (1832–1876) marked the low point of Spanish colonialism. Liberal Spanish elites like Antonio Cánovas del Castillo and Emilio Castelar offered new interpretations of the concept of "empire" to dovetail with Spain's emerging nationalism. Cánovas made clear in an address to the University of Madrid in 1882 his view of the Spanish nation as based on shared cultural and linguistic elements – on both sides of the Atlantic – that tied Spain's territories together.
Cánovas saw Spanish imperialism as markedly different in its methods and purposes of colonization from those of rival empires like the British or French. Spaniards regarded the spreading of civilization and Christianity as Spain's major objective and contribution to the New World. The concept of cultural unity bestowed special significance on Cuba, which had been Spanish for almost four hundred years, and was viewed as an integral part of the Spanish nation. The focus on preserving the empire would have negative consequences for Spain's national pride in the aftermath of the Spanish–American War.
In 1823, the fifth American President James Monroe (1758–1831, served 1817–1825) enunciated the Monroe Doctrine, which stated that the United States would not tolerate further efforts by European governments to retake or expand their colonial holdings in the Americas or to interfere with the newly independent states in the hemisphere; at the same time, the doctrine stated that the U.S. would respect the status of the existing European colonies. Before the American Civil War (1861–1865), Southern interests attempted to have the United States purchase Cuba and convert it into a new slave territory. The pro-slavery element put forward the Ostend Manifesto of 1854, which was rejected by anti-slavery forces.
After the American Civil War and Cuba's Ten Years' War, U.S. businessmen began monopolizing the devalued sugar markets in Cuba. In 1894, 90% of Cuba's total exports went to the United States, which also provided 40% of Cuba's imports. Cuba's total exports to the U.S. were almost twelve times larger than its exports to its mother country, Spain. U.S. business interests recognized that while Spain still held political authority over Cuba, economic authority over the island was, in practice, shifting to the US.
The U.S. became interested in a trans-isthmus canal either in Nicaragua, or in Panama, where the Panama Canal would later be built (1903–1914), and realized the need for naval protection. Captain Alfred Thayer Mahan was an especially influential theorist; his ideas were much admired by future 26th President Theodore Roosevelt, as the U.S. rapidly built a powerful naval fleet of steel warships in the 1880s and 1890s. Roosevelt served as Assistant Secretary of the Navy in 1897–1898 and was an aggressive supporter of an American war with Spain over Cuban interests.
Meanwhile, the "Cuba Libre" movement, led by Cuban intellectual José Martí until his death in 1895, had established offices in Florida. The face of the Cuban revolution in the U.S. was the Cuban "Junta", under the leadership of Tomás Estrada Palma, who in 1902 became Cuba's first president. The Junta dealt with leading newspapers and Washington officials and held fund-raising events across the US. It funded and smuggled weapons. It mounted a large propaganda campaign that generated enormous popular support in the U.S. in favor of the Cubans. Protestant churches and most Democrats were supportive, but business interests called on Washington to negotiate a settlement and avoid war.
Cuba attracted enormous American attention, but almost no discussion involved the other Spanish colonies of the Philippines, Guam, or Puerto Rico. Historians note that there was no popular demand in the United States for an overseas colonial empire—Americans did not admire the British Empire or the others.
The first serious bid for Cuban independence, the Ten Years' War, erupted in 1868 and was subdued by the authorities a decade later. Neither the fighting nor the reforms in the Pact of Zanjón (February 1878) quelled the desire of some revolutionaries for wider autonomy and ultimately independence. One such revolutionary, José Martí, continued to promote Cuban financial and political autonomy in exile. In early 1895, after years of organizing, Martí launched a three-pronged invasion of the island.
The plan called for one group from Santo Domingo led by Máximo Gómez, one group from Costa Rica led by Antonio Maceo Grajales, and another from the United States (preemptively thwarted by U.S. officials in Florida) to land in different places on the island and provoke an uprising. While their call for revolution, the grito de Baíre, was successful, the result was not the grand show of force Martí had expected. With a quick victory effectively lost, the revolutionaries settled in to fight a protracted guerrilla campaign.
Antonio Cánovas del Castillo, the architect of Spain's Restoration constitution and the prime minister at the time, ordered General Arsenio Martínez-Campos, a distinguished veteran of the war against the previous uprising in Cuba, to quell the revolt. Campos's reluctance to accept his new assignment and his method of containing the revolt to the province of Oriente earned him criticism in the Spanish press.
The mounting pressure forced Cánovas to replace General Campos with General Valeriano Weyler, a soldier who had experience in quelling rebellions in overseas provinces and the Spanish metropole. Weyler deprived the insurgency of weaponry, supplies, and assistance by ordering the residents of some Cuban districts to move to reconcentration areas near the military headquarters. This strategy was effective in slowing the spread of rebellion. In the United States, it fueled the fire of anti-Spanish propaganda. In a political speech, President William McKinley cited the policy to condemn Spanish actions against armed rebels, even saying this "was not civilized warfare" but "extermination".
The Spanish Government regarded Cuba as a province of Spain rather than a colony, and depended on it for prestige and trade, and as a training ground for the army. Spanish Prime Minister Antonio Cánovas del Castillo announced that "the Spanish nation is disposed to sacrifice to the last peseta of its treasure and to the last drop of blood of the last Spaniard before consenting that anyone snatch from it even one piece of its territory". He had long dominated and stabilized Spanish politics. He was assassinated in 1897 by Italian anarchist Michele Angiolillo, leaving a Spanish political system that was not stable and could not risk a blow to its prestige.
The eruption of the Cuban revolt, Weyler's measures, and the popular fury these events whipped up proved to be a boon to the newspaper industry in New York City, where Joseph Pulitzer of the New York World and William Randolph Hearst of the New York Journal recognized the potential for great headlines and stories that would sell copies. Both papers denounced Spain, but had little influence outside New York. American opinion generally saw Spain as a hopelessly backward power that was unable to deal fairly with Cuba. American Catholics were divided before the war began, but supported it enthusiastically once it started.
The U.S. had important economic interests that were being harmed by the prolonged conflict and deepening uncertainty about the future of Cuba. Shipping firms that had relied heavily on trade with Cuba now suffered losses as the conflict continued unresolved. These firms pressed Congress and McKinley to seek an end to the revolt. Other American business concerns, specifically those who had invested in Cuban sugar, looked to the Spanish to restore order. Stability, not war, was the goal of both interests. How stability would be achieved would depend largely on the ability of Spain and the U.S. to work out their issues diplomatically.
While tension increased between the Cubans and the Spanish Government, popular support for intervention began to spring up in the United States, owing to the emergence of the "Cuba Libre" movement and the fact that many Americans had drawn parallels between the American Revolution and the Cuban revolt, seeing the Spanish Government as the tyrannical colonial oppressor. Historian Louis Pérez notes that "The proposition of war in behalf of Cuban independence took hold immediately and held on thereafter. Such was the sense of the public mood." At the time many poems and songs were written in the United States to express support for the "Cuba Libre" movement. At the same time, many African Americans, facing growing racial discrimination and the steady erosion of their civil rights, wanted to take part in the war because they saw it as a way to advance the cause of equality, hoping that service to the country would help them gain political and public respect among the wider population.
President McKinley, well aware of the political complexity surrounding the conflict, wanted to end the revolt peacefully. In accordance with this policy, McKinley began to negotiate with the Spanish government, hoping that the negotiations would end the yellow journalism in the United States and, with it, the loudest calls to go to war with Spain. An attempt had been made to negotiate a peace before McKinley took office, but the Spanish refused to take part in the negotiations. In 1897 McKinley appointed Stewart L. Woodford as the new minister to Spain, who again offered to negotiate a peace. In October 1897, the Spanish government still refused the United States' offer to negotiate between the Spanish and the Cubans, but promised the U.S. it would give the Cubans more autonomy. However, with the election of a more liberal Spanish government in November, Spain began to change its policies in Cuba. First, the new Spanish government told the United States that it was willing to offer a change in the reconcentration policies (the main set of policies feeding yellow journalism in the United States) if the Cuban rebels agreed to a cessation of hostilities. This time the rebels refused the terms, hoping that continued conflict would lead to U.S. intervention and the creation of an independent Cuba. The liberal Spanish government also recalled the Spanish Governor General Valeriano Weyler from Cuba. This action alarmed many Cubans loyal to Spain.
The Cubans loyal to Weyler began planning large demonstrations to take place when the next Governor General, Ramón Blanco, arrived in Cuba. U.S. consul Fitzhugh Lee learned of these plans and sent a request to the U.S. State Department to send a U.S. warship to Cuba. This request led to USS Maine being sent to Cuba. While Maine was docked in Havana, an explosion sank the ship. The sinking of Maine was blamed on the Spanish and made the possibility of a negotiated peace very slim. Throughout the negotiation process, the major European powers, especially Britain, France, and Russia, generally supported the American position and urged Spain to give in. Spain repeatedly promised specific reforms that would pacify Cuba but failed to deliver; American patience ran out.
McKinley sent USS Maine to Havana to ensure the safety of American citizens and interests, and to underscore the urgent need for reform. Naval forces were moved in position to attack simultaneously on several fronts if the war was not avoided. As Maine left Florida, a large part of the North Atlantic Squadron was moved to Key West and the Gulf of Mexico. Others were also moved just off the shore of Lisbon, and still others were moved to Hong Kong.
At 9:40 on the evening of February 15, 1898, Maine sank in Havana Harbor after suffering a massive explosion. While McKinley urged patience and did not declare that Spain had caused the explosion, the deaths of 250 out of 355 sailors on board focused American attention. McKinley asked Congress to appropriate $50 million for defense, and Congress unanimously obliged. Most American leaders took the position that the cause of the explosion was unknown, but public attention was now riveted on the situation and Spain could not find a diplomatic solution to avoid war. Spain appealed to the European powers, most of whom advised it to accept U.S. conditions for Cuba in order to avoid war. Germany urged a united European stand against the United States but took no action.
The U.S. Navy's investigation, made public on March 28, concluded that the ship's powder magazines were ignited when an external explosion was set off under the ship's hull. This report poured fuel on popular indignation in the US, making the war inevitable. Spain's investigation came to the opposite conclusion: the explosion originated within the ship. Other investigations in later years came to various contradictory conclusions, but had no bearing on the coming of the war. In 1974, Admiral Hyman George Rickover had his staff look at the documents and decided there was an internal explosion. A study commissioned by National Geographic magazine in 1999, using AME computer modelling, stated that the explosion could have been caused by a mine, but no definitive evidence was found.
After Maine was destroyed, New York City newspaper publishers Hearst and Pulitzer decided that the Spanish were to blame, and they publicized this theory as fact in their papers. They both used sensationalistic and astonishing accounts of "atrocities" committed by the Spanish in Cuba by using headlines in their newspapers, such as "Spanish Murderers" and "Remember The Maine". Their press exaggerated what was happening and how the Spanish were treating the Cuban prisoners. The stories were based on factual accounts, but most of the time, the articles that were published were embellished and written with incendiary language causing emotional and often heated responses among readers. A common myth falsely states that when illustrator Frederic Remington said there was no war brewing in Cuba, Hearst responded: "You furnish the pictures and I'll furnish the war."
This new "yellow journalism" was, however, uncommon outside New York City, and historians no longer consider it the major force shaping the national mood. Public opinion nationwide did demand immediate action, overwhelming the efforts of President McKinley, Speaker of the House Thomas Brackett Reed, and the business community to find a negotiated solution. Wall Street, big business, high finance and Main Street businesses across the country were vocally opposed to war and demanded peace. After years of severe depression, the economic outlook for the domestic economy was suddenly bright again in 1897. However, the uncertainties of warfare posed a serious threat to full economic recovery. "War would impede the march of prosperity and put the country back many years," warned the New Jersey Trade Review. The leading railroad magazine editorialized, "From a commercial and mercenary standpoint it seems peculiarly bitter that this war should come when the country had already suffered so much and so needed rest and peace." McKinley paid close attention to the strong anti-war consensus of the business community, and strengthened his resolve to use diplomacy and negotiation rather than brute force to end the Spanish tyranny in Cuba.
A speech delivered by Republican Senator Redfield Proctor of Vermont on March 17, 1898, thoroughly analyzed the situation and greatly strengthened the pro-war cause. Proctor concluded that war was the only answer. Many in the business and religious communities, which had until then opposed war, switched sides, leaving McKinley and Speaker Reed almost alone in their resistance to a war. On April 11, McKinley ended his resistance and asked Congress for authority to send American troops to Cuba to end the civil war there, knowing that Congress would force a war.
On April 19, while Congress was considering joint resolutions supporting Cuban independence, Republican Senator Henry M. Teller of Colorado proposed the Teller Amendment to ensure that the U.S. would not establish permanent control over Cuba after the war. The amendment, disclaiming any intention to annex Cuba, passed the Senate 42 to 35; the House concurred the same day, 311 to 6. The amended resolution demanded Spanish withdrawal and authorized the President to use as much military force as he thought necessary to help Cuba gain independence from Spain. President McKinley signed the joint resolution on April 20, 1898, and the ultimatum was sent to Spain. In response, Spain severed diplomatic relations with the United States on April 21. On the same day, the U.S. Navy began a blockade of Cuba. On April 23, Spain stated that it would declare war if U.S. forces invaded its territory. On April 25, the U.S. Congress declared that a state of war between the U.S. and Spain had de facto existed since April 21, the day the blockade of Cuba had begun.
The Navy was ready, but the Army was not well-prepared for the war and made radical changes in plans and quickly purchased supplies. In the spring of 1898, the strength of the Regular U.S. Army was just 25,000 men. The Army wanted 50,000 new men but received over 220,000 through volunteers and the mobilization of state National Guard units, even gaining nearly 100,000 men on the first night after the explosion of USS Maine.
The overwhelming consensus of observers in the 1890s, and historians ever since, is that an upsurge of humanitarian concern with the plight of the Cubans was the main motivating force that caused the war with Spain in 1898. McKinley put it succinctly in late 1897 that if Spain failed to resolve its crisis, the United States would see "a duty imposed by our obligations to ourselves, to civilization and humanity to intervene with force." Intervention in terms of negotiating a settlement proved impossible—neither Spain nor the insurgents would agree. Louis Pérez states, "Certainly the moralistic determinants of war in 1898 has been accorded preponderant explanatory weight in the historiography." By the 1950s, however, American political scientists began attacking the war as a mistake based on idealism, arguing that a better policy would be realism. They discredited the idealism by suggesting the people were deliberately misled by propaganda and sensationalist yellow journalism. Political scientist Robert Osgood, writing in 1953, led the attack on the American decision process as a confused mix of "self-righteousness and genuine moral fervor," in the form of a "crusade" and a combination of "knight-errantry and national self-assertiveness."
In his War and Empire, Prof. Paul Atwood of the University of Massachusetts (Boston) writes:
The Spanish–American War was fomented on outright lies and trumped up accusations against the intended enemy. ... War fever in the general population never reached a critical temperature until the accidental sinking of the USS Maine was deliberately, and falsely, attributed to Spanish villainy. ... In a cryptic message ... Senator Lodge wrote that 'There may be an explosion any day in Cuba which would settle a great many things. We have got a battleship in the harbor of Havana, and our fleet, which overmatches anything the Spanish have, is masked at the Dry Tortugas.'
Our own direct interests were great, because of the Cuban tobacco and sugar, and especially because of Cuba's relation to the projected Isthmian [Panama] Canal. But even greater were our interests from the standpoint of humanity. ... It was our duty, even more from the standpoint of National honor than from the standpoint of National interest, to stop the devastation and destruction. Because of these considerations I favored war.
In the 333 years of Spanish rule, the Philippines developed from a small overseas colony governed from the Viceroyalty of New Spain to a land with modern elements in the cities. The Spanish-speaking middle classes of the 19th century were mostly educated in the liberal ideas coming from Europe. Among these Ilustrados was the Filipino national hero José Rizal, who demanded larger reforms from the Spanish authorities. This movement eventually led to the Philippine Revolution against Spanish colonial rule. The revolution had been in a state of truce since the signing of the Pact of Biak-na-Bato in 1897, with revolutionary leaders having accepted exile outside of the country.
On April 23, 1898, a document appeared in the Manila Gazette newspaper warning of the impending war and calling for Filipinos to participate on the side of Spain.[e]
The first battle between American and Spanish forces was at Manila Bay where, on May 1, Commodore George Dewey, commanding the U.S. Navy's Asiatic Squadron aboard USS Olympia, in a matter of hours defeated a Spanish squadron under Admiral Patricio Montojo.[f] Dewey managed this with only nine wounded. With the German seizure of Tsingtao in 1897, Dewey's squadron had become the only naval force in the Far East without a local base of its own, and was beset with coal and ammunition problems. Despite these problems, the Asiatic Squadron not only destroyed the Spanish fleet but also captured the harbor of Manila.
Following Dewey's victory, Manila Bay was filled with the warships of Britain, Germany, France, and Japan. The German fleet of eight ships, ostensibly in Philippine waters to protect German interests, acted provocatively – cutting in front of American ships, refusing to salute the United States flag (according to customs of naval courtesy), taking soundings of the harbor, and landing supplies for the besieged Spanish.
The Germans, with interests of their own, were eager to take advantage of whatever opportunities the conflict in the islands might afford. There was a fear at the time that the islands would become a German possession. The Americans called the bluff of the Germans, threatening conflict if the aggression continued, and the Germans backed down. At the time, the Germans expected the confrontation in the Philippines to end in an American defeat, with the revolutionaries capturing Manila and leaving the Philippines ripe for German picking.
Commodore Dewey transported Emilio Aguinaldo, a Filipino leader who had led rebellion against Spanish rule in the Philippines in 1896, from exile in Hong Kong to the Philippines to rally more Filipinos against the Spanish colonial government. By June 9, Aguinaldo's forces controlled the provinces of Bulacan, Cavite, Laguna, Batangas, Bataan, Zambales, Pampanga, Pangasinan, and Mindoro, and had laid siege to Manila. On June 12, Aguinaldo proclaimed the independence of the Philippines.
On August 5, on instructions from Spain, Governor General Basilio Augustín turned over command of the Philippines to his deputy, Fermín Jáudenes. On August 13, with American commanders unaware that a cease-fire had been signed between Spain and the U.S. on the previous day in Washington D.C., American forces captured the city of Manila from the Spanish in the Battle of Manila. This battle marked the end of Filipino–American collaboration, as the American action of preventing Filipino forces from entering the captured city of Manila was deeply resented by the Filipinos. This later led to the Philippine–American War, which would prove to be more deadly and costly than the Spanish–American War.
The U.S. had sent a force of some 11,000 ground troops to the Philippines. On August 14, 1898, Spanish Captain-General Jáudenes formally capitulated and U.S. General Merritt formally accepted the surrender and declared the establishment of a U.S. military government in occupation. The Schurman Commission later recommended that the U.S. retain control of the Philippines, possibly granting independence in the future. On December 10, 1898, the Spanish government ceded the Philippines to the United States in the Treaty of Paris. Armed conflict broke out between U.S. forces and the Filipinos when U.S. troops began to take the place of the Spanish in control of the country after the end of the war, resulting in the Philippine–American War.
On June 20, a U.S. fleet commanded by Captain Henry Glass, consisting of the protected cruiser USS Charleston and three transports carrying troops to the Philippines, entered Guam's Apra Harbor, Captain Glass having opened sealed orders instructing him to proceed to Guam and capture it. Charleston fired a few rounds at Fort Santa Cruz without receiving return fire. Two local officials, not knowing that war had been declared and believing the firing had been a salute, came out to Charleston to apologize for their inability to return the salute as they were out of gunpowder. Glass informed them that the U.S. and Spain were at war.
The following day, Glass sent Lieutenant William Braunersruehter to meet the Spanish Governor to arrange the surrender of the island and the Spanish garrison there. Some 54 Spanish infantry were captured and transported to the Philippines as prisoners of war. No U.S. forces were left on Guam, but the only U.S. citizen on the island, Frank Portusach, told Captain Glass that he would look after things until U.S. forces returned.
Theodore Roosevelt advocated intervention in Cuba, both for the Cuban people and to promote the Monroe Doctrine. While Assistant Secretary of the Navy, he placed the Navy on a war-time footing and prepared Dewey's Asiatic Squadron for battle. He also worked with Leonard Wood in convincing the Army to raise an all-volunteer regiment, the 1st U.S. Volunteer Cavalry. Wood was given command of the regiment that quickly became known as the "Rough Riders".
The Americans planned to capture the city of Santiago de Cuba to destroy Linares' army and Cervera's fleet. To reach Santiago they had to pass through concentrated Spanish defenses in the San Juan Hills and the small town of El Caney. The American forces were aided in Cuba by the pro-independence rebels led by General Calixto García.
For quite some time the Cuban public had believed that the United States government might hold the key to its independence, and even annexation was considered for a time, which historian Louis Pérez explored in his book Cuba and the United States: Ties of Singular Intimacy. The Cubans harbored a great deal of discontent towards the Spanish Government, due to years of manipulation on the part of the Spanish. The prospect of getting the United States involved in the fight was considered by many Cubans as a step in the right direction. While the Cubans were wary of the United States' intentions, the overwhelming support from the American public provided the Cubans with some peace of mind, because they believed that the United States was committed to helping them achieve their independence. However, with the imposition of the Platt Amendment of 1901 after the war, as well as economic and military manipulation on the part of the United States, Cuban sentiment towards the United States became polarized, with many Cubans disappointed with continuing American interference.
From June 22 to 24, the Fifth Army Corps under General William R. Shafter landed at Daiquirí and Siboney, east of Santiago, and established an American base of operations. A contingent of Spanish troops, having fought a skirmish with the Americans near Siboney on June 23, had retired to their lightly entrenched positions at Las Guasimas. An advance guard of U.S. forces under former Confederate General Joseph Wheeler ignored Cuban scouting parties and orders to proceed with caution. They caught up with and engaged the Spanish rearguard of about 2,000 soldiers led by General Antero Rubín who effectively ambushed them, in the Battle of Las Guasimas on June 24. The battle ended indecisively in favor of Spain and the Spanish left Las Guasimas on their planned retreat to Santiago.
The U.S. Army employed Civil War-era skirmishers at the head of the advancing columns. Three of the four U.S. soldiers who had volunteered to act as skirmishers walking point at the head of the American column were killed, including Hamilton Fish II (grandson of Hamilton Fish, the Secretary of State under Ulysses S. Grant) and Captain Allyn K. Capron, Jr., whom Theodore Roosevelt would describe as one of the finest natural leaders and soldiers he ever met. Only Tom Isbell, a Pawnee Indian from Oklahoma Territory who was wounded seven times, survived.
The Battle of Las Guasimas showed the U.S. that quick-thinking American soldiers would not stick to linear tactics, which did not work effectively against Spanish troops who had learned the art of cover and concealment from their own struggle with Cuban insurgents and never made the error of revealing their positions while on the defense. Americans advanced by rushes and stayed in the weeds so that they, too, were largely invisible to the Spaniards, who used un-targeted volley fire to try to mass fires against the advancing Americans. While some troops were hit, this technique was mostly a waste of bullets, as the Americans learned to duck as soon as they heard Spanish officers yell the order to fire, "¡Fuego!". Spanish troops were equipped with smokeless powder arms that also helped them to hide their positions while firing.
Regular Spanish troops were mostly armed with modern charger-loaded, 7 mm 1893 Spanish Mauser rifles using smokeless powder. The high-speed 7×57mm Mauser round was termed the "Spanish Hornet" by the Americans because of the supersonic crack as it passed overhead. Other irregular troops were armed with Remington Rolling Block rifles in .43 Spanish using smokeless powder and brass-jacketed bullets. U.S. regular infantry were armed with the .30–40 Krag–Jørgensen, a bolt-action rifle with a complex rotating magazine. Both the U.S. regular cavalry and the volunteer cavalry used smokeless ammunition. In later battles, state volunteers used the .45–70 Springfield, a single-shot black powder rifle.
On July 1, a combined force of about 15,000 American troops in regular infantry and cavalry regiments, including all four of the army's "Colored" Buffalo soldier regiments, and volunteer regiments, among them Roosevelt and his "Rough Riders", the 71st New York, the 2nd Massachusetts Infantry, and 1st North Carolina, and rebel Cuban forces attacked 1,270 entrenched Spaniards in dangerous Civil War-style frontal assaults at the Battle of El Caney and Battle of San Juan Hill outside of Santiago. More than 200 U.S. soldiers were killed and close to 1,200 wounded in the fighting, thanks to the high rate of fire the Spanish put down range at the Americans. Supporting fire by Gatling guns was critical to the success of the assault. Cervera decided to escape Santiago two days later. First Lieutenant John J. Pershing, nicknamed "Black Jack", oversaw the 10th Cavalry Unit during the war. Pershing and his unit fought in the Battle of San Juan Hill. Pershing was cited for his gallantry during the battle.
The Spanish forces at Guantánamo were so isolated by Marines and Cuban forces that they did not know that Santiago was under siege, and their forces in the northern part of the province could not break through Cuban lines. This was not true of the Escario relief column from Manzanillo, which fought its way past determined Cuban resistance but arrived too late to participate in the siege.
After the battles of San Juan Hill and El Caney, the American advance halted. Spanish troops successfully defended Fort Canosa, allowing them to stabilize their line and bar the entry to Santiago. The Americans and Cubans forcibly began a bloody, strangling siege of the city. During the nights, Cuban troops dug successive series of "trenches" (raised parapets), toward the Spanish positions. Once completed, these parapets were occupied by U.S. soldiers and a new set of excavations went forward. American troops, while suffering daily losses from Spanish fire, suffered far more casualties from heat exhaustion and mosquito-borne disease. At the western approaches to the city, Cuban general Calixto Garcia began to encroach on the city, causing much panic and fear of reprisals among the Spanish forces.
Lieutenant Carter P. Johnson of the Buffalo Soldiers' 10th Cavalry, with experience in special operations roles as head of the 10th Cavalry's attached Apache scouts in the Apache Wars, chose 50 soldiers from the regiment to lead a deployment mission with at least 375 Cuban soldiers under Cuban Brigadier General Emilio Nunez and other supplies to the mouth of the San Juan River east of Cienfuegos. On June 29, 1898, a reconnaissance team in landing boats from the transports Florida and Fanita attempted to land on the beach, but were repelled by Spanish fire. A second attempt was made on June 30, 1898, but a team of reconnaissance soldiers was trapped on the beach near the mouth of the Tallabacoa River. A team of four soldiers saved this group and were awarded Medals of Honor. The USS Peoria and the recently arrived USS Helena then shelled the beach to distract the Spanish while the Cuban deployment landed forty miles east at Palo Alto, where they linked up with Cuban General Gomez.
The major port of Santiago de Cuba was the main target of naval operations during the war. The U.S. fleet attacking Santiago needed shelter from the summer hurricane season; Guantánamo Bay, with its excellent harbor, was chosen. The 1898 invasion of Guantánamo Bay happened between June 6 and 10, with the first U.S. naval attack and subsequent successful landing of U.S. Marines with naval support.
On April 23, a council of senior admirals of the Spanish Navy had decided to order Admiral Pascual Cervera y Topete's squadron of four armored cruisers and three torpedo boat destroyers to proceed from their present location in Cape Verde (having left from Cadiz, Spain) to the West Indies.
The Battle of Santiago de Cuba on July 3, was the largest naval engagement of the Spanish–American War and resulted in the destruction of the Spanish Caribbean Squadron (also known as the Flota de Ultramar). In May, the fleet of Spanish Admiral Pascual Cervera y Topete had been spotted by American forces in Santiago harbor, where they had taken shelter for protection from sea attack. A two-month stand-off between Spanish and American naval forces followed.
When the Spanish squadron finally attempted to leave the harbor on July 3, the American forces destroyed or grounded five of the six ships. Only one Spanish vessel, the new armored cruiser Cristóbal Colón, survived, but her captain hauled down her flag and scuttled her when the Americans finally caught up with her. The 1,612 Spanish sailors who were captured, including Admiral Cervera, were sent to Seavey's Island at the Portsmouth Naval Shipyard in Kittery, Maine, where they were confined at Camp Long as prisoners of war from July 11 until mid-September.
During the stand-off, U.S. Assistant Naval Constructor, Lieutenant Richmond Pearson Hobson had been ordered by Rear Admiral William T. Sampson to sink the collier USS Merrimac in the harbor to bottle up the Spanish fleet. The mission was a failure, and Hobson and his crew were captured. They were exchanged on July 6, and Hobson became a national hero; he received the Medal of Honor in 1933, retired as a Rear Admiral and became a Congressman.
Yellow fever had quickly spread amongst the American occupation force, crippling it. A group of concerned officers of the American army chose Theodore Roosevelt to draft a request to Washington that it withdraw the Army, a request that paralleled a similar one from General Shafter, who described his force as an "army of convalescents". By the time of his letter, 75% of the force in Cuba was unfit for service.
On August 7, the American invasion force started to leave Cuba. The evacuation was not total. The U.S. Army kept the black Ninth U.S. Cavalry Regiment in Cuba to support the occupation. The logic was that their race and the fact that many black volunteers came from southern states would protect them from disease; this logic led to these soldiers being nicknamed "Immunes". Still, when the Ninth left, 73 of its 984 soldiers had contracted the disease.
In May 1898, Lt. Henry H. Whitney of the United States Fourth Artillery was sent to Puerto Rico on a reconnaissance mission, sponsored by the Army's Bureau of Military Intelligence. He provided maps and information on the Spanish military forces to the U.S. government before the invasion.
The American offensive began on May 12, 1898, when a squadron of 12 U.S. ships commanded by Rear Adm. William T. Sampson of the United States Navy attacked the archipelago's capital, San Juan. Though the damage inflicted on the city was minimal, the Americans established a blockade in the city's harbor, San Juan Bay. On June 22, the cruiser Isabel II and the destroyer Terror delivered a Spanish counterattack, but were unable to break the blockade and Terror was damaged.
The land offensive began on July 25, when 1,300 infantry soldiers led by Nelson A. Miles disembarked off the coast of Guánica. The first organized armed opposition occurred in Yauco in what became known as the Battle of Yauco.
This encounter was followed by the Battle of Fajardo. The United States seized control of Fajardo on August 1, but were forced to withdraw on August 5 after a group of 200 Puerto Rican-Spanish soldiers led by Pedro del Pino gained control of the city, while most civilian inhabitants fled to a nearby lighthouse. The Americans encountered larger opposition during the Battle of Guayama and as they advanced towards the main island's interior. They engaged in crossfire at Guamaní River Bridge, Coamo and Silva Heights and finally at the Battle of Asomante. The battles were inconclusive as the allied soldiers retreated.
A battle in San Germán concluded in a similar fashion with the Spanish retreating to Lares. On August 9, 1898, American troops that were pursuing units retreating from Coamo encountered heavy resistance in Aibonito in a mountain known as Cerro Gervasio del Asomante and retreated after six of their soldiers were injured. They returned three days later, reinforced with artillery units and attempted a surprise attack. In the subsequent crossfire, confused soldiers reported seeing Spanish reinforcements nearby and five American officers were gravely injured, which prompted a retreat order. All military actions in Puerto Rico were suspended on August 13, after U.S. President William McKinley and French Ambassador Jules Cambon, acting on behalf of the Spanish Government, signed an armistice whereby Spain relinquished its sovereignty over Puerto Rico.
With defeats in Cuba and the Philippines, and both of its fleets destroyed, Spain sued for peace and negotiations were opened between the two parties. After the sickness and death of British consul Edward Henry Rawson-Walker, American admiral George Dewey requested the Belgian consul to Manila, Édouard André, to take Rawson-Walker's place as intermediary with the Spanish Government.
Hostilities were halted on August 12, 1898, with the signing in Washington of a Protocol of Peace between the United States and Spain. After over two months of difficult negotiations, the formal peace treaty, the Treaty of Paris, was signed in Paris on December 10, 1898, and was ratified by the United States Senate on February 6, 1899.
The United States gained Spain's colonies of the Philippines, Guam and Puerto Rico in the treaty, and Cuba became a U.S. protectorate. The treaty came into force in Cuba April 11, 1899, with Cubans participating only as observers. Having been occupied since July 17, 1898, and thus under the jurisdiction of the United States Military Government (USMG), Cuba formed its own civil government and gained independence on May 20, 1902, with the announced end of USMG jurisdiction over the island. However, the U.S. imposed various restrictions on the new government, including prohibiting alliances with other countries, and reserved the right to intervene. The U.S. also established a perpetual lease of Guantánamo Bay.
The war lasted ten weeks. John Hay (the United States Ambassador to the United Kingdom), writing from London to his friend Theodore Roosevelt, declared that it had been "a splendid little war". The press showed Northerners and Southerners, blacks and whites fighting against a common foe, helping to ease the scars left from the American Civil War. Exemplary of this was the fact that four former Confederate States Army generals had served in the war, now in the U.S. Army and all of them again carrying similar ranks. These officers included Matthew Butler, Fitzhugh Lee, Thomas L. Rosser and Joseph Wheeler, though only Wheeler saw action. Still, in an exciting moment during the Battle of Las Guasimas, Wheeler apparently forgot for a moment which war he was fighting, having supposedly called out "Let's go, boys! We've got the damn Yankees on the run again!"
The war marked American entry into world affairs. Since then, the U.S. has had a significant hand in various conflicts around the world, and entered many treaties and agreements. The Panic of 1893 was over by this point, and the U.S. entered a long and prosperous period of economic and population growth, and technological innovation that lasted through the 1920s.
The war redefined national identity, served as a solution of sorts to the social divisions plaguing the American mind, and provided a model for all future news reporting.
The idea of American imperialism changed in the public's mind after the short and successful Spanish–American War. Due to the United States' powerful influence diplomatically and militarily, Cuba's status after the war relied heavily upon American actions. Two major developments emerged from the Spanish–American War: one, it greatly reinforced the United States' vision of itself as a "defender of democracy" and as a major world power, and two, it had severe implications for Cuban–American relations in the future. As historian Louis Pérez argued in his book Cuba in the American Imagination: Metaphor and the Imperial Ethos, the Spanish–American War of 1898 "fixed permanently how Americans came to think of themselves: a righteous people given to the service of righteous purpose".
The war greatly reduced the Spanish Empire. Spain had been declining as an imperial power since the early 19th century as a result of Napoleon's invasion. The loss of Cuba caused a national trauma because of the affinity of peninsular Spaniards with Cuba, which was seen as another province of Spain rather than as a colony. Spain retained only a handful of overseas holdings: Spanish West Africa (Spanish Sahara), Spanish Guinea, Spanish Morocco, and the Canary Islands.
The Spanish soldier Julio Cervera Baviera, who served in the Puerto Rican Campaign, published a pamphlet in which he blamed the natives of that colony for its occupation by the Americans, saying, "I have never seen such a servile, ungrateful country [i.e., Puerto Rico].... In twenty-four hours, the people of Puerto Rico went from being fervently Spanish to enthusiastically American.... They humiliated themselves, giving in to the invader as the slave bows to the powerful lord." He was challenged to a duel by a group of young Puerto Ricans for writing this pamphlet.
Culturally, a new wave called the Generation of '98 originated as a response to this trauma, marking a renaissance in Spanish culture. Economically, the war benefited Spain, because after the war large sums of capital held by Spaniards in Cuba and the United States were returned to the peninsula and invested in Spain. This massive flow of capital (equivalent to 25% of the gross domestic product of one year) helped to develop the large modern firms in Spain in the steel, chemical, financial, mechanical, textile, shipyard, and electrical power industries. However, the political consequences were serious. The defeat in the war began the weakening of the fragile political stability that had been established earlier by the rule of Alfonso XII.
The Teller Amendment, which was enacted on April 20, 1898, was a promise from the United States to the Cuban people that it was not declaring war to annex Cuba, but to help it gain its independence from Spain. The Platt Amendment was a move by the United States' government to shape Cuban affairs without violating the Teller Amendment.
The U.S. Congress had passed the Teller Amendment before the war, promising Cuban independence. However, the Senate passed the Platt Amendment as a rider to an Army appropriations bill, forcing a peace treaty on Cuba which prohibited it from signing treaties with other nations or contracting a public debt. The Platt Amendment was pushed by imperialists who wanted to project U.S. power abroad (in contrast to the Teller Amendment, which was pushed by anti-imperialists who called for a restraint on U.S. rule). The amendment granted the United States the right to stabilize Cuba militarily as needed. In addition, the Platt Amendment permitted the United States to deploy Marines to Cuba if its freedom and independence was ever threatened or jeopardized by an external or internal force. The Platt Amendment also provided for a permanent American naval base in Cuba. Guantánamo Bay was established after the signing of the Cuban–American Treaty of Relations in 1903. Thus, although Cuba technically gained its independence after the war ended, the United States government ensured that it had some form of power and control over Cuban affairs.
The U.S. annexed the former Spanish colonies of Puerto Rico, the Philippines and Guam. The notion of the United States as an imperial power, with colonies, was hotly debated domestically with President McKinley and the Pro-Imperialists winning their way over vocal opposition led by Democrat William Jennings Bryan, who had supported the war. The American public largely supported the possession of colonies, but there were many outspoken critics such as Mark Twain, who wrote The War Prayer in protest.
Roosevelt returned to the United States a war hero, and he was soon elected Governor of New York and then Vice President. At the age of 42, he became the youngest person to assume the presidency, following the assassination of President William McKinley.
The war served to further repair relations between the American North and South. The war gave both sides a common enemy for the first time since the end of the Civil War in 1865, and many friendships were formed between soldiers of northern and southern states during their tours of duty. This was an important development, since many soldiers in this war were the children of Civil War veterans on both sides.
The African-American community strongly supported the rebels in Cuba, supported entry into the war, and gained prestige from their wartime performance in the Army. Spokesmen noted that 33 African-American seamen had died in the Maine explosion. The most influential Black leader, Booker T. Washington, argued that his race was ready to fight. War offered them a chance "to render service to our country that no other race can", because, unlike Whites, they were "accustomed" to the "peculiar and dangerous climate" of Cuba. One of the Black units that served in the war was the 9th Cavalry Regiment. In March 1898, Washington promised the Secretary of the Navy that war would be answered by "at least ten thousand loyal, brave, strong black men in the south who crave an opportunity to show their loyalty to our land, and would gladly take this method of showing their gratitude for the lives laid down, and the sacrifices made, that Blacks might have their freedom and rights."
In 1904, the United Spanish War Veterans was created from smaller groups of the veterans of the Spanish–American War. Today, that organization is defunct, but it left an heir in the Sons of Spanish–American War Veterans, created in 1937 at the 39th National Encampment of the United Spanish War Veterans. According to data from the United States Department of Veterans Affairs, the last surviving U.S. veteran of the conflict, Nathan E. Cook, died on September 10, 1992, at age 106. (If the data is to be believed, Cook, born October 10, 1885, would have been only 12 years old when he served in the war.)
The Veterans of Foreign Wars of the United States (VFW) was formed in 1914 from the merger of two veterans organizations which both arose in 1899: the American Veterans of Foreign Service and the National Society of the Army of the Philippines. The former was formed for veterans of the Spanish–American War, while the latter was formed for veterans of the Philippine–American War. Both organizations were formed in response to the general neglect veterans returning from the war experienced at the hands of the government.
To pay the costs of the war, Congress passed an excise tax on long-distance phone service. At the time, it affected only wealthy Americans who owned telephones. However, the Congress neglected to repeal the tax after the war ended four months later, and the tax remained in place for over 100 years until, on August 1, 2006, it was announced that the U.S. Department of the Treasury and the IRS would no longer collect the tax.
The change in sovereignty of Puerto Rico, like the occupation of Cuba, brought about major changes in both the insular and U.S. economies. Before 1898 the sugar industry in Puerto Rico had been in decline for nearly half a century. In the second half of the nineteenth century, technological advances increased the capital requirements needed to remain competitive in the sugar industry. Agriculture began to shift toward coffee production, which required less capital and land accumulation. However, these trends were reversed with U.S. hegemony. Early U.S. monetary and legal policies made it both harder for local farmers to continue operations and easier for American businesses to accumulate land. This, along with the large capital reserves of American businesses, led to a resurgence in the Puerto Rican sugar industry in the form of large American-owned agro-industrial complexes.
At the same time, the inclusion of Puerto Rico into the U.S. tariff system as a customs area, effectively treating Puerto Rico as a state with respect to internal or external trade, increased the codependence of the insular and mainland economies and benefitted sugar exports with tariff protection. In 1897 the United States purchased 19.6 percent of Puerto Rico's exports while supplying 18.5 percent of its imports. By 1905 these figures jumped to 84 percent and 85 percent, respectively. However, coffee was not protected, as it was not a product of the mainland. At the same time, Cuba and Spain, traditionally the largest importers of Puerto Rican coffee, now subjected Puerto Rico to previously nonexistent import tariffs. These two effects led to a decline in the coffee industry. From 1897 to 1901 coffee went from 65.8 percent of exports to 19.6 percent while sugar went from 21.6 percent to 55 percent. The tariff system also provided a protected market place for Puerto Rican tobacco exports. The tobacco industry went from nearly nonexistent in Puerto Rico to a major part of the country's agricultural sector.
The Spanish–American War was the first U.S. war in which the motion picture camera played a role. The Library of Congress archives contain many films and film clips from the war. In addition, a few feature films have been made about the war. These include
The United States awards and decorations of the Spanish–American War were as follows:
The governments of Spain and Cuba also issued a wide variety of military awards to honor Spanish, Cuban, and Philippine soldiers who had served in the conflict.
It has been a splendid little war; begun with the highest motives, carried on with magnificent intelligence and spirit, favored by the fortune which loves the brave. It is now to be concluded, I hope, with that firm good nature which is after all the distinguishing trait of our American character.
The 1898 United States elections occurred in the middle of Republican President William McKinley's first term, during the Fourth Party System. The elections took place shortly after the end of the Spanish–American War. Members of the 56th United States Congress were chosen in this election. Republicans retained control of both houses of Congress.
Democrats picked up several seats in the House at the expense of Republicans and the Populist Party. However, Republicans continued to control the chamber with a slightly diminished majority. In the Senate, Republicans picked up several seats at the expense of the Democrats, growing the Republican majority. Several Senators continued to affiliate with third parties. The elections helped Democrats further incorporate the remaining elements of the Populist Party, many of whom had been attracted to the Democratic Party after the 1896 candidacy of William Jennings Bryan. Republican Senate gains helped ensure ratification of the Treaty of Paris, which ended the Spanish–American War and left the US in control of Cuba, the Philippines, Guam, and Puerto Rico.
A Message to Garcia (1936 film)
A Message to Garcia is a 1936 American war film directed by George Marshall and starring Wallace Beery, Barbara Stanwyck, John Boles, and Alan Hale, Sr. The film is inspired by the 1899 essay A Message to Garcia by Elbert Hubbard, loosely based on an incident during the Spanish–American War. The essay had previously been made into a 1916 silent film, A Message to Garcia. Agent Rowan carries a message from President McKinley to General Garcia, the leader of a rebellion against Spanish rule on the island of Cuba during the time of the Spanish–American War.
Asiatic Squadron
The Asiatic Squadron was a squadron of United States Navy warships stationed in East Asia during the latter half of the 19th century. It was created in 1868 when the East India Squadron was disbanded. Vessels of the squadron were primarily involved in matters relating to American commerce with China and Japan, though it participated in several conflicts over 34 years of service until becoming the Asiatic Fleet in 1902.
Battle of Manila (1898)
The Battle of Manila (Filipino: Labanan sa Maynila; Spanish: Batalla de Manila), sometimes called the Mock Battle of Manila, was a land engagement which took place in Manila on August 13, 1898, at the end of the Spanish–American War, four months after the decisive victory by Commodore Dewey's Asiatic Squadron at the Battle of Manila Bay. The belligerents were Spanish forces led by Governor-General of the Philippines Fermín Jáudenes, and American forces led by United States Army Brigadier General Wesley Merritt and United States Navy Commodore George Dewey. American forces were supported by units of the Philippine Revolutionary Army, led by Emilio Aguinaldo.
The battle is sometimes referred to as the "Mock Battle of Manila" because the local Spanish and American generals, who were legally still at war, secretly and jointly planned the battle to transfer control of the city center from the Spanish to the Americans while keeping the Philippine Revolutionary Army, led by Emilio Aguinaldo, out of the city center. The battle left American forces in control of Intramuros, the center of Manila, surrounded by Philippine revolutionary forces, creating the conditions for the Battle of Manila of 1899 and the start of the Philippine–American War.
Battle of Manila Bay
The Battle of Manila Bay (Spanish: Batalla de Bahía de Manila), also known as the Battle of Cavite, took place on 1 May 1898, during the Spanish–American War. The American Asiatic Squadron under Commodore George Dewey engaged and destroyed the Spanish Pacific Squadron under Contraalmirante (Rear admiral) Patricio Montojo. The battle took place in Manila Bay in the Philippines, and was the first major engagement of the Spanish–American War. The battle was one of the most decisive naval battles in history and marked the end of the Spanish colonial period in Philippine history.
Battle of Santiago de Cuba
The Battle of Santiago de Cuba was a naval battle that occurred on July 3, 1898, in which the United States Navy decisively defeated Spanish forces, sealing American victory in the Spanish–American War and achieving nominal independence for Cuba from Spanish rule.
Cuban War of Independence
The Cuban War of Independence (Spanish: Guerra de Independencia cubana, 1895–98) was the last of three liberation wars that Cuba fought against Spain, the other two being the Ten Years' War (1868–1878) and the Little War (1879–1880). The final three months of the conflict escalated to become the Spanish–American War, with United States forces being deployed in Cuba, Puerto Rico, and the Philippine Islands against Spain. Historians disagree as to the extent that United States officials were motivated to intervene for humanitarian reasons but agree that yellow journalism exaggerated atrocities attributed to Spanish forces against Cuban civilians.
Gatling gun
The Gatling gun is one of the best-known early rapid-fire, spring-loaded, hand-cranked weapons and a forerunner of the modern machine gun and rotary cannon. Invented by Richard Gatling, it saw occasional use by Union forces during the American Civil War in the 1860s, which was the first time it was employed in combat. Later, it was used again in numerous military conflicts, such as the Boshin War, the Anglo-Zulu War, and the assault on San Juan Hill during the Spanish–American War. It was also used by the Pennsylvania militia in episodes of the Great Railroad Strike of 1877, specifically in Pittsburgh.
The Gatling gun's operation centered on a cyclic multi-barrel design which facilitated cooling and synchronized the firing-reloading sequence. Each barrel fired a single shot when it reached a certain point in the cycle, after which it ejected the spent cartridge, loaded a new round, and, in the process, allowed the barrel to cool somewhat. This configuration allowed higher rates of fire to be achieved without the barrels overheating.
The Gatling gun was itself an early form of rotary cannon, and today modern rotary cannons are often referred to as Gatling guns.
Guantánamo Bay
Guantánamo Bay (Spanish: Bahía de Guantánamo) is a bay located in Guantánamo Province at the southeastern end of Cuba. It is the largest harbor on the south side of the island and it is surrounded by steep hills which create an enclave that is cut off from its immediate hinterland.
The United States assumed territorial control over the southern portion of Guantánamo Bay under the 1903 Lease agreement. The United States exercises complete jurisdiction and control over this territory, while recognizing that Cuba retains ultimate sovereignty. The current government of Cuba regards the U.S. presence in Guantánamo Bay as illegal and insists the Cuban–American Treaty was obtained by threat of force and is in violation of international law. Some legal scholars judge that the lease may be voidable. It is the home of the Guantanamo Bay Naval Base and the Guantanamo Bay detention camp located within the base, which are both governed by the United States. Since the 1959 revolution, Cuba has cashed only a single lease payment from the United States government.
List of Medal of Honor recipients for the Spanish–American War
The Spanish–American War (Spanish: Guerra Hispano-Estadounidense, desastre del 98, Guerra Hispano-Cubana-Norteamericana or Guerra de Cuba) was a military conflict between Spain and the United States that began in April 1898. Hostilities halted in August of that year, and the Treaty of Paris was signed in December.
The war began after Spain rejected the American demand that it peacefully resolve the Cuban fight for independence, though strong expansionist sentiment in the United States may have motivated the government to target Spain's remaining overseas territories: Cuba, Puerto Rico, the Philippines, Guam, and the Caroline Islands. Riots in Havana by pro-Spanish "Voluntarios" gave the United States a reason to send in the warship USS Maine to indicate high national interest. Tension among the American people was raised by the explosion of USS Maine and by "yellow journalism" that accused Spain of extensive atrocities, agitating American public opinion. The war ended after decisive naval victories for the United States in the Philippines and Cuba.
The Treaty of Paris ended the conflict 109 days after the outbreak of war, giving the United States ownership of the former Spanish colonies of Puerto Rico, the Philippines, and Guam.
The Medal of Honor was created during the American Civil War and is the highest military decoration presented by the United States government to a member of its armed forces. The recipient must have distinguished themselves at the risk of their own life above and beyond the call of duty in action against an enemy of the United States. Due to the nature of this medal, it is commonly presented posthumously.
Merrimac coup
The Merrimac coup (also known as Hobson's coup or Hobson's choice) is a contract bridge coup in which a player (usually a defender) sacrifices a high card in order to eliminate a vital entry from an opponent's hand (usually the dummy). It was named after the American steamship Merrimac, which was sunk at Santiago de Cuba in 1898, during the Spanish–American War, in an attempt to bottle up the Spanish fleet.
Nelson A. Miles
Nelson Appleton Miles (August 8, 1839 – May 15, 1925) was an American military general who served in the American Civil War, the American Indian Wars, and the Spanish–American War. From 1895 to 1903, he served as the last Commanding General of the United States Army before the office was abolished.
Platt Amendment
On March 2, 1901, the Platt Amendment was passed as part of the 1901 Army Appropriations Bill. It stipulated seven conditions for the withdrawal of United States troops remaining in Cuba at the end of the Spanish–American War, and an eighth condition that Cuba sign a treaty accepting these seven conditions. It defined the terms of Cuban–U.S. relations as essentially unequal, with the United States dominant over Cuba.
On December 25, 1901, Cuba amended its constitution to contain, word for word, the seven applicable demands of the Platt Amendment. On May 22, 1903, Cuba entered into a treaty with the United States making the same required seven pledges: the Cuban–American Treaty of Relations of 1903. Two of the seven pledges were a pledge to allow the United States to intervene unilaterally in Cuban affairs and a pledge to lease land to the United States for naval bases on the island. (The Cuban–American Treaty of Relations of 1934 replaced the 1903 Treaty of Relations and dropped three of the seven pledges.)
The 1903 Treaty of Relations was used as justification for the Second Occupation of Cuba from 1906 to 1909. On September 29, 1906, Secretary of War (and future U.S. president) William Howard Taft initiated the Second Occupation of Cuba when he established the Provisional Government of Cuba under the terms of the treaty (Article three), declaring himself Provisional Governor of Cuba. On October 23, 1906, President Roosevelt issued Executive Order 518, ratifying the order. On May 29, 1934, the United States and Cuba signed the 1934 Treaty of Relations, which in its first article abrogated the 1903 Treaty of Relations.
Puerto Rican Campaign
The Puerto Rican Campaign was an American military sea and land operation on the island of Puerto Rico during the Spanish–American War. The offensive began on May 12, 1898, when the United States Navy attacked the capital, San Juan. Though the damage inflicted on the city was minimal, the Americans were able to establish a blockade in the city's harbor, San Juan Bay. On June 22, the cruiser Isabel II and the destroyer Terror delivered a Spanish counterattack, but were unable to break the blockade and the Terror was damaged.
The land offensive began on July 25, when 1,300 infantry soldiers led by Major General Nelson A. Miles disembarked on the coast at Guánica. After prevailing in the first skirmish, the Americans advanced to Coamo, where they engaged Puerto Rican and Spanish troops in battle. The engagement concluded when the allied soldiers retreated, having left two dead on their side and four on the American side. The United States was able to seize control of Fajardo on August 1, but was forced to withdraw on August 5 after a group of 200 Puerto Rican–Spanish soldiers led by Pedro del Pino gained control of the city, while most civilian inhabitants fled to a nearby lighthouse. The Americans encountered larger opposition as they advanced toward the main island's interior. They engaged in two exchanges of fire at the Guamani River and Coamo, both of which were inconclusive as the allied soldiers retreated. A battle in San Germán concluded in a similar fashion, with the Spanish retreating to Lares.
On August 9, 1898, American troops that were pursuing units retreating from Coamo and Asomante encountered heavy resistance in Aibonito and retreated after six of their soldiers were injured. They returned three days later, reinforced with artillery units, and attempted a surprise attack. After about an hour of fighting, Spanish artillery batteries had been silenced. American guns advanced some 2,150 yards and set up positions, but soldiers reported seeing Spanish reinforcements nearby and the guns were withdrawn back to the main line. Shortly before the launch of a flanking movement on the Spanish, all military actions in Puerto Rico were suspended on August 13, after U.S. President William McKinley and French Ambassador Jules Cambon, acting on behalf of the Spanish government, signed an armistice whereby Spain relinquished its sovereignty over the territories of Puerto Rico, Cuba, the Philippines and Guam.
Rough Riders
The Rough Riders was a nickname given to the 1st United States Volunteer Cavalry, one of three such regiments raised in 1898 for the Spanish–American War and the only one to see action. The United States Army was small and understaffed compared to its strength during the American Civil War roughly thirty years prior. As a measure toward rectifying this situation, President William McKinley called upon 125,000 volunteers to assist in the war effort. The regiment was also called "Wood's Weary Walkers" in honor of its first commander, Colonel Leonard Wood. This nickname acknowledged that, despite being a cavalry unit, they ended up fighting on foot as infantry.
Wood's second in command was former Assistant Secretary of the Navy Theodore Roosevelt, a man who had pushed for American involvement in the Cuban War of Independence. When Colonel Wood became commander of the 2nd Cavalry Brigade, the Rough Riders became "Roosevelt's Rough Riders." The term was already familiar in 1898 from Buffalo Bill, who called his famous western show "Buffalo Bill's Wild West and Congress of Rough Riders of the World." The Rough Riders were mostly made up of college athletes, cowboys, ranchers, miners, and other outdoorsmen. Because many of these men came from southwestern ranch country, they were quite skilled in horsemanship.
Spanish–American War Soldier
Spanish–American War Soldier is a public art work created by the American Bronze Company and located in downtown Milwaukee, Wisconsin. The bronze figure depicts a uniformed soldier with an ammunition belt around his waist and a rifle in hand. It is located on West Wisconsin Avenue between North 9th and 10th Streets in the Court of Honor near the Milwaukee Public Library.
Treaty of Paris (1898)
The Treaty of Paris of 1898 (Filipino: Kasunduan sa Paris ng 1898; Spanish: Tratado de París (1898)) was a treaty signed by Spain and the United States on December 10, 1898, that ended the Spanish–American War. In the treaty, Spain relinquished all claim of sovereignty over and title to Cuba, and ceded Puerto Rico, Guam, and the Philippines to the United States. The cession of the Philippines involved a compensation of $20 million from the United States to Spain. The Treaty of Paris came into effect on April 11, 1899, when the documents of ratification were exchanged. The Treaty of Paris marked the end of the Spanish Empire (apart from some small holdings in Northern Africa as well as several islands and territories around the Gulf of Guinea, also in Africa). It marked the beginning of the age of the United States as a world power. Many supporters of the war opposed the treaty, and it became one of the major issues in the election of 1900, when Democrat William Jennings Bryan opposed it on anti-imperialist grounds. Republican President William McKinley upheld the treaty and was easily reelected.
Wesley Merritt
Wesley Merritt (June 16, 1834 – December 3, 1910) was an American major general who served in the cavalry of the United States Army during the American Civil War, American Indian Wars, Spanish–American War, and the Philippine–American War. Following the latter war, he became the first American Military Governor of the Philippines.
Yellow journalism
Yellow journalism and the yellow press are American terms for journalism and associated newspapers that present little or no legitimate well-researched news while instead using eye-catching headlines for increased sales. Techniques may include exaggerations of news events, scandal-mongering, or sensationalism. By extension, the term yellow journalism is used today as a pejorative to decry any journalism that treats news in an unprofessional or unethical fashion. In English, the term is chiefly used in the US. In the UK, a roughly equivalent term is tabloid journalism, meaning journalism characteristic of tabloid newspapers, even if found elsewhere. Other languages, e.g. Russian (Жёлтая пресса), sometimes have terms derived from the American term. A common source of such writing is called checkbook journalism, which is the controversial practice of news reporters paying sources for their information without verifying its truth or accuracy. In the U.S. it is generally considered unethical, with most mainstream newspapers and news shows having a policy forbidding it. In contrast, tabloid newspapers and tabloid television shows, which rely more on sensationalism, regularly engage in the practice.
The Uyghur Empire (744–840)
Abstract and Keywords
The Uyghurs (Chinese Huihe迴紇, Huihu回鶻) were a pastoral nomadic people living in the region of the Selenga and Orkhon river valleys in modern Mongolia; they spoke a Turkic language. The empire that they created on the steppe lasted for nearly a century (744–840) and played an important role, both politically and culturally, in East Asia. Centered on the Mongolian Plateau, the Uyghur Empire at its height controlled numerous other peoples within a territory that included lands to the north in the modern regions of Tuva and Buryatia, as well as some parts of the northern Tarim Basin and eastern Inner Mongolia.1 During its eventful history, the Uyghur Empire sent cavalry to help the Tang Dynasty put down the An Lushan rebellion, maintained strong political and economic ties with China, fought with the Tibetan Empire for control of important international trade routes, built cities on the steppe, celebrated its rulers’ achievements in stone stelae, and—uniquely in the world—adopted Manichaeism as its state religion. After their empire collapsed, the Uyghurs developed new polities in Gansu and the Tarim Basin that continued to exercise influence in Inner Asia.
The Early Uyghurs
Prior to the establishment of their empire, the Uyghurs appear in Chinese historical records under various names. In the 6th century, they became subjects of the First Türk (Chinese Tujue突厥) Empire (552–630) and enjoyed some prestige within that empire’s administration, governing the “wild regions of the north” in the name of their Türk overlords. In 627, when the First Türk Empire was weakening, the Uyghurs revolted together with two other subject peoples, the Bayïrqu (Chinese Bayegu 拔也古, with variants) and the Xueyantuo 薛延陀. After the political collapse of the Türks a few years later, the Uyghurs continued to exercise their power and, especially after defeating the Xueyantuo in 646, dominated many of the other peoples of the region. This victory had been achieved with support from the Tang Dynasty (618–907) in China, marking an early connection between the Uyghurs and the Tang Empire. From this position of strength, the Uyghurs continued to collaborate with the Tang in a partnership that included coordinated military campaigns against various groups of Türks as well as the Goguryeo kingdom of northern Korea/southern Manchuria in the 650s. Indeed, until the Türks succeeded in throwing off Tang control and established the Second Türk Empire (682–744), the Mongolian Plateau and its environs were largely dominated by the Uyghurs for nearly four decades.2 Not enough is known of this period, however, to say much about the Uyghurs and their political organization during this interregnum. It does appear that they accepted close ties with the Tang, even being named the Hanhai Prefecture within the Tang administrative organization, although the precise level of Tang influence over the Uyghurs is uncertain, and there was at least one conflict between them in 661.
After the Türks reestablished control over Mongolia in 682, the Uyghurs were once again subordinated to them, although it appears that at least some Uyghurs moved into the region of Gansu under Tang administration for a time before being brought under Türk control. The power of the Second Türk Empire began to wane after about half a century, however, when the Türk ruler Bilge (Chinese Pijia毗迦) Qaghan (r. 716–734) was poisoned in 734. Even before that time, the Uyghurs and other peoples had begun to chafe under Türk rule. After Bilge’s predecessor and uncle Qapghan (Chinese Mochuo默啜) Qaghan (r. 691–716) was killed by subordinate peoples and his head sent to the Tang capital, the Uyghurs grew more independent and powerful. After Bilge’s death, his several successors ruled only briefly, and many Türk and Sogdian elites fled from the steppe to China to avoid the chaos. The Uyghurs allied with two other subject peoples, the Qarluq (Chinese Geluolu葛邏祿) and the Basmïl (Chinese Baximi拔悉密), to kill the last Türk qaghan. The Uyghurs and Qarluq then attacked the Basmïl. After they had been subjugated, the Uyghurs turned on their erstwhile allies and became the sole ruling power on the Mongolian steppe.
Foundation of the Uyghur Empire and Uyghur Involvement in Tang China
The Uyghur ruler who oversaw the foundation of the empire in 744 was a man of the Yaghlaqar (Chinese Yaoluoge藥羅葛) family known as Qullïg Boyla (Chinese Guli Peiluo骨力裴羅) Qaghan (r. 744–747).3 Although he died shortly after the empire’s establishment, his son, known in Chinese sources as Bayan Chor (Chinese Moyanchuo磨延啜, r. 747–759), ruled for more than a decade. Many scholars consider him the actual founder of the empire. The work of this father-and-son team helped to place the new empire on a solid footing. The Uyghur Shine-usu inscription describes Bayan Chor’s defeat of the Basmïl and Qarluq. Although the former became part of the Uyghur polity, a significant group of the latter moved westward into the region of Jungaria (around Lake Balkhash and Issïq-Köl) to maintain their autonomy. The same inscription notes that Bayan Chor built a capital city, the ruins of which can still be seen in the Orkhon River valley, as well as another city, known as Baybalïq, “Rich City” (Chinese Fuguicheng富貴成), built in the Selenga River valley by and/or for the Chinese and Sogdians within the empire. Although the sources of the period nowhere give us the capital city’s Uyghur name, a later source (the 13th-century Persian historian ‘Alā’ al-Dīn ‘Aṭa Malik Juwaynī) indicates that it was called Ordubalïq, “Royal Court City.”4
The early years of the empire were marked by frequent campaigns to neutralize other peoples and, in many cases, bring them under Uyghur control. The Uyghur Empire thus was from its inception a grouping of confederated tribes, including both willing and (at least initially) unwilling participants. Although many of these peoples spoke Turkic languages, it can be assumed that some did not. Furthermore, as the Shine-usu inscription attests, there were Sogdian and Chinese minorities living within the empire, some of whom achieved high status. At least one Chinese family was allowed to take the royal clan name of Yaghlaqar, replacing their traditional Chinese surname of Lü. As with most Inner Asian empires, the history of the Uyghur Empire indicates that subordinate peoples sometimes rebelled against Uyghur domination, causing both the power and the geographical extent of the empire to wax and wane.
The Uyghurs resembled their steppe predecessors in a number of ways. As pastoralists, they derived their livelihood from their herds (primarily horses and sheep, as well as goats, camels, and cattle) and engaged in seasonal nomadization, supplemented by hunting and some agriculture. They spoke a language that was identical to that of the Türks and wrote it in a runiform or “runic” script that the Türks had developed. The Uyghur rulers engaged in military campaigns to extend their power and celebrated their achievements through the creation of stone stelae with laudatory inscriptions. Their officials carried the same titles as had those of the earlier empire. In 744, the Uyghur ruler took the supreme title of qaghan (Chinese kehan可汗), used by the rulers of the Türks as well as those of many other Inner Asian polities, after defeating the Basmïl; this title was recognized by the Tang Dynasty in 746. In their use of imperial symbols the Uyghurs even employed wolf’s-head standards that harkened back to the Türks’ own foundation myth. To their neighbors, the Uyghurs would have seemed very much like the Türks; indeed, the Uyghur Empire has been called a “Third Türk Empire,” although this is problematic because the Uyghurs clearly saw themselves as having an identity distinct from the Türks. Furthermore, the similarities that have been noted were in some ways overshadowed by important political events that made the contours of Uyghur history and life ultimately quite different from those of the Türks.
The Rebellions of An Lushan and Pugu Huaien
These events began just over a decade after the Uyghurs had established their empire, when China was thrown into chaos by the rebellion of the frontier general An Lushan who declared the founding of a rival dynasty and set out to overthrow the Tang in 755. Although better known for its dramatic and lasting effects on the Tang Dynasty, the An Lushan rebellion had a significant impact on the Uyghur Empire as well. Finding itself in a desperate situation, the Tang court sought foreign assistance to help put down the rebellion; the Uyghurs responded to this call by sending troops, which played a pivotal role in the conflict. After a brief skirmish with the Tongra (Chinese Tongluo同羅), a Turkic people who had allied with An Lushan in 756, Bayan Chor deputed his eldest son to lead some 4,000 cavalry to China in the summer of 757. These troops helped turn the tide, allowing Tang forces to recapture the dynasty’s two capitals, Chang’an and Luoyang, and led to the rebellion’s ultimate collapse. But the Uyghur collaboration with the Tang court was not an easy one. After taking the western capital of Chang’an, the Uyghurs had to be offered additional inducements to continue on to Luoyang, the eastern capital. The Uyghurs were allowed to plunder Luoyang for three days.
Bayan Chor’s death in 759 did not stop the Uyghurs’ involvement in China’s civil conflict, which continued under his successor, Bügü (or Bögü, Chinese Mouyu牟羽) Qaghan (r. 759–779). Bügü was at least as difficult an ally as his father had been. When the rebel leader Shi Chaoyi sent news to the Uyghurs of the death of the Tang emperor Suzong (r. 756–762) in May 762 and encouraged Bügü to attack the Tang, the qaghan seriously considered this option. Although a new emperor (Daizong, r. 762–779) was soon enthroned in China, it was not until a Turkic general in Tang service, Pugu Huaien, who was also Bügü’s father-in-law, intervened that Bügü was convinced to return to the earlier alliance, helping the Tang to finish the destruction of the rebel forces by 763. Once again, the cost of Uyghur support was paid by the populace of Luoyang, which was plundered a second time.
The Uyghurs’ decision to respond positively to the Tang Dynasty’s request for help was to prove of fundamental importance in the development of their own empire. Their reasons for doing so are nowhere made explicit, but it has been argued that the Uyghurs were eager to destroy elements of the Türks, who had fled to China when their state collapsed.5 Possibly, the Uyghurs saw this as an opportunity to obtain leverage over China and exercise further power in this way, much as the Eastern Türks had done when the Sui Dynasty (581–618) had collapsed and been replaced by the Tang.
Uyghur involvement in the An Lushan rebellion had many consequences. First, the Uyghur qaghans enjoyed a level of prestige vis-à-vis China that earlier steppe rulers typically had not. This arrangement was parlayed into two particularly advantageous factors, one political and the other economic. The first was the marriage of three Tang imperial princesses to different Uyghur rulers, indicating an unprecedented degree of Chinese respect for the qaghans. The second was the profitable trade between China and the Uyghurs in which the former were required to pay high prices in silk for the latter’s horses. Although the Uyghurs were regularly granted gifts of silk by the Tang court, the trade network that Uyghur assistance had assured was far more significant. The wealth engendered by this trade helped the Uyghur rulers to build cities and defenses within their realm and enjoy a high standard of living that was bolstered by this unusual level of commerce.
The Uyghurs soon were once again involved in Chinese politics when General Pugu Huaien, who had helped put down the An Lushan rebellion, himself turned on the Tang Dynasty; through his marital connections to the Uyghur court, he immediately sought Uyghur assistance for his rebellion. Turning their backs on the Tang, the Uyghurs sent troops to help him in 764 under the leadership of their high official Ton Bagha (Chinese Dun Mohe頓莫賀); Bügü Qaghan remained in Mongolia. Pugu Huaien enjoyed the support of both Uyghur and Tibetan troops, but his death in September 765 caused the Uyghurs ultimately to do another about-face and link with the Tang general Guo Ziyi to repel the Tibetans, which they did successfully. The Uyghur connection to the Tang Empire thus was restored.6 Their brief support of Pugu Huaien may have had more to do with the qaghan's marital relationship to Pugu Huaien than with any particular anti-Tang sentiment, or may reflect a Uyghur desire to increase their influence in China.
At the same time that they were engaged in China, the Uyghurs were expanding their power in other directions. In 754–756, a western campaign advanced into the region around the Tarim Basin. At approximately the same time (ca. 756–759), the Khitans (Chinese Qidan契丹) to the southeast submitted to the Uyghurs, who appointed overseers for them. The Uyghurs also campaigned to the northwest, defeating a large Kirghiz (Chinese Xiajiasi黠戛斯, with variants) force in 758. The Kirghiz moved further to the northwest and the Uyghurs constructed at least eighteen defensive fortifications, the remains of which can still be seen, along their northern frontier in modern Tuva. This complex of Uyghur constructions in Tuva includes a walled compound built on an island in the lake called Tere-khol’ in southern Tuva. This site, known as Por-Bajin (Por-Bazhyn), has been dated to the late 8th century, probably ca. 770–790.7 The Uyghur settlement at Por-Bajin shows a strong influence of Tang building materials and techniques, which are found in other Uyghur constructions as well. From the available textual and archaeological evidence, it seems that the Uyghurs employed a syncretic approach to their construction projects, including both Chinese and Sogdian influences.8
The Uyghurs and Manichaeism
At about this time, Bügü Qaghan began to patronize the Manichaean religion. Manichaeism—which has an elaborate dualist theology and demands vegetarianism for its “elect”—seems a peculiar choice for a steppe empire in which meat and animal products played such an important role in the nomads’ diet. According to the Chinese text of the Karabalghasun inscription, while he was in Luoyang Bügü Qaghan encountered Sogdian Manichaean clerics who were well-versed in Manichaean doctrine. Modern scholars have assumed that it was this encounter, which they properly connect to the qaghan’s 762–763 campaign, which led to Bügü Qaghan’s conversion. But what seems odd about this interpretation, which is nowhere supported directly by the available sources, is the fact that Sogdians lived in the Türk and Uyghur Empires long before this happened. The qaghan was certainly aware of the Sogdians within his own realm as well as their culture, which included Manichaeism. In fact, the Uyghur contingent that he led to help defeat the rebels in China in 762 included at least two generals with names indicating their Sogdian background: An Ke and Shi Diting. Both played important roles in the qaghan’s campaign into China.9
Bügü Qaghan’s conversion to Manichaeism thus may have been earlier than 762–763, or perhaps at least his encounter with the religion began at an earlier date.10 As for his reasons for choosing Manichaeism, he may have sought a unifying force that would help bind the diverse peoples of his empire together. Aware that the Second Türk Empire had fallen to the revolt of subject peoples (including the Uyghurs themselves), he also may have sought a new type of religious legitimacy to help combat such centrifugal forces. A further benefit was that Manichaeism was not a Chinese religion, so its adoption would not suggest subservience to China. Or it may simply be that the clerics Bügü encountered in Luoyang were particularly learned and persuasive; the Karabalghasun inscription indeed supports this interpretation. Although the date of his conversion thus remains uncertain, clearly, Bügü Qaghan adopted this syncretic Iranian faith. Evidence from an Uyghur source indicates that the religion was not universally welcomed within the empire; Bügü agreed to redouble his efforts to protect and promote the faith.11
Uyghur rulers were not content simply to act as patrons of Manichaeism within their own realm; in 807 the Uyghur qaghan put pressure on the Tang court to establish Manichaean temples within the Tang Empire. The Tang government had previously restricted the practice of this religion to foreigners, but things changed with Uyghur patronage. In 768, Manichaeans received the court’s permission to build temples in Southern China; sources also indicate the presence of such temples in Chang’an. In 807, the Tang official Bai Juyi wrote a letter to the Uyghur Qaghan Qutlugh (Chinese Guduolu骨咄祿) agreeing to the building of Manichaean temples in Luoyang and the northern city of Taiyuan.12 Ultimately, it is difficult to determine how deeply Manichaeism penetrated to the general population of the Uyghurs and other peoples within the empire. The religion apparently did not last long on the steppe after the collapse of the empire in 840—an event which led to its persecution in China as well. Despite Tang efforts to uproot the Manichaean church in China, however, it managed to survive there.
Despite the Uyghur rulers’ general support of Manichaeism, some evidence indicates that the Uyghurs continued to employ the services of shamans. For example, Chinese sources state that in 765, during the rebellion of Pugu Huaien, the Uyghurs consulted shamans who prophesied that the Uyghurs would not do battle with the Tang and would return home after meeting a “great man,” Guo Ziyi. Soon thereafter, when the Uyghurs had switched sides and were fighting the Tibetan forces allied with Pugu Huaien, Uyghur shamans employed weather magic to create wind and snow that helped defeat the Tibetans. Many years later, the Arab traveler Tamīm ibn Baḥr, writing around 821, also noted the presence of “rain stones” that were employed by the Uyghur ruler to engage in weather magic.
The Continuing Uyghur–Tang Connection
After the collapse of Pugu Huaien’s rebellion, the Uyghurs re-established good relations with the Tang Dynasty. This began with an embassy to Daizong’s court in 765. Of the six persons comprising the leadership of this embassy only one, a “chieftain” named Shi Yena, is mentioned; his surname reveals his Sogdian background. The revived trade that the Uyghurs enjoyed with the Tang Empire enriched the Uyghurs dramatically, allowing them the wealth necessary not only to engage in the building of urban centers but also to maintain the loyalty of their subjects and support the state-sponsored Manichaean church and its clergy. The Tang Dynasty’s weakness after the An Lushan debacle, coupled with the Uyghurs’ ability to assert the moral high ground through their actions that helped to preserve the dynasty, lubricated this trade. Tang officials grumbled about the horse–silk trade and complained repeatedly about the poor quality of the horses they were obliged to purchase. As for the Uyghur Empire, this connection led to many Uyghurs and Sogdians travelling to, and even living in, the largest cities of China where they engaged in trade (including the purchase of tea) and moneylending. On the Uyghur side, furs from the forested regions of their empire were also an important part of their trade with China and other nations. Finally, it is worth noting that the Chinese were aware of Uyghur spies who relayed information on Tang political events back to Ordubalïq. It seems likely that these spies were Uyghur and Sogdian merchants and moneylenders living in Tang cities.
Within the Uyghur Empire, the wealth that trade engendered, and the stability that the empire generally enjoyed, caused people to engage in trade with other regions. It also helped promote agriculture, which was necessary for the support of Manichaean elites. Modern archaeological techniques have recently affirmed Tamīm ibn Baḥr's description of Ordubalïq as a city surrounded by cultivated fields.13 Uyghurs who had traded their nomadic ways for a more sedentary life resided in the empire's urban centers. It is, however, difficult to measure such transformations in any meaningful way. While evidence of settled populations, agricultural activity, handicraft production, and urban markets is clear, it is difficult to ascertain how many persons were identified with such things or what their identities were. The sedentary component of the empire may have been composed largely of Sogdian and Chinese populations, but it is likely that at least some Uyghurs were drawn to the settled life of the towns. And despite the urban court that the Uyghur rulers established, they themselves maintained a symbolic "nomadic" life in a magnificent golden tent, noted in Chinese accounts as well as that of Tamīm ibn Baḥr, which became an important marker of royal power.
The Uyghur-Tang connection was sufficiently important that when Bügü Qaghan contemplated an attack on China, he was killed in a coup led by Ton Bagha, who seized the Uyghur throne. This event was connected to the increased influence of Sogdians at the Uyghur court, who had encouraged the attack. Ton Bagha was determined to avoid damaging the lucrative trade with the Tang Empire, so he engineered his coup in 779, which led not only to the death of Bügü Qaghan but also to that of many of his Sogdian advisors. Bügü Qaghan seems to have been testing the waters; Chinese sources note that there had been an Uyghur raid on Tang territory in 778. Although some scholars have regarded Ton Bagha's coup as not only pro-Tang but also anti-Sogdian and anti-Manichaean, it is striking that both Sogdians and Manichaeans were soon once again influential within the empire. However, the coup was likely aimed at stopping Bügü's plans to invade China and eliminating those persons, Sogdians or otherwise, who had supported such an action. Ton Bagha Qaghan remained on friendly terms with the Tang; a brief Uyghur involvement in support of Zhu Tao's anti-Tang rebellion in 784 appears to have been the action of a regional leader and not directed from, or sanctioned by, the Uyghur court.14
As noted previously, part of the Uyghur-Tang connection was the marriage of three Tang imperial princesses to Uyghur rulers; this was a sign of the great favor the qaghans enjoyed in the eyes of the Tang emperors. Unlike other Chinese princesses sent to wed foreign rulers during the Tang era and earlier, these women were the daughters of emperors—not the collateral relatives, etc., typically given the title “princess” and then sent to engage in diplomatic marriages with foreign rulers. The first of these imperial daughters was the Ningguo Princess, daughter of the emperor Suzong, who wed Bayan Chor in 758; she died in 791 after more than three decades living in the Uyghur Empire. The second was the Xian’an Princess, daughter of the emperor Dezong (r. 779–805), who married Ton Bagha in 788; she lived among the Uyghurs until her death in 808. The third imperial marriage was that of the Taihe Princess, daughter of the Tang emperor Xianzong (r. 805–820); her brother, the emperor Muzong (r. 820–824), agreed to betroth her to the Uyghur ruler in 821; she remained among the Uyghurs until the empire’s collapse.15 In addition to these imperial princesses, two daughters of Pugu Huaien were married to Bügü Qaghan; he wed the first before becoming qaghan. Many Inner Asian peoples maintained a practice in which widows were wed to male relatives of their former husbands, particularly sons—so long as those were not the women’s own biological sons. This was certainly the case for the Xian’an Princess, who enjoyed the honor of being qatun not only during the reign of Ton Bagha Qaghan but also the next three Uyghur rulers. There is insufficient information in this regard for the Taihe Princess.
As another show of favor, the Tang court regularly granted official titles to the Uyghur qaghans. These included long formulaic titles in Old Turkic, which are often so similar that it is difficult to distinguish one qaghan from another. These Turkic titles were certainly created and taken by the Uyghur rulers themselves. The Tang court affirmed these and added to them laudatory titles of two or four characters in Chinese. These grand titles were intended to enhance Uyghur prestige throughout the region, showing the favor they enjoyed from the Tang court and its recognition that the Uyghurs were the paramount power in North Asia.
Relations with Tibet
Uyghur relations with the Tibetan Empire, which had risen to power in the first half of the 7th century, were often hostile, and trade played an important role in the continuing conflict. With the outbreak of the An Lushan rebellion, the Tibetans had seized the opportunity to push deep into Tang territory, occupying parts of Qinghai and Gansu and even briefly seizing Chang’an for about two weeks late in 763. Although the capital was restored to the Tang government by Guo Ziyi and his forces, Tibetan incursions persisted for more than a decade after that.
Farther west, the Tang Empire’s control over Gansu and the Tarim Basin had seriously diminished, also as a consequence of the rebellion; the Uyghurs and Tibetans both eyed this region with great interest. This interest was due not only to strategic reasons but also to trade going through the region, which could be controlled and taxed by whichever power managed to dominate it. The An Lushan rebellion (which came hard on the heels of a Tang defeat at the hands of an Arab coalition at Talas in 751) had caused the Tang Dynasty to withdraw most of its garrison troops from Gansu and the Tarim Basin, and the Tibetans quickly attempted to extend their power into that region. They also raided the Ordos region and occupied several cities there in 786. Efforts to establish a Tang–Tibetan treaty proved fruitless, and the threat of Tibetan power led the Chinese chief minister Li Mi to propose a Tang alliance with the Uyghurs and other powers to weaken the Tibetan threat. Although the emperor Dezong initially rejected the idea of an alliance with the Uyghurs, he ultimately acquiesced to this plan, which was cemented with a marriage alliance.16 In the end, it did not severely diminish the Tibetans’ ability to threaten the Tang Empire.
The Uyghurs also sought to exercise power in the Tarim region, leading to conflict with the Tibetans. Much of this conflict centered on the northern Tarim cities of Beshbalïq (Chinese Beiting北庭), which changed hands more than once, and Qocho (Chinese Gaochang高昌). A Tibetan victory in 790–791 marked the end of significant Chinese presence in the region for nearly a millennium.17 Although the Uyghurs managed to dislodge the Tibetans from some of their strongholds in the Northern Tarim in 792, the sparseness of records for these events makes it difficult to comment definitively on the level of Uyghur control or influence there at any particular time, but it does seem that they held Beshbalïq and other cities in the region from 792 until their state’s collapse. Conflicts between the Uyghurs and Tibetans continued regularly for several decades. There was, for example, a major Uyghur attack on Tibetan forces near Turfan in 813; just a few years later, the Tibetans threatened Ordubalïq in 816. Such conflicts were destabilizing and costly; peace was finally established in 822/823, suggesting that neither polity believed that further warfare would gain them much. A successful Tang–Tibetan treaty was established at the same time. The Uyghurs’ willingness to make peace with their old enemies is undoubtedly related to both the easing of relations between the Tang and Tibetan courts as well as the heating up of Kirghiz hostility toward the Uyghurs at about the same time.
The Empire’s Later Years and Collapse
After Ton Bagha Qaghan’s death there was a period of instability during which the next two qaghans sat on the throne only briefly before being assassinated. In 790, a new qaghan, known only by the Chinese name Achuo阿啜, succeeded to the throne and held it until his death in 795. During this period the empire weakened, and the Qarluqs in particular managed to exert their strength and seize some Uyghur territory. After Achuo died, the throne was taken by one of his ministers—a man not of the royal Yaghlaqar family, but of another Uyghur family known as Ediz (Chinese Adie 阿跌, with variants). This is very likely the Uyghur minister and military leader known by his title of il-ögesi (Chinese xieyujiasi頡于伽思) who dominated the imperial government during the short reigns of Ton Bagha’s two successors. When he became qaghan, he adopted the royal surname of Yaghlaqar, showing its importance to Uyghur legitimacy. This qaghan, after some failures prior to his ascension to the throne, finally dislodged the Tibetans from Beshbalïq in 791–792 and recovered much of the empire’s strength and territory, enjoying significant influence throughout much of the Tarim Basin and beyond.
The significance of Sogdians and Manichaeans during the period after Ton Bagha’s rule can be seen in a number of factors, including the text of the Karabalghasun inscription18 and the role of Sogdians in Uyghur embassies and trade missions to China.19 Other documents attest to the continuing importance of Manichaeism among Uyghur elites, some of whom had Sogdian Manichaean names. The Sogdians’ importance as merchants can also be seen through many examples. During the reign of Bügü Qaghan, when they were enjoying their first wave of influence among the Uyghurs, Sogdian merchants regularly traveled to China with Uyghur envoys, and many of them remained at the capital city of Chang’an where they became quite wealthy. After Ton Bagha’s 779 coup, these Sogdians worried about returning to the Uyghur realm because Ton Bagha had put many Sogdians to death. Staying in China proved no safer, however, as they were soon massacred—along with many Uyghurs, including Ton Bagha’s own uncle—on the order of a Tang military official named Zhang Guangsheng. Later, beginning in the early 9th century, Uyghur envoys to China regularly included Manichaean clerics, who also engaged in commerce while there. Chinese sources note that the Uyghur qaghans frequently involved them in state affairs. Manichaeans were important in the back-and-forth exchanges between the two empires; they sometimes led Uyghur embassies to China, and they were also entrusted by the Tang government to convey important messages to the Uyghur court.
Around 820, the Uyghur Empire began to weaken. The primary causes of the empire's decline were political factionalism within the ruling elite and the revolt of subject peoples. As noted previously, the Turkic Kirghiz became increasingly bellicose at about this time, leading to some two decades of constant struggle. Bloody political factionalism occurred at the Uyghur court throughout the 830s, beginning with the assassination of Qasar (Chinese He-sa曷薩) Qaghan in 832. The sources do not indicate the causes of this factionalism, but it was severe enough to weaken and ultimately destroy the state. Qasar Qaghan was succeeded by his nephew, known only by the Chinese name Hu胡 (or Sa薩) Tegin Qaghan (r. 832–839), who could not restore order. He successfully foiled a plot against him led by an Uyghur prince and a minister of Sogdian ancestry, An Yunhe, who was executed for his involvement in the coup. In 839, another Uyghur minister named Küräbir (Chinese Jueluowu掘羅勿) led Shatuo Türk troops to attack Hu Tegin's camp, and Hu Tegin committed suicide. Although another Uyghur qaghan was then enthroned by Küräbir, the situation remained perilous; an Uyghur general named Külüg Bagha (Chinese Julu Mohe句錄莫賀), who had opposed Küräbir's coup, fled to the Kirghiz. After a hard winter (839/840) in which heavy snows caused the deaths of many herd animals, leading to famine and disease among the Uyghurs, Külüg Bagha attacked the Uyghur court with a large force of Kirghiz cavalry and killed the new qaghan and Küräbir. Ordubalïq was now in the hands of the Kirghiz, as was the Taihe Princess, whom they had captured during their assault.
Although it can be assumed that some Uyghurs were killed or absorbed by the invading Kirghiz army, many fled from the Mongolian Plateau. A large number moved southwestward into the regions of Gansu and the Tarim Basin, where they were able to establish themselves and remain a significant cultural and political force. Some Uyghurs remained among the Khitans and one family, which had taken the Chinese name of Xiao, provided empresses for most Khitan rulers. Finally, two large groups of refugees fled to the Tang frontier. The Tang court played them against one another, eventually admitting one but attacking the other; the latter group was led by a new ruler, Ögä (Chinese Wujie烏介) Qaghan, who was, however, not universally accepted as sovereign. In the end, both groups were largely annihilated. Those who had hoped to restore Uyghur power were seen as a threat to the Tang, as were those who came into Tang territory with the plan to resettle in the frontier region. Neither ultimately bowed to Tang demands, so in the end most were killed, with only a few elites remaining. The Taihe Princess, who had been recaptured by the Uyghurs, was rescued. Also, in the winter of 842–843 a grand general of Sogdian background named Cao Mani (Chinese Cao Moni曹摩尼) sought asylum from the Tang government along with some 30,000 persons, including several other notables.20
The collapse of the Uyghur Empire relieved some pressures for the Tang Dynasty, particularly as the Kirghiz did not succeed in establishing an empire in the Uyghurs’ old realm that nestled against Tang territory; rather, they remained focused on their traditional homeland in the region of the upper Yenisei River, and their contacts with the Tang Dynasty were at best sporadic and not terribly significant. No powerful state emerged on the Mongolian Plateau in the years following the collapse of the Uyghur Empire.21 The Khitans shifted their allegiance to the Tang; after the collapse of the Tang polity early in the 10th century their independence (and power) increased. When the Khitans’ Liao Empire extended its influence into Mongolia in 924, they encountered little organized resistance (although some earlier historians have assumed that the Khitans must have encountered—and defeated—the Kirghiz, but this is incorrect). Indeed, the Mongolian steppe remained politically disorganized until the rise of the Mongols in the 13th century.
Within China, the collapse of the Uyghur Empire quickly led to the suppression of Manichaeism in 843 under the emperor Wuzong (r. 840–846) and his chief minister Li Deyu; Chinese rhetoric clearly links Chinese tolerance for Manichaeism to Uyghur power, and the persecution of the religion to the Uyghur Empire’s collapse. The success of the attack on Manichaeism led Wuzong to attack other “foreign religions,” including one of the most significant persecutions of Buddhism in Chinese history.
The weakening of the Tibetan Empire at about the same time further changed the political realities of Inner Asia. The Tibetans ceased to play a significant role beyond their own frontiers, again allowing the Tang Empire some respite and reducing the threat to the Uyghur refugees moving into the regions of Gansu and the Tarim Basin. Those Uyghurs who migrated to the west managed to develop a flourishing culture that remained important for centuries, ultimately being absorbed into the Mongol Empire of Chinggis Qan in the 13th century. Although their cultural legacy is profound, their political influence was relatively limited in comparison to that of the earlier Uyghur Empire.
Because only a handful of native sources are available from the Uyghur Empire, most of our information comes from nonnative sources (primarily but not exclusively Chinese), which often take a hostile viewpoint. The Uyghur texts (primarily stone inscriptions) that survive ameliorate this situation to a degree, but the inscriptions are often seriously damaged and quite difficult to read and interpret. Caution thus must be exercised in any historical analysis of the Uyghur Empire, given that it relies so heavily on antagonistic sources observing the empire from the outside and only a few fragmentary sources written by the Uyghurs themselves.
The most important of the Uyghur stelae is the trilingual Karabalghasun inscription, written in Old Turkic runiform script, Sogdian, and Chinese, which was erected some time during 809–821. The Chinese text has best survived the ravages of erosion. This inscription is particularly important because it is the only stone inscription that provides significant detail regarding the establishment of Manichaeism within the empire. The Sogdian text is in worse condition, but it still provides some interesting information. The Old Turkic text is the worst-preserved of the three.22
Other significant Uyghur inscriptions are monolingual, written in Old Turkic runiform script. The most important of these include the Shine-usu and Tes inscriptions, both of which date to ca. 759, and the Terkh or Terkhin inscription, also known as the Tariat inscription, which appears to be slightly earlier, ca. 753. All of these are particularly relevant to the reign of Bügü Qaghan. Finally, the Suci inscription, written ca. 840, is a brief text commemorating the life of one of the Kirghiz officers who helped destroy the Uyghur Empire.23 In addition to these stone inscriptions, a few documents have been found in the region of the Tarim Basin, especially Turfan, that relate to the Uyghur Empire. The most important of these is the so-called “Bügü Khan” text, which is also an important source on the introduction of Manichaeism within the empire.24
Chinese sources from the Tang era provide the largest body of information regarding the Uyghur Empire. The most significant include the two official Tang histories, Jiu Tang shu (completed ca. 945 by Liu Xu and others) and Xin Tang shu (completed significantly later, ca. 1060 by Ouyang Xiu and others, and containing a good deal of information not found in Jiu Tang shu). Each of these contains a chapter devoted to the Uyghurs, although there is much additional information on the Uyghurs scattered throughout.25 Also important are documents found in the literary collections of important Tang officials who lived during the era of the Uyghur Empire, such as Bai Juyi and Li Deyu.26 In addition, the magisterial chronicle by the Song scholar Sima Guang, Zizhi tongjian, completed in 1084, is indispensable, as are several other works that can be grouped together as collectanea: Tang huiyao, Tang da zhaoling ji, Cefu yuangui, and others.
Finally, some rare but useful sources are available in other languages. The most significant of these is the account from the late years of the empire left by Tamīm ibn Baḥr, who most likely visited the empire, including Ordubalïq, in 821.27 In Islamic sources the Uyghurs are referred to as Toghuzghuz, a term which derives from Old Turkic Toquz Oghuz (“Nine Oghuz”), a term for a union of peoples frequently associated with the Uyghurs; this term is also used regularly in Chinese sources where it appears as Jiu Xing 九姓, “Nine Surnames.”
While it is possible that new sources related to the Uyghur Empire will be found, it seems unlikely that these will be texts of any great impact. The further expansion of knowledge about the Uyghur Empire will most likely rely significantly on archaeological investigation, which still has much to reveal regarding various aspects of life in the Uyghur Empire.
Arden-Wong, Lyndon A. "The Eastern Uighur Khaganate: An Exploration of Inner Asian Architectural and Cultural Exchange." PhD diss., Macquarie University, Sydney, Australia, 2014.
Beckwith, Christopher I. "The Impact of the Horse and Silk Trade on the Economies of T'ang China and the Uighur Empire: On the Importance of International Commerce in the Early Middle Ages." Journal of the Economic and Social History of the Orient 34.2 (1991): 183–198.
Beckwith, Christopher I. The Tibetan Empire in Central Asia: A History of the Struggle for Great Power among Tibetans, Turks, Arabs, and Chinese during the Early Middle Ages. Princeton, NJ: Princeton University Press, 1987. Revised edition, 1993.
Clark, Larry V. "The Conversion of Bügü Khan to Manichaeism." In Studia Manichaica IV: Internationaler Kongreß zum Manichäismus, Berlin, 14.–18. Juli 1997. Edited by Ronald E. Emmerick, Werner Sundermann, and Peter Zieme, 83–123. Berlin: Akademie Verlag, 2000.
Drompp, Michael R. Tang China and the Collapse of the Uighur Empire: A Documentary History. Leiden: Brill, 2005.
Guo, Pingliang, and Liu Ge. Huihu shi zhinan. Urumqi: Xinjiang Renmin Chubanshe, 1995.
Hayashi, Toshio. "Uigur Policies toward Tang China." Memoirs of the Research Department of the Toyo Bunko 60 (2002): 87–116.
Kamalov, Ablet. Drevnie Uĭgury VIII–IX vv. Almaty: Izdatel'skiĭ Dom "Nash Mir," 2001.
Kamalov, Ablet. "Material Culture of the Nomadic Uighurs of the Eighth–Ninth Centuries in Central Asia." In Religion, Customary Law, and Nomadic Technology. Edited by Michael Gervers and Wayne Schleppe, 27–33. Toronto: Joint Centre for Asia Pacific Studies, 2000.
Kamalov, Ablet. "The Moghon Shine Usu Inscription as the Earliest Uighur Historical Annals." Central Asiatic Journal 47.1 (2003): 77–90.
Mackerras, Colin. The Uighur Empire According to the T'ang Dynastic Histories: A Study in Sino-Uighur Relations, 744–840. Columbia: University of South Carolina Press, 1973.
Skaff, Jonathan K. Sui-Tang China and Its Turko-Mongol Neighbors: Culture, Power, and Connections, 580–800. New York: Oxford University Press, 2012.
Twitchett, Denis, ed. The Cambridge History of China, Vol. 3: Sui and T'ang China, 589–906, Part I. New York: Cambridge University Press, 1979.
Twitchett, Denis. "Tibet in Tang's Grand Strategy." In Warfare in Chinese History. Edited by Hans Van de Ven, 106–179. Leiden: Brill, 2000.
(2.) See Jonathan K. Skaff, Sui-Tang China and Its Turko-Mongol Neighbors: Culture, Power, and Connections, 580–800 (New York: Oxford University Press, 2012), 188–189.
(3.) The Uyghur rulers adopted formal titles that often employed very similar terminology, making those titles difficult to differentiate from one another. For that reason, this article employs the less formal names/titles that are found throughout various sources, particularly Chinese. For a list of the formal titles of the various Uyghur qaghans and their variants, see James R. Hamilton, Les Ouïghours à l’époque des Cinq Dynasties d’après les documents chinois (Paris: Imprimerie National, Presses Universitaires de France, 1955), 139–142, as well as Colin Mackerras, The Uighur Empire According to the T’ang Dynastic Histories: A Study in Sino-Uighur Relations, 744–840 (Columbia: University of South Carolina Press, 1973), 192–193. As for the titles used here, preference has been given to Turkic (rather than Chinese) forms whenever possible, even if some of those are tentative.
(4.) The site is often referred to by its modern Mongol name, Karabalghasun (with variants).
(5.) Ablet Kamalov, "Turks and Uighurs during the Rebellion of An Lu-shan [and] Shih Ch'ao-yi (755–762)," Central Asiatic Journal 45.2 (2001): 243–253.
(6.) For further information on these events, see Charles A. Peterson, “P’u-ku Huai-en僕固懷恩 and the T’ang Court: The Limits of Loyalty,” Monumenta Serica 29 (1970–1971): 423–455.
(7.) See I. Arzhantseva et al., “Por-Bajin: An Enigmatic Site of the Uighurs in Southern Siberia,” The European Archaeologist 35 (2011): 6–11 as well as Lyndon A. Arden-Wong, “The Eastern Uighur Khaganate: An Exploration of Inner Asian Architectural and Cultural Exchange” (PhD diss., Macquarie University, Sydney, Australia, 2014): 234–237.
(8.) See Arden-Wong, “The Eastern Uighur Khaganate.”
(9.) An and Shi (石) were two of the seven surnames commonly used by Sogdians in a Chinese context; the others were Cao, He, Kang, Mi, and Shi (史). Each of these was associated with a Sogdian city; see Edwin G. Pulleyblank, “A Sogdian Colony in Inner Mongolia,” T’oung Pao, 2nd series, 41.4/5 (1952): 320.
(10.) See Larry V. Clark, “The Conversion of Bügü Khan to Manichaeism,” in Studia Manichaica IV.; Internationaler Kongreß zum Manichäismus, Berlin, 14–18. Juli 1997, eds. Ronald E. Emmerick, Werner Sundermann, and Peter Zieme (Berlin: Akademie Verlag, 2000), 83–123.
(11.) See Willy Bang and Annemarie von Gabain, "Türkische Turfan-Texte II. Manichaica," Sitzungsberichte der Preussischen Akademie der Wissenschaften, Philosophisch-historische Klasse, 1929, 411–430.
(12.) On Manichaeism in China under Uyghur patronage, see Samuel N. C. Lieu, Manichaeism in the Later Roman Empire and Medieval China: A Historical Survey (Manchester, U.K.: Manchester University Press, 1985), 194–198.
(13.) Jan Bemmann et al., “Bookmarkers in Archaeology—Land Use around the Uyghur Capital Karabalgasun, Orkhon Valley, Mongolia,” Praehistorische Zeitschrift 89.2 (2014): 337–370.
(14.) On the role of the Uyghurs in the Zhu Tao rebellion, see Mackerras, The Uighur Empire, 39–41.
(15.) See Yihong Pan, "Marriage Alliances and Chinese Princesses in International Politics from Han through T'ang," Asia Major, 3rd series, 10.1–2 (1997): 95–131, as well as Michael R. Drompp, "From Qatun To Refugee: The Taihe Princess among the Uighurs," in The Role of Women in the Altaic World: Permanent International Altaistic Conference 44th Meeting, Walberberg, 26–31 August 2001, ed. Veronika Veit (Wiesbaden: Harrassowitz Verlag, 2007), 57–68.
(16.) On Dezong’s hostile attitude towards the Uyghurs, see Martin Slobodník, “The Early Policy of Emperor Tang Dezong (779–805) towards Inner Asia,” Asian and African Studies 6.2 (1997): 184–196.
(17.) Denis Twitchett, ed., The Cambridge History of China, Vol. 3: Sui and T'ang China, 589–906, Part I (Cambridge: Cambridge University Press, 1979), 610.
(18.) On this inscription see Mackerras, The Uighur Empire, 184–187.
(19.) For a complete list of all known missions, see Colin Mackerras, “Sino–Uighur Diplomatic and Trade Contacts (744 to 840),” Central Asiatic Journal 13 (1969): 215–240.
(20.) See Michael R. Drompp, Tang China and the Collapse of the Uighur Empire: A Documentary History (Leiden: Brill, 2005), 106.
(21.) Michael R. Drompp, “Breaking the Orkhon Tradition: Kirghiz Adherence to the Yenisei Region after A.D. 840,” Journal of the American Oriental Society 119.3 (1999): 390–403.
(22.) The Chinese inscription was translated, with commentary, by Gustav Schlegel in his “Die chinesische Inschrift auf dem uigurischen Denkmal in Kara Balgassun,” Mémoires de la Société Finno-Ougrienne 9 (1896). This translation contains many errors; a new translation is very much to be desired. An English translation of the Sogdian inscription may be found in Takao Moriyasu and Ayudai Ochir, eds., Mongorukoku genzon iseki, hibun chōsa kenkyū hōkoku (Osaka: The Society of Central Eurasian Studies, 1999), 209–227.
(23.) The Shine-usu and Suci inscriptions have been translated in Gustav Ramstedt, "Zwei uigurische Runeninschriften in der Nord-Mongolei," Journal de la Société Finno-Ougrienne 30 (1913). An English translation of the former may be found in Moriyasu and Ochir, Mongorukoku genzon iseki, 177–195; on the latter, see also Louis Bazin, "L'Inscription kirghize de Sūǰi (Essai d'une nouvelle lecture)," in Documents et Archives provenant de l'Asie Centrale. Actes du Colloque Franco-Japonais Kyoto 4–8 Octobre 1988, ed. Akira Haneda (Kyoto: Association Franco-Japonaise des Études Orientales, 1990), 135–146. English translations of the Tes inscription are available in S. G. Klyashtorny, "The Tes Inscription of the Uighur Bögü Qaghan," Acta Orientalia Scientiarum Hungaricae 39 (1985): 137–156 and in Moriyasu and Ochir, Mongorukoku genzon iseki, 158–167. English translations of the Terkh/Terkhin or Tariat inscription may be found in S. G. Klyashtorny, "The Terkhin Inscription," Acta Orientalia Scientiarum Hungaricae 36 (1982): 335–366 and in Moriyasu and Ochir, Mongorukoku genzon iseki, 168–176. Translations of many of these inscriptions also exist in other languages, including Russian and Turkish.
(24.) For a German translation of this text, see Bang and von Gabain, “Türkische Turfan-Texte II. Manichaica.”
(25.) Most of the Jiu Tang shu and Xin Tang shu chapters on the Uyghurs are translated in Colin Mackerras, The Uighur Empire. This does not by any means exhaust the materials on the Uyghurs to be found in those two works.
(26.) The relevant writings of Li Deyu (a total of 69 documents) are translated in Drompp, Tang China and the Collapse of the Uighur Empire.
(27.) An English translation may be found in V. Minorsky, “Tamīm ibn Baḥr’s Journey to the Uyghurs,” Bulletin of the School of Oriental and African Studies 12.2 (1948): 275–305. |
Data can be organized into groups and evaluated. Mean, mode, median, and range are different ways to evaluate data. The mean is the average of the data. The mode is the number that occurs most often in the data. The median is the middle number when the data is arranged in order from lowest to highest. The range is the difference between the highest and lowest numbers. Data can also be organized into a table, such as a frequency table.
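A quick sketch of these four measures using Python's standard statistics module (the data values below are made up purely for illustration):

from statistics import mean, median, mode

data = [3, 7, 7, 2, 9, 4, 7, 5]
print(mean(data))              # 5.5, the average of the values
print(median(data))            # 6.0, the middle value once the data is ordered
print(mode(data))              # 7, the most frequent value
print(max(data) - min(data))   # 7, the range: highest minus lowest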
The resources above cover the following skills:
Statistics and Probability
Draw informal comparative inferences about two populations.
Informally assess the degree of visual overlap of two numerical data distributions with similar variabilities, measuring the difference between the medians by expressing it as a multiple of the interquartile range.
NewPath Learning resources are fully aligned to US Education Standards. |
Box It Up
This Box It Up worksheet also includes:
- Answer Key
In this math worksheet, students determine how they could hinge six squares of the same size together so that when the sides are folded up they form a box.
See similar resources:
Choose the Operation: Plus or minus (up to 100)
Investigate the relationship between addition and subtraction as scholars fill in these number sentences, each of which is missing a symbol. Is it a subtraction or addition equation? There are 20 of these to start, all written...
2nd - 4th Math CCSS: Designed
Get on your Mark, Get Set, Go! Collect, Interpret, and Represent Data Using a Bar Graph and a Circle Graph
Start an engaging data analysis study with a review of charts and graphs using the linked interactive presentation, which is both hilarious and comprehensive. There are 27 statistics-related vocabulary terms you can use in a word sort....
4th - 7th Math CCSS: Designed
Find the Area of a Rectangle by Partitioning it into Arrays of Same Size Squares
Tiling is a great method to use when introducing young mathematicians to the concept of area. In the second video of the series, this process is clearly modeled both by laying down unit squares in arrays and by drawing lines to create...
8 mins 2nd - 4th Math CCSS: Designed
Find Area of a Scaled Rectangle
When the sides scale larger the area scales larger as well, but not by the same amount. Show your learners with this second video from a series why you have to multiply both of the sides by the scale factor to adjust the size of the new...
6 mins 5th - 8th Math CCSS: Designed |
Please take a moment to view the "Preview" to see why I call this set "DELUXE." Feel free to print the FREE file, and start using the pages in your classroom.
You will see that along with "I Can" statements, I have emphasized relevant vocabulary words and added examples, strategies, and explanations to help children truly feel that they CAN achieve these expectations. I have also chosen an easy-to-read font and lively illustrations to make print-outs as 3rd-grade friendly as possible. Each poster is a full page to maximize your options. Choose to print 2 per page or 4 per page for smaller versions.
These standards posters include full-color and black-and-white versions. I recommend projecting full-color versions as you introduce lessons. Refer to them on your standards wall, word wall, or centers as you review concepts. Give children black-and-white versions. Ask them to highlight important concepts. Punch holes in them, and place them in binders or folders. Alternatively, use them as covers for each section of student notebooks.
The preview features the following standards:
1. 3.OA.A.4 Determine the unknown whole number in a multiplication or division equation relating three whole numbers. For example, determine the unknown number that makes the equation true in each of the equations 8 × ? = 48, 5 = _ ÷ 3, 6 × 6 = ?
Along with the "I can" statement, you will get 3 examples, plus strategies and explanations. In addition, I've included a poster that introduces the term "variable."
2. 3.OA.D.9 Identify arithmetic patterns (including patterns in the addition table or multiplication table), and explain them using properties of operations. For example, observe that 4 times a number is always even, and explain why 4 times a number can be decomposed into two equal addends.
For this standard, I've included examples of high-level explanations using a hundreds chart, an addition chart, and a multiplication chart. Feel free to use my free charts in your classroom!
3. 3.MD.A.2 Measure and estimate liquid volumes and masses of objects using standard units of grams (g), kilograms (kg), and liters (l). Add, subtract, multiply, or divide to solve one-step word problems involving masses or volumes that are given in the same units, e.g., by using drawings (such as a beaker with a measurement scale) to represent the problem.
I've included a poster showing illustrations of each measurement in this preview freebie. The product contains an example of an equation as well.
***Note that I also have a matching language arts standards set. Save money when you purchase my 3rd Grade Common Core "I Can" Statements: Language Arts and Math Bundled Set
Please also try more of my 3rd grade products. The following items feature songs that teach:
Skip-Counting Chants (1-12)
6 Traits of Writing CD-Book Set
Science Songs CD-Book Set
Advanced Word Study CD-Book Set
Health and the Human Body CD-Book Set
"Make a Difference" (Character Education) CD-Book Set
I also create theme-based materials:
Thanksgiving Core Math Practice: "Edu-Turkeys"
Thanksgiving Writing Craftivity
3rd Grade Math-tivities - Christmas Theme
Valentine's Day Math: "Edu-Valentines"
Mother's Day Craftivities
End of the Year Writing Craftivities
I offer a 100% guarantee on ALL of my products. If you are 100% satisfied with this packet, please give it 100% positive ratings on the TPT website ... and follow me for more ideas and specials. If you are dissatisfied for any reason, please contact me before rating the product at firstname.lastname@example.org. I will do anything and everything I can to make you 100% satisfied, from editing pages to giving you a 100% refund. ENJOY! |
Edges and vertices: 4
In Euclidean geometry, a convex quadrilateral with at least one pair of parallel sides is referred to as a trapezoid (pronounced: /ˈtɹæpəzɔɪd/) in American and Canadian English but as a trapezium in English outside North America. The parallel sides are called the bases of the trapezoid and the other two sides are called the legs or the lateral sides (if they are not parallel; otherwise there are two pairs of bases). A scalene trapezoid is a trapezoid with no sides of equal measure, in contrast to the special cases below.
The term trapezium has been in use in English since 1570, from Late Latin trapezium, from Greek τραπέζιον (trapézion), literally "a little table", a diminutive of τράπεζα (trápeza), "a table", itself from τετράς (tetrás), "four" + πέζα (péza), "a foot, an edge". The first recorded use of the Greek word translated trapezoid (τραπεζοειδή, trapezoeidé, "table-like") was by Proclus (412 to 485 AD) in his Commentary on the first book of Euclid's Elements.
This article uses the term trapezoid in the sense that is current in the United States and Canada. In many other languages using a word derived from the Greek for this figure, the form closest to trapezium (e.g. French trapèze, Italian trapezio, Spanish trapecio, German Trapez, Russian трапеция) is used.
An acute trapezoid has two adjacent acute angles on its longer base edge, while an obtuse trapezoid has one acute and one obtuse angle on each base.
There is some disagreement whether parallelograms, which have two pairs of parallel sides, should be regarded as trapezoids. Some define a trapezoid as a quadrilateral having only one pair of parallel sides (the exclusive definition), thereby excluding parallelograms. Others define a trapezoid as a quadrilateral with at least one pair of parallel sides (the inclusive definition), making the parallelogram a special type of trapezoid. The latter definition is consistent with its uses in higher mathematics such as calculus. The former definition would make such concepts as the trapezoidal approximation to a definite integral ill-defined. This article uses the inclusive definition and considers parallelograms as special cases of a trapezoid. This is also advocated in the taxonomy of quadrilaterals.
Under the inclusive definition, all parallelograms (including rhombuses, rectangles and squares) are trapezoids. Rectangles have mirror symmetry on mid-edges; rhombuses have mirror symmetry on vertices, while squares have mirror symmetry on both mid-edges and vertices.
A Saccheri quadrilateral is similar to a trapezoid in the hyperbolic plane, with two adjacent right angles, while it is a rectangle in the Euclidean plane. A Lambert quadrilateral in the hyperbolic plane has 3 right angles.
Given a convex quadrilateral, the following properties are equivalent, and each implies that the quadrilateral is a trapezoid:
- The angle between a side and a diagonal is equal to the angle between the opposite side and the same diagonal.
- The diagonals cut each other in mutually the same ratio (this ratio is the same as that between the lengths of the parallel sides).
- The diagonals cut the quadrilateral into four triangles of which one opposite pair are similar.
- The diagonals cut the quadrilateral into four triangles of which one opposite pair have equal areas.:Prop.5
- The product of the areas of the two triangles formed by one diagonal equals the product of the areas of the two triangles formed by the other diagonal.:Thm.6
- The areas S and T of some two opposite triangles of the four triangles formed by the diagonals satisfy the equation √K = √S + √T, where K is the area of the quadrilateral.:Thm.8
- The cosines of two adjacent angles sum to 0, as do the cosines of the other two angles.:p. 25
- The cotangents of two adjacent angles sum to 0, as do the cotangents of the other two adjacent angles.:p. 26
- One bimedian divides the quadrilateral into two quadrilaterals of equal areas.:p. 26
- Twice the length of the bimedian connecting the midpoints of two opposite sides equals the sum of the lengths of the other sides.:p. 31
Additionally, the following properties are equivalent, and each implies that opposite sides a and b are parallel:
- The consecutive sides a, c, b, d and the diagonals p, q satisfy the equation p² + q² = c² + d² + 2ab.:Cor.11
- The distance v between the midpoints of the diagonals satisfies the equation v = |a − b|/2.:Thm.12
Midsegment and height
The midsegment (also called the median or midline) of a trapezoid is the segment that joins the midpoints of the legs. It is parallel to the bases. Its length m is equal to the average of the lengths of the bases a and b of the trapezoid: m = (a + b)/2.
The midsegment of a trapezoid is one of the two bimedians (the other bimedian divides the trapezoid into equal areas).
The height (or altitude) is the perpendicular distance between the bases. In the case that the two bases have different lengths (a ≠ b), the height of a trapezoid h can be determined by the length of its four sides using the formula
h = √((−a + b + c + d)(a − b + c + d)(a − b + c − d)(a − b − c + d)) / (2|b − a|),
where c and d are the lengths of the legs. This formula also gives a way of determining when a trapezoid of consecutive sides a, c, b, and d exists. There is such a trapezoid with bases a and b if and only if |d − c| < |b − a| < d + c.
The area K of a trapezoid is given by K = ((a + b)/2) · h = m · h,
where a and b are the lengths of the parallel sides, h is the height (the perpendicular distance between these sides), and m is the arithmetic mean of the lengths of the two parallel sides. In 499 AD Aryabhata, a great mathematician-astronomer from the classical age of Indian mathematics and Indian astronomy, used this method in the Aryabhatiya (section 2.8). This yields as a special case the well-known formula for the area of a triangle, by considering a triangle as a degenerate trapezoid in which one of the parallel sides has shrunk to a point.
Molloy's Rule takes this a step further by considering the circumference of a circle and its centre point as the "parallel" sides and the radius as the perpendicular distance between them to give the area of the circle.
From the formula for the height, it can be concluded that the area can be expressed in terms of the four sides as
K = ((a + b) / (4|b − a|)) · √((−a + b + c + d)(a − b + c + d)(a − b + c − d)(a − b − c + d)).
When one of the parallel sides has shrunk to a point (say a = 0), this formula reduces to Heron's formula for the area of a triangle.
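As a quick sanity check on the height and area formulas above, here is a small Python sketch (the example trapezoid is chosen arbitrarily for illustration):

from math import sqrt

def trapezoid_height_area(a, b, c, d):
    # a and b are the parallel sides (a != b); c and d are the legs
    prod = (-a + b + c + d) * (a - b + c + d) * (a - b + c - d) * (a - b - c + d)
    h = sqrt(prod) / (2 * abs(b - a))
    return h, (a + b) / 2 * h

# Right trapezoid with bases 1 and 3 and legs sqrt(5) and 1: height 1, area 2
print(trapezoid_height_area(1, 3, sqrt(5), 1))   # approximately (1.0, 2.0)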
Another equivalent formula for the area, which more closely resembles Heron's formula, is
K = ((a + b) / |b − a|) · √((s − a)(s − b)(s − b − c)(s − b − d)),
where s = (a + b + c + d)/2 is the semiperimeter of the trapezoid. (This formula is similar to Brahmagupta's formula, but it differs from it in that a trapezoid might not be cyclic (inscribed in a circle). The formula is also a special case of Bretschneider's formula for a general quadrilateral.)
The area expression above also follows from Bretschneider's formula.
The line that joins the midpoints of the parallel sides bisects the area.
The lengths of the diagonals are
p = √((ab² − a²b + bc² − ad²) / (b − a)) and q = √((ab² − a²b + bd² − ac²) / (b − a)),
where a and b are the bases, c and d are the other two sides, and a < b; here p denotes the diagonal that meets the longer base b at an endpoint of leg c, and q the diagonal that meets it at an endpoint of leg d.
If the trapezoid is divided into four triangles by its diagonals AC and BD (as shown on the right), intersecting at O, then the area of AOD is equal to that of BOC, and the product of the areas of AOD and BOC is equal to that of AOB and COD. The ratio of the areas of each pair of adjacent triangles is the same as that between the lengths of the parallel sides.
Let the trapezoid have vertices A, B, C, and D in sequence and have parallel sides AB and DC. Let E be the intersection of the diagonals, and let F be on side DA and G be on side BC such that FEG is parallel to AB and CD. Then FG is the harmonic mean of AB and DC: FG = 2·AB·DC / (AB + DC).
The line that goes through both the intersection point of the extended nonparallel sides and the intersection point of the diagonals, bisects each base.
If the angle bisectors to angles A and B intersect at P, and the angle bisectors to angles C and D intersect at Q, then
More on terminology
The term trapezium is sometimes defined in the US as a quadrilateral with no parallel sides, though this shape is more usually called an irregular quadrilateral. The term trapezoid was once defined as a quadrilateral without any parallel sides in Britain and elsewhere, but this does not reflect current usage. (The Oxford English Dictionary says "Often called by English writers in the 19th century".)
According to the Oxford English Dictionary, the sense of a figure with no sides parallel is the meaning for which Proclus introduced the term "trapezoid". This is retained in the French trapézoïde (though not used anymore), German Trapezoid, and in other languages. A trapezium in Proclus' sense is a quadrilateral having one pair of its opposite sides parallel. This was the specific sense in England in the 17th and 18th centuries, and again the prevalent one in recent use. A trapezium as any quadrilateral more general than a parallelogram is the sense of the term in Euclid. The sense of a trapezium as an irregular quadrilateral having no sides parallel was sometimes used in England from c. 1800 to c. 1875, but is now obsolete. This sense is the one that is sometimes quoted in the US, but in practice quadrilateral is used rather than trapezium.
Application in geometry
The crossed ladders problem is the problem of finding the distance between the parallel sides of a right trapezoid, given the diagonal lengths and the distance from the perpendicular leg to the diagonal intersection.
In architecture the word is used to refer to symmetrical doors, windows, and buildings built wider at the base, tapering towards the top, in Egyptian style. If these have straight sides and sharp angular corners, their shapes are usually isosceles trapezoids. This was the standard style for the doors and windows of the Incas.
- Polite number, also known as a trapezoidal number
- Trapezoidal rule, also known as trapezium rule
- Wedge, a polyhedron defined by two triangles and three trapezoid faces.
- Saccheri quadrilateral
- "trapezoid - Wiktionary".
- Types of quadrilaterals
- Oxford English Dictionary entry at trapezoid.
- Weisstein, Eric W., "Trapezoid", MathWorld.
- "American School definition from "math.com"". Retrieved 2008-04-14.
- Trapezoids, accessed 2012-02-24.
- Martin Josefsson, "Characterizations of trapezoids", Forum Geometricorum, 13 (2013) 23-35.
- Quadrilateral Formulas, The Math Forum, Drexel University, 2012, .
- GoGeometry, accessed 2012-07-08.
- Owen Byer, Felix Lazebnik and Deirdre Smeltzer, Methods for Euclidean Geometry, Mathematical Association of America, 2010, p. 55.
- efunda, General Trapezoid, accessed 2012-07-09.
- Chambers 21st Century Dictionary Trapezoid
- "1913 American definition of trapezium". Merriam-Webster Online Dictionary. Retrieved 2007-12-10.
- Oxford English Dictionary entries for trapezoid and trapezium.
- "Larousse definition for trapézoïde".
- Weisstein, Eric W., "Right trapezoid", MathWorld.
- Trapezoid definition, area of a trapezoid, median of a trapezoid, with interactive animations
- Trapezoid (North America) at elsy.at: Animated course (construction, circumference, area)
- on Numerical Methods for Stem Undergraduate
- Autar Kaw and E. Eric Kalu, Numerical Methods with Applications, (2008) |
In computer science, a three-way comparison takes two values A and B belonging to a type with a total order and determines whether A < B, A = B, or A > B in a single operation, in accordance with the mathematical law of trichotomy.
Many processors have instruction sets that support such an operation on primitive types. Some machines have signed integers based on a sign-and-magnitude or one's complement representation (see signed number representations), both of which allow a differentiated positive and negative zero. This does not violate trichotomy as long as a consistent total order is adopted: either −0 = +0 or −0 < +0 is valid. Common floating-point types, however, have an exception to trichotomy: there is a special value "NaN" (Not a Number) such that x < NaN, x > NaN, and x = NaN are all false for all floating-point values x (including NaN itself).
In C, the functions strcmp and memcmp perform a three-way comparison between strings and memory buffers, respectively. They return a negative number when the first argument is lexicographically smaller than the second, zero when the arguments are equal, and a positive number otherwise. This convention of returning the "sign of the difference" is extended to arbitrary comparison functions by the standard sorting function qsort, which takes a comparison function as an argument and requires it to follow the same convention.
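For illustration, Python's sorted() accepts the same style of comparator through functools.cmp_to_key, a rough analogue of passing a comparison function to qsort:

from functools import cmp_to_key

def numeric_cmp(a, b):
    # "sign of the difference" convention: negative, zero, or positive
    return a - b

print(sorted([3, 1, 2], key=cmp_to_key(numeric_cmp)))   # [1, 2, 3]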
In C++, the C++20 revision adds the "spaceship operator" <=>, which similarly returns the sign of the difference and can also return different types (convertible to signed integers) depending on the strictness of the comparison.
In Perl (for numeric comparisons only; the cmp operator is used for string lexical comparisons), PHP (since version 7), Ruby, and Apache Groovy, the "spaceship operator" <=> returns the values −1, 0, or 1 depending on whether A < B, A = B, or A > B, respectively. The Python 2.x cmp (removed in 3.x), OCaml compare, and Kotlin compareTo functions compute the same thing. In the Haskell standard library, the three-way comparison function compare is defined for all types in the Ord class; it returns the type Ordering, whose values are LT (less than), EQ (equal), and GT (greater than).
Many object-oriented languages have a three-way comparison method, which performs a three-way comparison between the object and another given object. For example, in Java, any class that implements the Comparable interface has a compareTo method which either returns a negative integer, zero, or a positive integer, or throws a NullPointerException (if one or both objects are null). Similarly, in the .NET Framework, any class that implements the IComparable interface has such a CompareTo method.
Since Java version 1.5, the same can be computed using the Math.signum static method if the difference can be known without computational problems such as the arithmetic overflow mentioned below. Many computer languages allow the definition of functions, so a compare(A, B) could be devised appropriately, but the question is whether or not its internal definition can employ some sort of three-way syntax or must fall back on repeated tests.
When implementing a three-way comparison where a three-way comparison operator or method is not already available, it is common to combine two comparisons, such as A = B and A < B, or A < B and A > B. In principle, a compiler might deduce that these two expressions could be replaced by only one comparison followed by multiple tests of the result, but mention of this optimisation is not to be found in texts on the subject.
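A minimal Python sketch of this construction, shown here purely for illustration; it works because Python's booleans are the integers 0 and 1, so the two two-way comparisons combine directly into a three-way result:

def cmp(a, b):
    # Combine the comparisons a > b and a < b into one three-way result.
    return (a > b) - (a < b)

print(cmp(1, 2), cmp(2, 2), cmp(3, 2))   # -1 0 1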
In some cases, three-way comparison can be simulated by subtracting A and B and examining the sign of the result, exploiting special instructions for examining the sign of a number. However, this requires the type of A and B to have a well-defined difference. Fixed-width signed integers may overflow when they are subtracted, floating-point numbers have the value NaN with undefined sign, and character strings have no difference function corresponding to their total order. At the machine level, overflow is typically tracked and can be used to determine order after subtraction, but this information is not usually available to higher-level languages.
One example of a three-way conditional provided by a programming language is Fortran's now-deprecated three-way arithmetic IF statement, which considers the sign of an arithmetic expression and offers three labels to jump to according to whether the result is negative, zero, or positive.
The common library function strcmp in C and related languages is a three-way lexicographic comparison of strings; however, these languages lack a general three-way comparison of other data types.
The three-way comparison operator for numbers is denoted <=> in Perl, Ruby, Apache Groovy, PHP, Eclipse Ceylon, and C++, and is called the spaceship operator.
The name originated with Randal L. Schwartz, whom the operator reminded of the spaceship in an HP BASIC Star Trek game. Another coder has suggested that it was so named because it looks similar to Darth Vader's TIE fighter in the Star Wars saga.
Example in PHP:
echo 1 <=> 1; // 0
echo 1 <=> 2; // -1
echo 2 <=> 1; // 1
Three-way comparisons are easy to compose in order to build lexicographic comparisons of non-primitive data types, unlike two-way comparisons.
In Perl, such comparisons can be composed by chaining comparison operators with || (cmp, in Perl, is for strings; <=> is for numbers). Two-way equivalents tend to be less compact but not necessarily less legible. The chained form takes advantage of short-circuit evaluation of the || operator and the fact that 0 is considered false in Perl: if the first comparison is equal (and thus evaluates to 0), evaluation "falls through" to the second comparison, and so on, until it finds one that is non-zero or until it reaches the end.
In some languages, including Python, Ruby, Haskell, etc., comparison of lists is done lexicographically, which means that it is possible to build a chain of comparisons like the above example by putting the values into lists (or tuples) in the order desired; a Python sketch of this follows below.
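A minimal Python sketch of both composition styles (the field names and values are illustrative only, not from the original article):

def cmp(a, b):
    return (a > b) - (a < b)

# Chained composition, mirroring the Perl idiom: 0 is falsy in Python too,
# so evaluation "falls through" to the next comparison on a tie.
def compare_people(p, q):
    return cmp(p["last"], q["last"]) or cmp(p["first"], q["first"])

print(compare_people({"last": "Smith", "first": "Alice"},
                     {"last": "Smith", "first": "Bob"}))   # -1

# Tuple comparison gives the same lexicographic ordering directly.
print(("Smith", "Alice") < ("Smith", "Bob"))               # True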
In computer science, an integer is a datum of integral data type, a data type that represents some range of mathematical integers. Integral data types may be of different sizes and may or may not be allowed to contain negative values. Integers are commonly represented in a computer as a group of binary digits (bits). The size of the grouping varies so the set of integer sizes available varies between different types of computers. Computer hardware nearly always provides a way to represent a processor register or memory address as an integer.
IEEE 754-1985 was an industry standard for representing floating-point numbers in computers, officially adopted in 1985 and superseded in 2008 by IEEE 754-2008, and then again in 2019 by minor revision IEEE 754-2019. During its 23 years, it was the most widely used format for floating-point computation. It was implemented in software, in the form of floating-point libraries, and in hardware, in the instructions of many CPUs and FPUs. The first integrated circuit to implement the draft of what was to become IEEE 754-1985 was the Intel 8087.
In computer programming, a string is traditionally a sequence of characters, either as a literal constant or as some kind of variable. The latter may allow its elements to be mutated and the length changed, or it may be fixed. A string is generally considered as a data type and is often implemented as an array data structure of bytes that stores a sequence of elements, typically characters, using some character encoding. String may also denote more general arrays or other sequence data types and structures.
In computing, NaN, standing for Not a Number, is a member of a numeric data type that can be interpreted as a value that is undefined or unrepresentable, especially in floating-point arithmetic. Systematic use of NaNs was introduced by the IEEE 754 floating-point standard in 1985, along with the representation of other non-finite quantities such as infinities.
In computer science, conditionals are programming language commands for handling decisions. Specifically, conditionals perform different computations or actions depending on whether a programmer-defined boolean condition evaluates to true or false. In terms of control flow, the decision is always achieved by selectively altering the control flow based on some condition.
In computer programming, a function object is a construct allowing an object to be invoked or called as if it were an ordinary function, usually with the same syntax. Function objects are often called functors.
In computer programming, operators are constructs defined within programming languages which behave generally like functions, but which differ syntactically or semantically.
XPath 2.0 is a version of the XPath language defined by the World Wide Web Consortium, W3C. It became a recommendation on 23 January 2007. As a W3C Recommendation it was superseded by XPath 3.0 on 10 April 2014.
In computer science, the Boolean data type is a data type that has one of two possible values which is intended to represent the two truth values of logic and Boolean algebra. It is named after George Boole, who first defined an algebraic system of logic in the mid 19th century. The Boolean data type is primarily associated with conditional statements, which allow different actions by changing control flow depending on whether a programmer-specified Boolean condition evaluates to true or false. It is a special case of a more general logical data type —logic doesn't always need to be Boolean.
A bit array is an array data structure that compactly stores bits. It can be used to implement a simple set data structure. A bit array is effective at exploiting bit-level parallelism in hardware to perform operations quickly. A typical bit array stores kw bits, where w is the number of bits in the unit of storage, such as a byte or word, and k is some nonnegative integer. If w does not divide the number of bits to be stored, some space is wasted due to internal fragmentation.
In computer science, a relational operator is a programming language construct or operator that tests or defines some kind of relation between two entities. These include numerical equality and inequalities.
In computer programming, the Schwartzian transform is a technique used to improve the efficiency of sorting a list of items. This idiom is appropriate for comparison-based sorting when the ordering is actually based on the ordering of a certain property of the elements, where computing that property is an intensive operation that should be performed a minimal number of times. The Schwartzian transform is notable in that it does not use named temporary arrays.
In computer programming, a sigil is a symbol affixed to a variable name, showing the variable's datatype or scope, usually a prefix, as in $variable, where $ is the sigil.
The computer programming languages C and Pascal have similar times of origin, influences, and purposes. Both were used to design their own compilers early in their lifetimes. The original Pascal definition appeared in 1969 and a first compiler in 1970. The first version of C appeared in 1972.
Signed zero is zero with an associated sign. In ordinary arithmetic, the number 0 does not have a sign, so that −0, +0 and 0 are identical. However, in computing, some number representations allow for the existence of two zeros, often denoted by −0 and +0, regarded as equal by the numerical comparison operations but with possible different behaviors in particular operations. This occurs in the sign and magnitude and ones' complement signed number representations for integers, and in most floating-point number representations. The number 0 is usually encoded as +0, but can be represented by either +0 or −0.
The syntax of the Python programming language is the set of rules that defines how a Python program will be written and interpreted. The Python language has many similarities to Perl, C, and Java. However, there are some definite differences between the languages.
This comparison of programming languages (array) compares the features of array data structures or matrix processing for various computer programming languages.
This article compares a large number of programming languages by tabulating their data types, their expression, statement, and declaration syntax, and some common operating-system interfaces.
Some programming languages provide a built-in (primitive) rational data type to represent rational numbers like 1/3 and −11/17 without rounding, and to do arithmetic on them. Examples are the ratio type of Common Lisp, and analogous types provided by most languages for algebraic computation, such as Mathematica and Maple. Many languages that do not have a built-in rational type still provide it as a library-defined type.
The structure of the Perl programming language encompasses both the syntactical rules of the language and the general ways in which programs are organized. Perl's design philosophy is expressed in the commonly cited motto "there's more than one way to do it". As a multi-paradigm, dynamically typed language, Perl allows a great degree of flexibility in program design. Perl also encourages modularization; this has been attributed to the component-based design structure of its Unix roots, and is responsible for the size of the CPAN archive, a community-maintained repository of more than 100,000 modules.
Herb Sutter proposed the <=> syntax for C++ in a paper entitled "Consistent Comparison". It was successfully merged into the C++20 draft in November 2017. |
To put an accurate price on carbon, you need to know how much you have and where it’s located, researchers say
Stanford University scientists have produced the first-ever high-resolution carbon geography of Peru, a country whose tropical forests are among the world’s most vital in terms of mitigating the global impact of climate change.
Released today, the 69-page report to Peru’s Ministry of the Environment could become a tool itself to battle rising temperatures. It is complete with vivid 3-D maps that pinpoint with a high degree of certainty the carbon density of Peru’s vast and varied landscape, from its western deserts and savannas, to its lowland forests, to its soaring Andean peaks, to its lush eastern Amazon rainforests.
The maps also reveal in sharp detail what’s missing: large swaths of once carbon-laden jungles now stripped bare by the extraction industry. Many of Peru’s gold, copper and silver mines operate legally; many of them do not. Environmental devastation is often the result.
The report represents two years of intensive aerial surveying by Greg Asner, a global ecologist with the Carnegie Institution for Science at Stanford University, and his team, which operates the Carnegie Airborne Observatory (CAO).
The CAO is a twin-engine turboprop Dornier 228 filled with more than $10 million in the latest short-wave and infrared sensors. Those tools can scan as much as 100,000 hectares (240,000 acres) a day at a resolution of one hectare (2.4 acres). At that level of precision, the data reveal not only the height of the trees below, but also the individual leaves on each tree as well as the chemical activity in all those leaves.
Asner's team sampled 16 million acres of ecosystems within Peru's 320 million total acres. Those samples were scaled up to a country-wide map using field plots and existing satellite imagery. Colors in the national map indicate how much carbon is stored above ground. The scale goes from blue (zero carbon) to dark red in the highest areas.
The report found, for example, that 53 percent of all Peruvian carbon is stored in one large Amazonian region — Loreto, followed by Ucayali and Madre de Dios (26 percent combined).
Asner’s report emphasizes its underlying motivation and connection to climate-change mitigation: if global markets are going to value carbon at a competitive rate as an incentive to reduce greenhouse gas emissions, countries like Peru will need to know with great certainty how much carbon their forests contain when it comes to negotiating prices for carbon offsets.
“Peru’s minister of the environment can make very good use of this information,” says Enrique Ortiz, one of Peru’s leading conservationists and a senior program officer with the Blue Moon Fund in Washington, D.C., which supports global environmental projects. “It is a good proxy for creating mechanisms to reward good action, like which areas need to be protected and how to manage carbon trade. It also shows us where illegal mining is taking place, and that’s important, too.”
Asner and his Carnegie scientists used the CAO to map Panama’s carbon stock a year ago. But he says his maps of Peru are far larger and more accurate with potentially a broader impact.
“These maps suddenly place Peru at the very forefront of the challenge to combat climate change,” says Asner, whose 2013 TED talk on his aerial research has 522,000 views. “It’s a sea change in capabilities. It will enable the country to use its carbon stock as a tool for engaging the international community to slow the rate of deforestation and forest degradation through conservation.”
The importance of Peru
When it comes to the global environment, it is difficult to overstate Peru’s importance. Its Amazon jungles are deemed among the most biodiverse on earth in terms of tree, plant, animal and bird species. It also has the world’s fourth-largest store of tropical forests, behind only neighboring Brazil, the Congo and Indonesia. Critically, tropical forests soak up and store carbon emissions from industrialized countries. In doing so, they are an irreplaceable line of defense in slowing the rate of rising temperatures due to global warming.
The Carnegie report also comes at a critical time for Peru. While still largely poor, Peru’s economy is the fastest-growing in Latin America in part because of the prevalence of mining for minerals and drilling for fossil fuels. Those industries, which flourish as trees fall, helped put Peru at the top of another list — a leader among Amazonian countries in deforestation rates. The country lost some 400,000 acres in 2012.
To keep its economic growth rolling, the Peruvian Congress stunned environmentalists by passing a law on July 11 that reduces the authority of its Ministry of the Environment. The new laws could make it easier for mining and drilling activities to increase in forested lands, and make it more difficult to add additional protected land to Peru’s large inventory.
“Despite making good steps in regards to both economic growth and environmental protections over the last 20 years, these new laws show a lack of consistency and add doubt regarding the direction we are going,” says Pedro Solano, director of the influential Peruvian Society for Environmental Law in Lima. “Our environmental policy is still strong, but now politics will dictate more decisions.”
All this comes as Lima is preparing to host in December the 20th annual Conference of the Parties (COP) of the United Nations Framework Convention on Climate Change. The COP process culminates in Paris in late 2015, when the group will seek binding international agreements akin to the Kyoto Protocol signed in 1997.
In Lima, policy makers at the COP will negotiate amid the pressure of another looming deadline: the scientific community's consensus view that widespread efforts to reduce carbon emissions must be implemented in the next 10 years to limit predicted global temperature increases to around 3 degrees F. If not, far worse droughts, hurricanes and sea-level rises than we're already experiencing are anticipated in the next 50 to 75 years.
In such a context, the Peruvian carbon maps created by Asner and his team take on a heightened importance. He says the maps can help place a competitive value on carbon, which he believes is the best, and possibly only, hope to keep large tracts of tropical forests intact. “I’m going to the COP as a delegate and I’m going to give these maps to everyone I can,” Asner says. “The negotiators need to see the truth about what they have to lose and what they have to gain.”
Tropical forests: stand or fall?
As I wrote for National Geographic NewsWatch, one of the cruel ironies of climate change is that countries that ring the equator, most of them poor and eager to develop, possess a valuable asset in fighting rising temperatures that they can only monetize if they destroy it: their tropical forests.
These lush, dense woodlands are incomparably important to our lives on earth. They pull greenhouse gases from the air and store them, thus slowing the rate of global warming. They play a powerful role in the water cycle, thus creating clouds that make weather around the world. They harbor and sustain the majority of the earth’s biodiversity, thus providing shelter and habitats to countless species of wildlife and plantlife. And they provide these vital ecological services to all of us on the planet for free.
But these forests routinely sit atop lucrative troves of fossil fuels or minerals. Or they cover land that, when cleared, can be used for cattle ranching and palm-oil farming. Often, the governments in these countries, staring at high poverty and unemployment rates, choose revenue over conservation.
When that happens, a double whammy occurs. What was once a carbon sponge becomes a sieve. When trees fall, they release their carbon. Global deforestation contributes an estimated 17 percent of all greenhouse gas emissions, remarkably on par with the combined fuel burned in cars, trucks, trains and airplanes, according to the EPA.
Converting carbon to dollars
Asner believes that the high-resolution carbon maps he is producing can play a crucial role in changing the economic dynamic in tropical countries. If you know how much carbon is locked up in your forests and vegetation — in Peru’s case, his report estimates the amount at 6.92 billion metric tons — you can better place a value on it. And choose conservation over development.
"All the data Greg (Asner) is generating is being converted into knowledge about the carbon cycle," says Miles Silman, a tropical biologist at Wake Forest University who contributed field data to Asner's report. "And that knowledge is being converted into how we think about remediating global climate change. We do that by converting carbon into dollars. We allow countries and enterprises to make an investment or get paid to keep forests standing."
Asner picks up the thread: “The truth is, carbon is now worth about $6 a ton on the voluntary market. That’s pretty low. It’s worth so little because there is a huge uncertainty based on where the carbon is and how much is there. By offering really precise details about the amount of carbon in Peru and where it’s located, it has a chance of being worth a lot more.”
Economists speculate that when carbon returns to or exceeds its peak value in 2006 of $32 a ton, and when carbon offset policies take hold as a vital international strategy in battling climate change, countries like Peru will be rewarded for protecting their forests instead of allowing them to be plowed under. But such international strategies involving carbon offsets are far from reality. Among the best hopes appears to be REDD, the slow-moving, decade-old U.N. program of “reducing emissions from deforestation and forest degradation.”
There is both a supply and demand side to REDD. On the supply side, countries like Peru must show how much area they are saving from deforestation and prove that the land will remain protected. On the demand side, industrialized countries that exceed their annual agreed-upon carbon emissions must be ready to pay countries like Peru for the land it’s preserving in return for carbon offset credits.
But the financing mechanisms and national emissions caps to make REDD work do not exist.
"We want to push this ahead in Lima (at the COP) this year and try to get a (global financing and emissions) agreement next year in Paris," says Chris Meyer, an expert on REDD policy with the Environmental Defense Fund. "To do this, we need to improve our ability to measure carbon stocks and density, and Asner is at the forefront of science in creating new technologies to provide these measurements."
Will it help?
Alessandro Baccini, a remote sensing scientist with Woods Hole Research Center in Massachusetts, says that while national carbon maps have been produced before, the technology used by Asner and the Carnegie team allows for greater precision and accuracy.
“What is really important is the level of information that a country like Peru now has access to,” Baccini says. “It’s very enabling. It helps Peru really understand its landscape and understand the very best data out there for policy purposes — if they want to use it.”
Ideally, Peru will use these maps to steer development and extraction away from areas of highest carbon density; ideally, it will be rewarded by the industrialized world with carbon offset investments for doing so. Ideally, Asner's high-resolution maps in Peru (which cost $1.2 million in private funding to produce) will be replicated in other countries dense with carbon-bearing tropical forests to help bolster the global carbon market and conserve more tropical forests.
"What's important now is to ask: what will they do with it?" Baccini says. "Will it change the game? Will it open the eyes of the (Peruvian) government so that they say, 'Gee, we can't cut down these forests, look at how much carbon we're losing'? We'll see."
Justin Catanoso is an environmental writer based in Greensboro, N.C., and director of journalism at Wake Forest University. His reporting is sponsored by the Pulitzer Center on Crisis Reporting in Washington, D.C. |
Asteroids and meteors enter Earth's atmosphere daily, depositing hundreds of tons of space debris. About every 2,000 years or so, an asteroid the size of a football field strikes, causing massive destruction. According to NASA's Jet Propulsion Laboratory (JPL), between April 15, 1988 and February 1, 2019, at least 771 significant meteor impacts were recorded worldwide. Civilization-threatening asteroids, however, are much rarer, striking only every few million years.
But the truly deadly asteroids hide in the dark depths of space, and their potential impact could be cataclysmic.
But what exactly would happen if any of these killer asteroids hit Earth tomorrow? What would be the extent of the destruction?
Discovery Channel answered this question by simulating the impact of a 500 km wide space rock.
The terrifying video, titled "The Simulation of a Large Asteroid", depicts the planet's fiery destruction from the moment of impact.
Asteroid warning: this simulation shows the effect of a major asteroid striking the Earth
The video shows what would happen if the asteroid hit the planet in the Pacific Ocean just east of Southeast Asia.
When the video shows the asteroid above the Earth's surface, its colossal size blots out the Sun and casts a shadow over entire cities.
As the asteroid rushes toward the ground, the air surrounding it ignites under the effect of friction.
The apocalyptic rock then crashes into the Pacific, where the force of impact instantly strips away more than 10 km of the Earth's crust.
The resulting shockwave sends a tremendous tidal wave in all directions from the point where the asteroid landed.
Columns of fire and smoke appear in the video, propagating in an ever-increasing radius from the impact site.
Debris from the impact is thrown up into low Earth orbit (LEO), and a storm of death and destruction engulfs the planet.
In the simulation video, nothing stands in the way of the fiery shock wave of the asteroid.
The day becomes night, life turns to death and the surface becomes uninhabitable.
The video shows entire countries collapsing one by one as the rapidly deteriorating crust breaks apart.
The shockwave, now spreading across the world at hypersonic speed, vaporizes everything in its path.
We see a land once green and teeming with life transformed into an ocean of fire and rubble.
Of all the deadly asteroids that hit Earth in its violent cosmic past, the asteroid Chicxulub, the dinosaur killer, was the most devastating.
The giant asteroid, which struck the Earth about 65 to 66 million years ago, is thought to have been only about 9.5 km in diameter.
However, a study published in 2017 in the journal Nature found that the asteroid struck in exactly the right place, in what is now Mexico, to wipe out the dinosaurs.
The study states: "Recent studies have shown that this impact on the Yucatan Peninsula heated the hydrocarbons and sulfur contained in these rocks, forming soot aerosols and stratospheric sulphates and causing extreme global cooling and drought."
"These events triggered massive extinction, including dinosaurs, and led to the subsequent macroevolution of mammals."
Asteroid Warning: Asteroid impact simulation shows that all life on Earth is dying
Asteroid Warning: Asteroid simulation involved a 500 km asteroid hitting the Earth
Fortunately, according to the NASA space agency, very few of the asteroids hiding in the depths of space represent a real threat to the Earth.
NASA said: "Scientists are discovering with increasing regularity asteroids and comets with unusual orbits, which bring them closer to the Earth and the Sun.
"Very few of these bodies are potential hazards to the Earth, but the more we know and understand, the better we will be prepared to take appropriate action, if necessary.
"Knowing the size, shape, mass, composition and structure of these objects will help determine the best way to divert a rock from the space that lies on a path threatening the Earth." |
Here, we used the log function to find the logarithmic value of different data types. If the number is negative or zero, the log function raises a ValueError. In this post, we will discuss the math module functions listed below.
Both functions have the same objective, but the Python documentation notes that log2() is more accurate than using log(x, 2). You can see how the value changes when the log base is changed. If the base is any positive number other than 1, the function returns a valid result. You can see from the above examples that nan is not close to any value, not even to itself. On the other hand, inf is not close to ordinary numerical values, not even very large ones, but it is close to itself. When you set the absolute tolerance to 1, the numbers 6 and 7 are close because the difference between them is equal to the absolute tolerance.
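A minimal sketch of the behaviours described above, using only documented math-module functions (the sample values are arbitrary):

```python
import math

# Natural log, explicit base, and the dedicated base-2 function
print(math.log(math.e))        # 1.0
print(math.log(8, 2))          # 3.0 (log of 8 in base 2)
print(math.log2(8))            # 3.0, documented as more accurate than log(x, 2)

# Closeness checks with an absolute tolerance
print(math.isclose(6, 7, abs_tol=1))     # True: |6 - 7| <= 1
print(math.isclose(6, 7, abs_tol=0.2))   # False: the difference exceeds the tolerance
print(math.isclose(math.nan, math.nan))  # False: nan is not close even to itself
print(math.isclose(math.inf, math.inf))  # True: inf is close to itself
```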
What is LN equal to?
The natural logarithm of a number is its logarithm to the base of the mathematical constant e, where e is an irrational and transcendental number approximately equal to 2.718281828459. The natural logarithm of x is generally written as ln x, log_e x, or sometimes, if the base e is implicit, simply log x.
That relationship leads to radians being used in trigonometry and calculus, because they result in more compact formulas. math.log1p() is more accurate for values of x very close to zero because it uses an algorithm that compensates for round-off errors from the initial addition. math.frexp() returns the mantissa and exponent of a floating point number, and can be used to create a more portable representation of the value. Both values are limited in precision only by the platform's floating point C library. Using these dedicated functions gives a more accurate calculation result. Python is well known for its ease of use, a diverse range of libraries, and easy-to-understand syntax.
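A small illustration of those two functions (assuming Python 3; the sample values are arbitrary):

```python
import math

x = 1e-10
# log1p(x) keeps precision for x near zero, where log(1 + x) loses digits
print(math.log(1 + x))   # about 1.0000000827e-10 (round-off error from 1 + x)
print(math.log1p(x))     # about 9.99999999950e-11 (accurate)

# frexp() splits a float into mantissa and exponent: x == m * 2**e, with 0.5 <= |m| < 1
m, e = math.frexp(8.0)
print(m, e)              # 0.5 4
print(math.ldexp(m, e))  # 8.0, the inverse operation
```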
The Log10 Method
Now, let's see what happens if a negative number is entered in the above code. If a negative number is entered, the following ValueError occurs.
In this article, we will study how to calculate the natural log of a number using the math module and some other ways. In this tutorial, we will learn how to calculate log to the base 2 in Python. There are various inbuilt logarithmic functions in the module "math" in Python. The math module is a standard module available in Python.
Find The Closeness Of Numbers With Python Isclose()
Python has a built-in library, math, which has all sorts of mathematical functions to carry out mathematical computations. And, this library provides accessible functions to calculate logarithmic results as well. In this article, you learned about the Python math module. The module provides useful functions for performing mathematical calculations that have many practical applications.
The Python math module is complemented by the cmath module, which implements many of the same functions but for complex numbers. Trigonometry deals with the relationship between angles and the sides of a triangle. It is mostly interested in right-angled triangles, but it can also be applied to other types of triangles. The Python math module provides very useful functions that let you perform trigonometric calculations.
What is E equal to?
The number e, also known as Euler’s number, is a mathematical constant approximately equal to 2.71828, and can be characterized in many ways. It is the base of the natural logarithm. It is the limit of (1 + 1/n)n as n approaches infinity, an expression that arises in the study of compound interest.
To avoid an exception when taking the square root of a negative number, the cmath module's sqrt() function can be used instead. An imaginary number is a number that gives a negative result when squared. dist() returns the Euclidean distance between two points p and q, each given as a sequence of coordinates.
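A brief sketch of both points (math.dist() requires Python 3.8 or later; the sample points are arbitrary):

```python
import math
import cmath

# Euclidean distance between two points given as coordinate sequences
p = (1.0, 2.0)
q = (4.0, 6.0)
print(math.dist(p, q))   # 5.0

# math.sqrt() raises ValueError for negatives; cmath.sqrt() returns a complex result
print(cmath.sqrt(-9))    # 3j
```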
Python As A Calculator
The above-mentioned methods are built-in and can be found in the math module. They can be accessed and used after the math module is imported and referencing it with the dot operator. The Python Math Library provides us with functions and constants that we can use to perform arithmetic and trigonometric operations in Python. The library comes installed in Python, hence you are not required to perform any additional installation in order to be able to use it. For more info you can find the official documentation here. The float has been converted to an integer by removing the fractional part and keeping the base number. Note that when you convert a value to an int in this way, it will be truncated rather than being rounded off.
One mistake in the implementation could lead to bugs. But when using factorial(), you don't have to worry about these edge cases because the function handles them all. Therefore, it's a best practice to use factorial() whenever possible. It's always a best practice to check if a value is NaN. If it is, then it could lead to invalid values in your program. Infinity, by contrast, is a mathematical concept representing something that is never-ending or boundless.
We have declared three variables and assigned values with different numeric data types to them. We have then passed them to the exp() method to calculate their exponents. In this section, we will explore the Math library functions used to find different types of exponents and logarithms. In this article, we have understood the working of Python Log functions and have unveiled the variants of the logarithmic function in Python.
We need to use the math module to access the log functions in the code. Calculating exponents and logarithms with Python is easy. This error results because we have not told Python to include the sin function. The sin function is part of the Python Standard Library. To use Python's sin function, first import the sin function from the math module, which is part of the Python Standard Library. Functions such as math.expm1() and math.log1p() calculate their results in a way that is accurate for x near zero. math.lgamma() returns the natural logarithm of the absolute value of the Gamma function for the input value.
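A short sketch of the import and of the two accuracy-oriented functions just mentioned (the sample values are arbitrary):

```python
import math
from math import sin   # importing sin avoids the NameError described above

print(sin(math.pi / 2))      # 1.0

# expm1(x) computes e**x - 1 accurately for x near zero
x = 1e-10
print(math.exp(x) - 1)       # about 1.0000000827e-10: precision lost in the subtraction
print(math.expm1(x))         # about 1.00000000005e-10: accurate

# lgamma(x) returns the natural log of |Gamma(x)|
print(math.lgamma(5))        # log(4!) = log(24), about 3.178
```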
However, in the second case, the difference between 6 and 7 is not less than or equal to the established absolute tolerance of 0.2. When the number is negative, trunc() behaves the same as ceil(). You can't give non-number values as input to ceil(). Inputting a negative value will result in a ValueError reading factorial() not defined for negative values.
To use the math module, we need to import it using import math. Both the math module and the NumPy library can be used for mathematical calculations. NumPy has several similarities with the math module. NumPy has a subset of functions, similar to math module functions, that deal with mathematical calculations. Both NumPy and math provide functions that deal with trigonometric, exponential, logarithmic, hyperbolic and arithmetic calculations. You don’t have to implement your own functions to calculate GCD.
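A one-line illustration of that last point, using the standard library's own math.gcd() (a trivial sketch, not from the article):

```python
import math

print(math.gcd(24, 36))   # 12 -- no need to hand-roll Euclid's algorithm
```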
Logarithm In Base 10
You can use math.pi to calculate the area and the circumference of a circle. If the number argument is a positive number, the log function returns the output. The two-argument form returns the log of the first argument, taking the second argument as the base. If the second parameter isn't given, it takes the natural log and calculates the value. The library is a built-in Python module, therefore you don't have to do any installation to use it. In this article, we will be showing example usage of the Python Math Library's most commonly used functions and constants. math.lgamma(x) returns the natural logarithm of the absolute value of the Gamma function at x.
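A minimal sketch of the two-argument log() form and of using math.pi, with arbitrary sample values:

```python
import math

# Two-argument form: log of the first argument in the base given by the second
print(math.log(100, 10))   # 2.0
print(math.log(64, 2))     # 6.0
# One-argument form defaults to the natural log (base e)
print(math.log(math.e))    # 1.0

# math.pi for the area and circumference of a circle of radius r
r = 3.0
area = math.pi * r ** 2
circumference = 2 * math.pi * r
print(area, circumference)
```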
The log() function is designed to calculate the logarithm of a number to a given base. The below code represents some of the logarithmic functions of the math module. The below code represents some of the trigonometric functions of the math module. In order to avail the functionalities of the math module, we need to import it in our code using the import math statement.
Python Math Module Log Functions
Trigonometry functions such as sine, cosine, and tangent can also be calculated using the Python REPL. math.log(x[, base]): with one argument, it returns the natural logarithm of x. Write a NumPy program to compute natural, base 10, and base 2 logarithms for all elements in a given array. Although degrees are more commonly used in everyday discussions of angles, radians are the standard unit of angular measure in science and math. Representing precise values in binary floating point memory is challenging.
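One possible sketch of the NumPy exercise mentioned above; the array contents here are arbitrary:

```python
import numpy as np

arr = np.array([1.0, 2.0, np.e, 10.0, 100.0])
print(np.log(arr))     # natural logarithm, element-wise
print(np.log10(arr))   # base-10 logarithm
print(np.log2(arr))    # base-2 logarithm
```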
- In this program, we have taken input from the user then we have calculated the logarithm of base 2.
- Number theory is a branch of pure mathematics, which is the study of natural numbers.
- If you ever want to find the sum of the values of an iterable without using a loop, then math.fsum() is probably the easiest way to do so.
- Surprisingly log base 10 is 14.5% faster than natural log.
- NumPy is much faster when working with N-dimensional arrays because of the optimizations for them.
Any attempt to calculate a square root of a negative value results in a ValueError. The math module includes three functions for converting floating point values to whole numbers. Each takes a different approach, and will be useful in different circumstances. x: though there are many parameters in numpy.log(), we will study only one parameter for calculating the natural log of one element.
Base: by default, the value of this is 'e', meaning that if we do not provide any base, it will calculate the natural log. But we can change the value of the base according to our needs. The log2() function directly calculates the log base 2 of a number.
ceil() produces the smallest integer greater than or equal to the input value. arr: in this parameter, we have to pass the array whose natural log we have to find.
General logarithmic function: here a is the base of the logarithm, which can be any number. You learned about exponential functions in a previous section.
Scientific research has identified the half-lives of all radioactive elements. You can substitute values into the equation to calculate the remaining quantity of any radioactive substance.
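The text refers to "the equation" for radioactive decay without reproducing it; below is a minimal sketch of the standard exponential-decay relationship. The function name and the carbon-14 figures are illustrative assumptions, not from the article:

```python
import math

def remaining_quantity(n0, half_life, elapsed):
    """Amount left after `elapsed` time, given the half-life (same time units)."""
    decay_constant = math.log(2) / half_life
    return n0 * math.exp(-decay_constant * elapsed)

# e.g. 100 g of carbon-14 (half-life about 5730 years) after 10,000 years
print(remaining_quantity(100.0, 5730, 10000))   # about 29.8 g remaining
```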
As you can see from the execution times, factorial() is faster than the other methods. The recursion-based method is the slowest out of the three. Although you might get different timings depending on your CPU, the order of the functions should be the same. Inputting a decimal value results in a ValueError reading factorial() only accepts integral values. This approach returns the desired output with a minimal amount of code. 4!, or four factorial, gives the value 24 by multiplying the range of whole numbers from 4 down to 1.
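A rough way to reproduce the kind of timing comparison described above; the recursive helper is an assumed stand-in for "the other methods", and absolute timings will vary by machine:

```python
import math
import timeit

def fact_recursive(n):
    # naive recursion-based factorial, for comparison only
    return 1 if n < 2 else n * fact_recursive(n - 1)

print(math.factorial(4))   # 24
print(timeit.timeit("math.factorial(10)", globals=globals(), number=100_000))
print(timeit.timeit("fact_recursive(10)", globals=globals(), number=100_000))
```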
What Is Energy? Elementary School Lesson
With an introduction to the ideas of energy, students discuss specific energy types and practical energy sources. Associated hands-on activities help them identify energy types in their surroundings and enhance their understanding of the concept of energy.
Design Step 1: Identify the Need High School Activity
Students practice the initial steps involved in an engineering design challenge. They begin by reviewing the steps of the engineering design loop and discussing the client need for the project. Next, they identify a relevant context, define the problem within their design teams, and examine the project's requirements and constraints. (Note: Conduct this activity in the context of a design project that students are working on, which could be a challenge determined by the teacher, brainstormed with the class, or the example project challenge provided [to design a prosthetic arm that can perform a mechanical function].)
Engineering: Simple Machines Elementary School Lesson
Simple machines are devices with few or no moving parts that make work easier. Students are introduced to the six types of simple machines — the wedge, wheel and axle, lever, inclined plane, screw, and pulley — in the context of the construction of a pyramid, gaining high-level insights into tools that have been used since ancient times and are still in use today. In two hands-on activities, students begin their own pyramid design by performing materials calculations, and evaluating and selecting a construction site. The six simple machines are examined in more depth in subsequent lessons in this unit.
Testing Model Structures: Jell-O Earthquake in the Classroom Elementary School Activity
Students make sense of the design challenges engineers face from earthquake phenomena. Working as engineering teams, they explore how engineers design and construct buildings to withstand earthquake damage, applying elements of the engineering design process as they build their own model structures using toothpicks and marshmallows. The groups design, build, and test their model buildings and then determine how earthquake-proof their designs are by testing them on an earthquake simulator pan of Jell-O®.
Biomimicry: Natural Designs Elementary School Activity
Students learn about biomimicry and how engineers often imitate nature in the design of innovative new products. They demonstrate their knowledge of biomimicry by practicing brainstorming and designing a new product based on what they know about animals and nature.
Model Greenhouses High School Activity
Students learn about the advantages and disadvantages of the greenhouse effect. They construct their own miniature greenhouses and explore how their designs take advantage of heat transfer processes to create controlled environments. They record and graph measurements, comparing the greenhouse indoor and outdoor temperatures over time. Students are also introduced to global issues such as greenhouse gas emissions and their relationship to global warming.
Kinetic and Potential Energy of Motion Middle School Lesson
In this lesson, students are introduced to both potential energy and kinetic energy as forms of mechanical energy. A hands-on activity demonstrates how potential energy can change into kinetic energy by swinging a pendulum, illustrating the concept of conservation of energy. Students calculate the potential energy of the pendulum and predict how fast it will travel knowing that the potential energy will convert into kinetic energy. They verify their predictions by measuring the speed of the pendulum.
Leaning Tower of Pasta Middle School Activity
Using spaghetti and marshmallows, students experiment with different structures to determine which ones are able to handle the greatest amount of load. Their experiments help them to further understand the effects that compression and tension forces have with respect to the strength of structures. Spaghetti cannot hold much tension or compression; therefore, it breaks very easily. Marshmallows handle compression well, but do not hold up to tension.
Connect the Dots: Isometric Drawing and Coded Plans Middle School Activity
Students learn about isometric drawings and practice sketching on triangle-dot paper the shapes they make using multiple simple cubes. They also learn how to use coded plans to envision objects and draw them on triangle-dot paper. A PowerPoint® presentation, worksheet and triangle-dot (isometric) paper printout are provided. This activity is part of a multi-activity series towards improving spatial visualization skills.
Action-Reaction! Rocket Middle School Activity
Students construct rockets from balloons propelled along a guide string. They use this model to learn about Newton's three laws of motion, examining the effect of different forces on the motion of the rocket.
Creating Mini Wastewater Treatment Plants High School Activity
Student teams design and then create small-size models of working filter systems to simulate multi-stage wastewater treatment plants. Drawing from assorted provided materials (gravel, pebbles, sand, activated charcoal, algae, coffee filters, cloth) and staying within a (hypothetical) budget, teams create filter systems within 2-liter plastic bottles to clean the teacher-made simulated wastewater (soap, oil, sand, fertilizer, coffee grounds, beads). They aim to remove the water contaminants while reclaiming the waste material as valuable resources. They design and build the filtering systems, redesigning for improvement, and then measuring and comparing results (across teams): reclaimed quantities, water quality tests, costs, experiences and best practices. They conduct common water quality tests (such as turbidity, pH, etc., as determined by the teacher) to check the water quality before and after treatment.
Life in Space: The International Space Station Elementary School Lesson
Students are introduced to the International Space Station (ISS) with information about its structure, operation and key experiments. The ISS itself is an experiment in international cooperation to explore the potential for humans to live in space. The space station features state-of-the-art science and engineering laboratories to conduct research in medicine, materials and fundamental science to benefit people on Earth as well as people who will live in space in the future.
Pop Rockets Elementary School Activity
Students design and build paper rockets around film canisters, which serve as engines. An antacid tablet and water are put into each canister, reacting to form carbon dioxide gas, and acting as the pop rocket's propellant. With the lid snapped on, the continuous creation of gas causes pressure to build up until the lid pops off, sending the rocket into the air. The pop rockets demonstrate Newton's third law of motion: for every action, there is an equal and opposite reaction. An instructions handout, worksheets (English and Spanish) and quiz are provided.
Creating an Electromagnet Elementary School Activity
Student teams investigate the properties of electromagnets. They create their own small electromagnets and experiment with ways to change their strength to pick up more paperclips. Students learn about ways that engineers use electromagnets in everyday applications.
Creating Model Working Lungs: Just Breathe Elementary School Activity
Students explore the inhalation/exhalation process that occurs in the lungs during respiration. Using everyday materials, each student team creates a model pair of lungs.
Grades K-8 Worksheets
Looking for high-quality Math worksheets aligned to Common Core standards for Grades K-8?
Our premium worksheet bundles contain 10 activities and an answer key to challenge your students and help them understand each and every topic within their grade level.
Read the lesson below on multiplying by multiples of 10 and then work through it with your child. Print off the worksheets before starting.
Students normally learn best when they can associate what they are learning with what they already know. This introduction will provide your child with an opportunity to do this and help prepare them for what is to be learned later in this lesson.
This introduction should take around 10-15 minutes. Do not worry if it takes longer.
Materials Needed: Cut out the flashcards from the two sets below:
- Play the game “What’s my fact?” to review multiplication facts with your child. This will help them recall prior knowledge before getting into the lesson content.
- To play Memory with your child, place the flashcards on the table facedown.
- Instruct your child to turn over one multiplication card at a time and solve the problem. If they get the problem right they get to keep the card, if the answer is incorrect they put the card back.
- Alternate turns between your children. If you are playing with one child then play the game too. However, have them solve the problem on your turn as well. If they get the problem wrong then, you keep the card. If they get the problem right then they keep the card.
- The object of the game is to have the most cards at the end of the game.
- As you play the game evaluate how well your child answers the multiplication problems. This is designed to give you an idea of how well they know their multiplication facts before learning more complex concepts in multiplication. If they struggle with basic multiplication facts you may want to review those facts prior to moving forward.
This part should take around 20 to 30 minutes but you can take as long as necessary.
Review the above standard that this lesson is based on to help your child see what they will be learning and why.
- Explain that they will be learning more about multiplication.
- Let your child know that what was practiced in the warm-up was a great start but that they will be learning ways to complete harder multiplication problems
Tell your child that you will show them how to multiply one-digit whole numbers by multiples of 10.
- Discuss with them that when they multiply basic facts, like the facts that were used in the game, they are drawing on their own memorization skills.
- Let them know that when they multiply numbers with more than one digit they have to use these same skills to solve the problem.
- Ask your child to write the following problem on a sheet of paper. 3 x 20.
- When this is complete have them show you how they wrote the problem. Then complete the steps below to show them how to write the problem according to proper place value principles and how to solve the problem.
- Tell your child that when they write a problem where a number has more than one digit, they must line up the numbers so they are in the right place value columns.
- Demonstrate to them on a board or on a sheet of paper how to write the multiplication problem shown below.
- Explain that when they write a multiplication problem with a number that has more than one digit, they must put the larger number on top and then line it up according to place value. As in this example, the 3 must be underneath the 0 because they are both in the ones place. The 2 in 20 is in the tens column.
- Now show your child how to solve the problem. Explain that when they solve the problem they only need to look at each column of digits one at a time. They will solve the problem from right to left, starting with the lowest place value column, which in this case is the ones column.
- To solve the problem in this column they must multiply 0 x 3, which equals 0. See below.
- Once the ones column has been multiplied, tell your child that you will now multiply the tens column. See below.
- Explain to your child that when you write your answer the numbers must be lined up according to place value column they multiplied it from. So in this example they must place the 6 underneath the tens column and the 0 in the ones column.
- Once they do this they have solved the problem, and in this case they now know that 20 x 3 = 60.
- Explain to your child that they have learned one method for solving multiplication problems with one and two-digits and now they can learn a shortcut to more quickly solve problems that have multiples of ten.
- Tell your child that to do this they must rely on their knowledge of basic multiplication facts.
- Work through the following
- Write the following problem on the board or on a sheet of paper.
- Explain to your child that when they see any two-digit number that has a zero in it, that they can easily multiply the problem.
- Tell them that what they do is simply cover up the 0 or pretend it is not there and bring the zero down as shown below. They will then multiply the other number. For example in the problem above they will multiply 4 x 2.
- Now ask your child to tell you what 4 x 2 is and then write the answer on the board next to the zero.
- Tell your child that they have just solved the problem and the answer is 80. So anytime they see a problem that is a multiple of ten or has a zero in it they bring the zero(s) down and then multiply the other numbers to get the answer.
- Complete this concept again with 50 x 7. Walk them through steps 3 and 4 (a worked version of this example appears just after this list).
- Discuss with your child another way to think of 40 x 2; 40 can be thought of as 4 tens. 2 times 4 tens is 8 tens. 8 tens is 80
- Together complete worksheet #1 using the traditional method on half of the problems and the shortcut method on the other half. Work through each type of problem with your child to help them master concepts from the lesson.
- If you are working with more than one child then group them together to complete Worksheet #1.
- If you are working one-on-one with your child alternate turns completing problems on worksheet #1. However, make sure to let them ‘help’ you solve your problems to keep them involved in the learning.
- Review concepts that they may not completely understand or have questions with.
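For reference, here is the 50 x 7 example worked out with the shortcut (plain arithmetic, so you can check your child's answer): cover the 0 in 50 and multiply 5 x 7 = 35; bring the zero back down to get 350. Checking with place value: 5 tens x 7 = 35 tens = 350.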
Independent Practice/ Lesson Closure
- Play bingo with your child to help them master the concepts taught individually. If you are completing the lesson with only one child play bingo with them.
- To play you will need to use multiplication bingo worksheet #2.
- Give each player a bingo card.
- Call out the various problems listed on multiplication bingo worksheet #2.
- If a problem is called on your child’s card then they are to solve the problem using their preferred method.
- The first person to have a card that is completely blacked out with correct answers wins!
- Children who struggle with this lesson may benefit from reviewing basic multiplication facts. They may also benefit from reviewing place value concepts.
- Children who excel with this lesson are ready for more complicated multiplication problems.
- If your child is advanced have him/her help you with solving the examples on the board to keep them engaged with the lesson.
- If your child is struggling, have them help with reading the problems, working out components to the problem that they excel in to keep them engaged.
- You may also consider giving your child a worksheet on simple multiplication facts if they are significantly struggling with the materials.
Multiplying by Multiples of 10 Worksheets
The 2 worksheets listed above are grouped below:
The volcanic radiative "forcing" produced by major stratospheric eruptions is usually estimated from observations of atmospheric optical depth (AOD). This is obtained by a variety of methods, including ground- and satellite-based measurements. Several groups have worked on producing global averages or geographically gridded data. One such dataset is provided by NASA and covers the Mt Pinatubo eruption period.
Data is given as a series of layers of the stratosphere ( above 15km ) and does not cover the relatively minor contributions from the troposphere.
It is sometimes convenient to have a mathematical description of the time evolution of such a “forcing” rather than a time series of discrete data points.
It was found that the convolution of an exponential function with another exponential fits the Mt Pinatubo disturbance very closely.
The above graph selects the tropical bands from the data and sums the optical densities over the different height layers. Since AOD is a logarithmic measurement based on the transmission of light, summing different layer values is appropriate to get the total AOD for the column.
[ Tropical bands used: 19.6 11.7 3.9 -3.9 -11.7 -19.6 ]
The two time constants found to give a close fit were 3 and 9 months. Non-linear least squares fitting produced 2.95 and 9.1 months. It was found that adding an offset of 0.3 months from the time of the eruption gave a better fit. This can probably be explained as being due to the initial geographic dispersal of the ejecta after the eruption.
This curve provides a convenient analytical mathematical description of the time evolution of AOD, but it would be good to have confirmation that it actually represents the physical processes and is not just a fortuitous resemblance of form.
The third line on the graph is a further convolution which represents a linear relaxation response of the climate system to the radiative perturbation. The derivation of the 8 month climate response is discussed here: https://climategrog.wordpress.com/2015/01/17/on-determination-of-tropical-feedbacks/
The major atmospheric effect of volcanoes, once the initial dust and ash has settled, is the creation of aerosols. Sulphur dioxide combines with water and water vapour to produce a fine aerosol of dilute sulphuric acid, which persists in the stratosphere for several years.
In kinetic chemistry these are referred to as linear rate processes, where the rate of the process is determined by the concentration of the reactant.
For a fixed initial amount, the concentration of the reactant decays exponentially with time. In engineering terms this can be called the impulse response of the system.
One way to calculate the system reaction to a varying input in such a situation, is by convolution of the continuous input with exponential impulse response.
Approximating the explosive injection of SO2 into the atmosphere as an instantaneous impulse producing a step change in the amount of SO2, applying one convolution to represent its conversion by reaction with water vapour into the dilute acid aerosol, and then feeding the result into a second convolution to represent the removal process produces a simple model of the evolution of aerosol concentration over time.
The analytical function resulting from these convolutions, where the reaction rates differ, is given by:
$\dfrac{\lambda_1 \lambda_2}{\lambda_1 - \lambda_2}\left(e^{-\lambda_2 t} - e^{-\lambda_1 t}\right)$ ; Eqn. 1
where λ1 and λ2 are the reciprocals of the time constants of the two reactions.
A special case where the two reactions have the same rate ($\lambda_1 = \lambda_2 = \lambda$) gives:
$\lambda^2\, t\, e^{-\lambda t}$ ; Eqn. 2
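A minimal Python sketch of Eqn. 1, using the 3- and 9-month time constants and the 0.3-month onset offset quoted above; the function and parameter names are illustrative, and this is not the original analysis code:

```python
import numpy as np

def aod_model(t_months, tau1=3.0, tau2=9.0, offset=0.3, scale=1.0):
    """Double-exponential response (Eqn. 1) for time in months since eruption."""
    lam1, lam2 = 1.0 / tau1, 1.0 / tau2
    t = np.asarray(t_months, dtype=float) - offset
    t = np.where(t > 0, t, 0.0)   # no response before the (offset) eruption time
    return scale * lam1 * lam2 / (lam1 - lam2) * (np.exp(-lam2 * t) - np.exp(-lam1 * t))

t = np.arange(0, 60, 1.0)             # months since eruption
aod = aod_model(t)
print(t[np.argmax(aod)], aod.max())   # peak time and height of the fitted shape
```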
Douglass & Knox 2005 used a function of the form of Eqn. 2 to model AOD, though no explanation was given for the choice. It seems to have been chosen as a convenient function which provided a reasonably close fit to the data, rather than having a specific theoretical origin. (Figure 2 of that paper, showing the fit with the authors' original caption, is not reproduced here.)
The single time constant of this model was found to be 7.6 months, between the two values found above for the dual-exponential model. It can be noted that the single-exponential fit does not rise quickly enough, peaks a little too late and decays somewhat too quickly in the tail.
Though the fit is good and the mean squared residuals are small, it does mean that two of the key features that matter when applying this to studying the climate reaction are more notably affected: the overall scaling is less, and the timing of the peak is later.
Also, if the fitted AOD model decays too quickly, the implied climate response will be lengthened. This was another parameter that the paper sought to estimate.
None of these differences is large, and they do not undermine the results of the paper; however, the double-decay model may provide a more accurate result and has the appeal of being grounded in a physical explanation of the chemical processes.
Investigation of the magnitude of the differences may justify using the single exponential model as a more parsimonious substitution for the physically grounded description in some situations. The impact on any derived parameters, as discussed above, and knock on effects to subsequent regression analyses and attribution studies should also be determined. A physically meaningful model would seem preferable in trying to understand the behaviour of the system, rather than just describe it in the most parsimonious way.
European Journal of Statistics and Probability
Vol.1, No.2, pp.1-8, December 2013
“ON THE SUM OF EXPONENTIALLY DISTRIBUTED RANDOM VARIABLES: A CONVOLUTION APPROACH”
Oguntunde, P.E.; Odetunmibi, O.A.; and Adejumo, A.O.
Dartmouth university course notes: section 7.2
Douglass and Knox 2005: "Climate forcing by the volcanic eruption of Mount Pinatubo"
History of Palermo
Human settlement in the Palermo area goes back to prehistoric times. It contains one of the most ancient sites in Sicily: interesting graffiti and prehistoric paintings were discovered in the Addaura grottoes in 1953 by archaeologist Jole Bovio Marconi. They portray dancing figures performing a propitiatory rite, perhaps shamans.
Greeks and Phoenicians
In 734 BC Phoenicians from Tyre (Lebanon) established a flourishing merchant colony in the Palermo area. The relationship of the new city with the Siculi, the people living in the eastern part of the island, involved both commerce and war. The first settlement, which soon became a great city, was called Mabbonath ("lodging" in Phoenician). It was the most important of the three settlements forming the "Phoenician triangle" cited by Thucydides; the others were Motya and Soluntum. Only traces of the necropolis remain from the Phoenician age in Palermo.
Between the 8th and the 7th centuries BC, the Greeks colonized Sicily. They called the city Panormus ("All port") and traded with the Carthaginians, Phoenician descendants who were creating an empire from the coast of what is today's Tunisia. The two civilizations lived together in Sicily until the Roman conquest. The Greek colony of Panormus had two nuclei: the Palepolis ("ancient city"), between the two rivers Kemonia and Papirethos, and the Neapolis ("new city"). Curiously, early Naples was divided into two parts with the same names, and its current name stems from the latter (Neapolis).
In the course of the Punic Wars Palermo was fought over by the Carthaginians and the Romans until, in 254 BC, the Roman fleet besieged the city. It eventually surrendered and the population had to pay a war tribute to save their liberty. Hasdrubal tried to take it back, but the Roman consul Quintus Caecilius Metellus defeated him and imposed a lasting Roman rule over Palermo. In 247 BC, Hamilcar camped with the Carthaginian army on Monte Pellegrino, then called Ercta. However this was in vain, as Palermo remained loyal to Rome. It therefore gained the titles of Praetura, the Golden Eagle, and the right to mint a coin of its own, as one of only five free cities in Sicily.
Roman and Byzantine age
Panormus was a flourishing and beautiful city during the Golden Age of the Roman Republic and Empire. In Piazza Vittoria (“Victory’s Square”) notable palaces and mosaics have been discovered and a large theatre still existed in the Norman age. According to geographer Strabo, during the Roman Empire it provided large amounts of wheat for the capital. However, after the reign of Vespasian, it decayed, and in 445 was sacked by the King of Vandal Africa, Gaiseric. Later it was part of the territory of Odoacer and Theoderic's Ostrogoths.
In 535, the Byzantine general Belisarius stormed the port during Justinian I's program of reconquering Italy, which soon turned into the fierce and disastrous Gothic War. Byzantine rule lasted until 831, when the Aghlabid Arabs disembarked in Mazara del Vallo. The Arabs captured Palermo after a year-long siege and made it the capital city of their Sicilian emirate.
Palermo under Arab rule
After the Byzantines were betrayed by the admiral Euphemius, who fled to Tunisia in 827 and begged the Aghlabid leader Ziyadat Allah for help, the Muslim conquest of Sicily began, putting in place the Emirate of Sicily. The Aghlabids were good administrators, and under their rule Sicily became a rich and flourishing land.
Although their rule was relatively short, it was then that Palermo (called Balharm in Arabic) displaced Syracuse as the prime city of Sicily; it was said to have competed with Córdoba and Cairo in terms of importance and splendor. In general, Arab rule was tolerant, and Jews also had their space and were allowed to prosper. The Arabs are also said to have left a lasting influence on Sicilian culture, especially through "the Arab concept of the family".
After dynastic quarrels, however, there was a Christian reconquest by the Normans from the Duchy of Normandy, descendants of the Vikings; the family who returned the city to Christianity were the Hautevilles. Palermo had been conquered in 831 by Arabs from North Africa and remained the capital of the Arab Emirate of Sicily until 1072, when it returned to Christian rule thanks largely to the efforts of Robert Guiscard and his army. For more than two hundred years Palermo had been the capital of a flourishing Islamic civilisation in Sicily. By 1050, Palermo had a population of 350,000, making it one of the largest cities in Europe, second only to Islamic Spain's capital Córdoba, which had a population of 450,000.
Traces of the ancient Arab domination can still be seen today. Muslim artifacts include the Kasr ("Castle"), on the cape of the Paleopolis, the district of the great mosque, the Kalsa ("Elected"), the emirs' seat along the sea, the area of the Schiavoni ("slaves"), crossed by the Papireto river; and, in the western region, the Moasker, the soldiers' quarter.
However, the Arab emirate became increasingly torn by inner disputes and was a rather easy prey for the Normans, who had entered Sicily in 1061. In 1072, after four years of siege, Palermo fell to Count Roger I of Sicily, ending Muslim rule there.
The Normans restored Christianity as the official religion and declared Palermo to be the capital of the island. In 1130, Roger II was crowned King of Sicily in Palermo. Although Christian, the Normans were tolerant towards the Muslim population, which was a majority in Palermo and the main cities. Jews also remained an important community. However, many mosques were turned into Christian churches. The high level of this multicultural civilization can be seen by the splendour of the new monuments that the new King had built in Palermo. These buildings, which include the church of the Marturana and the Palatine Chapel, show a fascinating mix of Arab, Byzantine and Italian influences.
It was under Roger II of Sicily that his holdings of Sicily and the southern part of the Italian Peninsula were promoted, from the County of Sicily into the Kingdom of Sicily; the kingdom was ruled from Palermo as its capital, with the king's court held at Palazzo dei Normanni. Much construction was undertaken during this period, such as the building of the Palermo Cathedral. The Kingdom of Sicily became one of the wealthiest states in Europe, as wealthy as the fellow Norman state, the Kingdom of England. Though the city's population had dropped to 150,000, it became the largest city in Europe, due to the larger decline in Cordoba's population.
Sicily fell under the control of the Holy Roman Empire in 1194. Palermo was the preferred city of the Emperor Frederick II. The Muslims of Palermo emigrated or were expelled during Holy Roman rule. After an interval of Angevin rule (1266–1282), Sicily came under the house of Aragon. By 1330, Palermo's population had declined to 51,000.
Swabian period and the House of Anjou
The state marriage between the emperor Henry VI and the last descendant of the Norman monarchs, Constance of Hauteville, gave the Kingdom of Sicily and Palermo to the Hohenstaufen house of Germany. However, the noblemen refused to be ruled by a foreigner, and Henry had to fight a rival king before conquering Palermo in 1194 and being crowned as King. The second ruler of the house of Swabia was the famed Frederick II, who spent his boyhood roaming the streets of Palermo and probably considered himself primarily a true Sicilian rather than a German. Under his reign Palermo became the effective capital of the Holy Roman Empire. Palermo's court anticipated the Renaissance courts and hosted some of the finest intellectuals, artists and scholars of the period. The first school of Italian poetry was born in Palermo.
Frederick died in 1250 and was buried in the cathedral. His illegitimate son Manfred succeeded him and continued his cultural and administrative policies. However, in 1266 Manfred was defeated by Charles I of Anjou, and the Kingdom of Sicily passed to this new French house. Palermo declined as the capital was moved to Naples. Charles and his officials exploited Sicily heavily and the island rebelled in 1282 (the Sicilian Vespers), giving itself to the Aragonese.
Palermo under Aragon
Under the Aragonese, Palermo saw internal struggles of noble families such as the Ventimiglia, Alagona and Chiaramante, who contended for control over western Sicily. The sumptuous Palazzo Steri and Palazzo Sclafani were constructed under the Aragonese. The city flourished again by trading raw materials and crafted products with Genoa and Spain. In 1494, after the death of King Martin, Sicily was annexed by Spain and Palermo became the seat of a viceroy. The Jews were expelled and the Holy Inquisition increased its power over the city. The arts were still pre-eminent, with buildings like the church of San Giuseppe, the Spasimo theatre and the Porta Nuova. However, heavy taxes were imposed to pay for this construction program. The eras of Charles V and his son Philip II were difficult for Palermo, as the barons felt free to dominate the city through their unruly bands of bravoes.
After the Treaty of Utrecht (1713), Sicily was handed over to the House of Savoy, but in 1734 it was again a Bourbon possession. Charles III chose Palermo for his coronation as King of the Two Sicilies. Charles had new houses built for the growing population, while trade and industry grew as well. However, Palermo was now just another provincial city, as the royal court resided in Naples. Charles' son Ferdinand, though disliked by the population, took refuge in Palermo after the events that followed the French Revolution in 1798.
From 1820 to 1848 all Sicily was shaken by upheavals, which culminated on January 12, 1848, with the popular insurrection led by Giuseppe La Masa, the first one in Europe that year. A parliament and constitution were proclaimed. The first president was Ruggero Settimo. The Bourbons soon reconquered Palermo (May 1849), which remained under their rule until the appearance of Giuseppe Garibaldi.
This famous general entered Palermo with his troops (the "Thousand") on May 27, 1860. After the plebiscite later that year, Palermo and the whole of Sicily became part of the new Kingdom of Italy (1861).
After the unification of Italy
From that year onwards, Palermo followed the history of Italy as the administrative centre of Sicily. A certain economic and industrial growth was spurred by the Florio family. In the early 20th century Palermo expanded outside the old city walls, mostly to the north along the new boulevard, the Via della Libertà. This road would soon boast a huge number of villas in the Art Nouveau style or Stile Liberty as it is known in Italy. Many of these were built by the famous architect Ernesto Basile. The Grand Hotel Villa Igeia, built by Ernesto Basile for the Florio family, is a good example of Palermitan Stile Liberty. The Teatro Massimo was built in the same period by Basile and his son and was opened in 1897.
During World War II, Palermo was untouched until the Allies began to advance up Italy after the Allied invasion of Sicily in 1943. In July, the harbour and the surrounding quarters were heavily bombed by the allied forces and were all but destroyed. Six decades later, the city centre has still not been fully rebuilt, and hollow walls and devastated buildings are commonplace.
In 1946, the city was declared the seat of the Regional Parliament, as capital of a Special Status Region (1947) whose seat is in the Palazzo dei Normanni. Palermo's future seemed to look bright again. Unfortunately, many opportunities were lost in the coming decades, due to incompetence, incapacity, corruption and abuse of power.
The main theme of the contemporary age is the struggle against the Mafia and against bandits like Salvatore Giuliano, who controlled the neighbouring area of Montelepre. The Italian state had to share effective control of the territory, economic as well as administrative, with the Mafiosi families.
The so-called "Sack of Palermo" is one of the most visible faces of this problem. The term is used today to indicate the heavy building speculation that filled the city with poor buildings. The reduced importance of agriculture in the Sicilian economy had led to a massive migration to the cities, especially Palermo, which swelled in size. Instead of rebuilding the city centre, the town was thrown into a frantic expansion towards the north, where practically a new town was built. The regulatory plan for the expansion was largely ignored. New parts of town appeared almost out of nowhere, but without parks, schools, public buildings, proper roads and the other amenities that characterise a modern city. The Mafia played a huge role in this process, which was an important element in the Mafia's transition from a mostly rural phenomenon into a modern criminal organisation. The Mafia took advantage of corrupt city officials (a former mayor of Palermo, Vito Ciancimino, was convicted over his dealings with Mafiosi) and of protection coming from the Italian central government itself.
Many civil servants lost their lives in the struggle against the criminal organisations of Palermo and Sicily. These include the Carabinieri general Carlo Alberto Dalla Chiesa, the region's president Piersanti Mattarella, Don Giuliani, a priest, and magistrates such as Giovanni Falcone and Paolo Borsellino. Borsellino was killed, together with five members of his escort, in a massive car bombing in Via D'Amelio in Palermo in July 1992. As of 2012, this was the last such bloodbath in Palermo ordered by the Mafia.
Today, Palermo is a city of 720,000 inhabitants still struggling to recover from the devastation of World War II and the damage caused by decades of uncontrolled urban growth. The historic city centre is still partly in ruins, the traffic is heavy, and poverty is widespread. As the city in which the Italian Mafia historically had its main interests, it has also been the scene of several recent well-publicized murders. Situated on one of the most beautiful promontories of the Mediterranean, Palermo nevertheless remains an important trading and business centre and the seat of a university attended by many students from Islamic countries, as its ties with the Muslim world never ceased.
Palermo is connected to the mainland through an international airport and an increasing number of maritime links, but land connections remain poor. This and other factors have so far thwarted the development of tourism, which, drawing on the marvellous legacy of three millennia of history and folklore, has been identified as the main resource for the city's recovery.
- "Brief history of Sicily" (PDF). Archaeology. Stanford.edu. 7 October 2007.
- Of Italy, Touring Club. Authentic Sicily. Touring Editore. ISBN 88-365-3403-1.
- Privitera, Joseph. Sicily: An Illustrated History. Hippocrene Books. ISBN 978-0-7818-0909-2. p. 29: "the Saracens had laid their imprint on the island's mores, language, art, poetry, and cuisine. To this day, the Arab concept of the family still holds firm in Sicily." The origins of the ethos of the Mafia are traced to this period by G. Servadio, Mafioso: A History of the Mafia from Its Origins to the Present Day (1976), p. 5.
- Appleton, The World in the Middle Ages, 100.
- Joseph Strayer, Dictionary of the Middle Ages, Scribner, 1987, t.9, p.352
- J. Bradford De Long and Andrei Shleifer (October 1993), "Princes and Merchants: European City Growth before the Industrial Revolution", The Journal of Law and Economics (University of Chicago Press) 36 (2): 671–702, doi:10.1086/467294
- John Julius, Norwich. The Normans in Sicily: The Normans in the South 1016–1130 and the Kingdom in the Sun 1130-1194. Penguin Global. ISBN 978-0-14-015212-8.
- "Palermo", Italy : Handbook for Travellers: Third Part, Southern Italy, Sicily, Karl Baedeker, 1867, OCLC 4158305
Media related to History of Palermo at Wikimedia Commons
A logarithmic scale is a scale of measurement that displays the value of a physical quantity using intervals corresponding to orders of magnitude, rather than a standard linear scale. The function of the curve may include an exponent, which is the source of its curved nature.
A simple example is a chart whose vertical or horizontal axis has equally spaced increments that are labelled 1, 10, 100, 1000, instead of 0, 1, 2, 3. Each unit increase on the logarithmic scale thus represents an exponential increase in the underlying quantity for the given base (10, in this case).
Presentation of data on a logarithmic scale can be helpful when the data covers a large range of values. The use of the logarithms of the values rather than the actual values reduces a wide range to a more manageable size. Some of our senses operate in a logarithmic fashion (Weber–Fechner law), which makes logarithmic scales for these input quantities especially appropriate. In particular our sense of hearing perceives equal ratios of frequencies as equal differences in pitch. In addition, studies of young children in an isolated tribe have shown logarithmic scales to be the most natural display of numbers by humans.
Definition and base
Logarithmic scales are either defined for ratios of the underlying quantity, or one has to agree to measure the quantity in fixed units. Deviating from these units means that the logarithmic measure will change by an additive constant. The base of the logarithm also has to be specified, unless the scale's value is considered to be a dimensional quantity expressed in generic (indefinite-base) logarithmic units.
On most logarithmic scales, small values (or ratios) of the underlying quantity correspond to negative values of the logarithmic measure. Well-known examples of such scales are:
- Richter magnitude scale and moment magnitude scale (MMS) for strength of earthquakes and movement in the earth.
- ban and deciban, for information or weight of evidence;
- bel and decibel and neper for acoustic power (loudness) and electric power;
- cent, minor second, major second, and octave for the relative pitch of notes in music;
- logit for odds in statistics;
- Palermo Technical Impact Hazard Scale;
- Logarithmic timeline;
- counting f-stops for ratios of photographic exposure;
- rating low probabilities by the number of 'nines' in the decimal expansion of the probability of their not happening: for example, a system which will fail with a probability of 10−5 is 99.999% reliable: "five nines".
- Entropy in thermodynamics.
- Information in information theory.
- Particle Size Distribution curves of soil
Some logarithmic scales were designed such that large values (or ratios) of the underlying quantity correspond to small values of the logarithmic measure. Examples of such scales are:
- pH for acidity and alkalinity;
- stellar magnitude scale for brightness of stars;
- Krumbein scale for particle size in geology.
- Absorbance of light by transparent samples.
Logarithmic units are abstract mathematical units that can be used to express any quantities (physical or mathematical) that are defined on a logarithmic scale, that is, as being proportional to the value of a logarithm function. In this article, a given logarithmic unit will be denoted using the notation [log n], where n is a positive real number, and [log ] here denotes the indefinite logarithm function Log().
Examples of logarithmic units include common units of information and entropy, such as the bit [log 2] and the byte 8[log 2] = [log 256], also the nat [log e] and the ban [log 10]; units of relative signal strength magnitude such as the decibel 0.1[log 10] and bel [log 10], neper [log e], and other logarithmic-scale units such as the Richter scale point [log 10] or (more generally) the corresponding order-of-magnitude unit sometimes referred to as a factor of ten or decade (here meaning [log 10], not 10 years).
The motivation behind the concept of logarithmic units is that defining a quantity on a logarithmic scale in terms of a logarithm to a specific base amounts to making a (totally arbitrary) choice of a unit of measurement for that quantity, one that corresponds to the specific (and equally arbitrary) logarithm base that was selected. Due to the identity $\log_c a = (\log_c b)\,(\log_b a)$,
the logarithms of any given number a to two different bases (here b and c) differ only by the constant factor $\log_c b$. This constant factor can be considered to represent the conversion factor for converting a numerical representation of the pure (indefinite) logarithmic quantity Log(a) from one arbitrary unit of measurement (the [log c] unit) to another (the [log b] unit), since $\mathrm{Log}(a) = \log_b(a)\,[\log b] = \log_c(a)\,[\log c]$.
For example, Boltzmann's standard definition of entropy S = k ln W (where W is the number of ways of arranging a system and k is Boltzmann's constant) can also be written more simply as just S = Log(W), where "Log" here denotes the indefinite logarithm, and we let k = [log e]; that is, we identify the physical entropy unit k with the mathematical unit [log e]. This identity works because $k \ln W = (\ln W)\,[\log e] = \log_e(W)\,[\log e] = \mathrm{Log}(W)$.
Thus, we can interpret Boltzmann's constant as being simply the expression (in terms of more standard physical units) of the abstract logarithmic unit [log e] that is needed to convert the dimensionless pure-number quantity ln W (which uses an arbitrary choice of base, namely e) to the more fundamental pure logarithmic quantity Log(W), which implies no particular choice of base, and thus no particular choice of physical unit for measuring entropy.
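A small numerical illustration of the base-conversion idea, using only the standard math module: the same information content expressed in [log 2] units (bits) and in [log e] units (nats) differs only by the constant factor described above.

```python
import math

p = 1 / 8                      # probability of one of eight equally likely outcomes
info_bits = -math.log2(p)      # 3.0, i.e. three [log 2] units (bits)
info_nats = -math.log(p)       # about 2.079, i.e. [log e] units (nats)
print(info_nats / info_bits)   # ln(2) ≈ 0.693: multiply a bit count by this to get nats
```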
A logarithmic scale is also a graphical scale on one or both sides of a graph where a number x is printed at a distance c·log(x) from the point marked with the number 1. A slide rule has logarithmic scales, and nomograms often employ logarithmic scales. On a logarithmic scale an equal difference in order of magnitude is represented by an equal distance. The geometric mean of two numbers is midway between the numbers.
Logarithmic graph paper, before the advent of computer graphics, was a basic scientific tool. Plots on paper with one log scale can show up exponential laws, and on log-log paper power laws, as straight lines (see semi-log graph, log-log graph).
Comparing the scales
A plot of x vs. log10(x). Note two things: first, log(x) increases quickly at first: by x = 3, log(x) is almost at 0.5; it is useful to remember that sqrt(10) ≈ 3.16. Second, log(x) grows ever more slowly as x approaches 10; this shows how logarithms can be used to 'tame' large numbers.
Logarithmic and semi-logarithmic plots and equations of lines
Log and semilog scales are best used to view two types of equations (for ease, the natural base e is used): Y = exp(−aX) and Y = X^b.
In the first case, plotting the equation on a semilog scale (log Y versus X) gives: log Y = −aX, which is linear.
In the second case, plotting the equation on a log-log scale (log Y versus log X) gives: log Y = b log X, which is linear.
When values that span large ranges need to be plotted, a logarithmic scale can provide a means of viewing the data that allows the values to be determined from the graph. The logarithmic scale is marked off in distances proportional to the logarithms of the values being represented. For example, in the figure below, for both plots, y has the values of: 1, 2, 3, 4, 5, 6, 7, 8, 9 10, 20, 30, 40, 50, 60, 70, 80, 90 and 100. For the plot on the left, the log10 of the values of y are plotted on a linear scale. Thus the first value is log10(1) = 0; the second value is log10(2) = 0.301; the 3rd value is log10(3) = 0.4771; the 4th value is log10(4) = 0.602, and so on. The plot on the right uses logarithmic (or log, as it is also referred to) scaling on the vertical axis. Note that values where the exponent term is close to a decimal fraction of an integer (0.1, 0.2, 0.3, etc.) are shown as 10 raised to the power that yields the original value of y. These are shown for y = 2, 4, 8, 10, 20, 40, 80 and 100.
Note that for y = 2 and 20, y = 10^0.301 and 10^1.301; for y = 4 and 40, y = 10^0.602 and 10^1.602. This is due to the law that log(10·n) = log(10) + log(n) = 1 + log(n).
So, knowing log10(2) = 0.301, the rest can be derived: log10(4) = 2 × 0.301 = 0.602, log10(8) = 3 × 0.301 = 0.903, log10(20) = 0.301 + 1 = 1.301, log10(40) = 0.602 + 1 = 1.602, and log10(80) = 0.903 + 1 = 1.903.
Note that the values of y are easily picked off the above figure. By comparison, values of y less than 10 are difficult to determine from the figure below, where they are plotted on a linear scale, thus confirming the earlier assertion that values spanning large ranges are more easily read from a logarithmically scaled graph.
If both the vertical and horizontal axes of a plot are scaled logarithmically, the plot is referred to as a log-log plot.
Semi-logarithmic plots
If only one axis of a plot is scaled logarithmically, the plot is referred to as a semi-logarithmic (or semi-log) plot.
Estimating values in a diagram with logarithmic scale
One method for accurate determination of values on a logarithmic axis is as follows:
- Measure the distance from the point on the scale to the closest decade line with lower value with a ruler.
- Divide this distance by the length of a decade (the length between two decade lines).
- The value of your chosen point is now the value of the nearest decade line with lower value times 10^a, where a is the value found in step 2.
Example: What is the value that lies halfway between the 10 and 100 decades on a logarithmic axis? Since it is the halfway point that is of interest, the quotient of steps 1 and 2 is 0.5. The nearest decade line with lower value is 10, so the halfway point's value is 10^0.5 × 10 = 10^1.5 ≈ 31.62.
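This first method can be expressed as a one-line helper; a minimal sketch, with an illustrative function name:

```python
def value_at_fraction(lower_decade, fraction):
    """Value at a point that lies `fraction` of the way (by distance)
    from the lower decade line to the next decade line."""
    return lower_decade * 10 ** fraction

# Halfway between the 10 and 100 decade lines:
print(value_at_fraction(10, 0.5))   # ≈ 31.62
```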
To estimate where a value lies within a decade on a logarithmic axis, use the following method:
- Measure the distance between consecutive decades with a ruler. You can use any units provided that you are consistent.
- Take the log (value of interest/nearest lower value decade) multiplied by the number determined in step one.
- Using the same units as in step 1, count as many units as resulted from step 2, starting at the lower decade.
Example: To determine where 17 is located on a logarithmic axis, first use a ruler to measure the distance between 10 and 100. Suppose the measurement is 30 mm (it can vary; just ensure that the same scale is used throughout the rest of the process). Then:
- [log (17/10)] × 30 = 6.9
x = 17 is then 6.9mm after x = 10 (along the x-axis).
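The reverse direction, going from a value to a distance along the axis, can be sketched the same way (again with an illustrative function name):

```python
import math

def distance_from_lower_decade(value, lower_decade, decade_length):
    """Distance along the axis from the lower decade line to `value`,
    in the same units as `decade_length` (e.g. millimetres)."""
    return math.log10(value / lower_decade) * decade_length

print(distance_from_lower_decade(17, 10, 30))   # ≈ 6.9 mm
```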
Interpolating logarithmic values is very similar to interpolating linear values. In linear interpolation, values are determined through equal ratios. For example, in linear interpolation, a line that increases by one ordinate (y-value) for every two abscissas (x-values) has a ratio (also known as slope or rise-over-run) of 1/2. To determine the ordinate or abscissa of a particular point, you must know the other value. The calculation of the ordinate corresponding to an abscissa of 12 in the example below is as follows:
- 1/2 = Y/12
Y is the unknown ordinate. Using cross-multiplication, Y can be calculated and is equal to 6.
In logarithmic interpolation, a ratio of logarithmic values is set equal to a ratio of linear values. For example, consider a log base 10 graph of paper reams sold per day on which one decade, from 1 to 10, measures 19 thirty-seconds of an inch. How many reams were sold in a day if the plotted point lies 11 thirty-seconds of an inch above the 1 line (between 1 and 10)? To solve this problem, it is necessary to use a basic logarithmic identity:
- log(A) − log(B) = log(A/B)
Decade lines, those values that denote powers of the log base, are also important in logarithmic interpolation. Locate the lower decade line. It is the closest decade line to the number you are evaluating that is lower than that number. Decade lines begin at 1. The next decade line is the first power of your log base. For log base 10, the first decade line is 1, the second is 10, the third is 100, and so on.
The ratio of linear values is the number of units from the lower decade line to the value of interest (11 thirty-seconds of an inch in this example, since the lower decade line is 1) divided by the total number of units between the lower decade line and the upper decade line (the upper decade line is 10 in this example). Therefore, the linear ratio is 11/19.
Notice that the units (1/32 inch) are removed from the equation because both measurements are in the same units. Conversion to a single unit before calculating the ratio is required if the measurements were made in different units.
The logarithmic ratio uses the same graphical measurements as the linear ratio. The difference between the log of the upper decade line (10) and the log of the lower decade line (1) represents the same graphical distance as the total number of units between the two decade lines in the linear ratio (19 thirty-seconds of an inch). Therefore, the lower part of the logarithmic ratio (the bottom part of the fraction) is:
- log(10) − log(1)
The upper part of the logarithmic ratio (the top part of the fraction) represents the same graphical distance as the number of units between the value of interest (number of reams of paper sold) and the lower decade line in the linear ratio (11 thirty-seconds of an inch). The unknown in this ratio is the value of interest, which we will call X. Therefore, the top part of the fraction is:
- log(X) − log(1)
The logarithmic ratio is:
- [log(X) − log(1)]/[log(10) − log(1)]
The linear ratio is equal to the logarithmic ratio. Therefore, the equation required to determine the number of paper reams sold in a particular day is:
- 11/19 = [log(X) − log(1)]/[log(10) − log(1)]
This equation can be rewritten using the logarithmic definition mentioned above:
- 11/19 = log(X/1)/log(10)
log(10) = 1, therefore:
- 11/19 = log(X/1)
To remove the "log" from the right side of the equation, both sides must be used as exponents for the number 10, meaning 10 to the power of 11/19 and 10 to the power of log(X/1). The "log" (base 10) function and the "10 to the power of" function are inverses of each other and cancel out, leaving:
- 1011/19 = X/1
Now both sides must be multiplied by 1. While the 1 drops out of this equation, it is important to note that the number that X is divided by is the value of the lower decade line. If this example involved values between 10 and 100, the equation would include X/10 instead of X/1.
- 1011/19 = X
X = 3.793 reams of paper.
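The whole interpolation can be wrapped in a small helper; a minimal sketch, assuming the graphical distances are measured in any consistent unit (here thirty-seconds of an inch), with an illustrative function name:

```python
def log_interpolate(measured_distance, decade_length, lower_decade, base=10):
    """Value represented by a point `measured_distance` above the lower decade
    line, on a logarithmic axis where one decade spans `decade_length`."""
    return lower_decade * base ** (measured_distance / decade_length)

# 11 thirty-seconds of an inch above the "1" line, with a 19/32-inch decade:
print(log_interpolate(11, 19, 1))   # ≈ 3.79 reams
```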
- "Slide Rule Sense: Amazonian Indigenous Culture Demonstrates Universal Mapping Of Number Onto Space". ScienceDaily. 2008-05-30. Retrieved 2008-05-31.
which references: Stanislas, Dehaene; Véronique Izard, Elizabeth Spelke, and Pierre Pica. (2008). "Log or linear? Distinct intuitions of the number scale in Western and Amazonian indigene cultures". Science 320 (5880): 1217–20. Bibcode:2008Sci...320.1217D. doi:10.1126/science.1156540. PMC 2610411. PMID 18511690.
- (English) Why using logarithmic scale to display share prices?
- Media related to Logarithmic scale at Wikimedia Commons
- Example Logarithmic Graph Paper Template |
National income and disposable income are two concepts used to measure the prosperity of a nation. The definition of income differs from person to person and from entity to entity. In economic terms, income means the total of wages, salary, profits, rent, interest and many other gains over a period of time.
What Is National Income?
National income is the total value of a nation's output; it includes all goods and services produced over a period of one year.
Definition Of National Income
There are two types of definition
1) Traditional definition
According to Marshall: “The labour and capital of a country acting on its natural resources produce annually a certain net aggregate of commodities, material and immaterial including services of all kinds. This is the true net annual income or revenue of the country or national dividend.”
2) Modern definition
According to Simon Kuznets, “the net output of commodities and services flowing during the year from the country’s productive system in the hands of the ultimate consumers.”
Methods of Measuring National Income
1) Gross domestic product (GDP)
It is the sum total of the market price of all goods and services produced in a financial year.
Methods to measure GDP
(i) Product method
In it, the value of all goods and services produced in a nation over a period of time is added.
Another name for this method is the value-added method.
(ii) Income method
GDP by income method is the sum of salary and wages, rent, interest, and profit.
(iii) Expenditure method
It includes expenditure on all items: goods, services, investment, and imports and exports.
2) GDP at factor cost.
It is the sum total of the net value added by all producers in the nation.
GDP at factor cost = net value added + depreciation
3) Net domestic product (NDP)
It is the total of the net output of the economy during the financial year.
NDP = GDP at factor cost − depreciation
4) Nominal and real GDP
Nominal GDP is GDP measured at current market prices.
Real GDP is GDP calculated at fixed (base-year) prices, so that changes over time reflect changes in output rather than changes in prices.
5) GDP Deflator
It is an index of the change in prices of all the goods and services included in GDP; it is calculated as the ratio of nominal GDP to real GDP.
6) Gross national product
It is the total value, at market prices, of all goods and services produced in a country during a year; it includes net income from abroad.
What Is Disposable Income?
- Disposable income is the total amount of money that households have available to spend on goods and services and to save after paying income taxes.
- It is also known as disposable personal income.
- It is an important indicator to measure the overall economy.
- It is the net amount that a household or an individual has available to spend on needs, to invest, or to save after paying income taxes.
Disposable income = personal income − personal income tax payment
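As a minimal illustration of how these aggregates fit together, the following Python sketch computes GDP by the expenditure method, GNP, NDP, the GDP deflator, and disposable income. All figures are made up purely for illustration:

```python
# Illustrative (made-up) figures, in billions of currency units.
consumption, investment, government, exports, imports = 900.0, 300.0, 250.0, 120.0, 100.0
depreciation = 60.0
net_income_from_abroad = 40.0
personal_income, income_tax = 800.0, 120.0

gdp = consumption + investment + government + (exports - imports)  # expenditure method
gnp = gdp + net_income_from_abroad
ndp = gdp - depreciation
disposable_income = personal_income - income_tax

nominal_gdp, real_gdp = 1470.0, 1400.0           # hypothetical values
gdp_deflator = 100 * nominal_gdp / real_gdp      # price index relative to the base year

print(gdp, gnp, ndp, disposable_income, round(gdp_deflator, 1))
```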
National Income Vs Disposable Income
National income is the total value of a country's output; it includes all goods and services produced in one year. Disposable income is the amount available to a household for spending, investing, and saving after paying income tax. The main points of difference are:
- Method of calculation: national income is measured by the product (output), income, and expenditure methods; disposable income is personal income minus the income-tax payment.
- Effect of tax: national income does not consider tax, while disposable income is calculated after accounting for income tax.
In 1851, George Gabriel Stokes derived an expression, now known as Stokes' law, for the frictional force – also called drag force – exerted on spherical objects with very small Reynolds numbers (e.g., very small particles) in a viscous fluid. Stokes' law is derived by solving the Stokes flow limit for small Reynolds numbers of the Navier–Stokes equations.
Statement of the law
The force of viscosity on a small sphere moving through a viscous fluid is given by Fd = 6πμRV,
where Fd is the frictional force – known as Stokes' drag – acting on the interface between the fluid and the particle, μ is the dynamic viscosity, R is the radius of the spherical object, and V is the flow velocity relative to the object. In SI units, Fd is given in Newtons, μ in Pa·s, R in meters, and V in m/s.
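A minimal numerical sketch of the law follows; the sphere size, speed, and water viscosity used below are illustrative values, not taken from the text:

```python
import math

def stokes_drag(mu, R, V):
    """Stokes' drag force F_d = 6 * pi * mu * R * V (SI units)."""
    return 6 * math.pi * mu * R * V

# A 10-micrometre-radius sphere moving at 1 mm/s through water (mu ≈ 1.0e-3 Pa·s):
print(stokes_drag(1.0e-3, 10e-6, 1e-3))   # ≈ 1.9e-10 N
```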
Stokes' law makes the following assumptions for the behavior of a particle in a fluid:
- laminar flow;
- spherical particles;
- homogeneous (uniform in composition) material;
- smooth surfaces; and
- particles that do not interfere with each other.
The CGS unit of kinematic viscosity was named "stokes" after his work.
Stokes' law is the basis of the falling-sphere viscometer, in which the fluid is stationary in a vertical glass tube. A sphere of known size and density is allowed to descend through the liquid. If correctly selected, it reaches terminal velocity, which can be measured by the time it takes to pass two marks on the tube. Electronic sensing can be used for opaque fluids. Knowing the terminal velocity, the size and density of the sphere, and the density of the liquid, Stokes' law can be used to calculate the viscosity of the fluid. A series of steel ball bearings of different diameters are normally used in the classic experiment to improve the accuracy of the calculation. The school experiment uses glycerine or golden syrup as the fluid, and the technique is used industrially to check the viscosity of fluids used in processes. School experiments often involve varying the temperature and/or concentration of the substances used in order to demonstrate the effects this has on the viscosity. Industrial applications involve many different oils and polymer liquids, such as polymer solutions.
The importance of Stokes' law is illustrated by the fact that it played a critical role in the research leading to at least 3 Nobel Prizes.
In air, the same theory can be used to explain why small water droplets (or ice crystals) can remain suspended in air (as clouds) until they grow to a critical size and start falling as rain (or snow and hail). Similar use of the equation can be made in the settlement of fine particles in water or other fluids.
Terminal Velocity of Sphere Falling in a Fluid
If the particle is falling in the viscous fluid under its own weight due to gravity, then a terminal velocity, or settling velocity, is reached when the frictional force combined with the buoyant force exactly balances the gravitational force. The excess of gravity over buoyancy is Fg = (4/3)πR³(ρp − ρf)g, with ρp and ρf the mass density of the sphere and the fluid, respectively, and g the gravitational acceleration. Demanding force balance, Fd = Fg, and solving for the velocity gives the terminal velocity Vs. Note that since the net gravitational force increases as R³ and Stokes drag increases as R, the terminal velocity increases as R² and thus varies greatly with particle size, as shown below. The resulting terminal velocity (or settling velocity) is Vs = (2/9)·(ρp − ρf)gR²/μ,
where Vs is the settling velocity (m/s) (vertically downwards if ρp > ρf, upwards if ρp < ρf), g is the gravitational acceleration (m/s²), ρp is the mass density of the particles (kg/m³), ρf is the mass density of the fluid (kg/m³), and μ is the dynamic viscosity (kg/(m·s)).
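A minimal sketch of both uses of the formula, computing a settling velocity and inverting it as a falling-sphere viscometer; the material properties quoted for steel and glycerine are approximate illustrative values:

```python
import math

g = 9.81  # gravitational acceleration, m/s^2

def terminal_velocity(rho_p, rho_f, mu, R):
    """Stokes settling velocity V_s = 2/9 * (rho_p - rho_f) * g * R**2 / mu."""
    return 2.0 / 9.0 * (rho_p - rho_f) * g * R**2 / mu

def viscosity_from_fall(rho_p, rho_f, R, V_measured):
    """Falling-sphere viscometer: solve Stokes' law for the dynamic viscosity."""
    return 2.0 / 9.0 * (rho_p - rho_f) * g * R**2 / V_measured

# A 1 mm steel ball (≈ 7800 kg/m^3) in glycerine (≈ 1260 kg/m^3, mu ≈ 1.4 Pa·s):
print(terminal_velocity(7800, 1260, 1.4, 0.5e-3))   # ≈ 2.5e-3 m/s
```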
Steady Stokes flow
In Stokes flow, at very low Reynolds number, the convective acceleration terms in the Navier–Stokes equations are neglected; for steady, incompressible flow the equations reduce to ∇p = μ∇²u = −μ∇ × ω and ∇ · u = 0, where
- p is the fluid pressure (in Pa),
- u is the flow velocity (in m/s), and
- ω is the vorticity (in s⁻¹), defined as ω = ∇ × u.
Additional forces like those by gravity and buoyancy have not been taken into account, but can easily be added since the above equations are linear, so linear superposition of solutions and associated forces can be applied.
Flow around a sphere
For the case of a sphere in a uniform far field flow, it is advantageous to use a cylindrical coordinate system ( r , φ , z ). The z–axis is through the centre of the sphere and aligned with the mean flow direction, while r is the radius as measured perpendicular to the z–axis. The origin is at the sphere centre. Because the flow is axisymmetric around the z–axis, it is independent of the azimuth φ.
The flow can be described by a Stokes stream function ψ(r, z), with ur = −(1/r) ∂ψ/∂z and uz = (1/r) ∂ψ/∂r the flow velocity components in the r and z direction, respectively. The azimuthal velocity component in the φ–direction is equal to zero, in this axisymmetric case. The volume flux through a tube bounded by a surface of some constant value ψ is equal to 2πψ and is constant.
From the previous two equations, and with the appropriate boundary conditions, for a far-field uniform-flow velocity u in the z–direction and a sphere of radius R, the solution is found to be
The viscous force per unit area σ, exerted by the flow on the surface of the sphere, is in the z–direction everywhere. More strikingly, it also has the same value everywhere on the sphere: σ = 3μu/(2R) · ez,
with ez the unit vector in the z–direction. For shapes other than spherical, σ is not constant along the body surface. Integration of the viscous force per unit area σ over the sphere surface, of area 4πR², gives the frictional force Fd = 4πR² · 3μu/(2R) = 6πμRu, in accordance with Stokes' law.
Other types of Stokes flow
- Stokes flow
- Einstein relation (kinetic theory)
- Scientific laws named after people
- Drag equation
- Equivalent spherical diameter
- Deposition (geology)
A Glossary of Gifted Education
Giftedness and education from the perspective of sociologic social psychology
by Steven M. Nordby © 1997-2002
Ability grouping – Placing students of similar ability in the same class or group for purposes of instruction. Research shows higher academic achievement gains for all students when grouped by ability and taught at a pace that matches their learning rates. Compare with tracking.
Achievement – Accomplishment or performance; the realization of potential. Compare with aptitude.
Anti-intellectual – One who is suspicious or hostile toward intelligent people and their pursuits. This may take the form of “name calling” (geeks or nerds) or a more serious form, such as author Edward de Bono’s coining of the term intelligence trap.
Appropriate – A subjective judgement of suitability often used in phrases such as “appropriate behavior” and “appropriate education” whose definitions are relative to specific cultures, situations, institutional and personal values and educational philosophies. The ability to define and label “appropriate” and “inappropriate” is a jealously guarded power of teachers.
Aptitude – Undeveloped potential or ability. Compare with achievement.
Assessment – Assignment of value. Academically, this usually means grades. In psychology, it means comparing the tested measures of a subject’s mental characteristics (e.g., intelligence, personality, self-esteem) to a norm, or average. See grading, standardized test, authentic assessment, reliability, validity, and IQ.
Asynchronous development – Differing rates for physical, cognitive, and emotional development, also known as dyssynchronous development. For example, a gifted child may be chronologically 13 years old, intellectually 18, emotionally 8, and physically 11. The discrepancies are greatest for everyone at the chronological age of about 13, but the extremes displayed by gifted children have led some experts to define giftedness itself as asynchronous development. If you tell a gifted child to “Act your age!” s/he could legitimately respond: “Which one?” See characteristics of the gifted, middle school movement, peer group.
Attention Deficit Disorder (ADD) – A sub-type of Attention Deficit Hyperactivity Disorder.
Attention Deficit Hyperactivity Disorder (ADHD) – As defined by the American Psychiatric Association, ADHD is a mental disorder characterized by inattention and/or impulsivity. ADD is a sub-type with fewer impulsive symptoms. Earlier labels for these symptoms included “minimal brain dysfunction.” Gifted students in understimulating environments may demonstrate identical symptoms. If giftedness, learning disabilities, depression, or other problems are accommodated first, ADHD-like symptoms can be reduced or eliminated. ADHD treatment usually includes drug and/or behavior therapy. The effects of drugs on attention, concentration, frustration tolerance, and conforming behavior are displayed in all children, not just those diagnosed with ADHD. The school-based giftedness accommodations which are best supported by research differ from ADHD behavioral interventions in that they offer integrated subject matter, concern for the whole child and his/her own interests, and they address the emotional as well as cognitive aspects of learning. See also labeling theory, mental health. Problems in Identification and Assessment of ADHD. ADHD versus Overexcitabilities.
Authentic assessment – (1) In classroom testing, tests which cover the material actually taught. (2) In psychology, testing under natural, actual conditions rather than in a clinical or artificial environment.
Behavior modification – Changing the environment and using reinforcers (or their absence) to control the behavior of others. Practitioners set up the environment to prompt a behavior, then reward the desired behavior and/or punish undesired behavior in that specific situation. The absolute control of reinforcers, the maintenance of the behavior when environmental controls are removed, and the generalization of the behavior to other situations are problematic. It tends to be used to produce conformity and obedience. See behaviorism, mental health.
Behavior therapy – Behavior modification.
Behaviorism – A deterministic, stimulus-response theory of psychology which forms the basis for behavior modification and many common teaching methods. Behaviorism is concerned only with observable behavior, not with internal processes, meanings, emotions, attitudes, beliefs or values. Accounts of these things are treated as merely “verbal behavior.” In behaviorist teaching strategies, the teacher, not the student, establishes the goals, directs and controls the environment, and makes assessments. See educational philosophy. Contrast with Constructivism, interactionism.
Brain based teaching – Teaching designed to take advantage of the brain’s natural learning capacities, such as:
- the ability to detect patterns and make approximations,
- the capacity for various types of memory,
- the ability to self-correct and learn from experience through analysis of data and self-reflection, and
- the inexhaustible capacity to create,
so as to optimize the extraction of meaning for the individual learner. Brain based teaching incorporates integrated curriculum, and is built on these principles:
- The brain is a parallel processor.
- Learning engages the entire physiology.
- The search for meaning is innate in human nature.
- The search for meaning occurs through patterning.
- Emotions are critical to patterning.
- The brain processes parts and wholes simultaneously.
- Learning involves both focused attention and peripheral perception.
- Learning always involves conscious and unconscious processes.
- We have at least two different types of memory: a spatial memory system and a set of systems for rote learning.
- We understand and remember best when facts and skills are embedded in natural, spatial memory.
- Learning is enhanced by challenge and inhibited by threat.
- Each brain is unique.
“Our purpose here is to identify some features of the innate drive that students have to act and to understand. Educators need to capitalize on these drives. In fact, this is precisely what exceptional teachers of gifted and talented students do. They provide opportunities for students to pursue their own interests. They support student creativity. They provide a rich and stimulating environment and in that context introduce students to more and more of what the world has to offer. That is the general philosophy that should apply to all students, everywhere.” — Renate Nummela Caine and Geoffrey Caine (1991), Making Connections: Teaching and the Human Brain. See also Montessori method.
Brain lateralization – Specialization of the brain hemispheres. In right handed people, the right brain hemisphere is more involved with spatial relations, imagery, and non-verbal, non-sequential processing, while the left brain hemisphere is more involved in verbal and sequential processing.
Bright – See levels of giftedness.
Ceiling effect – Compression of top scores on a test. For example, if a group IQ test can only measure reliably to 130, then a student with an IQ of 160 (if measured by some other test) may only score 130 due to the ceiling effect of the group test. Group intelligence tests often have low ceilings, so a relatively low IQ score, perhaps 115, could be accepted as evidence of potential giftedness. See intelligence quotient.
Characteristics of the gifted – The gifted child typically:
- Shows superior abilities to reason, generalize or problem solve.
- Shows persistent intellectual curiosity.
- Has a wide range of interests; develops one or more interests to considerable depth.
- Produces superior written work or has a large vocabulary.
- Reads avidly.
- Learns quickly and retains what is learned.
- Grasps mathematical or scientific concepts readily.
- Shows creative ability or imaginative expression in the arts.
- Sustains concentration for lengthy periods on topics or activities of interest.
- Sets high standards for self.
- Shows initiative, originality, or flexibility in thinking; considers problems from a number of viewpoints.
- Observes keenly and is responsive to new ideas.
- Shows social poise or an ability to communicate with adults in a mature way.
- Enjoys intellectual challenge; shows an alert and subtle sense of humor.
These characteristics can lead to conflicts in the regular classroom, as the gifted child may:
- Get bored with routine tasks.
- Resist changing away from interesting topics or activities.
- Be overly critical of self and others, impatient with failure, perfectionistic.
- Disagree vocally with others, argue with teachers.
- Make jokes or puns at times adults consider inappropriate.
- Be so emotionally sensitive and empathetic that adults consider it over-reaction, may get angry, or cry when things go wrong or seem unfair.
- Ignore details, turn in messy work.
- Reject authority, be non-conforming, stubborn.
- Dominate or withdraw in cooperative learning situations.
- Be highly sensitive to environmental stimuli such as lights or noises.
These reactions of gifted students to the regular education environment are normal only within the context of an understanding of the gifted. Without that understanding, they may be used to label the student as ADD/ADHD or SED. See overexcitabilities.
Compacting – Eliminating repetition, minimizing drill, and accelerating instruction in basic skills or lower level classes so that gifted students can move to more challenging material.
Conformity – Unexceptional behavior and/or convergent thinking.
Constructivism – The theory that new knowledge is an active product of the learner integrating new information and perceptions with prior knowledge. It is based on the work of John Dewey, Jean Piaget, and Lev Vygotsky, and complementary with interactionism. Educational philosophies based on constructivist ideas stand in contrast with behaviorist teaching techniques, such as Direct Instruction.
Cooperative learning – Students working in small groups, where often the same grade is given to all. In heterogeneous groupings, achievement and extrinsically motivated students may dominate the group and do all the work so their own grades don’t suffer, and underachievers may simply withdraw or refuse to participate. Cooperative learning groups with students of similar ability and complementary skills tend to work most smoothly.
Counseling the gifted – Gifted students can benefit from talking with counselors educated in the characteristics of the gifted. Without such education, counselors may misinterpret these characteristics as psychological disorders. Because counseling consists largely of talk, the counselor may also be manipulated, fooled, or looked down on by students highly gifted in verbal ability and reasoning skills. See mental health.
Creativity – Artistic or intellectual inventiveness. “Stamped out of kids by third grade,” says education professor George Sheperd, University of Oregon. Creativity depends on divergent thinking, but schools emphasize and reward convergent thinking and conformity. Arts are a safe outlet, but that doesn’t help the child who’s more interested and intuitive in science and math. Silliness, immaturity and disruptive behavior are characteristics of students whose creativity has been stifled.
Critical thinking skills – The higher order thinking skill of applying logic in order to reduce ambiguity and lead to understanding of complex problems or ideas. Educators may use task analysis to develop step by step methods to teach critical thinking skills, but critical thinking itself cannot be reduced to step by step thinking.
Curriculum based assessment – See authentic assessment (1).
Depression – There is some research evidence and considerable anecdotal evidence that the gifted are at a significantly higher risk for depression and suicide than the general population. This may be due to characteristics, such as keen insight into the inequities of life, and asynchronous development, which make the gifted individual feel out of place in the social structure. Counseling with someone fluent in the issues surrounding giftedness can be helpful.
Development – Cognitive (intellectual), emotional and physical growth. See asynchronous development.
Diagnostic test – An assessment prompted by a perceived problem in order to determine current level of functioning. Test results are then used to prescribe a solution.
Direct instruction (DI) – Teacher directed and structured programmed instruction in explicit skills with an emphasis on efficiency. The teacher sets the goals, chooses the materials, and sets the pace. Instruction proceeds through specific steps:
- guided practice,
- feedback, and
- independent practice.
DI requires pretesting and ability grouping, and is most often used in primary and remedial reading. It is ineffective in teaching things which are not easily broken into ordered tasks, such as complex problem solving, creativity, and higher order thinking skills.
Discovery method – A variety of student-centered approaches to teaching, including the Socratic method, in which the teacher acts as a guide and/or resource. Unlike programmed instruction, the emphasis is not on efficiency in mastering a predetermined body of knowledge, but in developing students’ abilities to learn how to learn. Discovery is an assumed method in unschooling.
Dyssynchronous development – See Asynchronous development.
Educational philosophy – The basic value orientation on which educational systems, agendas, and programs are built. Conflicting educational philosophies lie at the heart of many problems in getting appropriate education for the gifted. See human nature, middle school movement, behaviorism, constructivism, interactionism, homeschooling, unschooling, Montessori method.
Educational reform – Popularly used to describe efforts to increase standardized test scores of public school students. As such, it is more a description of assessment reform than a change of educational philosophies or methods, although a back to basics philosophy is often implied.
Elitist – A criticism of gifted education programs. If students in gifted programs act as if they are socially or morally superior, or if the program supports the social order rather than identifying and serving all gifted students, then charges of elitism have merit. Gifted programs which serve gifted students from all social classes and ethnic groups, whether achievers, underachievers or handicapped, are not elitist.
Emotional shutdown – A psychological defense mechanism characterized by withdrawal. A gifted student in a hostile or anti-intellectual environment may react this way. See underachievement.
Empathy – Understanding and feeling from the point of view of the other person, believed in interactionism to lie at the heart of development of self and society.
Engaged time – That part of on-task time actually spent with the subject matter. One sociologist has estimated non-engaged time to be 45 percent of the school day (Richard Everhart, Reading, Writing and Resistance 1983).
Evaluation – See assessment.
Exceptional learners – Students with an IQ in the bottom (retarded) or top (gifted) three percent of the population, or those with other physical or mental differences which affect learning. See special education.
Exceptionally gifted – See levels of giftedness.
Free appropriate public education (FAPE) – As required by IDEA, instruction to disabled children, at no cost to parents, provided by the public school, which allows students to make satisfactory progress.
Flynn effect – A rise in IQ of the general population of about 3 points per decade, discovered by James Flynn of New Zealand in the early 1980′s. If true, the average person of 100 years ago would be considered retarded today. A variety of explanations have been offered, either explaining the rise as an artifact of testing or as a real increase in intelligence, but no explanation has gained widespread acceptance. To compensate for the IQ increase, test makers select a new sample for the norm reference on their tests about every ten years. See IQ.
Geek – A label for a person who does not seem to fit in socially because of high intelligence or achievement. Sometimes used interchangeably with nerd, but geek implies higher social status.
Generalization – (1) In behaviorism, applying skills learned in one situation to other situations. (2) In research, applying the results of one study to the general population.
Genius – A popular term for extraordinary intelligence which has no fixed meaning in education or psychology, where it is rarely used.
Gifted – Having superior mental ability or intelligence. A label of potential. The intellect and emotions of gifted students are both quantitatively and qualitatively different. See characteristics of the gifted, labeling theory, overexcitabilities, levels of giftedness.
Gifted programs – Special academic and social opportunities which try to meet the needs of gifted students. See acceleration, ability grouping, enrichment, independent study, pull-out, special education.
Goals – (1) As written in an IEP, goals are the desired long-term outcomes of individualized instruction. (2) In the philosophy of Dreikurs, the natural, healthy goal of children is to achieve belonging and significance. When this is thwarted, mistaken goals of attention, power, revenge or inadequacy take their place. When a child reaches adolescence, the more complex, healthy goal of individuation (establishing one’s own identity by rebelling or breaking away) enters the mix. (3) When choosing their own goals, students who feel responsible for their success tend to choose reasonable goals. Illustration.
Grade advancement – See grade skipping.
Grade skipping – Promotion to a higher grade. Often confused with acceleration. A grade-skipped gifted child can still learn at an accelerated rate and may eventually outperform students at a higher grade placement.
Higher order thinking skills – Abstract reasoning, critical thinking, and problem solving abilities. See Critical thinking skills.
Highly gifted – See levels of giftedness.
Home schooling – An option for students whose needs are not being met at school. It allows for greater student involvement and responsibility for his/her education and individualization in pacing and content. See independent study, unschooling.
Human nature – Assumptions about human nature form the starting point of all educational philosophies. Conflicting assumptions lead to conflicting educational philosophies, which lie at the heart of the problem of defining appropriate education for the gifted. It involves such dichotomous and contentious issues as good and evil, needs and desires, absolutism and relativism, free-will and determinism. See mental health, behaviorism, interactionism, brain based teaching.
Identification – The selecting and labeling process. Requirements to be identified as gifted vary between school districts. Generally, a group IQ test is used to screen students. Those scoring high enough (usually about 115 due to the lower reliability and ceiling effect of group tests) are given an individual IQ test. Those scoring above 130 are usually considered gifted without further ado. Those scoring lower may also be considered gifted based on teacher and parent nominations, outstanding achievement, or other evidence.
Individual education plan (IEP) – A written document which states the student’s unique characteristics and needs, educational goals and objectives to meet those needs, and instructional materials and services to be provided.
Individual referenced – One’s score is compared to one’s previous score on a test covering the same material in order to show that learning has occurred.
Individuals with Disabilities Education Act (IDEA) – Federal legislation to provide special education for specific categories of disability. For qualifying disabled students, school districts must provide free appropriate public education in the least restrictive environment as specified in an annual individual education plan.
Inquiry method – See discovery method.
Integrated curriculum – Combination of content from two or more subjects to enhance meaning through interconnectedness of knowledge. See brain based teaching.
Intelligence – A general concept of mental ability, often summed up as the ability to learn from experience. The concept was put into a measurable form as intelligence quotient, but theorists such as Howard Gardner believe there are multiple intelligences which traditional IQ tests do not sample. Others counter that multiple intelligences are merely manifestations of an underlying general factor (“Spearman’s g”). Pragmatically in schools, intelligence has come to mean whatever intelligence tests measure, regardless of the test’s reliability or validity.
Intelligence quotient (IQ) – A quantitative representation of cognitive ability which results from testing a sample of cognitive skills. The formula is intellectual age divided by chronological age, times 100. For example, someone 10 years old with an intellectual age of 13 would have an IQ of 130. This is called the “ratio IQ.”
The scales of different IQ tests vary slightly due to differences in test construction and the sample which provided the norm. Variation in scores is described by the standard deviation. Assuming that intelligence is normally distributed, the IQs of about 95 percent of the population are between 70 (about 2 standard deviations below the mean) and 130 (about 2 standard deviations above the mean). Below 70 is considered retarded, and above 130 is considered gifted. Individual tests such as the WISC and Stanford-Binet are considered the most reliable, but no published test since the older Stanford-Binet Form LM (1972) is valid above 160. Most IQ tests since 1960 have reported IQ as “deviation IQ,” which adjusts the ratio IQ scale slightly based on the different means and standard deviations of each age group in the sample used to construct the test. Ratio and deviation IQ’s seldom differ by more than 4 points. See levels of giftedness, ceiling effect, multiple intelligences.
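As a rough numerical illustration of the two scoring approaches described above, the following Python sketch computes a ratio IQ and a deviation IQ (the latter as a standard rescaling of how far a raw score lies from the group mean); the raw score, group mean, and group standard deviation in the example are invented:

```python
def ratio_iq(mental_age, chronological_age):
    """Ratio IQ = mental (intellectual) age / chronological age * 100."""
    return mental_age / chronological_age * 100

def deviation_iq(raw_score, group_mean, group_sd, mean=100, sd=15):
    """Deviation IQ: distance of a raw score from the group mean,
    rescaled to a distribution with mean 100 and standard deviation 15."""
    return mean + sd * (raw_score - group_mean) / group_sd

print(ratio_iq(13, 10))            # 130.0, the example in the glossary
print(deviation_iq(62, 50, 8))     # 122.5 for an illustrative raw score
```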
Intelligence trap – A term coined by Edward de Bono referring to what he reports as the tendency of self-ascribed highly intelligent people to be “poor thinkers” because of their arrogance, prejudice, “intellectualizing,” ability to defend many sides of an issue, and their need to display their superior minds (de Bono (1991), I Am Right – You Are Wrong, and (1996), De Bono’s thinking course). Only rhetorical and anecdotal support exists, and such claims are at odds with the usually accepted characteristics of the gifted. See anti-intellectual. An Essay on Edward de Bono. For additional opinion on the use of de Bono’s ideas in business, see A Review of “Personal Best”.
Interactionism – A social-psychological theory that the self is formed by interacting with others and that social life depends on the ability to imagine ourselves in other social roles. Interactionist and constructivist educational philosophies make the student an active partner in all aspects of his/her education, as opposed to behaviorist philosophies where the teacher selects the content, sets the pace, sets the goals, directs, manipulates and evaluates. Effective strategies with gifted students, especially underachievers, are usually interactionist.
Intrinsic motivation – The desire to satisfy natural needs and interests, which includes a desire to understand and make sense of the world. The discovery method, and unschooling depend on intrinsic motivation. Compare with extrinsic motivation.
J K L
Javits Act – Federal legislation originally passed in 1988 to provide grant money for gifted and talented programs and research. 1997 appropriations were less than one-hundredth of one-percent of total federal special education dollars, less than, for example, literacy programs for prison inmates.
Labeling theory – The proposition that labels placed on a person may lead him/her to act the role associated with the label whether or not it was initially accurate. When a label is known to others, they may interpret the labeled person’s behavior as abnormal whether it is or not. This changes their actions toward the labeled person so that their interactions reinforce the label. Gifted, learning disabled, underachiever, ADD, and SED are all labels which may affect students’ future behavior even in the absence of objective evidence supporting the label. See interactionism, normal. Labeling Theory Tested: Pygmalion in the Classroom.
Lateral thinking – A popular term coined by Edward de Bono in the 1960′s for unorthodox thinking. See divergent thinking.
Learning disability – A deficit in a specific area, such as word decoding or arithmetic computation, which is out of line with overall intellectual ability. Some learning disabilities may interfere with proper measurement on conventional IQ tests, so a learning disabled student might be considered gifted with an IQ test score significantly lower than the usual 130 cut-off.
Left brained – See brain lateralization.
Levels of giftedness – IQ ranges commonly used to describe degrees of giftedness:
- Bright – 115 and above
- Gifted – 130 and above
- Highly gifted – 145 and above
- Exceptionally gifted – 160 and above
- Profoundly gifted – 175 and above
Because of measurement error and ceiling effect, the exceptionally and profoundly gifted labels are often used interchangeably.
Mental health – A concept based on socially acceptable behavior and subjective feeling. Simplified, here are two competing philosophies: (1) People need to have their natural, selfish impulses controlled in order to fit into society, or (2) Like physical health, mental health is something people grow toward naturally. See behaviorism, interactionism, human nature, labeling theory, Montessori method.
Middle school movement – Advocacy of developmental approaches to schooling in grades 5 through 8 which build the curriculum around perceived social-emotional needs of the average early adolescent. Asynchronous development and the characteristics of gifted children make this problematic. Tracking and ability grouping are often eliminated in favor of inclusion and cooperative learning. Also frequently used: small advising groups assigned to each teacher, team-teaching, interdisciplinary courses or integrated curriculum, personal health and affective education.
Montessori method – An educational philosophy based on the ideas of Italian physician/educator Maria Montessori (1870 – 1952). Although originally developed with students labeled “mentally defective” her tremendous successes led her approach to be widely embraced, especially in upper class pre- and elementary schools world-wide. Montessori saw students’ learning as the result of innately self-motivated activity. The teacher’s job, then, is to supervise and guide rather than transmit knowledge. Many private and a few public schools in the U.S. call themselves “Montessori,” however there is no official body to regulate use of the name and actual teaching practices vary considerably.
Multiple Intelligences – Constructs of intelligence that include more aspects of mental ability than the conventional concept of intelligence. Howard Gardner proposed seven intelligences: musical, bodily-kinesthetic, logical-mathematical, linguistic, spatial, interpersonal, and intrapersonal. He recently added an eighth: naturalist. Conventional IQ tests measure mainly logical-mathematical and linguistic intelligence. Intellectual profile illustration.
Multipotentiality – The idea that gifted children have the ability to succeed in virtually any career. Use of interest inventories and ability tests with higher ceilings can help differentiate between areas in which students are merely competent and those in which they truly excel and toward which they are highly motivated.
Needs – A word often used in such phrases as “behavioral needs” and “educational needs” which can only be understood when the goals are known. A statement of needs makes sense only with an explicit or implied “in order to.” For example: “The student needs to turn in homework” is meaningful if it is followed by: “in order to earn credit for it” but is nonsense if followed by “in order to learn.”
Nerd – A particularly socially unattractive or awkward subset of geek.
Non-production – Unrealized ability in which the student knows s/he is capable, but chooses not to do the assigned work. See underachievement.
Normal – A range of behavior that is considered socially acceptable. Behavior that tests the limits of normal is normal, but behavior consistently outside normal is considered deviant. Experimenting with behavior is normal for children, especially gifted ones. Behaviorist educators and psychologists are concerned with ways to produce normal behavior in others. But to be gifted is not normal. Abnormal qualities define leaders, heroes, and eminent people.
Off-task – Behavior which the teacher disapproves.
Outcome based education – Teaching designed to lead students to demonstrate a specific level of mastery.
Overachievement – Performance that exceeds ability. Because this is not possible, overachievement does not exist.
Overexcitabilities – A term originated by Kazimierz Dabrowski to describe excessive response to stimuli in five psychic domains (psychomotor, sensual, intellectual, imaginational, and emotional) which may occur singly or in combination. Overexcitabilities are often used to describe certain characteristics of the gifted. “It is often recognized that gifted and talented people are energetic, enthusiastic, intensely absorbed in their pursuits, endowed with vivid imagination, sensuality, moral sensitivity and emotional vulnerability. . . . [They are] experiencing in a higher key.” – Michael Piechowski. Extreme overexcitabilities or a strong imbalance between them may reduce the individual’s ability to function in society. See also ADHD Versus Overexcitabilities.
Pacing – The speed at which content is presented and instruction delivered. Pacing which matches the student’s rate of learning is optimal. Because gifted students are usually able to learn faster, they often prefer accelerated pacing.
Peer group – People with which one feels equal. Due to gifted students’ asynchronous development, they may have very different intellectual, social, and emotional peer groups.
Perfectionism – The desire to execute tasks flawlessly. Gifted children may develop perfectionism after entering school, as they perform better than their classmates. Later, such perfectionism may lead to avoiding challenges so as not to appear imperfect. See characteristics of the gifted, underachievement.
Play – An important part of the learning process that allows for teamwork, risk taking, creativity, and testing one’s ability against others.
Precocity – Development significantly earlier than normal. Most gifted children show precocious intelligence, but not all who develop skills early are gifted: they may reach a plateau, allowing those of average ability to catch up.
Prodigy – A child (usually under age 10) who is able to perform at an adult level in a specific skill. Unlike savants, prodigies often have high intelligence and are aware of their thinking strategies.
Profoundly gifted – See levels of giftedness.
Psychometrics – The quantitative measurement of mental characteristics, as in IQ.
Pull-out – A part-time special educational program that takes exceptional learners out of the regular classroom for a limited time. Many elementary gifted programs are once a week, pull-out, enrichment activities. Since gifted students are gifted all day, every day, pull-out programs alone seldom meet their needs.
Punishment – Causing psychological or physical pain to another usually with the goal of changing the other’s future behavior. Punishment may quickly produce submission or obedience, with longer term side effects such as rebellion, revenge, or withdrawal. See social control.
Q R S
Right brained – See brain lateralization.
Savant – A person with exceptional ability in a specific skill, often artistic, mathematical or musical, who seems intuitively to “know” but is unaware of thinking strategies. Savants often display flattened emotions and little creativity.
School psychologist – The person who gives diagnostic tests to students and acts as a consultant to teachers, counselors, and administrators. Like teachers and counselors, they often have special training in disabilities but little or no training in giftedness.
Section 504 – Federal law mandating accommodations for children with disabilities.
Self-contained – A classroom is self-contained if the students in it spend the entire day (or the bulk of the day) with the same teacher. Elementary education is almost always conducted in self-contained classrooms. Self-contained programs can also be geared toward grouping by ability, disability, or other labels placed on students, such as the label “gifted.”
Serious emotional disturbance (SED) – A special education category under IDEA. The terms “behavior disorder” or “emotional/behavior disorder” are synonymous with SED. A student may be identified as having SED for not having “satisfactory interpersonal relationships with peers and teachers” or for displaying “inappropriate types of behavior or feelings.” The characteristics of the gifted combined with the subjectivity of these criteria may lead educators to mislabel some gifted children as SED. See labeling theory.
Socialization – Acquiring the cultural values, knowledge and skills which allow one to function productively in a society. Pro-inclusion and anti-homeschooling arguments are often based on the socialization value of the heterogeneous classroom. However, there is no empirical evidence that ability grouped or homeschooled students have poorer social skills.
Socratic method – Dialog and discussion to expose logic, meaning, and truth. See discovery method.
Special Education – The promise of individualized instruction for exceptional learners. Appropriate education is supposed to be based on the unique characteristics of each student but often is provided categorically according to the labels placed on students. Federal law does not mandate special education for the gifted, but some states have their own mandates. Appropriate special education for underachieving gifted students is extremely rare.
Standard deviation – A statistical measure of variability from the mean. To calculate it, find the difference of each and every score from the mean, square each difference, average them, then take the square root. For IQ tests, the mean is designed to be 100, and the standard deviation is calculated to be about 15 or 16. See intelligence quotient, norm, normally distributed.
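As a small illustration of the recipe in this entry, the following Python sketch computes the (population) standard deviation of a handful of invented IQ scores:

```python
import math

def standard_deviation(scores):
    """Population standard deviation, following the glossary's recipe:
    average the squared differences from the mean, then take the square root."""
    mean = sum(scores) / len(scores)
    variance = sum((s - mean) ** 2 for s in scores) / len(scores)
    return math.sqrt(variance)

print(standard_deviation([85, 100, 100, 115, 130, 70]))  # ≈ 19.4 for these invented scores
```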
State mandates – In the absence of a federal mandate for gifted education, many states have passed mandates. The level, quality, and availability of services varies widely from state to state.
Statistics – Quantitative abstractions of group measurements, such as mean, median and mode. Statistics about groups of individuals are often invoked erroneously to define characteristics of an individual, regardless of contradictory evidence, as in Estimated True Scores.
Structure – Social organization and rules to minimize the hassles of routine tasks so people can get on with more interesting aspects of living. Functional structures define what’s important and stress it, define what’s not important and ignore it, and minimize everyone’s inconvenience. Structure enables social functioning, but excessive structure limits creativity, spontaneity and motivation. See also mental health.
Task analysis – Breaking down complex skills into a highly structured series of simpler, smaller, sequential subskills, and omitting higher order thinking skills.
Teacher preparation – Regular education teachers are required to pass courses in disabilities but not giftedness. Even most teachers of the gifted have not had specific training. Many are told: “The gifted can take care of themselves.” Education majors have the lowest average scores on standardized tests such as the GRE, which does not bode well for their capacity to understand the characteristics of the gifted and provide appropriate education for them.
Tracking – Full-time, often permanent assignment to achievement groups. Compare with ability grouping, where students may be temporarily grouped and regrouped for immediate instructional needs.
Twice special – A student both gifted and handicapped, for example, gifted and learning disabled.
Underachievement – A significant difference between ability and performance. A gifted underachiever is often defined as having superior intelligence, yet working below grade level. Underachievement is sometimes differentiated from non-production by including a psychological factor of perceived inability to succeed academically. Some underachievers may withdraw; others may become disruptive. Factors that can contribute to underachievement include:
- Lack of respect for the individual.
- An overly competitive environment.
- Inflexible and rigid structure.
- Stress on external evaluation and criticism.
- Authoritarian control.
- Unrewarding curriculum.
- Family conflicts, such as divorce.
Underachievement shows up often in the most stressful grades: fourth, when students stop learning how to read and start reading to learn; and ninth, with adolescence and the transition to high school.
Unschooling – A constructivist, interactionist educational philosophy which relies on the natural desire of the child to make sense of the real-world environment around him rather than the environment of school. See intrinsic motivation, human nature.
V W X Y Z
Validity – (1) In testing or assessment – A measurement’s ability to measure what it purports to measure. (2) The truthfulness of an argument, i.e., how well the hypothesis is supported by the evidence.
The Somatic Sensory Cortex
Somatosensory Cortex: Have you ever wondered if you feel things the same way other people do? How do you know ‘red’ is really the same red to everyone? Maybe the person next to you sees green as red… These thought-provoking questions can’t be answered precisely with science, but we can learn more about how external stimuli, like colors, are processed in the brain. This is where the somatosensory cortex comes in. This part of the brain processes sensations, or external stimuli, from our environment. Before we learn more about the somatosensory cortex, we need to learn a little bit about brain anatomy and where the somatosensory cortex is located.
The brain is the control center of the whole body. It is made up of a right and a left half, the cerebral hemispheres, which are connected in the middle by the corpus callosum, and each hemisphere is divided into lobes devoted to different functions. The outer layer of the brain is called the cerebral cortex. Think of it like the skin on a fruit: the skin is the cerebral cortex, and the fruit is the white inside of the apple. The cerebral cortex helps with processing and higher-order thinking skills, like reasoning, language, and interpreting the environment. This image shows a cross-section of the brain, with the cerebral cortex shown as the dark outline.
Primary Somatosensory Cortex
The somatosensory cortex is a part of the cerebral cortex and is located near the middle of the brain. This image shows the somatosensory cortex, highlighted in red in the brain. The somatosensory cortex receives all bodily sensory input. Cells that are part of the brain or nerves that extend into the body are called neurons. Neurons that sense touch, pressure, temperature, and pain in the skin and body all send their information to the somatosensory cortex for processing. The following diagram shows how sensations in the skin are sent through neurons to the brain for processing.
Each neuron carries its information to a specific place in the somatosensory cortex, and that part of the cortex then works out what the information means. Think of it like scientists sending data to a data analyst: each scientist, like a neuron, gathers information and sends it to a master analyzer, the somatosensory cortex. Some neurons are very important, and a big chunk of the somatosensory cortex is devoted to understanding their information; the senior scientist sends the most important information to our analyst, who spends a lot of time understanding it. Junior scientists or volunteers gather less important information, so our analyst, the somatosensory cortex, spends less time on that data.
Somatosensory Cortex Function
The primary somatosensory cortex is located in the postcentral gyrus and is part of the somatosensory system. It was initially defined from the surface stimulation studies of Wilder Penfield and the parallel surface potential studies of Bard, Woolsey, and Marshall. Although initially defined to be roughly the same as Brodmann areas 3, 1 and 2, more recent work by Kaas has suggested that, for consistency with other sensory fields, only area 3b should be referred to as “primary somatosensory cortex”, as it receives the bulk of the thalamocortical projections from the sensory input fields.
Brodmann areas 3, 1, and 2 make up the primary somatosensory cortex of the human brain (or S1). Because Brodmann sliced the brain somewhat obliquely, he encountered area 1 first; however, from anterior to posterior, the Brodmann designations are 3, 1, and 2, respectively.
Brodmann area (BA) 3 is subdivided into areas 3a and 3b. Where BA 1 occupies the apex of the postcentral gyrus, the rostral border of BA 3a is in the nadir of the central sulcus and is caudally followed by BA 3b, then BA 1, with BA 2 following and ending in the nadir of the postcentral sulcus. BA 3b is now conceived as the primary somatosensory cortex because 1) it receives dense inputs from the ventral posterior nucleus of the thalamus; 2) its neurons are highly responsive to somatosensory stimuli, but not other stimuli; 3) lesions here impair somatic sensation; and 4) electrical stimulation evokes somatic sensory experience. BA 3a also receives dense input from the thalamus; however, this area is concerned with proprioception.
Areas 1 and 2 receive dense inputs from BA 3b. The projection from 3b to 1 primarily relays texture information; the projection to area 2 emphasizes size and shape. Lesions confined to these areas produce predictable dysfunction in texture, size, and shape discrimination.
The somatosensory cortex, like the rest of the neocortex, is layered. As in other sensory cortices (e.g., visual and auditory), the thalamic inputs project into layer IV, which in turn projects into the other layers. Also as in other sensory cortices, S1 neurons with similar inputs and responses are grouped together into vertical columns that extend across the cortical layers (e.g., as shown by Vernon Mountcastle, into alternating columns of slowly adapting and rapidly adapting neurons, or the spatial segregation of the vibrissae representation in mouse/rat cerebral cortex).
This area of cortex, as shown by Wilder Penfield and others, is organized somatotopically, having the pattern of a homunculus. That is, the legs and trunk fold over the midline; the arms and hands are along the middle of the area shown here, and the face is near the bottom of the figure. While it is not well-shown here, the lips and hands are enlarged on a proper homunculus, since a larger number of neurons in the cerebral cortex are devoted to processing information from these areas.
Primary Somatosensory Cortex Function
The primary somatosensory cortex is located in a ridge of cortex called the postcentral gyrus, which is found in the parietal lobe. It is situated just posterior to the central sulcus, a prominent fissure that runs down the side of the cerebral cortex. The primary somatosensory cortex consists of Brodmann’s areas 3a, 3b, 1, and 2.
At the primary somatosensory cortex, tactile representation is orderly arranged (in an inverted fashion) from the toe (at the top of the cerebral hemisphere) to the mouth (at the bottom). However, some body parts may be controlled by partially overlapping regions of cortex. Each cerebral hemisphere of the primary somatosensory cortex contains a tactile representation of only the opposite (contralateral) side of the body. The amount of primary somatosensory cortex devoted to a body part is not proportional to the absolute size of the body surface but, instead, to the relative density of cutaneous tactile receptors on that body part. The density of cutaneous tactile receptors on a body part is generally indicative of the degree of sensitivity to tactile stimulation experienced at that body part. For this reason, the human lips and hands have a larger representation than other body parts.
What is the primary somatosensory cortex and what does it do?
The primary somatosensory cortex is responsible for processing somatic sensations. These sensations arise from receptors positioned throughout the body that are responsible for detecting touch, proprioception (i.e. the position of the body in space), nociception (i.e. pain), and temperature. When such receptors detect one of these sensations, the information is sent to the thalamus and then to the primary somatosensory cortex.
The primary somatosensory cortex is divided into multiple areas based on the delineations of the German neuroscientist Korbinian Brodmann. Brodmann identified 52 distinct regions of the brain according to differences in cellular composition; these divisions are still widely used today and the regions they form are referred to as Brodmann’s areas. Brodmann divided the primary somatosensory cortex into areas 3 (which is subdivided into 3a and 3b), 1, and 2.
The Somatosensory Cortex Is Responsible For Processing
The axons arising from neurons in the ventral posterior complex of the thalamus project to cortical neurons located primarily in layer IV of the somatic sensory cortex (see Figure 9.7; also see Box A in Chapter 26 for a more detailed description of cortical lamination). The somatic sensory cortex in humans, which is located in the parietal lobe, comprises four distinct regions, or fields, known as Brodmann's areas 3a, 3b, 1, and 2. Although area 3b is generally known as the primary somatic sensory cortex (also called SI), all four areas are involved in processing tactile information. Experiments carried out in nonhuman primates indicate that neurons in areas 3b and 1 respond primarily to cutaneous stimuli, whereas neurons in 3a respond mainly to stimulation of proprioceptors; area 2 neurons process both tactile and proprioceptive stimuli. Mapping studies in humans and other primates show further that each of these four cortical areas contains a separate and complete representation of the body. In these somatotopic maps, the foot, leg, trunk, forelimbs, and face are represented in a medial-to-lateral arrangement.
Although the topographic organization of the several somatic sensory areas is similar, the functional properties of the neurons in each region and their organization are distinct (Box D). For instance, the neuronal receptive fields are relatively simple in area 3b; the responses elicited in this region are generally to stimulation of a single finger. In areas 1 and 2, however, the majority of the receptive fields respond to the stimulation of multiple fingers. Furthermore, neurons in area 1 respond preferentially to particular directions of skin stimulation, whereas many area 2 neurons require complex stimuli to activate them (such as a particular shape). Lesions restricted to area 3b produce a severe deficit in both texture and shape discrimination. In contrast, damage confined to area 1 affects the ability of monkeys to perform accurate texture discrimination. Area 2 lesions tend to produce deficits in finger coordination, and in shape and size discrimination.
A salient feature of cortical maps, recognized soon after their discovery, is their failure to represent the body in actual proportion. When neurosurgeons determined the representation of the human body in the primary sensory (and motor) cortex, the homunculus (literally, “little man”) defined by such mapping procedures had a grossly enlarged face and hands compared to the torso and proximal limbs (Figure 9.8C). These anomalies arise because manipulation, facial expression, and speaking are extraordinarily important for humans, requiring more central (and peripheral) circuitry to govern them. Thus, in humans, the cervical spinal cord is enlarged to accommodate the extra circuitry related to the hand and upper limb, and as stated earlier, the density of receptors is greater in regions such as the hands and lips. Such distortions are also apparent when topographical maps are compared across species. In the rat brain, for example, an inordinate amount of the somatic sensory cortex is devoted to representing the large facial whiskers that provide a key component of the somatic sensory input for rats and mice (see Boxes B and D), while raccoons overrepresent their paws and the platypus its bill. In short, the sensory input (or motor output) that is particularly significant to a given species gets relatively more cortical representation.
Somatosensory Cortex Definition
Previous chapters described the ways in which the different somatosensory receptors respond to specific types of somatosensory stimuli and that the receptors, by virtue of their selective sensitivities, extract specific information about the somatosensory stimulus. The specificity of the receptors forms the basis for a parsing (i.e., a sorting) of somatosensory experience into separate "information channels" or pathways. For example, sharp-pricking pain is mediated in the neospinothalamic (information channel) pathway, whereas proprioception is mediated in the medial lemniscus pathway. Recall that the receptor's extraction of somatosensory information is very specific (e.g., during limb movement, muscle spindles respond to muscle stretch, whereas Golgi tendon organs respond to muscle contraction) and the processing of this extracted information is kept separate along most of the ascending pathway. In addition to this parsing of stimulus information, the somatosensory system is also organized to provide a somatotopic representation of the body surface and parts. The resulting spatial maps provide the anatomical basis for our ability to localize somatosensory stimuli and for our sense of a 'body image'.
As described above, the nervous system reduces somatosensory experience into parallel streams of neural activity – a decomposition of the experience into stimulus fragments spread over body pieces. So how does one have a sense of “oneness” of the body and how does one identify an object by handling it? One can do so because somatosensory information converges in the parietal lobe of the cerebral cortex to provide a cohesive perception of the body and of somatosensory stimuli.
The first part of this chapter will present additional details about the general organization of the somatosensory system and how somatosensory information is represented and processed in the parietal cortex. This understanding of the general organization of the somatosensory pathways will be used in the clinical assessments of somatosensory function.
What is the role of the somatosensory cortex?
What is the function of the somatosensory system?
The somatosensory system is the part of the sensory system concerned with the conscious perception of touch, pressure, pain, temperature, position, movement, and vibration, which arise from the muscles, joints, skin, and fascia. |
Step 1: Gather materials.
- A simple picture printed on card stock, laminated if possible, cut into 10 equal strips with the numbers 1-10 along the bottom of the picture. Check out these teacher sites for an idea of what kinds of pictures you can use. Sometimes there are free downloads available. www.teacherspayteachers.com/Product/Car-Number-Sequence-Puzzle-671324
- Attach magnets to the backs of the number strips, putting one at the top and one at the bottom of each strip. Use metal trays for the children to use as a surface to put the puzzle pieces together on. This step is optional, but it can help the children keep their pieces in one place, while providing a framework that helps them organize their work.
Note: Small parts pose a choking hazard and are not appropriate for children age five or under. Be sure to choose lesson materials that meet safety requirements.
Step 2: Introduce activity.
- Poll the children. Ask: “Who likes to do puzzles?” Explain that today they are going to learn a new puzzle: a number sequence puzzle.
- Explain the instructions of the puzzle. Say: “You are going to receive 10 strips with the numbers 1 through 10 on the bottom. Your job is to put those strips in order from 1 to 10 and, when you do, it will form a picture.”
- Review sequence counting. Count together starting at 1 and going to the number 10.
Step 3: Engage children in lesson activities.
- Give the children the 10 number strips and have them work on putting the strips into the correct sequence.
- The children can use the metal trays as a work surface.
- Print and prepare multiple puzzles so that, when the children are done with one puzzle, they can work on another one. The children love to see the pictures they create by following the number sequence.
Step 4: Teach math vocabulary.
- Sequence: An ordered set of numbers, shapes or other mathematical objects arranged according to a rule (e.g., "The number 2 comes before the number 3 in our number sequence.")
Step 5: Adapt lesson for toddlers or preschoolers.
Adapt Lesson for Toddlers
Toddlers may:
- Have difficulty putting the numbers 1-10 in sequential order
Home child care providers may:
- Cut a simple picture into five strips and label the strips 1-5
- Have the children work with a shorter, simpler number sequence
Adapt Lesson for Preschoolers
Preschoolers may:
- Put the number strips in numerical order with ease
Home child care providers may:
- While still cutting a picture into 10 strips, use a different number sequence. You can choose 30-40, even numbers starting with 2 and ending at 20 or skip counting by fives starting at five and ending at 50. Any number sequence or counting pattern that you are working on can be applied to this puzzle.
- Instead of cutting 10 number strips, cut the picture into 3″x3″ squares. You can choose to put the numbers on the square or not. If you do choose to put the numbers on the square, place them in the bottom right-hand corner or on the back.
Books
- Ten Black Dots by Donald Crews (New York: Greenwillow Books, 1995)
- The Very Hungry Caterpillar by Eric Carle (London: Hamish Hamilton Children's Books, 1994)
Music and Movement
- “Counting 1-20” www.songsforteaching.com/jackhartmann/counting1to20.htm
- Recite nursery rhymes and sing songs that include counting such as: “One, Two, Buckle My Shoe,” “There Were Ten in the Bed,” “This Old Man,” “Five Little Ducks” and “The Ants Go Marching One by One.” This will give the children an opportunity to practice counting in a fun and playful manner. You can find free song lyrics and listen to melodies at www.kididdles.com.
Wagon Walk: Every day, place a different number on a small wagon. Let the children take turns taking the wagon around the yard to collect items to put into the wagon. The object of the game is to place the same number of items in the wagon as the number on the wagon indicates. When the child has the correct number of items in the wagon, he/she can show you. Then ask the child to put the items back where they belong so that another child can have a turn.
- A wonderful game for ordering numbers and for number sequences (fantastic on an interactive whiteboard)
- Online counting games www.familylearning.org.uk/counting_games.html |
Informal definitions of key concepts in propositional logic
This post is the first of a series on propositional logic that introduces the foundational logical concepts of validity, soundness, truth, falsity, possibility and indeterminacy. These concepts are defined here informally and will receive formal articulation in later posts.
What is propositional logic?
The chief unit of propositional logic is the proposition. It is typically said that the sentences of a language express propositions. In natural languages, propositions are expressed via declarative sentences: statements that such and such is the case. For example, in English: snow is white, London is the capital of the United Kingdom, John is travelling to Stockholm, and so on. Whilst this is true, it is important not to assume that a proposition reduces to its expression by a given sentence. There are numerous examples of why this assumption proves problematic. For example, il pleut and it is raining express the same proposition but do so via different sentences in different languages. Similarly, the semantically ambiguous English sentence Two cars were reported stolen by the police yesterday expresses two possible propositions (the police reported the cars stolen; the police stole the cars) that are left underdetermined by the surface grammar of the sentence. For simplicity we will talk about propositions exclusively in terms of sentences, but it is important to note that the two are not straightforwardly interchangeable.
Not every sentence in a language has a propositional form. Consider, for example, Achtung! or thanks. It is not obvious that these sentences express a proposition in the manner of declarative sentences. Philosophers and linguists have called expressions that do not satisfy propositional criteria speech acts. The difference between declarative sentences and speech acts (like thanking someone or issuing an order) consists in the fact that the former possess clear truth-conditions that reduce to a given truth-value. In the case of il pleut the sentence is true if it is raining and false otherwise. Those are its truth-conditions. Its truth-value is the particular assignment that is made on the basis of these conditions. Assume it is not raining; in this case the truth-value of il pleut is false.
Although declarative sentences are a subset of the totality of possible grammatical expressions in any natural language, they are clearly an important subset. They form the basis of all scientific and mathematical discourse and are our primary means of speaking and reasoning about the world. The scope of propositional logic is limited to sentences that have this declarative property. There are no questions, commands or exhortations in propositional logic, only statements which may be true or false.1
The purpose of propositional logic is to analyse propositions in terms of their truth conditions and to derive rules governing their proper application and combination in arguments. Equipped with these rules, we are able to demonstrate for example that an argument is valid or that it displays sound reasoning. On the other hand, the same rules will allow us to demonstrate when an argument is invalid or that it leads to contradiction.
Arguments and consistency
Propositional logic proceeds upon two interconnected axes:
- Analysing compound propositions in terms of their constituent parts
- Analysing a proposition in relation to other propositions
In this post we are focused on the second axis. The first will be covered in a future post on logical connectives and truth-tables.
When we analyse a series of propositions and the relations between them we call the group a set. Sets possess logical properties. The first such property we will define is consistency.
The following set of propositions forms an inconsistent set. Can you spot the inconsistency? 2
- Anyone who takes astrology seriously is crazy.
- Jane is my sister and no sister of mine has a crazy person for a husband.
- Richard is Jane’s husband and he checks his horoscope every morning.
- Anyone who checks their horoscope takes astrology seriously.
The set is inconsistent because it is not the case that all the propositions can be true at once. Specifically: if (1), (3), and (4) are true, (2) cannot be. Alternatively, if (2), (3) and (4) are true (1) cannot be.
Let’s illuminate the first instance of inconsistency. On the one hand we assert that taking astrology seriously is crazy and we assume that checking your horoscope means that you take astrology seriously. Richard is Jane’s husband and he checks his horoscope each morning. By definition then, Richard is crazy and Jane is married to him, but if I believe this, I cannot believe that my sister would not marry a crazy person. My beliefs are inconsistent.
Now, the second instance of inconsistency. Jane, my sister, is married to Richard. None of my sisters have a crazy husband. Richard checks his horoscope each day and therefore takes astrology seriously. If this is the case, I cannot believe that taking astrology seriously is crazy because otherwise my sister cannot be married to Richard.
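To make the brute-force idea behind consistency concrete, here is a minimal Python sketch (my own illustration, not part of the original post). It flattens the quantified statements of the Richard and Jane set into a simplified propositional form; the atom names are hypothetical.

```python
from itertools import product

# Minimal sketch, assuming a simplified propositional rendering of the set;
# the atom names below are illustrative, not from the post.
ASTRO_ATOMS = ["checks", "serious", "crazy", "husband"]

astro_set = [
    lambda v: (not v["serious"]) or v["crazy"],   # (1) taking astrology seriously implies being crazy
    lambda v: not (v["husband"] and v["crazy"]),  # (2) no sister of mine has a crazy husband
    lambda v: v["husband"] and v["checks"],       # (3) Richard is Jane's husband and checks his horoscope
    lambda v: (not v["checks"]) or v["serious"],  # (4) checking your horoscope implies taking astrology seriously
]

def consistent(props, atoms):
    """True iff some assignment of truth-values to the atoms makes every proposition true at once."""
    for values in product([True, False], repeat=len(atoms)):
        assignment = dict(zip(atoms, values))
        if all(p(assignment) for p in props):
            return True
    return False

print(consistent(astro_set, ASTRO_ATOMS))  # False: no assignment satisfies all four, so the set is inconsistent
```

In this toy encoding, dropping any single member and re-running the check returns True, mirroring the observation above that the remaining propositions can all be true together.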
In deconstructing the inconsistencies in the scenario of Richard and Jane we have been concerned to point out that one proposition implies or follows from another proposition. Intuitively we have been invoking the logical notion of an argument. This is not so different from what we mean by an argument in ordinary life. If we are arguing with someone, we believe that they are wrong about something, where 'something' is a proposition and 'wrong' means false. For example, the prime minister is a liar. A more logical way to put this is that we believe their beliefs about a set of propositions are inconsistent. In order to make assertions about the relative consistency or inconsistency of a set of propositions we advance arguments. This is like seeking to change the person's viewpoint by showing that their belief A conflicts with their other belief B, or that if A is true, they cannot believe B.
In the example above each proposition in the set has equal footing; we have not distinguished one type of proposition from any other. When we construct an argument, however, we distinguish the propositions by type: we say that one or more propositions are premises and one proposition is the conclusion.
Let's demonstrate how this works by making an implicit argument from the previous example explicit:
- P: Anyone who checks their horoscope takes astrology seriously.
- P: Richard checks his horoscope.
- C: Richard takes astrology seriously.
This constitutes a logical argument because of how the propositions are arranged: we are asserting that the conclusion (C) is supported by the two premises (P). We call such arguments syllogisms.
Evaluation criteria for arguments
In what sense do the premises support the conclusion of an argument? What is the relation between these two types of proposition? The word 'support' is rather vague. In logic there are different ways to assess the quality of an argument: inductive strength and deductive validity.
Consider the following argument:
- P: When a cat scratches itself it can mean it has fleas.
- P: Our tabby, Carrot, has been scratching himself a lot lately.
- C: Carrot has fleas.
To ask ourselves whether this is a strong argument is to reflect on whether we have good grounds for believing the conclusion given the premises. My intuition is that this is a reasonable argument but not a particularly strong one. It doesn't strain credulity but it is by no means watertight.
Contrast it with this argument:
- P: Every day the sun rises in the east.
- P: The sun rose in the east today.
- C: The sun will rise in the east tomorrow.
This strikes me as a stronger argument than the first. If you had to bet on Carrot having fleas or the sun rising in the east, you would put your money on the latter, although you would probably get better returns on the former. With arguments of this nature we are proceeding on the basis of likelihood. The technical term for this is induction: given some background context of beliefs (for instance the typical behaviour of cats and the planet), there are stronger or weaker grounds for accepting the conclusion of an argument based on its premises.
Although the arguments differ in their relative strength, they are both inductive arguments. This is because they are each falsifiable. In the first case, this is obvious: Carrot might not have fleas and could be scratching for some other reason. Perhaps surprisingly, the second argument is also falsifiable. The magnetic field of the Earth could switch polarity, meaning that while the Earth would not change its position relative to the sun, our compass would be inverted and would therefore indicate the sun as rising in the west. This is very unlikely to happen imminently, but geomagnetic reversals have occurred repeatedly in the past and are expected to occur again at some point. Therefore the conclusion could prove false.
The next obvious question is whether all arguments are like this. Is probability the best we can hope for? Fortunately not. Propositional logic is a deductive schema, which means it aims for truths that are not falsifiable in the manner of the two examples above. It is this criterion of evaluation that we are mainly interested in when we use logic as a formal discipline. This is the domain of deductive validity. Validity is our second key logical property.
Validity and soundness
The following syllogism is an example of a valid argument:
- P: All fish live in the sea and only in the sea.
- P: Cod are fish.
- C: Cod live in the sea.
And here is an invalid argument:
- P: All fish live in the sea and only in the sea.
- P: Cod are fish.
- C: Cod live on land.
In the valid instance, there is no sense to the idea that we might accept each premise and yet deny the conclusion. In the invalid instance this is not the case: we can accept each premise and deny the conclusion. We can relate this back to the notion of consistency. Recall that a set of propositions is consistent if and only if it is possible for each member to be true at once. With the first argument we cannot consistently accept the premises and deny the conclusion, whereas in the case of the second argument we can quite consistently accept the premises and deny the conclusion, since the premises together with the conclusion do not comprise a consistent set.
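The same brute-force strategy gives a mechanical test for validity: an argument is valid exactly when no assignment makes every premise true while the conclusion is false. The sketch below is again my own illustration; the fish/cod arguments are propositionalised with hypothetical atom names, and the "only in the sea" clause is left out for simplicity.

```python
from itertools import product

FISH_ATOMS = ["is_fish", "lives_in_sea", "lives_on_land"]

def valid(premises, conclusion, atoms):
    """True iff every assignment that satisfies all premises also satisfies the conclusion."""
    for values in product([True, False], repeat=len(atoms)):
        assignment = dict(zip(atoms, values))
        if all(p(assignment) for p in premises) and not conclusion(assignment):
            return False  # counterexample: premises true, conclusion false
    return True

fish_premises = [
    lambda v: (not v["is_fish"]) or v["lives_in_sea"],  # all fish live in the sea
    lambda v: v["is_fish"],                             # cod are fish
]

print(valid(fish_premises, lambda v: v["lives_in_sea"], FISH_ATOMS))   # True: the first argument is valid
print(valid(fish_premises, lambda v: v["lives_on_land"], FISH_ATOMS))  # False: the second argument is invalid
```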
In contrast to the previous inductive arguments, with valid arguments the conclusion is supported purely in virtue of the terms used in the premises and the propositions they express. In order to assess the validity of an argument like the first, it is not necessary to acquaint oneself with cod and to study their behaviour so as to determine whether they do in fact live in the sea. All that is necessary is to understand the propositions expressed. Philosophers refer to statements of this sort as analytic. They are true or false 'by definition'. More specifically: the concept of the predicate is contained within the concept of the subject, for example all brothers are male. In the case of the argument, we have defined at (P1) that fish are creatures that live in the sea, so given this definition, the conclusion is bound to follow from the premises since it is just a specific instance of the general property already defined. Validity is therefore an entirely formal notion that exists over and above any facts of the matter. If I have defined 'fish' universally as sea-dwellers I cannot without inconsistency say that they are not sea-dwellers.
This is further exemplified with an argument like the following:
- P: Manchester is the capital of the UK.
- P: Manchester is north of Birmingham.
- C: The capital of the UK is north of Birmingham.
Is this a valid argument? To answer this question, remember that invalidity means that it is possible for the premises to be true and the conclusion false. In the strict logical sense, this is a valid argument, since were the premises true, the conclusion would also be true. The point is that validity is a function of truth-conditions, not truth-values. The truth-values of the first premise and the conclusion in the above argument happen to be false, but this does not affect its validity. There is no necessity to London being the capital of the UK and not Manchester. We can imagine things being otherwise, which is to say that we can entertain the truth-conditions of the proposition and make judgements in accordance with it being true or false quite independently of whether it is in actuality true or false.
We can take this back to the earlier example of the invalid argument about cod: in order to judge the argument invalid it was not necessary for us to look for cod that live on land and come back empty. Rather we just had to assume that if all fish live in the sea then it must be the case that if something is a fish, it is a sea-dweller. We made no commitment to fish actually living in the sea.
Does this mean that actual truth does not matter to logic? No, it just means that validity as a property is decoupled from truth as a property, although we cannot of course have a grasp of the notion of validity without possessing a prior notion of truth. A proposition being true in fact is a property it may or may not possess in addition to its membership within a valid sequence of reasoning. If an argument is both valid and its premises are true in fact, we say that it is a sound argument. This is a stronger criterion of evaluation than validity alone.
- an argument cannot be sound if it is not also valid
- an argument can be valid without being sound
- if an argument is sound its conclusion must be true
(The last point follows from the fact that soundness means the premises are true and validity requires that if the premises are true the conclusion must also be true.)
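Reusing the hypothetical `valid` helper sketched above, the relationship between validity and soundness can be expressed as one extra check. Whether the premises are true in fact is an empirical matter that cannot be read off the formulas, so in this speculative sketch the actual facts are simply stipulated as a dictionary.

```python
def sound(premises, conclusion, atoms, facts):
    """Sound iff the argument is valid and every premise is true under the stipulated facts."""
    return valid(premises, conclusion, atoms) and all(p(facts) for p in premises)

# A toy world, stipulated purely for illustration.
toy_facts = {"is_fish": True, "lives_in_sea": True, "lives_on_land": False}
print(sound(fish_premises, lambda v: v["lives_in_sea"], FISH_ATOMS, toy_facts))  # True under these stipulated facts
```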
We have already seen an example of an argument that is valid but not sound (the Manchester argument). Let's close this section with an example where both premises and conclusion are true yet the argument is invalid, demonstrating that truth alone is not sufficient for soundness.
- P: London is the capital of the UK.
- P: The capital of the UK is in the southern part of the country.
- P: Cambridge is not the capital of the United Kingdom.
- C: London is south of Cambridge.
This argument is deductively invalid because we can consistently assert the premises but deny the conclusion. Specifically: there isn’t anything about the premises that makes the denial of the conclusion inconsistent. From the point of view of the premises alone, London could be north of Cambridge whilst still being in the southern part of the country.
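Running the hypothetical `valid` helper from earlier over a propositional encoding of this argument makes the diagnosis mechanical: because the conclusion is an independent atom, there is an assignment that satisfies all three premises while falsifying it. The atom names are again my own.

```python
LONDON_ATOMS = ["london_is_capital", "capital_in_south", "cambridge_is_capital",
                "london_south_of_cambridge"]

london_premises = [
    lambda v: v["london_is_capital"],
    lambda v: v["capital_in_south"],
    lambda v: not v["cambridge_is_capital"],
]

print(valid(london_premises, lambda v: v["london_south_of_cambridge"], LONDON_ATOMS))
# False: all premises can be true while "london_south_of_cambridge" is false
```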
In distinguishing the properties of logical consistency and validity we have been making much tacit use of the notion of possibility. This is because when we consider the validity of an argument we are assessing truth-conditions and this consists in asking ourselves what could or could not be the case: were it such that P, then it would be the case that Q. It is important to understand what possibility means in the context of logic and how it differs from what we might mean ordinarily when we use the term.
It is evident from the case of arguments that are valid but not sound that logic operates with a specialised notion of possibility. For example it has to be the case that the proposition Every woman can levitate is logically possible since the following argument is valid:
- P: Janice is a woman.
- P: Every woman can levitate.
- C: Janice can levitate.
But we know of course that women cannot levitate. When we assert that this is impossible we are relying on a stronger notion of possibility than logical possibility. It follows that the concept of possibility can have different degrees. The scope of the concept of possibility has been the concern of logicians and philosophers since at least the time of Plato and numerous different formulations exist. The notion that we mostly work with unreflectively in everyday life is nomological possibility. This means ‘governed by the application of laws’ where these laws pertain to our current understanding of the natural world as determined by physics. Levitation is therefore nomologically impossible but logically possible.
If logical possibility is not constrained by the laws of physics, does it place any restrictions on what is possible? Logic applies a single restriction, the law of non-contradiction: a proposition cannot be both true and false at once. The following propositions are examples of contradictory propositions.
- There is a dog that is not a dog.
- Today is Tuesday and today is not Tuesday.
- The cat that is dead is alive.
From this we can derive the following property of logical possibility: a proposition is logically possible if and only if it does not involve a contradiction.
Logical truth, falsity and indeterminacy
What are the truth-conditions of a contradictory proposition? We know that a logically possible proposition such as every woman can levitate could be true or false. It has to be so because we are capable of constructing valid arguments where it features as a premise and a valid argument implies the possible truth of its premises.
In the case of a contradiction there are no conditions under which it could be judged to be true. For this reason, contradictions are classified as logically false. This is distinct from ordinary falsity where a proposition could be true but happens to be false. Logically false propositions are universally false and could never be true. This is consistent with our previous observation of the law of non-contradiction: if a proposition cannot be both true and false at once we are saying that something cannot be the case which is of course to say is false.
Logical falsity is therefore another property that a proposition may possess, and it is a property that is possessed by all propositions that are contradictions: a proposition is logically false if and only if there are no possible circumstances under which it is true.
Complementing logical falsity is logical truth: a proposition is logically true if and only if it is true under every possible circumstance.
We call logically true propositions tautologies. Some examples:
- An apple is an apple.
- Today is Tuesday or today is not Tuesday.
- The cat is dead or alive.
The properties of logical truth and falsity are alike in their universality. Propositions that are logically true do not exclude any possibility (today is Tuesday or it is not Tuesday; there is no possible state outside of this) whereas logically false propositions exclude all possibilities (there is no scenario where today is both Tuesday and not Tuesday).
We class all propositions that are neither contradictions nor tautologies as logically indeterminate propositions. This means that their truth-value is not assigned purely on the basis of the meanings of the terms of which they are comprised. It is raining, for example, is logically indeterminate because we cannot know its truth-value just by reflecting on the meaning of the predicate is raining. It may be true under certain conditions and false under others, and in order to know the specific truth-value at a given moment, we must look to states of affairs beyond the sentence. The vast majority of propositions expressed in natural and formal languages are indeterminate in this manner.
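As a final sketch in the same brute-force spirit (again mine, with illustrative atom names), the three-way classification into tautology, contradiction and indeterminate proposition can be read off the truth-values a formula takes across every assignment to its atoms.

```python
from itertools import product

def classify(formula, atoms):
    """Classify a formula as a tautology, a contradiction, or logically indeterminate."""
    values = [formula(dict(zip(atoms, vs)))
              for vs in product([True, False], repeat=len(atoms))]
    if all(values):
        return "tautology"        # logically true: true under every assignment
    if not any(values):
        return "contradiction"    # logically false: true under no assignment
    return "indeterminate"        # true under some assignments, false under others

print(classify(lambda v: v["tuesday"] or not v["tuesday"], ["tuesday"]))   # tautology
print(classify(lambda v: v["tuesday"] and not v["tuesday"], ["tuesday"]))  # contradiction
print(classify(lambda v: v["raining"], ["raining"]))                       # indeterminate
```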
In this post we introduced propositions as descriptions of states of affairs that possess truth-conditions. We noted that propositions are expressed in language through the medium of declarative sentences and that not every expression in a language possesses a propositional form. Two key properties that pertain to sets of propositions were introduced and exemplified: consistency and validity. In addition we considered different evaluative criteria for logical arguments, comparing inductive strength with deductive validity. We distinguished logical possibility from nomological possibility and explained how the law of non-contradiction places bounds on what is logically possible. Equipped with the concept of logical possibility we were able to introduce logical truth and falsity, analysing the truth-conditional form of tautologies and contradictions and noting that propositions that are neither logically true nor logically false are logically indeterminate.
Bergmann, M., Moor, J. and Nelson, J. (2014). The Logic Book. Boston: McGraw-Hill/Connect Learn Succeed.
Wikipedia contributors (2019). Speech act. Wikipedia. Available at: https://en.wikipedia.org/wiki/Speech_act. |
Land reform is a broad term. It refers to an institutional measure directed towards altering the existing pattern of ownership, tenancy and management of land.
It entails “a redistribution of the rights of ownership and/or use of land away from large landowners and in favour of cultivators with very limited or no landholdings.”
Land reform is a part of the heritage of the country's freedom movement, since the agrarian structure that we inherited from the British at the time of independence was of a feudalistic, exploitative character. Zamindars, intermediaries and moneylenders played a big role in exploiting the masses.
First Generation (1947-1970) of Land Reform in India
The peculiarities of Indian agriculture, combined with the declared desire to bring about economic development as well as social justice, led the govt., in the post-Independence period, to undertake a comprehensive programme of land reforms. These reforms, be it noted, had a popular base inasmuch as they were preceded by peasant disturbances and violent clashes in several parts of the country.
These reforms comprised:
- Abolition of intermediaries
- Ceiling on land holdings
- Tenancy legislation
- Cooperative farming
- Abolition of forced labour
- Consolidation of holdings
Abolition of Intermediaries
One of the first aims of the agrarian reforms was to eliminate the middlemen such as the Zamindars and Jagirdars so as to bring the cultivator into direct relationship with the govt. The work of Zamindari abolition was comparatively easy in the temporarily settled areas such as U.P. and M.P. where adequate records and administrative machinery existed.
In the permanently settled areas of Bihar, Orissa, and West Bengal and in areas under Jagirdari settlements such as Rajasthan and Saurashtra, "land records and revenue administration had to be built from the beginning." Nevertheless, laws abolishing intermediary tenures were given effect to in most of the states.
The general pattern was made up of the following features:
- All land including common lands, forests, mines, minerals, rivers, channels, and fisheries were vested in the govt. for purposes of management and development.
- Home-farm lands and lands under the ‘personal’ cultivation, of intermediaries were left with them.
- In most states, the tenants-in-chief holding land directly from intermediaries were brought into direct contact with the State, with some exceptions such as Bombay, Hyderabad and Mysore. In these states, intermediaries were, in some cases, allotted lands held by tenants.
- In some States, tenants possessed permanent and transferable rights and it was not necessary to confer further rights upon them. These included Assam, West Bengal, Bihar, Orissa, Bhopal and Vindhya Pradesh.
There were other states such as Bombay, U.P, M.P, Hyderabad, Mysore and Delhi where tenants were required to make payments in order to acquire rights of ownership. In a few states such as Andhra, Madras, Rajasthan, either larger rights were conferred upon tenants or their rents were reduced without any direct payment being required of them.
A distinct feature of the Zamindari Abolition Acts was the payment of compensation to the landlords although the rate and the mode differed from state to state. Barring Kashmir where no compensation was paid, in others it was fixed either as a multiple of land revenue assessment or of rent or net income.
In all, compensation, including rehabilitation grants, payable to the intermediaries amounted to Rs. 670 crores. Only a part of this compensation or rehabilitation grant and that too to small land owners was paid in cash, the remaining being paid in long term bonds.
The removal of intermediaries had far reaching effects. As Daniel Thorner points out, the new laws took away from intermediaries their rights to collect rents on lands which they themselves did not cultivate. They also relieved them of the responsibility for paying land revenue on such lands.
On the other side, 173 million acres were acquired and 20 million tenants brought into direct relationship with the state. In some cases, tenants acquired full ownership rights, including the right of transfer without any payment. In others, they were required to make some payment for acquisition of full occupancy rights.
It also brought about improvement in the administrative machinery and social services. But more important was the downward revision in the rates of land revenue which were brought in line with rates prevailing in the ryotwari areas. An effect of great significance was to give the richer peasants an opportunity to become landed proprietors.
The absentee-landlord, having considerably large resources at his disposal, began to invest large amounts in the lands under his control and with the use of modern techniques, managed to show a higher level of productivity. The way was thus paved for the emergence of rural capitalism.
The abolition of intermediaries, though elaborately conceived, left many glaring loopholes. In the first instance, the Acts did not apply to landlord holdings in ryotwari areas which embraced 57% of the total area under cultivation in the country. Even where they applied, the Acts did not divest the feudal landlords of their large holdings of land.
Rather, they were permitted to retain large areas provided it was under their personal cultivation and not let out to tenants. It is true that in some states a ceiling was fixed as to the amount of land a former intermediary could own but the ceiling was so high that very few of the intermediaries were affected.
In any case, it was possible for them to sidestep the law by passing over part of their land to other members of the family. This is how estates of even 100 acres persisted in post-reform Bihar. Besides, personal cultivation was not clearly defined.
In Kashmir, the law suppressing big landowners transferred the land to real cultivators defined as those who “till and work the land with their own hands.” This was in contrast to other states which considered cultivator as one who merely financed production.
In the words of Daniel Thorner, the cultivator was not required "to participate in the actual work of cultivation; he did not have to go to the fields and work. In fact, he did not have to leave his house or to get off his divan. Worse still, he was not even required to be in the village at all." This made it possible for landowners even remotely connected with agriculture to pass as tillers of the soil.
The Act was supposed to eliminate absentee landlords, but it allowed plenty of room for them to stay. No wonder that the National Sample Survey (8th round) found about 31 million acres of land, representing 50% of all land leased, still under absentee owners. In other words, feudalism was curbed but not eliminated.
The procedure of allowing the Zamindars to retain lands under personal cultivation had far reaching consequences. To be able to declare a large proportion of their lands as ‘Khud Kasht’ or under personal cultivation, it became necessary for the owners to show these lands as free from any tenancy.
All means, legal and illegal, were used to expel tenants or force them to renounce their tenures voluntarily under threats of physical violence or economic sanctions.
This resulted, as Dantwala has observed, in more tenants being evicted during the decade following Independence than during the last 100 years of British rule. At the other end, the Zamindars of U.P. alone ended up with some six million acres of 'Sir' and 'Khud Kasht' land.
Dantwala, after hailing the Zamindari abolition measure as revolutionary, admits that the actual results were far from satisfactory. This is well brought out by the fact that while official documents claimed total abolition of Zamindaris, in U.P. alone 10% of the families were still holding something like 50% of the land in 1955.
This is in marked contrast to the experience in Japan where an ambitious programme of tenancy reform—transferring land to tenants—was so successfully implemented as to reduce land under tenancy from 47% to 9% in the post-war period.
To conclude, the Zamindari Abolition was an attempt at a half-hearted readjustment of agrarian relations. The policy pursued was not one of radical change but of compromise, not one of creating conditions for the capitalist development of peasant farms in general but of converting feudal landlords and rich peasants into capitalist agriculturists.
Therefore, even though their position was slightly weakened, the zamindars still managed to retain their position as the largest landowners in the states. However, alongside them, the richer section of the tenants also began to play a greater role in the economic and political life of the village. The condition of the bulk of the peasantry, the actual tillers of the soil, however, remained practically unchanged.
Ceiling on Land Holdings
According to the Report of the Panel on Land Reforms, the aim of land ceilings was to:
- Meet widespread desire to possess land;
- Reduce glaring inequalities in ownership and use of land;
- Reduce inequalities in agricultural income and enlarge the sphere of self employment; and
- Give a new status to the land-less.
With a view to achieving these objectives, legislation was passed in all states imposing ceiling on existing land holdings as well as on future acquisition of land.
However, provisions relating to level, transfers, and exemptions differed considerably from state to state. In Assam, Jammu and Kashmir, West Bengal and Manipur, there was one uniform ceiling limit irrespective of the class of land, ceiling being fixed at 50 acres, 22 ¾ acres and 25 acres respectively.
In all other states, the level of ceiling was fixed to take account of different classes of land. For example, the ceiling ranged all the way from 27-134 acres in Andhra, 20-80 acres in Orissa, 19-132 acres in Gujarat, and 18-126 acres in Maharashtra. In others, it was fixed in terms of standard acres, a standard acre being equal to a certain number of ordinary acres as laid down in the Act passed in each state.
Thus ceiling was fixed at 30 standard acres in the Punjab (Pepsu area only), Rajasthan, Delhi and Madras; 25 standard acres in Madhya Pradesh; and 27 standard acres in Mysore. In U.P., ceiling was imposed at 40 acres of 'fair-quality' land.
These different levels of ceilings, as M.L. Dantwala points out, did not bear any relation either to climate or soil conditions prevailing in different regions or to the density of population. It appears these ceilings were fixed primarily on the basis of the average size of large holdings in a particular state or by the influence exerted by different political forces in the legislature.
In some states, transfers made after the publication of the bill or its introduction in the legislature were disregarded, as in Assam, Kerala, Madras, Maharashtra, Uttar Pradesh and Tripura. Some states enforced the measure with retrospective effect from a certain date, e.g., Gujarat, Punjab, West Bengal, Delhi and Manipur.
In others, there was no provision for disregarding transfers made before the commencement of the ceiling law. In Mysore, transfers of land could take place even after the enactment of the law while Madhya Pradesh and Orissa Acts permitted owners to transfer their surplus lands to specified categories of persons within specified periods.
As regards exemptions, the ceiling laws passed by the Bihar, Andhra and Madras legislatures provided exemptions to lands under sugarcane belonging to sugar factories; in Maharashtra, the ceiling was extended to cover sugar plantations as well.
In all states, except Jammu and Kashmir, provision was made for the payment of compensation for the acquisition of surplus land. However, the amount of compensation specified in ceiling legislation was not the same in different states, nor was the principle underlying it the same.
Five different patterns were followed:
- Compensation was fixed as multiple of land revenue assessment in Assam, Gujarat, Madhya Pradesh and Maharashtra.
- In Andhra, Mysore, Madras, West Bengal, Delhi, Manipur and Tripura, it was fixed as a multiple of income.
- In the Pepsu area of the Punjab, it was fixed as a multiple of rent.
- In Kerala and Orissa, it was related to the market value of land.
- In Bihar, specified amounts were provided for different classes of land.
Despite these differences, there was one thing common in all states: everywhere the compensation paid was higher than what was paid to the zamindars, and it came close to the market price of land. The Orissa Bill specifically provided that the surplus land "was to be sold by the owners at market price".
Thus, the recommendation of the First Panel on Land Reforms that "the amount of compensation should in no case be more than 25% of the market values of land" was not carried out.
The practical effect of the ceiling laws was negligible. Taking the country as a whole, a total of 9.6 lakh hectares was declared surplus, out of which about 6.4 lakh hectares were taken possession of by the State Governments and only 4.6 lakh hectares were finally distributed.
The reasons for this meagre result are not far to seek. In most of the Acts, ceiling was fixed on individual and not family holdings, which gave the landlords the opportunity of fictitiously dividing up their property "among the new-born, still-to-be-born, and still-born" by making Benami transactions. B. Chattopadhya rightly stated in this connection that "unless these things were strictly defined in terms of the aggregate holding of the entire family and persons otherwise related and not in terms of individual's holdings, one does not see how evasion can at all be legally reprehensible."
Besides, in some states such as Andhra, Bihar, J and K, and Mysore, the laws did not prohibit alienation and transfer before the law came into force while in others such as Maharashtra, Manipur, and Uttar Pradesh, too remote a date was fixed for the measure to come into force. H.D. Malaviya attributes the failure to the people in charge of implementing the measure who never mentally accepted it.
The enforcement was left to the administrative and revenue authorities who colluded with the land owners and interpreted the law in such a manner as to defeat its purpose. Pitted against the powerful bureaucracy was the ignorance of the illiterate peasants about the laws.
The whole argument may be summed up in the apt words of Dr. Joshi: "The wide latitude given to state govts. (in defining a family holding, in determining the level of ceilings, in deciding whether ceilings should apply to individual or family holdings, and in fixing exemptions or methods of distribution of surplus lands) opened the door to endless manipulations and manoeuvrings, pulls and pressures, in a manner that the very object of ceilings was put in jeopardy and even defeated."
To make a change in the existing pattern of land ownership, these loopholes had to be plugged. But this involved an attack on entrenched interests in the country-side on a much larger scale than was actually attempted. The result was that 90% of the usefulness of land ceilings was lost.
The results speak for themselves. In Gujarat, 2.15 lakh acres were likely to be declared surplus but only 39 thousand acres were actually declared. In Maharashtra, 2 lakh acres were declared surplus but only 67.5 thousand acres were taken possession of by the govt.
In Mysore, Kerala and Orissa, the ceiling laws did not release a single acre of surplus land for redistribution. In the whole of Andhra, only 1400 acres were taken over and none distributed. And the performance in Tamil Nadu was only marginally better.
Their ineffective implementation notwithstanding, the ceiling laws had a certain significance. It lay in the fact that a new but irrevocable step had been taken. A new process had begun, set off by the needs of a rural population living in steadily deteriorating social and economic conditions. The old style landed-gentry was replaced by the middle level land-owners.
In this respect, in contrast to Pakistan, there was a certain forward, though uneven, movement in our society. The laws discouraged the richest classes from buying up land. Although a major portion of their capital went into "the purchase of urban property, in politics, money-lending and in conspicuous consumption", a part at least was used for land improvement and agricultural equipment.
Holdings well-equipped with machinery and employing hired labour increased, again indicating the beginnings of a new agricultural capitalist class. This is, it appears, what the govt. had in mind.
This is confirmed by the fact that the ceiling laws did not apply to plantations, sugarcane farms owned by sugar factories, orchards, cattle-breeding and dairy farms, farms in compact blocks, efficient farms, mechanised farms and farms with heavy investment. While this had a favourable effect on agricultural production, land ceilings did not solve the problem of landless peasants or those with too little land.
Tenancy Legislation
A satisfactory system of land tenure had long been recognised as the essential basis of a strong and efficient organisation.
The Congress Agrarian Reforms Committee very strongly felt that the welfare of the Indian peasantry and the progress of agriculture in India depend to a large extent on whether the peasantry feels secure about its source of livelihood and whether the tenure system provides incentives and opportunity for local development.
The First Five Year Plan, while according the highest priority to increase in agricultural production, recommended an agrarian policy aimed at reducing disparities in wealth and income, eliminating exploitation, providing security for the tenant and worker, and giving opportunity to different sections of the rural population.
With these guidelines provided by the planning Commission, the State Govts. adopted certain measures, viz., regulation of rents, security of tenure and conferment of ownership on tenants.
As regards fixation of rents, most of the states followed the directive laid down in the First Plan and fixed rents at ¼ of the gross produce or less. However, in some states like the Punjab, J & K, Madras, West Bengal and Andhra, fair rent, as fixed by law, continued to be 1/3 to 1/2 of the gross produce.
Security of tenure was increased by temporarily excluding eviction or by giving new rights to tenants or else by fixing a maximum limit to the area a landowner could resume for ‘personal cultivation’ in the widest sense or by fixing a minimum amount of land that could be held by the tenant and could not be resumed by the proprietor for personal cultivation. In Bombay and the Punjab, the tenant could retain half of his holding and in Himachal Pradesh 3/4 of his holding.
The maximum area that a landowner could retain for personal cultivation also varied considerably. In some of the largest states, no maximum was fixed. In others such as Bombay, Assam, Hyderabad, the maximum was generally between 12-50 acres; In a few states, the maximum was lower than this ; in J & K, it was about 2—6 acres and in Orissa 7—19 acres.
The beneficiary tenant differed from state to state but was almost never the one who worked the land with his own hands. In West Bengal, the largest rights were given to the jotedars, who seldom worked on the land. The true cultivators were given no new right in the land except that of retaining a third.
Even this was theoretical, for if the proprietor owned less than 7.5 acres, he could reclaim all of the bargadar's holding. And this he could easily achieve by dividing his land among his family members. In other states such as U.P., Bihar, and the Punjab, the crop-sharers found themselves treated in much the same way.
As for conferring the right of ownership, different states followed different procedure. In states like Gujarat, Maharashtra, Madhya Pradesh and Rajasthan, the law declared tenants as owners but required them to pay compensation to owners in suitable instalments.
In Delhi and West Bengal, the State first acquired the ownership rights and then transferred the same to tenants, recovering the compensation from them in suitable instalments.
In Kerala and U.P., ownership rights were acquired by the govt. thereby establishing a direct relationship with the tenants. As a result of these measures, it is claimed that about 3 million tenants and share-croppers acquired ownership of more than 7 million acres in the country.
But this was more a result of the Zamindari abolition than a consequence of tenancy legislation. Under the tenancy laws, tenants lost more land than they actually acquired. In Gujarat, for instance, Desai found that between 1948 and 1955 (excluding 1950 and 1951), tenants purchased 0.8% of the leased land but lost, as a result of resumption by the landlords, 1.5% of the leased land.
Similarly, Dandekar and Khudanpur found that of the total area with tenants in 1948-49, 27% was resumed by the owners and only 3% was acquired by tenants. In the villages of Baroda covered by the inquiry of Kolhatkar and Mahabal, "there were very few or no cases of landlords selling their lands to tenants."
Although tenancy laws theoretically improved the position of the tenants, the actual position was not much better than before. Dandekar and Khudanpur, in the course of their study of the working of the Bombay Tenancy Act 1948, found that "the statutory rents were almost entirely ignored; most crop sharers still gave ½ of their produce to the landlords despite the new laws, nor was there any secret about it."
They concluded that "in practice the law does not exist." And Khusro found that the 'machinery to regulate rents was seldom invoked.' As regards security of tenure, the Third Plan confesses that "in a number of states, ejectments of tenants have taken place on a considerable scale under the plea of voluntary surrenders."
What were the reasons for such poor results of more than 20 years of implementation of tenancy legislation in Independent India? The first was the ignorance of the tenants. Desai found that 60% of the tenants he interviewed had no knowledge of the important provisions of the Tenancy Act of which they could take advantage.
Even where they knew the provisions of the law, they did not find it simple and easy to avail themselves of them. To go to the court was expensive and, as Desai noted, “frequent amendments to the Act added to the bewilderment of the peasant and made its effectiveness slow and difficult.”
Another important cause was that, economic and political power being in the hands of the landlords, a tenant was "likely to prefer to forgo his rights to courting the bitterness in relations with, and the hostility of, the landlord with all its consequences."
Under these circumstances, it was very difficult for a tenant, even if his name was entered in the village records, to prove a long and continuous occupation of the leased plot so as to obtain the full rights of protected tenants as established by law.
Yet another cause was the inherent weakness of the provisions in the Acts. In the first instance, the Acts did not cover share-cropping or sub-letting. A second flaw was the permission given to the landlords to retain the lands ‘voluntarily’ surrendered by the tenants.
With the economic and social power that a majority of landlords possessed over their tenants, it was not very difficult for them to obtain voluntary surrenders at will.
A third flaw was in regard to the limitations placed on the right of tenants to purchase land. The conclusion follows that the Tenancy legislation did not benefit agricultural workers and crop sharers. The only one to gain was the rural upper class whose members had previously been dependent on the big landlords.
Cooperative farming did not receive any attention before the planning period, although the Congress Agrarian Reforms Committee had recommended cooperative farming for holdings below the 'basic' holding.
It was the Second Plan which envisaged that "the main task is to take such essential steps as will provide sound foundations for the development of cooperative farming, so that over a period of 10 years or so, a substantial proportion of agricultural land is cultivated on cooperative lines."
The progress was rather meagre. Up to 1965-66, a total of 7,294 cooperative farming societies with a membership of 1.88 lakh had been formed, covering an area of 3.93 lakh hectares. However, many of these societies were defunct, and some existed only on paper for the sake of obtaining state grants though their land was cultivated in the old way. Quite a few permitted individual cultivation.
In these, there was neither the pooling of resources nor joint operation of land. A number of these were formed with a desire to evade land reforms measures in various states.
Gunnar Myrdal observes that urban landowners found cooperative farming a convenient device for converting share-croppers into wage labourers, and hence a means whereby absentee owners could reap the gains from agricultural modernisation. This explains why 'absentee landowners were among the supporters of the cooperative farming idea.'
One of the basic requirements of the pilot Programme launched during the Third Plan was that “the bulk of the members should be small cultivators or landless persons or both.” This was to ensure that absentee landowners were kept out.
The Gadgil Committee on cooperative farming, however, found that only 1/3 of the societies satisfied this requirement while 2/3 had no qualification to be included in the Pilot Programme. This was bound to happen where cooperatives were introduced without first altering the rural class structure.
Abolition of Forced Labour
Another significant development since 1947 was the virtual disappearance of forced labour. At the turn of the century, the vast majority of agricultural labourers were un-free men who were either in debt-bondage or some other form of servitude.
However, since independence the force of hired labourers in Indian agriculture was, by and large, made up of free men. This was a change of great significance which was likely to have far-reaching repercussions in the future.
Consolidation of Holdings
The consolidation of fragmented holdings was regarded as "an integral part of the agricultural production programme." Legislation for compulsory consolidation of holdings was enacted in Bombay in 1947, in the Punjab in 1948, in Pepsu and Saurashtra in 1951 and in U.P. in 1953. Similar provisions were made in other provinces except Kerala and Madras. By 1964-65, a total area of 55 million acres had been consolidated.
The progress was especially marked in Gujarat, Maharashtra, Mysore, Punjab, Rajasthan and U.P., while in West Bengal, Assam, Orissa and J & K the scheme had not been taken up for implementation. Those who gained the most were the upper strata of the peasantry, for whom the elimination of strip-farming facilitated the shift to capitalist farming.
The Right to Fair Compensation and Transparency in Land Acquisition, Rehabilitation and Resettlement (Second Amendment) Bill, 2015
Highlights of the Bill
- This Bill amends the principal Act passed in 2013.
- The Bill enables the government to exempt five categories of projects from the requirements of: (i) social impact assessment, (ii) restrictions on acquisition of multi-cropped land, and (iii) consent for private projects and public private partnerships (PPPs) projects.
- The five categories of projects are: (i) defence, (ii) rural infrastructure, (iii) affordable housing, (iv) industrial corridors, and (v) infrastructure including PPPs where government owns the land.
- The Act would apply retrospectively, if an award had been made five years earlier and compensation had not been paid or possession not taken. The Bill exempts any period when a court has given a stay on the acquisition while computing the five year period.
- The Act deemed the head of a government department guilty for an offence by the department. The Bill removes this, and adds the requirement of prior sanction to prosecute a government employee.
Key Issues and Analysis
- The five types of projects being exempt from the provisions of social impact assessment, restrictions in case of multi-cropped land and consent are broad and may cover many public purpose projects.
- The Act requires consent of 70% of landholders for PPP projects, and 80% for private projects. Acquisition, being different from purchase, implies that land owners were unwilling to part with the land. Requiring consent from them may be impractical. Also, it is not clear why the consent requirement depends on who owns the project.
- The amendments in the Bill propose to expedite the process of acquisition. However, the changes in the Bill would reduce the time for acquisition only from 50 months to 42 months.
- The removal of the provision that deemed the head of department guilty, and addition of a new requirement of prior sanction to prosecute government employees may raise the bar to hold them accountable.
- The change in the retrospective provision may be ineffective in cases instituted until 2014 in light of a recent Supreme Court judgment.
Monetary policy is how the supply of money is controlled within a country. Monetary policy involves the increase or decrease in the money supply by the central bank (in most cases). The money supply is linked directly to the interest rate and thus monetary policy is used to affect the interest rate. Monetary policy can be either expansionary or contractionary (sometimes called loose and tight) with contractionary monetary policy increasing the interest rate and expansionary monetary policy decreasing the interest rate. Lowering interest rates will typically boost the economy by encouraging investment, but this often comes at the price of higher inflation.
The government can change the money supply in three ways. Firstly, it can change the reserve requirements of banks. Reserve requirements are the amount of actual cash that a bank has to hold proportionate to the deposits in the bank. Lower reserve requirements mean that the bank can lend out more money. When this happens the supply of money increases, which will decrease the interest rate at equilibrium. Secondly, the central bank can change its overnight interest rate that it offers to banks. If the central bank lowers it, then the commercial banks will lower their interest rates as well due to competition. Lower interest rates will cause an increase in the money supply as people are discouraged from saving and encouraged to borrow and invest. Lastly, the government can engage in open market operations by either buying or selling government bonds, which would respectively increase or decrease the money supply.
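As a rough illustration of the first tool, the simple money-multiplier model says that a given deposit can support at most deposit ÷ reserve ratio in total deposits as banks repeatedly re-lend their excess reserves. The sketch below is a minimal toy example, not a description of any real banking system; the figures and the `simple_money_multiplier` helper are invented for illustration.

```python
# Minimal sketch of the simple money-multiplier idea:
# with a reserve ratio r, an initial deposit D can support
# up to D / r in total deposits as banks re-lend excess reserves.

def simple_money_multiplier(initial_deposit: float, reserve_ratio: float, rounds: int = 50) -> float:
    """Iteratively re-lend excess reserves and return the total deposits created."""
    total_deposits = 0.0
    deposit = initial_deposit
    for _ in range(rounds):
        total_deposits += deposit
        lent_out = deposit * (1 - reserve_ratio)  # the bank keeps the required reserve
        deposit = lent_out                        # the loan is redeposited elsewhere
    return total_deposits

if __name__ == "__main__":
    for r in (0.20, 0.10):  # a lower reserve requirement supports a larger money supply
        print(f"reserve ratio {r:.0%}: total deposits ~ {simple_money_multiplier(1000, r):,.0f} "
              f"(theoretical limit {1000 / r:,.0f})")
```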
Monetary policy is an important concept to know in order to understand economics because it demonstrates how significant decisions are made based on expectations. The relationship between an economy's interest rates and money supply is what monetary policy is based on. Monetary policy is an important economic concept because it influences parts of the economy that affect our own lives, such as unemployment, inflation, and economic growth. |
In physics, a moment is a mathematical expression involving the product of a distance and physical quantity. Moments are usually defined with respect to a fixed reference point and refer to physical quantities located some distance from the reference point. In this way, the moment accounts for the quantity's location or arrangement. For example, the moment of force, often called torque, is the product of a force on an object and the distance from the reference point to the object. In principle, any physical quantity can be multiplied by a distance to produce a moment. Commonly used quantities include forces, masses, and electric charge distributions.
In its most basic form, a moment is the product of the distance to a point, raised to a power, and a physical quantity (such as force or electrical charge) at that point:

μ_n = r^n Q,

where Q is the physical quantity such as a force applied at a point, or a point charge, or a point mass, etc. If the quantity is not concentrated solely at a single point, the moment is the integral of that quantity's density over space:

μ_n = ∫ r^n ρ(r) dr,

where ρ(r) is the distribution of the density of charge, mass, or whatever quantity is being considered.
More complex forms take into account the angular relationships between the distance and the physical quantity, but the above equations capture the essential feature of a moment, namely the existence of an underlying r^n ρ(r) or equivalent term. This implies that there are multiple moments (one for each value of n) and that the moment generally depends on the reference point from which the distance is measured, although for certain moments (technically, the lowest non-zero moment) this dependence vanishes and the moment becomes independent of the reference point.
Each value of n corresponds to a different moment: the 1st moment corresponds to n = 1; the 2nd moment to n = 2, etc. The 0th moment (n = 0) is sometimes called the monopole moment; the 1st moment (n = 1) is sometimes called the dipole moment, and the 2nd moment (n = 2) is sometimes called the quadrupole moment, especially in the context of electric charge distributions.
- The moment of force, or torque, is a first moment: τ = rF, or, more generally, τ = r × F.
- Similarly, angular momentum is the 1st moment of momentum: L = r × p. Momentum itself is not a moment.
- The electric dipole moment is also a 1st moment: p = qd for two opposite point charges, or p = ∫ r ρ(r) dr for a distributed charge with charge density ρ(r).
Moments of mass:
- The total mass is the zeroth moment of mass.
- The center of mass is the 1st moment of mass normalized by total mass: R = (1/M) Σ m_i r_i for a collection of point masses, or R = (1/M) ∫ r ρ(r) dr for an object with mass distribution ρ(r).
- The moment of inertia is the 2nd moment of mass: I = m r² for a point mass, I = Σ m_i r_i² for a collection of point masses, or I = ∫ r² ρ(r) dr for an object with mass distribution ρ(r). The center of mass is often (but not always) taken as the reference point.
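As a small illustration of these definitions, the sketch below computes the zeroth, first, and second moments for a handful of point masses (total mass, center of mass, and moment of inertia about the z-axis) and a first moment of force (torque). The masses, positions, and force are invented for the example.

```python
import numpy as np

# A few point masses: mass in kg, position vector in m (invented for illustration).
masses = np.array([2.0, 1.0, 3.0])
positions = np.array([[1.0, 0.0, 0.0],
                      [0.0, 2.0, 0.0],
                      [-1.0, -1.0, 0.0]])

# 0th moment: total mass
total_mass = masses.sum()

# 1st moment of mass, normalized by total mass: center of mass
center_of_mass = (masses[:, None] * positions).sum(axis=0) / total_mass

# 2nd moment: moment of inertia about the z-axis through the origin,
# I = sum(m_i * d_i^2) with d_i the perpendicular distance to the axis.
d2 = positions[:, 0] ** 2 + positions[:, 1] ** 2
moment_of_inertia_z = (masses * d2).sum()

# A 1st moment of force: torque tau = r x F for a single applied force.
force = np.array([0.0, 5.0, 0.0])          # N, applied at positions[0]
torque = np.cross(positions[0], force)     # N*m

print(total_mass, center_of_mass, moment_of_inertia_z, torque)
```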
Multipole moments
The coefficients q_lm are known as multipole moments, and take the form:

q_lm = ∫ r′^l Y*_lm(θ′, φ′) ρ(r′) d³r′,

where r′, expressed in spherical coordinates (r′, θ′, φ′), is the variable of integration. A more complete treatment may be found in pages describing multipole expansion or spherical multipole moments. (The convention in the above equation was taken from Jackson – the conventions used in the referenced pages may be slightly different.)

When ρ represents an electric charge density, the q_lm are, in a sense, projections of the moments of electric charge: q_00 is the monopole moment; the q_1m are projections of the dipole moment; the q_2m are projections of the quadrupole moment, etc.
Applications of multipole moments
The multipole expansion applies to 1/r scalar potentials, examples of which include the electric potential and the gravitational potential. For these potentials, the expression can be used to approximate the strength of a field produced by a localized distribution of charges (or mass) by calculating the first few moments. For sufficiently large r, a reasonable approximation can be obtained from just the monopole and dipole moments. Higher fidelity can be achieved by calculating higher order moments. Extensions of the technique can be used to calculate interaction energies and intermolecular forces.
The technique can also be used to determine the properties of an unknown distribution ρ. Measurements pertaining to multipole moments may be taken and used to infer properties of the underlying distribution. This technique applies to small objects such as molecules, but has also been applied to the universe itself, being for example the technique employed by the WMAP and Planck experiments to analyze the cosmic microwave background radiation.
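A minimal sketch of the idea in Cartesian form, assuming a small set of point charges (invented for illustration): it computes the monopole moment (total charge), the dipole vector Σ qᵢ rᵢ, and the traceless quadrupole tensor Σ qᵢ(3 rᵢ rᵢᵀ − rᵢ² I). Real multipole analyses, as in Jackson, usually work with the spherical-harmonic coefficients q_lm instead, but these Cartesian quantities carry the same low-order information.

```python
import numpy as np

# Invented point-charge distribution: (charge, position) pairs.
charges = np.array([1.0, -1.0])                      # e.g. +q and -q
positions = np.array([[0.0, 0.0, 0.5],
                      [0.0, 0.0, -0.5]])             # separated by d = 1 along z

monopole = charges.sum()                             # 0th moment: total charge (here 0)

dipole = (charges[:, None] * positions).sum(axis=0)  # 1st moment: p = sum q_i r_i -> (0, 0, 1)

# 2nd moment: traceless Cartesian quadrupole tensor
# Q_jk = sum_i q_i * (3 x_ij x_ik - r_i^2 delta_jk)
r2 = (positions ** 2).sum(axis=1)
quadrupole = np.einsum("i,ij,ik->jk", charges, positions, positions) * 3
quadrupole -= np.diag([np.dot(charges, r2)] * 3)

print(monopole, dipole, quadrupole, sep="\n")
```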
In works believed to stem from Ancient Greece, the concept of a moment is alluded to by the word ῥοπή (rhopḗ, lit. "inclination") and composites like ἰσόρροπα (isorropa, lit. "of equal inclinations"). The context of these works is mechanics and geometry involving the lever. In particular, in extant works attributed to Archimedes, the moment is pointed out in phrasings like:
- "Commensurable magnitudes (σύμμετρα μεγέθεα) [A and B] are equally balanced (ἰσορροπέοντι)[a] if their distances [to the center Γ, i.e., ΑΓ and ΓΒ] are inversely proportional (ἀντιπεπονθότως) to their weights (βάρεσιν)."
Around 1450, Jacobus Cremonensis translates ῥοπή in similar texts into the Latin term momentum (lit. "movement"). The same term is kept in a 1501 translation by Giorgio Valla, and subsequently by Francesco Maurolico, Federico Commandino, Guidobaldo del Monte, Adriaan van Roomen, Florence Rivault, Francesco Buonamici, Marin Mersenne, and Galileo Galilei. That said, why was the word momentum chosen for the translation? One clue, according to Treccani, is that momento in Medieval Italy, the place the early translators lived, in a transferred sense meant both a "moment of time" and a "moment of weight" (a small amount of weight that turns the scale).[b]
"[...] equal weights at unequal distances do not weigh equally, but unequal weights [at these unequal distances may] weigh equally. For a weight suspended at a greater distance is heavier, as is obvious in a balance. Therefore, there exists a certain third kind of power or third difference of magnitude—one that differs from both body and weight—and this they call moment.[c] Therefore, a body acquires weight from both quantity [i.e., size] and quality [i.e., material], but a weight receives its moment from the distance at which it is suspended. Therefore, when distances are reciprocally proportional to weights, the moments [of the weights] are equal, as Archimedes demonstrated in The Book on Equal Moments.[d] Therefore, weights or [rather] moments like other continuous quantities, are joined at some common terminus, that is, at something common to both of them like the center of weight, or at a point of equilibrium. Now the center of gravity in any weight is that point which, no matter how often or whenever the body is suspended, always inclines perpendicularly toward the universal center.
In addition to body, weight, and moment, there is a certain fourth power, which can be called impetus or force.[e] Aristotle investigates it in On Mechanical Questions, and it is completely different from [the] three aforesaid [powers or magnitudes]. [...]"
In 1765, the Latin term momentum inertiae (English: moment of inertia) is used by Leonhard Euler to refer to one of Christiaan Huygens's quantities in Horologium Oscillatorium. Huygens's 1673 work involving finding the center of oscillation had been stimulated by Marin Mersenne, who suggested it to him in 1646.
In 1811, the French term moment d'une force (English: moment of force) with respect to a point and plane is used by Siméon Denis Poisson in Traité de mécanique. An English translation appears in 1842.
In 1884, the term torque is suggested by James Thomson in the context of measuring rotational forces of machines (with propellers and rotors). Today, a dynamometer is used to measure the torque of machines.
In 1893, Karl Pearson uses the term n-th moment in the context of curve-fitting scientific measurements. Pearson wrote in response to John Venn, who, some years earlier, had observed a peculiar pattern involving meteorological data and asked for an explanation of its cause. In Pearson's response, this analogy is used: the mechanical "center of gravity" is the mean and the "distance" is the deviation from the mean. This later evolved into moments in mathematics. The analogy between the mechanical concept of a moment and the statistical function involving the sum of the nth powers of deviations had been noticed by several writers earlier, including Laplace, Kramp, Gauss, Encke, Czuber, Quetelet, and De Forest.
See also
- Torque (or moment of force), see also the article couple (mechanics)
- Moment (mathematics)
- Mechanical equilibrium, applies when an object is balanced so that the sum of the clockwise moments about a pivot is equal to the sum of the anticlockwise moments about the same pivot
- Moment of inertia, analogous to mass in discussions of rotational motion. It is a measure of an object's resistance to changes in its rotation rate
- Moment of momentum, the rotational analog of linear momentum.
- Magnetic moment, a dipole moment measuring the strength and direction of a magnetic source.
- Electric dipole moment, a dipole moment measuring the charge difference and direction between two or more charges. For example, the electric dipole moment between a charge of –q and q separated by a distance of d is p = qd
- Bending moment, a moment that results in the bending of a structural element
- First moment of area, a property of an object related to its resistance to shear stress
- Second moment of area, a property of an object related to its resistance to bending and deflection
- Polar moment of inertia, a property of an object related to its resistance to torsion
- Image moments, statistical properties of an image
- Seismic moment, quantity used to measure the size of an earthquake
- Plasma moments, fluid description of plasma in terms of density, velocity and pressure
- List of area moments of inertia
- List of moments of inertia
- Multipole expansion
- Spherical multipole moments
- An alternative translation is "have equal moments" as used by Francesco Maurolico in the 1500s. A literal translation is "have equal inclinations".
- Treccani writes in its entry on moménto (translated from the Italian): "[...] to the medieval tradition, in which momentum meant, for the most part, a minimal portion of time, the smallest part of the hour (precisely, 1/40 of an hour, a minute and a half), but also a minimal quantity of weight, and hence the pointer of the balance (the application of a moment of weight is enough to break the equilibrium and make the balance tip over in a moment);"
- In Latin: momentum.
- The modern translation of this book is "on the equilibrium of planes". The translation "on equal moments (of planes)" as used by Maurolico is also echoed in his four-volume book called De momentis aequalibus ("about equal moments") where he applies Archimedes' ideas to solid bodies.
- In Latin: impetus or vis. This fourth power was the intellectual precursor to the English Latinism momentum, also called quantity of motion.
- This is very much in line with other Latin -entum words such as documentum, monumentum, or argumentum which turned into document, monument, and argument in French and English.
- J. D. Jackson, Classical Electrodynamics, 2nd edition, Wiley, New York, (1975). p. 137
- Spackman, M. A. (1992). "Molecular electric moments from x-ray diffraction data". Chemical Reviews. 92 (8): 1769–1797. doi:10.1021/cr00016a005.
- Dittrich and Jayatilaka, Reliable Measurements of Dipole Moments from Single-Crystal Diffraction Data and Assessment of an In-Crystal Enhancement , Electron Density and Chemical Bonding II, Theoretical Charge Density Studies, Stalke, D. (Ed), 2012, https://www.springer.com/978-3-642-30807-9
- Baumann, Daniel (2009). "TASI Lectures on Inflation". arXiv:0907.5424 [hep-th].
- Mersenne, Marin (1634). Les Méchaniques de Galilée. Paris. pp. 7–8.
- Clagett, Marshall (1964–84). Archimedes in the Middle Ages (5 vols in 10 tomes). Madison, WI: University of Wisconsin Press, 1964; Philadelphia: American Philosophical Society, 1967–1984.
- ῥοπή. Liddell, Henry George; Scott, Robert; A Greek–English Lexicon at the Perseus Project
- Clagett, Marshall (1959). The Science of Mechanics in the Middle Ages. Madison, WI: University of Wisconsin Press.
- Dijksterhuis, E. J. (1956). Archimedes. Copenhagen: E. Munksgaard. p. 288.
- "moment". Oxford English Dictionary. 1933.
- Galluzzi, Paolo (1979). Momento. Studi Galileiani. Rome: Edizioni dell' Ateneo & Bizarri.
- Euler, Leonhard (1765). Theoria motus corporum solidorum seu rigidorum: Ex primis nostrae cognitionis principiis stabilita et ad omnes motus, qui in huiusmodi corpora cadere possunt, accommodata [The theory of motion of solid or rigid bodies: established from first principles of our knowledge and appropriate for all motions which can occur in such bodies.] (in Latin). Rostock and Greifswald (Germany): A. F. Röse. p. 166. ISBN 978-1-4297-4281-8. From page 166: "Definitio 7. 422. Momentum inertiae corporis respectu eujuspiam axis est summa omnium productorum, quae oriuntur, si singula corporis elementa per quadrata distantiarum suarum ab axe multiplicentur." (Definition 7. 422. A body's moment of inertia with respect to any axis is the sum of all of the products, which arise, if the individual elements of the body are multiplied by the square of their distances from the axis.)
- Huygens, Christiaan (1673). Horologium oscillatorium, sive de Motu pendulorum ad horologia aptato demonstrationes geometricae (in Latin). p. 91.
- Huygens, Christiaan (1977–1995). "Center of Oscillation (translation)". Translated by Mahoney, Michael S. Retrieved 22 May 2022.
- Poisson, Siméon-Denis (1811). Traité de mécanique, tome premier. p. 67.
- Thompson, Silvanus Phillips (1893). Dynamo-electric machinery: A Manual For Students Of Electrotechnics (4th ed.). New York, Harvard publishing co. p. 108.
- Thomson, James; Larmor, Joseph (1912). Collected Papers in Physics and Engineering. University Press. p. civ.
- Pearson, Karl (October 1893). "Asymmetrical Frequency Curves". Nature. 48 (1252): 615–616. Bibcode:1893Natur..48..615P. doi:10.1038/048615a0. S2CID 4057772.
- Venn, J. (September 1887). "The Law of Error". Nature. 36 (931): 411–412. Bibcode:1887Natur..36..411V. doi:10.1038/036411c0. S2CID 4098315.
- Walker, Helen M. (1929). Studies in the history of statistical method, with special reference to certain educational problems. Baltimore, Williams & Wilkins Co. p. 71. |
ALGEBRAIC EXPRESSIONS AND IDENTITIES- ESSENTIAL POINTS
Expressions are formed from variables and constants.
Constant: A symbol having a fixed numerical value. Example: 4, ½ , 4.5, etc.
Variable: A symbol which takes various numerical values. Example: x, y, z, etc.
Algebraic Expression: A combination of constants and variables connected by the signs +, –, × and ÷ is called an algebraic expression.
Terms are added to form expressions. Terms themselves are formed as product of factors.
Expressions that contain exactly one, two and three terms are called monomials, binomials and trinomials respectively.
Like terms have same variables and same powers of these variables, while their coefficients may be different.
While adding (or subtracting) polynomials, first add or subtract like terms.
Monomial: An expression containing only one term. Example: -3, 4x, 3xy, etc.
Binomial: An expression containing two terms. Example: 2x-3, 4x+3y, xy-4, etc.
Trinomial: An expression containing three terms. Example: 2x² + 3xy + 9, 3x + 2y + 5z, etc.
Polynomial: In general, any expression containing one or more terms with non-zero coefficients (and with variables having non-negative exponents). A polynomial may contain any number of terms, one or more than one.
A monomial multiplied by a monomial always gives a monomial.
We multiply every term in the polynomial by the monomial while multiplying a polynomial by a monomial
In carrying out the multiplication of a polynomial by a binomial (or trinomial), we multiply term by term, i.e., every term of the polynomial is multiplied by every term in the binomial (or trinomial).
An identity is an equality, which is true for all values of the variables in the equality. On the other hand, an equation is true only for certain values of its variables. An equation is not an identity.
The following are the standard identities:
(a + b)² = a² + 2ab + b²
(a – b)² = a² – 2ab + b²
(a + b)(a – b) = a² – b²
(y + a)(y + b) = y² + (a + b)y + ab
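These identities can be checked symbolically; the following is a minimal sketch using the sympy library (assuming it is installed), expanding each left-hand side and confirming that it matches the right-hand side.

```python
import sympy as sp

a, b, y = sp.symbols('a b y')

identities = [
    (sp.expand((a + b) ** 2), a**2 + 2*a*b + b**2),
    (sp.expand((a - b) ** 2), a**2 - 2*a*b + b**2),
    (sp.expand((a + b) * (a - b)), a**2 - b**2),
    (sp.expand((y + a) * (y + b)), y**2 + (a + b)*y + a*b),
]

for lhs, rhs in identities:
    # An identity holds for all values of the variables, so the difference simplifies to 0.
    assert sp.simplify(lhs - rhs) == 0

print("All four standard identities verified.")
```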
Coefficient: In a term of an expression, any one of the factors (taken with the sign of the term) is called the coefficient of the product of the other factors.
Terms: Various parts of an algebraic expression which are separated by + and – signs. Example: The expression 4x + 5 has two terms 4x and 5.
Constant Term: A term of an expression having no literal factor.
Like terms: Terms having the same literal factors. Example: 2xy and –4xy are like terms.
Unlike terms: Terms having different literal factors. Example: 4x² and 3xy are unlike terms.
Factors: Each term in an algebraic expression is a product of one or more numbers and/or literals. These numbers and/or literals are known as the factors of that term. A numerical factor is a constant factor, while a literal factor is a variable factor. For example, the term 8x is the product of its factors 8 and x.
The degree of a polynomial is the highest power of the variables in the polynomial.
The degree of a (non-zero) constant polynomial is always 0.
The product of a positive and a negative quantity is always negative. |
- In this lesson you will learn how to write equations of quantities which vary inversely.
- Graphs of inverse relationships will be modified to show a linear relationship.
Consider the following example:
- Quantities vary inversely if they are related by the relationship xy = k.
- Another way to express this is y = k/x.
- We also say that y varies inversely with x.
- When quantities vary inversely, the constant k is called the constant of proportionality.
- Quantities which vary inversely are also said to be inversely proportional.
Suppose that xy = 3, that is, y = 3/x. The constant of proportionality is 3. A graph of this relationship for x > 0 is shown below.
This is in fact one branch of a hyperbola. The other branch is located in Quadrant III and is found if the values of x are negative. Features of the graph to notice are the characteristic shape sloping negatively and the fact that the graph approaches the x-axis as x gets large (end behavior).
A table of values containing x, 1/x, and y is shown below.
(The initial graph plots points from the table using (x, y) as coordinates.)
The second graph plots the points (w, y), that is (1/x, y), as coordinates.
Notice that these points are collinear with slope 3.
Notice in the second graph that the points lie on a straight line. The slope of this line is 3. This is no accident. The equation y = 3/x can be written as y = 3(1/x). In this form, 1/x acts as the independent variable. If we let w = 1/x, we get y = 3w and can clearly see a linear equation with slope 3. The process of rewriting y = 3/x as y = 3w is known as "rectifying into a line." The equation for the original branch of a hyperbola is rewritten so that the data can be represented linearly.
According to Boyle's Law, the product of the pressure, P, and volume, V, of a gas under constant temperature is a constant. Calling this constant c, we get PV = c. This can also be written as P = cw, where w = 1/V. In this case, a graph of P = c/V would be a hyperbolic curve, whereas a graph of P = cw would be a line of slope c.
For example, if P = 4 × 10^5 N/m² when V = 0.5 m³, we have PV = c = 2 × 10^5 N·m.
Notice that our variables need units of measurement since they represent experimental data. Pressure is the ratio of force per unit area. One atmosphere of pressure equals 1.01 × 10^5 N/m², where the unit N/m² is often called a pascal. The volume of the confined gas is measured in cubic meters, or m³. The linear representation of this data is obtained by rectifying the data into a line: P = (2 × 10^5)w, where w = 1/V. This is a line with slope 2 × 10^5 N·m and is easier to visualize graphically.
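The rectification step can be checked numerically. The sketch below is a small illustration, assuming a handful of invented volume values satisfying PV = 2 × 10^5 N·m: it forms w = 1/V and fits a straight line through the (w, P) points, recovering the constant c as the slope.

```python
import numpy as np

c = 2e5                                  # N*m, the constant in PV = c
V = np.array([0.25, 0.5, 1.0, 2.0])      # m^3, invented sample volumes
P = c / V                                # hyperbolic form P = c / V

w = 1.0 / V                              # rectifying substitution w = 1/V
slope, intercept = np.polyfit(w, P, 1)   # fit P = slope*w + intercept

print(f"fitted slope ~ {slope:.3e} N*m (expected {c:.3e}), intercept ~ {intercept:.1e}")
```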
The table below shows the values for x, 1/x, and y.
(The first graph shows the hyperbolic representation; its WINDOW is X: (0, 3, 1) and Y: (-1, 6, 1). A sample row from the table: x = 3.5, 1/x ≈ 0.29, y ≈ 0.13.)
Using the same WINDOW, this second graph plots the points (1/x, y) as shown in the table.
Notice that these points are collinear with slope 0.45. |
Monte Carlo Simulation
Good morning! Today, we’ll discuss Monte Carlo Simulation, a modeling technique that accounts for uncertainty. Monte Carlo Simulation is named after Monte Carlo, a very famous casino resort in Monaco. The technique was introduced during the Second World War, when a group of scientists were working on the atom bomb. The scientists, John von Neumann and Stanislaw Ulam, used Monte Carlo simulation to explore the effects of neutrons that go through radiation shielding.
How Does Monte Carlo Simulation Work?
To understand how Monte Carlo Simulation works, let’s use an example.
Company A has produced 100 tables and expects to sell all of them during the next year. However, the price and the cost of each table for the next year are uncertain. Company A is certain only about being able to sell all the tables. Moreover, looking at the historic price and historic cost, Company A's analysts say that the price will range from $100 to $250, and the most likely value for the price will be $120. As for the cost of tables, the analysts think that the cost will range from $50 to $100, and the most likely value for the cost will be $90. Our objective is to estimate the possible profit of Company A after the company sells all of its tables.
Step 1: Determine uncertain parameters and their probability distribution.
(Probability distribution illustrates how likely each possible outcome is to occur.)
We know that for our problem, price and cost are uncertain parameters. For our case, we will use triangular distribution, whose graphs for each parameter are displayed below.
(The most popular probability distributions are normal, uniform, triangular, and discrete. For different cases, you will use different distributions.)
Step 2: Take random samples of uncertain parameters and calculate the output.
If we run Monte Carlo Simulation on any software (for instance, Analytics Solver Platform), we will have to choose how many times the software should randomly draw a price and a cost from the distributions above to calculate profit. The more repetitions we use, the more closely our simulated distribution of profits will resemble a smooth, roughly bell-shaped distribution centered at its mean. Below is the graph of how our distribution of profits would look after many repetitions. The average over 1,000 repetitions gives our estimate of the expected profit for one unit.
Finally, we'll receive the following results: the most likely price is $120 and the most likely cost is $90, so the profit implied by these most likely values is $120 – $90 = $30 per unit.
This is, of course, a very simple example. In real life, applications of Monte Carlo Simulation are much more complicated and might involve many uncertain parameters and other values in the model.
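The sketch below is a minimal Python version of this example, assuming the triangular distributions described above (price between $100 and $250 with mode $120, cost between $50 and $100 with mode $90) and 1,000 repetitions; it is only an illustration, not any particular software's implementation.

```python
import numpy as np

rng = np.random.default_rng(seed=0)
n_reps = 1_000
units = 100                                        # tables Company A expects to sell

# Step 1: uncertain parameters with triangular distributions (min, mode, max)
price = rng.triangular(100, 120, 250, size=n_reps)
cost = rng.triangular(50, 90, 100, size=n_reps)

# Step 2: compute the output (total profit) for each random sample
profit = (price - cost) * units

print(f"mean profit per unit : {profit.mean() / units:8.2f}")
print(f"mean total profit    : {profit.mean():10.2f}")
print(f"5th-95th percentile  : {np.percentile(profit, 5):10.2f} "
      f"to {np.percentile(profit, 95):10.2f}")
```

Note that the simulated mean reflects the whole triangular distributions, whose means are (100 + 120 + 250)/3 ≈ $156.7 for price and (50 + 90 + 100)/3 = $80 for cost, so the average simulated profit per unit comes out near $76-77 rather than the $30 implied by the most likely values alone.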
Most Common Applications of Monte Carlo Simulation
Here are the most common examples of the application of Monte Carlo Simulation:
• Risk analysis. Monte Carlo Simulation is one of the most prominent risk-analysis tools and is widely used in areas with great ambiguity and uncertainty.
• Financial forecasting. We use Monte Carlo Simulation for financial forecasting, project management, and other areas.
That’s it for today. Tomorrow, we will finish our course by discussing other techniques in business analytics.
Descriptive Statistics > Grouped Data
The data is grouped together by classes or bins.
Ungrouped data is the data you first gather from an experiment or study. The data is raw — that is, it’s not sorted into categories, classified, or otherwise grouped. An ungrouped set of data is basically a list of numbers.
When you have a frequency table or other group of data, the original set of data is lost — replaced with statistics for the group. You can't find the exact sample mean (as you don't have the original data) but you can find an estimate. The formula for estimating the sample mean for grouped data is:

x̄ = Σ(f · x) / Σf,

where x is the midpoint of each class and f is the class frequency.
Example question: Find the sample mean for the following frequency table.
| Score | Frequency (f) |
|---|---|
| Between 5 and 10 | 1 |
| 10 ≤ t < 15 | 4 |
| 15 ≤ t < 20 | 6 |
| 20 ≤ t < 25 | 4 |
| 25 ≤ t < 30 | 2 |
| 30 ≤ t < 35 | 3 |
Step 1: Find the midpoint for each class interval. The midpoint is just the middle of each interval. For example, the middle of 10 and 15 is 12.5:
| Score | Frequency (f) | Midpoint (x) |
|---|---|---|
| Between 5 and 10 | 1 | 7.5 |
| 10 ≤ t < 15 | 4 | 12.5 |
| 15 ≤ t < 20 | 6 | 17.5 |
| 20 ≤ t < 25 | 4 | 22.5 |
| 25 ≤ t < 30 | 2 | 27.5 |
| 30 ≤ t < 35 | 3 | 32.5 |
Step 2: Multiply the midpoint (x) by the frequency (f):
| Score | Frequency (f) | Midpoint (x) | Midpoint x × frequency f |
|---|---|---|---|
| Between 5 and 10 | 1 | 7.5 | 7.5 |
| 10 ≤ t < 15 | 4 | 12.5 | 50 |
| 15 ≤ t < 20 | 6 | 17.5 | 105 |
| 20 ≤ t < 25 | 4 | 22.5 | 90 |
| 25 ≤ t < 30 | 2 | 27.5 | 55 |
| 30 ≤ t < 35 | 3 | 32.5 | 97.5 |
Add up all of the totals for this step. In other words, add up all the values in the last column (you should get 405).
Step 3: Divide the total from Step 2 (Σ f·x) by the sum of the frequencies (Σ f):
The mean of grouped data (x̄) = 405 / 20 = 20.25.
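The same calculation can be written as a short sketch (the frequencies and midpoints are taken from the table above):

```python
# Grouped-data sample mean: x_bar = sum(f * midpoint) / sum(f)
frequencies = [1, 4, 6, 4, 2, 3]
midpoints = [7.5, 12.5, 17.5, 22.5, 27.5, 32.5]

fx_total = sum(f * x for f, x in zip(frequencies, midpoints))   # 405
f_total = sum(frequencies)                                      # 20

print(f"estimated sample mean = {fx_total} / {f_total} = {fx_total / f_total}")  # 20.25
```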
Lines And Angles
(a) Segment: A part of line with two end points is called a line-segment.
A line segment is denoted by AB and its length is denoted by AB.
(b) Ray: A part of a line with one end-point is called a ray.
We can denote a line-segment AB, a ray AB and length AB and line AB by the same symbol AB.
(c) Collinear points: If three or more points lie on the same line, then they are called collinear points, otherwise they are called non-collinear points.
An angle is formed by two rays originating from the same end point.
The rays making an angle are called the arms of the angle and the end-points are called the vertex of the angle.
Types of Angles:
(i) Acute angle: An angle whose measure lies between 0° and 90°, is called an acute angle.
(ii) Right angle: An angle, whose measure is equal to 90°, is called a right angle.
(iii) Obtuse angle: An angle, whose measure lies between 90° and 180°, is called an obtuse angle.
(iv) Straight angle: The measure of a straight angle is 180°.
(v) Reflex angle: An angle which is greater than 180° and less than 360° is called a reflex angle.
(vi) Complementary angles: Two angles whose sum is 90° are called complementary angles.
(vii) Supplementary angles: Two angles whose sum is 180° are called supplementary angles.
(viii) Adjacent angle: Two angles are adjacent, if they have a common vertex, common arm and their non-common arms are on different sides of the common arm.
In the above figure, ∠ABD and ∠DBC are adjacent angles. Ray BD is their common arm and point B is their common vertex. Ray BA and ray BC are the non-common arms.
When the two angles are adjacent, then their sum is always equal to the angle formed by two non-common arms.
Thus, ∠ABC = ∠ABD + ∠DBC
Here, we can observe that ∠ABC and ∠DBC are not adjacent angles, because their non-common arms BD and AB lie on the same side of the common arm BC.
(ix) Linear pair of angles: If the sum of two adjacent angles is 180º, then their non-common lines are in the same straight line and two adjacent angles form a linear pair of angles.
In this figure, ∠ABD and ∠CBD form a linear pair of angles because ∠ABD + ∠CBD = 180°
(x) Vertically opposite angles: When two lines AB and CD intersect at a point O, the vertically opposite angles are formed.
Here are two pairs of vertically opposite angles. One pair is ∠AOD and ∠BOC, and the second pair is ∠AOC and ∠BOD. The vertically opposite angles are always equal.
So, ∠AOD = ∠BOC
And ∠AOC = ∠BOD
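A small sketch that applies some of these definitions, classifying an angle by its measure and checking complementary and supplementary (linear-pair) relationships; the helper functions are invented for illustration.

```python
def classify(angle: float) -> str:
    """Classify an angle (in degrees) using the definitions above."""
    if 0 < angle < 90:
        return "acute"
    if angle == 90:
        return "right"
    if 90 < angle < 180:
        return "obtuse"
    if angle == 180:
        return "straight"
    if 180 < angle < 360:
        return "reflex"
    return "not covered by these definitions"

def are_complementary(a: float, b: float) -> bool:
    return a + b == 90

def are_supplementary(a: float, b: float) -> bool:
    return a + b == 180   # adjacent angles with this property form a linear pair

print(classify(35), classify(120), classify(200))
print(are_complementary(35, 55), are_supplementary(110, 70))
```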
(e) Intersecting lines and non-intersecting lines: Two lines are intersecting if they have one point in common. We have observed in the above figure that lines AB and CD are intersecting lines, intersecting at O, their point of intersection. |
What Is the History of Recall Elections?History Q & A
The recall has always been at the forefront of a fundamental question about the role of elected officials, namely whether the official should act as a trustee and vote his own opinion or perform as a delegate and vote according to the wishes of his constituency. This long-running debate continues to this day with criticism of poll-driven politicians. This clash of ideologies was much in evidence during the debate about the recall's place in the new U.S. Constitution.
The actual origins of the recall are shrouded in conjecture. Its modern-day creator, Dr. John Randolph Haynes, claimed that it was "derived historically from Greek and Latin sources...." However, the authors of many of the works on the practice cite Haynes as expropriating the idea from the Swiss.
While the first instance of the recall can be found in the laws of the General Court of the Massachusetts Bay Colony of 1631, and again in the Massachusetts Charter of 1691, the recall gained a firm footing in American politics with the democratic ideals that burst forth from the American Revolution. After declaring their independence, 11 of the 13 colonies wrote new constitutions, and many of these documents showed the new spirit of democracy. They specifically spelled out the laws in their constitution, which was a sharp departure from the unwritten British constitution. Most lessened the power of the executive and strengthened the legislature. Some opened up the right to vote to a larger portion of the population. And a few states wrote the recall into law as a method of controlling their elected representatives.
The states which adopted the recall were mainly concerned with the power of the representatives who served the states in the national government's congress. Unlike its modern-day counterpart, the seventeenth- and eighteenth-century versions of the recall involved the removal of an official by another elected body, such as a state legislature recalling its United States senator. While this form implies a different relationship between the elected official and the general population, the principles and the debates that engulfed the issue had not substantially changed.
The Revolution's success led the states to form a government under the Articles of Confederation, which were finally ratified in 1781. The government under the Articles was weak and at the mercy of the individual states. Unsurprisingly, the recall was included in the Articles of Confederation. According to recall proponent and New York delegate John Lansing, the recall was never exercised by any of the states throughout the brief history of the Confederation.
As the Articles of Confederation government proved a failure in leading the new country, some of the brightest lights in America met in Philadelphia in 1787 and drafted the new Constitution. There is a plethora of materials on the Constitutional Convention, the debates surrounding its adoption, and its eventual impact. However, the issue of the recall has been mostly ignored, despite the fact that the idea was discussed. It was proposed by Edmund Randolph in his presentation of the Virginia Plan on May 29. The plan would have allowed the recall of the members of the first house of the legislature, who were directly elected by the people. On June 12, the convention passed Charles Pinckney's motion to strike out the recall. The only other mention of the procedure in Madison's notes on the convention was a speech by future Vice President Elbridge Gerry exploring how the convention exceeded its mandate.
The argument for the recall was a strong component of the anti-federalist attack. The American Revolution was in many ways an attack on the existing power structure, or as Carl Becker said it was not just about home rule, but who rules at home. The new Constitution, in the view of many leading anti-federalists, was a conservative reaction to the American Revolution. One of the major opponents of the Constitution, Luther Martin, stressed the absence of a recall for senators, and the freedom from popular control that this absence represented, as a reason to reject the document. Martin was opposed to granting senators, who were elected by the state legislators and were seen as representing the more traditional aristocratic population, a large degree of freedom. He feared that senators would disregard their position as delegates of the people, and be free to work against the interests of their own states. Martin said: "Thus, sir, for six years, the senators are rendered totally and absolutely independent of their states, of whom they ought to be the representatives, without any bond or tie between them."
The idea of tightly binding the senators to their states was strongly opposed by the Federalists, most notably Alexander Hamilton. The topic gained new life when the Constitution was sent to the states to ratify. Each state elected a ratifying convention to approve or disapprove of the Constitution. Nine of the thirteen states' votes were required for ratification. The topic took up several days of debate in the New York Ratifying Convention and was also proposed in the Massachusetts Convention. Using arguments that opponents of the recall would still be making more than a century later, Hamilton feared that the recall "will render the senator a slave to all the capricious humors among the people."
In New York's Ratifying Convention on June 24, 1788, Gilbert Livingston introduced a measure calling for the recall of senators by state legislatures. Livingston was concerned that states would have "little or no check" on senators who have a six year term of office. John Lansing, an opponent of the new Constitution, said in words that echoed more than a century later, "they (the Senators) will lose their respect for the power from whom they receive their existence, and consequently disregard the great object for which they are instituted."
Hamilton denied the premise that the state legislatures would be more in tune with the will of the people, and argued that the recall would prevent the senators from being able to make difficult decisions. Hamilton said, "in whatever body the power of recall is vested, the senator will perpetually feel himself in such a state of vassalage and dependence, that he never can possess that firmness which is necessary to the discharge of his great duty to the Union."
By the time the New York Convention finally ratified the Constitution, enough states had ratified to form the government. However, there were still attempts to bring up various amendments to the new Constitution. Rhode Island, the last state to ratify in 1790, proposed 21 amendments, including granting state legislatures the power to recall their federal senators. However, the recall did not have the backing to continue as a major topic of debate after the failure of the anti-federalists. The recall of senators came up twice more, as the legislature in Virginia attempted to bring the topic up as a constitutional amendment in 1803 and 1808. The 1808 amendment was met by resolutions of disapproval from six states.
The recall received a considerable degree of support in America's early years. However, its proposed use as a weapon against the power of federal government officers failed to generate sufficient excitement to push its way through to adoption. With the Federalists' victory, the recall went into hibernation. It was not until the early part of the twentieth century, when the country was faced with a very different set of circumstances, that the recall reemerged as a viable political option. By that time, the field of debate had shifted to the state level, with the people themselves possessing the power of the recall. But the focus of the debates and the nature of the arguments had remained the same.
Eric D Frank - 2/2/2007
Do you by any chance know which of the 13 colonies actually wrote a recall provision in to their original constitutions?
JOHN - 1/18/2004
DOES NEW YORK STATE HAVE A RECALL PROVISION AND IF SO DOES IT APPLY TO ELECTED OFFICIAL (EX. LOCAL TO STATE LEVEL). WHAT ARE THE LAWS AND RULES THAT APPLY.
Rod Farmer - 10/30/2003
I enjoyed your web site. In case you are interested, I wrote the following article:
Farmer, Rod. "Power to the People: The Progressive Movement for the Recall, 1890s-1920," The New England Journal of History,
Winter, 2001, Vol. 57, No. 2, pp. 59-83.
bob adams - 10/8/2003
Which states have this provision? Does it apply
uniformly to all elected officials?
G Bozeman - 10/7/2003
The Iroquois practiced a form of the recall in the Iroquois Confederacy and the Six Nations. Traditionally, the women of each clan selected a male member of the clan as a representative. If this representative failed to perform his job to the benefit of his clan, he was "fired" by the women of the clan and another man was selected to replace him.
I find it interesting that, here in the most successful democracy in the world, the act of recalling an errant elected official has been practiced so few times. As a Political Science teacher, I feel it is imperative that I stress the mechanisms by which "We The People" must maintain our authority over our government.
Joshua Spivak - 10/5/2003
In 1921, Lynn Frazier, the Governor of North Dakota was successfully recalled, along with the Attorney General and the Commissioner of Agriculture and Labor. It didn't hurt Frazier's career that much: He was elected to the U.S. Senate 2 years later.
In addition, the Governor of Arizona, Evan Mecham, was about to face a recall election in 1988, but he was impeached before the vote took place.
D. R. Taylor - 10/5/2003
I have read and heard several times that only one other governor in US history has faced a recall election. But none of the news clips or articles has stated who the governor was or when the event occurred.
Who was the first US governor to face a recall election?
Joshua Spivak - 10/3/2003
I'm not that well informed on the Omaha platform or on the Populist Party. However, I do know that the Socialist Labor and the Populist Party included the "Imperative Mandate," which was an earlier version of the recall, in their party platforms in the 1890s. Before John Randolph Haynes's successful championing of the recall, many of the direct legislationists did not want to include it on the same level as the initiative and referendum. They felt that it would be construed as a personal attack on an elected official.
John King - 10/3/2003
Is there any evidence that the recall was debated or considered as part of the drafting of the Omaha Platform which did endorse the referendum and the right of petition, as well as direct election of US Senators?
K. G. Schneider - 9/30/2003
Very interesting, useful article. Could you please tweak the following sentence: "Despite its infrequently usage, the recall has a long, if spotty, history in America..." We would like to feature this article in our database this Thursday, and that typo sticks out like a sore loser--I mean, thumb.
K. G. Schneider
Director, Librarians' Index to the Internet
Joshua Spivak - 9/16/2003
I agree. I appreciate you bringing the subject to my attention. I intend to mention it in any future writing I do on this subject.
Oscar Chamberlain - 9/15/2003
Thank you for the links.
I had understood the distinction. It simply struck me that the similarity was revealing.
Joshua Spivak - 9/7/2003
I should explain that the reason Instructions were not the equivalent to the recall is that Senators were under no legal obligation to either follow the instructions or resign. This is in marked contrast to the recall, which would cause the removal of a Senator from office by force of law.
Below are the links to two websites that discuss the use of Instructions in the 18th and 19th Century Senate.
Joshua Spivak - 9/5/2003
I've heard of such behavior, but I don't know that I would consider that a recall. I do remember a similar situation in 1880, when New York Senators Roscoe Conkling and Thomas Platt resigned over a disagreement with President Garfield regarding presidential appointments. They expected the NY State legislature to reappoint them in a show of force, but their gambit failed.
Oscar Chamberlain - 9/4/2003
In at least some states in the antebellum period, there was a tradition that a Senator would resign if he felt he could not follow the instructions of the legislature.
I know I have seen such a debate in Michigan, when one of its senators used the threat of retiring as a way of avoiding instructions being passed. (If memory serves, this occurred in the debate leading up to the Compromise of 1850).
I would be curious if anyone knows of other examples. |
United States Congress
- Current session: 115th United States Congress
- Chambers: Senate and House of Representatives
- Founded: March 4, 1789
- Preceded by: Congress of the Confederation
- New session started: January 3, 2017
- Membership: 535 voting members
- Last elections (House and Senate): November 8, 2016
- Meeting place: United States Capitol, Washington, D.C., United States
The United States Congress is the bicameral legislature of the federal government of the United States consisting of two chambers: the Senate and the House of Representatives. The Congress meets in the Capitol in Washington, D.C. Both senators and representatives are chosen through direct election, though vacancies in the Senate may be filled by a gubernatorial appointment. Members are usually affiliated to the Republican Party or to the Democratic Party, and only rarely to a third party or as independents. Congress has 535 voting members: 435 Representatives and 100 Senators. The House of Representatives has six non-voting members in addition to its 435 voting members. These members can, however, sit on congressional committees and introduce legislation. These members represent Washington, D.C., Puerto Rico, American Samoa, Guam, the Northern Mariana Islands and the U.S. Virgin Islands.
The members of the House of Representatives serve two-year terms representing the people of a single constituency, known as a "district". Congressional districts are apportioned to states by population using the United States Census results, provided that each state has at least one congressional representative. Each state, regardless of population or size, has two senators. Currently, there are 100 senators representing the 50 states. Each senator is elected at-large in their state for a six-year term, with terms staggered, so every two years approximately one-third of the Senate is up for election.
- 1 Overview
- 2 History
- 3 Congress in the United States government
- 4 Structure
- 5 Procedures of Congress
- 6 Congress and the public
- 7 Privileges and pay
- 8 See also
- 9 Notes
- 10 References
- 11 Further reading
- 12 External links
Article One of the United States Constitution states, "All legislative Powers herein granted shall be vested in a Congress of the United States, which shall consist of a Senate and House of Representatives." The House and Senate are equal partners in the legislative process—legislation cannot be enacted without the consent of both chambers. However, the Constitution grants each chamber some unique powers. The Senate ratifies treaties and approves presidential appointments while the House initiates revenue-raising bills. The House initiates impeachment cases, while the Senate decides impeachment cases. A two-thirds vote of the Senate is required before an impeached person can be forcibly removed from office.
The term Congress can also refer to a particular meeting of the legislature. A Congress covers two years; the current one, the 115th Congress, began on January 3, 2017, and will end on January 3, 2019. The Congress starts and ends on the third day of January of every odd-numbered year. Members of the Senate are referred to as senators; members of the House of Representatives are referred to as representatives, congressmen, or congresswomen.
Scholar and representative Lee H. Hamilton asserted that the "historic mission of Congress has been to maintain freedom" and insisted it was a "driving force in American government" and a "remarkably resilient institution". Congress is the "heart and soul of our democracy", according to this view, even though legislators rarely achieve the prestige or name recognition of presidents or Supreme Court justices; one wrote that "legislators remain ghosts in America's historical imagination". One analyst argues that it is not a solely reactive institution but has played an active role in shaping government policy and is extraordinarily sensitive to public pressure. Several academics described Congress:
Congress reflects us in all our strengths and all our weaknesses. It reflects our regional idiosyncrasies, our ethnic, religious, and racial diversity, our multitude of professions, and our shadings of opinion on everything from the value of war to the war over values. Congress is the government's most representative body ... Congress is essentially charged with reconciling our many points of view on the great public policy issues of the day.
- —Smith, Roberts, and Wielen
Congress is constantly changing and is constantly in flux. In recent times, the American south and west have gained House seats according to demographic changes recorded by the census and includes more minorities and women although both groups are still underrepresented, according to one view. While power balances among the different parts of government continue to change, the internal structure of Congress is important to understand along with its interactions with so-called intermediary institutions such as political parties, civic associations, interest groups, and the mass media.
The Congress of the United States serves two distinct purposes that overlap: local representation to the federal government of a congressional district by representatives and a state's at-large representation to the federal government by senators.
Most incumbents seek re-election, and their historical likelihood of winning subsequent elections exceeds 90 percent.
Congress is directly responsible for the governing of the District of Columbia, the current seat of the federal government.
The First Continental Congress was a gathering of representatives from twelve of the thirteen British Colonies in North America. On July 4, 1776, the Second Continental Congress adopted the Declaration of Independence, referring to the new nation as the "United States of America". The Articles of Confederation in 1781 created the Congress of the Confederation, a unicameral body with equal representation among the states in which each state had a veto over most decisions. Congress had executive but not legislative authority, and the federal judiciary was confined to admiralty; the Confederation government lacked authority to collect taxes, regulate commerce, or enforce laws.
Government powerlessness led to the Convention of 1787 which proposed a revised constitution with a two-chamber or bicameral congress. Smaller states argued for equal representation for each state. The two-chamber structure had functioned well in state governments. A compromise plan was adopted with representatives chosen by population (benefiting larger states) and exactly two senators chosen by state governments (benefiting smaller states). The ratified constitution created a federal structure with two overlapping power centers so that each citizen as an individual was subject to both the power of state government and the national government. To protect against abuse of power, each branch of government—executive, legislative, and judicial—had a separate sphere of authority and could check other branches according to the principle of the separation of powers. Furthermore, there were checks and balances within the legislature since there were two separate chambers. The new government became active in 1789.
Political scientist Julian E. Zelizer suggested there were four main congressional eras, with considerable overlap: the formative era (1780s–1820s), the partisan era (1830s–1900s), the committee era (1910s–1960s), and the contemporary era (1970s–today).
The formative era (1780s–1820s)
Federalists and anti-federalists jostled for power in the early years as political parties became pronounced, surprising the Founding Fathers of the United States. With the passage of the Constitution and the Bill of Rights, the Anti-Federalist movement was exhausted. Some activists joined the Anti-Administration Party that James Madison and Thomas Jefferson were forming about 1790–91 to oppose policies of Treasury Secretary Alexander Hamilton; it soon became the Democratic-Republican Party or the Jeffersonian Republican Party and began the era of the First Party System. Thomas Jefferson's election to the presidency marked a peaceful transition of power between the parties in 1800. John Marshall, fourth Chief Justice of the Supreme Court, empowered the courts by establishing the principle of judicial review in the landmark case Marbury v. Madison in 1803, effectively giving the Supreme Court the power to nullify congressional legislation.
The partisan era (1830s–1900s)
These years were marked by growth in the power of political parties. The watershed event was the Civil War, which resolved the slavery issue and unified the nation under federal authority but weakened states' rights. A Gilded Age (1877–1901) was marked by Republican dominance of Congress. During this time, lobbying activity became more intense, particularly during the administration of President Ulysses S. Grant, in which influential lobbies advocated for railroad subsidies and tariffs on wool. Immigration and high birth rates swelled the ranks of citizens and the nation grew at a rapid pace. The Progressive Era was characterized by strong party leadership in both houses of Congress as well as calls for reform; sometimes reformers would attack lobbyists as corrupting politics. The position of Speaker of the House became extremely powerful under leaders such as Thomas Reed in 1890 and Joseph Gurney Cannon. The Senate was effectively controlled by a half dozen men.
The committee era (1910s–1960s)
A system of seniority—in which long-time members of Congress gained more and more power—encouraged politicians of both parties to serve for long terms. Committee chairmen remained influential in both houses until the reforms of the 1970s. Important structural changes included the direct election of senators under the Seventeenth Amendment, with positive effects (senators more sensitive to public opinion) and negative effects (undermining the authority of state governments). Supreme Court decisions based on the Constitution's commerce clause expanded congressional power to regulate the economy. One effect of popular election of senators was to reduce the difference between the House and Senate in terms of their link to the electorate. Lame duck reforms under the Twentieth Amendment reduced the power of defeated and retiring members of Congress to wield influence despite their lack of accountability.
The Great Depression ushered in President Franklin Roosevelt and strong control by Democrats and historic New Deal policies. Roosevelt's election in 1932 marked a shift in government power towards the executive branch. Numerous New Deal initiatives came from the White House rather than being initiated by Congress. The Democratic Party controlled both houses of Congress for many years. During this time, Republicans and conservative southern Democrats formed the Conservative Coalition. Democrats maintained control of Congress during World War II. Congress struggled to improve its efficiency in the postwar era, partly by reducing the number of standing congressional committees. Southern Democrats became a powerful force in many influential committees although political power alternated between Republicans and Democrats during these years. More complex issues required greater specialization and expertise, such as space flight and atomic energy policy. Senator Joseph McCarthy exploited the fear of communism and conducted televised hearings. In 1960, Democratic candidate John F. Kennedy narrowly won the presidency and power shifted again to the Democrats, who dominated both houses of Congress until 1994.
The contemporary era (1970s–today)
Congress enacted Johnson's Great Society program to fight poverty and hunger. The Watergate scandal had the powerful effect of waking up a somewhat dormant Congress, which investigated presidential wrongdoing and coverups; the scandal "substantially reshaped" relations between the branches of government, suggested political scientist Bruce J. Schulman. Partisanship returned, particularly after 1994; one analyst attributes partisan infighting to slim congressional majorities which discouraged friendly social gatherings in meeting rooms such as the Board of Education. Congress began reasserting its authority. Lobbying became a big factor despite the 1971 Federal Election Campaign Act. Political action committees or PACs could make substantial donations to congressional candidates via such means as soft money contributions. While soft money funds were not given to specific campaigns for candidates, the money often benefited candidates substantially in an indirect way and helped reelect candidates. Reforms such as the 2002 McCain-Feingold act limited campaign donations but did not limit soft money contributions. One source suggests post-Watergate laws amended in 1974 meant to reduce the "influence of wealthy contributors and end payoffs" instead "legitimized PACs" since they "enabled individuals to band together in support of candidates". From 1974 to 1984, PACs grew from 608 to 3,803 and donations leaped from $12.5 million to $120 million, along with concern over PAC influence in Congress. In 2009, there were 4,600 business, labor and special-interest PACs, including ones for lawyers, electricians, and real estate brokers. From 2007 to 2008, 175 members of Congress received "half or more of their campaign cash" from PACs.
From 1970 to 2009, the House expanded the number of delegates representing U.S. citizens in non-state areas, along with their powers and privileges, beginning with representation on committees for Puerto Rico's Resident Commissioner in 1970. In 1971, a delegate for the District of Columbia was authorized, and in 1972 new delegate positions were established for the U.S. Virgin Islands and Guam. 1978 saw an additional delegate for American Samoa, and another for the Commonwealth of the Northern Mariana Islands began in 2009. These six members of Congress enjoy floor privileges to introduce bills and resolutions, and in recent congresses they vote in permanent and select committees, in party caucuses and in joint conferences with the Senate. They have Capitol Hill offices, staff and two annual appointments to each of the four military academies. While their votes are constitutional when Congress authorizes their House Committee of the Whole votes, recent Congresses have not allowed for that, and they cannot vote when the House is meeting as the House of Representatives.
In the late 20th century, the media became more important in Congress's work. Analyst Michael Schudson suggested that greater publicity undermined the power of political parties and caused "more roads to open up in Congress for individual representatives to influence decisions". Norman Ornstein suggested that media prominence led to a greater emphasis on the negative and sensational side of Congress, and referred to this as the tabloidization of media coverage. Others saw pressure to squeeze a political position into a thirty-second soundbite. A report characterized Congress in 2013 as being unproductive, gridlocked, and "setting records for futility". In October 2013, with Congress unable to compromise, the government was shut down for several weeks and risked a serious default on debt payments, causing 60% of the public to say they would "fire every member of Congress" including their own representative. One report suggested Congress posed the "biggest risk to the US economy" because of its brinksmanship, "down-to-the-wire budget and debt crises" and "indiscriminate spending cuts", resulting in slowed economic activity and keeping up to two million people unemployed. There has been increasing public dissatisfaction with Congress, with extremely low approval ratings which dropped to 5% in October 2013.
Congress in the United States government
Powers of Congress
Overview of congressional power
Article I of the Constitution creates and sets forth the structure and most of the powers of Congress. Sections One through Six describe how Congress is elected and give each House the power to create its own structure. Section Seven lays out the process for creating laws, and Section Eight enumerates numerous powers. Section Nine is a list of powers Congress does not have, and Section Ten lists powers denied to the states, some of which may be exercised only with the consent of Congress. Constitutional amendments have granted Congress additional powers. Congress also has implied powers derived from the Constitution's Necessary and Proper Clause.
Congress has authority over financial and budgetary policy through the enumerated power to "lay and collect Taxes, Duties, Imposts and Excises, to pay the Debts and provide for the common Defence and general Welfare of the United States". There is vast authority over budgets, although analyst Eric Patashnik suggested that much of Congress's power to manage the budget was lost as the welfare state expanded, since "entitlements were institutionally detached from Congress's ordinary legislative routine and rhythm". Another factor leading to less control over the budget was a Keynesian belief that balanced budgets were unnecessary.
The Sixteenth Amendment in 1913 extended congressional power of taxation to include income taxes without apportionment among the several States, and without regard to any census or enumeration. The Constitution also grants Congress the exclusive power to appropriate funds, and this power of the purse is one of Congress's primary checks on the executive branch. Congress can borrow money on the credit of the United States, regulate commerce with foreign nations and among the states, and coin money. Generally, both the Senate and the House of Representatives have equal legislative authority, although only the House may originate revenue and appropriation bills.
Congress has an important role in national defense, including the exclusive power to declare war, to raise and maintain the armed forces, and to make rules for the military. Some critics charge that the executive branch has usurped Congress's constitutionally defined task of declaring war. While historically presidents initiated the process for going to war, they asked for and received formal war declarations from Congress for the War of 1812, the Mexican–American War, the Spanish–American War, World War I, and World War II, although President Theodore Roosevelt's military move into Panama in 1903 did not get congressional approval. In the early days after the North Korean invasion of 1950, President Truman described the American response as a "police action". According to Time magazine in 1970, "U.S. presidents [had] ordered troops into position or action without a formal congressional declaration a total of 149 times." In 1993, Michael Kinsley wrote that "Congress's war power has become the most flagrantly disregarded provision in the Constitution," and that the "real erosion [of Congress's war power] began after World War II." Disagreement about the extent of congressional versus presidential power regarding war has been present periodically throughout the nation's history.
Congress can establish post offices and post roads, issue patents and copyrights, fix standards of weights and measures, establish Courts inferior to the Supreme Court, and "make all Laws which shall be necessary and proper for carrying into Execution the foregoing Powers, and all other Powers vested by this Constitution in the Government of the United States, or in any Department or Officer thereof." Article Four gives Congress the power to admit new states into the Union.
One of Congress's foremost non-legislative functions is the power to investigate and oversee the executive branch. Congressional oversight is usually delegated to committees and is facilitated by Congress's subpoena power. Some critics have charged that Congress has in some instances failed to do an adequate job of overseeing the other branches of government. In the Plame affair, critics including Representative Henry A. Waxman charged that Congress was not doing an adequate job of oversight in this case. There have been concerns about congressional oversight of executive actions such as warrantless wiretapping, although others respond that Congress did investigate the legality of presidential decisions. Political scientists Ornstein and Mann suggested that oversight functions do not help members of Congress win reelection. Congress also has the exclusive power of removal, allowing impeachment and removal of the president, federal judges and other federal officers. There have been charges that presidents acting under the doctrine of the unitary executive have assumed important legislative and budgetary powers that should belong to Congress. So-called signing statements are one way in which a president can "tip the balance of power between Congress and the White House a little more in favor of the executive branch," according to one account. Past presidents, including Ronald Reagan, George H. W. Bush, Bill Clinton, and George W. Bush have made public statements when signing congressional legislation about how they understand a bill or plan to execute it, and commentators including the American Bar Association have described this practice as against the spirit of the Constitution. There have been concerns that presidential authority to cope with financial crises is eclipsing the power of Congress. In 2008, George F. Will called the Capitol building a "tomb for the antiquated idea that the legislative branch matters."
The Constitution enumerates the powers of Congress in detail. In addition, other congressional powers have been granted, or confirmed, by constitutional amendments. The Thirteenth (1865), Fourteenth (1868), and Fifteenth Amendments (1870) gave Congress authority to enact legislation to enforce rights of African Americans, including voting rights, due process, and equal protection under the law. Generally militia forces are controlled by state governments, not Congress.
Implied powers and the commerce clause
Congress also has implied powers deriving from the Constitution's Necessary and Proper Clause which permit Congress to "make all Laws which shall be necessary and proper for carrying into Execution the foregoing Powers, and all other Powers vested by this Constitution in the Government of the United States, or in any Department or Officer thereof." Broad interpretations of this clause and of the Commerce Clause, the enumerated power to regulate commerce, in rulings such as McCulloch v. Maryland, have effectively widened the scope of Congress's legislative authority far beyond that prescribed in Section 8.
Constitutional responsibility for the oversight of Washington, D.C., the federal district and national capital, and the U.S. territories of Guam, American Samoa, Puerto Rico, the U.S. Virgin Islands, and the Northern Mariana Islands rests with Congress. The republican form of government in the territories is devolved by congressional statute, including the direct election of governors, the D.C. mayor, and locally elective territorial legislatures.
Each territory and Washington, D.C. elect a non-voting delegate to the U.S. House of Representatives as they have throughout Congressional history. They "possess the same powers as other members of the House, except that they may not vote when the House is meeting as the House of Representatives." They are assigned offices and allowances for staff, participate in debate, and appoint constituents to the four military service academies for the Army, Navy, Air Force and Coast Guard.
Washington, D.C. citizens, alone among these non-state areas, have the right to vote directly for the President of the United States, although the Democratic and Republican political parties nominate their presidential candidates at national conventions which include delegates from the five major territories.
Checks and balances
Representative Lee H. Hamilton explained how Congress functions within the federal government:
To me the key to understanding it is balance. The founders went to great lengths to balance institutions against each other—balancing powers among the three branches: Congress, the president, and the Supreme Court; between the House of Representatives and the Senate; between the federal government and the states; among states of different sizes and regions with different interests; between the powers of government and the rights of citizens, as spelled out in the Bill of Rights ... No one part of government dominates the other.
The influence of Congress on the presidency has varied from period to period depending on factors such as congressional leadership, presidential political influence, historical circumstances such as war, and individual initiative by members of Congress. The impeachment of Andrew Johnson made the presidency less powerful than Congress for a considerable period afterwards. The 20th and 21st centuries have seen the rise of presidential power under politicians such as Theodore Roosevelt, Woodrow Wilson, Franklin D. Roosevelt, Richard Nixon, Ronald Reagan, and George W. Bush. However, in recent years, Congress has restricted presidential power with laws such as the Congressional Budget and Impoundment Control Act of 1974 and the War Powers Resolution. Nevertheless, the Presidency remains considerably more powerful today than during the 19th century. Executive branch officials are often loath to reveal sensitive information to members of Congress because of concern that information could not be kept secret; in return, knowing they may be in the dark about executive branch activity, congressional officials are more likely to distrust their counterparts in executive agencies. Many government actions require fast coordinated effort by many agencies, and this is a task that Congress is ill-suited for. Congress is slow, open, divided, and not well matched to handle more rapid executive action or do a good job of overseeing such activity, according to one analysis.
The Constitution concentrates removal powers in the Congress by empowering and obligating the House of Representatives to impeach both executive and judicial officials for "Treason, Bribery, or other high Crimes and Misdemeanors." Impeachment is a formal accusation of unlawful activity by a civil officer or government official. The Senate is constitutionally empowered and obligated to try all impeachments. A simple majority in the House is required to impeach an official; however, a two-thirds majority in the Senate is required for conviction. A convicted official is automatically removed from office; in addition, the Senate may stipulate that the defendant be banned from holding office in the future. Impeachment proceedings may not inflict more than this; however, a convicted party may face criminal penalties in a normal court of law. In the history of the United States, the House of Representatives has impeached sixteen officials, of whom seven were convicted. Another resigned before the Senate could complete the trial. Only two presidents have ever been impeached: Andrew Johnson in 1868 and Bill Clinton in 1998. Both trials ended in acquittal; in Johnson's case, the Senate fell one vote short of the two-thirds majority required for conviction. In 1974, Richard Nixon resigned from office after impeachment proceedings in the House Judiciary Committee indicated he would eventually be removed from office.
The Senate has an important check on the executive power by confirming Cabinet officials, judges, and other high officers "by and with the Advice and Consent of the Senate." It confirms most presidential nominees but rejections are not uncommon. Furthermore, treaties negotiated by the President must be ratified by a two-thirds majority vote in the Senate to take effect. As a result, presidential arm-twisting of senators can happen before a key vote; for example, President Obama's secretary of state, Hillary Clinton, urged her former senate colleagues to approve a nuclear arms treaty with Russia in 2010. The House of Representatives has no formal role in either the ratification of treaties or the appointment of federal officials, other than in filling a vacancy in the office of the vice president; in such a case, a majority vote in each House is required to confirm a president's nomination of a vice president.
In 1803, the Supreme Court established judicial review of federal legislation in Marbury v. Madison, holding, however, that Congress could not grant unconstitutional power to the Court itself. The Constitution does not explicitly state that the courts may exercise judicial review; however, the notion that courts could declare laws unconstitutional was envisioned by the founding fathers. Alexander Hamilton, for example, mentioned and expounded upon the doctrine in Federalist No. 78. Originalists on the Supreme Court have argued that if the Constitution does not say something explicitly, it is unconstitutional to infer what it should, might, or could have said. Judicial review means that the Supreme Court can nullify a congressional law. It is a huge check by the courts on the legislative authority and limits congressional power substantially. In 1857, for example, the Supreme Court struck down provisions of a congressional act of 1820 in its Dred Scott decision. At the same time, the Supreme Court can extend congressional power through its constitutional interpretations.
Investigations are conducted to gather information on the need for future legislation, to test the effectiveness of laws already passed, and to inquire into the qualifications and performance of members and officials of the other branches. Committees may hold hearings and, if necessary, compel individuals to testify by issuing subpoenas when investigating issues over which they have the power to legislate. Witnesses who refuse to testify may be cited for contempt of Congress, and those who testify falsely may be charged with perjury. Most committee hearings are open to the public (the House and Senate intelligence committees are the exception); important hearings are widely reported in the mass media and transcripts published a few months afterwards. Congress, in the course of studying possible laws and investigating matters, generates an incredible amount of information in various forms, and can be described as a publisher. Indeed, it publishes House and Senate reports and maintains databases which are updated irregularly with publications in a variety of electronic formats.
Congress also plays a role in presidential elections. Both Houses meet in joint session on the sixth day of January following a presidential election to count the electoral votes, and there are procedures to follow if no candidate wins a majority.
The main result of congressional activity is the creation of laws, most of which are contained in the United States Code, arranged by subject matter alphabetically under fifty title headings to present the laws "in a concise and usable form".
Congress is split into two chambers—House and Senate—and manages the task of writing national legislation by dividing work into separate committees which specialize in different areas. Some members of Congress are elected by their peers to be officers of these committees. Further, Congress has ancillary organizations such as the Government Accountability Office and the Library of Congress to help provide it with information, and members of Congress have staff and offices to assist them as well. In addition, a vast industry of lobbyists helps members write legislation on behalf of diverse corporate and labor interests.
The committee structure permits members of Congress to study a particular subject intensely. It is neither expected nor possible that a member be an expert on all subject areas before Congress. As time goes by, members develop expertise in particular subjects and their legal aspects. Committees investigate specialized subjects and advise the entire Congress about choices and trade-offs. The choice of specialty may be influenced by the member's constituency, important regional issues, prior background and experience. Senators often choose a different specialty from that of the other senator from their state to prevent overlap. Some committees specialize in running the business of other committees and exert a powerful influence over all legislation; for example, the House Ways and Means Committee has considerable influence over House affairs.
Committees write legislation. While procedures such as the House discharge petition process can introduce bills to the House floor and effectively bypass committee input, they are exceedingly difficult to implement without committee action. Committees have power and have been called independent fiefdoms. Legislative, oversight, and internal administrative tasks are divided among about two hundred committees and subcommittees which gather information, evaluate alternatives, and identify problems. They propose solutions for consideration by the full chamber. In addition, they perform the function of oversight by monitoring the executive branch and investigating wrongdoing.
At the start of each two-year session the House elects a speaker who does not normally preside over debates but serves as the majority party's leader. In the Senate, the Vice President is the ex officio president of the Senate. In addition, the Senate elects an officer called the President pro tempore. Pro tempore means "for the time being"; this office is usually held by the most senior member of the Senate's majority party, who customarily keeps the position until there is a change in party control. Accordingly, the Senate does not necessarily elect a new president pro tempore at the beginning of a new Congress. In both the House and Senate, the actual presiding officer is generally a junior member of the majority party who is appointed so that new members become acquainted with the rules of the chamber.
Library of Congress
The Library of Congress was established by an act of Congress in 1800. It is primarily housed in three buildings on Capitol Hill, but also includes several other sites: the National Library Service for the Blind and Physically Handicapped in Washington, D.C.; the National Audio-Visual Conservation Center in Culpeper, Virginia; a large book storage facility located at Ft. Meade, Maryland; and multiple overseas offices. The Library had mostly law books when it was burned by a British raiding party during the War of 1812, but the library's collections were restored and expanded when Congress authorized the purchase of Thomas Jefferson's private library. One of the Library's missions is to serve the Congress and its staff as well as the American public. It is the largest library in the world with nearly 150 million items including books, films, maps, photographs, music, manuscripts, graphics, and materials in 470 languages.
Congressional Research Service
The Congressional Research Service provides detailed, up-to-date and non-partisan research for senators, representatives, and their staff to help them carry out their official duties. It provides ideas for legislation, helps members analyze a bill, facilitates public hearings, makes reports, consults on matters such as parliamentary procedure, and helps the two chambers resolve disagreements. It has been called the "House's think tank" and has a staff of about 900 employees.
Congressional Budget Office
It was created as an independent nonpartisan agency by the Congressional Budget and Impoundment Control Act of 1974. It helps Congress estimate revenue inflows from taxes and helps the budgeting process. It makes projections about such matters as the national debt as well as likely costs of legislation. It prepares an annual Economic and Budget Outlook with a mid-year update and writes An Analysis of the President's Budgetary Proposals for the Senate's Appropriations Committee. The Speaker of the House and the Senate's President pro tempore jointly appoint the CBO Director for a four-year term.
Lobbyists represent diverse interests and often seek to influence congressional decisions to reflect their clients' needs. Lobby groups and their members sometimes write legislation and whip bills. In 2007 there were approximately 17,000 federal lobbyists in Washington. They explain to legislators the goals of their organizations. Some lobbyists represent non-profit organizations and work pro bono for issues in which they are personally interested.
United States Capitol Police
Partisanship versus bipartisanship
Congress has alternated between periods of constructive cooperation and compromise between parties known as bipartisanship and periods of deep political polarization and fierce infighting known as partisanship. The period after the Civil War was marked by partisanship as is the case today. It is generally easier for committees to reach accord on issues when compromise is possible. Some political scientists speculate that a prolonged period marked by narrow majorities in both chambers of Congress has intensified partisanship in the last few decades but that an alternation of control of Congress between Democrats and Republicans may lead to greater flexibility in policies as well as pragmatism and civility within the institution.
Procedures of Congress
A term of Congress is divided into two "sessions", one for each year; Congress has occasionally been called into an extra or special session. A new session commences on January 3 each year unless Congress decides differently. The Constitution requires Congress meet at least once each year and forbids either house from meeting outside the Capitol without the consent of the other house.
Joint Sessions of the United States Congress occur on special occasions that require a concurrent resolution from both House and Senate. These sessions include counting electoral votes after a presidential election and the president's State of the Union address. The constitutionally mandated report, normally given as an annual speech, is modeled on Britain's Speech from the Throne; it was submitted in writing by most presidents after Jefferson but has been personally delivered as a spoken oration beginning with Wilson in 1913. Joint Sessions and Joint Meetings are traditionally presided over by the Speaker of the House, except when counting presidential electoral votes, when the Vice President (acting as the President of the Senate) presides.
Bills and resolutions
Ideas for legislation can come from members, lobbyists, state legislatures, constituents, legislative counsel, or executive agencies. Anyone can write a bill, but only members of Congress may introduce bills. Most bills are not written by Congress members, but originate from the Executive branch; interest groups often draft bills as well. The usual next step is for the proposal to be passed to a committee for review. A proposal is usually in one of these forms:
- Bills are laws in the making. A House-originated bill begins with the letters "H.R." for "House of Representatives", followed by a number kept as it progresses.
- Joint resolutions. There is little difference between a bill and a joint resolution since both are treated similarly; a joint resolution originating from the House, for example, begins "H.J.Res." followed by its number.
- Concurrent resolutions affect only the House and Senate and accordingly are not presented to the president for approval later. In the House, they begin with "H.Con.Res."
- Simple resolutions concern only the House or only the Senate and begin with "H.Res." or "S.Res."
Representatives introduce a bill while the House is in session by placing it in the hopper on the Clerk's desk. It is assigned a number and referred to a committee, which studies each bill intensely at this stage. Drafting statutes requires "great skill, knowledge, and experience" and sometimes takes a year or more. Sometimes lobbyists write legislation and submit it to a member for introduction. Joint resolutions are the normal way to propose a constitutional amendment or declare war. On the other hand, concurrent resolutions (passed by both houses) and simple resolutions (passed by only one house) do not have the force of law but express the opinion of Congress or regulate procedure. Bills may be introduced by any member of either house. However, the Constitution states, "All Bills for raising Revenue shall originate in the House of Representatives." While the Senate cannot originate revenue and appropriation bills, it has power to amend or reject them. Congress has sought ways to establish appropriate spending levels.
Each chamber determines its own internal rules of operation unless specified in the Constitution or prescribed by law. In the House, a Rules Committee guides legislation; in the Senate, a Standing Rules committee is in charge. Each branch has its own traditions; for example, the Senate relies heavily on the practice of getting "unanimous consent" for noncontroversial matters. House and Senate rules can be complex, sometimes requiring a hundred specific steps before becoming a law. Members sometimes use experts such as Walter Oleszek, a senior specialist in American national government at the Congressional Research Service, to learn about proper procedures.
Each bill goes through several stages in each house including consideration by a committee and advice from the Government Accountability Office. Most legislation is considered by standing committees which have jurisdiction over a particular subject such as Agriculture or Appropriations. The House has twenty standing committees; the Senate has sixteen. Standing committees meet at least once each month. Almost all standing committee meetings for transacting business must be open to the public unless the committee votes, publicly, to close the meeting. A committee might call for public hearings on important bills. Each committee is led by a chair who belongs to the majority party and a ranking member of the minority party. Witnesses and experts can present their case for or against a bill. Then, a bill may go to what's called a mark-up session where committee members debate the bill's merits and may offer amendments or revisions. Committees may also amend the bill, but the full house holds the power to accept or reject committee amendments. After debate, the committee votes whether it wishes to report the measure to the full house. If a bill is tabled then it is rejected. If amendments are extensive, sometimes a new bill with amendments built in will be submitted as a so-called clean bill with a new number. Both houses have procedures under which committees can be bypassed or overruled but they are rarely used. Generally, members who have been in Congress longer have greater seniority and therefore greater power.
A bill which reaches the floor of the full house can be simple or complex and begins with an enacting formula such as "Be it enacted by the Senate and House of Representatives of the United States of America in Congress assembled." Consideration of a bill requires, itself, a rule which is a simple resolution specifying the particulars of debate—time limits, possibility of further amendments, and such. Each side has equal time and members can yield to other members who wish to speak. Sometimes opponents seek to recommit a bill which means to change part of it. Generally, a quorum, usually half of the total number of representatives, is required before discussion can begin, although there are exceptions. The house may debate and amend the bill; the precise procedures used by the House and Senate differ. A final vote on the bill follows.
Once a bill is approved by one house, it is sent to the other which may pass, reject, or amend it. For the bill to become law, both houses must agree to identical versions of the bill. If the second house amends the bill, then the differences between the two versions must be reconciled in a conference committee, an ad hoc committee that includes both senators and representatives sometimes by using a reconciliation process to limit budget bills. Both Houses use a budget enforcement mechanism informally known as pay-as-you-go or paygo which discourages members from considering acts which increase budget deficits. If both houses agree to the version reported by the conference committee, the bill passes, otherwise it fails.
The Constitution specifies that a majority of members known as a quorum be present before doing business in each house. However, the rules of each house assume that a quorum is present unless a quorum call demonstrates the contrary. Since representatives and senators who are present rarely demand quorum calls, debate often continues despite the lack of a majority.
Voting within Congress can take many forms, including systems using lights and bells and electronic voting. Both houses use voice voting to decide most matters in which members shout "aye" or "no" and the presiding officer announces the result. The Constitution, however, requires a recorded vote if demanded by one-fifth of the members present. If the voice vote is unclear or if the matter is controversial, a recorded vote usually happens. The Senate uses roll-call voting, in which a clerk calls out the names of all the senators, each senator stating "aye" or "no" when their name is announced. In the Senate, the vice president may cast the tie-breaking vote if present.
The House reserves roll-call votes for the most formal matters, as a roll call of all 435 representatives takes quite some time; normally, members vote by using an electronic device. In the case of a tie, the motion in question fails. Most votes in the House are done electronically, allowing members to vote yea or nay or present or open. Members insert a voting ID card and can change their votes during the last five minutes if they choose; in addition, paper ballots are used on some occasions—yea indicated by green and nay by red. One member cannot cast a proxy vote for another. Congressional votes are recorded on an online database.
After passage by both houses, a bill is enrolled and sent to the president for approval. The president may sign it making it law or veto it, perhaps returning it to Congress with their objections. A vetoed bill can still become law if each house of Congress votes to override the veto with a two-thirds majority. Finally, the president may do nothing—neither signing nor vetoing the bill—and then the bill becomes law automatically after ten days (not counting Sundays) according to the Constitution. But if Congress is adjourned during this period, presidents may veto legislation passed at the end of a congressional session simply by ignoring it; the maneuver is known as a pocket veto, and cannot be overridden by the adjourned Congress.
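The ten-day rule above is simple to work out concretely. The sketch below is a minimal, illustrative Python example (not part of any official procedure) that counts the constitutional window from a hypothetical presentment date while skipping Sundays; it deliberately ignores the adjournment scenario that produces a pocket veto.

```python
from datetime import date, timedelta

def return_deadline(presented: date) -> date:
    """Find the last day of the ten-day window (Sundays excepted) during
    which the president may sign or return a bill; after this date an
    unsigned bill becomes law, assuming Congress has not adjourned."""
    counted = 0
    day = presented
    while counted < 10:
        day += timedelta(days=1)
        if day.weekday() != 6:  # weekday() 6 is Sunday, which is not counted
            counted += 1
    return day

# Hypothetical example: a bill presented on Friday, January 6, 2017
print(return_deadline(date(2017, 1, 6)))  # 2017-01-18 (two Sundays skipped)
```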
Congress and the public
Challenges of reelection
Citizens and representatives
Senators face reelection every six years, and representatives every two. Reelections encourage candidates to focus their publicity efforts on their home states or districts. Running for reelection can be a grueling process of distant travel and fund-raising which distracts senators and representatives from paying attention to governing, according to some critics, although others respond that the process is necessary to keep members of Congress in touch with voters.
Nevertheless, incumbent members of Congress running for reelection have strong advantages over challengers. They raise more money because donors expect incumbents to win and give their funds to them rather than to challengers, and donations are vital for winning elections. One critic compared being elected to Congress to receiving life tenure at a university. Another advantage for representatives is the practice of gerrymandering. After each ten-year census, states are allocated representatives based on population, and officials in power can choose how to draw the congressional district boundaries to support candidates from their party. As a result, reelection rates of members of Congress hover around 90 percent, causing some critics to accuse them of being a privileged class. Academics such as Princeton's Stephen Macedo have proposed solutions to fix gerrymandering. Both senators and representatives enjoy free mailing privileges called franking privileges.
In 1971, the cost of running for Congress in Utah was $70,000, but costs have climbed. The biggest expense is television ads. Today's races cost more than a million dollars for a House seat, and six million or more for a Senate seat. Since fundraising is vital, "members of Congress are forced to spend ever-increasing hours raising money for their re-election."
Nevertheless, the Supreme Court has treated campaign contributions as a free speech issue. Some see money as a good influence in politics since it "enables candidates to communicate with voters." Few members retire from Congress without complaining about how much it costs to campaign for reelection. Critics contend that members of Congress are more likely to attend to the needs of heavy campaign contributors than to ordinary citizens.
Elections are influenced by many variables. Some political scientists speculate there is a coattail effect (when a popular president or party position has the effect of reelecting incumbents who win by "riding on the president's coattails"), although there is some evidence that the coattail effect is irregular and possibly declining since the 1950s. Some districts are so heavily Democratic or Republican that they are called a safe seat; any candidate winning the primary will almost always be elected, and these candidates do not need to spend money on advertising. But some races can be competitive when there is no incumbent. If a seat becomes vacant in an open district, then both parties may spend heavily on advertising in these races; in California in 1992, only four of twenty races for House seats were considered highly competitive.
Television and negative advertising
Since members of Congress must advertise heavily on television, this usually involves negative advertising, which smears an opponent's character without focusing on the issues. Negative advertising is seen as effective because "the messages tend to stick." However, these ads sour the public on the political process in general as most members of Congress seek to avoid blame. One wrong decision or one damaging television image can mean defeat at the next election, which leads to a culture of risk avoidance, a need to make policy decisions behind closed doors, and concentrating publicity efforts in the members' home districts.
Public perceptions of Congress
Prominent Founding Fathers writing in The Federalist Papers felt that elections were essential to liberty, that a bond between the people and the representatives was particularly essential, and that "frequent elections are unquestionably the only policy by which this dependence and sympathy can be effectually secured." In 2009, however, few Americans were familiar with leaders of Congress. The percentage of Americans eligible to vote who did, in fact, vote was 63% in 1960, but has been falling since, although there was a slight upward trend in the 2008 election. Public opinion polls asking people if they approve of the job Congress is doing have, in the last few decades, hovered around 25% with some variation. Scholar Julian Zelizer suggested that the "size, messiness, virtues, and vices that make Congress so interesting also create enormous barriers to our understanding the institution... Unlike the presidency, Congress is difficult to conceptualize." Other scholars suggest that despite the criticism, "Congress is a remarkably resilient institution ... its place in the political process is not threatened ... it is rich in resources" and that most members behave ethically. They contend that "Congress is easy to dislike and often difficult to defend" and that this perception is exacerbated because many challengers running for Congress run against Congress, an "old form of American politics" that further undermines Congress's reputation with the public:
The rough-and-tumble world of legislating is not orderly and civil, human frailties too often taint its membership, and legislative outcomes are often frustrating and ineffective ... Still, we are not exaggerating when we say that Congress is essential to American democracy. We would not have survived as a nation without a Congress that represented the diverse interests of our society, conducted a public debate on the major issues, found compromises to resolve conflicts peacefully, and limited the power of our executive, military, and judicial institutions ... The popularity of Congress ebbs and flows with the public's confidence in government generally ... the legislative process is easy to dislike—it often generates political posturing and grandstanding, it necessarily involves compromise, and it often leaves broken promises in its trail. Also, members of Congress often appear self-serving as they pursue their political careers and represent interests and reflect values that are controversial. Scandals, even when they involve a single member, add to the public's frustration with Congress and have contributed to the institution's low ratings in opinion polls.—Smith, Roberts & Wielen
An additional factor that confounds public perceptions of Congress is that congressional issues are becoming more technical and complex and require expertise in subjects such as science, engineering and economics. As a result, Congress often cedes authority to experts at the executive branch.
Since 2006, Congress has dropped 10 points in the Gallup confidence poll, with only 9% having "a great deal" or "quite a lot" of confidence in their legislators. Since 2011, Gallup has reported Congress's approval rating among Americans at 10% or below three times. Public opinion of Congress plummeted further to 5% in October 2013 after parts of the U.S. government deemed nonessential shut down.
Smaller states and bigger states
When the Constitution was drafted in 1787, the ratio of the populations of large states to small states was roughly twelve to one. The Connecticut Compromise gave every state, large and small, an equal vote in the Senate. Since each state has two senators, residents of smaller states have more clout in the Senate than residents of larger states. But since 1787, the population disparity between large and small states has grown; in 2006, for example, California had seventy times the population of Wyoming. Critics such as constitutional scholar Sanford Levinson have suggested that the population disparity works against residents of large states and causes a steady redistribution of resources from "large states to small states." However, others argue that the Connecticut compromise was deliberately intended by the Framers to construct the Senate so that each state had equal footing not based on population, and contend that the result works well on balance.
Members and constituents
A major role for members of Congress is providing services to constituents. Constituents request assistance with problems. Providing services helps members of Congress win votes and elections and can make a difference in close races. Congressional staff can help citizens navigate government bureaucracies. One academic described the complex intertwined relation between lawmakers and constituents as "home style".
One way to categorize lawmakers, according to political scientist Richard Fenno, is by their general motivation:
- Reelection. These are lawmakers who "never met a voter they didn't like" and provide excellent constituent services.
- Good public policy. Legislators who "burnish a reputation for policy expertise and leadership."
- Power in the chamber. Lawmakers who spend serious time along the "rail of the House floor or in the Senate cloakroom ministering to the needs of their colleagues." Famous legislator Henry Clay in the mid-19th century was described as an "issue entrepreneur" who looked for issues to serve his ambitions.
Privileges and pay
Privileges protecting members
Members of Congress enjoy parliamentary privilege, including freedom from arrest in all cases except for treason, felony, and breach of the peace, and freedom of speech in debate. This constitutionally derived immunity applies to members during sessions and when traveling to and from sessions. The term arrest has been interpreted broadly, and includes any detention or delay in the course of law enforcement, including court summons and subpoenas. The rules of the House strictly guard this privilege; a member may not waive the privilege on their own, but must seek the permission of the whole house to do so. Senate rules, however, are less strict and permit individual senators to waive the privilege as they choose.
The Constitution guarantees absolute freedom of debate in both houses, providing in the Speech or Debate Clause of the Constitution that "for any Speech or Debate in either House, they shall not be questioned in any other Place." Accordingly, a member of Congress may not be sued in court for slander because of remarks made in either house, although each house has its own rules restricting offensive speeches, and may punish members who transgress.
Obstructing the work of Congress is a crime under federal law and is known as contempt of Congress. Each house has the power to cite individuals for contempt but can only issue a contempt citation; the judicial system pursues the matter like a normal criminal case. If convicted in court, an individual found guilty of contempt of Congress may be imprisoned for up to one year.
The franking privilege allows members of Congress to send official mail to constituents at government expense. Though they are not permitted to send election materials, borderline material is often sent, especially in the run-up to an election by those in close races. Indeed, some academics consider free mailings as giving incumbents a big advantage over challengers.
Pay and benefits
From 1789 to 1815, members of Congress received only a daily payment of $6 while in session. Members received an annual salary of $1,500 from 1815 to 1817, then a per diem salary of $8 from 1818 to 1855; since then they have received an annual salary, first pegged in 1855 at $3,000. In 1907, salaries were raised to $7,500 per year, the equivalent of $173,000 in 2010. In 2006, members of Congress received a yearly salary of $165,200. Congressional leaders were paid $183,500 per year. The Speaker of the House of Representatives earns $212,100 annually. The salary of the President pro tempore for 2006 was $183,500, equal to that of the majority and minority leaders of the House and Senate. Privileges include having an office and paid staff. In 2008, non-officer members of Congress earned $169,300 annually. Some critics complain congressional pay is high compared with a median American income of $45,113 for men and $35,102 for women. Others have countered that congressional pay is consistent with other branches of government. In January 2014, it was reported that for the first time over half of the members of Congress were millionaires. Congress has been criticized for trying to conceal pay raises by slipping them into a large bill at the last minute. Others have criticized the wealth of members of Congress. Representative Jim Cooper of Tennessee told Harvard professor Lawrence Lessig that a chief problem with Congress was that members focused on lucrative careers as lobbyists after serving, making Congress a "Farm League for K Street", instead of on public service.
Members elected since 1984 are covered by the Federal Employees Retirement System (FERS). Like that of other federal employees, congressional retirement is funded through taxes and participants' contributions. Members of Congress under FERS contribute 1.3% of their salary into the FERS retirement plan and pay 6.2% of their salary in Social Security taxes. And like other federal employees, members contribute one-third of the cost of health insurance with the government covering the other two-thirds.
The size of a congressional pension depends on the years of service and the average of the highest three years of their salary. By law, the starting amount of a member's retirement annuity may not exceed 80% of their final salary. In 2006, the average annual pension for retired senators and representatives under the Civil Service Retirement System (CSRS) was $60,972, while those who retired under FERS, or in combination with CSRS, was $35,952.
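As a rough illustration of the annuity rule just described, the following Python sketch applies a generic accrual rate to years of service and the high-3 salary average, then caps the result at 80% of final salary. The accrual rate and the sample figures are assumptions chosen for illustration, not values taken from the text or from statute.

```python
def starting_annuity(high3_average: float, years_of_service: float,
                     final_salary: float, accrual_rate: float = 0.017) -> float:
    """Illustrative pension estimate: the annuity grows with years of service
    and the average of the highest three years of salary, but the starting
    amount may not exceed 80% of final salary. The 1.7% accrual rate is an
    assumed placeholder, not a figure given in the text."""
    annuity = high3_average * years_of_service * accrual_rate
    return min(annuity, 0.80 * final_salary)

# Hypothetical member: 20 years of service, high-3 average equal to final salary
print(round(starting_annuity(165_200, 20, 165_200)))  # 56168, well under the 80% cap
```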
Members of Congress make fact-finding missions to learn about other countries and stay informed, but these outings can cause controversy if the trip is deemed excessive or unconnected with the task of governing. For example, the Wall Street Journal reported lawmaker trips abroad at taxpayer expense, which included spas, $300-per-night extra unused rooms, and shopping excursions. Lawmakers respond that "traveling with spouses compensates for being away from them a lot in Washington" and justify the trips as a way to meet officials in other nations.
See also
- Caucuses of the United States Congress
- Elections in the United States § Congressional elections
- Current members of the United States House of Representatives
- List of current United States Senators
- List of United States Congresses
- Lobbying in the United States
- 114th United States Congress
- Oath of office § United States
- Party divisions of United States Congresses
- Term limits in the United States
- United States Congressional Baseball Game
- United States congressional hearing
- United States presidents and control of congress
- United States Congress Joint Select Committee on Deficit Reduction
- Radio and Television Correspondents' Association
- Manning, Jennifer E. (2016-09-07). "Membership of the 114th Congress: A Profile" (PDF). Congressional Research Service. Retrieved 2016-11-12.
- John V. Sullivan (July 24, 2007). "How Our Laws Are Made". U.S. House of Representatives. Retrieved November 27, 2016.
- Lee H. Hamilton (2004). How Congress works and why you should care. Indiana University Press. ISBN 0-253-34425-5. Retrieved September 11, 2010.
- Steven S. Smith; Jason M. Roberts; Ryan J. Vander Wielen (2006). "The American Congress (Fourth Edition)". Cambridge University Press. p. 23. Retrieved September 11, 2010.
- Julian E. Zelizer (editor) Joanne Barrie Freeman, Jack N. Rakove, Alan Taylor; et al. (2004). "The American Congress: The Building of Democracy". Houghton Mifflin Company. pp. xiii–xiv. ISBN 0-618-17906-2. Retrieved September 11, 2010.
- Steven S. Smith; Jason M. Roberts; Ryan J. Vander Wielen (2006). "The American Congress (Fourth Edition)". Cambridge University Press. Retrieved September 11, 2010.
- Perry Bacon Jr. (August 31, 2009). "Post Politics Hour: Weekend Review and a Look Ahead". The Washington Post. Retrieved September 20, 2009.
- "Information about the Archives of the United States Senate". U.S. Senate. Retrieved January 6, 2014.
- Kramnick, Isaac (ed); Thomas Paine (1982). Common Sense. Penguin Classics. p. 21.
- "References about weaknesses of the Articles of Confederation".*Pauline Maier (book reviewer) (November 18, 2007). "HISTORY – The Framers' Real Motives (book review) UNRULY AMERICANS AND THE ORIGINS OF THE CONSTITUTION book by Woody Holton". The Washington Post. Retrieved October 10, 2009.*"The Constitution and the Idea of Compromise". PBS. October 10, 2009. Retrieved October 10, 2009.*Alexander Hamilton (1788). "FEDERALIST No. 15 – The Insufficiency of the Present Confederation to Preserve the Union". FoundingFathers.info. Retrieved October 10, 2009.
- English (2003), pp. 5–6
- Collier (1986), p. 5
- James Madison (1787). "James Madison and the Federal Constitutional Convention of 1787 – Engendering a National Government". The Library of Congress – American memory. Retrieved October 10, 2009.
- "The Founding Fathers: New Jersey". The Charters of Freedom. October 10, 2009. Retrieved October 10, 2009.
- "THE PRESIDENCY: Vetoes". Time. March 9, 1931. Retrieved September 11, 2010.
- David E. Kyvig, author, Julian E. Zelizer (editor) (2004). "The American Congress: The Building of Democracy". Houghton Mifflin Company. p. 362. ISBN 0-618-17906-2. Retrieved September 11, 2010.
- David B. Rivkin Jr. & Lee A. Casey (August 22, 2009). "Illegal Health Reform". The Washington Post. Retrieved October 10, 2009.
- Founding Fathers via FindLaw (1787). "U.S. Constitution: Article I (section 8 paragraph 3) – Article Text – Annotations". FindLaw. Retrieved October 10, 2009.
- English (2003), p. 7
- English (2003), p. 8
- "The Convention Timeline". U.S. Constitution Online. October 10, 2009. Retrieved October 10, 2009.
- Eric Patashnik, author, Julian E. Zelizer (editor) (2004). "The American Congress: The Building of Democracy". Houghton Mifflin Company. ISBN 0-618-17906-2. Retrieved September 11, 2010.
- Margaret S. Thompson, The "Spider Web": Congress and Lobbying in the Age of Grant (1985)
- Elisabeth S. Clemens, The People's Lobby: Organizational Innovation and the Rise of Interest-Group Politics in the United States, 1890–1925 (1997)
- David B. Rivkin Jr. & Lee A. Casey (August 22, 2009). "Illegal Health Reform". The Washington Post. Retrieved September 28, 2009.
- Steven S. Smith; Jason M. Roberts; Ryan J. Vander Wielen (2006). "The American Congress (Fourth Edition)". Cambridge University Press. p. 38. Retrieved September 11, 2010.
- David E. Kyvig, author, Julian E. Zelizer (editor) (2004). "The American Congress: The Building of Democracy". Houghton Mifflin Company. ISBN 0-618-17906-2. Retrieved September 11, 2010.
- "THE CONGRESS: 72nd Made". Time. November 17, 1930. Retrieved October 5, 2010.
- English (2003), p. 14
- "THE CONGRESS: Democratic Senate". Time. November 14, 1932. Retrieved October 10, 2010.
- "POLITICAL NOTES: Democratic Drift". Time. November 16, 1936. Retrieved October 10, 2010.
- "THE CONGRESS: The 76th". Time. November 21, 1938. Retrieved October 10, 2010.
- "THE VICE PRESIDENCY: Undeclared War". Time. March 20, 1939. Retrieved October 10, 2010.
- "CONGRESS: New Houses". Time. November 11, 1940. Retrieved October 10, 2010.
- "Before the G.O.P. Lay a Forked Road". Time. November 16, 1942. Retrieved October 10, 2010.
- "Business & Finance: Turn of the Tide". Time. November 16, 1942. Retrieved October 10, 2010.
- "The Congress: Effort toward Efficiency". Time. May 21, 1965. Retrieved September 11, 2010.
- "National Affairs: JUDGMENTS & PROPHECIES". Time. November 15, 1954. Retrieved October 10, 2010.
- "THE CONGRESS: Ahead of the Wind". Time. November 17, 1958. Retrieved October 10, 2010.
- "Party in Power - Congress and Presidency - A Visual Guide to the Balance of Power in Congress, 1945-2008". Uspolitics.about.com. Retrieved September 17, 2012.
- Bruce J. Schulman (author), Julian E. Zelizer (editor) (2004). "The American Congress: The Building of Democracy". Houghton Mifflin Company. p. 638. ISBN 0-618-17906-2. Retrieved September 11, 2010.
- "THE HOUSE: New Faces and New Strains". Time. November 18, 1974.
- Steven S. Smith; Jason M. Roberts; Ryan J. Vander Wielen (2006). "The American Congress (Fourth Edition)". Cambridge University Press. p. 58. Retrieved September 11, 2010.
- Nick Anderson (March 30, 2004). "Political Attack Ads Already Popping Up on the Web". Los Angeles Times. Retrieved September 30, 2009.
- Susan Tifft; Richard Homik; Hays Corey (August 20, 1984). "Taking an Ax to the PACs". Time. Retrieved October 2, 2009.
- Adam Clymer (October 29, 1992). "Campaign spending in congress races soars to new high". The New York Times. Retrieved October 2, 2009.
- Jeffrey H. Birnbaum (October 3, 2004). "Cost of Congressional Campaigns Skyrockets". The Washington Post. Retrieved October 1, 2009.
- Richard E. Cohen (August 12, 1990). "PAC Paranoia: Congress Faces Campaign Spending – Politics: Hysteria was the operative word when legislators realized they could not return home without tougher campaign finance laws.". Los Angeles Times. Retrieved October 2, 2009.
- Walter Isaacson, Evan Thomas, other bureaus (October 25, 1982). "Running with the PACs". Time. Retrieved October 2, 2009.
- John Fritze (March 2, 2009). "PACs spent record $416M on federal election". USA Today. Retrieved October 2, 2009.
- Thomas Frank (October 29, 2006). "Beer PAC aims to put Congress under influence". USA TODAY. Retrieved October 2, 2009.
- Michael Isikoff & Dina Fine Maron (March 21, 2009). "Congress – Follow the Bailout Cash". Newsweek. Retrieved October 2, 2009.
- Richard L. Berke (February 14, 1988). "Campaign Finance; Problems in the PAC's: Study Finds Frustration". The New York Times. Retrieved October 2, 2009.
- Palmer, Betsy. Delegates to the U.S. Congress: history and current status, Congressional Research Service; U.S. House of Representatives, "The House Explained", viewed January 9, 2015.
- Julian E. Zelizer (editor) Michael Schudson (author) (2004). "The American Congress: The Building of Democracy". Houghton Mifflin Company. ISBN 0-618-17906-2. Retrieved September 11, 2010.
- Steven S. Smith; Jason M. Roberts; Ryan J. Vander Wielen (2006). "The American Congress (Fourth Edition)". Cambridge University Press. p. 12. Retrieved September 11, 2010.
- Mark Murray, NBC News, June 30, 2013, Unproductive Congress: How stalemates became the norm in Washington DC. Retrieved June 30, 2013
- Domenico Montanaro, NBC News, October 10, 2013, NBC/WSJ poll: 60 percent say fire every member of Congress. Retrieved October 10, 2013, "...60 percent of Americans ... if they had the chance to vote to defeat and replace every single member of Congress ... they would..."
- Andy Sullivan of Reuters, NBC News, October 17, 2013, Washington: the biggest risk to US economy. Retrieved October 18, 2013, "...the biggest risk to the world's largest economy may be its own elected representatives... Down-to-the-wire budget and debt crises, indiscriminate spending cuts and a 16-day government shutdown ..."
- Domenico Montanaro, NBC News, October 10, 2013, NBC/WSJ poll: 60 percent say fire every member of Congress. Retrieved October 10, 2013, "...60 percent of Americans ... saying if they had the chance to vote to defeat and replace every single member of Congress, including their own representative, they would..."
- Wall Street Journal, Approval of Congress Matches All-Time Low. Retrieved June 13, 2013
- Carrie Dann, NBC News, Americans' faith in Congress lower than all major institutions – ever. Retrieved June 13, 2013
- "White House: Republicans Will 'Do the Right Thing'". Voice of America. October 9, 2013. Retrieved October 10, 2013.
- Epps, Garrett (2013). American Epic: Reading the U.S. Constitution. New York: Oxford. p. 9. ISBN 978-0-19-938971-1.
- Eric Patashnik (2004). "The American Congress: The Building of Democracy". Houghton Mifflin Company. pp. 671–2. ISBN 0-618-17906-2. Retrieved September 11, 2010.
- Davidson (2006), p. 18
- "Congress and the Dollar". New York Sun. May 30, 2008. Retrieved September 11, 2010.
- Kate Zernike (September 28, 2006). "Senate Passes Detainee Bill Sought by Bush". The New York Times. Retrieved September 11, 2010.
- "References about congressional war declaring power".
- Dana D. Nelson (October 11, 2008). "The 'unitary executive' question". Los Angeles Times. Retrieved October 4, 2009.
- Steve Holland (May 1, 2009). "Obama revelling in U.S. power unseen in decades". Reuters UK. Retrieved September 28, 2009.
- "The Law: The President's War Powers". Time. June 1, 1970. Retrieved September 28, 2009.
- "The Law: The President's War Powers". Time. June 1, 1970. Retrieved September 28, 2009.
- "The President's News Conference of June 29, 1950". Teachingamericanhistory.org. June 29, 1950. Retrieved December 20, 2010.
- Michael Kinsley (March 15, 1993). "The Case for a Big Power Swap". Time. Retrieved September 28, 2009.
- "Time Essay: Where's Congress?". Time. May 22, 1972. Retrieved September 28, 2009.
- "The Law: The President's War Powers". Time. June 1, 1970. Retrieved September 11, 2010.
- "The proceedings of congress.; senate.". The New York Times. June 28, 1862. Retrieved September 11, 2010.
- David S. Broder (March 18, 2007). "Congress's Oversight Offensive". The Washington Post. Retrieved September 11, 2010.
- Thomas Ferraro (April 25, 2007). "House committee subpoenas Rice on Iraq". Reuters. Retrieved September 11, 2010.
- James Gerstenzang (July 16, 2008). "Bush claims executive privilege in Valerie Plame Wilson case". Los Angeles Times. Archived from the original on August 1, 2008. Retrieved October 4, 2009.
- Elizabeth B. Bazan and Jennifer K. Elsea, legislative attorneys (January 5, 2006). "Presidential Authority to Conduct Warrantless Electronic Surveillance to Gather Foreign Intelligence Information" (PDF). Congressional Research Service. Retrieved September 28, 2009.
- Linda P. Campbell & Glen Elsasser (October 20, 1991). "Supreme Court Slugfests A Tradition". Chicago Tribune. Retrieved September 11, 2010.
- Eric Cantor (July 30, 2009). "Obama's 32 Czars". The Washington Post. Retrieved September 28, 2009.
- Christopher Lee (January 2, 2006). "Alito Once Made Case For Presidential Power". The Washington Post. Retrieved October 4, 2009.
- Dan Froomkin (March 10, 2009). "Playing by the Rules". The Washington Post. Retrieved October 4, 2009.
- Dana D. Nelson (October 11, 2008). "The 'unitary executive' question". Los Angeles Times. Retrieved October 4, 2009.
- Charlie Savage (March 16, 2009). "Obama Undercuts Whistle-Blowers, Senator Says". The New York Times. Retrieved October 4, 2009.
- Binyamin Appelbaum & David Cho (March 24, 2009). "U.S. Seeks Expanded Power to Seize Firms Goal Is to Limit Risk to Broader Economy". The Washington Post. Retrieved September 28, 2009.
- George F. Will – op-ed columnist (December 21, 2008). "Making Congress Moot". The Washington Post. Retrieved September 28, 2009.
- Davidson (2006), p. 19
- J. Leslie Kincaid (January 17, 1916). "To Make the Militia a National Force: The Power of Congress Under the Constitution "for Organizing, Arming, and Disciplining" the State Troops.". The New York Times. Retrieved September 11, 2010.
- Stephen Herrington (February 25, 2010). "Red State Anxiety and The Constitution". The Huffington Post. Retrieved September 11, 2010.
- "Timeline". CBS News. 2010. Retrieved September 11, 2010.
- Randy E. Barnett (April 23, 2009). "The Case for a Federalism Amendment". The Wall Street Journal. Retrieved September 11, 2010.
- Executive Order 13423 Sec. 9. (l). "The 'United States' when used in a geographical sense, means the fifty states, the District of Columbia, the Commonwealth of Puerto Rico, Guam, American Samoa, the U.S. Virgin Islands, and the Northern Mariana Islands, and associated territorial waters and airspace."
- U.S. State Department, Dependencies and Areas of Special Sovereignty Chart, under "Sovereignty", lists five places under United States sovereignty administered by a local 'Administrative Center', with 'Short form names', American Samoa, Guam, Northern Mariana Islands, Puerto Rico, Virgin Islands, U.S.
- House Learn webpage. Viewed January 26, 2013.
- The Green Papers, 2016 Presidential primaries, caucuses and conventions, viewed September 3, 2015.
- "The very structure of the Constitution gives us profound insights about what the founders thought was important... the Founders thought that the Legislative Branch was going to be the great branch of government." —Hon. John Charles Thomas
- Susan Sachs (January 7, 1999). "Impeachment: The Past; Johnson's Trial: 2 Bitter Months for a Still-Torn Nation". The New York Times. Retrieved September 11, 2010.
- Greene, Richard (January 19, 2005). "Kings in the White House". BBC News. Retrieved October 7, 2007.
- Steven S. Smith; Jason M. Roberts; Ryan J. Vander Wielen (2006). "The American Congress (Fourth Edition)". Cambridge University Press. pp. 18–19. Retrieved September 11, 2010.
- Steven S. Smith; Jason M. Roberts; Ryan J. Vander Wielen (2006). "The American Congress (Fourth Edition)". Cambridge University Press. p. 19. Retrieved September 11, 2010.
- Charles Wolfson (August 11, 2010). "Clinton Presses Senate to Ratify Nuclear Arms Treaty with Russia". CBS News. Retrieved September 11, 2010.
- "Constitutional Interpretation the Old Fashioned Way". Center For Individual Freedom. Retrieved September 15, 2007.
- "Decision of the Supreme Court in the Dred Scott Case". The New York Times. March 6, 1851. Retrieved September 11, 2010.
- Frank Askin (July 21, 2007). "Congress's Power To Compel". The Washington Post. Retrieved September 28, 2009.
- Ben's Guide to US Government (2010). "Congressional Hearings: About". GPO Access. Retrieved September 11, 2010.
- United States government (2010). "Congressional Reports: Main Page". U.S. Government Printing Office. Retrieved September 11, 2010.
- 112th Congress, 1st session (2011). "Tying It All Together: Learn about the Legislative Process". United States House of Representatives. Archived from the original on 2011-04-20. Retrieved April 20, 2011.
- English (2003), pp. 46–47
- English, p. 46
- Schiller, Wendy J. (2000). Partners and Rivals: Representation in U.S. Senate Delegations. Princeton University Press. ISBN 0-691-04887-8.
- "Committees". U.S. Senate. 2010. Retrieved September 12, 2010.
- Committee Types and Roles, Congressional Research Service, April 1, 2003
- "General Information - Library of Congress".
- "The Congressional Research Service and the American Legislative Process" (PDF). Congressional Research Service. 2008. Retrieved July 25, 2009.
- O'Sullivan, Arthur; Sheffrin, Steven M. (2003). Economics: Principles in Action. Upper Saddle River, New Jersey 07458: Pearson Prentice Hall. p. 388. ISBN 0-13-063085-3.
- "Congressional Budget Office – About CBO". Cbo.gov. Archived from the original on December 5, 2010. Retrieved December 20, 2010.
- Washington Representatives (32 ed.). Bethesda, MD: Columbia Books. November 2007. p. 949. ISBN 1-880873-55-9.
- Steven S. Smith; Jason M. Roberts; Ryan J. Vander Wielen (2006). The American Congress (Fourth Edition). Cambridge University Press. pp. 17–18. Retrieved September 11, 2010.
- Partnership for Public Service (March 29, 2009). "Walter Oleszek: A Hill Staffer's Guide to Congressional History and Habit". The Washington Post. Retrieved September 11, 2010.
- "BLACKS: Confronting the President". Time. April 5, 1971. Retrieved September 11, 2010.
- "News from Washington". The New York Times. December 3, 1861. Retrieved September 11, 2010.
- United States government (2010). "Recent Votes". United States Senate. Retrieved September 11, 2010.
- "The U.S. Congress – Votes Database – Members of Congress / Robert Byrd". The Washington Post. June 17, 2010. Archived from the original on November 10, 2010. Retrieved September 11, 2010.
- Larry J. Sabato (September 26, 2007). "An amendment is needed to fix the primary mess". USA Today. Retrieved September 20, 2009.
- Joseph A. Califano Jr. (May 27, 1988). "PAC's Remain a Pox". The New York Times. Retrieved October 2, 2009.
- Brian Kalish (May 19, 2008). "GOP exits to cost party millions". USA TODAY. Retrieved October 1, 2009.
- Susan Page (May 9, 2006). "5 keys to who will control Congress: How immigration, gas, Medicare, Iraq and scandal could affect midterm races". USA TODAY. Retrieved September 11, 2010.
- Macedo, Stephen (August 11, 2008). "Toward a more democratic Congress? Our imperfect democratic constitution: the critics examined" (PDF). Boston University Law Review. 89: 609–628. Retrieved September 20, 2009.
- "Time Essay: Campaign Costs: Floor, Not Ceiling". Time. May 17, 1971. Retrieved October 1, 2009.
- Barbara Borst, Associated Press (October 29, 2006). "Campaign spending up in U.S. congressional elections". USA Today. Retrieved October 1, 2009.
- Dan Froomkin (September 15, 1997). "Campaign Finance – Introduction". The Washington Post. Retrieved October 1, 2009.
- Evan Thomas (April 4, 2008). "At What Cost? – Sen. John Warner and Congress's money culture.". Newsweek. Retrieved October 1, 2009.
- "References about diffname".
- Jean Merl (October 18, 2000). "Gloves Come Off in Attack Ads by Harman, Kuykendall". Los Angeles Times. Retrieved September 30, 2009.
- Shanto Iyengar – Director, Political Communications Lab, Stanford University (August 12, 2008). "Election 2008: The Advertising". The Washington Post. Retrieved September 30, 2009.
- Dave Lesher (September 12, 1994). "COLUMN ONE – TV Blitz Fueled by a Fortune – Once obscure, Huffington now is pressing Feinstein. His well-financed rapid-response team has mounted an unprecedented ad attack.". Los Angeles Times. Retrieved September 30, 2009.
- Howard Kurtz (October 28, 1998). "Democrats Chase Votes With a Safety Net". The Washington Post. Retrieved September 30, 2009.
- James Oliphant (April 9, 2008). "'08 Campaign costs nearing $2 Billion. Is it worth it?". Los Angeles Times. Retrieved October 1, 2009.
- "Campaign Finance Groups Praise Rep. Welch for Cosponsoring Fair Elections Now Act". Reuters. May 19, 2009. Archived from the original on January 22, 2010. Retrieved October 1, 2009.
- John Balzar (May 24, 2006). "Democrats Battle Over a Safe Seat in Congress". Los Angeles Times. Retrieved September 30, 2009.
- "The Congress: An Idea on the March". Time. January 11, 1963. Retrieved September 30, 2009.
- "Decision '92 – SPECIAL VOTERS' GUIDE TO STATE AND LOCAL ELECTIONS – THE CONGRESSIONAL RACES". Los Angeles Times. October 25, 1992. Retrieved September 30, 2009.
- "References about prevalence of attack ads".
- Brooks Jackson & Justin Bank (February 5, 2009). "Radio, Radio – New Democratic ads attacking House Republicans in the lead-up to the 2010 midterm elections don't tell the whole story.". Newsweek. Retrieved September 30, 2009.
- Fredreka Schouten (September 19, 2008). "Union helps non-profit groups pay for attack ads". USA Today. Retrieved September 30, 2009.
- Ruth Marcus (August 8, 2007). "Attack Ads You'll Be Seeing". The Washington Post. Retrieved September 30, 2009.
- Chris Cillizza (September 20, 2006). "Ads, Ads Everywhere!". The Washington Post. Retrieved September 30, 2009.
- Samantha Gross, Associated Press (September 7, 2007). "Coming Soon: Personalized Campaign Ads". The Washington Post. Retrieved September 30, 2009.
- Howard Kurtz (January 6, 2008). "CAMPAIGN ON TELEVISION People May Dislike Attack Ads, but the Messages Tend to Stick". The Washington Post. Retrieved September 30, 2009.
- Steven S. Smith; Jason M. Roberts; Ryan J. Vander Wielen (2006). "The American Congress (Fourth Edition)". Cambridge University Press. p. 21. Retrieved September 11, 2010.
- Alexander Hamilton or James Madison (February 8, 1788). "The Federalist Paper No. 52". Retrieved October 1, 2009.
- "Congress' Approval Rating at Lowest Point for Year". Reuters. September 2, 2009. Retrieved October 1, 2009.
- "THE CONGRESS: Makings of the 72nd (Cont.)". Time. September 22, 1930. Retrieved October 1, 2009.
- Jonathan Peterson (October 21, 1996). "Confident Clinton Lends Hand to Congress Candidates". Los Angeles Times. Retrieved October 1, 2009.
- "References about diffname".
- "THE CONGRESS: Makings of the 72nd (Cont.)". Time. September 22, 1930. Retrieved October 1, 2009.
- Maki Becker (June 17, 1994). "Informed Opinions on Today's Topics – Looking for Answers to Voter Apathy". Los Angeles Times. Retrieved October 1, 2009.
- Daniel Brumberg (October 30, 2008). "America's Re-emerging Democracy". The Washington Post. Retrieved October 1, 2009.
- Karen Tumulty (July 8, 1986). "Congress Must Now Make Own Painful Choices". Los Angeles Times. Retrieved October 1, 2009.
- Janet Hook (December 22, 1997). "As U.S. Economy Flows, Voter Vitriol Ebbs". Los Angeles Times. Retrieved October 1, 2009.
- "Congress gets $4,100 pay raise". USA Today. Associated Press. January 9, 2008. Retrieved September 28, 2009.
- Gallup Poll/Newsweek (October 8, 2009). "Congress and the Public: Congressional Job Approval Ratings Trend (1974 – present)". The Gallup Organization. Retrieved October 8, 2009.
- "References about low approval ratings".
- "Congress' Approval Rating Jumps to 31%". Gallup. February 17, 2009. Retrieved October 1, 2009.
- "Congress' Approval Rating at Lowest Point for Year". Reuters. September 2, 2009. Retrieved October 1, 2009.
- John Whitesides (September 19, 2007). "Bush, Congress at record low ratings: Reuters poll". Reuters. Retrieved October 1, 2009.
- Seung Min Kim (February 18, 2009). "Poll: Congress' job approval at 31%". USA TODAY. Retrieved October 1, 2009.
- interview by David Schimke (September–October 2008). "Presidential Power to the People – Author Dana D. Nelson on why democracy demands that the next president be taken down a notch". Utne Reader. Retrieved September 20, 2009.
- Guy Gugliotta (November 3, 2004). "Politics In, Voter Apathy Out Amid Heavy Turnout". The Washington Post. Retrieved October 1, 2009.
- "Voter Turnout Rate Said to Be Highest Since 1968". The Washington Post. Associated Press. December 15, 2008. Retrieved October 1, 2009.
- Julian E. Zelizer (editor) (2004). "The American Congress: The Building of Democracy". Houghton Mifflin Company. p. xiv–xv. ISBN 0-618-17906-2. Retrieved September 11, 2010.
- Norman, Jim (2016-06-13). "Americans' Confidence in Institutions Stays Low". Gallup. Retrieved 2016-06-14.
- "Roger Sherman and The Connecticut Compromise". Connecticut Judicial Branch: Law Libraries. January 10, 2010. Retrieved January 10, 2010.
- Cass R. Sunstein (October 26, 2006). "It Could Be Worse". The New Republic. Archived from the original on July 30, 2010. Retrieved January 10, 2010.
- Reviewed by Robert Justin Lipkin (January 2007). "OUR UNDEMOCRATIC CONSTITUTION: WHERE THE CONSTITUTION GOES WRONG (AND HOW WE THE PEOPLE CAN CORRECT IT)". Widener University School of Law. Archived from the original on September 25, 2009. Retrieved September 20, 2009.
- Sanford Levinson (2006). "Our Undemocratic Constitution". p. 60. Retrieved January 10, 2010.
- Richard Labunski interviewed by Policy Today's Dan Schwartz (October 18, 2007). "Time for a Second Constitutional Convention?". Policy Today. Retrieved September 20, 2009.
- Charles L. Clapp, The Congressman, His Work as He Sees It (Washington, D.C.: The Brookings Institution, 1963), p. 55; cf. pp. 50–55, 64–66, 75–84.
- Congressional Quarterly Weekly Report 35 (September 3, 1977): 1855. English, op. cit., pp. 48–49, notes that members will also regularly appear at local events in their home district, and will maintain offices in the home congressional district or state.
- Robert Preer (August 15, 2010). "Two Democrats in Senate race stress constituent services". Boston Globe. Retrieved September 11, 2010.
- Daniel Malloy (August 22, 2010). "Incumbents battle association with stimulus, Obama". Pittsburgh Post-Gazette. Retrieved September 11, 2010.
- Amy Gardner (November 27, 2008). "Wolf's Decisive Win Surprised Even the GOP". The Washington Post. Retrieved September 11, 2010.
- William T. Blanco, editor (2000). "Congress on display, Congress at work". University of Michigan. ISBN 0-472-08711-8. Retrieved September 11, 2010.
- Davidson (2006), p. 17
- English (2003), pp. 24–25
- Simpson, G. R. (October 22, 1992). "Surprise! Top Frankers Also Have the Stiffest Challenges". Roll Call.
- Steven S. Smith; Jason M. Roberts; Ryan J. Vander Wielen (2006). "The American Congress (Fourth Edition)". Cambridge University Press. p. 79. Retrieved September 11, 2010.
- Senate Salaries since 1789. United States Senate. Retrieved August 13, 2007.
- Salaries of Members of Congress (PDF). Congressional Research Service. Retrieved August 12, 2007.
- Salaries of Legislative, Executive, and Judicial Officials (PDF). Congressional Research Service. Retrieved August 12, 2007.
- "US Census Bureau news release in regards to median income". Archived from the original on January 17, 2010. Retrieved August 28, 2007.
- Lipton, Eric (January 9, 2014). "Half of Congress Members Are Millionaires, Report Says". The New York Times. Retrieved January 11, 2014.
- "A Quiet Raise—Congressional Pay—special report". The Washington Post. 1998. Retrieved February 23, 2015.
- Lawrence Lessig (February 8, 2010). "How to Get Our Democracy Back". CBS News. Retrieved December 14, 2011.
- Lawrence Lessig (November 16, 2011). "Republic, Lost: How Money Corrupts Congress—and a Plan to Stop It". Google, YouTube, The Huffington Post. Retrieved December 13, 2011 (see 30:13 minutes into the video).
- Scott, Walter (April 25, 2010). "Personality Parade column:Q. Does Congress pay for its own health care?". New York, NY: Parade. p. 2.
- Retirement Benefits for Members of Congress (PDF). Congressional Research Service, February 9, 2007.
- Brody Mullins & T.W. Farnam (December 17, 2009). "Congress Travels More, Public Pays: Lawmakers Ramp Up Taxpayer-Financed Journeys; Five Days in Scotland". The Wall Street Journal. Retrieved December 17, 2009.
- "How To Clean Up The Mess From Inside The System, A Plea—And A Plan—To Reform Campaign Finance Before It's Too". NEWSWEEK. October 28, 1996. Retrieved September 20, 2009.
- "The Constitution and the Idea of Compromise". PBS. October 10, 2009. Retrieved October 10, 2009.
- Alexander Hamilton (1788). "FEDERALIST No. 15 – The Insufficiency of the Present Confederation to Preserve the Union". FoundingFathers.info. Retrieved October 10, 2009.
- Bacon, Donald C.; Davidson, Roger H.; Keller, Morton, editors (1995). Encyclopedia of the United States Congress (4 vols.). Simon & Schuster.
- Collier, Christopher & Collier, James Lincoln (1986). Decision in Philadelphia: The Constitutional Convention of 1787. Ballantine Books. ISBN 0-394-52346-6.
- Davidson, Roger H. & Walter J. Oleszek (2006). Congress and Its Members (10th ed.). Congressional Quarterly (CQ) Press. ISBN 0-87187-325-7. (Legislative procedure, informal practices, and other information)
- English, Ross M. (2003). The United States Congress. Manchester University Press. ISBN 0-7190-6309-4.
- Francis-Smith, Janice (October 22, 2008). "Waging campaigns against incumbents in Oklahoma". The Oklahoma City Journal Record. Retrieved September 20, 2009.[dead link]
- Herrnson, Paul S. (2004). Congressional Elections: Campaigning at Home and in Washington. CQ Press. ISBN 1-56802-826-1.
- Huckabee, David C. (2003). Reelection Rates of Incumbents. Hauppauge, New York: Novinka Books, an imprint of Nova Science Publishers. p. 21. ISBN 1-59033-509-0.
- Huckabee, David C. – Analyst in American National Government – Government Division (March 8, 1995). "Reelection rate of House Incumbents 1790–1990 Summary (page 2)" (PDF). Congressional Research Service – The Library of Congress. Retrieved September 20, 2009.
- Maier, Pauline (book reviewer) (November 18, 2007). "HISTORY – The Framers' Real Motives (book review) UNRULY AMERICANS AND THE ORIGINS OF THE CONSTITUTION book by Woody Holton". The Washington Post. Retrieved October 10, 2009.
- Oleszek, Walter J. (2004). Congressional Procedures and the Policy Process. CQ Press. ISBN 0-87187-477-6.
- Polsby, Nelson W. (2004). How Congress Evolves: Social Bases of Institutional Change. Oxford University Press. ISBN 0-19-516195-5.
- Price, David E. (2000). The Congressional Experience. Westview Press. ISBN 0-8133-1157-8.
- Struble, Robert, Jr. (2007). chapter seven, Treatise on Twelve Lights. TeLL.
- Zelizer, Julian E. (2004). The American Congress: The Building of Democracy. Houghton Mifflin. ISBN 0-618-17906-2.
- Baker, Ross K. (2000). House and Senate, 3rd ed. New York: W. W. Norton. (Procedural, historical, and other information about both houses)
- Barone, Michael and Richard E. Cohen. The Almanac of American Politics, 2006 (2005), elaborate detail on every district and member; 1920 pages
- Berg-Andersson, Richard E. (2001). Explanation of the types of Sessions of Congress (Term of congress)
- Berman, Daniel M. (1964). In Congress Assembled: The Legislative Process in the National Government. London: The Macmillan Company. (Legislative procedure)
- Bianco, William T. (2000) Congress on Display, Congress at Work, University of Michigan Press.
- Hamilton, Lee H. (2004) How Congress Works and Why You Should Care, Indiana University Press.
- Herrick, Rebekah (2001). "Gender effects on job satisfaction in the House of Representatives". Women and Politics. 23 (4): 85–98. doi:10.1300/J014v23n04_04.
- Hunt, Richard (1998). "Using the Records of Congress in the Classroom". OAH Magazine of History. 12 (Summer): 34–37. doi:10.1093/maghis/12.4.34.
- Imbornoni, Ann-Marie, David Johnson, and Elissa Haney. (2005). "Famous Firsts by American Women." Infoplease.
- Lee, Frances and Bruce Oppenheimer. (1999). Sizing Up the Senate: The Unequal Consequences of Equal Representation. University of Chicago Press: Chicago. (Equal representation in the Senate)
- Rimmerman, Craig A. (1990). "Teaching Legislative Politics and Policy Making." Political Science Teacher, 3 (Winter): 16–18.
- Ritchie, Donald A. (2010). The U.S. Congress: A Very Short Introduction. (History, representation, and legislative procedure)
- Smith, Steven S., Jason M. Roberts, and Ryan Vander Wielen (2007). The American Congress (5th ed.). Cambridge University Press. ISBN 0-521-19704-X. (Legislative procedure, informal practices, and other information)
- Story, Joseph. (1891). Commentaries on the Constitution of the United States. (2 vols). Boston: Brown & Little. (History, constitution, and general legislative procedure)
- Tarr, David R. and Ann O'Connor. Congress A to Z (CQ Congressional Quarterly) (4th 2003) 605pp
- Wilson, Woodrow. (1885). Congressional Government. New York: Houghton Mifflin.
- Some information in this article has been provided by the Senate Historical Office.
- U.S. House of Representatives
- U.S. Senate
- Women in Congress, Office of the Clerk, U.S. House of Representatives
- Black Americans in Congress, Office of the Clerk, U.S. House of Representatives
- Congress and Legislation from UCB Libraries GovPubs
- How Laws Are Made, via U.S. Government Printing Office
- Selected Congressional Research Service Reports on Congress and Its Procedures, via Law Librarians' Society of Washington, D.C.
- Sessions of Congress with Corresponding Debate Record Volume Numbers, via Law Librarians' Society of Washington, D.C.
- Legislative Information (Congress.gov) via Library of Congress
- Teaching about the U.S. Congress via U.S. Department of Education
- GovTrack.us, a free reference and tracking tool for congressional legislation and voting records
- Bill Hammons' American Politics Guide – Members of Congress by State, by Committee, and by House District with District Map and Partisan Voting Index
FEDERALIST PARTY. The name "Federalist Party" originated in the ratification debates over the U.S. Constitution. In 1788 the group that favored ratification and a strong central government called themselves "federalists," which at that time indicated a preference for a more consolidated government rather than a loose "confederation" of semi-sovereign states. After the Constitution was ratified, the term "federalist" came to be applied to any supporter of the Constitution and particularly to members of the Washington administration. The term received wide currency with the publication of The Federalist, a series of eighty-five essays by Alexander Hamilton, James Madison, and John Jay arguing for the ratification of the Constitution. Thus, in the early 1790s, not only George Washington, John Adams, and Hamilton, but even Madison, then the floor leader of the administration in the House of Representatives, were all "federalists."
The Washington administration found itself divided, however, over Hamilton's debt, banking, and manufacturing policies, all of which favored the commercial and financial interests of the Northeast over the agrarian interests of the South and West. Foreign policy questions also split Washington's cabinet in his first term, especially the problems arising from treaty obligations to the increasingly radical republicans in France. These questions deeply divided the government, and eventually caused the resignations of the secretary of state, Thomas Jefferson, and James Madison as floor leader. Nevertheless, these questions did not precipitate permanent, consistent political divisions in Congress or in the states.
The Emergence of a Party
The Federalist Party took permanent and consistent form in Washington's second term as president during the controversy over the Jay Treaty with Great Britain. John Jay negotiated a treaty that alienated the frontier interests, the commercial grain exporters of the middle states, and the slaveholders of the South. The division over foreign policy—between "Anglomen" who hoped for favorable relations with Britain and "Gallomen" who hoped for continued strong relations with France—generated a climate of distrust, paranoia, and repression that propelled these foreign policy divisions into sustained political conflict at the elite level and eventually promoted the expansion of a party press, party organizations, and strong party identification in the electorate.
Although the Federalist Party did not arise from the controversy over Hamilton's economic policies, those states and interests that had benefited from Hamiltonian policies tended to favor the Federalists from the beginning. New England and the seaboard states of New Jersey, Delaware, Maryland, and South Carolina favored the Federalists in part because each of these states was dominated by commercial interests and an entrenched social and religious elite. Similarly, the urban seaboard interests and prosperous agrarian regions of Pennsylvania and New York also favored the Federalists. In New England, federalism was closely associated with the Established Congregational church in Connecticut, Massachusetts, and New Hampshire. In the middle states, Federalists tended to be Episcopalian in New York, Presbyterian in New Jersey, and might be either of these, or Quakers, in the area around Philadelphia. In Delaware, on the other hand, Federalists were more likely to be Episcopalians from the lower part of the state, rather than Presbyterians or Quakers from Wilmington.
In the South, federalism dominated only one state, South Carolina, and that was in part the result of its benefit from the Hamiltonian funding policy of state debts. Like the northern Federalists, South Carolina Federalists formed a solid elite in the Low Country along the coast. Mostly Episcopalian and Huguenot Presbyterians, their great wealth and urban commercial interests in Charleston, the South's only significant city, led them to make common cause with Hamiltonians in New England and the middle states. Elsewhere in the South, federalism thrived in regions where the social order was more hierarchical, wealth was greater, and the inroads of evangelicalism were weakest. Thus the Eastern Shore of Maryland, once Loyalist and Anglican, was a Federalist bastion, as were the Catholic counties of southern Maryland. The Tidewater of Virginia was another Federalist stronghold, as were the Cape Fear region of North Carolina and the Lowland counties of Georgia. Outside of a few New England exiles in the Western Reserve area of Ohio, Federalists did not gather much support in the new states of the West.
With strong political support across the Union at the time of Washington's retirement, the Federalists managed to hold the presidency for their party and for their candidate, John Adams, but only by three electoral votes. Adams allowed Washington's cabinet to retain their posts into his new term. They were followers of Alexander Hamilton, arch-Federalists, and far more ideological than Adams himself.
In 1798 the Federalists reached the peak of their national popularity in the war hysteria that followed the XYZ Affair. In the congressional elections of 1798 the Federalists gained greater support in their strongholds in New England, the middle states, Delaware, and Maryland. They made significant gains in Virginia, North Carolina, South Carolina, and Georgia. North and South, the popular slogan in 1798 was "Adams and Liberty." Even as they gained strength over their Democratic Republican adversaries, however, they viewed their opponents with increasing alarm. In a time of war hysteria, extreme Federalists genuinely believed that many Jeffersonians had allied themselves with the most radical factions of Revolutionary France. At a time when the Democratic Republicans were out of favor, their criticisms of the Federalists took on a shrill, often vituperative tone.
The harsh personal criticism by the leading Democratic Republican newspapers prompted some Federalists in Congress to find a way to curb this "licentious" press, punish the opposition editors, and perhaps cripple Democratic Republican political chances in the upcoming presidential election. In Congress, Representative Robert Goodloe Harper of South Carolina and Senator James Lloyd of Maryland introduced legislation in 1798 known as the Alien and Sedition Acts. The Sedition Act, modeled on the British Sedition Act of 1795, made it unlawful to "print, utter, or publish … any false, scandalous, and malicious writing" against any officer of the government. Under the energetic enforcement of Secretary of State Timothy Pickering, the leading Democratic Republican newspapers in Philadelphia, Boston, New York, and Richmond, Virginia, were closed down in 1799.
The Election of 1800
The election year of 1800 was the last time an incumbent Federalist engaged himself in a contest for the presidency. Although Thomas Jefferson later referred to the election as a "revolution," the presidential contest was in fact narrowly won. Only five states chose presidential electors by popular vote, and both parties used every means available, especially legislative selection of electors, to maximize their candidate's electoral vote. This was the first and last year the Federalists and Democratic Republicans contested every single state in the congressional elections. The Republicans won 67 of the 106 seats in the House of Representatives. Despite the decisive popular vote for the Democratic Republicans in Congress, the electoral vote was not at all a clear mandate for Thomas Jefferson. In fact, Jefferson owed his victory in the Electoral College to the infamous "three-fifths" rule, under which slaves were counted (at three-fifths) in congressional and Electoral College apportionment as a concession to the South.
Although the contest for president was mostly conducted in the legislatures and the congressional contests were conducted at the local level, the party press of both the Federalists and the Jeffersonian Republicans played up the contrast between Jefferson and Adams. Jefferson was a "Jacobin," an "atheist," and a "hypocrite" with all his talk about equality, while keeping slaves. Adams was an "aristocrat," a "monocrat," and a defender of hereditary privileges. The religious issue played an important part in the election. The Gazette of the United States put this controversy in its starkest form: "God—And a Religious President; Or Jefferson—And No God!!!"
The Decline of Federalism
The Federalists lost more congressional seats in 1802 and in 1804, despite Hamilton's attempt to inject the religious issue into the former election. Their opposition to the Louisiana Purchase seemed to spell certain doom for them in the West. Thanks to the unpopularity of Jefferson's Embargo Act, however, the Federalist Party experienced a revival in New England and the middle states in 1808 at the congressional and state level. By 1812 the Federalist Party and dissident anti-war Republicans grouped together behind DeWitt Clinton and the "Friends of Peace." With the unpopularity of the war in the Northeast, the Federalists and their anti-war allies gave James Madison a close contest for his reelection. The Federalist Party gained seats in Congress in 1812 and 1814 as the fortunes of war seemed arrayed against the Americans.
Some of the more extreme Federalists, however, including Timothy Pickering and Harrison Gray Otis of Massachusetts and Oliver Wolcott of Connecticut, toyed with New England secession in the midst of this unpopular war. They met in Hartford, Connecticut, from 15 December 1814 to 5 January 1815. Although the Federalist delegates defeated a secession resolution, their party was thereafter associated with disloyalty, and even treason. The end of the war made the Hartford Convention nothing more than an embarrassing irrelevance.
The Federalist Party hung on, however, in a long twilight in the seaboard states of Delaware, Maryland, New Jersey, Connecticut, Massachusetts, and New Hampshire, and even enjoyed a modest revival in Pennsylvania and New York in the early 1820s. It never again held power at the national level after its defeat in the election of 1800, the triumph Jefferson called a "revolution." The death of Alexander Hamilton in 1804 removed the one Federalist leader who had youth, national stature, and significant popular support.
The extended influence of the Federalist Party lay in the judiciary. With the appointment of many Federalists to the bench, John Adams ensured that the Federalists would continue to exert a dominant influence on the federal judiciary for many years to come. Federalist judges predominated until the Era of Good Feeling. Thereafter, federalism continued to have influence in the law, thanks in no small part to the intellectual authority of John Marshall, chief justice of the U.S. Supreme Court, who remained on the Court until his death in 1835.
Banner, James M. To the Hartford Convention: Federalists and the Origins of Party Politics in Massachusetts, 1789–1815. New York: Harper, 1970.
Ben-Atar, Doron, and Barbara B. Oberg, eds. Federalists Reconsidered. Charlottesville: University Press of Virginia, 1998.
Broussard, James. The Southern Federalists. Baton Rouge: Louisiana State University Press, 1978.
Chambers, William Nisbet. The First Party System. New York: John Wiley, 1972.
Dauer, Manning J. The Adams Federalists. Baltimore: Johns Hopkins University Press, 1953.
Elkins, Stanley, and Eric McKitrick. The Age of Federalism: The Early American Republic, 1788–1800. New York: Oxford University Press, 1993.
Fischer, David Hackett. The Revolution of American Conservatism: The Federalist Party in the Era of Jeffersonian Democracy. New York: Harper, 1965.
Formisano, Ronald P. The Transformation of Political Culture: Massachusetts Parties, 1790–1840. New York: Oxford University Press, 1983.
Hofstadter, Richard. The Idea of a Party System: The Rise of Legitimate Opposition in the United States, 1780–1840. Berkeley: University of California Press, 1969.
Kerber, Linda. Federalists in Dissent: Imagery and Ideology in Jeffersonian America. Ithaca, N.Y.: Cornell University Press, 1970.
Miller, John C. The Federalist Era, 1789–1801. New York: Harper, 1960.
Sharp, James Roger. American Politics in the Early Republic: The New Nation in Crisis. New Haven, Conn.: Yale University Press, 1993.
"Federalist Party." Dictionary of American History. 2003. Encyclopedia.com. (June 30, 2016). http://www.encyclopedia.com/doc/1G2-3401801497.html
"Federalist Party." Dictionary of American History. 2003. Retrieved June 30, 2016 from Encyclopedia.com: http://www.encyclopedia.com/doc/1G2-3401801497.html
The Federalist Party was an American political party during the late eighteenth and early nineteenth centuries. It originated in the loosely affiliated groups advocating the creation of a stronger national government after 1781 and culminated with the laws and policies established by Federalist lawmakers from 1789 to 1801. These laws and policies laid the foundation for a strong central government in the United States, thereby securing the transition from the provisional national government established during the Revolutionary War and continuing under the Articles of Confederation to the intricate system of checks and balances contemplated for the three branches of government in the U.S. Constitution.
The Federalist party's early leaders included Alexander Hamilton, John Jay, James Madison, and George Washington. These men provided much of the impetus and organization behind the movement to draft and ratify the federal Constitution. Their support came from the established elites of old wealth in the commercial cities and in the less rapidly developing rural regions.
Even before the Articles of Confederation were ratified by the original 13 states in 1781, prominent Americans were criticizing the document for having failed to create a strong federal government. In 1783, George Washington, as commander in chief of the army, sent a circular to state governors discussing the need to "add tone to our federal government." Three years later Washington and his political allies were referring to those who opposed strengthening the power of the central government under the Articles of Confederation as "antifederal."
At the Constitutional Convention in 1787, those favoring a stronger central government drafted a Constitution that greatly increased the powers of Congress and the executive. Debate over ratification of the Constitution sharpened the lines separating those who called themselves federalists and those who called themselves antifederalists. Much of this debate was formalized in The Federalist, later called The Federalist Papers.
Originally written as 85 tracts under the name Publius, the pro-Federalist essays were published in New York City newspapers between October 27, 1787, and May 28, 1788. Each essay was written to persuade the people of New York to elect delegates who would ratify the federal Constitution in the forthcoming state convention. Alexander Hamilton and James Madison were the principal authors, while John Jay wrote five essays. The Federalist Papers are today considered America's most important political treatise and the most authoritative source for understanding the original intent of the Founding Fathers.
After the Constitution was ratified, the Federalist party dominated the national government until 1801. The Federalists believed that the Constitution should be loosely interpreted to build up federal power. They were generally pro-British, favored the interests of commerce and manufacturing over agriculture, and wanted the new government to be developed on a sound financial basis. Accordingly, Secretary of Treasury Hamilton proposed tax increases and the establishment of a national bank.
During their 12-year reign, the Federalist party settled the problems of the revolutionary debt, sought closer relations with Great Britain in Jay's Treaty of 1794, and tried to silence their domestic critics with the Alien and Sedition Acts of 1798. These repressive laws cost the Federalist party much of its support, including that of Madison, who with Thomas Jefferson organized the Democratic-Republican Party.
The Democratic-Republicans, also known as just the Republicans, opposed the policies and laws of the Federalist party at every turn. Republicans were generally pro-French and pro-agriculture. They believed that the Constitution should be strictly interpreted, favored strong, independent states at the expense of the federal government, and opposed the creation of a national bank.
The Federalist party lost control of the national government when Jefferson became president in 1801. The Federalists continued to diminish in popularity for the next 20 years. The party's last significant political victory came in the impeachment trial of Samuel Chase, an associate justice of the U.S. Supreme Court and staunch Federalist, who had been impeached by a Republican-controlled House of Representatives for what it called judicial misconduct. However, in his trial before the Senate, Chase and his attorney convinced enough senators that the impeachment charges amounted to little more than partisan politics and that convicting Chase would imperil the independence of the federal judiciary. Chase was thus acquitted on all eight articles of impeachment.
The Federalist party ceased to exist as a national organization after the election of 1816, in which Republican James Monroe defeated Federalist Rufus King. However, the party remained influential in a number of states until it disappeared completely during the 1820s. Most Federalists, such as Daniel Webster, joined the National Republican Party in the 1820s and later the Whig Party in the 1830s.
Boyer, Paul S. 2001. Oxford Companion to United States History. New York: Oxford Univ. Press.
Hall, Kermit L. 1992. Oxford Companion to the Supreme Court of the United States. New York: Oxford Univ. Press.
Lenner, Andrew. 1996. "A Tale of Two Constitutions: Nationalism in the Federalist Era." American Journal of Legal History 40 (January): 72–105.
Lynch, Joseph M. 2000. "The Federalists and The Federalist: A Forgotten History." Seton Hall Law Review 31 (winter): 18–29.
"Federalist Party." West's Encyclopedia of American Law. 2005. Encyclopedia.com. (June 30, 2016). http://www.encyclopedia.com/doc/1G2-3437701785.html
"Federalist Party." West's Encyclopedia of American Law. 2005. Retrieved June 30, 2016 from Encyclopedia.com: http://www.encyclopedia.com/doc/1G2-3437701785.html
Federalist party, in U.S. history, the political faction that favored a strong federal government.
Origins and Members
In the later years of the Articles of Confederation there was much agitation for a stronger federal union, which was crowned with success when the Constitutional Convention drew up the Constitution of the United States. The men who favored the strong union and who fought for the adoption of the Constitution by the various states were called Federalists, a term made famous in that meaning by the Federalist Papers (see Federalist, The) of Alexander Hamilton, James Madison, and John Jay.
After the Constitution was adopted and the new government was established under the presidency of George Washington, political division appeared within the cabinet, the opposing groups being headed by Alexander Hamilton and by Thomas Jefferson. The party that emerged to champion Hamilton's views was the Federalist party. Its opponents, at first called Anti-Federalists, drew together into a Jeffersonian party; first called the Republicans and later the Democratic Republicans, they eventually became known as the Democratic party. Party politics had not yet crystallized when John Adams was elected President, but the choice of Adams was, nevertheless, a modest Federalist victory.
The Federalists were conservatives; they favored a strong centralized government, encouragement of industries, attention to the needs of the great merchants and landowners, and establishment of a well-ordered society. In foreign affairs they were pro-British, while the Jeffersonians were pro-French. The members of the Federalist party were mostly wealthy merchants, big property owners in the North, and conservative small farmers and businessmen. Geographically, they were concentrated in New England, with a strong element in the Middle Atlantic states.
During Washington's second administration, and under that of John Adams, Federalist domestic policies were given a chance to prove themselves. The young nation's economy was established on a sound basis, while the governmental structure was expanded and an honest and efficient administrative system was developed. In foreign affairs, however, trouble with France led to virtual warfare in 1798. It led also to the Alien and Sedition Acts, passed by the Federalist-controlled Congress ostensibly in response to hostile actions of the French Revolutionary government but actually designed to destroy the Jeffersonians. John Adams, who was a moderate and honest man, followed the course he considered wise, and by rejecting Hamilton's extreme desires, he caused something of a division in the Federalist ranks.
The Triumph of the Jeffersonian Opposition
The Jeffersonians were meanwhile winning popular support not only among Southern landowners but also among the mechanics, workers, and generally the less privileged everywhere. Jefferson showed skill in building his party, and the Jeffersonians were much better at publicity than were the Federalists.
The election of 1800 was a Federalist debacle. The Jeffersonians came to power and stayed there, establishing the so-called Virginia dynasty, with James Madison succeeding Jefferson and James Monroe succeeding Madison. The Federalist party remained powerful locally, but increasingly the leadership passed to the reactionaries rather than to the moderates. It tended to be a New England party.
This trend was accentuated in the troubled period before the War of 1812. Merchants and shipowners were opposed to the Embargo Act of 1807, which caused considerable economic loss to the seaboard cities, and their feelings were expressed through the Federalist party. The Federalists, however, failed to enlist De Witt Clinton and his followers in New York in their cause, and their challenge in the elections of 1808 was easily overridden by the Jeffersonians.
Dissolution of the Party
Opposition to war brought the Federalists the support of Clinton and many others, and the party made a good showing in the election of 1812, winning New England (except for radical Vermont), New York, New Jersey, Delaware, and part of Maryland. They failed, however, in Pennsylvania and lost the election. While the country was at war, the disgruntled merchants of New England, represented by the Essex Junto, contemplated secession and called the Hartford Convention. Thus, paradoxically the Federalists became the champions of states' rights.
The successful issue of the war ruined the party, which became firmly and solely the party of New England conservatives. The so-called era of good feelings followed, and politics became a matter of internal strife within the Democratic party. The Federalist party did not even offer a presidential candidate in 1820, and by the election of 1824 it was virtually dead.
Accounting: Objectives, Characteristics, Advantages, Disadvantages and Role of Accounting
The American Institute of Certified Public Accountants (AICPA) defines accounting as the art of recording, classifying, and summarising, in monetary terms, transactions and events, and interpreting the results. The main aim of the accounting process is to ascertain the net results of an organisation’s operations and its financial position so that the firm can communicate these to the interested parties or users of accounting information. The nature of accounting is dynamic and analytical and hence requires special abilities and skills in an individual to interpret the information effectively.
The accounting process involves summarising, analysing, and reporting these transactions to supervisors, regulators, and tax collectors. Financial statements used in accounting are a summary of financial transactions for an accounting period, summarising a company’s operations and cash flows.
Objectives of Accounting:
Following are the objectives of accounting:
1. Record: The basic role of accounting is to maintain a systematic, complete, accurate, and permanent record of all business transactions that can be searched and checked at any time. Reliable financial records are the backbone of any accounting system, without which all other accounting objectives will be compromised.
2. Planning: Organisations must plan how they intend to allocate their limited resources (e.g., cash, labour, materials, machinery, and equipment) to competing needs in the future. An effective way to do this is to use different forms of budgets. Budgets allow organisations to plan ahead by anticipating business needs and resources. Budgeting helps in coordinating various segments of the organisation.
3. Decision: Accounting helps managers make a number of business decisions and create policies to make organisational processes more efficient. Examples of management decisions that are based on accounting information include: What price should be charged for products and services to achieve maximum profit; Which products should be produced when resources such as cash, labour, or materials are scarce to maximise profit, etc.
4. Performance: Accounting helps determine how well a business is doing by summarising financial information into quantifiable indicators (e.g., sales revenue, profit, costs, etc.). It is important for organisations to have a reliable source for measuring their KPIs so that they can improve by comparing their past performance and their competition.
5. Liquidity: Poor cash management is often the reason for the failure of many businesses. Accounting helps businesses determine how much cash and other liquid resources they have available to pay their financial obligations. This information is essential for working capital management and helps organizations reduce the risk of bankruptcy by early detection of financial bottlenecks.
6. Financing: Accounting information is necessary to secure finances. Whether an organisation is applying for a bank loan or shareholder investment, it will need to provide historical financial records (e.g., profit or loss for the last five years) as well as financial projections (e.g., projected sales for the next 3 years).
7. Management: One of the key objectives of an accounting system is to place sufficient internal controls in an organisation to protect its valuable resources. Business assets (e.g., cash, buildings, inventory, etc.) are susceptible to loss due to theft, fraud, error, obsolescence, damage, and mismanagement. Accounting ensures that these risks are reduced to an acceptable level by implementing various controls across the organisation. For example, an organisation’s accounting policy may require that payments above a certain threshold be approved by a senior member of management to ensure accuracy and minimise the risk of fraudulent payment.
8. Responsibility: Accounting provides a basis for evaluating the performance of a business over a period of time, which promotes accountability at several levels of the organisation. Shareholders can ultimately hold directors accountable for the overall performance of their company based on the accounting information disclosed in the financial statements.
9. Users: The role of accounting is not limited to the informational needs of the company’s employees and investors. Accounting today fulfils the information needs of a diverse group of stakeholders, each with their own information requirement.
Main Characteristics of Accounting:
The following attributes or characteristics can be derived from the definition of accounting:
1. Reliability: Reliability can be defined as the ability to trust. Accounting helps in providing reliable information to businesses. Reliable information should be free of errors and distortions, and should correctly represent what it purports to represent. To ensure reliability, the published facts should be credible, neutral, and verifiable by independent parties using an identical measurement approach.
2. Relevance: Relevant information is recorded and presented in the process of accounting. To be relevant, facts must be available in a timely manner, must assist in forecasting and feedback, and should influence users’ decisions by: (a) helping them form a prediction about the outcome of a past, current or future event; and (b) confirming or correcting their previous evaluations.
3. Clarity: Accounting helps in providing clear information about all business transactions. It is an art of recording, classifying, and summarising accounting information.
4. Comparability: With proper accounting, records relating to various costs, sales, gross and net profit, etc., can be compared. As such, accounting helps in inter-company and intra-company comparisons.
Advantages of Accounting
The main benefits of accounting include:
1. A complete and systematic record: Accounting is based on generally accepted principles and a scientific way of presenting business transactions in books of accounts. Accounting as such is the complete and systematic recording of all business transactions. The limitation of people not being able to remember all transactions can be overcome by accounting because every business transaction can be recorded and analysed through it.
2. Determination of the selling price: The main function of management is decision-making. Accounting helps and guides management in making decisions about setting the selling price, deducting costs, increasing sales, etc.
3. Valuation of the enterprise: In the case of the sale of a business or conversion of one business to another, the actual and fair value of the business is calculated. Through accounting, the correct picture can be displayed on the balance sheet, and thus the purchase price can be determined. A balance sheet shows the value of a business’s assets and liabilities, which can be used to calculate its net worth.
4. It helps in obtaining a loan: For further expansion, the business must have sufficient funds. Sometimes due to lack of funds, the business cannot do well. In these cases, additional funds can be obtained by borrowing from some financial institutions, like banks, IDBI, ICICI, etc. These financial institutions lend money based on the profitability and reliability of the business. Profitability and reliability can be measured using the Profit and Loss Statements and the Balance Sheet, the final results of the accounting process.
5. Evidence in court: Business transactions are recorded in accounting books supported by certified documents, viz. vouchers, etc. Accounts can thus be used as evidence in court.
6. In accordance with the law: Every business has to deal with various government departments, like Income Tax, Sales Tax, Customs and Excise, etc. Various regular returns need to be filed with these departments. Accounting helps in the preparation and filing of such returns.
7. Inter-company or intra-company comparison: A trading account and a profit and loss account show the net profit or net loss incurred by the business. With proper accounting, records relating to various costs, sales, gross and net profit, etc., can be compared. As such, accounting helps in inter-company and intra-company comparisons. Comparing the accounts of two different companies for the same year is known as inter-company comparison and comparing two different periods for the same company is known as an intra-company comparison. The company’s performance is then compared with predetermined goals, and any deficiencies can be corrected accordingly.
8. Facilitates auditing: Depending on the size, nature, and type of business, certification of the books of account, known as an audit, is mandatory. The audit certificate issued by the auditor is a clean report for the organisation, showing that there are no irregularities in its accounts.
9. Effective management: Accounting facilitates proper management feedback. As such, it helps the management in planning as well as controlling the various activities of the enterprise. It also helps the management to evaluate the performance of the company and take timely measures to eliminate management deficiencies.
Disadvantages of Accounting
1. Does not guarantee accuracy: Accounting records all financial transactions at their historical value. It does not take into account the fair or market value of assets and liabilities, and recorded values are easy to manipulate.
2. Actual value of items: Financial accounting does not show the actual (current) value of assets; it shows their historical value. Depreciation can be charged by any method and at any rate.
3. Accounting ignores the qualitative element: It records all financial transactions that are in monetary form but doesn’t consider qualitative factors, i.e., emotions, employees, relationships and public relations.
4. Accounts can be manipulated: Accounts can be manipulated to avoid tax and show a false position to investors. By making small changes to the account, the financial statements can be manipulated.
5. Costly for a small business: A small business has limited funds, so obtaining proper accounting tools and having the books audited by a chartered accountant can be very expensive.
6. Business privacy: There is little privacy for a business that must publish its accounts, since the financial statements are available to the general public, including competitors.
Role of Accounting in business
1. Evaluates business performance: The financial situation of a business can be represented with the help of accounting statements. Once you have a clear idea of what is going on in your business financially, you can plan your future tasks accordingly. You will also be able to track expenses effortlessly, which helps in allocating the budget.
2. Create budget projections: Accounting also helps in creating future projections that have the power to make or break the business. It helps to evaluate business trends and projections to keep the operations profitable. Thus, it’s important to have a well-structured accounting process.
3. Maintain financial statements: Accounting also helps in preparing financial statements. Every business must file its financial statements for tax purposes. If you have proper records of your business finances, you can easily handle all scenarios and achieve your goals.
4. Ensure compliance with the law: Businesses need legal compliance to ensure their accounting system is validated against various laws and regulations. All liabilities, such as income tax, sales tax, pensions, employee funds, etc., can be easily dealt with if we have a structured accounting system.
In computer architecture, 128-bit integers, memory addresses, or other data units are those that are 128 bits (16 octets) wide. Also, 128-bit central processing unit (CPU) and arithmetic logic unit (ALU) architectures are those that are based on registers, address buses, or data buses of that size.
While there are currently no mainstream general-purpose processors built to operate on 128-bit integers or addresses, a number of processors do have specialized ways to operate on 128-bit chunks of data.
128-bit processors could be used for addressing directly up to 2^128 (over 3.40×10^38) bytes, which would greatly exceed the total data captured, created, or replicated on Earth as of 2018, which has been estimated to be around 33 zettabytes (over 2^74 bytes).
A 128-bit register can store 2^128 (over 3.40 × 10^38) different values. The range of integer values that can be stored in 128 bits depends on the integer representation used. With the two most common representations, the range is 0 through 340,282,366,920,938,463,463,374,607,431,768,211,455 (2^128 − 1) for representation as an (unsigned) binary number, and −170,141,183,460,469,231,731,687,303,715,884,105,728 (−2^127) through 170,141,183,460,469,231,731,687,303,715,884,105,727 (2^127 − 1) for representation as two's complement.
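These bounds are straightforward to verify with arbitrary-precision integers. The following Python sketch (an illustrative check, not part of the original article) reproduces the unsigned and two's-complement ranges quoted above.

```python
# Verify the 128-bit integer ranges quoted above using Python's
# arbitrary-precision integers.
BITS = 128

unsigned_max = 2**BITS - 1          # 2^128 - 1
signed_min = -(2**(BITS - 1))       # -2^127
signed_max = 2**(BITS - 1) - 1      # 2^127 - 1

print(f"unsigned: 0 .. {unsigned_max:,}")
print(f"two's complement: {signed_min:,} .. {signed_max:,}")
```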
Quadruple precision (128-bit) floating-point numbers can store 113-bit fixed-point numbers or integers accurately without losing precision (thus 64-bit integers in particular). Quadruple precision floats can also represent any position in the observable universe with at least micrometer precision.
Decimal128 floating-point numbers can represent numbers with up to 34 significant digits.
Most modern CPUs feature single instruction, multiple data (SIMD) instruction sets (Streaming SIMD Extensions, AltiVec etc.) where 128-bit vector registers are used to store several smaller numbers, such as four 32-bit floating-point numbers. A single instruction can then operate on all these values in parallel. However, these processors do not operate on individual numbers that are 128 binary digits in length; only their vector registers have the size of 128 bits.
The DEC VAX supported operations on 128-bit integer ('O' or octaword) and 128-bit floating-point ('H-float' or HFLOAT) datatypes. Support for such operations was an upgrade option rather than being a standard feature. Since the VAX's registers were 32 bits wide, a 128-bit operation used four consecutive registers or four longwords in memory.
A CPU with 128-bit multimedia extensions was designed by researchers in 1999.
The Dreamcast and the PlayStation 2, among the sixth generation of video game consoles, used the term "128-bit" in their marketing to describe their capability. The PlayStation 2's CPU had 128-bit SIMD capabilities. Neither console supported 128-bit addressing or 128-bit integer arithmetic.
The RISC-V ISA specification from 2016 includes a reservation for a 128-bit version of the architecture, but the details remain intentionally undefined, because there is as yet so little practical experience with such a large word size.
In the same way that compilers emulate e.g. 64-bit integer arithmetic on architectures with register sizes less than 64 bits, some compilers also support 128-bit integer arithmetic. For example, the GCC C compiler 4.6 and later has a 128-bit integer type __int128 for some architectures. GCC and compatible compilers signal the presence of 128-bit arithmetic when the macro __SIZEOF_INT128__ is defined. For the C programming language, 128-bit support is optional, e.g. via the int128_t type, or it can be implemented by a compiler-specific extension. The Rust programming language has built-in support for 128-bit integers (originally via LLVM), which is implemented on all platforms. A 128-bit type provided by a C compiler can be made available in Perl via the Math::Int128 module.
- The RISC-V free and open instruction set architecture is defined for 32-, 64- and 128-bit integer data widths.
- Universally unique identifiers (UUID) consist of a 128-bit value.
- IPv6 routes computer network traffic amongst a 128-bit range of addresses.
- ZFS is a 128-bit file system.
- 128 bits is a common key size for symmetric ciphers and a common block size for block ciphers in cryptography.
- The IBM i virtual instruction set defines all pointers as 128-bit. This gets translated to the hardware's real instruction set as required, allowing the underlying hardware to change without needing to recompile the software. Past hardware was 48-bit CISC, while current hardware is 64-bit PowerPC. Because pointers are defined to be 128-bit, future hardware may be 128-bit without software incompatibility.
- Increasing the word size can speed up multiple precision mathematical libraries, with applications to cryptography, and potentially speed up algorithms used in complex mathematical processing (numerical analysis, signal processing, complex photo editing and audio and video processing).
- MD5 is a hash function producing a 128-bit hash value; several of the 128-bit quantities in this list are illustrated in the sketch that follows it.
- Apache Avro uses a 128-bit random number as synchronization marker for efficient splitting of data files.
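A few of these 128-bit quantities can be inspected directly from a high-level language. The Python sketch below is illustrative only; it uses the standard uuid, hashlib and ipaddress modules to confirm that a UUID, an MD5 digest and an IPv6 address each occupy exactly 128 bits (16 octets).

```python
import hashlib
import ipaddress
import uuid

u = uuid.uuid4()                                # a random 128-bit UUID
digest = hashlib.md5(b"128 bits").digest()      # a 16-byte MD5 hash value
addr = ipaddress.IPv6Address("2001:db8::1")     # a 128-bit IPv6 address

print(len(u.bytes) * 8)      # 128
print(len(digest) * 8)       # 128
print(addr.max_prefixlen)    # 128
```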
- Reinsel, David; Gantz, John; Rydning, John (November 2018). "The Digitalization of the World from Edge to Core" (PDF). Seagate Technology. IDC. p. 3. Archived (PDF) from the original on 7 September 2021. Retrieved 14 September 2021.
- Mead, Carver A.; Pashley, Richard D.; Britton, Lee D.; Daimon, Yoshiaki T.; Sando, Stewart F., Jr. (October 1976). "128-Bit Multicomparator" (PDF). IEEE Journal of Solid-State Circuits. 11 (5): 692–695. Bibcode:1976IJSSC..11..692M. doi:10.1109/JSSC.1976.1050799. S2CID 27262034. Archived (PDF) from the original on 3 November 2018.
- Padegs A (1968). "Structural aspects of the System/360 Model 85, III: Extensions to floating-point architecture". IBM Systems Journal. 7: 22–29. doi:10.1147/sj.71.0022.
- Assembler Instructions (BS2000/OSD). 1993.
- Suzuoki, M.; Kutaragi, K.; Hiroi, T.; Magoshi, H.; Okamoto, S.; Oka, M.; Ohba, A.; Yamamoto, Y.; Furuhashi, M.; Tanaka, M.; Yutaka, T.; Okada, T.; Nagamatsu, M.; Urakawa, Y.; Funyu, M.; Kunimatsu, A.; Goto, H.; Hashimoto, K.; Ide, N.; Murakami, H.; Ohtaguro, Y.; Aono, A. (November 1999). "A microprocessor with a 128-bit CPU, ten floating-point MAC's, four floating-point dividers, and an MPEG-2 decoder". IEEE Journal of Solid-State Circuits. 34 (11): 1608–1618. Bibcode:1999IJSSC..34.1608S. doi:10.1109/4.799870.
- John L. Hennessy and David A. Patterson. "Computer Architecture: A Quantitative Approach, Third Edition". ISBN 1-55860-724-2
- Keith Diefendorff. "Sony's Emotionally Charged Chip". Microprocessor Report, Volume 13, Number 5, April 19, 1999. Microdesign Resources.
- Woligroski, Don (24 July 2006). "The Graphics Processor". Tom's Hardware. Archived from the original on 11 April 2013. Retrieved 24 February 2013.
- Waterman, Andrew; Asanović, Krste. "The RISC-V Instruction Set Manual, Volume I: Base User-Level ISA version 2.2". University of California, Berkeley. EECS-2016-118. Retrieved 25 May 2017.
- "GCC 4.6 Release Series - Changes, New Features, and Fixes". Retrieved 25 July 2016.
- Marc Glisse (26 August 2015). "128-bit integer - nonsensical documentation?". GCC-Help. Retrieved 23 January 2020.
- "i128 - Rust". doc.rust-lang.org. Retrieved 25 June 2020.
- "Math::Int128". metacpan.org. Retrieved 25 June 2020.
- Kleppmann, Martin (24 January 2013). "Re: Synchronization Markers". Archived from the original on 27 September 2015.
- "Apache Avro 1.8.0 Specification". Apache Software Foundation. |
Work, Energy, and Energy Resources
- Explain how an object must be displaced for a force on it to do work.
- Explain how relative directions of force and displacement determine whether the work done is positive, negative, or zero.
What It Means to Do Work
The scientific definition of work differs in some ways from its everyday meaning. Certain things we think of as hard work, such as writing an exam or carrying a heavy load on level ground, are not work as defined by a scientist. The scientific definition of work reveals its relationship to energy—whenever work is done, energy is transferred.
For work, in the scientific sense, to be done, a force must be exerted and there must be displacement in the direction of the force.
Formally, the work done on a system by a constant force is defined to be the product of the component of the force in the direction of motion times the distance through which the force acts. For one-way motion in one dimension, this is expressed in equation form as
W = |F| (cos θ) |d|,
where W is work, d is the displacement of the system, and θ is the angle between the force vector F and the displacement vector d, as in [link]. We can also write this as
W = Fd cos θ.
To find the work done on a system that undergoes motion that is not one-way or that is in two or three dimensions, we divide the motion into one-way one-dimensional segments and add up the work done over each segment.
The work done on a system by a constant force is the product of the component of the force in the direction of motion times the distance through which the force acts. For one-way motion in one dimension, this is expressed in equation form as
W = Fd cos θ,
where W is work, F is the magnitude of the force on the system, d is the magnitude of the displacement of the system, and θ is the angle between the force vector F and the displacement vector d.
To examine what the definition of work means, let us consider the other situations shown in [link]. The person holding the briefcase in [link](b) does no work, for example. Here d = 0, so W = 0. Why is it you get tired just holding a load? The answer is that your muscles are doing work against one another, but they are doing no work on the system of interest (the “briefcase-Earth system”—see Gravitational Potential Energy for more details). There must be displacement for work to be done, and there must be a component of the force in the direction of the motion. For example, the person carrying the briefcase on level ground in [link](c) does no work on it, because the force is perpendicular to the motion. That is, θ = 90º, so cos θ = 0, and so W = 0.
In contrast, when a force exerted on the system has a component in the direction of motion, such as in [link](d), work is done—energy is transferred to the briefcase. Finally, in [link](e), energy is transferred from the briefcase to a generator. There are two good ways to interpret this energy transfer. One interpretation is that the briefcase’s weight does work on the generator, giving it energy. The other interpretation is that the generator does negative work on the briefcase, thus removing energy from it. The drawing shows the latter, with the force from the generator upward on the briefcase, and the displacement downward. This makes θ = 180º, and cos 180º = −1; therefore, W is negative.
Work and energy have the same units. From the definition of work, we see that those units are force times distance. Thus, in SI units, work and energy are measured in newton-meters. A newton-meter is given the special name joule (J), and 1 J = 1 N·m = 1 kg·m^2/s^2. One joule is not a large amount of energy; it would lift a small 100-gram apple a distance of about 1 meter.
How much work is done on the lawn mower by the person in [link](a) if he exerts a constant force of 75.0 N at an angle 35.0º below the horizontal and pushes the mower 25.0 m on level ground? Convert the amount of work from joules to kilocalories and compare it with this person’s average daily intake of 10,000 kJ (about 2400 kcal) of food energy. One calorie (1 cal) of heat is the amount required to warm 1 g of water by 1ºC, and is equivalent to 4.184 J, while one food calorie (1 kcal) is equivalent to 4184 J.
We can solve this problem by substituting the given values into the definition of work done on a system, stated in the equation W = Fd cos θ. The force, angle, and displacement are given, so that only the work W is unknown.
The equation for the work is
W = Fd cos θ.
Substituting the known values gives
W = (75.0 N)(25.0 m) cos 35.0º = 1536 J = 1.54 × 10^3 J.
Converting the work in joules to kilocalories yields W = (1536 J)(1 kcal / 4184 J) = 0.367 kcal. The ratio of the work done to the daily consumption is
W / (2400 kcal) = 1.53 × 10^-4.
This ratio is a tiny fraction of what the person consumes, but it is typical. Very little of the energy released in the consumption of food is used to do work. Even when we “work” all day long, less than 10% of our food energy intake is used to do work and more than 90% is converted to thermal energy or stored as chemical energy in fat.
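For readers who want to reproduce the arithmetic, here is a minimal Python sketch of the calculation in the example above. The numerical inputs are the ones used in the example as reconstructed here, so treat them as illustrative.

```python
import math

F = 75.0       # force in newtons (from the example above)
d = 25.0       # displacement in meters
theta = 35.0   # angle between force and displacement, in degrees

W = F * d * math.cos(math.radians(theta))   # work in joules
W_kcal = W / 4184                           # 1 kcal = 4184 J

print(f"W = {W:.0f} J = {W_kcal:.3f} kcal")
print(f"fraction of a 2400 kcal daily intake: {W_kcal / 2400:.2e}")
```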
- Work is the transfer of energy by a force acting on an object as it is displaced.
- The work that a force does on an object is the product of the magnitude of the force, times the magnitude of the displacement, times the cosine of the angle between them. In symbols, W = Fd cos θ.
- The SI unit for work and energy is the joule (J), where 1 J = 1 N·m = 1 kg·m^2/s^2.
- The work done by a force is zero if the displacement is either zero or perpendicular to the force.
- The work done is positive if the force and displacement have the same direction, and negative if they have opposite direction.
Give an example of something we think of as work in everyday circumstances that is not work in the scientific sense. Is energy transferred or changed in form in your example? If so, explain how this is accomplished without doing work.
Give an example of a situation in which there is a force and a displacement, but the force does no work. Explain why it does no work.
Describe a situation in which a force is exerted for a long time but does no work. Explain.
Problems & Exercises
How much work does a supermarket checkout attendant do on a can of soup he pushes 0.600 m horizontally with a force of 5.00 N? Express your answer in joules and kilocalories.
A 75.0-kg person climbs stairs, gaining 2.50 meters in height. Find the work done to accomplish this task.
(a) Calculate the work done on a 1500-kg elevator car by its cable to lift it 40.0 m at constant speed, assuming friction averages 100 N. (b) What is the work done on the lift by the gravitational force in this process? (c) What is the total work done on the lift?
(c) The net force is zero.
Suppose a car travels 108 km at a speed of 30.0 m/s, and uses 2.0 gal of gasoline. Only 30% of the gasoline goes into useful work by the force that keeps the car moving at constant speed despite friction. (See [link] for the energy content of gasoline.) (a) What is the magnitude of the force exerted to keep the car moving at constant speed? (b) If the required force is directly proportional to speed, how many gallons will be used to drive 108 km at a speed of 28.0 m/s?
Calculate the work done by an 85.0-kg man who pushes a crate 4.00 m up along a ramp that makes an angle of 20.0º with the horizontal. (See [link].) He exerts a force of 500 N on the crate parallel to the ramp and moves at a constant speed. Be certain to include the work he does on the crate and on his body to get up the ramp.
How much work is done by the boy pulling his sister 30.0 m in a wagon as shown in [link]? Assume no friction acts on the wagon.
A shopper pushes a grocery cart 20.0 m at constant speed on level ground, against a 35.0 N frictional force. He pushes in a direction 25.0º below the horizontal. (a) What is the work done on the cart by friction? (b) What is the work done on the cart by the gravitational force? (c) What is the work done on the cart by the shopper? (d) Find the force the shopper exerts, using energy considerations. (e) What is the total work done on the cart?
(c) 700 J
(d) 38.6 N
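The answers quoted for parts (c) and (d) can be checked with a few lines of Python; the sketch below follows the energy argument suggested in the problem (the 25.0º angle is taken from the problem statement as given here).

```python
import math

f_friction = 35.0   # frictional force in newtons, opposing the motion
d = 20.0            # displacement in meters
theta = 25.0        # push direction, degrees below the horizontal

# (c) At constant speed the shopper's work must cancel the friction work.
W_shopper = f_friction * d
# (d) Only the horizontal component of the push balances friction.
F_shopper = f_friction / math.cos(math.radians(theta))

print(W_shopper)             # 700.0 J
print(round(F_shopper, 1))   # 38.6 N
```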
Suppose the ski patrol lowers a rescue sled and victim, having a total mass of 90.0 kg, down a slope at constant speed, as shown in [link]. The coefficient of friction between the sled and the snow is 0.100. (a) How much work is done by friction as the sled moves 30.0 m along the hill? (b) How much work is done by the rope on the sled in this distance? (c) What is the work done by the gravitational force on the sled? (d) What is the total work done?
- energy: the ability to do work
- work: the transfer of energy by a force that causes an object to be displaced; the product of the component of the force in the direction of the displacement and the magnitude of the displacement
- joule: SI unit of work and energy, equal to one newton-meter
The following proof of the Pythagorean theorem using trigonometry was discovered (or is the proper word invented?) by David Houston, an eighth-grade student from Sterling Heights, Michigan, who comes to Oakland University to take mathematics courses and to chat with some of our faculty members on a regular basis. Bright students in a trigonometry class or a geometry class should be able to follow it.
If we define sinθ for any acute angle θ as the ratio of the opposite side to the hypotenuse in a right triangle with angle θ, then the area formula for a triangle with sides x and y enclosing an acute angle θ, Area = xy sinθ / 2, follows from the usual half-base-times-height formula, since the height on the side x is y sinθ.
To establish the Pythagorean theorem, we want to prove that a² + b² = c², where a and b are the legs and c is the hypotenuse of a right triangle.
Without loss of generality, we assume that a ≤ b, so that the angle θ used below remains acute.
In the figure on the left, we see that on the one hand, the total area is ab, while on the other hand, the trigonometric area formula applied to the same figure (a triangle with two sides of length c enclosing the angle θ) gives c² sinθ / 2. Hence
ab = c² sinθ / 2.
The figure on the right is formed by reflecting the original triangle about the line through the right-angle vertex, parallel to the hypotenuse. Since each copy of the right triangle clearly occupies half of its half of the large rectangle, the total area is 2ab. On the other hand, if we add the areas of the four triangles in this figure, we see that the area ab (from the two copies of the original right triangle) plus the combined area (a² + b²) sinθ / 2 of the two remaining triangles must also equal 2ab. Subtracting ab from both sides gives
ab = (a² + b²) sinθ / 2.
The Pythagorean formula then follows immediately from the two displayed equations.
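As a numeric sanity check of the two displayed identities, the short Python sketch below evaluates both for a 3-4-5 triangle, reading θ as twice the acute angle opposite the leg a (my reading of the configuration; the original figures are not reproduced here).

```python
import math

a, b = 3.0, 4.0
c = math.hypot(a, b)              # hypotenuse
theta = 2 * math.atan2(a, b)      # assumed: twice the angle opposite side a

print(a * b)                                  # 12.0
print(c**2 * math.sin(theta) / 2)             # 12.0, matching ab = c^2 sin(theta)/2
print((a**2 + b**2) * math.sin(theta) / 2)    # 12.0, matching ab = (a^2 + b^2) sin(theta)/2
```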
As I mentioned in the introduction to the Pythagorean theorem page, E. Loomis expressed an opinion that the Pythagorean theorem does not admit a trigonometric proof. That opinion is no doubt shared by the majority of mathematicians. The reason for this is the fact that the basic trigonometric identity sin²θ + cos²θ = 1 is itself equivalent to the Pythagorean theorem, so that a trigonometric proof relying on it would be circular.
The definitions of trigonometric functions are based on the theory of proportion and similarity. For example, all right triangles with the same angle α are similar and, for similar triangles, the ratios of two corresponding sides are equal; in particular, the ratio of a leg to the hypotenuse is a function of the adjacent (or the opposite) acute angle. For the adjacent angle the function is called cosine, for the opposite angle it is called sine. As long as a proof of the Pythagorean identity contains only the definitions of trigonometric functions, one may reasonably claim that the use of trigonometry is entirely spurious. The functions can be simply replaced by the ratios, leading to a plain algebraic proof. A remark to this effect has been made at the end of Proof #6.
David Houston's proof makes use of trigonometry twice. First, the proof depends on the trigonometric formula for the area of a triangle, Area = xy sinθ / 2.
We might have proceeded as follows.
The theory of similarity and proportion allows us to make an observation: let x, y be the sides of a triangle with the included angle θ. Let h be the length of the altitude to side x. Then the ratio h/y is the function of (depends only on) angle θ. Let's denote this function as F: h/y = F(θ).
The above diagram implies that the area of the triangle equals x·h / 2 = xy·F(θ) / 2, which can play the role of the trigonometric area formula.
David's configuration is obtained by reflecting the above in the lowest orange line. The argument that also employs similarity of triangles, proportions and areas could be extended to David's configuration.
However, David's argument is different and, to boot, the trigonometric formula for the area of a triangle plays such a prominent role in the proof that replacing sine with an auxiliary function F seems (to me at least) a rather artificial device. In hindsight, one was indeed able to do away with trigonometry, thus confirming Loomis' view. On the other hand, there are grounds for reasonable doubt whether anybody without a knowledge of trigonometry would have been able to come up with such a nice and simple proof as David's.
Note: Luc Gheysens came up with a modification of David Houston's proof that leads to a shorter derivation.
- I. M. Gelfand, M. Saul, Trigonometry, Birkhäuser, 2001
- J. Grossman, a reader's letter in Mathematics Teacher, v. 87, n. 1, January 1994, NCTM
- E. S. Loomis, The Pythagorean Proposition, NCTM, 1968
We will discuss here about some of the general properties of quadratic equation.
We know that the general form of a quadratic equation is ax^2 + bx + c = 0, where a is the coefficient of x^2, b is the coefficient of x, c is the constant term and a ≠ 0, since if a = 0, the equation no longer remains quadratic.
When we express any quadratic equation in the form ax^2 + bx + c = 0, the left side of the equation is a quadratic expression.
For example, we can write the quadratic equation x^2 + 3x = 10 as x^2 + 3x – 10 = 0.
Now we will learn how to factorize the above quadratic expression.
x^2 + 3x - 10
= x^2 + 5x - 2x - 10
= x(x + 5) -2 (x + 5)
= (x + 5)(x – 2),
Therefore, x^2 + 3x – 10 = (x + 5)(x – 2) ............ (A)
Note: We know that mn = 0 implies that either (i) m = 0 or n = 0, or (ii) both m = 0 and n = 0. It is not possible that both m and n are non-zero.
From (A) we get,
(x + 5)(x – 2) = 0, then any one of x + 5 and x - 2 must be zero.
So, factorizing the left side of the equation x^2 + 3x – 10 = 0 we get, (x + 5)(x – 2) = 0
Therefore, any one of (x + 5) and (x – 2) must be zero
i.e., x + 5 = 0 ................ (I)
or, x – 2 = 0 .................. (II)
Both of (I) and (II) represent linear equations, which we can solve to get the value of x.
From equation (I), we get x = -5 and from equation (II), we get x = 2.
Therefore the solutions of the equation are x = -5 and x = 2.
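A quick way to verify the two roots found above is to evaluate the quadratic at each of them; the short Python sketch below (not part of the original lesson, and using the quadratic formula only as a cross-check) does exactly that for x^2 + 3x - 10 = 0.

```python
# Cross-check the roots of x^2 + 3x - 10 = 0.
a, b, c = 1, 3, -10

disc = b**2 - 4*a*c                  # discriminant = 49
root1 = (-b + disc**0.5) / (2*a)     # 2.0
root2 = (-b - disc**0.5) / (2*a)     # -5.0

for x in (root1, root2):
    print(x, a*x**2 + b*x + c)       # each evaluates to 0.0
```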
We will solve a quadratic equation in the following way:
(i) First we need to express the given equation in the general form of the quadratic equation ax^2 + bx + c = 0, then
(ii) We need to factorize the left side of the quadratic equation,
(iii) Now express each of the two factor equals to 0 and solve them
(iv)The two solutions are called the roots of the given quadratic equation.
Notes: (i) If b ≠ 0 and c = 0, one root of the quadratic equation is always zero.
For example, in the equation 2x^2 - 7x = 0, there is no constant term. Now factoring the left side of the equation, we get x(2x - 7).
Therefore, x(2x - 7) = 0.
Thus, either x = 0 or 2x – 7 = 0,
i.e., either x = 0 or x = 7/2.
Therefore, the two roots of the equation 2x^2 - 7x = 0 are 0, 7/2.
(ii) If b = 0 and c = 0, both the roots of the quadratic equation will be zero. For example, if 11x^2 = 0, then dividing both sides by 11, we get x^2 = 0, i.e., x = 0, 0 (a repeated root).
Much like the flapping of a windsock displays the quick changes in wind's speed and direction, called turbulence, comet tails can be used as probes of the solar wind - the constant flowing stream of material that leaves the sun in all directions.
According to new studies of a comet tail observed by NASA's Solar and Terrestrial Relations Observatory, or STEREO, the vacuum of interplanetary space is filled with turbulence and swirling vortices similar to gusts of wind on Earth.
This image, captured by NASA's STEREO mission, shows the motion of Comet Encke and its tail as it approached the sun in April 2007. Scientists studied the movements of hundreds of dense chunks of glowing ionized gas within the comet's tail, finding evidence of turbulence that may explain both the solar wind's variability and its unexpectedly high temperatures.
Such turbulence can help explain two of the wind's most curious features: its variable nature and unexpectedly high temperatures. A paper on this work was published in "The Astrophysical Journal" on Oct. 13, 2015.
"The solar wind at Earth is about 70 times hotter than one might expect from the temperature of the solar corona and how much it expands as it crosses the void," said Craig DeForest, a solar physicist at the Southwest Research Institute in Boulder, Colorado, and lead author on the study. "The source of this extra heat has been a mystery of solar wind physics for several decades."
There is much that is conclusively known about the solar wind: It is made of a sea of electrically-charged electrons and ions and also carries the interplanetary magnetic field along for the ride, forging a magnetic connection between the sun and Earth and the other planets in the solar system.
There is no consensus, however, on what powers the wind's acceleration, especially when it is traveling at its fastest speeds. Complicating the search for such understanding are two of its most distinctive characteristics: The solar wind can be highly variable, meaning that measurements just short times or distances apart can yield quite different results. It is also very, very hot--remarkably so.
The new study helped explain these characteristics using the heliospheric imager onboard STEREO. The scientists studied the movements of hundreds of dense chunks of glowing ionized gas within the ribbon of Comet Encke's tail, which passed within STEREO's field of view in 2007. Fluctuations in the solar wind are mirrored in what is seen in the tail, so by tracking these clumps, scientists were able to reconstruct the motion of the solar wind, catching an unprecedented look at the turbulence.
Identifying this turbulence in the solar wind has the potential to solve the mystery of how the solar wind gets so hot. Based on the intensity of the turbulence researchers saw, they calculated that the energy available from turbulence is more than ten times what would be required to heat the solar wind to observed temperatures.
What's more, it also helps to solve the variability problem, which other theories have not yet done successfully.
"This turbulent motion mixes up the solar wind, leading to the rapid variation that we see at Earth," said DeForest.
For years, scientists have taken direct measurements of the solar wind--known as in situ measurements, which are captured as the solar wind passes over one of the dozens of satellites carrying the appropriate instruments. Most of these satellites observe the sun from a vantage point similar to that of Earth. STEREO-A, however, orbits the sun in a slightly smaller and faster orbit than Earth, meaning it moves around the sun farther and farther from Earth over time. So, in addition to the images of Comet Encke as it streamed past in April 2007, STEREO-A also provides us with in situ solar wind measurements from a unique perspective.
On the other hand, the solar wind is notoriously hard to study remotely--that is, with measurements from afar. Its particles flow at 250 miles per second, and they are so dispersed that interplanetary space at Earth's orbit has about a thousand times fewer particles in one cubic inch of space than the best laboratory vacuum on Earth.
This solar wind dominates the space environment within our solar system and travels well past Pluto, creating a huge bubble known as the heliosphere. Closer to home, the solar wind also interacts with Earth's magnetic field, sometimes initiating changes in near-Earth space that can disrupt our space technology or cause auroras. So scientists needed to come up with a way to look at something that's invisible--and that's where Comet Encke came in.
All comets, if they get close enough to the sun, will form what's called an ion tail. One of the most recognizable features of these hunks of ice and rock, the ion tail is created when the solar wind--made of hot, charged gas, called plasma--sweeps over the comet, capturing the material that has been vaporized into plasma by sunlight, causing it to trail out behind the comet. This tail follows the lines of the magnetic field embedded in the solar wind and reveals its motion.
Comet Encke has some unusual characteristics that scientists were able to leverage to study the solar wind. Unlike most comets, Comet Encke has what is called a compact tail. Rather than feathering out loosely, creating a wide spray of ions, Comet Encke's ion tail streams out in a tight, bright ribbon of glowing gas with compact features.
"In situ measurements are limited because they don't follow the turbulence along its path," said William Matthaeus, a professor of physics and astronomy at the University of Delaware and co-author on the study. "Now, for the first time, we observed the turbulent motions along their complex paths and quantified the mixing. We actually see the turbulence."
Using the images from STEREO-A, scientists tracked 230 different features as they weaved through Comet Encke's tail over the course of about 9.3 million miles of its journey around the sun. They then compared these motions to how they would expect solid objects to orbit around the sun, finding evidence that these gas clumps were being picked up by drag against the solar wind. They found that, though the gas clumps moved more or less randomly on smaller scales, they exhibited clear patterns on the scale of about 300,000 miles, indicating large-scale swirling eddies are mixing the solar wind--and possibly heating it as well.
"Turbulent motion cascades down into motion on smaller and smaller scales until it hits the level of the fundamental gyrations of the particles about the magnetic field, where it becomes heat," said Aaron Roberts, a heliophysicist at NASA's Goddard Space Flight Center in Greenbelt, Maryland. "This study estimates that there is enough energy contained in these swirling eddies to explain the extra heat several times over."
These observations of the solar wind provide a preview of what NASA plans to observe more directly with the Solar Probe Plus, or SPP, mission in 2018. SPP will travel to within nine solar radii of the sun, or about 3.9 million miles. Since it's possible to remotely observe comets closer to the sun than any spacecraft can travel, studying them does provide unique information about the solar wind and our sun's atmosphere.
STEREO is the third mission in the NASA Heliophysics Division's Solar Terrestrial Probes program, which is managed by NASA Goddard for NASA's Science Mission Directorate, in Washington.
Sometimes you see the advice to round figures and present them all with the same number of decimals. However, this is not good advice, because it does not take into account that figures need to be presented with a varying number of decimals to reflect their underlying precision. Using the same number of decimals will present some figures as less precise than they are and others as more precise than they are. We have two types of figures to consider in a research project: raw data and calculated data.
Significant figures when presenting raw data
If an observation is exact, such as the number of participants or the number of children in a family, then the exact number should be given. However, a lot of retained raw data are measurements producing a figure that is not exact; it is an estimate. Examples include height, weight, cholesterol levels in blood, etc. Figures that are estimates should be given with a precision that reflects the accuracy of the measurement. Raw data can be presented with many significant figures if you are measuring using a method with very high accuracy. However, you should present the raw data with few significant figures if your method of measuring or estimating is unreliable. This is elegantly explained in the (rather funny) video “Why are Significant Figures Important?” by Tyler DeWitt:
Significant figures when presenting calculated data
Significant figures are also of interest as soon as you present any type of descriptive statistics or inferential statistics. How do I know what number of significant figures is appropriate? The number of significant figures that should be used when presenting research results (such as means, standard deviations, odds ratios, p-values, etc.) is given by the size of the sample you have. As a rule of thumb, the appropriate number of significant figures can be obtained by taking the base-10 logarithm of the sample size and rounding to the nearest integer. The base-10 logarithm for a sample size of 100 is 2, for 1,000 is 3, for 10,000 is 4, for 100,000 is 5, and so forth.
Please note that if your raw data are very unreliable and only valid with one significant figure (unusual) it means that your calculated output should also be presented with only one significant figure. Please also note that a lot of significant figures will make your manuscript more difficult to read and in most situations there is no need to present more than 3 significant figures even if your sample size is large enough to allow more significant figures. Some examples of proper rounding of figures:
| Result of a calculation / analysis | Two significant figures | Three significant figures |
| --- | --- | --- |
| 0.0000021463 | 2.1 × 10^-6 | 2.15 × 10^-6 |
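The rule of thumb for sample size and the rounding shown in the table can be expressed in a few lines of code. The Python sketch below is only an illustration of the procedure described here; the helper names are mine.

```python
import math

def suggested_sig_figs(sample_size):
    """Rule of thumb from the text: round the base-10 logarithm of the sample size."""
    return max(1, round(math.log10(sample_size)))

def round_to_sig_figs(x, figs):
    """Round x to the given number of significant figures."""
    if x == 0:
        return 0.0
    exponent = math.floor(math.log10(abs(x)))
    return round(x, figs - 1 - exponent)

print(suggested_sig_figs(1000))              # 3
print(round_to_sig_figs(0.0000021463, 2))    # 2.1e-06
print(round_to_sig_figs(0.0000021463, 3))    # 2.15e-06
```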
Consequences when writing a manuscript
The above means that you should vary the number of decimals when writing a manuscript. Please see the following examples.
- Percentages presented in table 1 in Nordeman et al 2017 and in table 1 in Tenenbaum et al 2017.
- P-values presented in table 4 in Tenenbaum et al 2017.
- Point estimates and confidence intervals for odds ratios, as well as p-values, in table 3 in Sundvall et al 2014.
Systematic literature reviews
A systematic literature review does not aim simply to copy conclusions from previous authors. It aims to evaluate previous studies and, if possible, to draw new conclusions. It is common that authors of included studies use many more significant figures than their observations support. In that case, do not repeat their mistake: reduce the number of significant figures to reflect what their observations actually support. If, on the other hand, their observations would support more significant figures than they provide, give the significant figures they provide, but do not invent more (unless you have access to the raw data and can calculate them yourself). In many situations there is no reason to provide more than three significant figures, even if the number of observations or the precision of the measurements would support that.
More about proper rounding of figures
Further details about how to round figures are explained in the video “How to Count and Round Significant Figures” by MahanChem:
Ronny Gunnarsson. Significant figures [in Science Network TV]. Available at: https://science-network.tv/significant-figures/. Accessed December 19, 2018.
A Dominion refers to one of a group of autonomous polities that were nominally under British sovereignty, within the British Empire and British Commonwealth, from 1907. They have included (at varying times) Canada, Australia, New Zealand, Newfoundland, the Union of South Africa, and the Irish Free State. Southern Rhodesia and Malta were special cases in the British Empire: although they were never dominions, they were treated as dominions in many respects. After 1948, the term ‘Dominion’ was briefly used to denote independent nations that retained the British monarch as head of state. The term was phased out in the 1950s.
The concept of self-government for some of the colonies was first formulated in Lord Durham's Report on the Affairs of British North America in 1839, which recommended that responsible government (the acceptance by governors of the advice of local ministers) should be granted to Upper Canada (Ontario) and Lower Canada (Quebec). This pattern was subsequently applied to the other Canadian provinces and to the Australian colonies, which attained responsible government by 1859, except for Western Australia (1890). New Zealand obtained responsible government in 1856 and the Cape Colony in 1872, followed by Natal in 1893. In 1880, British Empire countries began to exchange High Commissioners with each other. Each unitary colony or dominion had a Governor, but federations like Canada, Australia and South Africa had a Governor General. A further intermediate form of government, Dominion status, was devised in the late 19th and early 20th century at a series of Colonial Conferences (renamed Imperial Conferences in 1907).
In 1907, all of the self-governing British colonies were restyled as Dominions, a title which previously had only been used by Canada. Canada became a Dominion in 1867, Australia in 1901 (though titled as a Commonwealth), New Zealand and Newfoundland in 1907, the Union of South Africa by 1910 and the Irish Free State in 1922. These self-governing countries were known as Dominions within the British Empire. Their meetings with the British government were the basis for the idea of the future Commonwealth of Nations. Very limited self-government was granted to India in 1919. This was updated in 1935 with a new act which organised the British Indian Empire into a partially self-governing federation, with the plan to achieve full Dominion status for India in the near future. Malta and Southern Rhodesia were granted self-government in the 1920s and were almost Dominions. Dominion status, meaning a self-governing territory within the British Empire, existed from 1907 to 1949.
The delegation for the secession of Western Australia in London with the flag of their proposed new Dominion of Western Australia which would remain loyal to the British Crown.
The new Australian Labor government of Philip Collier sent a delegation to London with the referendum result to petition the British government to effectively overturn the previous Act of Parliament which had allowed for the creation of the Australian Federation. The delegation included the Agent General, Sir Hal Colebatch, Matthew Lewis Moss, James MacCallum Smith, and Keith Watson. At that time, the states of Australia had the right to bypass their Federal government and appeal directly to the British Parliament.
The United Kingdom House of Commons established a select committee to consider the issue but after 18 months of negotiations and lobbying, it finally refused to consider the matter, further declaring that it could not legally grant secession. The delegation returned home empty-handed. As a consequence of the failure of negotiations and of the economic revival, the Secession League gradually lost support and by 1938 had ceased to exist.
The establishment of Imperial Airways occurred in the context of British hopes of prolonging and modernizing maritime empire by using a new transport technology that would facilitate settlement, colonial government and trade. The launch of the airline followed a burst of air route surveying in the British Empire after the First World War, and some experimental (and sometimes dangerous) long-distance flying to the margins of Empire. Following the advice of the government's Hambling Committee in 1923 that the main existing aircraft companies should be merged to create a company strong enough to develop Britain's external air services, and with the offer of a £1m subsidy over ten years if they merged, Imperial Airways Limited was formed in March 1924 from the British Marine Air Navigation Company Ltd (three flying boats), the Daimler Airway (five aircraft), Handley Page Transport Ltd (three aircraft) and the Instone Air Line Ltd (two aircraft). The land operations were based at Croydon Airport to the south of London. IAL immediately discontinued its predecessors' service to points north of London, the airline not being interested in serving what they regarded as the 'provinces'. The first commercial flight was in April 1924, when a daily London-Paris service was opened. Additional services to other European destinations were started throughout the summer. The first new airliner was commissioned by Imperial Airways in November 1924. In the first year of operation the company carried 11,395 passengers and 212,380 letters. In April 1925, The Lost World (a recent blockbuster film) was shown to the passengers on the London-Paris route. This was the first time that a film had been screened for passengers on a plane. The extension of service to the British Empire (Empire Services) was not begun until 1927 when, with the addition of six new aircraft, a service was instituted from Cairo to Basra, but the first service from London to Karachi did not start until 1929, using newly purchased Short S.8 Calcutta flying boats; even then the passengers were transported by train from Paris to the Mediterranean, where the Short flying boats were based. In February 1931 a weekly service between London and Tanganyika was started as part of the proposed route to Cape Town, and in April an experimental London-Australia air mail flight took place; the mail was transferred in the Dutch East Indies, and took 26 days in total to reach Sydney.
The purchase of eight Handley Page HP.42 four-engined airliners boosted the range of services; in 1932 the service to Africa was extended to Cape Town. Typically, services were inaugurated with considerable ceremony and publicity. In Australia in 1934 Imperial and Qantas (Queensland and Northern Territory Aerial Services Ltd) formed Qantas Empire Airways Limited to extend services in Southeast Asia. But it was not until 1937, with the Short Empire flying boats, that Imperial could offer a true through service from Southampton to the Empire. The journey to the Cape consisted of flights via Marseille, Rome, Brindisi, Athens, Alexandria, Khartoum, Port Bell, Kisumu and onwards by land-based craft to Nairobi, Mbeya and eventually Cape Town. Survey flights were also made across the Atlantic and to New Zealand. By mid-1937 Imperial had completed its thousandth service to the Empire. In 1934 the Government began negotiations with Imperial Airways to establish a service to carry mail by air on routes served by the airline. Indirectly these negotiations led to the dismissal of Sir Christopher Bullock, the Permanent Secretary of the Air Ministry, who was found by a board of enquiry to have abused his position in seeking a position on the board of the company while these negotiations were in train. The Empire Air Mail Programme began in July 1937, delivering mail anywhere on its routes for 1½ d./oz.
By mid-1938 a hundred tons of mail had been delivered to India and a similar amount to Africa. In the same year, construction was started on the Empire Terminal in Victoria, London, designed by A. Lakeman and with a statue by Eric Broadbent, Speed Wings Over the World, gracing the portal above the main entrance. The terminal provided train connections to the flying boats at Southampton and to the since-closed Croydon Airport, and remained in use until 1980. To help promote use of the Air Mail service, in June and July 1939 Imperial Airways participated with Pan American Airways in providing a special "around the world" service, with Imperial carrying the souvenir mail eastbound over the Foynes, Ireland, to Hong Kong portion of the New York to New York route. Pan American provided service from New York (departing on June 24) to Foynes (via the first flight of Northern FAM 18) and from Hong Kong to San Francisco (via FAM 14), while United Airlines carried it on the final leg from San Francisco to New York, where it arrived on July 28. In 1937, Captain H.W.C. Alger piloted the inaugural air mail flight from England to Australia on the Castor for Imperial Airways' Empire Air Routes. Compared to other operators (Air France, KLM, Lufthansa), Imperial was lagging behind in Europe, and it was suggested that all European operations be handed over to its competitor British Airways Ltd (founded in 1935), which had more modern aircraft and better organisation. However, in November 1939 both Imperial and British Airways Ltd were merged into a new state-owned national carrier: British Overseas Airways Corporation (BOAC). The new carrier adopted the Imperial Speedbird logo, which has evolved into the present British Airways Speedmarque, and the term "Speedbird" continues to be used as BA's call sign.
Imperial Airways poster
The British Empire Games began in Hamilton, Ontario, Canada in 1930 with teams from Australia, Bermuda, British Guiana, Canada, England, Newfoundland, New Zealand, Northern Ireland, Scotland, the Union of South Africa and Wales. Since these games were only for the British parts of the world, the United Kingdom was represented by its four constituent countries on separate teams. However, they came together as a single Great Britain team in the Olympics. In 1930, events included track and field, bowling, boxing, rowing, swimming and wrestling. The games were held every four years, in 1934, 1938 and, after a break for the Second World War, in 1950. The Games were restyled as the British Empire and Commonwealth Games in 1954. They continue today as the Commonwealth Games.
Detailed information about the first British Empire Games in 1930 in Hamilton, Ontario, Canada
Detailed information about the second British Empire Games in 1934 in London, England, UK
Detailed information about the third British Empire Games in 1938 in Sydney, New South Wales, Australia
Detailed information about the fourth British Empire Games in 1950 in Auckland, New Zealand
Union of South Africa 1910
Packages of food destined for London, UK during the 'Food for the people of Britain Campaign' in Toronto, Ontario, Canada 1947.
The Quebec Conference in Quebec City, Canada in 1943 among British and American leaders to coordinate war plans. From left to right: Prime Minister Mackenzie King of Canada, the Governor General of Canada the Earl of Athlone, United States President Franklin D. Roosevelt, United States First Lady Eleanor Roosevelt and Prime Minister Winston Churchill of the United Kingdom.
At the end of the war in 1945, all imperial territories lost to enemy powers were retaken and restored to the British Empire. Also in 1945, the British Empire expanded to its widest extent as Britain took over the administration of Italy's possessions in Africa, including Eritrea, Italian Somaliland (Somalia), part of Libya, the Ogaden region of Ethiopia, and the Dodecanese Islands in the Mediterranean. A British occupation zone was set up in defeated Germany and Austria, alongside French, American and Soviet zones, until 1949. The British planned to make the Dodecanese into a self-governing territory under the British Crown, but they were transferred to Greece in 1947. Britain also briefly administered Madagascar, Syria, Sicily, the Dutch East Indies and French Indochina. Iraq, Egypt, Ethiopia (Italian East Africa after 1936) and southern Iran had also been re-occupied by the British during the war. Egypt, as a fully independent state, had officially remained neutral and did not declare war on Germany and Japan until 24 February 1945. The British removed a pro-German government in Iraq and occupied the country according to the terms of a 1930 treaty which guaranteed an alliance between Britain and Iraq. Iraq officially declared war on Germany and Japan in January 1943. The British occupation of the country lasted until 26 October 1947. Britain planned to merge the Ogaden region of Ethiopia with British and Italian Somaliland into a grand Somalia Protectorate. However, this plan was later abandoned and the Ogaden was returned to Ethiopia in 1948.
The partition of Germany among the Allies at the end of the Second World War in 1945. The British administered the northwest and part of Berlin. West Germany became an independent republic again in 1949.
In 1945, after the end of the war, a general election in Britain swept Winston Churchill, a staunch imperialist who opposed Indian independence, out of power. He was replaced by Labour Prime Minister Clement Attlee, who was more sympathetic to nationalist demands and to creating a British Commonwealth of Nations of completely independent members. Winston Churchill was to return to power in the early 1950's. After 1945, there was no doubt that the Dominions were nations in their own right. This was recognised by the British Parliament in 1947, and in the case of Canada, King George VI transferred some of his powers to the Canadian Governor General. Australia and New Zealand finally ratified the Statute of Westminster, and the Union of South Africa elected a Nationalist government in 1948 which desired a republic. The term 'Dominion' would be abandoned soon after the Second World War, as British Commonwealth countries preferred to be referred to as nations. Indian independence, a key demand of Indian leaders in return for participation in the war, was only a matter of time. The fall of the world's greatest Empire was imminent. A new Commonwealth of Nations was about to take shape.
In 1946, following the founding of the United Nations organisation to replace the old League of Nations, a new Trusteeship Council was set up for the former League of Nations mandated territories. All of the mandated territories of the British Empire, such as Tanganyika, the Cameroons, Togoland, New Guinea, Nauru and Western Samoa, were placed under it along with mandates belonging to other empires. The British and Dominion administration of these territories continued under the supervision of the new Trusteeship Council until these territories eventually gained independence later in the century. The Union of South Africa, however, incorporated Southwest Africa into its own territory; this was not recognised by the Trusteeship Council and remained a disputed point for many years. Independence was proposed for the mandated territory of Palestine, which was to contain a new Jewish state.
The Korean War of the early 1950's was the last conflict in which Commonwealth forces fought together as one Imperial unit, as they had done in the Boer War and the First and Second World Wars. Afterwards, they went their separate ways, though some joined international alliances such as NATO.
Following the Second World War, Britain's economy was devastated and took well into the 1950's to recover. Food rationing continued after the war until it was finally phased out by 1954. Due to a particularly harsh winter in 1947, food rations in Britain were cut during that year. The overseas Dominions showed their loyalty to Britain by coming to Britain's aid with generous donations of food.
A British Empire flag designed in 1937 showing the coats of arms of the Dominions
All citizens of the British Empire were British Subjects if they lived in the United Kingdom, the Dominions, the Colonies or British India, or were British Protected Persons if they lived in Protectorates, Protected States, Indian Princely States or Mandates. This common Imperial citizenship lasted until 1948, after which individual national citizenships began to appear.
Movement around the Empire was easy, and British people tended to emigrate to Canada, Australia, New Zealand, the Union of South Africa, Kenya and Southern Rhodesia, needing only a passport and a ticket for travel. They could get a job and buy a home when they got there. Many British people also went to India, particularly to serve in the vast administration of the Raj. People in the United Kingdom and in all of the Dominions, colonies, protectorates and mandates had British passports which indicated whether they were British Subjects or British Protected Persons. These were all standardised dark blue passports, with the front cover showing the British or Dominion coat of arms, the title 'British Passport' at the top and the name of the country in which it was issued below the coat of arms. For example, the cover would read BRITISH PASSPORT - UNITED KINGDOM OF GREAT BRITAIN AND NORTHERN IRELAND, BRITISH PASSPORT - AUSTRALIA, BRITISH PASSPORT - INDIAN EMPIRE, etc. One of these passports gave any British Subject or British Protected Person anywhere in the Empire instant access to their one quarter of the world, as it was clearly stated on the first inside page that the passport was good for free travel anywhere in the British Empire. This also lasted until 1948. The details inside the passport were written by hand.
Imperial Airways began the first long-distance flights around the British Empire in the mid-1930's. The self-governing Dominions had the best of both worlds - they were autonomous, but they kept all the links that the Empire offered. Many people from the West Indies and Africa moved to Britain, particularly after the Second World War, to seek better employment.
Egypt had been officially independent since 28 February 1922. A new Anglo-Egyptian Treaty of 1936 withdrew British troops from the country to only the Suez Canal Zone, with a plan to eventually withdraw them altogether within twenty years. Egyptian independence was strengthened, and the British High Commissioner in Cairo was retitled as the British Ambassador to emphasise Egypt's independence. A boundary change in 1934 transferred a small section of the Anglo-Egyptian Sudan, known as the Sarra Triangle, to Libya. The Sarra Triangle was considered by the British to be a worthless desert area, so it was transferred to Italy to become part of Libya as an appeasement to the Mussolini government in Rome, which was attempting to expand its empire. Italy, which ruled Libya, Eritrea and Italian Somaliland, later conquered Abyssinia (Ethiopia) in 1936. The British mandate over Iraq was terminated in 1932 and it was recognised as an independent kingdom in an alliance treaty with Britain.
Economically, the Empire was united. The Empire Marketing Board existed from 1926 to 1933 to promote inter-Empire trade. A British Empire Conference in Ottawa, Canada in 1932 established Empire preferential trade, in which preference was given to goods traded between Empire countries at lower tariffs than for other countries. British trade with Empire countries vastly increased after this. Empire countries, and the former Empire countries of Egypt and Iraq, formed the Sterling Area, a bloc made up of countries that used British Pounds Sterling as their currency or as the base for their currency. Canada (which had used a dollar since 1850) and Newfoundland did not belong to the Sterling Area since their currencies were pegged to the U.S. Dollar.
King George V and British Prime Minister Stanley Baldwin with the Dominion Prime Ministers of Newfoundland, New Zealand, Australia, the Union of South Africa, the Irish Free State and Canada at the Imperial Conference of 1926. Southern Rhodesia would be represented at the next Imperial Conference in 1930. India and Burma would send observers in 1937.
* This does not include all of the British colonies, protectorates and mandates in Africa, the Caribbean, the Mediterranean, the Middle East and the Far East.
The British Empire possessed further resources for war: Canada and Australia had significant industries, and their populations, like those of New Zealand and white South Africa, were well-educated and physically and mentally capable of providing high-quality recruits. These four self-governing Dominions followed the British lead and declared war in 1939.
Dominion Status was very inexactly defined until the Statute of Westminster in 1931 established it as complete self-government within the British Empire, as recommended in the Balfour Report of 1926. The Statute established a new sovereign Dominion Status in the Empire which, it was hoped, would eventually satisfy the demand for self-government in other parts of the Empire as well, such as India. Southern Rhodesia and Malta had gained autonomy in the 1920's and were considered de facto British Dominions, but the Statute of Westminster did not yet apply to them, so they continued with the pre-Statute British constitutional status. It was expected that they would soon become fully autonomous Dominions like the others, and they even participated in the Imperial Conferences.
By an Act of the United Kingdom Parliament, the Statute of Westminster took effect immediately in Canada, the Union of South Africa and the Irish Free State, making them the first sovereign British Dominions. Australia, New Zealand and Newfoundland would remain colonial Dominions until their parliaments passed resolutions adopting the Statute, as their constitutions required. Australia did not adopt the Statute until 1942 (though its adoption was backdated to 3 September 1939, the start of the Second World War) and New Zealand did not adopt it until 1947. Newfoundland never adopted the Statute, since on 16 February 1934 it reverted to full colony status governed by British Commissioners due to financial difficulties, and eventually joined Canada as its tenth province in 1949. The Canadian government requested that the British North America Act, acting as Canada's constitution, remain in the possession of the British government since Canadian politicians could not agree on an amending formula. The Australian States still had the right of appeal to the British Government, and New Zealand's constitution remained in British hands. The Statute had granted legislative independence, but not constitutional independence, which came much later.
The Dominions even gained the right to secede from the Empire, a right which Ireland soon exercised. The Union of South Africa contented itself for the time being with its own national flag, but it too would ultimately secede thirty years later. Canada, Australia and New Zealand remain under the Crown today by their own choice. The Statute of Westminster provided that all new Dominions created in the future would be fully sovereign. Discussions had already begun on granting this status to India, which was not to happen until after the Second World War.
Despite this new sovereign status, the Dominions were still firmly parts of the British Empire, remaining constitutionally bound to the United Kingdom. Governors General often moved around the Empire. Lord Willingdon, for example, was Governor General of Canada in 1926 and then Viceroy of India in 1931. The Earl of Athlone had been Governor General of the Union of South Africa since 1924 and later became Governor General of Canada in 1940. Members of the British Royal Family continued to serve as Governors General in the Dominions until well after the Second World War.
On 11 December 1931, the United Kingdom and the now completely legislatively self-governing Dominions of the British Empire (the Dominion of Canada, the Union of South Africa and the Irish Free State) formed the British Commonwealth. The Commonwealth of Australia and the Dominion of New Zealand became parts of it in 1942 and 1947 respectively, when they adopted the Statute of Westminster. The British Commonwealth was the collective name for the now completely autonomous Dominions of the British Empire, united by a common allegiance to the Crown. The United Kingdom would only act on their behalf with their consent, and they made their own declarations of war at the outbreak of the Second World War in 1939. They could even negotiate treaties with foreign countries with no British involvement.
In 1935, a large measure of self-government was granted to India with an elected central parliament, but ultimate executive power still remained with the British-appointed Viceroy. There was an expectation that India would soon gain full Dominion Status within the British Commonwealth, though many Indian nationalists preferred full independence outside the British Empire. India, Southern Rhodesia and Malta continued to be Dominions in the pre-Statute of Westminster sense with self-government, but with the final power still resting with Britain. In 1937, Burma was separated from the British Indian Empire and made into a separate British colony. It was expected to eventually gain Dominion status along with India. Northern Ireland was almost a Dominion with a full parliament of its own, but continued to send members to the British Imperial Parliament at Westminster as a part of the United Kingdom. In 1938, a West Indies Royal Commission recommended the establishment of a federation of the British West Indies as a new self-governing Dominion in the British Commonwealth similar to Canada. This did not happen for another twenty years and then only briefly, ending in failure due to disagreement amongst the constituent islands.
Union of South Africa
Empire troops from the United Kingdom, Canada, Australia, South Africa, India and other colonies served loyally in the Boer War (1899–1902), the First World War (1914–1918) and the Second World War (1939–1945). In the Boer War and the First World War, the Dominions were automatically at war when Britain went to war. However, after the passage of the Statute of Westminster in 1931, the Dominions could choose to serve or to remain out of Britain's wars. British wartime Prime Minister Winston Churchill, who led the British Empire during the Second World War, consulted the Dominions on the war effort. In the Second World War, the self-governing Dominions came loyally to Britain's side immediately when war broke out. Australia and New Zealand declared war on the same day as Britain – September 3, 1939. A bitterly divided South African parliament also declared war on this day. The Canadian parliament took one week to debate and approve the declaration of war, which was issued for Canada on September 10, 1939. However, Ireland, which had declared itself a de facto republic in 1937, remained neutral. India, not yet fully self-governing, was automatically at war when Britain went to war, much to the anger of Indian nationalists who were demanding independence. Many Indians fought loyally with the British, and others helped the Japanese.
During 1939–1945 the British Empire joined together in war for the final time. More than 8.7 million people from the colonies and Dominions rallied round the Union Flag. More than 450,000 were killed. Churchill in 1940 proclaimed: "Without victory there is no survival. Let that be realised. No survival for the British Empire. No survival for all that the British Empire has stood for."
Citizens of the new British Commonwealth retained the common British Subject status. The Governor General of each Dominion would now represent the Crown and not the British Government. New Zealand had had a Governor General since 1917, but Newfoundland still had a Governor. The King continued to hold a common Imperial title throughout all parts of the British Empire: 'King of Great Britain and Ireland and the British Dominions Beyond the Seas and Emperor of India'. Individual citizenships of the Dominions were not to be created until after the Second World War.
Despite the passing of the Statute of Westminster and the granting of full self-government to the Dominions, complete constitutional independence was still not achieved. The Statute did not immediately provide for any changes to the legislation establishing the constitutions of the Dominions. This meant, for example, that many constitutional changes continued to require the intervention of the British Parliament, although only at the request and with the consent of the Dominions. The constitutional powers of the British Parliament over the Dominions were not removed in Canada, Australia and New Zealand until the 1980's, in the Union of South Africa until it left the Commonwealth in 1961, and in the Irish Free State until it adopted its own quasi-republican constitution in 1937.
Under the provisions of section 9 of the Statute, the British Parliament still had the power to pass legislation regarding the Australian states, although "in accordance with the [existing] constitutional practice". In practice, these powers were not exercised. For example, in a referendum held in April 1933 in Western Australia, 68% of voters voted for the state to leave the Commonwealth of Australia with the aim of becoming a separate Dominion within the British Empire. The state government sent a delegation to Westminster to have the result enacted, but the British Parliament refused to intervene on the grounds that it was a matter for the Commonwealth of Australia. As a result, no action was taken.
So in a legal sense, the Dominions remained colonies long after the Statute of Westminster was passed, gaining full independence only once the power of the British Parliament to legislate for them was completely removed. They were completely self-governing in all other matters and were no longer automatically at war when Britain went to war. The Dominions made their own declarations of war at the start of the Second World War in 1939. The Statute of Westminster did provide that any new Dominions created in the future would be fully self-governing. This was encouraging to Indian nationalists, who were seeking full independence by the 1930's.
Pounds Sterling circulated in the British Empire, but in some parts, they were used alongside local currencies such as the Indian Rupee. For example, the gold sovereign was legal tender in Canada despite the use of the Canadian dollar. Several colonies and dominions adopted the Pound as their own currency. These included the Australian, British West African, Cypriot, Fijian, Jamaican, New Zealand, South African and Southern Rhodesian Pounds. Some of these Pounds retained parity with Sterling throughout their existence (e.g. the South African Pound), whilst others deviated from parity in later years (e.g. the Australian Pound). These currencies and others tied to Sterling constituted the Sterling Area.
The Sterling Area arrangement lasted until 1967 and was phased out by 1972, when most of its members had either quit or pegged to the U.S. Dollar and the United Kingdom was negotiating to enter the European Common Market.
In population and in industrial capacity, the Allies, even after losing France, were stronger than the Axis powers.
Second World War poster showing Imperial unity in the war
At the start of the war, many thought the empire was finished. But the Dominions especially had other ideas. The Australian Prime Minister, Robert Menzies, announced: "We are in this most holy war with you; everything that we have of manpower or treasure or skill or determination is pledged to work and fight for you and with you until victory is attained ... One King, one Flag, One Cause."
The New Zealand Prime Minister Michael Savage asked the governor general for a formal declaration of war before proclaiming: 'Where she goes, we go; where she stands, we stand'.
The Canadians contributed nearly 500,000 personnel, and their first contingents arrived in Britain by December 1939. The Australians raised more than half a million men and women - 27,000 of them were killed. Two divisions of New Zealanders served in the Pacific and the Middle East. The South Africans, who at first stayed in their own continent, later fought through Italy. Tens of thousands of colonials went through aircrew training - much of it in Canada. Of the more than 30,000 merchant sailors who perished during the Battle of the Atlantic, 5,000 were from the colonies.
In Africa, as many as 200,000 became miners, carriers and labourers to harvest the natural resources needed to manufacture weapons and feed those who would use them. Ghana produced industrial diamonds and manganese for guns. Nigeria produced timber, palm oil, groundnuts, rubber and tin. Sierra Leone raised war funds for Britain "in grateful recognition of the great benefits which Sierra Leone has received during the past 135 years under the British flag." The ruler of Benin gave £10 a month out of his salary.
The Dominions, including Australia, had the option of joining the war or not. Not so the Indians. The viceroy, Lord Linlithgow, declared war without consulting any of the major political or cultural figures. They were treated just as they had been at the start of the First World War. The Indian Congress refused to participate in government. But Gandhi told Linlithgow that he viewed the war with an English heart. Nehru said he was offended by the viceroy's proclamation but not its sentiment, and 2.25 million Indian Army soldiers were committed to the war.
The Second World War briefly revived British imperialism. But the Attlee government that followed Churchill's coalition in 1945 knew the end of empire was in sight. By 1947, even Churchill could see that.
The Dominions and India had had their own armed forces since the early 1900's. They were all modelled on the British armed forces, flew British ensigns and served alongside British forces as one Imperial fighting force in both world wars. Even though they flew British flags, the Dominions began to use their own markings on their ships and fighter planes after 1940.
A banner from Australia in the 1930's promoting inter-Empire trade with Canada
Historical Atlas of the British Empire
Union of South Africa 1928
The First World War had enhanced a sense of nationhood among the British Dominions, and they no longer wanted to be regarded as mere colonies. This was particularly pushed by the nationalistic Union of South Africa and Irish Free State. They were mostly self-governing and wanted a new status in the British Empire which would give them a large measure of independence, allow them to be consulted on imperial affairs, and even allow them to opt out of decisions they did not agree with. This led to the setting up of an inter-imperial affairs committee and the Balfour Declaration of 1926, which stated that the Dominions were "autonomous Communities within the British Empire, equal in status, in no way subordinate one to another in any aspect of their domestic or external affairs, though united by a common allegiance to the Crown, and freely associated as members of the British Commonwealth of Nations." By 1936, the flags of the Governors General were changed from being based on the Union Jack to the royal crest on a blue flag. In 1928, the Union of South Africa adopted a new tricolour national flag containing the Union Jack and the flags of the old Boer Republics. This flag was flown equally alongside the Union Jack. South Africa's previous British-style ensign continued in maritime use until after the Second World War.
The British Union Jack was the national flag of the entire British Empire, including all of the Dominions, until well after the Second World War. The Union Jack was the official flag in India until 1947, in Australia until 1953, in New Zealand until 1956, in Newfoundland until 1980 (well after it joined Canada in 1949), in Ceylon until 1956, in the Union of South Africa until 1958 (alongside its own flag for 30 years), and in Canada until 1965. The Irish Free State, however, dropped the Union Jack in the 1920's. All British territories also had a colonial ensign, which was either red or blue with the Union Jack in the upper left corner (the canton) and their own badge or emblem on the fly. These ensigns were used at sea and at international gatherings. As the Dominions became more autonomous, these ensigns evolved into their national flags. Australia adopted its blue and red ensign flags, with the white stars of the Southern Cross and the Australian Commonwealth star, for unofficial use in 1901, while the Union Jack remained the official flag. New Zealand adopted its blue ensign with the red-bordered white stars of the Southern Cross in 1902 as a national flag which flew alongside the Union Jack. Australia's red ensign was commonly used until 1953, when the blue ensign was declared to be the official national flag. The Australian and New Zealand ensigns are still in use today. Newfoundland introduced a red ensign with its coat of arms on the fly in 1904, but adopted the Union Jack as its national flag in 1931. The Union Jack remained Newfoundland's provincial flag after it joined Canada in 1949 until 1980. The Union of South Africa, which had used a red ensign with its coat of arms since 1910, formally adopted two flags in 1928: its own distinctive orange, white and blue horizontal tricolour national flag containing a small Union Jack and two small Boer flags in the centre, and the Union Jack itself, which would continue to be flown to show loyalty to the Empire. The South African red ensign continued to be used at sea until 1951. Canada, however, adopted the British Union Jack as its national flag in 1904 and only used a red ensign with its coat of arms at sea and at international gatherings. Calls for a distinctive Canadian flag to be used on land, probably an ensign containing the Union Jack for loyalty to the Empire and a maple leaf for Canada, began in 1925 and were debated again in 1938 and 1946, when the red ensign with the coat of arms was authorised for use as a de facto national flag for Canada, while the Union Jack remained the official flag. Disagreement between English-speaking Canadians, who wanted the Union Jack, and more nationalistic French-speaking Canadians kept the issue unresolved for many more years. The nationalistic Irish Free State used a distinctive tricolour as its national flag, which was adopted during the Irish Revolution of 1919 and is still in use by the Irish Republic today. India used a red ensign with the Star of India on the fly; however, in 1931, Indian nationalists began to use an orange, white and green horizontal tricolour flag for their movement, which evolved into the national flag after independence in 1947. British territories generally dropped the Union Jack when they gained independence after the Second World War, except for a few which continued to use ensigns with the Union Jack in the canton.
Imperial Conferences continued with the British Prime Minister and Dominion Prime Ministers from Canada, Australia, New Zealand, the Union of South Africa, the Irish Free State and Newfoundland. The Imperial Conferences of 1926 and 1930 adopted the Balfour Report with the recommendations which were enacted in the Statute of Westminster of 1931. Southern Rhodesia was represented at the 1930 and 1937 Imperial Conferences. The last Imperial Conference before the Second World War was held in 1937 for the Coronation of King George VI. India and Burma were represented at the 1937 Imperial Conference, but the Irish Free State (Eire) was absent (it had declared itself to be a pseudo-republic in that year). Imperial Conferences were renamed as Commonwealth Prime Ministers' Conferences in 1944.
Royal tours of the Empire increased over the years. King George V travelled to India in 1911 after his coronation for the one and only Delhi Durbar for his investiture as Emperor of India. He was the only reigning monarch to visit India. Plans for a Durbar in India for King George VI in 1937 were cancelled due to the growing nationalist political situation demanding independence for India. The Prince of Wales carried out a tour of the Empire in 1919-1920. To show the new status of the Dominions as autonomous communities, Royal visits of the reigning monarch to the Dominions began as the Second World War approached. The monarch would spend some time in the Dominions to show that they now had a more important function to play in international relations. In 1939, King George VI became the first reigning monarch to tour an overseas Dominion with his visit to Canada. He was also the first reigning monarch to tour the Union of South Africa in 1947. Queen Elizabeth II was the first reigning monarch to tour Australia, New Zealand and Ceylon in 1954.
James MacCallum Smith, the proprietor of the local weekly newspaper, The Sunday Times, started publishing pro-secessionist articles in 1907 under its editor Alfred Chandler. Smith was a committed secessionist and continued to agitate until the mid-1930s, when a syndicate of mainly nationalists purchased the paper's parent company. In 1926, Smith and others established the Secession League to provide a public vehicle for advancing the secession cause. Prior to the Great Depression in 1930, the State's major export had been wheat. However, with the depression, wheat prices plummeted and unemployment in Perth reached 30%, creating economic havoc. Also in 1930, Keith Watson founded the Dominion League, which advocated secession and the creation of a separate Dominion of Western Australia. The league held numerous rallies and public meetings which tapped into the general discontent brought on by the depression.
To counter the pro-secession movement, a Federal League of Western Australia was formed which organised a 'No' campaign. It brought several high-profile people to Western Australia, including the Prime Minister Joseph Lyons, Senator George Pearce, and former Prime Minister Billy Hughes, for a brief speaking tour of Perth, Fremantle and country centres, but they often received hostile receptions. The Federalists argued for a constitutional convention to examine the state's grievances but were unable to counter the grassroots campaign of the Dominion League. The question of holding a constitutional convention was the second question asked in the referendum.
On 8 April 1933, Nationalist Premier Sir James Mitchell's government held a referendum on secession alongside the State parliamentary election. The Nationalists had campaigned in favour of secession while the Labor party had campaigned against breaking from the Federation. 68% of the 237,198 voters voted in favour of secession, but at the same time the Nationalists were voted out of office. Only the mining areas, populated by keen Federalists, voted against the move.
It is often said that the British Empire peaked in the 1920s, following World War One (1914-18), in which it gained most of the German territories in Africa and Ottoman provinces in the Near East as League of Nations mandates. After the passage of the Statute of Westminster in 1931, the British monarch remained (and still remains, except for South Africa) the monarch of the Dominions, represented by British Governors General, and their citizens remained British Subjects until at least 1947, so the Dominions continued to be counted as parts of the British Empire. World War Two (1939-45) showed that they were indeed parts of the Empire: in 1939 the Australian prime minister informed his country that Britain had declared war on Germany and that "as a result Australia is also at war", and in 1940 millions of pounds of gold were shipped to Canada in preparation for a possible relocation of the British royal family. By this reckoning, the Empire reached its greatest extent following that war, in 1945, when most of the Italian territories in Africa (Libya, Eritrea and Somaliland) were occupied by Britain, as were all of Northwest Germany and parts of Austria and Berlin.
The Celts were one of the major European ethnic groups, and were called barbarians by the Romans, together with the Germanic peoples and Slavs. The Celts spread throughout western Europe. In ancient times, the Irish, Scottish, Welsh, Cornish and Bretons were all Celts. The modern representatives of the Celts are the Irish, Scottish and Welsh. Many of them have made great achievements in academic and scientific fields as well as in arts and crafts.
The appearance of Celts
The origin of the name "Celt": "Celt" was the Latin name given by Caesar to this ethnic group. As described in Caesar's war diaries, the most typical physical feature of the Celts was their red hair. Today, in Scotland and Ireland, where there are a large number of Celtic descendants, 8% of local people have red hair.
The life of Celts
The Celts lived in large families or tribes. They kept expanding their living space so as to expand the tribe. Usually, the whole tribe was ruled by a Celtic knight or a tribal chief. One third of the tribal population was the privileged stratum, referred to as "the men of arts". Among these privileged people, the Druids and troubadours are best known to modern people for praising Celtic warriors with their poems and handicrafts. The whole of Celtic society was built on complex kinship relationships and obligations. The aristocratic stratum had to accumulate wealth and improve its reputation by making contributions in agriculture and trade and through victories in war. It then used that wealth to found or invest in its own families or tribes.
Celtic boys could join battles when they turned fourteen, while girls were allowed to get married and have children at the same age. A young nobleman or the descendant of a free man was also allowed to become a retainer in the home of a feudal lord or Celtic knight from his fourteenth birthday. Such retainers were called "Fenians". By following experienced warriors, these young men would have more chances to win wealth and honor for themselves.
The Celts may have been the first ethnic group in human history to promote gender equality and accept different sexual orientations, as a Celtic woman could not only become queen, but could also become a religious leader. In later European countries not following the Lex Salica, a woman could inherit the throne, but would never be allowed into the religious domain (of the Abrahamic religions such as Judaism, Christianity and Islam). The native religion of the Celtic people (Druidism) traditionally accepted homosexuality, and a Celtic woman could be a religious leader in Druidism.
The migration of Celts
The Celts migrated on a large scale to almost everywhere in Europe. They traded with the Greeks and fought wars with the Romans; they also climbed over the Alps in droves and brought ironwares to other areas of Europe.
Around the first century BC, the ancient Greek geographer Strabo described the Celts thus: "Their whole people are madly fond of war. They go to battle bravely and quickly, and on whatever pretext you provoke them, you will be in danger. They always have strength and courage, even when they do not have any weapons." What we know about Celtic culture today is based on the writers and geographers of that time, as well as on relics of Celtic burial ceremonies in Bavaria, Bohemia and northern Austria. The Celts founded a loosely organized empire whose territory included central Europe, but their territory was never fixed, as they often migrated. Archaeologists today have discovered imprints of Celtic culture in a large area from the British Isles and southern Spain in the west to Transylvania and the Black Sea in the east.
The history of Celts
Over their long history, the Celts' sphere of activity greatly expanded, but later it gradually diminished.
The Seine basin, the upper Loire in eastern France, and the upper reaches of the Rhine and Danube were the birthplace of the Celts. They first appeared in these areas around the early 10th century BC. In the following centuries, the Celts spread and migrated to surrounding regions in armed tribal unions. They were the first ethnic group in Europe to learn to make and use ironwares as well as golden ornaments. With iron weapons, they defeated other Bronze Age tribes and settled down in eastern and central France as early as the 7th century BC.
They began their infiltration and expansion across Europe from the 5th century BC.
From around 500 BC, the Celts invaded and conquered the British Isles from the European continent. Some Celts settled in Ireland and Scotland, while others conquered southern and eastern England. The Celts spoke the Celtic languages. Today, Gaels (Scottish Highlanders) in the northern and western Scottish Highlands still use this language. Before the formation of the English language, Celtic was the earliest language in the British Isles for which historical materials survive. Almost at the same time as the Celts invaded the British Isles, some Celts crossed the Rhine, entered northeastern France, and settled in the area north of the Seine and to the west and south of the Ardennes.
By around 500 BC, France had become the main area of Celtic habitation. The Romans called the Celts living in France, Belgium, Switzerland, the Netherlands, southern Germany and northern Italy the Gauls, and referred to the region they lived in, approximately six hundred thousand square kilometers, as Gaul. Later, the Celts spread all over the European continent, and conquered France, Spain, Portugal and Italy.
In 387 BC and 279 BC, the Celts invaded and plundered Rome and Greece. Some Celtic tribes even reached the Anatolia region of Turkey. At their peak, the Celts controlled a vast territory from Portugal to the Black Sea and were almost as strong as the later Roman Empire. However, they ultimately failed to found a unified nation. With the rise of the Roman Empire, the Celtic culture began to decline. Facing the highly disciplined and tactically advanced Roman troops, the tall and brave Celts were no match for them. But before the rise of the Roman Empire they remained a military force that could not be underestimated.
In 387 BC, the Celts looted Rome. This painful episode was always remembered by the Romans, until Julius Caesar took revenge by utterly defeating the Gaulish Celts between 59 BC and 49 BC. Gaul, the cultural center of the Celts, became a province of the Roman Empire from then on. As a result of Caesar's conquest of Gaul, one million Celts were killed and another one million became slaves.
In the history of Britain, the real "Roman conquest" started in 43 AD. In that year, the Roman Emperor Claudius led an army of forty thousand soldiers to conquer the middle and south-central parts of the island of Britain within three years. After that, the whole of England came firmly under the control of the Roman Empire. The Celtic culture gradually disappeared from the European continent following the Roman wars of conquest and became integrated into Roman culture. Only in Ireland (where the Romans never arrived) and Scotland (which the Romans never completely occupied) did the Celtic kingdoms survive and continue. The Romans occupied Britain for four hundred years. They did not give up their military presence until 407 AD, when they were beset with difficulties and contradictions in both internal and external affairs. The Celts, the ancient inhabitants of Britain, thereupon re-established their order.
Around 449 AD, three Germanic tribes that lived in northwestern Europe invaded Britain. But they encountered heavy resistance from the Celts, and their invasion lasted a century and a half. The heroic deeds of a tribal general during this time, blended with the stories of three heroes in Celtic legends, circulated in Europe and eventually became the famous Arthurian legend. By the late 6th century, the Celts, the original inhabitants of the British Isles, were nearly extinct. The survivors escaped into the mountains or became slaves. This was the "Germanic Conquest" or "Teutonic Conquest" of British history.
The ancient Celts did not have capitals. As they lived in tribes, their expansion in Europe can be seen as "tribal migration". In the Middle Ages, some Celtic tribes began to fuse with each other and to found states in the modern sense. The Celts in Ireland (the Irish) captured Dublin from the Vikings and designated it as their capital, while the Celts in Scotland (the Scots) chose Edinburgh as theirs.
In the early Middle Ages, the Celts in Ireland maintained the custom of living in small groups. The four Irish provinces of Leinster, Munster, Connaught and Ulster were not unified until around 800 AD.
In 795 AD, Vikings invaded Ireland, and in the mid-9th century they began to establish permanent settlements, the most important of which was Dublin.
Around 1000 AD, Brian Boru became the first king of all Irishmen, and in 1014 AD he led Irish troops to defeat the Danes at Clontarf outside Dublin.
The earliest inhabitants of Scotland were mostly Picts. In the 6th century AD, a Celtic tribe from Ireland named the "Scots" invaded southern Scotland (the county of Argyll today), settled there and named the newly occupied land after their tribe. They then expanded southwards and absorbed and fused with the native Picts (who, before that, had been a persistent menace to the Romans in the south). The Kingdom of Scotland was essentially formed in the 11th century; however, the Kingdom of England to the south soon expressed a keen interest in this land. In response to English ambitions, the Scots signed the "Auld Alliance" (Old Alliance) with the French. The Auld Alliance was also the basis of Scottish diplomacy in the following centuries.
In 1296 AD, Edward I of England (also known as "Edward Longshanks" and "the Hammer of the Scots") annexed Scotland. William Wallace led the Scottish people in rising up against the English occupation, and he almost won the independence of Scotland after his victory at the Battle of Stirling Bridge in 1297 AD. Following his defeat at the Battle of Falkirk the next year, William Wallace led his men in a guerrilla war against the English, until he was betrayed by comrades and executed on the order of Edward I in 1305. After that, Robert Bruce declared himself king of Scotland after assassinating his main political opponents. He gained a complete victory at the Battle of Bannockburn in 1314 and drove all the English troops out of Scottish territory.
In 1328 AD, Edward III of England was obliged to admit the independence of Scotland.
The Welsh are also descendants of the ancient Celts. But during that period Wales was divided internally, and there was no warlord strong enough to unify the region.
In the 13th century, the king of England even tried to stop the unification of Wales by aligning with numerous Welsh vassal states. Although Wales was within the English sphere of influence, it had always been a stronghold of the Celts. However, after the death of Prince Llewelyn in 1282, Edward I started a war against Wales. He won the war and brought Wales under English rule. The Welsh maintained a strong national sentiment, as shown by the uprising led by Owain Glyndŵr in the early 15th century.
The Acts of Union of 1536 and 1542 unified England and Wales administratively, politically and legally (which is why the crown prince of England is also referred to as the "Prince of Wales").
The Celtic economy was based on farming and herding. The Celts had been engaged in agricultural production before they began their military expeditions and migrations. They knew how to use work horses and iron ploughs. They also knew how to choose the right crops to grow according to the natural conditions of different areas. They mainly grew barley, wheat, rye and oats, as well as beet, turnip, flax, hemp, onion and garlic. Their abundant grain production provided favorable conditions for population growth. It is estimated that the population of Gaul increased from seven hundred thousand in 1000 BC to three million in 400 BC.
Animal husbandry was an important basic economic activity for the Celts, second only to agriculture. The Celts raised horses, sheep, cattle and pigs, and horse-raising and sheep-raising were particularly common. Some tribal unions raised only a single kind of livestock, and they usually used glades to raise pigs. In some areas, raising semi-domesticated pigs in oak groves was popular.
From the 5th century BC, people in most areas of Gaul had been living settled lives based on farming or on a mixed economy of agriculture and animal husbandry. They built houses with wood and clay. There was no furniture in the houses; the ground was covered with hay or straw, which was then covered with hides. There were cellars for storing grain in most of the courtyards beside the houses.
According to archaeological materials, in the later phase of the Hallstatt culture the Celtic handicraft industry had begun to emerge and to separate from agriculture and simple household production. Mining, smelting and metal processing were the most important sectors of the Celtic handicraft industry. During the age of the Hallstatt culture, the Celts mainly exploited bog ore and open-pit iron mines. During the age of the La Tene culture, they began to look for high-grade iron mines which were easier to exploit, and smelted the ores near the mines. Usually they used charcoal as fuel and smelted iron ores in vertical furnaces. What came out of their furnaces were rectangular iron ingots with two pointed ends, each weighing 6-7 kilograms. In some areas, people used such iron ingots as a universal equivalent in trade. La Tene-age mine sites and workshop relics have been discovered throughout Gaul. Weapons (such as daggers) accounted for the biggest share of Celtic ironwares, followed by production tools, including ploughs, sickles, files, pliers, chisels, saws, axes, drill bits, scissors and razors.
Celtic handicraftsmen had superb skills in the processing of metals such as bronze, gold and silver. The Celtic bracelets, brooches and waist tags with vignettes and carved patterns were very popular in central and western Europe. They also knew how to inlay as well as plate gold and silver. In Europe at that time, Celts had the most advanced skills in the smelting and processing of metal as well as the manufacture of ironwares and other metalwares.
Apart from the smelting and processing of metal, the Celtic handicraft industry also included the manufacture of leatherwares, pottery, glass, enamels and vehicles. In the 2nd century BC, the pottery industry in Gaul became increasingly refined. In Gaulish pottery workshops there were not only pottery wheels, but also well-structured kilns. Gaulish pottery was famous for its superb craftsmanship and graceful styles. Leather was not only used to meet inhabitants' daily needs, but also to make various items, including soldiers' jackets, sheath belts, saddles, harnesses, shield coverings and helmets.
The Celts had been engaged in handicraft production for the purpose of exchange since the late phase of the Hallstatt culture. Trade between Celtic tribes, and between the Celts and the Mediterranean coast as well as other areas of Europe, also developed. In the late 7th century BC, Greek settlements appeared along the Mediterranean coast of southern France, the most famous being Massalia (Marseilles today). At first, Greek merchants and handicraftsmen established relations with the inland Celts via the Ligurians of southern France, and thus traded with the Celts of central Europe. Coral, ivory, glass, wine and bronze vessels were transported from the Mediterranean coast to the inland region, while livestock, leatherwares and raw metals (such as gold, silver and tin) were exported from Gaul. The Celts' early commodity exchange with the outside world mainly served the luxurious lifestyles of the tribal aristocrats.
From the middle phase of the La Tene culture, exchange between different areas of Gaul became active, and the commodities exchanged increased day by day. Commodity exchange and trade between the Celts and other areas became regular. In Gaul, they built a road network, as well as trading facilities at the intersections of roads and waterways. Chalon-sur-Saône and Mâcon on the Saône, Orleans and Lyons on the Loire, and Paris and Vernon on the Seine were all commodity transit points for both waterborne and overland trade. Chalon-sur-Saône was an important transit point on the trade routes from northern Gaul to southern Gaul. In the Celtic languages, "magus" meant market. The places whose names ended with "magus" were mostly located at ferries or near bridges; they were the trade hubs of the time. Long-distance commodity exchange also appeared following the establishment of trade routes and trade hubs.
Also, as a result of the development of commodity production and trade, the Celts began to make coins. The earliest Celtic coin was an imitation of the tetradrachm of Philip II and appeared in western Gaul. After that, tribal groups in northern, southern and central Gaul also began to make coins. The designs and patterns on the coins were imitations of ancient Greek coins, or figures and geometric patterns. For example, on the coins used in Bretagne (Brittany), there was a head portrait with a laurel wreath on one side and an image of a soldier holding a spear and a shield on the other.
As the Celts lived in a favorable geographical location in Europe, "where various influences rendezvoused", they had connections and communications with neighboring ethnic groups such as the Germanic peoples, Illyrians, Ligurians, Italians and Greeks. Through such communications, the Celts kept enriching and strengthening themselves, and created their unique culture.
Celtic culture can be divided into the following periods:
1. The Bell-Beaker Culture and Battle-Axe Culture
The Bell-Beaker Culture and Battle-Axe Culture (approximately mid-30th century BC to early 20th century BC) seem somewhat related to the ancestors of the Celts. They were named for the burial objects (such as distinctive bell-shaped pottery cups and perforated stone axes) discovered in the relevant tomb sites.
2. The Unetician Culture
The Unetician Culture (approximately 17th century BC to 14th century BC) spread over a broad region from the western Slovakian border, Moravia and the central and northwestern Czech lands to central Germany. The richest tin mines in central Europe were located in the Ore Mountains (German: Erzgebirge; Czech: Krušné Hory). The tin ores from these mines and the copper ores exploited locally or in nearby areas provided a good foundation for the development of a bronze culture.
3. The Urnfield Culture
The Urnfield Culture (approximately 13th century BC to 8th century BC) has been called "a new cultural trend that gave birth to Celtic society". It emerged in northern Italy and the eastern part of central Europe, then spread to western Europe, the Nordic region and even Ukraine in eastern Europe.
4. The Hallstatt Culture
The Hallstatt Culture (approximately 1100 BC to 450 BC) was named after the Hallstatt relics near Salzburg, Austria. Its relics are spread over the former Yugoslavia, Austria, western Poland and France. It succeeded the Urnfield Culture and preceded the La Tene Culture.
5. The La Tene Culture
The La Tene Culture (approximately mid-5th century BC to late 1st century BC) was named after the La Tene relics on the eastern shore of Lake Neuchâtel in Switzerland. Its relics are also spread over Austria, France and Britain. The La Tene Culture was famous for its distinctive decorative art. It developed from the plain and simple geometric patterns of the Hallstatt Culture, and was also influenced by the images of strange animals in Scythian culture as well as the realistic style of classical Greek art, finally forming a decorative style centered on spiral curves and circle patterns, supplemented with Greek-style floral patterns and Scythian-style animal motifs.
The Celtic languages, or the Celtic language group, are a group of languages within the Indo-European language family. In ancient times, the Celtic languages were widely spoken in western Europe. But today, only a small number of people in some areas of Britain and the Brittany peninsula of France speak Celtic languages (such as Cornish and Breton).
There are four main Celtic language groups, though scholars have long disputed how to classify them. The languages of two of these groups are now extinct.
Gaulish language: Gaulish language and its branch languages such as the Lepontic language and Galatian language had been widely spoken in a broad area from France to Turkey and from Holland to northern Italy.
Celtiberian language: it had been used in Aragón and other areas in Spain.
Goidelic languages: including Irish Gaelic, Scottish Gaelic, and Manx.
Brythonic languages: including Welsh, Cornish, and Breton.
The four groups are usually sorted into two categories, and there are two influential ways of doing so. The first method groups Gaulish and the Brythonic languages together as the "P-Celtic languages", and Celtiberian and the Goidelic languages as the "Q-Celtic languages". The difference can be seen in the word for "son": it is "map" in P-Celtic languages and "mac" in Q-Celtic languages (the "c" here is pronounced as "k").
The other method places the Goidelic and Brythonic languages in the category "Insular Celtic languages" and the rest in the category "Continental Celtic languages". (Proponents of this method hold that the variation between "Q" and "P" arose independently within each language.) They also point out shared innovations among the Insular Celtic languages, including inflected prepositions and verb–subject–object (VSO) word order. Nobody, however, claims that the Continental Celtic languages derive from a single "Primitive Continental Celtic"; the classification simply makes it convenient to group all the non-Insular Celtic languages into one category.
Ancient Celts were known for their Druidism. This religion was named after the Druids, the priestly class entitled to a special status in Celtic society. The name "Druid" originated from "dru", meaning "live oak"; the tall live oak was an idol and sacred tree to the Celts. Celtic religious and sacrificial rituals were very arcane and often held in the quiet of the night, usually in a grove or glade called a "holy place". A Druid (priest) wore a white robe, cut mistletoe with a golden sickle, and performed the sacrificial ritual with two white cattle under the sacred tree. In ancient Europe, densely covered by desolate, dark and silent forests, many ethnic groups regarded mistletoe as a holy plant or worshipped sacred trees, but the highly mysterious Druids held their rituals following very distinctive customs.
The core of Druidic doctrine was reincarnation: the soul would not perish after death but would pass from one body to another.
The deities worshipped by Druidists were mostly the guardians of regions and tribes, and usually named after the tribes.
Similar to Celtic religion, Celtic mythology was unique and distinctive. It originated in the time when the Celts lived in the inland region of central Europe north of the Alps. Having gone through many evolutions, Celtic mythology developed relatively steadily before the Roman conquest, though with some delay in Britain. Celtic mythology and religion blended with each other, and together they reflected the Celts' character, philosophy and social life in the form of ideas. For example, the Celts were good hunters, so they had deities in the shapes of wild boar, deer and bear; they focused on farming and herding, so they had a protector god of farmland, as well as the goddess Epona (protector of horses) and the goddess Damona (protector of cattle).
Celts loved to eat pork and to hold feasts. Welsh mythology pictures such scenes: inexhaustible sacred cauldrons and hearty pork dishes in the luxurious banquet halls of the otherworld. Likewise, the Celts were tough, intrepid and aggressive, and this disposition produced many images of war gods such as Belatukadros, Caturix and Cocidius, as well as legends of reckless, tough and chivalrous figures who also loved to brag.
Among the Celtic gods, the most important one was the sun god Lugus. Ancient Greek litterateurs equated Lugus and Apollo, and believed that both of them were good at crafts and music. Another important god was the god of all animals Cernunnos who had antlers on the head.
Among Celtic goddesses, the protector of horses (mares) was the strongest. She had different names in different regions: in Gaul she was known as Epona, in Ireland she was called Macha, and in Britain, Rhiannon. Like the war goddess Morrigan, she closely controlled the fates of kings and tribes; Morrigan represented death and rebirth, while the horse goddess represented fertility. Goddesses in Celtic religion often appeared in three forms or as a trinity. For example, there was the Martellogne trinity in Gaul; in Ireland the goddess Brigid governed poetry, healing and metalwork; and the "Great Queen" Morrigan had three forms, representing death (prediction), war (fear) and being killed in battle. Another goddess was recorded by Luka: she accepted human sacrifices and had three faces and three forms – thunder, war and a mysterious ox (symbolizing fertility).
November 1st – the Day of the Departed
May 1st – the Day of Belenus (people held sacrifice ceremony for war, herded, hunted and courted on this day)
February 1st – the Day of goddess Brigid’s early spring
August 1st – the Day of Lugus’ marriage (also the Day of harvest)
One theory holds that the bagpipe originated in Ireland and first appeared in Scotland around the 13th century. This theory is wrong: the bagpipe originated not in Ireland but in northern Italy, and it reached Britain following the Roman conquest. In northern Italy a similar instrument, largely identical to the bagpipe though rarely seen nowadays, is still used by local people. The real traditional Celtic instrument is the harp, which originated in Ireland and later came into wide use in Scotland.
The Irish harp or Celtic harp (in Celtic languages, "clarsach") is an important element of the Irish national emblem. It is about 90 centimeters high and 55 centimeters wide. Traditional Irish harpists play it with their fingernails; according to legend, if a harpist's performance made the audience feel distressed, his fingernails would break. When Celtic peoples brought this instrument to Africa, it was remade from hollowed wood and became the instrument known as the castanets: it had three or five strings, was wrapped in dried hide, and had small holes in the body so the sound could come out.
The most popular sport among the Celts, golf, was invented by the Scots in the Middle Ages.
The majority of Celtic troops were the poor, armed with spears and simple armor. Those who brandished sharp swords were the fearsome "Celtic swordsmen". These men were far wealthier than the others and could afford to equip many of them, yet they themselves wore no helmets or armor and carried only their swords, always going into battle bare-chested or naked and with great enthusiasm. Like the Viking berserkers, they sometimes wore trousers or cloaks with distinctive patterns. They opened battle with vicious abuse, then charged madly at the enemy; in Celtic society this was the standard behavior for a swordsman in tribal conflicts, and such conflicts also honed young Celts into qualified warriors. Celtic warriors were also known for joining other classical-age armies as mercenaries; the best example is that during the Second Punic War they joined Hannibal's army in its invasion of Rome and contributed greatly to his victories.
Under Brennus, the Celts began their sack of Roman cities, which left indelible wounds on Roman hearts and led the Romans to treat the Celts in kind. The mutual hostility never ended, even after Gaul and Britain became independent of the Roman Empire.
Dilating a Polygon
Read the directions and complete the problems below.
Step 1: Create a polygon. This time, let's create a regular polygon. A regular polygon is a polygon that has all sides equal and all angles equal; a square is an example of a regular polygon. a) Click the polygon tool and select regular polygon. b) This tool requires you to graph two points (graph any two points). c) Then enter the number of vertices (let's do 6 for this example).
Step 2: Decide which point is your center of dilation. Use the point tool (2nd button) to plot a point, unless the point is a vertex of your polygon. Note: the point created needs to be dark blue. If you get a black dot or a light blue dot, delete it, create a dot someplace else, and move it to where you need it to be.
Step 3: Dilating. a) Click the 8th button and choose dilating from a point. b) Click the polygon. c) Click the center of dilation. d) Enter the scale factor. Note: if you move the vertices, the corresponding image points will change appropriately as well.
Problem #1: D 5(ABCDEF) with (0, 0) as the center of dilation. Problem #2: D 1/5(ABCDEF) with Point A as the center of dilation.
*Perform each transformation within this applet. **Take a screenshot of your work (you will have Problem #1 and Problem #2 on one screenshot) and insert it into the appropriate Google slide. ***GeoGebra will NOT save your work!
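The arithmetic behind the applet's dilation tool can be sketched in a few lines of Python (the function name `dilate` and the sample hexagon coordinates are illustrative, not part of GeoGebra): each image vertex is the center plus the scale factor times the vector from the center to the original vertex.

```python
def dilate(vertices, center, k):
    """Dilate polygon vertices about a center point by scale factor k."""
    cx, cy = center
    # Each image point lies on the ray from the center through the preimage
    # point, at k times the original distance from the center.
    return [(cx + k * (x - cx), cy + k * (y - cy)) for (x, y) in vertices]

# Example: a dilation by factor 5 about the origin, as in Problem #1.
hexagon = [(1, 0), (0.5, 0.866), (-0.5, 0.866),
           (-1, 0), (-0.5, -0.866), (0.5, -0.866)]
print(dilate(hexagon, (0, 0), 5))
```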
2 4.1 Angles & Radian Measure. Objectives: recognize & use the vocabulary of angles; use degree measure; use radian measure; convert between degrees & radians; draw angles in standard position; find coterminal angles; find the length of a circular arc; use linear & angular speed to describe motion on a circular path
3 Angles: An angle is formed when two rays have a common endpt. Standard position: one ray lies along the x-axis extending toward the right. Positive angles measure counterclockwise from the x-axis; negative angles measure clockwise from the x-axis.
4 Angle Measure. Degrees: full circle = 360 degrees; half-circle = 180 degrees; right angle = 90 degrees. Radians: one radian is the measure of the central angle that intercepts an arc equal in length to the radius (we can construct an angle of measure = 1 radian!). Full circle = 2 pi radians; half circle = pi radians; right angle = pi/2 radians.
5 Radian Measure. The measure of an angle in radians is the ratio of the arc length to the radius. Recall half circle = 180 degrees = pi radians. This provides a conversion factor: since the two are equal, their ratio is 1, so we can convert from radians to degrees (or vice versa) by multiplying by this "well-chosen one." Example: convert 270 degrees to radians (270 degrees x pi/180 degrees = 3 pi/2 radians).
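A minimal Python sketch of the same conversion (the function names are just illustrative; the standard library's math.radians and math.degrees do the same job):

```python
import math

def deg_to_rad(degrees):
    # Multiply by the "well-chosen one": pi radians / 180 degrees.
    return degrees * math.pi / 180

def rad_to_deg(radians):
    return radians * 180 / math.pi

print(deg_to_rad(270))           # 4.712... = 3*pi/2 radians
print(rad_to_deg(math.pi / 2))   # 90.0 degrees
```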
7 Coterminal angles: angles in standard position that share the same terminal side. The angle may be positive or negative (moving counterclockwise or clockwise), e.g. a 70 degree angle is coterminal with a -290 degree angle; or the angle may go around the circle more than once, e.g. a 30 degree angle is coterminal with a 390 degree angle.
8 Arc length. Since radians are defined as the central angle created when the arc length = radius length for any given circle, it makes sense to consider arc length when the angle is measured in radians. Recall theta (in radians) is the ratio of arc length to radius, so arc length = radius x theta (in radians).
9 Linear speed & angular speed. The speed at which a particle moves along an arc of the circle (v) is the linear speed (distance, s, per unit time, t). The speed at which the angle is changing as the particle moves along the arc is the angular speed (angle measure in radians per unit time, t).
10 Relationship between linear speed & angular speed: linear speed is the product of radius and angular speed. Example: The minute hand of a clock is 6 inches long. How fast is the tip of the hand moving? We know angular speed = 2 pi per 60 minutes.
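A quick Python check of the clock example (the variable names are illustrative):

```python
import math

radius = 6                         # inches, length of the minute hand
angular_speed = 2 * math.pi / 60   # radians per minute (one full turn per hour)
linear_speed = radius * angular_speed
print(linear_speed)                # ~0.628 inches per minute at the tip
```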
11 4.2 Trigonometric Functions: The Unit Circle. Objectives: use a unit circle to define trigonometric functions of real numbers; recognize the domain & range of sine & cosine; find exact values of the trig. functions at pi/4; use even & odd trigonometric functions; recognize & use fundamental identities; use periodic properties; evaluate trig. functions with a calculator
12 What is the unit circle? A circle with radius = 1 unit. Why are we interested in this circle? It provides convenient (x,y) values as we work our way around the circle: (1,0) at theta = 0; (0,1) at theta = pi/2; (-1,0) at theta = pi; (0,-1) at theta = 3 pi/2. ALSO, any (x,y) point on the circle lies at the end of the hypotenuse of a right triangle that extends from the origin, such that x^2 + y^2 = 1.
13 sin t and cos tFor any point (x,y) found on the unit circle, x=cos t and y=sin tt = any real number, corresponding to the arc length of the unit circleExample: at the point (1,0), the cos t = 1 and sin t = 0. What is t? t is the arc length at that point AND since it’s a unit circle, we know the arc length = central angle, in radians. THUS, cos (0) = 1 and sin (0)=0
14 Relating all trigonometric functions to sin t and cos t
15 Pythagorean Identities. Every point (x,y) on the unit circle corresponds to a real number, t, that represents the arc length at that point. Since x^2 + y^2 = 1 and x = cos(t) and y = sin(t), then sin^2(t) + cos^2(t) = 1. If each term is divided by cos^2(t), the result is tan^2(t) + 1 = sec^2(t); dividing instead by sin^2(t) gives 1 + cot^2(t) = csc^2(t).
16 Given csc t = 13/12, find the values of the other 6 trig. functions of t: sin t = 12/13 (reciprocal); cos t = 5/13 (Pythagorean); sec t = 13/5 (reciprocal); tan t = 12/5 (sin(t)/cos(t)); cot t = 5/12 (reciprocal)
17 Trig. functions are periodic sin(t) and cos(t) are the (x,y) coordinates around the unit circle and the values repeat every time a full circle is completedThus the period of both sin(t) and cos(t) = 2 pisin(t)=sin(2pi + t) cos(t)=cos(2pi + t)Since tan(t) = sin(t)/cos(t), we find the values repeat (become periodic) after pi, thus tan(t)=tan(pi + t)
18 4.3 Right Triangle Trigonometry. Objectives: use right triangles to evaluate trig. functions; find function values for 30 degrees, 45 degrees & 60 degrees; use equal cofunctions of complements; use right triangle trig. to solve applied problems
19 Within a unit circle, a right triangle can be sketched. The point on the circle is (x,y) and the hypotenuse = 1. Therefore, the x-value is the horizontal leg and the y-value is the vertical leg of the right triangle formed. cos(t) = x, which equals x/1, therefore cos(t) = horizontal leg/hypotenuse = adjacent leg/hypotenuse. sin(t) = y, which equals y/1, therefore sin(t) = vertical leg/hypotenuse = opposite leg/hypotenuse.
20 The relationship holds true for ALL right triangles (the other 3 trig. functions are found as reciprocals)
21 Find the values of the 6 trig. functions of the angles in a right triangle. Given 2 sides, the value of the 3rd side can be found using the Pythagorean theorem. After the lengths of all 3 sides are known: sin = opposite/hypotenuse; cos = adjacent/hypotenuse; tan = opposite/adjacent; csc = 1/sin; sec = 1/cos; cot = 1/tan
22 Given a right triangle with hypotenuse =5 and side adjacent angle B of length=2, find tan B
23 Special Triangles. 30-60 right triangle: the ratio of the sides is 1 : √3 : 2, where 2 (the longest) is the length of the hypotenuse, the shortest side (opposite the 30 degree angle) is 1, and the remaining side (opposite the 60 degree angle) is √3. 45-45 right triangle: the 2 legs are the same length since the angles opposite them are equal, thus 1:1; using the Pythagorean theorem, the remaining side, the hypotenuse, is √2.
24 Cofunction Identities. Cofunctions are pairs of functions related through complementary angles (the cofunction of tan is cot, the cofunction of sin is cos, the cofunction of sec is csc). For an acute angle, A, of a right triangle, the side opposite A is the side adjacent to the other acute angle, B. Therefore sin A = cos B. Since A & B are the acute angles of a right triangle, their sum = 90 degrees, so B = 90 degrees - A and the value of a function at A equals the value of its cofunction at B.
25 4.4 Trigonometric Functions of Any Angle. Objectives: use the definitions of trigonometric functions of any angle; use the signs of the trigonometric functions; find reference angles; use reference angles to evaluate trigonometric functions
26 Trigonometric functions of Any Angle Previously, we looked at the 6 trig. functions of angles in a right triangle. These angles are all acute. What about negative angles? What about obtuse angles?These angles exist, particularly as we consider moving around a circleAt any point on the circle, we can drop a vertical line to the x-axis and create a triangle. Horizontal side = x, vertical side=y, hypotenuse=r.
27 Trigonometric Functions of Any Angle (continued). If, for example, you have an angle whose terminal side is in the 3rd quadrant, then the x & y values are both negative. The radius, r, is always a positive value. Given a point (-3,-4), find the 6 trig. functions associated with the angle formed by the ray containing this point: x = -3, y = -4, r = √((-3)^2 + (-4)^2) = 5 (continued next slide)
28 Example continued sin A = -4/5, cos A = -3/5, tan A = 4/3 csc A = -5/4, sec A = -5/3, cot A = ¾Notice that the same values of the trig. functions for angle A would be true for the angles 360+A, A-360 (negative values)
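A short Python sketch reproducing this example directly from the point (-3, -4), using only the ratio definitions:

```python
import math

x, y = -3, -4
r = math.hypot(x, y)                    # r = 5.0, always positive

sin_a, cos_a, tan_a = y / r, x / r, y / x
print(sin_a, cos_a, tan_a)              # -0.8, -0.6, 1.333...
print(1 / sin_a, 1 / cos_a, x / y)      # csc = -1.25, sec = -1.666..., cot = 0.75
```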
29 Examining the 4 quadrants. Quadrant I: x & y are positive; all 6 trig. functions are positive. Quadrant II: x negative, y positive; positive: sin, csc; negative: cos, sec, tan, cot. Quadrant III: x negative, y negative; positive: tan, cot; negative: sin, csc, cos, sec. Quadrant IV: x positive, y negative; positive: cos, sec; negative: sin, csc, cot, tan.
30 Reference angles. Angles in all quadrants can be related to a "reference" angle in the 1st quadrant. If angle A is in quadrant II, its reference angle in quad I is 180-A; the numerical values of the 6 trig. functions are the same, except that those involving x (cos, sec, tan, cot) are all negative. If angle A is in quad III, its reference angle in quad I is A-180; now x & y are both negative, so sin, csc, cos, sec are all negative.
31 Reference angles cont.If angle A is in quad IV, the reference angle is 360-A. The y value is negative, so the sin, csc, tan & cot are all negative.
32 Special anglesWe often work with the “special angles” of the “special triangles.” It’s good to remember them both in radians & degreesIf you know the trig. functions of the special angles in quad I, you know them in every quadrant, by determining whether the x or y is positive or negative
33 4.5 Graphs of Sine & Cosine. Objectives: understand the graph of y = sin x; graph variations of y = sin x; understand the graph of y = cos x; graph variations of y = cos x; use vertical shifts of sine & cosine curves; model periodic behavior
34 Graphing y = sin x. If we take all the values of sin x from the unit circle and plot them on a coordinate axis with x = angles and y = sin x, the graph is a curve. Range: [-1,1]. Domain: all reals.
35 Graphing y = cos x. Unwrap the unit circle and plot all x values from the circle (the cos values) on the coordinate axes, with x = angle measures (in radians) and y = cos x. Range: [-1,1]. Domain: all reals.
36 Comparisons between y = cos x and y = sin x. Range & domain: SAME (range: [-1,1], domain: all reals). Period: SAME (2 pi). Intercepts: different – sin x crosses through the origin and intercepts the x-axis at all multiples of pi; cos x intercepts the y-axis at (0,1) and intercepts the x-axis at all odd multiples of pi/2.
37 Amplitude & PeriodThe amplitude of sin x & cos x is 1. The greatest distance the curves rise & fall from the axis is 1.The period of both functions is 2 pi. This is the distance around the unit circle.Can we change amplitude? Yes, if the function value (y) is multiplied by a constant, that is the NEW amplitude, example: y = 3 sin x
38 Amplitude & Period (cont). Can we change the period? Yes, the length of the period is a function of the x-value. Example: y = sin(3x). The amplitude is still 1 (range: [-1,1]). The period is 2 pi/3.
39 Phase Shift. The graph of y = sin x is "shifted" left or right of the original graph. The change is made to the x-values, so it is addition/subtraction to the x-values. Example: y = sin(x - ), the graph of y = sin x is shifted right.
40 Vertical ShiftThe graph y=sin x can be shifted up or down on the coordinate axis by adding to the y-value.Example:y = sin x + 3 moves the graph of sin x up 3 units.
41 Graph y = 2cos(x- ) - 2 Amplitude = 2 Phase shift = right Vertical shift = down 2
42 4.6 Graphs of Other Trigonometric Functions. Objectives: understand the graph of y = tan x; graph variations of y = tan x; understand the graph of y = cot x; graph variations of y = cot x; understand the graphs of y = csc x and y = sec x
43 y = tan x. Going around the unit circle, where the x value is 0 (cos x = 0), the tangent is undefined. At x = pi/2 + k*pi the graph of y = tan x has vertical asymptotes; x-intercepts occur where sin x = 0, at x = k*pi.
44 Characteristics of y = tan x: Period = pi. Domain: all reals except odd multiples of pi/2. Range: all reals. Vertical asymptotes: odd multiples of pi/2. x-intercepts: all multiples of pi. Odd function (symmetric through the origin; quad I mirrors to quad III).
45 Transformations of y = tan x Shifts (vertical & phase) are done as the shifts to y = sin xPeriod change (same as to y=sin x, except the original period of tan x is pi, not 2 pi)
46 Graph y = -3 tan (2x) + 1 Period is now pi/2 Vertical shift is up 1 -3 impacts the “amplitude”Since tan x has no amplitude, we consider the point ½ way between intercept & asymptote, where the y-value=1. Now the y-value at that point is -3.See graph next slide.
53 4.7 Inverse Trigonometric Functions. Objectives: understand and use the inverse sine function; understand and use the inverse cosine function; understand and use the inverse tangent function; use a calculator to evaluate inverse trig. functions; find exact values of composite functions with inverse trigonometric functions
54 What is the inverse sin of x? It is the ANGLE (or real #) that has a sin value of x. Example: the inverse sin of 1/2 is pi/6 (arcsin 1/2 = pi/6). Why? Because sin(pi/6) = 1/2. Shorthand notation for the inverse sin of x is arcsin x or sin^(-1) x. Recall that there are MANY angles that have a sin value of 1/2. We want to be consistent and specific about WHICH angle we're referring to, so we limit the range to [-pi/2, pi/2] (quad I & IV).
55 Find the domain of y = arcsin x. The domain of any function becomes the range of its inverse, and the range of a function becomes the domain of its inverse. The range of y = sin x is [-1,1]; therefore the domain of the inverse sine (arcsin x) function is [-1,1].
56 Trigonometric values for special angles If you know sin(pi/2) = 1, you know the inverse sin(1) = pi/2KNOW TRIG VALUES FOR ALL SPECIAL ANGLES (once you do, you know the inverse trigs as well!)
59 The inverse cosine function. The inverse cosine of x refers to the angle (or number) that has a cosine of x. It is represented as arccos(x) or cos^(-1)(x). Example: arccos(1/2) = pi/3 because cos(pi/3) = 1/2. Domain: [-1,1]. Range: [0, pi] (quadrants I & II).
61 The inverse tangent function. The inverse tangent of x refers to the angle (or number) that has a tangent of x. It is represented as arctan(x) or tan^(-1)(x). Example: arctan(1) = pi/4 because tan(pi/4) = 1. Domain: all reals. Range: (-pi/2, pi/2) (quadrants I & IV).
63 Evaluating compositions of functions & their inverses. Recall: the composition of a function and its inverse = x (what the function does, its inverse undoes). This is true for trig. functions & their inverses as well, PROVIDED x is in the range of the inverse trig. function. Example: arcsin(sin pi/6) = pi/6, BUT arcsin(sin 5pi/6) = pi/6. WHY? 5pi/6 is NOT in the range of arcsin x, but the angle with the same sin in the appropriate range is pi/6.
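A quick check of this behavior with Python's math.asin (angles in radians):

```python
import math

print(math.asin(math.sin(math.pi / 6)))      # 0.5235... = pi/6
print(math.asin(math.sin(5 * math.pi / 6)))  # also 0.5235..., NOT 5*pi/6
```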
64 4.8 Applications of Trigonometric Functions. Objectives: solve a right triangle; solve problems involving bearings; model simple harmonic motion.
65 Solving a Right Triangle. This means finding the values of all angles and all side lengths. The sum of the angles = 180 degrees, and if one is a right angle, the sum of the remaining angles is 90 degrees. All sides are related by the Pythagorean theorem: a^2 + b^2 = c^2. Using the ratio definitions of the trig functions (sin x = opposite/hypotenuse, tan x = opposite/adjacent, cos x = adjacent/hypotenuse), one can find the remaining sides if one side and an acute angle are given.
66 Example: A right triangle has a hypotenuse = 6 cm and an angle = 35 degrees. Solve the triangle. cos(35 degrees) = .819 (using a calculator); cos(35 degrees) = adjacent/6 cm, so .819 = adjacent/6 cm and adjacent = 4.9 cm. Remaining angle = 55 degrees. Remaining side: opposite = 6 sin(35 degrees) ≈ 3.4 cm.
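A Python sketch of the same computation (variable names are illustrative):

```python
import math

hypotenuse = 6.0                           # cm
angle = math.radians(35)

adjacent = hypotenuse * math.cos(angle)    # ~4.91 cm
opposite = hypotenuse * math.sin(angle)    # ~3.44 cm
other_angle = 90 - 35                      # 55 degrees
print(adjacent, opposite, other_angle)
```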
67 Trigonometry & Bearings. Bearings are used to describe position in navigation and surveying. Positions are described relative to a NORTH or SOUTH axis (the y-axis), different from measuring from the standard position (the positive x-axis). N 55 degrees E means the direction is 55 degrees from north toward the east (in quadrant I); S 35 degrees W means the direction is 35 degrees from south toward the west (in quadrant III).
The French Fur Trade
Beginning in the mid sixteenth century, French explorers were able to establish a powerful and lasting presence in what is now the northern United States and Canada. The explorers placed much emphasis on exploring and colonizing the area surrounding the St. Lawrence River, "which gave access to the Great Lakes and the heart of the continent" (Microsoft p? ). They began exploring the area around 1540 and had early interactions with many of the Natives, which made communication easier for both peoples when the French returned nearly fifty years later. The French brought a new European desire for fur with them to America when they returned and began to trade with the Indians for furs in order to supply the European demand. The Natives and the French were required to interact with each other in order to make these trades possible, and, over time, the two groups developed a lasting alliance.
However, the French began to face strong competition in the fur trading industry, which caused many problems between different European nations and different native tribes. Therefore, the trading of fur allowed early seventeenth century French explorers to establish peaceful relations with the Natives; however, competitive trading also incited much quarreling between competing colonies and Indian tribes. Since the early seventeenth century, French explorers had been able to keep peaceful relations with the Native Americans as a result of fur trading. Samuel de Champlain was a French explorer who established one of the first trading posts along the St.
Lawrence River. He helped to establish an industry of fur trading that would continue for the next one hundred fifty years. By strategically placing many other trading posts in the St. Lawrence River and Great Lakes regions, the French were able to draw many Natives who were interested in European goods and, at the same time, collect the furs that they desired. This mutual interest in each other's goods allowed both peoples to experience each other's culture and understand each other's society. Once the French understood the Natives, they began to trust them and adopted many parts of their culture.
Some explorers used the "Indian canoe... to explore the entire Great Lakes chain and most of the rivers that fed into them" (Birchfeild p 560). Even some of the French explorers "married into indigenous families... and [blended] French and indigenous elements in the way they lived" (Microsoft p? ).
These developing relationships were helpful in keeping peace between the French and the Natives and were especially helpful in developing political alliances between the French and certain Native tribes. The French, especially Champlain, were particularly helpful in protecting many tribes indigenous to the Great Lakes region. Champlain "joined four hundred Indians in an overland attack on an Iroquois fort" (Sandoz p 34) as a representative of French support for the Algonquians, Montagnais, and Huron Indians. This strong support shows that the French were committed to keeping peace with their Native friends and the Natives, in turn, helped the French to succeed in fur trading and nearly monopolize the industry. On the other hand, competitive trading caused many arguments and much fighting between European countries and Indian tribes.
The British noticed the success of the French and decided to focus more energy on fur trading. Therefore, they became allies of the Iroquois, the enemy of the French, and attempted to take over fur trading in North America and eventually attempted to conquer the French settlements. The Iroquois and Algonquian Indians stayed loyal to their European allies and fought many bloody battles against each other in order to gain control of the main fur trading traffic in North America. Samuel de Champlain reflected on the hatred brought about by arguments over territory, but also over control of the fur trade, after witnessing the death of an Iroquois prisoner of the Algonquian tribe: They were kept to be put to death by the women and girls, who in this respect are no less inhuman than the men, and, indeed, much more so; for by their subtlety they invent more cruel tortures, and take pleasure in it. (Sandoz p 34) The French and British also fought over control of territory and eventually went to war. They fought many small, localized battles prior to a succession of important battles that would decide each country's position in America.
They fought in the French and Indian War from 1754-1763, not only to decide who would control the fur trade, but also to designate a single major power in the Americas. The Choctaw Indians fought with the French against the British and their reluctant allies, the Cherokees and Chickasaws. The French were defeated and surrendered in 1760 at Montreal, Quebec. "This shifted the balance of power from Native nations that had allied themselves with colonial powers to Britain" (Birchfeild p 560) and signified the end of French domination in any area, but especially in fur trading.
The French had risen to a powerful position in America with the help of the fur trading industry. They had nearly monopolized the trading and were allies with many Native American tribes. However, the British and other European countries noticed France's great success in fur trading and shifted their attention toward obtaining native allies and trading. France lost its hold on the business only to see the power shift to the British and saw them take control of America.
French explorers were able to establish peaceful relations with the Indians; however, the end result of the fur trade was much fighting and wars between nations and tribes.
5th Grade Geometry
5th Grade Geometry Vocabulary
Area and Circumference of Circles
Use 3.14 when calculating π.
Tic Tac Toe Area and Perimeter
Solve an area or perimeter problem to place an X or O on the board! If your answer is correct, click the empty square where you want to place your X or O. Beat the computer by getting 3 in a row!
Geometry- Quadrilaterals and 3D shapes
Test your knowledge of properties of quadrilaterals, area of 2D figures, and surface area and volume of 3D figures.
Surface Area and Volume Practice Grade 6
Surface area and volume questions about right rectangular prisms.
Hi! This has 19 terms about some basic geometry vocabulary. The question is the definition; the answer is the term. Happy studying! :)
Translations and Reflections in Geometry chakalaka
Hi! This game has 11 questions about translations (slides) and reflections (flips). Have fun!
Math's Key (Area of a shape)
Geometry Year 5, topic Space
Properties of parallelograms tested
This data analysis activity requires students to read and interpret six written or graphical representations of data. Students must determine which graphs and analysis belong together. The data representations used include a pictograph, a circle graph, a frequency chart, a bar graph, and two written analyses including the terms mode, median, and mean for each data set. Included with the activity are teacher's notes, a hint, and the solution.
Data Fusion and Satellite Imagery
Data fusion is the process of integrating multiple sources of data to create a more comprehensive and accurate understanding of a particular subject or phenomenon. In the context of satellite imagery, data fusion refers to the combination of information obtained from various types of sensors and platforms to enhance the quality, interpretability, and usability of the imagery.
Data Fusion and Synthetic Aperture Radar (SAR)
SAR satellite imagery is a type of radar imagery that uses microwave signals to capture high-resolution images of the Earth’s surface, regardless of weather conditions or time of day. SAR imagery is particularly useful in applications such as environmental monitoring, natural resource exploration, and disaster management.
There are several ways data fusion can be applied to SAR satellite imagery:
SAR imagery can be fused with data from other sensors, such as optical, multispectral, or hyperspectral sensors, to create a more complete and detailed representation of the Earth’s surface. This can help overcome limitations of individual sensors, such as cloud cover for optical sensors or lower spatial resolution in SAR imagery.
Combining SAR imagery collected at different times allows for the analysis of changes in the Earth’s surface over time. This can be used to monitor land use changes, track the progress of natural disasters, or identify illegal activities such as deforestation.
SAR systems can operate at various frequencies, such as L-band, C-band, and X-band. Each frequency has unique characteristics and penetration abilities, which can be combined to provide more information about the target area. For example, L-band can penetrate deeper into vegetation, while X-band can provide finer spatial resolution.
SAR systems can capture images in different polarizations, such as horizontal (HH), vertical (VV), or cross-polarization (HV or VH). Fusing images from different polarizations can provide additional information about the surface features and improve classification accuracy.
Data fusion techniques can also be used to combine SAR imagery with different spatial resolutions. This can help to create images with improved resolution and detail.
Data fusion in the context of SAR satellite imagery involves the integration of various sources of data to enhance the quality and interpretability of the images. By combining SAR imagery with other types of data, or by fusing multiple SAR images obtained at different times, frequencies, polarizations, or resolutions, a more comprehensive understanding of the Earth's surface can be achieved, which is useful for applications such as environmental monitoring, natural resource exploration, and disaster management.
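As a rough illustration of pixel-level fusion, here is a minimal Python/NumPy sketch that blends two co-registered images with a fixed weight. The function name and the random arrays standing in for real scenes are assumptions made for the example; operational SAR fusion pipelines involve co-registration, speckle filtering, and far more sophisticated fusion methods (PCA, wavelets, neural networks).

```python
import numpy as np

def fuse_sar_optical(sar, optical, weight=0.5):
    """Naive pixel-level fusion: normalize each co-registered image to [0, 1]
    and blend them with a fixed weight."""
    def normalize(img):
        img = img.astype(np.float64)
        return (img - img.min()) / (img.max() - img.min() + 1e-12)

    return weight * normalize(sar) + (1 - weight) * normalize(optical)

# Toy example with random arrays standing in for co-registered scenes.
sar_scene = np.random.rand(256, 256)
optical_band = np.random.rand(256, 256)
fused = fuse_sar_optical(sar_scene, optical_band, weight=0.4)
print(fused.shape, fused.min(), fused.max())
```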
An equal temperament is a musical temperament, or system of tuning, in which the frequency interval between every pair of adjacent notes has the same ratio. In other words, the ratio of the frequencies of any adjacent pair of notes is the same, and, since pitch is perceived roughly as the logarithm of frequency, every note is perceived as being the same "distance" from its nearest neighbors.
In equal temperament tunings, the generating interval is often found by dividing some larger desired interval, often the octave (ratio 2:1), into a number of smaller equal steps (equal frequency ratios between successive notes).
In classical music and Western music in general, the most common tuning system since the 18th century has been twelve-tone equal temperament (also known as 12 equal temperament, 12-TET or 12-ET), which divides the octave into 12 parts, all of which are equal on a logarithmic scale, with a ratio equal to the 12th root of 2 (≈ 1.05946). The resulting smallest interval, 1/12 the width of an octave, is called a semitone or half step. In modern times, 12-TET is usually tuned relative to a standard pitch of 440 Hz, called A440, meaning one note, A, is tuned to 440 hertz and all other notes are defined as some multiple of semitones apart from it, either higher or lower in frequency. The standard pitch has not always been 440 Hz; it has varied and generally risen over the past few hundred years.
Other equal temperaments divide the octave differently. For example, some music has been written in 19-TET and 31-TET. Arabic music uses 24-TET as a notational convention. In Western countries the term equal temperament, without qualification, generally means 12-TET. To avoid ambiguity between equal temperaments that divide the octave and those that divide some other interval (or that use an arbitrary generator without first dividing a larger interval), the term equal division of the octave, or EDO is preferred for the former. According to this naming system, 12-TET is called 12-EDO, 31-TET is called 31-EDO, and so on.
An example of an equal temperament that finds its smallest interval by dividing an interval other than the octave into equal parts is the equal-tempered version of the Bohlen-Pierce scale, which divides the just interval of an octave and a fifth (ratio 3:1), called a "tritave" or a "pseudo-octave" in that system, into 13 equal parts.
Unfretted string ensembles, which can adjust the tuning of all notes except for open strings, and vocal groups, who have no mechanical tuning limitations, sometimes use a tuning much closer to just intonation for acoustic reasons. Other instruments, such as some wind, keyboard, and fretted instruments, often only approximate equal temperament, where technical limitations prevent exact tunings. Some wind instruments that can easily and spontaneously bend their tone, most notably trombones, use tuning similar to string ensembles and vocal groups.
The two figures frequently credited with the exact calculation of equal temperament are Zhu Zaiyu (also romanized as Chu-Tsaiyu) in 1584 and Simon Stevin in 1585. According to Fritz A. Kuttner, a critic of the theory, it is known that "Chu-Tsaiyu presented a highly precise, simple and ingenious method for arithmetic calculation of equal temperament mono-chords in 1584" and that "Simon Stevin offered a mathematical definition of equal temperament plus a somewhat less precise computation of the corresponding numerical values in 1585 or later." The developments occurred independently.
Kenneth Robinson attributes the invention of equal temperament to Zhu Zaiyu and provides textual quotations as evidence. Zhu Zaiyu is quoted as saying, in a text dating from 1584, "I have founded a new system. I establish one foot as the number from which the others are to be extracted, and using proportions I extract them. Altogether one has to find the exact figures for the pitch-pipers in twelve operations." Kuttner disagrees and remarks that this claim "cannot be considered correct without major qualifications." Kuttner proposes that neither Zhu Zaiyu nor Simon Stevin achieved equal temperament and that neither of the two should be treated as its inventor.
The origin of the Chinese pentatonic scale is traditionally ascribed to the mythical Ling Lun. Allegedly his writings discussed the equal division of the scale in the 27th century BC. However, evidence of the origins of writing in this period (the early Longshan) in China is limited to rudimentary inscriptions on oracle bones and pottery.
A complete set of bronze chime bells, among many musical instruments found in the tomb of the Marquis Yi of Zeng (early Warring States, c. 5th century BCE in the Chinese Bronze Age), covers 5 full 7 note octaves in the key of C Major, including 12 note semi-tones in the middle of the range.
An approximation for equal temperament was described by He Chengtian, a mathematician of Southern and Northern Dynasties around 400 AD. He came out with the earliest recorded approximate numerical sequence in relation to equal temperament in history: 900 849 802 758 715 677 638 601 570 536 509.5 479 450.
Zhu Zaiyu, a prince of the Ming court, spent thirty years on research based on the equal temperament idea originally postulated by his father. He described his new pitch theory in his Fusion of Music and Calendar, published in 1580. This was followed by the publication of a detailed account of the new theory of equal temperament, with a precise numerical specification for 12-TET, in his 5,000-page work Complete Compendium of Music and Pitch (Yuelü quan shu) in 1584. An extended account is also given by Joseph Needham. Zhu obtained his result mathematically by dividing the length of string and pipe successively by the twelfth root of 2, 2^(1/12) ≈ 1.059463, and for pipe length by , such that after twelve divisions (an octave) the length was divided by a factor of 2: (2^(1/12))^12 = 2.
Similarly, after 84 divisions (7 octaves) the length was divided by a factor of 128: (2^(1/12))^84 = 2^7 = 128.
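A short Python check of the arithmetic (the variable names are illustrative): dividing a length twelve times by the twelfth root of 2 halves it, and 84 such divisions divide it by 128.

```python
semitone = 2 ** (1 / 12)        # ~1.059463, the successive divisor
length = 1.0
for _ in range(12):
    length /= semitone
print(length)                    # ~0.5: after one octave the length is halved
print((1 / semitone) ** 84)      # ~0.0078125 = 1/128: seven octaves
```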
According to Gene Cho, Zhu Zaiyu was the first person to solve the equal temperament problem mathematically. Matteo Ricci, a Jesuit in China, recorded this work in his personal journal and very likely brought it back to the West. In 1620, Zhu's work was referenced by a European mathematician. Murray Barbour said, "The first known appearance in print of the correct figures for equal temperament was in China, where Prince Tsaiyü's brilliant solution remains an enigma." The 19th-century German physicist Hermann von Helmholtz wrote in On the Sensations of Tone that a Chinese prince introduced a scale of seven notes, and that the division of the octave into twelve semitones was discovered in China.
Zhu Zaiyu illustrated his equal temperament theory by the construction of a set of 36 bamboo tuning pipes ranging in 3 octaves, with instructions of the type of bamboo, color of paint, and detailed specification on their length and inner and outer diameters. He also constructed a 12-string tuning instrument, with a set of tuning pitch pipes hidden inside its bottom cavity. In 1890, Victor-Charles Mahillon, curator of the Conservatoire museum in Brussels, duplicated a set of pitch pipes according to Zhu Zaiyu's specification. He said that the Chinese theory of tones knew more about the length of pitch pipes than its Western counterpart, and that the set of pipes duplicated according to the Zaiyu data proved the accuracy of this theory.
One of the earliest discussions of equal temperament occurs in the writing of Aristoxenus in the 4th century BC.
Vincenzo Galilei (father of Galileo Galilei) was one of the first practical advocates of twelve-tone equal temperament. He composed a set of dance suites on each of the 12 notes of the chromatic scale in all the "transposition keys", and published also, in his 1584 "Fronimo", 24 + 1 ricercars. He used the 18:17 ratio for fretting the lute (although some adjustment was necessary for pure octaves).
Galilei's countryman and fellow lutenist Giacomo Gorzanis had written music based on equal temperament by 1567. Gorzanis was not the only lutenist to explore all modes or keys: Francesco Spinacino wrote a "Recercare de tutti li Toni" (Ricercar in all the Tones) as early as 1507. In the 17th century lutenist-composer John Wilson wrote a set of 30 preludes including 24 in all the major/minor keys.
Henricus Grammateus drew a close approximation to equal temperament in 1518. The first tuning rules in equal temperament were given by Giovani Maria Lanfranco in his "Scintille de musica". Zarlino, in his polemic with Galilei, initially opposed equal temperament but eventually conceded to it in relation to the lute in his Sopplimenti musicali in 1588.
The first mention of equal temperament related to the twelfth root of two in the West appeared in Simon Stevin's manuscript Van De Spiegheling der singconst (ca. 1605), published posthumously nearly three centuries later in 1884. However, due to insufficient accuracy in his calculation, many of the chord length numbers he obtained were off by one or two units from the correct values. As a result, Simon Stevin's chords do not have one unified ratio but rather one ratio per tone, which Gene Cho claims is incorrect.
The following are Simon Stevin's chord lengths from Van de Spiegheling der singconst:
| Tone | Chord 10000 from Simon Stevin | Ratio | Corrected chord |
| tone and a half | 8404 | 1.0600904 | 8409 |
| ditone and a half | 7491 | 1.0594046 | 7491.5 |
| tritone and a half | 6674 | 1.0594845 | 6674.2 |
| four-tone and a half | 5944 | 1.0595558 | 5946 |
| five-tone and a half | 5296 | 1.0594788 | 5297.2 |
From 1450 to about 1800, plucked instrument players (lutenists and guitarists) generally favored equal temperament, and the Brossard lute manuscript compiled in the last quarter of the 17th century contains a series of 18 preludes attributed to Bocquet written in all keys, including the last prelude, entitled Prelude sur tous les tons, which enharmonically modulates through all keys. Angelo Michele Bartolotti published a series of passacaglias in all keys, with connecting enharmonically modulating passages. Among 17th-century keyboard composers, Girolamo Frescobaldi advocated equal temperament. Some theorists, such as Giuseppe Tartini, were opposed to the adoption of equal temperament; they felt that degrading the purity of each chord degraded the aesthetic appeal of music, although Andreas Werckmeister emphatically advocated equal temperament in his 1707 treatise published posthumously.
J. S. Bach wrote The Well-Tempered Clavier to demonstrate the musical possibilities of well temperament, where in some keys the consonances are even more degraded than in equal temperament. It is possible that when composers and theoreticians of earlier times wrote of the moods and "colors" of the keys, they each described the subtly different dissonances made available within a particular tuning method. However, it is difficult to determine with any exactness the actual tunings used in different places at different times by any composer. (Correspondingly, there is a great deal of variety in the particular opinions of composers about the moods and colors of particular keys.)
Twelve-tone equal temperament took hold for a variety of reasons. It was a convenient fit for the existing keyboard design, and permitted total harmonic freedom at the expense of just a little impurity in every interval. This allowed greater expression through enharmonic modulation, which became extremely important in the 18th century in music of such composers as Francesco Geminiani, Wilhelm Friedemann Bach, Carl Philipp Emmanuel Bach and Johann Gottfried Müthel.
The progress of equal temperament from the mid-18th century on is described with detail in quite a few modern scholarly publications: it was already the temperament of choice during the Classical era (second half of the 18th century), and it became standard during the Early Romantic era (first decade of the 19th century), except for organs that switched to it more gradually, completing only in the second decade of the 19th century. (In England, some cathedral organists and choirmasters held out against it even after that date; Samuel Sebastian Wesley, for instance, opposed it all along. He died in 1876.)
A precise equal temperament is possible using the 17th-century Sabbatini method of splitting the octave first into three tempered major thirds. This was also proposed by several writers during the Classical era. Tuning without beat rates but employing several checks, achieving virtually modern accuracy, was already done in the first decades of the 19th century. Using beat rates, first proposed in 1749, became common after their diffusion by Helmholtz and Ellis in the second half of the 19th century. The ultimate precision was available with 2-decimal tables published by White in 1917.
It is in the environment of equal temperament that the new styles of symmetrical tonality and polytonality, atonal music such as that written with the twelve tone technique or serialism, and jazz (at least its piano component) developed and flourished.
In an equal temperament, the distance between two adjacent steps of the scale is the same interval. Because the perceived identity of an interval depends on its ratio, this scale in even steps is a geometric sequence of multiplications. (An arithmetic sequence of intervals would not sound evenly spaced and would not permit transposition to different keys.) Specifically, the smallest interval in an equal-tempered scale is the ratio r = p^(1/n), where the interval of ratio p (typically the octave, 2:1) is divided into n equal steps.
Scales are often measured in cents, which divide the octave into 1200 equal intervals (each called a cent). This logarithmic scale makes comparison of different tuning systems easier than comparing ratios, and has considerable use in ethnomusicology. The basic step in cents for any equal temperament can be found by taking the width of p above in cents (usually the octave, which is 1200 cents wide), called below w, and dividing it into n parts: c = w / n.
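A minimal Python sketch of both quantities, the step ratio p^(1/n) and the step size w/n in cents (the function name is assumed for illustration):

```python
def edo_step(divisions, interval_ratio=2.0, interval_cents=1200.0):
    """Smallest step of an equal division of an interval (the octave by default):
    its frequency ratio p**(1/n) and its width w/n in cents."""
    ratio = interval_ratio ** (1 / divisions)
    cents = interval_cents / divisions
    return ratio, cents

print(edo_step(12))   # (1.059463..., 100.0)
print(edo_step(19))   # (~1.03716, ~63.16)
print(edo_step(31))   # (~1.02261, ~38.71)
```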
In musical analysis, material belonging to an equal temperament is often given an integer notation, meaning a single integer is used to represent each pitch. This simplifies and generalizes discussion of pitch material within the temperament in the same way that taking the logarithm of a multiplication reduces it to addition. Furthermore, by applying the modular arithmetic where the modulus is the number of divisions of the octave (usually 12), these integers can be reduced to pitch classes, which removes the distinction (or acknowledges the similarity) between pitches of the same name, e.g. c is 0 regardless of octave register. The MIDI encoding standard uses integer note designations.
In twelve-tone equal temperament, which divides the octave into 12 equal parts, the width of a semitone, i.e. the frequency ratio of the interval between two adjacent notes, is the twelfth root of two: 2^(1/12) ≈ 1.059463.
This interval is divided into 100 cents.
The frequency ratio of an interval spanning k semitones can also be calculated as 2^(k/12).
To find the frequency, Pn, of a note in 12-TET, the following definition may be used: Pn = Pa × 2^((n − a)/12).
In this formula Pn refers to the pitch, or frequency (usually in hertz), you are trying to find. Pa refers to the frequency of a reference pitch (usually 440 Hz). n and a refer to numbers assigned to the desired pitch and the reference pitch, respectively. These two numbers are from a list of consecutive integers assigned to consecutive semitones. For example, A4 (the reference pitch) is the 49th key from the left end of a piano (tuned to 440 Hz), and C4 (middle C) is the 40th key. These numbers can be used to find the frequency of C4: P40 = 440 × 2^((40 − 49)/12) ≈ 261.63 Hz.
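A small Python sketch of this formula, using the piano key numbers from the example (the function name is illustrative):

```python
def pitch_12tet(n, a=49, pa_hz=440.0):
    """Frequency of piano key n in 12-TET, relative to key a (A4 = key 49, 440 Hz)."""
    return pa_hz * 2 ** ((n - a) / 12)

print(pitch_12tet(40))   # C4 (middle C), ~261.63 Hz
print(pitch_12tet(49))   # A4, 440.0 Hz
```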
| Date | Name | Ratio | Cents |
| 1580 | Vincenzo Galilei | 18:17 [1.058823529] | 99.0 |
Reference: date, name, ratio, and cents are from the equal temperament monochord tables, pp. 55–78, in J. Murray Barbour, Tuning and Temperament, Michigan State University Press, 1951.
The intervals of 12-TET closely approximate some intervals in just intonation. The fifths and fourths are almost indistinguishably close to just, while thirds and sixths are further away.
In the following table the sizes of various just intervals are compared against their equal-tempered counterparts, given as a ratio as well as cents.
| Name | Exact value in 12-TET | Decimal value in 12-TET | Cents | Just intonation interval | Cents in just intonation | Difference |
| Unison (C) | 2^(0/12) = 1 | 1 | 0 | 1/1 = 1 | 0 | 0 |
| Minor second (C♯/D♭) | 2^(1/12) | 1.059463 | 100 | 16/15 = 1.06666... | 111.73 | -11.73 |
| Major second (D) | 2^(2/12) | 1.122462 | 200 | 9/8 = 1.125 | 203.91 | -3.91 |
| Minor third (D♯/E♭) | 2^(3/12) | 1.189207 | 300 | 6/5 = 1.2 | 315.64 | -15.64 |
| Major third (E) | 2^(4/12) | 1.259921 | 400 | 5/4 = 1.25 | 386.31 | +13.69 |
| Perfect fourth (F) | 2^(5/12) | 1.334840 | 500 | 4/3 = 1.33333... | 498.04 | +1.96 |
| Tritone (F♯/G♭) | 2^(6/12) | 1.414214 | 600 | 7/5 = 1.4 | 582.51 | +17.49 |
| Perfect fifth (G) | 2^(7/12) | 1.498307 | 700 | 3/2 = 1.5 | 701.96 | -1.96 |
| Minor sixth (G♯/A♭) | 2^(8/12) | 1.587401 | 800 | 8/5 = 1.6 | 813.69 | -13.69 |
| Major sixth (A) | 2^(9/12) | 1.681793 | 900 | 5/3 = 1.66666... | 884.36 | +15.64 |
| Minor seventh (A♯/B♭) | 2^(10/12) | 1.781797 | 1000 | 16/9 = 1.77777... | 996.09 | +3.91 |
| Major seventh (B) | 2^(11/12) | 1.887749 | 1100 | 15/8 = 1.875 | 1088.27 | +11.73 |
| Octave (C) | 2^(12/12) = 2 | 2 | 1200 | 2/1 = 2 | 1200.00 | 0 |
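The differences in the last column can be reproduced with a few lines of Python (the small dictionary of just ratios is assembled here only for illustration):

```python
import math

def cents(ratio):
    return 1200 * math.log2(ratio)

just = {"major third": 5 / 4, "perfect fourth": 4 / 3, "perfect fifth": 3 / 2}
for name, ratio in just.items():
    steps = round(cents(ratio) / 100)       # nearest 12-TET degree
    diff = steps * 100 - cents(ratio)       # equal-tempered minus just, in cents
    print(f"{name}: 12-TET differs from just by {diff:+.2f} cents")
```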
Violins, violas and cellos are tuned in perfect fifths (G–D–A–E for violins, and C–G–D–A for violas and cellos), which suggests that their semitone ratio is slightly higher than in conventional twelve-tone equal temperament. Because a perfect fifth is in a 3:2 relation with its base tone, and this interval is covered in 7 steps, each tone is in the ratio of (3/2)^(1/7) to the next (about 100.28 cents), which provides for a perfect fifth with a ratio of 3:2 but a slightly widened octave with a ratio of ≈ 517:258 or ≈ 2.00388:1 rather than the usual 2:1 ratio, because twelve perfect fifths do not equal seven octaves. During actual play, however, the violinist chooses pitches by ear, and only the four unstopped pitches of the strings are guaranteed to exhibit this 3:2 ratio.
A Thai xylophone measured by Morton (1974) "varied only plus or minus 5 cents" from 7-TET. According to Morton, "Thai instruments of fixed pitch are tuned to an equidistant system of seven pitches per octave ... As in Western traditional music, however, all pitches of the tuning system are not used in one mode (often referred to as 'scale'); in the Thai system five of the seven are used in principal pitches in any mode, thus establishing a pattern of nonequidistant intervals for the mode."
Indonesian gamelans are tuned to 5-TET according to Kunst (1949), but according to Hood (1966) and McPhee (1966) their tuning varies widely, and according to Tenzer (2000) they contain stretched octaves. It is now well accepted that of the two primary tuning systems in gamelan music, slendro and pelog, only slendro somewhat resembles five-tone equal temperament, while pelog is highly unequal; however, Surjodiningrat et al. (1972) have analyzed pelog as a seven-note subset of nine-tone equal temperament (133-cent steps).
A South American Indian scale from a pre-instrumental culture measured by Boiles (1969) featured 175-cent seven-tone equal temperament, which stretches the octave slightly as with instrumental gamelan music.
24-EDO, the quarter tone scale (or 24-TET), was a popular microtonal tuning in the 20th century probably because it represented a convenient access point for composers conditioned on standard Western 12-EDO pitch and notation practices who were also interested in microtonality. Because 24-EDO contains all of the pitches of 12-EDO plus new pitches halfway between each adjacent pair of 12-EDO pitches, they could employ the additional colors without losing any tactics available in 12-tone harmony. The fact that 24 is a multiple of 12 also made 24-EDO easy to achieve instrumentally by employing two traditional 12-EDO instruments purposely tuned a quarter-tone apart, such as two pianos, which also allowed each performer (or one performer playing a different piano with each hand) to read familiar 12-tone notation. Various composers including Charles Ives experimented with music for quarter-tone pianos.
29-TET is the lowest number of equal divisions of the octave that produces a better perfect fifth than 12-TET. Its major third is roughly as inaccurate as 12-TET; however, it is tuned 14 cents flat rather than 14 cents sharp. It tunes the 7th, 11th, and 13th harmonics flat as well, by roughly the same amount. This means intervals such as 7:5, 11:7, 13:11, etc., are all matched extremely well in 29-TET.
31 tone equal temperament was advocated by Christiaan Huygens and Adriaan Fokker. 31-TET has a slightly less accurate fifth than 12-TET, but provides near-just major thirds, and provides decent matches for harmonics up to at least 13, of which the seventh harmonic is particularly accurate.
34 EDO gives slightly less total combined errors of approximation to the 5-limit just ratios 3:2, 5:4, 6:5, and their inversions than 31 EDO does, although the approximation of 5:4 is worse. 34 EDO doesn't approximate ratios involving prime 7 well. It contains a 600-cent tritone, since it is an even-numbered EDO.
41-TET is the second lowest number of equal divisions that produces a better perfect fifth than 12-TET. Its major third is more accurate than 12-ET and 29-ET, about 6 cents flat.
53-TET is better at approximating the traditional just consonances than 12, 19 or 31-TET, but has had only occasional use. Its extremely good perfect fifths make it interchangeable with an extended Pythagorean tuning, but it also accommodates schismatic temperament, and is sometimes used in Turkish music theory. It does not, however, fit the requirements of meantone temperaments, which put good thirds within easy reach via the cycle of fifths. In 53-TET the very consonant thirds would be reached instead by strange enharmonic relationships like C-F♭, as it is an example of schismatic temperament. A consequence of this is that chord progressions like I-vi-ii-V-I won't land you back where you started in 53-TET, but rather one 53-tone step flat (unless the motion by I-vi wasn't by the 5-limit minor third).
72-TET approximates many just intonation intervals well, even into the 7-limit and 11-limit, such as 7:4, 9:7, 11:5, 11:6 and 11:7. 72-TET has been taught, written and performed in practice by Joe Maneri and his students (whose atonal inclinations typically avoid any reference to just intonation whatsoever). It can be considered an extension of 12 EDO because 72 is a multiple of 12. 72 EDO has a smallest interval that is six times smaller than the smallest interval of 12 EDO and therefore contains six copies of 12 EDO starting on different pitches. It also contains three copies of 24 EDO and two copies of 36 EDO, which are themselves multiples of 12 EDO.
2, 5, 12, 41, 53, 306, 665 and 15601 are the denominators of the first convergents of log2(3), so in the corresponding equal temperaments that number of tempered twelfths (and fifths) equals an integer number of octaves, and those tempered twelfths and fifths approximate the just intervals more closely than those of any equal temperament with fewer tones.
1, 2, 3, 5, 7, 12, 29, 41, 53, 200... (sequence in the OEIS) is the sequence of divisions of octave that provide better and better approximations of the perfect fifth. Related sequences contain divisions approximating other just intervals.
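As a rough illustration (a sketch added here, not part of the sourced material), the denominators mentioned above can be reproduced by computing the continued-fraction convergents of log2(3) in Python:

from math import log2
from fractions import Fraction

def convergents(x, terms=9):
    """Yield the first continued-fraction convergents of x as Fractions."""
    h0, k0, h1, k1 = 0, 1, 1, 0              # standard recurrence seeds
    for _ in range(terms):
        a = int(x)
        h0, k0, h1, k1 = h1, k1, a * h1 + h0, a * k1 + k0
        yield Fraction(h1, k1)
        if x == a:
            break
        x = 1 / (x - a)

# Denominators come out as 1, 1, 2, 5, 12, 41, 53, 306, 665, ...: dividing the
# octave into that many parts makes a whole number of twelfths fit the octaves.
for c in convergents(log2(3)):
    print(c.numerator, "octaves ~", c.denominator, "twelfths ->", c.denominator, "-tone equal temperament")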
Frequencies, approximate cent values, and MIDI pitch bend values can be calculated for any system of equal division of the octave directly from the step number and the number of divisions.
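A minimal Python sketch of such a calculation follows; the 440 Hz reference pitch and the plus-or-minus 2-semitone MIDI pitch-bend range are assumptions made for the example, not part of the theory:

from math import log2

def edo_table(n, base_freq=440.0, bend_range=2.0):
    """Print frequency, cents and a 14-bit MIDI pitch-bend value for each
    step of one octave of n-EDO.  base_freq (here A4 = 440 Hz) and the
    +/-2-semitone bend range are assumptions, not fixed by the theory."""
    for step in range(n + 1):
        cents = 1200.0 * step / n
        freq = base_freq * 2 ** (step / n)
        semis = 12.0 * step / n            # position measured in 12-EDO semitones
        nearest = round(semis)             # nearest conventional 12-EDO note
        bend = 8192 + round((semis - nearest) / bend_range * 8192)   # 8192 = centre
        print("step %2d: %8.2f cents  %9.3f Hz  note %+3d  bend %5d"
              % (step, cents, freq, nearest, bend))

edo_table(19)   # e.g. the 19-tone equal temperament discussed in this article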
The equal-tempered version of the Bohlen-Pierce scale consists of the ratio 3:1, 1902 cents, conventionally a perfect fifth plus an octave (that is, a perfect twelfth), called in this theory a tritave, and split into thirteen equal parts. This provides a very close match to justly tuned ratios consisting only of odd numbers. Each step is 146.3 cents, or a frequency ratio of the thirteenth root of 3.
Wendy Carlos created three unusual equal temperaments after a thorough study of the properties of possible temperaments having a step size between 30 and 120 cents. These were called alpha, beta, and gamma. They can be considered as equal divisions of the perfect fifth. Each of them provides a very good approximation of several just intervals. Their step sizes are approximately 78.0 cents (alpha), 63.8 cents (beta), and 35.1 cents (gamma).
Alpha and Beta may be heard on the title track of her 1986 album Beauty in the Beast.
In this section, semitone and whole tone may not have their usual 12-EDO meanings, as it discusses how they may be tempered in different ways from their just versions to produce desired relationships. Let the number of steps in a semitone be s, and the number of steps in a tone be t.
There is exactly one family of equal temperaments that fixes the semitone to any proper fraction of a whole tone, while keeping the notes in the right order (meaning that, for example, C, D, E, F, and F♯ are in ascending order if they preserve their usual relationships to C). That is, fixing q to a proper fraction in the relationship qt = s also defines a unique family of one equal temperament and its multiples that fulfil this relationship.
For example, where k is an integer, 12k-EDO sets q = 1/2, and 19k-EDO sets q = 2/3. The smallest multiples in these families (e.g. 12 and 19 above) have the additional property of having no notes outside the circle of fifths. (This is not true in general; in 24-EDO, the half-sharps and half-flats are not in the circle of fifths generated starting from C.) The extreme cases are 5k-EDO, where q = 0 and the semitone becomes a unison, and 7k-EDO, where q = 1 and the semitone and tone are the same interval.
Once one knows how many steps a semitone and a tone are in this equal temperament, one can find the number of steps it has in the octave. An equal temperament fulfilling the above properties (including having no notes outside the circle of fifths) divides the octave into 7t - 2s steps, and the perfect fifth into 4t - s steps. If there are notes outside the circle of fifths, one must then multiply these results by n, which is the number of nonoverlapping circles of fifths required to generate all the notes (e.g. two in 24-EDO, six in 72-EDO). (One must take the small semitone for this purpose: 19-EDO has two semitones, one being 1/3 of a tone and the other being 2/3 of a tone.)
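These relationships are easy to check numerically; the following short Python sketch (an illustrative aid, with values taken from the examples above) applies them to 12-, 19- and 24-EDO:

def edo_from_steps(t, s, n=1):
    """Steps per octave and per perfect fifth for a temperament with a
    t-step tone and an s-step (small) semitone, using the 7t - 2s and
    4t - s relationships; n is the number of non-overlapping circles of
    fifths (1 for 12-EDO and 19-EDO, 2 for 24-EDO, 6 for 72-EDO)."""
    return n * (7 * t - 2 * s), n * (4 * t - s)

print(edo_from_steps(2, 1))        # (12, 7)  -> 12-EDO, fifth = 7 steps
print(edo_from_steps(3, 1))        # (19, 11) -> 19-EDO, fifth = 11 steps
print(edo_from_steps(2, 1, n=2))   # (24, 14) -> 24-EDO, fifth = 14 steps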
The smallest of these families is 12k-EDO, and in particular, 12-EDO is the smallest equal temperament that has the above properties. Additionally, it also makes the semitone exactly half a whole tone, the simplest possible relationship. These are some of the reasons why 12-EDO has become the most commonly used equal temperament. (Another reason is that 12-EDO is the smallest equal temperament to closely approximate 5-limit harmony, the next-smallest being 19-EDO.)
Each choice of fraction q for the relationship results in exactly one equal temperament family, but the converse is not true: 47-EDO has two different semitones, where one is 6/7 of a tone and the other is 1/9 of a tone, and these are not complements of each other like the 1/3 and 2/3 of 19-EDO. Taking each semitone results in a different choice of perfect fifth.
The diatonic tuning in twelve equal can be generalized to any regular diatonic tuning dividing the octave as a sequence of steps TTSTTTS (or a rotation of it) with all the T's and all the S's the same size and the S's smaller than the T's. In twelve equal the S is the semitone and is exactly half the size of the tone T. When the S's reduce to zero the result is TTTTT, a five-tone equal temperament. As the semitones get larger, eventually the steps are all the same size, and the result is seven-tone equal temperament. These two endpoints are not included as regular diatonic tunings.
The notes in a regular diatonic tuning are connected together by a cycle of seven tempered fifths. The twelve-tone system similarly generalizes to a sequence CDCDDCDCDCDD (or a rotation of it) of chromatic and diatonic semitones connected together in a cycle of twelve fifths. In this case, seven equal is obtained in the limit as the size of C tends to zero and five equal is the limit as D tends to zero while twelve equal is of course the case C = D.
Some of the intermediate sizes of tones and semitones can also be generated in equal temperament systems. For instance if the diatonic semitone is double the size of the chromatic semitone, i.e. D = 2*C the result is nineteen equal with one step for the chromatic semitone, two steps for the diatonic semitone and three steps for the tone and the total number of steps 5*T + 2*S = 15 + 4 = 19 steps. The resulting twelve-tone system closely approximates to the historically important 1/3 comma meantone.
If the chromatic semitone is two-thirds of the size of the diatonic semitone, i.e. C = (2/3)*D, the result is thirty one equal, with two steps for the chromatic semitone, three steps for the diatonic semitone, and five steps for the tone where 5*T + 2*S = 25 + 6 = 31 steps. The resulting twelve-tone system closely approximates to the historically important 1/4 comma meantone.
Seven-tone equal temperament ('hepta-equal temperament') in Chinese folk music has long been a controversial issue.
Judging from the two-thousand-year history of flute making, and from Japanese shakuhachi preserved from the Sui and Tang dynasties together with their measured tunings, the so-called 'seven equal steps' appear to have been in use for at least two thousand years, and this tuning system appears to be closely tied to the way the flute is made. |
Working with Shell Arithmetic and Boolean Operators in Unix:
In this tutorial, we will review the various operators that are supported by the Unix shell.
Operators are used for manipulating variables and constants in shell programs. They are required to perform mathematical operations.
Here, we will explain more about working with arithmetic operators.
Note that the back-tick (`) is often used here – when executing a command, everything between the back-ticks is executed and substituted with the result before the remainder of the command is executed.
In newer shells (e.g. bash), the same result can be achieved by embedding the expression between '$(' and ')'.
Operators in Unix
#1) Shell Arithmetic Operators Example
These consist of the basic mathematical operations:
- Addition: +
- Subtraction: –
- Multiplication: *
- Division: /
- Modulus: %
Each of these operators performs the operation on two integer variables or constants.
For example, the program below illustrates each of these operations:
$ a=10
$ b=3
$ c=`expr $a + $b`
$ echo "the value of addition=$c"
$ d=`expr $a - $b`
$ echo "the value of subtraction=$d"
$ e=`expr $a \* $b`
$ echo "the value of multiplication=$e"
$ f=`expr $a / $b`
$ echo "the value of division=$f"
$ g=`expr $a % $b`
$ echo "the value of modulus=$g"
The Unix shell does not natively support floating point operations. A separate command line tool must be used for this. The 'bc' command is the most standard tool for this.
$ c=`echo "$a + $b" | bc`
$ d=`echo "scale=2; $a / $b" | bc`
Note that when using expr, each of the operators needs to be surrounded by a space on both sides, and the '*' operator needs to be escaped with a backslash '\'.
#2) Shell Logical Boolean Operators Example
The logical operators in Unix are as follows:
- Not: !
- And: -a
- Or: -o
These operators and their usage will be covered in detail in the next tutorial. |
What are Asteroids?
Asteroids are rocky, airless worlds that orbit our sun, but are too small to be called planets. Tens of thousands of these “minor planets” are gathered in the main asteroid belt, a vast doughnut-shaped ring between the orbits of Mars and Jupiter. Asteroids that pass close to Earth are called Near-Earth Objects (NEOs).
Asteroids, sometimes called minor planets, are small, rocky fragments left over from the formation of our solar system about 4.6 billion years ago. Most of this ancient space rubble can be found orbiting the sun between Mars and Jupiter. Asteroids range in size from Ceres, about 952 km (592 miles) in diameter, to bodies that are less than 1 km (0.6 mile) across. The total mass of all the asteroids is less than that of Earth’s Moon. Even with more than one-half million asteroids known (and there are probably many more), they are still much more widely separated than sometimes seen in Hollywood movies: on average, their separation is in excess of 1-3 million km (depending on how one calculates it).
Early in the history of the solar system, the formation of Jupiter brought an end to the formation of planetary bodies in the gap between Mars and Jupiter and caused the small bodies that occupied this region to collide with one another, fragmenting them into the asteroids we observe today. This region, called the asteroid belt or simply the main belt, may contain millions of asteroids. Because asteroids have remained mostly unchanged for billions of years, studies of them could tell us a great deal about the early solar system.
Nearly all asteroids are irregularly shaped, though a few are nearly spherical, and are often pitted or cratered. As they revolve around the sun in elliptical orbits, the asteroids also rotate, sometimes quite erratically, tumbling as they go. More than 150 asteroids are known to have a small companion moon (some have two moons). There are also binary (double) asteroids, in which two rocky bodies of roughly equal size orbit each other, as well as triple asteroid systems.
The three broad composition classes of asteroids are C-, S- and M-types. The C-type asteroids (carbonaceous) are most common. They probably consist of clay and silicate rocks and are dark in appearance. C-type asteroids are among the most ancient objects in our solar system. The S-types (silicaceous) are made up of silicate (stony) materials and nickel-iron. M-types (metallic) are made up of nickel-iron. The asteroids’ compositional differences are related to how far from the sun they formed. Some experienced high temperatures after they formed and partly melted, with iron sinking to the center and forcing basaltic (volcanic) lava to the surface. One such asteroid, Vesta, survives to this day.
Jupiter’s massive gravity and occasional close encounters with Mars or another object changed the asteroids’ orbits, knocking them out of the main belt and hurling them into space in both directions towards or away from the sun, across the orbits of the planets. Stray asteroids and asteroid fragments have slammed into Earth and the other planets in the past, playing a major role in altering the geological history of the planets and in the evolution of life on Earth.
Scientists monitor asteroids whose paths intersect Earth’s orbit. These are Near Earth Objects (NEOs) that may pose an impact danger. Besides optical observations, radar is a valuable tool in detecting and monitoring potential impact hazards. By bouncing transmitted signals off objects, images and information can be derived from the echoes, such as the asteroid’s orbit, rotation, size, shape, and metal concentration.
The U.S. is the most active and successful country operating a survey and detection program for discovering NEOs.
NASA space missions have flown by and observed asteroids. The Galileo spacecraft flew by asteroids Gaspra in 1991 and Ida in 1993; the NEAR-Shoemaker mission studied asteroids Mathilde and Eros; and Deep Space 1 and Stardust both had close encounters with asteroids.
In 2005, the Japanese spacecraft Hayabusa landed on the near-Earth asteroid Itokawa in order to collect samples. Hayabusa returned to Earth in June 2010, and the tiny asteroid particles collected in the capsule are currently being examined. Hayabusa was the first spacecraft to successfully land, take off and collect samples from the surface of an asteroid.
NASA’s Dawn mission (launched September 2007) is on a 3-billion-km (1.7-billion-mile) journey to the asteroid belt, and is planned to orbit the asteroids Vesta and Ceres. Vesta and Ceres are sometimes called baby planets — their growth was interrupted by the formation of Jupiter, and they followed different evolutionary paths. Scientists hope to characterize the conditions and processes of the solar system’s earliest epoch by studying these two very different large asteroids.
What are Comets?
Comets are cosmic snowballs of frozen gases, rock and dust roughly the size of a small town. When a comet’s orbit brings it close to the sun, it heats up and spews dust and gases into a giant glowing head larger than most planets. The dust and gases form a tail that stretches away from the sun for millions of kilometers.
In the distant past, people were both awed and alarmed by comets, perceiving them as “long-haired” stars that appeared unpredictably and unannounced in the sky. To some ancient observers, an elongated comet looked like a fiery sword blazing across the night sky. Chinese astronomers kept extensive records for centuries, including illustrations of characteristic types of comet tails. They recorded the times of cometary appearances and disappearances in addition to celestial positions. These historic comet annals have proven to be a valuable resource for later astronomers.
We now know that comets are leftovers from the dawn of the solar system around 4.6 billion years ago, and consist mostly of ice coated with dark organic material. They have been referred to as dirty snowballs. They may yield important clues about the formation of our solar system. Comets may have brought water and organic compounds, the building blocks of life, to the early Earth and other parts of the solar system.
Each comet has a tiny frozen part, called a nucleus, often no bigger than a few kilometers across. The nucleus contains icy chunks and frozen gases with bits of embedded rock and dust. The nucleus may have a small rocky core.
As theorized by astronomer Gerard Kuiper in 1951, a disc-like belt of icy bodies exists just beyond Neptune, where a population of dark comets orbits the sun in the realm of Pluto. These icy objects, occasionally pushed by gravity into orbits bringing them closer to the sun, become the so-called short-period comets. They take less than 200 years to orbit the sun, and in many cases their appearance is predictable because they have passed by before.
Less predictable are long-period comets, many of which arrive from a region called the Oort Cloud about 100,000 astronomical units (AU) (that is, 100,000 times the distance between Earth and the sun) from the sun. These Oort Cloud comets can take as long as 30 million years to complete one trip around the sun.
A comet warms up as it nears the sun and develops an atmosphere, or coma. The sun’s heat causes ices on the nucleus surface to change to gases so that the coma gets larger. The coma may be hundreds of thousands of kilometers in diameter. The pressure of sunlight and high-speed solar particles (solar wind) blows the coma materials away from the sun, forming a long, and sometimes bright, tail. Comets actually have two tails — a dust tail and a plasma (ionized gas) tail.
Most comets travel a safe distance from the sun — comet Halley comes no closer than 89 million km (55 million miles). However, some comets, called sun-grazers, crash straight into the sun or get so close that they break up and evaporate.
Scientists have long wanted to study comets in some detail, tantalized by the few 1986 images of comet Halley’s nucleus from the Giotto mission. NASA’s Deep Space 1 spacecraft flew by comet Borrelly in 2001 and photographed its nucleus, which is about 8 km (5 miles) long.
NASA’s Stardust mission successfully flew within 236 km (147 miles) of the nucleus of Comet Wild 2 in January 2004, collecting cometary particles and interstellar dust for a sample return to Earth in 2006. The photographs taken during this close flyby of a comet nucleus show jets of dust and a rugged, textured surface. Analysis of the Stardust samples suggests that comets may be more complex than originally thought. Minerals that formed near the sun or other stars were found in the samples, suggesting that materials from the inner regions of the solar system traveled to the outer regions where comets formed.
Another NASA mission, called Deep Impact, consisted of a flyby spacecraft and an impactor. In July 2005, the impactor was released into the path of the nucleus of comet Tempel 1 in a planned collision, which vaporized the impactor and ejected massive amounts of fine, powdery material from beneath the comet’s surface. En route to impact, the impactor camera imaged the comet in increasing detail. Two cameras and a spectrometer on the flyby spacecraft recorded the dramatic excavation that revealed the interior composition and structure of the nucleus.
The Deep Impact spacecraft and the Stardust spacecraft are healthy and have been retargeted. Deep Impact’s mission, EPOXI (Extrasolar Planet Observation and Deep Impact Extended Investigation), comprises two projects: the Deep Impact Extended Investigation (DIXI) will encounter comet Hartley 2 in 2010 and the Extrasolar Planet Observation and Characterization (EPOCh) investigation will search for Earth-size planets around other stars. NASA returns to comet Tempel 1 in 2011, when the Stardust New Exploration of Tempel 1 (NExT) mission will observe changes since Deep Impact’s 2005 encounter. |
A transmitter is the electronic unit that accepts the information signal to be transmitted and converts it to an RF signal capable of being transmitted over long distances. Every transmitter has four basic requirements.
1. It must generate a carrier signal of the correct frequency at a desired point in the spectrum.
2. It must provide some form of modulation that causes the information signal to modify the carrier signal.
3. It must provide sufficient power amplification to ensure that the signal level is high enough to carry over the desired distance.
4. It must provide circuits that match the impedance of the power amplifier to that of the antenna for maximum transfer of power.
The simplest transmitter is a single-transistor oscillator connected directly to an antenna. The oscillator generates the carrier and can be switched off and on by a telegraph key to produce the dots and dashes of the International Morse code. Information transmitted in this way is referred to as continuous-wave (CW) transmission. Such a transmitter is rarely used today, for the Morse code is nearly extinct and the oscillator power is too low for reliable communication. Nowadays transmitters such as this are built only by amateur (ham) radio operators for what is called QRP or low-power operation for personal hobby communication.
The CW transmitter can be greatly improved by simply adding a power amplifier to it, as illustrated in Fig. 8-1. The oscillator is still keyed off and on to produce dots and dashes, and the amplifier increases the power level of the signal. The result is a stronger signal that carries farther and produces more reliable transmission. The basic oscillator-amplifier combination shown in Fig. 8-1 is the basis for virtually all radio transmitters. Many other circuits are added depending on the type of modulation used, the power level, and other considerations.
Fig. 8-2 shows an AM transmitter using high-level modulation. An oscillator, in most applications a crystal oscillator, generates the final carrier frequency. The carrier signal is then fed to a buffer amplifier whose primary purpose is to isolate the oscillator from the remaining power amplifier stages. The buffer amplifier usually operates at the class A level and provides a modest increase in power output. The main purpose of the buffer amplifier is simply to prevent load changes in the power amplifier stages or in the antenna from causing frequency variations in the oscillator.
The signal from the buffer amplifier is applied to a class C driver amplifier designed to provide an intermediate level of power amplification. The purpose of this circuit is to generate sufficient output power to drive the final power amplifier stage. The final power amplifier, normally just referred to as the final, also operates at the class C level at very high power. The actual amount of power depends on the application. For example, in a CB transmitter, the power input is only 5 W. However, AM radio stations operate at much higher powers—say, 250, 500, 1000, 5000, or 50,000 W—and the video transmitter at a TV station operates at even higher power levels. Cell phone base stations operate at the 30- to 40-W level.
All the RF circuits in the transmitter are usually solid-state; i.e., they are implemented with either bipolar transistors or metal-oxide semiconductor field-effect transistors (MOSFETs). Although bipolar transistors are by far the most common type, the use of MOSFETs is increasing because they are now capable of handling high power at high frequencies. Transistors are also typically used in the final as long as the power level does not exceed several hundred watts. Individual RF power transistors can handle up to about 800 W. Many of these can be connected in parallel or in push-pull configurations to increase the power-handling capability to many kilowatts. For higher power levels, vacuum tubes are still used in some transmitters, but rarely in new designs. Vacuum tubes function into the VHF and UHF ranges, with power levels of 1 kW or more.
Now, assume that the AM transmitter shown in Fig. 8-2 is a voice transmitter. The input from the microphone is applied to a low-level class A audio amplifier, which boosts the small signal from the microphone to a higher voltage level. (One or more stages of amplification could be used.) The voice signal is then fed to some form of speech-processing (filtering and amplitude control) circuit. The filtering ensures that only voice frequencies in a certain range are passed, which helps to minimize the bandwidth occupied by the signal. Most communication transmitters limit the voice frequency to the 300- to 3000-Hz range, which is adequate for intelligible communication. However, AM broadcast stations offer higher fidelity and allow frequencies up to 5 kHz to be used. In practice, many AM stations modulate with frequencies up to 7.5 kHz, and even 10 kHz, since the FCC uses alternate channel assignments within a given region and the outer sidebands are very weak, so no adjacent channel interference occurs.
[Figure 8-2: An AM transmitter using high-level collector modulation. Block diagram stages: carrier oscillator, buffer amplifier, driver, and final power amplifier in the RF chain; microphone, audio amplifier, speech processing, driver, and modulation amplifier in the audio chain.] Speech processors also contain a circuit used to hold the amplitude to some maximum level. High-amplitude signals are compressed and lower-amplitude signals are given more amplification. The result is that overmodulation is prevented, yet the transmitter operates as close to 100 percent modulation as possible. This reduces the possibility of signal distortion and harmonics, which produce wider sidebands that can cause adjacent channel interference, but maintains the highest possible output power in the sidebands. After the speech processor, a driver amplifier is used to increase the power level of the signal so that it is capable of driving the high-power modulation amplifier.
Low-Level FM Transmitter
In low-level modulation, modulation is performed on the carrier at low power levels, and the signal is then amplified by power amplifiers. This arrangement works for both AM and FM. FM transmitters using this method are far more common than low-level AM transmitters. Fig. 8-3 shows the typical configuration for an FM or PM transmitter. The indirect method of FM generation is used. A stable crystal oscillator is used to generate the carrier signal, and a buffer amplifier is used to isolate it from the remainder of the circuitry. The carrier signal is then applied to a phase modulator such as those discussed in Chap. 6.
In the AM transmitter of Fig. 8-2, high-level or collector modulation (plate modulation in a tube) is used. As stated previously, the power output of the modulation amplifier must be one-half the input power of the RF amplifier. The high-power modulation amplifier usually operates with a class AB or class B push-pull configuration to achieve these power levels.
The voice input is amplified and processed to limit the frequency range and prevent over deviation. The output of the modulator is the desired FM signal. Most FM transmitters are used in the VHF and UHF range. Because crystals are not available for generating those frequencies directly, the carrier is usually generated at a frequency considerably lower than the final output frequency. To achieve the desired output frequency, one or more frequency multiplier stages are used. A frequency multiplier is a class C amplifier whose output frequency is some integer multiple of the input frequency. Most frequency multipliers increase the frequency by a factor of 2, 3, 4, or 5. Because they are class C amplifiers, most frequency multipliers also provide a modest amount of power amplification. Not only does the frequency multiplier increase the carrier frequency to the desired output frequency, but also it multiplies the frequency deviation produced by the modulator. Many frequency and phase modulators generate only a small frequency shift, much lower than the desired final deviation. The design of the transmitter must be such that the frequency multipliers will provide the correct amount of multiplication not only for the carrier frequency but also for the modulation deviation. After the frequency multiplier stage, a class C driver amplifier is used to increase the power level sufficiently to operate the final power amplifier, which also operates at the class C level.
Most FM communication transmitters operate at relatively low power levels, typically less than 100 W. All the circuits, even in the VHF and UHF range, use transistors. For power levels beyond several hundred watts, vacuum tubes must be used. The final amplifier stages in FM broadcast transmitters typically use large vacuum tube class C amplifiers. In FM transmitters operating in the microwave range, klystrons, magnetrons, and traveling-wave tubes are used to provide the final power amplification.
A typical single-sideband (SSB) transmitter is shown in Fig. 8-4. An oscillator signal generates the carrier, which is then fed to the buffer amplifier. The buffer amplifier supplies the carrier input signal to the balanced modulator. The audio amplifier and speech-processing circuits described previously provide the other input to the balanced modulator. The balanced modulator output—a DSB signal—is then fed to a sideband filter that selects either the upper or lower sideband. Following this, the SSB signal is fed to a mixer circuit, which is used to convert the signal to its final operating frequency. Mixer circuits, which operate as simple amplitude modulators, are used to convert a lower frequency to a higher one or a higher frequency to a lower one. (Mixers are discussed more fully in Chap. 9.)
Typically, the SSB signal is generated at a low RF. This makes the balanced modulator and filter circuits simpler and easier to design. The mixer translates the SSB signal to a higher desired frequency. The other input to the mixer is derived from a local oscillator set at a frequency that, when mixed with the SSB signal, produces the desired operating frequency. The mixer can be set up so that the tuned circuit at its output selects either the sum or the difference frequency. The oscillator frequency must be set to provide the desired output frequency. For fixed-channel operation, crystals can be used in this local oscillator. However, in some equipment, such as that used by hams, a variable frequency oscillator (VFO) is used to provide continuous tuning over the desired range. In most modern communication equipment, a frequency synthesizer is used to set the final output frequency.
The output of the mixer in Fig. 8-4 is the desired final carrier frequency containing the SSB modulation. It is then fed to linear driver and power amplifiers to increase the power level as required. Class C amplifiers distort the signal and therefore cannot be used to transmit SSB or low-level AM of any kind, including DSB. Class A or AB linear amplifiers must be used to retain the information content in the AM signal.
Most modern digital radios such as cell phones use DSP to produce the modulation and related processing of the data to be transmitted. Refer to Fig. 8-5. The serial data representing the data to be transmitted is sent to the DSP, which then generates two data streams that are then converted to RF for transmission. The data paths from the DSP chip are sent to DACs where they are translated to equivalent analog signals.
The analog signals are filtered in a low-pass filter (LPF) and then applied to mixers that will up-convert them to the final output frequency. The mixers receive their second inputs from an oscillator or a frequency synthesizer that selects the operating frequency. Note that the oscillator signals are in quadrature; i.e., one is shifted 90° from the other. One is a sine wave, and the other is a cosine wave. The upper signal is referred to as the in-phase (I) signal and the other as the quadrature (Q) signal. The output signals from the mixers are then added, and the result is amplified and transmitted by the power amplifier (PA). Two quadrature signals are needed at the receiver to recover the signal and demodulate it in a DSP chip. This configuration works for any type of modulation as all of the modulation is done with mathematical algorithms. You will learn more about this technique in Chap. 11.
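As a rough numerical illustration of this quadrature up-conversion (a simplified sketch with arbitrary sample-rate, carrier and tone values, not any particular radio's implementation), the I and Q streams modulate a cosine and a sine carrier and the products are summed:

import numpy as np

fs = 1_000_000          # sample rate, Hz (illustrative value)
fc = 100_000            # carrier frequency, Hz (illustrative value)
t = np.arange(0, 1e-3, 1 / fs)

# Example baseband I and Q streams: a single complex tone at 5 kHz standing in
# for the filtered DAC outputs from the DSP.
fm = 5_000
i_bb = np.cos(2 * np.pi * fm * t)
q_bb = np.sin(2 * np.pi * fm * t)

# Quadrature mixers driven by in-phase (cosine) and quadrature (sine) LO
# signals; the summed output is what goes to the power amplifier.
rf = i_bb * np.cos(2 * np.pi * fc * t) - q_bb * np.sin(2 * np.pi * fc * t)

# For this choice of I and Q the result is a single tone at fc + fm.
spectrum = np.abs(np.fft.rfft(rf))
peak = np.fft.rfftfreq(len(rf), 1 / fs)[np.argmax(spectrum)]
print("strongest output component at about", round(float(peak)), "Hz")   # ~105000 Hz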
The starting point for all transmitters is carrier generation. Once generated, the carrier can be modulated, processed in various ways, amplified, and finally transmitted. The source of most carriers in modern transmitters is a crystal oscillator. PLL frequency synthesizers in which a crystal oscillator is the basic stabilizing reference are used in applications requiring multiple channels of operation.
Most radio transmitters are licensed by the FCC either directly or indirectly to operate not only within a specific frequency band but also on predefined frequencies or channels. Deviating from the assigned frequency by even a small amount can cause interference with signals on adjacent channels. Therefore, the transmitter carrier generator must be very precise, operating on the exact frequency assigned, often within very close tolerances. In some radio services, the frequency of operation must be within 0.001 percent of the assigned frequency. In addition, the transmitter must remain on the assigned frequency. It must not drift off or wander from its assigned value despite the many operating conditions, such as wide temperature variations and changes in power supply voltage, that affect frequency. The only oscillator capable of meeting the precision and stability demanded by the FCC is a crystal oscillator.
A crystal is a piece of quartz that has been cut and ground into a thin, flat wafer and mounted between two metal plates. When the crystal is excited by an ac signal across its plates, it vibrates. This action is referred to as the piezoelectric effect. The frequency of vibration is determined primarily by the thickness of the crystal. Other factors influencing frequency are the cut of the crystal, i.e., the place and angle of cut made in the base quartz rock from which the crystal was derived, and the size of the crystal wafer.
Crystal frequencies range from as low as 30 kHz to as high as 150 MHz. As the crystal vibrates or oscillates, it maintains a very constant frequency. Once a crystal has been cut or ground to a particular frequency, it will not change to any great extent even with wide voltage or temperature variations. Even greater stability can be achieved by mounting the crystal in sealed, temperature-controlled chambers known as crystal ovens. These devices maintain an absolutely constant temperature, ensuring a stable output frequency.
As you saw in Chap. 4, the crystal acts as an LC tuned circuit. It can emulate a series or parallel LC circuit with a Q as high as 30,000. The crystal is simply substituted for the coil and capacitor in a conventional oscillator circuit. The end result is a very precise, stable oscillator. The precision, or stability, of a crystal, is usually expressed in parts per million (ppm). For example, to say that a crystal with a frequency of 1 MHz has a precision of 100 ppm means that the frequency of the crystal can vary from 999,900 to 1,000,100 Hz. Most crystals have tolerance and stability values in the 10- to 1000-ppm range. Expressed as a percentage, the precision is (100/1,000,000) x 100 = 0.0001 x 100 = 0.01 percent
You can also use ratio and proportion to figure the frequency variation for a crystal with a given precision. For example, a 24-MHz crystal with a stability of ±50 ppm has a maximum frequency variation Δf of (50/1,000,000) x 24,000,000. Thus, Δf = 50(24,000,000)/1,000,000 = 24 x 50 = 1200 Hz, or ±1200 Hz.
Example 8.1 What are the maximum and minimum frequencies of a 16-MHz crystal with a stability of 200 ppm? The frequency can vary as much as 200 Hz for every 1 MHz of frequency, or 200 x 16 = 3200 Hz. The possible frequency range is therefore 16,000,000 ± 3200 Hz, or 15,996,800 Hz to 16,003,200 Hz.
However, the simplest way to convert from percentage to ppm is to convert the percentage value to its decimal form by dividing by 100, or moving the decimal point two places to the left, and then multiplying by 106 , or moving the decimal point six places to the right. For example, the ppm stability of a 5-MHz crystal with a precision of 0.005 percent is found as follows. First, put 0.005 percent in decimal form: 0.005 percent = 0.00005. Next, multiply by 1 million: 0.00005 x 1,000,000 = 50 ppm
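These conversions are easy to script; the short Python sketch below (illustrative only) reproduces the 0.005 percent example and the frequency range of Example 8.1:

def percent_to_ppm(percent):
    """Convert a precision given in percent to parts per million (0.005 % -> 50 ppm)."""
    return percent / 100 * 1_000_000

def crystal_range(f_hz, ppm):
    """Minimum and maximum frequency of a crystal with +/- ppm stability."""
    delta = f_hz * ppm / 1_000_000
    return f_hz - delta, f_hz + delta

print(percent_to_ppm(0.005))            # 50.0 ppm, as in the text
print(crystal_range(16_000_000, 200))   # (15996800.0, 16003200.0) Hz, Example 8.1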
A radio transmitter uses a crystal oscillator with a frequency of 14.9 MHz and a frequency multiplier chain with factors of 2, 3, and 3. The crystal has a stability of ±300 ppm.
a. Calculate the transmitter output frequency. The total multiplication factor is 2 x 3 x 3 = 18, so the output frequency is 14.9 MHz x 18 = 268.2 MHz.
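A short sketch of the calculation (illustrative; it also shows the worst-case drift obtained by applying the same ppm tolerance to the multiplied output, since the multipliers scale the absolute frequency error along with the carrier):

def multiplier_chain(crystal_hz, factors, ppm):
    """Output frequency of a multiplier chain and its worst-case drift in Hz."""
    total = 1
    for f in factors:
        total *= f                      # overall multiplication factor
    fout = crystal_hz * total
    drift = fout * ppm / 1_000_000      # ppm tolerance applied to the output
    return fout, drift

fout, drift = multiplier_chain(14_900_000, (2, 3, 3), 300)
print(fout / 1e6, "MHz output,  +/-", drift, "Hz worst case")
# 268.2 MHz output, +/- 80460.0 Hz worst case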
Typical Crystal Oscillator Circuits
The most common crystal oscillator is a Colpitts type, in which the feedback is derived from the capacitive voltage divider made up of C1 and C2. An emitter-follower version is shown in Fig. 8-6. Again, the feedback comes from the capacitor voltage divider C1–C2. The output is taken from the emitter, which is untuned. Most oscillators of this type operate as class A amplifiers with a sine wave output. JFETs are also widely used in discrete component amplifiers.
Occasionally you will see a capacitor in series or in parallel with the crystal (not both), as shown in Fig. 8-6. These capacitors can be used to make minor adjustments in the crystal frequency.
As discussed previously, it is not possible to affect large frequency changes with series or shunt capacitors, but they can be used to make fine adjustments. The capacitors are called crystal pulling capacitors, and the whole process of fine-tuning a crystal is sometimes referred to as rubbering. When the pulling capacitor is a varactor, FM or FSK can be produced. The analog or binary modulating signal varies the varactor capacitance that, in turn, shifts the crystal frequency.
The main problem with crystals is that their upper-frequency operation is limited. The higher the frequency, the thinner the crystal must be to oscillate at that frequency. At an upper limit of about 50 MHz, the crystal is so fragile that it becomes impractical to use. However, over the years, operating frequencies have continued to move upward as a result of the quest for more frequency space and greater channel capacity, and the FCC has continued to demand the same stability and precision that are required at the lower frequencies. One way to achieve VHF, UHF, and even microwave frequencies using crystals is by employing frequency multiplier circuits, as described earlier. The carrier oscillator operates on a frequency less than 50 MHz, and multipliers raise that frequency to the desired level. For example, if the desired operating frequency is 163.2 MHz and the frequency multipliers multiply by a factor of 24, the crystal frequency must be 163.2/24 = 6.8 MHz.
Another way to achieve crystal precision and stability at frequencies above 50 MHz is to use overtone crystals. An overtone crystal is cut in a special way so that it optimizes its oscillation at an overtone of the basic crystal frequency. An overtone is like a harmonic as it is usually some multiple of the fundamental vibration frequency. However, the term harmonic is usually applied to electric signals, and the term overtone refers to higher mechanical vibration frequencies. Like a harmonic, an overtone is usually some integer multiple of the base vibration frequency. However, most overtones are slightly more or slightly less than the integer value. In a crystal, the second harmonic is the first overtone, the third harmonic is the second overtone, and so on. For example, a crystal with a fundamental frequency of 20 MHz would have a second harmonic or first overtone of
40 MHz, and a third harmonic or second overtone of 60 MHz.
The term overtone is often used as a synonym for harmonic. Most manufacturers refer to their third overtone crystals as third harmonic crystals.
The odd overtones are far greater in amplitude than the even overtones. Most overtone crystals oscillate reliably at the third or fifth overtone of the frequency at which the crystal is originally ground. There are also seventh-overtone crystals. Overtone crystals can be obtained with frequencies up to about 250 MHz. A typical overtone crystal oscillator may use a crystal cut for a frequency of, say, 16.8 MHz, and optimized for overtone service will have a third-overtone oscillation at 3 x 16.8 = 50.4 MHz. The tuned output circuit made up of L1 and C1 will be resonant at 50.4 MHz.
Most crystal oscillators are circuits built into other integrated circuits. The crystal is external to the IC. Another common form is that shown in Fig. 8-7, where the crystal and oscillator circuit are fully packaged together as an IC. Both sine and square output versions are available.
There are many different versions of these packaged crystal oscillators. These are the basic crystal oscillator (XO), the voltage-controlled crystal oscillator (VCXO), the temperature-compensated crystal oscillator (TCXO), and the oven-controlled crystal oscillator (OCXO). The selection depends upon the degree of frequency stability required by the application. The basic XO has a stability in the tens of ppm.
A VCXO uses a varactor in series or parallel with the crystal (Fig. 8-6) to vary the crystal frequency over a narrow range with an external DC voltage.
Improved stability is obtained in the TCXO, which uses a feedback network with a thermistor to sense temperature variations, which in turn controls a voltage variable capacitor (VVC) or varactor to pull the crystal frequency to some desired value. TCXOs can achieve stability values of ±0.2 to ±2 ppm.
An OCXO packages the crystal and its circuit in a temperature-controlled oven that holds the frequency stable at the desired frequency. A thermistor sensor in a feedback network varies the temperature of a heating element in the oven. Stabilities of 1 x 10^-8 or better can be obtained.
Frequency synthesizers are variable-frequency generators that provide the frequency stability of crystal oscillators but with the convenience of incremental tuning over a broad frequency range. Frequency synthesizers usually provide an output signal that varies in fixed frequency increments over a wide range. In a transmitter, a frequency synthesizer provides basic carrier generation for channelized operation. Frequency synthesizers are also used in receivers as local oscillators and perform the receiver tuning function.
Using frequency synthesizers overcomes certain cost and size disadvantages associated with crystals. Assume, e.g., that a transmitter must operate on 50 channels. Crystal stability is required. The most direct approach is simply to use one crystal per frequency and add a large switch. Although such an arrangement works, it has major disadvantages. Crystals are expensive, ranging from $1 to $10 each, and even at the lowest price, 50 crystals may cost more than all the rest of the parts in the transmitter. The same 50 crystals would also take up a great deal of space, possibly occupying more than 10 times the volume of all the rest of the transmitter parts. With a frequency synthesizer, only one crystal is needed, and the requisite number of channels can be generated by using a few tiny ICs.
Over the years, many techniques have been developed for implementing frequency synthesizers with frequency multipliers and mixers. Today, however, most frequency synthesizers use some variation of the phase-locked loop (PLL). A newer technique called direct digital synthesis (DDS) is becoming more popular as integrated-circuit technology has made high-frequency generation practical.
Phase-Locked Loop Synthesizers
An elementary frequency synthesizer based on a PLL is shown in Fig. 8-8. Like all phase-locked loops, it consists of a phase detector, a low-pass filter, and a VCO. The input to the phase detector is a reference oscillator. The reference oscillator is normally crystal controlled to provide high-frequency stability. The frequency of the reference oscillator sets the increments in which the frequency may be changed. Note that the VCO output is not connected directly back to the phase detector, but applied to a frequency divider first. A frequency divider is a circuit whose output frequency is some integer submultiple of the input frequency. A divide-by-10 frequency divider produces an output frequency that is one-tenth of the input frequency. Frequency dividers can be easily implemented with digital circuits to provide any integer value of frequency division.
In the PLL in Fig. 8-8, the reference oscillator is set to 100 kHz (0.1 MHz). Assume that the frequency divider is initially set for a division of 10. For a PLL to become locked or synchronized, the second input to the phase detector must be equal in frequency to the reference frequency; for this PLL to be locked, the frequency divider output must be 100 kHz. The VCO output has to be 10 times higher than this, or 1 MHz. One way to look at this circuit is as a frequency multiplier: The 100-kHz input is multiplied by 10 to produce the 1-MHz output. In the design of the synthesizer, the VCO frequency is set to 1 MHz so that when it is divided, it will provide the 100-kHz input signal required by the phase detector for the locked condition. The synthesizer output is the output of the VCO. What has been created, then, is a 1-MHz signal source. Because the PLL is locked to the crystal reference source, the VCO output frequency has the same stability as that of the crystal oscillator. The PLL will track any frequency variations, but the crystal is very stable and the VCO output is as stable as that of the crystal reference oscillator.
To make the frequency synthesizer more useful, some means must be provided to vary its output frequency. This is done by varying the frequency division ratio. Through various switching techniques, the flip-flops in a frequency divider can be arranged to provide any desired frequency division ratio. In the most sophisticated circuits, a microprocessor generates the correct frequency division ratio based on software inputs.
Varying the frequency division ratio changes the output frequency. For example, in the circuit in Fig. 8-8, if the frequency division ratio is changed from 10 to 11, the VCO output frequency must change to 1.1 MHz. The output of the divider then remains at 100 kHz (1,100,000/11 = 100,000), as necessary to maintain a locked condition. Each incremental change in frequency division ratio produces an output frequency change of 0.1 MHz. This is how the frequency increment is set by the reference oscillator.
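In other words, the locked output is simply the reference multiplied by the division ratio. A minimal sketch of this relationship, using the 100-kHz reference of the Fig. 8-8 example:

f_ref = 100_000                      # reference frequency and step size, Hz

def pll_output(n, f_ref=f_ref):
    """Locked output of an integer-N PLL synthesizer: fo = N x fref."""
    return n * f_ref

for n in (10, 11, 12):
    print("divide ratio", n, "->", pll_output(n) / 1e6, "MHz")
# divide ratio 10 -> 1.0 MHz, 11 -> 1.1 MHz, 12 -> 1.2 MHz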
A more complex PLL synthesizer, a circuit that generates VHF and UHF frequencies over the 100- to 500-MHz range, is shown in Fig. 8-9. This circuit uses a FET oscillator to generate the carrier frequency directly. No frequency multipliers are needed. The output of the frequency synthesizer can be connected directly to the driver and power amplifiers in the transmitter. This synthesizer has an output frequency in the 390-MHz range, and the frequency can be varied in 30-kHz increments above and below that frequency.
The VCO circuit for the synthesizer in Fig. 8-9 is shown in Fig. 8-10. The frequency of this LC oscillator is set by the values of L1, C1, C2 and the capacitances of the varactor diodes D1 and D2, Ca and Cb, respectively. The dc voltage applied to the varactors changes the frequency. Two varactors are connected back to back, and thus the total effective capacitance of the pair is less than either individual capacitance. Specifically, it is equal to the series capacitance CS, where CS = CaCb/(Ca + Cb). If D1 and D2 are identical, CS = Ca/2. A negative voltage with respect to ground is required to reverse bias the diodes. Increasing the negative voltage increases the reverse bias and decreases the capacitance. This, in turn, increases the oscillator frequency.
Using two varactors allows the oscillator to produce higher RF voltages without the problem of the varactors becoming forward-biased. If a varactor, which is a diode, becomes forward-biased, it is no longer a capacitor. High voltages in the tank circuit of the oscillator can sometimes exceed the bias voltage level and cause forward conduction. When forward conduction occurs, rectification takes place, producing a dc voltage that changes the dc tuning voltage from the phase detector and loop filter. The result is called phase noise. With two capacitors in series, the voltage required to forward-bias the combination is double that of one varactor. An additional benefit is that two varactors in series produce a more linear variation of capacitance with voltage than one diode. The
dc frequency control voltage is, of course, derived by filtering the phase detector output with the low-pass loop filter.
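As a numerical aside (illustrative component values only, not those of Fig. 8-10), the series combination of the two varactors and its effect on the tank frequency can be checked directly:

from math import pi, sqrt

def series_capacitance(ca, cb):
    """Effective capacitance of two capacitors (the back-to-back varactors) in series."""
    return ca * cb / (ca + cb)

def tank_frequency(l, c):
    """Resonant frequency of an LC tank circuit."""
    return 1 / (2 * pi * sqrt(l * c))

ca = cb = 20e-12                       # assumed 20-pF varactors
cs = series_capacitance(ca, cb)        # 10 pF: half of either one, as stated above
print(cs * 1e12, "pF")                                  # 10.0 pF
print(tank_frequency(25e-9, cs) / 1e6, "MHz")           # about 318 MHz with an assumed 25-nH coil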
In most PLLs, the phase detector is a digital circuit rather than a linear circuit, since the inputs to the phase detector are usually digital. Remember, one input comes from the output of the feedback frequency divider chain, which is certainly digital, and the other comes from the reference oscillator. In some designs, the reference oscillator frequency is also divided down by a digital frequency divider to achieve the desired frequency step increment. This is the case in Fig. 8-9. Since the synthesizer frequency can be stepped in increments of 30 kHz, the reference input to the phase detector must be 30 kHz. This is derived from a stable 3-MHz crystal oscillator and a frequency divider of 100.
The design shown in Fig. 8-9 uses an exclusive-OR gate as a phase detector. Recall that an exclusive-OR (XOR) gate generates a binary 1 output only if the two inputs are complementary; otherwise, it produces a binary 0 output.
Fig. 8-11 shows how the XOR phase detector works: Remember that the inputs to a phase detector must have the same frequency. This circuit requires that the inputs have a 50 percent duty cycle. The phase relationship between the two signals determines the output of the phase detector. If the two inputs are exactly in phase with each other, the XOR output will be zero, as Fig. 8-11(b) shows. If the two inputs are 180° out of phase with each other, the XOR output will be a constant binary 1 [see Fig. 8-11(c)]. Any other phase relationship will produce output pulses at twice the input frequency. The duty cycle of these pulses indicates the amount of phase shift. A small phase shift produces narrow pulses; a larger phase shift produces wider pulses. Fig. 8-11(d) shows a 90° phase shift.
The output pulses are fed to the loop filter (Fig. 8-9), an op amp with a capacitor in the feedback path that makes it into a low-pass filter. This filter averages the phase detector pulses into a constant dc voltage that biases the VCO varactors. The average dc voltage is proportional to the duty cycle, which is the ratio of the binary 1 pulse time to the period of the signal. Narrow pulses (low duty cycle) produce a low average dc voltage, and wide pulses (high duty cycle) produce a high average dc voltage. Fig. 8-11(e) shows how the average dc voltage varies with phase shift. Most PLLs lock in at a phase difference of 90°. Then, as the frequency of the VCO changes because of drift or because of changes in the frequency divider ratio, the input to the phase detector from
the feedback divider changes, varying the duty cycle. This changes the dc voltage from the loop filter and forces a change of the VCO frequency to compensate for the original change. Note that the XOR produces a positive dc average voltage, but the op amp used in the loop filter inverts this to a negative dc voltage, as required by the VCO.
The output frequency of the synthesizer fo and the phase detector reference frequency fr are related to the overall divider ratio R as follows: fo = R x fr, or equivalently R = fo/fr.
In our example, the reference input to the phase detector fr must be 30 kHz to match the feedback from the VCO output fo. Assume a VCO output frequency of 389.76 MHz. A frequency divider reduces this amount to 30 kHz. The overall division ratio is R = fo /fr = 389,760,000/30,000 = 12,992.
In some very high-frequency PLL synthesizers, a special frequency divider called a prescaler is used between the high-frequency output of the VCO and the programmable part of the divider. The prescaler could be one or more emitter-coupled logic (ECL) flip-flops or a low-ratio CMOS frequency divider that can operate at frequencies up to 1 to 2 GHz. Refer again to Fig. 8-9. The prescaler divides by a ratio of M = 64 to reduce the 389.76-MHz output of the VCO to 6.09 MHz, which is well within the range of most programmable frequency dividers. Since we need an overall division ratio of R = 12,992 and a factor of M = 64 is in the prescaler, the programmable portion of the feedback divider N can be computed. The total division factor is R = MN = 12,992. Rearranging,
we have N = R/M = 12,992/64 = 203.
Now, to see how the synthesizer changes output frequencies when the division ratio is changed, assume that the programmable part of the divider is changed by one increment, to N = 204. For the PLL to remain in a locked state, the phase detector input must remain at 30 kHz. This means that the VCO output frequency must change. The new frequency division ratio is 204 x 64 = 13,056. Multiplying this by 30 kHz yields the new VCO output frequency fo = 30,000 x 13,056 =391,680,000 Hz = 391.68 MHz. Instead of the desired 30-kHz increment, the VCO output varied by 391,680,000 – 389,760,000 = 1,920,000 Hz, or a step of 1.92 MHz. This was caused by the prescaler. For a 30-kHz step to be achieved, the feedback divider should have changed its ratio from 12,992 to 12,993. Since the prescaler is fixed with a division factor of 64, the smallest increment step is 64 times the reference frequency, or 64 x 30,000 = 1,920,000 Hz. The prescaler solves the problem of having a divider with a high enough frequency capability to handle the VCO output, but forces the use of programmable dividers for only a portion of the total divider ratio. Because of the prescaler, the divider ratio is not stepped in integer increments but in increments of 64. Circuit designers can either live with this or find another solution.
One possible solution is to reduce the reference frequency by a factor of 64. In the example, the reference frequency would become 30 kHz/64 = 468.75 Hz. To achieve this frequency at the other input of the phase detector, an additional division factor of 64 must be included in the programmable divider, making it N = 203 x 64 = 12,992. Assuming the original output frequency of 389.76 MHz, the overall divider ratio is R = MN = 12,992(64) = 831,488. This makes the output of the programmable divider equal to the reference frequency, or fr = 389,760,000/831,488 = 468.75 Hz.
This solution is logical, but it has several disadvantages. First, it increases cost and complexity by requiring two more divide-by-64 ICs in the reference and feedback paths. Second, the lower the operating frequency of the phase detector, the more difficult it is to filter the output into direct current. Further, the low-frequency response of the filter slows the process of achieving lock. When a change in the divider ratio is made, the VCO frequency must change. It takes a finite amount of time for the filter to develop the necessary value of the corrective voltage to shift the VCO frequency. The lower the phase detector frequency, the greater this lock delay time. It has been determined that the lowest acceptable frequency is about 1 kHz, and even this is too low in some applications. At 1 kHz, the change in VCO frequency is very slow as the filter capacitor changes its charge in response to the different duty-cycle pulses of the phase detector. With a 468.75-Hz phase detector frequency, the loop response becomes even slower. For more rapid frequency changes, a much higher frequency must be used. For spread spectrum and in some satellite applications, the frequency must change in a few microseconds or less, requiring an extremely high reference frequency.
To solve this problem, designers of high-frequency PLL synthesizers created special IC frequency dividers, such as the one diagrammed in Fig. 8-12. This is known as a fractional N divider PLL. The VCO output is applied to a special variable-modulus prescaler divider. It is made of emitter-coupled logic or CMOS circuits. It is designed to have two divider ratios, M and M + 1. Some commonly available ratio pairs are 10/11, 64/65, and 128/129. Let's assume the use of a 64/65 counter. The actual divider ratio is determined by the modulus control input. If this input is binary 0, the prescaler divides by M, or 64; if this input is binary 1, the prescaler divides by M + 1, or 65. As Fig. 8-12 shows, the modulus control receives its input from an output of counter A. Counters A and N are programmable down-counters used as frequency dividers. The divider ratios are preset into the counters each time a full divider cycle is achieved. These ratios are such that N > A. The count input to each counter comes from the output of the variable-modulus prescaler.
A divider cycle begins by presetting the down-counters to A and N and setting the prescaler to M + 1 = 65. The input frequency from the VCO is fo. The input to the down-counters is fo/65. Both counters begin down-counting. Since A is a shorter counter than N, A will decrement to 0 first. When it does, its detect-0 output goes high, changing the modulus of the prescaler from 65 to 64. By this point the N counter has counted down by A, and it continues to down-count with an input of fo/64. When it reaches 0, both down-counters are preset again, the dual-modulus prescaler is changed back to a divider ratio of 65, and the cycle starts over.
The total division ratio R of the complete divider in Fig. 8-12 is R = MN + A. If M = 64, N = 203, and A = 8, the total divider ratio is R = 64(203) + 8 = 12,992 + 8 = 13,000. The output frequency is fo = Rfr = 13,000(30,000) = 390,000,000 Hz = 390 MHz.
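The counting sequence described above can be checked numerically. Below is a minimal Python sketch (my own illustration, not code from the text) that steps through one full divider cycle of the dual-modulus prescaler with the A and N down-counters and confirms that the overall division ratio comes out to R = MN + A.

```python
def dual_modulus_ratio(M: int, N: int, A: int) -> int:
    """Count how many VCO pulses are consumed in one full divider cycle."""
    assert N > A, "the scheme requires N > A"
    vco_pulses = 0
    a, n = A, N            # preset the down-counters
    modulus = M + 1        # prescaler starts out dividing by M + 1
    while n > 0:
        vco_pulses += modulus      # one prescaler output pulse = `modulus` VCO pulses
        if a > 0:
            a -= 1
            if a == 0:
                modulus = M        # A reached 0: prescaler switches to divide-by-M
        n -= 1
    return vco_pulses

M, N, A = 64, 203, 8
f_ref = 30_000                              # 30-kHz reference from the example
R = dual_modulus_ratio(M, N, A)
print(R, M * N + A)                         # 13000 13000
print(R * f_ref / 1e6, "MHz")               # 390.0 MHz
```

Changing A from 8 to 9 in this sketch raises R to 13,001 and the output by one 30-kHz step, which is the whole point of the arrangement.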
Any divider ratio in the desired range can be obtained by selecting the appropriate preset values for A and N. Further, this divider steps the divider ratio one integer at a time so that the step increment in the output frequency is 30 kHz, as desired.
As an example, assume that N is set to 207 and A is set to 51. The total divider ratio is R = MN + A = 64(207) + 51 = 13,248 + 51 = 13,299. The new output frequency is fo = 13,299(30,000) = 398,970,000 Hz = 398.97 MHz.
If the A value is changed by 1, raising it to 52, the new divide ratio is R = MN + A = 64(207) + 52 = 13,248 + 52 = 13,300. The new frequency is fo = 13,300(30,000) = 399,000,000 Hz = 399 MHz. Note that with an increment change in A of 1, R changed by 1 and the final output frequency increased by a 30-kHz (0.03-MHz) increment, from 398.97 to 399 MHz.
The preset values for N and A can be supplied by almost any parallel digital source but are usually supplied by a microprocessor or are stored in a ROM. Although this type of circuit is complex, it achieves the desired results of stepping the output frequency in increments equal to the reference input to the phase detector and allowing the reference frequency to remain high so that the change delay in the output frequency is shorter.
A frequency synthesizer has a crystal reference oscillator of 10 MHz followed by a divider with a factor of 100. The variable-modulus prescaler has M = 31/32. The A and N down-counters have factors of 63 and 285, respectively. What is the synthesizer output frequency?
The reference input signal to the phase detector is
10 MHz/100 = 0.1 MHz = 100 kHz
The total divider factor R is
R = MN + A = 32(285) + 63 = 9120 + 63 = 9183
The output of this divider must be 100 kHz to match the 100-kHz reference signal to achieve lock. Therefore, the input to the divider, the output of the VCO, is R times 100 kHz, or
fo = 9183 (0.1 MHz) = 918.3 MHz
Demonstrate that the step change in output frequency for the synthesizer in Example 8-3 is equal to the phase detector reference frequency, or 0.1 MHz. Changing the A factor one increment to 64 and recalculating the output yield
R = 32(285) + 64 = 9184
fo = 9184(0.1 MHz) = 918.4 MHz
The increment is 918.4 – 918.3 = 0.1 MHz.
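The arithmetic of this example is easy to script. The following short Python check is my own (the helper name output_freq is not from the text); it reproduces the 918.3-MHz output and the 100-kHz step when A is incremented by 1.

```python
f_xtal = 10e6                    # 10-MHz crystal reference oscillator
f_ref = f_xtal / 100             # divide-by-100 gives the 100-kHz phase-detector reference

def output_freq(M: int, N: int, A: int, f_ref: float) -> float:
    """VCO output frequency of the dual-modulus synthesizer, fo = (MN + A) * fr."""
    return (M * N + A) * f_ref

fo = output_freq(32, 285, 63, f_ref)
fo_next = output_freq(32, 285, 64, f_ref)     # A incremented by one
print(fo / 1e6, "MHz")                        # 918.3 MHz
print((fo_next - fo) / 1e3, "kHz step")       # 100.0 kHz step
```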
Direct Digital Synthesis
A newer form of frequency synthesis is known as direct digital synthesis (DDS). A DDS synthesizer generates a sine wave output digitally. The output frequency can be varied in increments depending upon a binary value supplied to the unit by a counter, a register, or an embedded microcontroller.
The basic concept of the DDS synthesizer is illustrated in Fig. 8-13. A read-only memory (ROM) is programmed with the binary representation of a sine wave. These are the values that would be generated by an analog-to-digital (A/D) converter if an analog sine wave were digitized and stored in the memory. If these binary values are fed to a digital-to-analog (D/A) converter, the output of the D/A converter will be a stepped approximation of the sine wave. A low-pass filter (LPF) is used to remove the high frequency content near the clock frequency, thereby smoothing the ac output into a nearly perfect sine wave.
To operate this circuit, a binary counter is used to supply the address word to the ROM. A clock signal steps the counter that supplies a sequentially increasing address to ROM. The binary numbers stored in ROM are applied to the D/A converter, and the stepped sine wave is generated. The frequency of the clock determines the frequency of the sine wave.
To illustrate this concept, assume a 16-word ROM in which each storage location has a 4-bit address. The addresses are supplied by a 4-bit binary counter that counts from 0000 through 1111 and recycles. Stored in ROM are binary numbers representing values that are the sine of particular angles of the sine wave to be generated. Since a sine wave is 360° in length, and since the 4-bit counter produces 16 addresses or increments, the binary values represent the sine values at 360/16 = 22.5° increments.
Assume further that these sine values are represented with 8 bits of precision. The 8-bit binary sine values are fed to the D/A converter, where they are converted to a proportional voltage. If the D/A converter is a simple unit capable of a dc output voltage only, it cannot produce a negative value of voltage as required by a sine wave. Therefore, we will add to the sine value stored in ROM an offset value that will produce a sine wave output, but shifted so that it is all positive. For example, if we wish to produce a sine wave with a 1-V peak value, the sine wave would vary from 0 to +1, then back to 0, from 0 to -1, and then back to 0, as shown in Fig. 8-14(a). We add a binary 1 to the waveform so that the output of the D/A converter will appear as shown in Fig. 8-14(b). The D/A converter output will be 0 at the peak negative value of the sine wave. This value of 1 is added to each of the sine values stored in ROM. Fig. 8-15 shows the ROM address, the phase angle, sine value, and the sine value plus 1.
If the counter starts counting at zero, the sine values will be sequentially accessed from ROM and fed to the D/A converter, which produces a stepped approximation of the sine wave. The resulting waveform (red) for one complete count of the counter is shown in Fig. 8-16. If the clock continues to count, the counter will recycle and the sine wave output cycle will be repeated.
An important point to note is that this frequency synthesizer produces one complete sine wave cycle for every 16 clock pulses. The reason for this is that we used 16 sine values to create the one cycle of the sine wave in ROM.
To get a more accurate representation of the sine wave, we could have used more bits. For example, if we had used an 8-bit counter with 256 states, the sine values would be spaced every 360/256 ≈ 1.4°, giving a highly accurate representation of the sine wave. Because of this relationship, the output frequency of the sine wave is f0 = fclk/2^N, where fclk is the clock frequency and N is the number of address bits in ROM.
If a clock frequency of 1 MHz were used with our 4-bit counter, the sine wave output frequency would be
f0 = 1,000,000/2^4 = 1,000,000/16 = 62,500 Hz
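As a concrete illustration of the counter-driven arrangement of Fig. 8-13, here is a minimal Python sketch (my own, with an idealized ROM and D/A; the 16-entry table and 1-MHz clock follow the example above). One pass of the 4-bit counter through the ROM produces one stepped sine-wave cycle, so the output frequency is fclk/2^N.

```python
import math

n_bits = 4                                   # 4-bit counter -> 16 ROM addresses
size = 2 ** n_bits
# Each ROM entry is sin(angle) + 1; the offset keeps the D/A output non-negative.
rom = [math.sin(2 * math.pi * i / size) + 1 for i in range(size)]

f_clk = 1_000_000                            # 1-MHz crystal clock
print(f_clk / size, "Hz")                    # 62500.0 Hz, as computed above

# One full counter cycle read out of ROM: a stepped approximation of one sine cycle.
one_cycle = [rom[count % size] for count in range(size)]
```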
The stepped approximation of the sine wave is then applied to a low-pass filter where the high-frequency components are removed, leaving a low-distortion sine wave.
The only way to change the frequency in this synthesizer is to change the clock frequency. This arrangement does not make much sense in view of the fact that we want our synthesizer output to have crystal oscillator precision and stability. To achieve this, the clock oscillator must be crystal-controlled. The question then becomes, How can you modify this circuit to maintain a constant clock frequency and also change the frequency digitally?
The most commonly used method to vary the synthesizer output frequency is to replace the counter with a register whose content will be used as the ROM address but also one that can be readily changed. For example, it could be loaded with an address from an external microcontroller. However, in most DDS circuits, this register is used in conjunction with a binary adder, as shown in Fig. 8-17. The output of the address register is applied to the adder along with a constant binary input value. This constant value can also be changed. The output of the adder is fed back into the register. The combination of the register and adder is generally referred to as an accumulator. This circuit is arranged so that upon the occurrence of each clock pulse, the constant C is added to the
previous value of the register content and the sum is re-stored in the address register. The constant value comes from the phase increment register, which in turn gets it from an embedded microcontroller or other source.
To show how this circuit works, assume that we are using a 4-bit accumulator register and the same ROM described previously. Assume also that we will set the constant value to 1. Then, each time a clock pulse occurs, a 1 is added to the content of the register. With the register initially set to 0000, the first clock pulse will cause the register to increment to 1. On the next clock pulse the register will increment to 2, and so on. As a result, this arrangement acts just like the binary counter described earlier.
Now assume that the constant value is 2. This means that for each clock pulse, the register value will be incremented by 2. Starting at 0000, the register contents would be 0, 2, 4, 6, and so on. Looking at the sine value table in Fig. 8-15, you can see that the values output to the D/A converter also describe the sine wave, but the sine wave is being generated at a more rapid rate. Instead of having eight amplitude values represent the peak-to-peak value of the sine wave, only four values are used. Refer to Fig. 8-16, which illustrates what the output looks like (blue curve). The output is, of course, a stepped approximation of a sine wave, but during the complete cycle of the counter from 0000 through 1111, two cycles of the output sine wave occur. The output has fewer steps and
is a cruder representation. With an adequate low-pass filter, the output will be a sine wave whose frequency is twice that generated by the circuit with a constant input of 1.
The frequency of the sine wave can further be adjusted by changing the constant value added to the accumulator. Setting the constant to 3 will produce an output frequency that is three times that produced by the original circuit. A constant value of 4 produces a frequency four times the original frequency.
With this arrangement, we can now express the output sine wave frequency with the formula

fo = C fclk/2^N

where C is the constant added to the accumulator on each clock pulse, fclk is the clock frequency, and N is the number of address bits.
The higher the constant value C, the fewer the samples used to reconstruct the output sine wave. When the constant is set to 4, every fourth value in Fig. 8-15 will be sent to the D/A converter, generating the dashed waveform in Fig. 8-16. Its frequency is four times the original. This corresponds to two samples per cycle, which is the least number that can be used and still generate an accurate output frequency. Recall the Nyquist criterion, which says that to adequately reproduce a sine wave, it must be sampled a minimum of two times per cycle to reproduce it accurately in a D/A converter.
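A phase-accumulator version of the earlier sketch shows this relationship directly. This is my own illustration of the idea behind Fig. 8-17, not the figure's actual circuit: the constant C is added to a 4-bit accumulator on every clock, and the output frequency scales as C·fclk/2^N up to the Nyquist limit of two samples per cycle.

```python
import math

n_bits = 4
size = 2 ** n_bits
rom = [math.sin(2 * math.pi * i / size) + 1 for i in range(size)]   # offset sine table

def dds_samples(C: int, clock_ticks: int) -> list[float]:
    """Read the ROM through a phase accumulator that gains C counts per clock."""
    acc, out = 0, []
    for _ in range(clock_ticks):
        out.append(rom[acc])
        acc = (acc + C) % size        # the accumulator wraps, like the counter recycling
    return out

f_clk = 1_000_000
for C in (1, 2, 4, 8):                # 8 = size/2 is the Nyquist limit: 2 samples per cycle
    print(C, C * f_clk / size, "Hz")  # 62.5 kHz, 125 kHz, 250 kHz, 500 kHz
```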
To make the DDS effective, then, the total number of sine samples stored in ROM must be a very large value. Practical circuits use a minimum of 12 address bits, giving 4096 sine samples. Even larger numbers of samples can be used.
The DDS synthesizer described earlier offers some advantages over a PLL synthesizer. First, if a sufficient number of bits of resolution in the ROM word size and the accumulator size are provided, the frequency can be varied in very fine increments. And because the clock is crystal-controlled, the resulting sine wave output will have the accuracy and precision of the crystal clock.
A second benefit is that the frequency of the DDS synthesizer can usually be changed much faster than that of a PLL synthesizer. Remember that to change the PLL synthesizer frequency, a new frequency-division factor must be entered into the frequency divider. Once this is done, it takes a finite amount of time for the feedback loop to detect the error and settle into the new locked condition. The storage time of the loop low-pass filter considerably delays the frequency change. This is not a problem in the DDS synthesizer, which can change frequencies within nanoseconds.
A downside of the DDS synthesizer is that it is difficult to make one with very high output frequencies. The output frequency is limited by the speed of the available D/A converter and digital logic circuitry. With today’s components, it is possible to produce a DDS synthesizer with an output frequency as high as 200 MHz. Further developments in IC technology will increase that in the future. For applications requiring higher frequencies, the PLL is still the best alternative.
DDS synthesizers are available from several IC companies. The entire DDS circuitry is contained on a chip. The clock circuit is usually contained within the chip, and its frequency is set by an external crystal. Parallel binary input lines are provided to set the constant value required to change the frequency. A 12-bit D/A converter is typical. An example of such a chip is the Analog Devices AD9852, shown in Fig. 8-18. The on-chip clock is derived from a PLL used as a frequency multiplier that can be set to multiply by any integer value between 4 and 20. With a maximum of 20, a clock frequency of 300 MHz is generated. To achieve this frequency, the external reference clock input must be 300/20 = 15 MHz. With a 300-MHz clock, the synthesizer can generate sine waves up to 150 MHz.
The outputs come from two 12-bit DACs that produce both the sine and the cosine waves simultaneously. A 48-bit frequency word is used to step the frequency in 2^48 increments. A 17-bit phase accumulator lets you shift the phase in 2^17 increments. This chip also has circuitry that lets you modulate the sine wave outputs. AM, FM, FSK, PM, and BPSK can be implemented. More advanced DDS ICs are available with DAC resolution to 14 bits and a maximum clock input of 1 GHz.
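As a rough back-of-the-envelope check (my own arithmetic, using the fo = C·fclk/2^N relationship and the clock numbers quoted above), a 48-bit frequency word with a 300-MHz clock implies a tuning resolution on the order of a microhertz:

```python
f_ref = 15e6                          # 15-MHz external reference
f_clk = f_ref * 20                    # PLL multiplier set to 20 -> 300-MHz internal clock
step = f_clk / 2 ** 48                # smallest output-frequency increment (C changed by 1)
print(f_clk / 1e6, "MHz clock")       # 300.0 MHz clock
print(step * 1e6, "microhertz/step")  # roughly 1.07 microhertz per step
```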
Keep one important thing in mind. Although there are individual PLL and DDS synthesizer chips, today these circuits are more likely to be part of a larger system on a chip (SoC).
An important specification and characteristic of any signal (carrier) source, crystal oscillator, or frequency synthesizer is phase noise. Phase noise is the minor variation in the amplitude and phase of the signal generator output. The noise comes from natural semiconductor sources, power supply variations, or thermal agitation in the components. The phase variations manifest themselves as frequency variations. The result is what appears to be a sine wave signal source that has been amplitude and frequency modulated. Although these variations are small, they can result in degraded signals in both the transmitter and the receiver circuits.
For example, in the transmitter, variations in the carrier other than those imposed by the modulator can produce a “fuzzy” signal that can result in transmission errors. In the receiver, any added noise can mask and interfere with any small signals being received. A particularly difficult problem is the multiplication of the phase noise in PLL synthesizers. A PLL is a natural frequency multiplier that in effect amplifies the phase noise of the input crystal oscillator. The goal, therefore, is to minimize phase noise in the carrier signal by design or through the selection of signal sources with the least phase noise.
When looking at a sine wave carrier on a spectrum analyzer, what you should see is a single vertical straight line, its amplitude representing the signal power and its horizontal position representing the carrier frequency (fc).
See Fig. 8-19a. However, because of signal distortion or noise, what you actually see is the carrier signal accompanied by sidebands around the carrier made up of harmonics and phase noise components. Fig. 8-19b is an example. Serious harmonic distortion can be filtered out, but the phase noise cannot.
Notice in Fig. 8-19b that the noise sidebands occur both above and below the carrier frequency. When measuring phase noise, only the upper sidebands are considered; it is assumed that, because of the random nature of the noise, both upper and lower sidebands will be identical. Phase noise is designated as L(f) and represents the single-sideband power referenced to the carrier. It is calculated and measured as the ratio of the average noise power (Pn) in a 1 Hz bandwidth at a point offset from the carrier to the carrier signal power (Pc) expressed in dBc/Hz. The average noise power is referred to as the spectral power density,
L(f) = Pn /Pc
Fig. 8-20 shows a plot of the phase noise. Note that the noise power is averaged over a narrow 1-Hz bandwidth. The location of that 1-Hz window is offset from the carrier.
The phase noise is measured at different offset values from 1 kHz to 10 MHz or more, depending on the frequencies involved, the modulation type, and the application. Close-in phase noise is in the 1-kHz to 10-kHz range, whereas far-out phase noise is offset by 1 MHz or more.
The range of common phase noise values is −40 dBc/Hz to −170 dBc/Hz. The more negative the number, of course, the lower the phase noise. The noise floor is the lowest possible level and is defined by the thermal power in the circuitry; it could be as low as −180 dBc/Hz. In Fig. 8-20 the phase noise is −120 dBc/Hz at a 100-kHz offset.
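To make the dBc/Hz definition concrete, here is a small worked example in Python using assumed numbers (the 1-mW carrier and the noise power are illustrative, chosen so the result matches the −120 dBc/Hz value read from Fig. 8-20):

```python
import math

Pc = 1e-3        # assumed carrier power: 1 mW (0 dBm)
Pn = 1e-15       # assumed noise power in a 1-Hz bandwidth at a 100-kHz offset

L_f = 10 * math.log10(Pn / Pc)     # single-sideband phase noise relative to the carrier
print(L_f, "dBc/Hz")               # about -120 dBc/Hz
```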
Reference: Electronic Communication by Louis Frenzel.
Impeachment is the process by which a legislative body or other legally constituted tribunal initiates charges against a public official for misconduct.
In Latin America, which includes almost 40% of the world's presidential systems, ten presidents from six countries were removed from office by their national legislatures via impeachments or declarations of incapacity between 1978 and 2019.
National legislation differs regarding both the consequences and the definition of impeachment, but the intent is nearly always to expeditiously vacate the office. In most nations the process begins in the lower house of a bicameral assembly, which brings charges of misconduct; the upper house then administers an impeachment trial and pronounces any sentence. Most commonly, an official is considered impeached once the house votes to accept the charges, and impeachment by itself does not remove the official from office.
Because impeachment involves a departure from the normal constitutional procedures by which individuals achieve high office (election, ratification, or appointment), and because it generally requires a supermajority, it is usually reserved for offenses regarded as serious abuses of office.
Impeachment is provided for in the constitutional laws of many countries including Brazil, France, India, Ireland, the Philippines, Russia, South Korea, and the United States. It is distinct from the motion of no confidence procedure available in some countries whereby a motion of censure can be used to remove a government and its ministers from office. Such a procedure is not applicable in countries with presidential forms of government like the United States.
Etymology and history
The word "impeachment" likely derives from
The process was first used by the English "Good Parliament" against William Latimer, 4th Baron Latimer in the second half of the 14th century. Following the English example, the constitutions of Virginia (1776), Massachusetts (1780) and other states thereafter adopted the impeachment mechanism, but they restricted the punishment to removal of the official from office.
In various jurisdictions
In Brazil, as in most other Latin American countries, "impeachment" refers to the definitive removal from office. The process for removing a President of the Republic proceeds through the following stages:
Initiation: An accusation of a responsibility crime against the President may be brought by any Brazilian citizen; however, the President of the Chamber of Deputies holds the prerogative to accept the charge, which, if accepted, will be read at the next session and reported to the President of the Republic.
Extraordinary Committee: An extraordinary committee is elected, with member representation from each political party proportional to that party's membership. The President is then allowed ten parliamentary sessions for a defense, after which two legislative sessions are used to form a rapporteur's legal opinion as to whether impeachment proceedings will or will not be sent for trial in the Senate. The rapporteur's opinion is voted on in the Committee; on a simple majority it may be accepted. Failing that, the Committee adopts the opinion produced by the majority. For example, if the rapporteur's opinion is that no impeachment is warranted, and the Committee vote fails to accept it, then the Committee adopts the opinion to proceed with impeachment. Likewise, if the rapporteur's opinion is to proceed with impeachment, but it fails to achieve a majority in the Committee, then the Committee adopts the opinion not to impeach. If the vote succeeds, the rapporteur's opinion is adopted.
Chamber of Deputies: The Chamber issues a call-out vote to accept the opinion of the Committee, requiring a supermajority of two thirds in favor of an impeachment opinion (or a supermajority of two thirds against a dismissal opinion) of the Committee, in order to authorize the Senate impeachment proceedings. The President is suspended (provisionally removed) from office as soon as the Senate receives and accepts from the Chamber of Deputies the impeachment charges and decides to proceed with a trial.
The Senate: The process in the Senate was historically lacking in procedural guidance until 1992, when the Senate published in the Official Diary of the Union the step-by-step procedure of the Senate's impeachment process, which involves the formation of another special committee and closely resembles the lower house process, with time constraints imposed on the steps taken. The committee's opinion must be presented within 10 days, after which it is put to a call-out vote at the next session. The vote must proceed within a single session; the vote on President Rousseff took over 20 hours. A simple majority vote in the Senate begins formal deliberation on the complaint, immediately suspends the President from office, installs the Vice President as acting president, and begins a 20-day period for written defense as well as up to 180 days for the trial. In the event the trial proceeds slowly and exceeds 180 days, the Brazilian Constitution determines that the President is entitled to return and stay provisionally in office until the trial comes to its decision.
Senate plenary deliberation: The committee interrogates the accused or their counsel, from which they have a right to abstain, and also a probative session which guarantees the accused rights to contradiction, or audiatur et altera pars, allowing access to the courts and due process of law under Article 5 of the constitution. The accused has 15 days to present written arguments in defense and answer to the evidence gathered, and then the committee shall issue an opinion on the merits within ten days. The entire package is published for each senator before a single plenary session issues a call-out vote, which shall proceed to trial on a simple majority and close the case otherwise.
Senate trial: A hearing for the complainant and the accused convenes within 48 hours of notification from deliberation, from which a trial is scheduled by the president of the Supreme Court no less than ten days after the hearing. The senators sit as judges, while witnesses are interrogated and cross-examined; all questions must be presented to the president of the Supreme Court, who, as prescribed in the Constitution, presides over the trial. The president of the Supreme Court allots time for debate and rebuttal, after which time the parties leave the chamber and the senators deliberate on the indictment. The President of the Supreme Court reads the summary of the grounds, the charges, the defense and the evidence to the Senate. The senators in turn issue their judgement. On conviction by a supermajority of two thirds, the president of the Supreme Court pronounces the sentence and the accused is immediately notified. If there is no supermajority for conviction, the accused is acquitted.
Upon conviction, the officeholder has his or her political rights revoked for eight years, which bars them from running for any office during that time.
Fernando Collor de Mello, the 32nd President of Brazil, resigned in 1992 amidst impeachment proceedings. Despite his resignation, the Senate nonetheless voted to convict him and bar him from holding any office for eight years, due to evidence of bribery and misappropriation.
In 2016, the Chamber of Deputies and the Senate impeached President Dilma Rousseff, who was removed from office at the conclusion of her Senate trial.
The process of impeaching the president of the Czech Republic has changed over time.
In 2013, the constitution was changed. Since 2013, the process can be started by at least three-fifths of present senators, and must be approved by at least three-fifths of all members of the Chamber of Deputies within three months. Also, the President can be impeached for high treason (newly defined in the Constitution) or any serious infringement of the Constitution.
The process starts in the Senate of the Czech Republic, which has the right to impeach only the president. After approval by the Chamber of Deputies, the case is passed to the Constitutional Court of the Czech Republic, which must decide the verdict against the president. If the Court finds the President guilty, the President is removed from office and is permanently barred from being elected President of the Czech Republic again.
No Czech president has ever been impeached, though members of the Senate sought to impeach President Václav Klaus in 2013. This case was dismissed by the court, which reasoned that his mandate had expired. The Senate also proposed to impeach president Miloš Zeman in 2019 but the Chamber of Deputies did not vote on the issue in time and thus the case did not even proceed to the Court.
In Denmark the possibility of current and former ministers being impeached was established with the Constitution of 1849, which also created a special Court of Impeachment (Rigsretten) to hear such cases.
In 1995 the former Minister of Justice Erik Ninn-Hansen from the Conservative People's Party was convicted by the Court of Impeachment in the so-called Tamil case, which concerned the unlawful halting of family reunifications for Tamil refugees.
In February 2021 the former Minister for Immigration and Integration Inger Støjberg, at that time a member of the Danish Liberal Party Venstre, was impeached when it was discovered that she had, possibly against both Danish and international law, tried to separate couples in refugee centres in Denmark because the wives of the couples were under legal age. According to a commission report, Inger Støjberg had also lied to the Danish Parliament and failed to report relevant details to the Parliamentary Ombudsman. The decision to initiate an impeachment case was adopted by the Danish Parliament with a 141–30 vote (in Denmark, 90 members of parliament must vote for impeachment before it can be implemented). On 13 December 2021 Støjberg was convicted by the special Court of Impeachment of separating asylum-seeker families illegally under Danish and international law and was sentenced to 60 days in prison. The majority of the judges in the Court of Impeachment (25 of the 26 judges) found it proven that on 10 February 2016 Støjberg had decided that an accommodation scheme should apply without the possibility of exceptions, so that all asylum-seeking spouses and cohabiting couples in which one partner was a minor aged 15–17 had to be separated and accommodated in separate asylum centers. On 21 December, a majority in the Folketing voted that the sentence meant she was no longer worthy of sitting in the Folketing, and she therefore immediately lost her seat.
In France the comparable procedure is called destitution. The president can be removed by the two houses of Parliament convened as a High Court in the event of a breach of duties patently incompatible with the exercise of the mandate, a decision that requires a two-thirds majority.
There is no formal impeachment process for the chancellor of Germany; however, the Bundestag can replace the chancellor at any time by electing a new chancellor (constructive vote of no confidence, Article 67 of the Basic Law). The federal president, by contrast, can be impeached before the Federal Constitutional Court for willful violation of the Basic Law or another federal law (Article 61 of the Basic Law).
There has never been an impeachment of the president. Constructive votes of no confidence against the chancellor occurred in 1972 and 1982, with only the second one being successful.
The chief executive of Hong Kong can be impeached by the Legislative Council. A motion for investigation, initiated jointly by at least one-fourth of all the legislators, charging the chief executive with "serious breach of law or dereliction of duty" and refusing to resign, must first be passed by the council. An independent investigation committee, chaired by the chief justice of the Court of Final Appeal, then carries out the investigation and reports back to the council. If the council finds the evidence sufficient to substantiate the charges, it may pass a motion of impeachment by a two-thirds majority (Basic Law, Article 73(9)).
However, the Legislative Council does not have the power to actually remove the chief executive from office, as the chief executive is appointed by the Central People's Government (State Council of China). The council can only report the result to the Central People's Government for its decision (Basic Law, Article 45).
Article 13 of Hungary's Fundamental Law (constitution) provides for the process of impeaching and removing the president. The president enjoys immunity from criminal prosecution while in office, but may be charged with crimes committed during his term afterwards. Should the president violate the constitution while discharging his duties or commit a willful criminal offense, he may be removed from office. Removal proceedings may be proposed by the concurring recommendation of one-fifth of the 199 members of the country's unicameral Parliament. Parliament votes on the proposal by secret ballot, and if two thirds of all representatives agree, the president is impeached. Once impeached, the president's powers are suspended, and the Constitutional Court decides whether or not the President should be removed from office.
The president and judges, including the chief justice of the supreme court and high courts, can be impeached by the parliament before the expiry of their term for violation of the Constitution. Other than impeachment, no penalty can be imposed on a sitting president for violation of the Constitution under Article 361 of the constitution. However, after removal a former president can be punished for already proven unlawful activity, such as disrespecting the Constitution. No president has faced impeachment proceedings, so the provisions for impeachment have never been tested. The sitting president cannot be criminally charged and would need to step down for that to happen.
Where one house impeaches the president, the remaining house either investigates the charge or commissions another body or committee to do so. The investigating house can remove the president if it decides, by at least a two-thirds majority of its members, both that the president is guilty of the charge and that the charge is sufficiently serious as to warrant the president's removal. To date no impeachment of an Irish president has ever taken place. The president holds a largely ceremonial office, the dignity of which is considered important, so it is likely that a president would resign from office long before undergoing formal conviction or impeachment.
In Italy, according to Article 90 of the Constitution, the President of Italy can be impeached through a majority vote of the Parliament in joint session for high treason and for attempting to overthrow the Constitution. If impeached, the president of the Republic is then tried by the Constitutional Court integrated with sixteen citizens older than forty chosen by lot from a list compiled by the Parliament every nine years.
Italian press and political forces made use of the term "impeachment" for the attempts by some members of the parliamentary opposition to initiate the procedure provided for in Article 90 against Presidents Francesco Cossiga (1991), Giorgio Napolitano (2014) and Sergio Mattarella (2018); none of these attempts succeeded.
By Article 78 of the Constitution of Japan, judges can be impeached. The voting method is specified by law. The National Diet has two organs for this purpose, the Judge Indictment Committee (裁判官訴追委員会, Saibankan sotsui iinkai) and the Judge Impeachment Court (裁判官弾劾裁判所, Saibankan dangai saibansho), the latter established under Article 64 of the Constitution. The former has a role similar to that of a prosecutor, and the latter is analogous to a court. Seven judges have been removed through this process.
Members of the Liechtenstein Government can be impeached before the State Court for breaches of the Constitution or of other laws (Constitution, Article 62). As a hereditary monarchy, the Sovereign Prince cannot be impeached, as he "is not subject to the jurisdiction of the courts and does not have legal responsibility" (Constitution, Article 7). The same is true of any member of the Princely House who exercises the function of head of state should the Prince be temporarily prevented or in preparation for the Succession (Constitution, Article 7).
Members of government, representatives of the national assembly (Stortinget) and Supreme Court judges can be impeached for criminal offenses tied to their duties and committed in office, according to the Constitution of 1814, §§ 86 and 87. The procedural rules were modeled after the U.S. rules and are quite similar to them. Impeachment has been used eight times since 1814, last in 1927. Many argue that impeachment has fallen into desuetude. In cases of impeachment, an appointed court (Riksrett) takes effect.
Impeachment in the Philippines follows procedures similar to those of the United States.
A main difference from U.S. proceedings however is that only one third of House members are required to approve the motion to impeach the president (as opposed to a simple majority of those present and voting in their U.S. counterpart). In the Senate, selected members of the House of Representatives act as the prosecutors and the senators act as judges with the Senate president presiding over the proceedings (the chief justice jointly presides with the Senate president if the president is on trial). Like the United States, to convict the official in question requires that a minimum of two thirds (i.e. 16 of 24 members) of all the members of the Senate vote in favor of conviction. If an impeachment attempt is unsuccessful or the official is acquitted, no new cases can be filed against that impeachable official for at least one full year.
Impeachment proceedings and attempts
In 2005, 2006, 2007 and 2008, impeachment complaints were filed against President Gloria Macapagal Arroyo, but none of the cases reached the required endorsement of one-third of the members for transmittal to, and trial by, the Senate.
In March 2011, the House of Representatives impeached Ombudsman Merceditas Gutierrez, becoming the second person to be impeached. In April, Gutierrez resigned prior to the Senate's convening as an impeachment court.
In December 2011, in what was described as "blitzkrieg fashion", 188 of the 285 members of the House of Representatives voted to impeach Chief Justice Renato Corona, transmitting the case to the Senate for trial.
To date, three officials have been successfully impeached by the House of Representatives, and two of them were not convicted. The most recent, Chief Justice Renato C. Corona, was convicted on 29 May 2012 by the Senate under Article II of the Articles of Impeachment (for betraying public trust), with 20–3 votes from the senator-judges.
The president can be impeached by Parliament and is then suspended. A referendum then follows to determine whether the suspended president should be removed from office. President Traian Băsescu was impeached twice by the Parliament: in 2007 and again in July 2012. In a referendum held on 19 May 2007, a large majority of the electorate voted against removing the president from office. For the second suspension, a referendum was held on 29 July 2012; the results were heavily against the president, but the referendum was invalidated because of low turnout.
In 1999, members of the State Duma attempted to impeach President Boris Yeltsin on several charges, including the war in Chechnya, but none of the charges obtained the required two-thirds majority.
The Constitution of Singapore allows the impeachment of a sitting president on charges of treason, violation of the Constitution, corruption, or attempting to mislead the Presidential Elections Committee for the purpose of demonstrating eligibility to be elected as president. The prime minister or at least one-quarter of all members of Parliament (MPs) can pass an impeachment motion, which can succeed only if at least half of all MPs (excluding nominated members) vote in favor, whereupon the chief justice of the Supreme Court will appoint a tribunal to investigate allegations against the president. If the tribunal finds the president guilty, or otherwise declares that the president is "permanently incapable of discharging the functions of his office by reason of mental or physical infirmity", Parliament will hold a vote on a resolution to remove the president from office, which requires a three-quarters majority to succeed. No president has ever been removed from office in this fashion.
When the Union of South Africa was established in 1910, the only officials who could be impeached (though the term itself was not used) were the chief justice and judges of the Supreme Court of South Africa. The scope was broadened when the country became a republic in 1961, to include the state president. It was further broadened in 1981 to include the new office of vice state president; and in 1994 to include the executive deputy presidents, the public protector and the Auditor-General. Since 1997, members of certain commissions established by the Constitution can also be impeached. The grounds for impeachment, and the procedures to be followed, have changed several times over the years.
According to Article 65(1) of the Constitution of South Korea, the National Assembly may impeach the president, the prime minister, or other state council members if they violate the Constitution or other laws in the performance of their official duties.
Two presidents have been impeached since the establishment of the Sixth Republic of South Korea: Roh Moo-hyun in 2004, whose impeachment was overturned by the Constitutional Court, and Park Geun-hye in 2016, whose impeachment was upheld by the Court and who was removed from office in March 2017.
In February 2021, Judge Lim Seong-geun of the Busan High Court was impeached by the National Assembly for meddling in politically sensitive trials, the first ever impeachment of a judge in Korean history. Unlike presidential impeachments, only a simple majority is required to impeach. Judge Lim's term expired before the Constitutional Court could render a verdict, leading the court to dismiss the case.
In Turkey, according to the Constitution, the Grand National Assembly may initiate an investigation of the president, the vice president or any member of the Cabinet upon the proposal of a simple majority of its total members and, within a period of less than a month, the approval of three-fifths of the total members. The investigation is carried out by a commission of fifteen members of the Assembly, each nominated by the political parties in proportion to their representation therein. The commission submits its report indicating the outcome of the investigation to the speaker within two months. If the investigation is not completed within this period, the commission's time may be renewed for another month. Within ten days of its submission to the speaker, the report is distributed to all members of the Assembly, and ten days after its distribution, the report is discussed on the floor. Upon the approval of two-thirds of the total number of the Assembly by secret vote, the person or persons about whom the investigation was conducted may be tried before the Constitutional Court. The trial must be finalized within three months; if it is not, a one-time additional period of three months is granted. A president about whom an investigation has been initiated may not call for an election. A president who is convicted by the Court is removed from office.
The provisions of this article also apply to offenses that the president is alleged to have committed during the term of office.
In the federal system, the House of Representatives has the sole power to impeach federal officials, and the Senate has the sole power to try impeachments.
According to the House practice manual, "Impeachment is a constitutional remedy to address serious offenses against the system of government. It is the first step in a remedial process—that of removal from public office and possible disqualification from holding further office. The purpose of impeachment is not punishment; rather, its function is primarily to maintain constitutional government."
The U.S. House of Representatives has impeached an official 21 times since 1789: four times for presidents, 15 times for federal judges, once for a Cabinet secretary, and once for a senator. Of the 21, the Senate voted to remove 8 (all federal judges) from office. The four impeachments of presidents were: Andrew Johnson in 1868, Bill Clinton in 1998, and Donald Trump in 2019 and again in 2021. All four impeachments were followed by acquittal in the Senate. An impeachment process was also commenced against Richard Nixon, but he resigned in 1974 to avoid likely removal from office.
- ^ a b "impeachment | Definition, Process, History, & Facts". Encyclopedia Britannica. Retrieved 15 November 2020.
- ISBN 978-0-308-10353-5.
1. To charge (a high public official) before a legally constituted tribunal with crime or misdemeanor in office. 2. To bring discredit upon the honesty or validity of.
- ^ Michael J. Gerhardt. "Impeachment is the law. Saying 'political process' only helps Trump's narrative". Washington Post.
while it's true that politics are bound up in how impeachment plays out, it's a myth that impeachment is just political. Rather, it's the principal legal remedy that the Constitution expressly specifies to hold presidents accountable
- ^ ISBN 9780226554976.
The ratification debates support the conclusion that 'other high Crimes and Misdemeanors' were not limited to indictable offenses but rather included great offenses against the federal government. ... Justices James Wilson and Joseph Story expressed agreement with Hamilton's understanding of impeachment as a political proceeding and impeachable offenses as political crimes.
- ^ LCCN 2018013560.
Impeachment has elements of both legal and political proceedings. As a result, it is a unique process.
- ^ ISBN 0-7166-0105-2.
- ^ "Impeachment". UK Parliament Glossary. Retrieved 5 February 2021.
Impeachment is when a peer or commoner is accused of 'high crimes and misdemeanours, beyond the reach of the law or which no other authority in the state will prosecute.'
- ^ Lawler, David (19 December 2019). "What impeaching leaders looks like around the world". Axios. Retrieved 8 February 2021.
- ^ Huq, Aziz; Ginsburg, Tom; Landau, David. "Designing Better Impeachments: How other countries' constitutions protect against political free-for-alls". Boston Review. Retrieved 8 February 2021.
Constitutions in 9 democracies give a court—often the country's constitutional court—the power to begin an impeachment; another 61 constitutions place the court at the end of the process.
- ^ Ignacio Arana Araya, To Impeach or Not to Impeach: Lessons from Latin America, Georgetown Journal of International Affairs (December 13, 2019).
- ISSN 1546-6981.
- ^ Peter Brandon Bayer (23 May 2019). "The Constitution dictates that impeachment must not be partisan". The Conversation.
Noted scholars Ronald Rotunda and John Nowak explain that the Framers wisely intended the phrase "or other high Crimes and Misdemeanors" to include undermining the Constitution and similar, "great offenses against the federal government (like abuse of power) even if they are not necessarily crimes.' For instance, Alexander Hamilton asserted that, while likely to be criminal acts, impeachable wrongdoings 'are those offenses which proceed from the misconduct of public men ... from the abuse or violation of some public trust.' James Madison urged that impeachment is appropriate for 'loss of capacity, or corruption ... [that] might be fatal to the republic.'
- ^ a b "Impeachment". U.S. Constitution Annotated. Congressional Research Service – via Legal Information Institute, Cornell Law School.
- ^ a b c Cole, J. P.; Garvey, T. (29 October 2015). "Report No. R44260, Impeachment and Removal" (PDF). Congressional Research Service. pp. 15–16. Archived (PDF) from the original on 19 December 2019. Retrieved 22 September 2016.
- ^ Hauss, Charles (29 December 2006). "Vote of confidence". Britannica. Retrieved 9 February 2021.
- ^ ISBN 978-90-04-10631-4.
- ^ a b Maciel, Lourenço (8 February 2020). "Was it a coup? Democracy and Constitutionality in the 2016 Brazilian Impeachment Process". Dilma Rousseff's Impeachment. Archived from the original on 24 March 2021. Retrieved 5 March 2021.
- ^ Andrew Jacobs (17 April 2016). "Brazil's Lower House of Congress Votes for Impeachment of Dilma Rousseff". The New York Times. Archived from the original on 3 January 2022. Retrieved 13 November 2016.
- ^ "Constitution of Croatia". § 105. Archived from the original (PDF) on 28 June 2018. Retrieved 12 March 2017.
- ^ Ústava České republiky. Psp.cz. Retrieved on 2016-10-23.
- ^ Ústava České republiky. Psp.cz. Retrieved 2013-07-12.
- ^ "Czech President Vaclav Klaus faces treason charge". BBC News. 4 March 2013. Retrieved 23 October 2016.
- Radio Praha.
- ^ "Senát schválil ústavní žalobu na prezidenta republiky". 24 July 2019.
- ^ "The Danish Constitution". Archived from the original on 2 July 2021. Retrieved 3 February 2021.
- ^ "Tamilsagen 1986–1995". danmarkshistorien.dk.
- ^ "HUDOC". European Court of Human Rights.
- ^ "Denmark's ex-immigraton minister set to face impeachment trial". euronews. 14 January 2021.
- ^ "Denmark's ex-immigration minister convicted over asylum seeker policy". euronews. 13 December 2021.
- ^ "Rigsretten – Rigsretten har afsagt dom i sagen mod fhv. minister Inger Støjberg". rigsretten.dk. Archived from the original on 14 December 2021. Retrieved 21 January 2022.
- ^ "Folketinget har stemt: Inger Støjberg er ikke værdig til at sidde i Folketinget". www.dr.dk. 21 December 2021.
- ^ "Le président de la République peut-il être destitué ? Et si oui, pour quelles raisons ?". Libération.fr. 25 July 2018. Archived from the original on 27 May 2019. Retrieved 17 March 2019.
- ^ a b "Basic Law of Hong Kong". basiclaw.gov.hk. Hong Kong Special Administrative Region Government. Archived from the original on 30 December 2014. Retrieved 13 November 2016.
- ^ "Magyarország Alaptörvénye—Hatályos Jogszabályok Gyűjteménye". net.jogtar.hu (in Hungarian). 25 April 2011. Retrieved 5 November 2019.
- ^ "Fundamental Law of Hungary". www.constituteproject.org. Retrieved 5 November 2019.
- ^ "The Prevention of Insults to National Honour (Amendment) Act of 1971" (PDF). Archived from the original (PDF) on 23 January 2017. Retrieved 2 July 2017.
- ^ Cowell, Alan (13 December 1991). "President of Italy is Making Political Waves". The New York Times.
- ^ "Italy parliament rejects bid to impeach President Napolitano". Reuters. 11 February 2014.
- ^ Horowitz, Jason (28 May 2018). "Italian President's Loyalty to the Euro Creates Chaos". The New York Times. Archived from the original on 3 January 2022.
- ^ "The Constitution of Japan". Japanese Law Translation. Archived from the original on 5 January 2021. Retrieved 10 August 2020.
- ^ "裁判官弾劾裁判所公式サイト / トップページ (音声ブラウザ対応)". www.dangai.go.jp.
- ^ a b c "Constitution of the Principality of Liechtenstein" (PDF). hrlibrary.umn.edu. Legal Service of the Government of the Principality of Liechtenstein. 2003. Retrieved 13 November 2016.
- ^ "The Constitution of the Republic of Lithuania". Retrieved 4 April 2016.
- ^ "Lithuanian Parliament Removes Country's President After Casting Votes on Three Charges". The New York Times. 7 April 2004. Retrieved 4 April 2016.
- ^ Chan-Robles Virtual Law Library. "The 1987 Constitution of the Republic of the Philippines—Article XI". Retrieved 25 July 2008.
- ^ "Peru's leader faces impeachment". Bbc.com. 15 December 2017. Retrieved 28 December 2017.
- ^ "Lawmakers who helped Peru president survive impeachment bid say democracy won". Efe.com. 22 December 2017. Retrieved 28 December 2017.
- ^ ro:Referendumul pentru demiterea președintelui României, 2012
- ^ "Yeltsin impeachment hearings begin", The Guardian (May 13, 1999).
- ^ David Hoffman, "Bid to Impeach Yeltsin Defeated", Washington Post (May 16, 1999).
- ^ Michael Wines, "Drive to Impeach Russian President Dies in Parliament", New York Times (May 16, 1999).
- ^ "Constitution of the Republic of Singapore—Singapore Statutes Online". /sso.agc.gov.sg. 2019.
- ^ "Constitution of the Republic of Korea". Korea Legislation Research Institute. Retrieved 5 May 2022.
- ^ Kim, Da-sol (8 December 2016). "Revisiting Roh Moo-hyun impeachment". The Korea Herald. Retrieved 9 February 2021.
- Al Jazeera. 10 March 2017.
- ^ "Legislature impeaches judge for political meddling". Korea JoongAng Daily. 4 February 2021.
- ^ "Constitutional Court rejects first-ever impeachment of judge". 28 October 2021.
- ^ "Grand National Assembly of Turkey" (PDF). tbmmgov.tr. 2018.
- ^ a b Simson Caird, Jack (6 June 2016). "Commons Briefing papers CBP-7612" (PDF). House of Commons Library. Retrieved 14 May 2019.
- U.S. Government Publishing Office, p. 594 (quoting U.S. Const. art. I, Sec. 2, cl. 5; Sec. 3, cl. 6.).
- ^ ArtII.S220.127.116.11 Offices Eligible for Impeachment Archived 18 March 2021 at the Wayback Machine, Constitution Annotated, Congress.gov.
- ^ U.S. Constitution. Article I, § 3, clause 6. 12 November 2009.
- U.S. Government Publishing Office, p. 594: "An impeachment is instituted by a written accusation, called an 'Article of Impeachment,' which states the offense charged. The articles serve a purpose similar to that of an indictment in an ordinary criminal proceeding. Manual Sec. 609."
- U.S. Government Publishing Office, p. 591.
- ^ Art I.S3.C7.1.1 Judgment in Cases of Impeachment: Overview Archived 24 February 2021 at the Wayback Machine, Constitution Annotated.
- ^ "Memorandum: Whether a Former President May Be Indicted and Tried for the Same Offenses for Which He was Impeached by the House and Acquitted by the Senate", U.S. Department of Justice, Office of Legal Counsel (August 18, 2000).
- ^ a b c "U.S. Senate: Impeachment". www.senate.gov. Retrieved 19 September 2018.
- ^ Maggie Astor (13 January 2021). "The Impeachment Proceedings That Came Before". The New York Times.
- ISBN 9780226289571.
attempted Impeachment of William O. Douglas.
- ^ "Impeachment and the states: A look at the history, provisions in place". knowledgecenter.csg.org.[permanent dead link]
- ^ "Research Response: Governors' Impeachments in U.S. History", Illinois General Assembly Legislative Research Unit (July 8, 2008).
- The dictionary definition of impeachment at Wiktionary
- Media related to Impeachments at Wikimedia Commons
History of Bipartisanship
1787: The Great Compromise
In debating a new model for self-rule that would eventually become the Constitution, states’ delegates in the summer of 1787 were so intensely divided over the difficult idea of congressional representation that the very topic threatened to end the Constitutional Convention. Representatives from small states were loath to approve any plan that tampered with the equal representation they currently enjoyed under the Articles of Confederation. Representatives from large, populous states—who wanted proportional representation—thought the current system was obviously unfair. It was Connecticut’s well-respected Roger Sherman who proposed a compromise: a proportional House of Representatives and a Senate with equal representation, an idea that seems familiar to us now but was so radical in 1787 that, at first, it was dismissed by the group. Eventually the Connecticut Compromise—known now as the Great Compromise—was adopted and the opposing sides in the debate each felt vindicated.
1860: Lincoln’s Team of Rivals
As smaller political parties were evolving into what was to become the modern Republican party, each faction, representing differing viewpoints on slavery and federal power, had a favorite son in the presidential election of 1860. By the time of the Republican party convention, three men representing these factions emerged as party favorites: N.Y. Sen. William Seward, Ohio Gov. Salmon P. Chase and Missouri judge Edward Bates. That all three lost the presidential nomination to a country lawyer named Abraham Lincoln was the first surprise of 1860; that Lincoln won the general election and then appointed all three of his Republican rivals to his cabinet was the second. Lincoln later added a Democrat—Edwin Stanton—as his Secretary of War. Lincoln’s so-called “team of rivals” has come to be seen as a watershed political moment; as Lincoln himself explained to a newspaper reporter, he felt he had no right to deprive the country of its strongest minds simply because they sometimes disagreed with him.
1945: Truman’s Supreme Court Appointee
While President Franklin D. Roosevelt had some bipartisan record—he appointed Republicans as Secretaries of War and Navy—his squelched plan to pack the Supreme Court was still a bitter pill among Washington Republicans. Three months after FDR’s death, new President Harry S Truman was faced with an open Supreme Court seat, seven associate Court justices already appointed by the Democratic FDR and a legislative branch full of skeptical Republican eyes waiting to see what he would do. While naming a Democrat to the seat likely would have been approved, Truman broke with his party and instead chose Republican Ohio Sen. Harold Burton for the Court. It was an olive branch to congressional Republicans—and a chance for a new president to find common ground with the congressional opposition.
1945: Senator Vandenberg’s Bipartisan Foreign Policy
While Americans were fighting overseas in World War II, many congressional Republicans were increasingly wary of a lengthy American involvement in Europe after the war ended. Among these isolationists, Michigan Republican Senator Arthur Vandenberg was the unofficial spokesman. But seeing Democrats and Republicans growing increasingly polarized about America’s role in the world while recognizing the threat a remilitarized Germany and Japan might pose, Vandenberg was moved to address the Senate in 1945, declaring that no country could “immunize itself” from the rest of the world. Vandenberg offered his cooperation to FDR in post-war planning that eventually encompassed America’s role in both the United Nations and NATO. Years later, Vandenberg summed up his view of bipartisan foreign policy: “In a word, it simply seeks national security ahead of partisan advantage.” Politics, he famously said, “stops at the water’s edge.”
1964: Civil Rights Act
With civil rights marches and racial violence dominating the news, the issue of African Americans’ legal rights could no longer be ignored. A civil rights bill proposed by congressional Democrats and supported by the White House had just passed the House of Representatives when, in early 1964, the Senate took it up for debate. Twenty-one of the Senate’s 67 Democrats were from the South and publicly opposed the bill; as a bloc they began what became the longest filibuster in Senate history. The Senate’s Democratic leaders needed Republican votes to stop the filibuster and Democratic majority leader Mike Mansfield asked his counterpart, Republican Senator Everett Dirksen to step in: “I appeal to the distinguished minority leader whose patriotism has always taken precedence over his partisanship, to join with me … in finding the Senate’s best contribution … to the resolution of this grave national issue,” Mansfield said. Dirksen did more than join with Mansfield—he exhorted his colleagues to end not just the filibuster but America’s difficult past and bring the Civil Rights Act to a vote. “I appeal to all Senators,” he told the chamber. “We are confronted with a moral issue. Today let us not be found wanting …” With Dirksen’s leadership, 27 Republican senators joined 44 Democrats to end debate on June 10, 1964; the bill passed nine days later.
1965: The Great Society
A vision of President Lyndon B. Johnson, the Great Society program was given to Congress as a policy agenda in January 1965. As one of the most ambitious agendas in American history, the Great Society program, which took its name from one of President Johnson’s speeches, aimed to eliminate poverty and racial injustice, increase aid to education, and promote urban renewal and conservation, to name just a few goals. Congress answered the president’s call to action and enacted, with some adjustments, many of Johnson’s recommendations. The Elementary and Secondary Education Act of 1965 had its foundation in Johnson’s Great Society and garnered great support from legislators of both parties, passing with no amendments and little debate in only 87 days. The Civil Rights Act of 1964, Medicare, and the creation of the Corporation for Public Broadcasting are just some of the programs that resulted from both parties in Congress working together to implement real change in the American societal landscape.
1969: Man on the Moon
When the Soviet Union launched the first man-made satellite, Sputnik 1, into space on October 4, 1957, the U.S. found itself with only a fledgling space program. Alarmed at what it perceived as the Soviet Union’s technological lead in space, Congress urged President Dwight D. Eisenhower to take immediate action and support a larger U.S. space program. It was only with the collaboration and bipartisanship of members of Congress that the National Aeronautics and Space Administration (NASA) was conceived and then signed into being by President Eisenhower in 1958. Eleven years later, astronaut Neil Armstrong became the first human to walk on the moon, returning safely to Earth with the Apollo 11 mission. Thirty years on, it is only with the continued bipartisan support of presidents and Congress alike that NASA is still provided with the resources and tools it needs to keep our space dreams alive.
1973: Endangered Species Act
In 1973, President Richard Nixon called on Congress to make sweeping changes to U.S. environmental policy, calling existing species conservation efforts inadequate. Democratic lawmakers Representative John Dingell and Senator Harrison Williams authored the endangered species bills, which drew wide support from their Republican colleagues. Congress passed the Endangered Species Act of 1973 with overwhelming support from both sides of the aisle. The new law included protections for plants, invertebrates, and the ecosystems on which they depend. Once a species was placed on the endangered list, federal agencies would be tasked under the ESA with developing a plan to return it to healthy, stable levels. As of 2009, more than 20 species had been de-listed due to recovery, and many others have had their status downlisted from “endangered” to “threatened.”
1977: The Food Stamp Program
The United States’ first Food Stamp program—the government assistance plan to provide food to the needy—was created during the Great Depression but phased out in 1943 when it was no longer needed. When the Kennedy Administration reintroduced a pilot test of the program in the early 1960s, it was not universally welcomed back, a division that only increased when the Johnson Administration made the program a permanent part of its “Great Society” a few years later. Though it was a federal assistance program, it was run by the states, which, backed by Republicans in Congress, worried about the administrative costs associated with the rapidly growing program. As various bills were introduced in the 1970s to control costs and refine the eligibility requirements of the burgeoning program, Democratic supporters began to worry that too many obstacles were being put in front of families who needed help. But in 1977, Republican Senator Bob Dole and Democratic Senator George McGovern joined forces to support a bipartisan compromise intended to address both sides’ concerns: control costs by more tightly focusing eligibility requirements on the truly needy while also streamlining the program’s purchase processes. In the end, the two senators convinced their colleagues that the legislation they supported could achieve both Democratic and Republican goals—and the 1977 Food Stamp Act became law.
1983: Social Security Reform
Almost from its inception in 1935, Social Security has been one of the thorniest political issues in Washington. Seen on the left as an immutable promise to American citizens and on the right as an unmanageable beast destined to bankrupt the government, it’s easy to see why Social Security is nicknamed the “third rail” of policy debate; it burns anyone who dares touch it. But in the early 1980s, official Washington had no choice; the Social Security Trust Fund was poised to begin running a deficit. In 1981, President Ronald Reagan appointed a commission to study solutions to the looming problem. When the commission made its recommendations in 1983, it was Republican Sen. Bob Dole and Democratic Sen. Daniel Patrick Moynihan—party leaders respected at both ends of Pennsylvania Avenue—who led a bipartisan group of legislators in turning the recommendations into legislation. Trying to keep the Social Security Trust Fund solvent would mean amending the program, a move the group knew would likely mean an intense and bitter partisan battle in the halls of power. But Moynihan reminded his cohorts to focus on solving the discrete problem at hand and not get swayed by the partisan debate swirling around them. “Everyone is entitled to their own opinions,” Moynihan famously quipped, “but not their own facts.” In the end, the group’s reforms to the Social Security Act passed and were signed into law by President Reagan.
1986: Tax Reform Act
Some bipartisan moments are borne of a desire to stand on high moral principles, others are borne of more down-to-earth interests. In the divided government of 1986, Republican President Ronald Reagan found himself with a Democratic House and a Republican Senate. While the situation seemed ripe for gridlock, when it came to the 1986 Tax Reform Act, just the opposite happened—nobody wanted to look like the bad guy who killed tax reform. Lowering taxes was a hallmark of Reagan’s presidential campaigns; reforming the tax code was a goal of both parties (Democrats favored simplifying the system and eliminating loopholes, Republicans favored treating capital gains and investment income the same as regular income). An unlikely alliance was formed. Add in two powerful committee chairmen in the House (Rep. Dan Rostenkowski) and Senate (Sen. Bob Packwood) who saw passage of the bill as a test of their political might and the United States got what cynics said could never be done: the biggest and most complete overhaul of the tax code in post-war America.
1990: Americans with Disabilities Act
While Americans had elected a disabled man as president in 1932, it was not until almost 60 years later that the rights of people like President Franklin D. Roosevelt became protected under law. The Americans with Disabilities Act, which makes it illegal to discriminate based on disability, was signed into law by President George H.W. Bush in 1990. The landmark civil rights legislation had been difficult to pass, however, with critics claiming that accommodating disabled individuals was unnecessary and would impose an undue burden on employers. Seeing the need to protect a minority from discrimination, members of Congress on both sides of the aisle came together to pass the ADA. Bipartisan Policy Center founders Senators Bob Dole and George Mitchell were early supporters of the law and were instrumental in its passage.
1995: Blue Dog Democrats Formed
In the historic 1994 mid-term elections, House Republicans staged an unprecedented takeover of the congressional body, turning a large Democratic majority into a serious minority. For some Democrats, though, the election-day thumping wasn’t surprising. Forty-seven House Democrats, fiscally moderate if not downright conservative and mostly from conservative-leaning districts, had long grown wary of what they saw as their party’s drift to the left and its unyielding demand to toe an orthodox party line. Feeling they’d been “choked blue” by their party’s leaders, they named themselves the “Blue Dog Coalition” and set about finding a middle ground between the warring edges of both parties. Encompassing a variety of viewpoints, the Blue Dogs are, to this day, engaged in the search for common fiscal ground between the political parties.
1996: Welfare Reform
Despite a bitterly divided government in 1996, Congress passed and President Bill Clinton signed into law one of the most sweeping changes to the country’s welfare system. Welfare programs had long been a political dividing line between liberals and conservatives, but by 1996, the threat of intergenerational dependency on government welfare was clear to members of both parties. The Congress, working with the White House, walked a tightrope that made welfare opponents and supporters alternately elated and enraged. Work requirements and child support enforcement were strengthened (a Republican goal), while spending on education and child care was increased (a Democratic goal). Years later, President Clinton wrote that “I was widely criticized by liberals who thought the work requirements too harsh and conservatives who thought the work incentives too generous.” But sometimes, that’s what compromise is.
1997: State Children’s Health Insurance Program
Despite making health care reform a centerpiece of the 1992 Democratic platform, the issue remained an unfulfilled goal for much of the 1990s until Democratic Sen. Edward Kennedy stepped into the breach. To address the growing problem of health care for children of the “working poor” (families who couldn’t afford health care coverage on their own but had too much income to qualify for Medicaid), Sen. Kennedy proposed legislation to create a federal matching fund for states that helped pay for such care. Sen. Kennedy, as Eastern and liberal a senator as they come, found an unlikely partner across the aisle to co-sponsor his legislation: Republican Sen. Orrin Hatch, a western conservative whose career would seem to be the polar opposite. With Hatch involved, congressional conservatives were mollified that the program would not derail the quest for a balanced budget, and the Hatch-Kennedy bill, signed into law later that year, established the State Children’s Health Insurance Program (SCHIP).
2001: No Child Left Behind
Republican President George W. Bush, following up on a campaign promise, introduced a blueprint to Congress for a new and sweeping federal slate of standards-based education programs. Using the president’s goals as a draft, two Republicans (Rep. John Boehner and Sen. Judd Gregg) and one Democrat (Rep. George Miller) signed on as co-authors of the joint legislation. But it was when Democratic Sen. Edward Kennedy, one of his chamber’s most outspoken proponents of education reform and also one of the president’s most powerful detractors, lent his name to the bill that it stood a chance of overcoming the obstacles of inertia and interest-group politicking. While the ultimate effectiveness of what became known as the No Child Left Behind Act is still being measured, its bipartisan birth is already in the history books.
2001: September 11
The terrorists who carried out the massive attacks of Sept. 11, 2001, hoped for death and destruction. The bodies of 3,000-plus Americans can attest to that. But their plans to cow America and weaken our government were thwarted almost from the moment the first hijacked airliner hit. Through the tears of shock and sorrow, American citizens united in a show of unprecedented national resolve. It was an attack on the things we hold most dear—and it shook America out of an almost decade-long political schism. In Congress, planned parliamentary obstacles and committee objections were forgotten as members gathered on the Capitol’s East Steps to sing “God Bless America”—not for the cameras, but for each other. And Wal-Mart, the very model of world-wide retail efficiency, struggled to keep up with the demand for American flags as our citizens felt an urgency not seen in generations to remind each other of what unites us.
2002: The McCain-Feingold Act
For decades, the role of campaign donations in influencing elections was a source of consternation for members of both political parties—each of which, of course, believed it was always the “other guys” who weren’t playing by the spirit of the rules. In such an atmosphere, compounded by a close and bitter presidential campaign in 2000, Democratic Sen. Russell Feingold and Republican Sen. John McCain, both ardent supporters of campaign finance reform, believed they needed to bridge the gap to keep any reform bill from being seen as the “other guys’” solution. Enacted in 2002, the Bipartisan Campaign Reform Act, commonly referred to as the McCain-Feingold Act, changed how donations could be used to support political parties and candidates and demanded that television campaign ads clearly identify who paid for them.
2005: The Gang of 14
After the 2004 elections, Senate Republicans found their power enhanced. In the previous Congress, Senate Democrats had blocked ten of President George W. Bush’s nominations of conservative appellate court judges by threatening to filibuster. Now, with a 55-vote majority, Republicans announced the possibility of changing Senate rules to forbid the use of the filibuster in considering judicial nominations—a change to the staid and traditional rules of the Senate so unprecedented that Republican Sen. Trent Lott nicknamed it “the nuclear option.” With Democratic leadership unwilling to stop filibustering nominations and Republican leadership threatening to change the rules of debate, it was a group of 14 senators—seven from each side—who stepped in to broker a peace. The so-called “Gang of 14” came to a written agreement: Democrats would not filibuster judicial nominations and Republicans would drop “the nuclear option.” With seven senators from each side party to the deal, neither side had enough votes to go back on its portion.
2009: Cabinet Selections
In his 2008 campaign for president, Democrat Barack Obama made no secret of his admiration for President Lincoln and his so-called “team of rivals” approach to government. Obama campaigned on a pledge to make his cabinet bipartisan—even “post-partisan”—with an eye toward finding middle ground between political factions. Eventually, President Obama took office with Democratic primary rivals Joe Biden and Hillary Clinton as his Vice President and Secretary of State, respectively, and Republican Rep. Ray LaHood as Secretary of Transportation. And despite being a vocal critic of how the war in Iraq was being run by his Republican predecessors, Obama asked President George W. Bush’s Secretary of Defense, Robert Gates, to stay in the job, to maintain continuity in the command of American forces.
2010: Tax Deal
In order to fulfill his campaign promise to support and promote bipartisanship, President Obama signed a deal to extend the Bush-era tax cuts. The legislation extended the tax cuts across all income levels for two years. While Obama did not agree with all aspects of the deal, he described it as “a package that will protect the middle class, grow our economy, and create jobs for the American people.” The deal angered some Democrats who were opposed to legislation they felt catered to the wealthy. However, bipartisan support for the bill coalesced around additions to the measure, such as an extension of unemployment benefits and the prevention of a tax increase on the middle class. The president bowed to compromise, stating, “it’s not perfect, but this compromise is an essential step on the road to recovery.”
2012: JOBS Act
In April 2012, President Obama and Congress passed bipartisan legislation known as the “Jumpstart Our Business Startups (JOBS) Act.” The legislation was created to help aid entrepreneurship and small business growth by limiting federal regulations and allowing individuals to invest in new companies. It dramatically increased the use of crowdfunding platforms, which are used to raise money for a variety of causes, such as startups, nonprofit organizations, or personal projects. As stated by former House Majority Leader Eric Cantor, “the bipartisan JOBS Act represents an increasingly rare legislative victory in Washington where both sides seized the opportunity to work together, improved the bill, and passed it with strong bipartisan support.”
2013: Bipartisan Budget Act of 2013
Two years after reaching a bipartisan agreement on the debt ceiling, Congress announced a two-year budget agreement ahead of the budget conference’s December deadline. The Bipartisan Budget Act of 2013 set overall discretionary spending for fiscal year 2014 at $1.012 trillion, roughly halfway between the proposed budgets of the House and the Senate. Rep. Paul Ryan (R-WI) and Sen. Patty Murray (D-WA) stated that both sides of the aisle agreed to the proposed legislation after several extended discussions. During the announcement of the agreement, Ryan and Murray noted that they specifically avoided striking a “grand bargain,” which would have required Democrats to agree to reduced entitlement spending in exchange for Republicans agreeing to higher tax rates. Instead, Ryan stated that congressional members strove to “focus on common ground… to get some minimal accomplishments.” The Bipartisan Budget Act of 2013 was a rare but promising act of across-the-aisle collaboration in a time of intense gridlock.
2015: Every Student Succeeds Act
In December 2015, the Every Student Succeeds Act (ESSA) was enacted, replacing the No Child Left Behind Act. The legislation was passed by both the House and the Senate with bipartisan support. ESSA reauthorized the Elementary and Secondary Education Act that was passed in 1965, and it was the first bill since the 1980s to narrow the federal government’s role in elementary and secondary education. ESSA maintained the standardized testing requirements established under No Child Left Behind but gave states more control in deciding what standards children in their districts and schools should be held to.
2016: 21st Century Cures Act
The debate over health care legislation remains combative across the United States. Yet a sweeping bipartisan agreement formed around the 21st Century Cures Act, signed into law on December 13, 2016. The bill easily passed both chambers of Congress due to the bipartisan initiatives included in it. It strategically provided the National Institutes of Health with resources to expand biomedical research to find cures and treatments for various illnesses and diseases. It allowed for more collaboration between government and private-sector researchers and provided for faster drug approval. The legislation supported extensive research funding into the human brain, mental and neurological disorders, and regenerative medicine. Funding included $1 billion over two years to combat the opioid crisis, $1.8 billion for former Vice President Joe Biden’s cancer research “moonshot,” and a groundbreaking mental health plan. As stated by President Obama, “this is a reminder of what we can do when we look out for one another.”
2017: John McCain’s Speech after Health Care Vote
During President Trump’s campaign, he promised to repeal and replace the Affordable Care Act (ACA). After numerous attempts to pass a bill, the House finally approved a repeal measure on a purely party-line vote. In the Senate, Sen. John McCain (R-AZ) proved to be the decisive vote in killing the Republican effort to repeal individual and employer mandates from the ACA. Sens. Susan Collins (R-ME) and Lisa Murkowski (R-AK) joined Sen. McCain in voting no to the “skinny repeal” of the ACA. In a speech following his decision, McCain explained his vote and urged his fellow members of Congress to work together rather than forcing through partisan bills. He directly encouraged bipartisanship by imploring his fellow senators: “Let’s trust each other… Let’s return to regular order. We’ve been spinning our wheels on too many important issues because we keep trying to find a way to win without help from across the aisle.”
This Math quiz is called ‘Number Sequences – Arithmetic Sequences’ and it has been written by teachers to help you if you are studying the subject at middle school. Playing educational quizzes is a fabulous way to learn if you are in the 6th, 7th or 8th grade – aged 11 to 14. What is a sequence?
- 1 What grade level is arithmetic sequence?
- 2 What is sequence math 10th grade?
- 3 What do you learn in arithmetic sequence?
- 4 What grade do you learn sequences?
- 5 What is the arithmetic mean between 10 and 24?
- 6 What is an arithmetic sequence in math?
- 7 What is 9th term in the sequence?
- 8 Is 7 a term?
- 9 How is arithmetic sequence used in real life?
- 10 What are the 5 examples of arithmetic sequence?
- 11 Why do we need to study arithmetic series?
- 12 What math is 6th grade?
- 13 What should a 5th grader learn in math?
- 14 What is the hardest level of math?
- 15 Understanding Arithmetic Series in Algebra – Video & Lesson Transcript
- 16 Finding the Common Difference
- 17 Arithmetic Series Sum
- 18 Arithmetic Sequences
- 19 5-Year-Olds Can Learn Calculus
- 20 Recommended Reading
- 21 Summary: Arithmetic Sequences
- 22 Key Concepts
- 23 Glossary
- 24 Contribute!
- 25 Arithmetic Sequences – Explicit & Recursive Formula
- 26 What is an Arithmetic Sequence?
- 27 Notation for Terms of a Sequence
- 28 The “nth” Term of an Arithmetic Sequence
- 29 Recursive Formulas
- 30 Example 1
- 31 Example 2
- 32 Explicit Formulas
- 33 Example 1
- 34 Example 2
- 35 Finding Terms of a Sequence
- 36 Patterns in Arithmetic Sequences
- 37 Writing an Explicit Formulafor an Arithmetic Sequence
- 38 Example 1
- 39 Example 2
- 40 Practice
- 41 Arithmetic Sequences and Sums
- 42 Arithmetic Sequence
- 43 Advanced Topic: Summing an Arithmetic Series
- 44 Footnote: Why Does the Formula Work?
- 45 Arithmetic Progression – Formula, Examples
- 46 What is Arithmetic Progression?
- 47 Arithmetic Progression Formulas
- 48 Terms Used in Arithmetic Progression
- 49 General Term of Arithmetic Progression (Nth Term)
- 50 Formula for Calculating Sum of Arithmetic Progression
- 51 Difference Between Arithmetic Progression and Geometric Progression
- 52 Solved Examples on Arithmetic Progression
- 53 FAQs on Arithmetic Progression
- 53.1 What is Arithmetic Progression in Maths?
- 53.2 Write the Formula To Find the Sum of N Terms of the Arithmetic Progression?
- 53.3 How to Find Common Difference in Arithmetic Progression?
- 53.4 How to Find Number of Terms in Arithmetic Progression?
- 53.5 How to Find First Term in Arithmetic Progression?
- 53.6 What is the Difference Between Arithmetic Sequence and Arithmetic Progression?
- 53.7 How to Find the Sum of Arithmetic Progression?
- 53.8 What are the Types of Progressions in Maths?
- 53.9 Where is Arithmetic Progression Used?
- 53.10 What is Nth Term in Arithmetic Progression?
- 53.11 How do you Solve Arithmetic Progression Problems?
What grade level is arithmetic sequence?
Arithmetic sequences are typically introduced in 8th grade math (IXL, for example, lists them under its 8th grade curriculum).
What is sequence math 10th grade?
A Sequence is a list of things (usually numbers) that are in order.
What do you learn in arithmetic sequence?
We’ve learned that arithmetic sequences are strings of numbers where each number is the previous number plus a constant. The common difference is the difference between the numbers. If we add up a few or all of the numbers in our sequence, then we have what is called an arithmetic series.
What grade do you learn sequences?
The 6th grade scope and sequence of a curriculum lists out all the topics and concepts that are going to be taught throughout the length of a particular course.
What is the arithmetic mean between 10 and 24?
Using the average formula, take the arithmetic mean of 10 and 24: (10 + 24)/2 = 17. Thus, 17 is the arithmetic mean.
What is an arithmetic sequence in math?
Sequences with such patterns are called arithmetic sequences. In an arithmetic sequence, the difference between consecutive terms is always the same. For example, the sequence 3, 5, 7, 9 is arithmetic because the difference between consecutive terms is always two.
What is 9th term in the sequence?
The sequence is an arithmetic one with first term a₁ = −4 and common difference d = 2. To calculate a₉ we use the formula aₙ = a₁ + (n − 1)·d. Here we have: a₉ = −4 + (9 − 1)·2 = −4 + 16 = 12.
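As a quick check of the formula used above, here is a minimal Python sketch; the helper name `nth_term` is mine, not something from the quiz site.

```python
def nth_term(a1, d, n):
    # a_n = a_1 + (n - 1) * d
    return a1 + (n - 1) * d

print(nth_term(-4, 2, 9))  # 12, matching a_9 = -4 + (9 - 1) * 2 above
```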
Is 7 a term?
In an expression such as 5x + 7y, the 5x is one term and the 7y is the second term; the two terms are separated by a plus sign. An expression such as 5x + 7y + 7 is a three-termed expression, so yes, a constant like 7 counts as a term on its own.
How is arithmetic sequence used in real life?
Arithmetic sequences are used in daily life for different purposes, such as determining the number of audience members an auditorium can hold, calculating projected earnings from working for a company and building wood piles with stacks of logs.
What are the 5 examples of arithmetic sequence?
A few examples of arithmetic sequences are: 3, 6, 9, 12, 15, …; 5, 8, 11, 14, …; and 80, 75, 70, 65, 60, …
Why do we need to study arithmetic series?
The arithmetic sequence is important in real life because this enables us to understand things with the use of patterns. An arithmetic sequence is a great foundation in describing several things like time which has a common difference of 1 hour. An arithmetic sequence is also important in simulating systematic events.
What math is 6th grade?
The major math strands for a sixth-grade curriculum are number sense and operations, algebra, geometry and spatial sense, measurement, and functions and probability.
What should a 5th grader learn in math?
Math Lesson Plan – Fifth Grade Curriculum
- Lesson 1: Roman and Greek Numerals.
- Lesson 2: Read and Write Whole Numbers.
- Lesson 3: Expanding Whole Numbers up to Billions.
- Lesson 4: Comparing and Ordering Whole Numbers.
- Lesson 5: Round Numbers.
- Lesson 6: Estimate Sums and Differences.
- Lesson 7: Evaluating for Reasonableness.
What is the hardest level of math?
The Harvard University Department of Mathematics describes Math 55 as “probably the most difficult undergraduate math class in the country.” Formerly, students would begin the year in Math 25 (which was created in 1983 as a lower-level Math 55) and, after three weeks of point-set topology and special topics, decide whether to continue in Math 25 or move up to Math 55.
Understanding Arithmetic Series in Algebra – Video & Lesson Transcript
An arithmetic series is what we get when we add up a few or all of the numbers in our sequence; put another way, an arithmetic series is the sum of the terms of an arithmetic sequence. As you progress through your math studies, you will come across problems that ask you to find the total of an arithmetic series. Keep watching, and I will demonstrate a formula you can use to calculate this total. But before we can do that, we need to figure out what the common difference is.
Finding the Common Difference
Finding the common difference is straightforward. Pick any pair of consecutive terms and subtract the first from the second. Then pick another pair and subtract again to see whether you get the same difference. If every pair gives the same difference, that value is the common difference; each arithmetic sequence has exactly one. In the example above, the sequence is arithmetic because each pair of consecutive terms differs by two (for instance, 8 − 6 = 2). Or take the sequence 2, 5, 8, …: 5 − 2 = 3 and 8 − 5 = 3, so its common difference is 3. (A short code sketch after this paragraph repeats the same check programmatically.)
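A small Python sketch of the check just described: compute the gaps between consecutive terms and confirm they are all equal. The function name and the sample sequences are illustrative, not from the original lesson.

```python
def common_difference(seq):
    """Return the common difference if seq is arithmetic, otherwise None."""
    gaps = [b - a for a, b in zip(seq, seq[1:])]
    return gaps[0] if len(set(gaps)) == 1 else None

print(common_difference([2, 5, 8, 11]))  # 3    -> arithmetic
print(common_difference([2, 3, 5, 8]))   # None -> not arithmetic
```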
Arithmetic Series Sum
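An arithmetic series is the sum of the terms of an arithmetic sequence. As the “Advanced Topic” section later in this article shows, the sum of the first n terms is S = (n/2)(2a + (n − 1)d), where a is the first term and d is the common difference. Here is a minimal sketch of that formula, checked against simply adding the terms; the function name is my own illustration.

```python
def arithmetic_sum(a, d, n):
    # S_n = (n / 2) * (2a + (n - 1) d)
    return n * (2 * a + (n - 1) * d) / 2

# Sequence 1, 4, 7, ... (a = 1, d = 3): sum of the first 10 terms.
terms = [1 + 3 * k for k in range(10)]
print(arithmetic_sum(1, 3, 10), sum(terms))  # 145.0 145
```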
In mathematics, an arithmetic sequence is a succession of numbers in which each term increases or decreases by a fixed amount. For an arithmetic sequence we can write a formula for the nth term in the form aₙ = dn + c, where d is the common difference. Once you have determined the common difference, you can find the value of c by substituting n = 1 and the first term of the sequence for a₁ into the equation.
- Example 1: The sequence 1, 5, 9, 13, 17, 21, 25 is an arithmetic sequence with a common difference of 4. Substituting n = 1, a₁ = 1 and d = 4 into aₙ = dn + c gives c = −3, so the formula for the nth term is aₙ = 4n − 3.
- Example 2: The sequence 12, 9, 6, 3, 0, −3, −6 is an arithmetic sequence with a common difference of −3. (Because the sequence is decreasing, the common difference is negative.) To determine the next three terms we just keep subtracting 3: −6 − 3 = −9, −9 − 3 = −12, −12 − 3 = −15, so the next three terms are −9, −12 and −15. The formula for the nth term of this sequence is aₙ = −3n + 15.
- Example 3: The sequence 2, 3, 5, 8, 12, 17, 23, … is not an arithmetic sequence. The difference a₂ − a₁ is 1, but the next difference a₃ − a₂ is 2, and a₄ − a₃ is 3; there is no way to write a formula of the form aₙ = dn + c for this sequence. Geometric sequences are another type of sequence.
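The same bookkeeping can be sketched in a few lines of Python, assuming the aₙ = dn + c form used in the examples above; the helper name is mine, not from the original page.

```python
def dn_plus_c(a1, d):
    """Return (d, c) for the formula a_n = d*n + c, given the first term and difference."""
    return d, a1 - d  # from a_1 = d * 1 + c

d, c = dn_plus_c(1, 4)                   # Example 1: 1, 5, 9, 13, ...
print([d * n + c for n in range(1, 8)])  # [1, 5, 9, 13, 17, 21, 25]

d, c = dn_plus_c(12, -3)                 # Example 2: 12, 9, 6, 3, ...
print([d * n + c for n in range(1, 8)])  # [12, 9, 6, 3, 0, -3, -6]
```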
5-Year-Olds Can Learn Calculus
It is well known that the traditional hierarchical order of mathematics education begins with counting, then moves on to addition and subtraction, then multiplication and division. The computational set grows in size as larger and larger numbers are added, and at some point fractions are brought into the mix as well. Then, in early adolescence, pupils are introduced to patterns of numbers and letters through the wholly new topic of algebra, which is completely different from the previous subjects.
However, according to Maria Droujkova, a pioneering math educator and curriculum designer, this development “has nothing to do with how people think, how children grow and learn, or how mathematics is formed.” She is one of a growing number of voices from around the world who are calling for a transformation in the way mathematics is taught, bringing it more in line with these fundamental principles.
Her explanation is that the current sequence is nothing more than an entrenched historical accident that has taken much of the fun out of what she refers to as the “playful universe” of mathematics, which includes more than 60 top-level disciplines and manifestations in everything from weaving to building, nature, music, and the arts.
- “The calculations that children are required to perform are frequently so developmentally incorrect that the experience is comparable to torture,” she explains.
- It is analogous to aspiring filmmakers having to master storyboarding, editing, lighting, and other technical aspects before ever being allowed to focus on telling meaningful stories.
- It also stops many others from studying mathematics as effectively or profoundly as they could otherwise.
- They recall how a single course, or even a single topic, such as fractions, caused them to get disoriented and go off the sequential path.
- Droujkova, who immigrated to the United States from Ukraine and received her PhD in mathematics education there, pushes for a more holistic approach to mathematics education that she refers to as “natural math,” which she teaches to children as young as infants and their parents.
The following is Droujkova’s statement: “Studies have proven that games or free play are effective ways for children to learn, and they love them.” Moreover, they pave the door to the more organized, though no less creative, process of observing, remixing, and creating mathematical patterns.”
Finding the most appropriate method is contingent on understanding an often-overlooked reality, according to her: “the intricacy of the concept and the difficulty of achieving it are two, independent aspects,” she states. The authors write that “unfortunately, a lot of what little children are offered is simple but difficult—primitive ideas that are difficult for humans to implement,” owing to the fact that they readily tax the limits of working memory capacity, attention span, precision, and other cognitive functions.
According to her, it is far preferable to begin by developing rich and social mathematical experiences that are complicated (enabling them to be taken in a variety of ways) yet simple (making them conducive to immediate play).
As Droujkova explains, “you can look in any discipline of mathematics and discover things that are both complicated and easy to understand.” In collaboration with various colleagues throughout the world, I’m on a journey to take the mathematical treasure trove and discover the most accessible routes into it all.
The book “Calculus by and for Young People,” by Don Cohen, is another example of this type of work.
“However, before we get there, we’d like to engage in some hands-on, grounded, metaphoric play.” In free play, you are learning in a very fundamental way—you truly own your notion on all levels: cognitively, physically, emotionally, and culturally.” As a result of this technique, “deep roots are established, and the canopy of high abstraction does not wither.” What is learned in the absence of play is of a qualitatively different kind.
- Taking tests and other routine activities are made easier, but logical thinking and problem solving are not improved.
- “There are different levels of comprehension,” she explains.
- Following the casual level, there is a level when students exchange ideas and look for patterns in their work.
- However, it is preferable if the element of playfulness is maintained throughout the voyage.
- There is no single piece of mathematics that is appropriate for everyone.
- It is also not necessary for everyone, aside from those who must be able to operate in their own cultures, to understand any particular piece of mathematics.
- The world would benefit from greater mathematical literacy, and mankind as a whole would require excellent mathematics to survive the next 100 years, due to the extremely complicated challenges we’re confronted with,” he adds.
- Nevertheless, they require visual evidence of significant (to them) individuals engaged in significant mathematics activities while enjoying the process.
Droujkova believes that math know-how (activities and examples) “must be accompanied by communities of practice that assist newcomers in making sense of it.” “It is impossible to have one without the other.” Whatever the case, if learning is to be as efficient and thorough as possible, it is necessary that it be done in a free and open environment.
As Droujkova points out, “this is the most significant conflict with traditional curriculum development.” Those instances when a youngster would like to be doing anything other than the scheduled activity must be anticipated by adults and organized accordingly.
This is difficult to perform since it takes both pedagogical and mathematical idea understanding, but it is something that can be acquired.
Droujkova has observed that in most groups, one or two children are engaged in an activity other than the primary activity, while the others are engaged in the main activity.
Those who believe in “letting children be children” are concerned that legitimizing the idea of involving toddlers in algebra and calculus will encourage Tiger Mom types to push their children into formal abstractions in these subjects at ever younger ages, which would be completely counterproductive.
Droujkova believes that these comments are symptomatic of something considerably more serious: ‘They indicate quite substantial chasms between various educational ideologies, or to put it another way, gaps in the futures we see for children.’ The children are placed in settings that need industrial accuracy when a large number of comparable exercises are assigned.
- Despite the fact that “it does not operate so directly,” she acknowledges, “these attitudes affect what mathematical instruction the adults choose or create for the children.” Additionally, others question if this technique is feasible for marginalized communities.
- She and her colleagues are working hard to strengthen local networks and increase accessibility on all fronts, including the mathematical, cultural, and financial fronts, as well as the technological front.
- As Droujkova points out, “the know-how about making community-centered, open learning available to disenfranchised populations is growing,” citing experiments conducted by Sugata Mitra and Dave Eggers as examples.
- Droujkova claims that one of the most difficult problems has been changing the mentality of the adults.
“Parents feel they get a fresh start” with these calculus and algebra games, according to the article. They can re-discover the thrill of mathematics play, much as toddlers do when they discover a new universe.”
Summary: Arithmetic Sequences
| recursive formula for nth term of an arithmetic sequence | aₙ = aₙ₋₁ + d, for n ≥ 2 |
| explicit formula for nth term of an arithmetic sequence | aₙ = a₁ + d(n − 1) |
- An arithmetic sequence is a sequence in which the difference between any two consecutive terms is a constant.
- The constant between two consecutive terms is called the common difference. It is the number added to any one term of an arithmetic sequence that generates the subsequent term. The terms of an arithmetic sequence can be found by beginning with the first term and adding the common difference repeatedly.
- A recursive formula for an arithmetic sequence with common difference d is given by aₙ = aₙ₋₁ + d, for n ≥ 2. As with any recursive formula, the first term of the sequence must be given.
- An explicit formula for an arithmetic sequence with common difference d is given by aₙ = a₁ + d(n − 1).
- An explicit formula can be used to find the number of terms in a sequence. In application problems, we sometimes alter the explicit formula slightly to aₙ = a₀ + dn.
- (A short code sketch after this list checks that the recursive and explicit formulas generate the same terms.)
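As a quick sanity check of the two formulas in this summary, here is a small sketch (function names and the sample values are illustrative) confirming that the recursive and explicit rules produce the same terms:

```python
def recursive_terms(a1, d, count):
    # a_n = a_(n-1) + d, for n >= 2
    terms = [a1]
    for _ in range(count - 1):
        terms.append(terms[-1] + d)
    return terms

def explicit_terms(a1, d, count):
    # a_n = a_1 + d * (n - 1)
    return [a1 + d * (n - 1) for n in range(1, count + 1)]

print(recursive_terms(3, 5, 6))  # [3, 8, 13, 18, 23, 28]
print(explicit_terms(3, 5, 6))   # the same list
```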
arithmetic sequence: a sequence in which the difference between any two consecutive terms is a constant
common difference: the difference between any two consecutive terms in an arithmetic sequence
Arithmetic Sequences – Explicit & Recursive Formula
When we write a list of numbers in a specific order, we are creating what is known as a sequence. For example, Tom’s last five English grades were 93, 85, 71, 86, and 100; that list is a sequence. Another example of a sequence: 5, 10, 15, 20, 25, 30, … This is an example of what is known as an infinite sequence. Infinite sequences are sequences that continue indefinitely without end.
- When a sequence has a fixed number of terms (for example, the list of Tom’s English grades), it is known as a finite sequence.
- The first number on the list is referred to as the first term, the second number is referred to as the second term, and so on.
- In fact, a sequence does not even need to contain numbers!
- A list of words or names, for example, is also considered a sequence.
What is an Arithmetic Sequence?
An arithmetic sequence can also be characterized by a constant difference between consecutive terms. If you look at the differences between the terms in the sequence above, you will see that the difference is always two: 18 − 16 = 2, 16 − 14 = 2, 14 − 12 = 2, and 12 − 10 = 2. The number that is added to each term in order to get the following term is called the common difference. No matter which pair of consecutive terms you pick, you get the same difference.
- If the common difference of an arithmetic sequence is 6, that means 6 is being added to each term to get the following term in the sequence.
- A positive common difference means the numbers in the sequence are growing larger, since you are adding a positive number to each term in order to reach the next one.
- Keep in mind that adding a negative number is the same as subtracting.
- If d = −3, you subtract 3 from each term to get the next one.
Never forget that the common difference d will be a positive number if your terms are increasing in size, and a negative number if your terms are decreasing in size. If you are not sure what number is being added or subtracted, you can always take any two consecutive terms and subtract the first from the second to find the common difference.
Unless the number being added is exactly the same each time, the sequence is not an arithmetic sequence.
Notation for Terms of a Sequence
When referring to a term by its position, you can use a subscript. When writing sequences, we often use the letter a with a small number written below and to the right of it to indicate which term is meant. Think of it this way: if you see the letter a with a small 4 after it, the small 4 denotes that this is the fourth term in the sequence. Subscripts can be used to label every term in a sequence. The following example illustrates how you might label the first five terms of a sequence.
The “nth” Term of an Arithmetic Sequence
In the same way, aₙ (read “a sub n”) stands for the nth term of a sequence, where n can be any position. Writing aₙ lets us refer to a general term of the sequence without saying which one.
The term “recursive” refers to something that repeats, over and over. Recursive formulas are used to create sequences: they are formulas that must be applied repeatedly in order to come up with the terms of the sequence. For an arithmetic sequence, a recursive formula simply tells you what you need to do to get to the next term. A recursive formula is illustrated in the following example.
Let’s take this formula piece by piece and see how it works. The first line states that a sub 1 equals 8; the small subscript 1 indicates that this is the first term, so the first line just tells us how to begin the sequence. The second line begins with a sub n, the general “nth” term of the sequence. It is essentially saying, “to find whatever term you want in the sequence, do this to the term before it.” A recursive formula outlines the procedure that must be followed in order to determine the next term in the sequence.
If we wish to discover the second term, we must substitute 2 for n in the equation (in both spots).
Remember that a recursive formula is one that must be used repeatedly in order to obtain more terms from a sequence.
For example, if you wish to find the 4th term, you substitute 4 for n, and so on.
It is necessary to repeat the process over and over again in order to find more terms. Did you notice a pattern? To find a term in this sequence, you add 4 to the preceding term. You can build the sequence by repeating this procedure.
To find the first four terms of the sequence, use the recursive formula shown below. The top line indicates that the first term is 10. The bottom line says that to find any term of the sequence, you subtract 3 from the preceding term. That’s it: your sequence will begin with 10, and you subtract 3 each time to get the next several terms.
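A minimal Python sketch of this recursive rule (start at 10, subtract 3 each time); the generator is my own illustration, not part of the original lesson.

```python
from itertools import islice

def recursive_sequence(first, step):
    """Yield a_1 = first, then a_n = a_(n-1) + step for n >= 2."""
    term = first
    while True:
        yield term
        term += step

print(list(islice(recursive_sequence(10, -3), 4)))  # [10, 7, 4, 1]
```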
With this type of formula, you can plug in whatever value of n you want in order to find the term you are looking for. If you want the first term, you just substitute 1 for n; if you want the thirtieth term, you substitute 30 for n. With this method, you can find any term you want without applying a recursive formula over and over.
Suppose the 50th term of the sequence shown below must be found. With a recursive formula, you would have to apply the formula again and again until you reached the 50th term. With an explicit formula, you just enter the desired value of n: here we want the 50th term, so we substitute 50 for n and simplify.
Finding Terms of a Sequence
If you have a sequence that follows a pattern, you will frequently be asked to find a particular term later in the sequence. If you have an explicit formula for the sequence, you just plug in the value of n that you need. Sometimes, however, no formula is given. A teacher may give you the sequence 5, 7, 9, 11, … and ask you to find the 20th term or the 100th term. You could get the answer by continuing the pattern and listing out all 20 terms or all 100 terms, depending on how far you need to go.
Fortunately, there is a quicker method!
Patterns in Arithmetic Sequences
Rather than attempting to list out all 100 terms, try to spot the pattern in the sequence above; this will help you find the answer. The first term is 5, and then 2 is added over and over again to build the terms of the sequence. Can you guess what the 100th term will be? The number 2 was added three times to get the fourth term, and four times to get the fifth term.
By the time you reach the 100th term, how many times will you have added 2? It is one less than the term number, because you did not add 2 to get the first term. To get to the 100th term, you must add 2 a total of 99 times, starting from the first term.
Writing an Explicit Formula for an Arithmetic Sequence
To construct an explicit formula for an arithmetic sequence, you can take advantage of the pattern we just discussed. The general idea is this: to find the nth term (whichever term you choose), start with the first term and add the common difference n − 1 times. To find the 50th term, you would take the first term and add the common difference 49 times. It is always one less than the term number, because the first term does not have the common difference added to it.
Once you’ve entered these values, you’ll have an explicit formula that you may use to find any phrase you’re interested in finding.
Create an explicit formula for the sequence 10, 14, 18, 22, … Before writing the explicit formula, you must first identify the first term and the common difference. The sequence begins with 10, so that is a₁. Because 4 is added to each term in order to get to the next term, the common difference d is 4. All that is left to do is plug in 10 for the first term and 4 for the common difference. Distributing the 4 and simplifying gives the explicit formula for this sequence.
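Plugging a₁ = 10 and d = 4 into aₙ = a₁ + d(n − 1) and simplifying gives aₙ = 4n + 6, and a quick check in Python (purely illustrative) confirms the formula and the 50th term:

```python
# Explicit formula from the example: a_n = 4n + 6 (first term 10, common difference 4).
a = lambda n: 4 * n + 6
print([a(n) for n in range(1, 5)])  # [10, 14, 18, 22]
print(a(50))                        # 206, the 50th term
```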
Arithmetic Sequences and Sums
A sequence is a collection of items (typically numbers) that are arranged in a specific order. Each number in the sequence is referred to as a term (or “element” or “member” in certain cases); for additional information, see Sequences and Series.
An arithmetic sequence is characterized by the fact that the difference between one term and the next is a constant. In other words, we just increase the value by the same amount each time, endlessly.
Example: 1, 4, 7, 10, 13, 16, 19, 22, 25, … Each number in this sequence is 3 more than the one before it; each time, we add three to the previous number. In general, we could write an arithmetic sequence like this: a, a + d, a + 2d, a + 3d, …
- where a is the first term and d is the difference between terms (known as the “common difference”).
The sequence 1, 4, 7, 10, 13, 16, 19, 22, 25, … has:
- a = 1 (the first term) and d = 3 (the “common difference” between terms)
And so we get the rule: xₙ = a + d(n − 1) = 1 + 3(n − 1) = 3n − 2.
Example: Write a rule, and calculate the 9th term, for this Arithmetic Sequence:
3, 8, 13, 18, 23, 28, 33, 38, …
Each number in this sequence is 5 more than the one before it. The values of a and d are:
- a = 3 (the first term)
- d = 5 (the “common difference”)
Using the arithmetic sequence rule: xₙ = a + d(n − 1) = 3 + 5(n − 1) = 3 + 5n − 5 = 5n − 2. So the ninth term is x₉ = 5·9 − 2 = 43. Is that right? Check for yourself! Arithmetic sequences are also known as arithmetic progressions (A.P.’s).
Advanced Topic: Summing an Arithmetic Series
To sum up the terms of an arithmetic sequence, a + (a + d) + (a + 2d) + (a + 3d) + …, use the formula S = (n/2)(2a + (n − 1)d). What is that funny symbol Σ? It is called sigma notation, and the starting and ending values are displayed below and above it: for example, “sum up n where n goes from 1 to 4” means 1 + 2 + 3 + 4, and the answer is 10.
Example: Add up the first 10 terms of the arithmetic sequence:
The values of a, d and n are as follows:
- a = 1 (the first term), d = 3 (the “common difference” between terms), and n = 10 (the number of terms to add up)
So the sum becomes: S = (10/2)(2·1 + (10 − 1)·3) = 5(2 + 27) = 5 × 29 = 145. Check it yourself: why not add up all ten terms and see whether the total comes to 145?
Footnote: Why Does the Formula Work?
Let’s take a look at why the formula works, because we will be employing an interesting “trick” that is worth knowing. First, we will refer to the entire total as “S”: S = a + (a + d) + … + (a + (n − 2)d) + (a + (n − 1)d). Next, rewrite S in the opposite order: S = (a + (n − 1)d) + (a + (n − 2)d) + … + (a + d) + a. Now, term by term, add these two together:
|S||=||a||+||(a+d)||+||.||+||(a + (n-2)d)||+||(a + (n-1)d)|
|S||=||(a + (n-1)d)||+||(a + (n-2)d)||+||.||+||(a + d)||+||a|
|2S||=||(2a + (n-1)d)||+||(2a + (n-1)d)||+||.||+||(2a + (n-1)d)||+||(2a + (n-1)d)|
Each and every term is the same! Furthermore, there are n of them, so: 2S = n(2a + (n − 1)d). Now we can simply divide by two to obtain the result: S = (n/2)(2a + (n − 1)d). This is the formula we set out to derive.
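The forwards-plus-backwards trick is easy to verify numerically. This sketch (my own illustration, with arbitrary sample values) pairs a sequence with its reverse and shows that every pair adds up to the same value, 2a + (n − 1)d:

```python
seq = [3 + 5 * k for k in range(8)]        # a = 3, d = 5, n = 8: the terms 3, 8, ..., 38
pairs = [x + y for x, y in zip(seq, reversed(seq))]
print(pairs)                               # eight copies of 41 = 2a + (n - 1)d
print(sum(seq), len(seq) * pairs[0] // 2)  # both 164, i.e. S = (n/2)(2a + (n - 1)d)
```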
Arithmetic Progression – Formula, Examples
When the differences between every two consecutive terms are the same, the sequence is called an arithmetic progression, or AP for short. In an arithmetic progression it is possible to obtain a formula for the nth term. For example, the sequence 2, 6, 10, 14, … is an arithmetic progression (AP) because each number is obtained by adding 4 to the preceding term. In this sequence, the nth term is 4n − 2.
By substituting n = 1, 2, 3, … into the nth term, you get the terms of the sequence:
- When n = 1: 4n − 2 = 4(1) − 2 = 4 − 2 = 2
- When n = 2: 4n − 2 = 4(2) − 2 = 8 − 2 = 6
- When n = 3: 4n − 2 = 4(3) − 2 = 12 − 2 = 10
But how can we determine the nth term of a given sequence of numbers? In this article, we will learn about arithmetic progressions with the help of solved examples.
|1.||What is Arithmetic Progression?|
|2.||Arithmetic Progression Formulas|
|3.||Terms Used in Arithmetic Progression|
|4.||General Term of Arithmetic Progression|
|5.||Formula for Calculating Sum of AP|
|6.||Difference Between AP and GP|
|7.||FAQs on Arithmetic Progression|
What is Arithmetic Progression?
There are two methods in which we might define anarithmetic progression (AP):
- An arithmetic progression is a sequence in which the differences between every two consecutive terms are the same.
- Equivalently, an arithmetic progression is a sequence in which each term, with the exception of the first, is obtained by adding a fixed number to the preceding term.
For example, the sequence 1, 5, 9, 13, 17, 21, 25, 29, 33, … has:
- a = 1 (the first term)
- d = 4 (the “common difference” between terms)
In general, an arithmetic progression can be written as: a, a + d, a + 2d, a + 3d, … Using the preceding example: 1, 1 + 4, 1 + 2(4), 1 + 3(4), … = 1, 5, 9, 13, …
Arithmetic Progression Formulas
The AP formulae are listed below.
- The common difference of an AP: d = a₂ − a₁ (and in general d = aₙ − aₙ₋₁)
- The nth term of an AP: aₙ = a + (n − 1)d
- The sum of the first n terms of an AP: Sₙ = n/2 [2a + (n − 1)d]
Terms Used in Arithmetic Progression
From here on, we shall refer to an arithmetic progression by the abbreviation AP. Here are some more examples of APs: 6, 13, 20, 27, 34, …; 91, 81, 71, 61, 51, …; 2, 3, 4, 5, … An AP is often represented as a₁, a₂, a₃, …, and the following terminology is used. First term: as the name implies, the first term of an AP is the first number of the progression, usually denoted a₁ (or simply a).
For example, the first term of the AP 6, 13, 20, 27, 34, … is 6.
Common difference: we know that an AP is a sequence in which each term (except the first) is obtained by adding a fixed number to the term before it.
For example, if the first term is a₁, then the second term is a₁ + d, the third term is a₁ + d + d = a₁ + 2d, the fourth term is a₁ + 2d + d = a₁ + 3d, and so on.
For the AP 6, 13, 20, 27, 34, …, each term is obtained by adding 7 to the previous one, so the common difference is d = 7. In general, the common difference of an AP is the difference between any two consecutive terms. To calculate it, use the formula d = aₙ − aₙ₋₁.
General Term of Arithmetic Progression (Nth Term)
The general term (or nth term) of an AP whose first term is a and common difference is d is given by the formula aₙ = a + (n − 1)d. For example, for the sequence 6, 13, 20, 27, 34, …, substituting the first term a₁ = 6 and the common difference d = 7 into the formula gives aₙ = a + (n − 1)d = 6 + (n − 1)·7 = 6 + 7n − 7 = 7n − 1. So the general (nth) term of this sequence is aₙ = 7n − 1.
- We already know that we can find a term by adding d to its preceding term.
- For example, to find the 6th term of the sequence above, we can simply add d = 7 to the 5th term, 34, to get 41.
- But what happens if we have to find the 102nd term?
- In that case, we can simply substitute n = 102 (along with a = 6 and d = 7) into the formula for the nth term of an AP: a₁₀₂ = 7(102) − 1 = 713.
- The general (nth) term of an AP is also referred to as the arithmetic sequence explicit formula, and it can be used to find any term of the AP without having to find the preceding term.
- The following table lists some example APs along with their first term, common difference, and general term.
|Arithmetic Progression|First Term|Common Difference|General Term (nth term)|
|---|---|---|---|
|AP|a|d|aₙ = a + (n − 1)d|
|−√3, −2√3, −3√3, −4√3, ...|−√3|−√3|aₙ = −√3 n|
Formula for Calculating Sum of Arithmetic Progression
If the first term of an AP is a and the common difference is d, the general term (or nth term) of that AP is obtained using the formula aₙ = a + (n − 1)d. For example, to obtain the general term of the sequence 6, 13, 20, 27, 34, ..., we substitute the first term, a₁ = 6, and the common difference, d = 7, into the formula, giving aₙ = 6 + (n − 1)·7 = 7n − 1.
We already know that we can find a term by adding d to its previous term. For example, the 6th term is the 5th term plus 7, that is, 34 + 7 = 41. Calculating that by hand isn't very hard.
For a distant term, however, the formula is quicker: a₁₀₂ = 6 + (102 − 1)·7 = 6 + 707 = 713, so the 102nd term of this sequence is 713. This formula for the general term (or nth term) of an AP is called the arithmetic sequence explicit formula, and it can be used to find any term of the AP without having to compute its preceding term.
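A minimal Python check of this calculation (the nth_term helper is an illustrative name, not a library function):

```python
def nth_term(a, d, n):
    # a_n = a + (n - 1) * d
    return a + (n - 1) * d

# AP 6, 13, 20, 27, 34, ... with a = 6 and d = 7
print(nth_term(6, 7, 6))     # 41, the 6th term
print(nth_term(6, 7, 102))   # 713, the 102nd term
```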
- When the nth term of an arithmetic progression is not known, the sum of the first n terms is Sₙ = n/2 [2a + (n − 1)d].
- When the nth term, aₙ, is known, the sum of the first n terms is Sₙ = n/2 (a₁ + aₙ).
As an illustration, Mr. Kevin earns $400,000 per year, and his salary rises by $50,000 every year. How much has he earned in total at the end of his first three years of employment? Solution: Mr. Kevin’s earnings for the first year are a = 400,000. The annual increase is d = 50,000. We need his total earnings over three years, so n = 3. Substituting these values into the AP sum formula: Sₙ = n/2 [2a + (n − 1)d] = 3/2 [2(400,000) + (3 − 1)(50,000)] = 3/2 (800,000 + 100,000) = 3/2 (900,000) = 1,350,000. In three years he has earned $1,350,000.
We can check this by adding Kevin’s earnings for each of the first three years directly: 400,000 + 450,000 + 500,000 = 1,350,000.
The formula, however, is especially useful when n is large.
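The same answer can be checked with a few lines of Python, applying the sum formula and also adding the three yearly salaries directly:

```python
a, d, n = 400_000, 50_000, 3

# Sum formula: S_n = n/2 * (2a + (n - 1)d)
total_by_formula = n * (2 * a + (n - 1) * d) / 2

# Brute force: add the salary for each of the n years
total_by_addition = sum(a + k * d for k in range(n))

print(total_by_formula)   # 1350000.0
print(total_by_addition)  # 1350000
```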
Derivation of Arithmetic Progression Formula
Arithmetic progression is a type of progression in which every term after the first is obtained by adding a constant value, known as the common difference (d), to the previous term. So the first term is a₁, the second term is a₁ + d, the third term is a₁ + 2d, and so on, and we know that aₙ = a₁ + (n − 1)d is the formula for the nth term of an arithmetic progression. To derive the sum of the arithmetic series, Sₙ, we begin with the first term and keep adding the common difference to each succeeding term.
- Writing out the sum: Sₙ = a₁ + (a₁ + d) + (a₁ + 2d) + ... + aₙ.
- Writing the same sum in reverse order: Sₙ = aₙ + (aₙ − d) + (aₙ − 2d) + ... + a₁.
- Adding the two equations term by term: 2Sₙ = (a₁ + aₙ) + (a₁ + aₙ) + ... + (a₁ + aₙ), with n copies of (a₁ + aₙ).
- As a result, 2Sₙ = n(a₁ + aₙ), which simplifies to Sₙ = n/2 (a₁ + aₙ). Substituting aₙ = a₁ + (n − 1)d gives the equivalent form Sₙ = n/2 [2a₁ + (n − 1)d].
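The result of this derivation, Sₙ = n/2 (a₁ + aₙ), can be checked numerically against a brute-force sum; here is a minimal Python sketch (the function names are illustrative):

```python
def sum_closed_form(a1, d, n):
    # S_n = n/2 * (a_1 + a_n), with a_n = a_1 + (n - 1) * d
    an = a1 + (n - 1) * d
    return n * (a1 + an) / 2

def sum_brute_force(a1, d, n):
    # Add the first n terms one by one
    return sum(a1 + k * d for k in range(n))

for a1, d, n in [(2, 4, 10), (6, 7, 102), (16, -8, 5)]:
    assert sum_closed_form(a1, d, n) == sum_brute_force(a1, d, n)
print("closed form matches brute force")
```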
Difference Between Arithmetic Progression and Geometric Progression
For clarification, the following table describes the distinction between arithmetic and geometric progression:
|Arithmetic progression|Geometric progression|
|---|---|
|Arithmetic progression is a series in which each new term is obtained by adding a constant value, the common difference, to the preceding term.|Geometric progression is a series in which each new term is obtained by multiplying the preceding term by a constant factor, the common ratio.|
|The series is identified as an arithmetic progression by the common difference between consecutive terms.|The series is identified as a geometric progression by the common ratio between consecutive terms.|
|The consecutive terms vary linearly.|The consecutive terms vary exponentially.|
Important Points to Remember About Arithmetic Progression
- An AP is a list of numbers in which each term is generated by adding a fixed number to the term immediately preceding it. The first term is denoted by a, the common difference by d, the nth term by aₙ, and the number of terms by n. In general, an AP may be expressed as a, a + d, a + 2d, a + 3d, ....
- The nth term of an AP can be obtained as aₙ = a + (n − 1)d.
- The sum of an AP may be calculated as Sₙ = n/2 [2a + (n − 1)d], or as Sₙ = n/2 (a₁ + aₙ) when the last term is known.
- The common difference need not be positive. For example, the sequence 16, 8, 0, −8, −16, ... has common difference d = 8 − 16 = 0 − 8 = −8 − 0 = −8.
- The graph of an AP is a set of points lying on a straight line whose slope is the common difference.
Solved Examples on Arithmetic Progression
Example 1: Find the general term of the arithmetic progression −3, −(1/2), 2, .... Solution: The first term is a = −3, and the common difference is d = −(1/2) − (−3) = −(1/2) + 3 = 5/2. The general term of an AP is computed using the AP formula: aₙ = a + (n − 1)d = −3 + (n − 1)(5/2) = −3 + (5/2)n − 5/2 = (5/2)n − 11/2. Answer: The general term of the given AP is aₙ = (5n − 11)/2.
Example 2: Which term of the AP 3, 8, 13, 18, ... is 78? Solution: In the given sequence, the first term is a = 3 and the common difference is d = 8 − 3 = 13 − 8 = 5. Suppose the nth term is aₙ = 78. Substituting all of these values into the general term of an arithmetic progression: aₙ = a + (n − 1)d, so 78 = 3 + (n − 1)·5 = 5n − 2, which gives 5n = 80 and n = 16. Answer: 78 is the 16th term.
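Both examples can be verified with a short Python snippet (using exact fractions for Example 1):

```python
from fractions import Fraction

# Example 1: AP -3, -1/2, 2, ...  ->  general term (5n - 11) / 2
a = Fraction(-3)
d = Fraction(-1, 2) - Fraction(-3)          # d = 5/2
for n in range(1, 4):
    print(a + (n - 1) * d)                  # -3, -1/2, 2

# Example 2: which term of 3, 8, 13, 18, ... is 78?
a, d, target = 3, 5, 78
n = (target - a) // d + 1
print(n, a + (n - 1) * d)                   # 16 78
```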
FAQs on Arithmetic Progression
The AP formulas corresponding to the AP a, a + d, a + 2d, a + 3d, ..., a + (n − 1)d are:
- The formula for finding the nth term: aₙ = a + (n − 1)d
- The formula for finding the sum of the first n terms: Sₙ = n/2 [2a + (n − 1)d]
What is Arithmetic Progression in Maths?
An arithmetic progression (A.P.) is a sequence of numbers in which the difference between any two consecutive terms is the same. The numbers 3, 6, 9, 12, 15, 18, 21, and so on are an example of an A.P.
Write the Formula To Find the Sum of N Terms of the Arithmetic Progression?
When the nth term of an arithmetic progression is not known, the sum of the first n terms of the progression is Sₙ = n/2 [2a + (n − 1)d]. When the nth term, aₙ, is known, the sum of the first n terms is Sₙ = n/2 (a + aₙ).
How to Find Common Difference in Arithmetic Progression?
The common difference of an arithmetic sequence is the value by which consecutive terms differ. The formula to find the common difference is d = aₙ − aₙ₋₁, where aₙ is any term in the sequence and aₙ₋₁ is the term immediately before it.
How to Find Number of Terms in Arithmetic Progression?
The number of terms in an arithmetic progression can be found by dividing the difference between the last and first terms by the common difference and then adding 1: n = (last term − first term)/d + 1.
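For instance, a short Python helper (an illustrative name, not a library function) that counts the terms of the AP 3, 8, 13, ..., 78:

```python
def number_of_terms(first, last, d):
    # n = (last - first) / d + 1
    return (last - first) // d + 1

print(number_of_terms(3, 78, 5))   # 16
```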
How to Find First Term in Arithmetic Progression?
The first term ‘a’ of an arithmetic progression can be found if we know ‘d’ (the common difference) and any term (the nth term) of the progression. As an illustration, take the progression 2, 4, 6, 8, and so on. The nth term of an arithmetic progression is a + (n − 1)d, where a is the first term, n is the position of the term, and d is the common difference. Here a = 2 and d = 4 − 2 = 6 − 4 = 2. If instead we were told only that the 5th term is 10 and d = 2, then 10 = a + 4d = a + 4(2) = a + 8, so a = 2.
What is the Difference Between Arithmetic Sequence and Arithmetic Progression?
An arithmetic sequence and an arithmetic progression are two names for the same thing: a list of numbers in which consecutive terms differ by a common difference. An arithmetic series, on the other hand, is the sum of the terms of an arithmetic progression.
How to Find the Sum of Arithmetic Progression?
To calculate the sum of an arithmetic progression, we need to know the first term, the number of terms, and the common difference between consecutive terms. The sum is then Sₙ = n/2 [2a + (n − 1)d], where a = first term of the progression, n = number of terms in the progression, and d = common difference.
What are the Types of Progressions in Maths?
- Arithmetic Progression (AP), Geometric Progression (GP), and Harmonic Progression (HP) are all examples of progression.
Where is Arithmetic Progression Used?
You can see an example of how arithmetic progression is used in real life when you get into a taxi. Once the journey begins, you are charged an initial flat amount, followed by a charge per mile or kilometer traveled. This illustrates an arithmetic sequence: for every kilometer you are charged a particular fixed (constant) rate plus the starting rate.
What is Nth Term in Arithmetic Progression?
The nth term is a formula containing the letter n that allows you to find any term of a sequence without having to work through the terms one by one. Because the term number is represented by the letter ‘n’, we can simply substitute 50 into the formula to find the 50th term.
How do you Solve Arithmetic Progression Problems?
Arithmetic progression problems are solved by identifying the first term a, the common difference d, and the term number n, and then applying the AP formulas:
- The common difference of an AP: d = a₂ − a₁
- The nth term of an AP: aₙ = a + (n − 1)d
- The sum of the first n terms of an AP: Sₙ = n/2 [2a + (n − 1)d]
(PhysOrg.com) -- British space engineers working for a space company in Stevenage, England, have designed a "gravity tractor" spacecraft to deflect any asteroids threatening to collide with Earth. The announcement comes only weeks after an asteroid collision scar around the size of Earth was detected on Jupiter.
A collision with an asteroid is a rare event, but scientists believe it is inevitable that sooner or later an asteroid will come close enough to be a real threat. In fact, in 2004 an asteroid called Apophis caused alarm when scientists predicted there was a 1-in-37 chance of it hitting Earth in 2029, the greatest threat in recorded history. They later revised their figures, but it could still be on course to collide in 2036. The US space agency, NASA, estimates there are at least 1,000 "potentially hazardous asteroids."
NASA is so concerned about the threat it has set up a monitoring program to track every space object that could be an asteroid on a collision course. They are so far tracking over 6,000 asteroids whose orbits bring them close to Earth, but there are an estimated 100,000 asteroids large enough to wipe out a city.
A collision could be catastrophic, depending on how large the asteroid is and where it hits. A direct hit to a city by even a relatively small asteroid the size of a football field, for example, could completely destroy the city and kill millions of people. Many more could be killed by tsunamis triggered by the impact, and by dust and burning material thrown up into the atmosphere after the collision.
The engineers, led by Dr Ralph Cordey, head of exploration and business at EADS Astrium, a British space company, have designed what they call a "gravity tractor", a ten-tonne spacecraft around 100 feet long that could provide a practical way of averting a collision with Earth.
The device would be launched as soon as an asteroid was found to be on course to crash into the planet, and would fly alongside it at a distance of about 160 feet away. The craft could divert an asteroid up to 430 yards in diameter, and an impact with an asteroid this size would release around 100,000 times the energy of the bomb dropped on Hiroshima in 1945.
The gravity tractor is designed to draw the asteroid towards itself by exerting a small gravitational force on it. The spacecraft would then gradually steer the asteroid into an orbit away from Earth.
The craft would use four ion thrusters, which are low energy and efficient, of the type commonly used on deep space probes. The ion thrusters enable the craft to adjust its position relative to the asteroid. The gravitational pull exerted by the spacecraft on the asteroid would be enough to nudge the rock into a different, and less dangerous, trajectory.
The process of steering the asteroid away from a collision course would take several years, with the craft changing the angle of trajectory by only a fraction of an inch over 15 years, but that is enough change to divert an asteroid. The spacecraft would need to be launched at least 15 (preferably 20) years before the predicted collision to give it time to adjust the asteroid's trajectory away from Earth.
The design team say the gravity tractor could be built fairly quickly with existing technologies, although a prototype has not yet been built. They have planned the details of the mission, and expect the cost could be shared by a number of governments if an asteroid on track to hit Earth was discovered, and international agreements would need to be drawn up.
NASA published a paper earlier this year on the feasibility of using a gravity tractor for this purpose, and they concluded it could be extremely effective if there was enough warning. With scientists saying the asteroid Apophis could possibly be on a course to collide with Earth in 2036, perhaps we do have enough warning.
© 2009 PhysOrg.com
This lesson defines and compares the National Debt with the National Deficit. Students will discover the differences between the two and look at current trends. Students will examine the amount of per-capita debt and see how much the national debt increases every day or two, despite recent budget surpluses.
Students will visit “A Citizen’s Guide to the Federal Budget,” and use the federal government web site to obtain information which will help them understand basic information about the budget of the United States Government for the current fiscal year.
The seasonally adjusted rate of change in the consumer price index during the month of September 2002 was 0.2 percent (an increase of two-tenths of one percent). The rate of increase in the consumer price index over the past twelve months was 1.5 percent. In September, the core consumer price index, which excludes energy and food prices, increased by 0.1 percent.
The following lessons come from the Council for Economic Education's library of publications. Clicking the publication title or image will take you to the Council for Economic Education Store for more detailed information.
Teaching Financial Crises is an eight lesson resource that provides an organizing framework in which to contextualize all of the media attention that has been paid to the recent financial crisis, as well as put it in a historical context. The current events stories, opinion pieces, and other popular media pieces that are today in great supply have generally not been connected to the educational objectives, historical analysis, and economic processes and concepts that are used in the high school classroom. In Teaching Financial Crises, teachers will find a non-partisan and non-ideological resource to help them simplify and offer balanced perspectives on this challenging subject matter.
6 out of 9 lessons from this publication relate to this EconEdLink lesson.
This publication contains complete instructions for teaching the lessons in Capstone. When combined with a textbook, Capstone provides activities for a complete high school economics course. 45 exemplary lessons help students learn to apply economic reasoning to a wide range of real-world subjects.
3 out of 45 lessons from this publication relate to this EconEdLink lesson.
This revised edition features simulations, role plays, small-group discussions and other active-learning instructional activities to help students explore economic concepts through real-life applications.
3 out of 21 lessons from this publication relate to this EconEdLink lesson. |
Evolution can take place by anagenesis, in which changes occur within a lineage, or by cladogenesis, in which a lineage splits into two or more separate lines. Anagenetic evolution has doubled the size of the human cranium over the course of two million years; in the lineage of the horse it has reduced the number of toes from four to one. Cladogenetic evolution has produced the extraordinary diversity of the living world, with its more than two million species of animals, plants, fungi, and microorganisms.
The most essential cladogenetic function is speciation, the process by which one species splits into two or more species. Because species are reproductively isolated from one another, they are independent evolutionary units; that is, evolutionary changes occurring in one species are not shared with other species. Over time, species diverge more and more from one another as a consequence of anagenetic evolution. Descendant lineages of two related species that existed millions of years ago may now be classified into quite different biological categories, such as different genera or even different families.
The evolution of all living organisms, or of a subset of them, can be seen as a tree, with branches that divide into two or more as time progresses. Such trees are called phylogenies. Their branches represent evolving lineages, some of which eventually die out while others persist in themselves or in their derived lineages down to the present time. Evolutionists are interested in the history of life and hence in the topology, or configuration, of phylogenies. They are concerned as well with the nature of the anagenetic changes within lineages and with the timing of the events.
Phylogenetic relationships are ascertained by means of several complementary sources of evidence. First, there are the discovered remnants of organisms that lived in the past, the fossil record, which provides definitive evidence of relationships between some groups of organisms. The fossil record, however, is far from complete and is often seriously deficient. Second, information about phylogeny comes from comparative studies of living forms. Comparative anatomy contributed the most information in the past, although additional knowledge came from comparative embryology, cytology, ethology, biogeography, and other biological disciplines. In recent years the comparative study of the so-called informational macromolecules—proteins and nucleic acids, whose specific sequences of constituents carry genetic information—has become a powerful tool for the study of phylogeny (see below DNA and protein as informational macromolecules).
Morphological similarities between organisms have probably always been recognized. In ancient Greece Aristotle and later his followers and those of Plato, particularly Porphyry, classified organisms (as well as inanimate objects) on the basis of similarities. The Aristotelian system of classification was further developed by some medieval Scholastic philosophers, notably Albertus Magnus and Thomas Aquinas. The modern foundations of biological taxonomy, the science of classification of living and extinct organisms, were laid in the 18th century by the Swedish botanist Carolus Linnaeus and the French botanist Michel Adanson. The French naturalist Lamarck dedicated much of his work to the systematic classification of organisms. He proposed that their similarities were due to ancestral relationships—in other words, to the degree of evolutionary proximity.
The modern theory of evolution provides a causal explanation of the similarities between living things. Organisms evolve by a process of descent with modification. Changes, and therefore differences, gradually accumulate over the generations. The more recent the last common ancestor of a group of organisms, the less their differentiation; similarities of form and function reflect phylogenetic propinquity. Accordingly, phylogenetic affinities can be inferred on the basis of relative similarity.
A distinction has to be made between resemblances due to propinquity of descent and those due only to similarity of function. As discussed above in the section The evidence for evolution: Structural similarities, correspondence of features in different organisms that is due to inheritance from a common ancestor is called homology. The forelimbs of humans, whales, dogs, and bats are homologous. The skeletons of these limbs are all constructed of bones arranged according to the same pattern because they derive from a common ancestor with similarly arranged forelimbs. Correspondence of features due to similarity of function but not related to common descent is termed analogy. The wings of birds and of flies are analogous. Their wings are not modified versions of a structure present in a common ancestor but rather have developed independently as adaptations to a common function, flying. The similarities between the wings of bats and birds are partially homologous and partially analogous. Their skeletal structure is homologous, due to common descent from the forelimb of a reptilian ancestor; but the modifications for flying are different and independently evolved, and in this respect they are analogous.
Features that become more rather than less similar through independent evolution are said to be convergent. Convergence is often associated with similarity of function, as in the evolution of wings in birds, bats, and flies. The shark (a fish) and the dolphin (a mammal) are much alike in external morphology; their similarities are due to convergence, since they have evolved independently as adaptations to aquatic life.
Taxonomists also speak of parallel evolution. Parallelism and convergence are not always clearly distinguishable. Strictly speaking, convergent evolution occurs when descendants resemble each other more than their ancestors did with respect to some feature. Parallel evolution implies that two or more lineages have changed in similar ways, so that the evolved descendants are as similar to each other as their ancestors were. The evolution of marsupials in Australia, for example, paralleled the evolution of placental mammals in other parts of the world. There are Australian marsupials resembling true wolves, cats, mice, squirrels, moles, groundhogs, and anteaters. These placental mammals and the corresponding Australian marsupials evolved independently but in parallel lines by reason of their adaptation to similar ways of life. Some resemblances between a true anteater (genus Myrmecophaga) and a marsupial anteater, or numbat (Myrmecobius), are due to homology—both are mammals. Others are due to analogy—both feed on ants.
Parallel and convergent evolution are also common in plants. New World cacti and African euphorbias, or spurges, are alike in overall appearance although they belong to separate families. Both are succulent, spiny, water-storing plants adapted to the arid conditions of the desert. Their corresponding morphologies have evolved independently in response to similar environmental challenges.
Homology can be recognized not only between different organisms but also between repetitive structures of the same organism. This has been called serial homology. There is serial homology, for example, between the arms and legs of humans, between the seven cervical vertebrae of mammals, and between the branches or leaves of a tree. The jointed appendages of arthropods are elaborate examples of serial homology. Crayfish have 19 pairs of appendages, all built according to the same basic pattern but serving diverse functions—sensing, chewing, food handling, walking, mating, egg carrying, and swimming. Although serial homologies are not useful in reconstructing the phylogenetic relationships of organisms, they are an important dimension of the evolutionary process.
Relationships in some sense akin to those between serial homologs exist at the molecular level between genes and proteins derived from ancestral gene duplications. The genes coding for the various hemoglobin chains are an example. About 500 million years ago a chromosome segment carrying the gene coding for hemoglobin became duplicated, so that the genes in the different segments thereafter evolved in somewhat different ways, one eventually giving rise to the modern gene coding for the α hemoglobin chain, the other for the β chain. The β chain gene became duplicated again about 200 million years ago, giving rise to the γ hemoglobin chain, a normal component of fetal hemoglobin (hemoglobin F). The genes for the α, β, γ, and other hemoglobin chains are homologous; similarities in their nucleotide sequences occur because they are modified descendants of a single ancestral sequence.
There are two ways of comparing homology between hemoglobins. One is to compare the same hemoglobin chain—for instance, the α chain—in different species of animals. The degree of divergence between the α chains reflects the degree of the evolutionary relationship between the organisms, because the hemoglobin chains have evolved independently of one another since the time of divergence of the lineages leading to the present-day organisms. A second way is to make comparisons between, say, the α and β chains of a single species. The degree of divergence between the different globin chains reflects the degree of relationship between the genes coding for them. The different globins have evolved independently of each other since the time of duplication of their ancestral genes. Comparisons between homologous genes or proteins within a given organism provide information about the phylogenetic history of the genes and hence about the historical sequence of the gene duplication events.
Whether similar features in different organisms are homologous or analogous—or simply accidental—cannot always be decided unambiguously, but the distinction must be made in order to determine phylogenetic relationships. Moreover, the degrees of homology must be quantified in some way so as to determine the propinquity of common descent between species. Difficulties arise here as well. In the case of forelimbs, it is not clear whether the homologies are greater between human and bird than between human and reptile, or between human and reptile than between human and bat. The fossil record sometimes provides the appropriate information, even though the record is deficient. Fossil evidence must be examined together with the evidence from comparative studies of living forms and with the quantitative estimates provided by comparative studies of proteins and nucleic acids.
The fossil record indicates that morphological evolution is by and large a gradual process. Major evolutionary changes are usually due to a building-up over the ages of relatively small changes. But the fossil record is discontinuous. Fossil strata are separated by sharp boundaries; accumulation of fossils within a geologic deposit (stratum) is fairly constant over time, but the transition from one stratum to another may involve gaps of tens of thousands of years. Whereas the fossils within a stratum exhibit little morphological variation, new species—characterized by small but discontinuous morphological changes—typically appear at the boundaries between strata. That is not to say that the transition from one stratum to another always involves sudden changes in morphology; on the contrary, fossil forms often persist virtually unchanged through several geologic strata, each representing millions of years.
The apparent morphological discontinuities of the fossil record are often attributed by paleontologists to the discontinuity of the sediments—that is, to the substantial time gaps encompassed in the boundaries between strata. The assumption is that, if the fossil deposits were more continuous, they would show a more gradual transition of form. Even so, morphological evolution would not always keep progressing gradually, because some forms, at least, remain unchanged for extremely long times. Examples are the lineages known as “living fossils”—for instance, the lamp shell Lingula, a genus of brachiopod (a phylum of shelled invertebrates) that appears to have remained essentially unchanged since the Ordovician Period, some 450 million years ago; or the tuatara (Sphenodon punctatus), a reptile that has shown little morphological evolution for nearly 200 million years, since the early Mesozoic.
Some paleontologists have proposed that the discontinuities of the fossil record are not artifacts created by gaps in the record but rather reflect the true nature of morphological evolution, which happens in sudden bursts associated with the formation of new species. The lack of morphological evolution, or stasis, of lineages such as Lingula and Sphenodon is in turn due to lack of speciation within those lineages. The proposition that morphological evolution is jerky, with most morphological change occurring during the brief speciation events and virtually no change during the subsequent existence of the species, is known as the punctuated equilibrium model.
Whether morphological evolution in the fossil record is predominantly punctuational or gradual is a much-debated question. The imperfection of the record makes it unlikely that the issue will be settled in the foreseeable future. Intensive study of a favourable and abundant set of fossils may be expected to substantiate punctuated or gradual evolution in particular cases. But the argument is not about whether only one or the other pattern ever occurs; it is about their relative frequency. Some paleontologists argue that morphological evolution is in most cases gradual and only rarely jerky, whereas others think the opposite is true.
Much of the problem is that gradualness or jerkiness is in the eye of the beholder. Consider the evolution of shell rib strength (the ratio of rib height to rib width) within a lineage of fossil brachiopods of the genus Eocelia. Results of the analysis of an abundant sample of fossils in Wales from near the beginning of the Devonian Period are shown in the figure. One possible interpretation of the data is that rib strength changed little or not at all from 415 million to 413 million years ago; rapid change ensued for the next 1 million years, followed by virtual stasis from 412 million to 407 million years ago; and then another short burst of change occurred about 406 million years ago, followed by a final period of stasis. On the other hand, the same record may be interpreted as not particularly punctuated but rather a gradual process, with the rate of change somewhat greater at particular times.
The proponents of the punctuated equilibrium model propose not only that morphological evolution is jerky but also that it is associated with speciation events. They argue that phyletic evolution—that is, evolution along lineages of descent—proceeds at two levels. First, there is continuous change through time within a population. This consists largely of gene substitutions prompted by natural selection, mutation, genetic drift, and other genetic processes that operate at the level of the individual organism. The punctualists maintain that this continuous evolution within established lineages rarely, if ever, yields substantial morphological changes in species. Second, they say, there is the process of origination and extinction of species, in which most morphological change occurs. According to the punctualist model, evolutionary trends result from the patterns of origination and extinction of species rather than from evolution within established lineages.
As discussed above in the section The origin of species, speciation involves the development of reproductive isolation between populations previously able to interbreed. Paleontologists discriminate between species by their different morphologies as preserved in the fossil record, but fossils cannot provide evidence of the development of reproductive isolation—new species that are reproductively isolated from their ancestors are often morphologically indistinguishable from them. Speciation as it is seen by paleontologists always involves substantial morphological change. This situation creates an insuperable difficulty for resolving the question of whether morphological evolution is always associated with speciation events. If speciation is defined as the evolution of reproductive isolation, the fossil record provides no evidence that an association between speciation and morphological change is necessary. But if new species are identified in the fossil record by morphological changes, then all such changes will occur concomitantly with the origination of new species.
The current diversity of life is the balance between the species that have arisen through time and those that have become extinct. Paleontologists observe that organisms have continuously changed since the Cambrian Period, more than 500 million years ago, from which abundant animal fossil remains are known. The division of geologic history into a succession of eras and periods (see figure) is hallmarked by major changes in plant and animal life—the appearance of new sorts of organisms and the extinction of others. Paleontologists distinguish between background extinction, the steady rate at which species disappear through geologic time, and mass extinctions, the episodic events in which large numbers of species become extinct over time spans short enough to appear almost instantaneous on the geologic scale.
Best known among mass extinctions is the one that occurred at the end of the Cretaceous Period, when the dinosaurs and many other marine and land animals disappeared. Most scientists believe that the Cretaceous mass extinction was provoked by the impact of an asteroid or comet on the tip of the Yucatán Peninsula in southeastern Mexico 65 million years ago. The object’s impact caused an enormous dust cloud, which greatly reduced the Sun’s radiation reaching Earth, with a consequent drastic drop in temperature and other adverse conditions. Among animals, about 76 percent of species, 47 percent of genera, and 16 percent of families became extinct. Although the dinosaurs vanished, turtles, snakes, lizards, crocodiles, and other reptiles, as well as some mammals and birds, survived. Mammals that lived prior to the event were small and mostly nocturnal, but during the ensuing Paleogene and Neogene periods they experienced an explosive diversification in size and morphology, occupying ecological niches vacated by the dinosaurs. Most of the orders and families of mammals now in existence originated in the first 10 million–20 million years after the dinosaurs’ extinction. Birds also greatly diversified at that time.
Several other mass extinctions have occurred since the Cambrian. The most catastrophic happened at the end of the Permian Period, about 251 million years ago, when 95 percent of marine species, 82 percent of genera, and 51 percent of families of animals became extinct. (See also Triassic Period: Permian-Triassic extinctions.) Other large mass extinctions occurred at or near the end of the Ordovician (about 444 million years ago, 85 percent of marine species extinct), Devonian (about 359 million years ago, 70–80 percent of species extinct), and Triassic (about 200 million years ago, nearly 80 percent of species extinct). Changes of climate and chemical composition of the atmosphere appear to have caused these mass extinctions; there is no convincing evidence that they resulted from cosmic impacts. Like other mass extinctions, they were followed by the origin or rapid diversification of various kinds of organisms. The first mammals and dinosaurs appeared after the late Permian extinction, and the first vascular plants after the Late Ordovician extinction.
Background extinctions result from ordinary biological processes, such as competition between species, predation, and parasitism. When two species compete for very similar resources—say, the same kinds of seeds or fruits—one may become extinct, although often they will displace one another by dividing the territory or by specializing in slightly different foods, such as seeds of a different size or kind. Ordinary physical and climatic changes also account for background extinctions—for example, when a lake dries out or a mountain range rises or erodes.
New species come about by the processes discussed in previous sections. These processes are largely gradual, yet the history of life shows major transitions in which one kind of organism becomes a very different kind. The earliest organisms were prokaryotes, or bacteria-like cells, whose hereditary material is not segregated into a nucleus. Eukaryotes have their DNA organized into chromosomes that are membrane-bound in the nucleus, have other organelles inside their cells, and reproduce sexually. Eventually, eukaryotic multicellular organisms appeared, in which there is a division of function among cells—some specializing in reproduction, others becoming leaves, trunks, and roots in plants or different organs and tissues such as muscle, nerve, and bone in animals. Social organization of individuals in a population is another way of achieving functional division, which may be quite fixed, as in ants and bees, or more flexible, as in cattle herds or primate groups.
Because of the gradualness of evolution, immediate descendants differ little, and then mostly quantitatively, from their ancestors. But gradual evolution may amount to large differences over time. The forelimbs of mammals are normally adapted for walking, but they are adapted for shoveling earth in moles and other mammals that live mostly underground, for climbing and grasping in arboreal monkeys and apes, for swimming in dolphins and whales, and for flying in bats. The forelimbs of reptiles became wings in their bird descendants. Feathers appear to have served first for regulating temperature but eventually were co-opted for flying and became incorporated into wings.
Eyes, which serve as another example, also evolved gradually and achieved very different configurations, all serving the function of seeing. Eyes have evolved independently at least 40 times. Because sunlight is a pervasive feature of Earth’s environment, it is not surprising that organs have evolved that take advantage of it. The simplest “organ” of vision occurs in some single-celled organisms that have enzymes or spots sensitive to light (see eyespot), which helps them move toward the surface of their pond, where they feed on the algae growing there by photosynthesis. Some multicellular animals exhibit light-sensitive spots on their epidermis. Further steps—deposition of pigment around the spot, configuration of cells into a cuplike shape, thickening of the epidermis leading to the development of a lens, development of muscles to move the eyes and nerves to transmit optical signals to the brain—all led to the highly developed eyes of vertebrates (see eye, human) and cephalopods (octopuses and squids) and to the compound eyes of insects.
While the evolution of forelimbs—for walking—into the wings of birds or the arms and hands of primates may seem more like changes of function, the evolution of eyes exemplifies gradual advancement of the same function—seeing. In all cases, however, the process is impelled by natural selection’s favouring individuals exhibiting functional advantages over others of the same species. Examples of functional shifts are many and diverse. Some transitions at first may seem unlikely because of the difficulty in identifying which possible functions may have been served during the intermediate stages. These cases are eventually resolved with further research and the discovery of intermediate fossil forms. An example of a seemingly unlikely transition is described above in the section The fossil record—namely, the transformation of bones found in the reptilian jaw into the hammer and anvil of the mammalian ear.
Starfish are radially symmetrical, but most animals are bilaterally symmetrical—the parts of the left and right halves of their bodies tend to correspond in size, shape, and position (see symmetry). Some bilateral animals, such as millipedes and shrimps, are segmented (metameric); others, such as frogs and humans, have a front-to-back (head-to-foot) body plan, with head, thorax, abdomen, and limbs, but they lack the repetitive, nearly identical segments of metameric animals. There are other basic body plans, such as those of sponges, clams, and jellyfish, but their total number is not large—less than 40.
The fertilized egg, or zygote, is a single cell, more or less spherical, that does not exhibit polarity such as anterior and posterior ends or dorsal and ventral sides. Embryonic development (see animal development) is the process of growth and differentiation by which the single-celled egg becomes a multicellular organism.
The determination of body plan from this single cell and the construction of specialized organs, such as the eye, are under the control of regulatory genes. Most notable among these are the Hox genes, which produce proteins (transcription factors) that bind with other genes and thus determine their expression—that is, when they will act. The Hox genes embody spatial and temporal information. By means of their encoded proteins, they activate or repress the expression of other genes according to the position of each cell in the developing body, determining where limbs and other body parts will grow in the embryo. Since their discovery in the early 1980s, the Hox genes have been found to play crucial roles from the first steps of development, such as establishing anterior and posterior ends in the zygote, to much later steps, such as the differentiation of nerve cells.
The critical region of the Hox proteins is encoded by a sequence of about 180 consecutive nucleotides (called the homeobox). The corresponding protein region (the homeodomain), about 60 amino acids long, binds to a short stretch of DNA in the regulatory region of the target genes. Genes containing homeobox sequences are found not only in animals but also in other eukaryotes such as fungi and plants.
All animals have Hox genes, which may be as few as 1, as in sponges, or as many as 38, as in humans and other mammals. Hox genes are clustered in the genome. Invertebrates have only one cluster with a variable number of genes, typically fewer than 13. The common ancestor of the chordates (which include the vertebrates) probably had only one cluster of Hox genes, which may have numbered 13. Chordates may have one or more clusters, but not all 13 genes remain in every cluster. The marine animal amphioxus, a primitive chordate, has a single array of 10 Hox genes. Humans, mice, and other mammals have 38 Hox genes arranged in four clusters, three with 9 genes each and one with 11 genes. The set of genes varies from cluster to cluster, so that out of the 13 in the original cluster, genes designated 1, 2, 3, and 7 may be missing in one set, whereas 10, 11, 12, and 13 may be missing in a different set.
The four clusters of Hox genes found in mammals originated by duplication of the whole original cluster and retain considerable similarity between clusters. The 13 genes in the original cluster also themselves originated by repeated duplication, starting from a single Hox gene as found in the sponges. These first duplications happened very early in animal evolution, in the Precambrian. The genes within a cluster retain detectable similarity, but they differ more from one another than they differ from the corresponding, or homologous, gene in any of the other sets. There is a puzzling correspondence between the position of the Hox genes in a cluster along the chromosome and the patterning of the body—genes located upstream (anteriorly in the direction in which genes are transcribed) in the cluster are expressed earlier and more anteriorly in the body, while those located downstream (posteriorly in the direction of transcription) are expressed later in development and predominantly affect the posterior body parts.
Researchers demonstrated the evolutionary conservation of the Hox genes by means of clever manipulations of genes in laboratory experiments. For example, the ey gene that determines the formation of the compound eye in Drosophila vinegar flies was activated in the developing embryo in various parts of the body, yielding experimental flies with anatomically normal eyes on the legs, wings, and other structures. The evolutionary conservation of the Hox genes may be the explanation for the puzzling observation that most of the diversity of body plans within major groups of animals arose early in the evolution of the group. The multicellular animals (metazoans) first found as fossils in the Cambrian already demonstrate all the major body plans found during the ensuing 540 million years, as well as four to seven additional body plans that became extinct and seem bizarre to observers today. Similarly, most of the classes found within a phylum appear early in the evolution of the phylum. For example, all living classes of arthropods are already found in the Cambrian, with body plans essentially unchanged thereafter; in addition, the Cambrian contains a few strange kinds of arthropods that later became extinct.
The advances of molecular biology have made possible the comparative study of proteins and the nucleic acids, DNA and RNA. DNA is the repository of hereditary (evolutionary and developmental) information. The relationship of proteins to DNA is so immediate that they closely reflect the hereditary information. This reflection is not perfect, because the genetic code is redundant, and, consequently, some differences in the DNA do not yield differences in the proteins. Moreover, this reflection is not complete, because a large fraction of DNA (about 90 percent in many organisms) does not code for proteins. Nevertheless, proteins are so closely related to the information contained in DNA that they, as well as nucleic acids, are called informational macromolecules.
Nucleic acids and proteins are linear molecules made up of sequences of units—nucleotides in the case of nucleic acids, amino acids in the case of proteins—which retain considerable amounts of evolutionary information. Comparing two macromolecules establishes the number of their units that are different. Because evolution usually occurs by changing one unit at a time, the number of differences is an indication of the recency of common ancestry. Changes in evolutionary rates may create difficulties in interpretation, but macromolecular studies have three notable advantages over comparative anatomy and the other classical disciplines. One is that the information is more readily quantifiable. The number of units that are different is readily established when the sequence of units is known for a given macromolecule in different organisms. The second advantage is that comparisons can be made even between very different sorts of organisms. There is very little that comparative anatomy can say when organisms as diverse as yeasts, pine trees, and human beings are compared, but there are homologous macromolecules that can be compared in all three. The third advantage is multiplicity. Each organism possesses thousands of genes and proteins, which all reflect the same evolutionary history. If the investigation of one particular gene or protein does not resolve the evolutionary relationship of a set of species, additional genes and proteins can be investigated until the matter has been settled.
Informational macromolecules provide information not only about the branching of lineages from common ancestors (cladogenesis) but also about the amount of genetic change that has occurred in any given lineage (anagenesis). It might seem at first that quantifying anagenesis for proteins and nucleic acids would be impossible, because it would require comparison of molecules from organisms that lived in the past with those from living organisms. Organisms of the past are sometimes preserved as fossils, but their DNA and proteins have largely disintegrated. Nevertheless, comparisons between living species provide information about anagenesis.
The following is an example of such comparison: Two living species, C and D, have a common ancestor, the extinct species B (see the left side of the figure). If C and D were found to differ by four amino acid substitutions in a single protein, then it could tentatively be assumed that two substitutions (four total changes divided by two species) had taken place in the evolutionary lineage of each species. This assumption, however, could be invalidated by the discovery of a third living species, E, that is related to C, D, and their ancestor, B, through an earlier ancestor, A. The number of amino acid differences between the protein molecules of the three living species may be as follows:
The left side of the figure proposes a phylogeny of the three living species, making it possible to estimate the number of amino acid substitutions that have occurred in each lineage. Let x denote the number of differences between B and C, y denote the differences between B and D, and z denote the differences between A and B as well as A and E. The following three equations can be produced:
Solving the equations yields x = 3, y = 1, and z = 8.
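The distance table and the three equations themselves are not reproduced here, but the pattern of such a calculation can be sketched in Python. Purely for illustration, the pairwise differences are assumed to be C–D = 4, C–E = 11, and D–E = 9; these are hypothetical values chosen only so that the resulting system x + y = 4, x + z = 11, y + z = 9 reproduces the solution quoted above.

```python
import numpy as np

# Hypothetical pairwise differences (see the note above):
#   x + y = 4
#   x + z = 11
#   y + z = 9
A = np.array([[1, 1, 0],
              [1, 0, 1],
              [0, 1, 1]], dtype=float)
b = np.array([4, 11, 9], dtype=float)

x, y, z = np.linalg.solve(A, b)
print(x, y, z)   # 3.0 1.0 8.0
```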
As a concrete example, consider the protein cytochrome c, involved in cell respiration. The sequence of amino acids in this protein is known for many organisms, from bacteria and yeasts to insects and humans; in animals cytochrome c consists of 104 amino acids. When the amino acid sequences of humans and rhesus monkeys are compared, they are found to be different at position 66 (isoleucine in humans, threonine in rhesus monkeys) but identical at the other 103 positions. When humans are compared with horses, 12 amino acid differences are found, but, when horses are compared with rhesus monkeys, there are only 11 amino acid differences. Even without knowing anything else about the evolutionary history of mammals, one would conclude that the lineages of humans and rhesus monkeys diverged from each other much more recently than they diverged from the horse lineage. Moreover, it can be concluded that the amino acid difference between humans and rhesus monkeys must have occurred in the human lineage after its separation from the rhesus monkey lineage (see the right side of the figure).
Evolutionary trees are models that seek to reconstruct the evolutionary history of taxa—i.e., species or other groups of organisms, such as genera, families, or orders. The trees embrace two kinds of information related to evolutionary change, cladogenesis and anagenesis. The figure can be used to illustrate both kinds. The branching relationships of the trees reflect the relative relationships of ancestry, or cladogenesis. Thus, in the right side of the figure, humans and rhesus monkeys are seen to be more closely related to each other than either is to the horse. Stated another way, this tree shows that the last common ancestor to all three species lived in a more remote past than the last common ancestor to humans and monkeys.
Evolutionary trees may also indicate the changes that have occurred along each lineage, or anagenesis. Thus, in the evolution of cytochrome c since the last common ancestor of humans and rhesus monkeys (again, the right side of the figure), one amino acid changed in the lineage going to humans but none in the lineage going to rhesus monkeys. Similarly, the left side of the figure shows that three amino acid changes occurred in the lineage from B to C but only one in the lineage from B to D.
There exist several methods for constructing evolutionary trees. Some were developed for interpreting morphological data, others for interpreting molecular data; some can be used with either kind of data. The main methods currently in use are called distance, parsimony, and maximum likelihood.
A “distance” is the number of differences between two taxa. The differences are measured with respect to certain traits (i.e., morphological data) or to certain macromolecules (primarily the sequence of amino acids in proteins or the sequence of nucleotides in DNA or RNA). The two trees illustrated in the figure were obtained by taking into account the distance, or number of amino acid differences, between three organisms with respect to a particular protein. The amino acid sequence of a protein contains more information than is reflected in the number of amino acid differences. This is because in some cases the replacement of one amino acid by another requires no more than one nucleotide substitution in the DNA that codes for the protein, whereas in other cases it requires at least two nucleotide changes. The table shows the minimum number of nucleotide differences in the genes of 20 separate species that are necessary to account for the amino acid differences in their cytochrome c. An evolutionary tree based on the data in the table, showing the minimum numbers of nucleotide changes in each branch, is illustrated in the complementary figure.
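As a minimal illustration of the "distance" idea (the sequences below are made-up toy strings, not real cytochrome c data), a Python sketch that counts the positions at which two aligned sequences differ and lists the pairwise distances:

```python
from itertools import combinations

def differences(seq_a, seq_b):
    # Number of positions at which two aligned sequences differ
    assert len(seq_a) == len(seq_b)
    return sum(1 for x, y in zip(seq_a, seq_b) if x != y)

# Toy aligned sequences (hypothetical, for illustration only)
sequences = {
    "human":  "GDVEKGKKIF",
    "monkey": "GDVEKGKKIT",
    "horse":  "GDAEKGKKIV",
}

# Pairwise distance matrix
for a, b in combinations(sequences, 2):
    print(a, b, differences(sequences[a], sequences[b]))
```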
The relationships between species as shown in the figure correspond fairly well to the relationships determined from other sources, such as the fossil record. According to the figure, chickens are less closely related to ducks and pigeons than to penguins, and humans and monkeys diverged from the other mammals before the marsupial kangaroo separated from the nonprimate placentals. Although these examples are known to be erroneous relationships, the power of the method is apparent in that a single protein yields a fairly accurate reconstruction of the evolutionary history of 20 organisms that started to diverge more than one billion years ago.
Morphological data also can be used for constructing distance trees. The first step is to obtain a distance matrix, such as that making up the nucleotide differences table, but one based on a set of morphological comparisons between species or other taxa. For example, in some insects one can measure body length, wing length, wing width, number and length of wing veins, or another trait. The most common procedure to transform a distance matrix into a phylogeny is called cluster analysis. The distance matrix is scanned for the smallest distance element, and the two taxa involved (say, A and B) are joined at an internal node, or branching point. The matrix is scanned again for the next smallest distance, and the two new taxa (say, C and D) are clustered. The procedure is continued until all taxa have been joined. When a distance involves a taxon that is already part of a previous cluster (say, E and A), the average distance is obtained between the new taxon and the preexisting cluster (say, the average distance between E to A and E to B). This simple procedure, which can also be used with molecular data, assumes that the rate of evolution is uniform along all branches.
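A minimal sketch of this clustering procedure (average linkage, under the uniform-rate assumption) might look like the following; the four taxa and their distance matrix are hypothetical, not taken from the cytochrome c table.

from itertools import combinations

d = {("A", "B"): 2, ("A", "C"): 6, ("A", "D"): 7,
     ("B", "C"): 6, ("B", "D"): 7, ("C", "D"): 3}

def taxon_dist(x, y):
    return d[(x, y)] if (x, y) in d else d[(y, x)]

def cluster_dist(c1, c2):
    # average distance between all members of the two clusters
    pairs = [(x, y) for x in c1 for y in c2]
    return sum(taxon_dist(x, y) for x, y in pairs) / len(pairs)

clusters = [("A",), ("B",), ("C",), ("D",)]
while len(clusters) > 1:
    i, j = min(combinations(range(len(clusters)), 2),
               key=lambda ij: cluster_dist(clusters[ij[0]], clusters[ij[1]]))
    print("join", clusters[i], "and", clusters[j])
    merged = clusters[i] + clusters[j]
    clusters = [c for k, c in enumerate(clusters) if k not in (i, j)] + [merged]

Run on these made-up numbers, the procedure joins A with B first, then C with D, and finally the two clusters, exactly as the description above prescribes.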
Other distance methods (including the one used to construct the tree in the figure of the 20-organism phylogeny) relax the condition of uniform rate and allow for unequal rates of evolution along the branches. One of the most extensively used methods of this kind is called neighbour-joining. The method starts, as before, by identifying the smallest distance in the matrix and linking the two taxa involved. The next step is to remove these two taxa and calculate a new matrix in which their distances to other taxa are replaced by the distance between the node linking the two taxa and all other taxa. The smallest distance in this new matrix is used for making the next connection, which will be between two other taxa or between the previous node and a new taxon. The procedure is repeated until all taxa have been connected with one another by intervening nodes.
Maximum parsimony methods seek to reconstruct the tree that requires the fewest (i.e., most parsimonious) number of changes summed along all branches. This is a reasonable assumption, because it usually will be the most likely. But evolution may not necessarily have occurred following a minimum path, because the same change instead may have occurred independently along different branches, and some changes may have involved intermediate steps. Consider three species—C, D, and E. If C and D differ by two amino acids in a certain protein and either one differs by three amino acids from E, parsimony will lead to a tree with the structure shown in the left side of the figure illustrating the two simple phylogenies. It may be the case, however, that in a certain position at which C and D both have amino acid g while E has h, the ancestral amino acid was g. Amino acid g did not change in the lineage going to C but changed to h in a lineage going to the ancestor of D and E and then changed again, back to g, in the lineage going to D. The correct phylogeny would lead then from the common ancestor of all three species to C in one branch (in which no amino acid changes occurred), and to the last common ancestor of D and E in the other branch (in which g changed to h) with one additional change (from h to g) occurring in the lineage from this ancestor to E.
Not all evolutionary changes, even those that involve a single step, may be equally probable. For example, among the four nucleotide bases in DNA, cytosine (C) and thymine (T) are members of a family of related molecules called pyrimidines; likewise, adenine (A) and guanine (G) belong to a family of molecules called purines. A change within a DNA sequence from one pyrimidine to another (C ⇌ T) or from one purine to another (A ⇌ G), called a transition, is more likely to occur than a change from a purine to a pyrimidine or the converse (G or A ⇌ C or T), called a transversion. Parsimony methods take into account different probabilities of occurrence if they are known.
Maximum parsimony methods are related to cladistics, a very formalistic theory of taxonomic classification, extensively used with morphological and paleontological data. The critical feature in cladistics is the identification of derived shared traits, called synapomorphic traits. A synapomorphic trait is shared by some taxa but not others because the former inherited it from a common ancestor that acquired the trait after its lineage separated from the lineages going to the other taxa. In the evolution of carnivores, for example, domestic cats, tigers, and leopards are clustered together because of their possessing retractable claws, a trait acquired after their common ancestor branched off from the lineage leading to the dogs, wolves, and coyotes. It is important to ascertain that the shared traits are homologous rather than analogous. For example, mammals and birds, but not lizards, have a four-chambered heart. Yet birds are more closely related to lizards than to mammals; the four-chambered heart evolved independently in the bird and mammal lineages, by parallel evolution.
Maximum likelihood methods seek to identify the most likely tree, given the available data. They require that an evolutionary model be identified, which would make it possible to estimate the probability of each possible individual change. For example, as is mentioned in the preceding section, transitions are more likely than transversions among DNA nucleotides, but a particular probability must be assigned to each. All possible trees are considered. The probabilities for each individual change are multiplied for each tree. The best tree is the one with the highest probability (or maximum likelihood) among all possible trees.
Maximum likelihood methods are computationally expensive when the number of taxa is large, because the number of possible trees (for each of which the probability must be calculated) grows factorially with the number of taxa. With 10 taxa, there are about 3.6 million possible trees; with 20 taxa, the number of possible trees is about 2 followed by 18 zeros (2 × 10^18). Even with powerful computers, maximum likelihood methods can be prohibitive if the number of taxa is large. Heuristic methods exist in which only a subsample of all possible trees is examined and thus an exhaustive search is avoided.
The statistical degree of confidence of a tree can be estimated for distance and maximum likelihood trees. The most common method is called bootstrapping. It consists of taking samples of the data by removing at least one data point at random and then constructing a tree for the new data set. This random sampling process is repeated hundreds or thousands of times. The bootstrap value for each node is defined by the percentage of cases in which all species derived from that node appear together in the trees. Bootstrap values above 90 percent are regarded as statistically strongly reliable; those below 70 percent are considered unreliable.
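The resampling step can be sketched as follows, using the common variant in which alignment columns are drawn with replacement; the sequences are hypothetical, and the tree-building step is only indicated by a comment.

import random

seqs = {"human": "MGDVEKGKKI", "monkey": "MGDVEKGKKT", "horse": "MGDAEKGKQI"}
n_sites = len(next(iter(seqs.values())))

def resample(seqs):
    # draw alignment columns at random, with replacement
    cols = [random.randrange(n_sites) for _ in range(n_sites)]
    return {name: "".join(s[c] for c in cols) for name, s in seqs.items()}

replicates = [resample(seqs) for _ in range(1000)]
# each replicate would now be turned into a distance matrix and a tree; the
# bootstrap value of a node is the percentage of replicate trees in which the
# same set of species descends from that node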
The methods for obtaining the nucleotide sequences of DNA have enormously improved since the 1980s and have become largely automated. Many genes have been sequenced in numerous organisms, and the complete genome has been sequenced in various species ranging from humans to viruses. The use of DNA sequences has been particularly rewarding in the study of gene duplications. The genes that code for the hemoglobins in humans and other mammals provide a good example.
Knowledge of the amino acid sequences of the hemoglobin chains and of myoglobin, a closely related protein, has made it possible to reconstruct the evolutionary history of the duplications that gave rise to the corresponding genes. But direct examination of the nucleotide sequences in the genes coding for these proteins has shown that the situation is more complex, and also more interesting, than it appears from the protein sequences.
DNA sequence studies on human hemoglobin genes have shown that their number is greater than previously thought. Hemoglobin molecules are tetramers (molecules made of four subunits), consisting of two polypeptides (relatively short protein chains) of one kind and two of another kind. In embryonic hemoglobin E, one of the two kinds of polypeptide is designated ε; in fetal hemoglobin F, it is γ; in adult hemoglobin A, it is β; and in adult hemoglobin A2, it is δ. (Hemoglobin A makes up about 98 percent of human adult hemoglobin, and hemoglobin A2 about 2 percent). The other kind of polypeptide in embryonic hemoglobin is ζ; in both fetal and adult hemoglobin, it is α. The genes coding for the first group of polypeptides (ε, γ, β, and δ) are located on chromosome 11; the genes coding for the second group of polypeptides (ζ and α) are located on chromosome 16.
There are yet additional complexities. Two γ genes exist (known as Gγ and Aγ), as do two α genes (α1 and α2). Furthermore, there are two β pseudogenes (ψβ1 and ψβ2) and two α pseudogenes (ψα1 and ψα2), as well as a ζ pseudogene. These pseudogenes are very similar in nucleotide sequence to the corresponding functional genes, but they include terminating codons and other mutations that make it impossible for them to yield functional hemoglobins.
The similarity in the nucleotide sequence of the polypeptide genes, and pseudogenes, of both the α and β gene families indicates that they are all homologous—that is, that they have arisen through various duplications and subsequent evolution from a gene ancestral to all. Moreover, homology also exists between the nucleotide sequences that separate one gene from another. The evolutionary history of the genes for hemoglobin and myoglobin is summarized in the figure.
Cytochrome c consists of only 104 amino acids, encoded by 312 nucleotides. Nevertheless, this short protein stores enormous evolutionary information, which made possible the fairly good approximation, shown in the figure, to the evolutionary history of 20 very diverse species over a period longer than one billion years. But cytochrome c is a slowly evolving protein. Widely different species have in common a large proportion of the amino acids in their cytochrome c, which makes possible the study of genetic differences between organisms only remotely related. For the same reason, however, comparing cytochrome c molecules cannot determine evolutionary relationships between closely related species. For example, the amino acid sequence of cytochrome c in humans and chimpanzees is identical, although they diverged about 6 million years ago; between humans and rhesus monkeys, which diverged from their common ancestor 35 million to 40 million years ago, it differs by only one amino acid replacement.
Proteins that evolve more rapidly than cytochrome c can be studied in order to establish phylogenetic relationships between closely related species. Some proteins evolve very fast; the fibrinopeptides—small proteins involved in the blood-clotting process—are suitable for reconstructing the phylogeny of recently evolved species, such as closely related mammals. Other proteins evolve at intermediate rates; the hemoglobins, for example, can be used for reconstructing evolutionary history over a fairly broad range of time (see figure).
One great advantage of molecular evolution is its multiplicity, as noted above in the section DNA and protein as informational macromolecules. Within each organism are thousands of genes and proteins; these evolve at different rates, but every one of them reflects the same evolutionary events. Scientists can obtain greater and greater accuracy in reconstructing the evolutionary phylogeny of any group of organisms by increasing the number of genes investigated. The range of differences in the rates of evolution between genes opens up the opportunity of investigating different sets of genes for achieving different degrees of resolution in the tree, relying on slowly evolving ones for remote evolutionary events. Even genes that encode slowly evolving proteins can be useful for reconstructing the evolutionary relationships between closely related species, by examination of the redundant codon substitutions (nucleotide substitutions that do not change the encoded amino acids), the introns (noncoding DNA segments interspersed among the segments that code for amino acids), or other noncoding segments of the genes (such as the sequences that precede and follow the encoding portions of genes); these generally evolve much faster than the nucleotides that specify the amino acids.
One conspicuous attribute of molecular evolution is that differences between homologous molecules can readily be quantified and expressed, as, for example, proportions of nucleotides or amino acids that have changed. Rates of evolutionary change can therefore be more precisely established with respect to DNA or proteins than with respect to phenotypic traits of form and function. Studies of molecular evolution rates have led to the proposition that macromolecules may serve as evolutionary clocks.
It was first observed in the 1960s that the numbers of amino acid differences between homologous proteins of any two given species seemed to be nearly proportional to the time of their divergence from a common ancestor. If the rate of evolution of a protein or gene were approximately the same in the evolutionary lineages leading to different species, proteins and DNA sequences would provide a molecular clock of evolution. The sequences could then be used to reconstruct not only the sequence of branching events of a phylogeny but also the time when the various events occurred.
Consider, for example, the figure depicting the 20-organism phylogeny. If the substitution of nucleotides in the gene coding for cytochrome c occurred at a constant rate through time, one could determine the time elapsed along any branch of the phylogeny simply by examining the number of nucleotide substitutions along that branch. One would need only to calibrate the clock by reference to an outside source, such as the fossil record, that would provide the actual geologic time elapsed in at least one specific lineage.
The molecular evolutionary clock, of course, is not expected to be a metronomic clock, like a watch or other timepiece that measures time exactly, but a stochastic clock like radioactive decay. In a stochastic clock the probability of a certain amount of change is constant (for example, a given quantity of atoms of radium-226 is expected, through decay, to be reduced by half in 1,620 years), although some variation occurs in the actual amount of change. Over fairly long periods of time a stochastic clock is quite accurate. The enormous potential of the molecular evolutionary clock lies in the fact that each gene or protein is a separate clock. Each clock “ticks” at a different rate—the rate of evolution characteristic of a particular gene or protein—but each of the thousands and thousands of genes or proteins provides an independent measure of the same evolutionary events.
Evolutionists have found that the amount of variation observed in the evolution of DNA and proteins is greater than is expected from a stochastic clock—in other words, the clock is erratic. The discrepancies in evolutionary rates along different lineages are not excessively large, however. So it is possible, in principle, to time phylogenetic events with as much accuracy as may be desired, but more genes or proteins (about two to four times as many) must be examined than would be required if the clock was stochastically constant. The average rates obtained for several proteins taken together become a fairly precise clock, particularly when many species are studied and the evolutionary events involve long time periods (on the order of 50 million years or longer).
This conclusion is illustrated in the figure, which plots the cumulative number of nucleotide changes in seven proteins against the dates of divergence of 17 species of mammals (16 pairings) as determined from the fossil record. The overall rate of nucleotide substitution is fairly uniform. Some primate species (the pairs represented by triangular points in the figure) appear to have evolved at a slower rate than the average for the rest of the species. This anomaly occurs because the more recent the divergence of any two species, the more likely it is that the changes observed will depart from the average evolutionary rate. As the length of time increases, periods of rapid and slow evolution in any lineage are likely to cancel one another out.
Evolutionists have discovered, however, that molecular time estimates tend to be systematically older than estimates based on other methods and, indeed, to be older than the actual dates. This is a consequence of the statistical properties of molecular estimates, which are asymmetrically distributed. Because of chance, the number of molecular differences between two species may be larger or smaller than expected. But overestimation errors are unbounded, whereas underestimation errors are bounded, since they cannot be smaller than zero. Consequently, a graph of a typical distribution (see normal distribution) of estimates of the age when two species diverged, gathered from a number of different genes, is skewed from the normal bell shape, with a large number of estimates of younger age clustered together at one end and a long “tail” of older-age estimates trailing away toward the other end. The average of the estimated times thus will consistently overestimate the true date. The overestimation bias becomes greater when the rate of molecular evolution is slower, the sequences used are shorter, and the time becomes increasingly remote.
In the late 1960s it was proposed that at the molecular level most evolutionary changes are selectively “neutral,” meaning that they are due to genetic drift rather than to natural selection. Nucleotide and amino acid substitutions appear in a population by mutation. If alternative alleles (alternative DNA sequences) have identical fitness—if they are identically able to perform their function—changes in allelic frequency from generation to generation will occur only by genetic drift. Rates of allelic substitution will be stochastically constant—that is, they will occur with a constant probability for a given gene or protein. This constant rate is the mutation rate for neutral alleles.
According to the neutrality theory, a large proportion of all possible mutants at any gene locus are harmful to their carriers. These mutants are eliminated by natural selection, just as standard evolutionary theory postulates. The neutrality theory also agrees that morphological, behavioral, and ecological traits evolve under the control of natural selection. What is distinctive in the theory is the claim that at each gene locus there are several favourable mutants, equivalent to one another with respect to adaptation, so that they are not subject to natural selection among themselves. Which of these mutants increases or decreases in frequency in one or another species is purely a matter of chance, the result of random genetic drift over time.
Neutral alleles are those that differ so little in fitness that their frequencies change by random drift rather than by natural selection. This definition is formally stated as 4Nes < 1, where Ne is the effective size of the population and s is the selective coefficient that measures the difference in fitness between the alleles.
Assume that k is the rate of substitution of neutral alleles per unit time in the course of evolution. The time units can be years or generations. In a random-mating population with N diploid individuals, k = 2Nux, where u is the neutral mutation rate per gamete per unit time (time measured in the same units as for k) and x is the probability of ultimate fixation of a neutral mutant. The derivation of this equation is straightforward: there are 2Nu mutants per time unit, each with a probability x of becoming fixed. In a population of N diploid individuals there are 2N genes at each locus, all of them, if they are neutral, with an identical probability, x = 1/(2N), of becoming fixed. If this value of x is substituted in the equation above (k = 2Nux), the result is k = u. In terms of the theory, then, the rate of substitution of neutral alleles is precisely the rate at which the neutral alleles arise by mutation, independently of the number of individuals in the population or of any other factors.
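The argument can be checked numerically with illustrative values; note that the computed substitution rate k equals the mutation rate u whatever the population size.

u = 1e-8                      # neutral mutation rate per gamete per unit time
for N in (1_000, 1_000_000):  # diploid population sizes
    x = 1 / (2 * N)           # fixation probability of a new neutral mutant
    k = 2 * N * u * x         # substitution rate per unit time
    print(N, k)               # prints k = 1e-08 for both population sizes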
If the neutrality theory of molecular evolution is strictly correct, it will provide a theoretical foundation for the hypothesis of the molecular evolutionary clock, since the rate of neutral mutation would be expected to remain constant through evolutionary time and in different lineages. The number of amino acid or nucleotide differences between species would, therefore, simply reflect the time elapsed since they shared the last common ancestor.
Evolutionists debate whether the neutrality theory is valid. Tests of the molecular clock hypothesis indicate that the variations in the rates of molecular evolution are substantially larger than would be expected according to the neutrality theory. Other tests have revealed substantial discrepancies between the amount of genetic polymorphism found in populations of a given species and the amount predicted by the theory. But defenders of the theory argue that these discrepancies can be assimilated by modifying the theory somewhat—by assuming, for example, that alleles are not strictly neutral but their differences in selective value are quite small. Be that as it may, the neutrality theory provides a “null hypothesis,” or point of departure, for measuring molecular evolution. |
Structure of the Earth
Article curated by Ed Trollope
We still don't know a lot about what is right underneath our feet, let alone what's lurking miles down - and there are 6,300 km of rock and iron beneath us. The old picture of neatly distinct layers and a fairly simple system of convection is now long out of date - the Earth's crust, mantle and core all raise unanswered questions about their layers and how they interact. A greater number of improved seismometers may help us slowly piece together what's really going on below us.
Firstly, there's the problem of continental crust, which forms the land we live on. Continents sit on parts of the crust that are thicker than oceanic crust. It is not known how or when these continents formed; plate tectonics cannot currently explain it. As well as not knowing how much continental crust there used to be, there is also no answer to how much there may be in the future. Plate tectonics describes the movement and the continued destruction and creation of the Earth's crust. This continual recycling means that the evidence for what happened in the past is being destroyed, making it difficult to establish how and why continents formed.
Heat is carried from the Earth's hot core through the mantle via convection currents. Some evidence suggests that this convection may occur in layers, while contradicting evidence indicates that the currents act over the whole mantle. Observations of volcanoes show there are at least two distinct chemical reservoirs within the mantle. This, coupled with the fact that earthquakes in subduction zones always stop at the same specific depth in the mantle, provides the main evidence that convection may be layered. However, images of the interior provide strong evidence for whole-mantle convection. Either way, the correct theory must account for the differences observed within the mantle.
At the centre of our planet, heating this mantle, is the Earth's core. It is known that the core has a solid inner part and a liquid outer part, but much else is unknown. Seismic data show sideways motion within the core, with one side melting while the other crystallises. Scientists haven't been able to explain why this is, although it may be due to a large impact such as an asteroid strike. Seismic data also show that there may be variations in the structures of iron within the core. The extreme conditions make the core difficult to recreate experimentally in order to study the behaviour of its components, and determining all its constituents and the arrangement of atoms within its material remains a struggle for scientists. Establishing the structure of the core is important, as its dynamics are key to many other processes here on Earth. For example, Earth's internal heat is carried out from the core through the mantle in the convection process mentioned above, which is also not fully understood. The core is key in making the Earth the way it is: convection of metals in the outer core is responsible for providing Earth with its magnetic field.
The rotation of the Earth's core results in a magnetic field, but some of the mechanisms taking place are not entirely clear. It has, however, been shown that this magnetic field protects the Earth from cosmic rays, and without it life here would be impossible. Records show that on numerous occasions in the past the poles of this magnetic field have switched - North became South, and South became North. It is expected that this will happen again, but when the next switching event will occur is a matter of debate. Some believe it may already be underway, but most scientists are unsure. There is also a dispute over what the possible effects of a switching event would be; problems could occur if, during the switch, the magnetic field offers less protection against the Sun, for example.
The Earth's youth
The magnetic field of the young Earth is something of a paradox for scientists. Analysis of ancient rocks has yielded evidence that billions of years ago the Earth had a powerful magnetic field, and simulations of the Earth’s core had provided supporting evidence that the thermal conductivity of iron, under the intense conditions found at the centre of the Earth, was low enough to allow a strong magnetic field. But experiments carried out to simulate the conditions do not agree.
The Earth's history is full of mysteries, and we're not even sure where all the material that we now think of as the Earth came from. The vast majority of the Earth's surface - 70% - is covered with water. But where did this water come from? A large quantity of this water would have come from the early formation of the planet - but probably not all of it. Where exactly all the water comes from and how the amount of water on the Earth has varied in the past is still not clear.
Comets and other such objects are also thought to have formed in the early, colder solar system. It's possible that a lot of this water came from the bombardment of the planet in its youth, by asteroids and comets.
Learn more about water.
It's not just the water we're unsure of, either - the atmosphere is a bit of a mystery, too. Why is there nitrogen in our atmosphere, and how did it get here? In June 2014, a NASA and ESA study found that nitrogen on Saturn's moon Titan may have appeared as a result of the moon forming in the cold gas and dust disk which formed the Sun. Titan - the second largest moon in our solar system - is often compared to the Earth, but is considered to have frozen in an earlier stage of development. Like water, comets may have been responsible for the arrival of much of the nitrogen we have today - and there's a lot of it! 78% of our atmosphere is nitrogen.
Another common school of thought is that when the Earth was very young, it had an atmosphere composed of mainly hydrogen and helium, but these light elements simply floated away from our planet and were replaced by a thicker, heavier atmosphere of carbon dioxide, ammonia (a nitrogen compound) and water vapour by volcanic activity. However, the question of how ammonia came to be in the inner regions of the Earth still remains unanswered.
We may not know how the Earth's oceans or atmosphere came to be here, but at least we have a fairly good idea about the Earth itself. Or do we? While Venus, Earth and Mars all have similar proportions of volatile elements (material which become gas at comparably low temperatures, like water), exactly where these materials originated remains a big question in planetary science, with comets once again being a prime suspect.
Learn more about Why are the planets so different?.
The Coastline Paradox
How long is our coastline? It seems like a very simple question, yet it's actually a problem as the coastline of a landmass doesn't have a well-defined length. Different methods often result in very different lengths that depend on the detail with which the coastline is measured. Imagine you have the task of measuring the coastline for a small island. You could look at a map, or a satellite image, draw a circle around it and take the length of that circle. But what if there's a small bay, or a peninsula sticking out? You can change your line... but as you zoom in and look closer, you will find more and more small changes that need to be made, and all of them will increase the size of your answer. This fractal structure of coastlines is at the root of this particular problem. Some ‘official’ figures for coastlines can even vary by up to a factor of four. These large discrepancies can be a real problem, with everyone unsure which figure they should adopt, and whether measurements they wish to compare were made using the same technique. In order to avoid a wide range of estimates for coastlines, scientists need to devise a way in which the method for finding these lengths can be standardised.
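A rough sketch of the effect in code: the same synthetic, randomly generated "coastline" is measured with progressively coarser rulers (by keeping only every k-th point), and the coarser the measurement, the shorter the answer.

import math, random

random.seed(0)
coast = [(float(x), random.uniform(-1.0, 1.0)) for x in range(1000)]  # jagged synthetic coastline

def coast_length(points, step):
    pts = points[::step]   # a coarser measurement keeps only every step-th point
    return sum(math.dist(pts[i], pts[i + 1]) for i in range(len(pts) - 1))

for step in (1, 5, 25, 125):
    print(step, round(coast_length(coast, step), 1))   # the length shrinks as the ruler grows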
This article was written by the Things We Don’t Know editorial team, with contributions from Ed Trollope, Jon Cheyne, Cait Percy, Johanna Blee, Grace Mason-Jarrett, and Adam Stevens.
This article was first published on 2015-10-12 and was last updated on 2018-02-17.
Tarduno, J., A., et al., (2015) A Hadean to Paleoarchean geodynamo recorded by single zircon crystals Science 349 (6247):521-524 DOI: 10.1126/science.aaa9114
Zhang, P., Cohen, R., E., and Haule, K., (2015) Effects of electron correlations on transport properties of iron at Earth’s core conditions Nature 517:605-607 DOI: 10.1038/nature14090
Gomi, H., et al., (2013) The high conductivity of iron and thermal evolution of the Earth’s core Physics of the Earth and Planetary Interiors 224:88-103 DOI: 10.1016/j.pepi.2013.07.010
Mandt, K.E., et al., (2014) Protosolar Ammonia as the Unique Source of Titan's nitrogen. The Astrophysical Journal Letters 788:L24 DOI: 10.1088/2041-8205/788/2/L24 |
After the introduction of the SMU ADALM1000 let’s start with the first of some small, basic measurements.
Written by Doug Mercer, Analog Devices
Now let’s get started with the second experiment.
The objective of this lab activity is to verify the proportionality and superposition theorems.
In this activity, the proportionality and superposition theorems are examined by applying them to the circuits shown in the following figures.
- The proportionality theorem states that the response of a circuit is proportional to the source acting on the circuit. This is also known as linearity. The proportionality constant A relates the input voltage to the output voltage as VOUT = A × VIN (Equation 1).
The proportionality constant A is sometimes referred to as the gain of a circuit. For the circuit of Figure 2, the source voltage is VIN. The response VOUT is across the 4.7 kΩ resistor. The most important result of linearity is superposition.
- ADALM1000 hardware module.
- Various resistors: 1 kΩ, 2.2 kΩ, and 4.7 kΩ.
1. Verify the voltage division:
- a) Construct the circuit of Figure 2. Using the voltmeter tool, accurately measure VOUT for the three input voltages (using the ALM1000 fixed power supply voltages) as shown in Table 1. You should measure and record the actual fixed power supply voltages as well.
Table 1. Enter Your Results
- b) Calculate the value of A in each case using Equation 1.
- c) Plot a graph with VIN on the x-axis and VOUT on the y-axis.
2. Verifying the superposition theorem:
- Construct the circuit of Figure 3. Measure and record the voltage across the 4.7 kΩ resistor.
- Construct the circuit of Figure 4. Measure and record the voltage across the 4.7 kΩ resistor.
- Calculate the total response for the circuit of Figure 3 by adding the responses from Step 1a and Step 2b. Compare your calculated result to what you measured in Step 2a. Explain any differences. (A short numerical sketch of this bookkeeping follows the procedure below.)
- Is the graph obtained a straight line? Compute the slope of the graph at any point and compare it to the value of A obtained in Step 1b. Explain any differences.
- For each of the three circuits you built for the superposition experiment, how well did the calculated and measured outputs compare? Explain any differences.
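Below is a minimal numerical sketch of the two theorems in Python. It assumes, purely for illustration, that each source drives the common output node through its own resistor (1 kΩ and 2.2 kΩ) and that the 4.7 kΩ resistor ties that node to ground; the actual Figures 2 to 4 may be wired differently, so treat this only as an example of how the superposition bookkeeping works, with the 5 V and 2.5 V values standing in for the fixed supplies.

R1, R2, RL = 1_000.0, 2_200.0, 4_700.0   # ohms

def parallel(a, b):
    return a * b / (a + b)

def vout(v1, v2):
    # output node voltage by superposition: each source acts alone
    # with the other source replaced by a short circuit
    v_from_1 = v1 * parallel(R2, RL) / (R1 + parallel(R2, RL))
    v_from_2 = v2 * parallel(R1, RL) / (R2 + parallel(R1, RL))
    return v_from_1 + v_from_2

both   = vout(5.0, 2.5)    # both sources present
only_1 = vout(5.0, 0.0)    # second source shorted
only_2 = vout(0.0, 2.5)    # first source shorted
print(both, only_1 + only_2)              # superposition: the two values agree
print(vout(10.0, 0.0) / vout(5.0, 0.0))   # proportionality: doubling a source doubles its response (prints 2.0)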
You can find the answers at the Analog Devices StudentZone blog.
As in all the ALM labs, we use the following terminology when referring to the connections to the ALM1000 connector and configuring the hardware. The green shaded rectangles indicate connections to the ADALM1000 analog I/O connector. The analog I/O channel pins are referred to as CA and CB. When configured to force voltage/measure current, –V is added as in CA-V or when configured to force current/measure voltage, –I is added as in CA-I. When a channel is configured in the high impedance mode to only measure voltage, –H is added as CA-H.
Scope traces are similarly referred to by channel and voltage/current, such as CA-V and CB-V for the voltage waveforms, and CA-I and CB-I for the current waveforms.
It is advised to use the ALICE Rev 1.1 software for the examples here (file: alice-desktop-1.1-setup.zip).
The ALICE desktop software provides the following functions:
- A 2-channel oscilloscope for time domain display and analysis of voltage and current
- The 2-channel arbitrary waveform generator (AWG) controls.
- The X and Y display for plotting captured voltage and current data, as well as voltage waveform histograms.
- The 2-channel spectrum analyzer for frequency domain display and analysis of voltage
- The Bode plotter and network analyzer with built-in sweep generator.
- An impedance analyzer for analyzing complex RLC networks and as an RLC meter and vector
- A dc ohmmeter measures unknown resistance with respect to known external resistor or known internal 50 Ω.
- Board self-calibration using the AD584 precision 2.5 V reference from the ADALP2000 analog parts kit.
- ALICE M1K voltmeter.
- ALICE M1K meter source.
- ALICE M1K desktop tool.
For more information, please look here.
Note: You need to have the ADALM1000 connected to your PC to use the software.
Author: Doug Mercer [email@example.com] received his B.S.E.E. degree from Rensselaer Polytechnic Institute (RPI) in 1977. Since joining Analog Devices in 1977, he has contributed directly or indirectly to more than 30 data converter products and he holds 13 patents. He was appointed to the position of ADI Fellow in 1995. In 2009, he transitioned from full-time work and has continued consulting at ADI as a Fellow Emeritus contributing to the Active Learning Program. In 2016 he was named Engineer in Residence within the ECSE department at RPI. |
Pascal's Triangle is an arrangement of multiple numbers that have been found to relate to many areas of math and science. This article will explain how you can form it.
1Start by writing down the number 1.
2Place two more ones to get started on the next line.
3Learn how to construct new lines following the first two lines that were just ones. For every new line, you will have one more number than the previous one. Each outside number is a 1 every time.
4Find the interior numbers using the numbers above them. Each inside number can be found by adding the two numbers above it. For example, the middle number on the third line of Pascal's Triangle is 1+1=2. (A short code sketch after the pattern list below builds several rows this way.)
5Look for patterns in the triangle. You will notice many interesting patterns that form as you build the layers of your triangle.
- The outermost diagonals consist of only ones. The next diagonals in consist of the counting numbers, followed by the triangular numbers.
- The sum of the numbers in each row is a power of 2: for row number n (counting the top row as row 0), the sum is 2 raised to the power n.
- Keeping in mind that the first number in any row is its 0th element (always 1), the next element equals the row number; whenever that row number is prime, every interior element of the row is evenly divisible by it.
- Reading the elements of row n as digits gives 11 raised to the power of the row number, with one adjustment: once an element has more than one digit, its tens place must be carried and added to the element to its left.
- Look for the symmetrical pattern in the triangle: each row reads the same from left to right as from right to left.
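A short code sketch of the construction described in the steps above: each new row starts and ends with 1, and each interior entry is the sum of the two entries above it.

def pascal(n_rows):
    rows = [[1]]
    for _ in range(n_rows - 1):
        prev = rows[-1]
        rows.append([1] + [prev[i] + prev[i + 1] for i in range(len(prev) - 1)] + [1])
    return rows

for row in pascal(6):
    print(row, sum(row))   # the row sums come out as 1, 2, 4, 8, 16, 32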
Why is the Pascal triangle important to math?It is used in algebra to find the coefficients in a binomial expansion. It is also used in calculating probabilities and combinations.
What is the definition of Pascal's triangle, and its properties?It is a triangular arrangement of numbers representing the coefficients found in a binomial expansion.
|
Python Programming – Python Anonymous Functions
Python supports the development of an additional type of function definition known as an anonymous function. These functions are termed anonymous because they do not follow the standard method by using the def keyword and anonymous functions are not bound to a name. These are created at runtime, using the construct known as lambda. The syntax of lambda is as follows:
lambda arg1, arg2, …, argn: expression
Characteristics of Lambda Form (Anonymous Function)
• lambda form can take multiple arguments as shown in the syntax and returns only one value computed through the expression.
• It does not contain multiple lines of statement blocks as in standard Python functions.
• Since an expression is required in the lambda form, statements cannot appear in its body. (In Python 2 this meant that print, being a statement, could not be used inside a lambda; in Python 3, print() is a function call and therefore an expression, but the body is still limited to a single expression.)
• As no additional statements can be written in the lambda form, it has only a local namespace; that means it can use only those variables that are passed to it as arguments or that are in the global scope.
• The lambda form (anonymous function) should not be thought of as a C/C++ inline function, even though it contains only a single expression. The concept of the Python anonymous function is entirely different from the C/C++ inline function, and lambdas are typically used with functional-programming constructs (a brief example follows this list).
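A brief illustration (not part of the original text) of lambdas used with such functional constructs as map(), filter(), and sorted():

nums = [3, 1, 4, 1, 5, 9, 2, 6]
squares = list(map(lambda x: x * x, nums))         # [9, 1, 16, 1, 25, 81, 4, 36]
evens = list(filter(lambda x: x % 2 == 0, nums))   # [4, 2, 6]
descending = sorted(nums, key=lambda x: -x)        # [9, 6, 5, 4, 3, 2, 1, 1]
print(squares, evens, descending)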
The programming example illustrating the use of the lambda function is given in Code: 6.10. This program computes the product of two numbers by using an anonymous function. We see that the variable product is used as a function name when calling and passing the argument values to the lambda (anonymous function). The lambda is just a single-line expression, which performs the intended task, and the result is assigned to the product variable. The function print() prints the value of the product computed using the lambda function. A similar program to compute the product of two numbers without using the lambda anonymous function is presented in Code: 6.11.
Code: 6.10. Illustration of anonymous functions (lambda form).
# This program illustrates the concept of lambda form (anonymous function)
# anonymous function definition
product = lambda a, b: a * b
# product can be called as a function as follows
print("Product =", product(10, 20))
Code: 6.11. Illustration of computation of product without using anonymous functions.
# This program illustrates the product of two numbers without the lambda form
def product(a, b):
    return a * b
# function call of product()
print("Product =", product(10, 20))
|
Bar graphs are graphical representations of statistical data in the form of strips or bars. This allows viewers to understand the difference between the various parameters of the data at a single glance rather than pointing out and comparing each set of data. If you wish to create a bar graph in Excel, read through this article.
Bar graphs in Excel are a form of charts and are to be inserted in the same manner. Bar graphs could be both 2-dimensional and 3-dimensional depending upon the type of Excel editor you use.
How to create a Bar Graph in Excel
To create a bar graph in Excel:
- Select the data in question, and go to the Insert tab.
- Now in the Charts section, click on the downward-pointing arrow next to the Bar Graph option.
- Select the type of bar graph you wish to use. It would immediately show on the Excel sheet but might need a few seconds to load the data.
Usually, the location and size of the chart are centered. You can adjust both these parameters according to your needs.
E.g., let us say we are provided with a set of data of the marks scored by students in a class. The data is further spread across various subjects. This makes the data complex because, to compare the students, you would have to pick each value from the list, highlight its row and column one by one, and check which student scored what in which subject.
So, select the data from range A1 to G7 and go to Insert > Bar Graph.
Select the appropriate bar graph and change the location and size.
The subjects are shown along the Y-axis and the percentages along the X-axis.
The names of the students are indicated using colors.
Now you can easily compare students on the basis of their marks scored in each subject.
How to make a Column Chart in Excel
Alternatively, you could create a column chart. The procedure is similar to that for a bar graph as explained earlier, however, this time select Insert > Column and then choose the chart type.
A column chart makes details even clearer as you can simply compare the marks of 2 students by observing the respective heights of the columns. A column graph for the above-mentioned example has been shown in the image below.
However, it should be noted that this graph is static. You can also choose to create a dynamic chart in Excel.
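If you prefer to build a similar chart programmatically rather than through the Insert tab, the following sketch uses the third-party openpyxl Python library; the student names, marks, and file name are made up, and this is an alternative route rather than part of the walkthrough above.

from openpyxl import Workbook
from openpyxl.chart import BarChart, Reference

wb = Workbook()
ws = wb.active
ws.append(["Student", "Maths", "Science"])   # hypothetical marks data
ws.append(["Asha", 78, 82])
ws.append(["Ben", 64, 71])

chart = BarChart()
chart.type = "col"               # "col" gives a column chart, "bar" a bar graph
chart.title = "Marks by subject"
data = Reference(ws, min_col=2, max_col=3, min_row=1, max_row=3)
cats = Reference(ws, min_col=1, min_row=2, max_row=3)
chart.add_data(data, titles_from_data=True)
chart.set_categories(cats)
ws.add_chart(chart, "E2")
wb.save("marks.xlsx")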
Hope it helps!
|
The smallest of electronics could one day have the ability to turn on and off at an atomic scale.
Lawrence Livermore National Laboratory scientists have investigated a way to create linear chains of carbon atoms from laser-melted graphite. The material, called carbyne, could have a number of novel properties, including the ability to adjust the amount of electrical current traveling through a circuit, depending on the user’s needs.
Carbyne is the subject of intense research because of its presence in astrophysical bodies, as well as its potential use in nanoelectronic devices and superhard materials. Its linear shape gives it unique electrical properties that are sensitive to stretching and bending, and it is 40 times stiffer than diamond. It also was found in the Murchison and Allende meteorites and could be an ingredient of interstellar dust.
Using computer simulations, LLNL scientist Nir Goldman and colleague Christopher Cannella, an undergraduate summer researcher from Caltech, initially intended to study the properties of liquid carbon as it evaporates, after being formed by shining a laser beam on the surface of graphite. The laser can heat the graphite surface to a few thousand degrees, which then forms a fairly volatile droplet. To their surprise, as the liquid droplet evaporated and cooled in their simulations, it formed bundles of linear chains of carbon atoms.
“There’s been a lot of speculation about how to make carbyne and how stable it is,” Goldman said. “We showed that laser melting of graphite is one viable avenue for its synthesis. If you regulate carbyne synthesis in a controlled way, it could have applications as a new material for a number of different research areas, including as a tunable semiconductor or even for hydrogen storage.
“Our method shows that carbyne can be formed easily in the laboratory or otherwise. The process also could occur in astrophysical bodies or in the interstellar medium, where carbon-containing material can be exposed to relatively high temperatures and carbon can liquefy.”
Goldman’s study and computational models allow for direct comparison with experiments and can help determine parameters for synthesis of carbon-based materials with potentially exotic properties.
“Our simulations indicate a possible mechanism for carbyne fiber synthesis that confirms previous experimental observation of its formation,” Goldman said. “These results help determine one set of thermodynamic conditions for its synthesis and could account for its detection in meteorites resulting from high-pressure conditions due to impact.”
Read more: Carbon research may boost nanoelectronics
|
A Definitive Guide to Descriptive Statistics
Published on August 20th, 2021; revised on February 8, 2023.
Descriptive statistics is the summarising and organising of the characteristics of a dataset. A data set is a collection, or set, of responses, hypotheses, or observations from a limited number of samples or from an entire population (Mishra et al., 2019).
While conducting quantitative research, the first step is the collection of data. Once the data has been collected, the research can proceed to analyse the data.
Data analysis is carried out to describe the characteristics of the responses: for instance, the average of one variable (e.g. age or gender), or the correspondence between two variables (e.g. age and creativity).
The next step in the process is inferential statistics. These are the tools used to decide whether the data uphold or invalidate your hypothesis and whether the findings can be generalised to a broader population.
Types of Descriptive Statistics
The three main types of descriptive statistics are as follows:
- The distribution, which concerns the frequency of each value.
- The central tendency, which concerns the averages of the values.
- The variability, or dispersion, which concerns how spread out the values are.
These can be used to assess only one variable at a particular time in the univariate analysis. Two or more variables can also be compared in bivariate and multivariate analysis (Kaliyadan et al., 2019).
An example can be considered where a study is conducted about the popularity of different leisure activities by gender. A survey is distributed among the participants who were asked about the frequency of the following actions in the past year.
The results and responses from this survey will lead to a formulation of a dataset. Now the descriptive statistics can be used to figure out the overall frequency of each activity which is referred to as a distribution.
Then the averages of each activity will be figured out, also known as the central tendency. The last step is to measure the spread of responses for each activity referred to as variability.
A dataset comprises a distribution of values, or scores. The frequency of every value of a variable can be summarised in numbers or percentages in the form of tables or graphs.
Quantification of Central Tendency
Measures of central tendency estimate the centre, or average, of a dataset. There are three different ways of finding an average: the mean, the median, and the mode.
Following is a demonstration of how to calculate “mean, median, and mode” using the first six responses of the conducted survey.
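For instance, the three averages of the first six library-visit responses (the same six values used in the worked examples below) can be computed with Python's statistics module:

import statistics

visits = [15, 3, 12, 0, 24, 3]
print(statistics.mean(visits))     # 9.5
print(statistics.median(visits))   # 7.5, the midpoint of the two middle values
print(statistics.mode(visits))     # 3, the most frequent value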
Measures of Variability
The measures of variability are used to figure out how much the response values are spread out. Three measures capture different aspects of spread: the range, the standard deviation, and the variance.
The range is calculated to get an idea about how far the extreme responses are placed. For this, the lowest value is simply subtracted from the highest value.
Range of library visits in the past year
Ordered dataset: 0, 3, 3, 12, 15, 24
Range: 24- 0 = 24
The standard deviation (s) is an average measure of the variability in a dataset and indicates how far, on average, each score lies from the mean value (Leys et al., 2013). The larger the standard deviation, the more variable the dataset. To find the standard deviation, the following six steps are followed.
- Each score is listed, and its mean is calculated.
- The mean is then subtracted from each score to get the deviation from the mean.
- Each deviation is then squared.
- The sum of all squared deviations is taken.
- The sum should then be divided by N-1.
- The last step is to find the square root of the last found number.
Standard deviation of library visits in the past year
Steps 1 to 4:
|Raw data|Deviation from mean|Squared deviation|
|---|---|---|
|15|15 - 9.5 = 5.5|30.25|
|3|3 - 9.5 = -6.5|42.25|
|12|12 - 9.5 = 2.5|6.25|
|0|0 - 9.5 = -9.5|90.25|
|24|24 - 9.5 = 14.5|210.25|
|3|3 - 9.5 = -6.5|42.25|
|M = 9.5|Sum = 0|Sum of squares = 421.5|
Step 5: 421.5 / 5 = 84.3
Step 6: square root of 84.3 will give 9.18
From the answer, it is seen that, on average, each response deviates from the mean by about 9.18 points.
The variance is the average of the squared deviations from the mean; the more the data are spread out in the dataset, the larger the variance relative to the mean. It can be calculated simply by squaring the standard deviation and is denoted by the symbol s2.
Variance of library visits in the past year
Dataset: 15, 3, 12, 0, 24, 3
s = 9.18
s2 = 84.3
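The same figures can be checked with Python's statistics module, which uses the sample (N - 1) formulas:

import statistics

visits = [15, 3, 12, 0, 24, 3]
s = statistics.stdev(visits)
print(round(s, 2))                                               # 9.18
print(round(s ** 2, 1), round(statistics.variance(visits), 1))   # 84.3 84.3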
Univariate Descriptive Statistics
This type of descriptive statistic focuses on a single variable at a time. It is crucial to examine the data through the use of multiple measures from every variable separately. Those measures are distribution, central tendency, and spread. Excel or SPSS are the programs that can be used to calculate all of them (Park 2015).
(Frequency table: visits to a library)
If only the mean is considered as a measure of central tendency, the impression of the middle of the dataset can be skewed by outliers, unlike the median or the mode.
Similarly, the range on its own is not enough, as it is sensitive to extreme values; the standard deviation and variance also need to be considered to obtain satisfactorily comparable measures of spread.
Bivariate Descriptive Statistics
If data is collected on more than one variable, bivariate or multivariate descriptive statistics can be used to find out whether there is a relationship between the variables and, if so, of what type. In bivariate analysis, the frequency and variability of two variables are studied to find out whether they vary together. The central tendency of the two variables can also be compared before carrying out further statistics. The only difference between multivariate and bivariate analysis is that more than two variables are considered in multivariate analysis (Zhang 2016).
In the contingency table, the cell represents the intersection of two variables. In this table, the independent variable, e.g., gender, is placed along the vertical axis, and the dependent variables appear along the horizontal axis, e.g., activities.
It is much easier to interpret the contingency table when the data are shown as percentages rather than raw counts. Comparison among rows is easier with percentages, as each group then reads as if it contained exactly 100 participants or observations. When a percentage-based contingency table is formulated, N is added for every independent variable.
It is clearer from the table that an equal number of men and women went to the library over 17 times in the last year. Moreover, men most commonly went to the library between 5 and 8 times, while women most commonly went between 13 and 16 times.
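A percentage-based contingency table of this kind can be sketched with the pandas library; the handful of survey rows below are invented purely to show the mechanics.

import pandas as pd

df = pd.DataFrame({
    "gender":   ["M", "M", "M", "F", "F", "F"],
    "activity": ["library", "movies", "movies", "library", "library", "movies"],
})
table = pd.crosstab(df["gender"], df["activity"], normalize="index") * 100
print(table.round(1))   # each row sums to 100 percent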
A scatter plot is a chart that depicts the relationship between two or three variables and gives a visual representation of the relationship's strength. One variable is plotted along the x-axis and the other along the y-axis (Sedlmair et al., 2013). The points in the chart represent the data points.
For example, it is investigated whether people who visit the library regularly tend to watch movies at the theatre less often. The number of times the participants went to watch a movie at the theatre is plotted along the x-axis, and their visits to the library along the y-axis.
The scatter plot shows that the frequency of library visits increases as the number of movies seen at the theatre decreases. This linear relationship is visually represented, and on this basis, further tests of correlation and regression can be performed.
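A scatter plot of this kind can be sketched with matplotlib; the numbers below are made up for illustration.

import matplotlib.pyplot as plt

movies = [1, 2, 3, 4, 6, 8, 10, 12]      # visits to the theatre in the past year
library = [22, 20, 16, 15, 10, 8, 5, 2]  # visits to the library in the past year
plt.scatter(movies, library)
plt.xlabel("Movies watched at the theatre")
plt.ylabel("Library visits")
plt.show()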
- Adamson, K.A., and Prion, S., 2013. Making sense of methods and measurement: measures of central tendency. Clinical Simulation in Nursing, 9(12), pp.e617-e618.
- Kalyan, F., and Kulkarni, V., 2019. Types of variables, descriptive statistics, and sample size. Indian Dermatology Online Journal, 10(1), p.82.
- Leys, C., Ley, C., Klein, O., Bernard, P., and Licata, L., 2013. Detecting outliers: Do not use standard deviation around the mean, use absolute deviation around the median. Journal of Experimental Social Psychology, 49(4), pp.764-766.
- Liu, W., Wang, L., and Yi, M., 2014. Simple-random-sampling-based multiclass text classification algorithm. The Scientific World Journal, 2014.
- Mishra, P., Pandey, C.M., Singh, U., Gupta, A., Sahu, C. and Keshri, A., 2019. Descriptive statistics and normality tests for statistical data. Annals of Cardiac Anesthesia, 22(1), p.67.
- Park, H.M., 2015. Univariate analysis and normality test using SAS, Stata, and SPSS.
- Sedlmair, M., Munzner, T., and Tory, M., 2013. Empirical guidance on scatterplot and dimension reduction technique choices. IEEE Transactions on Visualization and Computer Graphics, 19(12), pp.2634-2643.
- Verma, S.P., Díaz-González, L., Pérez-Garza, J.A. and Rosales-Rivera, M., 2016. Quality control in geochemistry from a comparison of four central tendency and five dispersion estimators and example of a geochemical reference material. Arabian Journal of Geosciences, 9(20), p.740.
Question from very important topics are covered by NCERT Exemplar Class 11. Lewis suggested another way of looking at the reaction between H + and OH - ions. The properties of organic molecules depend on the structure, and knowing the names of organic compounds allow us to communicate with other chemists. For this quiz, select a chapter and a type of question. Nomenclature of Halogenated Hydrocarbons (Alkyl Halides) The prefixes fluoro, chloro, bromo, and iodo are used to indicate the presence of halogen in a molecule. MCQ: An alcohol X reacts with acid Y to form an ester with the formula C₈H₁₂O₂. Almost identical to Alkanes/Alkenes of same MW. You can skip questions if you would like and come back to them. of alkyl groups attached to the double bonded carbon atom. This is a collection of four lessons from the 2nd year of the new A Level specification on carboxylic acid derivatives. organic chemistry i - practice exercise alkene reactions and mechanisms for questions 1-24, give the major organic product of the reaction, paying particular attention to regio- and stereochemical outcomes. ALKANE ALKENE ALKYNE NOMENCLATURE PDF - Alkanes - saturated hydrocarbons. ¥Many carboxylic acids are known by their common names. Alkynes are more reactive than alkenes' State true or false. This page explains what stereoisomers are and how you recognise the possibility of optical isomers in a molecule. org are unblocked. Georgia Tech's Online Organic Chemistry. Nomenclature Review. The ease of dehydrohalogenation of alkyl halide with alcoholic KOH is (a) 3°>2°>1° (b) 3° < 2° < 1° (c) 3° > 2° < 1°. The root name is based on the number of C atoms in the ring structures. According to the Lewis definition, a base is a(n): A) Proton donor. Complete Nomenclature of Alkenes chapter (including extra questions, long questions, short questions) can be found on EduRev, you can check out Class 11 lecture & lessons summary in the same course for Class 11 Syllabus. Title: Chemistry Worksheet: Naming Compounds Author: pedro Last modified by: pedro Created Date: 1/31/2007 4:34:00 AM Company: yo mamas Other titles. Nomenclature and isomerism in alkanes can further be understood with the help of a few more examples. HALOGENOALKANES (haloalkanes, alkyl halides). Organic Nomenclature Quizzes Go to the Organic Nomenclature Quizzes Naming organic compounds is extremely important because you or someone in another part of the world should be able to write a structure from a name. Try the following multiple choice questions to test your knowledge of this chapter. myCBSEguide App. Markovnikov. Nomenclature, in other words, provides a foundation of language for organic chemistry. Describe the following chemical reactions as S N1, S N2, E1 & E 2. neet Biology Notes neet Physics Notes neet Biology mcq neet Chemistry mcq neet Physics mcq. 386 Chapter Seven MULTIPLE CHOICE QUESTIONS Topic: Nomenclature Section: 4. Alkene + steam is passed over phosphoric acid (H 3 PO 4) catalyst and temperature of 300 o C. com alkynes test > practice tests > home. 3: What is the relationship between the compounds in #1 and #2. This A Level Chemistry revision page provides access to all the A Level Chemistry past papers for AQA, OCR and Edexcel as well as worksheets. Chapters 6-8: Energy Diagrams - A Review Energy Diagrams Nomenclature Problems - Alkenes A Variety of Fun for Alkene Basics Stereochemistry and other Alkene things Regiochemistry Worksheet Another Regiochemistry Worksheet. 
In an alkene containing only one double bond, the double bond is broken, the halogen atoms are added, and, the only product of the reaction will be a dihaloalkane. Organic Chemistry is one of the most important branches of Chemistry. Ideal candidates for CambriLearn. ADDITION TO ALKENES (Quiz 8-1) Multiple Choice Self Evaluation Quizzes. The index below will take you to the beginning of questions that focus on a particular functional group, or you may page through the document as you like. Find out more, read a sample chapter, or order an inspection copy if you are a lecturer, from the Higher Education website. 2A) The following rules illustrate the basic principles for naming simple branched alkanes. Alcohols can be converted to alkenes by dehydration. This is due to the availability of n electrons. Start studying Chapter 8 - Alkenes: Reactions and Synthesis (McMurry). Take a self-grading on the properties, reactions, and nomenclature of alkynes : chemhelper. sp2 and sp2 6. Chapter 8 - Alkenes, Alkynes and Aromatic Compounds. C) Hydroxide ion donor. Practice jobs' assessment test for online learning naming organic compounds quiz questions for chemistry major, competitive. Triple bonds are named in a similar way using the suffix. This root give the alkane part of the name. Complete the final exam which contains short answer and or multiple choice questions covering all readings, class discussions, and lectures up to the date of the exam. 2 NOMENCLATURE OF ALKENES 133 Study Problem 4. Class 12 Important Questions for Chemistry - Alcohols, Phenols and Ethers NCERT Exemplar Class 12 Chemistry is very important resource for students preparing for XII Board Examination. Reactions Involving Radicals Chapter 21. They draw structures, they name structures, they draw polymers, they identify the type of isomer given molecules are and they identify functional groups. Furthermore, the terms primary (1º), secondary (2º) & tertiary (3º) are used to classify amines in a completely different manner than they were used for alcohols or alkyl halides. CliffsNotes study guides are written by real teachers and professors, so no matter what you're studying, CliffsNotes can ease your homework headaches and help you score high on exams. Muskan Agarwal at. The central part of the course comprises the enumeration of typical compounds classes such as alkanes, cycloalkanes, alkenes, alkynes. Enolate Ions, their Equivalents, and Reactions Chapter 18. Get them Instantly. Summary notes, flashcards, revision videos and past exam questions by topic for AQA Chemistry AS and A-Level Organic Chemistry I. IUPAC nomenclature. NOMENCLATURE IN ORGANIC CHEMISTRY Contents 1. So "two-four-hexanedione" is one way to do it. Know and understand the intermolecular forces that attract alcohol, ether, thiol, sulfide, disulfide,. Naming Dienes. By using this system, it is possible to give a systematic name to an organic compound just by looking at its structure and it is also possible to write the structure of organic compound by following the. uk/7405 for the most up-to-date specifications, resources, support and administration. Then click on "Create Quiz. 1] · Numbering the longest bridge chain found substitution at 2 nd position methyl group. LPU NEST Syllabus 2020 - Check out the Syllabus of LPU NEST 2020 such as exam pattern and subject wise i. Count the total number of carbons. Organic Terminology. Alkene reacts with water, in the form of steam, to produce alcohol. 
Nomenclature of Alkenes video for Class 11 is made by best teachers who have written some of the best books of Class 11. The equation to show the formation of bromomethane from methane and bromine is Br + CH4 -----CH3Br +HBr ( numbers shd all be little). To help you build that solid foundation I’ve put together this short quiz testing your knowledge of reactions, reagents, products and additional molecule concepts. sp hybridisation, is present throughout the chain. A repository of tutorials and visualizations to help students learn Computer Science, Mathematics, Physics and Electrical Engineering basics. Reactions Involving Radicals Chapter 21. It is also more reactive than a single bond since the ( bond (the second pair of electrons) is farther from the nuclei. 2 Structure of Alkenes (Structure of Carbon-Carbon Double Bond) 217. In this addition reaction, halogens atoms are added across the double bond of the alkene. Web Quiz Your assignment, Alkenes and Alkynes is ready. THE NOMENCLATURE QUIZZES. Topic Index | Previous | Next Quiz | Clearance | Previous | Next Quiz | Clearance. (4) "Structure" here will refer to a valence structure, which can be used to represent the 2-dimensional structural formula. Answer Key To Covalent Compound Naming. Matching pair quiz on hydrocarbon structure. Start studying Chapter 8 - Alkenes: Reactions and Synthesis (McMurry). Alkenes Characteristics - have general formula C n H 2n. Lewis suggested another way of looking at the reaction between H + and OH - ions. Lattice energy increases with increase of charge on the ions because of their more attractive force between them. Name: _____ Date: _____ 1. Acetoacetic Ester Condensation. Use it if you need more practice. Organic analysis. 2 Alkanes MAIN Idea Alkanes are hydrocarbons that contain only single bonds. Medical & Dental Colleges Admission Test (previously MCAT i. The longest continuous chain of C atoms in the branched alkane is the. This is a very simple and natural way but can be inconvenient in a textbook if one wants to review the nomenclature of more than one class of compounds at a time. The rules for nomenclature are as follows:. Organic synthesis and analysis (year 2). - the formula of one alkene differs from the next by -CH 2. Draw a Dot and Cross diagram for an Ethene Molecule. ? The simplest aromatic compound is benzene. Preparation Of Alcohols From Alkenes Notes PPT Video. Alkanes reaction multiple choice questions (MCQs), alkanes reaction quiz answers to learn online college courses for A level chemistry classes. ? alcohol ? aldehyde ? amine ?. Example: Name the following molecule by the IUPAC system of nomenclature. As I mentioned in class, there will be some multiple choice questions, some give the reagent for the reaction problems, give the product for the reaction problems, some short synthesis problems and a. Just like how your left foot doesn't quite fit your right shoe, molecules also can have properties that depend on their handedness! This property is called chirality. They are a type of condensation reaction. Which of the following is a product of all condensation reactions?. They draw structures, they name structures, they draw polymers, they identify the type of isomer given molecules are and they identify functional groups. • Recognize and name benzene-containing compounds. This set of Organic Chemistry Multiple Choice Questions & Answers (MCQs) focuses on "Alkanes". Most naturally occurring fatty acids have an unbranched chain of an even number of carbon atoms, from 4 to 28. 
According to the Lewis definition, a base is a(n): A) Proton donor. HALOGENOALKANES (haloalkanes, alkyl halides). myCBSEguide App. In case of alkenes double bond linkages are seen and in alkynes, triple bond linkages are present. Then click on "Create Quiz. A-level exams May/June 2017 onwards. A hydrocarbon C 5 H 10 does not react with chlorine in dark but gives a single monochloro compound C 5 H 9 Cl in bright sunlight. Our videos prepare you to succeed in your college classes with concepts, examples, and practice problems. Grade level: 10-12 School: No school entered. SCH4C Organic Test Modified True/False Indicate whether the statement is true or false. In this organic chemistry review worksheet, students answer 10 questions about different organic molecules. Given the structure of an alcohol, ether, thiol, sulfide, aldehyde, or ketone molecule, be able to give the systemic names and vice versa. If you are doing this sensibly, you will only be looking at one or two types of compounds at a time. Organic Chemistry 32-235 Practice Questions for Exam #2 Part 1: (Circle only ONE choice, circling more than one will be counted as wrong!) 4 points each 1. The theory is based on the electrostatics of the metal-ligand interaction, and so its results are only approximate in cases where the metal-ligand bond is substantially covalent. Naming Stereoisomers • When there is more than one chiral center in a carbohydrate, look at the chiral carbon farthest from the carbonyl group: if the hydroxy group points to right when the carbonyl is “up” it is the D-isomer, and when the hydroxy group points to the left, it is the L-isomer. - have similar properties like alkane going down the series. One double bond 5 B. Short Summary of IUPAC Nomenclature, p. What’s in the ADV Higher. Solved practice questions for IIT-JEE, Find all the formulas, full chapter notes, tips and tricks to prepare on Alkanes, Alkenes and Alkynes for IIT-JEE. The central part of the course comprises the enumeration of typical compounds classes such as alkanes, cycloalkanes, alkenes, alkynes. Very important industrially • Carbon is trigonal planar - flat and triangular! 10 H H H H ethene CH3 H3C H3C CH3 H CH3 3C (1R,5R)-2,6,6-trimethylbicyclo[3. 2 Difficulty Level: Medium 2. The properties of organic molecules depend on the structure, and knowing the names of organic compounds allow us to communicate with other chemists. 9 Naming Spiro and Bicyclic Alkanes. These materials provide a step-by-step guide to learning organic nomenclature and are intended for those taking Introductory Organic Chemistry at a college or university. Our videos prepare you to succeed in your college classes with concepts, examples, and practice problems. Web Quiz Your assignment, Chapter 7: Alkenes: Reactions and Synthesis is ready. Take a self-grading on the properties, reactions, and nomenclature of alkynes : chemhelper. Multiple Choice Questions- Amino acid and protein chemistry Published April 12, 2012 | By Dr. The ease of dehydrohalogenation of alkyl halide with alcoholic KOH is (a) 3°>2°>1° (b) 3° < 2° < 1° (c) 3° > 2° < 1°. Practice nomenclature of compounds multiple choice question (MCQs): chemical formula of ice is, with choices h 2 o 2, ho 2, h 2 o, cho 2 for online colleges admissions. The root name is based on the number of C atoms in the ring structures. CH 3 CH 3 C C H CH 2CH 3 CH 3 CH Z-3-ethyl-4-methyl-2-pentene CH 3 C C H CH 2CH 3 C H 3C CH 3 H CH 3 C C H CH 2CH 3 C H 3C CH 3 H Left. 
a) Draw the products expected from the following reactions b) Name the reactant and product a) + HBr. The rules for naming organic compounds are still being developed. For each of the compounds A through H shown below, enter the appropriate IUPAC suffix in the designated answer box. Classification and Nomenclature ,Organic Chemistry - Get topics notes, Online test, Video lectures, Doubts and Solutions for CBSE Class 11-science on TopperLearning. Van der Waals forces. Given the structure of an alcohol, ether, thiol, sulfide, aldehyde, or ketone molecule, be able to give the systemic names and vice versa. Naming Polyatomic Ions 25; Amino Acids by Structure 14; Organic Functional Groups 12; Amino Acid Abbreviations (3-letter) 10. Donate or volunteer today!. B) ammonium nitrogen trioxide. either the name of another alkene, or the structural for- mula for another alkene. Another example of this "rule" is hydrogenation of alkynes and alkenes (Figure 13. Since the compound does not react with Cl 2 in the dark, therefore it cannot be an alkene but must be a cycloalkane. Only the most common name will be shown. Our online alkane trivia quizzes can be adapted to suit your requirements for taking some of the top alkane quizzes. Given here is a Organic Chemistry Alkyl Halides Online Quiz Test MCQs Question Answers which can be used to evaluate and also improve your overall written exam preparation level. The index below will take you to the beginning of questions that focus on a particular functional group, or you may page through the document as you like. 42 Ch 3 Stereochemistry Except for very simple alkenes with hydrogen atoms on each carbon of the alkene, the designations of cis and trans for alkenes are replaced by a system that uses E and Z designations. Polycyclic and Heterocyclic Aromatic Compounds Chapter 20. Download CBSE Revision Notes for CBSE Class 11 Chemistry Hydrocarbons Classification of Hydrocarbons - Aliphatic Hydrocarbons: Alkanes - Nomenclature, isomerism, conformation (ethane only), physical properties, chemical reactions including free radical mechanism of halogenation, combustion and pyrolysis. D) Hydrogen ion donor. In this method of naming, the longest continuous alkyl chain forms the stem of the ether name and the alkoxy group is named as a substituent on the alkane backbone. Simple Amines. 1 Alkene and Alkyne Overview. Naming is a little bit more complex for alkenes than alkanes. Nomenclature of Carboxylic Acids. Chemistry 1110 – Organic Chemistry IUPAC Nomenclature Of the approximately 32 million unique chemical compounds presently known, over 95% of them can be classified as organic; i. The 112th annual Metawampe Hike will be held Sunday, November 3 starting at noon (12PM) from the former Ashram parking lot (to the right of the white picket fence and behind a barrier of trees) on Route 63 (#438) about 5 miles north of the North Amherst traffic lights (near Cumberland Farms). The relative mass of an atom which is equal to the number of protons and neutrons present. The unsaturated hydrocarbon _____ is the starting material for the synthesis of the plastic polyethylene. 2 Crystal field theory. Optical Isomers. Our online alkane trivia quizzes can be adapted to suit your requirements for taking some of the top alkane quizzes. The words 'organic chemistry' may send a chill down the undergraduate spine but you needn't be an expert to have a go at this quiz. Chapter 1 Organic Compounds: Alkanes 2 Organic chemistry nowadays almost drives me mad. 
Nomenclature and Isomerism (Year 2) Compounds containing the carbonyl group. Choose your answers to the questions and click 'Next' to see the next set of questions. sp2 and sp2 6. Complete Alkynes (Properties and Nomenclature) chapter (including extra questions, long questions, short questions, mcq) can be found on EduRev, you can check out Class 11 lecture & lessons summary in the same course for Class 11 Syllabus. (addition reaction, substitution reaction, both addition and substitution reaction) 24. Chapters Covered:- GOC, Hydrogen, Halo Alkanes Halo Arenes, Alcohol Phenol Ether, Aldehyde Ketons Carboxylic Acids, Organic Compounds of Nitrogen. CHO HO H OHH CHO CH2OH HO H CH2OH H OH. Find the longest continuous chain of carbon atoms. These materials provide a step-by-step guide to learning organic nomenclature and are intended for those taking Introductory Organic Chemistry at a college or university. Describe the following chemical reactions as S N1, S N2, E1 & E 2. Nomenclature of Benzenes Quiz. What do you mean by saturated and unsaturated. b) Instrument In this study, two instruments were used to collect data. e Nonene · The three bridge contain 3, 1 and 1 carbon atoms therefore written as = [3. 4 Isomerism in Alkenes 220. Hydrocarbon Alkane MCQ Practice Sheet 1. • Recognize and name benzene-containing compounds. They are a type of condensation reaction. The same thing can be observed in case of alkenes in which the first member is ethene and the successive members are C 3 H 6, C 4 H 8, and C 5 H 10. Nomenclature of Ester 14. The test will consist of only objective type multiple choice questions requiring students to mouse-click their correct choice of the options against the related question number. HYDROCARBONS 3 (i) Alkanes 3 A. Handout for students to practice naming compounds AFTER the students learned how to name binary molecules, ionic salts and metal oxides, and acids. Hydrocarbon Alkane MCQ Practice Sheet 1 Posted on January 11, 2015 February 1, 2015 by Amit Thakur 1. The unsaturated hydrocarbon _____ is the starting material for the synthesis of the plastic polyethylene. Triumph Chemistry” is a complete and thorough guide to prepare students for competitive level examinations. #5 | Nomenclature of Alkenes video from NEET syllabus Chemistry - Organic Chemistry - Some Basic Principles And Techniques. 1 Isomers, Nomenclature, and Conformations of Alkanes Basic Alkanes ⇒ chain like molecules based on C and H with NO branch Branched Alkanes : Alkanes that have carbons that are bonded to more than 2. Then click on "Name" to see the preferred IUPAC name and a highlight of the parent hydrocarbon. If different courses collide in the teaching schedule for non-model trajectory students, the course from the year in which the student is enrolled has priority and no special (re) arrangements are made for this student. Nomenclature Naming Organic Compounds The increasingly large number of organic compounds identified with each passing day, together with the fact that many of these compounds are isomers of other compounds, requires that a systematic nomenclature system be developed. Nomenclature of Halogenated Hydrocarbons (Alkyl Halides) The prefixes fluoro, chloro, bromo, and iodo are used to indicate the presence of halogen in a molecule. Please keep a pen and paper ready for rough work but keep your books away. Step #2: Count how many carbons are directly attached to it. Nomenclature of Alkyl Halide 5. 
Identify the product formed when the following alkene is reacted with BH3 and THF, then followed up with H2O2 and NaOH. These are the books for those you who looking for to read the Answer Key To Covalent Compound Naming, try to read or download Pdf/ePub books and some of authors may have disable the live reading. From alkenes (i) By acid catalysed hydration: Alkenes react with water in the presence of acid as catalyst to form alcohols. ExplanationEdit. Given here is a Organic Chemistry Alkyl Halides Online Quiz Test MCQs Question Answers which can be used to evaluate and also improve your overall written exam preparation level. 1 Introduction to Hydrocarbons MAIN Idea Hydrocarbons are carbon-containing organic compounds that provide a source of energy and raw materials. This review contains 100 multiple choice questions that cover the following topics: 1. This set of Organic Chemistry Multiple Choice Questions & Answers (MCQs) focuses on “Alkanes”. Each set of cards is saved as an Adobe Acrobat® file. Microsoft Word - Naming and Drawing Alkenes Worksheet and Key. Welcome to our exclusive collections of Chemistry Questions with Answers. You can skip questions if you would like and come back to them. Family of Compound Alkene Alkyne Structure Prefix-----. If an integral membrane is removed, the membrane will be disrupted. Only the most common name will be shown. Class 11 Important Questions for Chemistry – Hydrocarbons NCERT Exemplar Class 11 Chemistry is very important resource for students preparing for XI Board Examination. C) Hydroxide ion donor. Halogenation of Alkenes – Organic Chemistry Reaction Mechanism November 18, 2013 By Leah4sci 5 Comments Reaction Overview: The alkene halogenation reaction, specifically bromination or chlorination, is one in which a dihalide such as Cl2 or Br2 is added to a molecule after breaking the carbon to carbon double bond. The oxidation of primary alcohols and aldehydes The oxidation of primary alcohols leads to the formation of alde‐hydes that undergo further oxidation to yield acids. 3 Alkenes and Alkynes. The synthesis of 3-octyne is achieved by adding a bromoalkane into a mixture of sodium amide and an alkyne. to the first carbon containing double bond. For each of the compounds A through H shown below, enter the appropriate IUPAC suffix in the designated answer box. Olefin, also called alkene, compound made up of hydrogen and carbon that contains one or more pairs of carbon atoms linked by a double bond. The following section consists of Chemistry Multiple Choice questions on Hydrocarbons. Mcq On Alkenes. Nomenclature of Alkenes video for Class 11 is made by best teachers who have written some of the best books of Class 11. Chemistry of the Alcohols and the Alkyl Halides:. Nomenclature Organic Compound By Iupac. Organic Chemistry 32-235 Practice Exam #4. Hydrocarbon Alkane MCQ Practice Sheet 1. (b) Carbon forms four bonds, although the ground state configuration would predict the formation of fewer bonds. This page explains what stereoisomers are and how you recognise the possibility of optical isomers in a molecule. We have both General Chemistry Notes and Organic Chemistry Notes. 3 Expert Answer(s) - 49290 - List of reagents of all chapters and their function. sp2 and sp3 c. 4 Aromatic Compounds: Benzene. Alkenes and alkynes can react with hydrogen halides like HCl and HBr. The ring is called aromatic ring CH3 CH3 For example: t) Toluene i) o-Xylene The aromatic compounds may also contain more than one benzene rings. 
REACTIONS OF ETHANOIC ACID Carboxylic acids are weak acids and they are partially ionized in water. MCQ – 1 mark ; VSA –1 mark ; SA I – 2 marks ; SA II – 3 marks ; LA – 5 marks ; Chapter-wise Marks Distribution for Class 11 th Chemistry Subject:. The bromoalkane and alkyne respectively are [IIT JEE 2010]. either the name of another alkene, or the structural for- mula for another alkene. WAY longer than actual exam, answers are at the end True/False Indicate whether the sentence or statement is true or false. A simplified version, "Introductory IUPAC Organic Nomenclature" is also available for high school and/or college or university level General Chemistry. Addition Reactions of Alkenes and Alkynes Chapter 16. Alkenes and alkynes can be transformed into almost any other functional group you can name! We will review their nomenclature, and also learn about the vast possibility of reactions using alkenes and alkynes as starting materials. Degrees of Unsaturation. Alkene + steam is passed over phosphoric acid (H 3 PO 4) catalyst and temperature of 300 o C. " Try to determine the name of the alkane. About the book. 3 – Nomenclature DAT Organic Chemistry Outline Quiz 3. Importance of inorganic compounds in pharmacy and medicine; An outline of methods of preparation, uses, sources of impurities, tests for purity and identity, including limit tests for iron, arsenic, lead, heavy metals, chloride, sulphate and special tests if any, of the following classes of inorganic pharmaceuticals. Properties and synthesis 8. Welcome to the QuizMoz Organic Chemistry Nomenclature Multiple Choice Questions Quiz. 1 Introduction to Hydrocarbons MAIN Idea Hydrocarbons are carbon-containing organic compounds that provide a source of energy and raw materials. Of course there's a formula for determining number of isomers for a given hydrocarbon. Welcome to the QuizMoz IUPAC Nomenclature Organic Chemistry Multiple Choice Questions Quiz. Back to previous page Structure and naming of ALKENES notes REPEAT QUIZ alkene structure and nomenclature Quiz on naming alkenes practice questions on alkene nomenclature for AQA AS chemistry, AQA advanced A level chemistry, Edexcel AS chemistry, Edexcel advanced A level chemistry, OCR AS Chemistry A, OCR advanced A level chemistry A, OCR. The Major Field Test in Chemistry consists of 100 multiple-choice questions, some of which are grouped in sets and based on such materials as a descriptive paragraph or experimental results. Chemistry of the Alkenes and the Alkynes: Introduction to the Alkenes and the Alkynes - Alkyl Halides - Mechanism of Nucleophilic Reactions - Multiple-Choice Questions. Reactions Involving Radicals Chapter 21. Sulfuric acid has two acidic hydrogens. The problem sets provided here are similar to those found on various kinds of standardized exams, such as GRE, ACS & MCAT. This test must be given by a proctor. The synthesis of 3-octyne is achieved by adding a bromoalkane into a mixture of sodium amide and an alkyne. More than one double bond 5 C. Multiple choice Quiz on the Structure and naming (nomenclature) of ALKENES. QuizMoz offers one of the Internet's largest collection of quizzes for you to tease your brain and pit your wits against the experienced QuizMoz quiz masters. The instructor also may change the exam dates and due dates for various assignments. This A Level Chemistry revision page provides access to all the A Level Chemistry past papers for AQA, OCR and Edexcel as well as worksheets. 
naming alkanes alkenes and alkynes, organic functional groups worksheet and organic chemistry functional groups are three of main things we will show you based on the post title. Radical Probes Carbon-centered radicals (as well as many other types of radicals) show a propensity for addition to carbon-carbon pi bonds. The test will consist of only objective type multiple choice questions requiring students to mouse-click their correct choice of the options against the related question number. Very like Alkenes. Unsaturated hydrocarbons containing C=C having general formula C n H 2n. C) ammonia nitrogen oxide. These unsaturated hydrocarbons are isomeric with the saturated cycloalkanes. The following questions are organized somewhat by functional group (as many organic courses introduce nomenclature one functional group at a time). The nomenclature of the most. Priority assignment is based upon the four atoms directly attached to the stereogenic center. Alkene formula is written as C n H 2n. ACC- CH-NOMENCLATURE 1 NOMENCLATURE OF ORGANIC COMPOUNDS Mainly three systems are adopted for naming an organic compound : – (i) Common Names or Trivial System (ii) Derived System (iii) IUPAC system or Geneva System COMMON OR TRIVIAL SYSTEM On the basis of Source Property Discovery Structure (i) On the basis of source from which they were. Please keep a pen and paper ready for rough work but keep your books away. The department features state-of-the art research facilities. The more substituted the carbocation, the more stable it is,. Inorganic pharmaceutical & medicinal chemistry. Gable kevin. Naming Dienes. Nomenclature Review. MCQ – 1 mark ; VSA –1 mark ; SA I – 2 marks ; SA II – 3 marks ; LA – 5 marks ; Chapter-wise Marks Distribution for Class 11 th Chemistry Subject:. 7 Conformational Isomerism Conformational isomers are isomers in which the spatial relationship of atoms differs because of rotation around a carbon-carbon double bond. Unbranched chains 4 (ii) Alkenes 5 A. Grade level: 10-12 School: No school entered. Apply the IUPAC rules of nomenclature to name given inorganic and organic compounds from their formulas. Only the most common name will be shown. Crystal field theory is one of the simplest models for explaining the structures and properties of transition metal complexes. Optical isomerism is a form of stereoisomerism. Don't look back at the previous pages for the answers; instead, work out the answers based on what you remember. naming of alkenes, study their chemistry, and so on. Hunter, YSU Department of Chemistry, (2000. Alkanes ,Hydrocarbons - Get topics notes, Online test, Video lectures, Doubts and Solutions for CBSE Class 11-science on TopperLearning. Complete Nomenclature of Alkenes chapter (including extra questions, long questions, short questions) can be found on EduRev, you can check out Class 11 lecture & lessons summary in the same course for Class 11 Syllabus. The chemical and physical properties of ionic liquids depend on the combination of cations and anions, and the length of the alkyl chains and functional groups also have a significant effect on their properties. This A Level Chemistry revision page provides access to all the A Level Chemistry past papers for AQA, OCR and Edexcel as well as worksheets. (A) 5-hexen-3-ol(B) 1-hexen-4-ol (C) 3-hydroxy-5-hexene(D) Isohexen-3-ol (E) 4-hydroxy-1-hexene (A), alcohol is the parent name, taking precedence over alkene. At least one -c≡c- (triple bond) group i. 
Haloalkane style: The root name is based on the longest chain containing the halogen. Your Account Isn't Verified! In order to create a playlist on Sporcle, you need to verify the email address you used during registration. Open Digital Education. Alkanes and Alkenes Written by tutor Nathan R. The correct IUPAC name for the following structure is. (i) The carbon-carbon double bond in alkenes is made up of one σ and one π-bond. Alkane Alkene Alkyne Arene Cycloalkane. Please keep a pen and paper ready for rough work but keep your books away. Hydrocarbons - Alkenes. This course is designed to help college students to prepare for the first semester of their organic chemistry final exam. 2: Naming aromatic compounds: (arenes) large number on non-systematic names (Table 15. |
Written records of West Virginia’s history reach back only slightly more than 300 years, about half of which encompass the time when West Virginia was part of Virginia. Recorded history, however, is only a fragment of the West Virginia story and must be coupled with artifacts of preliterate people and other evidence which falls within the realms of geology, geography, and archeology.
Still evident after some 245 million years are the effects upon West Virginia of a great geological disturbance, a mountain-building era, known as the Appalachian Orogeny. At that time the floor of a portion of a great inland sea, which covered much of the interior of North America, was forced upward to create the Appalachian Mountains. In time the new land wore down to a large peneplain that tilts gently toward the Mississippi Valley. Natural forces, including erosion and the flow of streams, eventually produced a terrain marked by numerous valleys, rugged hills, and mountains that distinguish the state’s landscape to this day. Immense deposits of coal, oil, natural gas, salt, limestone, and other resources laid down in long-past geological eras have been vital to the economic life of West Virginia in historic times. The huge glaciers of the Ice Age never reached present West Virginia, but they did much to determine the state’s basic drainage patterns, especially with respect to the New, Ohio, and once-mighty Teays rivers.
The first inhabitants of West Virginia apparently descended from ‘‘Old Mongoloid’’ stock, or eastern Asians, who crossed the Bering Strait from Siberia to Alaska approximately 40,000 years ago. Over the centuries, Native Americans, or Indians, evolved through three major cultural stages, including Paleo-Indian, Archaic, and Woodland. Nomadic Paleo-Indian life centered upon the pursuit of large game animals and lasted until these animals became extinct about 6000 B.C. As early as 7000 B.C., Archaic Culture began to appear and continued over the next 6,000 years. A more reliable food supply that included small game, fish, roots, plants, and berries enabled the Archaic people to live in camps, often for long periods of time. Woodland Cultures, including the Adena, Hopewell, and Mississippian, evolved between about 1000 B.C. and A.D. 1700 and were among the most advanced in prehistoric West Virginia. Woodland Indians cultivated such plants as corn, beans, and squash, made pottery, and practiced burial ceremonialism. They left hundreds of mounds and other structures scattered across West Virginia. Among the best known are the Grave Creek Mound at Moundsville, the South Charleston-Dunbar mounds, the Bens Run earthworks in Tyler County, and the Mount Carbon rock walls in Fayette County.
The first European explorers found only a few natives in present West Virginia. By then, the Indians had formed into tribes and warfare was common. Two of the most powerful groups in the eastern United States were the Iroquois and Cherokee, both of which claimed parts of West Virginia. They probably forced weaker tribes, including the Shawnee, Mingo, and others, to abandon most of the state.
In 1606, King James I of England granted to the Virginia Company of London a vast expanse of land that included all of Virginia, present West Virginia, and Kentucky, as well as parts of North Carolina, Delaware, Pennsylvania, and even New York. The first English settlers arrived in Jamestown in 1607. During the 17th century, white settlers, as well as Africans, arrived in Virginia in ever-increasing numbers. As settlements pushed up the rivers of the Tidewater, native claimants to the land became more and more restless. In 1622 and 1644, clashes between English settlers and the Indians erupted into bloody wars with appalling losses and created conditions that made western exploration hazardous. Interest in advancing into frontier regions languished following the execution of Charles I and the establishment of the commonwealth under Oliver Cromwell, but it revived after the accession of Charles II to the throne in 1660.
Between 1669 and 1673, a surge of frontier exploration took place. Important explorers included John Lederer, who scaled the Blue Ridge Mountains northwest of present Charlottesville, Virginia; Batts and Fallam, who discovered the westward-flowing waters of the New River and laid the basis for English claims to the Ohio Valley; and Needham and Arthur, the latter the first person of European descent to visit the Kanawha Valley. After 1675, English expansion suffered setbacks partly due to troubles with the Susquehannock Indians, to Bacon’s Rebellion in Virginia, and to the death in 1680 of Abraham Wood, a leading promoter.
Renewed interest in the Virginia frontier did not develop until after the beginning of the 18th century. By then, land suitable for settlement had become one of the most important reasons for exploration. The first known plans for a settlement in present West Virginia were made by Louis Michel, a resident of Bern, Switzerland, who in 1706 envisioned a settlement at present Harpers Ferry. A later attempt by Michel and Baron Christopher de Graffenreid was abandoned because of objections of the Conestoga Indians and the conflicting claims to the region by Virginia, Pennsylvania, and Maryland. In 1716, Gov. Alexander Spotswood of Virginia, with about 50 gentlemen later dubbed the ‘‘Knights of the Golden Horseshoe,’’ their servants, and Indian guides, crossed the Blue Ridge Mountains by way of Swift Run Gap. Standing on the banks of the Shenandoah River, Spotswood claimed the land for England.
The location and date of the first settlement in West Virginia are uncertain. A settlement known as ‘‘Potomoke’’ in 1717 may have been at Shepherdstown. Morgan Morgan, a Welsh immigrant, however, has commonly been credited with making the first settlement in the state near Bunker Hill, Berkeley County, about 1726. It is now known that Morgan arrived about 1731 and that settlers were already in present West Virginia. Regardless of the location of the first settlement, it is clear that large numbers of immigrants did not arrive until after 1730, when Virginia enacted a land law that encouraged movement of people westward. Under that law speculators could acquire 1,000 acres for each family they recruited from outside the colony within a two-year period. This generous policy attracted large numbers of German and Scotch-Irish settlers, and by 1750 the population of the Valley of Virginia had reached a saturation point. In 1719, one of the largest land grants in American history was acquired by Thomas, Sixth Lord Fairfax. The Fairfax estate included the Northern Neck of Virginia and present Jefferson, Berkeley, Morgan, Hampshire, Hardy, and Mineral counties, as well as parts of Grant and Tucker counties in West Virginia.
As settlers crossed the Allegheny Mountains, serious conflicts over the Ohio Valley developed between England and France. In order to press her claims to the region and to erect a buffer between the settlements and hostile Indians, Virginia made use of the same land policy that had proved effective in the Valley of Virginia. Speculators, however, were now allowed three years to settle the required number of families. The largest grants were made to the Greenbrier, Loyal, and Ohio companies. Meanwhile, France vigorously asserted her claims to the Ohio Valley. In 1749, Celoron de Blainville led an expedition down the Ohio River and at places along the way buried lead plates with inscriptions claiming the Ohio Valley for his country. During the years immediately following, the French built key forts in the disputed region. In the clash between English and French interests, Western Virginia was in the very center of the storm. In 1753, Gov. Robert Dinwiddie, determined to block French expansion into the Ohio Valley, sent 21-year-old George Washington with a message to the French commandant at Fort Le Boeuf near Lake Erie. Dinwiddie asserted that the French were intruding upon British soil and demanded that they withdraw. The French made it clear that they would remain. At that time the young Virginian perceived that possession of the Forks of the Ohio, present Pittsburgh, held the key to control of the Ohio Valley.
Acting upon Washington’s advice, Dinwiddie dispatched a work party to erect a fort at that location. In April 1754, Washington with 150 militiamen set out to garrison the new fort. Meanwhile, a large French force had seized the Forks of the Ohio. In the skirmishes that followed, the French drove the Virginians from the region. In 1755, at the request of Governor Dinwiddie, Gen. Edward Braddock arrived in Virginia with two regiments of British troops. His coming transformed a frontier conflict into a war between two great empires. Unfamiliar with frontier modes of fighting, Braddock marched his army into an ambush, and his troops were defeated at the Battle of the Monongahela.
The clashes between the British and the French at the Forks of the Ohio were the initial hostilities in the conflict known in American history as the French and Indian War and in other parts of the world to which it spread as the Seven Years War. The war marked the beginning of a 40-year period in which the hunger for land and a preoccupation with frontier defense set the tone for West Virginia affairs. The Ohio Valley remained one of the war’s strategic theaters.
From the beginning, most Indians northwest of the Ohio River favored France, whose interests in the fur trade posed little threat to Indian land or ways of life. On the other hand, English settlements and agricultural pursuits were a danger that must be resisted. In Western Virginia hostile Indians destroyed the Greenbrier settlements and repeatedly attacked the upper Potomac settlers. The capture of the Forks of the Ohio by Gen. John Forbes in 1758 and the construction of Fort Pitt helped turn the tide of the war in favor of the English. By 1759, England controlled key positions in North America, and in 1763 the Treaty of Paris ended the fighting. France lost the Ohio Valley and the rest of her colonial possessions on the North American mainland. There was never then any doubt that English culture would be dominant in Western Virginia.
Western Indian tribes, fearful and embittered, joined together under Chief Pontiac and struck quickly at the English. The Greenbrier settlements were again destroyed, and settlers in the Monongahela Valley and other areas suffered heavy losses. In an attempt to appease the Indians, the British government issued the Proclamation of 1763, which forbade settlements west of the crests of the Allegheny Mountains. Later, by the treaties of Hard Labor, Fort Stanwix, and Lochaber, the Iroquois and Cherokee gave up their claims to lands in West Virginia. Beginning in 1769, waves of pioneers swept into the upper Ohio, Monongahela, Greenbrier, and Kanawha valleys.
The treaties, however, failed to consider the claims of such tribes as the Shawnee, Delaware, and Mingo. Once again, an influx of speculators and new settlers alarmed the western tribes and by the early 1770s provoked a new round of hostilities. The most serious was Dunmore’s War. In its only battle, fought at Point Pleasant on October 10, 1774, the Virginians, led by Andrew Lewis, defeated the Indians under Shawnee Chief Cornstalk. The Treaty of Camp Charlotte restored peace. The Battle of Point Pleasant was a decisive factor in the neutrality of the Indians during the first two years of the American Revolution and allowed the continuation of settlements into Western Virginia and Kentucky.
Although Western Virginians participated in nearly every major battle of the Revolutionary War, for most families the war was a continuation of hostilities with the Indians, who now had British support. In 1777, the Indians broke their neutrality and attacked Fort Henry at Wheeling. Indian raids again became common in most of Western Virginia and continued even after the British surrendered at Yorktown in 1781. The last important Revolutionary War engagement in Western Virginia occurred in 1782 when about 200 Indians besieged Fort Henry. Clashes continued until 1794, when Gen. Anthony Wayne defeated the Indians in the Battle of Fallen Timbers and forced them to give up their claims to lands south of the Ohio River.
On the eve of the Revolution, avaricious speculators expanded their horizons. They proposed an ambitious scheme for a 14th American colony known as Vandalia, which included most of present West Virginia, southwestern Pennsylvania, and portions of Kentucky. The war prevented the establishment of the colony, and its promoters later attempted to gain approval for a 14th state known as Westsylvania. Congress, however, rejected the plan, and Western Virginia remained a part of Virginia.
In 1779, the Virginia general assembly passed a land law that had far-reaching effects upon West Virginia, even to the present. The law recognized the rights of original settlers. It also permitted the buying and selling of certificates that enabled speculators, many of whom were from outside West Virginia, to acquire hundreds of thousands of acres of land. Unfortunately, the law did not require land to be surveyed before its transfer. As a result, land claims were often imprecise and provided lawyers with a profitable business for decades in resolving disputes. Among the most baneful effects of the law on the state were the emergence of an enduring system of absentee landownership and arrested economic growth.
Until nearly the end of the 19th century, when large-scale industry became important, most West Virginians depended upon subsistence farming for their livelihood. Families continued to rely upon their fields and the forests for products commonly used in their foods, shelter, and clothing. Early industries, including grain milling and textile manufacturing, were often farm-related.
The War of 1812 stimulated industrial development, especially salt and iron. The Kanawha Salines at present Malden became by far the most important salt-producing center in the region. By 1815, 52 salt furnaces were operating along the Kanawha River for a distance of ten miles east of Charleston. Competition among salt-makers was so keen that in 1817 they organized the Kanawha Salt Company, sometimes regarded as the first trust in American history. Production in the Kanawha Valley peaked in 1846 when 3,224,786 bushels were produced. Salt stimulated the growth of timbering, flatboat construction, barrel making, and coal mining. The first iron furnace in Western Virginia was established by Thomas Mayberry at Bloomery near Harpers Ferry in 1742. The Peter Tarr Furnace on Kings Creek near Weirton, the first iron furnace west of the mountains, was erected in 1794. Later, the Wheeling area and the Monongahela Valley became the most important centers of iron manufacturing in West Virginia.
On the eve of the Civil War, Burning Springs in Wirt County emerged as one of the foremost oil fields in the United States. Natural gas, often found in the same locations as oil, had little importance before the war. During the 1840s, however, William Tompkins, a Kanawha Valley salt-maker, experimented with gas in the operation of his salt wells.
A growing population and expanding industries led to significant developments in transportation. The National Road, the first major highway in the region, was completed by the federal government from Cumberland, Maryland, to Wheeling in 1818. The highway helped to transform Wheeling into a major industrial and commercial center in the upper Ohio Valley. Three roads completed by Virginia before the Civil War included the James River & Kanawha Turnpike, the Northwestern Turnpike, and the Staunton-Parkersburg Turnpike. These highways stimulated economic development and promoted the growth of numerous new towns.
Although flatboats and keel boats were commonly used, the steamboat soon became the most important craft on Western Virginia’s rivers. James Rumsey, a resident of Shepherdstown, was one of the pioneers in the development of the steamboat. Construction of steamboats for western rivers quickly became an important industry along the upper Ohio. The George Washington, launched by Capt. Henry M. Shreve at Wheeling in 1816, demonstrated that the steamboat had an important future on the inland waterways. Steamboats made river improvements imperative. In the 1850s, the Coal River Navigation Company, with funds provided by coal companies and the state, built nine locks and dams, the first such facilities in Western Virginia.
By the 1830s, interest in transportation in the United States began to shift to railroads. The first major line in Western Virginia, the Baltimore & Ohio, was completed from Harpers Ferry to Wheeling in 1853. The only other important line in Western Virginia before the Civil War was the Northwestern Virginia Railroad, opened in 1857 from Grafton to Parkersburg.
In the early 19th century, sectionalism began to appear in Virginia. The Blue Ridge and later the Allegheny Front marked a divide between eastern and western parts of the state. Differences between Virginians grew out of their cultural backgrounds, their divergent economic interests, and the overwhelming political influence of Tidewater and Piedmont planters. Friction between the sections intensified over such political issues as expanding the vote, representation in the legislature, and popular election of state and county officials. Ironically, the Virginia constitution of 1776, crafted by leaders who proclaimed devotion to democracy, had a granite-like quality that assured the unassailability of eastern supremacy in state affairs.
Western dissatisfaction led to several attempts to reform the state constitution. The Staunton conventions of 1816 and 1825 and the Constitutional Convention of 1829–30 failed to meet western demands. Some western leaders favored separation from Virginia. The convention of 1850–51 made changes that addressed the political sources of western discontent. Under the new constitution a westerner, Joseph Johnson of Bridgeport, became the first popularly elected governor of Virginia. These successes, however, were overshadowed by economic inequities. The new constitution shifted the tax burden to the west by requiring that all property, except slaves, be taxed at its actual value, and it contained provisions that dealt severe blows to internal improvements favored by the west. Old rivalries between east and west were soon renewed.
In the three decades before the Civil War, slavery was increasingly an issue in the United States. Two prominent Western Virginians took a strong stand on slavery. Henry Ruffner, a Kanawha Countian who served as president of Washington College (now Washington and Lee University), published the Ruffner Pamphlet in which he attacked slavery as an evil that kept immigrants out of Virginia, slowed economic development, and hampered education. He urged gradual emancipation of all slaves west of the Blue Ridge Mountains. Alexander Campbell, a founder of the Disciples of Christ and president of Bethany College, contended, however, that the North should accept slavery in the South. He supported the Fugitive Slave Law of 1850 but believed that runaway slaves should be provided the necessities of food, shelter, and clothing. As tensions over slavery mounted, several churches divided over the issue. The Methodists, who split in 1844, included most of Western Virginia in their northern branch.
Some well-known abolitionists regarded Western Virginia as useful to their cause. In 1857, Eli Thayer of Massachusetts chose Ceredo for a settlement by 500 New England emigrants who were expected to demonstrate to Southerners that free labor was superior to slave labor. The Civil War led to the collapse of the experiment, and when the conflict ended only about 125 of the original settlers were left. Unlike Thayer’s friendly invasion, abolitionist John Brown in 1859 led a bold raid on Harpers Ferry so alarming to the South that some historians believe it made the Civil War inevitable.
The election of Abraham Lincoln as president in 1860 exacerbated feelings that led to the Civil War and ultimately to the formation of West Virginia. Following the fall of Fort Sumter and Lincoln’s call for volunteers, Virginia held a convention in April 1861 to consider a course of action. The convention voted 88 to 55 to leave the Union. Of 47 delegates from present-day West Virginia, 32 voted against secession, 11 favored it, and four did not vote. John S. Carlile and other Unionist delegates hurried home and organized opposition to Virginia’s decision. As a result of their efforts, 37 counties sent delegates to a meeting in May known as the First Wheeling Convention. There, Carlile urged immediate steps to establish a new state. Other leaders, including Waitman T. Willey, Francis Harrison Pierpont, and John J. Jackson, preferred to postpone action.
In June 1861, the Second Wheeling Convention established the Reorganized, or Restored, Government of Virginia at Wheeling. Francis H. Pierpont was chosen governor, and Willey and Carlile were named to the U.S. Senate to replace Virginia’s senators who had cast their lot with the Confederacy. Throughout the Civil War, Virginia had two governments. The Wheeling government supported the Union, and the Richmond government the Confederacy. In August, the Second Wheeling Convention, in its Adjourned Session, took steps to establish a separate state, subject to the approval of voters. On October 24, 1861, the voters of 41 counties approved the formation of a new state and on the same day elected delegates to a constitutional convention, although less than 37 percent of those eligible to vote actually did so. The constitution prepared by the convention was approved by the voters in April 1862, with the vote taken in unsettled conditions.
In order to become a state, West Virginia needed the approval of Virginia and a constitution acceptable to the Congress and the president. Since the Confederate government in Richmond would never agree to the dismemberment of Virginia, leaders of the proposed new state turned to the Reorganized Government. Governor Pierpont called a special session of the legislature that approved the request within a week. His role in establishing the state was so crucial that he is regarded as the ‘‘Father of West Virginia.’’
In the U.S. Senate, a petition that would allow West Virginia to enter the Union as a slave state was referred to the Committee on Territories, of which Carlile was a member. Unexpectedly, for reasons on which historians have disagreed, Carlile, who had previously favored creation of a new state, now included proposals that nearly destroyed the chances for statehood. At this critical moment, Willey offered a compromise to gradually abolish slavery in West Virginia. With the Willey Amendment to the state constitution, the statehood bill passed both houses of Congress. The West Virginia Constitutional Convention reconvened in February 1863 and accepted the Willey Amendment. The amended constitution was approved by the electorate in a vote of 28,321 to 572. In accordance with a proclamation of President Abraham Lincoln, West Virginia entered the Union on June 20, 1863, as the 35th state.
When West Virginia became a state, the Civil War had already been raging within its borders for two years and had deepened the divisions among the state’s people. Historians do not agree on exactly how many West Virginians served in Union and Confederate armies. Charles H. Ambler and Festus P. Summers estimated that from 25,000 to 45,000 West Virginians fought in the Civil War, about 80 percent for the Union and about 20 percent for the Confederacy. More recent estimates place the number of Union soldiers at no more than 60 percent and Confederates at about 40 percent. Boyd B. Stutler, in his Civil War in West Virginia, counted 632 actions, including battles, skirmishes, and other engagements in West Virginia.
The year 1861 was one of intense military activity. The Battle of Philippi on June 3 is sometimes regarded as the first land battle of the Civil War. Before the end of summer, Union forces controlled both the Monongahela and Kanawha valleys. A Union victory at Carnifex Ferry in September 1861 prevented the Confederates from driving a wedge between the two federal forces. Later, Gen. Robert E. Lee’s efforts to regain lost territory ended in failure at the Battle of Cheat Mountain. By the winter of 1861–62, much of the military activity in West Virginia had degenerated into vicious guerrilla warfare involving such irregular bands as the Black Striped Company in Logan County and the Moccasin Rangers in Braxton, Nicholas, and other central counties. Some of the most notable military actions of 1862 and 1863 were in the form of daring Confederate raids into Union-held territory. They included the Jenkins Raid of 1862 and the Jones-Imboden Raid of 1863. The Battle of Droop Mountain on November 6, 1863, gave Union forces control over most of the territory of the new state of West Virginia.
The Reconstruction Era was hardly less traumatic than the Civil War. Divisions existed not only between Unionists and former Confederates, but also among the Unionists themselves. Unconditional Unionists, including Arthur I. Boreman, Archibald W. Campbell, and Waitman T. Willey, were willing to accept the emancipation of slaves and increased federal authority in order to maintain statehood. Conservative Unionists, however, adamantly opposed a government they considered dictatorial and abolitionist.
Fearful for the state’s future, Governor Boreman and Radical Republican leaders who dominated the legislature were determined to prevent former Confederates, most of whom were Democrats, from regaining political power. Repressive legislation provided for confiscation of the property of persons regarded as enemies of the state. The Radical-dominated legislature also enacted the Voters’ Test Oaths of 1865 and the Voters’ Registration Law of 1866. These measures restricted the right to vote and required state and local officials, as well as attorneys and school teachers, to take oaths of allegiance to West Virginia and the United States. Estimates of the number of disfranchised voters range from 15,000 to 25,000. By the end of the 1860s, the anomaly of these stern proscriptions at a time when the federal government was assiduously protecting the voting rights of African-Americans led to calls for change. In 1871, moderate Republicans joined with Democrats to pass the Flick Amendment to the state constitution, which ended political restrictions on ex-Confederates in West Virginia. Voters approved the amendment by a margin of more than three to one.
In 1870, the Democratic Party carried the West Virginia elections. The governorship of John J. Jacob initiated a period of Democratic control that lasted 26 years. Democrats immediately took steps to provide the state with a new frame of government. A convention assembled in Charleston and wrote the constitution of 1872, under which the state is still governed. The new constitution eliminated the township system and implemented a modified county court system. It extended the term of office of the governor from two to four years. From time to time voters have declined to authorize a new convention to modernize the state constitution. However, they have endeavored to retain the workability of a somewhat antiquated document by approving 70 of 118 proposed amendments.
One of the most sagacious and farsighted provisions of the original constitution of 1863 was its mandate to the legislature to provide a ‘‘thorough and efficient’’ system of free public schools for all children in the state. The legislature created an administrative structure that included a state superintendent, county superintendents, and officials in townships, into which counties were divided for educational purposes. By 1870, the state had 2,270 schools, mostly with one room and one teacher. The constitution of 1872 retained the free school mandate. Some counties, nevertheless, faced lingering opposition to free schools largely because of objections to taxes needed for their support or to the free-school principle itself.
The development of West Virginia public schools in the last quarter of the 19th century and the early decades of the 20th century was similar to that of several southern and midwestern states. Important milestones were the designation of Marshall College (now Marshall University) as the state’s normal training school for teachers in 1867 and the establishment of branch normals at Fairmont, Athens, Shepherdstown, Glenville, and West Liberty in the 1870s; the assignment of training for black teachers to the two ‘‘colored institutes’’; the enactment of a compulsory attendance law in 1903; and the opening of 233 high schools by 1925 and 88 junior high schools by 1928. West Virginia pioneered the adoption of a graduating plan for public schools, formulated by Alexander L. Wade of Monongalia County. Beginning in the 1890s, it gradually became the pattern throughout the United States. With the adoption of the County Unit Plan of 1933, providing countywide rather than district school boards, West Virginia again led the nation in a major educational reform. During the 20th century, public schools were strongly influenced by the progressive education movement, whose leaders gained control of the educational administrative machinery at the state level and achieved power that lasted throughout the century.
As in other states, West Virginia education has been shaped to a considerable extent by federal policy and federal support. Under the terms of the Morrill Act, West Virginia University was founded in 1867 as the state’s land-grant institution. The GI Bill of Rights of 1944 provided generous educational benefits to thousands of World War II veterans and improved the financial condition of nearly every college in the state. Segregation of West Virginia schools, mandated by the state constitution, was ended by the U.S. Supreme Court decision Brown v. Board of Education of Topeka (1954). Unlike several southern states, West Virginia achieved integration with little opposition. Ongoing federal programs launched in the 1960s, including Upward Bound and Head Start, have done much to provide equal educational opportunities for children throughout the state. Major issues in education at the turn of the 21st century include the pros and cons of school consolidation and the impact of the federal No Child Left Behind Act. At the same time, like other Americans, West Virginians have serious concerns regarding declining discipline and increasing violence in the public schools.
In celebrating the 50 years of statehood in 1913, West Virginians looked back with pride upon an era of unprecedented industrial development. The achievement was largely in extractive industries and based upon coal, oil, natural gas, and timber resources, which had lain dormant for millennia. In the late 19th century, state government, whether in the hands of Democrats or Republicans, endeavored to extirpate the bitterness wrought by the Civil War and Reconstruction and to establish a climate favorable to industrial growth. By 1913, annual coal production exceeded 28 million tons. The state achieved first place in the nation in oil production in 1898 and in natural gas output in 1906. Timber production reached its peak in 1909.
Closely associated with such expansion was the building of hundreds of miles of railroads, including the Chesapeake & Ohio, Norfolk & Western, Coal & Coke, Western Maryland, Virginian, and Kanawha & Michigan lines. Railroad magnates such as Cornelius Vanderbilt, J. P. Morgan, Collis P. Huntington, and others acquired vast acreages of West Virginia land and mineral resources. By the end of the 20th century, major West Virginia railroads, after numerous mergers, were incorporated into such giants as CSX and Norfolk Southern, two of the largest landholders in the state. Also vital to industrial growth was the construction of locks and dams in the Ohio, Kanawha, Monongahela, Big Sandy, and Little Kanawha rivers, their upgrading in the 1930s, and further improvements as the 20th century drew to a close.
By 1900, West Virginia was clearly on the threshold of major economic and demographic changes. The state still had some 93,000 farms. Nevertheless, migration from rural areas to cities, one of the dominant trends in the nation, was also in progress in West Virginia. By 1994, farm acreage was less than 35 percent of that of 1900. Most were commercial rather than subsistence farms. Three fourths of agricultural income came from livestock, including cattle and calves, poultry, and dairy products. Apples, peaches, and tobacco were important commercial crops.
By the late 1800s, rapidly expanding industries, especially coal, led to an acute need for labor, and both the state government and individual companies sent agents abroad to take advantage of the ‘‘New Immigration’’ from southern and eastern Europe. They recruited thousands of Italians, Poles, Hungarians, Austrians, and other nationalities, as well as African-Americans from the South. These ethnic groups added greater diversity to the state’s population and culture.
West Virginia’s rich resources and emerging extractive industries caught the attention of powerful business and financial interests outside the state. Many acquired large amounts of land for a small fraction of its real worth. State businessmen and politicians sometimes became allies of powerful non-resident interests whose activities left both benefits and problems. The new industrial age transformed much of the state from a society of small, independent farmers into one with a class-oriented social and economic structure of newly rich industrial barons at the apex and landless wage-earners at the bottom. Sizable amounts of West Virginia’s wealth left the state, and the land from which it was drawn fell under the heavy cloud of a colonial economy.
As extractive industries, particularly coal, gained a prominent place in the West Virginia economy during the first half-century of statehood, capital investment in manufacturing increased fourfold between 1870 and 1900. The Northern Panhandle, Ohio Valley, and Kanawha Valley became major manufacturing areas. Wheeling was the leading industrial city in the state throughout the 19th century. Other prominent industrial centers included Charleston, Parkersburg, Newell, Wellsburg, Benwood, New Cumberland, and Huntington.
World War I was a major stimulus to industry, especially the manufacture of chemicals. The federal government laid the basis for the industry in the Kanawha Valley by constructing a mustard gas plant at Belle and a smokeless powder plant at Nitro, where a community of 25,000 people sprang up almost overnight. Chemical firms in the Kanawha Valley expanded rapidly in the decades after 1920 and manufactured a great variety of new products, including rubber, plastics, rayon, nylon, and automotive antifreezes. World War II further accelerated the making of chemicals in West Virginia. The Kanawha Valley became one of the chemical centers of the world. By 1970, every Ohio River county except Jackson had at least one chemical plant.
During the first half of the 20th century, textile, clay-product, glass, and electric power industries grew rapidly. Hancock County manufactured fine chinaware. The state was a pioneer in the development and use of modern glass-making machinery, but it was also known throughout the world for its Fostoria and hand-blown Blenko, Fenton, and Pilgrim glass products. After 1940, electric power production increased by about 2,000 percent.
By the mid-20th century, mechanization, foreign competition, and the emergence of a global economy contributed to fundamental changes in West Virginia industry. Many traditional industries experienced decline. Increasingly, the state was confronted with technological unemployment. Thousands of miners and other workers lost their jobs and left. The population fell from 2,005,552 in 1950 to 1,860,421 in 1960. Further losses occurred in the 1960s and 1980s. Scores of once-thriving mining towns lost so many families that they became ghost towns. In the 1990s, however, the state’s economy showed signs of improvement. Important growth areas included certain sectors of manufacturing, such as the automobile and wood-based industries, as well as the service industries, tourism, and recreation. Investments by Japanese, Taiwanese, and British firms attested to an increasing globalization of the state economy. Service industries, including banking and insurance, real estate, and rapidly expanding health care, made up 68 percent of the gross state product. By 1996, the state’s improved economy seemed to be contributing to a reversal of nearly four decades of population losses. In 2010, the state’s population was 1,852,994.
Industrialization in West Virginia produced conditions conducive to an organized labor movement. As early as the 1820s, Wheeling had a sizable wage-earning class and a labor newspaper. A strong labor movement, however, did not develop until after the Civil War. The first important union was the Knights of Labor, founded in 1869. The Knights established a local organization at Paden City in 1877, and within a few years 16 others were founded in the state. The great railroad strike of 1877, the first nationwide industrial strike, began at Martinsburg and ended only by federal intervention. In 1880, the Knights of Labor supported an unsuccessful strike by miners at Hawks Nest in Fayette County. Following these and other setbacks, the union gradually declined.
In 1881, the American Federation of Labor, made up of crafts of skilled workers, was organized. It advocated an eight-hour day, six-day workweek, higher wages, and job safety and security. By 1914, the West Virginia Federation of Labor, which was affiliated with the national organization, included 152 local craft unions with 31,315 members. The union was especially strong among iron, steel, and tin workers; transportation employees; and glass workers. Wheeling had more than 40 percent of the union craft workers in the state. Wheeling, Fairmont, Clarksburg, Charleston, Hinton, Morgantown, and Parkersburg had central labor organizations made up of the craft unions.
The most powerful union in West Virginia has been the United Mine Workers of America. The union was formed in Columbus in 1890 and only gradually established itself in West Virginia. Only about half of state miners participated in a nationwide strike in 1894. Union membership declined in 1897 to a mere 206 workers. Between 1897 and 1902, the UMWA enlisted the support of well-known labor leaders from across the nation. They included Samuel Gompers, Eugene V. Debs, and Mary ‘‘Mother’’ Jones. Operators responded with court injunctions, yellow-dog contracts, blacklisting, and heavily armed mine guards. Nevertheless, in 1902 the union, with assistance from Jones, organized about 7,000 miners in the Kanawha Valley. For the next quarter-century, Mother Jones had a powerful influence with miners in West Virginia.
During the Mine Wars of the early 20th century, some of the most violent episodes in the state’s labor history occurred in the coalfields. In 1912–13, troubles erupted on Paint and Cabin creeks, tributaries of the Kanawha River, when operators refused to renew contracts with the union. Sporadic violence occurred at Mucklow and Holly Grove and caused Governor Glasscock to impose martial law. The strike ultimately ended when Governor Hatfield helped arrange a settlement.
The great demand for coal and a shortage of labor during World War I produced conditions in which the industry flourished, wages rose, and union membership increased. Between 1919 and 1921, UMWA efforts to unionize the mines of southern West Virginia, particularly in Logan and Mingo counties, were marked by incidents of unusual violence, including the Matewan Massacre, Sharples Massacre, and the Battle of Blair Mountain. Labor suffered major setbacks. By 1924, the UMWA had lost half its members in West Virginia and was nearly bankrupt. Collective bargaining, one of the union’s major goals, remained unachieved.
The Great Depression, beginning in 1929, proved a catalyst for fundamental political, economic, and social reforms in the United States. In 1932, Franklin D. Roosevelt, the Democratic candidate for president, promised a ‘‘New Deal’’ in handling the nation’s extraordinary economic problems. The National Industrial Recovery Act of 1933 (NIRA) gave workers benefits for which they had long battled. It offered an eight-hour workday, an end to yellow-dog contracts, and the right to collective bargaining. After the U.S. Supreme Court ruled that NIRA was unconstitutional, many parts of the act relating to labor were included in the Wagner Act of 1935.
Under the leadership of John L. Lewis, coal miners made rapid gains in the more benign political environment. The Appalachian Agreements eventually ended unfavorable wage scales, and in 1946 a Miners’ Welfare and Retirement Fund, one of the union’s most important goals, was established. During the 1940s, the UMWA reached the zenith of its political influence in West Virginia when its leaders persuaded Matthew Neely to give up his U.S. Senate seat to run for governor. After 1950, mechanization and automation in coal mining drastically reduced the number of miners and began a long-term and eventually dramatic decline in UMWA membership and influence in the state.
Historically, mining has been one of the most dangerous industries. Most miners died in individual accidents killing one or a few miners at a time, but major mine disasters occurred at Monongah in 1907, Eccles in 1914, Benwood in 1924, and Farmington in 1968. Another disaster, at Buffalo Creek in 1972, was the result of the collapse of a coal company dam in which 125 people were killed and 17 communities destroyed. The dangers of underground work outside the coal industry appeared in 1932 during the construction of the Hawks Nest Tunnel, which diverted waters of the New River to a hydroelectric plant. Scores of men died of silicosis that might have been prevented had the company taken the proper precautions.
During the 1960s and 1970s, the actions of both federal and state governments led to improved safety and working conditions. In 1969, the federal government recognized pneumoconiosis, or black lung, as an occupational disease and set up a fund to support afflicted miners. A year later, the state established a Black Lung Fund.
One of the most distinctive events in the state’s labor history occurred in the early 1980s when workers of the Weirton Steel Company purchased its properties and prevented the plant’s closing. For a time, the new company was the largest employee-owned business in the nation, before suffering serious setbacks at the end of the 20th century. Employee ownership ended when Weirton Steel was sold to the International Steel Group early in the 21st century.
Political affairs since 1863 have reflected both changes and continuities in life in West Virginia. In the years immediately following statehood, the state was profoundly affected by the problems and tensions of Reconstruction. Partisan politics agitated discussions regarding the location of a permanent state capital. Republicans favored Wheeling, their center of influence. Democrats wanted the capital in southern West Virginia, where their party was strong. In 1877, the matter was submitted to the voters, who chose Charleston over Clarksburg and Martinsburg as the permanent seat of government. The move was made in 1885.
In 1871, following the troubled eight years of Radical Reconstruction, the Democratic Party, augmented by disfranchised ex-Confederates and by Liberal Republicans, captured the governorship and the legislature. The so-called Bourbon Democrats often clung to the ideals of the rural South but promoted the development of industry, and their rule coincided with the beginnings of the industrial revolution in West Virginia.
Party labels in the late 19th and early 20th centuries are not always enlightening. Bourbon Democrats and conservative Republicans shared many of the same ideas and policies, and favored the development of the state’s resources. The political and business relationships between Henry Gassaway Davis, who had enormous power in the Democratic Party, and his son-in-law, Stephen B. Elkins, who after 1894 had similar control over Republican affairs, illustrate the degree to which politics was tied to industrial welfare and influenced by great industrial tycoons. Four governors—George W. Atkinson, Albert B. White, William M. O. Dawson, and William E. Glasscock—are commonly known as ‘‘Elkins governors.’’ Relations between West Virginia industrialists and those on the national scene often brought temporary prosperity and opportunities but in the long run helped move the state toward economic dependency.
Concerns over unbridled industrial exploitation of both natural and human resources, as well as government neglect of many vital services, helped set the stage for the Progressive Movement in West Virginia. From 1900 to 1920, progressive ideals were at the center of state affairs. Although the movement transcended party lines, the greatest gains were made during the tenure of the Republican governors, particularly Henry D. Hatfield. One student of the period observed that at the end of the Hatfield administration West Virginia had as much progressive legislation as any state in the nation. Except for the Cornwell administration (1917–21), Republicans continued to control the governorship until 1933.
Like many other Americans, West Virginians were beguiled by the prosperity of the 1920s. In 1924, when John William Davis of Clarksburg received the Democratic nomination for president of the United States, West Virginia nonetheless gave its electoral votes to incumbent Republican Calvin Coolidge, whom they associated with the good times. Republican administrations in West Virginia during the 1920s were conservative, and the laissez-faire philosophy of government and economic affairs was the order of the day.
The Great Depression brought wide-scale unemployment, with thousands of people reduced to penury, and proved to be a watershed in American and West Virginia history. Laissez-faire doctrines fell before the activist philosophy of Roosevelt’s New Deal, which projected an expanded role for government in economic, social, and cultural matters and allowed the Democratic Party to regain control over national and state affairs. The New Deal and the measures taken by Governor Kump and the legislature brought new hope to economically distressed West Virginians. Through such agencies as the National Industrial Recovery Administration, Works Progress Administration, Public Works Administration, Civilian Conservation Corps, National Youth Administration, and others, unemployment diminished and the economy improved. The easing of the Great Depression paved the way in West Virginia for a new Democratic era that continued into the 21st century. The period following World War II witnessed troubling new economic problems in West Virginia. The unsettled conditions, along with the popularity of Republican President Dwight D. Eisenhower, interrupted Democratic trends in the state and helped Republican Cecil Underwood capture the governorship in 1956.
While state politics have normally had little impact on the rest of the nation, the West Virginia primary of 1960 attracted national interest when it became a battleground between John F. Kennedy and Hubert H. Humphrey for the Democratic nomination for president. Kennedy’s landslide victory in West Virginia proved to be a turning point in his campaign for the presidency.
During the 1960s, policies of the federal government exerted major impact upon conditions in West Virginia. President Kennedy’s New Frontier and President Lyndon B. Johnson’s War on Poverty pumped millions of federal dollars into the state. Among the most important new federal agencies was the Appalachian Regional Commission (ARC), established in 1965. Although it helped develop health-care centers, and supported vocational training, erosion control, and other projects, four-fifths of the ARC budget was devoted to construction of highways. At the close of the 20th century, more than 300 miles of Appalachian Corridor highways had been completed in the state.
Since the 1960s, one of the most significant changes in West Virginia government has been the emergence of a strong chief executive. The Modern Budget Amendment of 1968 made the governor responsible for preparation of the state budget. In 1970, the Governor’s Succession Amendment permitted a governor to serve two consecutive terms. These amendments have led to a sharp increase in the influence and prestige of the governorship. Unlike other branches of state government, which have been dominated by Democrats, the governor’s office since 1968 has alternated between Republicans and Democrats.
Leaders in both parties were deeply concerned about the condition of the state’s economy. Economic improvements were sometimes made at high costs to the environment, and government officials sought ways to balance economic gains against environmental concerns. One controversial issue was strip mining, which liberals maintained must either be abolished or strictly regulated. Young John D. (Jay) Rockefeller IV, who came to rural Kanawha County as a social worker in the 1960s, endeared himself to liberals by boldly advocating the abolition of strip mining. Following the energy crisis of 1973 and his election to the governorship, Rockefeller became a proponent of regulation rather than abolition. By the early 1990s, continued complaints over the destructive practices of coal operators led to threats by the federal government to take over regulation of surface mining in West Virginia. The actions of Governor Gaston Caperton and the legislature, which appropriated more funding for the employment of additional state inspectors, averted federal actions. By the late 1990s, mountaintop removal, the most profitable and arguably the most damaging form of surface mining, had become common and led to sharp public debate.
Public demands for greater access to education, health care, and other services produced rapid growth in both the size and costs of state government. In an effort to streamline administration, Governor Caperton reorganized the executive branch under seven ‘‘super secretaries,’’ each responsible for several formerly separate agencies. His action, however, aroused criticism that another layer of expensive bureaucracy had been established.
In recent decades the state’s governors, congressional representation, and other officials have made concerted efforts to promote economic development, including foreign investments. Sen. Robert C. Byrd, known nationally as an authority on Senate history and the U.S. Constitution, won federal appropriations in excess of $1 billion and brought numerous federal projects and facilities to West Virginia. By the mid-1990s, the state’s economy bore signs of improvement although some ground was later lost in the recession that followed the national boom of the late 1990s. Between 1988 and 1997, the state budget more than doubled, rising from about $3.3 billion to approximately $7 billion.
As the 20th century slipped away, West Virginians could reflect upon the great changes that it had brought. The automobile, radio, motion pictures, television, computers, and other inventions had opened vistas little dreamed of when the century began. It had brought new opportunities for education and self-fulfillment, recognition of human rights for all people, and ever-increasing prospects for more people to share in the blessings the state had to offer. As always, however, problems remained. West Virginians had deep apprehensions about the future. Their concerns included the quality of education; the availability of health care, especially for children and the elderly; environmental matters; threats to cherished traditional values; and fears that the nation might not have in the future the prescience or the strength to manage the responsibilities of world power.
Cognitive development refers to the development of the ability to think and reason. It is the transformation of the child's undifferentiated, unspecialized cognitive abilities into the adult's conceptual competence and problem-solving skills (Driscoll, 2005). For many psychologists, cognitive development answers questions about how children move toward the endpoint of acquiring adult skills, what stages they pass through, how changes in their thinking occur, and what role learning plays.
Among the many theories introduced to explain children's cognitive and knowledge development, Jean Piaget and Lev Vygotsky proposed the most influential ones in this area of psychology. Their theories underlined that the way children learn and mentally grow plays a critical role in their learning progress and the development of their abilities. Piaget and Vygotsky are considered constructivists, who believed that learning occurs as a result of "mental construction," by fitting new information into the cognitive structures (schemes) that learners already have (Driscoll, 2005). The constructivist approach also suggests that learning is affected by the context in which knowledge transfer occurs and by learners' beliefs and attitudes. Piaget and Vygotsky also agreed on societal influences on cognitive growth; however, they differed on the process of learning progression. Piaget believed that children learn by interacting with their surroundings, with little importance given to input from others, and that learning occurs after development; Vygotsky, on the other hand, held that learning happens before development and that children learn through history and symbolism and value input from their surroundings (Slavin, 2003).
Further, it is imperative for teachers to understand the progression of cognitive development and the constructs of the major theories in the field in order to attend to the unique needs of each child and to develop learning programs, instructional plans, and classroom activities in a developmentally appropriate way. The kindergarten program is one such learning program of particular interest because it influences children at a very young age and shapes their cognitive development journey. Kindergarten learning programs should be designed around the natural approach to children's learning suggested by the cognitive development theories. The natural approach holds that the physical, socio-emotional, and cognitive development of children depends on activity and interactions with others (Driscoll, 2005). This means that play is a key aspect of kindergarten learning programs and is seen as a phenomenon of the growth of thought and activity (Piaget, 1951).
"Play consists of activities performed for self-amusement that have behavioral, social, and psychomotor rewards. Play is directed towards the child, and the rewards come from within the individual child; it is enjoyable and spontaneous" (Healthline.com). Play consists of different types that could be utilized to serve different needs of children in different situations and settings. Types of play range from physical play which involves jumping, running and other physical activities to the surrogate play at which ill children watch others play on their behalf. They also range from "inactive observation" play at which children prefer to stay away and watch to "active associative" at which children engage in group play that requires planning and co operation (Healthline.com). Play types also include expressive play which involves playing with materials (such as clay, play dough,…) and the manipulative play that gives children the measure of control over others and their environment (for example, to throw a toy out of a cot, watch a parent pick it up, and then throw it out again). Symbolic play (also be referred to as dramatic play) is another important type of play at which children enact scenes where they substitute one object for another (for example, a child will use a stick to represent a spoon or a hair brush to represent a microphone). This kind of pretend play takes on various forms: The child may pretend to play using an object to represent other objects, playing without any objects and pretending that they are indeed present. Or the child may pretend to be someone else and imitate adults and experiment what it means to be an adult in a role they are exposed to in their surrounding environment (for example, mother, father, care-giver, doctor and so on). They may also pretend through other inanimate objects (e.g. a toy horse kicks another toy horse). Symbolic play in children can usually be observed during the beginning of the second year of life and it has been linked through the studies and experiments to the cognitive problem solving skills, creative abilities, and emotional well-being.
In the following sections of this paper, the major constructs and ideas proposed by Piaget's and Vygotsky's theories will be examined in relation to symbolic play for the cognitive and knowledge development of children, along with the implications of each theory for instruction and practice in kindergarten educational settings.
Theories of Cognitive Development: Piaget and Vygotsky
Many of the methods and approaches used in teaching are derived from Piaget's and Vygotsky's research. Both offer teachers sound proposals on how to teach particular learning material in an approach that matches the child's developmental condition.
Piaget (1896-1980) believed that children progress through an invariant sequence of four stages. These stages are not arbitrary but are assumed to reflect qualitative differences in children's cognitive abilities (Driscoll, 2005, p. 149). He proposed that each stage represents a significant qualitative and quantitative change in children's cognition and that children progress through these stages in a culturally invariant sequence. Each stage includes the cognitive structures and abilities (schemes) of the previous stages (constructivism), all of which act as an integrated cognitive structure (accumulated knowledge) at that given stage (Driscoll, 2005).
These knowledge structures (schemes) can be prepared, changed, added to, or developed through the two processes of "assimilation" and "accommodation." Assimilation occurs when a child perceives new objects or events in terms of an existing scheme (Driscoll, 2005); in other words, in terms of information the child already knows. Accommodation occurs when existing schemes are modified to fit in a new experience or piece of information. If the new information does not fit, or conflicts with the existing scheme, then disequilibrium occurs. Equilibration, however, is the master developmental process, which encompasses both assimilation and accommodation and prepares for the child's transition from one stage of development to the next (Driscoll, 2005). Piaget's stages of development are: sensorimotor, preoperational, concrete operations, and formal operations.
The sensorimotor stage covers the period from birth to two years. During this stage, the child experiences the world through the senses and movement. The child develops "object permanence," which refers to "the ability to understand an object exists even if it is not in the field of vision" (Woolfolk, 2004). Toward the end of this period, children begin to mentally represent objects and events, but up to that point they can only act; during the transition to mental representation, they may use simple motor indicators as symbols for other events (Driscoll, 2005). They also begin to understand that their actions can cause other actions, developing "goal-directed behavior"; for example, throwing a toy from the cot to make a parent pick it up, or pressing a doll's button to make a sound (a kind of manipulative play).
The preoperational stage extends from the child's second year to the seventh year. According to Piaget, children have not yet mastered the ability to perform mental operations, or to think through actions (Woolfolk, 2004), but they acquire the "semiotic function" early in this period. This means they are able to mentally represent objects and events, as evidenced in their imitation of activities long after they occurred (Driscoll, 2005). Hence, pretending, or symbolic play, is highly characteristic of this stage, as is language acquisition. One more interesting idea proposed by Piaget is that during this stage children are considered "egocentric," assuming that others share their points of view, which leads them to engage in "self-monologue" without interacting with others (Woolfolk, 2004).
The concrete operations period, from the seventh year to the eleventh, is characterized as the "hands-on" period, in which children overcome the limitation of egocentrism and learn through discovery learning while working (operating) with real, tangible objects (Woolfolk, 2004). Their thinking becomes more internalized, and they are able to create logical-mathematical knowledge resulting in operations (Driscoll, 2005).
The formal operations stage extends from the eleventh year to adulthood, during which propositional logic is developed. On reaching this stage, children (now nearly adults) should be able not only to think hypothetically but also to plan systematic approaches to solving problems (Driscoll, 2005). The acquisition of metacognition (thinking about thinking) is also an important characteristic of formal operations.
Piaget also believed in the active role of the child during development. He proposed that children act on their own environment and that cognition is rooted in action (Driscoll, 2005). He acknowledged the social interaction aspect of children's development, but only as a means of moving the child away from egocentrism toward the "social knowledge" that can be learned only from other people (language, moral rules, values, and so on).
Although Piaget's theory of cognitive development proposed an integrated and beneficial framework for children's learning that can be used by educators and parents to influence and enrich children's learning, the theory has faced serious challenges, especially in recent years as contemporary research has added to this field. For example, Piaget believed that all children, regardless of culture, progress through the four stages, and that once a particular stage is reached, regression to an earlier stage cannot occur. Replications of Piaget's experiments have shown that children in different cultures do not pass through the same types of reasoning suggested by Piaget's stages (Driscoll, 2005). Moreover, there are people in any culture who fail to reason at the formal operations level; we encounter such people in day-to-day life at personal and professional levels. Also, Piaget claimed that there must be a qualitatively discontinuous change in cognition from stage to stage; this has been questioned given the ability to accelerate development, and studies and experiments have shown that children can learn more than Piaget thought they could (Siegler & Svetina, 2002, as cited in Driscoll, 2005). One more challenge is that children do not always exhibit the characteristics of each stage; for example, children are sometimes egocentric beyond the preoperational stage, and preoperational children are not egocentric all the time (Driscoll, 2005).
Despite these challenges, understanding Piaget's proposed stages and developmental sequence suggests certain useful and effective learning and teaching strategies at each level. Examples of these strategies, as implications of Piaget's theory, are discussed in the next section.
Vygotsky (1896-1934) proposed an alternative to Piaget's stages of cognitive development. He stated that children learn mainly through social interactions and that their culture plays a major role in shaping their cognition (Woolfolk, 2004). He believed that "individual development could not be understood without reference to the social and cultural context within which such development is embedded" (Driscoll, 2005, p. 250). His theory suggests a co-constructed process of social interactions through which children move toward individualized thinking. When a child receives help through social interaction, the child then develops an enhanced strategy for solving a similar problem if it is encountered in the future. This co-constructed channel of communication between the child and his or her culture leads to internalization and eventually to independent thinking (Woolfolk, 2004). A good example for understanding social dialogue and internalization is one introduced by Vygotsky himself and cited in Driscoll (2005): when a child stretches out her hand for an object she cannot quite reach, an adult interprets the gesture as pointing and responds accordingly. Until the adult responds, the child is simply grasping for an object out of reach; however, the adult's response turns the situation into a social exchange, and the act of grasping takes on the shared meaning of pointing. When the child internalizes the meaning and uses the gesture as pointing, the interpersonal activity has been transformed into an intrapersonal one (p. 252).
The zone of proximal development is another principle introduced by Vygotsky. He agreed with Piaget that there are knowledge and skills associated with a child's developmental range of understanding, but he believed that, given help and support, children can solve problems that Piaget would consider beyond their stage-bound mental capabilities (Woolfolk, 2004). Scaffolding is the technique proposed by Vygotsky to support discovery learning through social interaction within the zone of proximal development. Scaffolding entails providing the child with a hint or clue to solve the problem. This encourages the child's critical thinking and enhances his or her problem-solving approach.
Further, Vygotsky highlighted the importance of "mediating cultural tools" to support learning and higher-level processing in children. These cultural signs and tools involve technological and symbolic resources, and any other available resource that aids social communication (language, signs, symbols, media, television, computers, books, and so on). Although the tools at hand may include sophisticated toys, children succeed in creating imaginary situations with sticks and other common objects in their environment. This leads to symbolic play as a strategy for teaching children. Driscoll (2005) noted that "in play, Vygotsky argued, children stretch their conceptual abilities and begin to develop a capacity for abstract thought; the signs they establish in their imaginations, in other words, can make up a very complex symbol system, which they communicate through verbal and nonverbal gestures" (p. 259).
The development of language is another major principle proposed by Vygotsky's theory. Although he did not address specific implications for language instruction, he believed that language constitutes the most important sign-using behavior to occur during cognitive development, because it frees children from the constraints of their immediate environment. The language of a given group of people reflects their cultural beliefs and value system, and children initially tie word meanings to their contexts and aspects of life until they learn to abstract a word from a particular concrete context (decontextualization). This process of decontextualization "must occur with any symbol system if it is to serve higher mental functions such as reasoning" (Driscoll, 2005, pp. 259-260). Once again, Vygotsky suggested that symbolic play is important for language learning in young children. He also emphasized the importance of "private speech" as self-directed regulation and communication with the self to guide actions and aid thinking; this is in contrast to Piaget, who viewed private speech as egocentric (or immature) (Woolfolk, 2004).
Undoubtedly, Piaget and Vygotsky introduced important views and suggestions on cognitive development in children. Piaget suggested that children progress through maturational stages and discovery learning with minimal social impact. Vygotsky, on the other hand, stressed the importance of cultural context and language in cognitive development. The following sections survey, in general, some implications of both theories for instruction in different educational settings, and then look more specifically at symbolic play in kindergarten.
Implications of Piaget and Vygotsky for Instruction
Educators and school systems have been applying the cognitive development theories of Piaget and Vygotsky in classroom teaching for some time. The most important implications of both theories are that the learning environment should support discovery learning and that the child should be actively involved in the learning process. Both stressed the role of peer interaction and of symbolic play. Both also agreed that development may be triggered by cognitive conflict; this entails adopting instructional strategies that make children aware of conflicts and inconsistencies in their thinking (Driscoll, 2005). A good example of this is the "Socratic dialogue," which fosters critical thinking through a series of questions and answers that enable the learner to develop an understanding of the learning material.
However, Piaget and Vygotsky differ in how they would guide "discovery learning" in children. Piaget recommended very little teacher interference, while Vygotsky prompted the teacher to guide discovery learning by offering questions to students and having them discover the answers by testing different options (scaffolding).
According to Piaget, teachers dealing with children in the preoperational stage (as in kindergarten) are encouraged to incorporate play as a pedagogic strategy; in play, children engage in active self-discovery activities, employing objects concretely or symbolically. It also helps to understand that, since children in this stage have not yet mastered mental operations, the teacher should not only use actions and short verbal instructions but also demonstrate those instructions. Using visual aids is very important in this stage to create an attractive and discovery-oriented learning environment (Driscoll, 2005). Moreover, with attention to the "egocentrism" characteristic of this stage, teachers are encouraged to be sensitive to the fact that children do not understand that not everyone shares their view or can understand the words they come up with (Woolfolk, 2004). It is important at this stage to provide children with a range of experiences and knowledge to build the foundation (basic schemes) for the concept learning and language those children are expected to master in coming stages. Teaching children in the concrete operations stage should involve "hands-on" learning, in which children have the opportunity to test and manipulate objects, perform experiments, and solve problems in order to develop logical and analogical thinking skills. Teachers should consider using familiar examples to explain complex ideas, linking them to learners' existing knowledge (schemes). Teaching students in the formal operations stage requires teachers to offer open-ended projects that enhance advanced problem-solving and reasoning skills. It is critical in this stage for teachers to help learners understand broad concepts and their applications in real life.
Teachers applying Vygotsky's teaching methods are very active players in their students' education. The most popular technique is scaffolding, in which teachers provide assistance and feedback as a source of knowledge to support the learning of new information. Rather than presenting information in a one-sided way, teachers provide the guidance and assistance required for learners to bridge the gap between their current skill level and the desired skills; when learners are able to complete tasks on their own, the guidance and support are withdrawn (Greenfield, 1984, as cited in Driscoll, 2005). Teachers applying Vygotsky's theory also use "mediation" tools and teach students how to use these tools in their learning (computers, books, and so on). Vygotsky emphasized language and other sign systems (such as symbolic play) as important tools for children's learning. Language is the cultural communication tool that transmits history and cultural values between individuals and from parents and teachers to children.
Most important is incorporating group or peer learning as a significant source of cognitive development. A good application of Vygotsky's principles of social learning and the zone of proximal development is the strategy in which teachers encourage children with varying levels of knowledge to help each other, allowing the child who has mastered a skill to teach and guide a peer who is still trying to master it. This has proved an effective learning strategy not only for children but also for adults. Piaget also believed that peer interactions are essential in helping children move beyond egocentrism and that children are more effective at providing information and feedback to other children about the validity of their logical constructions (Driscoll, 2005); hence, instructional strategies that encourage peer teaching and social negotiation are favored.
Whether applying Piaget or Vygotsky, the teacher's main goal should be to support learners and to provide an assistance plan that fulfills each learner's needs and promotes his or her thinking skills and cognitive development. Teachers should also prepare a learning environment that attracts children's attention and encourages self-discovery. The instructional plan should be designed on the premise that classrooms contain students with different cultural, linguistic, and knowledge backgrounds. In preparing learning activities, teachers should be able to get children to play and learn collaboratively and to enhance their understanding through teacher feedback, peer feedback, and social negotiation.
Symbolic Play: Cognitive and Language Development
As introduced above, the cognitive development theories encourage play, and symbolic play in particular, as a pedagogic strategy for active self-learning and language development. In play, children initiate and take control of their activity (Driscoll, 2005), and this very nature of play, along with other criteria, is what distinguishes play from other behaviors: play is intrinsically motivated with self-imposed goals; play is a spontaneous and pleasurable activity; play is free from imposed rules; the player is an active participant in the play; play focuses on means rather than ends; and play is characterized by the "as if" dimension that encourages children to use objects and gestures as if they were something else (Hymans, 1991; Fein & Rivkin, as cited in Yan, Yuejuan & Hongfen, 2005; Piaget, 1951; Rubin, Watson & Jambor, 1978).
In symbolic play, which starts in the second year of life, children use objects, actions, language, signs, and roles as tools to represent something from their real or imagined world of experiences. It enables children to build and express their understanding of individual or social experience (Driscoll, 2005; Hymans, 1991; Lenningar, n.d.; Lyytinen, Poikkeus & Laakso, 1997; Piaget, 1951; Woolfolk, 2004). Symbolic play indicates that the child has developed two main cognitive operations: reversibility and decentralization. Reversibility refers to the child's awareness that he or she can return from the pretended role to the real world at any time, while decentralization refers to the child's understanding that the child in the play is still himself or herself at the same time as being the person he or she is imitating (Rubin, 1980, as cited in Marjanovic & Lesnic, 2001). The next intellectual skill noticeable in symbolic play is conservation, which refers to the "child's ability to preserve the imaginary identity of the play materials despite the fact they are perceptually and could be functionally inadequate" (Marjanovic Umek & Lesnic Musek, 2001).
The social element of symbolic play is also a very important aspect to be considered for cognitive development in children. According to Vygotsky, children learn to use the tools and skills they practice with social partners; he also emphasized that learning occurs in social interactions and is affected by the cultural context in which it occurs. He further proposed that social interaction could lead to developmental delays or abnormal development, as well as to normal or accelerated development (Driscoll, 2005). Piaget also highlighted the importance of social interaction for children to develop beyond the egocentrism that is characteristic of the preoperational stage. The impact of symbolic play in this dimension is supported by Smilansky's (1968) studies, in which she proposed that social activities influence the development of the child's cognitive and social skills. When children are engaged in a role performance, they have to reach an agreement about the play idea, the course of actions, and the transformation of roles and play materials, and this can be achieved only when individuals overcome their egocentrism and develop the ability to communicate and empathize (cited in Marjanovic Umek & Lesnic Musek, 2001). Smilansky then developed the Scale for Evaluation of Dramatic and Socio-Dramatic Play; the scale tracks the progressive development in the use of objects in symbolic play over five stages. The first stage involves simple manipulation, followed by a stage of imitating adults' activities by using a model of the object as adults do (such as using a hairbrush as a microphone). In the third stage, the object becomes an instrument for enacting certain roles, while in the fourth stage the use of an object or toy goes together with speech and gestures. The final stage focuses on speech without the use of objects or gestures (Smilansky, 1968; Smilansky & Shefatya, 1990, as cited in Marjanovic Umek & Lesnic Musek, 2001).
Smilansky's scale also supported the role of symbolic play in language development, first proposed by Vygotsky, and this language-play relation has been investigated ever since. Research studies have discussed the language component in the context of symbolic play, mainly in its role-playing part. In role playing, children engage in a communicative dialogue with their play partners. It is evident that role playing and object transformations enable the child to use lexicographic meanings and clear speech (Pellegrini & Galda, as cited in Marjanovic Umek & Lesnic Musek, 2001). According to Lyytinen, Poikkeus and Laakso (1997), whose study observed and examined the relationship between language and play among 110 18-month-old children, the early talkers among these children displayed significantly more symbolic play than the late talkers; a significant connection was found between language comprehension and the percentage of symbolic play. This is supported by the study conducted by Marjanovic Umek and Lesnic Musek (2001), in which they compared three age groups of children in preschool settings with different levels of play using Smilansky's Scale for the Evaluation of Dramatic and Socio-dramatic Play; the observations and results showed stronger use of language in the function of defining the roles, scenes, and materials required for the play context.
Other interesting studies have looked into the implications of symbolic play for the education of children with special needs and disorders such as Down syndrome and autism. One example is the study conducted by Stanley and Kinstantareas (2006), who investigated the relationship between symbolic play and other domains such as nonverbal cognitive abilities, receptive language, expressive language, and social development among 131 children diagnosed with Autism Spectrum Disorder (ASD). The results indicate a significant positive relation between symbolic play and the development of these domains in children with ASD. The study also stressed that training in symbolic play helps to improve these children's skills in other domains (Stanley & Kinstantareas, 2006). Another recent study, conducted by Venuti, Falco, Giusti and Bronstein (2008) to investigate the impact of mother–child interaction in play on the cognitive functions of children with Down syndrome, concluded that such interaction leads to enhanced cognitive functioning (Venuti, Falco, Giusti & Bronstein, 2008).
Symbolic play, then, is linked throughout the literature to the development of cognitive problem-solving skills, linguistic transformation, and creative abilities. It also supports emotional and social development. Role playing is seen as a way in which children escape from real-world conflicts into a more comfortable fantasy world. From a different aspect, it enhances the child's self-awareness and self-direction through the positive feedback the child receives from parents and/or playmates. In terms of social development, children enjoy playful interactions with others, starting with the parents through whom they learn their cultural values. Interaction with other children helps them grasp the concepts of boundaries, taking turns, teamwork, competition, social negotiation, sharing, patience, and the ability to deal with the emotions of winning and losing.
Play also assists children's physical and moral development. Physical play enhances children's motor skills as they run, jump, and repeat pleasurable full-body movements. In the moral aspect, during play with parents and other children, children begin to learn that cheating is not accepted, that they should respect others' feelings, and where the boundaries lie between acceptable and unacceptable behaviors.
Therefore, models of children's learning and preschool education in professional settings are mainly derived from different understandings and implications of symbolic play, which are in turn based on the premises of different cognitive development theories.
Play and Learning: Educational Framework in Kindergarten Settings
"Children learn through play" is the golden rule that any educational frameworks in the preschool (Kindergarten) settings should revolve around. According to the theories and studies discussed in this paper, the natural approach for children learning is dependent upon activities and discovery. Through touching, exploring, manipulating testing, imitating, and symbolic playing, children learn about their world. While through social interaction with other children and adults, they develop the language skills and learn about their culture, values, history, themselves and their relationships to others.
The goal of the kindergarten learning program is to help children achieve a degree of self-confidence, acquire social skills, and participate in activities that enable significant development in knowledge and language. The kindergarten learning program should therefore engage children in different types of play, covering the range of physical, inactive, associative, solitary, parallel, and onlooker play, and definitely symbolic play. It is important to be sensitive to the developmental characteristics of this age and to give children space for self-discovery; when instruction is given, it should be visual, clear, and short. The learning program should also consider the stages of play complexity, in line with the Scale for the Evaluation of Dramatic and Socio-dramatic Play, in moving from simple touching and manipulation to object-free role playing.
Teachers should be sensitive to children's differences and to the "egocentrism" characteristic of this age, and should encourage children gradually to engage in more collaborative kinds of play. For example, the teacher can introduce simple play such as ringing bells, scribbling with crayons, identifying shapes, or feeling sand. Children will then start to use objects as symbols. At this stage, the teacher encourages symbolic play, in which children enjoy planning together and setting its rules. Teachers can suggest role shifting, while roles should remain flexible; children at this stage are not ready for the complexity of fixed rules.
The learning environment in kindergarten should be prepared in a visually rich manner and equipped with a range of differently colored objects, toys, and play materials. This enables a discovery-oriented, activity-centered environment and spontaneous play. Spontaneous play is a very effective learning strategy that entails little interference from the teacher, perhaps only a sense of guidance. The teacher's role will mainly be to observe, interact, provide feedback, and assist when needed. It is important for the teacher to be attentive to the fact that children, according to Piaget and Vygotsky, construct their sense of order, logic, and meaning from their surroundings. New information and experiences should be introduced in an organized way that enables children to "accommodate" them within their internal "schemes." Teachers may incorporate the "cognitive conflict" and "scaffolding" principles into children's learning in preschool, but in a simple, guided, and progressive process rather than a confusing one.
An effective preschool learning program should also have a strategy to address children's cultural and linguistic differences and to cope with their developmentally different levels of skills and knowledge. The teacher should also be sensitive to the fact that not all activities will look interesting to all children; learning styles differ, and it is a very important skill for the kindergarten teacher to be able to observe children's behavior and start by engaging each child in activities that match his or her preferences (for example, some children do not like to be part of collaborative play and prefer to self-discover or to construct new materials). Hence, the learning program in preschool settings should incorporate both teacher-directed and child-directed learning activities.
The first years of life bring important changes in children's physiological, emotional, and mental abilities that contribute to their cognitive and linguistic development. It is important to understand these changes and their impact on children's capacity to perceive the world around them and to learn new knowledge and experiences. Cognitive development theories have tried to explain the process by which children move toward gaining adult skills and becoming part of adult communication and social systems.
Piaget and Vygotsky introduced the most influential theories in this field. They stated that children construct knowledge as they discover the world around them and through interaction with others, and they believed that learning is contextual and affected by culture. Piaget proposed four main stages that children go through before achieving adult skills. Each stage has its own characteristics within which children are able to develop certain capacities; learning strategies should be sensitive to those capacities and should be more child-directed than teacher-directed, relying on peer-to-peer social interaction. Vygotsky, alternatively, proposed that learning occurs mainly through social interaction and that children can be helped to learn beyond stage-limited capacities. Both, however, agreed strongly that symbolic play is the most effective learning strategy for enabling children to develop from the basic skills of touching and exploring into conceptual thought and more advanced cognitive problem-solving skills. Vygotsky stressed that symbolic play is also the way to create the cultural dialogue that enhances linguistic skills.
Piaget's and Vygotsky's suggestions and ideas for children's learning have been incorporated into primary and preschool educational models. Their theories' implications for instruction in preschool (kindergarten) have helped educators create more conducive learning environments for children to achieve self-confidence and knowledge growth.
Authentication is the process of verifying identity. It requires the use of passwords, hardware tokens, or a number of other methods.
In cyber security, authentication is the process of verifying someone's or something's identity. Authentication usually takes place by checking a password, a hardware token, or some other piece of information that proves identity. Just as an airline worker checks a passport or an identification card to verify a person's identity when they board a plane, computer systems need to be sure a person really is who they say they are. At an airport, this authentication process ensures only people with a ticket get on the plane; for digital systems, this ensures data is viewed and used by the right people.
Authentication does not just apply to verifying human users. Computer systems also need to check servers, software, APIs, and other computers to be sure they are who they "say" they are.
Authentication is an important part of identity and access management (IAM), which dictates who can view data and what they can do with it. But it applies to many other areas of security as well.
Because a computer cannot "recognize" a person or another computer the way a human can, the process of authentication relies on objective criteria that a computer can measure. One type of objective criteria involves checking for some quality that the person or computer in question is known to have. Another involves the use of a technology called public key cryptography to prove identity.
This type of authentication involves checking a measurable characteristic of identity against a corresponding digital record. The characteristics that an authentication system will check are called "factors." Three common authentication factors are widely used today:
1. Something the person knows
This authentication factor checks a piece of secret knowledge that only the real person should have. A username-and-password combination is the classic example of this factor. Security questions and PIN codes also are examples.
2. Something the person has
This authentication factor checks if the person possesses a physical item they were issued or are known to have. Many people use this authentication factor every day: they live in a house or an apartment that they can unlock with a metal key. Possession of this key, therefore, proves they are authorized to enter the premises, and enables them to do so.
In digital systems, this authentication factor does not rely on an old-fashioned lock and key. But it uses a similar principle by checking for a physical token. There are two types of tokens: soft tokens and hard tokens.
Soft tokens: A soft token involves verifying possession of a device, like a smartphone, by sending a code to that device and asking the user to enter it. The code may be sent as a text message or through an app that generates random codes.
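To make the soft-token idea concrete, here is a minimal sketch of a time-based one-time password (TOTP) generator of the kind many authenticator apps implement (RFC 6238). Python and the placeholder secret are assumptions for illustration only, not the workings of any particular product:

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32: str, step: int = 30, digits: int = 6) -> str:
    """Derive the current one-time code from a shared Base32 secret."""
    key = base64.b32decode(secret_b32, casefold=True)
    # Counter = number of completed time steps since the Unix epoch.
    counter = struct.pack(">Q", int(time.time()) // step)
    mac = hmac.new(key, counter, hashlib.sha1).digest()
    # Dynamic truncation (RFC 4226): choose 4 bytes based on the last nibble.
    offset = mac[-1] & 0x0F
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# Placeholder secret: the server and the user's phone share it, so both sides
# can compute the same short-lived code and compare.
print(totp("JBSWY3DPEHPK3PXP"))
```

Because the code changes every 30 seconds and depends on a secret held only by the issued device, entering it demonstrates possession of that device.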
Hard tokens: A hard token is a small physical item that connects to a computer or mobile device via Bluetooth, a USB port, or some other port. Users must connect this token to their device to verify their identity.
Some security experts consider hard tokens more secure than soft tokens. An attacker could remotely intercept a code on its way to a user's phone and use that code to impersonate the user. But it is much harder to steal a hard token: the attacker needs to physically access the token in order to do so.
3. Something the person is
This authentication factor assesses a person's inherent qualities. In real life, people do this all the time — two friends may recognize each other by their appearance or manner of speaking, for instance. A computer could do the same by scanning a person's face or retina, verifying their thumbprint, measuring the frequencies of their voice, or checking the results of a blood test (although this last one is more rare).
Additional authentication factors
Some members of the security industry have proposed or used additional authentication factors besides the three main ones listed above. Two of these additional factors are location (where a user is) and time (when they are accessing the system).
In addition to using the authentication factors described above, known and trusted entities can also be issued digital certificates. A digital certificate is a small digital file that contains information for verifying identity, just as an ID card contains information that verifies a person's identity in real life.
Digital certificates receive a digital signature to prove their authenticity from the authority that issues them, like how a passport, ID card, or piece of paper currency may have a watermark proving it is not counterfeit.
A digital certificate also contains a string of random values called a public key. The public key corresponds to a private key that is stored separately. The entity that has the certificate can digitally sign data with these keys to prove that it possesses the private key and is therefore authentic.
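As a rough sketch of that sign-and-verify step, the snippet below uses the third-party Python cryptography package (an assumption) with an Ed25519 key pair standing in for the keys bound to a certificate; real certificates commonly carry RSA or ECDSA keys instead:

```python
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

private_key = Ed25519PrivateKey.generate()   # kept secret by the certificate holder
public_key = private_key.public_key()        # the certificate publishes this half

message = b"challenge issued by the verifier"
signature = private_key.sign(message)

try:
    public_key.verify(signature, message)    # raises InvalidSignature on a bad signature
    print("Signature valid: the signer holds the matching private key")
except InvalidSignature:
    print("Signature invalid")
```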
Currently, digital certificates are not often used to verify the identity of individual people. But most people rely on digital certificates every day without realizing it.
Whenever someone loads a website that uses HTTPS, the secure version of HTTP, the TLS protocol uses the website's digital certificate (called an SSL certificate or TLS certificate) to authenticate the website. DKIM, which authenticates email senders, is another example of a technology that uses this method instead of checking authentication factors. DKIM helps email providers sort and block spam emails.
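For a rough look at this in practice, Python's standard ssl module can perform the same kind of handshake a browser does and report which certificate the site presented; the hostname below is a placeholder:

```python
import socket
import ssl

host = "example.com"  # placeholder hostname

# The default context loads the system's trusted certificate authorities and
# checks the hostname, so the handshake fails if the certificate does not
# authenticate the server.
context = ssl.create_default_context()

with socket.create_connection((host, 443), timeout=5) as sock:
    with context.wrap_socket(sock, server_hostname=host) as tls:
        cert = tls.getpeercert()
        print("TLS version:", tls.version())
        print("Subject:", cert.get("subject"))
        print("Valid until:", cert.get("notAfter"))
```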
Multi-factor authentication (MFA) is the process of verifying a person’s identity by checking two or more authentication factors, rather than just one. MFA is a stronger type of authentication than single-factor authentication, because it is much harder to fake two of these factors than it is to fake one of them.
An attacker might be able to steal Bob's username and password (perhaps through a phishing attack). But if Bob has to scan his face as well, the attacker will not be able to fake Bob's identity, since their face does not look like Bob's face. Or, if Bob has to plug a hard token into his computer in addition to entering his password, the attacker would have to steal this token as well. While possible, such a theft is much more difficult, making account takeover less likely.
For true MFA, separate factors have to be checked. Assessing multiple instances of one factor is not MFA. For instance, if an application has a user enter a password and answer security questions to authenticate, this is still single-factor authentication. Password entry and security questions both assess the "something you know" factor.
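A toy sketch makes the distinction concrete; the factor names and checks below are hypothetical, not any real product's API:

```python
KNOWLEDGE = "something_you_know"
POSSESSION = "something_you_have"
INHERENCE = "something_you_are"

def is_true_mfa(checks):
    """checks: list of (factor_category, passed) tuples for one login attempt."""
    passed_categories = {category for category, passed in checks if passed}
    # Genuine MFA needs evidence from at least two *distinct* categories.
    return len(passed_categories) >= 2

# Password + security question: two checks, one category -> still single-factor.
print(is_true_mfa([(KNOWLEDGE, True), (KNOWLEDGE, True)]))    # False
# Password + hardware token: two categories -> multi-factor.
print(is_true_mfa([(KNOWLEDGE, True), (POSSESSION, True)]))   # True
```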
Because of the increased security it offers, MFA is a core principle of Zero Trust security, a security model that requires identity verification for every user and device that accesses a private network.
Two-factor authentication (2FA) is what MFA is called when exactly two factors are used. The most common type of two-factor authentication is "something you know" + "something you have." For instance, in addition to entering their passwords, many people have codes sent to their phones before they can access their bank accounts (an example of the "soft token" version of this factor).
Today, many businesses are employing 2FA in order to reduce the impact of phishing attacks. For example, Google was able to eliminate account takeover attacks by using 2FA with hard tokens for authentication.
While authentication is concerned with verifying identity, authorization is concerned with permissions, or what someone is allowed to do once they gain access to a protected system or resource.
Suppose Bob works in his company's marketing department. Bob enters his password, scans his face, and inserts his hard token to log in to his company's network. At this point, authentication is complete.
After logging in, Bob does not have access to every data file in the company's possession. Authorization determines what Bob can and cannot see. As a marketer, he is authorized to see some data, like a list of potential customers to whom the company will send marketing messages, but not other data, like the company's main codebase or its list of employee salaries.
See our article on authentication vs. authorization to learn more.
Modern corporate employees have to authenticate to many different cloud-based applications. This forces those employees to establish many sets of authentication factors — one set for each application — and creates potential security concerns.
Single sign-on (SSO) is a service that enables users to authenticate only once. Users sign in to the SSO service, which then passes on this authentication to every application by sending a digital authentication message to each application as needed.
SSO also gives IT teams a single point at which to enforce security policies. Not all applications support 2FA, but if the SSO service supports it, then 2FA can be used anyway. IT teams can also enforce requirements for password length and complexity via an SSO service, putting less of a burden on users to remember multiple passwords.
SSO authentication messages use a protocol called Security Assertion Markup Language (SAML). SAML is a standardized method for telling external applications that a user is who they say they are.
A message authenticating a user is called a SAML "assertion." Once an application receives a SAML assertion for a user, it does not need to authenticate the user on its own, because it knows the SSO service has already done this.
OpenID Connect (OIDC) is another authentication protocol that is growing in use by SSO providers. OIDC functions similarly to SAML, but it formats data differently, among other distinctions; while SAML formats data via XML, OIDC uses JSON.
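To illustrate the JSON side of that contrast, the sketch below builds and then reads the payload of a JWT-style ID token using only Python's standard library; the claims are made-up placeholders, and a real token would be signed by the identity provider and verified before use:

```python
import base64
import json

def b64url(data: bytes) -> str:
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

# Hypothetical claims for illustration only.
claims = {"iss": "https://sso.example.com", "sub": "user-1234",
          "aud": "my-app", "exp": 1700000000}

header = b64url(json.dumps({"alg": "RS256", "typ": "JWT"}).encode())
payload = b64url(json.dumps(claims).encode())
token_without_signature = f"{header}.{payload}.<signature>"
print(token_without_signature)

# The receiving application reverses the encoding to read the JSON claims
# (after verifying the signature, which this sketch skips).
padded = payload + "=" * (-len(payload) % 4)
decoded = json.loads(base64.urlsafe_b64decode(padded))
print(decoded["sub"], decoded["aud"])
```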
Cloudflare offers a Zero Trust platform that works with all major SSO providers. Once users authenticate to their SSO service, Cloudflare enforces consistent access controls across cloud and on-premise applications. To learn more about this platform, which includes browser isolation, a secure web gateway, DNS filtering, and other Zero Trust features, see the product page.
Common Core Standards: ELA - Literacy
CCSS.ELA-Literacy.RH.11-12.9
RH.11-12.9. Integrate information from diverse sources, both primary and secondary, into a coherent understanding of an idea or event, noting discrepancies among sources.
How to Train Your Ideas
Listen up, students. Jonas, Scout, Jane, Frodo, Harry, Katniss...they all have one thing in common: eventually they had to break away from the protection and guardianship of a caring older person and move forward on their own with their protector’s tutelage in tow. It’s time to cut the little Padawan rat tail and take up your own light saber, time to let the wand choose the wizard, time to get off this archetypal train and get your own wheels. Ok, but seriously kids, by now you should be able to cut the cord with your teacher and make your own magic. Previously you’ve been given information with a task to complete, but this time you’re on your own in terms of the tools you have to complete the assignment… stay tuned for more.
Kung Fu Student
In this standard, students need to take a single idea or past event, form a well thought out claim or opinion that represents their understanding and thoughts on the situation, and then slap some research on it to sweeten the deal. Here are some tips to get them started:
- Stick to what you know: When you are dealing with a single idea or event in history or social studies, do yourself a favor and go with something you already have a little bit of background knowledge for. If you haven’t studied the feminist movement a whole heap then avoid the topic.
- Find out what you know: Time to hit the research. This is where the fun comes in; the standard only says “integrate information from diverse sources,” which gives you a ticket to ride the internet train… as well as other legitimate sources. Integrating the information is key. Sifting through the desert of text can be tiresome; however, it is critical to zero in on the most applicable information. How will you know when you’re done? Well, you should keep researching until you can demonstrate a coherent (meaning full, clear, and well-rounded) understanding of the topic.
- Let’s talk about… you know: When there are discrepancies in the information you gather, not only is it necessary to point these out, it’s an indication that more research is needed in order to understand the cause and nature of the discrepancy. Even though you’ll have sources that disagree, you should still be able to reach a clear understanding of the topic at hand and explain how these disagreements play into things or may be resolved.
Fall of the Guardians
The point of it all is to finish the journey alone, to show how awesome you are out from under the shadow of your mentor. With patient study and careful logic you can do a job that would make daddy, your teacher, Yoda, Gandalf, or even Dumbledore proud.
This drill will serve as an independent study with criteria consistent with the standard. Read the following project guidelines and complete all tasks and/or questions.
- Determine a topic of interest related to your current mode of study. This topic should focus around a broad idea in history or notable event.
- Integrate at least four sources into a comprehensive discussion of the topic or event in question. At least two of these documents must be primary sources. All sources should be diverse in origin and media type.
- Annotate each text for the consistent presence of the topic discussed and for evidence that carries across documents. Note discrepancies among the sources; that is, briefly explain in the annotation whether the sources are in agreement, are opposing, or contain erroneous or conflicting information.
- In the case of narratives or fictional works, make note of any particular characters, events, or ideas in each. In the case of visual or quantitative data, provide a detailed, written explanation of the relation the source has to the topic.
- Integrate the information from all texts into a well-developed evaluation of the central idea or event in question. This discussion should be substantial, incorporate line references from all sources, and contribute to a sense of on-going discussion on the topic. This assignment warrants a 5-8 page response.
- ACT Science 1.2 Research Summary Passage
- ACT Science 1.6 Research Summary Passage
- ACT Science 2.4 Research Summary Passage
- The Vietnam War: The Vietnam War Activity: Document Analysis: "Hanoi Jane"
- Teaching the War of 1812: Document-Based Exercise: New Orleans from a British Point of View
- Teaching the West: Quotation Analysis & Writing Assignment: American Beliefs about Land
- Teaching World War I: Document Analysis: The Sinking of the Lusitania
- Teaching World War I: Document Analysis: The Sedition Act of 1918
- Teaching Causes of the Cold War: Document Analysis: Debating Winston Churchill's "Iron Curtain" Speech
- Cold War: Cuban Missile Crisis to Detente: The Cold War: Cuban Missile Crisis to Détente Activity: Speech Analysis: President Kennedy Announces the Blockade
- Teaching Cold War: McCarthyism & Red Scare: McCarthyism & the Arts: Elia Kazan vs. Arthur Miller
- Teaching Colonial New England: Writing Activity: Answering Jonathan Edwards' "The Justice of God in the Damnation of Sinners"
- Teaching Immigration: Era of Open Borders: Document Analysis & Debate: Chinese Exclusion
- Teaching Immigration: Era of Restriction: Research Project: Personal Immigration Histories
- Teaching Jamestown & Early Colonial Virginia: Document Activity: John Smith's Pocahontas
- Teaching Jefferson's Revolution of 1800: Document-Based Activity: Marbury v. Madison and Judicial Review
- Teaching Jim Crow in America: Image Analysis: Representations of African Americans
- Teaching FDR's New Deal: Document Analysis: Social Security
- Teaching FDR's New Deal: Image Analysis: WPA Post Office Murals
- Teaching Puritan Settlement in New England: Document-Based Activity: The Day of Doom
- Teaching Reconstruction: Document Analysis: Black Codes
- Teaching Reconstruction: Document Analysis: Freedmen's Transition Plan
- Teaching the Right to Bear Arms: Document Analysis: The Right to Bear Arms according to the States
- Teaching The Federalists: Hamilton, Washington & Adams: Legislative Activity: Revising the Sedition Act
- Teaching Manifest Destiny & Mexican-American War: Writing Activity: Soldiers' Letters from the Front
- Teaching the French & Indian War: Mapping Activity: Competition for the Ohio Valley
- Teaching Abolitionism: Writing/Illustrating Assignment: The Caning of Charles Sumner
- Activities & Projects
- Graphics & Images
- Handouts & References
- Lab Resources
- Learning Games
- Lesson Plans
- Primary Sources
- Printables & Templates
- Professional Documents
- Study Guides
- Graphic Organizers
- Writing Prompts
- Constructed Response Items
- AP Test Preps
- Lesson Planet Articles
- Interactive Whiteboards
- All Resource Types
- Show All
See similar resources:
Chocolate Preferences Voting and Graphing Techniques
Students practice sampling and graphing techniques. In this data collection and interpretation activity, students write and conduct surveys about chocolate preferences and then collect their data. Students graph the data in order to...
9th - 10th Math
FRED in the Classroom: Debt and Deficit
Here is a hands-on activity where your class members will discover different ways to measure the government's financial situation and work to add data and redraw graphs in order to calculate the ratio of gross federal debt held by the...
10th - 12th Social Studies & History CCSS: Adaptable
More Fiscal Cliff Analysis
Continuing from a previous video explaining the various budgetary proposals of 2013 in the United States, this video illustrates a more in-depth analysis of the fiscal cliff. It reviews complex concepts such as the pros and cons of...
15 mins 9th - 12th Social Studies & History CCSS: Adaptable
Government's Financial Condition
What goes into accounting the United States budget deficits and its net operating costs? How do shifts in the economy affect government spending? How is increasing interest projected to affect our federal debt in the future? This video...
11 mins 9th - 12th Social Studies & History CCSS: Adaptable
IELTS Task 1: Example Essay Step by Step
Students write an essay based on a graph, table, or diagram. In this writing lesson, students examine the process of writing a short essay based on the information presented in a visual organizer. They examine a chart about water usage in...
10th - 12th English Language Arts
A lesson plan is a teacher’s regular guideline for what the class needs to study, how and when to teach it, and how to assess learning. It enables the teacher to be more productive by providing a comprehensive approach for observing each stage of the lesson. It means that the time allocated to a particular lesson is spent entirely on teaching and on productive conversations about the new concepts. Below is a design for either a classroom or an online lesson plan for the Take-Away Model.
- Overview: The application of the Take-Away Model is the equivalent of the subtraction process. The aim of using the take-away process is to utilize both manipulatives and visuals. The outcome of using this method is that students gain experience with and come to understand subtraction. Once participants have used blocks or other visuals, they can picture the illustrations in their heads and work out how many will remain.
- Objectives: The focus of this type of learning is for students to use the take-away model to solve subtraction calculations. Students should be able to generate subtraction sentences that can be solved using the take-away method, and understand the results of subtracting whole numbers.
- Time: As for the time allocated to this exercise, the instructor will devote twenty minutes to describing and providing examples. The students would then have thirty minutes to discuss in selected groups of four and use blocks or other manipulatives to assist them in their calculations, which they will write on the paper and the smartboard.
- Materials: Blocks, pencils, counters, and an adding machine will be used to illustrate the tools which can and will be used in this session. For this tutorial, blocks will become the primary object unless a student wants to use an alternative to comprehend how to solve the take-away model fully. Each student will also need a workbook with the written equations, a pencil, and a rough piece of paper to decipher them.
- Activity: The instructor will let the students know that the exercise they are about to do uses the take-away model. They will then be required to have a paper with headings that say "start with," "take-away," and "left with" in the top columns. After that, they will be presented with five mathematical questions, which they will be required to solve on a different sheet of paper. The equations may be as follows: 11-6=, 9-5=, 8-4=, 7-3=, 6-2=. This will be the introduction to the subtraction algorithm, which can be used as often as possible. With this practice, students are introduced to subtraction as a taking-away method. First, the learners will look at the examples and make sure they have with them the papers with the headings, the rough-work paper, the pencil, and their counting blocks. They will assemble in groups of four and begin the activity. The first task will be written on the board as 11-6=; the class will commence by writing 11 under the "start with" heading, then writing 6 under the "take-away" heading. They will take away 6 blocks from their 11 blocks, count how many they have left, which is 5, and record it under the "left with" heading. The learners will then move to the next item on the list. Students will explore the process of subtraction through critical thinking, which will enable them to communicate and work with numbers. In this way, learners start to discover a descriptive concept of the manipulative and, by extension, the number. The framework develops an understanding of how to compute and record subtraction situations.
In conclusion, lesson planning assists teachers in distinguishing between the many different skill levels and needs of the learners in their classrooms and in preparing for them. The plan also serves as documentation that shows how well organized, committed, and dedicated a teacher is to the class. With properly planned lessons, students are able to understand whatever is being taught in class as well as to follow the given instructions.
Intro: Manual Derivatives and Integrals
This procedure will guide you through some of the core techniques of calculus: derivatives and integrals. These are essential for almost any kind of math or science career. You'll need a good grounding in algebra to understand the sample equations, and writing materials to check your work. Going through these steps could take anywhere from minutes to hours depending on how new you are to calculus, but even if you're an experienced mathematician, you should pace yourself and try to contemplate the fundamental theory behind each step. Although people usually first study it in high school at the earliest, calculus was discovered over 300 years ago, so any practiced math student can learn it.
Step 1: Find a Derivative
1. Find or make a function of one variable. The function will give an output that we can use as the second variable in a two-dimensional graph, which in most cases means f(x)=y.
2. Use Newton's difference quotient. For a small segment of a function graph with width h, the slope is found with m=[f(x+h)-f(x)]/h, taking the limit of this expression as h approaches 0.
3. Work out the math. In the example polynomial f(x) = 2x^3 + 4x^2 + 3x + 5, we replace all instances of (x) with (x+h) to get m=[2(x+h)^3 + 4(x+h)^2 + 3(x+h) + 5 - 2x^3 - 4x^2 - 3x - 5]/h. This works out to m=6x^2 + 6xh + 8x + 2h^2 + 4h + 3, and the h terms vanish as h approaches 0. Thus, the slope of the expression 2x^3 + 4x^2 + 3x + 5 is given at any x value by 6x^2 + 8x + 3. Most people studying derivatives for the first time will find it helpful to fully work out the algebra. If you still struggle, step-by-step guidance can be found with a tutor or www.wolframalpha.com.
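If you would like a quick sanity check of that algebra, a short numerical sketch (Python is assumed here purely for illustration) compares the difference quotient with a small but nonzero h against the derived formula:

```python
def f(x):
    return 2 * x**3 + 4 * x**2 + 3 * x + 5

def difference_quotient(x, h=1e-6):
    # Newton's difference quotient with a small, nonzero h.
    return (f(x + h) - f(x)) / h

def derived(x):
    # The formula worked out by hand above.
    return 6 * x**2 + 8 * x + 3

for x in (-2.0, 0.0, 1.5, 3.0):
    print(x, round(difference_quotient(x), 3), derived(x))
```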
4. Generalize the results to find derivation rules. As you may notice from the previous example, each term's exponent decreases by one in the derived polynomial; derivation rules are displayed above for a few general cases that have been studied numerous times.
Step 2: Riemann Sums
1. Find a function whose (x,y) coordinates you can calculate over an interval. Common intervals usually start at or center around x=0.
2. Divide the area under the curve into reasonably small rectangular segments of uniform width, for example 0.1 x-units apart.
3. Approximate the height of each segment with the y value of the function halfway between its high and low x values. In some circumstances, you may specifically want an approximation that's strictly too high or too low, prompting you to use a different y value from the curve segment.
4. Find the area of each segment by multiplying height by width. You will need to record these individual areas for the next step.
5. Add the areas to find total area under the curve for the interval. This is a large amount of mathematical grunt work, but it is reliable and very nearly approximates the total area, especially with more narrow segments. This is how most calculators perform integrals.
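As a rough illustration of that bookkeeping, here is a minimal midpoint Riemann sum (Python is assumed just for illustration), applied to the example polynomial from Step 1 over an arbitrary interval:

```python
def f(x):
    return 2 * x**3 + 4 * x**2 + 3 * x + 5   # same example polynomial as before

def midpoint_riemann_sum(func, a, b, width=0.1):
    """Approximate the area under func on [a, b] with midpoint rectangles."""
    total = 0.0
    n = int(round((b - a) / width))
    for i in range(n):
        midpoint = a + (i + 0.5) * width     # x value halfway across the segment
        total += func(midpoint) * width      # height times width of one rectangle
    return total

# The exact area on [0, 2] is 0.5x^4 + (4/3)x^3 + 1.5x^2 + 5x evaluated at x=2,
# about 34.667; the sum should land very close to that.
print(midpoint_riemann_sum(f, 0.0, 2.0))
```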
Step 3: Definite Integration
1. Find a simple polynomial, such as ones in the previous steps.
2. Find a derivation rule that could produce the function. As there are numerous types of functions and infinite examples for each type, as well as combinations of types, along with some functions that are impossible to integrate, you can't exhaustively explore the options for integration; in this case, the answer should be straightforward.
3. Reverse the process of a derivation. For a polynomial term, increase the exponent by one and divide the coefficient by the new exponent.
4. Account for a constant of integration. When you find the derivative of a polynomial that gives y as a function of x, the final term, the constant (x^0) term, is lost. A given polynomial derivative can therefore come from an infinite number of parent polynomials. However, all of their coordinates are shifted by the same constant, so the next step still works.
5. Find the difference between the ends of the integrated function. Because the original function, this new function's derivative, gives the series of slopes between these ends, this difference equals the area under the original function's curve. This relationship is largely unintuitive, so try working out as many examples as needed to observe it.
Step 4: Physical Applications: Forward Acceleration and Deceleration
1. Imagine a car that starts at rest, accelerates for 5 seconds at 2ft/s^2, then runs steadily at 10 ft/s for 5 seconds, then slows down at a rate of 1ft/s^2 until it stops.
2. Try to find the distance from the starting point mentally. This should be mildly tricky, since the rate of motion changes over time. The only easy parts to solve mentally are the distance traveled at a constant velocity and the time it takes to stop (10-1t=0, so t=10). You may check your answer below, then try the mathematical method.
3. Break the story problem into a piecewise function. The velocity over time can be represented as v=2t for t from 0 to 5, v=10 for t from 5 to 10, and v=20-t for t from 10 to 20.
4. Integrate the pieces separately with respect to the variable t.
- v=2t: The exponent must be one degree higher and the coefficient must be divided by the new exponent. Since the exponent is implied to be one, we get x=t^2 over the t interval from 0 to 5. This gives x=(5)^2-(0)^2=25.
- v=10: Velocity is constant, but we can use this step to check that our reasoning is sound. The exponent rule gives x=10t, given t from 5 to 10, which is x=10(10)-10(5)=50.
- v=20-t: Integrate each term separately; x=20t - 0.5t^2 from 10 to 20, which is x=20(20) - 0.5(20)^2 - [20(10) - 0.5(10)^2]=50.
5. Compare the sum of the integrals (25 + 50 + 50 = 125 feet) to the graph. Note that the vertical scale is 5 feet per block.
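For anyone who wants to double-check that arithmetic, the sketch below (Python, purely illustrative) sums velocity times small time steps across the whole trip and should land very near 125 feet:

```python
def velocity(t):
    # Piecewise velocity from the story problem, in ft/s.
    if t < 5:
        return 2 * t
    elif t < 10:
        return 10
    else:
        return 20 - t

def distance(t_end, dt=0.001):
    # Riemann sum of the velocity curve: accumulate velocity * dt.
    steps = int(t_end / dt)
    return sum(velocity((i + 0.5) * dt) * dt for i in range(steps))

print(distance(20))   # ~125.0 feet, matching 25 + 50 + 50 from the integrals
```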
Step 5: Physical Application: Malfunctioning Rocket
1. Try integrating a higher order polynomial. If you were very familiar with story problems like the accelerating car, you might have known that you can graph ft/s vertically and seconds horizontally to make a graph that's very easy to find the area under. However, a third- or fourth-degree polynomial will not be so visually simple even when you use this kind of method.
2. Imagine a rocket that launches from the ground and has a brief fuel jam, so that its velocity in ft/s at a given time is given by v=6t^2 - 16t + 8 for the first minute of its flight, after which it hits a bird and expels all fuel horizontally, while the only force acting on it is gravity; find the time at which it lands.
3. Assess the problem. Since neither velocity nor acceleration is constant, the only way to solve the problem is to integrate for the position function, find the change in position for the first minute, and find how long an object in free fall from that height takes to land.
4. Execute your strategy. Integrating each term of the polynomial separately, we get h=2t^3 - 8t^2 + 8t + C for the height in feet. We then plug in the endpoints, t=60 and t=0, to get [2(60)^3 - 8(60)^2 + 8(60)] - [0] = 432,000 - 28,800 + 480 = 403,680. After this point, the rocket falls with an acceleration of 32 ft/s^2 toward the Earth. Since acceleration is constant, we can easily integrate a=32 into v=32t and then h=16t^2, which we equate to the maximum height to find how long it spends falling. 16t^2=403,680 --> t=158.8. Adding 60 seconds to this, the rocket hits the ground 218.8 seconds after it launches.
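The same numerical cross-check works here; this sketch (Python, illustrative only) approximates the height integral and then the free-fall time, and should agree with the 403,680 feet and roughly 218.8 seconds found above:

```python
import math

def rocket_velocity(t):
    return 6 * t**2 - 16 * t + 8          # ft/s, valid for the first 60 seconds

# Height after 60 s: midpoint Riemann sum of the velocity curve.
dt = 0.001
height = sum(rocket_velocity((i + 0.5) * dt) * dt for i in range(int(60 / dt)))

# Free fall from that height: 16 t^2 = height  ->  t = sqrt(height / 16).
fall_time = math.sqrt(height / 16)
print(round(height), "feet;", round(60 + fall_time, 1), "seconds total")
```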
Step 6: Conclusion
You have now learned some of the basic tools for higher mathematics and science. From here, your improved understanding of calculus should allow you to perform well in more advanced studies such as field physics, thermodynamics, and quantum mechanics. If you have any problems, remember there's nothing wrong with seeking out a tutor or using Wolfram Alpha to walk you through the steps of a complicated problem.
A comprehensive 68-slide PowerPoint that introduces all major topics in a typical middle school / early high school unit on chemical reactions and rates of reactions.
NOTE: PPT Completely Updated!
This PPT has been completely reviewed and updated from the previous version. Each slide has been examined, edited, and improved in appearance, stability, and detail. New slides have also been added to enhance the content. Finally, a summary notes document has been added to this product to help students take notes as they view the presentation. I hope you enjoy the new changes!
Part 1 – Physical and Chemical Change
- Review: what is matter?
- 2 types of properties of matter (physical and chemical)
- Review game: Physical or Chemical Property?
- 2 types of changes of matter (physical and chemical)
- Changes of state: physical or chemical change?
- 4 signs of a chemical change (color, energy, gas, precipitate)
- Review game: Physical or Chemical Change?
Part 2 – Chemical Equations
- What is a chemical equation?
- What a chemical equation shows about a chemical reaction
- Parts of a chemical equation (reactants, products, formulas, coefficients)
- Why chemical equations must be balanced
- Antoine Lavoisier and the Law of Conservation of Mass
- How to balance chemical equations (3 helpful techniques)
- Quiz - balancing chemical equations (5 sample equations)
Part 3 – Types of Chemical Reactions
- Introduction: 5 types of chemical reactions
- Synthesis (definition, general format, example reaction, example equation)
- Decomposition (definition, general format, example reaction, example equation)
- Combustion (definition, general format, example reaction, example equation)
- Single Displacement (definition, general format, example reaction, example equation)
- Double Displacement (definition, general format, example reaction, example equation)
- Review: 5 types of chemical reactions (format, sample equations)
- Quiz: classifying types of chemical reactions (5 sample equations)
Part 4 – Rates of Chemical Reactions
- Introduction: what are reaction kinetics?
- What is the rate of reaction?
- What is collision theory?
- Factors affecting the rate of reaction
- Temperature (how it affects the rate, mechanism, diagram)
- Concentration (how it affects the rate, mechanism, diagram)
- Pressure (how it affects the rate, mechanism, diagram)
- Surface area (how it affects the rate, mechanism, diagram)
- Catalysts and inhibitors (how they affect the rate, mechanism, diagrams)
- Summary table: factors affecting the rate of reaction
NEW! Includes Summary Notes Document!
- This PPT also includes a 2-page summary notes document
- Summary notes are fill-in-the-blank questions directly tied to each slide, in order
- Notes document comes in 2 formats: a static PDF document and a fully-editable WORD document
PPT Downloadable in 2 Formats:
- This product comes in 2 formats: a static PDF document and a fully-editable PPT document
- Download the PDF preview to see a sample of the PPT slides from each section.
Relevant NGSS Core Idea(s) Addressed by This Product:
NGSS - MS-PS1.A
Physical Science - Matter and its Interactions - Structure and Properties of Matter
NGSS - MS-PS1.B
Physical Science - Matter and its Interactions – Chemical Reactions
You Might Also Like the Following Unit Resources:
UNIT BUNDLE - Chemical Reactions
PPT - Chemical Reactions
PPT - Jeopardy Game: Chemical Reactions Review
Unit Overview & Key Words - Chemical Reactions Unit
Activity - Physical & Chemical Properties & Changes Sorting Cards (2 Set Bundle)
Worksheet - Physical and Chemical Properties and Changes
Worksheet - What are Chemical Equations?
Worksheet - Balancing Chemical Equations (2 Worksheet Bundle)
Worksheet - What are the Types of Chemical Reactions?
WS - Naming Compounds, Balancing Equations and Classifying Reactions (A Review)
Worksheet - What is Collision Theory?
Lab - Mystery Powders - Physical and Chemical Properties of Matter
Lab - Ziploc Bag Physical and Chemical Changes
Lab - Signs of a Chemical Reaction
Lab - Great Steel Wools of Fire (A Conservation of Mass Investigation)
Lab - Double Displacement Reactions
Lab - Alka-Seltzer Rates of Reaction
Lab - The Disappearing X Rates of Reaction Activity
Lab - The Disappearing X Rates of Reaction Activity (Virtual Lab)
Quiz - Balancing Chemical Equations (4 Quiz Bundle)
Quiz - Types of Chemical Reactions
Quiz - Collision Theory
Unit Review - Chemical Reactions
Unit Test - Chemical Reactions
Other Chemistry PPTs You Might Like:
PPT - Classification of Matter
PPT - Solids, Liquids and Gases
PPT - Atoms and the Periodic Table
PPT - Chemical Bonds
PPT - Chemical Reactions
PPT - Acids, Bases and Salts
Connect with More Science With Mr. Enns Resources:
Be sure to follow my TpT store by clicking on the Follow Me next to my seller picture to receive notifications of new products and upcoming sales.
The alphabet provides plenty of practice when it comes to slope. Printed letters can be created using lines – for example, an ‘A’ is just a positive, a negative, and a zero slope line, while an ‘N’ is composed of two undefined slope lines and a negative slope line. A ‘C’, on the other hand, would simply be made of a “non-linear” line. This is a great activity for students who need extra practice with the basics: positive, negative, undefined, and zero slopes.
Below are the links for two handouts:
This handout is for students to simply identify the lines of letters as positive, negative, zero, undefined or non-linear.
This handout is where letters are described in slope and students decipher the message. This one could be done better, a la funsheets style, but I haven’t figured out how to attach documents to this blog that can actually open.
Just have your technology specialist install Python on the students' computers. Even if your students are not familiar with code (which is typically the case), they really only need to “copy” a few lines of code, changing only the parameters that indicate slope and intervals. Even so, students will love the results and this introduction to computer languages.
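As a rough stand-in for the kind of snippet students might copy and tweak, here is a minimal sketch; matplotlib is an assumption on my part, and the actual project in the slides may use a different plotting library:

```python
import matplotlib.pyplot as plt

slope = 2          # students change only these two parameters
intercept = -1

xs = [x / 10 for x in range(-50, 51)]        # x values from -5 to 5
ys = [slope * x + intercept for x in xs]

plt.plot(xs, ys)
plt.title(f"y = {slope}x + {intercept}")
plt.axhline(0, color="black")
plt.axvline(0, color="black")
plt.grid(True)
plt.show()
```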
Here’s a slide show with example projects and directions. The project itself really won’t take two solid weeks; this teacher is integrating slope “lessons” with the actual project. In my experience, students need way more than two weeks on this subject anyways!
Here is a Webquest project set where students can choose from 3 different topics:
Buying a cell phone
NBA or WNBA Statistics
Students will gather data, write an equation and graph using EXCEL. Handouts and rubrics are included.
If there’s one thing I’ve figured out about teaching the basics of slope, it’s that there’s not one single method that will reach every single student. (This is true of any topic). However, it is still possible to reach every student since different methods work for different students. Here’s a few slope memory tricks that I’ve used when remediating students, if they just don’t get it after being shown the traditional ways:
Mr. Slope Guy
This was actually the favorite method of my below-level high school students. On every assessment relating to linear equations, the first thing most students did was sketch this on the top page as a guide. This isn’t my creation, but I can’t remember where and when I came across this to give the proper credit.
Since we write from left to right, people inherently will write the word “slope” from left to right, and this gives students a visual. Without moving the paper around, write the word “slope” on the line: if you find yourself writing upwards, it’s positive. Writing down is negative. Straight across is zero. And since there’s not really a place to write the word “slope” on a vertical line (without moving the paper), that’s undefined.
This only works for distinguishing positive from negative slopes, but simply tracing the line with a fingertip from left to right lets students physically feel the direction of the line as to whether it’s going up or down. I prefer that students write the word “slope” as mentioned above since writing is inherently left to right and tracing is not, but some students prefer this method.
One test asked “what is the slope of a horizontal line,” and a student told me that she couldn’t decide whether to write zero or undefined until she remembered that I had told them horiZontal has “z” for zero. Whatever works…
For the classroom
Given a point and slope, graph the line. Simple, clean graphing program good for student work on the board.
Recognizing Slope and Y-Intercept
Identify slope and y-intercept from an equation. All in slope-intercept form. Good for a mini checkpoint quiz.
Move the points around to see how the slope and graph of a line are affected. Mr. Kibbe’s Slope Demo
Slope and y-intercept Investigations
See how slope and y-intercept affect the graph of a line. Mr. Kibbe’s Slope and y-intercept demo.
Writing slope-intercept Equations
Kind of like Line Gems (for students), in that students write the equations that will go through the most points, but I like the cleaner look and feel of this one: Mr. Kibbe’s Slope Game.
Writing slope-intercept equations
Write the equation of the line that will go through the most gems.
Writing slope-intercept equations
Algebra vs. the Cockroach
Write the equation of the line that will exterminate the roaches.
I recently made a slope worksheet where I drew figures on a coordinate plane, and students had to state the slope of each of the sides of the figures. Then it occurred to me that this would make a really great slope project.
Students could create their own line design on a coordinate plane, and label the slopes of the lines they used. It doesn’t sound neat when stated like that, but here’s an example of what a final product might look like. (I only wrote the slope for six segments, but you get the idea).
Stained glass and linear equations (or inequalities) are fairly common, but I think keeping it just as slope might be better. Students don’t have to have lines running all across the coordinate plane since they only have state the slope for smaller line segments.
To ensure students don’t just draw a few squares, students should be given a list of criteria. For example, direct students that their design must include 6 negative sloped lines, 6 positive, 4 zero slope and 4 undefined slope lines. Or 5 pairs of parallel and 5 pairs of perpendicular lines, or some similar variation. That way students have to use different sloped lines in their designs, and it also gives them a finite number of segments they have to write the slope for. This way, they’re not penalized if they produce more complex designs.
There are a few standard linear equations which are often used to determine a person’s height if, for example, a human femur bone was found. Here’s an idea for a linear equation class activity or project about the relationship between the length of bones and a person’s height:
Information from bones (MALE)
h = 2.24F + 69.09
h = 2.39T + 81.68
h = 2.97H + 73.57
h = 3.65R + 80.41
Information from bones (FEMALE)
h = 2.23F + 61.41
h = 2.53T + 72.57
h = 3.14H + 64.98
h = 3.88R + 73.51
All measurements are in cm.
h = height
T = tibia (ankle to knee)
F = femur (thigh)
H = humerus (shoulder to elbow)
R = radius (forearm, elbow to base of thumb)
Instead of simply giving students the linear equations with numbers to plug in, I think a better idea is to have students measure the length of their own bones (and height), in centimeters. Collect the data sets (separating girls from boys) and see if they can come up with the linear equations themselves. I suppose this would be just another linear regression activity.
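If the class data ends up in a short script rather than a calculator, the regression step itself is tiny. Here is a minimal sketch; numpy is an assumption, and the femur and height numbers are made-up placeholders rather than real measurements:

```python
import numpy as np

# Placeholder data: femur lengths and heights for a few hypothetical people, in cm.
femur_cm  = np.array([40.0, 42.5, 44.0, 45.5, 47.0, 48.5])
height_cm = np.array([158.0, 164.0, 167.0, 171.0, 174.0, 178.0])

# Degree-1 (linear) fit: h = slope * F + intercept, like the standard equations above.
slope, intercept = np.polyfit(femur_cm, height_cm, 1)
print(f"h = {slope:.2f}F + {intercept:.2f}")

# Predict a height from a newly measured femur length.
print(round(slope * 46.0 + intercept, 1))
```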
To turn it into a project, students can collect data from maybe 6-10 males and 6-10 females (siblings, parents, classmates, etc) and do this activity at home. This would probably be more interesting since students will all be working with different data sets. Students could then present their information on a poster, divided into four sections (one for each type of bone and linear equation).
Are we, as human beings, formed by "nature" or "nurture" or both? I ask this question at the beginning of our Frankenstein unit in order to get my students thinking about our roles in this world and our accountability (or lack thereof) for our actions and the resulting consequences. With high school seniors, these questions are even more imperative since these young folks are on the brink of graduating and exiting the routine, expected, predictable lives of adolescence to be thrust into the "real" world of the unknown and the uncertain.
The common response my students have to our unit on Frankenstein comes directly from cultural references: "Frankenstein is the green monster right?" "We're gonna be reading about the guy who is ginormous with bolts in his neck!" "Oh yeah...I've had Frankenberry cereal!" My students are soon informed that "Frankenstein" is actually the name of the creator/scientist, and his creation remains nameless, being referred to only as "creature" and "demon," although there are quite a few parallels between the creator and the created. It is this very question of identity that will drive this unit in attempting to show students the harmful and dangerous impact that assumptions and prejudice can have on students who are seen as the "other," the "unknown," or the "different." While students initially view the "creature" as some grotesque, raging, violent, deranged killer, their assumption is shattered once they begin to discover the circumstances of the creature's "birth" and its first exposure to the world. This "newborn" experience is a far cry from what my students are accustomed to when thinking about Victor Frankenstein's "creature."
To set the stage for Shelley's novel, I introduce my students to the life into which Mary Shelley was born, her personal struggles, her difficulties with her relationships, and her longstanding shadow of pain and suffering. While the novel may be written at a time and in a style foreign to modern day teenagers, the plight of the creature/creator is all too familiar to them. While we don't have this fictional eight-foot tall, grotesque, incomprehensible creature roaming in our society today, we do have individuals who step onto our high school campus (and society) feeling this disowned, this displaced, and this ostracized because they are somehow "different."
Ostracism gets to the heart of the matter when it comes to why this unit is valid for my students. At Independence High School, our 2011 student body count was 3,409 students. Our diverse student population consists of: 37.6% Asian, 34.6% Hispanic/Latino, 19.6% Filipino, 3.5% Black/African-American, 3.4% White, .7% Native Hawaiian or Pacific Islander, .3% American Indian/Alaskan Native, and .1% biracial. 1 Our students speak a range of languages from Spanish and Filipino to Vietnamese and Mandarin. Our school has the luxury of being one of the most culturally, linguistically, and demographically diverse schools in the state, if not the nation. Yet, this luxury can also be a detriment. After maneuvering through nine years of schooling, it comes as no surprise that by the time a child reaches high school he wants nothing more than to mix in, be accepted, assimilate, and be "like" everyone else. What does this "fitting in" look like? The same clothes, the same taste in music, food, and language? These are the daily markers of normalcy to a teenager, but the inevitable questions soon arise: "What if I don't fit in? What if I can't fit in? Why am I being singled out? Am I a loner? What do I have to do to be a part of this gigantic puzzle known as adolescence?" In our Frankenstein unit, this question of acceptance will be our focus as we delve into the parallel experience of the creature wanting to be accepted, to gain the trust of others, and to do everything in his power not to be alone.
More importantly, we will also face the reality of how prejudice, racism, and ostracism impact an individual who is already struggling with his or her place in society. Much like the creature, our students are often seen as "different," outsiders, loners, and creatures to disassociate from. Why do we prejudge someone who looks, speaks, and acts in a way different than we do? What assumptions are made based on someone's outer appearance? We will explore the ways in which society impacts an individual using the creature's interactions with his "society": the Delacey's, the villagers, Victor, and numerous other characters.
As my students consider this idea of prejudice, especially in their own environment, I ask the students if prejudice is the same today as it was in the past. Most of my students will respond with the typical "We've come a long way since the days of segregation," a sign of progress that I do acknowledge. However, I ask them to think more deeply about how different the forms of prejudice and intolerance have truly become. I ask my students to join in completing my sentence: "sticks and stones may break my bones but... names will never hurt me." This adage has always proven to be a faulty concept. As we know through various psychological studies, while a physical bruise may heal in time, an emotional scar is lasting. The same lasting impact can be seen with social media outlets, as the instantaneous yet permanent power of technology allows these emotional words to spread quickly and reach a wider audience. Such technological social outlets as MySpace, Facebook, Twitter, tumblr, and YouTube are perfect examples of the far-reaching and immediate impact of our words and attitudes. While social media might have been a lightning rod for the Egyptian revolution, it takes on a different purpose when used as a form of bullying and hatred. Twitter and Facebook may have been the catalysts for freedom and social protest, but with cyber-bullying what is created is social oppression.
Cyber-bullying has become a pressing issue in regards to the kinds of cruelty and discrimination our young people face today. Whereas in the past a student might spew hateful attitudes by word of mouth, today those words are released into the world with the swift touch of a button. Students have their personal accounts hacked, becoming the victims of "identity theft" as others publicly release embarrassing, shameful, harmful pictures and/or messages. There have also been accounts of young people who have been victims of hateful on-line campaigns to ostracize and shun them, all in the name of cruel fun and hatred. Some examples of the fatal impact of cyber-bullying include the suicide of fifteen-year-old Phoebe Prince of Massachusetts, who was taunted as the new girl from Ireland, and the suicide of seventeen-year-old Alexis Pilkington from Long Island, New York, who endured relentless taunts on the internet which continued after her death. 2 Another extreme example that comes to mind is that of Rutgers University freshman Tyler Clementi, whose intimate encounter with another man was secretly videotaped by a fellow classmate, Dharun Ravi. This violation of privacy was compounded when the facts revealed that the "videographer" invited fellow classmates to secretly join in on the viewing of these encounters, as if being treated to an entertaining show. Even more disturbing is that Dharun Ravi publicly urged his friends and Twitter followers to watch an upcoming encounter via his webcam. The result: eighteen-year-old Tyler jumped to his death from the George Washington Bridge. 3
Our unit's emphasis on ostracism and how we recognize our own prejudices and assumptions will benefit my students in various ways. The most obvious benefit will come in the form of students recognizing how human beings form their own prejudices based on race, language, appearance, or any outside, superficial category. Students will become aware of how these prejudices impact others, specifically noting the power of words, labels, and stereotypes. In addition, this unit will help students at the other end of the spectrum of prejudice, those who have been or might be prejudged, to reflect on how experiences of humiliation or ostracism have impacted their lives.
The end result, it is my hope, will be a group of young people who leave my classroom with a bit more empathy for those "others" on our campus and in our society who might have been viewed as invisible and unheard and irrelevant by the rest of the world. Of course students will come away with a better understanding of character perspectives and the relationship between reader and narrator; but the ultimate goal is for these lessons to extend beyond Mary Shelley's novel on their desks to their personal day-to-day interactions with their peers. My students can relate on numerous levels to the plight of the creature in the novel, and as such will hopefully understand the impact, power, and cruelty of our prejudicial words, our stereotypical attitudes, and our closed-minded visions.
Timing of the Unit
This unit will be delivered to a 12th Grade World Literature class. Preceding this unit, students will have covered the traits of Romantic literature, the structure of epistolary narration, as well as the various points of view a writer such as Mary Shelley employs in order to offer the reader a glimpse through the eyes of different characters. By using various narrators, the students are able to get a more genuine, reliable story. The recurring question that will begin with this unit and will continue through the rest of the semester, with such works as Dante's Inferno and Shakespeare's Macbeth, is: How do ethics play a role in an individual's actions? This question will of course be applied to the likes of Victor, the Creature, the sinful religious figures of Dante's time, and the infamous Macbeth. More importantly, students will pose this question to themselves in order to consider the bigger picture of what these pieces of literature, these characters, and these authors aim to show us about ourselves as human beings.
While the assigned readings can be overwhelming and dense for my students, I supplement the unit with audio readings of the novel, and with film clips to aid students in their understanding of characterization and the differences between a Hollywood rendition of literature and the integrity of the original novel. More importantly, the film adaptations allow the students to be aware of the same shortsightedness that Victor (like countless other characters) displays when using only his sight as a means to view the creature.
Students will also cover specific segments in order to make their reading smoother and easier to digest, namely: chapter vocabulary, sentence structure, diction, architectonic chapter outline, metaphors/analogies, family tree, conceptual handouts, geographical map, and the structure of the novel (epistolary narration, first-person retrospective point of view, and aporia as a literary device).
Structure in Frankenstein
As mentioned above, students will have had exposure to epistolary narration in order to become familiar with Mary Shelley's format of Frankenstein. Once my students have familiarized themselves with the various narrators (from Walton to Victor to the Creature), the question arises: "So, who do we believe: the first person who speaks? Victor, because he is an educated scientist? Walton, because he begins and ends the story? The Creature, even though he's a monster?" To get my students to understand the reliability of a narrator, I ask them to think of times in their lives when two individuals may have experienced the same event but have two very different perspectives. Sometimes they have a favorite childhood memory, but their recall is much different from that of their sibling. I briefly model for them the telephone game where I tell person X a fact, person X whispers to person Y what they heard, person Y whispers their fact to person Z, and so on. Inevitably, the original detail is distorted as the words pass through various ears and various mouths. This is the very process the students experience in Mary Shelley's novel. Through the first-person retrospective narrative, and using the epistolary form of narration via letters, Mary Shelley presents the reader with a balanced view of what could easily be a biased story.
To begin to frame the story for the students, they will need to understand what the epistolary format is and why authors use it. The epistolary format in a frame narrative allows for a narrator, typically an outsider independent of the initial action, to become a reliable representative of the author's views. The story unfolds as a series of letters in which this outside narrator relates the story he has been told by another character to a third person, in this case Walton's sister. The story actually begins at the conclusion of Victor's quest, as he is chasing the creature in a cat-and-mouse game. This story within a story allows the reader the luxury of seeing a moving timeline of emotional and moral development, as the Creature is allowed to take over the narration midway. Readers can judge the Creature in terms of his internal thoughts and merits rather than his external appearance. Shelley's epistolary format also allows Walton to set up, for his sister and the readers, the idea of Victor as a mad scientist ultimately destroyed by the very thing he created, the creature, and the very thing he maintained, his obsession with creating life. 4
Since the various shifts in narration allow the reader to get differing perspectives, students are brought to the realization that there is a contrast between how Victor perceives the chain of events and how the Creature perceives those same events. To analyze the kind of narration Victor gives is to begin with the straightforward details he offers us, seemingly leaving out any kind of emotional backdrop. For example, as a student of chemistry, Victor includes specific ideas and descriptions of what he has studied and read and done, but he leaves out the very thing as a narrator that he leaves out as a character: any emotional description. He admits to Walton that his "studies interfered with the tranquility of his domestic affections", conceding that emotional connection to his family was not a priority in comparison with his pursuit of knowledge. 5
Through the Creature's narration, the perception of Victor's actions takes quite a different turn. Victor's supposedly marvelous deeds are now revealed as crimes, as is his unreliable judgment of the creature. Students will be asked to reread specific passages where Victor's observations can be deemed doubtful. For example, before Victor destroys the female mate he promised the Creature, he describes looking out the window to see the Creature looking in at him as a "ghastly grin wrinkled his lips". 6 It is important to ask students at this point where they feel the most connected, alongside Victor in the laboratory or alongside the Creature from the outside looking in.
Because they will have reread quite a few passages that raise doubts about Victor's credibility, students may begin to see themselves alongside the Creature. This kind of shift from the protagonist (Victor) to the antagonist (the Creature) can begin to be explained through the literary term aporia, an "ideal state of mind...as the perfect suspension of judgment that presents either complete faith or doubt." Once the Creature's narration unfolds, the reader begins to realize the contradiction that Victor has so far established. The sympathy that some students might have felt for Victor is now replaced by suspicion. 7 Regardless of Victor's determined rants against the Creature as "monster", "fiend", "devil", and "vile insect", and even after the various crimes committed by the Creature, students may rein in their extreme hatred for the Creature and allow some kind of understanding to color their view of him. After all, the Creature's narration is dominated by his feelings and emotional journey, whereas Victor's narration is usually observational and lacking in emotion.
Perception: Then (18th Century) and Now (21st Century)
We jump into the labs of 18th Century scientists Gall and Spurzheim by analyzing their Phrenological Chart. I will have the class join me in physically analyzing our own skulls to determine our specific moral strengths and weaknesses. After feeling for skull formations for violence, compassion, dishonesty, and various other supposed markers of behavior, my students will probably sneer and joke about the absurdity of this chart. I expect some laughter at the incredible theory that society believed in during the 18th Century. While students will find this chart nonsensical, they will be reminded soon enough that they themselves have used such "nonsensical" criteria to do the exact same thing: judge a person's inner being and moral character based on what their eyes show them, rather than what their ears can tell them. I will use a PowerPoint presentation of various images (a few of Diane Arbus' photographs, famed astrophysicist Stephen Hawking, rapper and businessman David Banner, and Olympic weightlifter Tommy Kono, among others) to elicit their own assumptions and reactions to what they think they see and know through their vision. Students will see that even they, in this day and age, base their judgments of others on the superficiality of what their eyes can see, once again noting that while we have come a long way in terms of legal segregation and prejudice, the issue of individual perception and judgment continues to exist.
These preconceived notions of an individual, or group, are directly tied to the way an individual views himself. For example, students are questioned about their identity in various ways: Who are you? [Priscilla Garcia]. What are you? [Mexican. Mexican-American. A girl. A student.] Where do you come from? [Juarez, Mexico. Checkers apartments across the street. San Jose]. These questions of origin and identity are important as we begin to see how individuals recognized themselves and others in the 18th Century, given the two opposing theories of human origin. I must remind the students that while some of these theories might seem fantastical and ridiculous today, they were developed with the knowledge and assumptions available at the time. The Polygenic Theory claimed that black Africans originated from a distinct and separate species, a supposed missing link between human beings and apes. The opposing theory was the Christian Doctrine, which argued that all humans descended from the original couple, Adam and Eve, and attributed the differences in skin color, hair color, skull shape, and anatomy to environmental conditions and changes. 8
In addition to understanding both theories of origin, my students will also analyze race through Johann Friedrich Blumenbach's 1775 classifications. He divided the human species into specific subgroups using the traits of skin color, hair, skull formation, and physical anatomy. In addition, students will be exposed to Camper's theory that the inner soul and moral character produce the outer appearance of an individual, as well as the phrenological chart, which uses the contours of the skull to determine the characteristics and morality of an individual. Victor and many of the other characters support these theories, which claim that an individual's outer appearance is a valid measure of a person's inner nature and innate behavior. 9
Why is all of this background on origin significant to my students? My students understand stereotypes very well, especially given their personal histories and observations of prejudice. Some common racial stereotypes that they reveal include: "All blacks are thugs and steal", "All Mexican girls get pregnant and drop out of high school", "All Asians are stingy and selfish." While these assumptions might seem laughable to my students, they are the actual basis for how individuals perceived race and ethics during the 18th Century. Lawrence gave a list of intellectual and emotional qualities identified with each race, specifically noting that "the white race held a preeminence in moral feelings and mental endowments." 10 Would my students of Asian descent be comfortable with the idea that their ancestors could have strong moral fiber, but because of their inherent laziness they will remain inferior and destructive like Genghis Khan? That their Asian "gene" has already determined that they, their children, and their children's children will remain inferior to the white race?
If the creature were to be asked the same questions I posed to my students ("Who are you?" "What are you?" "Where do you come from?"), he would be hard pressed for an answer. However, there are certain subtle clues in the text pointing to the Creature possibly being of the Mongolian/Asian race according to the 18th Century descriptions of man. UCLA professor Anne K. Mellor points to the Creature's Mongolian race by noting that at the beginning of the novel, Walton and his men have set off on their voyage to China by way of the North Pole. Mellor points to the Creature as an inhabitant of "an island North of the wilds of Tartary and Russia whence Victor has pursued him, North of Archangel, the northernmost city in western Asia from which Walton has set sail." Mary Shelley describes this newborn giant as having yellow skin, flowing black hair, and watery eyes nearly the same color as the "dun white", or light grey-brown, sockets in which they are set. The Creature does not have white skin, blond hair, or blue eyes, and is definitely not Caucasian, nor is he the same race as his maker. Even Walton notices the difference in appearance between Victor and the traveller preceding him, as he notes Victor was "not as the other traveller seemed to be, a savage inhabitant of some undiscovered island, but an European". 11
The Burdens We Carry: Biographical Backgrounds
To begin to understand the connection between author and work, my students first need to recognize the complexity of Mary Shelley's personal background. Given the obvious parallels between Mary Shelley and the situations depicted in her novel, it is extremely beneficial for my students to see how the realities of an author's life can carry the same validity and purpose when told through a fictional character as they would in a work of non-fiction. More importantly, my students have their own valid experiences and stories when it comes to the connection (or disconnection) between their own family background and dynamics and the formation of their own personalities and identities. This is not to suggest (as my students are quick to point out) that who their parents are will guarantee who they themselves will become in the future. Family background is just one factor we may use in order to understand our own values, concepts, and attitudes toward the world and those around us.
Mary Shelley had the luxury (and burden) of being the offspring of two very prominent and established writers, Mary Wollstonecraft and William Godwin. Mary's own parental bonding was short-lived, as her mother died eleven days after giving birth to her. Before her death, Mary Wollstonecraft recognized that "before there can be an interplay of love between father and child, the father has to fulfill his duties", a statement Mary knew from her mother's literary work. 12 For various reasons, Godwin did not fulfill his duties as a father, remaining a distant parent after Mary Wollstonecraft's death. In addition, Mary's relationship with Godwin's second wife was not without its problems. In essence, Mary became familiar with rejection as her mother's death signaled a type of "desertion" and "abandonment" by the one who gave her life. 13 Mary's sense of abandonment continued as her elopement with an already-married Percy earned her the scorn and disownment of her father, someone who embodied her "God" and to whom she had a strong attachment.
As if Mary didn't have enough sorrow in her life, she continued to face loss and suffering. Death seemed to be a constant and complex theme in Mary's life, as she may have felt indirectly guilty for each individual's demise. As already noted, Mary's mother died as a result of complications in giving birth to her. Mary also endured the deaths of three of her four children in infancy. Interestingly enough, her first daughter died at eleven days old, the same age Mary was when her own mother died. Her half-sister, Fanny Imlay, committed suicide six weeks after Mary began drafting Frankenstein, and Harriet Shelley (Percy's wife) committed suicide two and a half months after Mary began her novel. The death of her friend Lord Byron also had an impact on her. To complete her string of sorrows, Percy died in a boating accident, after which her closest friends, Jane Williams and Thomas Jefferson Hogg, betrayed her by publicly labeling her an unloving wife. Her ongoing struggle with depression is evidenced in her journals from the 1820s. 14
These themes of abandonment and death are at the forefront of Frankenstein, thus bridging the life of the author with the lives in the book. As a stark example of this connection, we need only compare Victor's visions and dreams of resurrecting life with a page from Mary Shelley's journal detailing her dream of reviving her dead child. Mary's journal noted, "Dream that my little baby came to life again- that it had only been cold and that we rubbed it before the fire and it lived". 15 While Mary was still grieving for the loss of her child, her father continued in his insensitive advice, warning her not to grieve too much or else risk losing the love and affection of those around her, which mirrors Alphonse's advice to Victor after his mother dies of scarlet fever. 16
Child Psychology: Substitute "Parents"
While it is helpful for students to recognize Mary Shelley's personal history, and the struggles and suffering she endured both as a child and as a parent, it is time for my students to revisit the idea of what happens when a child lacks a parent, the guide and vital supporter who helps the child maneuver through this complex world. Many of my students grow up in low-income, single-parent homes, or they are raised by a grandparent or by an extended family member. Sometimes the students may come from a two-parent home, but that is by no means a guarantee of a healthy, happy environment. Those students who are lucky are raised in a strong, supportive home with parents who realize their main purpose in life is to care for, guide, and love their sons and daughters. The question then becomes who or what actually takes the role of a "parent" in those homes where there is an absentee or neglectful parent? For my students, I've seen the role of parent take on the shape of peers (whether in the positive, traditional peer sense or in the negative "family" of gang life), extracurricular clubs, sports, and church. I used to think that many of my students who spent endless hours after school, immersed in multiple activities such as sports or band practice or club meetings, were just overachievers. While some students indeed dedicate themselves to extracurricular activities, others are involved because this "school" life is much more stable, safe, and functional than the lives that await them when they get home. In the case of the Creature, education takes Victor's place as parent and serves as his only source of stability, safety, and functionality.
Where Victor fails to fulfill his role as a parent, books are the only things that can fill the void and serve as the creature's guide into love, knowledge, and sorrow. The books the Creature discovers while hiding in the hovel behind the De Laceys' cottage serve as his "mother" and "father", nurturing him but also showing him the painful reality of love and sacrifice. While some of my students are familiar with their parents' lectures about the danger of temptation, the Creature gains this lesson from John Milton's Paradise Lost. In terms of love and sacrifice, my students gain their insight from their parents' stories and warnings (or, unfortunately, what they glean from reality dating shows), while the Creature takes the lessons of love from Johann Wolfgang von Goethe's The Sorrows of Werther. Lastly, the moral virtues and weaknesses of man may be revealed to my students by family members, friends, and role models, yet the Creature must settle for Plutarch's Lives to instruct him in this area of leadership. These three condensed readings, Milton's Paradise Lost, Goethe's Sorrows of Werther, and Plutarch's Lives, will be the basis for a jigsaw activity for my students. My students will not only identify the lessons the Creature learns from these texts, but will also stand in the role of the Creature as they, like him, encounter these concepts and this literature for the first time.
As my students uncover the lessons of temptation, sacrifice, and leadership, I will ask them if there could be any substitute for a parent. In the Creature's case, all he has are the books he finds. What about the kinds of modern-day substitutes that may fill the gap of an absentee parent, or the time and affection a parent cannot or will not provide? We have already discussed the various outside influences on a young person, such as a gang, a "family" of sorts that claims to "accept", "protect", and "guide" a young child. Students might also point to the materialistic items that some parents use as a symbol of affection and love, such as extravagant gifts in the form of the latest designer purse, the latest version of the iPhone, or the cutting-edge basketball sneakers. Can there be continued loyalty from child to parent, from the Creature to Victor, even if that parent has caused unheard-of pain and suffering?
This debatable question centers on a few important aspects of child psychology. After all, the creature, despite his gargantuan size, begins as a child, newly formed and easily influenced. As a class, we discuss the old adage, "It takes a village to raise a child". In the creature's case, "It takes a village to ruin a child." According to psychologist Selma Fraiberg, the unloved child has the capacity to grow into the unusual adult, the deviant who seeks to compensate for his overwhelming displacement, his "nothingness" by inflicting pain on others—a form of announcing to the world, "I exist, I am". 17 By viewing the creature through Fraiberg's lens, we are able to see that it is not the creature's nature that makes him vengeful (as Victor deludes himself into thinking) but rather his magnified isolation and despair at the lack of human connections that Victor should have provided. As is the case with many (if not all) youth, there is an incredible yearning to win the approval of one's parents, as we have seen with Mary Shelley and her own parents. As a class, we discuss the current dynamics of this kind of parental stamp of approval, which in the best cases can lead to a child's excellence, and in the worst cases can lead to the dangerous extremes in which a child sets incredible and unattainable goals at the expense of everything else in life.
Valid questions are also swirling in a young child's brain when it comes to his or her place in society, specifically dealing with the protectors in his/her life. As a class, we move from Fraiberg's theory of displacement to child psychologist Bruno Bettelheim's approach to child identity. He notes:
The child asks himself: "Who am I?" "Where did I come from?" ... He worries not whether there is justice for individual man, but whether he will be treated justly. He wonders who and what projects him into adversity, and what can prevent this from happening to him. Are there benevolent powers in addition to his parents? Are his parents benevolent powers? How should he form himself, and why? Is there hope for him though he may have done wrong? Why has all of this happened to him? 18
All of these questions can be directly applied to the creature as a child. At one point or another, it is probable that my students have come across at least one of these questions in regard to their home life and parental figures. By recognizing Bettelheim's approach, students will be prepared to view the creature through the most basic question of identity—Who is the Creature really?
The impact of absentee parents is apparent in Shelley's construction of Victor and the Creature's relationship, if it can be called that. My students will begin to formulate their own opinions regarding Victor and the Creature as moral or immoral characters, but will first consider the kinds of ostracism that impact an individual. It is one thing to be cruelly shunned by society, but quite another to be rejected by one's own parent. As the Creature's existence shows, man thrives through communion and relationship with others, while isolation and solitude essentially represent man's death. My students are aware of various types of isolation, from extremely introverted students who walk solitarily on campus with eyes fixated on the ground to the hidden student in the back of the class who hardly says a word or makes eye contact with the teacher or his peers.
While the most obvious dysfunctional relationship is Victor's and the Creature's, it is by no means the only one in the novel. Students need to analyze the various other parental relationships with great scrutiny. In order for the students to understand the different dysfunctions in a home, they must allow themselves to see different family structures that exist in the novel. After all, students might not immediately recognize the term "dysfunctional" but they certainly can recognize unhealthy markers in relationships, whether at home or in school.
To begin with, students must first direct their attention to Victor's childhood upbringing. While Victor paints his childhood as ideal (causing the reader to question his reliability), evidence suggests that through his narrative he places a forced, unnatural emphasis on the happiness and love in his family. Students can discuss the various reasons why an individual might put up a "front" by portraying a home life in an idealized, doubtful manner rather than giving a realistic view of what their family is truly like. Why do students, like Victor, seem to rely on a fantasy when talking about reality? Is a student's family life too painful or shameful to reveal to the public? Maybe maintaining a façade is much safer and more comfortable for students than recounting the harsh reality of what they go home to each day.
Dysfunction continues with Victor as he grows into a college student. While students might only view the problems Victor encountered as a child, they need to follow Victor as the kinds of problems change from home life to academic pursuits. Indirectly, Victor's father, Alphonse, can be held accountable for the creation of the monster (and the subsequent murders) based on his lack of attention to Victor's passion for science. Once again, students can relate to this family dynamic of pursuing the exact thing that one's family rejects or dismisses. While it may seem unfair to directly blame Alphonse for the deaths, it is quite a scary idea to realize that had Alphonse taken some interest in his son's studies, then maybe things would have turned out differently. These "what ifs" are timely for my seniors since they will be able to reflect on the last three years of their high school careers, and what role their parents played in where they are right now, both academically and socially.
Is there only one kind of dysfunctional family? Are we to consider Victor and the Creature's relationship the only unhealthy representation of family? There are many different kinds of families with dysfunction, and no magic potion to give everyone a happy ending. The De Laceys are seemingly settled and healthy, but their stability soon turns into chaos. As old man De Lacey is on the cusp of giving the creature a chance at human connection, his children burst into the room and rob the creature of this miracle. Ironically, it is when Felix enables his father to see through his eyes that old man De Lacey actually loses his visionary powers. The blinded ones are actually the children, who might have literal sight but are completely in the dark when it comes to compassion and open-mindedness. Even for students who have a seemingly perfect and stable family life, looks can be deceiving, as is established with the De Laceys. Once again, students are brought back to the idea that things (and people) aren't always what they seem to be, a fact that they are accustomed to in their own interactions with their peers.
A Parent's License
"How can anyone reject their own flesh and blood?" is the ultimate question many of my students ask as they become more comfortable with viewing Victor and the Creature not as "creator" and "created", but more personally as "parent" and "child". We begin to answer this question with less of the "how" and more of the "why" parents would reject their children. In a perfect world, of course parents would earn an "A+" on a parent report card. Parents would always know what to say and what to do in every possible situation when it comes to their child. As students' frustration with Victor's neglect begins to build, some of my students express the need for people to apply for having a child, much as individuals apply for a driver's license. My students' rationale for this application is that people should meet the basic requirements in order to have children, nurture their children, and love their children as opposed to being the careless, irresponsible Victor who leaves an innocent to fend for itself. As some of my students may point out, even interested pet owners must fill out forms, participate in orientations, and prove to the animal shelter that they can care for and maintain a pet in a loving, safe home. Why is there less screening when it comes to parents? This question might also be especially topical when students consider teen pregnancies and teen parents.
While my students have almost daily moments of immaturity, which sometimes drive my patience to a breaking point, I am always amazed and proud of the kind of insight they have when it comes to relevant concepts that they are honest enough to discuss. A prime example is our discussion about the responsibilities of a parent to a child and of a child to a parent. After listing the expected duties of parent to child (shelter, safety, food, guidance, and most importantly love), we shift to how this list compares to Victor's desire to create life. It doesn't take too long for my students to see that Victor should have never gotten his "parent's license" given his motivation for becoming a creator. After all, Victor expects that "a new species would bless me as its creator and source...no father would claim the gratitude of his child so completely as I should deserve theirs." 19 Victor is not shy about stating his need to be praised, adored, and placed on a pedestal of acclaim, all the while forgetting to acknowledge his duty to this new, delicate, vulnerable being. This parent's license to have a child echoes Mary Shelley's own view that "a right always includes a duty, and I think it may likewise fairly be inferred that they forfeit the right who do not fulfill the duty." 20
From this idea of parental duty, my students will move into the discussion of expectation versus reality when it comes to children. Victor, in a sense, customizes his child in that he chooses every body part, every physical characteristic, and every physical detail of the creature. He is able to purposefully and knowingly make his child, and has literal control over this process. Students will usually be dumbfounded by Victor's revulsion at his creation. He knew exactly what his creature would look like throughout the entire process, so why is he horrified at something he has looked at every day for a year? At this point, I have them keep track of the verbal exchanges between Victor and the Creature, noting especially the various labels Victor assigns the Creature: "demoniacal corpse", "mummy", "hideous", "wretch", "a thing such as even Dante could not have conceived." Returning to the psyche of the child, we discuss the impact these words would have (and do have) on a child, especially coming from a parent.
While it is important for students to follow Victor's irresponsible and cruel treatment of the Creature, I find it even more important for students to find examples of Victor's possible compassion for the Creature. It is through these few instances that students need to realize that it is very possible for Victor to take ownership of his actions. He has the ability, the will, and the capacity to reverse his decisions. In a fleeting moment Victor does show a tinge of compassion for his child as he recalls looking at the Creature's face and stating, "his countenance bespoke bitter anguish but its unearthly ugliness rendered it almost too horrible for human eyes." 21 Another brief moment of responsibility comes when he encounters the Creature and hears the details of his existence. Victor realizes, "I ought to [have rendered] him happy before I complained of his wickedness" and also notes, "his tale and the feelings he now expressed proved him to be a creature of fine sensations; and did I not as his maker, owe him, all the portion of happiness that it was in my power to bestow him?" 22 "Yes" would be the resounding answer, but unfortunately for the Creature, Victor never fully allows himself to develop this responsibility.
Images of Propaganda?
Since our Frankenstein unit revolves around what our eyes can see versus what our ears can hear, I plan to spend some time showing my class the only surviving picture of the Creature as depicted in the Frontispiece to the 1831 revised edition. Students will actually begin by creating their own visual image of the Creature using only Victor's narrative to guide them. The students will then compare their image with the 1831 Frontispiece picture, noting the differences in appearance. While the Frontispiece image does show a large head and gigantic body proportions, students will actually discover that the Creature's body is a perfect embodiment of strength and masculinity, as opposed to the grotesque image that Victor offers us. The Creature has a large but well-proportioned body, with only the head and its Mongoloid features appearing awkwardly connected to the body. 23 Once again, I will bring my students back to our "Into" lesson about what our minds understand by what our eyes can see, and how quickly we can fall into this tricky and deceptive trap of assumption and prejudice.
One can make the case that Victor uses his words as propaganda against the monster, trying to somehow align the reader with his demonic view of his creation. This leap to propaganda built on race and stereotype fits smoothly with my students' understanding of manipulation throughout history. My students will become aware that prejudice and racism aren't limited to an individual's ideas, but can be spread widely using the various forms of media to paint a specific group as threatening and evil, as was done in WWII.
My students, as mentioned above, are familiar with the power of social media. Yet, it is important for students to remember that newspapers and print ads were the "social media" of the late 1800s. Through various propaganda pamphlets, my aim is to show the students how powerful and successful it is to prey on people's fears, however irrational or incredible. Some of my examples will center on the "Yellow Peril" propaganda of the late 1800s and early 1900s targeting the wave of Chinese immigrants in San Francisco, California, who worked on the railroads and in vineyards, laundries, and restaurants. These immigrants were depicted as a monopolizing, greedy threat on a quest to bring their fellow opium-addicted brethren to infiltrate the country. I will also use some examples of the anti-Chinese and anti-Japanese propaganda that portrayed the Asian individual as a demonic Dr. Fu Manchu, as British author Arthur Henry Sarsfield Ward did in his posters. 24 In these depictions, the Asian man is shown as tall, leaning forward in an almost cat-like pose, with a face resembling the high arch of a Satanic brow, and with the cruel, cunning look of an entire Eastern race. In other images, the Asian man was stereotyped as a gigantic, bloodthirsty warrior very similar to an ape.
To shift gears into a more modern connection for my students, I will show them recent advertising images that prey on stereotypes to sell a product. Some examples of these ads include Intel, Dove, and Sony, among others. Using just these visual representations of a specific race, students will uncover the subtle (and not so subtle) messages the advertisers are sending to the public.
Nature vs. Nurture?
What is the final verdict in terms of the Creature's good or evil identity? This is a question my students struggle with, especially since they don't want to let go of the fact that at some point, the Creature stops being a child, and becomes an individual responsible for his actions. By looking at Rousseau's theory of natural man, my students get a fuller and more complex view of the Creature as a being who makes his own decisions.
We begin our understanding of nature vs. nurture by analyzing Rousseau's philosophy of natural man. According to Rousseau's claim, the Creature aligns with "natural man" because he has a balanced set of defects and virtues. This "natural man" may begin lacking the ability to speak and reason, but he is stronger and survives unbelievable circumstances, as the Creature did in surviving the unforgiving societal and environmental challenges he faced. In contrast, the average human being may have the ability to speak and reason, but placed in the same challenging situations he would not survive. Rousseau's theory would also point to the Creature's independence and natural sense of pity, a trait that society does not afford him in return. Ultimately, students will understand that the Creature's monstrosity is in part a social construction, not an innate part of him but placed upon him by society without his choice. 25 The Creature explains his behavior by stating, "My vices are the children of forced solitude that I abhor; and my virtues will necessarily arise when I live in communion with an equal." 26 Yet, when does man (or creature) take responsibility for his actions? My students need to discuss this idea of accountability and ask at what point age and inexperience stop being an excuse for immoral behavior.
At this point, I would like to present to my students local questions in our community, such as the juvenile justice law in California that allows a fourteen-year-old to be tried as an adult in serious cases of gang crimes, sexual offenses, and murder. Does a fourteen-year-old truly belong in a prison alongside hardened criminals? Will this fourteen-year-old leave prison worse than when he went in? Should this fourteen-year-old be given a more appropriate sentence with a better chance at rehabilitation? The debate will ultimately center on how to address this issue of justice and punishment, given that this fourteen-year-old "child" would have the knowledge that if you hold a gun in your hand, point it at someone, place your forefinger on the trigger, and squeeze, a bullet will discharge, pierce through flesh and tissue, and leave a dead body on the floor. Once again, the discussion will move towards the question of when the Creature can be held accountable for his actions, once he knew right from wrong.
While some students may continue in their sympathy with the Creature, others may align themselves with Victor by noting that the Creature can be nothing but evil given his actions. Victor looks at his creature and exclaims, "Abhorred monster! Fiend that thou art! Tortures of hell are too mild a vengeance for thy crimes, wretched devil!" 27 At this point, students who have taken the view that the Creature is evil remain steadfast in their belief that the Creature, regardless of his emotional turmoil, must be held accountable for the lives he has taken: William, Justine, Henry Clerval, and ultimately Victor. The students who find the Creature fully responsible for these deaths align themselves with Dante Alighieri's assertion that man is full of free will and choice. Dante supports the concept that while God is all-knowing, all-good, and all-powerful, man falls extremely short of acquiring any sense of innate goodness, as he is inherently evil and will falter through his own choice. The Creature may perceive his evil to come from society's cruel treatment of him rather than some innate predilection for evil. In contrast, Dante would argue that evil actions result from the free will and choice of man and that knowledge brings salvation and redemption, something the Creature never reaches. 28
Before moving into the structure of the novel, I compare the two editions of Frankenstein to offer my students yet another interpretation of the novel. Mary Shelley's 1818 edition presents, although subtly, Victor as having the ability to make decisions regarding the Creature. This edition displays Victor as having the free will to make meaningful and ethical choices at the critical points in the novel. Victor could have abandoned his quest for the source of life, he could have cared for his creature, and he could have protected Elizabeth. In contrast, Shelley's revised 1831 edition portrays Victor in a less capable role. Rather, the emphasis is placed on destiny, removing the moral choices placed in front of Victor. Many of his decisions are really not his "free will" but are driven by fate. His academic passion for the sciences is "attributed to chance—or rather the evil influence; the Angel of Destruction, which asserted omnipotent sway over me." 29 Justine and William's deaths are not the result of Victor's silence, but rather a curse imposed by "inexorable fate." Victor, Justine, and Elizabeth each poignantly attribute their fates to "immutable laws" or an omnipotent "will" to which mankind must "learn...to submit in patience." 30
Loyalty To The End
Mary Shelley imbues the Creature with a generous amount of empathy, which aligns her with British empiricist philosophers such as John Locke and David Hartley, who placed emphasis on sympathy as a marker of moral behavior. 31 The fact that society fails in its ability to sympathize with the creature is evidence enough of the absence of morality and common decency. As a result, the creature is a prime example of isolation. Essentially, the Creature does not begin as a cruel and monstrous murderer, but rather becomes one through the lack of sympathy from society, and more importantly from his creator, Victor. Even though he is the source of the Creature's misery, the dying Victor earns his "child's" loyalty as the Creature drapes himself over his dead "father" and exclaims, "Oh, Frankenstein! Generous and self-devoted being! I...destroyed thee by destroying all thou lovedst". 32 As if facing the reality he has painfully known all along, the Creature beholds his dead father and states, "Alas! He is cold, he cannot answer me". 33 Regardless of whom the students side with, Victor or the Creature, my expectation is that they close the cover of Shelley's novel with an expanded understanding of the power of man, through words and actions, to empathize with and lift up his fellow man, or to aid in the destruction of one who seeks some minimal type of compassion. Hopefully my students will choose the former as the way to lead their lives.
Critical Thinking Questions
In order for students to reaffirm their reading and understanding of the novel, students will use a variety of assessments. One of these assessments will be the creation of their own critical thinking questions, as well as answering assigned questions. As students begin their readings for the first eight or so chapters, they will answer the assigned questions in pairs in class, slowly transitioning into independent responses as homework. I suggest this slow transition in order to give students confidence in their ability with Mary Shelley's somewhat difficult sentence structure and diction, and also so that students can interact with the text and with each other. As students are weaned away from working with their partners, and they have a stronger grasp of the plot and the rhythm of the sentences, I will have the students create their own questions modeled on the "Question Tree" taken from the Literacy Solution handbook, which uses on-the-surface questions (which have factual, textual answers) and under-the-surface questions (which have opinion-based, inferential answers) to further enable students to use their own critical thinking skills. 34
After completing the novel, students come back to where we began: Does nature or environment determine an individual's morality (or lack thereof)? What impact do prejudice, cruelty, and ostracism have on an individual? How can superficial markers prevent man from destroying civilization? How do we begin to empathize with this "other" being with a better understanding of multiple perspectives, rather than just one? Once again, students are brought back to the benefits of multiple narrators in order to get a balanced view of the novel. Even Walton's narrative is able to give the reader a more reliable sense of the Creature's nature. Students are asked to recall the fact that Walton does not immediately reject the Creature based on his first visual impression. This kind of delay may be due to the fact that Walton has already been exposed to the Creature via Victor's words. At Victor's deathbed, watching the Creature hover over the dead man, Walton is indeed disgusted, never having seen so grotesque a figure as the Creature. Nevertheless, once Walton shuts his eyes, and is temporarily in the dark, he asks the Creature to stay, an almost sympathetic offering to this maltreated figure. This kind of empathy and willingness to suspend assumptions is the same core purpose for the Unit Assessment that my students will complete: "Frankenstein's Archive of Letters".
Students will be paired up and asked to choose either Victor or the Creature (both can be aptly addressed as "Frankenstein" since there is such a strong correlation between the two characters) and take on their persona in their collection of letters. Students will write five letters to the other character (one page in length, handwritten, single-spaced) regarding five specific incidents. While the Creature is still alive at the end of the novel (although there is some doubt when comparing Mary's original statement that Walton "lost sight of the creature in the distance" and Percy's edited statement that the Creature "was lost in the distance") and can have an obvious voice in the letters, the same cannot be said about Victor, since he is dead by the end of the book. He is entirely mute for the last section of the novel, and therefore the reader never truly knows if he felt any remorse or acknowledged the Creature in any way. Because of this muted situation, I will have the students who take on the persona of Victor assume that Victor's "spirit" is reawakened and has the epiphany of taking some responsibility for the creature, and, more importantly, of feeling the much-desired empathy that his child has so desperately needed and wanted.
Students will be given a list of specific events from which they will choose five. Students will be required to use textual evidence, letter-writing structure, and stylistic techniques/diction familiar to their character. Generally, students' letters should sound like their specific character, not like the students themselves. Students will relay a sense of empathy through their words, especially since they will read aloud in class the letters centering on the same event. Students will stand opposite their partner and read these letters to each other, finally giving voice and "empathy" to both Creature and Creator.
The following are three key lesson plans that are spread throughout our unit; lesson one serves as an "into" activity introducing our curriculum unit, lesson two will come as a "through" activity as students are in the midst of the novel, and lesson three will be our "beyond" activity that will complete our unit as students create their final projects.
Lesson Plan One: Introduction Part I
While this lesson is detailed as one activity, it will take two days for students to complete their tasks. It is important to allow some time between the 18th Century Phrenological Chart activity and the modern-day PowerPoint presentation so that students can catch themselves making the same assumptions about appearance today, in 2012, as the scientists made in the 18th Century.
Students will be able to...
- Identify Gall and Spurzheim's Phrenological Chart
- Analyze the theory of 18th Century scientists that a person's physical appearance is a marker of morality
- Infer the reaction individuals in the 18th Century might have had to this theory
- Apply the Phrenological Chart as they study their own skulls
- Predict how modern-day individuals would react if this theory were still used today
- Discuss various issues of prejudice, nature vs. nurture, and human nature to transition into our unit of study.
- Phrenological Chart handout
- Sabbatini's Phrenological online image (projected on the screen)
- Paper/pen
I will begin by having the students agree or disagree (in 2-3 complete sentences) with the following anticipation guide statements in preparation for today's lesson. I will remind students that while their responses will remain private, they will be expected to share one idea (even if vague) with their partner and with the class:
a) Human beings are born free of any malice, hatred, or anger.
b) Seeing is believing.
c) Individuals associate with those who are most like them in terms of physical appearance (race, age, etc).
d) Discrimination and prejudice are issues of the past that are no longer relevant in today's world.
- Students will respond to the anticipation guide statements for 10-15 minutes.
- Students will complete their responses, and spend a few minutes pair sharing their ideas with their partners. At this point, I will roam the class picking up bits and pieces of conversation, and getting a feel for where my students stand on these issues. I will also listen for insightful comments or questions that might spur on our class conversation.
- Once students are done pair sharing, I will ask the class to come back to a whole group discussion and will review briefly what I had heard as I was roaming the class. I will point out the few comments that I find fitting, and I will ask for a few volunteers to share their ideas for each of the statements.
- Once our whole group discussion is done, maybe after 10-15 minutes, I will hand out the Phrenological Chart and will have the students tell me what they think this head formation with various boxed titles is.
- After varied responses, I will briefly inform my students that the discussion they just had is in direct contrast with Gall and Spurzheim's Phrenological Chart. I will briefly explain the theory of skull formation and traits of morality (and immorality) as we begin to understand 18th Century theories of man.
- After my students have somewhat of a grasp on this theory, I will have them do their own skull examination to see where they fit in terms of morality. As students find different parts of their skull, I will use the online chart on the projector screen, clicking on the traits to show students the specifics of what each trait means. For example, I would click on the "destructiveness" segment above the earlobe, and students would see the specific behaviors that someone with this measured fragment would have.
- After students have had some fun with this chart, and after hearing their jokes and laughter about such a ridiculous chart, I will have them answer the following question on their anticipation guide paper:
"In 5-6 sentences, describe this "skull experiment". What did you find out about yourself based on this Phrenological Chart? How would you feel if this chart were still in use today? How would your family and friends feel if this chart was used to determine whether they were good or bad people?"
After students are done writing their reaction to the day's experiment, I will have students pair share and then group share, in addition to tying in their original responses to the anticipation guide questions of prejudice and appearance.
Lesson Plan Two: Introduction Part II
Students will be able to...
- Use only their sense of vision to make assumptions about individuals
- Relate to being judged by superficial markers (such as appearance)
- Connect the absurdity of their own modern-day prejudices with the theories of the 18th Century, which they had deemed ridiculous the day before
- Identify the dangers in supporting stereotypes and making quick assumptions versus taking the time to get to know an individual
- Discuss the impact of these stereotypes on an individual and the ostracism that follows
- "Don't Judge a Book By Its Cover" PowerPoint
I will begin by having students make a T-chart on their paper and label it with the headings I provide.
I will instruct students that for each image I show them, they will write 4-5 bullet points/notes on the left-hand side. I will let students know that I want their honest responses to the pictures I am about to show them. If it helps certain students, I will have them think about how society would view these individuals. Again, I don't want my students to write what they think I want to see, but rather their honest reactions and predictions about these individuals. This is a completely silent activity so students are not influenced by other students' reactions. As with the previous day's activity, I will let students know that their responses are private, but they should be prepared to share at least one idea or insight into the activity.
For each picture, I will give students 2-3 minutes to write.
The pictures will be:
a) Astrophysicist Stephen Hawking
b) Rapper/businessman David Banner
c) Olympian Weightlifter Tommy Kono
1. Once students are done with each picture, I will remind them that as we look at each picture one at a time, I would like honest responses from the class in terms of the specific questions posed on the T-chart. I will try to put the students at ease by stating that these responses might not be their true beliefs but rather what they know society as a whole might believe when taking a look at these pictures.
2. I will begin with having a few volunteers for each picture. Once students have given me their reactions, I will reveal the truth about these individuals, thus revealing my students' "ridiculous" assumptions based on the superficiality of physical appearance, and will remind them of the previous day's lessons and their view of the "ridiculous" phrenology skull experiment.
The Truth for each picture:
a) Some responses to this awkward-looking man in a wheelchair, half slumped, head lopsided, may include: he's disabled, he's mentally impaired, he is a vegetable, I'd feel uncomfortable around him because I wouldn't know what to do, he's probably in a nursing home with around-the-clock care for feeding and bathing and the basic functions for a handicapped person.
The Truth: Stephen Hawking was diagnosed with a motor neuron disease at age 21. He is completely paralyzed and communicates through a speech-generating device, yet these limitations are only physical. He is a world-renowned theoretical physicist and a published author; his work on black holes emitting radiation earned the theory the name "Hawking radiation." He was awarded the 2009 Presidential Medal of Freedom, the highest civilian award in the U.S. He has been married twice, has three children, and was the Lucasian Professor of Mathematics at the University of Cambridge from 1979 to 2009.
b) Some responses to this African American male staring up with an angry look on his face, in a black skull cap, with a gold chain around his neck, may include: he's
a rapper, he is threatening, he looks like he's in a ghetto part of town, and he might be dealing drugs, he's probably looking for trouble, he's a dropout, he's violent, he's uneducated.
The Truth: David Banner is a rapper, record producer, and actor. He graduated from Southern University in Louisiana, where he served as president of the Student Government Association and received a degree in business, and he pursued a master's degree in education at the University of Maryland. He was awarded a Visionary Award by the National Black Caucus of State Legislators for his work after Hurricane Katrina, and in 2007 he testified before Congress about racism and misogyny in hip-hop music.
c) Some responses to this young Asian American male in a suit and tie may include: he's a smart Asian guy, he works in an office, he's cute, he's the president of some company, he seems like a pushover, his grin gives away his weakness, he doesn't seem like he has a backbone, he's too "nice," he's had an easy life.
The Truth: Tommy Kono was an Olympic weightlifter in the 1950s and '60s and is the only Olympic weightlifter to have set world records in four different weight classes. He is a Japanese American from Sacramento, California, who had to relocate to an internment camp with his family during WWII. He began as a sickly child but endured those challenges, gained the Mr. Iron Man World title in 1954, and inspired Arnold Schwarzenegger's own career in the sport.
After reviewing these pictures, and discussing the assumptions vs. the realities, I will have the students add one more response on their t-chart, answering the following question:
"In 4-5 sentences, describe a time when you or someone you know was misjudged based on your appearance. What were the circumstances? How old were you? How did you feel? What was your reaction at the time?"
Since this question has much more personal content, I will tell the students that this paper will be handed to me on their way out of class as an "exit" slip. I will read them privately, comment on their responses, and hand them back the next day.
Lesson Plan Three: Getting "Through" the Text
As students are immersed in the reading of Shelley's Frankenstein, I will use the Jigsaw Cooperative Learning groups to allow students an insight into what and how the Creature is learning.
Students will be able to...
- Read summaries of the major works of literature the Creature finds
- Collaborate as a team to present their expert knowledge on their assigned reading
- Identify the main ideas and concepts in their assigned reading, as well as create questions regarding the reading
- Summary handouts of Paradise Lost, Sorrows of Werther, Plutarch's Lives
I will begin class by telling the students they will have a break from their reading of the novel as we stop at the point when the Creature discovers the bag of books while hiding in the hovel behind the De Laceys' cottage. Instead, they will step into the shoes of the Creature and will read the same three works the Creature was exposed to.
At this time, I will divide the class into three sections: 1-Paradise Lost. 2-Sorrows of Werther. 3-Plutarch's Lives.
1. Since these three groups will be very large, about 10-11 students per group, I will remind students that within their teams they will be divided into subgroups.
2. While all the students will read their assigned section, students will be subdivided into the following roles: Summarizers (3 students take notes on the main ideas of the reading and present these ideas to the class), Questioners (3 students create under-the-surface questions, i.e. why/how/could/should/would questions whose answers are inferences rather than direct textual citations, to be used in a brief discussion during the presentation), and Illustrators (3 students represent their section visually on a poster board, including an original title, 3 significant quotations from the reading, and a brief explanation of why these are significant).
3. I will pass out the summaries and will remind them that since these works are very dense and time consuming, they will read these summaries to get the gist of the literature the Creature was introduced to.
4. Once students are done with their tasks, each group presents their section to the class. Students will take notes on each presentation (other than their own, of course).
Once students have completed their presentations, we will have a brief discussion about these three books that Mary Shelley chose to have the Creature discover. I will ask my class what books they would substitute: which books would they give the Creature, especially since he is at this point alone, rejected, scared, and confused?
- Kara Rosenberg (U-32, Montpelier, VT)
Subject taught: English, Grade: 12
Question about Unit Assessment
I'm incredibly interested in this approach to Frankenstein as it meshes well with my understanding of the novel. I'm a little confused about the "five incidents" referred to. Does the author of the unit plan mean that she chooses incidents from the novel for the characters to interact over, or that she invents new incidents? Is the assignment designed to have the letters be written in response to each other (i.e., one student must write first and the other must react)? I would very much appreciate this information as I'm planning for a Frankenstein unit as we speak.
In this video, we will learn how to
describe the formation of covalent bonds in simple molecules.
First, what is a chemical bond? Well, our entire world is made up
of atoms. Atoms are the smallest unit of
ordinary matter that forms a chemical element. Atoms interact with each other to
form many different types of matter and countless compounds. The forces that hold atoms together
are called chemical bonds.
Chemical bonds often allow atoms to
be more stable together than when they are apart. The type of chemical bond formed
between atoms depends on what type of elements the atoms are. The known elements can be
categorized as metals, nonmetals, metalloids, and noble gases. When metal atoms transfer electrons
to nonmetal atoms during a chemical reaction, an ionic bond is formed. A covalent bond is formed when two
nonmetal atoms share electrons. This forms a discrete unit called a
molecule. Let’s take a closer look at
covalent bonds and how covalent molecules are formed.
We can define a covalent bond as a
chemical bond that is formed when two nonmetal atoms share one or more pairs of
electrons. Let’s take a look at how two
fluorine atoms share electrons. Each fluorine atom has seven
electrons in its outermost electron shell. Electrons found in the outermost
shell are called valence electrons. The fluorine molecule forms when
two fluorine atoms share a pair of valence electrons. The pair of electrons shared
between the atoms is the covalent bond, shown here as the electrons in the overlap
of the electron shells. In other diagrams, a single line
represents the shared pair of electrons.
When the fluorine atoms share a
pair of electrons, each then has eight outer electrons. This can be explained using the
octet rule. The octet rule states that atoms
tend to share enough electrons to have eight valence electrons. This gives the atoms the same
stable electron configuration as an atom of a noble gas.
While the octet rule is useful,
let’s have a look at another molecule where we’ll see an exception to the octet
rule. Hydrogen atoms only have one
valence electron that is found in the first electron shell. An atom of hydrogen and an atom of
fluorine can form a covalent bond by sharing a pair of electrons. In this molecule, fluorine has
eight valence electrons and satisfies the octet rule. But the hydrogen atom only has two
valence electrons. This does not obey the octet
rule. But with two valence electrons,
hydrogen’s valence shell is full. And it does have the same stable
electron configuration as the noble gas helium.
So far, we’ve seen molecules that
share just one pair of electrons between two atoms. Let’s look more closely at
molecules that have multiple shared pairs.
Water has the chemical formula
H2O. The chemical formula indicates that
a molecule of water has one oxygen atom and two hydrogen atoms. The oxygen atom has six valence
electrons, and each hydrogen atom has one valence electron. When one hydrogen atom forms a
covalent bond with the oxygen atom, the hydrogen atom will have two valence
electrons. And the oxygen atom will have seven
valence electrons. This allows hydrogen to acquire a
noble gas configuration.
However, the oxygen atom, with only
seven valence electrons, is not yet stable. When the second hydrogen atom forms
a covalent bond with the same oxygen atom, all three atoms will have a stable noble
gas electron configuration. Each of the covalent bonds in a
molecule of water is a single bond. A single covalent bond is formed
when one pair of electrons is shared between two atoms. So a water molecule has two single covalent bonds.
Another way to represent the water
molecule is by drawing the chemical symbols with a line connecting each hydrogen
atom to the oxygen atom. Each of these lines represents a
single bond and one shared pair of electrons. Lots of other molecules contain
more than one single covalent bond. Examples include ammonia, which
contains three single bonds, and methane, which contains four single bonds. In all of these bonds, a single
pair of electrons is shared between two atoms. However, it is possible for two
atoms to share more than one pair of electrons.
To help us understand the bonding
between atoms, we can use dot-and-cross diagrams. A dot-and-cross diagram assigns
dots to one atom’s electrons and crosses to another atom’s electrons to more clearly
see which electrons are shared. Let’s use dot-and-cross diagrams to
examine the bonding in a molecule of oxygen.
A molecule of oxygen contains two
oxygen atoms that each have six valence electrons. If the atoms share one pair of
electrons, then each oxygen atom has seven valence electrons. However, with only seven valence
electrons, the oxygen atoms are not stable. But if the oxygen atoms share two
pairs of electrons, then each oxygen atom will have eight valence electrons and a
stable noble gas electron configuration. When two atoms share two pairs of
electrons, a double covalent bond is formed. A double bond can be represented by
drawing two lines between the atoms’ chemical symbols.
Now let’s take a look at a molecule
of nitrogen. A molecule of nitrogen contains two
nitrogen atoms that each have five valence electrons. When three pairs of electrons are
shared between the atoms, then each nitrogen atom has eight valence electrons. When two atoms share three pairs of
electrons, a triple covalent bond is formed. A triple bond can be represented by
drawing three lines between the atoms’ chemical symbols.
For these molecules, the
dot-and-cross diagrams helped us to distinguish between each atom’s valence
electrons. In later lessons, determining which
atoms contribute electrons to shared pairs will tell us further information about
the molecules produced.
Oxides are just one of the many
different types of covalent compounds. Oxides are compounds that contain
oxygen and another element. In a covalently bonded oxide, the
other element must be a nonmetal. Thus, these compounds are called
nonmetal oxides. Examples of nonmetal oxides include
carbon dioxide and sulfur trioxide.
Nonmetal oxides are formed when a
nonmetal element reacts with oxygen. Let’s take a closer look at the
reaction between carbon and oxygen. Carbon has four outer shell valence
electrons. And as we have seen, an oxygen
molecule contains a double covalent bond. When these two substances react,
the double bond is broken. New double covalent bonds are
formed between the carbon atom and each oxygen atom to create the nonmetal oxide carbon dioxide.
Now that we’ve seen examples of
covalent bonds and covalent compounds, let’s take a look at some questions.
Which of the following is not a
covalent molecule? (A) CO2, (B) HCl, (C) H2O, (D) SO3, (E) MgO
A covalent molecule is a molecule
composed of nonmetal atoms joined by one or more covalent bonds. To determine which of the chemical
formulas given is not a covalent molecule, we can use the periodic table to see
which does not contain two nonmetal elements.
CO2 contains carbon and oxygen. Both of these elements are
nonmetals. So they are covalently bonded in
the molecule CO2. The question asks which of the
answer choices is not a covalent molecule. So we can eliminate answer choice
(A). HCl contains the nonmetals hydrogen
and chlorine. So HCl is a covalent molecule and
is not the answer to the question. We’ve already seen that hydrogen
and oxygen are nonmetals. Therefore, H2O is a covalent
molecule and is not the answer to the question. SO3 contains sulfur and oxygen. This is a covalent molecule, since
sulfur and oxygen are nonmetals. MgO contains the metal magnesium
and the nonmetal oxygen. Since this formula contains a metal
and a nonmetal, MgO is likely an ionic compound, not a covalent molecule.
So the chemical formula that does
not represent a covalent molecule is answer choice (E), MgO.
Although there are some rare
exceptions, the number of atoms of an element in a covalent compound depends on the
number of bonds that need to be formed to create full outer shells in all the
atoms. The electronic structures of two
elements are shown in the diagram. What is the likely chemical formula
of the covalent molecule formed between A and B? (A) A3B2, (B) AB2, (C) A3B, (D) AB, (E) A2B
This question asks us to determine
the chemical formula of a covalent molecule. Covalent molecules contain covalent
bonds. A covalent bond is a bond formed
when two atoms share one or more pairs of electrons. We are told in the question that
the atoms of A and B will form a number of bonds in order to fill both atoms' outer shells.
Atom A has one valence electron in
a shell that is full when it contains two electrons. Atom B has six electrons in a shell
that is full when it contains eight electrons. We can deduce the number of
covalent bonds needed between atoms of A and B by using a dot-and-cross diagram. Here we have assigned crosses to
represent the electrons of atom A and dots to represent the electrons of atom B.
An atom of A and an atom of B can
form a covalent bond by sharing a pair of electrons. With two electrons, atom A has a
full outer shell. However, atom B only has seven
valence electrons and needs one more electron to have a full outer shell. Atom B can form another covalent
bond with a second atom of A. Now, with the second shared pair of
electrons, atom B has eight valence electrons and a full outer shell.
So, in order for all of the atoms
in the covalent molecule to have full outer shells, the molecule must contain two
atoms of A and one atom of B. Therefore, the most likely chemical
formula of the covalent molecule formed between A and B is answer choice (E), A2B.
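To make the counting argument above concrete, here is a small Python sketch (not from the video) that predicts the atom ratio in a simple binary covalent molecule from the valence-electron counts. It assumes every bond is one shared pair and that each atom simply forms enough bonds to fill its outer shell (duet rule for a two-electron shell, octet rule for an eight-electron shell), which ignores double and triple bonds and other exceptions.

```python
from math import gcd

def binary_formula(valence_a, capacity_a, valence_b, capacity_b):
    """Predict the atom ratio in a simple binary covalent molecule.

    Each atom is assumed to form exactly enough single-bond pairs to fill
    its outer shell. Returns (atoms_of_A, atoms_of_B).
    """
    needed_a = capacity_a - valence_a   # bonds each A atom must form
    needed_b = capacity_b - valence_b   # bonds each B atom must form
    g = gcd(needed_a, needed_b)
    # The total number of shared pairs must satisfy both elements, so the
    # counts cross over: more A atoms are needed when B needs more bonds.
    return needed_b // g, needed_a // g

# Element A: 1 electron in a 2-electron shell; element B: 6 in an 8-electron shell.
print(binary_formula(1, 2, 6, 8))   # -> (2, 1), i.e. A2B (like H2O)
print(binary_formula(4, 8, 6, 8))   # -> (1, 2), i.e. CO2
```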
Now let’s review what we’ve
learned. Covalent bonds form when two
nonmetal atoms share electrons. A single covalent bond is formed
when one pair of electrons is shared between two atoms. A double covalent bond is formed
when two pairs of electrons are shared between two atoms. And a triple covalent bond is
formed when three pairs of electrons are shared between two atoms.
The octet rule allows us to predict
how bonds will be formed. It states that atoms tend to share
enough electrons to have eight valence electrons and the same electron configuration
as a noble gas atom. We can use dot-and-cross diagrams
to distinguish between the electrons of different atoms. Nonmetal oxides, like carbon
dioxide, are formed when oxygen reacts with another nonmetal. |
Posted in Finance, Accounting and Economics Terms, Total Reads: 65
Reflation is a strategy adopted by the government of a country to counter the effects of deflation. It is achieved by pumping more money into the system and by cutting taxes, both of which increase liquidity in the economy.
Output and the price level directly affect the economy. When the price level rises above the level corresponding to output, the economy experiences inflation; when the price level falls below that level, it experiences deflation. An excess of either inflation or deflation is considered bad for the economy, and every economy passes through periods of both.
Reflation is a response to deflation, so to understand reflation we first look more closely at deflation. Deflation occurs when the general price level falls continuously because supply increases while demand is lacking. A lack of demand leads to rising unemployment, since less output is needed and firms already hold excess stock. The result is high unemployment, lower incomes, and low output, which is bad for the economy because the economy is not growing.
In order to counter deflation, the government and the central bank undertake various monetary and fiscal measures to stimulate the economy. Stimulus increases either the funds available for investment or the income in people's hands (by providing more employment opportunities). This directly raises aggregate demand and leads to an increase in the general price level.
MONETARY POLICY MEASURE
The central bank reduces the interest rate at which it lends, which in turn lowers the rate at which commercial banks lend money. Borrowing and investment therefore increase, which raises employment and hence income and demand.
FISCAL POLICY MEASURE
The government may reduce the tax rate or increase public spending. A lower tax rate leaves the public with more money to spend, while more public spending creates more employment; both eventually lead to an increase in aggregate demand.
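As a rough numerical illustration (not part of the original article) of how these measures feed into aggregate demand, the sketch below applies the textbook Keynesian multipliers; the marginal propensity to consume and the spending figures are assumptions chosen only for the example.

```python
def spending_multiplier(mpc):
    """Keynesian multiplier for extra government spending: 1 / (1 - MPC)."""
    return 1.0 / (1.0 - mpc)

def tax_cut_multiplier(mpc):
    """Tax-cut multiplier: MPC / (1 - MPC); smaller, since part of the cut is saved."""
    return mpc / (1.0 - mpc)

mpc = 0.8            # assumed marginal propensity to consume
extra_spending = 100.0   # hypothetical stimulus amounts (e.g. billions)
tax_cut = 100.0

print("Rise in demand from public spending:", extra_spending * spending_multiplier(mpc))  # 500.0
print("Rise in demand from a tax cut:      ", tax_cut * tax_cut_multiplier(mpc))          # 400.0
```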
Ray tracing (graphics)
In computer graphics, ray tracing is a rendering technique for generating an image by tracing the path of light as pixels in an image plane and simulating the effects of its encounters with virtual objects. The technique is capable of producing a high degree of visual realism, more so than typical scanline rendering methods, but at a greater computational cost. This makes ray tracing best suited for applications where taking a relatively long time to render can be tolerated, such as in still computer-generated images, and film and television visual effects (VFX), but more poorly suited to real-time applications such as video games, where speed is critical in rendering each frame.
- 1 History
- 2 Algorithm overview
- 3 Detailed description of ray tracing computer algorithm and its genesis
- 4 Adaptive depth control
- 5 Bounding volumes
- 6 Interactive ray tracing
- 7 Computational complexity
- 8 See also
- 9 References
- 10 External links
The idea of ray tracing dates back as early as the 16th century, when it was described by Albrecht Dürer, who is credited with its invention. In 1982, Scott Roth used the related term ray casting in the context of computer graphics.
Optical ray tracing describes a method for producing visual images constructed in 3D computer graphics environments, with more photorealism than either ray casting or scanline rendering techniques. It works by tracing a path from an imaginary eye through each pixel in a virtual screen, and calculating the color of the object visible through it.
Scenes in ray tracing are described mathematically by a programmer or by a visual artist (typically using intermediary tools). Scenes may also incorporate data from images and models captured by means such as digital photography.
Typically, each ray must be tested for intersection with some subset of all the objects in the scene. Once the nearest object has been identified, the algorithm will estimate the incoming light at the point of intersection, examine the material properties of the object, and combine this information to calculate the final color of the pixel. Certain illumination algorithms and reflective or translucent materials may require more rays to be re-cast into the scene.
It may at first seem counterintuitive or "backward" to send rays away from the camera, rather than into it (as actual light does in reality), but doing so is many orders of magnitude more efficient. Since the overwhelming majority of light rays from a given light source do not make it directly into the viewer's eye, a "forward" simulation could potentially waste a tremendous amount of computation on light paths that are never recorded.
Therefore, the shortcut taken in ray tracing is to presuppose that a given ray intersects the view frame. After either a maximum number of reflections or a ray traveling a certain distance without intersection, the ray ceases to travel and the pixel's value is updated.
Calculate rays for rectangular viewport
- $E \in \mathbb{R}^3$ – eye position
- $T \in \mathbb{R}^3$ – target position
- $\theta \in [0, \pi]$ – field of view; for a human observer we can assume $\theta \approx \pi/2$ rad $= 90^\circ$
- $m, k \in \mathbb{N}$ – numbers of square pixels on the viewport in the vertical and horizontal direction
- $i, j \in \mathbb{N},\ 1 \le i \le k,\ 1 \le j \le m$ – indices of the actual pixel
- $\vec{v} \in \mathbb{R}^3$ – vertical vector indicating which way is up, usually $\vec{v} = [0, 1, 0]$; its roll component determines the viewport rotation around the viewing axis $ET$ through the viewport center $C$
The idea is to find the position of each viewport pixel center $P_{ij}$, which allows us to find the line going from the eye $E$ through that pixel and finally get the ray described by the point $E$ and the direction vector $\vec{R}_{ij} = P_{ij} - E$ (or its normalisation $\hat{R}_{ij}$). First we need to find the coordinates of the bottom-left viewport pixel center $\vec{p}_{1,1}$; each subsequent pixel is then found by shifting along the directions parallel to the viewport (the unit vectors $\hat{b}$ and $\hat{v}_n$ below) by the size of a pixel. The formulas below include the distance $d$ between the eye and the viewport; this value cancels during ray normalisation, so it can simply be set to 1.
Pre-calculations: first find and normalise the viewing vector $\vec{t}$ and the vectors $\hat{b}$, $\hat{v}_n$ which are parallel to the viewport:
$$\vec{t} = T - E, \qquad \hat{t} = \frac{\vec{t}}{\lVert\vec{t}\rVert}, \qquad \hat{b} = \frac{\vec{v} \times \vec{t}}{\lVert\vec{v} \times \vec{t}\rVert}, \qquad \hat{v}_n = \hat{t} \times \hat{b}.$$
Note that the viewport center is $C = E + \hat{t}\,d$. Next we calculate the viewport half-sizes, including the aspect ratio:
$$g_x = d \tan\frac{\theta}{2}, \qquad g_y = g_x \frac{m}{k},$$
and then the next-pixel shifting vectors along the directions parallel to the viewport ($\hat{b}$ and $\hat{v}_n$) and the bottom-left pixel center:
$$\vec{q}_x = \frac{2 g_x}{k - 1}\,\hat{b}, \qquad \vec{q}_y = \frac{2 g_y}{m - 1}\,\hat{v}_n, \qquad \vec{p}_{1,1} = \hat{t}\,d - g_x\,\hat{b} - g_y\,\hat{v}_n.$$
Calculations: the center of pixel $(i, j)$, as a vector from the eye, is $\vec{p}_{ij} = \vec{p}_{1,1} + \vec{q}_x (i - 1) + \vec{q}_y (j - 1)$, so the ray through it is $E + s\,\hat{R}_{ij}$, $s \ge 0$, with $\hat{R}_{ij} = \vec{p}_{ij} / \lVert\vec{p}_{ij}\rVert$.
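The derivation above translates directly into code. Below is a minimal sketch, not from the article, that generates one normalised ray direction per pixel using the same symbols; NumPy is assumed only for the vector algebra, and the loop is kept explicit for clarity rather than speed.

```python
import numpy as np

def viewport_rays(eye, target, up, fov, k, m):
    """Return an (m, k, 3) array of normalised ray directions, one per pixel.

    eye, target, up : length-3 arrays; fov : horizontal field of view in radians;
    k, m : pixel counts in the horizontal and vertical direction.
    """
    t = target - eye
    t_hat = t / np.linalg.norm(t)
    b = np.cross(up, t)                        # horizontal viewport direction
    b_hat = b / np.linalg.norm(b)
    v_hat = np.cross(t_hat, b_hat)             # vertical viewport direction

    d = 1.0                                    # eye-viewport distance (cancels on normalisation)
    gx = d * np.tan(fov / 2.0)                 # half-width of the viewport
    gy = gx * m / k                            # half-height (keeps pixels square)

    qx = (2.0 * gx / (k - 1)) * b_hat          # shift to the next pixel column
    qy = (2.0 * gy / (m - 1)) * v_hat          # shift to the next pixel row
    p11 = t_hat * d - gx * b_hat - gy * v_hat  # centre of the bottom-left pixel

    rays = np.empty((m, k, 3))
    for j in range(m):
        for i in range(k):
            p = p11 + qx * i + qy * j
            rays[j, i] = p / np.linalg.norm(p)
    return rays

# Hypothetical camera: eye at the origin, looking down +z, 90 degree field of view.
dirs = viewport_rays(np.array([0.0, 0.0, 0.0]), np.array([0.0, 0.0, 1.0]),
                     np.array([0.0, 1.0, 0.0]), np.pi / 2, k=640, m=480)
print(dirs.shape)   # (480, 640, 3)
```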
Detailed description of ray tracing computer algorithm and its genesis
What happens in (simplified) nature
In nature, a light source emits a ray of light which travels, eventually, to a surface that interrupts its progress. One can think of this "ray" as a stream of photons traveling along the same path. In a perfect vacuum this ray will be a straight line (ignoring relativistic effects). Any combination of four things might happen with this light ray: absorption, reflection, refraction and fluorescence. A surface may absorb part of the light ray, resulting in a loss of intensity of the reflected and/or refracted light. It might also reflect all or part of the light ray, in one or more directions. If the surface has any transparent or translucent properties, it refracts a portion of the light beam into itself in a different direction while absorbing some (or all) of the spectrum (and possibly altering the color). Less commonly, a surface may absorb some portion of the light and fluorescently re-emit the light at a longer wavelength color in a random direction, though this is rare enough that it can be discounted from most rendering applications. Between absorption, reflection, refraction and fluorescence, all of the incoming light must be accounted for, and no more. A surface cannot, for instance, reflect 66% of an incoming light ray, and refract 50%, since the two would add up to be 116%. From here, the reflected and/or refracted rays may strike other surfaces, where their absorptive, refractive, reflective and fluorescent properties again affect the progress of the incoming rays. Some of these rays travel in such a way that they hit our eye, causing us to see the scene and so contribute to the final rendered image.
Ray casting algorithm
The first ray tracing algorithm used for rendering was presented by Arthur Appel in 1968. This algorithm has since been termed "ray casting". The idea behind ray casting is to trace rays from the eye, one per pixel, and find the closest object blocking the path of that ray. Think of an image as a screen-door, with each square in the screen being a pixel. This is then the object the eye sees through that pixel. Using the material properties and the effect of the lights in the scene, this algorithm can determine the shading of this object. The simplifying assumption is made that if a surface faces a light, the light will reach that surface and not be blocked or in shadow. The shading of the surface is computed using traditional 3D computer graphics shading models. One important advantage ray casting offered over older scanline algorithms was its ability to easily deal with non-planar surfaces and solids, such as cones and spheres. If a mathematical surface can be intersected by a ray, it can be rendered using ray casting. Elaborate objects can be created by using solid modeling techniques and easily rendered.
Recursive ray tracing algorithm
The next important research breakthrough came from Turner Whitted in 1979. Previous algorithms traced rays from the eye into the scene until they hit an object, but determined the ray color without recursively tracing more rays. Whitted continued the process. When a ray hits a surface, it can generate up to three new types of rays: reflection, refraction, and shadow. A reflection ray is traced in the mirror-reflection direction. The closest object it intersects is what will be seen in the reflection. Refraction rays traveling through transparent material work similarly, with the addition that a refractive ray could be entering or exiting a material. A shadow ray is traced toward each light. If any opaque object is found between the surface and the light, the surface is in shadow and the light does not illuminate it. This recursive ray tracing added more realism to ray traced images.
Advantages over other rendering methods
Ray tracing-based rendering's popularity stems from its basis in a realistic simulation of light transport, as compared to other rendering methods, such as rasterization, which focuses more on the realistic simulation of geometry. Effects such as reflections and shadows, which are difficult to simulate using other algorithms, are a natural result of the ray tracing algorithm. The computational independence of each ray makes ray tracing amenable to a basic level of parallelization, but the divergence of ray paths makes high utilization under parallelism quite difficult to achieve in practice.
A serious disadvantage of ray tracing is performance (though it can in theory be faster than traditional scanline rendering depending on scene complexity vs. number of pixels on-screen). Scanline algorithms and other algorithms use data coherence to share computations between pixels, while ray tracing normally starts the process anew, treating each eye ray separately. However, this separation offers other advantages, such as the ability to shoot more rays as needed to perform spatial anti-aliasing and improve image quality where needed.
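As a rough illustration of that last point, here is a minimal sketch (not from the article) of per-pixel supersampling: several jittered eye rays are traced through one pixel and their colours averaged. The `trace` argument is a placeholder for whatever function returns the colour seen along a ray through an image-plane point; the toy lambda below only exists to make the snippet runnable.

```python
import random

def render_pixel(trace, i, j, samples=4):
    """Average several jittered eye rays per pixel (simple spatial anti-aliasing)."""
    r = g = b = 0.0
    for _ in range(samples):
        x = i + random.random()          # jitter the sample inside the pixel
        y = j + random.random()
        cr, cg, cb = trace(x, y)
        r, g, b = r + cr, g + cg, b + cb
    return (r / samples, g / samples, b / samples)

# Toy "scene": bright where int(x + y) is even, dark otherwise.
checker = lambda x, y: (1.0, 1.0, 1.0) if int(x + y) % 2 == 0 else (0.0, 0.0, 0.0)
print(render_pixel(checker, 10, 20))     # an averaged, softened pixel colour
```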
Although it does handle interreflection and optical effects such as refraction accurately, traditional ray tracing is also not necessarily photorealistic. True photorealism occurs when the rendering equation is closely approximated or fully implemented. Implementing the rendering equation gives true photorealism, as the equation describes every physical effect of light flow. However, this is usually infeasible given the computing resources required.
The realism of all rendering methods can be evaluated as an approximation to the equation. Ray tracing, if it is limited to Whitted's algorithm, is not necessarily the most realistic. Methods that trace rays, but include additional techniques (photon mapping, path tracing), give a far more accurate simulation of real-world lighting.
Reversed direction of traversal of scene by the rays
The process of shooting rays from the eye to the light source to render an image is sometimes called backwards ray tracing, since it is the opposite direction photons actually travel. However, there is confusion with this terminology. Early ray tracing was always done from the eye, and early researchers such as James Arvo used the term backwards ray tracing to mean shooting rays from the lights and gathering the results. Therefore, it is clearer to distinguish eye-based versus light-based ray tracing.
While the direct illumination is generally best sampled using eye-based ray tracing, certain indirect effects can benefit from rays generated from the lights. Caustics are bright patterns caused by the focusing of light off a wide reflective region onto a narrow area of (near-)diffuse surface. An algorithm that casts rays directly from lights onto reflective objects, tracing their paths to the eye, will better sample this phenomenon. This integration of eye-based and light-based rays is often expressed as bidirectional path tracing, in which paths are traced from both the eye and lights, and the paths subsequently joined by a connecting ray after some length.
Photon mapping is another method that uses both light-based and eye-based ray tracing; in an initial pass, energetic photons are traced along rays from the light source so as to compute an estimate of radiant flux as a function of 3-dimensional space (the eponymous photon map itself). In a subsequent pass, rays are traced from the eye into the scene to determine the visible surfaces, and the photon map is used to estimate the illumination at the visible surface points. The advantage of photon mapping versus bidirectional path tracing is the ability to achieve significant reuse of photons, reducing computation, at the cost of statistical bias.
An additional problem occurs when light must pass through a very narrow aperture to illuminate the scene (consider a darkened room, with a door slightly ajar leading to a brightly lit room), or a scene in which most points do not have direct line-of-sight to any light source (such as with ceiling-directed light fixtures or torchieres). In such cases, only a very small subset of paths will transport energy; Metropolis light transport is a method which begins with a random search of the path space, and when energetic paths are found, reuses this information by exploring the nearby space of rays.
Consider a simple example of a path of rays recursively generated from the camera (or eye) to the light source using the above algorithm. A diffuse surface reflects light in all directions.
First, a ray is created at an eyepoint and traced through a pixel and into the scene, where it hits a diffuse surface. From that surface the algorithm recursively generates a reflection ray, which is traced through the scene, where it hits another diffuse surface. Finally, another reflection ray is generated and traced through the scene, where it hits the light source and is absorbed. The color of the pixel now depends on the colors of the first and second diffuse surface and the color of the light emitted from the light source. For example, if the light source emitted white light and the two diffuse surfaces were blue, then the resulting color of the pixel is blue.
As a demonstration of the principles involved in ray tracing, consider how one would find the intersection between a ray and a sphere. This is merely the math behind the line–sphere intersection and the subsequent determination of the colour of the pixel being calculated. There is, of course, far more to the general process of ray tracing, but this demonstrates an example of the algorithms used.
In vector notation, the equation of a sphere with center $\vec{c}$ and radius $r$ is
$$\lVert \vec{x} - \vec{c} \rVert^2 = r^2.$$
Any point $\vec{x}$ on a ray starting from point $\vec{s}$ with direction $\vec{d}$ (here $\vec{d}$ is a unit vector) can be written as
$$\vec{x} = \vec{s} + t\,\vec{d},$$
where $t$ is the distance between $\vec{x}$ and $\vec{s}$. In our problem, we know $\vec{c}$, $r$, $\vec{s}$ (e.g. the position of a light source) and $\vec{d}$, and we need to find $t$. Therefore, we substitute for $\vec{x}$:
$$\lVert \vec{s} + t\,\vec{d} - \vec{c} \rVert^2 = r^2.$$
Let $\vec{v} = \vec{s} - \vec{c}$ for simplicity; then
$$t^2 (\vec{d} \cdot \vec{d}) + 2t\,(\vec{v} \cdot \vec{d}) + (\vec{v} \cdot \vec{v}) - r^2 = 0.$$
Knowing that $\vec{d}$ is a unit vector allows us this minor simplification:
$$t^2 + 2t\,(\vec{v} \cdot \vec{d}) + (\vec{v} \cdot \vec{v}) - r^2 = 0.$$
This quadratic equation has solutions
$$t = -(\vec{v} \cdot \vec{d}) \pm \sqrt{(\vec{v} \cdot \vec{d})^2 - (\vec{v} \cdot \vec{v} - r^2)}.$$
The two values of $t$ found by solving this equation are the two such that $\vec{s} + t\,\vec{d}$ are the points where the ray intersects the sphere.
Any value of $t$ which is negative does not lie on the ray, but rather on the opposite half-line (i.e. the one starting from $\vec{s}$ with direction $-\vec{d}$).
If the quantity under the square root (the discriminant) is negative, then the ray does not intersect the sphere.
Let us suppose now that there is at least one positive solution, and let $t$ be the minimal one. In addition, let us suppose that the sphere is the nearest object in our scene intersecting our ray, and that it is made of a reflective material. We need to find in which direction the light ray is reflected. The laws of reflection state that the angle of reflection is equal and opposite to the angle of incidence between the incident ray and the normal to the sphere.
The normal to the sphere is simply
$$\vec{n} = \frac{\vec{y} - \vec{c}}{\lVert \vec{y} - \vec{c} \rVert},$$
where $\vec{y} = \vec{s} + t\,\vec{d}$ is the intersection point found before. The reflection direction can be found by a reflection of $\vec{d}$ with respect to $\vec{n}$, that is
$$\vec{r} = \vec{d} - 2 (\vec{n} \cdot \vec{d})\,\vec{n}.$$
Thus the reflected ray has equation
$$\vec{y} + u\,\vec{r}, \qquad u \ge 0.$$
Now we only need to compute the intersection of the latter ray with our field of view, to get the pixel which our reflected light ray will hit. Lastly, this pixel is set to an appropriate color, taking into account how the color of the original light source and the one of the sphere are combined by the reflection.
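As a minimal, runnable sketch of the formulas above (an illustration, not a production ray tracer), the following Python code finds the nearest intersection of a ray with a sphere and then the mirror-reflected direction; NumPy is used only for the vector arithmetic.

```python
import numpy as np

def intersect_sphere(s, d, c, r):
    """Smallest non-negative t with s + t*d on the sphere, or None on a miss.

    s: ray origin, d: unit ray direction, c: sphere centre, r: radius.
    Solves t^2 + 2 t (v.d) + (v.v - r^2) = 0 with v = s - c.
    """
    v = s - c
    b = np.dot(v, d)
    disc = b * b - (np.dot(v, v) - r * r)
    if disc < 0:                               # discriminant < 0: ray misses the sphere
        return None
    sqrt_disc = np.sqrt(disc)
    for t in (-b - sqrt_disc, -b + sqrt_disc): # nearer root first
        if t >= 0:
            return t
    return None                                # sphere lies entirely behind the ray

def reflect(d, n):
    """Mirror-reflect direction d about the unit normal n: d - 2 (n.d) n."""
    return d - 2.0 * np.dot(n, d) * n

# Toy example: a ray along +z hitting a unit sphere centred at (0, 0, 3).
s = np.array([0.0, 0.0, 0.0])
d = np.array([0.0, 0.0, 1.0])
c = np.array([0.0, 0.0, 3.0])
t = intersect_sphere(s, d, c, 1.0)             # expect t = 2.0 (front of the sphere)
y = s + t * d                                  # intersection point
n = (y - c) / np.linalg.norm(y - c)            # outward surface normal
print(t, reflect(d, n))                        # 2.0 and the reflected direction (0, 0, -1)
```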
Adaptive depth control
Adaptive depth control means that the renderer stops generating reflected/transmitted rays when the computed intensity becomes less than a certain threshold. There must always be a set maximum depth or else the program would generate an infinite number of rays. But it is not always necessary to go to the maximum depth if the surfaces are not highly reflective. To test for this the ray tracer must compute and keep the product of the global and reflection coefficients as the rays are traced.
Example: let Kr = 0.5 for a set of surfaces. Then from the first surface the maximum contribution is 0.5, for the reflection from the second: 0.5 × 0.5 = 0.25, the third: 0.25 × 0.5 = 0.125, the fourth: 0.125 × 0.5 = 0.0625, the fifth: 0.0625 × 0.5 = 0.03125, etc. In addition we might implement a distance attenuation factor such as 1/D², which would also decrease the intensity contribution.
For a transmitted ray we could do something similar but in that case the distance traveled through the object would cause even faster intensity decrease. As an example of this, Hall & Greenberg found that even for a very reflective scene, using this with a maximum depth of 15 resulted in an average ray tree depth of 1.7.
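A minimal sketch of the idea, under the same simplifying assumption as the example above (every surface hit has the same reflection coefficient): the tracer keeps multiplying the cumulative contribution by Kr and stops spawning reflection rays once it falls below a threshold, well before the hard maximum depth. In a real renderer this product would be carried along the recursion rather than computed in a standalone loop.

```python
def max_useful_depth(kr, threshold=0.01, max_depth=15):
    """Depth at which the cumulative reflection product drops below a threshold.

    kr: reflection coefficient assumed identical for every surface hit,
    mirroring the worked example above (kr = 0.5 gives 0.5, 0.25, 0.125, ...).
    """
    contribution = 1.0
    depth = 0
    while depth < max_depth:
        contribution *= kr               # attenuate by the next surface's Kr
        if contribution < threshold:     # further bounces would barely matter
            break
        depth += 1
    return depth, contribution

print(max_useful_depth(0.5))   # recursion can stop long before the hard maximum of 15
```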
We enclose groups of objects in sets of hierarchical bounding volumes and first test for intersection with the bounding volume, and then only if there is an intersection, against the objects enclosed by the volume.
Bounding volumes should be easy to test for intersection, for example a sphere or box (slab). The best bounding volume will be determined by the shape of the underlying object or objects. For example, if the objects are long and thin then a sphere will enclose mainly empty space and a box is much better. Boxes are also easier for hierarchical bounding volumes.
Note that using a hierarchical system like this (assuming it is done carefully) changes the intersection computational time from a linear dependence on the number of objects to something between linear and a logarithmic dependence. This is because, for a perfect case, each intersection test would divide the possibilities by two, and we would have a binary tree type structure. Spatial subdivision methods, discussed below, try to achieve this.
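A minimal sketch of the bounding-volume test described above, with a single bounding sphere standing in for a hierarchy node; the `objects` list of callables is a stand-in for real primitives, and the bounding sphere is assumed to enclose all of them.

```python
import numpy as np

def sphere_hit_t(s, d, centre, radius):
    """Smallest non-negative ray parameter t hitting the sphere, or None."""
    v = s - centre
    b = np.dot(v, d)
    disc = b * b - (np.dot(v, v) - radius * radius)
    if disc < 0:
        return None
    for t in (-b - np.sqrt(disc), -b + np.sqrt(disc)):
        if t >= 0:
            return t
    return None

def intersect_group(s, d, bound_centre, bound_radius, objects):
    """Test the group's bounding sphere first; only test the enclosed objects on a hit."""
    if sphere_hit_t(s, d, bound_centre, bound_radius) is None:
        return None                      # whole group pruned with one cheap test
    hits = [t for t in (obj(s, d) for obj in objects) if t is not None]
    return min(hits) if hits else None

# Toy group: two small spheres inside one bounding sphere of radius 3 at z = 10.
objs = [lambda s, d: sphere_hit_t(s, d, np.array([0.0, 1.0, 10.0]), 0.5),
        lambda s, d: sphere_hit_t(s, d, np.array([0.0, -1.0, 10.0]), 0.5)]
print(intersect_group(np.array([0.0, 1.0, 0.0]), np.array([0.0, 0.0, 1.0]),
                      np.array([0.0, 0.0, 10.0]), 3.0, objs))   # hits the first sphere at t = 9.5
```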
Kay & Kajiya give a list of desired properties for hierarchical bounding volumes:
- Subtrees should contain objects that are near each other and the further down the tree the closer should be the objects.
- The volume of each node should be minimal.
- The sum of the volumes of all bounding volumes should be minimal.
- Greater attention should be placed on the nodes near the root since pruning a branch near the root will remove more potential objects than one farther down the tree.
- The time spent constructing the hierarchy should be much less than the time saved by using it.
Interactive ray tracing
The first implementation of an interactive ray tracer was the LINKS-1 Computer Graphics System built in 1982 at Osaka University's School of Engineering, by professors Ohmura Kouichi, Shirakawa Isao and Kawata Toru with 50 students. It was a massively parallel processing computer system with 514 microprocessors (257 Zilog Z8001's and 257 iAPX 86's), used for rendering realistic 3D computer graphics with high-speed ray tracing. According to the Information Processing Society of Japan: "The core of 3D image rendering is calculating the luminance of each pixel making up a rendered surface from the given viewpoint, light source, and object position. The LINKS-1 system was developed to realize an image rendering methodology in which each pixel could be parallel processed independently using ray tracing. By developing a new software methodology specifically for high-speed image rendering, LINKS-1 was able to rapidly render highly realistic images." It was used to create an early 3D planetarium-like video of the heavens made completely with computer graphics. The video was presented at the Fujitsu pavilion at the 1985 International Exposition in Tsukuba. It was the second system to do so after the Evans & Sutherland Digistar in 1982. The LINKS-1 was reported to be the world's most powerful computer in 1984.
The earliest public record of "real-time" ray tracing with interactive rendering (i.e., updates greater than a frame per second) was credited at the 2005 SIGGRAPH computer graphics conference as being the REMRT/RT tools developed in 1986 by Mike Muuss for the BRL-CAD solid modeling system. Initially published in 1987 at USENIX, the BRL-CAD ray tracer was an early implementation of a parallel network distributed ray tracing system that achieved several frames per second in rendering performance. This performance was attained by means of the highly optimized yet platform independent LIBRT ray tracing engine in BRL-CAD and by using solid implicit CSG geometry on several shared memory parallel machines over a commodity network. BRL-CAD's ray tracer, including the REMRT/RT tools, continues to be available and developed today as open-source software.
Since then, there have been considerable efforts and research towards implementing ray tracing in real-time speeds for a variety of purposes on stand-alone desktop configurations. These purposes include interactive 3D graphics applications such as demoscene productions, computer and video games, and image rendering. Some real-time software 3D engines based on ray tracing have been developed by hobbyist demo programmers since the late 1990s.
In 1999 a team from the University of Utah, led by Steven Parker, demonstrated interactive ray tracing live at the 1999 Symposium on Interactive 3D Graphics. They rendered a 35 million sphere model at 512 by 512 pixels, running at approximately 15 frames per second on 60 CPUs.
The OpenRT project included a highly optimized software core for ray tracing along with an OpenGL-like API in order to offer an alternative to the current rasterisation based approach for interactive 3D graphics. Ray tracing hardware, such as the experimental Ray Processing Unit developed by Sven Woop at the Saarland University, has been designed to accelerate some of the computationally intensive operations of ray tracing. On March 16, 2007, the University of Saarland revealed an implementation of a high-performance ray tracing engine that allowed computer games to be rendered via ray tracing without intensive resource usage.
On June 12, 2008 Intel demonstrated a special version of Enemy Territory: Quake Wars, titled Quake Wars: Ray Traced, using ray tracing for rendering, running in basic HD (720p) resolution. ETQW operated at 14-29 frames per second. The demonstration ran on a 16-core (4 socket, 4 core) Xeon Tigerton system running at 2.93 GHz.
At SIGGRAPH 2009, Nvidia announced OptiX, a free API for real-time ray tracing on Nvidia GPUs. The API exposes seven programmable entry points within the ray tracing pipeline, allowing for custom cameras, ray-primitive intersections, shaders, shadowing, etc. This flexibility enables bidirectional path tracing, Metropolis light transport, and many other rendering algorithms that cannot be implemented with tail recursion. OptiX-based renderers are used in Adobe AfterEffects, Bunkspeed Shot, Autodesk Maya, 3ds max, and many other renderers.
Imagination Technologies offers a free API called OpenRL which accelerates tail recursive ray tracing-based rendering algorithms and, together with their proprietary ray tracing hardware, works with Autodesk Maya to provide what 3D World calls "real-time raytracing to the everyday artist".
In 2014, a demo of the PlayStation 4 video game The Tomorrow Children, developed by Q-Games and SIE Japan Studio, demonstrated new lighting techniques developed by Q-Games, notably cascaded voxel cone ray tracing, which simulates lighting in real-time and uses more realistic reflections rather than screen space reflections.
AMD offers interactive ray tracing on top of OpenCL on Vega graphics cards through Radeon ProRender. The company is reportedly planning to release the second generation Navi GPUs with hardware-accelerated ray tracing in 2020.
Nvidia offers hardware-accelerated ray tracing in their GeForce RTX and Quadro RTX GPUs, currently based on the Turing architecture. The Nvidia hardware uses a separate functional block, publicly called an "RT core". This unit is somewhat comparable to a texture unit in size, latency, and interface to the processor core. The unit features BVH traversal, compressed BVH node decompression, ray-AABB intersection testing, and ray-triangle intersection testing.
Aside from its implementation through RTX graphics cards, ray tracing can, since October 2019, also be enabled on computers whose graphics cards support DirectX 11 and higher, through a software-based implementation.
Various complexity results have been proven for certain formulations of the ray tracing problem. In particular, if the decision version of the ray tracing problem is defined as follows – given a light ray's initial position and direction and some fixed point, does the ray eventually reach that point? – then the referenced paper proves the following results:
- Ray tracing in 3D optical systems with a finite set of reflective or refractive objects represented by a system of rational quadratic inequalities is undecidable.
- Ray tracing in 3D optical systems with a finite set of refractive objects represented by a system of rational linear inequalities is undecidable.
- Ray tracing in 3D optical systems with a finite set of rectangular reflective or refractive objects is undecidable.
- Ray tracing in 3D optical systems with a finite set of reflective or partially reflective objects represented by a system of linear inequalities, some of which can be irrational is undecidable.
- Ray tracing in 3D optical systems with a finite set of reflective or partially reflective objects represented by a system of rational linear inequalities is PSPACE-hard.
- For any dimension equal to or greater than 2, ray tracing with a finite set of parallel and perpendicular reflective surfaces represented by rational linear inequalities is in PSPACE.
- Beam tracing
- Cone tracing
- Distributed ray tracing
- Global illumination
- Gouraud shading
- List of ray tracing software
- Parallel computing
- Phong shading
- Progressive refinement
- Specular reflection
- Georg Rainer Hofmann (1990). "Who invented ray tracing?". The Visual Computer. 6 (3): 120–124. doi:10.1007/BF01911003.
- Appel A. (1968) Some techniques for shading machine renderings of solids. AFIPS Conference Proc. 32 pp.37-45
- Whitted T. (1979) An improved illumination model for shaded display. Proceedings of the 6th annual conference on Computer graphics and interactive techniques
- Tomas Nikodym (June 2010). "Ray Tracing Algorithm For Interactive Applications" (PDF). Czech Technical University, FEE.
- J.-C. Nebel. A New Parallel Algorithm Provided by a Computation Time Model, Eurographics Workshop on Parallel Graphics and Visualisation, 24–25 September 1998, Rennes, France.
- A. Chalmers, T. Davis, and E. Reinhard. Practical parallel rendering, ISBN 1-56881-179-9. AK Peters, Ltd., 2002.
- Aila, Timo and Samulii Laine, Understanding the Efficiency of Ray Traversal on GPUs, High Performance Graphics 2009, New Orleans, LA.
- Eric P. Lafortune and Yves D. Willems (December 1993). "Bi-Directional Path Tracing". Proceedings of Compugraphics '93: 145–153.
- Péter Dornbach. "Implementation of bidirectional ray tracing algorithm". Retrieved June 11, 2008.
- Global Illumination using Photon Maps Archived 2008-08-08 at the Wayback Machine
- Photon Mapping - Zack Waters
- Hall, Roy A.; Greenberg, Donald P. (November 1983). "A Testbed for Realistic Image Synthesis". IEEE Computer Graphics and Applications. 3 (8): 10–20. CiteSeerX 10.1.1.131.1958. doi:10.1109/MCG.1983.263292.
- "【Osaka University 】 LINKS-1 Computer Graphics System". IPSJ Computer Museum. Information Processing Society of Japan. Retrieved November 15, 2018.
- Defanti, Thomas A. (1984). Advances in computers. Volume 23 (PDF). Academic Press. p. 121. ISBN 0-12-012123-9.
- See Proceedings of 4th Computer Graphics Workshop, Cambridge, MA, USA, October 1987. Usenix Association, 1987. pp 86–98.
- "About BRL-CAD". Retrieved January 18, 2019.
- Piero Foscari. "The Realtime Raytracing Realm". ACM Transactions on Graphics. Retrieved September 17, 2007.
- Parker, Steven; Martin, William (April 26, 1999). "Interactive ray tracing". I3D '99 Proceedings of the 1999 Symposium on Interactive 3D Graphics (April 1999): 119–126. Retrieved October 30, 2019.
- Mark Ward (March 16, 2007). "Rays light up life-like graphics". BBC News. Retrieved September 17, 2007.
- Theo Valich (June 12, 2008). "Intel converts ET: Quake Wars to ray tracing". TG Daily. Retrieved June 16, 2008.
- Nvidia (October 18, 2009). "Nvidia OptiX". Nvidia. Retrieved November 6, 2009.
- "3DWorld: Hardware review: Caustic Series2 R2500 ray-tracing accelerator card". Retrieved April 23, 2013.3D World, April 2013
- Cuthbert, Dylan (October 24, 2015). "Creating the beautiful, ground-breaking visuals of The Tomorrow Children on PS4". PlayStation Blog. Retrieved December 7, 2015.
- GPUOpen Real-time Ray-tracing
- James, Dave (June 11, 2019). "AMD's second-gen RDNA GPUs will feature hardware-accelerated ray tracing in 2020". PCGamesN. Retrieved June 14, 2019.
- Warren, Tom (June 8, 2019). "Microsoft hints at next-generation Xbox 'Scarlet' in E3 teasers". The Verge. Retrieved October 8, 2019.
- Chaim, Gartenberg (October 8, 2019). "Sony confirms PlayStation 5 name, holiday 2020 release date". The Verge. Retrieved October 8, 2019.
- Rob Thubron (October 17, 2019). "World of Tanks enCore RT demo allows ray tracing on non-RTX graphics cards". TechSpot. Retrieved October 31, 2019.
- Matt Hanson (September 17, 2019). "Intel topples Nvidia's ray tracing monopoly in World of Tanks". TechRadar. Retrieved October 31, 2019.
- "Computability and Complexity of Ray Tracing". https://www.cs.duke.edu/~reif/paper/tygar/raytracing.pdf |
Are you overweight, underweight, or of normal weight? The Body Mass Index (BMI) is often used as a guide to assessing nutritional status. Find out here how to calculate your personal BMI and what exactly it tells you.
Body mass index
Height and weight provide useful information about a person’s current nutritional status. In adults these two body measurements are closely related, and together they form the basis of the Body Mass Index (BMI).
The body mass index (BMI) is calculated as body weight in kilograms divided by the square of height in meters: BMI = weight [kg] / (height [m])².
The "desirable" BMI also depends on age; children's BMI is not assessed using these adult reference values. For adults over 64 years, for example, a BMI of 24-29 is considered desirable.
The BMI Division
BMI is used for the classification of overweight. The World Health Organization (WHO) has classified body weight in adults according to BMI. The following table is independent of gender and age:
|low mass weight||<16|
|low moderate weight||16.00-16.99|
|low light weight||17.00-18.49|
|Class I obesity||30.00-34.99|
|Class II obesity||35.00-39.99|
|Class III obesity||≥ 40|
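As a small worked example (not part of the original article), the following sketch computes a BMI from the formula above and maps it onto the classes in the table; the band boundaries are taken directly from that table.

```python
def bmi(weight_kg, height_m):
    """Body Mass Index: weight in kilograms divided by the square of height in metres."""
    return weight_kg / (height_m ** 2)

def who_class(value):
    """Rough WHO class for an adult BMI, following the table above."""
    if value < 16:
        return "severe underweight"
    if value < 17:
        return "moderate underweight"
    if value < 18.5:
        return "mild underweight"
    if value < 25:
        return "normal weight"
    if value < 30:
        return "overweight"
    if value < 35:
        return "class I obesity"
    if value < 40:
        return "class II obesity"
    return "class III obesity"

value = bmi(70.0, 1.75)                     # example person: 70 kg, 1.75 m
print(round(value, 1), who_class(value))    # 22.9 normal weight
```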
Difficulties in determining BMI
However, there are some pitfalls in BMI. For one thing, it doesn’t distinguish between fat and lean mass: even with an increased BMI, malnutrition could still be present. For example, if muscle mass is massively reduced but fat mass increases at the same time.
Competitive athletes and bodybuilders who have a higher body weight due to their high muscle mass could also be wrongly classified as overweight with the help of BMI.
On the other hand, BMI cannot differentiate between fat mass and water. Water retained in a person's tissues increases body weight, which can lead to a misleadingly high BMI.
Other anthropometric parameters
The determination of BMI alone is not sufficient to assess a person’s body composition. In addition, it is recommended to consult various anthropometric parameters.
These include height, body weight, arm length, upper-arm muscle circumference, and skinfold thickness measured at various parts of the body.
Measurement of skin fold thickness
The skinfold thickness measurement measures the fat in the subcutaneous tissue, which constitutes approximately 50 percent of the reservoir fat, and is used to assess the amount of body fat.
Measurement of skin fold thickness is quick and easy to perform, but should be professionally performed by a physician.
Based on the sum of all measured skinfold thicknesses, the corresponding body fat percentage can be read from a table. Precision increases with the number of skinfold sites recorded and measured.
Arm Muscle Circumference
A rough estimate of muscle mass or fat can be made by measuring the arm muscle circumference. The circumference of the upper arm is measured at the midpoint between the shoulder joint and the elbow. The triceps skinfold is assumed to correspond to roughly twice the thickness of the fat layer of the mid-upper arm.
Properties of Matter, 8th Grade. Adapted from Matter: Building Blocks of the Universe (Prentice Hall). Created by Jim Barnaby. We use physical properties to differentiate between pepper and salt; there are many different physical properties of matter.
Properties of matter, defined: Matter is anything that has weight and takes up space. Knowing the properties of matter can help you pick the right materials for the job. If you are going on a canoe trip and want to take along some cold sodas, a Styrofoam cooler would be a good choice.
|"Atoms and Matter" crossword puzzle . Use the "Dream Journey Into the Atom" poster to complete the accompanying worksheet or use this printable version of the worksheet. "Atomic Structure" worksheet . Have students use the internet to do this "Chemistry Scavenger Hunt" . Have students to the "Atomic Structure and Theory" Magic Square.||2.3 – Earth materials have varied physical properties which make them useful in different ways. GRADE-LEVEL CONCEPT 1: u Soils can be described by their color, texture and capacity to retain water. GRADE-LEVEL EXPECTATIONS: 1. Soil is a mixture of pieces of rock (particles), living and once living things (humus), water and air.|
|Another property of solid materials that depends on the strength of the bonds between atoms or molecules is hardness. As in a solid, the atoms or molecules of a liquid are held together by attractive forces. But these forces are not great enough to hold the atoms or molecules in a fixed pattern...||STATES OF MATTER SCIENCE ACTIVITIESThis resource includes experiments, science notebook printables and much more for an effective states of Science TEKS 3.3C, 4.3C, and 5.3C ask students to connect grade-level appropriate science concepts with the history of science, science careers, and...|
|Topic Name. 11. Thermal Properties of matter. Question 11. 14. In an experiment on the specific heat of a metal, a 0.20 kg block of the metal at 150°C is dropped in a copper calorimeter (of water equivalent 0.025 kg) containing 150 cm3 of water at 27 °C. The final temperature is 40° C. Compute...||Lg google tv g3 firmware update|
|Grades 3-5. 3-PS2-1. Each force acts on one particular object and has both strength and a direction. An object at rest typically has multiple forces acting on it, but they add to give zero net force on the object. Forces that do not sum to zero can cause changes in the object’s speed or direction of motion. 3-PS2-2.||The kilogram is the SI unit of mass and it is the almost universally used standard mass unit. The associated SI unit of force and weight is the Newton, with 1 kilogram weighing 9.8 Newtons under standard conditions on the Earth's surface.|
|Oobleck: the Dr. Seuss Science Experiment: Oobleck is a classic science experiment that's perfect for entertaining both kids and adults. If you haven't seen it in action it's very fascinating stuff and before too long you'll have your hands covered with it, happily making a mess that can be …||Jan 01, 2015 · First Grade Lesson Plan Solids and Liquids First Grade Physical Sciences Standard 1. Materials come in different forms (states), including solids, liquids, and gases. As a basis for understanding this concept: a. Students know solids, liquids, and gases have different properties. Time Needed: Twenty minutes per state of matter covered. Each ...|
|What could you do to water that would change only its physical properties? Are there any ways you can think of to chemically change water? Baking soda is a chemical compound with the formula NaHCO3. What are the properties of the elements that make up this compound (Na, H, C, and O)?||K-2.ETS1-3 Analyze data from tests of two objects designed to solve the same problem to compare the strengths and weaknesses of how each performs PS1.A Different kinds of matter exist and many of them can be either solid or liquid, depending on temperature PS1.B Heating or cooling a substance may cause changes that can be observed|
|Since they are difficult to create, they are seldom studied in science projects. Areas of interest include special properties of the various states of matter, different methods of changing the states of matter, and experiments on adding factors that affect the rate that the matter changes its states.||Start studying Properties of Matter (grade 9 science). Learn vocabulary, terms, and more with flashcards, games, and other study tools.|
|3. 4. 5. E v i d e n c e E v i d e n c e i s information gathered when scientists make systematic observations or set up an experiment to collect and record data. The data recorded is then analyzed by the scientists in order to base conclusions on the evidence collected. The collection of evidence is a critical part of a scientific investigation.||Thermal Properties of Matter. The Matter is defined as any substance that has mass and occupies space. Also, there are other temperature-related properties of matter like thermal conductivity, thermal diffusivity and so on. In this chapter, we'll learn about the following topics|
|English Language Arts Standards » Reading: Informational Text » Grade 3 » 3 Print this page. Describe the relationship between a series of historical events, scientific ideas or concepts, or steps in technical procedures in a text, using language that pertains to time, sequence, and cause/effect.||Second grade Third grade, Fourth grade 2 more ..., Third grade, Fourth grade. 55,236 Views. 2 Favorites. Objective: Students will explore the existence and properties of gas through an experiment in blowing a balloon up inside of an empty water bottle.|
|Properties of Matter; Earth Science. Adopt A City – Mini Weather Unit. Tasks 1, 2, & 3; Tasks 4, 5, & 6; Tasks 7, 8, & 9; Tasks 10, 11, & 12. Bernoulli’s Principle; Tasks 13, 14, & 15; Space Science. Moon; Constellations (Star Lab) Life Science; Planbook “Junkyard Wars” 6th Grade Science – Lessons Overview. 6th Grade Daily Lessons ...||Extensive properties, on the other hand, show an additive relationship that builds with more matter. Both intensive and extensive properties are usually only true when the amount of the sample and its divided amounts don't affect a physical or chemical process.|
|Describe different properties of matter. Describe the properties of a solid, a liquid, and a gas. Describe the properties of a solid and a liquid. Describe the properties of gases and liquids. Understand the transitions between states of matter. Understand how matter changes from one state to another and what affects the change.|
|2 Experiments in Materials Science and Engineering 14. Bring yourself to each lab. Attendance is a must by every student and absence will negatively impact your lab grade unless it is excused absence for extenuating circumstance. Moreover, you have to attend on time at the beginning of a lab. Do not come late to labs. 15.||Dec 27, 2017 · Energy and Matter (2-12) Within a natural or designed system, the transfer of energy drives the motion and/or cycling of matter. (6-8) The total amount of energy and matter in closed systems is conserved. (9-12) Changes of energy and matter in a system can be described in terms of energy and matter flows into, out of, and within that system. (9-12)|
|Mystery Science offers an open-and-go elementary science unit suitable for 1st, 2nd, and 3rd grade covering Properties & Phases of Matter||Third grade. A.3 Compare properties of materials.|
|Feb 02, 2017 · Solids, Liquids, Gas – Oh My! (Ages 3-6) A simple activity to help young kids get their heads around states of matter. Let little scientists invent different ways to compare the balloons and see the differences for themselves like crash tests, tapping each, comparing weights, and more!||Learn All About the Properties of Metals. See what properties distinguish metals from other The properties of these different metals can be combined by mixing two or more of them together. Join our list for the latest on products, promotions, and experiments and receive FREE economy shipping...|
|Introduction: The Grade 5 Physical Science Unit focuses on matter and its properties. All of the Grade 5 California Science Content Standards for Physical The Grade 5 Physical Science Unit is presented to students through a series of investigations, experiments, active learning experiences, questions...||General Chemistry: Organic & Biochemistry Miscellaneous: Introduction to the Study of Chemistry - Atoms, Elements, Compounds, Chemical Properties, Physical Properties|
|Experiments. Why not try a fun science experiment right now? In order for your science experiment to be safe and successful, be sure to: Get your parent's or teacher's permission, and their help.||Properties of Matter Anchor Chart. New Product: Click the picture to go to our store! Shapes Worksheet Kindergarten 2nd Grade Worksheets Science Worksheets Science Activities For Kids Kindergarten Science Science Classroom Science Lessons Teaching Science Matter Worksheets.|
|Matter has two fundamental properties: matter takes up space and matter has mass. Clarification for grades 3-5: In grade 3, introduce the term mass as compared to the term weight. This lesson contains a lab experiment that tests the timing at which butter changes to its melting point while using...||Properties of Matter. This game covers vocabulary for Chapter 5, Grade 3, Silver, Burdett and Ginn Science Series.|
|3-5 Science . Matter: Properties and Change . Essential Standard . Clarifying Objectives 3.P.2 Understand the structure and properties of matter before and after they undergo a change. 3.P.2.1 Recognize that air is a substance that surrounds us, takes up space and has mass. 3.P.2.2 Compare solids, liquids, and gases based on their basic properties.||Therefore, teachers of science need to help students recognize the properties of objects, as emphasized in grade-level content standards, while helping them to understand systems. As another example, students in middle school and high school view models as physical copies of reality and not as conceptual representations.|
|How Does the Experiment Work? Heat can move in three ways: conduction, convection and radiation. In this experiment, the heat was transferred by means of conduction. Conduction is the transfer of heat from one particle of matter to another without the movement of matter itself. As matter is heated, the particles that make up the matter begin to ...||Properties of Matter (5th grade). Properties of Matter. The Noble Gases - Reactivity Series. Matter and Change.|
|The general properties of matter include mass, volume, (a) capacity and density (b) weight and density (c) size and density. 2. The formula used to measure the surface area of irregular objects is called (a) π r2(b) base x height (c) estimation. 3.||the john muir exhibit - lessons - science - grade 2. John Muir Study Guide Science Lesson Plan. Grade Two Soil. PDF Version of this Lesson Plan. Although John Muir is most renowned for his work as a naturalist, he also was a successful fruit rancher for many years.|
|It doesn’t matter what height you drop the ball from, as long as it’s the same each time. Be sure to simply drop it – don’t bounce it or throw it. Step 3: Drop two balls (one at a time) and record the height that they bounced after hitting the floor.||Oobleck: the Dr. Seuss Science Experiment: Oobleck is a classic science experiment that's perfect for entertaining both kids and adults. If you haven't seen it in action it's very fascinating stuff and before too long you'll have your hands covered with it, happily making a mess that can be …|
|Course Overview The Grade 2 Science course investigates animal life, plant life, weather, water, and physics, as well as technology and astronomy. Engaging on-camera experiments and examples help deepen students’ understanding of the concepts presented. Course topics include: Plants and Animals Food Chains and Life Cycles Earth’s Resources Weather and Seasons Matter, Energy, Forces, Motion ...||Observable Physical Properties of Matter, Physical and Chemical properties of matter, A series of free Science Lessons for 7th Grade and 8th Grade, KS3 and Checkpoint, GCSE and IGCSE Observable Physical Properties of Matter Color, Texture, Luster, Shape, Smell, Taste, Hardness.|
|matter. Because such a small percentage of particles were redirected, he reasoned that this clump of matter, called the nucleus, must occupy only a small fraction of the atom’s total space. 3. Ibuprofen, C 13H 18O 2, that is manufactured in Michigan contains 75.69% by mass carbon, 8.80% hydrogen, and 15.51% oxygen.||1.3 Names and formulae of substances 23 Activity 3 28 Exercise 1 29 1.4 Properties of materials 31 Practical activity 3 32 Activity 4 34 1.4.1 Electrical conductors and insulators 35 Experiment 1 35 1.4.2 Thermal conductors and insulators 37 Experiment 2 37 1.4.3 Magnetic and non-magnetic materials 39 Experiment 3 39 Exercise 2 41 Summary of ...|
|GRADE 1, Unit 2 • How does studying the attributes/properties of objects help us to understand them, organize them, and answer questions about them? • How can we communicate the results of our science experiments to other people? GRADE 1, Unit 3 • How do we know that objects or materials can exist as solids,|
May 11, 2020 · So, in addition to these free worksheets all about states of matter, I have included free activities, free learning tools, and experiments. Enjoy these FREE worksheets all about states of matter: Various States of Matter FREE PDFs (Super Teacher Worksheets); 3 States of Matter FREE Worksheets (Worksheet Place).

States of Matter is an educational activity for kids to learn about the different properties of matter.

Jan 31, 2020 · These will not only make learning the states of matter fun, but also teach them in a hands-on and visual way! These experiments go perfectly with my States of Matter Chemistry Study Pack! Get your copy here! 15 States of Matter Science Experiments for Kids: Oobleck Science; Sink or Float Experiment; Experiment with Solids, Liquids, and Polymers.

Teaching Tools & Resources - All About Water. Our Water Crisis Lesson Plans, for grades K-12, are packed with engaging lessons for students. Written by a certified teacher with a busy professional in mind, we're sure you'll find the activities useful out-of-the-box or as a great head start.
3. Guide students to use different observations of properties to group a plastic lid, a coin, and a metal key in different ways. Hold up a round plastic lid, a coin, and a key. Ask students to describe two or three of the properties of each object. If students can’t come up with descriptive words, show them that the plastic lid is flexible. Jan 27, 2016 · In this cool experiment –Dry Ice Soap Tower is a fun experiment where kids can build a huge chain of popsicles and lock it using a specific pattern. You Might be Interested in Following Posts : Top 10 Science Experiments for Class 6 Kids; Top 10 Fun Science Experiments for Kids at home with Youtube Videos Matter exists in different states. We call them solid, liquid, gas, and plasma. Each of these states has special properties. These states of matter and their properties have been explained in a clear and complete manner in this video. Having seen this, students will be able to identify different states of matter and their properties. Therefore, teachers of science need to help students recognize the properties of objects, as emphasized in grade-level content standards, while helping them to understand systems. As another example, students in middle school and high school view models as physical copies of reality and not as conceptual representations.
Water 3: Melting and Freezing allows students to investigate what happens to the amount of different substances as they change from a solid to a liquid or a liquid to a solid. Motivation. Begin this lesson by dividing the class into pairs. Assign each student a role: writer or illustrator. Students will exchange roles during the course of the ...
New science subscription for kids with VR experience, exciting experiments and educative instructions. My son is 12 and MEL Chemistry continues to inspire and encourage his fascination with chemistry. The experiments with expanded explanations through the smartphone app and website...
Jul 15, 2014 · Create States of Matter Anchor Chart as a class. Solid, Liquid, Gas StudyJams Video Show students marbles in a petri dish to represent molecules in each of the 3 states of matter. Students act out the states of matter in groups, moving and spreading out to represent the states.
Many experiments can be carried out in the laboratory of inorganic chemistry. Thus, if we want to obtain hydrogen chloride (HCl), which is 1. The laboratory was lit up very well. 2. This substance is to be heated to a high temperature. 3. In this experiment we were to find out all the properties of this...

Properties of Light: #1 Light travels in a straight line. #2 Light reflects off smooth, shiny, flat surfaces in a regular reflection pattern. #3 Light reflects off rough, shiny, uneven surfaces in a diffuse reflection pattern.
Properties of Sound: #1 Sound travels. #2 Sound can be reflected (bounce). #3 Sound can be absorbed (not bounce).

3 Structural members A arm form the main structure. A cantilever is a beam which is supported at one end only. Arches Columns are vertical structural members. A buttress is a structure built against or projecting from a wall which serves to support or reinforce the wall. cantilever is a structural member which sticks out like an m
Even and odd functions
In mathematics, even functions and odd functions are functions which satisfy particular symmetry relations, with respect to taking additive inverses. They are important in many areas of mathematical analysis, especially the theory of power series and Fourier series. They are named for the parity of the powers of the power functions which satisfy each condition: the function f(x) = x^n is an even function if n is an even integer, and it is an odd function if n is an odd integer.
- 1 Definition and examples
- 2 Some facts
- 2.1 Continuity and differentiability
- 2.2 Algebraic properties
- 2.3 Calculus properties
- 3 Harmonics
- 4 See also
- 5 Notes
- 6 References
Definition and examples
The concept of evenness or oddness is defined for functions whose domain and image both have an additive inverse. This includes additive groups, all rings, all fields, and all vector spaces. Thus, for example, a real-valued function of a real variable could be even or odd, as could a complex-valued function of a vector variable, and so on.
A function f is even if f(-x) = f(x) for every x in its domain, and odd if f(-x) = -f(x) for every x in its domain. Standard examples of even functions are x^2, |x|, and cos x; standard examples of odd functions are x, x^3, and sin x. The examples considered here are real-valued functions of a real variable, to illustrate the symmetry of their graphs.
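As a concrete illustration of the definitions, the short sketch below (Python, standard library only; the sample grid and the particular functions are arbitrary choices, not taken from the text) checks whether f(-x) equals f(x) or -f(x) at a set of sample points:

```python
import math

def is_even(f, xs):
    # f(-x) == f(x) at every sample point
    return all(math.isclose(f(-x), f(x), abs_tol=1e-12) for x in xs)

def is_odd(f, xs):
    # f(-x) == -f(x) at every sample point
    return all(math.isclose(f(-x), -f(x), abs_tol=1e-12) for x in xs)

samples = [0.1 * k for k in range(1, 50)]

print(is_even(lambda x: x**2, samples))   # True:  x^2 is even
print(is_even(math.cos, samples))         # True:  cos is even
print(is_odd(lambda x: x**3, samples))    # True:  x^3 is odd
print(is_odd(math.sin, samples))          # True:  sin is odd
print(is_even(lambda x: x + 1, samples))  # False: x + 1 is neither even nor odd
print(is_odd(lambda x: x + 1, samples))   # False
```

Such a pointwise test is only a sanity check, of course; evenness and oddness are properties of the whole function, not of finitely many samples.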
Continuity and differentiability
A function's being odd or even does not imply differentiability, or even continuity. For example, the Dirichlet function is even, but is nowhere continuous. Properties involving Fourier series, Taylor series, derivatives and so on may only be used when they can be assumed to exist.
- If a function is both even and odd, it is equal to 0 everywhere it is defined.
- If a function is odd, the absolute value of that function is an even function.
Properties involving addition and subtraction
- The sum of two even functions is even, and any constant multiple of an even function is even.
- The sum of two odd functions is odd, and any constant multiple of an odd function is odd.
- The difference between two odd functions is odd.
- The difference between two even functions is even.
- The sum of an even and odd function is neither even nor odd, unless one of the functions is equal to zero over the given domain.
Properties involving multiplication and division
- The product of two even functions is an even function.
- The product of two odd functions is an even function (a brief spot-check of the product and composition rules follows the composition list below).
- The product of an even function and an odd function is an odd function.
- The quotient of two even functions is an even function.
- The quotient of two odd functions is an even function.
- The quotient of an even function and an odd function is an odd function.
Properties involving composition
- The composition of two even functions is even.
- The composition of two odd functions is odd.
- The composition of an even function and an odd function is even.
- The composition of either an odd or an even function with an even function is even (but not vice versa).
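A brief spot-check of the product and composition rules above; this is a sketch in Python, and the particular functions and sample grid are illustrative choices:

```python
import math

xs = [0.1 * k for k in range(1, 50)]
is_even = lambda f: all(math.isclose(f(-x), f(x), abs_tol=1e-12) for x in xs)
is_odd  = lambda f: all(math.isclose(f(-x), -f(x), abs_tol=1e-12) for x in xs)

even_f = lambda x: x**2   # even
odd_f  = lambda x: x**3   # odd

print(is_even(lambda x: odd_f(x) * odd_f(x)))   # True: odd * odd is even
print(is_odd(lambda x: even_f(x) * odd_f(x)))   # True: even * odd is odd
print(is_even(lambda x: even_f(odd_f(x))))      # True: even of odd is even
print(is_odd(lambda x: odd_f(odd_f(x))))        # True: odd of odd is odd
```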
Other algebraic properties
- Any linear combination of even functions is even, and the even functions form a vector space over the reals. Similarly, any linear combination of odd functions is odd, and the odd functions also form a vector space over the reals. In fact, the vector space of all real-valued functions is the direct sum of the subspaces of even and odd functions. In other words, every function f(x) can be written uniquely as the sum of an even function and an odd function:
- fe(x) = (f(x) + f(-x))/2 is even, and
- fo(x) = (f(x) - f(-x))/2 is odd. For example, if f is exp, then fe is cosh and fo is sinh; a short numerical check of this decomposition follows this list.
- The even functions form a commutative algebra over the reals. However, the odd functions do not form an algebra over the reals, as they are not closed under multiplication.
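The decomposition itself is easy to compute. A minimal sketch in Python (the choice of exp and of the sample points is only for illustration) forms fe and fo and confirms the exp = cosh + sinh example:

```python
import math

def even_part(f):
    # fe(x) = (f(x) + f(-x)) / 2
    return lambda x: (f(x) + f(-x)) / 2.0

def odd_part(f):
    # fo(x) = (f(x) - f(-x)) / 2
    return lambda x: (f(x) - f(-x)) / 2.0

fe = even_part(math.exp)
fo = odd_part(math.exp)

for x in (0.0, 0.5, 1.0, 2.0):
    assert math.isclose(fe(x), math.cosh(x))
    assert math.isclose(fo(x), math.sinh(x), abs_tol=1e-12)
    assert math.isclose(fe(x) + fo(x), math.exp(x))
print("exp = cosh + sinh at the sample points")
```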
Basic calculus properties
- The derivative of an even function is odd.
- The derivative of an odd function is even.
- The integral of an odd function from −A to +A is zero (where A is finite, and the function has no vertical asymptotes between −A and A).
- The integral of an even function from −A to +A is twice the integral from 0 to +A (where A is finite, and the function has no vertical asymptotes between −A and A. This also holds true when A is infinite, but only if the integral converges). Both integral properties are illustrated numerically in the sketch after this list.
- The Maclaurin series of an even function includes only even powers.
- The Maclaurin series of an odd function includes only odd powers.
- The Fourier series of a periodic even function includes only cosine terms.
- The Fourier series of a periodic odd function includes only sine terms.
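The two integral properties are easy to see numerically. A rough sketch in Python, using a hand-rolled midpoint rule rather than any particular library; A = 2 and the integrands are arbitrary choices:

```python
def midpoint_integral(f, a, b, n=100_000):
    # simple midpoint-rule approximation of the integral of f over [a, b]
    h = (b - a) / n
    return sum(f(a + (i + 0.5) * h) for i in range(n)) * h

A = 2.0
odd_f  = lambda x: x**3   # odd, no vertical asymptotes on [-A, A]
even_f = lambda x: x**2   # even

print(midpoint_integral(odd_f, -A, A))       # ~0
print(midpoint_integral(even_f, -A, A))      # ~5.3333 (= 16/3)
print(2 * midpoint_integral(even_f, 0, A))   # ~5.3333, i.e. twice the 0..A integral
```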
Harmonics
In signal processing, harmonic distortion occurs when a sine wave signal is sent through a memoryless nonlinear system, that is, a system whose output at time t depends only on the input at that same time t and not on the input at any previous time. Such a system is described by a response function f. The type of harmonics produced depends on the response function f:
- When the response function is even, the resulting signal will consist of only even harmonics of the input sine wave;
- When it is odd, the resulting signal will consist of only odd harmonics of the input sine wave;
- When it is asymmetric, the resulting signal may contain either even or odd harmonics;
- Simple examples are a half-wave rectifier, and clipping in an asymmetrical class-A amplifier.
Note that this does not hold true for more complex waveforms. A sawtooth wave contains both even and odd harmonics, for instance. After even-symmetric full-wave rectification, it becomes a triangle wave, which, other than the DC offset, contains only odd harmonics.
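The even/odd harmonic rule is straightforward to demonstrate with a discrete Fourier transform. A small sketch follows, assuming NumPy is available; the signal length, the 4-cycle fundamental, and the polynomial response functions are illustrative choices, not taken from the text:

```python
import numpy as np

n = 1024
t = np.arange(n) / n
x = np.sin(2 * np.pi * 4 * t)   # input sine: 4 cycles across the window (fundamental = bin 4)

for name, response in (("even response x^2", lambda v: v**2),
                       ("odd response  x^3", lambda v: v**3)):
    spectrum = np.abs(np.fft.rfft(response(x))) / n
    # harmonics of the fundamental sit at bins 4, 8, 12, ...; bin 0 is the DC term
    present = [k for k in range(5) if spectrum[4 * k] > 1e-6]
    print(name, "-> harmonics present:", present)

# Expected output under these assumptions:
#   even response x^2 -> harmonics present: [0, 2]
#   odd response  x^3 -> harmonics present: [1, 3]
```

Since sin² θ = (1 − cos 2θ)/2 and sin³ θ = (3 sin θ − sin 3θ)/4, the DC/second-harmonic and first/third-harmonic results are exactly what trigonometry predicts.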
See also
- Hermitian function for a generalization in complex numbers
- Taylor series
- Fourier series
- Holstein–Herring method
- Parity (physics)
Notes
- Gelfand 2002, p. 11
- Gelfand 2002, p. 72
References
- Ask the Doctors: Tube vs. Solid-State Harmonics
Blue stragglers are stars that are observed to be brighter and bluer than we Earthlings would expect, since these characteristics make the stars appear younger than they actually are.
No one's been able to pin down how blue stragglers form. But a group of scientists report a theory in this week's issue of the journal Nature based on new observations.
Most blue stragglers have a companion star; the two form a binary system. But the companion is typically in a wider orbit than what is usually found in binaries. Blue stragglers are hotter than red stars, and are fairly rare.
Aaron Geller and colleagues at Northwestern University observed a cluster of stars that all formed at about the same time - 7 billion years ago - with 21 blue stragglers. The scientists looked at companion stars that are about half the mass of our sun. Blue stragglers, on the other hand, tend to be more massive than our sun.
They believe blue stragglers form in a process called mass transfer. This happens when a red giant, a large star that's relatively old, has an outer envelope of material that is no longer gravitationally bound to the star. The red giant transfers mass onto the blue star in the binary system. Material that comes from the red giant forms an accretion disk around the blue star.
"It was a pretty happy star living on its own, with this binary companion, until its companion grew to become a giant and then started dumping all of this mass onto it," Geller said.
This material can be used as fuel that the bluer star burns at its core, which keeps it alive longer, Geller said. Eventually all of the outer layer from the red giant will go to the blue star, turning the red giant into a white dwarf, and the accretion disk around the blue star will go away.
"This bluer star will be [a] blue straggler. It will [be] more massive than it was before, because it's gained all this material, and all this material is allowing it to stay bluer, brighter, longer than it should have," Geller said.
Scientists are observing the final state of this mass transfer process - systems with a blue straggler and white dwarf companion - but have not actually seen the transformation happen.
Geller and colleagues have not discovered any planets orbiting blue stragglers, but NASA's Kepler mission may come up with something in the future, Geller said.
all formed at about the same time – 7 billion years ago –
You are out of your minds.
Blue stars exist, god does not. Proof for one, none for the other. The end!
Each of these does not have to devolve into an argument about the existence of God. To ignore what is observed is ignorant. To realize that what we think of as mysteries God has figured out, is priceless.
I just watched a Star trek TNG episode about this same thing. There was a star that was feeding off a red giant and every like 200 years it gathers enough matter and then reignites bigger and brighter and of course blue. Was a good visual of what they are talking about in this article.
I love science but the question I have is, if stars and galaxies that are farther away are moving faster, doesn't that just mean that they were moving faster way back in the past? Couldn' t it be that they were moving faster in the past "shortly" after the Big Bang and then slowed down over time?
this sounds like more made up crap I can't relate to
and it's probably wrong anyway
dumber and dumber!!!!
The stars are always a great topic for discussion. That said, Big Bang Creationism is on the way out as Plasma Cosmology becomes more sophisticated.
When we were young we use to play outside till late at night. My father who wanted us in the house before the sun sets will never force us to come home but instead told us a story about him and a Philippine legend called a “Tikbalang”. Considering a “Tikbalang” is just a legend, I can honestly say that I have not seen one in person but only in picture .
He narrated a story about him challenging a “Tikbalang” to a fight to the death and the “Tikbalang” miserably lost. The “ Tikbalang” before he crawled back to his den, according to my father, curse and said that he could never beat my dad but promised to come back to take revenge on his future children. And when the sun sets that is when the “Tikbalang” will come out from his den and hunt us.
In short, we are always home before the sun sets and if we happen to forget the time he just mention in a well mannered voice to remember the “Tikbalang” and we come running home.
At the beginning I did not have any reason not to believed the story because it came from my dad, and I respected and believed everything he said. As I aged and got educated, I came to realized that is was just a story concocted by my father to keep us in line. But my sister thought she could use the same story to keep her children disciplined, she kept telling the story to her children over and over to the point where I think she believed that it really happened.
I can compare my Tikbalang’s story to the church version of hell. Nobody has seen hell but fear that it could be true, and just to play it safe they blindly accept what the church teaches. Others like my sister will gladly tell her children the same story, not that she believed in it, but just to keep the children in line just like us when we were young. Others like me cast doubt on the story but will not challenge my dad because there no harm done.
I never question the bible nor I believed on the story completely. To me the old testament was a story handed from one generation to another to keep the Jews in line. To me there was no harm done by the old testament so I treated it like any other books, it is just a collection of good stories.
The same as the new testament, I will never question Jesus or his teachings because I believed that he really existed and It does not harm me to read the new testament.
There will be Christians who are like my sister, they will teach heaven and hell to their children to keep them disciplined. They will tell the story of Heaven and Hell until they believed on it themselves. Some will believed on them just to play it safe.
In short I with treat the Bible stories like the father’s “Tikbalang” story, a collection of stories to keep us disciplined.
HOW IN THE HELL DID AN ARTICLE ABOUT A BLUE STAR ENDED UP AN ARGUMENT BETWEEN ATHEIST AND THEIEST? NEXT TIME I WILL NOT READ THE COMMENTS.
I bang big stick against ground with big rock. My sister helps bang big stick with big rocks too. The spirit in big rocks make stick harder and longer. Big rocks get smaller. Big stick gets thinner. Spirit in rocks do these things. There was this one time where the rocks turned into sand. And the sand into dust. That is where clouds come from. See? That makes sense. I make up my own science. My science always works. Always rocks and sticks to have for free. Gifts. Happy. No need to think too much. No need to be too much. No need. Just want. Me want so much rocks and sticks. For free. So excited.
Yeah. There's nothing like an intelligent conversation concerning a concept older than writing by mocking someone with no substantiated arguments.
I can remember when I had my first beer.
So if the material from the red star is gained by the companion star and causes this star to be a young blue star,-
Then when our sol goes through its red giant phase and sheds off several layers of material, will our giant planets (Jupiter and Saturn) gain enough material to cause them to be a blue star or just a brown dwarf or just a bigger giant gas planet ???
don't worry about it, or overthink it
all these theories on how the world started is their theory on how god made the big bang so loud,they just trying to figure out gods steps in doing what he did to form the whole anchilada, and im not sure but i think we are not suppose to question god or his actions,
Why not? Why not question? What really is the problem? So God may have created people....So what!!! We still can question anything that piques our curiosity.
If there's a god (and that's a big "IF"), he gave us the ability and desire to study, learn, and understand the world around us. To question the functions of our universe is to be human. Only an idiotic God would give us such a curious nature and then tell us not to use it.
anyone who believes in god is foolish. To all thiests I say PROVE IT
I see it like this Mickey. Once you start understanding why you are living and conscious you will then understand that God is not a person but a very holy force that allows the cosmos to move through its laws of nature. Without God, physics wouldn't exists, without god, your cut on your hand wouldn't mend back together through the intricate works of your cellular bio-structure within. I thank god everyday for these small things that happen to me. One must believe in a higher force in order to treat the earth and heavens with respect and love. That being said, if we were to advance as a civilization 1 million years into the future, would you think we would have mastered all scientific technology by then? Maybe so, Maybe not.... And if so, what else do you have left to live for. And if not, how crazy is it to live in a universe that is unlimited. There is a an amazing godly force that lives among us infinitely in time and space or even in other dimensions we can come to understand.
Well said, Eduardo. To Mickey, you've missed the whole essence of life. No one can PROVE anything about the origin of existence or God or any of it. You operate on faith that we're all here from random events. Eduardo and I have faith that God created all things and has a special place in heaven for us for eternity. As I just appended in another response... are you 100% sure that there's no God? Come on. If you'll admit there's 1% chance that God exists, wouldn't it make sense to study the options for eternity? In one place, there's fiery torment every second of every day. The other option is peace, joy, excitement, everything good. You've already picked the former. At least give yourself a chance and look at the other option. Find a Bible and read the book of John. God will transform you.
Your statement of a holy force and god being in the cosmos and all that sounds pretty but with zero evidence supporting it in any way.
There is evidence supporting the current theories on universe creation, there is just one key thing. In science we gladly admit when wrong cause we learn new things, where are with religion you never admit a single fallacy..
Lets look at the church, guessing you guys would fall into christian so lets head over to europe... Church said world was flat, supported on the pillars of god, and that everything in the universe rotated around earth... Fast forward a few years and Galileo shows them how badly wrong they are, and they fight him on it for decades then finally admit "Ok maybe the world is round and rotates around the sun and we are a small small spec in the universe"...
And that is a key flaw with religion, if you are not willing to admit fallacy then you will be wrong more and more often. Life is a constantly changing thing and we learn things constantly. Hell a couple of scientist at CERN thought they violated the laws of Physics and showed up Einstein... Did they go nuts screaming how great they are? No they asked every scientist on the planet with any clue to review their work, check their numbers and tell them where they went wrong.
So take your religion and your belief that some supreme creator came through and made this world and throw it where it belongs, right next to those stories about vampires, werewolves, and all the other constructs of human fantasy.
How do I know that God exists? We're having conversation. If you desire proof, please explain how all the millions of things that happened went exactly right for life on this planet, and only 1 species being intelligent. Bible covers it all if you read it from the point of view that its not a literal way of how things happened, but written so that way it was explained to people of a simplier era back then and able to explain things we learn these days. The Bible supports evolution/big bang if you read between the lines.
Religion is a personal concept. It doesn't need to be debated. It doesn't need to be examined. It just is what it is. The more people focus on horoscopes, and trolls living in their stomachs, the less then need to focus on REAL SERIOUS issues in their daily lives. Religion can be a wonderful tool that helps people to overcome the greatest challenges in life, and it can also prohibit and distract those same people's attention long enough away from everything else going on in life. People are very black and white. If it wasn't God, it would be ice cream. Too much ice cream. It is either you love it or hate it. You have to LOVE ice cream. All the time. Otherwise you are a bad person. I'm really very tired of this argument. Just as much as I am tired of people taking offense at science. Science is not anti-religion. And religion is not anti-science. It is PEOPLE who are ANTI INTELLIGENT. PUSHY. IGNORANT. FOOLS. THAT CAN NOT KEEP THEIR BELIEFS TO THEMSELVES. I DON"T CARE WHAT YOU BELIEVE. SHUT UP. GET A LIFE. NOBODY CARES. I CLICKED THIS ARTICLE TO READ ABOUT SCIENCE. OBVIOUSLY. GET A GRIP ALREADY. GROW UP. NOBODY CARES WHAT YOU THINK WHEN THEY COME HERE TO READ ABOUT SCIENCE. TAKE THE DILDO OUT OF YOUR EAR AND LISTEN TO SOMEONE OTHER THAN YOURSELF FOR A CHANGE.
Why?? It's amazing how an article about something as mundane as a star can turn into a religious debate...
Every single article on this site has some kind of religous debate. From blue stars to is god real to obama controlling the weather, its all the same. And in the laws of science god cant be real beacuse one person cant be so powerful. Religon is just beliefs and my opinion it causes lots of debates and tensions between people.
I'm so sorry that you think using a computer is ok, but looking at the stars is not ok. Everything that you use in your everyday life is due to science looking at how the Universe. You should be on your knees thanking the people that make your ability to preach complete garbage an unfortunate reality.
to set here and read all these comments on whether there is a god or not and how physics over ride that notion,it may seem like nothing to some but have you ever went to a zoo and seen the birds ,all the differrent colors ,the many colors so brilliant,beautiful shinny colors,and i think gods work,how do the scientist explain that
Easy richard, it's called evolution. I don't believe you can disprove god with science. However, I equally don't believe there is substantial evidence to say for certain he/she/it exists. But do some critical thinking with me here for a moment... why would a god create us, make us in a way to ensure we were curious, independent minded beings, and then banish us to hell if we didn't believe in him? Oh... and offer no definitive proof of his existence? Sounds like either a cruel god or more likely.. scare tactics of people who fear death.
I look at it this way...No one really knows the answer....that being said, let's assume the Big Bang Theory is used for fact....then, perhaps, someone could explain to me where the two rocks that collided in space came from? Just saying...it all had to start somewhere, somehow...
if you need an explanation, read Richard Dawkins "Greatest Show on Earth". It demonstrates evilution briliantly.
It's called arrogance. Since man is the most intelligent thing we know of on earth we reason that if a 3.5lb brain can't understand it than it was all an accident. evolution is like comparing a model T to a 2011 BMW and saying it just evolved, or comparing a 1960 calculator to today's computer and saying those programs wrote themselves in a successive higher order of complexity. Looking at the model of evolution it's pretty clear there was something that created life and continued to improve on it over time just like we have improved on the car and computer. Imagine in the future computers gaining intelligence and self awareness they would never believe humans actually wrote their computer codes. Instead they would just reason those codes wrote themselves after all a computer code is just 1 & 0, and humans are just A-T-C-G spun around in a spiral ladder called DNA. Not really much difference in a computer program than DNA- both tell it how to run and determine what it will be.
LOL the Model T DID evolve into a 2011 BMW, etc. etc. Not that I don't believe we were created by a higher power... Just sayin'...
Oh please. Evolution is a fact. It's mechanism's are well known. The fossil record is pretty clear. Read a book!
Evolution does not explain creation.
Evolution does not touch on the subject of creation at all.
This is where arrogance in the science vs. religion debate is most prominent, sadly.
These "blue" stars are actually yellow-white stars that pissed off Chuck Norris, who's favorite color is blue.
Wow, I wonder which one is more far fetched. Science or Islam. Both are based upon shakey premises.
Thank God Monsignor Georges Henri Joseph Édouard Lemaître discovered the Big Bang Theory.
if you truely think that islam and science are on the same "shakey ground" then you have no concept of what evidance and proof really are. All forms of thiesm, and faith based thoughts, are fully stupid. Science takes facts and paints a picture of reality, while thiesm makes up everything, and trys to spin a "mentally pascifing" story, eg it feels good, so it must be true.
That is such a huge failure on your part to make that comparison.
Did Islam invent medicine? Mechanical engineering? Airplanes? Microscopes? The discovery of DNA? Create Nuclear weapons and power? Collide tiny particles together at near the speed of light just to see what happens? Motors, telephones, internet, electricity, cars, boats, submarines, Mars landers, 2 Spacecraft that are currently literally at the edge of our solar system, your telephone, your computer, your video camera, TV, radio, satellites...
I don't think so. It's pretty clear. Science wins.
This article is, frankly poorly written. Being an amateur astronomer, I understand what Blue Stragglers are, but if I hadn't known going in to this article, I would have been confused. Blue Stragglers are pretty much always in clusters (particularly globular clusters which are generally immensely old). Since a bluer star in the main sequence has a shorter life expectancy than a redder star, you can effectively give the age of the cluster based on the most massive stars that are still on the main sequence in the cluster. When you have a few stars that are bluer (and more massive) than the vast majority of stars in the cluster, there are your blue stragglers.
Always good to see new Astronomical findings. I wonder how old the Blue light is coming from those stars....could be thousands of years. COOL!!......errr.....HOT!!! (spectroscopically speaking). (Don't worry about the religious zealots. As history has proven over the years, "The Church" always "modifies its position" on the universe. I would loved to have been in the room when with the Pope when scientists indisptuably proved that the Earth was ROUND and NOT at the center of the universe. LOL!!)
Gratuitous attacks on the Church? Whats worse is that you show your ignorance of history at the same time. The Earth was shown to be round before the Church even came to be.
There is nothing in the bible suggesting the world isn't round. The bible isn't a geography or astronomy book. And it was actually most scientists of the day who thought the world was flat and later disproven.
Referring to "scientists" from when people thought the world was flat is like referring to the "auto-mechanics" of the 17th century.
what is funny, is in the old testiment, moses is "ascended" to "on high" and he quotes, " I beheld the whole ROUND of the earth". And it was still almost 3000 years before these evil thiests would stop killing scientists for speaking the truth. It is time for science to treat religon as religon has treated science for severl thousand years.
@ Whats Wrong with Everyone?
Thank you, for the post.
I would like to keep to just new science findings.
This started off as an interesting science article and has turned into Religion vs science. Take it to another site. Why is the religious side even posting on here..? Did the article mention God? Science people just ignore them. OR just publish a short request for them to go somewhere else (nicely). Don't bait them. Neither side will convince the other of anything.
You are so right. It also seems like they all do, except on more interest-specific sites, you know, like science sites where the religious zealots fear to tread, lest they be led astray, or religious sites, which most science-minded people wouldn't even know existed.
Am I stereotyping?
Yes you are and you have a limited view and closed mind. Not very scientific I might add.
Probably because atheists and science types like us always do the EXACT SAME THING to every article ever written about religious beliefs. I don't blame them... in fact, I encourage the debate.
I agree, I alway respond to the thiests, and do not mind them opening there mouths and proveinghow dumb there views are.
'No one's been able to pin down how blue stragglers form' -– Because they can?
The Devil to them there to test your faith.
What we all seem to forget is that if there is a "God" then anything he does is incomprehendable to human beings. "It is easier to count every sand on this beach than to know the Trinity (Christian God). All i am saying is that its foolish to disbelieve in the supernatural based on any human knowledge. We have not the slightest clue why anything exists. Personally, I look at DNA for my spiritual answers.
Excuse me. We're having a big boys/girls conversation here. Could you go wait outside please? We deal in science and fact, not fairy tale.
Mike, you may want to go outside and play. The article says they dont know which is not fact. Theory is never a fact.
dont wory mike, Bill, is just so dumb, he does not understand the meaning of the word theory, he, like most thiests, think theory is the same thing as guess. He is of course wrong. How the earth moves around the sun is theory. How apples fall from trees is theory. But thiests cannot atack the theories themeslves (due to lack of knowladge), they can only attaqck the scientific method. Lucky for those of us smart enought to know better, there arguments are not based on anything ever.
Bill, you might be surprised at how many "theories" made everything you use a possibility. Hope you are enjoying your electronic device that is probably capable of sending and receiving information via waves of light wirelessly to a box that is connected by silicone to your local internet dealer.
"All i am saying is that its foolish to disbelieve in the supernatural based on any human knowledge. We have not the slightest clue why anything exists."
Sooo, isn't it even more foolish to actually believe in the supernatural, based on any human knowledge?....as you, yourself said "we have not the slightest clue why anything exists"....
It's just as foolish to believe the physical world exists.
The physical world makes itself known every morning, at least in my house anyway, whether I want it to or not. :(
It's ..... foolish to disbelieve ...... in undetectable things ...... cause .... wait.... what?
I'd like to know if eventually the white dwarf remnant will eventually begin siphoning material back from its companion and become a nova? Maybe if the orbits decay and they get closer?
No, white dwarf is now much less massive than blue star, so cannot reverse the gravitational mass-syphoning effect. If anything, the dwarf is eventually completely absorbed by the blue star.
Attn Editor: missing word error: "..It will BE more massive..."
Nice catch, thank you. It has been corrected.
– Sophia Dengo, CNN.com
This material can be used as fuel that this BLUER star ,
im not sure if this is a typo or not but it made no sense if it was not a typo .
Maybe the blue star came from a large star being hit with a super large object that broke a chunk off of the star.
Willy, large objects don't collide with and break off chunks of stars. The stars absorb whatever collides with them unless it is another star. When it is another star, the two merge to form a new and more massive stellar object.
This has been known for several decades. The Paradox Star System Beta Persei (Rosh ha-Satan, Al-Gol, Medusa's Head, et. al.) is a blue with red companion. Observation (and doctoral dissertations) decades ago have expounded and explained the solar mass distribution between the two stars. Why CNN would be deem this newsworthy is questioned.
Because not everyone is as up to date as you are with space news.
---------There was a “BIG BANG” When God said;
–“Let there be Light and let the light ferment and the light
became fermented”& there was the Heavens and the Earth.
_______It is called quantum electrodynamics/or QED_______
________Six + “Nobel Winners” describe and illustrate their findings of
_____________“millions” of particle collisions in the same way.
All of the elementary particles divide up into packets of photonic energy
[light energy ] that Turns [or ferments,] into two or more sub-particles and
some of the photonic energy [ light energy ] returns [or ferments,] back into
the original elementary particles.
The EM-fields of each particle defines the amount of light particles /or the total energy
that is within each particle or sub-particle. As the “EM-fields collapse” / or ferments back
it either elementary particles or other sub-particles.
__________Atheist, you do know about “Nobel Winning” QED?____________
++++++++++++++++++++The Science of Creation!+++++++++++++++++++++++++
Physics-lite; With the Science of God’s Creation.
A Blue star is no surprise to me. All throughout the universe there are the two of most extreme energy sources from the highest to the most lowest energies.
All mass / elements act different to these extreme energies. At extreme low energies the mass / elements magnetize and attract like element.
At extreme high energies the same mass / elements are stripped of their electrons and become a plasmas state of that mass /element. Yet these to states of mass / elements are attracted to each other.
Thus we have the makings of an inner magnetic core & outer plasmas core of a star. It is this process of the attraction of the two states of the mass /elements that brings about the contraction of the plasmas state mass / elements into a high-density mass /element of about 1/137th. of its, original mass’ area, this actions forces the six strong force fields of the mass / elements to polarize & propagate out as what we know as Gravitational fields.
This is why we see that the big gas planets and now the Blue stars
Physics-lite, how exactly is that supposed to prove the existence of god? You're not making an argument, you're just pasting paragraphs in and then shouting "You see? You see???" Sorry, no, we don't see. The existence of something nifty doesn't instantly prove the existence of a god.
@Judas Priest; Quotation
Physics-lite, how exactly is that supposed to prove the existence of god? You're not making an argument, you're just pasting paragraphs in and then shouting "You see? You see???" Sorry, no, we don't see. The existence of something nifty doesn't instantly prove the existence of a god.
Sorry, That you can’t see that as God said in the very first book in the bible. That was written down Long before there was any kind of Science.
That all of the mass / particles that they smashup into packets of light energy all ferments back into mass.
Yes The bible did exist before science. But then the human race started to learn and think for ourselves and not have to rely on someone else telling us what to think.
physics-lite, your moniker is well earned, as you have no clue what you're talking about. Mass does not break apart to become light, and light does not smash together to become particles. Your understanding of particle physics is terrible.
In the beginning..... oh wait.... there was nothing. Science begins after something has been created from nothing. Yet science still has not been able to create anything from nothing. Religion celebrates God.... a master architect??? Science wants God to disappear from any of their equations in order for their theories to work. Science keeps "discovering and explaining" that which has has already come into existence. The God particle that science seeks in the colliders already has something to work with. If the Big Bang started everything then why have there not been any more just as stars fade and reform.....but they become a part of something that already exist. They don't create any new universes. Since they don't, science is now offering that there are possibly multiple universes. The God story has not changed for thousands of years but every couple of months science offers many new theories because new groups of scientist discover failings with existing theories. The religious cannot present God to the world and science cannot prove that God does not exist until they create something from nothing. Which by the way everything already exist.
you talk about of jive, but wheres the cream filling huh? i see no proof of your statements, at least science has the integrity to admitt to whatever being only a theory and not fact – your BELIEF is nothing but a thought you hold on to like a baby and there special blanky... sceince attempts to prove or even to disprove itself, you just go on believing what some silk covered so called humble man with gold in his pockets tells you. Try listening to the meek of the earth, they have far more to say then any priest or other clergy man. Science tries, religion dictates, which is better, which has proven more things correct over time, which has been the soucre of answers for questions asked from all times...not god, nor any god for that matter.. and its arrogance to think that your god is the only one, the true anything – such arrogance!
"The God story has not changed for thousands of years..."
Just tell that to the Ancient Romans, Druids, Muslims, Mormons, Lutherans, and Scientologists. And then tell that it to Socrates, Galileo, Tycho Brahe, and Giordano Bruno.
The big Bang didn't create 'someting from nothing'. and matter can never be eliminated, only changed in form. Gravity sucks every thing in until it's compressed into a singularity and can no longer contain itself. the goes supernova, just like a star. on a slightly larger scale. That has been happening for eternity, not the first big bang, not the last. And I can prove that god doesn't exist, but I'm not going to tell you....now prove to me that I'm wrong. Actully prove to the world that ONE shred of truth lyes in any religious book. Other that the fact that in many cultures rape, ritual killings, slavery, murder (in the name of god), subjection of women or other races/tribes/religiouns/etc are all still ok. You can call me a heritic if you will, I call you a lemming.
What I don't get is how do you seperate the church from reality?
"Gravity sucks every thing in until it's compressed into a singularity and can no longer contain itself. the goes supernova, just like a star. on a slightly larger scale."
That doesn't sound accurate, to me. Where did you see this?
I am pro-science, so don't misinterpret me, but I think that particular statement is inaccurate.
i was corrected by Cosmos42, a person who took my as stated very simple answer and expanded it so it was completely true and not just for laymen... i think Cosmos42 is needed for a proper answer here as well. dude knows his/her stuff.
Just an honest question...who or what created gravity? I mean, it's a law right? So, who or what made that law? Becuase as far as I know, laws are all created by someone. I mean, the earth tilting at just the right angle to keep us on the ground...why? Other planets are not crashing into us...why? The universe seems very big and complex, so is all of this by chance? There seems to be alot of smart people commenting on here, so answers please.
" And I can prove that god doesn't exist, but I'm not going to tell you"
A.) No you can't
B.) No one believes you anyway
You cannot prove or disprove anything. Anything at all. You just sound ignorant. Wake up.
Not sure if you being sincere or just trolling, but here goes.
Laws, as in the laws of nature, physics, and/or the universe are not created by people, just discovered. Newton discovered the law of gravitation first and Einstein refined it with Relativity. I don't understand your question about the tilt of the Earth because that would have little to do with us "staying on" the planet. And planet and objects in the universe run into each other all the time, the Shoemaker-Levy comet(s) hit Jupiter a few years back, even galaxies run into each other (http://www.huffingtonpost.com/2011/08/15/colliding-galaxies-exclamation-point_n_927407.html), Every crater visible on the moon is due to an impact. There are fewer impacts on Earth these days (last few hundred thousand years, perhaps) because most of the possible large collisions have already happened (think the dinosaur extinction, although not totally due to meteorite) thus clearing the orbital path in which Earth moves, although impacts still do happen and can in the future.
When gravity compress something enough it becomes a black hole. Because the time dilation at the event horizon is infinite, anything that will ever enter the black hole would appear inside the black hole all in the same instance. That creates the singularity that "quantum tunnels" outside of our 3d space and creates a big bang on the other side.
What is on the other side of our 3d space?
I'm no where near educated enough in astrophysics, but my understanding was that black holes will eventually "evaporate" (and I use that term loosely) via Hawking radiation at the event horizon. Something to do with twin virtual particles both inside and outside the event horizon, but please correct me if I'm wrong.
And your God was created out of what?....nothing? LOL
Feelings mean nothing when it comes to conclusive and recorded proof among everyone. Religion plays to your feelings an you trust it from there. Nothing about it is tangible. Science demands tangible proof that can be captured and interpreted without emotional bias. Facts are facts.
Religion needs you to "feel" it's true with "faith" and other mental bias that doesn't mean anything to anyone but yourself and perhaps the likeminded group of zealots you hang out with so that your numbers alone convince one another you're thinking clearly. But ultimately you have nothing but stories to go on, not evidence that can be slapped down on a table and be like "there". The Bible is stories, not evidence, not fact, any more than Lord of the Rings, Star Wars, or other complex fiction.
when you spout thiestic filth, it is your job to do the proving, not the other way around.
When the red star transfers it's matter to the blue start does it create a purple star?
No. The easiest explanation is that stars color is based on temperature and (unlike art class) sharing materials between stars increases and decreases temperature but does not "mix colors"
It's actually a hot indigo from a puce accretion, but I don't want to get too technical here...
To scientists bashing Christians - what about George Washington Carver?
To Christians bashing scientists - is this some odd form of loving your enemies, your neighbors, etc?
For the religious fanatics, this is nothing new. They used to just attack one another, now they are attacking people who don't believe in a higher power/master architect/whatever. The scientists have learned that they are under increasing attack, especially in the US, partially due to pressure exerted in the courts by extremist religious groups bent on having theology taught in the place of science in America's schools.
Please pass me a banana as I read these ridiculous posts by humans. You humans kill our jungles, create wars, hate, and spout religion, while we swing through the trees admiring nature and the universe. We do not care how we got here, we just enjoy living.
ape man, it really has little to do with 'living'. It's about eternity. Some of us who 'spout religion' care about where you will spend eternity. There's a dark, fiery place full of torment which I'm thankful is not my destination. You sound like you've accepted that as your eternity. I hope that you reconsider. You can learn about a better way, where there's peace and pleasure. Find a Bible and start reading the book of John. God will transform your life and create a place for you in heaven.
Do you read anything but the New Testament? How can one base a belief on a man whose teachings were based on things you don't care to read about... are you simple? And the idea of eternity is a very silly one; matter and energy can not be created nor can they be destroyed... so an afterlife when there are still babies being born is a silly thought. And that's all it is: a thought, an idea, an electric pulse through the brain forming thoughts in your little closed mind. Try reading a different book, then a book on philosophy! Maybe one on facts!
No He won't. Apes and other animals have no souls, therefore are poo on the cosmic sneaker.
Ignorance is bliss, and that's okay. You'll die, and it won't matter anymore......as long as you have a warm fuzzy, and don't have to actually confront reality, right?
I believe my atoms will disperse and be re-used on Earth for eternity....as for me, I'll be dead, so it won't matter...not worried about Hell (or Santa Claus), and not concerned about death. The earth and THIS life is my heaven, and I'm fine with that.
@gnodges... what if you're wrong??? Are you 100% sure that there's no eternity? What if there's 1% chance that there's an eternity where you'll be after you die? I'd at least study the options and make some wise decisions based on the minute chance that God is real. For me, my faith is 100% based on the myriad historical documents that confirm Jesus lived, performed miracles, rose from the dead, and ascended into heaven. Did you ever hear of Socrates, Plato, or Aristotle? Of course you have. Did you know they lived 400 years before Jesus? Do you question whether they ever lived or were fairy tales? Of course not. They didn't cause the disruption that Jesus did. So now unbelievers just say "well Jesus is a fairy tale". Don't be so naive.
If you know so much about religion then tell me, where's heaven located in our atmosphere? When we die our organs no longer work, so how in the world are we supposed to live again? Sounds like Frankenstein to me.
Extraordinary claims require extraordinary evidence. No one has yet proved the existence of God. The argument that we exist, therefore there is a God, is laughably weak. Faith is nothing more than religion's reply to questions that DEMAND evidence. This life is it. Be a good person. Don't let cave man writings be your evidence for an afterlife – that alone is comical. With all of the suffering and unrest in the world TODAY, where is God now? And don't reply that it is 'his' will or that struggle brings people closer to god....it doesn't, they remain forlorn, hungry, sick, and in despair.
For anyone here not caught up in childish name calling: What is the spectral type of those blue stars? Are they classic O/B/A types? If the transfer occurs from red giants – those did not evolve from blue stars, or they would not be around after 7 billion yrs. If they were sun-like, and now very old, there could not be equally old blue stars to transfer mass to. It seems that these are not as old as indicated.
It would appear that the spectral type of the star actually changes over time as it receives material from its companion star. How they come to be 7 billion years old is a very good question, however. Seems there are very few stars (other than red dwarfs and some main-sequence stars) that should even be 7 billion years old...
As an aside, there's an interesting theory about Blue Stragglers being "astro-engineered"... http://www.centauri-dreams.org/?p=18173
They likely appear to be O2V.
Dc, most of these blue stragglers have been observed in globular clusters with old population stars. Globular cluster stars are usually all about the same age, so they are thought to have formed about the same time. So the population of each cluster should have stars of equal mass at about the same stage of star evolution (no no, not Darwinian evolution, star evolution, sheesh). When you look at the clusters you find these blue stragglers, or hot, younger-looking stars (using spectroscopy readings, not easy to do), when all the rest of the stars are older, redder, cooler stars. The question then is how did this "younger" blue star get into this population of oldsters. This is one explanation that fits all the evidence we have.
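The reason a blue star looks out of place in an old cluster can be put in rough numbers with the usual main-sequence lifetime scaling, t ≈ 10 Gyr × (M/Msun)^-2.5. This is only a textbook approximation, and the masses below are illustrative picks on my part, not values from the article:

def main_sequence_lifetime_gyr(mass_in_solar_masses):
    # rough scaling: more massive (bluer) stars burn their fuel much faster
    return 10.0 * mass_in_solar_masses ** -2.5

for m in (0.8, 1.0, 1.5, 2.0):
    print(m, "Msun ->", round(main_sequence_lifetime_gyr(m), 1), "Gyr")
# 0.8 Msun -> ~17.5 Gyr, 1.0 -> ~10, 1.5 -> ~3.6, 2.0 -> ~1.8

So a ~1.5-2 Msun blue star should have burned out within a few billion years; finding one in a 7+ billion-year-old cluster suggests it bulked up later, e.g. by accreting mass from a companion.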
@intothemoonbeam, remember George Washington Carver?
Yes, and I also agree that science and religion can exist happily together, however I've seen no evidence of this from the GOP.
I would be careful about a stereotype of Republicans. If you said that you saw no evidence of that coming from Michele Bachmann, I'd be right with you. If you said that Rick Perry has horrible advanced math skills, right with you.
I'm confident there are Republicans who embrace science in this country. My guess is Colin Powell is one.
@Hugo Colin Powell is a good example but he isn't running for president. I am worried about the current GOP candidates, I feel like the only one with a little sense is Ron Paul but he has no chance of winning the GOP nomination in my opinion. Everyone is pissed at Obama because of the economy but I think they are ignoring other important issues, such as scientific and space research and out of the current GOP candidates I don't see much support in that area. Sure not all republicans hate science but the current candidates show no promise in that area.
To the atheists: Absence of evidence of God is not evidence of absence.
To the turbo-christians: The universe is over 14 billion years old. Get used to it.
To those who have turned this article into a political debate: Really???
On a further note about that, since it is being discussed: Not all GOPers are mindless turbo-christians, although the current lineup is admittedly disappointing.
To Judas Priest: Great music, but all living things have souls. ALL OF THEM. And guess what, they ALL go to Heaven. Just like ALL OF US. Probably even you...
To everyone: Very interesting and lively. Reminds me of the early flame wars that I got involved in back in 1992 when the internet was much younger. Same stuff, just better (more evolved) graphics.
If you love and appreciate science please don't vote for a teabagger when election time comes. Rick Perry is blatantly against science and foolishly denies evolution. Romney claims he is against Stem Cell Research, even though at one time he was for it. Michelle Bachmann's anti-vaccine statements are completely ludicrous. If you think Science and NASA don't get enough funding now, it will get very little or none at all, if either of these nuts are elected.
Your assumption that denying evolution equals not being scientific is elementary and just plain false. The "theory" of evolution is just that: a theory. A theory, by the way, that is full of holes and lies and is built on so many mistakes it is laughable.
Creation has many many many more "holes" than evolution. I know you will disagree but it's true.
I think you misunderstand the meaning of "theory". Relativity is a "theory" but has been shown to be fact time and time again. What about the "theory" of intelligent design?
Randy your comment is full of holes. No, your argument just caused me to change my mind, creationists are correct, life started 6,000 years ago, Humans walked with Dinosaurs… Thanks for setting me straight.
No, no it isn't. The "Theory" of evolution is the entire chain of events from start to end, from the beginning of life on Earth to today. So yeah, that has holes in it, because you are talking about a flawless timeline of billions of years.
However evolution itself – the idea that traits are passed on from parent to offspring, that mutations can occur in populations, that biological fitness is a major factor in a species' survival – these things are not theory. They are fact. There really is no more arguing it; we've understood these things are true for decades now.
To try and propose that evolution is flawed and full of lies is a blatant lie or showing of massive ignorance. It is that sort of thing that needs to stop if we are to progress as a society.
Is there some school where they teach these clowns the "only a theory" talking point over and over without actually telling them what a theory is, or for that matter how science in general works? Yes, evolution is a theory. Would you like to know what else is a theory? Gravity, heliocentrism, and literally everything else in human science that makes everything work the way it does. That computer or Internet-enabled device on which you typed that dreck? Built based on theories created under the various fields of mechanical and electrical engineering.
On the other hand, Creationism, being untestable as it is, does not even warrant being called a hypothesis.
@Michael – Einstein's theory of relativity may have a big black hole in it now that the Large Hadron Collider was able to detect particles that were traveling faster than light...
Great, another guy who doesn't know what 'theory' means. C'mon, dude.
Everything in science is a theory, dingle. Creationism, on the other hand, is a STORY.
@FactNoMore: You do realize that's been figured out, right? No problems with relativity, the error was in the GPS used to calculate the exact locations of the endpoints of the experiment. Because the GPS satellites are themselves subject to relativistic effects, they throw off the measurement by exactly the discrepancy measured.
Hmmm, being both a scientist and an engineer myself, I can say most of the scientists and, even more so, engineers that I know are politically conservative. I can also say that R&D spending is about the only thing Obama has cut at all since he's been in office. Nice try, though. Republicans typically tend to provide more R&D funding. Democrats then come along and cancel the programs started by the Republicans when they're between 80 and 100% complete with the R&D stages. The F-22 was already in production when our previous brilliant Congress and President decided to axe it. That makes sense. Spend billions developing a fighter to replace our nearly 40-year-old F-15s, then cancel it after it's already in production. Same goes for Constellation. Ares I was already flight testing when Obama gave it the axe and, in the process, cancelled the entire planned future of U.S. manned spaceflight. If you like science and engineering, by all means, stop voting for Democrats. While they talk a big game about promoting U.S. technological leadership, their actions do the opposite.
Because weapons systems are the "only" thing still Made in the USA. (Yes, I'm aware that I can still buy some socks and shoes made in this country, but certainly little else at a department store.)
Interesting. Do you have sources for the R&D claim? Honestly, I'm interested if it's accurate, because I think R&D is a huge indicator of innovation, but I hadn't heard that claim before.
I can't speak to the rest, but the F-22 was grossly over budget and had delay upon delay upon delay. What would you have them do? Just keep pouring money into the sinkhole?
Although i believe in evalution, i disagree that it is a theory. Theories, by definition, predict future occurrences. Evalution can't predict how organisms will evolve, just that they will.
This is an absolutely idiotic statement.
Evolution does successfully predict that evolution will continue to occur, and describes the mechanisms by which it will do so.
What YOU think evolution should predict is what will evolve in the future. This is NOT what it will or is required to do.
Comparison. Meteorology is also a science. We can, to some small degree, predict the weather on a limited scale, and very shortly into the future. We currently cannot predict the weather on a global scale or past the near future except in a very general way.
The reason why we can't is that weather is extremely complex, with a staggering number of variables. Evolution is similarly complex. We can accurately predict that the human appendix will continue to shrink, for instance, but large-scale predictions like "what will humans evolve into in a million years" are not really possible at the state of the art.
However this may well change as processing power and information handling capacity increase, because the theory is valid.
you lost me at "evalution".....learn to spell, and I'll read the rest.
Captain America is probably the closest in this discussion to what I believe. God and Science can and do co-exist. If there were no God, there would be no point to any of this. But to suggest that Evolution doesn't exist is to deny things that happen in front of our very eyes. We are barely more than barbarians trying to hammer each other into believing what we "know" to be true, except we really don't know much. But we are learning. As certain as I am that the universe is approximately 14.3 billion years old, I am also certain that when I die I will come to understand everything that I can only conjecture upon now. I believe that God is not only far greater than our current books explain, but infinitely greater than any book CAN explain.
Much of the symbolism in the bible can be traced back repeatedly to previous cultures, including the crucifix, the great flood, the angels, etc. The Bible is simply the latest (from ancient times, and most popular) interpretation of how everything started, written by men in an age where they did their best to explain the unexplainable with the knowledge and understanding of their time, based upon their culture. This does not negate many of the moral teachings of Jesus Christ, nor does it justify the horrors in the Old Testament.
There are powers and forces that we cannot see that affect our lives. Science and Religion both confirm this. Someday sane, open minded people from both sides of this issue will accept this. Many already have... might check out the Rosicrucian order... the Posito is quite eye opening, and does not smear, belittle, or crucify religion or science, but places them in context and provides (IMHO) a worthy perspective. Also might read Conversations with God by Neal Donald Walsh...
God Bless the Scientists...
Andy, you sound very intelligent and well intending, but you can't have it both ways. The Bible is the inspired Word of God. You don't separate the Old and New Testaments. Jesus Christ was not a moral teacher. He is God.
Now on to science... I agree that God and science coexist. God created all science. Everything that is studied and learned is already known by God.
The problem I have is when a scientist claims with certainty that the stars were created 7 billion years ago. I understand that's where some scientific ideas lead you, but in reality that's not science. I'll call it Origins, and we have to start separating Science and Origins. Science is observable and repeatable, awesome in its complexity. Your ideas and theories around Origins are fine, but that's no more than your faith in how this existence may have started. My faith happens to be different than your faith.
As for science, I love everything about it and hope someday this country will understand we can agree on how majestic science is, honor the scientists who continue to make startling discoveries, but admit we have some major differences on how to explain Origins!
One last point, my view of Origins is based on my faith in the Bible. Yours may not be. But please admit that your view on Origins is in fact your faith which by definition is your religion.
re·li·gion [ri-lij-uhn], noun
1. a set of beliefs concerning the cause, nature, and purpose of the universe
Science bases its "Origin religion", as you refer to it, on facts that are verified by other scientists and many other scientists work tirelessly to discredit those facts. When no one can discredit and everyone else can verify....that makes it a functional theory.
Theories saying a star was born X billion years ago take into account a multitude of facts that all correspond to a specific date. It isn't some guy reading a book written by sheep herders or someone interpreting bird signs; they are using actual facts, truths, verifiable evidence.
Science is not a religion. Science stops just before the big bang. What came before, we have theories and ideas for, but they are largely unsupported and unverified, so if anyone adamantly believes in one of them, that'd be religion. However, saying the stars are a certain age based on observable data, no, that isn't religion, that's truth....what we call science.
Anon, if you can 'prove' all these facts, then we wouldn't be discussing this at all. Your 'facts' are based on your presuppositions. Your 15 billion year old existence is not possible with my presuppositions. My 'facts' are that the earth is roughly 10,000 years old. My facts come from the only man to conquer death and who thousands of other men saw after he was killed and came back to life (not someone roaming the desert). So whether you come with your presuppositions or I come with mine, neither of us has the power to 'prove' anything. That's my point. You have your legion of scientists who claim they can. So what. I have an equally large number who come with my presuppositions.
As I said before, I hope one day this country will realize that science is an awesome thing that we all can share. It's the poisoning by scientists with their Origins theories (religion) that ruins the outstanding scientific discoveries being made.
Carbon dating...? There's the pudding......and there's the proof! Anyone ever carbon dated a bible?
Certainly I'll admit that. I also admit that all of my beliefs arise from what I read, have read, and will continue to read, combined with my interpretation of how each of these relate to one another. It is very likely that my beliefs will continue to evolve as they have done so far. If it makes sense to me, it is more likely that I will believe it. Does it mean I'm right? No. I'm surely not right about a great many things I believe, but that does not mean I do not continue to seek the truth.
Whether the Bible is a work of 66 books, written by 40 authors inspired by God, that again goes back to belief. If the Bible reflects in your heart what you "know" to be true, then believe it. I simply question everything that I do not "know" to be true until I am either satisfied in my own mind, or decide that it ultimately indeed reverts to belief. Do I believe everything in the Bible? No. Is the Bible worthy of reading again and again? Absolutely. I believe many great truths are written there. Especially if it's in the red letters.
I also believe many great truths can be found in Nature Magazine. Life is about learning the quest for truth. I am greatly relieved that I am not one that already knows the Truth. How boring that would be.
"Cause I heard from my mom..." does not make it a fact or a truth. Just because you have a book, that makes claims doesn't make them facts. You say Jesus came back after death. However there are almost no historical facts showing he ever even existed to begin with and most of those are considered to be fakes. So you have a book that is its own proof versus dating via light, radiation, radio waves, space noise, etc etc etc.
Science deals with things that are measured, things that can be verified. When science says X, you know it's because A-W were all proven to be false and X has more evidence than Y. Thus X is the dominant theory. A person who may or may not have existed and a book written by superstitious ancients do not equate to verifiable evidence.
I can provide numerous pieces of evidence to show stars are over 10,000 years old.
I can do this because, like I said above, science is verifiable by everyone, because it is measurable truth. Can you give me one single piece of evidence that the universe is ~10,000 years old that isn't a self-fulfilling cycle of:
Evidence is true because bible says so -> bible is true because this evidence says so -> etc
So, science is great until it tries to look at origins, got it.....tell you what, why don't you hop on your jesus horse (dinosaur), and ride on into the sunset?
TR in ATL:
The person who you alleged conquered death and whose resurrected self was witnessed by thousands? That Jesus guy? Yeah, he also told those people that his second coming would be before they died. Did that happen? No. Therefore, Jesus is a false Prophet. Therefore he is not the son of God come to deliver God's truth to the world. Therefore, any argument you make based on this mystical assumption is not valid.
You're incorrect about what Jesus said of his second coming. Here's what my Bible says about when Jesus will return. Matthew 24:
36 “No one knows about that day or hour, not even the angels in heaven, nor the Son (Jesus), but only the Father (God). 42 “Therefore keep watch, because you do not know on what day your Lord will come. 44 So you also must be ready, because the Son of Man (Jesus) will come at an hour when you do not expect him.
We all need to actually read the Bible. God will use it to transform your life.
Science is not "a set of beliefs". True science is quantifiable data, produced by experimental evidence, based upon solid theory. Simply because a quantity cannot be measured, does not mean it isn't present. A theory isn't "just" a theory.
Until the 1980's, atoms hadn't been imaged. I am guessing that until that point, many of "the faithful" didn't believe in atoms, either.
Um, well, sort of. Just show me the experimental evidence of that '7 billion year old star' experiment you conducted.
@TR in ATL,
I'm confused, if the Universe is 10K+/- years old, then how are we seeing light from stars millions and billions of light-years away?
@Nominus – God put it there. Same as always. "It's turtles all the way down!"
So why does the outer layer lose its gravitational binding to the red giant? If the red giant has more mass than the orbiting partner, how can it lose mass to it? Wouldn't the red giant have more gravitational force, and actually suck mass from the smaller star? Is this reversibility? Is the arrow of time pointing backward? Is entropy moving from high to low? WTF???
answer – when a giant gets really old and uses nearly all its fuel, its spin slows, causing a lessening in gravitational pull. The star is made up of layers that are held together by the G force, and when it is weakened, things get pulled to the nearest G force that is stronger.
That's a very, very simple way to explain what would be a very long paragraph, so please bear with me and ride the wave.
No, that's not correct. When a star is nearing the end of its lifetime, its core will run out of hydrogen fuel and contract under the pressure of gravity. This contraction causes the core to heat up. Eventually the core becomes hot enough to begin fusing helium, which produces an immense amount of radiation that pushes the outer layers of the star outward. This is when the star becomes a red giant. If the star is already in a binary system and its partner is close enough, the outer layers can get close enough to the partner that they begin accretion around it.
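For a rough sense of "close enough": the usual way to estimate when the bloated outer layers start spilling over is the Eggleton (1983) Roche-lobe approximation. This is just an illustrative sketch; the masses and the 1 AU separation are numbers I picked, not anything from the article:

import math

def roche_lobe_radius(separation_au, m_donor, m_companion):
    # Eggleton approximation: r_L / a = 0.49 q^(2/3) / (0.6 q^(2/3) + ln(1 + q^(1/3)))
    q = m_donor / m_companion
    q23 = q ** (2.0 / 3.0)
    return separation_au * 0.49 * q23 / (0.6 * q23 + math.log(1.0 + q ** (1.0 / 3.0)))

print(roche_lobe_radius(1.0, m_donor=1.2, m_companion=1.0))  # ~0.4 AU

A red giant can swell to well over 0.4 AU, so once its envelope crosses that lobe, the overflowing material is more tightly bound to the companion and streams onto it – no backward arrow of time required.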
... Good answer Cosmos42.
ditto, Good answer Cosmos42.
And good question BadMonkey, I was curious about that too.
God did it.
Agreed. He must have been 'just kidding'.
Not the best written article, but it's been a while since I found one of CNN's science articles to be interesting and informative... Studying stars was something I used to love to do. This may just be the kick I need to get back into it, and I'm sure my kids will love making the solar system with daddy. :)
Trolls and hateful posters are full of evil. Anonymous haters hiding in their little rooms spew their foul smell throughout the internet. They would never have courage to say these things to someone's face. Don't listen or respond to them. Cut off their food and they die. For a more peaceful world, I vote we do away with ALL of these useless article comments sections. Blah blah blah, I hate you, you hate me....blah blah blah. Ridiculous.
Isn’t that exactly what you’re doing only on a prolific scale? What a hypocrite.
Please don't judge them – for that is how u will be judged – instead ask for blessings for them, forgive them and turn ur gaze back onto urself and ur own heart and u in turn will have peace (soul) – may God bless you as well friend ...
Yes, there are a lot of religious trolls out there...and atheist ones as well.
In reference to saying something to someone's face...for example "there is no god"... I have no problem doing that. Why does that take courage?
I agree with you. Steve Jobs creates this wonderful technology and some people use it to post nonsense and abuse it. What a shame.
I'm going to post this again in reply to 'Steve' who apparently thinks evolution is a sham...
Steve wrote: "If we evolved from monkeys and apes, why are they still monkeys and apes? If evolution was real then there would be no monkeys or apes. What an id-iot."
I've waited for a moron like you to show up and ask this question.
If England colonized America, why do we still have England? Get my point? We split off into a different species and evolved from there.
Also, Steve, why do some viruses become resistant to antibiotic treatment? Because they mutate...that's considered evolving too.
"Lord, please guide the hands of the surgeon" - why? Was the doctors 8 to 10 years of medical school not enough and now he needs some assistance? Get real, Steve. The god you believe in isn't real...it never was. You were told, probably from a very young age that "you'll burn in hell if you don't believe"...and that probably scared you into believing. Did they tell you in church how cruel your god actually is? Did they tell you that god endorses slavery, rape, incest, murder, torture...did you read about how Lot gave his virgin daughter up to be raped by men to protect two "angels" he had in his home...because that's what god recommended. Don't tell me I'm taking it out of context. Don't tell me "it was a different time". Don't tell me "that's the old testament". Are the rules suddenly different for god? I don't think so.
If you knew nothing about god...never heard of him..... Would you start believing when people told you this as an adult without providing any proof? Probably not. You'd think they were all crazy.
Well said, Phil. I love when people misrepresent science to support their claims. Christians are notorious for doing this.
Be careful when you throw stones. There is not a single current evolutionary biologist, to my knowledge, who believes that humans evolved from any species of ape that is in existence today, and please don't even talk about monkeys. When you make this claim you sound like the...how did you put it, 'id-iot'.
Humans evolved from a now extinct species that evolved from other extinct species and so on...if you go back far enough you can find a common ancestor that apes and humans evolved from, but humans most definitely did not evolve from any species of ape that is in existence, and frankly, it's highly unlikely that any common ancestor would be classified as an ape were it in existence today. Apes are more highly evolved than the common ancestor.
Monkeys are an offshoot that is so far distant that I am not sure why you even bring them up except for ignorance.
Your brand of ignorance is the secular version of Steve's. Please read a book on the subject. You show as little understanding of evolutionary theory as Steve shows of apologetics. (And that is saying something.)
You're an idiot... first, yes there are evolutionary biologists... and second, why not ask about monkeys, being that they have little to do with us; it was the greater apes that we are spawned from. And no sh!t that there are no living apes today that we directly descended from. We are 6 million years in the making – no creature exists the same now as it may have then! And apes are more evolved than monkeys, so how exactly is a monkey an offshoot of an ape... IT'S NOT, monkeys are where the apes came from and so on and so forth! Study wolves; when you get to the grey wolf, see what happens!
To Steve, there is no way to talk to these people. I applaud you for trying. The Bible says that the ignorant will refuse to listen to reason and will reject the Lord. You tried. Let's move on. It's ok if they want to believe that fossils are millions of years old because the rock they are in is considered to be millions of years old because the fossils that are found in it are millions of years old. It is circular reasoning that has no fact embedded within. He says that man created God and so God created man, when all his "science" is manmade. There was no proof, no recorded evidence as the Bible is, and a timetable that constantly changes in order to get a paper published. That's why evolution is still called "The Theory of Evolution", because there will never be proof. They will try and demean you with their degrees and their titles, but they all believe in the same circumstantial evidence because they are searching for what, "The True reason we are here." You and I both know Jesus is the way, the Truth and the Light.
Hippypoet, I believe many creatures have remained pretty much the same over the past six million years, including many turtles, crustaceans and jellyfish... just saying.
Charles, you're ABSOLUTELY correct, we didn't descend from monkeys. We descended from primitive hominids. Evolution occurs every day on the planet as we speak. Dog breeding is forced evolution. Look at the frog. Put a group of male frogs in a tank and within a year or two, some of them will turn female. An evolutionary trait they developed to progress their species. Or the snakehead fish, which is developing lungs.
Now look at the backwoods country areas where inbreeding occurs rampantly. If Adam n Eve spawned civilization we must be their retarded offspring. Not a “THEORY” I’m willing to believe.
"To Steve, there is no way to talk to these people. I applaud you for trying. The Bible says that the ignorant will refuse to listen to reason and will reject the Lord."
I have a personal relationship with reality.
Of course the bible SAYS only the ignorant will refuse to listen. It's a great sales pitch.
The bible is not fact. The bible is a book of fiction. Get it through your thick head already.
I'll say it again. If you had NO knowledge of god to this day, and some asshat came along and tried to convince you that there is some almighty magical sky daddy watching over us...you'd think the guy was a lunatic.
The story of god worked long ago when the population wasn't as dense and when people didn't have access to technology to look up information like we do now. The story of god works through fear... Getting people to agree with you by using fear is considered abuse.
My Flying Spaghetti Monster is real, but you can't tell me otherwise because you're wrong...but I can't prove it to you. That's how stupid belief in god actually is.
Take a chill pill... He is entitled to his own beliefs and you are to your equally 'bogus' beliefs. You can become a Christian a lot easier than you can become a scientist... you hear what someone else tells you and believe it. You are no different than Steve. Although I do tend to believe that Steve is right and you are wrong - once again, my own opinion.
"you hear what someone else tells you and believe it" by @phil
Wrong, rational people think for themselves.
"You can become a Christian a lot easier then you can become a scientist" by @phil
I find this statement very odd too. Are you suggesting only scientists are atheists?
What's your opinion on how the transistors in your computer work?
Christians are entitled to their own beliefs, but not their own facts.
A vast majority of scientists are atheist.
I think I love you, man. Well said.
That was the sound of me High Fiving you!
Phil, I'd like you to read Dr. Zull's textbook, The Art of Changing the Brain. The book is about learning theory as it relates to the brain. I know this request appears to come from "left field" but I think you are missing concepts that I won't be able to explain to you here. (He wrote an entire textbook instead of a couple paragraphs for good reason.) The concepts are not about theology. They are about science (of the human brain). Hopefully as you defend science, you also have the desire to learn more science that many scientists have.
Even if you don't read the textbook, please read about Dr. Zull so that you'll realize he's a real scientist. At least if you research Dr. Zull, you'll be able to make a partially informed decision.
You guys are all wrong... We came from aliens...
Don't tell me this, don't tell me that. If you don't want to hear the explanation don't ask the questions.
Just a note, Phil: viruses and bacteria are not the same, which means that a virus will never be affected by antibiotic treatments – not because of evolution, but because it is not the target. An antiviral targets a virus, and an antibiotic targets a bacterium. Please do your research next time.
Humans are going nowhere... 'cept where the dinosaurs went!
LMAO, this may as well be the Planet of the Apes...
at least Apes can live in Nature without exterminating it!
Humans are not built for this planet, without cars, clothes, wal-marts and credit cards
the average human would be dead in a week from the weather exposure and starvation!
the Apes would be laughing at the pathetically weak humanoids... hehe
No I think it's Gorillas in the Mist !
Let's think about this for a minute....
Cars have existed for a little over 100 years. Walmarts have existed for less than half that time. The first Credit Cards showed up sometime between the advent of cars and the advent of Walmart. Modern Humans have been around for thousands of years. In fact...they invented clothes quite a while ago, and are still the only resident of the Earth that puts on their own clothes.
I think 'Realist' may be smoking too much cheap weed.
"...still the only resident of the Earth that puts on their own clothes."
Which residents put on other resident's clothes?
(sorry, couldn't resist)
Our knowledge is so little and science wants us to think that they know so much..... let's get real... we know nothing about the universe, or maybe... a sand grain on the beach... thanks, all at wwwTheDimensionMachineDOTcom
What's wrong with not knowing the answers? I love the hunt for answers and would rather find them out for myself than have someone tell me.
Yes, science is alive and has a will of its own! It wants to ruin humanity! You better go back to church dip$h1t.
What, no one's allowed to ruin humanity except for you?
Exactly what point are you making? Science learns something nearly every day. Who told you that science "knows so much"? Making up nonsense and showing its fallacy is called a strawman argument.
Rudix if you actually believe that science wants us to think they know so much, you haven't adequately learned sufficient science upon which to form a sound opinion.
Good scientists understand that they know very little. It's just the opposite of what you claimed. Thus, you don't know what you are talking about. Please take some science classes and learn. Please.
Some English classes wouldn't hurt.
It's so funny how these scientists get a few data points and come up with a wild guess and that's their reality for centuries or decades until they get another data point and reform another wild arse guess. Like the current "truth" that the universe is rushing apart at faster than the speed of light, because of PERCEIVED red-shift : what a load of BS. They took one thing found by Edwin Hubble and decided it's all rushing apart faster than light. You know we'll find out later that is totally wrong, just like oh, oops, world is not flat, oops again, world is not center of universe or even solar system. Science: oops, oops, oops, there is no God, oops oops oops...
That's how science works, dude.
Its called the scientific method, and it has a pretty decent track record.
I'll take science over religion
At least science has the balls to admit when it's wrong and update the information. Unlike the church that says "the world is flat and everything revolves around us". Say anything different and you're an outcast.
Science. It works, bitches.
That you think scientists make wild guesses shows what an idiot you are.
And it was science that showed the world was not flat, and it was science that showed the world was not the centre of the universe, despite the church's insistence that it was.
Isa 40:22 stated "the circle of the Earth" thousands of years before any scientist found it to be true. And just because the "church leaders" tell lies to threaten and harness their flock, that is man's problem. There is nothing wrong with what the Bible really teaches. The Bible does not claim to be a science textbook, but when it does speak of nature and the natural world, it is accurate.
Actually, the Earth was proven to be a sphere around 2,300 years ago by Eratosthenes. Of course, anyone with a straight stick and a steady hand could show the horizon has a curve to it. Or if they saw that ships slowly sank over the horizon. Or that as you walked toward a mountain the peak appears first. Etc etc etc... Observation would have shown the earth was a sphere even before it was proven via geometry by Eratosthenes. So during the time the Bible was being written most any educated person would have known the Earth was round.
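The geometry behind Eratosthenes' estimate is simple enough to redo in a couple of lines. The ~7.2 degree shadow angle and ~800 km Alexandria-Syene distance below are the commonly quoted approximate figures, used here purely for illustration:

shadow_angle_deg = 7.2   # noon shadow angle at Alexandria when Syene had none
distance_km = 800.0      # approximate distance between the two cities

circumference_km = 360.0 / shadow_angle_deg * distance_km
print(circumference_km)  # ~40,000 km; the modern equatorial value is about 40,075 km

The method only assumes the Sun's rays are parallel and the Earth is a sphere; a flat disk lit by a distant sun would cast the same shadow angle everywhere, so there would be no difference to measure.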
As far as I am aware, a circle is still flat. Although, it doesn't have corners, as it says in Isaiah and Revelations.
Don't forget science also brought us global warming, nukes, guns, and lots of different ways to kill people.
Very true. Science has created the ability to easily kill other humans. Shame on them. But you also should consider the church's role in the death of humans over the centuries. There has been more blood shed in the name of god than can be counted.
Remember those scientists 400 years ago who said the Earth was round and wasn't the center of the universe? Well they were all right and they got executed by the church. Science > Religion.
Chris, how do you think the transistors inside your computer work? Hint: quantum mechanics. That's not the entire answer.
Yes, it's called the scientific method. It's the very best thing we have for understanding the world and the Universe around us. It's what we use to seek the actual truth. Theories and ideas can shift in favor of the data. Our understanding of the world can change. Can religion change?
"world is not center of universe or even solar system. Science: oops, oops, oops" If you look it up it was actually the church that said that the earth is the center of the universe and the solar system so before you start bashing science get your facts straight.
So, what you are saying is science is willing to adjust its reality when new data proves the old theories wrong? And that's an argument against science?
Whereas, religion already KNOWS the answer, despite the "data points" that come along?
You've just convinced me that science is the way to go......thanks, buddy!
Everything the human race thinks it knows is a tiny speck of what the universe is and has to offer. When I read about scientists being surprised by results I have to scratch my head. Isn't that the point? If you think you have answers, start looking over your shoulder. If you reach certainty you are totally screwed.
Very true. In reality, people don't know anything. They only think that they do. As soon as people think that they know, it really means that they don't.
I love how all you people diss science, yet everything you use was created through science, even a simple seesaw. Most of you people who diss scientists probably have very little education, and know very little about your own backyard. That is why, when I read posts about people bashing these guys and girls who have spent their lives investigating facts, I just roll my eyes. You people haven't lifted one finger to find facts; most haven't even lifted a single finger to study their own bible. What a bunch of morons.
I thoroughly enjoyed most of the banter on this thread, and indeed do support this theory of the unexpected aspects of these blue giants in our cosmos. While I was reading, my cat Riley was playing with String Theory too.
I just love when they come up with new observations that blows their mines. I will defferently look in to this.
Just please promise us you will never return here to write about what you "defferently" look into. Hope your mines don't get blown.
A Blue star is no surprise to me. All throughout the universe there are the two most extreme energy sources, from the highest to the lowest energies.
All mass / elements act differently to these extreme energies. At extreme low energies the mass / elements magnetize and attract like elements.
At extreme high energies the same mass / elements are stripped of their electrons and become a plasma state of that mass / element. Yet these two states of mass / elements are attracted to each other.
Thus we have the makings of an inner magnetic core & outer plasma core of a star. It is this process of the attraction of the two states of the mass / elements that brings about the contraction of the plasma-state mass / elements into a high-density mass / element of about 1/137th. This action forces the six strong force fields of the mass / elements to polarize & propagate out as what we know as Gravitational fields.
This is why we see that the big gas planets and now the Blue stars
Uhm... what? O.o.
Lucky Charms now has blue and red stars, they're magically delicious.
Wish they just sold the box with the yummy marshmallows only... I can't stand the cardboard-flavored bits...
This primarily means that even our greatest scientific minds still look up into the sky with the bewilderment similar to that of a child. We as a people truly have no idea about the physics of the universe or the dark secrets it holds. What's most important now is unifying all nations to be on the same page. We need to calculate our resources, be at peace with one another and share knowledge to advance our species and heal our planet.
Get rid of Islam and this just might happen.
Get rid of Capitalism and it just might happen.
get rid of all religions-Islam is just the new boogeyman--and you fell for it--sucker!
Get rid of occupywallstreet and it just might happen
Fastest growing religion in the world, so not likely to happen. As far as capitalism... it's not the problem, it's all the lazy people who are jealous of others that work hard.
Get rid of all religions, indeed.
All religions are just ancient mythology. But sometimes necessary to keep the ignorant masses in line.
Capitalism and Religion are the two root causes of all fighting in humanity. One is trying to tell you how to get to heaven their way, the other wants you to either work to make someone else rich or trick others to do it for you.
Religions are the most evil thing on this planet! They are designed to keep you blindfolded
Get rid of Dan and it just might happen.
Religion is the absolute worst!
Science flies you to the moon
Religion flies you into a building!
I have that shirt! Gets a lot of attention...sometimes unwanted attention from people similar to the Phelps family with their Westboro Baptist Church and all.
"We as a people truly have no idea about the physics of the universe or the dark secrets it holds. What's most important now is unifying all nations to be on the same page. We need to calculate our resources, be at peace with one another and share knowledge to advance our species and heal our planet."
If we "...truly have no idea about the about the physics of the universe..." then how can we possibly unify "all nations" and "calculate our resources" ?
Information is about as useful as the info I get from playing the latest video game with my son...
This is a science FACT; it is not supposed to be USEFUL information. What a stupid comment.
This is not a FACT, it is a supposition – a theory, and no more than guesswork. How did you come up with the concept that this was a FACT? Just because you read it on CNN? READ it again – they are guessing and have no proof. That does not make it a fact.
Most thinks in science can not be proven, not facts... Just educated guesses...and often not even that educated a guess.
Things not thinks... ah its early I need a coffee.
Ah, but a scientific theory is much more than just a theory. One definition of a SCIENTIFIC theory: a statement or group of statements that explains the observable facts. True, there could be more than one explanation, but being very knowledgeable about astronomy, I would agree that this theory is probably correct. Scientists have proposed this scenario to be the answer in the past. Similar situations with other types of stars occur all the time.
It's a theory, sort of like evolution :).
If the headline isn't about something you're interested in, why did you click on it? And, if after reading the story you still weren't interested, why comment on it?
DP, agree with you. It's kind of like folks complaining about the TV shows. If you don't like it, don't watch it! If you're not interested in the subject, then don't read it!
Trolls. They just post because they want attention. Lonely, pathetic losers seeking attention. Very sad.
Darn it. This article is too interesting! I should be doing other things, but now all I can do is think about Blue Stragglers and comment on them. They really should make them less interesting. ; )
"Umh, Gork, what is that?"
"Me call it 'fire'"
"What it good for?"
"Don't know yet. Just curious"
"Useless. Go back to playing with rock."
That's like the same thing that happens when a much younger woman marries an older man. The result is a mass transfer of funds, and eventually, as science has proven, the much younger companion takes it all. Such is life.
Life imitating science?
Maybe she just blinded him with science.
So many science haters here. Obviously, knowledge and curiosity are just beyond your capabilities.
You're such an intelligent sweaty nerd.
WE CAN'T HAVE NICE THINGS!!!!!!!!!!!!!!!!!!!!!!!!!!!
The posts on this blog are without a doubt the strangest that I have seen, ever, on the internet. How did this one article bring out so many bizarre thought processes among the people tonight?
Agree... especially the one about the video with his son. One of the most stupid comments ever.
Good night, guys. It was great discussing with you.
Everyone arguing. Exactly what the fuck are you arguing about on a page that doesn't even hold an opinion. Do you just like yelling at other people? If I went up to you would you yell at me just for the sake of giving me your opinion? Like what the hell is this..?
You would think, wouldn't you? I don't understand why discussions and even debates always need to turn to a rude and immature place. I wonder who yet has not received the memo that:
NO ONE HAS TO BE RIGHT OR WRONG. Understandings can just BE and be left at that.
"NO ONE HAS TO BE WRONG" Ah, the rallying cry of the colossally wrong. Just like "everyone gets a trophy" and "there are no losers," this philosphy is for the weak. Of course there is right and wrong. Guess which one you are, lunatic?
"It'll be interesting to see if you're still laughing at that on December 22, 2012 when you wake up and realize that you're keyed to the portion of the planetary grids headed straight into the ever-so-conveniently located black hole in the center of this galaxy."
HAHAHA! Oh. Okay. Won't you look like an ass Dec 23rd.
I will be looking for your dumbass to be quietly trolling these pages Dec. 23.
A good book to read is The Bible, The science and Quran
I think some of you people are jumping to the wrong conclusion. Take a look at the link – basically it's a book that points out which parts of these holy books are supported or refuted by modern observation.
Every last piece of information that comes out of NASA is either a complete lie or a complete distraction.
You're an idiot.
you're a tweebo
Eckasha Activation LMAO!
Jared Roussel- newly appointed mayor of the city FAILENHARD. Eat some more paint chips.
It's always the most interesting and compelling to see the extreme transition of those most inclined to categorically invalidate the words of another without proposing absolutely any comparative logic or understanding. You're free to use your energy however you see fit, but one linear day you'll realize that it does you no good to reject someone else's understanding while having very little of your own that wasn't recited directly to you through sociological and mainstream educational constructs (i.e. mind control). You're not going to get the reaction you're hoping for from me, but yeah, remember the words Eckasha Activation. It'll be interesting to see if you're still laughing at that on December 22, 2012 when you wake up and realize that you're keyed to the portion of the planetary grids headed straight into the ever-so-conveniently located black hole in the center of this galaxy. It might merit a little more of your perspective one day. Either way, enjoy your "contributions" to the discussion and have a great night.
Wow, really? You run your mouth quite extensively without any research or knowledge, apparently. Many, and I mean MANY, top and respected scientists have already debunked the 2012 bs. Currently they are more concerned with a massive coronal ejection from the sun striking earth where our magnetic field (you know, that thing that keeps us from floating into space?) is at its weakest. It might seem like the end of the world when it fries satellites, but I would love to see, for 1 day, no one able to log onto the internet or send text messages or use their cell phones. I'm not religious by nature, but I have read the bible, many times over; it's a good read, and unfortunately open widely to interpretation by the reader, and it simply states "no man, nor angel will know the time of the final trumpet", which means only GOD knows when the proverbial manure is going to hit the fan. Here's a simple life's lesson for you: live each day to the fullest, be polite, say thank you, excuse me, etc., hug your wife, kids, mom or dad, make sure they know you love them; that way if it does go down, at least you'll be fulfilled in that aspect. Okay, enough ranting.
How utterly stupid. 2012 doomsday = crackpot theory, and you fell for it hook line and sinker. What a dumbass.
...then, during the final rectification of the last of the McKittrick Supplicants, they chose a new form for him. That of a Giant SLOR! Many Shubs and Zools knew what it was to be roasted in the depths of the Slor that day, I can tell you!
Holy crap, what a loony tune. Seek serious help.
Did you find a dictionary and pick words at random? Your post makes no sense.
Each one of us is created from a drop of sperm. Our own bodies are more complex than the universe. The earth, the sun, the stars and moon are precisely following THE LAW of the creator (those of you who are following CERN and quantum physics: it has been concluded the universe was created from nothing outside of its own realm). The universe has been expanding since "the big bang"; it will cool off and end (NASA research). The creator continues disclosing HIS signs, from everything we see (including ourselves) to the evolution of science. We were created and are continuously evolving. There is no conflict between creation & evolution. The creator is beyond our human comprehension as He is infinite in every way; we are only given finite comprehension. The creator does not have a son, or a daughter, or a mother or a father. The creator did send Messengers to give his divine message, consistently the same, i.e. there is NO GOD worthy of worship but HE ALONE. Regrettably, evil created schisms between humans and created multiple brands of religions fighting each other and only worshiping their own vain desires. Finally, the creator sent HIS final Messenger and revealed HIS FINAL WORD for the whole of mankind and promised that HE would protect His word till the end of the universe. He challenged mankind in His BOOK that if you do not believe it is from Him, either reproduce anything like it or find any contradiction in it (including scientific contradictions). It has been around over 1400 years, during which all scientific discoveries have been consistently in line with the revelation, and several scientific discoveries were foretold in it. Read this book today. It is the Holy Quran. http://www.quran.com May Allah guide you in the right path, as it is the creator who has your soul & life in His hand and it is ONLY HIM who could guide us all through his own mercy. Muslim simply means someone or something who follows & submits to the Will of God (or Laws of God). Therefore, the stars, the moon, the universe and the righteous human beings are ALL MUSLIMS. Just being born with a Muslim name and in a Muslim country does not make anyone a "holier than thou" person. It is the actual practice of righteousness. In the Quran, Allah says, "if anyone kills an innocent person, it is as if he killed the whole of humanity". Therefore, the small minority of misguided people who call themselves Muslims and do not follow the Quran are not and should not be considered Muslims. Therefore, go directly to the source, i.e. the Quran, and you will then learn first hand what are the true Words of your creator, Allah, the Lord of the Worlds. May Allah have mercy upon all of us and forgive our sins.
-Our bodies are more complex than the universe-
Umm, by objective fact of our bodies existing within the universe, and the universe containing multiple bodies, that is fundamentally impossible.
-Those of you following CERN... it has been concluded the universe was created from nothing outside of its own realm-
Seeing as how I'm genuinely interested in physics, considering it's my major, I tend to actually keep track of developments out of there every now and then, especially with this silly OPERA experiment result. Still, never heard them say anything about proving this, I wasn't even aware that CERN was a center of cosmology, mostly because it's not.
The rest of everything you said is just religious statements, so uninteresting to me. No science content after all.
Haha, Andrew. I was thinking the exact same thing about the first point you brought up. It's actually that the cosmic body and what we call our personal bodies are the same structure. One is a microcosmic perspective, and one is a macrocosmic perspective of the same conscious experience. So the multi-cosmos is no more or less complex than our bodies here; they are equally as complex and simplistic.
Zealots are no different from anyone else on this world. We are all destined for the same sticky end and it has nothing to do with Allah, Muhammad, God, Jesus, Buddha or anyone else. You people are wasting your time and everyone else's time with your silliness.
There is no God!
Soooo .... tell me again what this has to do with binary star systems??
the Bible is as Realistic as a Comic Book! it's a Fairy Tale by Border-Line Cave Dwellers! LMAO
So is a copy of Quantum Thermodynamics at this phase in the game. At least the Bible has some good quotes in it. Why bother forming theories of creation when all of this knowledge is already available? The trick is to acknowledge that when you close your eyes, you weren't meant to see complete darkness. That is indicative of our condition as a race; we're in the dark, like a computer whose monitor no longer functions. That's as a result of genetic distortions to our base morphogenetic templates. Such distortions can be cleared and our race memory imprints can be re-established. The proof's in the pudding, but unfortunately that requires eating a little bit of the pudding in order to find out. If you do research on the Eckasha Activation and stick with the technique for a month, you won't have to continue to take anyone else's word for anything. Don't expect the media to come out and tell you about it though. Everyone will just label you as insane and continue to argue. Have fun!
I'm 98% positive that the word you were looking for is "Quantum ELECTROdynamics". The other 2% is "Quantum CHROMOdynamics"
That still makes me 100% positive you didn't mean "quantum thermodynamics", because frankly, that just makes you seem more nutty than you already do. Generally, for quantum approaches to thermodynamics, we simply classify it under "statistical mechanics".
I did mean electrodynamics, so thank you for the correction. But I have a generally different view of understandings of materialized states in general. Quantum mechanics in general on Earth are just budding; I don't think anyone would argue that. But now that I compare the difference between quantum thermodynamics (a totally valid concept) and quantum electrodynamics, they're exactly the same thing, from different perspectives. The varying matter states (not all of which are visible on this plane of Earth) are actually related to a specific order of dimensionalized elemental structures called the Aah-JhA hydro-acoustic body from which different states of geleziac radiation signatures (think: conscious "light jelly" formed upon an encrypted "dark matter", also called rasha, structure) are down-stepped (refracted) through our multi-dimensional universal structures and are then *perceivable as* the various matter states based on their relative locations to one another. It's not easy to describe without a lot of background, but the material is out there in a study called Keylonta. The gist is that no matter state is truly physical but rather they are different corresponding encryptions of radiation depending on which dimensional perception and in which density perception of the dimensionalized structure your consciousness is focused within. These matter states, at their base, are actually formed through the refraction of harmonic structures... basically, vibration [sound] is the inverse of oscillation [light] which gives rise to different geleziac matter states depending on your relative perception. Maybe I can get like two or three concepts through, but I don't expect more than a "what?" back. I'm definitely not confused about the structure, but it's confusing just writing it trying to introduce 25 concepts at once. I'd have to know a little bit more about your background to try to appeal to that.
... You are aware that everything you just said is nonsense, right?
Oh, and my background is fairly standard, I have your usual undergraduate understanding of linear algebra, multi-variable vector calc, familiar working with tensors, have taken classical electrodynamics, statistical mechanics, introductory quantum mechanics, an introductory particle physics course (but it didn't include quantum field theory, so some proofs were fairly ad hoc), and a big bang cosmology course, I've taken others, but those are generally the relevant courses to comment on the subjects you're mangling.
So what's your background? Because something tells me you haven't taken many physics courses. I've made a concerted effort to at least somewhat know the subjects I comment on, though to be fair, there are a LOT of individuals who know vastly more than me. An undergraduate physics degree doesn't mean much; graduate students and especially post docs put me to absolute shame consistently. And that's still well before you get into professors. I've just got a bit more than an average person's understanding.
it has a lot more artistic value than you give it credit for..... it's very very metaphorical because the writers used that metaphor to explain what they had no way of describing.... you just get people who come afterwards that place it on a pedestal and call it fact... but the old testament is very very artistic... that's why it was able to inspire so many people for so long... the quran is also extremely artistic and very poetic.....
I can appreciate the texts. I can disagree with certain parts of them, but I find it extremely disrespectful to downright insult what is sacred to someone else. All outlooks in creation are equally correct and illusory because our perception defines our reality through the experience of our own consciousness as the Oneness that all must invariably lead back to. If you have one individuation and another individuation, it invariably implies that those individuations came from the same place somewhere along the way. So whether you believe in God or that we chaotically exploded from nothingness into somethingness, isn't it effectively saying the same thing? It's the zero-point, the point of all origination and the point of all return.
All religions are nothing but ancient mythology and have no resemblance to reality. Ancient mythology, nothing more. All primitive cultures have their creation myths and other myths that become a religion. They're tweaked and modified over the centuries until the wording is such that it pulls you into its fantasy world. Truly intelligent people are able to see it for what it is and rise above it. If it brings comfort and control to the ignorant masses then it's worthwhile, but as far as respecting it, why should anyone show respect for ignorance or for ancient fairytales?
Your realism is more akin to a stupid teenager justifying skipping sunday school by saying he's an agnostic. Get a life idiot.
Ah, yet another one of the ignorant masses. Have fun with your invisible friends in your fantasy world.
Every last piece of information that comes out of NASA is either a complete lie or a complete distraction. It's that simple. Lots of it is scientifically reasonable (on a planet where we still measure distance in units that date to an origin of our own feet), but that doesn't make a lick of difference in the world when you've got a huge meta-galactic alignment approaching on 12-21-12 and people are too busy paying attention to the *theory* that certain subatomic particles even *exist*. For all that, I genuinely find it more enlightening to learn about the latest happenings of Lindsay Lohan. I believe she went back to jail today, bless her.
Talk to me in a year.
You won't be here in a year. The authorities will have hauled you off to the looney bin.
Lay off of the moon-shine,
haha.... meta-galactic alignment.... still won't have enough gravitational effect to even affect us.... the strength of gravity falls off with the square of the distance... meaning that every time you double your distance from a source of gravity, the force drops to 1/4... so if you were a mile away from a source of gravity and measured it, then went 2 miles away and measured it again... you would be only twice as far, but the pull would be 4 times weaker... and at 4 miles out it would be 4 times weaker still than at the 2 mile mark.... which just means that even if all the planets line up they are not going to have an effect on us.... meta-galactic alignment? even if it "appears" that our sun is rising in the middle of our solar system... in reality it's way way too far away.... that's like saying a basketball on earth is affecting the movement of Jupiter.... asinine
First off, contain the comments like "asinine" because if you want to actually debate and not just listen to the click-clack of yourself typing, it doesn't do you any good to categorize my logic as such. That's immature and disrespectful. As to address your point, suffice it to say that if you've been studying the electromagnetic fields on Earth and in Milky Way, you don't have a full picture of what the electromagnetic natural alignments are supposed to look like because what we call "gravity" here is actually a force that is being perpetually strengthened through artificial technology. Don't be so quick to trust your corporate-written textbook that obliviously spouts off self-admitted "theoretical particles" (gravitrons) that attempt to account for what really here amounts to a strange force that Newtonian physics account for because objects here appear to have a tendency to be pulled (pushed, actually) toward the planetary core. Your logic about moving "a set amount of distance away" (arbitrary) and increasing/decreasing by increments of 1/4 (specific) in quantum bears no semblance of mathematical congruency even by conventional scientific understanding. Now, a meta-galactic alignment actually relates to the Kathara structure (look these things up instead of trying to argue with me... this isn't some intelligence contest) of galactic systems. What we call "the Milky Way" is actually a misaligned portion of the universal system of what we call Andromeda M31 here on Earth, also known as the Aquinos matrix. The entire existence of Milky Way was resultant from a fall from a planet called Tara from an event that occurred far back in our relative past (and is now, evolutionarily, located in our relative future). When I speak of metagalaxies, I'm speaking of the space-time locations in between our galactic and universal system, so in this case that would be the union set of space-time locations (stellar alignments) existing between Milky Way and Andromeda M31/Aquinos. December 21, 2012 has always been a critical date, especially in recent times, because it is the point which our planetary templar complex (organic planetary star gate structures) align with the galactic templar complex and the galactic templar complex aligns with the metagalactic templar complex (which is part of the universal templar complex). Basically, you've got entire galactic CORES aligning. Conventional science doesn't even *acknowledge* the concept of a galactic core because they have not even correctly defined the correct boundaries of a solar system and galaxy yet. When we look at the stars, what we perceive as "outer space", we are seeing the distance in space-time between our *sun* and our local particum veca system (local galaxy). We are actually linked into the perception of our higher galactic systems through our solar core (which is the center of our solar system). No, it doesn't appear to make sense at first because we don't perceive ourselves as being located inside of our sun, but we actually are because our entire solar system is part of a larger star/planet called Urtha which is located in Andromeda M31/Aquinos. This is a big, big story, so while I encourage you to learn as much as you can, you should also consider that the mainstream understanding represents diddly squat because it is our collective (de-evolved) race understanding of roughly 300-years of continual study (not even accounting for the concept that many people are dying). 
Sure, the applications of mathematics and quantum theory is very valid, but the concept that quantum theory is still called theory is exactly what. Yes, there is a unified field. Yes, there are most certainly ETs and many inter-dimensional races and entire cosmic orders (most benevolent, but some malevolent). Unfortunately we're dealing with a lot of the "bad apples" here which is why we're on a planet that doesn't know much any longer, but help is also here in many, many different capacities for those who are willing to work co-creatively with that. If you are genuine in your interest of learning more, then a good start would be to begin your research into a subject called Keylonta. There are many resources available, and there is a video introduction on YouTube with planetary Guardian Alliance commissioned speaker Ashayana Deane called Ascension Mechanics in conjunction with Project Camelot. It will discuss many of the concepts I've gone into brief detail about here, and if you don't have time to watch the whole thing the first day, you can at least skip through toward some of the charts to see the alignment structures of the various structural complexes. That should help. Best of luck. I'm not interested in arguing if that's your initial response, so I would wish you luck in that case, but I would be open to further discussion here or elsewhere if it's something that proves to be mutually engaging and beneficial. ∞ Love
Oh Jarad, please please tell me what you are smoking.. I want it too..
I totally want to get on the bandwagon you tout.. but I seem to be way too sane to follow your logic, and the magic escapes my puny sane mind.. Please tell me the magic joint you are doing too
I think you should go back to eating Subway sandwiches.
And yes I'm aware I'm complaining about having to go through the frontpage of Nature, but it's nice to keep the links pointed to the papers themselves, rather than to the nature frontpage, which is not going to be current nearly as long.
God d-mmit CNN!! When a paper in Nature comes out, LINK TO THE DAMN PAPER! I'm sorry but that should be instinctual. If you're going to have the hyperlink for Nature and for the Kepler project, you might as well give us the link to the actual paper you're citing. It's courteous to your readers to save us the trouble of tracking down citations like that.
It'd behoove you to acknowledge that CNN is not here to provide information, so sources are simply viewed as above and beyond. CNN is here to provide an entertaining and emotionally-responsive form of mind control to cover up the fact that this entire planet is under complete ET manipulation. The media, through NASA, presents us with these whimsical views that the universe is less than 14 billion years old and that we're the smartest race out there (complete with ego pats) while other corporate-controlled outlets such as "The History Channel" (listen to the name for God's sake) attempt to feed us further lies about "Ancient Aliens" being our original creator Gods as referenced by Egyptian hieroglyphs. All of these falsehoods and half-truths all go back to the smorgasbord of competing intruder agendas interested in harnessing the quantum of Earth's electromagnetic fields through a technology called the NET. That's the short truth.
Heh, the really interesting thing is forget the media, I had a professor who worked on the WMAP project. I'm doing a research paper concerning how we modeled the CMB from discovery till WMAP (and possibly touch on Planck)... so apparently I'm part of the media conspiracy.
Science explains everything, but science.
Yes because science is ordered spirit. So if you ignore spirit then you ignore the very essence of what science is being applied to. It's studying some undefined thing that just exists from nowhere. The issue is that religion presents us with a distorted view of spirituality, so science rejects it (for some good reasons) and then on the same token, Earth's version of science in its current state provides very few complete theories, even by its own logic, as to how our creation is. The union between these two now-competing focuses (and then some) is where the truth actually lies. The bigger point: stop fighting over it, and we'd get somewhere.
Very well said. My statement neither favors science nor religion. It argues that scientists have to stand by the basic principles by which the laws of science operate; the same is said for those of many religions.
I think step one is to reject both, step two is to learn the truth, then step three is to re-accept that both were partially on the right track the whole time. Our issue is that we're fighting, and if one is right then they are hell-bent on disproving the other. That's called polarity, and whether you look at polarity from a scientific perspective or a spiritual perspective, it means that we are not unified and aren't getting anywhere. The concept that someone can simply study religion and have no clue about science and still feel somewhat knowledgeable about the workings of the cosmos is just as ludicrous as the concept that one can be a scientist and know every behavioral aspect of subatomic particles yet still not understand the true origin of such a beautiful and divinely perfect mathematical structure. Oneness, on every level, is what guides us back to the original understanding that created our ability to argue in the first place.
If you were born 1000 years ago, you would have wasted your one and only life, arguing that it is DEFINITELY a chariot of gold pulling the sun across the sky cause your mommy told you it was. The magical sky horses. Guess what. I still don't believe in your magical sky horses.
I have observed that people want to say something but realize that they don't know JACK SQUAT about the topic at hand or for the most part science/mathematics/physics in general, so they make snide comments/jokes/insults because they want to say something witty anyway to cover their ignorance and appease their need to say something. Kinda like I just did right here. So I'll be the first to honestly say it: I am a science enthusiast, but my knowledge is limited. I don't fully understand astrophysics, but I do understand how important these processes are in our understanding of this universe and thus our place in it. I am just smart enough to know and acknowledge how stupid and ignorant I really am and I'm not going to try to fake being smarter than I really am or picking on someone else just to make myself feel better, because that is disingenuous, misleading, and unnecessarily mean to our fellow mankind. Spread the good karma, not the bad.
It's surprisingly easy to understand physics, and even to some extent math. Math is often taught in very poor ways, but it's surprisingly easy in concept to go back over. And, with things like wolfram's integrator, you can even skip some of the busy work now. So you can understand conceptually how the math works without constantly needing to remember things like which trig identities solve which types of integrals. int(1/(x^2-1)) seems deceptively easy at first, but really, it feels kinda like reinventing the wheel to try to explicitly solve it. It's nice there are resources now that allow us to omit the busy work and still have an understanding of what's going on.
I suggest you start at the beginning of your present math knowledge, and build it up slowly. You'll be amazed at what kind of new insights you can find regarding how math works, and how it ties into physics. (After all, physics is to math what s-x is to mastu-bation.)
the universe has always been here. the "big bang" is when a star is born. mass accumulates and as it accumulates gravity pulls more and more mass together. Eventually it comes to critical mass and ignites, and theres your big bang. to have the entire universe appear from one big bang is absurd.
Wrong: the big bang is when I, your Lord and Savior Jesus Christ, BANG out some co me from my rock hard co ck using these holes in my hands.
Now if you excuse me, I have some universes to create.
BANG!! Universe created to a facebook picture of your mom.
the big bang really wasn't a big bang..... it's nothing like what you're visualizing.... there was no space or matter when it happened, because they were just products of it.... there are no combustive characteristics.... gravity, electromagnetism, and the weak and strong nuclear forces are also just products of the big bang....
the big bang can best be described as the sudden appearance and rapid expansion of space.... not an explosion from tension building due to gravity... just the expansion of space.... expansion not explosion.... and space is still expanding to this day.... the expansion of space is the only thing that's faster than the speed of light..... and that means it functions outside what we understand to be everyday universal physics...
While Jesus was being facetious, Joe, you were being serious. That's really sad.
But I'm sure you're an expert...
Politics and religion are always sure to start an argument. When I was growing up my parents taught me not to argue about either. I vote but I don't brag about it. I don't put bumper stickers on my car in support of my candidate. I do my civic duty and vote. I am a registered Democrat but I have voted for a Republican or two. One I was pleased with; one absolutely made me livid. But I vote for the candidate that says he cares about the issues I'm concerned with. I'm not an atheist but I respect those who are. I respect all human beings except serial killers and folks who hurt other folks. When it comes to what is after death or what is not after death, no one knows. We all think we know based on our belief or lack of belief, but we really don't know. Why do Christians and atheists always have to argue? What if Christians and atheists are both wrong? I'll leave you to your own imagination on that one! Good Day, Joe.
It's me again, your Lord Jesus Christ, through me all things were made.
Just wanted to let everyone know that I just masterba ted by putting my co ck through the nail-holes in my hands. Totally worth being nailed to a cross to be able to rub one out this way.
Anyways, sorry about the whole giving-your-grandmother-cancer thing. I work in mysterious ways.
Now if I can just bend over enough to use the holes in my feet...
Hey guys, it's me Jesus Christ, Lord and Savior of all mankind! I just wanted to let you know that this whole star thing is totally fine to believe, there's nothing in my Bible which I wrote myself over the course of several thousand years that says stars aren't allowed to be burning balls of hydrogen.
On the other hand, you better not be g ay or else I'll da mn you for all eternity.
So to recap, stars: cool, fa gs: not cool
The whole Genesis story has so many holes in it, it should not be believed by any intelligent person. I used to think "God's Day" is different too, but he would have had to change the length for each one, since we know how long it took to create each thing, and they are vastly different. That means that God is inconsistent. But God can't be inconsistent, because he's perfect. Inconsistency is imperfection. Therefore, there is no God, because an imperfect God cannot exist.
I find so many inconsistencies in what my Catholic religion tells me to believe that I find it too funny to actually accept. Since God created everything, as my Catholic bruthas insist, then everything that's both good AND bad and sinful is God's creation, but religious people only wanna give positive props to God when it suits their needs and don't want to blame God for all the other bad things as well. It's so inconsistent, and that bugs me also. The Bible is just a bunch of fairy tales to make us feel good and to give us a general idea of common morals and themes to abide by, and I'm ok with that; I'm just not ok with people knocking on my front door on Sunday morning in suits with Bibles in hand, taking away my one day out of the week when I can enjoy my football.
The information is just alliterative at this point because the context of the days of creation has been removed from the original text. My posts are always removed every time I discuss them, but the true information is out there under a study called Keylonta. The days of creation are actually part of a very complex sequential, mathematical natal cycle called the SEda cycle of which the original conscious inflow from Source individuates into manifest creation through the perpetual birth of the rasha body ("dark matter" template), spirit body (etheric matter) and light body structures. If you look up the query "Legacy of the Lost, Freedoms of the Found" in Google, you can find your way to the original Krystic mechanics which is a re-translation of the original material where the Bible was once sourced from prior to the Illuminati-Leviathan Council of Nicaea editing the text back in the 300 AD period. Good luck with everything! The truth is out there if you are willing to do your research!
Wow! You've really gotten sucked into a fantasy world. I can recommend a good deprogrammer for you.
If your posts are removed its because you are a nut job
60% of Americans believe that the Bible is literal truth. Of the other 40%, many don't believe in science, and believe global warming is bunk. I used to think that most intelligent people accepted the findings of science. Not so, sad to say. Myths and their upbringing get in the way.
The comment sections in these articles aren't exactly here because people like to read about people getting along with each other. They are here because people absolutely cannot resist an argument or a chance to bicker with someone. And for people like me, it is very entertaining.
What is this, asshole central?
Even Ass Holes are Stars
I'm certain yours is.
Now you understand the source of dark matter!
This is all fake, god did not want for us to know these lies that these "scientists" claim to be real. god has made this universe and it is very small and only contains our solar system because we are the only reason the universe exists. god will show you that this is all fake and that these "scientists" are just trying to trick you into believing that there is no god and that the universe is in fact vast and possibly unending. god is right "scientists" are wrong. read your bible and you will see.
Why are you out of bed, troll? Don't you have to get up for preschool in the morning?
I would read my bible but the Easter Bunny told me not to believe in it.
It's ok people, calm down. If you really pay attention, I assure you he's being facetious, he's joking. He's poking fun at the fact that there are people who actually believe that; someone who believes that wouldn't write it the way he did, so just enjoy the satire, people.. I'm a Christian and it gave me a big laugh
The computer you are typing on is an inescapable proof of the science you reject. There is no way to put this other than this: you are a puling idiot. Crawl up your anus and die.
what are you a whack job? then how do explain all the billions of stars?? are them fake too? I suppose you voted for obama too. how do you like the change? I have a little change in my pocket, is about what that amounts to.
Gee, a man who barely graduated high school here thinks he knows more than all the astrophysicists and biologists in the world. I'll take the numbers and empirical evidence of scientists any day over crackpot illiterates.
actually God created all the planets and all the stars in all the galaxies in all the universes. No, it wasn't 6000 years ago. Obviously the six "days" referred to in Genesis is not a literal 24 hour day, but a period of time. In the bible it also states that a thousand years to God is like one day and one day as a thousand years... Though this is all figurative, it shows that God's timetable is much different than ours. A creative "day" could've lasted millions of years. Anyway, I do agree that man has been on earth for about 6000 years. That's what all the evidence points to.. science has supported these claims showing that earth is billions of years old.
What an idiot.
Take the time to read the bible from cover to cover. The entire thing - don't skip around. How many flaws did you find? Man conceived god...man wrote the bible.
Also, we have been on this planet a good part of 100,000 years.
Science answers everything.
god answers nothing. It's a weak argument.
How does the universe work? god did it.... No - we are not satisfied with that answer.
Do you believe there are other forms of life out there? Do you think they might look a bit different from us given that their planet is different than ours? So if WE were created in gods image - who created them?
That's right. Evolution. Same thing that created us.
If we evolved from monkeys and apes, why are they still monkeys and apes, if evolution was real then there would be no monkeys or apes, what an id-iot.
We evolved from the same genus as apes; we didn't evolve from the apes that are out there today. Science must be proven to be considered true. Science cannot be true without proof. For religion to be true, it only requires that someone say it in front of a group of people. Christian scientists have tried to prove the things in the Bible, and they have failed. Science wins.
to Steve.... because we evolved, that means we attained different genes to evolve than apes and monkeys did. that is why they are still apes and monkeys. Idiot.............................................. Phil, nicely written
I love you.
I've waited for a moron like you to show up.
If England colonized America, why do we still have England?
Get my point? We evolved and split into a different species.
Who's the idiot now?
6000 years? Are you kidding? Humanity has been around MUCH longer than that, as referenced by the Wikipedia article "Human" This is not a guess, nor is it referenced from the Bible. It's a fact based on decades of painstaking research and analysis of fossil records.
"Humans (known taxonomically as Homo sapiens, Latin for "wise man" or "knowing man") are the only living species in the Homo genus. Anatomically modern humans originated in Africa about 200,000 years ago, reaching full behavioral modernity around 50,000 years ago."
I am God!
Yeah, Homo sapiens have been around (as documented per human remains) at least 80,000 years. Humans just like us have been tracked that far back, maybe a little longer. Neanderthal (who went extinct about 15,000 years ago) had been around some 50,000 years. I believe in Intelligent Design; I believe the universe is roughly 14 billion years old and that scientists have proven that many times over. However, I believe that for a big bang or the existence of the first and second elements, hydrogen and helium, someone or something had to have set it all off, and that's where a God-like being comes in. God started it all, but the universe went on its way and here we are, 14 billion years later. Science and religion are both right and can coexist; it's the ones who say one is completely wrong and the other is right that are the real idiots.
On the other hand, you better not be g ay, or else I'll da mn you for all eternity.
Ignorance and a fear of death are the only ingredients necessary to make a loyal flock of worshipers. It is sad, really, the lengths you will go to convince the rest of us that your low IQ is NOT being abused by people who only ask for money. And power. Oh, and that you never question why they want money and power. Forever.
Even if God's day is a billion years, it would have taken him 7 days. However, the universe is about 15 billion years old. Where did the other 8 billion years go? Then there is the problem of inconsistency. He created the heavens and the earth in one day, which took almost the full 15 billion. Then he created man in one day, when we know that Homo sapiens is only about 150,000 years old, and that manlike animals are only about 6 million. Then he created woman in one day. God doesn't work by any kind of schedule I can figure out.
It's a fact – The #1 source atheists cite for not believing in God: reading The Bible with an open mind.
Exactly. Along with using reason and logic instead of blind acceptance.
Human remains (including settlements, structures, clothing, and art) have been found that are considerably older than 6000 years. Sorry, no alien seeding going on here.
Stars are actually very highly intelligent "gods," so to speak. They have a very different consciousness than us. Much more ancient and evolved. We all will become stars one day. I hope I am blue too.
You got it backwards (and you're a moron.) We are the product of dead stars, all of the atoms in our bodies (except Hydrogen) were created in the cores of dying stars. We are the ashes that remain.
I understand oh non-enlightened one. One day you will see the Light! Stars have a consciousness too. Soon, we all will be ONE.
More likely the other way around, I think. We were stars, or at least born from a new star. A giant accretion disk that cooled and formed into planets to become us. And it's okay to be just us. It reminds me of "Essay on Man" by Alexander Pope.
Now don't get all wordy with us here please.
I once had a similar thought. Then the acid wore off.
Certainly you will become blue. Find the nearest large ziploc bag and place your head inside it. Think happy star-thoughts.
Humans should be called Gorilla and Chimp Hybrids cuz thats what they look like! LMAO
with a little bit of Neanderthal breeding mixed in! hehe
just shave the Inter-Species bred end product and you'll have what Science loosely calls the Human "Race"... which is anything on 2 legs in today's world...
Birds are on two legs. Next.
Don't judge the story by this crappy article. There is a much better-written story at http://www.msnbc.msn.com/id/44964854#.Tp-c37LBWUl which gives more information and doesn't have Geller coming across as a goober. It also has more facts. Better yet, just go read the actual article in Nature. You can get an idea of the detail in the Nature article by visiting the link to Nature.com where you can see some of the accompanying graphs and more.
CNN really needs to find some real journalists to write their articles and quit relying upon poorly-educated unpaid interns and random bums off the street to create their web content.
Yes, this article is totally unreadable. It appears to have been written by a 10 year old who has neither the grasp of English nor Science. It's no wonder the masses flock to the feel-good blanket of religion when an explanation of the universe around us is as badly written as this.
It's so funny how these scientists (and their believers, so hungry to deny the existence of God so that they can keep on being evil and selfish and feeling like it won't matter later) get a few data points and come up with a wild guess and that's their reality for centuries or decades until they get another data point and reform another wild arse guess. Like the current "truth" that the universe is rushing apart at faster than the speed of light, because of PERCEIVED red-shift : what a load of BS. They took one thing found by Edwin Hubble and decided it's all rushing apart faster than light. You know we'll find out later that is totally wrong, just like oh, oops, world is not flat, oops again, world is not center of universe or even solar system. Science: oops, oops, oops, there is no God, oops oops oops... oops
It seems a very reasonable and normal theory. wondering what makes it published both in Nature and CNN.........................................
How long does this mass transfer take? Millions of years?
What version of MS Paint did they use for the picture?
Looks like CorelDraw, circa 1997.
In a galaxy far far away.... I'm actually impressed they didn't add that lame pun in there.
Had a professor once that called his students Earthlings... I dropped the class after 12 minutes.
Technically we are all earthlings.
Surely your loss was their gain.
I've always preferred the term "Terran" myself. "Earthlings" sounds a little too much like finger food for my comfort. Or maybe "Terrestrials".
Terrans have nothing on the Protoss but at least you are all better looking than The Zerg
Haha the original Taran races (from Tara) were called the Taraneusiums. That's where the word terra meaning Earth originally came from, from Tara which is Density 2 expression of Urtha (of which this plane of Earth is a part).
The only thing this is missing is Once upon a time.....
Light Years strives to tell the stories of science research, discovery, space and education. This is your go-to place on CNN.com for today’s stories, but also for a scientific perspective on the news and everyday wonders. Come indulge your curiosity in all things space and science related, brought to you by the entire CNN family.
July 19th: Atlas V launch of US DOD MUOS-2 satellite, notable for large "551" config of Atlas
Aug 3rd: Japanese HTV-4 flight to ISS on cargo supply mission
Aug 14th: SpaceX launch of Canadian satellite in the first launch from their new Vandenberg facility, and first launch of upgraded Falcon 9 v1.1 launch vehicle
Aug 28th: Delta IV Heavy launch of NROL-65 spy satellite
September: Soyuz TMA-08M flight returning Expedition 36 crew from ISS to Earth (Kazakhstan)
Sept 12th: Orbital Sciences maiden flight of Cygnus cargo vehicle on Antares rocket to ISS
Sept 25th: Soyuz TMA-10M flight launching Expedition 38 crew to ISS
Dec 9th: SpaceX Dragon launch by Falcon 9 v1.1 on CRS-3 cargo supply mission to ISS
Recurring: first powered test flights of Scaled Composites' SpaceShipTwo commercial vehicle, to be used by Virgin Galactic for sub-orbital tourism
Amy has a master's degree in secondary education and has taught math at a public charter high school.
Watch this video lesson, and you will understand how Euler's circuit theorem, Euler's path theorem, and Euler's sum of degrees theorem will help you analyze graphs. Also, get some practice with the quiz.
In this video lesson, we will go over three of Euler's theorems relating to graph theory. Why are these important? These are important because they help you to analyze graphs such as this one:
Why are graphs such as these important? If our graph represents a neighborhood where the dots are intersections and the lines are the roads, then graph theory can help us find the best way to get around town. For example, graph theory can help the mailman deliver his mail so that he doesn't have to back-track or pass by the same road twice. Euler's theorems come in handy because they tell the mailman whether an efficient route is even possible just from looking at the graph. How does this work? Let's take a look at Euler's theorems and we'll see.
Euler's Circuit Theorem
The first theorem we will look at is called Euler's circuit theorem. This theorem states the following: 'If a graph's vertices all are even, then the graph has an Euler circuit. Otherwise, it does not have an Euler circuit.' What does this mean for our mailman?
Recall that an Euler circuit is a route where you can pass by each edge or line in the graph exactly once and end up where you began. This helps the mailman figure out whether there is a route that he can take where he ends up where he began and where he goes through each road just once. If such a route doesn't exist in the first place, then there is no point for him to even try to figure one out. A vertex is of even degree if an even number of edges connect it to other vertices.
If all vertices have an even degree, the graph has an Euler circuit
Looking at our graph, we see that all of our vertices are of an even degree. The bottom vertex has a degree of 2. All the others have a degree of 4. This means that the graph does have an Euler circuit. This tells the mailman that, yes, there does exist a route where he doesn't have to back-track. He can now go about figuring out such a route.
Euler's Path Theorem
This next theorem is very similar. Euler's path theorem states the following: 'If a graph has exactly two vertices of odd degree, then it has an Euler path that starts and ends on the odd-degree vertices. Otherwise, it does not have an Euler path.'
Recall that an Euler path is a path where you pass by each edge or line in the graph exactly once, and you end up in a different spot than where you began. A path is very similar to a circuit, with the only difference being that you end up somewhere else instead of where you began. An Euler path is good for a traveling salesman or someone else who doesn't need to end up where he began. Looking at our graph, we see that we don't have any vertices of odd degree. This tells us that this graph does not have an Euler path in it.
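To make both degree tests concrete, here is a minimal Python sketch; the edge list in the example is made up for illustration and is not the specific graph from the video, and the theorems also assume the graph is connected:

```python
from collections import defaultdict

def euler_status(edges):
    """Classify a graph by Euler's circuit and path theorems.

    edges: list of (u, v) pairs describing an undirected graph.
    Returns "circuit", "path", or "neither" based only on vertex degrees.
    """
    degree = defaultdict(int)
    for u, v in edges:
        degree[u] += 1
        degree[v] += 1

    odd_vertices = [v for v, d in degree.items() if d % 2 == 1]

    if len(odd_vertices) == 0:
        return "circuit"   # every vertex even -> an Euler circuit exists
    if len(odd_vertices) == 2:
        return "path"      # exactly two odd vertices -> an Euler path exists
    return "neither"

# Hypothetical example: a square with one diagonal.
edges = [("A", "B"), ("B", "C"), ("C", "D"), ("D", "A"), ("A", "C")]
print(euler_status(edges))  # "path": only A and C have odd degree
```

The check is deliberately limited to counting degrees, which is exactly the information the two theorems ask for.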
Euler's Sum of Degrees Theorem
This next theorem is a general one that works for all graphs. Euler's sum of degrees theorem tells us that 'the sum of the degrees of the vertices in any graph is equal to twice the number of edges.' This means that if we have 3 edges, then we will get 6 after adding up the degrees of each vertex. Let's look at our graph and see if this theorem is true. We expect the total number of degrees from our vertices to add up to twice the number of edges in our graph. Let's see how.
First, let's count the edges. We have 11 edges. This means that we should expect the total number of degrees to add up to 22. Let's see if it does. The bottom vertex has a degree of 2. The rest have a degree of 4. So, we have 2 + 4 + 4 + 4 + 4 + 4 = 22. Hey, look at that; we got 22! Just like the theorem says! It works. Why is this theorem useful? This theorem lets you know whether or not the graph you are looking at is legit.
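For a quick sanity check of that count, here is a tiny sketch in the same style as the snippet above (the edge-list representation is an assumption, not something from the lesson):

```python
def degree_sum_matches_edges(edges):
    """Euler's sum of degrees theorem: total degree should equal 2 * number of edges."""
    degree = {}
    for u, v in edges:
        degree[u] = degree.get(u, 0) + 1
        degree[v] = degree.get(v, 0) + 1
    return sum(degree.values()) == 2 * len(edges)

# With 11 edges, the degrees must add up to 22, just as in the lesson's graph.
```

Because every edge contributes exactly 2 to the total, the check can only fail when degrees have been counted or copied down incorrectly, which is precisely how the theorem is used to tell whether a drawn graph is consistent.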
Let's review what we've learned. We learned that Euler's circuit theorem states this: 'If a graph's vertices are all even, then the graph has an Euler circuit. Otherwise, it does not have an Euler circuit.' Euler's path theorem states this: 'If a graph has exactly two vertices of odd degree, then it has an Euler path that starts and ends on the odd-degree vertices. Otherwise, it does not have an Euler path.' Euler's sum of degrees theorem tells us that 'the sum of the degrees of the vertices in any graph is equal to twice the number of edges.'
These theorems are useful in analyzing graphs in graph theory. Euler's circuit and path theorems tell us whether it is worth looking for an efficient route that takes us past all of the edges in a graph. This is helpful for mailmen and others who need to find a most efficient route.
Once you are finished with this lesson you should be able to:
State three of Euler's theorems
Determine if a graph has an Euler circuit
Define an Euler path
Recall why Euler's theorems could be useful in real life
Routing is the process of selecting a path for traffic in a network or between or across multiple networks. Broadly, routing is performed in many types of networks, including circuit-switched networks, such as the public switched telephone network (PSTN), and computer networks, such as the Internet.
In packet switching networks, routing is the higher-level decision making that directs network packets from their source toward their destination through intermediate network nodes by specific packet forwarding mechanisms. Packet forwarding is the transit of network packets from one network interface to another. Intermediate nodes are typically network hardware devices such as routers, gateways, firewalls, or switches. General-purpose computers also forward packets and perform routing, although they have no specially optimized hardware for the task.
The routing process usually directs forwarding on the basis of routing tables. Routing tables maintain a record of the routes to various network destinations. Routing tables may be specified by an administrator, learned by observing network traffic or built with the assistance of routing protocols.
Routing, in a narrower sense of the term, often refers to IP routing and is contrasted with bridging. IP routing assumes that network addresses are structured and that similar addresses imply proximity within the network. Structured addresses allow a single routing table entry to represent the route to a group of devices. In large networks, structured addressing (routing, in the narrow sense) outperforms unstructured addressing (bridging). Routing has become the dominant form of addressing on the Internet. Bridging is still widely used within local area networks.
Routing schemes differ in how they deliver messages:
- Unicast delivers a message to a single specific node using a one-to-one association between a sender and destination: each destination address uniquely identifies a single receiver endpoint.
- Broadcast delivers a message to all nodes in the network using a one-to-all association; a single datagram from one sender is routed to all of the possibly multiple endpoints associated with the broadcast address. The network automatically replicates datagrams as needed to reach all the recipients within the scope of the broadcast, which is generally an entire network subnet.
- Multicast delivers a message to a group of nodes that have expressed interest in receiving the message using a one-to-many-of-many or many-to-many-of-many association; datagrams are routed simultaneously in a single transmission to many recipients. Multicast differs from broadcast in that the destination address designates a subset, not necessarily all, of the accessible nodes.
- Anycast delivers a message to any one out of a group of nodes, typically the one nearest to the source using a one-to-one-of-many association where datagrams are routed to any single member of a group of potential receivers that are all identified by the same destination address. The routing algorithm selects the single receiver from the group based on which is the nearest according to some distance measure.
- Geocast delivers a message to a group of nodes in a network based on their geographic location. It is a specialized form of multicast addressing used by some routing protocols for mobile ad hoc networks.
Unicast is the dominant form of message delivery on the Internet. This article focuses on unicast routing algorithms.
With static routing, small networks may use manually configured routing tables. Larger networks have complex topologies that can change rapidly, making the manual construction of routing tables unfeasible. Nevertheless, most of the public switched telephone network (PSTN) uses pre-computed routing tables, with fallback routes if the most direct route becomes blocked (see routing in the PSTN).
Dynamic routing attempts to solve this problem by constructing routing tables automatically, based on information carried by routing protocols, allowing the network to act nearly autonomously in avoiding network failures and blockages. Dynamic routing dominates the Internet. Examples of dynamic-routing protocols and algorithms include Routing Information Protocol (RIP), Open Shortest Path First (OSPF) and Enhanced Interior Gateway Routing Protocol (EIGRP).
Distance vector algorithms
Distance vector algorithms use the Bellman–Ford algorithm. This approach assigns a cost number to each of the links between each node in the network. Nodes send information from point A to point B via the path that results in the lowest total cost (i.e. the sum of the costs of the links between the nodes used).
When a node first starts, it only knows of its immediate neighbors and the direct cost involved in reaching them. (This information — the list of destinations, the total cost to each, and the next hop to send data to get there — makes up the routing table, or distance table.) Each node, on a regular basis, sends to each neighbor node its own current assessment of the total cost to get to all the destinations it knows of. The neighboring nodes examine this information and compare it to what they already know; anything that represents an improvement on what they already have, they insert in their own table. Over time, all the nodes in the network discover the best next hop and total cost for all destinations.
When a network node goes down, any nodes that used it as their next hop discard the entry and convey the updated routing information to all adjacent nodes, which in turn repeat the process. Eventually, all the nodes in the network receive the updates and discover new paths to all the destinations that don't involve the down node.
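As a rough illustration of the distance-vector idea (not any particular protocol's message format), the sketch below lets each node repeatedly fold its neighbors' advertised costs into its own table; the node names and link costs are invented:

```python
# Distance-vector sketch: each node keeps (cost, next_hop) per destination and
# repeatedly merges the tables advertised by its direct neighbors.
links = {  # hypothetical symmetric link costs
    ("A", "B"): 1, ("B", "C"): 2, ("A", "C"): 5, ("C", "D"): 1,
}

def neighbors(node):
    for (u, v), cost in links.items():
        if u == node:
            yield v, cost
        elif v == node:
            yield u, cost

nodes = {n for edge in links for n in edge}

# Initial tables: each node knows only itself and its direct neighbors.
tables = {n: {n: (0, n)} for n in nodes}
for n in nodes:
    for nb, cost in neighbors(n):
        tables[n][nb] = (cost, nb)

changed = True
while changed:                       # iterate until no table improves
    changed = False
    for n in nodes:
        for nb, link_cost in neighbors(n):
            for dest, (nb_cost, _) in tables[nb].items():
                new_cost = link_cost + nb_cost
                if dest not in tables[n] or new_cost < tables[n][dest][0]:
                    tables[n][dest] = (new_cost, nb)   # better route via nb
                    changed = True

print(tables["A"])  # e.g. route to D: cost 4 via next hop B (A-B-C-D)
```

Real protocols exchange these tables periodically and handle failures and count-to-infinity problems, which this sketch deliberately omits.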
Link-state algorithms
When applying link-state algorithms, a graphical map of the network is the fundamental data used for each node. To produce its map, each node floods the entire network with information about the other nodes it can connect to. Each node then independently assembles this information into a map. Using this map, each router independently determines the least-cost path from itself to every other node using a standard shortest paths algorithm such as Dijkstra's algorithm. The result is a tree graph rooted at the current node, such that the path through the tree from the root to any other node is the least-cost path to that node. This tree then serves to construct the routing table, which specifies the best next hop to get from the current node to any other node.
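Once a node has the full map, the shortest-path step can be as simple as the following Dijkstra sketch over a hypothetical adjacency map; a real router would then derive next hops from the predecessor tree:

```python
import heapq

def dijkstra(graph, source):
    """Least-cost distance and predecessor for every node reachable from source.

    graph: dict mapping node -> dict of neighbor -> link cost.
    """
    dist = {source: 0}
    prev = {}
    heap = [(0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue                      # stale heap entry, skip it
        for v, cost in graph[u].items():
            nd = d + cost
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                prev[v] = u
                heapq.heappush(heap, (nd, v))
    return dist, prev

# Hypothetical topology learned from flooded link-state advertisements.
graph = {
    "A": {"B": 1, "C": 5},
    "B": {"A": 1, "C": 2},
    "C": {"A": 5, "B": 2, "D": 1},
    "D": {"C": 1},
}
dist, prev = dijkstra(graph, "A")
print(dist["D"], prev)  # 4, plus the predecessor tree used to fill the routing table
```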
Optimized Link State Routing algorithm
A link-state routing algorithm optimized for mobile ad hoc networks is the optimized Link State Routing Protocol (OLSR). OLSR is proactive; it uses Hello and Topology Control (TC) messages to discover and disseminate link-state information through the mobile ad hoc network. Using Hello messages, each node discovers 2-hop neighbor information and elects a set of multipoint relays (MPRs). MPRs distinguish OLSR from other link-state routing protocols.
Path-vector routing
Distance vector and link-state routing are both intra-domain routing protocols. They are used inside an autonomous system, but not between autonomous systems. Both of these routing protocols become intractable in large networks and cannot be used in inter-domain routing. Distance vector routing is subject to instability if there are more than a few hops in the domain. Link state routing needs significant resources to calculate routing tables. It also creates heavy traffic due to flooding.
Path-vector routing is used for inter-domain routing. It is similar to distance vector routing. Path-vector routing assumes that one node (there can be many) in each autonomous system acts on behalf of the entire autonomous system. This node is called the speaker node. The speaker node creates a routing table and advertises it to neighboring speaker nodes in neighboring autonomous systems. The idea is the same as distance vector routing except that only speaker nodes in each autonomous system can communicate with each other. The speaker node advertises the path, not the metric, of the nodes in its autonomous system or other autonomous systems.
The path-vector routing algorithm is similar to the distance vector algorithm in the sense that each border router advertises the destinations it can reach to its neighboring router. However, instead of advertising networks in terms of a destination and the distance to that destination, networks are advertised as destination addresses and path descriptions to reach those destinations. The path, expressed in terms of the domains (or confederations) traversed so far, is carried in a special path attribute that records the sequence of routing domains through which the reachability information has passed. A route is defined as a pairing between a destination and the attributes of the path to that destination, thus the name path-vector routing: the routers receive a vector that contains paths to a set of destinations.
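A toy sketch of the path-vector exchange (loosely inspired by how BGP speakers handle AS paths; the AS numbers, the prefix, and the shortest-path policy are illustrative only):

```python
# Path-vector sketch: a speaker stores full AS paths and rejects any path
# that already contains its own AS number (loop prevention).
MY_AS = 65001  # hypothetical autonomous system number
routes = {}    # destination prefix -> AS path (list of AS numbers)

def receive_advertisement(prefix, as_path):
    """Accept an advertised AS path unless it loops back through us or is worse."""
    if MY_AS in as_path:
        return                       # our AS is already on the path: discard
    best = routes.get(prefix)
    if best is None or len(as_path) < len(best):
        routes[prefix] = as_path     # toy policy: shortest AS path wins

def advertise(prefix):
    """Build the advertisement we would forward to our own neighbors."""
    return prefix, [MY_AS] + routes[prefix]

receive_advertisement("203.0.113.0/24", [65002, 65003])
receive_advertisement("203.0.113.0/24", [65004])
print(routes["203.0.113.0/24"])     # [65004]
print(advertise("203.0.113.0/24"))  # ('203.0.113.0/24', [65001, 65004])
```

Real inter-domain policy is driven far more by business relationships than by path length, as the later section on autonomous systems notes.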
Path selection
Path selection involves applying a routing metric to multiple routes to select (or predict) the best route. Most routing algorithms use only one network path at a time. Multipath routing and specifically equal-cost multi-path routing techniques enable the use of multiple alternative paths.
In computer networking, the metric is computed by a routing algorithm, and can cover information such as bandwidth, network delay, hop count, path cost, load, maximum transmission unit, reliability, and communication cost. The routing table stores only the best possible routes, while link-state or topological databases may store all other information as well.
In case of overlapping or equal routes, algorithms consider the following elements in priority order to decide which routes to install into the routing table:
- Prefix length: A matching route table entry with a longer subnet mask is always preferred as it specifies the destination more exactly.
- Metric: When comparing routes learned via the same routing protocol, a lower metric is preferred. Metrics cannot be compared between routes learned from different routing protocols.
- Administrative distance: When comparing route table entries from different sources such as different routing protocols and static configuration, a lower administrative distance indicates a more reliable source and thus a preferred route.
Because a routing metric is specific to a given routing protocol, multi-protocol routers must use some external heuristic to select between routes learned from different routing protocols. Cisco routers, for example, attribute a value known as the administrative distance to each route, where smaller administrative distances indicate routes learned from a protocol assumed to be more reliable.
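Putting those selection rules together, a simplified and purely illustrative installation decision could be written as a sort over candidate routes; the administrative-distance values below are only examples:

```python
from dataclasses import dataclass

@dataclass
class Candidate:
    prefix_len: int   # longer prefix = more specific destination
    admin_dist: int   # lower = more trusted source (protocol or static config)
    metric: int       # lower = better, comparable only within one protocol

def best_route(candidates):
    """Pick the route to install: most specific prefix first, then the most
    trusted source, then the lowest metric within that source."""
    return min(candidates, key=lambda c: (-c.prefix_len, c.admin_dist, c.metric))

candidates = [
    Candidate(prefix_len=24, admin_dist=110, metric=20),  # e.g. learned dynamically
    Candidate(prefix_len=24, admin_dist=1,   metric=0),   # e.g. a static route
    Candidate(prefix_len=16, admin_dist=110, metric=5),   # less specific prefix
]
print(best_route(candidates))  # the /24 route from the more trusted source wins
```

Comparing administrative distance before metric reflects the note above that metrics from different protocols cannot be compared directly.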
A local administrator can set up host-specific routes that provide more control over network usage, permit testing, and improve overall security. This is useful for debugging network connections or routing tables.
In some small systems, a single central device decides ahead of time the complete path of every packet. In some other small systems, whichever edge device injects a packet into the network decides ahead of time the complete path of that particular packet. In both of these systems, that route-planning device needs to know a lot of information about what devices are connected to the network and how they are connected to each other. Once it has this information, it can use an algorithm such as A* search algorithm to find the best path.
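As a sketch of how such a route-planning device might apply A*, assuming it knows node coordinates it can use for an admissible straight-line heuristic (the topology, coordinates, and costs are made up):

```python
import heapq
import math

# Hypothetical node coordinates, used only for the straight-line heuristic.
coords = {"A": (0, 0), "B": (1, 0), "C": (1, 1), "D": (2, 1)}
graph = {  # link costs, chosen to be no smaller than the straight-line distances
    "A": {"B": 1.0, "C": 1.6},
    "B": {"A": 1.0, "C": 1.0, "D": 1.5},
    "C": {"A": 1.6, "B": 1.0, "D": 1.0},
    "D": {"B": 1.5, "C": 1.0},
}

def heuristic(u, v):
    (x1, y1), (x2, y2) = coords[u], coords[v]
    return math.hypot(x1 - x2, y1 - y2)   # never overestimates the true cost

def a_star(source, target):
    """Return the complete node-by-node path from source to target."""
    g = {source: 0.0}
    came_from = {}
    frontier = [(heuristic(source, target), source)]
    while frontier:
        _, u = heapq.heappop(frontier)
        if u == target:
            path = [u]
            while u in came_from:
                u = came_from[u]
                path.append(u)
            return list(reversed(path))
        for v, cost in graph[u].items():
            tentative = g[u] + cost
            if tentative < g.get(v, float("inf")):
                g[v] = tentative
                came_from[v] = u
                heapq.heappush(frontier, (tentative + heuristic(v, target), v))
    return None

print(a_star("A", "D"))  # ['A', 'B', 'D'], total cost 2.5
```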
In high-speed systems, there are so many packets transmitted every second that it is infeasible for a single device to calculate the complete path for each and every packet. Early high-speed systems dealt with this by setting up a circuit switching relay channel once for the first packet between some source and some destination; later packets between that same source and that same destination continue to follow the same path without recalculating until the channel teardown. Later high-speed systems inject packets into the network without any one device ever calculating a complete path for that packet; the route emerges from the hop-by-hop decisions of multiple agents.
In large systems, there are so many connections between devices, and those connections change so frequently, that it is infeasible for any one device to even know how all the devices are connected to each other, much less calculate a complete path through them. Such systems generally use next-hop routing.
Most systems use a deterministic dynamic routing algorithm: When a device chooses a path to a particular final destination, that device always chooses the same path to that destination until it receives information that makes it think some other path is better. A few routing algorithms do not use a deterministic algorithm to find the "best" link for a packet to get from its original source to its final destination. Instead, to avoid congestion in switched systems or network hot spots in packet systems, a few algorithms use a randomized algorithm—Valiant's paradigm—that routes a path to a randomly picked intermediate destination, and from there to its true final destination. In many early telephone switches, a randomizer was often used to select the start of a path through a multistage switching fabric.
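A minimal sketch of the two-phase randomized idea, reusing whatever shortest-path routine the system already has (here passed in as a parameter, so no specific implementation is assumed):

```python
import random

def valiant_route(graph, source, destination, shortest_path):
    """Two-phase randomized routing: detour via a random intermediate node.

    shortest_path(graph, a, b) is assumed to return a list of nodes from a to b.
    The graph is assumed to contain at least one node besides source and destination.
    """
    intermediate = random.choice(
        [n for n in graph if n not in (source, destination)]
    )
    first_leg = shortest_path(graph, source, intermediate)
    second_leg = shortest_path(graph, intermediate, destination)
    return first_leg + second_leg[1:]   # avoid repeating the intermediate node
```

The path is generally longer than the direct shortest path, which is exactly the trade-off: load is spread away from hot spots at the cost of some extra distance.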
Depending on the application for which path selection is performed, different metrics can be used. For example, for web requests one can use minimum latency paths to minimize web page load time, or for bulk data transfers one can choose the least utilized path to balance load across the network and increase throughput. A popular path selection objective is to reduce the average completion times of traffic flows and the total network bandwidth consumption, which leads to better use of network capacity. Recently, a path selection metric was proposed that computes the total number of bytes scheduled on the edges per path as the selection metric. An empirical analysis of several path selection metrics, including this new proposal, has been made available.
In some networks, routing is complicated by the fact that no single entity is responsible for selecting paths; instead, multiple entities are involved in selecting paths or even parts of a single path. Complications or inefficiency can result if these entities choose paths to optimize their own objectives, which may conflict with the objectives of other participants.
A classic example involves traffic in a road system, in which each driver picks a path that minimizes their travel time. With such routing, the equilibrium routes can be longer than optimal for all drivers. In particular, Braess' paradox shows that adding a new road can lengthen travel times for all drivers.
In another model, for example, used for routing automated guided vehicles (AGVs) on a terminal, reservations are made for each vehicle to prevent simultaneous use of the same part of an infrastructure. This approach is also referred to as context-aware routing.
The Internet is partitioned into autonomous systems (ASs) such as internet service providers (ISPs), each of which controls routes involving its network, at multiple levels. First, AS-level paths are selected via the BGP protocol, which produces a sequence of ASs through which packets flow. Each AS may have multiple paths, offered by neighboring ASs, from which to choose. Its decision often involves business relationships with these neighboring ASs, which may be unrelated to path quality or latency. Second, once an AS-level path has been selected, there are often multiple corresponding router-level paths, in part because two ISPs may be connected in multiple locations. In choosing the single router-level path, it is common practice for each ISP to employ hot-potato routing: sending traffic along the path that minimizes the distance through the ISP's own network—even if that path lengthens the total distance to the destination.
Consider two ISPs, A and B. Each has a presence in New York, connected by a fast link with latency 5 ms, and each has a presence in London connected by a 5 ms link. Suppose both ISPs have trans-Atlantic links that connect their two networks, but A's link has latency 100 ms and B's has latency 120 ms. When routing a message from a source in A's London network to a destination in B's New York network, A may choose to immediately send the message to B in London. This saves A the work of carrying it over its expensive trans-Atlantic link, but the message then experiences a latency of 125 ms, whereas the alternative route through A's own trans-Atlantic link would have been 20 ms faster, at 105 ms.
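The two options can be tallied directly; the link latencies below are simply the ones given in the example above:

```python
# Link latencies (ms) from the two-ISP example.
LONDON_PEERING = 5        # A's London <-> B's London
NEW_YORK_PEERING = 5      # A's New York <-> B's New York
A_TRANSATLANTIC = 100     # A's London <-> A's New York
B_TRANSATLANTIC = 120     # B's London <-> B's New York

# Hot-potato routing: A hands the message to B as early as possible (in London).
hot_potato = LONDON_PEERING + B_TRANSATLANTIC            # 125 ms

# Alternative: A carries the message across its own trans-Atlantic link first.
carry_it_yourself = A_TRANSATLANTIC + NEW_YORK_PEERING   # 105 ms

print(hot_potato, carry_it_yourself)   # 125 105 -- hot potato is 20 ms slower
```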
A 2003 measurement study of Internet routes found that, between pairs of neighboring ISPs, more than 30% of paths have inflated latency due to hot-potato routing, with 5% of paths being delayed by at least 12 ms. Inflation due to AS-level path selection, while substantial, was attributed primarily to BGP's lack of a mechanism to directly optimize for latency, rather than to selfish routing policies. It was also suggested that, were an appropriate mechanism in place, ISPs would be willing to cooperate to reduce latency rather than use hot-potato routing.
As the Internet and IP networks have become mission-critical business tools, there has been increased interest in techniques and methods for monitoring the routing posture of networks. Incorrect routing or routing failures cause undesirable performance degradation, route flapping, or downtime. Monitoring routing in a network is achieved using route analytics tools and techniques.
In networks where logically centralized control over the forwarding state is available, for example using software-defined networking, routing techniques can aim to optimize global, network-wide performance metrics. This approach has been used by large internet companies that operate many data centers in different geographical locations connected by private optical links, examples of which include Microsoft's Global WAN, Facebook's Express Backbone, and Google's B4. Global performance metrics to optimize include maximizing network utilization, minimizing flow completion times, and maximizing the traffic delivered before specific deadlines. Minimizing flow completion times over private WANs in particular has received comparatively little attention from the research community; however, with the increasing number of businesses operating globally distributed data centers connected by private inter-datacenter networks, research effort in this area is likely to grow. One recent work on reducing flow completion times over private WANs models routing as a graph optimization problem by pushing all queuing to the end points; the authors also propose a heuristic that solves the problem efficiently while sacrificing negligible performance.
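As a hedged sketch of what a logically centralized controller might do, the snippet below computes a least-utilized path for each new flow over a global view of the topology; the topology, the utilization numbers, and the greedy policy are illustrative assumptions, not a description of any of the systems named above:

```python
import heapq

def least_cost_path(links, src, dst):
    """Dijkstra over a global view of the network, with per-link cost equal
    to current utilization, so new flows avoid already busy links."""
    queue, seen = [(0.0, src, [src])], set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node in seen:
            continue
        seen.add(node)
        if node == dst:
            return cost, path
        for nxt, util in links.get(node, {}).items():
            if nxt not in seen:
                heapq.heappush(queue, (cost + util, nxt, path + [nxt]))
    raise ValueError("no path")

# Global utilization snapshot held by the controller (assumed values, 0..1).
links = {
    "dc1": {"dc2": 0.7, "dc3": 0.2},
    "dc2": {"dc4": 0.1},
    "dc3": {"dc4": 0.3},
}
print(least_cost_path(links, "dc1", "dc4"))   # -> (0.5, ['dc1', 'dc3', 'dc4'])
```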
- RFC 3626
- RFC 1322
- A Survey on Routing Metrics (PDF), February 10, 2007, retrieved 2020-05-04
- Michael Mitzenmacher; Andréa W. Richa; Ramesh Sitaraman. "The Power of Two Random Choices: A Survey of Techniques and Results". Section "Randomized Protocols for Circuit Routing". p. 34.
- Stefan Haas. "The IEEE 1355 Standard: Developments, Performance and Application in High Energy Physics". 1998. p. 15. quote: "To eliminate network hot spots, ... a two phase routing algorithm. This involves every packet being first sent to a randomly chosen intermediate destination; from the intermediate destination it is forwarded to its final destination. This algorithm, referred to as Universal Routing, is designed to maximize capacity and minimize delay under conditions of heavy load."
- M. Noormohammadpour; C. S. Raghavendra (2018). "Poster Abstract: Minimizing Flow Completion Times using Adaptive Routing over Inter-Datacenter Wide Area Networks".
- M. Noormohammadpour; C. S. Raghavendra (2018). "Minimizing Flow Completion Times using Adaptive Routing over Inter-Datacenter Wide Area Networks".
- Jonne Zutt, Arjan J.C. van Gemund, Mathijs M. de Weerdt, and Cees Witteveen (2010). Dealing with Uncertainty in Operational Transport Planning. In R.R. Negenborn and Z. Lukszo and H. Hellendoorn (Eds.) Intelligent Infrastructures, Ch. 14, pp. 355–382. Springer.
- Matthew Caesar and Jennifer Rexford. BGP routing policies in ISP networks. IEEE Network Magazine, special issue on Interdomain Routing, Nov/Dec 2005.
- Neil Spring, Ratul Mahajan, and Thomas Anderson. Quantifying the Causes of Path Inflation. Proc. SIGCOMM 2003.
- Ratul Mahajan, David Wetherall, and Thomas Anderson. Negotiation-Based Routing Between Neighboring ISPs. Proc. NSDI 2005.
- Ratul Mahajan, David Wetherall, and Thomas Anderson. Mutually Controlled Routing with Independent ISPs. Proc. NSDI 2007.
- Khalidi, Yousef (March 15, 2017). "How Microsoft builds its fast and reliable global network".
- "Building Express Backbone: Facebook's new long-haul network". May 1, 2017.
- "Inside Google's Software-Defined Network". May 14, 2017.
- Noormohammadpour, Mohammad; Raghavendra, Cauligi (16 July 2018). "Datacenter Traffic Control: Understanding Techniques and Tradeoffs". IEEE Communications Surveys and Tutorials. 20 (2): 1492–1525. arXiv:1712.03530. doi:10.1109/COMST.2017.2782753.
- Noormohammadpour, Mohammad; Srivastava, Ajitesh; Raghavendra, Cauligi (2018). "On Minimizing the Completion Times of Long Flows over Inter-Datacenter WAN". IEEE Communications Letters. 22 (12): 2475–2478. arXiv:1810.00169. Bibcode:2018arXiv181000169N. doi:10.1109/LCOMM.2018.2872980.
- Ash, Gerald (1997). Dynamic Routing in Telecommunication Networks. McGraw–Hill. ISBN 978-0-07-006414-0.
- Doyle, Jeff & Carroll, Jennifer (2005). Routing TCP/IP, Volume I, Second Ed. Cisco Press. ISBN 978-1-58705-202-6.
- Doyle, Jeff & Carroll, Jennifer (2001). Routing TCP/IP, Volume II. Cisco Press. ISBN 978-1-57870-089-9.
- Huitema, Christian (2000). Routing in the Internet, Second Ed. Prentice–Hall. ISBN 978-0-321-22735-5.
- Kurose, James E. & Ross, Keith W. (2004). Computer Networking, Third Ed. Benjamin/Cummings. ISBN 978-0-321-22735-5.
- Medhi, Deepankar & Ramasamy, Karthikeyan (2007). Network Routing: Algorithms, Protocols, and Architectures. Morgan Kaufmann. ISBN 978-0-12-088588-6.
Etymology
In 1776 the counties of Virginia beyond the Appalachian Mountains became known to European Americans as Kentucky County, named for the Kentucky River. The precise etymology of the name is uncertain, but it is likely based on an Iroquoian name meaning "(on) the meadow" or "(on) the prairie" (Mithun, Marianne. 1999. ''Languages of Native North America''. Cambridge: Cambridge University Press, p. 312); compare ''kenhtà:ke'' and ''gëdá'geh'', "at the field". Others have suggested the term ''Kenta Aki'', which could have come from an Algonquian language. Folk etymology translates this as "Land of Our Fathers". The closest approximation in another Algonquian language translates as "Land of Our In-Laws", making a fairer English translation "The Land of Those Who Became Our Fathers". In any case, the word ''aki'' means "land" in most Algonquian languages. Some also theorize that the name Kentucky may be a corruption of the word Catawba, in reference to the Catawba people who inhabited Kentucky.
Native American settlement
It is not known exactly when the first humans arrived in what is now Kentucky. Based on evidence from other regions, humans were likely living in Kentucky before 10,000 BCE, but "archaeological evidence of their occupation has yet to be documented". Around 1800 BCE, a gradual transition began from a hunter-gatherer economy to agriculture. Around 900 CE, a Mississippian culture took root in western and central Kentucky; by contrast, a Fort Ancient culture appeared in eastern Kentucky. While the two had many similarities, the distinctive ceremonial earthwork mounds constructed in the former's centers were not part of the culture of the latter. In about the 10th century, the Kentucky native people's variety of corn became highly productive, and a maize-based agriculture took hold in the Mississippian era. French explorers in the 17th century documented numerous tribes living in Kentucky until the Beaver Wars of the 1670s; however, by the time European colonial explorers and settlers began entering Kentucky in greater numbers in the mid-18th century, there were no major Native American settlements in the region. As of the 16th century, the area known as Kentucky was home to tribes from five different culture groups: Iroquoian, Siouan, Algonquian, Muskogean, and Yuchi. A Siouan people lived around the Bluestone River, and different tribes held the lands north and south of the Tennessee River. Much of the interior of the state was controlled by the Algonquian Cisca; the confluence region of the Mississippi and Ohio was home to the Chickasaw. During the Beaver Wars of 1640–1680, another Algonquian tribe, the Maumee, was chased out of southern Michigan. The vast majority of them moved to Kentucky, pushing the Kispoko east, and war broke out with the Tutelo, who were pushed deeper into Appalachia, where they merged with the Saponi and Moneton. The Maumee were closely related to the Miami of Indiana. Later, the Kispoko merged with the Shawnee (who had broken off from the Powhatan on the east coast) and the Thawikila of Ohio to form the larger Shawnee nation, which inhabited the Ohio River valley into the 19th century. The Shawnee from the northeast and the Cherokee from the south also sent parties into the area regularly for hunting.
European settlement
In 1774 James Harrod founded the first permanent European settlement in Kentucky at the site of present-day Harrodsburg.
County of Kentucky and statehood
On December 31, 1776, by an act of the Virginia General Assembly, the portion of Fincastle County west of the Appalachians extending to the Mississippi River, previously known as Kentucky (or Kentucke) territory, was split off into its own County of Kentucky. Harrod's Town (Oldtown, as it was known at the time) was named the county seat. The county was subdivided into Jefferson, Lincoln and Fayette Counties in 1780, but continued to be administered as the District of Kentucky even as new counties were split off. On several occasions the region's residents petitioned the Virginia General Assembly and the Congress for separation from Virginia and statehood. Ten constitutional conventions were held in Danville between 1784 and 1792. One petition, which had Virginia's assent, came before the Confederation Congress in early July 1788. Its consideration came up a day after word arrived of New Hampshire's all-important ninth ratification of the proposed Constitution, which established it as the new framework of governance for the United States. In light of this development, Congress thought that it would be "unadvisable" to admit Kentucky into the Union, as it could do so "under the Articles of Confederation" only, but not "under the Constitution", and so declined to take action. On December 18, 1789, Virginia again gave its consent to Kentucky statehood. The United States Congress gave its approval on February 4, 1791. (This occurred two weeks before Congress approved Vermont's petition for statehood.) Kentucky officially became the fifteenth state in the Union on June 1, 1792. Isaac Shelby, a military veteran from Virginia, was elected its first governor.
Relationship between Native Americans and European settlers
A 1790 U.S. government report states that 1,500 Kentucky settlers had been killed by Native Americans since the end of the Revolutionary War. As more settlers entered the area, warfare broke out with the Native Americans over their traditional hunting grounds. Historian Susan Sleeper-Smith documents the role of Kentucky settlers in displacing Native American communities living in the northern Ohio River Valley during the late 18th century.
19th century
Central Kentucky, the Bluegrass region, was the area of the state with the most slave owners. Planters there cultivated tobacco and hemp (see Hemp in Kentucky) and were noted for their quality livestock. During the 19th century, Kentucky slaveholders began to sell unneeded slaves to the Deep South, with Louisville becoming a major slave market and departure point for slaves being transported downriver. Kentucky was one of the border states during the Civil War, and it remained part of the Union. Despite this, representatives from 68 of 110 counties met at Russellville, calling themselves the "Convention of the People of Kentucky", and passed an ordinance of secession on November 20, 1861. They established a Confederate government of Kentucky with its capital at Bowling Green. The Confederate shadow government was never popularly elected statewide. Although Confederate forces briefly controlled Frankfort, they were expelled by Union forces before a Confederate government could be installed in the state capital. After the expulsion of Confederate forces following the Battle of Perryville, this government operated in exile. Though it existed throughout the war, Kentucky's provisional Confederate government had very little effect on events in the Commonwealth or in the war. Kentucky remained officially "neutral" throughout the war due to the Union sympathies of a majority of the Commonwealth's citizens. Despite this, some 21st-century Kentuckians observe Confederate Memorial Day on Confederate leader Jefferson Davis's birthday, June 3, and participate in Confederate battle re-enactments. Both Davis and U.S. president Abraham Lincoln were born in Kentucky. John C. Breckinridge, the 14th and youngest-ever Vice President, was born in Lexington, Kentucky, at Cabell's Dale Farm. Breckinridge was expelled from the U.S. Senate for his support of the Confederacy. Modern historians such as Aaron Astor, Maryjean Wall, and Anne Marshall argue that many of Kentucky's white leaders and influential figures embraced a romanticized Southern identity, drawing from misleading and mythologized conceptions of the Old South and the Lost Cause, in the decades following Reconstruction. This phenomenon mirrored similar cultural trends in other states during the nadir of race relations. On January 30, 1900, William Goebel, flanked by two bodyguards, was mortally wounded by an assassin while walking to the State Capitol in downtown Frankfort. Goebel was contesting the Kentucky gubernatorial election of 1899, which William S. Taylor was initially believed to have won. For several months, J. C. W. Beckham, Goebel's running mate, and Taylor fought over who was the legal governor, until the courts ruled in May in favor of Beckham. After fleeing to Indiana, Taylor was indicted as a co-conspirator in Goebel's assassination. Goebel is the only governor of a U.S. state to have been assassinated while in office.
20th century
The Black Patch Tobacco Wars, a vigilante action, occurred in western Kentucky in the early 20th century. As a result of the tobacco monopoly, farmers in the area were forced to sell their crops at prices that were too low. Many local farmers and activists united in a refusal to sell their crops to the major tobacco companies. At an association meeting in downtown Guthrie, a vigilante wing of "Night Riders" formed. The riders terrorized farmers who sold their tobacco at the low prices demanded by the tobacco corporations. They burned several tobacco warehouses throughout the area, stretching as far west as Hopkinsville and Princeton. In the later period of their operation, they were known to physically assault farmers who broke the boycott. Governor Augustus E. Willson declared martial law and deployed the Kentucky National Guard to end the wars. On October 15, 1959, a B-52 Stratofortress carrying two nuclear weapons collided in midair with a KC-135 tanker near Hardinsburg, Kentucky. One of the nuclear bombs was damaged by fire, but both weapons were recovered.
Geography
Kentucky is situated in the Upland South, and a significant portion of eastern Kentucky is part of Appalachia. Kentucky borders seven states, spanning the Midwest and the Southeast. West Virginia lies to the northeast, Virginia to the east, Tennessee to the south, Missouri to the west, Illinois to the northwest, and Indiana and Ohio to the north. Only Missouri and Tennessee, both of which border eight states, touch more. Kentucky's northern border is formed by the Ohio River and its western border by the Mississippi River; however, the official border is based on the courses of the rivers as they existed when Kentucky became a state in 1792. For instance, northbound travelers on U.S. 41 from Henderson, after crossing the Ohio River, remain in Kentucky for a short stretch. Ellis Park, a thoroughbred racetrack, is located in this small piece of Kentucky. Waterworks Road is part of the only land border between Indiana and Kentucky. Kentucky has a non-contiguous part known as the Kentucky Bend, at the far west corner of the state. It exists as an exclave surrounded completely by Missouri and Tennessee and is included in the boundaries of Fulton County. Road access to this small part of Kentucky on the Mississippi River (populated by 18 people) requires a trip through Tennessee. The epicenter of the 1811–12 New Madrid earthquakes was near this area, causing the Mississippi River to flow backwards in some places. Though the series of quakes changed the area geologically and affected the small number of inhabitants at the time, the Kentucky Bend is the result of a surveying error, not of the New Madrid earthquakes.
Regions
Kentucky can be divided into five primary regions: the Cumberland Plateau in the east, which contains many of the historic coal mines; the north-central Bluegrass region, where the major cities and the capital are located; the south-central and western Pennyroyal Plateau (also known as the Pennyrile or Mississippi Plateau); the Western Coal Fields; and the far-western Jackson Purchase. The Bluegrass region is commonly divided into two subregions: the Inner Bluegrass, encircling Lexington, and the Outer Bluegrass, which contains most of the northern portion of the state above the Knobs. Much of the Outer Bluegrass is in the Eden Shale Hills area, made up of short, steep, and very narrow hills.
ClimateLocated within the southeastern interior portion of North America, Kentucky has a climate that is best described as a humid subtropical climate (Köppen: ''Cfa''), only small higher areas of the southeast of the state has an oceanic climate (''Cfb'') influenced by the Appalachian Mountains, Appalachians. Temperatures in Kentucky usually range from daytime summer highs of to the winter low of . The average precipitation is a year. Kentucky has four distinct seasons, with substantial variations in the severity of summer and winter. The highest recorded temperature was at Greensburg, Kentucky, Greensburg on July 28, 1930, while the lowest recorded temperature was at Shelbyville, Kentucky, Shelbyville on 1994 North American cold wave, January 19, 1994. The state rarely experiences the extreme cold of far northern states, nor the high heat of the states in the . Temperatures seldom drop below 0 degrees or rise above 100 degrees. Rain and snowfall totals about 45 inches per year. The climate varies markedly within the state. The northern parts tend to be about five degrees cooler than those in the western parts of the state. Somerset, Kentucky, Somerset in the south-central part receives ten more inches of rain per year than, for instance, Covington, Kentucky, Covington to the north. Average temperatures for the entire Commonwealth range from the low 30s in January to the high 70s in mid-July. The annual average temperature varies from : of in the far north as an average annual temperature and of in the extreme southwest. In general, Kentucky has relatively hot, humid, rainy summers, and moderately cold and rainy winters. Mean maximum temperatures in July vary from ; the mean minimum July temperatures are . In January the mean maximum temperatures range from ; the mean minimum temperatures range from . Temperature means vary with northern and far-eastern mountain regions averaging five degrees cooler year-round, compared to the relatively warmer areas of the southern and western regions of the state. Precipitation also varies north to south with the north averaging of , and the south averaging of . Days per year below the freezing point vary from about sixty days in the southwest to more than a hundred days in the far-north and far-east.
Lakes and rivers
Kentucky has more navigable miles of water than any state in the union other than Alaska. Kentucky is the only U.S. state to have a continuous border of rivers running along three of its sides: the Mississippi River to the west, the Ohio River to the north, and the Big Sandy River and Tug Fork to the east. Its major internal rivers include the Kentucky River, Tennessee River, Cumberland River, Green River and Licking River. Though it has only three major natural lakes, Kentucky is home to many artificial lakes. Kentucky has both the largest artificial lake east of the Mississippi in water volume (Lake Cumberland) and the largest in surface area (Kentucky Lake). Kentucky Lake's shoreline, water surface, and flood storage are the largest of any lake in the Tennessee Valley Authority (TVA) system. Kentucky's extensive network of streams provides one of the most expansive and complex stream systems in the nation.
Natural environment and conservation
Kentucky has an expansive park system, which includes one national park, two national recreation areas, two national historical parks, two national forests, two national wildlife refuges, 45 state parks, state forest land, and 82 wildlife management areas. Kentucky has been part of two of the most successful wildlife reintroduction projects in United States history. In the winter of 1997, the Kentucky Department of Fish and Wildlife Resources began to restock elk, which had been absent from the area for over 150 years, in the state's eastern counties. The herd has since reached the project goal of 10,000 animals, making it the largest herd east of the Mississippi River. The state also stocked wild turkeys in the 1950s, when there were reported to be fewer than 900 in the state. Once nearly eliminated here, wild turkeys now thrive throughout Kentucky; hunters officially reported a record 29,006 birds taken during the 23-day season in spring 2009. In 1991 Land Between the Lakes partnered with the U.S. Fish and Wildlife Service on the Red Wolf Recovery Program, a captive breeding program.
Natural attractions
* Cumberland Gap, chief passageway through the Appalachian Mountains in early American history.
* Cumberland Falls, the only place in the Western Hemisphere where a "moonbow" may be regularly seen, due to the spray of the falls.
* Mammoth Cave National Park, featuring the world's longest known cave system.
* Red River Gorge Geological Area, part of the Daniel Boone National Forest.
* Land Between the Lakes, a national recreation area managed by the United States Forest Service.
* Big South Fork National River and Recreation Area, near Whitley City.
* Black Mountain, the state's highest point, which runs along the south ridge of Pine Mountain in Letcher County; the highest point is located in Harlan County.
* Bad Branch Falls State Nature Preserve, on the southern slope of Pine Mountain in Letcher County, which includes one of the largest concentrations of rare and endangered species in the state, as well as a waterfall and a Kentucky Wild River.
* Jefferson Memorial Forest, in the Knobs region on the southern fringes of Louisville, the largest municipally run forest in the United States.
* Lake Cumberland, a reservoir with an extensive shoreline in south-central Kentucky.
* Natural Bridge, located at Slade in Powell County, within Natural Bridge State Resort Park.
* Breaks Interstate Park, in southeastern Pike County, Kentucky, and southwestern Virginia, commonly known as the "Grand Canyon of the South".
Counties
Kentucky is subdivided into 120 counties, the largest by area being Pike County and the most populous being Jefferson County (which coincides with the Louisville Metro governmental area) with 741,096 residents. County government, under the Kentucky Constitution of 1891, is vested in the County Judge/Executive (formerly called the County Judge), who serves as the executive head of the county, and a legislature called the Fiscal Court. Despite the unusual name, the Fiscal Court no longer has judicial functions.
Consolidated city-county governments
Kentucky's two most populous counties, Jefferson and Fayette, have their governments consolidated with the governments of their largest cities. The ''Louisville-Jefferson County Government'' (Louisville Metro) and the ''Lexington-Fayette Urban County Government'' (Lexington Metro) are unique in that their city councils and county Fiscal Court structures have been merged into a single entity with a single chief executive, the Metro Mayor and the Urban County Mayor, respectively. Although the counties still exist as subdivisions of the state, the names Louisville and Lexington are commonly used to refer to the entire areas coextensive with the former cities and counties.
Major citiesThe Louisville metropolitan area, Metro Louisville government area has a 2018 population of 1,298,990. Under United States Census Bureau methodology, the population of Louisville was 623,867. The latter figure is the population of the so-called Louisville/Jefferson County metro government (balance), Kentucky, "balance"the parts of Jefferson County that were either unincorporated or within the City of Louisville before the formation of the merged government in 2003. In 2018 the Louisville metropolitan area, Louisville Combined Statistical Area (CSA) had a population of 1,569,112; including 1,209,191 in Kentucky, which means more than 25% of the state's population now lives in the Louisville CSA. Since 2000, over one-third of the state's population growth has occurred in the Louisville CSA. In addition, the top 28 wealthiest places in Kentucky are in Jefferson County and seven of the 15 wealthiest counties in the state are located in the Louisville CSA. The second-largest city is Lexington with a 2018 census population of 323,780, its metro had a population of 516,697, and its Lexington–Fayette–Frankfort–Richmond, KY Combined Statistical Area, CSA, which includes the Frankfort, Kentucky micropolitan area, Frankfort and Richmond–Berea micropolitan area, Richmond statistical areas, having a population of 746,310. The Northern Kentucky area, which comprises the seven Kentucky counties in the Cincinnati/Northern Kentucky metropolitan area, had a population of 447,457 in 2018. The metropolitan areas of Louisville, Lexington, and Northern Kentucky have a combined population of 2,402,958 , which is 54% of the state's total population on only about 19% of the state's land. This area is often referred to as the Golden triangle as it contains a majority of the state's wealth, population, population growth, and economic growth, it is also where most of the state's largest cities by population are located. It is referred to as the Golden triangle as the metro areas of Lexington, Louisville, and Northern Kentucky/Cincinnati outline a triangle shape. Interstates I-71, I-75, and I-64 form the triangle shape. Additionally, all counties in Kentucky that are part of an MSA or CSA have a total population of 2,970,694, which is 67% of the state's population. had a population of 67,067, making it the third most populous city in the state. The Bowling Green metropolitan area had an estimated population of 174,835; and the combined statistical area it shares with Glasgow, Kentucky, Glasgow has an estimated population of 228,743. The two other fast-growing urban areas in Kentucky are the area and the "Tri-Cities Region" of southeastern Kentucky, comprising Somerset, Kentucky, Somerset, London, Kentucky, London and Corbin, Kentucky, Corbin. Although only one town in the "Tri-Cities" (Somerset) currently has more than 12,000 people, the area has been experiencing heightened population and job growth since the 1990s. Growth has been especially rapid in Laurel County, which outgrew areas such as Scott and Jessamine counties around Lexington or Shelby and Nelson Counties around Louisville. London significantly grew in population in the 2000s, from 5,692 in 2000 to 7,993 in 2010. London also landed a Walmart, Wal-Mart distribution center in 1997, bringing thousands of jobs to the community. In northeast Kentucky, the greater Ashland, Kentucky, Ashland area is an important transportation, manufacturing, and medical center. 
Iron and steel industry, Iron and petroleum production, as well as the transport of coal by rail and barge, have been historical pillars of the region's economy. Due to a decline in the area's industrial base, Ashland has seen a sizable reduction in its population since 1990; however, the population of the area has since stabilized with the medical service industry taking a greater role in the local economy. The Ashland area, including the counties of Boyd County, Kentucky, Boyd and Greenup County, Kentucky, Greenup, is part of the Huntington-Ashland, WV-KY-OH, Metropolitan Statistical Area (MSA). As of the 2000 census, the MSA had a population of 288,649. More than 21,000 of those people () reside within the city limits of Ashland. The largest county in Kentucky by area is Pike County, Kentucky, Pike, which contains Pikeville, Kentucky, Pikeville and suburb Coal Run Village, Kentucky, Coal Run Village. The county and surrounding area is the most populated region in the state that is not part of a United States Micropolitan Statistical Area, Micropolitan Statistical Area or a Metropolitan Statistical Area containing nearly 200,000 people in five counties: Floyd County, Kentucky, Floyd County, Martin County, Kentucky, Martin County, Letcher County, Kentucky, Letcher County, and neighboring Mingo County, West Virginia. Pike County contains slightly more than 68,000 people. Only three U.S. states have capitals with smaller populations than Kentucky's Frankfort (pop. 25,527): Augusta, Maine (pop. 18,560), Pierre, South Dakota (pop. 13,876), and Montpelier, Vermont (pop. 8,035).
DemographicsThe United States Census Bureau determined that the population of Kentucky was 4,505,836 in 2020, increasing since the 2010 United States Census, 2010 United States census. As of July 1, 2016, Kentucky had an estimated population of 4,436,974, which is an increase of 12,363 from the prior year and an increase of 97,607, or 2.2%, since the year 2010. This includes a Population growth, natural increase since the last census of 73,541 people (that is 346,968 births minus 273,427 deaths) and an increase due to net migration of 26,135 people into the state. Immigration to the United States, Immigration from outside the United States resulted in a net increase of 40,051 people, and migration within the country produced a net decrease of 13,916 people. , Kentucky's population included about 149,016 foreign-born persons (3.4%). In 2016 the population density of the state was 110 people per square mile (42.5/km2). Kentucky's population has grown during every decade since records have been kept. But during most decades of the 20th century there was also net out-migration from Kentucky. Since 1900, rural Kentucky counties have had a net loss of more than a million people to migration, while urban areas have experienced a slight net gain. Kentucky's center of population is in Washington County, Kentucky, Washington County, in the city of Willisburg, Kentucky, Willisburg.
Race and ancestryAccording to U.S. Census Bureau official statistics, the largest ancestry in 2013 was American ethnicity, American totalling 20.2%. In 1980, before the status of ethnic American was an available option on the official census, the largest claimed ancestries in the commonwealth were English American, English (49.6%), Irish American, Irish (26.3%), and German American, German (24.2%). In the state's most urban counties of Jefferson, Oldham County, Kentucky, Oldham, Lexington, Kentucky, Fayette, Boone County, Kentucky, Boone, Kenton County, Kentucky, Kenton, and Campbell County, Kentucky, Campbell, German is the largest reported ancestry. Americans of Scots-Irish American, Scots-Irish and English American, English stock are present throughout the entire state. Many residents claim Irish ancestry because of known "Scots-Irish" among their ancestors, who immigrated from Ireland, where their ancestors had moved for a period from Scotland during the plantation period. As of the 1980s, the only counties in the United States where over half of the population cited "English American, English" as their only ancestry group were in the hills of eastern Kentucky (virtually every county in this region had a majority of residents identifying as exclusively English in ancestry).James Paul Allen and Eugene James Turner, ''We the People: An Atlas of America's Ethnic Diversity'' (Macmillan, 1988), 41. The Ridgetop Shawnee organized in the early 21st century as a non-profit to gain structure for their community and increase awareness of Native Americans in Kentucky. In the 2000 census, some 20,000 people in the state identified as Native American (0.49%). In June 2011, Jerry "2 Feather" Thornton, a , led a team in the Voyage of Native American Awareness 2011 canoe journey, to begin on the Green River in Rochester, Kentucky and travel through to the Ohio River at Henderson, Kentucky, Henderson. African Americans, who were mostly enslaved at the time, made up 25% of Kentucky's population before the American Civil War, Civil War; they were held and worked primarily in the central Bluegrass region, an area of hemp and tobacco cultivation, as well as raising blooded livestock. The number of African Americans living in Kentucky declined during the 20th century. Many migrated during the early part of the century to the industrial North and Midwest during the Great Migration (African American), Great Migration for jobs and the chance to leave the segregated, oppressive societies. Today, less than 9% of the state's total population is African-American. The state's African-American population is highly urbanized and 52% of them live in the Louisville metropolitan area; 44.2% of them reside in Jefferson County, Kentucky, Jefferson County. The county's population is 20% African American. Other areas with high concentrations, besides Christian and Fulton counties and the Bluegrass region, are the cities of Paducah, Kentucky, Paducah and Lexington. Some mining communities in far Southeastern Kentucky have populations that are between five and 10 percent African-American.
LanguageIn 2000 96.1% of all residents five years old and older spoke only American English, English at home, a decrease from 97.5% in 1990. Speech patterns in the state generally reflect the first settlers' Virginia and Kentucky backgrounds. South Midland features are best preserved in the mountains, with Southern American English, Southern in most other areas of Kentucky, but some common to Midland and Southern are widespread. After a vowel, the /r/ may be weak or missing. For instance, ''Coop'' has the vowel of ''put'', but the root rhymes with ''boot''. In southern Kentucky, earthworms are called ''redworms'', a burlap bag is known as a ''tow sack'' or the ''Southern grass sack'', and green beans are called ''snap beans''. In Kentucky English, a young man may ''carry'', not escort, his girlfriend to a party. Spanish language, Spanish is the second-most-spoken language in Kentucky, after English.
Religion, the Association of Religion Data Archives (ARDA) reported the following groupings of Kentucky's 4,339,367 residents: * 48% not affiliated with any religious group, 2,101,653 persons * 42% Protestant Christian, 1,819,860 adherents ** 33% Evangelicalism, Evangelical Protestant, 1,448,947 adherents (23% within the Southern Baptist Convention, 1,004,407 adherents) ** 7.1% Mainline Protestant, 305,955 adherents (4.4% in the United Methodist Church, 189,596 adherents) ** 1.5% Black Protestant, 64,958 adherents * 8.3% Catholic Church in the United States, Catholic Church, 359,783 adherents * 0.74% Latter-day Saints, 31,991 adherents * 0.60% other religions, 26,080 adherents (0.26% Muslim, 0.16% Judaism, 0.06% Buddhism, 0.01% Hindu, other Christianity, Christian, etc.) Kentucky is home to several seminaries. Southern Baptist Theological Seminary in is the principal seminary for the Southern Baptist Convention. Louisville is also the home of the Louisville Presbyterian Theological Seminary, an institution of the Presbyterian Church (USA). Lexington has one seminary, Lexington Theological Seminary (affiliated with the Christian Church (Disciples of Christ), Disciples of Christ). The Baptist Seminary of Kentucky is located on the campus of Georgetown College in Georgetown. Asbury Theological Seminary, a multi-denominational seminary in the Methodism, Methodist tradition, is located in nearby Wilmore, Kentucky, Wilmore. In addition to seminaries, there are several colleges affiliated with denominations: * In Louisville, Bellarmine University and Spalding University are affiliated with the Catholic Church in the United States, Roman Catholic Church. * In Lexington, Transylvania University is affiliated with the Disciples of Christ. * In Owensboro, Kentucky, Owensboro, Kentucky Wesleyan College is associated with the United Methodist Church, and Brescia University is associated with the Roman Catholic Church. * In Pikeville, the University of Pikeville is affiliated with the Presbyterian Church (USA). * In Wilmore, Asbury University (a separate institution from the seminary) is associated with the Christian College Consortium. * The Baptist denomination is associated with several colleges: ** University of the Cumberlands, in Williamsburg, Kentucky, Williamsburg ** Campbellsville University, in Campbellsville, Kentucky, Campbellsville ** Georgetown College (Kentucky), Georgetown College, in Georgetown, Kentucky, Georgetown ** Clear Creek Baptist Bible College, in Pineville, Kentucky * Grayson, Kentucky, Grayson in Carter County, Kentucky, Carter County is home to Kentucky Christian University which is affiliated with the Christian Churches and Churches of Christ. *The Abbey of Our Lady of Gethsemani is located in Bardstown, Kentucky. Author Thomas Merton, known as a social activist, worked to reconcile Christianity with other major religions, had converted to Catholicism as a young man, and became a Trappist monk; he lived and worked here from 1941 until his death in 1968. Louisville is home to the Cathedral of the Assumption (Louisville, Kentucky), Cathedral of the Assumption, the third-oldest Catholic cathedral in continuous use in the United States. The city also holds the headquarters of the Presbyterian Church (USA) and their printing press. Reflecting late 19th, 20th and 21st-century immigration from different countries, Louisville also has Jewish, Muslim, and Hindu communities. 
In 1996 the Center for Interfaith Relations established the Festival of Faiths, the oldest annual interfaith festival held in the United States. The Christian creationist apologetics group Answers in Genesis, along with its Creation Museum, is headquartered in Petersburg, in Boone County.
Economy
Early in its history, Kentucky gained recognition for its excellent farming conditions. It was the site of the first commercial winery in the United States (started in present-day Jessamine County in 1799) and, due to the high calcium content of the soil in the Bluegrass region, quickly became a major horse breeding (and later racing) area. Today Kentucky ranks 5th nationally in goat farming and 14th in corn production. Kentucky has also been a long-standing major center of the tobacco industry, both as a center of business and of tobacco farming. Kentucky's economy has expanded in non-agricultural sectors as well, especially auto manufacturing, energy fuel production, and medical facilities. Kentucky ranks 4th among U.S. states in the number of automobiles and trucks assembled. The Chevrolet Corvette, Cadillac XLR (2004–2009), Ford Escape, Ford Super Duty trucks, Ford Expedition, Lincoln Navigator, Toyota Camry, Toyota Avalon, Toyota Solara, Toyota Venza, and Lexus ES 350 are assembled in Kentucky. Kentucky has historically been a major coal producer, but the coal industry has been in decline since the 1980s, and the number of people employed in it dropped by more than half between 2011 and 2015. At one point, 24% of the electricity produced in the U.S. depended either on enriched uranium rods from the Paducah Gaseous Diffusion Plant (the only domestic site of low-grade uranium enrichment) or on the 107,336 tons of coal extracted from the state's two coal fields, which combined produce 4% of the electricity in the United States. Kentucky produces 95% of the world's supply of bourbon whiskey, and the number of barrels of bourbon aging in Kentucky (more than 5.7 million) exceeds the state's population (Associated Press).
TaxationTax is collected by the Kentucky Department of Revenue. There are six income tax brackets, ranging from 2% to 6% of personal income. The sales tax rate in Kentucky is 6%. Kentucky has a broadly based classified property tax system. All classes of property, unless exempted by the Constitution, are taxed by the state, although at widely varying rates. Many of these classes are exempted from taxation by local government. Of the classes that are subject to local taxation, three have special rates set by the Kentucky General Assembly, General Assembly, one by the Kentucky Supreme Court and the remaining classes are subject to the full local rate, which includes the tax rate set by the local taxing bodies plus all voted levies. Real property is assessed on 100% of the fair market value and property taxes are due by December 31. Once the primary source of state and local government revenue, property taxes now account for only about 6% of the Kentucky's annual General Fund revenues. Until January 1, 2006, Kentucky imposed a tax on intangible personal property held by a taxpayer on January1 of each year. The Kentucky intangible tax was repealed under House Bill 272. Intangible property consisted of any property or investment that represents evidence of value or the right to value. Some types of intangible property included: bonds, notes, retail repurchase agreements, accounts receivable, trusts, enforceable contracts sale of real estate (land contracts), money in hand, money in safe deposit boxes, annuities, interests in estates, loans to stockholders, and commercial paper.
Government-promoted slogansIn December 2002, the Kentucky governor Paul E. Patton, Paul Patton unveiled the state slogan "It's that friendly", in hope of drawing more people into the state based on the idea of southern hospitality. This campaign was neither a failure nor a success. Though it was meant to embrace southern values, many Kentuckians rejected the slogan as cheesy and ineffective. It was quickly seen that the slogan did not encourage tourism as much as initially hoped for. So government decided to create a different slogan to embrace Kentucky as a whole while also encouraging more people to visit the Bluegrass. In 2004, then Governor Ernie Fletcher launched a comprehensive branding campaign with the hope of making the state's $12–14million advertising budget more effective. The resulting "Unbridled Spirit" brand was the result of a $500,000 contract with New West, a Kentucky-based public relations advertising and marketing firm, to develop a viable brand and tag line. The Fletcher administration aggressively marketed the brand in both the public and private sectors. Since that time, the "Welcome to Kentucky" signs at border areas have an "Unbridled Spirit" symbol on them.
Tourism
Tourism has become an increasingly important part of the Kentucky economy. In 2019 tourism grew to $7.6 billion in economic impact. Key attractions include horse racing, with events like the Kentucky Derby and the Keeneland fall and spring meets; bourbon distillery tours along the Kentucky Bourbon Trail; and natural attractions such as the state's many lakes and parks, including Mammoth Cave, Lake Cumberland and the Red River Gorge. The state also has several religious destinations, such as the Creation Museum and Ark Encounter of Answers in Genesis.
The horse industry
Horse racing has long been associated with Kentucky. Churchill Downs, the home of the Kentucky Derby, is a large venue with a capacity exceeding 165,000. The track hosts multiple events throughout the year and is a significant draw for the city of Louisville. Keeneland Race Course, in Lexington, hosts two major meets, the spring and fall runnings. Beyond hosting races, Keeneland also holds a significant horse auction that draws buyers from around the world; in 2019, $360 million was spent at the September yearling sale. The Kentucky Horse Park in Georgetown hosts multiple events throughout the year, including international equestrian competitions, and also offers horseback riding from April to October.
EducationKentucky maintains eight public four-year universities. There are two general tiers: major research institutions (the University of Kentucky and the University of Louisville) and regional universities, which encompass the remaining six schools. The regional schools have specific target counties that many of their programs are targeted towards (such as Forestry at Eastern Kentucky University or Cave Management at Western Kentucky University), however, most of their curriculum varies little from any other public university. The University of Kentucky (UK) and the University of Louisville (UofL) have the highest academic rankings and admissions standards although the regional schools aren't without their national recognized departmentsexamples being Western Kentucky University's nationally ranked Journalism Department or Morehead State University offering one of the nation's only Space Science degrees. UK is the flagship and land grant of the system and has agriculture extension services in every county. The two research schools split duties related to the medical field, UK handles all medical outreach programs in the eastern half of the state while UofL does all medical outreach in the state's western half. The state's sixteen public two-year colleges have been governed by the Kentucky Community and Technical College System since the passage of the Postsecondary Education Improvement Act of 1997, commonly referred to as House Bill 1. Before the passage of House Bill 1, most of these colleges were under the control of the University of Kentucky. Transylvania University, a liberal arts university located in Lexington, was founded in 1780 as the oldest university west of the Allegheny Mountains. Berea College, located at the extreme southern edge of the Bluegrass below the Cumberland Plateau, was the first coeducational college in the Southern United States, South to admit both black and white students, doing so from its very establishment in 1855. This policy was successfully challenged in the Supreme Court of the United States, United States Supreme Court in the case of ''Berea College v. Kentucky'' in 1908. This decision effectively segregated Berea until the landmark ''Brown v. Board of Education'' in 1954. There are 173 school districts and 1,233 public schools in Kentucky. For the 2010 to 2011 school year, there were approximately 647,827 students enrolled in public school. Kentucky has been the site of much educational reform over the past two decades. In 1989 the Kentucky Supreme Court ruled the state's education system was unconstitutional. The response of the Kentucky General Assembly, General Assembly was passage of the Kentucky Education Reform Act (KERA) the following year. Years later, Kentucky has shown progress, but most agree that further reform is needed. The 2018 West Virginia teachers' strike, West Virginia teachers' strike in 2018 inspired 2018–19 education workers' strikes in the United States, teachers in other states, including Kentucky, to take similar action.
RoadsKentucky is served by six major Interstate Highway System, Interstate highways (Interstate 24 in Kentucky, I-24, Interstate 64 in Kentucky, I-64, Interstate 65 in Kentucky, I-65, Interstate 69 in Kentucky, I-69, Interstate 71 in Kentucky, I-71, and Interstate 75 in Kentucky, I-75), seven :Kentucky parkway system, parkways, and six bypasses and spurs (Interstate 165 (Kentucky), I-165, Interstate 169 (Kentucky), I-169, Interstate 264 (Kentucky), I-264, Interstate 265, I-265, Interstate 275 (Ohio–Indiana–Kentucky), I-275, and Interstate 471, I-471). The parkways were originally toll roads, but on November 22, 2006, Governor Ernie Fletcher ended the toll charges on the William H. Natcher Parkway and the Audubon Parkway, the last two parkways in Kentucky to charge tolls for access. The related Toll house, toll booths have been demolished. Ending the tolls some seven months ahead of schedule was generally agreed to have been a positive economic development for transportation in Kentucky. In June 2007, a law went into effect raising the speed limit on rural portions of Kentucky Interstates and parkways from . Road tunnels include the interstate Cumberland Gap Tunnel and the rural Nada Tunnel.
RailsAmtrak, the national passenger rail system, provides service to Ashland, Kentucky, Ashland, South Shore, Kentucky, South Portsmouth, Maysville, Kentucky, Maysville and Fulton, Kentucky, Fulton. The ''Cardinal (train), Cardinal'' (trains 50 and 51) is the line that offers Amtrak service to Ashland, South Shore, Maysville and South Portsmouth. The ''City of New Orleans (train), City of New Orleans'' (trains 58 and 59) serve Fulton. The Northern Kentucky area is served by the ''Cardinal'' at Cincinnati Union Terminal. The terminal is just across the in Cincinnati. Norfolk Southern Railway passes through the Central and Southern parts of the Commonwealth, via its Cincinnati, New Orleans, and Texas Pacific (CNO&TP) subsidiary. The line originates in Cincinnati and terminates 338 miles south in Chattanooga, Tennessee. , there were approximately of railways in Kentucky, with about 65% of those being operated by CSX Transportation. Bituminous coal, Coal was by far the most common cargo, accounting for 76% of cargo loaded and 61% of cargo delivered. Bardstown, Kentucky, Bardstown features a tourist attraction known as ''My Old Kentucky Dinner Train''. Run along a stretch of rail purchased from CSX Transportation, CSX in 1987, guests are served a four-course meal as they make a two-and-a-half-hour round-trip between Bardstown and Limestone Springs. The Kentucky Railway Museum is located in nearby New Haven, Kentucky, New Haven. Other areas in Kentucky are reclaiming old railways in rail trail projects. One such project is Louisville's Big Four Bridge. When the bridge's Indiana approach ramps opened in 2014, completing the pedestrian connection across the Ohio River, the Big Four Bridge rail trail became the second-longest pedestrian-only bridge in the world. The longest pedestrian-only bridge is also found in Kentuckythe Newport Southbank Bridge, popularly known as the "Purple People Bridge", connecting Newport, Kentucky, Newport to Cincinnati, Cincinnati, Ohio.
AirKentucky's primary airports include Louisville International Airport (Standiford Field (SDF)) of , Cincinnati/Northern Kentucky International Airport (CVG) of Cincinnati/Covington, Kentucky, Covington, and Blue Grass Airport (LEX) in Lexington. Louisville International Airport is home to United Parcel Service, UPS's Worldport (UPS air hub), Worldport, its international air-sorting hub. Cincinnati/Northern Kentucky International Airport is the largest airport in the state, and is a focus city for passenger airline Delta Air Lines and headquarters of its Delta Private Jets. The airport is one of DHL Aviation's three super-hubs, serving destinations throughout the Americas, Europe, Africa, and Asia, making it the 7th busiest airport in the U.S. and 36th in the world based on passenger and cargo operations. CVG is also a focus city for Frontier Airlines and is the largest O&D airport and base for Allegiant Air, along with home to a maintenance for American Airlines subsidiary PSA Airlines and Delta Air Lines subsidiary Endeavor Air. There are also a number of regional airports scattered across the state. On August 27, 2006, Blue Grass Airport was the site of a crash that killed 47 passengers and 2crew members aboard a Bombardier CRJ designated Comair Flight 191, or Delta Air Lines Flight 5191, sometimes mistakenly identified by the press as Comair Flight 5191. The lone survivor was the flight's First Officer (civil aviation), first officer, James Polehinke, who doctors determined to be brain damaged and unable to recall the crash at all.
Water
As the state is bounded by two of the largest rivers in North America, water transportation has historically played a major role in Kentucky's economy. Louisville was a major port for steamships in the nineteenth century. Today, most barge traffic on Kentucky waterways consists of coal shipped from both the Eastern and Western Coalfields; about half of it is used locally to power the many power plants located directly along the Ohio River, with the rest exported to other countries, most notably Japan. Many of the largest ports in the United States are located in or adjacent to Kentucky, including:
* Huntington-Tristate (which includes Ashland), the largest inland port and 7th largest overall
* Cincinnati-Northern Kentucky, the 5th largest inland port and 43rd overall
* Louisville-Southern Indiana, the 7th largest inland port and 55th overall
As a state, Kentucky ranks 10th overall in port tonnage. The only natural obstacle along the entire length of the Ohio River is the Falls of the Ohio, located just west of downtown Louisville.
Law and government
Kentucky is one of four U.S. states to officially use the term ''commonwealth''. The term was used for Kentucky because it had also been used by Virginia, from which Kentucky was created. The term has no particular legal significance and was chosen to emphasize the distinction from the status of royal colonies, as a place governed for the general welfare of the populace. Kentucky was originally styled as the "State of Kentucky" in the act admitting it to the Union, since that is how it was referred to in Kentucky's first constitution. The commonwealth term was used in citizen petitions submitted between 1786 and 1792 for the creation of the state. It was also used in the title of a history of the state published in 1834 and in various places within that book in references to Virginia and Kentucky. The other three states officially called commonwealths are Massachusetts, Pennsylvania, and Virginia; Puerto Rico and the Northern Mariana Islands are also formally commonwealths. Kentucky is one of only five states that elect their state officials in odd-numbered years (the others being Louisiana, Mississippi, New Jersey, and Virginia). Kentucky holds elections for these offices every four years, in the years preceding presidential election years; thus, Kentucky held gubernatorial elections in 2011, 2015 and 2019.
Executive branchThe executive branch is headed by the Governor of Kentucky, governor who serves as both head of state and head of government. The Lieutenant Governor of Kentucky, lieutenant governor may or may not have executive authority depending on whether the person is a member of the Governor's Cabinet (government), cabinet. Under the current Kentucky Constitution, the lieutenant governor assumes the duties of the governor only if the governor is incapacitated. (Before 1992 the lieutenant governor assumed power any time the governor was out of the state.) The governor and lieutenant governor usually run on a single ticket (also per a 1992 constitutional amendment) and are elected to four-year terms. The current governor is Andy Beshear, and the lieutenant governor is Jacqueline Coleman. Both are Democratic Party (United States), Democrats. The executive branch is organized into the following "cabinets", each headed by a secretary who is also a member of the governor's cabinet: * Kentucky General Government Cabinet, General Government Cabinet * Kentucky Transportation Cabinet, Transportation Cabinet * Kentucky Cabinet for Economic Development, Cabinet for Economic Development * Kentucky Finance and Administration Cabinet, Finance and Administration Cabinet * Kentucky Tourism, Arts, and Heritage Cabinet, Tourism, Arts, and Heritage Cabinet * Kentucky Education and Workforce Development Cabinet, Education and Workforce Development Cabinet * Kentucky Cabinet for Health and Family Services, Cabinet for Health and Family Services * Kentucky Justice and Public Safety Cabinet, Justice and Public Safety Cabinet * Kentucky Personnel Cabinet, Personnel Cabinet * Kentucky Labor Cabinet, Labor Cabinet * Kentucky Energy and Environment Cabinet, Energy and Environment Cabinet * Kentucky Public Protection Cabinet, Public Protection Cabinet The cabinet system was introduced in 1972 by Governor Wendell Ford to consolidate hundreds of government entities that reported directly to the governor's office. Other elected constitutional offices include the Secretary of State of Kentucky, Secretary of State, Attorney General of Kentucky, Attorney General, Auditor of Public Accounts, Kentucky State Treasurer, State Treasurer and Commissioner of Agriculture. Currently, Republican Michael Adams (Kentucky politician), Michael G. Adams serves as the Secretary of State. The commonwealth's chief prosecutor, law enforcement officer, and law officer is the Attorney General, currently Republican Daniel Cameron (Kentucky politician), Daniel Cameron. The Auditor of Public Accounts is Republican Mike Harmon (politician), Mike Harmon. Republican Allison Ball is the current Treasurer. Republican Ryan Quarles is the current Commissioner of Agriculture.
Legislative branch
Kentucky's legislative branch consists of a bicameral body known as the Kentucky General Assembly. The Senate is considered the upper house. It has 38 members and is led by the President of the Senate, currently Robert Stivers (R). The House of Representatives has 100 members and is led by the Speaker of the House, currently David Osborne of the Republican Party. In November 2016, Republicans won control of the House for the first time since 1922, and currently have supermajorities in both the House and Senate.
Judicial branch
The judicial branch of Kentucky is called the Kentucky Court of Justice and comprises courts of limited jurisdiction called District Courts; courts of general jurisdiction called Circuit Courts; specialty courts such as Drug Court and Family Court; an intermediate appellate court, the Kentucky Court of Appeals; and a court of last resort, the Kentucky Supreme Court. The Kentucky Court of Justice is headed by the Chief Justice of the Commonwealth, an elected member of the Supreme Court of Kentucky chosen for that role by the court itself. The current chief justice is John D. Minton Jr. Unlike federal judges, who are usually appointed, justices serving on Kentucky state courts are chosen by the state's populace in non-partisan elections.
Federal representation
Kentucky's two U.S. Senators are Senate Minority Leader Mitch McConnell and Rand Paul, both Republicans. The state is divided into six Congressional Districts, represented by Republicans James Comer (1st), Brett Guthrie (2nd), Thomas Massie (4th), Hal Rogers (5th) and Andy Barr (6th) and Democrat John Yarmuth (3rd). In the federal judiciary, Kentucky is served by two United States district courts: the Eastern District of Kentucky, with its primary seat in Lexington, and the Western District of Kentucky, with its primary seat in Louisville. Appeals are heard by the Court of Appeals for the Sixth Circuit, based in Cincinnati, Ohio.
Law
Kentucky's body of laws, known as the Kentucky Revised Statutes (KRS), was enacted in 1942 to better organize and clarify the whole of Kentucky law. The statutes are enforced by local police, sheriffs and deputy sheriffs, and constables and deputy constables. Unless they have completed a police academy elsewhere, these officers are required to complete Police Officer Professional Standards (POPS) training at the Kentucky Department of Criminal Justice Training Center on the campus of Eastern Kentucky University in Richmond. Additionally, in 1948, the Kentucky General Assembly established the Kentucky State Police, making it the 38th state to create a force whose jurisdiction extends throughout the given state. Kentucky is one of the 32 states in the United States that sanction the death penalty for certain murders defined as heinous. Those convicted of capital crimes after March 31, 1998 are always executed by lethal injection; those convicted on or before this date may opt for the electric chair. Only three people have been executed in Kentucky since the U.S. Supreme Court re-instituted the practice in 1976. The most notable execution in Kentucky was that of Rainey Bethea on August 14, 1936. Bethea was publicly hanged in Owensboro for the rape and murder of Lischia Edwards. Irregularities with the execution led to this becoming the last public execution in the United States. Kentucky has been on the front lines of the debate over displaying the Ten Commandments on public property. In the 2005 case of ''McCreary County v. ACLU of Kentucky'', the U.S. Supreme Court upheld the decision of the Sixth Circuit Court of Appeals that a display of the Ten Commandments in the Whitley City courthouse of McCreary County was unconstitutional. Later that year, Judge Richard Fred Suhrheinrich, writing for the Sixth Circuit Court of Appeals in the case of ''ACLU of Kentucky v. Mercer County'', wrote that a display including the Mayflower Compact, the Declaration of Independence, the Ten Commandments, the Magna Carta, ''The Star-Spangled Banner'', and the national motto could be erected in the Mercer County courthouse. Kentucky has also been known to have unusually high political candidacy age laws, especially compared to surrounding states. The origin of this is unknown, but it has been suggested it has to do with the commonwealth tradition. A 2008 study found Kentucky's Supreme Court to be the least influential high court in the nation, with its decisions rarely being followed by other states.
Politics
Throughout its history, Kentucky has remained politically competitive. The state has leaned toward the Democratic Party since 1860, when the Whig Party dissolved. The southeastern region of the state aligned with the Union during the war and tends to support Republican candidates. The central and western portions of the state were heavily Democratic in the years leading to the Civil War and in the decades following the war. Kentucky was part of the Democratic Solid South in the second half of the nineteenth century and through the majority of the twentieth century. Mirroring a broader national reversal of party composition, the Kentucky Democratic Party of the twenty-first century primarily consists of liberal whites, African Americans, and other minorities. As of March 2020, 48.42% of the state's voters were officially registered as Democrats, and 42.75% were registered Republican, a party that tends to attract conservative whites. Some 8.83% were registered with some other political party or as Independents. Despite this, a majority of the state's voters have generally elected Republican candidates for federal office beginning around the turn of the 21st century. From 1964 through 2004, Kentucky voted for the eventual winner of the election for President of the United States; however, in the 2008 election the state lost its bellwether status. Republican John McCain won Kentucky, but he lost the national popular and electoral vote to Democrat Barack Obama (McCain carried Kentucky 57% to 41%). 116 of Kentucky's 120 counties supported former Massachusetts Governor Mitt Romney in the 2012 election, while he lost to Barack Obama nationwide. Voters in the Commonwealth have supported the previous three Democratic candidates elected to the White House in the late 20th century, all from Southern states: Lyndon B. Johnson (Texas) in 1964, Jimmy Carter (Georgia) in 1976, and Bill Clinton (Arkansas) in 1992 and 1996. In twenty-first-century presidential elections, the state has become a Republican stronghold, supporting that party's presidential candidates by double-digit margins from 2000 through 2016. At the same time, voters have continued to elect Democratic candidates to state and local offices in many jurisdictions. Kentucky is one of the most pro-life states in the United States. A 2014 poll conducted by Pew Research Center found that 57% of Kentucky's population thought that abortion should be illegal in all or most cases, while only 36% thought it should be legal in all or most cases.
Culture
Although Kentucky's culture is generally considered to be Southern, it is unique in that it is also influenced by the Midwest and Southern Appalachia in certain areas of the state. The state is known for bourbon and whiskey distilling, among other signature traditions. Kentucky is more similar to the Upland South in terms of ancestry, which is predominantly American. Nevertheless, during the 19th century, Kentucky did receive a substantial number of German immigrants, who settled mostly in the Midwest and, in Kentucky, along the Ohio River, primarily in Louisville, Covington and Newport. Only Maryland, Delaware and West Virginia have higher German ancestry percentages than Kentucky among Census-defined Southern states, although Kentucky's percentage is closer to Arkansas's and Virginia's than to those of the previously named states. Scottish Americans, English Americans and Scotch-Irish Americans have heavily influenced Kentucky culture, and are present in every part of the state. As of the 1980s, the only counties in the United States where more than half the population cited "English" as their only ancestry group were all in the hills of eastern Kentucky (and made up virtually every county in this region). Kentucky was a slave state, and black people once comprised over one-quarter of its population; however, it lacked the cotton plantation system and never had the same high percentage of African Americans as most other slave states. While less than 8% of the total population is black, Kentucky has a relatively significant rural African American population in the central and western areas of the state. Kentucky adopted the Jim Crow system of racial segregation in most public spheres after the Civil War. Louisville's 1914 ordinance for residential racial segregation was struck down by the US Supreme Court in 1917 in Buchanan v. Warley. However, in 1908 Kentucky enacted the Day Law, "An Act to Prohibit White and Colored Persons from Attending the Same School", which Berea College unsuccessfully challenged at the US Supreme Court in 1908 in Berea College v. Kentucky; in 1948, Lyman T. Johnson filed suit for admission to the University of Kentucky, and as a result, in the summer of 1949, nearly thirty African American students entered UK graduate and professional programs. Kentucky integrated its schools after the 1954 ''Brown v. Board of Education'' verdict, later adopting the first state civil rights act in the South in 1966. The biggest day in American horse racing, the Kentucky Derby, is preceded by the two-week Derby Festival in Louisville. The Derby Festival features many events, including Thunder Over Louisville, the Pegasus Parade, the Great Steamboat Race, Fest-a-Ville, the Chow Wagon, BalloonFest, BourbonVille, and many others leading up to the big race. Louisville also plays host to the Kentucky State Fair and the Kentucky Shakespeare Festival. Bowling Green, the state's third-largest city and home to the Bowling Green Assembly Plant, the only assembly plant in the world that manufactures the Chevrolet Corvette, opened the National Corvette Museum in 1994. The fourth-largest city, Owensboro, gives credence to its nickname of "Barbecue Capital of the World" by hosting the annual International Bar-B-Q Festival. Old Louisville, the largest historic preservation district in the United States featuring Victorian architecture and the third largest overall, hosts the St.
James Court Art Show, the largest outdoor art show in the United States. The neighborhood was also home to the Southern Exposition (1883–1887), which featured the first public display of Thomas Edison's light bulb, and was the setting of Alice Hegan Rice's novel, ''Mrs. Wiggs of the Cabbage Patch''. Hodgenville, the birthplace of Abraham Lincoln, hosts the annual Lincoln Days Celebration, and also hosted the kick-off for the National Abraham Lincoln Bicentennial Celebration in February 2008. Bardstown celebrates its heritage as a major bourbon-producing region with the Kentucky Bourbon Festival. Glasgow mimics Glasgow, Scotland by hosting the Glasgow Highland Games, its own version of the Highland Games, and Sturgis hosts "Little Sturgis", a mini version of Sturgis, South Dakota's annual Sturgis Motorcycle Rally. Winchester celebrates an original Kentucky creation, Beer Cheese, with its Beer Cheese Festival held annually in June. Beer Cheese was developed in Clark County at some point in the 1940s along the Kentucky River. The residents of tiny Benton pay tribute to their favorite tuber, the sweet potato, by hosting Tater Day. Residents of Clarkson in Grayson County celebrate their city's ties to the honey industry by celebrating the Clarkson Honeyfest. The Clarkson Honeyfest is held the last Thursday, Friday and Saturday in September, and is the "Official State Honey Festival of Kentucky".
Music
Renfro Valley, Kentucky is home to the Renfro Valley Entertainment Center and the Kentucky Music Hall of Fame and is known as "Kentucky's Country Music Capital", a designation given it by the Kentucky State Legislature in the late 1980s. The Renfro Valley Barn Dance was where Renfro Valley's musical heritage began, in 1939, and influential country music luminaries like Red Foley, Homer & Jethro, Lily May Ledford & the Original Coon Creek Girls, Martha Carson and many others have performed as regular members of the shows there over the years. The Renfro Valley Gatherin' is today America's second-oldest continually broadcast radio program of any kind. It is broadcast on local radio station WRVK and a syndicated network of nearly 200 other stations across the United States and Canada every week. Contemporary Christian music star Steven Curtis Chapman is a Paducah native, and Rock and Roll Hall of Famers The Everly Brothers are closely connected with Muhlenberg County, where older brother Don was born. Merle Travis, a Country & Western artist known for both his signature "Travis picking" guitar playing style and his hit song "Sixteen Tons", was also born in Muhlenberg County. Kentucky was also home to Mildred and Patty Hill, the sisters credited with composing the tune to the ditty "Happy Birthday to You" in 1893; Loretta Lynn (Johnson County); Brian Littrell and Kevin Richardson of the Backstreet Boys; and Billy Ray Cyrus (Flatwoods). However, its depth lies in its signature sound: bluegrass music. Bill Monroe, "The Father of Bluegrass", was born in the small Ohio County town of Rosine, while Ricky Skaggs, Keith Whitley, David "Stringbean" Akeman, Louis Marshall "Grandpa" Jones, Sonny and Bobby Osborne, and Sam Bush (who has been compared to Monroe) all hail from Kentucky. The Bluegrass Music Hall of Fame & Museum is located in Owensboro, while the annual Festival of the Bluegrass is held in Lexington. Kentucky is also home to famed jazz musician and pioneer Lionel Hampton. Blues legend W. C. Handy and R&B singer Wilson Pickett also spent considerable time in Kentucky. The R&B group Midnight Star and hip-hop group Nappy Roots were both formed in Kentucky, as were country acts The Kentucky Headhunters, Montgomery Gentry, Halfway to Hazard and The Judds, as well as Dove Award-winning Christian groups Audio Adrenaline (rock) and Bride (metal). Heavy rock band Black Stone Cherry hails from rural Edmonton. Rock band My Morning Jacket, with lead singer and guitarist Jim James, originated out of Louisville, as did the bands Wax Fang, White Reaper and Tantric. Rock bands Cage the Elephant, Sleeper Agent, and Morning Teleportation are also from Bowling Green. The bluegrass groups Driftwood and Kentucky Rain, along with Nick Lachey of the pop band 98 Degrees, are also from Kentucky. King Crimson guitarist Adrian Belew is from Covington. Post rock band Slint also hails from Louisville. Noted singer and actress Rosemary Clooney was a native of Maysville, her legacy being celebrated at the annual music festival bearing her name. Noted songwriter and actor Will Oldham (Bonnie "Prince" Billy) is from Louisville.
More recently in the limelight are country artists Chris Stapleton, Sturgill Simpson, Tyler Childers, and Chris Knight. In eastern Kentucky, old-time music carries on the tradition of ancient ballads and reels developed in historical Appalachia.
Literature
Kentucky has played a major role in Southern and American literature, producing works that often celebrate the working class, rural life and nature, and explore issues of class, extractive economy, and family. Major works from the state include ''Uncle Tom's Cabin'' (1852) by Harriet Beecher Stowe, widely seen as one of the impetuses for the American Civil War; ''The Little Shepherd of Kingdom Come'' (1908) by John Fox Jr., which was the first novel to sell a million copies in the United States; ''All the King's Men'' by Robert Penn Warren (1946), rated as the 36th best English-language novel of the 20th century; ''The Dollmaker'' (1954) by Harriette Arnow; ''Night Comes to the Cumberlands'' (1962) by Harry Caudill, which contributed to initiating the U.S. Government's War on Poverty; and others. Author Thomas Merton lived most of his life and wrote most of his books, including ''The Seven Storey Mountain'' (1948), ranked on the ''National Review'' list of the 100 best non-fiction books of the century, during his time as a monk at the Abbey of Our Lady of Gethsemani near Bardstown, Kentucky. Author Hunter S. Thompson is also a native of the state. Since the later part of the 20th century, several writers from Kentucky have published widely read and critically acclaimed books, including Wendell Berry (fl. 1960–), Silas House (fl. 2001–), Barbara Kingsolver (fl. 1988–), poet Maurice Manning (fl. 2001–), and Bobbie Ann Mason (fl. 1988–). Well-known playwrights from Kentucky include Marsha Norman (works include '''night, Mother'', 1983) and Naomi Wallace (works include ''One Flea Spare'', 1995).
Cuisine
Kentucky's cuisine is generally similar to traditional Southern cooking, although in some areas of the state it can blend elements of both the South and Midwest. One original Kentucky dish is called the Hot Brown, a dish normally layered in this order: toasted bread, turkey, bacon, tomatoes and topped with mornay sauce. It was developed at the Brown Hotel in Louisville. The Pendennis Club in Louisville is the birthplace of the Old Fashioned cocktail. Also, western Kentucky is known for its own regional style of barbecue. Central Kentucky is the birthplace of Beer Cheese. Harland Sanders, a Kentucky colonel, originated Kentucky Fried Chicken at his service station in North Corbin, though the first franchised KFC was located in South Salt Lake, Utah.
Sports
Kentucky is the home of several sports teams, such as Minor League Baseball's Triple-A Louisville Bats and High-A Bowling Green Hot Rods. It is also home to the independent Atlantic League of Professional Baseball's Lexington Legends and the Frontier League's Florence Y'alls. The Lexington Horsemen and Louisville Fire of the now-defunct af2 had been interested in making a move up to the "major league" Arena Football League, but nothing has come of those plans. The northern part of the state lies across the Ohio River from Cincinnati, which is home to the National Football League's Cincinnati Bengals and Major League Baseball's Cincinnati Reds. It is not uncommon for fans to park in the city of Newport and use the Newport Southbank Pedestrian Bridge, locally known as the "Purple People Bridge", to walk to these games in Cincinnati. Also, Georgetown College in Georgetown was the location for the Bengals' summer training camp, until it was announced in 2012 that the Bengals would no longer use the facilities. As in many states, especially those without major league professional sports teams, college athletics are prominent. This is especially true of the state's three NCAA Division I Football Bowl Subdivision (FBS) programs, including the Kentucky Wildcats, the Western Kentucky Hilltoppers, and the Louisville Cardinals. The Wildcats, Hilltoppers, and Cardinals are among the most tradition-rich college men's basketball teams in the United States, combining for 11 National Championships and 24 NCAA Final Fours; all three are high on the lists of total all-time wins, wins per season, and average wins per season. The Kentucky Wildcats are particularly notable, leading all Division I programs in all-time wins, win percentage and NCAA tournament appearances, and being second only to UCLA in NCAA championships. Louisville has also stepped onto the football scene in recent years, including winning the 2007 Orange Bowl as well as the 2013 Sugar Bowl, and also producing 2016 Heisman Trophy winner Lamar Jackson. Western Kentucky, the 2002 national champion in Division I-AA football (now the Football Championship Subdivision (FCS)), completed its transition to Division I FBS football in 2009. The Kentucky Derby is a horse race held annually in Louisville on the first Saturday in May. The Valhalla Golf Club in Louisville has hosted several editions of the PGA Championship, Senior PGA Championship and Ryder Cup since the 1990s. The NASCAR Cup Series held a race at the Kentucky Speedway in Sparta, Kentucky from 2011 to 2020, within an hour's drive of Cincinnati, Louisville, and Lexington. The race was called the Quaker State 400. The NASCAR Nationwide Series and the Camping World Truck Series also raced there through 2020. The IndyCar Series previously raced there as well. Ohio Valley Wrestling in Louisville was the primary location for training and rehab for WWE professional wrestlers from 2000 until 2008, when WWE moved its contracted talent to Florida Championship Wrestling.
OVW later became the primary developmental territory for Total Nonstop Action Wrestling (TNA) from 2011 to 2013. In 2014 Louisville City FC, a professional soccer team in the league then known as USL Pro and now as the United Soccer League, was announced. The team made its debut in 2015, playing home games at Louisville Slugger Field. In its first season, Louisville City was the official reserve side for Orlando City SC, which made its debut in Major League Soccer at the same time. That arrangement ended in 2016 when Orlando City established a directly controlled reserve side, Orlando City B, in the USL.
Kentucky colonel
The distinction of being named a Kentucky colonel is the highest title of honor bestowed by the Commonwealth of Kentucky. Commissions for Kentucky colonels are given by the Governor and the Secretary of State to individuals in recognition of noteworthy accomplishments and outstanding service to a community, state or the nation. The sitting governor of the Commonwealth of Kentucky bestows the honor of a colonel's commission by issuance of letters patent. Kentucky colonels are commissioned for life and act officially as the state's goodwill ambassadors.
See also
* Index of Kentucky-related articles
* Outline of Kentucky
Surveys and reference
* Bodley, Temple and Samuel M. Wilson. ''History of Kentucky'' 4 vols. (1928).
* Caudill, Harry M. ''Night Comes to the Cumberlands'' (1963).
* Channing, Steven. ''Kentucky: A Bicentennial History'' (1977).
* Clark, Thomas Dionysius. ''A History of Kentucky'' (many editions, 1937–1992).
* Collins, Lewis. ''History of Kentucky'' (1880).
* Harrison, Lowell H. and James C. Klotter. ''A New History of Kentucky'' (1997).
* Kleber, John E. et al. ''The Kentucky Encyclopedia'' (1992), standard reference history.
* Klotter, James C. ''Our Kentucky: A Study of the Bluegrass State'' (2000), high school text.
* Lucas, Marion Brunson and Wright, George C. ''A History of Blacks in Kentucky'' 2 vols. (1992).
Specialized scholarly studies |
Class 10 Science Chapter 1 Important Questions
Class 10 Science Chapter 1 Important Questions, Extra Questions and an explanation of Chemical Reactions and Equations, updated for the new academic session 2020-2021 based on the latest NCERT Books and following the new CBSE Curriculum 2020-2021. These questions cover NCERT Solutions of the back exercises, previous years' questions from the CBSE Board, intext questions and extra questions for board examinations. Ask your doubts in the discussion forum and get answers from subject experts.
Class 10 Science Chapter 1 Important Questions 2020-21
10th Science Chapter 1 Important Questions for Exams
Class 10 Science Chapter 1 Important Questions are given below for online use, updated for the new academic session 2020-2021. These sets contain questions with answers for the NCERT Book intext questions, back-exercise questions and previous years' questions. The set of important questions also contains some extra questions that are important for class tests and school exams. These are also considered expected questions for board exams.
10th Science Chapter 1 Important Questions Set – 1
What is meant by the Chemical Reaction?
Whenever a chemical change occurs, we can say that a chemical reaction has taken place. For example: food is cooked, milk is left at room temperature during summer, an iron tawa/pan/nail is left exposed to a humid atmosphere, grapes get fermented, food gets digested in our body, respiration takes place, etc. A chemical change may involve a change in state, a change in colour, the evolution of a gas, or a change in temperature.
What do you understand by the Chemical Equation? Give example.
The description of a chemical reaction in a shorter form is called a chemical equation. The simplest way to write one is in the form of a word-equation.
Magnesium + Oxygen → Magnesium oxide
Magnesium and oxygen are reacting with each other, so they are called the reactants, and magnesium oxide, formed during the reaction, is the product. The reactants are written on the left-hand side (LHS) with a plus sign (+) between them. Similarly, products are written on the right-hand side (RHS) with a plus sign (+) between them. The arrowhead points towards the products, and shows the direction of the reaction.
What do you mean by the Skeletal Chemical Equation?
A chemical reaction written in the form of symbols and formulae is called a skeletal chemical equation. The above word-equation can be written as the skeletal chemical equation:
Mg + O2 → MgO
In this equation there are two oxygen atoms on the LHS while there is only one oxygen atom on the RHS, so the skeletal equation is not yet balanced.
What do you mean by the balanced chemical equation?
A balanced chemical equation is one in which the total mass of the elements present in the products equals the total mass of the elements present in the reactants. In other words, the number of atoms of each element remains the same before and after the reaction. Hence, we need to balance a skeletal chemical equation.
For example: Zn + H2SO4 → ZnSO4 + H2
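As a quick illustration (not part of the NCERT text), the balance of an equation such as Zn + H2SO4 → ZnSO4 + H2 can be checked programmatically by counting the atoms of each element on both sides. In this Python sketch the composition of each species is entered by hand rather than parsed from the formula:

```python
# Minimal sketch: check whether a chemical equation is balanced by
# summing the atoms of each element on the reactant and product sides.
from collections import Counter

def side_totals(species):
    """Sum atom counts over (coefficient, composition) pairs for one side."""
    totals = Counter()
    for coefficient, composition in species:
        for element, count in composition.items():
            totals[element] += coefficient * count
    return totals

# Zn + H2SO4 -> ZnSO4 + H2 (the example from the text)
reactants = [(1, {"Zn": 1}), (1, {"H": 2, "S": 1, "O": 4})]
products = [(1, {"Zn": 1, "S": 1, "O": 4}), (1, {"H": 2})]

print(side_totals(reactants) == side_totals(products))  # True -> balanced
```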
What is meant by the Combination reaction? Explain with example.
A reaction in which two or more elements or compounds combine to form a single compound is called a combination reaction.
(i) CaO(s) + H2O(l) → Ca(OH)2(aq) + Heat
(Quick lime) (Slaked lime)
(ii) C(s) + O2(g) → CO2(g)
(iii) 2H2(g) + O2(g) → 2H2O(l)
Types of Chemical Reactions
The different types of chemical reactions are: combination reactions, decomposition reactions, displacement reactions, double displacement reactions, and oxidation-reduction (redox) reactions.
10th Science Chapter 1 Important Questions Set – 2
What is meant by the Decomposition reaction? Explain its other type of Decomposition reaction.
It is a type of reaction in which a single compound breaks down to give two or more simpler substances.
2FeSO4(s) → Fe2O3(s) + SO2(g) + SO3(g)
(Ferrous sulphate) (Ferric oxide)
There are three different types of decomposition reactions:
i). Thermal decomposition: When the decomposition takes place due to heat, the reaction is known as thermal decomposition.
Example: Ca(OH)2(s) → CaO(s) + H2O(l) (on heating)
ii). Electrical decomposition: When the decomposition takes place due to the passage of electricity, the reaction is known as electrical (electrolytic) decomposition.
iii). Photolytic decomposition: When the decomposition takes place in the presence of sunlight, the reaction is known as photolytic decomposition.
Example: 2AgCl(s) → 2Ag(s) + Cl2(g)
What is meant by the Displacement reaction? Give its example.
Those reactions in which a more active element displaces a less active element from its compound are called displacement reactions.
Example: Zn(s) + CuSO4(aq) → ZnSO4(aq) + Cu(s)
What do you understand by the Double Displacement reaction?
Those reactions in which two ionic compounds in solution react by exchanging their ions to form new compounds are called double displacement reactions.
Na2SO4(aq) + BaCl2(aq) → BaSO4(s) + 2NaCl(aq)
(Sodium sulphate) (Barium chloride) (Barium sulphate) (Sodium chloride)
What do you understand by the Precipitation reaction?
Those reactions in which aqueous solutions of two compounds react on mixing to form an insoluble compound, which separates out as a solid, are called precipitation reactions. Such an insoluble compound (a precipitate) may also form when a gas is passed through the solution of a compound.
CuSO4(aq) + H2S(g) → CuS(s) + H2SO4(aq)
What is meant by the neutralization reaction?
It is a type of reaction in which an acid reacts with a base to form a salt and water; this is known as a neutralization reaction.
NaOH + HCl → NaCl + H2O
A chemical reaction involves a chemical change in which substances react to form new substances with entirely new properties. Substances that react or take part in the reaction are known as reactants and the substances formed are known as products.
If a change involves change in colour or state but no new substance is formed, then it is a physical change.
If a change involves formation of new substances, it is a chemical change.
10th Science Chapter 1 Important Questions Set – 3 (1 Mark)
What is meant by the Oxidation and Reduction reaction?
The gain of oxygen or loss of hydrogen is known as an oxidation reaction.
The gain of hydrogen or loss of oxygen is known as a reduction reaction.
Differentiate between corrosion and rancidity?
Corrosion: When a metal is attacked by substances around it, such as moisture, acids, etc., it is said to corrode, and this process is known as corrosion. For example, iron articles are shiny when new, but with the passage of time they get coated with a reddish-brown powder called rust. Silver articles become black and copper articles become green on the surface due to corrosion. Corrosion causes damage to car bodies, bridges, iron railings, ships, etc.
Rancidity: When fats and oils are oxidized (or kept in the open for some days), their smell and taste change. We say that they have become rancid, and this phenomenon is called rancidity. Antioxidants are added to foods containing fats and oils to prevent rancidity. The antioxidant substances added to food are preferentially oxidized and thus prevent the oxidation of the food. Keeping food in airtight containers also helps to slow down oxidation. For example, chip manufacturers flush bags of chips with nitrogen to prevent the chips from getting oxidized.
What is Decomposition reaction?
In a decomposition reaction, a single reactant decomposes to give 2 or more products. Decomposition reactions require energy in the form of heat, light or electricity
Types of decomposition reactions:
a). Decomposition reactions which require heat are known as thermolytic decomposition reactions
b). Decomposition reactions which require light are known as photolytic decomposition reactions
c). Decomposition reactions which require electricity are known as electrolytic decomposition
What are chemical equations?
Chemical equation: The symbolic representation of a chemical reaction is called a chemical equation.
Describe any three chemical reactions?
Exothermic and endothermic reactions: If heat is evolved during a reaction, then such a reaction is known as exothermic reaction. If heat is absorbed from the surroundings, then such a reaction is known as endothermic reaction
Combination reaction: Combination reaction is a reaction in which two or more substances combine to give a single product.
Decomposition reaction: In a decomposition reaction, a single reactant decomposes to give 2 or more products. Decomposition reactions require energy in the form of heat, light or electricity
Displacement reaction: A reaction in which a more active element displaces a less active element from its salt solution.
10th Science Chapter 1 Important Questions Set – 4 (1 Mark)
State one basic difference between a Physical change and a Chemical change? [CBSE 2011]
The one basic difference is that in a physical change no new substance is formed, but in a chemical change a new substance is formed.
What is meant by a chemical reaction? [CBSE 2011]
A reaction in which a chemical change occurs is known as a chemical reaction. A chemical reaction can be observed with the help of any of the following observations:
a) Evolution of a gas
b) Change in temperature
c) Formation of a precipitate
d) Change in colour
e) Change of state
Balance the following chemical equation: FeSO4 → Fe2O3 + SO2 + SO3 [CBSE 2008]
2FeSO4 → Fe2O3 + SO2 + SO3
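As an aside (not part of the CBSE answer), the coefficients can also be found mechanically by writing one conservation equation per element and solving the resulting linear system; the sketch below does this for FeSO4 → Fe2O3 + SO2 + SO3 with sympy, using hand-entered element counts:

```python
# Balance a*FeSO4 -> b*Fe2O3 + c*SO2 + d*SO3 by finding the null space of
# the atom-conservation matrix (rows: Fe, S, O; product columns negated).
from sympy import Matrix, lcm

#            FeSO4  Fe2O3  SO2  SO3
A = Matrix([[  1,    -2,    0,   0],   # Fe
            [  1,     0,   -1,  -1],   # S
            [  4,    -3,   -2,  -3]])  # O

coefficients = A.nullspace()[0]                   # one-dimensional null space
coefficients *= lcm([c.q for c in coefficients])  # clear any fractions
print(coefficients.T)  # Matrix([[2, 1, 1, 1]]) -> 2FeSO4 -> Fe2O3 + SO2 + SO3
```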
Hydrogen being a highly inflammable gas and oxygen being a supporter of combustion, yet water which is a compound made up of hydrogen and oxygen is used to extinguish fire. Why? [CBSE 2011]
It is because the properties of a compound (H2O) are different from the properties of its constituent elements, which are H2 and O2.
State the main difference between an endothermic reaction and an exothermic reaction? [CBSE 2011]
In endothermic reaction, heat is absorbed but in the exothermic reaction, heat is evolved.
Potato chip manufacturers fill the packets of chips with nitrogen gas. Why? [CBSE 2011]
Flushing bags of chips with nitrogen cuts off oxygen and protects the food from rancidity.
Write a balanced chemical equation to represent the following reaction: Carbon monoxide reacts with hydrogen gas at 340 atm to form methyl alcohol. [CBSE 2011]
CO + 2H2 → CH3OH
Skeletal Chemical Equation
A chemical equation which simply represents the symbols and formulae of reactants and products taking part in the reaction is known as skeletal chemical equation for a reaction.
For the burning of Magnesium in the air, Mg + O2 → MgO is the skeletal equation.
10th Science Chapter 1 Important Questions Set – 5 (2 Marks)
If copper metal is heated over a flame it develops a coating. What is the colour and composition of coating? [CBSE 2011]
We get a black coating of CuO, copper oxide.
N2 + 2H2 →2NH3, name the type of reaction. [CBSE 2011]
It is a combination reaction.
Which one is a chemical change: fermentation of fruit juice or diluting fruit juice? [CBSE 2011]
Fermentation of fruit juice is a chemical change.
Is burning of a candle wax a physical change or a chemical change? [CBSE 2011]
It is a chemical change. The hydrocarbon of wax burns to produce CO2 and H2O
What happens chemically when quicklime is added to water filled in a bucket? [CBSE 2010, 2008]
Quicklime reacts with water to form slaked lime, producing a lot of heat and a hissing sound.
CaO + H2O → Ca(OH)2 + heat + hissing sound
On what basis is a chemical equation balanced? [CBSE 2010]
It is based on the law of conservation of mass, which states that mass can neither be created nor destroyed in a chemical reaction.
Name and state the law which is kept in mind when we balance a chemical equation. [CBSE 2012]
While balancing a chemical equation, we follow the law of conservation of mass, which states that mass can neither be created nor destroyed in a chemical reaction.
Define oxidation and reduction. [CBSE 2010, 2011]
Oxidation is the addition of oxygen or removal of hydrogen. Reduction is the addition of hydrogen or removal of oxygen.
Give an example of double displacement reaction. [CBSE 2010, 2011]
Na2SO4(aq) + BaCl2(aq) → BaSO4(s) + 2NaCl(aq)
Balance the following chemical equation: NaOH + H2SO4 → Na2SO4 + H2O. [CBSE 2015]
The balanced chemical equation is
2NaOH + H2SO4 → Na2SO4 + 2H2O
Balanced Chemical Equation
A balanced equation is a chemical equation in which number of atoms of each element is equal on both sides of the equation i.e. number of atoms of an element on reactant side = number of atoms of that element on the product side.
As per the law of conservation of mass, the total mass of the elements present in the products of a chemical reaction is equal to the total mass of the elements present in the reactants.
10th Science Chapter 1 Important Questions Set – 6 (2 Marks)
Write balanced chemical equations for the following reactions: Silver bromide on exposure to sunlight decomposes into silver and bromine. Sodium metal reacts with water to form sodium hydroxide and hydrogen gas. [CBSE 2012]
(i) 2AgBr → 2Ag + Br2
(ii) 2Na + 2H2O → 2NaOH + H2
(i) What is the colour of ferrous sulphate crystals? How does this colour change after heating? (ii) Name the products formed on strongly heating ferrous sulphate crystals. What type of chemical reaction occurs in this change? [CBSE 2012]
(i) The colour of ferrous sulphate crystals is pale green. The colour changes to reddish brown on heating due to the formation of iron(III) oxide.
(ii) 2FeSO4 → Fe2O3 + SO2 + SO3; this is a thermal decomposition reaction.
Using balanced chemical equation explain the difference between a displacement reaction and a double displacement reaction. [CBSE 2012, 2011]
In a displacement reaction, a more reactive metal can displace a less reactive metal from its salt solution, e.g.
Cu + 2AgNO3 → Cu(NO3)2 + 2Ag
In double displacement reactions, two compounds exchange their ions to form two new compounds, e.g.
NaOH + HCl → NaCl + H2O
Using a suitable chemical equation justify that some chemical reactions are determined by: (i) change in colour, (ii) change in temperature. [CBSE 2011]
(i) Change in colour: Pb(NO3)2 + 2KI → PbI2 + 2KNO3 (a yellow precipitate of lead iodide forms)
(ii) Change in temperature: CaO + H2O → Ca(OH)2 + heat
Give an example each for thermal decomposition and photochemical decomposition reactions. Write relevant balanced chemical equation also. [CBSE 2012]
Thermal decomposition reaction:
Ca(OH)2(s) → CaO(s) + H2O(l) (on heating)
Photochemical decomposition reaction:
2AgCl(s) → 2Ag(s) + Cl2(g) (in sunlight)
(a) A solution of substance ‘X’ Is used for white washing. What is the substance ‘X’? State the chemical reaction of ‘X’ with water. (b) Why does the colour of copper sulphate solution change when an iron nail is dipped in it? [CBSE 2011]
(a) ‘X’ is calcium oxide (CaO)
CaO + H2O → Ca(OH)2 + Heat
(b) It is because iron displaces copper from CuSO4 to form FeSO4, which is pale green.
Fe(s) + CuSO4(aq) → FeSO4(aq) + Cu(s)
Oxidation and Reduction
Oxidation is a chemical process in which a substance gains oxygen or loses hydrogen whereas Reduction is a chemical process in which a substance gains hydrogen or loses oxygen. During a chemical reaction, there is a breaking of bonds between atoms of the reacting molecules to give products.
10th Science Chapter 1 Important Questions Set – 7 (2 Marks)
Define combination reaction. Give one example of a combination reaction which is also exothermic. [CBSE 2011]
A reaction in which two elements or compounds combine to form a single compound is called combination reaction.
CaO + H2O → Ca(OH)2 + Heat
It is also an exothermic reaction because heat is evolved.
What happens when an aqueous solution of sodium sulphate reacts with an aqueous solution of barium chloride? State the physical conditions of reactants in which the reaction between them will not take place. Write the balanced chemical equation for the reaction and name the type of reaction. [Delhi 2010]
White precipitate of barium sulphate is formed. If both reactants are in solid state, then the reaction will not take place between them.
BaCl2(aq) + Na2SO4(aq) → BaSO4(s) + 2NaCl(aq)
It is double displacement as well as precipitation reaction.
What is redox reaction? When a magnesium ribbon burns in air with a dazzling flame and forms a white ash , is magnesium oxidized or reduced ? Why? [Delhi 2010]
The reaction in which oxidation (loss of electrons) and reduction (gain of electrons) take place simultaneously are called redox reaction.
2Mg + O2 → 2MgO
Magnesium is getting oxidised because it is losing electrons to form Mg2+, and oxygen is gaining electrons to form O2-; therefore oxygen is getting reduced.
Write any two observations in an activity which may suggest that a chemical reaction has taken place. Give an example in support of your answer. [CBSE 2010]
Any two of these observations will suggest chemical reaction has taken place
(i) Change in state
(ii) Change in colour
(iii) Evolution of gas
(iv) Change in temperature
e.g. lead nitrate is a white crystalline solid; on heating it gives a yellowish-brown solid (lead monoxide), and a brown gas (nitrogen dioxide) and a colourless gas (oxygen) are evolved. This shows that a chemical reaction has taken place.
2Pb(NO3)2(s) → 2PbO(s) + 4NO2(g) + O2(g)
When the powder of a common metal is heated in an open china dish, its colour turns black. However, when hydrogen is passed over the hot black substance so formed, it regains its original colour. Based on the above information answer the following questions: (i) What type of chemical reaction take place in each of the two given steps? (ii) Name the metal initially taken in the powder form. Write balanced chemical equations for both reactions. [CBSE 2010]
(i) In the first step, oxidation is taking place.
In the second step, a redox reaction takes place.
(ii) Metal in the powder form is copper
2Cu + O2→ 2CuO
CuO +H2 → Cu+H2O
What is oxidation reaction? Give an example of oxidation reaction. Is oxidation an exothermic or an endothermic reaction? [Delhi 2009]
The reaction in which oxygen or an electronegative element is added, hydrogen or an electropositive element is removed, or loss of electrons takes place is called an oxidation reaction.
Fe2+ → Fe3+ + e- (loss of an electron)
C + O2 → CO2 + heat
Oxidation reactions are mostly exothermic in nature because heat is evolved in the process.
When we change the subject of a formula, it is very important that we always do the same thing to both sides of the equation.
The question asked us to make z the subject of the formula x = y - z divided by 3.
It is often best to remove the fraction first, so we multiply everything by 3:
3x = 3y - z
Add z to both sides:
3x + z = 3y
Subtract 3x from both sides:
z = 3y - 3x
And finally, factorise:
z = 3 (y - x)
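For anyone who wants to check a rearrangement like this with software, here is a small sketch (not part of the original lesson) using sympy to solve x = y - z/3 for z:

```python
# Solve x = y - z/3 for z and confirm it matches z = 3(y - x).
from sympy import symbols, solve, expand

x, y, z = symbols("x y z")
solution = solve(x - (y - z / 3), z)       # solve x = y - z/3 for z
print(solution)                            # [-3*x + 3*y]
print(solution[0] == expand(3 * (y - x)))  # True
```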
When estimating the answer to a problem, we usually round each of the numbers to one significant figure (1 s.f.)
The question asked us to estimate the answer to the problem:
(364 x 21) divided by 39
So we start by writing each of the numbers correct to 1 s.f.:
(400 x 20) divided by 40
= 8000 divided by 40 = 200
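A small sketch (not from the original text) showing the same estimate in Python, rounding each number to one significant figure before combining:

```python
from math import floor, log10

def round_1sf(n):
    """Round a positive number to one significant figure."""
    return round(n, -floor(log10(n)))

print(round_1sf(364), round_1sf(21), round_1sf(39))    # 400 20 40
print(round_1sf(364) * round_1sf(21) / round_1sf(39))  # 200.0
```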
Stem and Leaf Diagrams
Stem and leaf diagrams are used to represent a set of data. When drawing or interpreting a stem and leaf diagram, it is important that you use a key so that you know exactly what is being represented.
In the question, the key tells us that the numbers represent prices (5|7 means 57p), so the data should read as:
3| 2 32p
4| 1 4 4 41p 44p 44p
5| 2 7 7 9 9 52p 57p 57p 59p 59p
6| 1 4 61p 64p
7| 0 70p
So we can see that the most expensive cake was 70p.
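As an optional extra (not part of the original explanation), the same grouping can be reproduced in Python using tens as stems and units as leaves; the prices below are the ones read off the diagram in the question:

```python
# Build a simple stem-and-leaf layout for the cake prices (in pence).
from collections import defaultdict

prices = [32, 41, 44, 44, 52, 57, 57, 59, 59, 61, 64, 70]

stems = defaultdict(list)
for price in sorted(prices):
    stems[price // 10].append(price % 10)

for stem in sorted(stems):
    print(stem, "|", " ".join(str(leaf) for leaf in stems[stem]))

print("Most expensive cake:", max(prices), "p")  # 70 p
```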
Expressing as a Percentage
To express one quantity as a percentage of another, we first write the numbers as a fraction and then change the fraction to a percentage (by multiplying by 100).
Remember that the numbers must have the same units - if you don't convert them first you'll end up with the wrong answer!
So the question express 32 cm as a percentage of 4 m should be changed to express 32 cm as a percentage of 400 cm
32 divided by 400 x 100 = 8%
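The same calculation in a short Python sketch (not from the original text), with the unit conversion made explicit:

```python
part_cm = 32
whole_cm = 4 * 100               # 4 m converted to cm first

print(part_cm / whole_cm * 100)  # 8.0 -> 8%
```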
Speed, Distance & Time
Emma ran for 30 minutes at 16 km/h. 30 minutes is the time, and 16 km/h is the speed, so we need to calculate the distance.
We use the formula
distance = speed x time
Distance = 16 x 0.5 = 8 km
Remember that the units must be the same. The speed was given in kilometers per hour, so we need to convert the time to hours (30 minutes = 0.5 hours), and give our answer in km.
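A tiny sketch (not from the original text) of the same calculation, with the minutes-to-hours conversion written out:

```python
speed_kmh = 16
time_hours = 30 / 60             # 30 minutes = 0.5 hours

print(speed_kmh * time_hours)    # 8.0 km
```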
Remember that when we multiply powers of the same number, we add, and when we divide powers of the same number, we subtract.
a to the power of 2 multiplied by a to the power of 7 multiplied by a to the power of 3 is a multiplication, so we add the powers.
The powers 2, 7 and 3 can be added together: 2 + 7 + 3 = 12.
So, a to the power of 2 multiplied by a to the power of 7 multiplied by a to the power of 3 = a to the power of 12.
The question asked us to factorise the expression x squared - 6x + 8
We are looking for an answer of the form (x + p)(x + q) where p + q = -6 and pq = +8.
-2 + -4 = -6 and -2 x -4 = +8
so the answer is
x squared - 6x +8 = (x - 2)(x - 4).
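A quick software check (not part of the original lesson) of the factorisation, using sympy:

```python
from sympy import symbols, factor, expand

x = symbols("x")
print(factor(x**2 - 6*x + 8))   # (x - 4)*(x - 2)
print(expand((x - 2)*(x - 4)))  # x**2 - 6*x + 8
```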
Dividing to a Given Ratio
When dividing in a given ratio we need to know the total number of parts.
Eg. divide 40 sweets between Jo and Tom in the ratio 3:5.
Jo required 3 parts of the ratio and Tom required 5, so the total is:
3 + 5 = 8
40 divided by 8 = 5, so Jo receives 3 multiplied by 5 = 15 sweets.
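A short sketch (not from the original text) of the same sharing calculation:

```python
total_sweets = 40
ratio = (3, 5)                         # Jo : Tom

part_size = total_sweets / sum(ratio)  # 40 / 8 = 5 sweets per part
print([r * part_size for r in ratio])  # [15.0, 25.0] -> Jo gets 15, Tom gets 25
```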
When rounding to a given number of decimal places we need to look at the digit after the decimal place that has been asked for. If this number is 5 or more, we round up and if it is less than 5 we round down.
So, when we are asked to round 6.78512 to two decimal places, we note that the digit in the third decimal place is 5, so we round up.
Therefore, 6.78512 = 6.79 (2d.p.) |
Solving linear systems using the elimination method is also known as using the addition/subtraction method. The goal is to "eliminate" one of the variables in order to solve the system of equations.
After simplifying, the system we need to solve is:

`x/2 + 2y = 27`
`x + y/3 = 10`
Now let's look at the first equation. Multiply each term in the equation by 2.
`2 (x/2 +2y) = 2(27)`
`x+4y = 54`
Now let's go back to the second equation. Multiply each term in the equation by 3.
`3(x+ y/3) =3(10)`
`3x + y = 30`
Now the two equations will be easier to work with.
Let's eliminate the x variable to solve for y first. To do this multiply the first equation by -3.
`-3(x+4y=54)` >>> `-3x-12y=-162`
Now we have:
`3x +y =30`
The 3x and -3x cancel out. Combine -12y with 1y which is -11y. Finally -162 combined with 30 is -132.
`-11y = -132`
Divide both sides by -11 to get y alone:

`y = 12`
Almost finished. Since you now know what y is, simply plug this value into one of the original equations to solve for x.
`1/2 x +2(12) =27`
`x/2 +24 =27`
Subtract 24 from both sides: `x/2 = 3`

Multiply both sides by 2: `2 (x/2) = 2 (3)`, which gives `x = 6`.
The solution to the system of equations is x = 6 and y = 12. |
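As a quick numerical check (not part of the original worked solution), the same system can be handed to numpy's linear solver:

```python
# Solve  x/2 + 2y = 27  and  x + y/3 = 10.
import numpy as np

A = np.array([[0.5, 2.0],
              [1.0, 1 / 3]])
b = np.array([27.0, 10.0])

x, y = np.linalg.solve(A, b)
print(x, y)  # 6.0 12.0 -- matches the elimination result
```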
In Australia’s first century, from initial convict settlement in 1788 to the post gold rush decades, the economy grew rapidly. And despite all the changes going on, we found that during this time Australia gained its equality edge.
In fact, during roughly the same period (1774 to 1870) the United States experienced a steep increase in inequality. So looking at this phase of Australian economic history could teach today’s policymakers some lessons.
In the nineteenth century, Australia enjoyed the fastest rate of GDP growth per worker; between 1821 and 1871 it was about twice that of the US and three times that of Britain. We started to look at data from the 1820s onwards. This was the time when Australia quickly evolved from a colony where convicts were 55% of the labour force to a more conventional “free” economy by 1870.
While both Australia and the United States used forced labour extensively (slaves in the southern US and convicts in Australia), their share of the labour force was much higher in Australia (more than half) than in America (about a fifth). The difference in the two countries’ trajectories on inequality has to do with the timing of the emancipation of forced labour, the duration of their coerced employment and changing economies.
How Australia avoided inequality in the past
In Australia convicts were gradually emancipated following the 1820s. As existing convicts eventually got their freedom, the inflow of new convicts fell sharply after the 1830s (except for Tasmania).
By the 1850s Britain had practically ceased its convict transportation policy. In contrast, the slaves in the American south were used as forced labour for much longer, and emancipated only after the Civil War.
Another key difference between the two countries lies in the fact that while the United States underwent a process of impressive industrial growth, Australia specialised in the export of wool and gold (small scale extraction).
We used a wage to rental ratio to work out income inequality, comparing rental income and land values to workers’ wages. What we noticed is that European settlement in Australia was characterised by labour scarcity and land abundance.
In fact, the ratio of acreage to farm labour rose by a whopping 11.7% per annum between 1828 and 1860 and by 6.3% per annum across the 1860s. This was because land endowments grew very fast after the Blue Mountains were breached in 1815. This trend was also matched by a reduction in the gap between rental income accruing to those who owned land, relative to what unskilled workers were receiving.
Our analysis shows that while land values per acre rose at 2.2% per annum, land rents fell by 0.3% per annum. This difference was driven by the fall in interest rates, because of the partial integration between Australian and British financial markets.
On the other hand, the annual earnings of unskilled labourers soared, pushing the wage-rental rate up. With the end of British transportation policy, the “emancipated” convicts moved up the earnings ranks. They almost doubled their incomes if they remained unskilled, and moved up even higher if they could exploit their skills.
But there is another important reason behind the rise in unskilled workers’ incomes. As Australia did not undergo a process of industrialisation, it did not experience an increased demand for skilled workers, like the US. So the supply of workers kept pace with the demand for skills.
Lessons to learn for today’s inequality
While today’s economic conditions are different, there is something that we can learn from this episode of Australian history. Australia’s experience shows that it’s possible to achieve fast growth, and at the same time, a reduction in inequality.
Income inequality in Australia has been rising since the mid-1990s. At the start of the 21st century, the income share of the richest 1% of Australians was higher than it had been at any point since 1951.
Greater equality obviously can’t be achieved by emancipating convicts now, but policymakers can mimic the same effect by targeting vulnerable segments of society that experience greater disadvantage. For example politicians could improve equality of access to health, education, housing and other services across the country. |
One of the predictions of special relativity is that the speed of light in a vacuum is a universal constant. This prediction has held up so well that we now use the speed of light to define part of the metric system. The first verification of special relativity is typically seen as the Michelson-Morley experiment, which demonstrated there wasn’t a luminiferous aether. But this experiment was actually done before Einstein proposed relativity, and so it wasn’t technically a prediction. It took two other experiments to completely verify Einstein’s model.
The Michelson-Morley experiment focused on determining the speed of the Earth through the aether. It wasn’t designed as a test of special relativity, and so it only tested that the speed of light was the same with different orientations. No matter which way you orient your device, the travel time back and forth along your experiment is the same. That’s certainly a prediction of relativity, but the theory goes further to claim that light speed is the same even if you’re moving at different speeds.
It took two other experiments to fully pin down the veracity of relativity. One, known as the Ives-Stilwell experiment looked at the time dilation effects of the model. In order for the speed of light to be the same in every reference frame, the clock of an experiment moving relative to you must appear to tick more slowly than that of an experiment sitting next to you. This effect is known as time dilation, and is one of the stranger aspects of relativity.
The Ives-Stilwell experiment looks at the light emitted or absorbed by fast moving particles and compares them with the transverse Doppler effect. If an object speeds past you from left to right, when it is directly in front of you would you see any Doppler shift of its light? Since the relative motion along your line of sight at that moment is zero, you might think there would be no shift. But since the object is speeding past you, its time should be dilated. As a result there should be a Doppler shift. The experiment confirmed the Doppler shift just as relativity predicts.
But relativity also predicts that space and time are connected, so a time dilation must also create a change of apparent length (known as length contraction). In other words, not only must the clock of a moving experiment appear slower, the length of the experiment must also appear shorter. Ives-Stilwell confirmed the first part, but not the second. To do that took a different test known as the Kennedy-Thorndike experiment.
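For reference, and not part of the original article, the standard formulas behind these statements can be written compactly for a source moving at speed v relative to the observer (γ is the Lorentz factor):

```latex
\[
  \gamma = \frac{1}{\sqrt{1 - v^2/c^2}}, \qquad
  \Delta t' = \gamma \, \Delta t \ \ \text{(time dilation)}, \qquad
  L' = \frac{L}{\gamma} \ \ \text{(length contraction)},
\]
\[
  f_{\text{obs}} = \frac{f_{\text{src}}}{\gamma} \ \ \text{(transverse Doppler shift, the effect probed by Ives-Stilwell)}.
\]
```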
The Kennedy-Thorndike experiment is similar to the Michelson-Morley. A beam of light is split to travel along two different paths. The separate beams of light are then recombined to create an interference pattern. The main difference is that the path length of the two beams is radically different. Since (according to Michelson-Morley) the speed of light is independent of orientation, the travel time of each path is different. Since Ives-Stilwell verified time dilation, as the apparatus moves with Earth, the amount of time dilation along one path is different from the other. This would produce a shift in the resulting interference pattern unless the lengths of the two paths also contract as relativity predicts.
The Kennedy-Thorndike experiment found no apparent shift in the interference pattern. Combined with the results of Michelson-Morley and Ives-Stilwell, this confirms that the speed of light is constant, and time dilation and length contraction both occur in agreement with special relativity.
And that’s why relativity is the strangest theory we know is true. |
As part of the Katz et al. (1990) study that examined test performance on a passage that a group of students had not read, the experimenters obtained the same kind of data from a smaller group of students who had read the passage (called the Passage group). Their data follow:
Calculate the mode, median, and the mean for these data.
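Since the Passage-group scores themselves are not reproduced here, the sketch below uses made-up placeholder values purely to show how the three statistics are computed in Python:

```python
from statistics import mode, median, mean

scores = [66, 72, 72, 75, 78, 81, 84]   # hypothetical placeholder data only

print("mode:", mode(scores))            # 72
print("median:", median(scores))        # 75
print("mean:", round(mean(scores), 2))  # 75.43
```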
Nasa probe may rewrite the books on the birth of the solar system's smallest planet
Nasa’s Messenger spacecraft is bringing new understanding to the question of how Mercury formed. The new information looks set to rewrite theories about the birth of the solar system’s smallest planet.
Many scientists consider Mercury - with its high-density composition, heavily cratered surface and magnetic field - to be the most unusual planet. It is the smallest of the four rocky planets - smaller than Mars, Venus and Earth - yet it has a massive iron core and only a thin mantle. Scientists had thought these unusual features might have been remnants of a particularly violent formation.
In the mid-1970s, Nasa’s Mariner 10 probe made three flybys of Mercury, imaging almost half of the surface. Yet it took over 30 years for scientists to get another close-up look when, in 2008, Messenger made its first pass. Then, in March this year, the half-tonne, $446 million (£285 million) craft entered orbit around Mercury - the first of over 700 orbits it is expected to make before its mission is completed next year.
Using Messenger’s gamma-ray spectrometer, Patrick Peplowski of Johns Hopkins University, US, and colleagues determined the ratios of radioactive elements on the planet’s surface.1 These elements emit gamma rays at specific, known energies, so by measuring the number of gamma rays at different energies, the researchers calculated the relative abundance of the elements.
One particular measure - the ratio of potassium to thorium, or potassium to uranium - is important for understanding the temperatures Mercury has experienced in the past. This is because potassium is more volatile than thorium or uranium: if the ratio were skewed towards these latter elements, it would imply past high temperatures boiled the potassium away into space. As it happens, the team found the opposite: Mercury has a high ratio of potassium to thorium - similar to the other rocky planets - implying past temperatures were not so high, after all.
Such moderate temperatures rule out several models of Mercury’s formation. These include the idea that Mercury once lost much of its mass in a collision with a similar-sized object, or that the sun gradually stripped the planet’s outer layers. Now, ’we are left with one theory - accretion of the planet from chondritic-like [meteorite] material,’ Peplowski says.
Another research group used x-ray fluorescence spectroscopy to probe the planet’s surface, with the results backing up the theory that Mercury was assembled from meteorites or cometary particles.2 Larry Nittler, from the Carnegie Institution of Washington, US, and colleagues discovered that the ratios of a number of elements, such as magnesium and silicon, differ from those found in a common planetary and lunar mineral. The planet’s geology bears a closer resemblance to meteorites than its rocky neighbours. ’Theorists need to go back to the drawing board on Mercury’s formation,’ says Nittler. ’Most previous ideas about Mercury’s chemistry are inconsistent with what we have actually measured on the planet’s surface.’
Messenger has not just been taking measurements of Mercury’s surface. Thomas Zurbuchen of the University of Michigan in Ann Arbor, US, and colleagues employed the probe’s plasma spectrometer to detect heavy ions in Mercury’s ionised exosphere, the outermost region of the atmosphere.3 They found that sodium ions are most abundant, particularly around the poles where the solar wind - the continuous flow of charged particles from the sun - can funnel inwards. They also found contributions from helium ions, which probably originated in helium evaporating from the surface.
Peter Wurz, a planetary scientist at Bern University in Switzerland, who was not involved in these studies, thinks these results will help scientists to understand the physics of the magnetic fields surrounding planets, known as magnetospheres. Mercury’s magnetosphere is ’highly interesting’, he says, because it seems to be driven mostly by the solar wind and, unlike Earth, has an almost non-existent ionised layer, or ionosphere, to dampen it.
1 P N Peplowski et al, Science, 2011, 333
2 L R Nittler et al, Science, 2011, 333
3 T H Zurbuchen et al, Science, 2011, 333, 1862 (DOI: 10.1126/science.1211302)
|
The concept of the derivative of a function is one of the most significant ideas in mathematics. Evaluating the derivative of a function, dy/dx, can provide insights into the behavior and characteristics of the function at a particular point. This article aims to provide a comprehensive explanation of the derivative of dy/dx and its applications in various fields.
What is a derivative?
A derivative is a measure of how much a function is changing at a specific point. It is calculated as the ratio of the change in the output of a function to the change in its input, as the interval between the points approaches zero. In simple terms, a derivative is the slope of the tangent to the curve at a particular point.
In mathematical notation, the derivative of a function y=f(x) with respect to x is denoted by dy/dx, which represents the rate at which the function is changing with respect to the variable x. It can also be explained as the instantaneous rate of change of y with respect to x at a particular value of x.
The derivative of dy/dx
The derivative of dy/dx is called the second derivative of a function. It is obtained by simply differentiating the derivative of a function with respect to its variable. In other words, it is the derivative of the first derivative of a function. The notation for the second derivative of y with respect to x is denoted as d²y/dx².
The second derivative gives us more information about the function’s behavior than the first derivative. It provides insights into the concavity and curvature of the function’s graph. If the second derivative is positive, the function is said to be concave up, while if it is negative, the function is concave down.
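As a concrete illustration, the first and second derivatives of a simple function can be computed symbolically; the sketch below assumes the SymPy library and an arbitrary example function:

```python
import sympy as sp

x = sp.symbols('x')
y = x**3 - 3*x              # example function

dy_dx = sp.diff(y, x)       # first derivative: 3*x**2 - 3
d2y_dx2 = sp.diff(y, x, 2)  # second derivative: 6*x
print(dy_dx, d2y_dx2)

# The sign of the second derivative gives concavity:
# 6*x < 0 for x < 0 (concave down), 6*x > 0 for x > 0 (concave up).
```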
Applications of the derivative of dy/dx
The derivative of dy/dx has many practical applications, ranging from science and engineering to economics and finance. Some of the applications are:
In physics, the derivative of dy/dx plays a vital role in calculating velocity and acceleration. For instance, if we know the position of an object with respect to time, we can calculate the velocity by taking the derivative of its position function with respect to time. Similarly, the acceleration can be obtained by taking the second derivative.
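For instance, with a hypothetical position function s(t) (made up here for illustration), velocity and acceleration fall out of the first and second derivatives:

```python
import sympy as sp

t = sp.symbols('t')
s = 5*t**2 + 2*t                 # hypothetical position (metres) as a function of time

velocity = sp.diff(s, t)         # 10*t + 2   (m/s)
acceleration = sp.diff(s, t, 2)  # 10         (m/s^2)
print(velocity, acceleration)
```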
In engineering, the derivative of dy/dx is used extensively in the design of electrical circuits and control systems. The rate of change of an electrical signal can be calculated using the derivative, which helps in analyzing and optimizing the performance of a system.
In economics, the derivative of dy/dx is used to calculate marginal utility, which is the additional satisfaction gained by consuming an additional unit of a product. The derivative of the utility function with respect to the quantity consumed gives us the marginal utility.
In finance, the derivative of dy/dx is used to calculate the option price in financial markets. The Black-Scholes model, which is extensively used in options pricing, is built around a partial differential equation involving derivatives of the option price with respect to time and to the price of the underlying asset.
The derivative of dy/dx is a fundamental concept, which plays a crucial role in various fields. It is the rate of change of a function, which gives us insights into the behavior of the function at a particular point. The second derivative of a function provides additional information about its concavity and curvature, which helps in analyzing the function’s graph. The applications of the derivative of dy/dx are extensive, making it a vital concept in many fields of study.
Keywords: derivative, dy/dx, second derivative, slope, tangent, concavity, curvature, velocity, acceleration, marginal utility, options pricing. |
How do Scientists Design Experiments Using the Scientific Method?
- As in all other fields of science, the knowledge of chemistry is gathered through the scientific method, a systematic method used by scientists in their investigations.
- Generally, the scientific method starts with careful observations on a situation. An inference is made based on the observations.
- An inference is a smart guess. To verify it, a hypothesis is formulated and tested through a carefully planned and controlled procedure called an experiment.
- Scientific attitudes and noble values should be inculcated in carrying out the experiments. For example:
(a) An experiment is planned and carried out systematically and diligently.
(b) All observations and collection of data must be done honestly and objectively.
(c) Interpretation of data, inferences and conclusions are made with rational, critical and analytical thinking.
Steps Involved in the Scientific Method
- Make observations using five senses.
- Make an inference based on the observations.
- Identify the problem based on the inference made.
- Make a hypothesis about the relationship between the manipulated variable and the responding variable.
- Identify the variables:
- Manipulated variable (the factor that is purposely changed in an experiment)
- Responding variable (the factor that changes with the manipulated variable)
- Controlled variable (the factors that are kept constant throughout an experiment)
- Control the variables: Decide how to manipulate the chosen variable, what to measure and how to keep the controlled variables constant.
- Plan an experiment: Determine the list of materials and apparatus, the procedure of the experiment, the method to collect data and the ways to analyse and interpret the collected data.
- Collect the data: Make observations or measurements and record them systematically.
- Interpret the data: Organise and analyse data. Calculations, graphs or charts are usually drawn to look for any relationship between the variables.
- Make a conclusion: Make a statement about the outcome of the experiment and whether the hypothesis is accepted.
- Make a Report |
In mathematics, the Riemann sum is a method of approximating the integral of a function over an interval. One representation of the integral is the area under a curve in a graph. The Riemann sum is calculated by splitting the region into equal parts. In each part, the function is replaced by a shape; shapes that are commonly used are rectangles, trapezoids, parabolas and cubics. The area of each shape can then be calculated. This method is often used when finding the closed-form integral is difficult or impossible. The shapes used also influence how accurate the approximation is. More complex shapes give a higher accuracy, but this also means that calculating the approximation becomes more difficult.
Because the region filled by the small shapes is usually not exactly the same shape as the region being measured, the Riemann sum will differ from the area being measured. This error can be reduced by dividing up the region more finely, using smaller and smaller shapes. As the shapes get smaller and smaller, the sum approaches the Riemann integral.
Definition
You divide the horizontal length under the part of the function you want to evaluate into n equal pieces; that is the n on top of the Σ (the Greek letter sigma). The factor (x_i − x_{i−1}) is the width of one of those pieces. The factor f(x_i*) is the value of the function at a point x_i* chosen inside that piece. Since the area of a rectangle is length × width, the product f(x_i*) · (x_i − x_{i−1}) is the area of a rectangle for that part of the graph. The Σ means we add up all of these small rectangles to get an approximation of the area under that segment of the function:

S = Σ from i = 1 to n of f(x_i*) · (x_i − x_{i−1})
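As a rough illustration of this definition, here is a minimal sketch (in Python, with an arbitrary example function) of a left-endpoint Riemann sum:

```python
def riemann_sum(f, a, b, n):
    """Approximate the integral of f on [a, b] with n left-endpoint rectangles."""
    width = (b - a) / n
    total = 0.0
    for i in range(n):
        x_i = a + i * width      # left endpoint of the i-th piece
        total += f(x_i) * width  # area of the rectangle on that piece
    return total

# Example: integral of x^2 from 0 to 1 (exact value is 1/3).
print(riemann_sum(lambda x: x * x, 0.0, 1.0, 1000))  # ~0.3328
```

Using more pieces (a larger n) makes the answer closer to the exact value, just as described above.
|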
Module 7 - Introduction to Solid-State Devices and Power Supplies
In the previous discussions, we assumed that for every portion of the input signal
there was an output from the amplifier. This is not always the case with amplifiers. It may be desirable to have
the transistor conducting for only a portion of the input signal. The portion of the input for which there is an
output determines the class of operation of the amplifier. There are four classes of amplifier operations. They
are class A, class AB, class B, and class C.
Class A Amplifier Operation
Class A amplifiers are biased so that variations in input signal polarities occur within the limits of CUTOFF and
SATURATION. In a PNP transistor, for example, if the base becomes positive with respect to the emitter, holes will
be repelled at the PN junction and no current can flow in the collector circuit. This condition is known as
cutoff. Saturation occurs when the base becomes so negative with respect to the emitter that changes in the signal
are not reflected in collector-current flow.
Biasing an amplifier in this manner places the dc operating
point between cutoff and saturation and allows collector current to flow during the complete cycle (360 degrees)
of the input signal, thus providing an output which is a replica of the input. Figure 2-12 is an example of a
class A amplifier. Although the output from this amplifier is 180 degrees out of phase with the input, the output
current still flows for the complete duration of the input.
The class A operated amplifier is used as an
audio- and radio-frequency amplifier in radio, radar, and sound systems, just to mention a few examples.
For a comparison of output signals for the different amplifier classes of operation, refer to figure 2-15 during
the following discussion.
Figure 2-15. - A comparison of output signals for the different amplifier classes of operation.
Class AB Amplifier Operation
Amplifiers designed for class AB operation are
biased so that collector current is zero (cutoff) for a portion of one alternation of the input signal. This is
accomplished by making the forward-bias voltage less than the peak value of the input signal. By doing this, the
base-emitter junction will be reverse biased during one alternation for the amount of time that the input signal
voltage opposes and exceeds the value of forward-bias voltage. Therefore, collector current will flow for more
than 180 degrees but less than 360 degrees of the input signal, as shown in figure 2-15 view B. As compared to the
class A amplifier, the dc operating point for the class AB amplifier is closer to cutoff.
The class AB
operated amplifier is commonly used as a push-pull amplifier to overcome a side effect of class B operation called crossover distortion.
Class B Amplifier Operation
Amplifiers biased so that collector current is cut
off during one-half of the input signal are classified class B. The dc operating point for this class of amplifier
is set up so that base current is zero with no input signal. When a signal is applied, one half cycle will forward
bias the base-emitter junction and IC will flow. The other half cycle will reverse bias the base-emitter junction
and IC will be cut off. Thus, for class B operation, collector current will flow for approximately 180 degrees
(half) of the input signal, as shown in figure 2-15 view C.
The class B operated amplifier is used
extensively for audio amplifiers that require high-power outputs. It is also used as the driver- and
power-amplifier stages of transmitters.
Class C Amplifier Operation
In class C
operation, collector current flows for less than one half cycle of the input signal, as shown in figure 2-15 view
D. The class C operation is achieved by reverse biasing the emitter-base junction, which sets the dc operating
point below cutoff and allows only the portion of the input signal that overcomes the reverse bias to cause
collector current flow.
The class C operated amplifier is used as a radio-frequency amplifier in transmitters.
From the previous
discussion, you can conclude that two primary items determine the class of operation of an amplifier - (1) the
amount of bias and (2) the amplitude of the input signal. With a given input signal and bias level, you can change
the operation of an amplifier from class A to class B just by removing forward bias. Also, a class A amplifier can
be changed to class AB by increasing the input signal amplitude. However, if an input signal amplitude is
increased to the point that the transistor goes into saturation and cutoff, it is then called an OVERDRIVEN AMPLIFIER.
You should be familiar with two terms used in conjunction with amplifiers - Fidelity and
Efficiency. Fidelity is the faithful reproduction of a signal. In other words, if the output of an amplifier is
just like the input except in amplitude, the amplifier has a high degree of fidelity. The opposite of fidelity is
a term we mentioned earlier - distortion. Therefore, a circuit that has high fidelity has low distortion. In
conclusion, a class A amplifier has a high degree of fidelity. A class AB amplifier has less fidelity, and class B
and class C amplifiers have low or "poor" fidelity.
The efficiency of an amplifier refers to the ratio of
output-signal power compared to the total input power. An amplifier has two input power sources: one from the
signal, and one from the power supply. Since every device takes power to operate, an amplifier that operates for
360 degrees of the input signal uses more power than if operated for 180 degrees of the input signal. By using
more power, an amplifier has less power available for the output signal; thus the efficiency of the amplifier is
low. This is the case
with the class A amplifier. It operates for 360 degrees of the input signal and requires a relatively
large input from the power supply. Even with no input signal, the class A amplifier still uses power from the
power supply. Therefore, the output from the class A amplifier is relatively small compared to the total input
power. This results in low efficiency, which is acceptable in class A amplifiers because they are used where
efficiency is not as important as fidelity.
Class AB amplifiers are biased so that collector current is
cut off for a portion of one alternation of the input, which results in less total input power than the class A
amplifier. This leads to better efficiency.
Class B amplifiers are biased with little or no collector
current at the dc operating point. With no input signal, there is little wasted power. Therefore, the efficiency
of class B amplifiers is higher still.
The efficiency of class C is the highest of the four classes of amplifier operation.
Q22. What amplifier class of operation allows collector current to flow during the complete cycle of the input signal?
Q23. What is the name of the term used to describe the condition in a transistor when the
emitter-base junction has zero bias or is reverse biased and there is no collector current?
Q24. What two primary items determine the class of operation of an amplifier?
Q25. What amplifier
class of operation is the most inefficient but has the least distortion?
A transistor may be connected in any one of three basic configurations (fig. 2-16): common emitter (CE),
common base (CB), and common collector (CC). The term common is used to denote the element that is common to both
input and output circuits. Because the common element is often grounded, these configurations are frequently
referred to as grounded emitter, grounded base, and grounded collector.
Figure 2-16. - Transistor configurations.
Each configuration, as you will see later, has particular characteristics that make it suitable for
specific applications. An easy way to identify a specific transistor configuration is to follow three simple steps:
1. Identify the element (emitter, base, or collector) to which the input signal is applied.
2. Identify the element (emitter, base, or collector) from which the output signal is taken.
3. The remaining element is the common element, and gives the configuration its name.
Therefore, by applying
these three simple steps to the circuit in figure 2-12, we can conclude that this circuit is more than just a
basic transistor amplifier. It is a common-emitter amplifier.
The common-emitter configuration (CE) shown in figure 2-16 view A is the
arrangement most frequently used in practical amplifier circuits, since it provides good voltage, current, and
power gain. The common emitter also has a somewhat low input resistance (500 ohms-1500 ohms), because the input is
applied to the forward-biased junction, and a moderately high output resistance (30 kilohms-50 kilohms or more),
because the output is taken off the reverse-biased junction. Since the input signal is applied to the base-emitter
circuit and the output is taken from the collector-emitter circuit, the emitter is the element common to both
input and output.
Since you have already covered what you now know to be a common-emitter amplifier (fig. 2-12), let's take a
few minutes and review its operation, using the PNP common-emitter configuration shown in figure 2-16 view A.
When a transistor is connected in a common-emitter configuration, the input signal is injected between
the base and emitter, which is a low resistance, low-current circuit. As the input signal swings positive, it also
causes the base to swing positive with respect to the emitter. This action decreases forward bias which reduces
collector current (IC) and increases collector voltage (making VC more negative). During the
negative alternation of the input signal, the base is driven more negative with
respect to the emitter. This
increases forward bias and allows more current carriers to be released from the emitter, which results in an
increase in collector current and a decrease in collector voltage (making VC less negative or swing in
a positive direction). The collector current that flows through the high resistance reverse-biased junction also
flows through a high resistance load (not shown), resulting in a high level of amplification.
Since the input signal to the common emitter goes positive when the output goes negative, the two signals (input and output)
are 180 degrees out of phase. The common-emitter circuit is the only configuration that provides a phase reversal.
The common-emitter is the most popular of the three transistor configurations because it has the best
combination of current and voltage gain. The term Gain is used to describe the amplification capabilities of the
amplifier. It is basically a ratio of output versus input. Each transistor configuration gives a different value
of gain even though the same transistor is used. The transistor configuration used is a matter of design
consideration. However, as a technician you will become interested in this output versus input ratio (gain) to
determine whether or not the transistor is working properly in the circuit.
The current gain in the
common-emitter circuit is called BETA (β). Beta is the relationship of collector current (output current) to base
current (input current). To calculate beta, use the following formula:
β = ΔIC / ΔIB
(Δ is the Greek letter delta; it is used to indicate a small change.)
For example, if the input current (IB) in a common emitter changes from 75 uA to 100 uA and
the output current (IC) changes from 1.5 mA to 2.6 mA, the current gain (β) will be 44.
This simply means that a change in base current produces a change in collector current which is 44 times greater.
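(The short Python sketch below is not part of the NEETS text; it simply repeats the arithmetic of the example above.)

```python
def beta(delta_ic, delta_ib):
    """Common-emitter current gain: change in collector current over change in base current."""
    return delta_ic / delta_ib

# Values from the example: IB goes from 75 uA to 100 uA, IC from 1.5 mA to 2.6 mA.
delta_ib = 100e-6 - 75e-6   # 25 uA
delta_ic = 2.6e-3 - 1.5e-3  # 1.1 mA
print(round(beta(delta_ic, delta_ib), 1))  # 44.0
```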
You may also see the term hfe used in place of β. The terms hfe
and β are equivalent and may be used interchangeably. This is because "hfe" means:
h = hybrid (meaning mixture)
f = forward current transfer ratio
e = common emitter configuration
The resistance gain of the
common emitter can be found in a method similar to the one used for
Once the resistance gain is known, the voltage gain is easy to calculate since it is equal to the
current gain (β) multiplied by the resistance gain (E = βR). And, the power gain is equal to the voltage gain
multiplied by the current gain β (P = βE).
The common-base configuration (CB) shown in figure 2-16, view B, is mainly used for impedance matching, since it has a low input
resistance (30 ohms-160 ohms) and a high output resistance (250 kilohms-550 kilohms). However, two factors limit
its usefulness in some circuit applications: (1) its low input resistance and (2) its current gain of less than 1.
Since the CB configuration will give voltage amplification, there are some additional applications, which require
both a low-input resistance and voltage amplification, that could use a circuit configuration of this type; for
example, some microphone amplifiers.
In the common-base configuration, the input signal is applied to the emitter, the output is taken from
the collector, and the base is the element common to both input and output. Since the input is applied to the
emitter, it causes the emitter-base junction to react in the same manner as it did in the common-emitter circuit.
For example, an input that aids the bias will increase transistor current, and one that opposes the bias will
decrease transistor current.
Unlike the common-emitter circuit, the input and output signals in the
common-base circuit are in phase. To illustrate this point, assume the input to the PNP version of the common-base
circuit in figure 2-16 view B is positive. The signal adds to the forward bias, since it is applied to the
emitter, causing the collector current to increase. This increase in Ic results in a greater voltage drop across
the load resistor
(not shown), thus lowering the collector voltage VC. The collector voltage, in becoming less negative,
is swinging in a positive direction, and is therefore in phase with the incoming positive signal.
The current gain in the common-base circuit is calculated in a method similar to that of the common emitter except
that the input current is IE not IB and the term ALPHA (α) is
used in place of beta for gain. Alpha is the relationship of collector current (output current) to emitter current
(input current). Alpha is calculated using the formula:
α = ΔIC / ΔIE
For example, if the input current (IE) in a common base changes from 1 mA to 3 mA and the
output current (IC) changes from 1 mA to 2.8 mA, the current gain (α)
will be 0.90, or α = (2.8 mA − 1 mA) / (3 mA − 1 mA) = 0.90.
This is a current gain of less than 1.
Since part of the emitter current flows into the base
and does not appear as collector current, collector current will always be less than
the emitter current that causes it. (Remember, IE = IB + IC) Therefore, ALPHA is
ALWAYS LESS THAN ONE for a Common-Base Configuration.
Another term for "α" is
hfb. These terms (α and hfb) are equivalent and may be used interchangeably. The meaning for the
term hfb is derived in the same manner as the term hfe mentioned earlier, except that the
last letter "e" has been replaced with "b" to stand for common-base configuration.
Many transistor manuals and data sheets only list transistor current gain characteristics in terms of β or hfe. To
find alpha (α) when given beta (β), use the following formula to convert β to α for
use with the common-base configuration:
α = β / (β + 1)
To calculate the other gains (voltage and power) in the common-base configuration when the current gain
(α) is known, follow the procedures described earlier under the common-emitter configuration.
The common-collector configuration (CC) shown in
figure 2-16 view C is used mostly for impedance matching. It is also used as a current driver, because of its
substantial current gain. It is particularly useful in switching circuitry, since it has the ability to pass
signals in either direction (bilateral operation).
In the common-collector circuit, the input signal is
applied to the base, the output is taken from the emitter, and the collector is the element common to both input
and output. The common collector is equivalent to our old friend the electron-tube cathode follower. Both have
high input and low output resistance. The input resistance for the common collector ranges from 2 kilohms to 500
kilohms, and the output resistance varies from 50 ohms to 1500 ohms. The current gain is higher than that in the
common emitter, but it has a lower power gain than either the common base or common emitter. Like the common base,
the output signal from the common collector is in phase with the input signal. The common collector is also
referred to as an emitter-follower because the output developed on the emitter follows the input signal applied to the base.
Transistor action in the common collector is similar to the operation explained for the common
base, except that the current gain is not based on the emitter-to-collector current ratio, alpha (α).
Instead, it is based on the emitter-to-base current ratio called GAMMA (γ), because
the output is taken off the emitter. Since a small change in base current controls a large change in emitter
current, it is still possible to obtain high current gain in the common collector. However, since the emitter
current gain is offset by the low output resistance, the voltage gain is always less than 1 (unity), exactly as in
the electron-tube cathode follower.
The common-collector current gain, gamma (γ),
is defined as γ = ΔIE / ΔIB,
and is related to the collector-to-base current gain, beta (β), of the common-emitter circuit by the formula γ = β + 1.
Since a given transistor may be connected in any of three basic configurations, there is a definite
relationship, as pointed out earlier, between alpha (α), beta (β), and gamma (γ).
These relationships are listed again for your convenience:
α = β / (β + 1)          β = α / (1 − α)          γ = β + 1 = 1 / (1 − α)
Take, for example, a transistor that is listed on a manufacturer's data sheet as having an alpha of
0.90. We wish to use it in a common emitter configuration. This means we must find beta. The calculations are:
β = α / (1 − α) = 0.90 / (1 − 0.90) = 0.90 / 0.10 = 9
Therefore, a change in base current in this transistor will produce a change in collector current that
will be 9 times as large.
If we wish to use this same transistor in a common collector, we can find gamma from beta: γ = β + 1 = 9 + 1 = 10.
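(As with the example above, the following Python sketch is only an illustration, not part of the NEETS text; it repeats the alpha-to-beta-to-gamma conversions for the 0.90 alpha quoted on the data sheet.)

```python
def beta_from_alpha(alpha):
    """Common-emitter gain from common-base gain: beta = alpha / (1 - alpha)."""
    return alpha / (1.0 - alpha)

def gamma_from_beta(beta):
    """Common-collector gain from common-emitter gain: gamma = beta + 1."""
    return beta + 1.0

alpha = 0.90
beta = beta_from_alpha(alpha)
gamma = gamma_from_beta(beta)
print(alpha, round(beta, 3), round(gamma, 3))  # 0.9 9.0 10.0
```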
To summarize the properties of the three transistor configurations, a comparison chart is provided in
table 2-1 for your convenience.
Table 2-1. - Transistor Configuration Comparison Chart
Now that we have analyzed the basic transistor amplifier in terms of bias, class of operation, and
circuit configuration, let's apply what has been covered to figure 2-12. A reproduction of figure 2-12 is shown
below for your convenience.
This illustration is not just the basic transistor amplifier shown earlier in figure 2-12 but a class A
amplifier configured as a common emitter using fixed bias. From this, you should be able to conclude the following:
· Because of its fixed bias, the amplifier is thermally unstable.
· Because of its class A
operation, the amplifier has low efficiency but good fidelity.
· Because it is configured as a common
emitter, the amplifier has good voltage, current, and power gain.
In conclusion, the type of bias, class of operation, and circuit configuration are all clues to the function and
possible application of the amplifier.
Q26. What are the three transistor configurations?
Q27. Which transistor configuration provides a phase reversal between the input and output signals?
Q28. What is the input current in the common-emitter circuit?
Q29. What is the current gain in a
common-base circuit called?
Q30. Which transistor configuration has a current gain of less than 1?
Q31. What is the output current in the common-collector circuit?
Q32. Which transistor configuration
has the highest input resistance?
Q33. What is the formula for GAMMA (γ)?
Transistors are available in a large variety of shapes and sizes, each with its own unique
characteristics. The characteristics for each of these transistors are usually presented on SPECIFICATION SHEETS
or they may be included in transistor manuals. Although many properties of a transistor could be specified on
these sheets, manufacturers list only some of them. The specifications listed vary with different manufacturers,
the type of transistor, and the application of the transistor. The specifications usually cover the following
1. A general description of the transistor that includes the following information:
a. The kind of transistor. This covers the material used, such as germanium or silicon; the type of transistor
(NPN or PNP); and the construction of the transistor (whether alloy-junction, grown, or diffused junction, etc.).
b. Some of the common applications for the transistor, such as audio amplifier, oscillator, rf amplifier.
c. General sales features, such as size and packaging (mechanical data).
2. The "Absolute Maximum Ratings" of the transistor are the direct voltage and current values that if exceeded
in operation may result in transistor failure. Maximum ratings usually include collector-to-base voltage,
emitter-to-base voltage, collector current, emitter current, and collector power dissipation.
3. The typical operating values of the transistor. These values are presented only as a guide. The values vary widely,
are dependent upon operating voltages, and also upon which element is common in the circuit. The values listed
may include collector-emitter voltage, collector current, input resistance, load resistance, current-transfer
ratio (another name for alpha or beta), and collector cutoff current, which is leakage current from collector to
base when no emitter current is applied. Transistor characteristic curves may also be included in this section. A
transistor characteristic curve is a graph plotting the relationship between currents and voltages in a circuit.
More than one curve on a graph is called a "family of curves."
4. Additional information for
So far, many letter symbols, abbreviations, and terms have been introduced,
some frequently used and others only rarely used. For a complete list of all semiconductor letter symbols and
terms, refer to EIMB series 000-0140, Section III.
Transistors can be identified by a Joint Army-Navy (JAN) designation printed directly on the case of the
transistor. The marking scheme explained earlier for diodes is also used for transistor identification. The first
number indicates the number of junctions. The letter "N" following the first number tells us that the component is
a semiconductor. And, the 2- or 3-digit number following the N is the manufacturer's identification number. If the
last number is followed by a letter, it indicates a later, improved version of the device. For example, a
semiconductor designated as type 2N130A signifies a three-element transistor of semiconductor material that is an
improved version of type 130:
|
Graphing Exponential Decay Functions
Lesson 6 of 13
Objective: SWBAT graph an exponential decay function.
For class today, I bring a ball that will bounce well on the classroom floor. I ask students to watch very carefully as I drop the ball. I like to stand on a desk or a chair (just to really get student's attention) and then drop the ball.
I give students a minute to write down as many observations as they can about what they notice about the ball as it bounces each time. Then I ask one or two students to share out what they wrote. Because this is so open-ended, expect a variety of responses:
- height of bounces
- number of bounces
- color of the ball
Next I show the second slide of exponential_decay_warmup to students and introduce the Golf Ball context. I ask the class to record the data about the height of the bounces individually. Then I ask one student to share their data. I work to engage the class in a discussion about how this data connects to what they saw in the opening demonstration (MP2, MP3).
During the ensuing discussion, I guide students towards the understanding that the bounces get smaller and smaller every time. Theoretically, this would continue forever. In real life, the ball eventually just stops bouncing. I explain to students that they have been studying exponential growth up until now. This bouncing ball scenario models something called exponential decay. Finally, I have students turn and talk to share what they think exponential decay might mean.
I have students read the exponential_decay_launch question to themselves and write down their response. Before doing a think-pair-share, I use a non-verbal cue to assess which members of the class think the car is worth nothing after 5 years. Allow the pairs to discuss the reasoning behind their choice.
After the partners talk, have them work together to describe this situation mathematically. Expect some students to struggle with this question because there is no dollar value for the car. Encourage students to make up a value that will be easy to work with to see how the calculations work (if not all students do this, I highlight the students who do). The ability to "test out" numbers to help visualize a situation is a big idea in mathematics (MP2).
After a few minutes, have several groups share out their findings. Try to call on groups that had modeled the situation correctly (determining the 20% loss), so that other students do not become confused. As the students present, help the class to recognize how this example is connected to the bouncing ball. Make sure that students understand that the car retains some value after five years because you are taking 20% of smaller and smaller numbers each year. It is important for students to understand the difference between exponential and linear depreciation (MP2).
*NOTE: I like to use the above context because, although students at this age don't own cars, they seem to have a "real-world sense" of how depreciation works. They have heard adults talk about how the value of a car drops so quickly after you first buy it. This lesson then serves to help them understand what happens to the value after that initial drop.
In this section of the lesson I will be leading the class using exponential_decay_direct. Here are my bullet point instructional notes.
- Help students to see that if 20% of the value is gone then 80% of the value is left. In doing their original calculations many students probably found 20% then subtracted and then repeated that process. Show students how multiplying by 80% (0.8) each time accomplishes the same thing.
- Students can then connect this idea to the formula they had learned for exponential growth. Ask students to turn and talk and identify the difference in the formula. Students will notice that the formula now has a minus sign because 20% is being taken off of the price. The (1 - 0.2) leaves us with (0.8) which connects back to the first bullet on this slide.
- Let students evaluate the function for each value of x over the 5 years if the original value is $10,000 (some students may have already used this value). All students can now use their 6 values to make a sketch of the exponential decay graph (a short code sketch of these values follows this list).
- Ask students what they notice about the appearance of this graph in comparison to the exponential growth graph?
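For teachers who want a quick way to generate or check the table of values, here is a minimal sketch assuming the $10,000 starting value and 20% yearly loss used above:

```python
initial_value = 10_000.0   # starting price of the car used in the lesson
rate = 0.20                # 20% of the value is lost each year

# Value after x years: V(x) = initial_value * (1 - rate) ** x
for x in range(6):         # years 0 through 5
    value = initial_value * (1 - rate) ** x
    print(x, round(value, 2))
# Prints 10000.0, 8000.0, 6400.0, 5120.0, 4096.0, 3276.8 for years 0-5.
```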
This Closing Activity will assess how each student grasped the content of the day's lesson. The first question gives you some information about the students' ability to build a function that applies to a given situation. The second question will show you if the students can evaluate the function appropriately. The third question will help you determine if students are continuing to look at the math in context. In other words, students can "mathematize" a situation but they have to be able to connect the mathematics back to the context (MP2). |
Help! The numbers in our equations have run away and left their answers alone! In this lesson, students will review their math facts and knowledge to solve Ken Ken like puzzles and bring the numbers back to their places.
It's not enough to just memorize the multiplication table! It helps students to know how to explain their strategy to find the product too. Teach this lesson on its own or use it as support for the lesson Slap and Roll Timed Multiplication.
Multiplication and Division: What's the Connection?
Numbers are connected in many ways! Take students on a journey to uncover multiplication and division fact families and inverse relationships. Teach this lesson on its own or prior to the lesson Division and Multiplication Relationship.
Help your students become detail-oriented mathematicians as they explore two strategies for multiplying a one-digit number by a multiple of 10. Use this as a stand alone lesson or alongside *Multiplying by Multiples of 10.*
Lay the foundation for multiplication by introducing your second graders to the concepts of skip counting and repeated addition. This lesson can be used alongside Up, Up, and Array, or separately to reinforce these important skills.
Explore the Associative Property of Multiplication
Use this lesson with your students to allow them to explore the associative property of multiplication by having deep discussions in small groups. Use this as a stand alone lesson or alongside *Associative Property of Multiplication*.
Reflecting on Multiplication and Division Word Problems
Teach your students how to reflect upon the information in multiplication and division word problems before solving them. Use this lesson on its own or as a pre-lesson to *Stepping Through Multiplication and Division Word Problems*.
Are your students struggling to remember their times tables? We all know the only way to remember math facts is to practice! This hands-on lesson is a fun way for your class to get the practice they need to master multiplication facts.
With this lesson, your students will see how the order of the factors does not affect the product in a multiplication expression. Use this on its own or alongside *You're On a Roll! Practicing Multiplication Facts.* |
A relay is an electrically operated switch. Many relays use an electromagnet to mechanically operate a switch, but other operating principles are also used, such as solid-state relays. Relays are used where it is necessary to control a circuit by a low-power signal (with complete electrical isolation between control and controlled circuits), or where several circuits must be controlled by one signal. The first relays were used in long distance telegraph circuits as amplifiers: they repeated the signal coming in from one circuit and re-transmitted it on another circuit. Relays were used extensively in telephone exchanges and early computers to perform logical operations.
A type of relay that can handle the high power required to directly control an electric motor or other loads is called a contactor. Solid-state relays control power circuits with no moving parts, instead using a semiconductor device to perform switching. Relays with calibrated operating characteristics and sometimes multiple operating coils are used to protect electrical circuits from overload or faults; in modern electric power systems these functions are performed by digital instruments still called "protective relays".
The American scientist Joseph Henry is often claimed to have invented a relay in 1835 in order to improve his version of the electrical telegraph, developed earlier in 1831. However, there is little in the way of official documentation to suggest he had made the discovery prior to 1837.
A simple device, which we now call a relay, was included in the original 1840 telegraph patent of Samuel Morse. The mechanism described acted as a digital amplifier, repeating the telegraph signal, and thus allowing signals to be propagated as far as desired. This overcame the problem of limited range of earlier telegraphy schemes.
The word relay appears in the context of electromagnetic operations from 1860.
Basic design and operation
A simple electromagnetic relay consists of a coil of wire wrapped around a soft iron core, an iron yoke which provides a low reluctance path for magnetic flux, a movable iron armature, and one or more sets of contacts (there are two in the relay pictured). The armature is hinged to the yoke and mechanically linked to one or more sets of moving contacts. It is held in place by a spring so that when the relay is de-energized there is an air gap in the magnetic circuit. In this condition, one of the two sets of contacts in the relay pictured is closed, and the other set is open. Other relays may have more or fewer sets of contacts depending on their function. The relay in the picture also has a wire connecting the armature to the yoke. This ensures continuity of the circuit between the moving contacts on the armature, and the circuit track on the printed circuit board (PCB) via the yoke, which is soldered to the PCB.
When an electric current is passed through the coil it generates a magnetic field that activates the armature, and the consequent movement of the movable contact(s) either makes or breaks (depending upon construction) a connection with a fixed contact. If the set of contacts was closed when the relay was de-energized, then the movement opens the contacts and breaks the connection, and vice versa if the contacts were open. When the current to the coil is switched off, the armature is returned by a force, approximately half as strong as the magnetic force, to its relaxed position. Usually this force is provided by a spring, but gravity is also used commonly in industrial motor starters. Most relays are manufactured to operate quickly. In a low-voltage application this reduces noise; in a high voltage or current application it reduces arcing.
When the coil is energized with direct current, a diode is often placed across the coil to dissipate the energy from the collapsing magnetic field at deactivation, which would otherwise generate a voltage spike dangerous to semiconductor circuit components. Such diodes were not widely used before the application of transistors as relay drivers, but soon became ubiquitous as early germanium transistors were easily destroyed by this surge. Some automotive relays include a diode inside the relay case.
If the relay is driving a large, or especially a reactive load, there may be a similar problem of surge currents around the relay output contacts. In this case a snubber circuit (a capacitor and resistor in series) across the contacts may absorb the surge. Suitably rated capacitors and the associated resistor are sold as a single packaged component for this commonplace use.
If the coil is designed to be energized with alternating current (AC), some method is used to split the flux into two out-of-phase components which add together, increasing the minimum pull on the armature during the AC cycle. Typically this is done with a small copper "shading ring" crimped around a portion of the core that creates the delayed, out-of-phase component, which holds the contacts during the zero crossings of the control voltage.
A latching relay (also called "impulse", "keep", or "stay" relays) maintains either contact position indefinitely without power applied to the coil. The advantage is that one coil consumes power only for an instant while the relay is being switched, and the relay contacts retain this setting across a power outage. A latching relay allows remote control of building lighting without the hum that may be produced from a continuously (AC) energized coil.
In one mechanism, two opposing coils with an over-center spring or permanent magnet hold the contacts in position after the coil is de-energized. A pulse to one coil turns the relay on and a pulse to the opposite coil turns the relay off. This type is widely used where control is from simple switches or single-ended outputs of a control system, and such relays are found in avionics and numerous industrial applications.
Another latching type has a remanent core that retains the contacts in the operated position by the remanent magnetism in the core. This type requires a current pulse of opposite polarity to release the contacts. A variation uses a permanent magnet that produces part of the force required to close the contact; the coil supplies sufficient force to move the contact open or closed by aiding or opposing the field of the permanent magnet. A polarity controlled relay needs changeover switches or an H bridge drive circuit to control it. The relay may be less expensive than other types, but this is partly offset by the increased costs in the external circuit.
In another type, a ratchet relay has a ratchet mechanism that holds the contacts closed after the coil is momentarily energized. A second impulse, in the same or a separate coil, releases the contacts. This type may be found in certain cars, for headlamp dipping and other functions where alternating operation on each switch actuation is needed.
An earth leakage circuit breaker includes a specialized latching relay.
Some early computers used ordinary relays as a kind of latch—they store bits in ordinary wire spring relays or reed relays by feeding an output wire back as an input, resulting in a feedback loop or sequential circuit. Such an electrically latching relay requires continuous power to maintain state, unlike magnetically latching relays or mechanically ratcheting relays.
In computer memories, latching relays and other relays were replaced by delay line memory, which in turn was replaced by a series of ever-faster and ever-smaller memory technologies.
A reed relay is a reed switch enclosed in a solenoid. The switch has a set of contacts inside an evacuated or inert gas-filled glass tube which protects the contacts against atmospheric corrosion; the contacts are made of magnetic material that makes them move under the influence of the field of the enclosing solenoid or an external magnet.
Reed relays can switch faster than larger relays and require very little power from the control circuit. However, they have relatively low switching current and voltage ratings. Though rare, the reeds can become magnetized over time, which makes them stick 'on' even when no current is present; changing the orientation of the reeds with respect to the solenoid's magnetic field can resolve this problem.
Sealed contacts with mercury-wetted contacts have longer operating lives and less contact chatter than any other kind of relay.
A mercury-wetted reed relay is a form of reed relay in which the contacts are wetted with mercury. Such relays are used to switch low-voltage signals (one volt or less) where the mercury reduces the contact resistance and associated voltage drop, for low-current signals where surface contamination may make for a poor contact, or for high-speed applications where the mercury eliminates contact bounce. Mercury wetted relays are position-sensitive and must be mounted vertically to work properly. Because of the toxicity and expense of liquid mercury, these relays are now rarely used.
The mercury-wetted relay has one particular advantage, in that the contact closure appears to be virtually instantaneous, as the mercury globules on each contact coalesce. The current rise time through the contacts is generally considered to be a few picoseconds; however, in a practical circuit it will be limited by the inductance of the contacts and wiring. It was quite common, before the restrictions on the use of mercury, to use a mercury-wetted relay in the laboratory as a convenient means of generating fast rise-time pulses. However, although the rise time may be picoseconds, the exact timing of the event is, like all other types of relay, subject to considerable jitter, possibly milliseconds, due to mechanical imperfections.
The same coalescence process causes another effect, which is a nuisance in some applications. The contact resistance is not stable immediately after contact closure, and drifts, mostly downwards, for several seconds after closure, the change perhaps being 0.5 ohm.
A mercury relay is a relay that uses mercury as the switching element. They are used where contact erosion would be a problem for conventional relay contacts. Owing to environmental considerations about significant amount of mercury used and modern alternatives, they are now comparatively uncommon.
A polarized relay places the armature between the poles of a permanent magnet to increase sensitivity. Polarized relays were used in middle 20th Century telephone exchanges to detect faint pulses and correct telegraphic distortion. The poles were on screws, so a technician could first adjust them for maximum sensitivity and then apply a bias spring to set the critical current that would operate the relay.
Machine tool relay
A machine tool relay is a type standardized for industrial control of machine tools, transfer machines, and other sequential control. They are characterized by a large number of contacts (sometimes extendable in the field) which are easily converted from normally open to normally closed status, easily replaceable coils, and a form factor that allows compactly installing many relays in a control panel. Although such relays once were the backbone of automation in such industries as automobile assembly, the programmable logic controller (PLC) mostly displaced the machine tool relay from sequential control applications.
A relay allows circuits to be switched by electrical equipment: for example, a timer circuit with a relay could switch power at a preset time. For many years relays were the standard method of controlling industrial electronic systems. A number of relays could be used together to carry out complex functions (relay logic). The principle of relay logic is based on relays which energize and de-energize associated contacts. Relay logic is the predecessor of ladder logic, which is commonly used in programmable logic controllers.
Where radio transmitters and receivers share one antenna, often a coaxial relay is used as a TR (transmit-receive) relay, which switches the antenna from the receiver to the transmitter. This protects the receiver from the high power of the transmitter. Such relays are often used in transceivers which combine transmitter and receiver in one unit. The relay contacts are designed not to reflect any radio frequency power back toward the source, and to provide very high isolation between receiver and transmitter terminals. The characteristic impedance of the relay is matched to the transmission line impedance of the system, for example, 50 ohms.
Time delay relay
Timing relays are arranged for an intentional delay in operating their contacts. A very short (a fraction of a second) delay would use a copper disk between the armature and moving blade assembly. Current flowing in the disk maintains magnetic field for a short time, lengthening release time. For a slightly longer (up to a minute) delay, a dashpot is used. A dashpot is a piston filled with fluid that is allowed to escape slowly; both air-filled and oil-filled dashpots are used. The time period can be varied by increasing or decreasing the flow rate. For longer time periods, a mechanical clockwork timer is installed. Relays may be arranged for a fixed timing period, or may be field adjustable, or remotely set from a control panel. Modern microprocessor-based timing relays provide precision timing over a great range.
Some relays are constructed with a kind of "shock absorber" mechanism attached to the armature which prevents immediate, full motion when the coil is either energized or de-energized. This addition gives the relay the property of time-delay actuation. Time-delay relays can be constructed to delay armature motion on coil energization, de-energization, or both.
Time-delay relay contacts must be specified not only as either normally open or normally closed, but whether the delay operates in the direction of closing or in the direction of opening. The following is a description of the four basic types of time-delay relay contacts.
First, there is the normally open, timed-closed (NOTC) contact. This type of contact is normally open when the coil is unpowered (de-energized). The contact is closed by the application of power to the relay coil, but only after the coil has been continuously powered for the specified amount of time. In other words, the direction of the contact's motion (either to close or to open) is identical to a regular NO contact, but there is a delay in the closing direction. Because the delay occurs in the direction of coil energization, this type of contact is alternatively known as a normally open, on-delay contact.
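As a rough illustration of on-delay behaviour, the following sketch (hypothetical class and parameter names, not a vendor API) models an NOTC contact that closes only after the coil has been continuously energized for a preset time:

```python
# Minimal sketch of a normally open, timed-closed (NOTC / on-delay) contact.
# Class and attribute names are illustrative only.

class OnDelayContact:
    def __init__(self, delay_s):
        self.delay_s = delay_s          # required continuous coil-on time
        self.energized_for = 0.0        # how long the coil has been powered

    def update(self, coil_powered, dt):
        """Advance the simulation by dt seconds and return the contact state."""
        if coil_powered:
            self.energized_for += dt
        else:
            self.energized_for = 0.0    # any de-energization restarts the delay
        return self.energized_for >= self.delay_s   # True = contact closed

# Example: a 5-second on-delay contact sampled once per second.
contact = OnDelayContact(delay_s=5.0)
for t in range(8):
    closed = contact.update(coil_powered=True, dt=1.0)
    print(f"t={t + 1}s coil=ON contact={'closed' if closed else 'open'}")
```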
A contactor is a heavy-duty relay used for switching electric motors and lighting loads, but contactors are not generally called relays. Continuous current ratings for common contactors range from 10 amps to several hundred amps. High-current contacts are made with alloys containing silver. The unavoidable arcing causes the contacts to oxidize; however, silver oxide is still a good conductor. Contactors with overload protection devices are often used to start motors. Contactors can make loud sounds when they operate, so they may be unfit for use where noise is a chief concern.
A contactor is an electrically controlled switch used for switching a power circuit, similar to a relay except with higher current ratings. A contactor is controlled by a circuit which has a much lower power level than the switched circuit.
Contactors come in many forms with varying capacities and features. Unlike a circuit breaker, a contactor is not intended to interrupt a short circuit current. Contactors range from those having a breaking current of several amperes to thousands of amperes and 24 V DC to many kilovolts. The physical size of contactors ranges from a device small enough to pick up with one hand, to large devices approximately a meter (yard) on a side.
A solid state relay or SSR is a solid state electronic component that provides a function similar to an electromechanical relay but does not have any moving components, increasing long-term reliability. A solid-state relay uses a thyristor, TRIAC or other solid-state switching device, activated by the control signal, to switch the controlled load, instead of a solenoid. An optocoupler (a light-emitting diode (LED) coupled with a photo transistor) can be used to isolate control and controlled circuits.
As every solid-state device has a small voltage drop across it, this voltage drop limits the amount of current a given SSR can handle. The minimum voltage drop for such a relay is a function of the material used to make the device. Solid-state relays rated to handle as much as 1,200 amperes have become commercially available. Compared to electromagnetic relays, they may be falsely triggered by transients and in general may be susceptible to damage by extreme cosmic ray and EMP episodes.
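The practical impact of the on-state voltage drop can be illustrated with a rough calculation; the 1.6 V drop, 40 A load, and 1 mΩ contact resistance below are assumed example figures, not specifications for any particular device:

```python
# Back-of-the-envelope dissipation estimate for a solid-state relay.
# All numbers are assumed example values.

on_state_drop_v = 1.6        # order-of-magnitude on-state drop for a TRIAC output
load_current_a = 40.0

ssr_dissipation_w = on_state_drop_v * load_current_a          # P = V_drop * I
print(f"SSR dissipation ≈ {ssr_dissipation_w:.0f} W")         # ≈ 64 W, needs a heat sink

# Compare with an electromechanical contact of an assumed 1 milliohm resistance:
contact_resistance_ohm = 0.001
emr_dissipation_w = contact_resistance_ohm * load_current_a ** 2   # P = I² · R
print(f"EMR contact dissipation ≈ {emr_dissipation_w:.2f} W")      # ≈ 1.6 W
```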
Solid state contactor relay
A solid state contactor is a heavy-duty solid state relay, including the necessary heat sink, used where frequent on/off cycles are required, such as with electric heaters, small electric motors, and lighting loads. There are no moving parts to wear out and there is no contact bounce due to vibration. They are activated by AC or DC control signals from programmable logic controllers (PLCs), PCs, transistor-transistor logic (TTL) sources, or other microprocessor and microcontroller controls.
A Buchholz relay is a safety device sensing the accumulation of gas in large oil-filled transformers, which will alarm on slow accumulation of gas or shut down the transformer if gas is produced rapidly in the transformer oil. The contacts are not operated by an electric current but by the pressure of accumulated gas or oil flow.
Force-guided contacts relay
A 'force-guided contacts relay' has relay contacts that are mechanically linked together, so that when the relay coil is energized or de-energized, all of the linked contacts move together. If one set of contacts in the relay becomes immobilized, no other contact of the same relay will be able to move. The function of force-guided contacts is to enable the safety circuit to check the status of the relay. Force-guided contacts are also known as "positive-guided contacts", "captive contacts", "locked contacts", "mechanically linked contacts", or "safety relays".
These safety relays must follow design and manufacturing rules defined in one main machinery standard, EN 50205: Relays with forcibly guided (mechanically linked) contacts. The safety design rules are those defined in type B standards such as EN 13849-2 as "basic safety principles" and "well-tried safety principles" for machinery, which apply to all machines.
Force-guided contacts by themselves can not guarantee that all contacts are in the same state, however they do guarantee, subject to no gross mechanical fault, that no contacts are in opposite states. Otherwise, a relay with several normally open (NO) contacts may stick when energised, with some contacts closed and others still slightly open, due to mechanical tolerances. Similarly, a relay with several normally closed (NC) contacts may stick to the unenergised position, so that when energised, the circuit through one set of contacts is broken, with a marginal gap, while the other remains closed. By introducing both NO and NC contacts, or more commonly, changeover contacts, on the same relay, it then becomes possible to guarantee that if any NC contact is closed, all NO contacts are open, and conversely, if any NO contact is closed, all NC contacts are open. It is not possible to reliably ensure that any particular contact is closed, except by potentially intrusive and safety-degrading sensing of its circuit conditions, however in safety systems it is usually the NO state that is most important, and as explained above, this is reliably verifiable by detecting the closure of a contact of opposite sense.
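A minimal sketch of the monitoring idea follows (hypothetical function names, not a vendor API): because the mechanical linkage guarantees that NO and NC contacts are never closed at the same time, a safety circuit can prove the NO main contacts are open by detecting that an opposite-sense (NC) monitoring contact has closed.

```python
# Sketch of force-guided contact monitoring. Absent gross mechanical failure,
# linked NO and NC contacts are never closed simultaneously, so closure of the
# NC monitoring contact proves the NO main contacts are open.

def no_contacts_proven_open(nc_monitor_closed: bool) -> bool:
    """If the NC monitoring contact is closed, all linked NO contacts are open."""
    return nc_monitor_closed

def safe_to_restart(nc_monitor_closed: bool, start_request: bool) -> bool:
    """Only allow a restart once the main NO contacts are proven open."""
    return start_request and no_contacts_proven_open(nc_monitor_closed)

# Example: the NO main contacts welded shut, so the linked NC monitor cannot close.
print(safe_to_restart(nc_monitor_closed=False, start_request=True))  # False: lockout
print(safe_to_restart(nc_monitor_closed=True,  start_request=True))  # True
```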
Force-guided contact relays are made with different main contact sets, either NO, NC or changeover, and one or more auxiliary contact sets, often of reduced current or voltage rating, used for the monitoring system. Contacts may be all NO, all NC, changeover, or a mixture of these, for the monitoring contacts, so that the safety system designer can select the correct configuration for the particular application. Safety relays are used as part of an engineered safety system.
Overload protection relay
Electric motors need overcurrent protection to prevent damage from overloading the motor, or to protect against short circuits in connecting cables or internal faults in the motor windings. The overload sensing devices are a form of heat-operated relay where a coil heats a bimetallic strip, or where a solder pot melts, releasing a spring to operate auxiliary contacts. These auxiliary contacts are wired in series with the contactor coil supplying the motor, so that if the overload device senses excess current in the load, the coil is de-energized and the motor stops.
This thermal protection operates relatively slowly allowing the motor to draw higher starting currents before the protection relay will trip. Where the overload relay is exposed to the same environment as the motor, a useful though crude compensation for motor ambient temperature is provided.
The other common overload protection system uses an electromagnet coil in series with the motor circuit that directly operates contacts. This is similar to a control relay but requires a rather high fault current to operate the contacts. To prevent short overcurrent spikes from causing nuisance tripping, the armature movement is damped with a dashpot. Thermal and magnetic overload detection are typically used together in a motor protection relay.
Electronic overload protection relays measure motor current and can estimate motor winding temperature using a "thermal model" of the motor armature system that can be set to provide more accurate motor protection. Some motor protection relays include temperature detector inputs for direct measurement from a thermocouple or resistance thermometer sensor embedded in the winding.
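A much simplified first-order version of such a thermal model is sketched below; the time constant, load current, and trip threshold are assumed example values, not settings from any particular protection relay:

```python
# First-order "thermal model" sketch for an electronic motor overload relay.
# theta is the estimated thermal state (1.0 = steady-state temperature rise at
# rated current). All numeric values are assumed examples.

def step_thermal_model(theta, current_a, rated_current_a, tau_s, dt_s):
    """Advance the thermal estimate by dt seconds for the given load current."""
    heating = (current_a / rated_current_a) ** 2     # heating is proportional to I²
    return theta + (dt_s / tau_s) * (heating - theta)

theta = 0.0          # motor starts cold
tau_s = 600.0        # assumed 10-minute thermal time constant
trip_level = 1.05    # trip slightly above the rated steady-state rise

for minute in range(60):
    theta = step_thermal_model(theta, current_a=75.0, rated_current_a=50.0,
                               tau_s=tau_s, dt_s=60.0)
    if theta > trip_level:
        print(f"Overload trip after ~{minute + 1} minutes at 150 % load")
        break
```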
A vacuum relay is a sensitive relay with its contacts mounted in a highly evacuated glass housing, which permits it to handle radio-frequency voltages as high as 20,000 volts without flashover between contacts, even though the contact spacing is only a few hundredths of an inch when open.
Safety relays are devices which generally implement safety functions. In the event of a hazard, the task of such a safety function is to use appropriate measures to reduce the existing risk to an acceptable level.
Multivoltage relays are devices designed to work over wide voltage ranges, such as 24 to 240 V AC/DC, and wide frequency ranges, such as 0 to 300 Hz. They are indicated for use in installations that do not have stable supply voltages.
Pole and throw
Since relays are switches, the terminology applied to switches is also applied to relays; a relay switches one or more poles, each of whose contacts can be thrown by energizing the coil.
Normally open (NO) contacts connect the circuit when the relay is activated; the circuit is disconnected when the relay is inactive. It is also called a "Form A" contact or "make" contact. NO contacts may also be distinguished as "early-make" or "NOEM", which means that the contacts close before the button or switch is fully engaged.
Normally closed (NC) contacts disconnect the circuit when the relay is activated; the circuit is connected when the relay is inactive. It is also called a "Form B" contact or "break" contact. NC contacts may also be distinguished as "late-break" or "NCLB", which means that the contacts stay closed until the button or switch is fully disengaged.
Change-over (CO), or double-throw (DT), contacts control two circuits: one normally open contact and one normally closed contact with a common terminal. It is also called a "Form C" contact or "transfer" contact ("break before make"). If this type of contact has a "make before break" action, then it is called a "Form D" contact.
The following designations are commonly encountered:
- SPST – Single Pole Single Throw. These have two terminals which can be connected or disconnected. Including two for the coil, such a relay has four terminals in total. It is ambiguous whether the pole is normally open or normally closed. The terminology "SPNO" and "SPNC" is sometimes used to resolve the ambiguity.
- SPDT – Single Pole Double Throw. A common terminal connects to either of two others. Including two for the coil, such a relay has five terminals in total.
- DPST – Double Pole Single Throw. These have two pairs of terminals. Equivalent to two SPST switches or relays actuated by a single coil. Including two for the coil, such a relay has six terminals in total. The poles may be Form A or Form B (or one of each).
- DPDT – Double Pole Double Throw. These have two rows of change-over terminals. Equivalent to two SPDT switches or relays actuated by a single coil. Such a relay has eight terminals, including the coil.
The "S" or "D" may be replaced with a number, indicating multiple switches connected to a single actuator. For example 4PDT indicates a four pole double throw relay that has 12 switch terminals.
EN 50005 are among applicable standards for relay terminal numbering; a typical EN 50005-compliant SPDT relay's terminals would be numbered 11, 12, 14, A1 and A2 for the C, NC, NO, and coil connections, respectively.
DIN 72552 defines contact numbers in relays for automotive use:
- 30 = common (supply) contact
- 85 = relay coil −
- 86 = relay coil +
- 87 = normally open contact
- 87a = normally closed contact
- 87b = second normally open contact
Relays are used wherever it is necessary to control a high power or high voltage circuit with a low power circuit, especially when galvanic isolation is desirable. The first application of relays was in long telegraph lines, where the weak signal received at an intermediate station could control a contact, regenerating the signal for further transmission. High-voltage or high-current devices can be controlled with small, low-voltage wiring and pilot switches. Operators can be isolated from the high voltage circuit. Low power devices such as microprocessors can drive relays to control electrical loads beyond their direct drive capability. In an automobile, a starter relay allows the high current of the cranking motor to be controlled with small wiring and contacts in the ignition key.
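As an illustrative sketch of a low-power device driving a relay, the following fragment assumes a Raspberry Pi running the RPi.GPIO library, with a transistor driver stage and a flyback diode on the coil (not shown); the pin number and timing are arbitrary example values.

```python
# Sketch: a low-power GPIO pin energizes a relay coil (via a transistor driver
# and flyback diode) so that a small controller can switch a load far beyond
# its own drive capability. Assumes a Raspberry Pi with the RPi.GPIO library.

import time
import RPi.GPIO as GPIO

RELAY_PIN = 17                          # example BCM pin driving the relay's driver stage

GPIO.setmode(GPIO.BCM)
GPIO.setup(RELAY_PIN, GPIO.OUT, initial=GPIO.LOW)

try:
    GPIO.output(RELAY_PIN, GPIO.HIGH)   # energize coil: NO contacts close, load runs
    time.sleep(5)                       # example on-time of 5 seconds
    GPIO.output(RELAY_PIN, GPIO.LOW)    # de-energize coil: contacts open
finally:
    GPIO.cleanup()
```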
Electromechanical switching systems including Strowger and Crossbar telephone exchanges made extensive use of relays in ancillary control circuits. The Relay Automatic Telephone Company also manufactured telephone exchanges based solely on relay switching techniques designed by Gotthilf Ansgarius Betulander. The first public relay based telephone exchange in the UK was installed in Fleetwood on 15 July 1922 and remained in service until 1959.
The use of relays for the logical control of complex switching systems like telephone exchanges was studied by Claude Shannon, who formalized the application of Boolean algebra to relay circuit design in A Symbolic Analysis of Relay and Switching Circuits. Relays can perform the basic operations of Boolean combinatorial logic. For example, the Boolean AND function is realised by connecting normally open relay contacts in series, and the OR function by connecting normally open contacts in parallel. Inversion of a logical input can be done with a normally closed contact. Relays were used for control of automated systems for machine tools and production lines. The ladder logic programming language is often used for designing relay logic networks.
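The correspondence between contact networks and Boolean operators can be sketched directly in code; the functions below are purely illustrative, modeling contact and coil states as booleans.

```python
# Relay-logic sketch: normally open contacts in series implement AND, in
# parallel implement OR, and a normally closed contact inverts its coil.
# True means "energized" for a coil and "closed" for a contact.

def series(*no_contacts):           # AND: current flows only if all contacts close
    return all(no_contacts)

def parallel(*no_contacts):         # OR: current flows if any one contact closes
    return any(no_contacts)

def normally_closed(coil):          # NOT: contact opens when its coil is energized
    return not coil

# Truth-table check against Boolean logic:
for a in (False, True):
    for b in (False, True):
        assert series(a, b) == (a and b)
        assert parallel(a, b) == (a or b)
    assert normally_closed(a) == (not a)
print("Relay series/parallel networks reproduce AND, OR and NOT")
```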
Because relays are much more resistant than semiconductors to nuclear radiation, they are widely used in safety-critical logic, such as the control panels of radioactive waste-handling machinery. Electromechanical protective relays are used to detect overload and other faults on electrical lines by opening and closing circuit breakers.
Relay application considerations
Selection of an appropriate relay for a particular application requires evaluation of many different factors:
- Number and type of contacts – normally open, normally closed, or changeover (double-throw)
- Contact sequence – "make before break" or "break before make". For example, old-style telephone exchanges required make-before-break contacts so that the connection was not dropped while dialing the number.
- Contact current rating – small relays switch a few amperes, large contactors are rated for up to 3000 amperes, alternating or direct current
- Contact voltage rating – typical control relays rated 300 VAC or 600 VAC, automotive types to 50 VDC, special high-voltage relays to about 15,000 V
- Operating lifetime, useful life – the number of times the relay can be expected to operate reliably. There is both a mechanical life and a contact life. The contact life is affected by the type of load switched. Breaking load current causes undesired arcing between the contacts, eventually leading to contacts that weld shut or that fail due to erosion by the arc.
- Coil voltage – machine-tool relays usually 24 VDC, 120 or 250 VAC; relays for switchgear may have 125 V or 250 V DC coils
- Coil current – minimum current required for reliable operation and minimum holding current, as well as effects of power dissipation on coil temperature at various duty cycles. "Sensitive" relays operate on a few milliamperes.
- Package/enclosure – open, touch-safe, double-voltage for isolation between circuits, explosion proof, outdoor, oil and splash resistant, washable for printed circuit board assembly
- Operating environment - minimum and maximum operating temperature and other environmental considerations such as effects of humidity and salt
- Assembly – Some relays feature a sticker that keeps the enclosure sealed to allow PCB post soldering cleaning, which is removed once assembly is complete.
- Mounting – sockets, plug board, rail mount, panel mount, through-panel mount, enclosure for mounting on walls or equipment
- Switching time – where high speed is required
- "Dry" contacts – when switching very low level signals, special contact materials may be needed such as gold-plated contacts
- Contact protection – suppress arcing in very inductive circuits
- Coil protection – suppress the surge voltage produced when switching the coil current
- Isolation between coil and contacts
- Aerospace or radiation-resistant testing, special quality assurance
- Expected mechanical loads due to acceleration – some relays used in aerospace applications are designed to function in shock loads of 50 g or more
- Size - smaller relays often resist mechanical vibration and shock better than larger relays, because of the lower inertia of the moving parts and the higher natural frequencies of smaller parts. Larger relays often handle higher voltage and current than smaller relays.
- Accessories such as timers, auxiliary contacts, pilot lamps, and test buttons
- Regulatory approvals
- Stray magnetic linkage between coils of adjacent relays on a printed circuit board.
There are many considerations involved in the correct selection of a control relay for a particular application, including factors such as speed of operation, sensitivity, and hysteresis. Although typical control relays operate in the 5 ms to 20 ms range, relays with switching speeds as fast as 100 μs are available. Reed relays, which are actuated by low currents and switch quickly, are suitable for controlling small currents.
As with any switch, the contact current (unrelated to the coil current) must not exceed a given value to avoid damage. In high-inductance circuits such as motors, other issues must be addressed. When an inductance is connected to a power source, an input surge current or electromotor starting current larger than the steady-state current exists. When the circuit is broken, the current cannot change instantaneously, which creates a potentially damaging arc across the separating contacts.
Consequently, relays used to control inductive loads must be specified for the maximum current that may flow through the contacts when the relay actuates (the make rating), for the continuous rating, and for the break rating. The make rating may be several times larger than the continuous rating, which is itself larger than the break rating.
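A minimal sketch of checking an inductive load against these three ratings follows; all numeric ratings and the inrush multiple are assumed example values.

```python
# Check an inductive load against a relay's make, continuous (carry) and break
# ratings. All numbers are assumed example values, not data for a specific part.

def load_within_ratings(steady_a, inrush_a, make_a, carry_a, break_a):
    """Return True if inrush, running and interrupting currents all fit."""
    return inrush_a <= make_a and steady_a <= carry_a and steady_a <= break_a

# A small motor drawing 6 A running, with an assumed 6x inrush at start-up:
steady_a = 6.0
inrush_a = 6.0 * steady_a

print(load_within_ratings(steady_a, inrush_a,
                          make_a=40.0, carry_a=10.0, break_a=8.0))   # True
print(load_within_ratings(steady_a, inrush_a,
                          make_a=20.0, carry_a=10.0, break_a=8.0))   # False: inrush too high
```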
|Type of load||% of rated value|
Control relays should not be operated above rated temperature because of resulting increased degradation and fatigue. Common practice is to derate 20 degrees Celsius from the maximum rated temperature limit. Relays operating at rated load are affected by their environment. Oil vapor may greatly decrease the contact life, and dust or dirt may cause the contacts to burn before the end of normal operating life. Control relay life cycle varies from 50,000 to over one million cycles depending on the electrical loads on the contacts, duty cycle, application, and the extent to which the relay is derated. When a control relay is operating at its derated value, it is controlling a smaller value of current than its maximum make and break ratings. This is often done to extend the operating life of a control relay. The table lists the relay derating factors for typical industrial control applications.
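As a rough sketch of the derating arithmetic, the fractions below are hypothetical placeholders standing in for values from a manufacturer's derating table, combined with the common-practice 20 °C temperature margin:

```python
# Derating sketch. A relay is run below its maximum ratings to extend contact
# life: the ambient limit is reduced by 20 °C and the switched current by a
# load-type factor. The factors below are hypothetical placeholders.

DERATING_FACTOR = {          # fraction of rated current actually used
    "resistive": 0.75,       # assumed example value
    "inductive": 0.50,       # assumed example value
}

def derated_limits(rated_current_a, rated_max_temp_c, load_type):
    current = rated_current_a * DERATING_FACTOR[load_type]
    temperature = rated_max_temp_c - 20      # common-practice 20 °C margin
    return current, temperature

print(derated_limits(10.0, 70, "inductive"))  # (5.0, 50)
```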
Switching while "wet" (under load) causes undesired arcing between the contacts, eventually leading to contacts that weld shut or contacts that fail due to a buildup of contact surface damage caused by the destructive arc energy.
Without adequate contact protection, the occurrence of electric current arcing causes significant degradation of the contacts, which suffer significant and visible damage. Every time a relay transitions either from a closed to an open state (break arc) or from an open to a closed state (make arc & bounce arc), under load, an electrical arc can occur between the two contact points (electrodes) of the relay. In many situations, the break arc is more energetic and thus more destructive, in particular with resistive-type loads. However, inductive loads can cause more destructive make arcs. For example, with standard electric motors, the start-up (inrush) current tends to be much greater than the running current. This translates into enormous make arcs.
During an arc event, the heat energy contained in the electrical arc is very high (tens of thousands of degrees Fahrenheit), causing the metal on the contact surfaces to melt, pool and migrate with the current. The extremely high temperature of the arc cracks the surrounding gas molecules creating ozone, carbon monoxide, and other compounds. The arc energy slowly destroys the contact metal, causing some material to escape into the air as fine particulate matter. This action causes the material in the contacts to degrade quickly, resulting in device failure. This contact degradation drastically limits the overall life of a relay to a range of about 10,000 to 100,000 operations, a level far below the mechanical life of the same device, which can be in excess of 20 million operations.
For protection of electrical apparatus and transmission lines, electromechanical relays with accurate operating characteristics were used to detect overload, short-circuits, and other faults. While many such relays remain in use, digital devices now provide equivalent protective functions.
Railway signalling relays are large considering the mostly small voltages (less than 120 V) and currents (perhaps 100 mA) that they switch. Contacts are widely spaced to prevent flashovers and short circuits over a lifetime that may exceed fifty years. BR930 series plug-in relays are widely used on railways following British practice. These are 120 mm high, 180 mm deep and 56 mm wide and weigh about 1400 g, and can have up to 16 separate contacts, for example, 12 make and 4 break contacts. Many of these relays come in 12V, 24V and 50V versions.
BR Q-type relays are available in a number of different configurations:
- QN1 Neutral
- QL1 Latched - see above
- QNA1 AC-immune
- QBA1 Biased AC-immune - see above
- QNN1 Twin Neutral 2x4-4 or 2x6-2
- QBCA1 Contactor for high current applications such as point motors. Also DC biased and AC immune.
- QTD4 - Slow to release timer
- QTD5 - Slow to pick up timer
Since rail signal circuits must be highly reliable, special techniques are used to detect and prevent failures in the relay system. To protect against false feeds, double switching relay contacts are often used on both the positive and negative side of a circuit, so that two false feeds are needed to cause a false signal. Not all relay circuits can be proved, so there is reliance on construction features such as carbon-to-silver contacts to resist lightning-induced contact welding and to provide AC immunity.
Opto-isolators are also used in some instances with railway signalling, especially where only a single contact is to be switched.
Signalling relay practice (typical circuits, drawing symbols, abbreviations, and nomenclature) follows a number of national schools, including those of the United States, France, Germany, and the United Kingdom.
- Digital protective relay
- Dry contact
- Race condition
- Stepping switch - a kind of multi-position relay
- Wire spring relay
- Analogue switch
- Nanoelectromechanical relay
Race in the United States criminal justice system refers to the unique experiences of, and disparities between, different races in the United States with regard to policing and prosecution. There have been different outcomes for different racial groups in convicting and sentencing felons in the United States criminal justice system. Experts and analysts have debated the relative importance of the factors that have led to these disparities. Minority defendants are charged with crimes requiring a mandatory minimum prison sentence more often, in both relative and absolute terms (depending on the classification of race, mainly with regard to Hispanics), leading to large racial disparities in correctional facilities.
Race has been a factor in the United States criminal justice system since the system's beginnings, as the nation was founded on Native American soil. It has continued to be a factor throughout United States history to the present day.
Lynching and lynch law date back to the 1700s, when the term was first used by the Scotch-Irish in reference to an act pursued by the Quakers toward Native Americans. The law was originally regulatory, setting out how lynching could and could not be carried out. Most crimes of and relating to lynching prior to 1830 were frontier crimes and were considered justifiable due to necessity.
In the construction of the United States Constitution in 1789, slavery and white supremacy were made part of the justice system, as citizens were defined as free white men.
Lynch law was renewed with the anti-slavery movement, as several acts of violence toward people of color took place in the early 1830s. In August 1831, Nat Turner led a slave insurrection in Virginia. Turner, an African-American Baptist preacher who believed that the Lord had destined him to free his race, carried out his plan to take Southampton County by enlisting other slaves and traveling from house to house, murdering every white person he could find. In the aftermath of this act, many innocent slaves were killed by the police.
When slavery was abolished after the Civil War through the ratification of the Thirteenth Amendment to the constitution, violence against African Americans increased tremendously and thousands of African Americans experienced lynching.
During the same time period, unequal treaties towards Native Americans led to a large decrease in Native American land holdings, and Native Americans were forced into 160 acres (65 ha) reservations.
Latin Americans entering the country were also a target for the penal system during this time.
The Ku Klux Klan was founded in 1865 in Pulaski, Tennessee as a vigilante organization whose goal was to keep control over freed slaves. It performed acts of lawlessness against Black people and other minorities, including taking Black prisoners from the custody of officers or breaking into jails to put them to death. Few efforts were made by civil authorities in the South against the Ku Klux Klan.
The Memphis Riots of 1866 took place after many black men were discharged from the United States Army. The riot broke out when a group of discharged Negro soldiers got into a brawl with a group of Irish police officers in Memphis, Tennessee. Forty-six African Americans and two white people were killed in the riot, and seventy-five people received bullet wounds. At least five African American women were raped by predatory gangs, and the property damage was worth over $100,000.
In 1868 the Fourteenth Amendment to the United States Constitution overruled the 1857 Dred Scott v. Sandford decision by establishing that those born or naturalized in the United States are entitled to equal protection under the law, regardless of race.
In 1882 Congress passed the 1882 Chinese Exclusion Act, prohibiting Chinese laborers from immigrating into the United States. Senator James G. Blaine proposed the idea in 1879 in an effort to prohibit the Chinese from taking over the Pacific slope and avoid the possibility of another civil war.
In its 1896 ruling, Plessy v. Ferguson, the United States Supreme Court established that segregation was legal in the United States, establishing the doctrine, "separate but equal". Homer Adolph Plessy was removed from the East Louisiana Railroad train and arrested for violating the Jim Crow Car Act of 1890 on June 7, 1892. Despite the Supreme Court ruling against him, Plessy's case marked the first use of the 14th Amendment's Equal Protection provision after the Reconstruction Period.
In 1935 the United States Supreme Court overturned convictions of the Scottsboro Boys in Norris v. Alabama. These were nine African American teenagers who had been previously denied equal protection under the law as stated in the Fourteenth Amendment to the United States Constitution because African Americans were purposely excluded from their cases' juries.
President Franklin D. Roosevelt established the Fair Employment Practices Commission with Executive Order 8802, which banned discrimination based on race, color, religion, or national origin in the defense industry.
In its 1954 Brown v. Board of Education decision, the United States Supreme Court overturned the "separate but equal" doctrine of the 1896 Plessy v. Ferguson case as applied to schools and required that schools be integrated. The case was brought before the Court in 1952 after African American Oliver Brown tried to enroll his daughter Linda in a local white elementary school and was refused enrollment. He and other African American parents, with the help of the NAACP, sued the Topeka school district, and Thurgood Marshall argued before the Supreme Court in 1952 and 1953 that public school segregation violated the 14th Amendment. The Court's decision was unanimous.
Emmett Till, a 14-year-old African American boy, was murdered in Mississippi for allegedly flirting with a white woman. His mother's insistence on an open-casket funeral led to the publication of images of his mutilated body in many newspapers and magazines, drawing national scrutiny to the Mississippi criminal justice system of the 1950s and 1960s.
In the 1960 case of Boynton v. Virginia, the United States Supreme Court ruled that racial segregation in public interstate transportation facilities such as bus or train stations violates the Interstate Commerce Act.
In 1963 16th Street Baptist Church was bombed, killing four African American girls and bringing attention to the need for increased civil rights protection in the United States Legislature. In 2002, nearly 40 years later, Bobby Frank Cherry was the last person brought to trial for the murder of the four girls.
The Civil Rights Act of 1964, prohibited discrimination based on race, color, religion, sex, or national origin in employment or public accommodations. It also overruled all state and local laws that mandated such discrimination.
In the 1965 riot in Watts, Los Angeles, an African American neighborhood, 16,000 police officers, highway patrolmen, and National Guard troops were required to restore order. The riot lasted for six days and resulted in property damage worth 40 million dollars. It started when an African American man named Marquette Frye was pulled over by the police on suspicion of driving under the influence of alcohol, after which tension grew between onlookers and police officers, fueling the resulting violence.
In 1937, the Marihuana Tax Act was passed. Several scholars have claimed that its goal was to destroy the hemp industry, largely through the efforts of businessmen Andrew Mellon, Randolph Hearst, and the Du Pont family. These scholars argue that, with the invention of the decorticator, hemp became a very cheap substitute for the paper pulp used in the newspaper industry, and that Hearst felt this was a threat to his extensive timber holdings. Mellon, United States Secretary of the Treasury and the wealthiest man in America, had invested heavily in DuPont's new synthetic fiber, nylon, and is said to have considered its success to depend on its replacement of the traditional resource, hemp. However, there are circumstances that contradict these claims. One reason for doubt is that the new decorticators did not perform fully satisfactorily in commercial production. Producing fiber from hemp was a labor-intensive process when harvesting, transport, and processing are included; technological developments decreased the labor required, but not sufficiently to eliminate this disadvantage.
Although Nixon declared "drug abuse" to be public enemy number one in 1971, the policies that his administration implemented as part of the Comprehensive Drug Abuse Prevention and Control Act of 1970 were a continuation of drug prohibition policies in the U.S., which started in 1914.
In 1982, President Ronald Reagan officially declared a war on drugs. Reagan increased federal spending on anti-drug programs and greatly increased the number of federal drug task forces. Ensuring a lasting impact, he also launched a campaign marked by rhetoric that demonized both drugs and drug users. The United States executive branch employed two types of anti-drug strategies during the War on Drugs: supply reduction and demand reduction. Supply-reduction strategies typically involved limiting access to drug sources and imposing harsher penalties for drug possession and distribution; demand-reduction strategies included drug-use treatment and prevention. The Reagan administration favored supply-reduction strategies and focused its efforts on the seizure of illegal substances and the prosecution of individuals caught in possession of them.
The controversy surrounding the War on Drugs is still widely debated in the academic community. In March 2016, former Nixon domestic policy chief John Ehrlichman told a writer for Harper's magazine that "the Nixon campaign in 1968, and the Nixon White House after that, had two enemies: the antiwar left and black people". He then elaborated: "We knew we couldn't make it illegal to be either against the war or black, but by getting the public to associate the hippies with marijuana and blacks with heroin, and then criminalizing both heavily, we could disrupt those communities". This comment made headlines primarily because it was the first instance of anyone affiliated with the Nixon administration publicly framing the drug war as a political tactic to assist Nixon's win.
Many scholars believe that the War on Drugs had a large impact on minority communities across the nation. In particular, African American communities were affected by the political implications of the new drug policies. It has been noted that throughout the War on Drugs, African Americans were investigated, detained, arrested, and charged with using, possessing, and distributing illegal drugs at a level disproportionate to that of the general population.
The moral poverty theory of William J. Bennett, John J. DiIulio, Jr., and John P. Walters counters that the increase in juvenile crime and drug use during the 1980s and 1990s was due to children's lack of adult role models, such as parents, teachers, and guardians, in their upbringing. They argue that children born out of wedlock are more likely to commit crimes, and they use this argument to explain the higher rate of crime among African American youth compared with white youth in the United States.
|2010. Inmates in adult facilities, by race and ethnicity. Jails, and state and federal prisons.|
|Race, ethnicity||% of US population||% of U.S. incarcerated population||National incarceration rate (per 100,000 of all ages)|
|White (non-Hispanic)||64||39||450 per 100,000|
|Hispanic||16||19||831 per 100,000|
|Black||13||40||2,306 per 100,000|
According to the United States Bureau of Justice Statistics, in 2014 6% of all black males ages 30 to 39 were in prison, while 2% of Hispanic and 1% of white males in the same age group were in prison. There were 2,724 black male prisoners with sentences over one year per 100,000 black male residents in the United States, and a total of 516,900 black male sentenced prisoners in the United States as of December 31, 2014. This compares to 1,091 Hispanic male prisoners per 100,000 Hispanic male residents, and 465 white male prisoners per 100,000 white male residents in the United States at that time. Black males between the ages of 18 and 19 had a rate of imprisonment 10.5 times that of white males of the same age group in 2014. Studies have found that a decreasing percentage of the overrepresentation of blacks in the U.S. criminal justice system can be explained by racial differences in offending: 80% in 1979, 76% in 1991, and 61% in 2004.
A 2013 study found that the increased likelihood of African American males, compared with white males, of being arrested and incarcerated was entirely accounted for after adjusting for both self-reported violence and IQ. According to a report by the National Council of La Raza, research obstacles undermine the census of Latinos in prison, and "Latinos in the criminal justice system are seriously undercounted." A study regarding the Violent Crime Control and Law Enforcement Act concluded that, due to mandatory sentencing, blacks have a 1 in 3 chance of spending some time in prison or jail, Latinos a 1 in 6 chance, and whites a 1 in 17 chance.
A 2016 study from the American Psychological Association, "Discrimination and Instructional Comprehension", examined how poor comprehension of capital-penalty jury instructions relates to death sentencing in America. Eligible subjects were asked to return a sentencing verdict based on their comprehension of the given instructions and the evidence. The study concluded that verdicts rendered by subjects who could not comprehend the penalty instructions had a higher probability of being death sentences.
Various scholars have addressed what they perceive as systemic racial bias in the administration of capital punishment in the United States. There is also a large disparity between races when it comes to sentencing convicts to death row. Federal death penalty data released by the United States Department of Justice for 1995–2000 show that 682 defendants were sentenced to death; the defendant was black in 48% of those cases, Hispanic in 29%, and white in 20%. By comparison, 52.5% of people who committed homicides in the 1980–2005 period were black.
Two competing hypotheses exist regarding why racial/ethnic minorities, especially African Americans, are overrepresented in the criminal justice system compared to their share of the general population. These are the differential offending (or differential involvement) hypothesis, which proposes that this overrepresentation results from African Americans committing more of the crimes that lead to criminal justice processing, and the differential selection hypothesis, which proposes that the disproportionality results from discrimination by the criminal justice system. Piquero (2008) argues that it is difficult, if not impossible, to determine which of these factors is more important.
The criminal justice system in the United States has a very large racial imbalance in its incarcerated population, specifically between blacks and whites. Alfred Blumstein states, "Although blacks comprise roughly one-eighth of the population, they represent about one-half of the prison population. Thus, the race-specific incarceration rates are grossly disproportionate." Blumstein's research and this apparent disproportionality raise the problem of injustice within the United States criminal justice system. This injustice is alluded to further, though not directly linked to racial injustice, in the finding that black males have an incarceration rate twenty-five times higher than that of the total population.
Education may also play into this disproportionality. Studies done from 1965 to 1969, based on administrative data, surveys, and census data, showed that 3 percent of whites and 20 percent of blacks had served time in prison by their early thirties. Thirty years later, in 1999, the risk of incarceration was partially dependent on education, with 30 percent of college dropouts and roughly 60 percent of high school dropouts going to prison. Education thus plays a role in either increasing or decreasing the likelihood of incarceration, depending on the education and skills a person possesses.
In the United States, racial disparities in the juvenile justice system are partly, but not entirely, due to racial differences in offending; differences in treatment by the justice system also appear to play a role.
A 1994 study found that black and Hispanic youths were more likely to be detained at each of the three stages of the juvenile justice system examined (police detention, court intake detention, and preliminary hearing detention), even after controlling for other factors such as offense seriousness. Other studies have reached similar conclusions. A 2014 study looking at juvenile dispositional decisions found that minority juveniles were more likely than their white counterparts to be committed to physical regimen-oriented facilities, which the authors suggested was due to court actors using "a racialized perceptual shorthand of youthful offenders that attributes both higher levels of blame and lower evaluations of reformability to minority youth." Research suggests the racial disparities in assessments of juvenile offenders, and the resulting sentence recommendations, result from officials attributing different causes of crime to cases based on the race of the offender. According to a 1982 study, racial bias in juvenile justice decisions is more pronounced in police decisions than in judicial ones.
Black and Latino juvenile offenders are also vastly more likely to be tried as adults by local prosecutors throughout the US, and are generally likelier to be given harsher, longer sentences by the judges presiding over their trials.
A study of New Jersey juvenile court records for the years 2010-2015, released by WNYC late in 2016, found that black and Latino offenders comprised almost 90% of juveniles tried as adults (849 black and 247 Latino youths out of a total of 1,251 juveniles tried as adults during the five-year period, or 87.6% of the total cases). WNYC also surveyed all NJ inmates currently serving sentences resulting from crimes committed as minors and found that 93% of them are black or Latino. These numbers represent a clear racial disparity in sentencing, particularly given that during this period New Jersey was only 14.8% Black and 19.7% Hispanic, while 56.2% of the state's residents were white. "Controlling for nature of offense...for family background...for educational history—all of the things that go into a prosecutor's decision, there are still disparities, significant disparities, that cannot be explained by anything other than race," says Laura Cohen, the director of the Criminal and Youth Justice Clinic at Rutgers Law School.
These numbers are comparable to the juvenile detention and sentencing trends for the country as a whole, analysis of which shows that roughly 60% of all juveniles who received life sentences after being tried as adults are black. Judges, prosecutors, juries, and police/detention officers all commonly perceive black children as less innocent and childlike than white children. Black teens are commonly over-estimated in age by an average of 4.5 years, meaning that black boys as young as 13 could conceivably be seen as fully 18 years old, making it easy for overzealous prosecutors to treat them as adult defendants. This tendency to round black teens up to adults is detailed in a 2014 study by the American Psychological Association entitled "The Essence of Innocence: Consequences of Dehumanizing Black Children".
Immigrants to the United States commit fewer crimes than native-born citizens, and part of the drop in crime rates has been attributed to the greater influx of immigrants. The belief that a third of all federal prisoners are illegal immigrants is inaccurate, as government authorities do not categorize all inmates by immigration status. The higher percentage of undocumented convicted immigrants in federal courts was due to immigration offenses rather than serious crimes such as drug offenses, and US-born citizens have higher percentages for crimes such as drug offenses. Arresting undocumented immigrants cannot by itself ensure public safety, and some law enforcement authorities state that aggressively enforcing immigration law would jeopardize public safety. To ensure public safety, initiatives should be taken to investigate the causes of crime and implement community-based programs accordingly. The legal systems of both past and present United States governments have had to confront issues of enforcing border security and/or deporting illegal immigrants.
Over the past 70 years, research into the impact of racial identity on sentencing outcomes has been at the forefront of criminology, but many studies contradict one another. Some studies have found that minorities receive harsher sentences than whites, while others have found that minorities receive lighter punishments. A study conducted from 2011 to 2014 that followed 302 men and women with drug-related convictions found that blacks were actually convicted at a lower rate than other ethnicities but had, on average, 2.5 more incarcerations.
Numerous studies have been conducted to examine whether race is associated with sentence length or severity. An early study by Joan Petersilia found that in California, Michigan, and Texas, Hispanics and blacks tended to receive harsher sentences than whites convicted of comparable crimes and with similar criminal records. A 1998 meta-analysis found that the relationship between race and sentencing in the U.S. was not statistically significant, but that the use of different methods of classifying race may also mask the true race-sentencing relationship. A study published the same year, which examined sentencing data from Pennsylvania, found that young black men were sentenced more harshly than were members of any other age-race-gender combination. Similarly, a 2005 meta-analysis found that blacks tended to receive harsher sentences than did whites, and that this effect was "statistically significant but small and highly variable."
A 2006 study found that blacks and Hispanics received about 10% longer sentences than whites, even after controlling for all possible relevant characteristics, with regard to final offenses. However, when the researchers examined base offenses instead, the disparity was reversed. A 2010 analysis of U.S. Sentencing Commission data found that blacks received the longest sentences of any ethnicity within each gender group (specifically, their sentence lengths were on average 91 months for men and 36 months for women). A 2011 study found that black women with lighter perceived skin tones tended to receive more lenient sentences and to serve less of their sentences behind bars. A 2012 study looking at felony case data from Cook County, Illinois found that the sentencing disparity between blacks and whites varied significantly from judge to judge, which the authors state provides "support for the model where at least some judges treat defendants differently based on their race." A 2013 report by the U.S. Sentencing Commission found that black men's prison sentences were on average almost 20% longer than those of their white counterparts who were convicted of similar crimes.
A 2015 study focusing primarily on black and white men in Georgia found that, on average, black men received sentences that were 4.25% longer than those of whites for the same type of crime. However, the same study found a larger disparity in sentence length among medium- and dark-skinned blacks, who received 4.8% longer sentences than whites, whereas light-skinned blacks received sentences of about the same average length as those of whites. It is also documented that, in the United States as a whole, Latinos, African Americans, and American Indians are far more frequently convicted than white Americans, and they receive harsher and longer punishments than their white counterparts for committing the same crimes.
A study published by Roland G. Fryer, Jr., a professor at Harvard, concluded in 2015 that, nationwide, white people were more likely to be shot by police than black people in similar situations, while black and Hispanic people were more likely to be manhandled, handcuffed, or beaten by the police, even when they were compliant and law-abiding. The study looked at 1,332 police shootings between 2000 and 2015 in 10 major police departments in Texas, Florida, and California. It found that black and white suspects were equally likely to be armed and that officers were more likely to fire their weapons before being attacked when the suspects were white. For shootings in Houston, the study looked at incidents in which an officer did not fire but might have been expected to, and concluded that officers were about 20 percent less likely to shoot black suspects. Regarding lethal force, the study concluded that police are not racially biased in its use. A 2016 study published in the journal Injury Prevention concluded that African Americans, Native Americans, and Latinos were more likely to be stopped by police than Asians and whites, but found no racial bias in the likelihood of being killed or injured after being stopped. The disparity in how police interact with white people and people of color was a contributing factor in the rise of the Black Lives Matter movement.
A database collected by The Guardian recorded that 1,093 people were killed by the police in 2016. The rate of fatal police shootings per million was 10.13 for Native Americans, 6.66 for Black people, 3.23 for Hispanics, 2.93 for White people, and 1.17 for Asians. In absolute numbers, the database showed that more white people were killed by police than people of any other race or ethnicity.
Police behavior depends on the social dynamics of a police-citizen interaction, during which different levels of force can be applied to the citizen. A 2017 study found that people of different races are treated differently by police officers over the course of an interaction. The study examined 62 white, 42 black, and 35 Latino use-of-force cases from a medium-to-large urban police department in the United States. It found that some groups were treated differently at the moment force was first used, while others were treated differently as the use of force escalated: black and Latino suspects had more force applied to them early in the interaction, while white citizens received more violent force as the interaction progressed.
A 2014 study involving computer-based simulations of a police encounter, in which the participant must decide whether or not to shoot, found a greater likelihood of shooting Black targets than White targets when the participants were undergraduate students. The same simulation used with police showed that the target's race affects the police reaction in some ways, but officers did not generally show a biased pattern of shooting. A majority of police officers see "ambiguous behavior as more violent when the actor is Black rather than White"; thus, a police officer's judgement of the suspect could make the difference in whether force is used. Another study, at Washington State University, used realistic police simulators of different scenarios in which a police officer might use deadly force. It concluded that unarmed white suspects were three times more likely to be shot than unarmed black suspects. The study found that "the participants were experiencing a greater threat response when faced with African Americans instead of white or Hispanic suspects" but were still "significantly slower to shoot armed black suspects than armed white suspects, and significantly less likely to mistakenly shoot unarmed black suspects than unarmed white suspects." The study concluded that the results could be because officers were more concerned about using deadly force against black suspects for fear of how it would be perceived. A 1977 analysis of reports from major metropolitan departments found officers fired more shots at white suspects than at black suspects, possibly because of "public sentiment concerning treatment of blacks."
A study of 34,794 federal offenders took into account the race, risk assessment, and future arrests of all members of the sample. Use of the Post Conviction Risk Assessment (PCRA), which proved highly accurate in predicting whether whites and blacks would return to prison after release, showed that recidivism correlates less with race and more with criminal history.
Other studies suggest that recidivism rates as related to race vary based on state. For example, the Alabama Department of Corrections performed a study where they tracked 2003 releases for 3 years. In that time span, 29% of both African American and white males that were released returned to prison, 20% of African American females that were released returned to prison, and 24% of white females returned to prison. The Florida Department of Corrections performed a similar study; they tracked 2001 releases for 5 years. They found that 45% of African American males were reincarcerated and 28% of non-African American males were reincarcerated.
Two main studies have analyzed the issue of habitual offenders with regard to race, both conducted primarily by Western Michigan University professor Charles Crawford. Published in 1998 and 2000, the studies focused on habitual offenders in the state of Florida. Crawford found that black defendants in Florida were significantly more likely than whites to be sentenced as habitual offenders, and that this effect was significantly larger for drug offenses and property crimes of which whites are often the victims.
A 2008 study updated and evaluated Crawford's work, examining both individual-level and county-level variables. It affirmed that sentencing policies are becoming harsher and that habitual offender statutes have become another tool used to incarcerate minorities at a higher rate than their white counterparts. The study concluded that habitual offender statutes should only continue to be used if they are applied in an unbiased way that disregards race entirely.
Blacks had a higher chance of going to prison, especially those who had dropped out of high school: a Black male high school dropout had an over 50% chance of being incarcerated in his lifetime, compared to an 11% chance for White male dropouts. Socio-economic, geographic, and educational disparities, as well as alleged unequal treatment in the criminal justice system, contributed to this gap in incarceration rates by race.
Students who fail to achieve literacy (reading at "grade level") by the third or fourth grade are twenty times more likely to be incarcerated later in life than other students. Some states use this measurement to predict how much prison space they will require in the future. The disparity appears to be a poverty issue rather than a race issue.
According to Dorothy Roberts, the current prison system serves as a punitive system in which mass incarceration has become the response to problems in society. Field studies of prison conditions describe behavioral changes produced by prolonged incarceration and conclude that imprisonment undermines the social life of inmates by exacerbating criminality or impairing their capacity for normal social interaction. Roberts further argues that this racial disparity in imprisonment, particularly among African Americans, subjects them to political subordination by destroying their positive connection with society. She also argues that institutional factors, such as the prison-industrial complex itself, become so enmeshed in everyday life that prisons no longer function simply as "law enforcement" systems. It has also been argued that Latinos have been overlooked in the debate over the criminal justice system, and that differences in the way the system treats blacks and whites decrease its legitimacy, which in turn increases criminal behavior and leads to further racial disparities in interactions with the criminal justice system.
Crime in poorer urban neighborhoods is linked to increased rates of mass incarceration, as job opportunities decline and people turn to crime for survival. Crime among low-education men is often linked to the economic decline among unskilled workers, and these economic problems also affect reentry into society after incarceration. Data from the Washington State Department of Corrections and Employment Insurance records show that "the wages of black ex-inmates grow about 21 percent more slowly each quarter after release than the wages of white ex-inmates". A conviction leads to a range of social, political, and economic disadvantages for felons and has been dubbed the "new civil death" (Chin 2012, 179). In the aggregate, these obstacles make it difficult for released inmates to transition back to society successfully, which in turn makes it difficult for their communities to achieve social stability.
On average, black ex-inmates earn 10 percent less than white ex-inmates after incarceration.
Problems resulting from mass incarceration extend beyond economic and political aspects to reach community life as well. According to the U.S. Department of Justice, 46% of black female inmates had grown up in a home with only their mothers. A study by Bresler and Lewis found that incarcerated African American women were more likely to have been raised in a single female-headed household, while incarcerated white women were more likely to have been raised in a two-parent household. Black women's lives are often shaped by the prison system because they have intersecting familial and community obligations; the "increased incarceration of black men and the sex ratio imbalance it induces shape the behavior of young black women".
Education, fertility, and employment for black women are affected by increased mass incarceration. Mechoulan's data show that black women's employment rates rose as a result of increased education: higher rates of black male incarceration lowered the odds of nonmarital teenage motherhood and raised black women's likelihood of obtaining an educational degree, resulting in earlier employment. Whether incarcerated themselves or related to someone who was incarcerated, women are often pressed to conform to stereotypes of how they are supposed to behave while being isolated from society at the same time.
Furthermore, this system can disintegrate familial life and structure. Black and Latino youth are more likely to be incarcerated after coming into contact with the American juvenile justice system. According to a study by Victor Rios, 75% of prison inmates in the United States are Black and Latino people between the ages of 20 and 39. Rios further argued that societal institutions such as schools, families, and community centers can initiate youth into this "system of criminalization" from an early age; although these institutions are traditionally set up to protect young people, they contribute to mass incarceration by mimicking the criminal justice system.
From a different perspective, parents in prison face further moral and emotional dilemmas because they are separated from their children. Both black and white women face difficulty deciding where to place their children while incarcerated and how to maintain contact with them. According to the study by Bresler and Lewis, black women are more likely to leave their children with related kin, whereas white women's children are more likely to be placed in foster care. A report by the Bureau of Justice Statistics revealed that in 1999, seven percent of black children had a parent in prison, making them nine times more likely than white children to have an incarcerated parent.
Having parents in prison can have adverse psychological effects as children are deprived of parental guidance, emotional support, and financial help. Because many prisons are located in remote areas, incarcerated parents face physical barriers in seeing their children and vice versa.
Societal influences, such as low education among African American men, can also lead to higher rates of incarceration. Imprisonment has become "disproportionately widespread among low-education black men", and the penal system has evolved into a "new feature of American race and class inequality". Research by scholars Pettit and Western has shown that incarceration rates for African Americans are "about eight times higher than those for whites", and that prison inmates have less than "12 years of completed schooling" on average.
These factors all affect released prisoners who try to reintegrate into society. According to a national study, almost 7 in 10 released prisoners are rearrested within three years of release. Many have difficulty transitioning back into their communities from state and federal prisons because the social environment of peers, family, and community, along with state-level policies, all shape prison reentry, the process of leaving prison or jail and returning to society. Men released from prison will most likely return to their same communities, putting additional strain on already scarce resources as they attempt to garner the assistance they need to reenter society successfully. Because these men tend to come from disadvantaged communities that lack resources, the cycle perpetuates itself.
A major challenge for prisoners re-entering society is obtaining employment, especially for individuals with a felony on their record. A study using U.S. Census occupational data from New Jersey and Minnesota in 2000 found that "individuals with felon status would have been disqualified from approximately one out of every 6.5 occupations in New Jersey and one out of every 8.5 positions in Minnesota". It has also been argued that the combination of race and criminal status diminishes the positive aspects of an individual and intensifies stereotypes: from the viewpoint of employers, racial stereotypes are confirmed, which encourages discrimination in the hiring process. Because African Americans and Hispanics are disproportionately affected by felon status, these additional limitations on employment opportunity were shown to exacerbate racial disparities in the labor market.
There have been minor adjustments at the state level to reduce the incarceration rate in the United States. These include California's Proposition 47 in 2014, which reclassified certain nonviolent property and drug felonies as misdemeanors, and the 2009 reform of New York's Rockefeller drug laws, which had imposed severe minimum sentences for minor drug offenses. According to The Sentencing Project, other changes could lower the incarceration rate further, including reducing the length of some sentences, making resources such as treatment for substance abuse available to all, and investing in organizations that promote strong youth development.
Nonwhite youths referred for delinquent acts are more likely than comparable white youths to be recommended for petition to court, to be held in pre-adjudicatory detention, to be formally processed in juvenile court, and to receive the most formal or the most restrictive judicial dispositions.