Your third grader will continue to build on basic math skills such as addition and subtraction, telling time, and counting money, but will also start to learn how to multiply, measure, and understand fractions. You can help your child master these skills simply by playing games in and around the house. Leave the flashcards and worksheets to the teacher; if you want your child to love numbers, show your child how math is part of everyday life and he'll be eager to learn more.
Here are 12 fun ways to introduce your child to the world of math. Because children learn in different ways, we've arranged these activities by learning style.
For the visual learner
Estimate the weight of a household object. Ask your child to guess the weight of the family cat, a dictionary, or a glass of water. Then use a scale to find out the real weight. Have him estimate his own weight, and that of other family members. Were his estimates on target?
Buy your child a watch with an hour hand and a minute hand. Periodically ask him to tell you what time it is. Ask questions like: "If Arthur comes on at 4 p.m., how many more minutes do you have to wait?" "It takes me 15 minutes to drive to the store. Do I have time to get there before it closes at 5 p.m.?"
Use M&M's to teach fractions. Have your child count the M&M pieces in a bag. Then sort them by color. Count the number of green M&M's to find out what fraction of all of the candy is that color. Do the same with the other colors. Eat the results.
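If you like, the same count-and-sort arithmetic can be checked on a computer. This is just an illustrative sketch; the bag contents below are invented, not a real candy count:

```python
from collections import Counter
from fractions import Fraction

# A pretend bag of M&M's (the colors and counts are made up for illustration).
bag = ["green"] * 6 + ["red"] * 10 + ["blue"] * 8

counts = Counter(bag)   # how many of each color
total = len(bag)        # 24 pieces in all

for color, count in sorted(counts.items()):
    # Fraction reduces automatically, e.g. 6/24 becomes 1/4.
    print(color, Fraction(count, total))
# blue 1/3
# green 1/4
# red 5/12
```

The reduced fractions match what a child would find by sorting the real candy.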
Fold a napkin. An idea from the U.S. Department of Education: Fold paper towels or napkins into large and small fractions. Start with halves, then move to quarters, then eighths, and finally 16ths. Use magic markers to label the fractions.
For the physical learner
Play card games. War and Go Fish are classic card games that reinforce basic math concepts such as greater and less than, as well as grouping by category.
Host a book or toy exchange party. Have each child bring along four or five used books or toys to sell; price all the items under one dollar (24 cents, 60 cents, etc.). Give each child one dollar in play money to spend and let them sort through the selection for about 15 minutes. When it's time to pay for the items, help the children count out the money and determine whether they have any left over or have gone over their budget. This activity reinforces making change and money skills.
Measure your family. The National PTA recommends this family activity: Use a tape measure or ruler to record the heights of everyone in your family. Total the inches to see how "tall" you are all together. Try it again with everyone's weight. A good way to practice adding two-digit numbers.
Play board games that use counting and paper money. Games such as Monopoly Junior are aimed at ages 5 through 8 but are still fun for parents or older siblings.
Play with money. This is a family game: The goal is to be the first player to win a set amount of money (75 cents, 50 cents). Roll a pair of dice. Each person gets the number of pennies shown on the dice. As each player gets five pennies, replace them with a nickel. Replace ten pennies with a dime, and so on. The first player to reach the set amount wins. This game reinforces grouping skills, and counting by fives.
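The penny-to-nickel-to-dime swapping in this game is place value in disguise, and the whole game fits in a few lines of code. A minimal sketch, with invented function names and a 50-cent target chosen only for illustration:

```python
import random

def exchange(pennies):
    """Regroup a pile of pennies into dimes, nickels, and leftover pennies."""
    dimes, rest = divmod(pennies, 10)
    nickels, pennies_left = divmod(rest, 5)
    return dimes, nickels, pennies_left

def play(target_cents=50, seed=1):
    """Simulate one player rolling two dice until reaching the target amount."""
    rng = random.Random(seed)
    total = 0
    rolls = 0
    while total < target_cents:
        total += rng.randint(1, 6) + rng.randint(1, 6)  # pennies won this turn
        rolls += 1
    return rolls, total, exchange(total)

print(exchange(23))  # (2, 0, 3): two dimes, no nickels, three pennies
```

`divmod` does exactly what the players do by hand: trade every ten pennies for a dime and every remaining five for a nickel.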
Plan and shop for a meal. Give your child the grocery circular from the newspaper. Give him a budget ($30, $50) and have him plan a dinner for your family. If he goes over the budget, what can he subtract? If he has money left over, what else can he buy? Then go to the store and shop for the items together. Did his estimates match the real total?
For the auditory learner
Play a guessing game. A good one for a car trip: Have your child think of a number between one and 100. Try to guess the number by asking questions such as "Is it greater than 50?" "Is it between 35 and 55?" Then switch roles and have your child do the guessing.
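The strategy behind questions like "Is it greater than 50?" is binary search: each answer cuts the remaining range in half, so any number from 1 to 100 can be found in at most seven questions. A small sketch (the function name is invented for illustration):

```python
def guess_number(secret, low=1, high=100):
    """Find a number using only 'Is it greater than X?' questions (binary search)."""
    questions = 0
    while low < high:
        mid = (low + high) // 2
        questions += 1
        if secret > mid:    # answer "yes" -> the number is in the upper half
            low = mid + 1
        else:               # answer "no" -> it is mid or below
            high = mid
    return low, questions

print(guess_number(37))  # (37, 7): found in seven questions
```

Seven questions suffice because 2^7 = 128 exceeds 100, a nice talking point for an older child.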
Make a recipe with your child. Give your child the measuring cups, measuring spoons, and bowls and read him the directions as he does the work. An easy — and delicious — way to introduce concepts such as volume, weight, and fractions.
|
The spiral galaxy NGC 4921 is currently estimated to be 320 million light-years distant.
This image, taken by the Hubble Space Telescope, is being used to identify key stellar distance markers known as Cepheid variable stars.
The magnificent spiral NGC 4921 has been informally dubbed anemic because of its low rate of star formation and low surface brightness.
Visible in the image are, from the center, a bright nucleus, a bright central bar, a prominent ring of dark dust, blue clusters of recently formed stars, several smaller companion galaxies, unrelated galaxies in the far distant universe, and unrelated stars in our Milky Way Galaxy.
|
Shingles (herpes zoster) is a painful, blistering skin rash. It is caused by the varicella-zoster virus. This is the virus that also causes chickenpox.
After you get chickenpox, the virus remains inactive (becomes dormant) in certain nerves in the body. Shingles occurs after the virus becomes active again in these nerves after many years.
The reason the virus suddenly becomes active again is not clear. Often only one attack occurs.
Shingles can develop in any age group. You are more likely to develop the condition if you are older than 60 or have a weakened immune system.
If an adult or child has direct contact with the shingles rash and did not have chickenpox as a child or the chickenpox vaccine, they can develop chickenpox, not shingles.
The first symptom is usually pain, tingling, or burning that occurs on one side of the body. The pain and burning may be severe and are usually present before any rash appears.
Red patches on the skin, followed by small blisters, form in most people.
Other symptoms may include:
You may also have pain, muscle weakness, and a rash involving different parts of your face if shingles affects a nerve in your face. The symptoms may include:
Your health care provider can make the diagnosis by looking at your skin and asking about your medical history.
Tests are rarely needed, but may include taking a skin sample to see if the skin is infected with the virus.
Blood tests may show an increase in white blood cells and antibodies to the chickenpox virus. But the tests cannot confirm that the rash is due to shingles.
Your health care provider may prescribe a medicine that fights the virus, called an antiviral drug. This drug helps reduce pain, prevent complications, and shorten the course of the disease.
The medicines should be started within 72 hours of when you first feel pain or burning. It is best to start taking them before the blisters appear. The medicines are usually given in pill form. Some people may need to receive the medicine through a vein (by IV).
Strong anti-inflammatory medicines called corticosteroids, such as prednisone, may be used to reduce swelling and pain. These medicines do not work in all patients.
Other medicines may include:
Follow your health care provider's instructions about how to care for yourself at home.
Other measures may include:
Stay away from people while your sores are oozing to avoid infecting those who have never had chickenpox -- especially pregnant women.
Herpes zoster usually clears in 2 to 3 weeks and rarely returns. If the virus affects the nerves that control movement (the motor nerves), you may have temporary or permanent weakness or paralysis.
Sometimes the pain in the area where the shingles occurred may last from months to years. This pain is called postherpetic neuralgia.
It occurs when the nerves have been damaged after an outbreak of shingles. Pain ranges from mild to very severe. Postherpetic neuralgia is more likely to occur in persons over age 60.
Complications may include:
Call your health care provider if you have symptoms of shingles, particularly if you have a weakened immune system or if your symptoms persist or worsen. Shingles that affects the eye may lead to permanent blindness if you do not receive emergency medical care.
Do not touch the rash and blisters on persons with shingles or chickenpox if you have never had chickenpox or the chickenpox vaccine.
A herpes zoster vaccine is available. It is different from the chickenpox vaccine. Older adults who receive the herpes zoster vaccine are less likely to have complications from shingles.
Updated by: Jatin M. Vyas, MD, PhD, Assistant Professor in Medicine, Harvard Medical School; Assistant in Medicine, Division of Infectious Disease, Department of Medicine, Massachusetts General Hospital. Also reviewed by David Zieve, MD, MHA, Bethanne Black, and the A.D.A.M. Editorial team.
The information provided herein should not be used during any medical emergency or for the diagnosis or treatment of any medical condition. A licensed physician should be consulted for diagnosis and treatment of any and all medical conditions. Call 911 for all medical emergencies. Links to other sites are provided for information only -- they do not constitute endorsements of those other sites. Copyright 1997-2014, A.D.A.M., Inc. Duplication for commercial use must be authorized in writing by ADAM Health Solutions.
|
Civil Rights Movement Teacher Resources
Find Civil Rights Movement educational ideas and activities
Montgomery Bus Boycott
It's December 1, 1955, and a tired African American woman refuses to give up her seat for a white man on a bus in Montgomery. This woman is Rosa Parks. While she wasn't the first person to stay seated despite the current laws, her arrest...
3 mins 9th - 12th Social Studies & History
Individuals Making a Difference
The focus of this lesson, the third in a five-lesson unit study of human rights, is on individuals who made a difference. Billy Bowlegs, Dr. Sun Yat-sen, Fannie Lou Hamer, Michi Weglyn, and Yuri Kochiyama are some of the people class members...
9th - 12th Social Studies & History CCSS: Adaptable
Strength and Voices: Engaging Scenario – Lights, Camera, Action! Day 1
This task is somewhat incomplete, yet if matched with the larger unit, it could be a helpful way to start off a great project. Learners work in groups to create and research a film documentary on civil rights. This project is based on...
7th English Language Arts CCSS: Designed
Centers of the Storm: The Lyceum and the Circle at the University of Mississippi
Greek Revival architecture and the Civil Rights Movement? Sure! Examine how the Lyceum and Circle, two historic buildings located on the campus of the University of Mississippi, relate to integration and the 1962 riot on the university...
9th - 12th English Language Arts CCSS: Designed
Perspectives and Point of View: Engaging Scenario Unit 1
In groups, middle schoolers work together to analyze texts and authors' perspectives across media types. They read the Langston Hughes poem "I Look at the World" and speeches from JFK and MLK. They create a multi-media presentation...
6th English Language Arts CCSS: Designed
Freedom Songs of the Civil Rights Movement
Fifth graders analyze freedom songs sung during the Civil Rights Movement. In this historical music lesson plan, 5th graders sing and understand the musical concepts within freedom songs. Students also analyze the songs' meanings and...
5th Visual & Performing Arts
Leaders in the Civil Rights Movement
Students investigate who Rosa Parks and Martin Luther King Jr. were. They study the impact of the Civil Rights Movement and the Montgomery Bus Boycott. Each student uses pictures and/or words to show how they can make the world a better place.
9th - 12th Social Studies & History
Making More Places at the Table: The American Civil Rights Movement of the 50's and 60's
Eleventh graders examine the biography of Henry B. Gonzalez. They examine primary source documents from Congressman Gonzalez's personal papers related to his contributions to the Civil Rights Movement.
11th Social Studies & History
Civil Rights Movement in America
Eleventh graders explore the Civil Rights movement as a culmination of history and cultural perspectives developed from the Slave Trade and Reconstruction. They identify leading persons and organizations and their personal philosophy to...
11th Social Studies & History
Let Freedom Ring: The Life & Legacy of Martin Luther King, Jr.
Students use text and photos to visualize the delivery of Dr. Martin Luther King, Jr.'s historic "I Have A Dream" speech. They analyze Dr. King's speech for examples of imagery and allusion and create original poetry and illustrations...
3rd - 5th Social Studies & History
Martin Luther King Jr. and Nonviolence
Using the book Martin's Big Words, learners will discover the life of Dr. Martin Luther King Jr. Vocabulary is identified throughout the story using several of his famous protest speeches as examples. Class discussions on racism, during...
K - 5th Social Studies & History
Martin Luther King Jr's "I Have A Dream" Speech
Invite your class to investigate racism and civil rights by analyzing the great Dr. Martin Luther King's speech. Your learners will read the words from the "I Have a Dream" speech and analyze the political and racial overtones. They will...
6th - 8th Visual & Performing Arts
|
A natural disaster is a major adverse event resulting from natural processes of the Earth; examples include floods, volcanic eruptions, earthquakes, and tsunamis. A natural disaster can cause loss of life or property damage, and typically leaves some economic damage in its wake, the severity of which depends on the affected population's resilience, or ability to recover.
An adverse event will not rise to the level of a disaster if it occurs in an area without a vulnerable population. In a vulnerable area, however, such as San Francisco, an earthquake can have disastrous consequences and leave lasting damage, requiring years to repair.
In 2012, there were 905 natural disasters worldwide, 93% of which were weather-related disasters. Overall costs were US$170 billion and insured losses $70 billion. 2012 was a moderate year. 45% were meteorological (storms), 36% were hydrological (floods), 12% were climatological (heat waves, cold waves, droughts, wildfires) and 7% were geophysical events (earthquakes and volcanic eruptions). Between 1980 and 2011 geophysical events accounted for 14% of all natural catastrophes.
- 1 Avalanches
- 2 Earthquakes
- 3 Volcanic eruptions
- 4 Hydrological disasters
- 5 Meteorological disasters
- 6 Wildfires
- 7 Health disasters
- 8 Space disasters
- 9 Protection by international law
- 10 See also
- 11 References
- 12 External links
During World War I, an estimated 40,000 to 80,000 soldiers died as a result of avalanches during the mountain campaign in the Alps at the Austrian-Italian front, many of which were caused by artillery fire.
An earthquake is the result of a sudden release of energy in the Earth's crust that creates seismic waves. At the Earth's surface, earthquakes manifest themselves by vibration, shaking and sometimes displacement of the ground. The vibrations may vary in magnitude. Earthquakes are caused mostly by slippage within geological faults, but also by other events such as volcanic activity, landslides, mine blasts, and nuclear tests. The underground point of origin of the earthquake is called the focus. The point directly above the focus on the surface is called the epicenter. Earthquakes by themselves rarely kill people or wildlife. It is usually the secondary events that they trigger, such as building collapse, fires, tsunamis (seismic sea waves) and volcanoes, that are actually the human disaster. Many of these could possibly be avoided by better construction, safety systems, early warning and planning. Some of the most significant earthquakes in recent times include:
- The 2004 Indian Ocean earthquake, the third largest earthquake recorded in history, registering a moment magnitude of 9.1-9.3. The huge tsunamis triggered by this earthquake killed at least 229,000 people.
- The 2011 Tōhoku earthquake and tsunami registered a moment magnitude of 9.0. The earthquake and tsunami killed 15,889 and injured 6,152. 2,609 were still missing as of 2014.
- The 8.8 magnitude February 27, 2010 Chile earthquake and tsunami cost 525 lives.
- The 7.9 magnitude May 12, 2008 Sichuan earthquake in Sichuan Province, China. Death toll at over 61,150 as of May 27, 2008.
- The 7.7 magnitude 2006 Pangandaran earthquake and tsunami.
- The 6.9 magnitude 2005 earthquake in Azad Jammu & Kashmir and KPK province, which killed or injured more than 75,000 people in Pakistan.
Volcanoes can cause widespread destruction and consequent disaster in several ways. First, the eruption itself may cause harm through the explosion of the volcano or falling rock. Second, lava produced during the eruption destroys the buildings and plants it encounters as it flows away from the volcano. Third, volcanic ash (generally meaning the cooled ash) may form a cloud and settle thickly in nearby locations. When mixed with water, it forms a concrete-like material; in sufficient quantity, ash may cause roofs to collapse under its weight, and even small quantities will harm humans if inhaled. Because ash has the consistency of ground glass, it also causes abrasion damage to moving parts such as engines. The main killer of humans in the immediate surroundings of a volcanic eruption is the pyroclastic flow: a cloud of hot volcanic ash that builds up in the air above the volcano and rushes down its slopes when the eruption no longer supports the lifting of the gases. Pompeii is believed to have been destroyed by a pyroclastic flow. A lahar is a volcanic mudflow or landslide. The 1953 Tangiwai disaster was caused by a lahar, as was the 1985 Armero tragedy, in which the town of Armero was buried and an estimated 23,000 people were killed.
A specific type of volcano is the supervolcano. According to the Toba catastrophe theory, a supervolcanic event at Lake Toba 75,000 to 80,000 years ago reduced the human population to 10,000 or even 1,000 breeding pairs, creating a bottleneck in human evolution. It also killed three-quarters of all plant life in the northern hemisphere. The main danger from a supervolcano is the immense cloud of ash, which has a disastrous global effect on climate and temperature for many years.
A hydrological disaster is a violent, sudden, and destructive change either in the quality of Earth's water or in the distribution or movement of water on land, below the surface, or in the atmosphere.
A flood is an overflow of an expanse of water that submerges land. The EU Floods Directive defines a flood as a temporary covering by water of land not normally covered by water. In the sense of "flowing water", the word may also be applied to the inflow of the tide. Flooding may result from the volume of water within a body of water, such as a river or lake, which overflows or breaks levees, with the result that some of the water escapes its usual boundaries. While the size of a lake or other body of water varies with seasonal changes in precipitation and snow melt, a flood is not significant unless the water covers land used by people, such as a village, city or other inhabited area, roads, or expanses of farmland.
Some of the most notable floods include:
- The Johnstown Flood of 1889 where over 2200 people lost their lives when the South Fork Dam holding back Lake Conemaugh broke.
- The Huang He (Yellow River) in China floods particularly often. The Great Flood of 1931 caused between 800,000 and 4,000,000 deaths.
- The Great Flood of 1993 was one of the most costly floods in United States history.
- The North Sea flood of 1953, which killed 2251 people in the Netherlands and eastern England.
- The 1998 Yangtze River Floods, in China, left 14 million people homeless.
- The 2000 Mozambique flood covered much of the country for three weeks, resulting in thousands of deaths, and leaving the country devastated for years afterward.
- The 2005 Mumbai floods which killed 1094 people.
- The 2010 Pakistan floods directly affected about 20 million people, mostly by displacement and destruction of crops, infrastructure, property and livelihood, with a death toll of close to 2,000.
- The 2014 India–Pakistan floods
A limnic eruption occurs when a gas, usually CO2, suddenly erupts from deep lake water, posing the threat of suffocating wildlife, livestock and humans. Such an eruption may also cause tsunamis in the lake as the rising gas displaces water. Scientists believe landslides, volcanic activity, or explosions can trigger such an eruption. To date, only two limnic eruptions have been observed and recorded:
- In 1984, in Cameroon, a limnic eruption in Lake Monoun caused the deaths of 37 nearby residents.
- At nearby Lake Nyos in 1986 a much larger eruption killed between 1,700 and 1,800 people by asphyxiation.
Tsunamis can be caused by undersea earthquakes, such as the 2004 Indian Ocean earthquake, or by landslides, such as the one that occurred at Lituya Bay, Alaska.
- The 2004 Indian Ocean Earthquake created the Boxing Day Tsunami.
- On March 11, 2011, a tsunami occurred near Fukushima, Japan and spread through the Pacific.
Blizzards are severe winter storms characterized by heavy snow and strong winds. When high winds stir up snow that has already fallen, it is known as a ground blizzard. Blizzards can impact local economic activities, especially in regions where snowfall is rare.
Significant blizzards include:
- The Great Blizzard of 1888 in the United States in which many tons of wheat crops were destroyed.
- The 2008 Afghanistan blizzard
- The North American blizzard of 1947
- The 1972 Iran blizzard resulted in approximately 4,000 deaths and lasted for 5 to 7 days.
Cyclone, tropical cyclone, hurricane, and typhoon are different names for the same phenomenon: a cyclonic storm system that forms over the oceans. The deadliest tropical cyclone on record was the 1970 Bhola cyclone; the deadliest Atlantic hurricane was the Great Hurricane of 1780, which devastated Martinique, St. Eustatius and Barbados. Another notable hurricane is Hurricane Katrina, which devastated the Gulf Coast of the United States in 2005.
Extratropical cyclones, sometimes called mid-latitude cyclones, are a group of cyclones defined as synoptic scale low pressure weather systems that occur in the middle latitudes of the Earth (outside the tropics) not having tropical characteristics, and are connected with fronts and horizontal gradients in temperature and dew point otherwise known as "baroclinic zones". As with tropical cyclones, they are known by different names in different regions (Nor'easter, Pacific Northwest windstorms, European windstorm, East Asian-northwest Pacific storms, Sudestada and Australian east coast cyclones). The most intense extratropical cyclones cause widespread disruption and damage to society, such as the storm surge of the North Sea flood of 1953 which killed 2251 people in the Netherlands and eastern England, the Great Storm of 1987 which devastated southern England and France and the Columbus Day Storm of 1962 which struck the Pacific Northwest.
Drought is unusual dryness of soil, resulting in crop failure and shortage of water for other uses, caused by significantly lower rainfall than average over a prolonged period. Hot dry winds, high temperatures and consequent evaporation of moisture from the ground can contribute to conditions of drought.
Well-known historical droughts include:
- The 1900 drought in India, which killed between 250,000 and 3.25 million people.
- The 1921–22 drought in the Soviet Union, in which over 5 million people perished from starvation.
- The 1928–30 drought in Northwest China, which resulted in over 3 million deaths by famine.
- The 1936 and 1941 droughts in Sichuan Province, China, which resulted in 5 million and 2.5 million deaths respectively.
- The 1997–2009 Millennium Drought in Australia led to a water supply crisis across much of the country. As a result, many desalination plants were built for the first time (see list).
- In 2006, Sichuan Province China experienced its worst drought in modern times with nearly 8 million people and over 7 million cattle facing water shortages.
- A 12-year drought devastating southwest Western Australia, southeast South Australia, Victoria and northern Tasmania was described as "very severe and without historical precedent".
- In 2011, the State of Texas lived under a drought emergency declaration for the entire calendar year. The drought caused the Bastrop fires.
Hailstorms are falls of precipitation that arrives as ice rather than melting before it hits the ground. A particularly damaging hailstorm hit Munich, Germany, on July 12, 1984, causing about $2 billion in insurance claims.
A heat wave is a period of unusually and excessively hot weather. The worst heat wave in recent history was the European Heat Wave of 2003.
A summer heat wave in Victoria, Australia, created conditions which fuelled the massive bushfires in 2009. Melbourne experienced three days in a row of temperatures exceeding 40°C (104°F) with some regional areas sweltering through much higher temperatures. The bushfires, collectively known as "Black Saturday", were partly the act of arsonists.
The 2010 Northern Hemisphere summer brought severe heat waves that killed over 2,000 people, as well as hundreds of wildfires that caused widespread air pollution and burned thousands of square miles of forest.
Heat waves can occur in the ocean as well as on land, with significant effects (often on a large scale), e.g. coral bleaching.
A tornado is a violent, dangerous, rotating column of air that is in contact with both the surface of the earth and a cumulonimbus cloud or, in rare cases, the base of a cumulus cloud. It is also referred to as a twister or a cyclone, although the word cyclone is used in meteorology in a wider sense, to refer to any closed low pressure circulation. Tornadoes come in many shapes and sizes, but are typically in the form of a visible condensation funnel, whose narrow end touches the earth and is often encircled by a cloud of debris and dust. Most tornadoes have wind speeds less than 110 miles per hour (177 km/h), are approximately 250 feet (80 m) across, and travel a few miles (several kilometers) before dissipating. The most extreme tornadoes can attain wind speeds of more than 300 mph (480 km/h), stretch more than two miles (3 km) across, and stay on the ground for dozens of miles (perhaps more than 100 km).
Well-known historical tornadoes include:
- The Tri-State Tornado of 1925, which killed over 600 people in the United States;
- The Daulatpur-Saturia Tornado of 1989, which killed roughly 1,300 people in Bangladesh.
Wildfires are large fires which often start in wildland areas. Common causes include lightning and drought but wildfires may also be started by human negligence or arson. They can spread to populated areas and can thus be a threat to humans and property, as well as wildlife.
An epidemic is an outbreak of a contagious disease that spreads through a human population. A pandemic is an epidemic whose spread is global. There have been many epidemics throughout history, such as the Black Death. In the last hundred years, significant pandemics include:
- The 1918 Spanish flu pandemic, killing an estimated 50 million people worldwide
- The 1957–58 Asian flu pandemic, which killed an estimated 1 million people
- The 1968–69 Hong Kong flu pandemic
- The 2002-3 SARS pandemic
- The AIDS pandemic, beginning in 1959
- The H1N1 Influenza (Swine Flu) Pandemic 2009–2010
Other diseases that spread more slowly, but are still considered to be global health emergencies by the WHO, include:
- XDR TB, a strain of tuberculosis that is extensively resistant to drug treatments
- Malaria, which kills an estimated 1.6 million people each year
- Ebola virus disease, which has claimed hundreds of victims in Africa in several outbreaks
Asteroid impacts have led to several major extinction events, including one that created the Chicxulub crater about 66 million years ago and is associated with the demise of the dinosaurs. Scientists estimate that the likelihood of death for a living human from a global impact event is comparable to that of death from an airliner crash. One notable impact event in modern times was the Tunguska event in June 1908.
A solar flare is a phenomenon where the sun suddenly releases a great amount of solar radiation, much more than normal. Some known solar flares include:
- An X20 event on August 16, 1989
- A similar flare on April 2, 2001
- The most powerful flare ever recorded, on November 4, 2003, estimated at between X40 and X45
- The most powerful flare in the past 500 years is believed to have occurred in September 1859
Protection by international law
International law, for example the Geneva Conventions, defines the role of the International Red Cross and Red Crescent Movement. The Convention on the Rights of Persons with Disabilities requires that "States shall take, in accordance with their obligations under international law, including international humanitarian law and international human rights law, all necessary measures to ensure the protection and safety of persons with disabilities in situations of risk, including the occurrence of natural disaster." Further, the United Nations Office for the Coordination of Humanitarian Affairs was formed by General Assembly Resolution 44/182. People displaced by natural disasters are currently protected under international law (Guiding Principles on Internal Displacement; Kampala Convention of 2009).
- Act of God
- Effects of climate change on humans
- Emergency management
- Environmental disaster
- Environmental emergency
- Gamma ray burst
- List of countries by natural disaster risk
- List of natural disasters by death toll
- Property insurance
- World Conference on Disaster Reduction
- G. Bankoff, G. Frerks, D. Hilhorst (eds.) (2003). Mapping Vulnerability: Disasters, Development and People. ISBN 1-85383-964-7.
- Luis Flores Ballesteros. "What determines a disaster?" 54 Pesos, 11 Sep 2008. http://54pesos.org/2008/09/11/what-determines-a-disaster/
- D. Alexander (2002). Principles of Emergency Planning and Management. Harpenden: Terra Publishing. ISBN 1-903544-10-6.
- B. Wisner, P. Blaikie, T. Cannon, and I. Davis (2004). At Risk - Natural hazards, people's vulnerability and disasters. Wiltshire: Routledge. ISBN 0-415-25216-4.
- Natural Catastrophes in 2012 Dominated by U.S. Weather Extremes Worldwatch Institute May 29, 2013
- Lee Davis (2008). "Natural Disasters". Infobase Publishing. p.7. ISBN 0-8160-7000-8
- "USGS Earthquake Details". United States Geological Survey. http://earthquake.usgs.gov/earthquakes/eqinthenews/2010/us2010tfan/. Retrieved February 27, 2010.
- Gibbons, Ann (19 January 2010). "Human Ancestors Were an Endangered Species". ScienceNow.
- MSN Encarta Dictionary. Flood. Retrieved on 2006-12-28. Archived 2009-10-31.
- Directive 2007/60/EC Chapter 1 Article2
- Glossary of Meteorology (June 2000). Flood. Retrieved on 2009-01-09.
- Wurman, Joshua (2008-08-29). "Doppler On Wheels". Center for Severe Weather Research. Retrieved 2009-12-13.
- "Hallam Nebraska Tornado". National Weather Service. National Oceanic and Atmospheric Administration. 2005-10-02. Retrieved 2009-11-15.
- Roger Edwards (2006-04-04). "The Online Tornado FAQ". National Weather Service. National Oceanic and Atmospheric Administration. Retrieved 2006-09-08.
- "Sun Unleashes Record Superflare, Earth Dodges Solar Bullet". ScienceDaily. April 4, 2011. Retrieved 2011-08-27.
- "Biggest Solar Flare ever recorded". National Association for Scientific and Cultural Appreciation. 2004. Retrieved 2011-08-27.
- "A Super Solar Flare". NASA. May 6, 2008. Retrieved 2011-08-27.
- Article 11 of the Convention on the Rights of Persons with Disabilities
- Terminski, Bogumil, Towards Recognition and Protection of Forced Environmental Migrants in the Public International Law: Refugee or IDPs Umbrella (December 1, 2011). Policy Studies Organization (PSO) Summit, December 2011.
|Wikiquote has quotations related to: Natural disasters|
- "Natural Disasters News". Ubyrisk. Worldwide news site focused on natural disasters, mitigation and climate changes news
- "Global Risk Identification Program (GRIP)". GRIP.
- "BioCaster Global Health Monitor". National Institute of Informatics (NII).
- "World Bank's Hazard Risk Management". World Bank.
- "Disaster News Network". Retrieved 2006-11-05. US news site focused on disaster-related news.
- "EM-DAT International Disaster Database". Retrieved 2006-11-05. Includes country profiles, disaster profiles and a disaster list.
- "Global Disaster Alert and Coordination System". European Commission and United Nations website initiative.
- "Natural Disaster and Extreme Weather. Searchable Information Center". Ebrary.
|
How 3D works
How We See
The fact that our left eye and right eye see objects from different angles is the basis of 3D photography. If you try looking at an object through one eye and then the other, you will notice that it slightly changes position. However, with both eyes open, the two images that each eye observes separately are fused together as one in our brain. It is the fusion of these two images that creates normal binocular (3D) sight and allows our brain to understand depth and distance.
To capture images in 3D, two camera lenses are used in place of our eyes, set about 2½ inches apart, roughly the same distance as between your own eyes (called the interocular or interaxial distance). Each lens captures its image onto a separate piece of film. To review the images in 3D, a stereoscopic viewer is needed. It is made of two eyepieces, each of which feeds only one of the images to the corresponding eye (the right image to the right eye and the left image to the left eye), tricking your brain into fusing the images into a single 3D image, just as it would with normal vision.
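The geometry behind this can be sketched in a few lines of Python (an illustration added here, not part of the original page; the 50 mm focal length is an arbitrary assumption). In a simple pinhole stereo model, a point at depth d in front of two parallel lenses separated by baseline b shifts horizontally between the two images by focal_length × b / d, so nearer objects shift more:

```python
# Sketch of the stereo geometry described above (simple pinhole-camera model).
# For two parallel lenses separated by a baseline b, a point at depth d
# appears shifted between the two captured images by a horizontal "disparity":
#     disparity = focal_length * baseline / depth
# Nearer objects shift more, which is the depth cue the brain fuses into 3D.

def disparity(focal_length_mm: float, baseline_mm: float, depth_mm: float) -> float:
    """Horizontal image shift (in mm on the film plane) of a point at a given depth."""
    return focal_length_mm * baseline_mm / depth_mm

# Human-like interaxial distance: about 2.5 inches, i.e. 63.5 mm
BASELINE_MM = 63.5

near = disparity(50.0, BASELINE_MM, 1_000.0)    # object 1 m away
far = disparity(50.0, BASELINE_MM, 10_000.0)    # object 10 m away
assert near > far  # closer objects produce larger disparity
```

This larger-shift-when-closer relationship is exactly why objects near the camera "pop out" more strongly in a 3D film.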
To project a 3D film, two individual images representing the perspectives of the left and right eyes are simultaneously projected on screen. Without special glasses, it will seem like you are seeing double, because you are in fact seeing two separate images. Fortunately, the 3D glasses correct this problem. Each lens of the 3D glasses has a special filter (either red and cyan, as in the old-style glasses, or the more modern polarized lenses) that blocks out the opposing image, allowing each eye to see only one image. Your brain perceives the fusion of the two separate images as one three-dimensional image.
The IMAX 3D Experience
In IMAX, to recreate the 3D effect on screen, we project two separate films through the same projector at the same time, one for the right eye and one for the left. The film is projected through a set of passive linear polarizing lenses that match the lenses in the special glasses the audience wears. The polarized lenses separate the images, making sure that your right eye sees only the right film and your left eye only the left film. When this information passes from your eyes to your brain, it fuses the two together to create the world's best 3D experience.
Want to book tickets? Easy!
Just come down and rock up to
the front door, or call 0141 420 5000.
See you soon.
|
Common Core Standards: ELA
- The Standard
- Teach With Shmoop
- Sample Assignments
- Aligned Resources
RL.9-10.9. Analyze how an author draws on and transforms source material in a specific work (e.g., how Shakespeare treats a theme or topic from Ovid or the Bible or how a later author draws on a play by Shakespeare).
Just as there are no new literary devices or ways to arrange a story (see Question 5), there are no new stories. Yes, that’s right—we just said that. There weren’t even any new stories four hundred years ago, when Shakespeare was repackaging used goods for the amusement of King James’s court. The good news is that the “old” stories provide endless ways to rearrange their parts, plots, and themes so as to create new work. This Standard takes a closer look at how borrowing turns old news into new art.
Teach With Shmoop
Tag! You're it.
The links in this section will take you straight to the standard-aligned assignments tagged in Shmoop's teaching guides.
That's right, we've done the work. You just do the clickin...
Using this Standard
- 1984 Teacher Pass
- A Raisin in the Sun Teacher Pass
- A Rose For Emily Teacher Pass
- Adventures of Huckleberry Finn Teacher Pass
- Animal Farm Teacher Pass
- Antigone Teacher Pass
- Beowulf Teacher Pass
- Brave New World Teacher Pass
- Death of a Salesman Teacher Pass
- Fahrenheit 451 Teacher Pass
- Fences Teacher Pass
- Frankenstein Teacher Pass
- Grapes Of Wrath Teacher Pass
- Great Expectations Teacher Pass
- Hamlet Teacher Pass
- Heart of Darkness Teacher Pass
- Julius Caesar Teacher Pass
- King Lear Teacher Pass
- Macbeth Teacher Pass
- Moby Dick Teacher Pass
- Narrative of Frederick Douglass Teacher Pass
- Oedipus the King Teacher Pass
- Of Mice and Men Teacher Pass
- One Flew Over the Cuckoo's Nest Teacher Pass
- Romeo and Juliet Teacher Pass
- The Aeneid Teacher Pass
- As I Lay Dying Teacher Pass
- The Bluest Eye Teacher Pass
- The Canterbury Tales General Prologue Teacher Pass
- The Canterbury Tales: The Miller's Tale Teacher Pass
- The Canterbury Tales: The Wife of Bath's Prologue Teacher Pass
- The Cask of Amontillado Teacher Pass
- Catch-22 Teacher Pass
- The Catcher in the Rye Teacher Pass
- The Crucible Teacher Pass
- The Great Gatsby Teacher Pass
- The House on Mango Street Teacher Pass
- The Iliad Teacher Pass
- The Lottery Teacher Pass
- The Metamorphosis Teacher Pass
- The Odyssey Teacher Pass
- The Old Man and the Sea Teacher Pass
- Their Eyes Were Watching God Teacher Pass
- Things Fall Apart Teacher Pass
- To Kill a Mockingbird Teacher Pass
- Twilight Teacher Pass
- Wide Sargasso Sea Teacher Pass
- Wuthering Heights Teacher Pass
Sample Questions for Use in Class
1. Shakespeare and Stoppard: Different Plays, Same Body Count
Tom Stoppard’s Rosencrantz and Guildenstern Are Dead is based on Shakespeare’s Hamlet in both obvious and not-so-obvious ways. Obviously, Stoppard borrows two of Shakespeare’s minor characters and gives them top billing. But the two plays also have similar themes – both dwell on the futility of human action and the inevitability of death. Have students compare the two passages below (or others from the plays, if you so choose), discussing how each deals with the ideas of death and action/inaction:
To be, or not to be: that is the question:
Whether 'tis nobler in the mind to suffer
The slings and arrows of outrageous fortune,
Or to take arms against a sea of troubles,
And by opposing end them? To die: to sleep;
No more; and by a sleep to say we end
The heart-ache and the thousand natural shocks
That flesh is heir to, 'tis a consummation
Devoutly to be wish'd. To die, to sleep;
To sleep: perchance to dream: ay, there's the rub;
For in that sleep of death what dreams may come
When we have shuffled off this mortal coil,
Must give us pause: there's the respect
That makes calamity of so long life;
For who would bear the whips and scorns of time,
The oppressor's wrong, the proud man's contumely,
The pangs of despised love, the law's delay,
The insolence of office and the spurns
That patient merit of the unworthy takes,
When he himself might his quietus make
With a bare bodkin? who would fardels bear,
To grunt and sweat under a weary life,
But that the dread of something after death,
The undiscover'd country from whose bourn
No traveller returns, puzzles the will
And makes us rather bear those ills we have
Than fly to others that we know not of?
Thus conscience does make cowards of us all;
And thus the native hue of resolution
Is sicklied o'er with the pale cast of thought,
And enterprises of great pith and moment
With this regard their currents turn awry,
And lose the name of action.
Rosencrantz: Do you ever think of yourself as actually dead, lying in a box with the lid on it? Nor do I really. Silly to be depressed by it. I mean, one thinks of it like being alive in a box. One keeps forgetting to take into account that one is dead. Which should make all the difference. Shouldn't it? I mean, you’d never know you were in a box would you? It would be just like you were asleep in a box. Not that I’d like to sleep in a box, mind you. Not without any air. You'd wake up dead for a start and then where would you be? In a box. That's the bit I don't like, frankly. That’s why I don’t think of it. Because you'd be helpless wouldn't you? Stuffed in a box like that. I mean, you'd be in there forever. Even taking into account the fact that you're dead. It isn't a pleasant thought. Especially if you're dead, really. Ask yourself: if I asked you straight off I'm going to stuff you in this box now – would you rather be alive or dead? Naturally you’d prefer to be alive. Life in a box is better than no life at all. I expect. You'd have a chance at least. You could lie there thinking, well, at least I’m not dead. In a minute, somebody’s going to bang on the lid and tell me to come out. (knocks) "Hey you! What's your name? Come out of there!"
2. Unintentional Borrowing
Even though there are no new stories, storytellers can end up borrowing from previous stories without knowing it because the sheer number of tales is so vast that no one can possibly be expected to know them all. Have students compare and contrast the two summaries below. What’s the same, and what’s different? How do these similarities and differences shed new light on both the old tale and the new one?
8 C.E.: Philomela Weaves a Tale
In Metamorphoses, the Roman writer Ovid tells the tale of Tereus and Philomela. Tereus is a strapping young soldier married to Philomela’s sister, Procne - but he’s still got the hots for Philomela. When Philomela tells him to get lost, Tereus assaults her, then cuts out her tongue so she can’t tell her sister what he did. Undaunted, Philomela weaves a tapestry showing the assault, then presents it to her sister. (We’re never told what Procne thinks of this fabulous “gift”.)
2008 C.E.: A Story in Pictures
In 2008, police in Los Angeles, California, arrested a 24-year-old man on suspicion of drug charges. While they were taking his booking photos, they noticed that he had an elaborate scene tattooed across his chest and shoulders. On closer inspection, the police discovered that it was a scene that exactly matched the scene of an unsolved liquor store murder that had happened a few years before - right down to the name of the liquor store, which appeared in the tattooed version. The tattoo’s owner later confessed that he had committed the murder, then had the event made into a tattoo. Needless to say, no one but the man’s prison mates are likely to see him show off his story for a long time.
Quiz 1 Questions: Here's an example of a quiz that could be used to test this standard.
Quiz 2 Questions: Here's an example of a quiz that could be used to test this standard.
Questions 1-10 are based on the following information:
One classic Greek myth is the story of Pygmalion. Pygmalion was a sculptor who didn’t love much of anything except carving things out of stone all day long. That is, he didn’t love much of anything until he started carving a lovely lady out of marble, whom he named Galatea. The more he carved, the more he fell in love with the lady he was carving, until the statue was finally finished and Pygmalion was so madly in love with it that he stared at it all day long. Taking pity on him lest he waste away to nothing, the gods turned Galatea-the-statue into Galatea-the-woman so that Pygmalion could marry her and maybe even start eating and sleeping again instead of staring at her all the time. (We’re not told how Galatea felt about this arrangement.)
In 1913, George Bernard Shaw wrote Pygmalion, a play based on the myth. In the play, linguist and English gentleman Henry Higgins makes a bet with his friend that he can pick a lower-class flower seller at random off the street, teach her to speak “proper” English, and dupe all his upper-class friends into thinking she’s a duchess instead of a poor person. To do this, Higgins enlists a flower girl named Eliza Doolittle, gives her several speech and etiquette lessons, and then successfully passes her off as an upper-class lady at a local ball, making a rich young man fall in love with her.
Eliza, however, is furious that Higgins has only been “helping” her in order to win a bet and that, now that she has all these upper-class manners, her friends at home laugh at her. They get into a fight and she storms out, only coming back later to tell Higgins off and announce that she’s going to marry the rich young man with the crush on her, which for some reason Higgins thinks is hilarious - probably to conceal the fact that he’s in love with the upper-class lady he “created,” but not the lower-class flower girl she started as.
In 1972, Richard Huggett wrote the play The First Night of Pygmalion. The play focuses on the events backstage at the very first production of Shaw’s play. The three main characters are Shaw, Mrs. Campbell (who plays Eliza) and Herbert Beerbohm Tree (who plays Higgins). The play is mostly about the fight the three have over what the characters would and wouldn’t say on stage, including the infamous line “not bloody likely,” which Shaw, Campbell, and Tree each have a passionate reason for leaving in the play or taking out. As far as we know, no one falls in love with anyone, but the play does deal to some extent with Shaw’s belief that he can turn Mrs. Campbell, who has never been a particularly good actress, into one of the best on stage by shaping her to fit Eliza’s role.
- Teaching A Raisin in the Sun: Costume Design
- Teaching A Rose for Emily: Write an Epitaph
- Teaching A Rose for Emily: Put Miss Emily On Trial
- Teaching A Rose for Emily: Dramatizing "A Rose for Emily"
- Teaching The Adventures of Huckleberry Finn: Rollin' on the River: Mapping Huck's Journey
- Teaching The Adventures of Huckleberry Finn: Is Mark Twain the Original Jon Stewart?
- All Quiet on the Western Front: War is Awesome… When it’s Fake!
- All Quiet on the Western Front: Eggnog in a Trench
- An Occurrence at Owl Creek Bridge: Write What You Know
- Teaching Animal Farm: Don't Wanna Be Your Beast of Burden: Animal Farm Music
- Teaching Animal Farm: You Say You Want A (R)evolution?
- Teaching Antigone: The First Three Letters of Funeral
- As I Lay Dying: Dysfunction Junction: Somebody, Help These Bundrens!
- As I Lay Dying: Telling a Story from All Sides: Experimenting with Multiple-Perspective Narration
- Teaching 1984: From Doublethink to Doublespeak
- Teaching 1984: This Is Why I Write
- Teaching 1984: It's Not Over Until the Fat Lady Sings
- A Christmas Carol: From Victorian England to Modern America
- Teaching Twilight: "The Cullen Cars"
- Teaching Twilight: Judging a Book by its Cover
- Teaching Wide Sargasso Sea: Hollywood Needs Your Help! Make a Movie of Wide Sargasso Sea
- Teaching Wuthering Heights: Timing is Everything
- Teaching Wuthering Heights: Isn't It Byronic?
- Teaching Wuthering Heights: Remix Time on the Moors
- Beloved: Back to the Source
- Teaching Beowulf: Speaking Beowulf
- Teaching Beowulf: Wise Guys in Beowulf: Gnomic Verse
- Teaching Brave New World: Aldous Huxley: Oracle or Alarmist?
- Catch-22: Waiting for Yossarian: Bureaucracy in Catch-22 and in Schools
- Catch-22: Oops, I Satirized It Again
- Catch-22: Achilles’ Heel: Antiheroes in Catch-22 and the Iliad
- Teaching Death of a Salesman: It's Just an Expressionism
- Teaching Death of a Salesman: Selling the American Dream
- Teaching Fahrenheit 451: Burn, Baby, Burn: Censorship 101
- Teaching Fahrenheit 451: Internet Censorship
|
To calculate the area of a triangle you need to know its height. If this information is not given to you, you can easily calculate it based on what you do know! This article will teach you two different ways to find the height of a triangle, depending on what information you have been given.
Method 1 of 2: Using Base and Area
(Note: a side length can serve directly as the height only in a right triangle, where the two legs are perpendicular to each other.)
1. Recall the formula for the area of a triangle. The formula for the area of a triangle is A = (1/2)bh.
- A = Area of the triangle
- b = Length of the base of the triangle
- h = Height of the triangle, measured perpendicular to the base
2. Look at your triangle and determine which variables you know. In this case, you already know the area, so assign that value to A. You should also know the value of one side length; assign that value to b. If you do not know both the area and the length of one side, you will need to try a different method.
- Any side of a triangle can be the base, regardless of how the triangle is drawn. To visualize this, just imagine rotating the triangle until the known side length is at the bottom.
- For example, if you know that the area of a triangle is 20, and one side is 4, then: A = 20 and b = 4.
3. Plug your values into the equation A = (1/2)bh and do the math. First multiply the base (b) by 1/2, then divide the area (A) by the product. The resulting value will be the height of your triangle!
- In our example: 20 = 1/2(4)h
- 20 = 2h
- 10 = h
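The arithmetic in Method 1 can be condensed into a short Python function (an illustrative sketch added here; the function name is our own, not from the article):

```python
def height_from_area(area: float, base: float) -> float:
    """Solve A = (1/2) * b * h for h; rearranging gives h = 2A / b."""
    if base <= 0:
        raise ValueError("base must be positive")
    return 2.0 * area / base

# The article's example: area 20, base 4 -> height 10
print(height_from_area(20, 4))  # 10.0
```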
Method 2 of 2: Equilateral Triangle
1. Recall the properties of an equilateral triangle. An equilateral triangle has three equal sides and three equal angles that are each 60 degrees. If you cut an equilateral triangle in half, you will end up with two congruent right triangles.
- In this example, we will be using an equilateral triangle with side lengths of 8.
2. Recall the Pythagorean Theorem. The Pythagorean Theorem states that for any right triangle with legs of length a and b, and hypotenuse of length c: a² + b² = c². We can use this theorem to find the height of our equilateral triangle!
3. Break the equilateral triangle in half, and assign values to variables a, b, and c. The hypotenuse c will be equal to the original side length. Side a will be equal to 1/2 the side length, and side b is the height of the triangle that we need to solve for.
- Using our example equilateral triangle with sides of 8, c = 8 and a = 4.
4. Plug the values into the Pythagorean Theorem and solve for b². First square c and a by multiplying each number by itself. Then subtract a² from c².
- 4² + b² = 8²
- 16 + b² = 64
- b² = 48
5. Find the square root of b² to get the height of your triangle! Use the square root function on your calculator to find Sqrt(48). The answer is the height of your equilateral triangle!
- b = Sqrt(48) ≈ 6.93
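The Pythagorean-theorem calculation for Method 2 can be sketched in Python as well (an illustrative helper added here; the function name is our own):

```python
import math

def equilateral_height(side: float) -> float:
    """Height of an equilateral triangle: split it into two right triangles,
    then apply the Pythagorean theorem, h = sqrt(side^2 - (side/2)^2)."""
    return math.sqrt(side**2 - (side / 2) ** 2)

# The article's example: side 8 -> sqrt(48), about 6.93
print(round(equilateral_height(8), 2))  # 6.93
```

The expression simplifies algebraically to side × sqrt(3) / 2, a handy shortcut for any equilateral triangle.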
|
Earth's companion is so large and fascinating that geologists count the Moon as one of the solar system's "terrestrial planets." In fact, it was probably born from Earth, after a Mars-sized body collided with the proto-Earth, in a collision so violent that the Moon that coalesced from the leftover fragments was entirely (or almost entirely) molten. We can tell this story of Earth and the Moon's creation thanks to our analysis of the rocks returned to Earth by the Apollo astronauts, Luna landers, and chance discoveries of lunar meteorites. New laboratory techniques yield new discoveries every year even though no samples have been collected from the surface of the Moon since 1972.
In the years since the end of the space race between the United States and Russia, many other nations have sent robotic spacecraft to orbit the Moon as a first step in their planetary exploration: Japan, the European Space Agency, India, and China. Likewise, many people see a staging station on the Moon as a necessary first stepping stone toward sending humans on missions to asteroids or Mars. Thanks to the combined data from lunar orbiters from all nations, we know that there is water stored in lunar soil and that there are permanently sunlit peaks at the lunar poles, providing for two basic needs of human settlements: water and power. We can go back to the Moon, but who will make the effort?
Recent Blog Articles About the Moon
Learn about the Planetary Society’s newest project: PlanetVac, with Honeybee Robotics, aims to prototype and test in a huge vacuum chamber a new way to sample planetary surfaces that could be used for sample return or for in situ instruments.
Posted by Emily Lakdawalla on 2013/01/31 02:00 CST
We welcomed Sarah Noble to our weekly Google+ Hangout. Sarah is a lunar geologist and a civil servant working in the Research & Analysis program at NASA Headquarters, and has recently been named Program Scientist for the LADEE lunar mission.
Posted by Emily Lakdawalla on 2003/11/14 12:00 CST
New observations reported this week in the journal Nature have cast doubt on the theory that thick deposits of ground ice lie conveniently close to the surface in permanently shadowed crater floors at the lunar poles.
Posted by Emily Lakdawalla on 2012/03/14 08:47 CDT
It is always thrilling to see relics of human exploration out there on other worlds. Today, the Lunar Reconnaissance Orbiter Camera team posted some new photos of two defunct spacecraft: the Luna 17 lander and the Lunokhod 1 rover. I've posted images of the two craft before, but the ones released today are much better.
Posted by Emily Lakdawalla on 2014/01/21 05:02 CST
A higher-resolution version of the Chang'e 3 lander's panoramic view of the lunar surface has appeared on the Web, and artist Don Davis has cleaned it of artifacts to make a beautiful, seamless view. In other news, the mission has been reorganized to accommodate a possibly year-long adventure on the lunar surface.
Posted by Jason Davis on 2011/09/08 11:58 CDT
On September 6, NASA released new high-resolution photos from the Lunar Reconnaissance Orbiter (LRO) showing the Apollo 12, 14 and 17 landing sites from vantage points as close as 21 kilometers.
|
This session introduces the idea that there are different meanings of "more" and distinguishes between relative and absolute comparisons. To familiarize ourselves with the idea of equivalent ratios, we will use both additive and multiplicative methods to explore different ways of making similar figures. We will look at mixture problems and explore ratios without using algorithms to convert them to common denominators. Finally, we will examine characteristics of equations and graphs that represent direct variation.
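As an added illustration (not part of the session materials), the idea that direct variation means a constant ratio y/x can be made concrete in a few lines of Python:

```python
# Direct variation means y = k * x for some constant k: every (x, y) pair
# shares the same ratio y / x, and the graph is a straight line through
# the origin.

def is_direct_variation(pairs, tol=1e-9):
    """True if every (x, y) pair shares (approximately) the same ratio y / x."""
    ratios = [y / x for x, y in pairs if x != 0]
    return all(abs(r - ratios[0]) < tol for r in ratios)

print(is_direct_variation([(1, 3), (2, 6), (4, 12)]))  # True  (k = 3)
print(is_direct_variation([(1, 3), (2, 5)]))           # False
```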
Materials Needed: Graph paper, rulers, handouts of Quadperson, blank overheads
Groups: Discuss any questions about the homework. If time allows, take a few minutes to try out the number games with a partner. Pairs should show their networks to one another. One partner can choose a (secret) input, run it through the network, and reveal only the output. Then the other partner can use the "undoing" network to find the original number.
Groups: Take a minute to discuss iteration, along with Problems H4 and H5 from Session 3. Iteration is likely to be a new idea.
<< back to Session 4 index
|
The phylum Mollusca includes snails, clams, chitons, slugs, limpets, octopi, and squid. As mollusks develop from a fertilized egg to an adult, most pass through a larval stage called the trocophore. The trocophore is a ciliated, free-swimming stage. Mollusks also have a radula or file-like organ for feeding, a mantle that may secrete a shell, and a muscular foot for locomotion. Clams are marine mollusks with two valves or shells. Like all mollusks, a clam has a mantle which surrounds its soft body. It also has a muscular foot which enables the clam to burrow itself in mud or sand. The soft tissue above the foot is called the visceral mass and contains the clam's body organs.
Kingdom - Animalia
Phylum - Mollusca
Class - Bivalvia or Pelecypoda
To study the internal and external anatomy of a bivalve mollusk.
Dissecting pan, dissecting kit, screwdriver, lab apron, plastic gloves, safety glasses, preserved clam
Put on your lab apron, safety glasses, and plastic gloves.
Place a clam in a dissecting tray and identify the anterior and posterior ends of the clam as well as the dorsal, ventral, & lateral surfaces. Figure 1
Locate the umbo, the bump at the anterior end of the valve. This is the oldest part of the clam shell. Find the hinge ligament which hinges the valves together and observe the growth rings.
Turn the clam with its dorsal side down and insert a screwdriver between the ventral edges of the valves. Carefully work the tip of the screwdriver between the valves so you do not jab your hand.
Turn the screwdriver so that the valves are about a centimeter apart. Leave the tip of the screwdriver between the valves and place the clam in the pan with the left valve up.
Locate the adductor muscles. With your blade pointing toward the dorsal edge, slide your scalpel between the upper valve & the top tissue layer. Cut down through the anterior adductor muscle, cutting as close to the shell as possible.
Repeat step 6 in cutting the posterior adductor muscle. Figure 2
Bend the left valve back so it lies flat in the tray.
Run your fingers along the outside and the inside of the left valve and compare the texture of the two surfaces.
Examine the inner dorsal edges of both valves near the umbo and locate the toothlike projections. Close the valves & notice how the toothlike projections interlock.
Locate the muscle "scars" on the inner surface of the left valve. The adductor muscles were attached here to hold the clam closed.
Identify the mantle, the tissue that lines both valves & covers the soft body of the clam. Find the mantle cavity, the space inside the mantle.
Locate two openings on the posterior end of the clam. The more ventral opening is the incurrent siphon that carries water into the clam and the more dorsal opening is the excurrent siphon where wastes & water leave.
With scissors, carefully cut away the half of the mantle that lined the left valve. After removing this part of the mantle, you can see the gills, respiratory structures.
Observe the muscular foot of the clam, which is ventral to the gills. Note the hatchet shape of the foot used to burrow into mud or sand.
Locate the palps, flaplike structures that surround & guide food into the clam's mouth. The palps are anterior to the gills & ventral to the anterior adductor muscle. Beneath the palps, find the mouth.
With scissors, cut off the ventral portion of the foot. Use the scalpel to carefully cut the muscle at the top of the foot into right and left halves.
Carefully peel away the muscle layer to view the internal organs.
Locate the spongy, yellowish reproductive organs.
Ventral to the umbo, find the digestive gland, a greenish structure that surrounds the stomach.
Locate the long, coiled intestine extending from the stomach.
Follow the intestine through the clam. Find the area near the dorsal surface that the intestine passes through, called the pericardial area. Find the clam's heart in this area.
Continue following the intestine toward the posterior end of the clam. Find the anus just behind the posterior adductor muscle.
Use your probe to trace the path of food & wastes from the incurrent siphon through the clam to the excurrent siphon.
Answer the questions on your lab report & label the diagrams of the internal structures of the clam. Also, use arrows on the clam diagram to trace the pathway of food as it travels to the clam's stomach. Continue the arrows showing wastes leaving through the anus.
When you have finished dissecting
the clam, dispose of the clam as your teacher advises and clean, dry, and return
all dissecting equipment to the lab cart. Wash your hands thoroughly with soap.
|
LEWIS AND CLARK EXPEDITION
Ambrose, S.E. Undaunted Courage (Simon & Schuster, 1996).
United States citizens knew little about western North America when the Lewis and Clark Expedition set out in 1804. Twelve years earlier Captain Robert Gray, an American navigator, had sailed into the mouth of the great river he named the Columbia. Traders and trappers reported that the source of the Missouri River was in the mountains in the Far West. No one, however, had yet blazed an overland trail.
President Thomas Jefferson was interested in knowing more about the country west of the Mississippi and in finding a water route to the Pacific Ocean. In 1803, two years after he became president, he asked Congress for $2,500 for an expedition.
To head the expedition, Jefferson chose his young secretary, Captain Meriwether Lewis. Lewis invited his friend Lieutenant William Clark to share the leadership. Both were familiar with the frontier and with Native Americans through their service in the army.
Before Lewis and Clark set out, word came that Napoleon had sold an immense tract of land to the United States. Therefore, part of the region the expedition would be exploring was United States territory.
Plans for the expedition were carefully laid. The party was to ascend the Missouri to its source, carry canoes across the Continental Divide, and descend the Columbia River to its mouth. In preparation for the historic journey, Lewis studied natural history and learned how to fix latitude and longitude by the stars. In the winter of 1803-04 the expedition was assembled in Illinois, near St. Louis. The permanent party consisted of the two leaders, Lewis and Clark; three sergeants; 22 privates; the part-Native American frontiersman George Drouillard; and Clark's African American slave, York. They called themselves the Corps of Discovery.
On May 14, 1804, the explorers started up the Missouri in a 55-foot (17-meter) covered keelboat and two small canoes, paddled by French boatmen and a small temporary escort. On August 3 they held their first meeting with Native Americans at a place the explorers named Council Bluff, across the river and downstream from present-day Council Bluffs, Iowa. In late October they reached the earth-lodge villages of the Mandan, near the present site of Bismarck, N.D.
Across the river from the Mandan villages, the explorers built Fort Mandan and spent the winter. It was here that they hired Toussaint Charbonneau, a French interpreter, and his Native American wife, Sacagawea, the sister of a Shoshone chief. While at Fort Mandan, Sacagawea gave birth to a baby boy. This did not stop her from participating in the group. She carried the child on her back for the rest of the trip. As an interpreter she proved invaluable.
In the spring of 1805 the keelboat was sent back to St. Louis with dispatches for President Jefferson and with natural history specimens. Meanwhile, canoes had been built. On April 7 the party continued up the Missouri. On April 26 it passed the mouth of the Yellowstone, and on June 13 reached the Great Falls of the Missouri. Carrying the laden canoes 18 miles (29 kilometers) around the falls caused a month's delay. In mid-July the canoes were launched again above the falls. On the 25th the expedition reached Three Forks, where three rivers join to form the Missouri. They named the rivers the Madison, the Jefferson, and the Gallatin, after presidents James Madison and Thomas Jefferson, and Albert Gallatin, who was secretary of the treasury under Jefferson.
For some time the explorers had been within sight of the Rocky Mountains. Crossing them was to be the hardest part of the journey. The expedition decided to follow the Jefferson River, the fork that led westward toward the mountains.
On August 12 the group climbed to the top of the Continental Divide, where they hoped to see the headwaters of the Columbia close enough to let them carry their canoes and proceed downstream toward the Pacific. Instead they saw mountains stretching endlessly into the distance. The water route Jefferson had sent them to find did not exist.
They were now in the country of the Shoshone, Sacagawea's people. Sacagawea eagerly watched for her tribe, but it was Lewis who found them. The chief, Sacagawea's brother, provided the party with horses and a guide for the difficult crossing of the lofty Bitterroot Range.
It took the Corps of Discovery most of September to cross the mountains. Hungry, sick, and exhausted, they reached a point on the Clearwater River where Nez Perce helped them make dugout canoes. From there they were able to proceed by water. They reached the Columbia River on October 16.
On Nov. 7, 1805, after a journey of nearly 18 months, Clark wrote in his journal, "Great joy in camp. We are now in view of the Ocean." The explorers had traveled more than 4,100 miles (6,600 kilometers) since they started up the Missouri. They were disappointed to find no ships at the mouth of the Columbia. A few miles from the Pacific shore, south of present-day Astoria, Ore., they built a stockade, Fort Clatsop. There they spent the rainy winter.
On March 23, 1806, the entire party started back. They crossed the mountains in June with Nez Perce horses and guides. Beside the Bitterroot River the two leaders separated to learn more about the country.
Clark headed for the Yellowstone River and followed it to the Missouri. Lewis, with nine men, struck off toward the northeast to explore a branch of the Missouri that he named the Marias. On this trip he had a skirmish with Native Americans that left two Blackfoot dead, the only such incident of the entire journey. Later, while out hunting, he was accidentally shot by one of his own men. He recovered after the party was reunited and had stopped at the Mandan villages. There they left Sacagawea and her family.
The party reached St. Louis on Sept. 23, 1806. Their arrival caused great rejoicing, for they had been believed dead. They had been gone two years, four months, and nine days.
Lewis, Clark, and several other members of the expedition kept detailed journals. They brought back much new material for cartographers and specimens of previously unknown wildlife. American settlers and traders soon began to travel over the route they had blazed. The expedition also provided useful support for the United States claim to the Oregon country.
A project by History World International
Why are the stars in the Orion constellation different colors?
In this view of Orion you can see that the stars are different colors. The red star in the upper left is Betelgeuse (pronounced BET-ul-juice). The blue star in the lower right is Rigel. The fuzzy patch in the sword is the Orion nebula. The nebula will be discussed in the concept, Star Formation.
At a quick glance, stars all look the same. Look closer, though, and you can see differences. The most obvious differences are in size and color.
Color and Temperature
Think about the coil of an electric stove as it heats up. The coil changes in color as its temperature rises. When you first turn on the heat, the coil looks black. The air a few inches above the coil begins to feel warm. As the coil gets hotter, it starts to glow a dull red. As it gets even hotter, it becomes a brighter red. Next it turns orange. If it gets extremely hot, it might look yellow-white, or even blue-white. Like a coil on a stove, a star’s color is determined by the temperature of the star’s surface. Relatively cool stars are red. Warmer stars are orange or yellow. Extremely hot stars are blue or blue-white.
Star temperatures are measured in kelvins. The lowest temperature on the kelvin scale is absolute zero, the temperature at which molecules have essentially no motion. Kelvin is related to Celsius and Fahrenheit in these ways:
[°C] = [K] − 273.15
[°F] = [K] × 9/5 − 459.67
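The two conversion formulas are simple enough to script. A minimal sketch in Python (the helper names are illustrative, not from any library):

```python
def kelvin_to_celsius(k):
    # [C] = [K] - 273.15
    return k - 273.15

def kelvin_to_fahrenheit(k):
    # [F] = [K] * 9/5 - 459.67
    return k * 9 / 5 - 459.67

# absolute zero (0 K) expressed on the other two scales
print(kelvin_to_celsius(0))       # -273.15
print(kelvin_to_fahrenheit(0))    # -459.67
```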
A graph of the brightness (absolute magnitude) of stars versus their color (temperature) is pictured below (Figure below). This is called the Hertzsprung-Russell diagram.
The Hertzsprung-Russell diagram plots luminosity (absolute magnitude) against the color of the stars ranging from the high-temperature blue-white stars on the left side of the diagram to the low temperature red stars on the right side.
Most stars fall along the main sequence curve. Stars in the main sequence fuse hydrogen into helium in the core. The horizontal branch also has many stars. These fuse helium in the core and burn hydrogen surrounding the core. Other stars are found in other regions.
Relative sizes of stars of different masses.
This illustration ( Figure above ) shows the relative sizes of stars and their mass compared to the sun. Red dwarfs are less massive and much smaller in size than the sun. This means they have a very long lifetime. Our sun is a fairly common type of star and has an average lifespan. Red giants are what some main sequence stars (like our sun) become near the end of their lives. They are much larger than our sun. Supergiants are very massive stars and are larger than our sun, but have a smaller radius than red giants. Supergiants have very short lifetimes.
- red dwarf : A relatively cool small star.
- red giant : A relatively cool, large star.
- supergiant : An enormous star that is near the end of its life.
- Stars are classified by color, which correlates with temperature. Red stars are the coolest and blue are the hottest.
- Stars are plotted on a Hertzsprung-Russell diagram.
- Star temperatures are found in a continuum ranging from 2000 K to more than 30,000 K.
- Kelvin is a temperature measure in which the lowest temperature is absolute zero.
Use the resource below to answer the questions that follow.
- Star Classification - Sixty Symbols at http://www.youtube.com/watch?v=R6_dZhE-4bk (7:57)
- What characteristics of a star does a classification system need to tell?
- What is the classification of our star?
- What is the problem with the classification system set up for stars?
- What number is assigned to the brightest stars? Originally what did astronomers think that was referring to, which was thought to indicate temperature?
- Why wasn't this a very good way to measure that characteristic?
- What is the current letter classification, in order from hottest to coldest? What is the mnemonic device to remember that?
- Why is the temperature of a star a really important thing to know?
- What is identified on each axis of the Hertzsprung-Russell diagram?
- Why are stars different colors?
- Where do most stars fall on the Hertzsprung-Russell diagram? Why?
- Why do stars that are different colors appear in the same constellation?
- If a cluster of stars is all the same color, what could that mean?
Reading Skills Teacher Resources
Find Reading Skills educational ideas and activities
Students practice their fluency skills. In this fluency lesson, students read aloud stories to their peers and they help to coach one another on their fluency, pronunciation, phrasing, and inflection. They discuss what makes a good reader enjoyable to listen to and easy to understand.
In this reading skills worksheet, students read about skills to use while reading and then fill out a Venn Diagram. Students choose which items to go into their Venn Diagram.
In this reading skills learning exercise, students fill in a graphic organizer, writing an article name, prediction, vocabulary words, main idea, values and a reflection.
In this reading skills worksheet, students read a selection entitled "Hurricane Warning!" and then respond to 5 questions regarding the main idea and supporting details.
In this graphic aids reading skills worksheet, learners read a 3 paragraph piece and examine the graphic aid accompanying it. Students respond to 4 short answer questions and use the graphic aid provided to think of their own labels and captions.
In this fact and opinion activity, middle schoolers sharpen their reading skills as they read a 1 page article titled "Culture Control" and identify the facts and opinions in the piece. Students list 2 facts and 2 opinions about an issue that matters to them.
In this cause and effect reading skills activity, students read a 5 paragraph selection about the Antarctic ice and identify causes and effects noted in the piece. Students give examples of cause and effect pertaining to an issue of their choice.
In this reading skills and strategies worksheet, learners look for sequence in the provided reading selection as they complete a graphic organizer. Students also identify the sequence of events of a day that was especially memorable for them.
Students read articles related to local, state, national, and world events using word maps.
Seventh graders master the SQ3R method. They begin reading for a purpose and organize thoughts through categorizing them. They write in their notebooks what they think about the lesson and the classroom for the day and write a paragraph about their own culture.
In this problem and solution reading strategies learning exercise, students read a 2 page selection about renewable energy and then identify the problems and solutions noted in the essay.
In this reading skills worksheet, students use clues from a story to make predictions about what will happen next and continue to revise their predictions as they read.
In this reading skills worksheet, students look for clues about the author's purpose in a 3 paragraph selection. Students decide whether the author's main purpose is to inform, entertain, or persuade.
In this reading skills and strategies worksheet, learners look for sequence in the provided reading selection as they complete a graphic organizer. Students also identify a sequence of events pertaining to goals they have set for themselves and achieved.
In this reading skills and strategies learning exercise, students read a 1 page passage titled "A Few Good Noses" and identify important details about the piece as they complete a graphic organizer.
Students explore the rhythm of words. For this reading skills lesson, students read Bedtime at the Swamp and use rhythm instruments to find the cadence in the words of the story. Students listen for rhythm in other written text as they listen to more stories, rhymes, and songs.
In this reading skills worksheet, learners look for the main idea and details that support it as they respond to questions about the 1 page reading selection. Students also identify the main idea and supporting details of an interesting job that they might write about.
Students sharpen their reading skills and learn about the character trait of generosity through the book, "The Rainbow Fish." They describe characters in the story, explain cause and effect, make simple predictions, and compare characters.
Students compare a map of the Roman Empire in 44 BC with one of the Roman Empire in 116 AD. Using these two maps as a reference, students use critical reading skills to explore the expansion of the Roman Empire during that time period.
Students observe and listen to nonfiction books about the life cycle of pumpkins. They practice early reading skills in a shared reading related to pumpkins. They observe the life cycle of a pumpkin including growth and decay.
© 2008 Zachary S Tseng B-3 - 1
A mass m is suspended at the end of a spring, its weight stretches the spring
by a length L to reach a static state (the equilibrium position of the system).
Let u(t) denote the displacement, as a function of time, of the mass relative
to its equilibrium position. Recall that the textbook’s convention is that
downward is positive. Then, u > 0 means the spring is stretched beyond its
equilibrium length, while u < 0 means that the spring is compressed. The
mass is then set in motion (by any one of several means).
The equations that govern a mass-spring system
At equilibrium: (by Hooke’s Law)
mg = kL
While in motion:
m u″ + γ u′ + k u = F(t)
This is a second order linear differential equation with constant coefficients.
It usually comes with two initial conditions: u(t0) = u0, and u′(t0) = u′0.
Summary of terms:
u(t) = displacement of the mass relative to its equilibrium position.
m = mass (m > 0)
γ = damping constant (γ ≥ 0)
k = spring (Hooke’s) constant (k > 0)
g = gravitational constant
L = elongation of the spring caused by the weight
F(t) = Externally applied forcing function, if any
u(t0) = initial displacement of the mass
u′(t0) = initial velocity of the mass
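For numerical work, the second-order motion equation m u″ + γ u′ + k u = F(t) is usually rewritten as a first-order system in the state (u, u′). A minimal sketch in Python (the function name and setup are illustrative assumptions, not part of the notes):

```python
def mass_spring_deriv(t, state, m, gamma, k, F):
    """Right-hand side of m u'' + gamma u' + k u = F(t), rewritten as the
    first-order system u' = v, v' = (F(t) - gamma*v - k*u) / m."""
    u, v = state
    return (v, (F(t) - gamma * v - k * u) / m)

# a mass resting at equilibrium with no forcing stays at rest
rest = mass_spring_deriv(0.0, (0.0, 0.0), m=1.0, gamma=1.0, k=98.0, F=lambda t: 0.0)
```

Any standard ODE integrator can then advance this system from the initial conditions u(t0) = u0, u′(t0) = u′0.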
Undamped Free Vibration (γ = 0, F(t) = 0)
The simplest mechanical vibration equation occurs when γ = 0, F(t) = 0.
This is the undamped free vibration. The motion equation is
m u″ + k u = 0.
The characteristic equation is mr² + k = 0. Its solutions are the purely
imaginary numbers r = ± √(k/m) i.
The general solution is then
u(t) = C1 cos ω0t + C2 sin ω0t.
ω0 = √(k/m) is called the natural frequency of the system. It is the
frequency at which the system tends to oscillate in the absence of any
damping. A motion of this type is called simple harmonic motion.
Comment: Just like everywhere else in calculus, the angle is measured in
radians, and the (angular) frequency is given in radians per second. The
frequency is not given in hertz (which measures the number of cycles or
revolutions per second). Instead, their relation is: 2π radians/sec = 1 hertz.
The (natural) period of the oscillation is given by T = 2π / ω0.
To get a clearer picture of how this solution behaves, we can simplify it with
trig identities and rewrite it as
u(t) = Rcos (ω0 t − δ).
The displacement is oscillating steadily with constant amplitude
R = √(C1² + C2²).
The angle δ is the phase or phase angle of displacement. It measures how
much u(t) lags (when δ > 0), or leads (when δ < 0) relative to cos(ω0 t),
which has a peak at t = 0. The phase angle satisfies the relation
tan δ = C2 / C1.
More explicitly, it is calculated by:
δ = tan⁻¹(C2 / C1), if C1 > 0,
δ = tan⁻¹(C2 / C1) + π, if C1 < 0,
δ = π/2, if C1 = 0 and C2 > 0,
δ = −π/2, if C1 = 0 and C2 < 0.
The angle is undefined if C1 = C2 = 0.
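The piecewise rules for the phase angle are exactly what the two-argument arctangent computes, so in code R and δ can be obtained in one step each. A sketch in Python (illustrative, not from the notes):

```python
import math

def amplitude_phase(C1, C2):
    """Rewrite u = C1 cos(w0 t) + C2 sin(w0 t) as R cos(w0 t - delta).
    math.atan2(C2, C1) reproduces the piecewise definition of delta
    (for C1 < 0 and C2 < 0 it returns an equivalent angle, 2*pi lower)."""
    R = math.hypot(C1, C2)        # R = sqrt(C1^2 + C2^2)
    delta = math.atan2(C2, C1)
    return R, delta

# the simple harmonic motion example that follows: u = cos(t) - sin(t)
R, delta = amplitude_phase(1.0, -1.0)   # R = sqrt(2), delta = -pi/4
```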
An example of simple harmonic motion:
Graph of u(t) = cos(t) − sin(t)
Phase angle: δ = −π/4
Damped Free Vibration (γ > 0, F(t) = 0)
When damping is present (as it realistically always is) the motion equation
of the unforced mass-spring system becomes
m u″ + γ u′ + k u = 0.
where m, γ, and k are all positive constants. The characteristic equation is
mr² + γr + k = 0. Its solution(s) will be either negative real numbers, or complex
numbers with negative real parts. The displacement u(t) behaves differently
depending on the size of γ relative to m and k. There are three possible
classes of behavior, based on the possible types of root(s) of the
characteristic equation.
Case I. Two distinct (negative) real roots
When γ² > 4mk, there are two distinct real roots r1 and r2, both negative. The
displacement is in the form
u(t) = C1 e^(r1 t) + C2 e^(r2 t).
A mass-spring system with such type displacement function is called
overdamped. Note that the system does not oscillate; it has no periodic
components in the solution. In fact, depending on the initial conditions the
mass of an overdamped mass-spring system might or might not cross over
its equilibrium position. But it could cross the equilibrium position at most once.
Figures: Displacement of an Overdamped system
Graph of u(t) = e−t
Graph of u(t) = − e−t
Case II. One repeated (negative) real root
When γ² = 4mk, there is one (repeated) real root. It is negative: r = −γ / (2m).
The displacement is in the form
u(t) = C1 e^(rt) + C2 t e^(rt).
A system exhibiting this behavior is called critically damped. That is, the
damping coefficient γ is just large enough to prevent oscillation. As can be
seen, this system does not oscillate, either. Just like the overdamped case,
the mass could cross its equilibrium position at most one time.
Comment: The value γ² = 4mk, that is, γ = 2√(mk), is called critical damping. It
is the threshold level below which damping would be too small to prevent
the system from oscillating.
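Comparing γ² against 4mk to separate the three cases is straightforward to automate. A small Python helper (an illustrative sketch, not part of the notes):

```python
def classify_damping(m, gamma, k):
    """Classify m u'' + gamma u' + k u = 0 by the sign of gamma^2 - 4mk."""
    disc = gamma * gamma - 4 * m * k
    if gamma == 0:
        return "undamped"
    if disc > 0:
        return "overdamped"
    if disc == 0:
        return "critically damped"
    return "underdamped"

# Exercise 11 below (2u'' + 3u' + ku = 0) is critically damped at k = 9/8:
print(classify_damping(2, 3, 9 / 8))   # critically damped
```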
Figures: Displacement of a Critically Damped system
Graph of u(t) = e^(−t/2) + t e^(−t/2)
Graph of u(t) = e^(−t/2) − t e^(−t/2)
Case III. Two complex conjugate roots
When γ² < 4mk, there are two complex conjugate roots λ ± µi, where their common
real part, λ, is always negative. The displacement is in the form
u(t) = C1 e^(λt) cos µt + C2 e^(λt) sin µt.
A system exhibiting this behavior is called underdamped. The name means
that the damping is small compared to m and k, and as a result vibrations will
occur. The system oscillates (note the sinusoidal components in the
solution). The displacement function can be rewritten as
u(t) = R e^(λt) cos(µt − δ).
The formulas for R and δ are the same as in the previous (undamped free
vibration) section. The displacement function is oscillating, but the
amplitude of oscillation, R e^(λt), is decaying exponentially. For all particular
solutions (except the zero solution that corresponds to the initial conditions
u(t0) = 0, u′(t0) = 0), the mass crosses its equilibrium position infinitely often.
Damped oscillation: u(t) = e−t
The displacement of an underdamped mass-spring system is a quasi-periodic
function (that is, it shows periodic-like motion, but it is not truly periodic
because its amplitude is ever decreasing so it does not exactly repeat itself).
It is oscillating at the quasi-frequency, which is µ radians per second. (It’s just
the frequency of the sinusoidal components of the displacement.) The peak-
to-peak time of the oscillation is the quasi-period: Tq = 2π / µ.
In addition to causing the amplitude to gradually decay to zero, damping has
another, more subtle, effect on the oscillating motion: it immediately
decreases the quasi-frequency and, therefore, lengthens the quasi-period
(compared to the natural frequency and natural period of an undamped
system). The larger the damping constant γ, the smaller the quasi-frequency and
the longer the quasi-period become. Eventually, at the critical damping
threshold, when γ = 2√(mk), the quasi-frequency vanishes and the
displacement becomes aperiodic (becoming instead that of a critically damped system).
Note that in all 3 cases of damped free vibration, the displacement function
tends to zero as t → ∞. This behavior makes perfect sense from a
conservation of energy point-of-view: while the system is in motion, the
damping wastes away whatever energy the system has started out with, but
there is no forcing function to supply the system with additional energy.
Consequently, eventually the motion comes to a halt.
Example: A mass of 1 kg stretches a spring 0.1 m. The system has a
damping constant of γ = 14. At t = 0, the mass is pulled down 2 m and
released with an upward velocity of 3.5 m/s. Find the displacement function.
What are the system’s quasi-frequency and quasi-period?
m = 1, γ = 14, L = 0.1;
mg = 9.8 = kL = 0.1 k → 98 = k.
The motion equation is u″ + 14u′ + 98u = 0, and
the initial conditions are u(0) = 2, u′(0) = −3.5.
The roots of the characteristic polynomial are r = −7 ± 7i, so
u(t) = C1 e^(−7t) cos 7t + C2 e^(−7t) sin 7t.
Therefore, the quasi-frequency is 7 (rad/sec) and the quasi-period is
Tq = 2π / 7 ≈ 0.8976 seconds.
Applying the initial conditions, we get C1 = 2 and C2 = 3/2. Hence
u(t) = 2e^(−7t) cos 7t + 1.5e^(−7t) sin 7t.
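As a check on this worked example, the solution u(t) = 2e^(−7t) cos 7t + 1.5e^(−7t) sin 7t and its hand-computed derivatives can be substituted back into u″ + 14u′ + 98u = 0. A quick numerical verification in Python (a sketch, not part of the notes):

```python
import math

def u(t):
    # the solution found above: u(t) = 2 e^(-7t) cos 7t + 1.5 e^(-7t) sin 7t
    return math.exp(-7 * t) * (2 * math.cos(7 * t) + 1.5 * math.sin(7 * t))

def u_prime(t):
    # first derivative, computed by hand with the product rule
    return math.exp(-7 * t) * (-3.5 * math.cos(7 * t) - 24.5 * math.sin(7 * t))

def u_double_prime(t):
    # second derivative, again by hand
    return math.exp(-7 * t) * (-147 * math.cos(7 * t) + 196 * math.sin(7 * t))

# residual of u'' + 14 u' + 98 u should vanish at every sampled time,
# and the initial conditions u(0) = 2, u'(0) = -3.5 should be recovered
residuals = [u_double_prime(t) + 14 * u_prime(t) + 98 * u(t) for t in (0.0, 0.3, 1.0)]
```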
Summary: the Effects of Damping on an Unforced Mass-Spring System
Consider a mass-spring system undergoing free vibration (i.e. without a
forcing function) described by the equation:
m u″ + γ u′ + k u = 0, m > 0, k > 0.
The behavior of the system is determined by the magnitude of the damping
coefficient γ relative to m and k.
1. Undamped system (when γ = 0)
Displacement: u(t) = C1 cos ω0 t + C2 sin ω0 t
Oscillation: Yes, periodic (at natural frequency ω0 = √(k/m))
Notes: Steady oscillation with constant amplitude R = √(C1² + C2²).
2. Underdamped system (when 0 < γ² < 4mk)
Displacement: u(t) = C1 e^(λt) cos µt + C2 e^(λt) sin µt
Oscillation: Yes, quasi-periodic (at quasi-frequency µ)
Notes: Exponentially decaying oscillation
3. Critically Damped system (when γ² = 4mk)
Displacement: u(t) = C1 e^(rt) + C2 t e^(rt)
Notes: Mass crosses equilibrium at most once.
4. Overdamped system (when γ² > 4mk)
Displacement: u(t) = C1 e^(r1 t) + C2 e^(r2 t)
Notes: Mass crosses equilibrium at most once.
Quick reference: Mechanical Vibrations, F(t) = 0
Underdamped (0 < γ² < 4mk): u(t) = C1 e^(λt) cos µt + C2 e^(λt) sin µt. System oscillates with decreasing amplitude; oscillation is quasi-periodic, Tq = 2π/µ.
Over- or critically damped (γ² ≥ 4mk): u(t) = C1 e^(r1 t) + C2 e^(r2 t), or C1 e^(rt) + C2 t e^(rt). Mass crosses equilibrium at most once.
Undamped (γ = 0): u(t) = C1 cos ω0 t + C2 sin ω0 t. Natural frequency ω0 = √(k/m); steady oscillation with constant amplitude.
Undamped Forced Vibration (γ = 0, F(t) ≠ 0)
Now let us introduce a nonzero forcing function into the mass-spring system.
To keep things simple, let damping coefficient γ = 0. The motion equation is
mu″ + ku = F(t).
In particular, we are most interested in the cases where F(t) is a periodic
function. Without loss of generality, let us assume that the forcing
function is some multiple of cosine:
mu″ + ku = F0 cosωt.
This is a nonhomogeneous linear equation with the complementary solution
uc(t) = C1 cosω0 t + C2 sinω0 t.
The form of the particular solution that the displacement function will have
depends on the value of the forcing function’s frequency, ω.
Case I. When ω ≠ ω0
If ω ≠ ω0 then the form of the particular solution corresponding to the
forcing function is
Y = Acosωt + Bsinωt.
Solving for A and B using the method of Undetermined Coefficients, we find
A = F0 / (m(ω0² − ω²)), B = 0.
Therefore, the general solution of the displacement function is
u(t) = C1 cos ω0 t + C2 sin ω0 t + (F0 / (m(ω0² − ω²))) cos ωt.
An interesting instance of such a forced vibration occurs when the initial
conditions are u(0) = 0, and u′(0) = 0. Applying the initial conditions to the
general solution, we get
C1 = −F0 / (m(ω0² − ω²)), and C2 = 0.
Again, a clearer picture of the behavior of this solution can be obtained by
rewriting it, using the identity:
sin(A)sin(B) = [cos(A − B) − cos(A + B)]/2.
The displacement becomes
u(t) = (2F0 / (m(ω0² − ω²))) sin((ω0 − ω)t / 2) sin((ω0 + ω)t / 2).
This function exhibits a higher-frequency sine oscillation, at (ω0 + ω)/2,
whose amplitude is modulated by its lower-frequency counterpart,
at (ω0 − ω)/2.
This type of behavior, where an oscillating motion’s own amplitude shows
periodic variation, is called a beat.
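The equivalence of the two forms of the displacement rests on the product-to-sum identity and can be confirmed numerically. The values m = 1, F0 = 1, ω0 = 5, ω = 4 below are illustrative assumptions, not from the notes:

```python
import math

m, F0, w0, w = 1.0, 1.0, 5.0, 4.0
amp = F0 / (m * (w0 ** 2 - w ** 2))

def u_sum_form(t):
    # solution with u(0) = u'(0) = 0: amp * (cos(w t) - cos(w0 t))
    return amp * (math.cos(w * t) - math.cos(w0 * t))

def u_beat_form(t):
    # same solution after the identity: slowly varying envelope
    # sin((w0-w)t/2) times the fast factor sin((w0+w)t/2)
    return 2 * amp * math.sin((w0 - w) * t / 2) * math.sin((w0 + w) * t / 2)

diffs = [abs(u_sum_form(t) - u_beat_form(t)) for t in (0.0, 0.7, 2.5, 10.0)]
```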
An example of beat:
Graph of u(t) = 5sin(1.8t)sin(4.8t)
Case II. When ω = ω0
If the periodic forcing function has the same frequency as the natural
frequency, that is ω = ω0, then the form of the particular solution becomes
Y = Atcos ω0 t + Btsin ω0 t.
Using the method of Undetermined Coefficients, we can find that
A = 0, and B = F0 / (2mω0).
The general solution is, therefore,
u(t) = C1 cos ω0 t + C2 sin ω0 t + (F0 / (2mω0)) t sin ω0 t.
The first two terms in the solution, as seen previously, could be combined to
become a cosine term u(t) = Rcos (ω0 t − δ), of steady oscillation. The third
term, however, is a sinusoidal wave whose amplitude increases
proportionally with elapsed time. This phenomenon is called resonance.
Resonance: graph of u(t) = tsin(t)
Technically, true resonance only occurs if all of the conditions below are met:
1. There is no damping: γ = 0,
2. A periodic forcing function is present, and
3. The frequency of the forcing function exactly matches the
natural frequency of the mass-spring system.
However, similar behavior, with an unexpectedly large amplitude of oscillation
produced by a fairly low-strength forcing function, occurs when damping is present
but very small, and/or when the frequency of the forcing function is very
close to the natural frequency of the system.
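Resonant growth can also be seen numerically. For u″ + u = 2 cos t with u(0) = u′(0) = 0 (so m = 1, ω = ω0 = 1, F0 = 2), the general solution above reduces to u(t) = t sin t. A classical fourth-order Runge-Kutta sketch in Python (illustrative, not part of the notes) reproduces this:

```python
import math

def rk4_solve(f, t0, y0, h, n):
    """Classical RK4 for y' = f(t, y), with y a tuple of state components."""
    t, y = t0, y0
    for _ in range(n):
        k1 = f(t, y)
        k2 = f(t + h / 2, tuple(yi + h / 2 * ki for yi, ki in zip(y, k1)))
        k3 = f(t + h / 2, tuple(yi + h / 2 * ki for yi, ki in zip(y, k2)))
        k4 = f(t + h, tuple(yi + h * ki for yi, ki in zip(y, k3)))
        y = tuple(yi + h / 6 * (a + 2 * b + 2 * c + d)
                  for yi, a, b, c, d in zip(y, k1, k2, k3, k4))
        t += h
    return y

# u'' + u = 2 cos t as a first-order system in (u, v = u')
f = lambda t, y: (y[1], 2 * math.cos(t) - y[0])

# integrate to t = 10; the exact resonant solution is u(t) = t sin t
u_num, _ = rk4_solve(f, 0.0, (0.0, 0.0), h=0.001, n=10000)
```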
1 – 4 Solve the following initial value problems, and determine the natural
frequency, amplitude and phase angle of each solution.
1. u″ + u = 0, u(0) = 5, u′(0) = −5.
2. u″ + 25u = 0, u(0) = −2, u′(0) = 10√3.
3. u″ + 100u = 0, u(0) = 3, u′(0) = 0.
4. 4u″ + u = 0, u(0) = −5, u′(0) = −5.
5 – 10 Solve the following initial value problems. For each problem,
determine whether the system is under-, over-, or critically damped.
5. u″ + 6u′ + 9u = 0, u(0) = 1, u′(0) = 1.
6. u″ + 4u′ + 3u = 0, u(0) = 0, u′(0) = −4.
7. u″ + 6u′ + 10u = 0, u(0) = −2, u′(0) = 9.
8. u″ + 2u′ + 17u = 0, u(0) = 6, u′(0) = −2.
9. 4u″ + 9u′ + 2u = 0, u(0) = 3, u′(0) = 1.
10. 3u″ + 24u′ + 48u = 0, u(0) = −5, u′(0) = 6.
11. Consider a mass-spring system described by the equation
2u″ + 3u′ + ku = 0. Give the value(s) of k for which the system is under-,
over-, and critically damped.
12. Consider a mass-spring system described by the equation
4u″ + γu′ + 36u = 0. Give the value(s) of γ for which the system is under-,
over-, and critically damped.
13. One of the equations below describes a mass-spring system undergoing
resonance. Identify the equation, and find its general solution.
(i.) u″ + 9u = 2cos9t (ii.) u″ + 4u′ + 4u = 3sin 2t
(iii.) 4u″ + 16u = 7cos2t
14. Find the value(s) of k, such that the mass-spring system described by
each of the equations below is undergoing resonance.
(a) 8u″ + ku = 5sin6t (b) 3u″ + ku = −πcost
1. u = 5 cos t − 5 sin t, ω0 = 1, R = 5√2, δ = −π/4
2. u = −2 cos 5t + 2√3 sin 5t, ω0 = 5, R = 4, δ = 2π/3
3. u = 3 cos 10t, ω0 = 10, R = 3, δ = 0
4. u = −5 cos(t/2) − 10 sin(t/2), ω0 = 1/2, R = 5√5, δ = π + tan⁻¹ 2
5. u = e^(−3t) + 4t e^(−3t), critically damped
6. u = 2e^(−3t) − 2e^(−t), overdamped
7. u = −2e^(−3t) cos t + 3e^(−3t) sin t, underdamped
8. u = 6e^(−t) cos 4t + e^(−t) sin 4t, underdamped
9. u = 4e^(−t/4) − e^(−2t), overdamped
10. u = −5e^(−4t) − 14t e^(−4t), critically damped
11. Overdamped if 0 < k < 9/8, critically damped if k = 9/8, underdamped
if k > 9/8.
12. Underdamped if 0 < γ < 24, critically damped if γ = 24, overdamped if
γ > 24. When γ = 0, the system is undamped (rather than underdamped).
13. (iii), u = C1 cos 2t + C2 sin 2t + (7/16) t sin 2t
14. (a) k = 288 (b) k = 3
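The answers to exercise 14 can be sanity-checked by confirming that each value of k makes the natural frequency √(k/m) equal the forcing frequency:

```python
import math

# (a) 8u'' + 288u = 5 sin 6t : natural frequency sqrt(288/8) matches forcing frequency 6
assert math.sqrt(288 / 8) == 6.0
# (b) 3u'' + 3u = -pi cos t : natural frequency sqrt(3/3) matches forcing frequency 1
assert math.sqrt(3 / 3) == 1.0
```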
A confederation (also known as confederacy or league) is a union of political units for common action in relation to other units. Usually created by treaty but often later adopting a common constitution, confederations tend to be established for dealing with critical issues (such as defense, foreign affairs, or a common currency), with the central government being required to provide support for all members.
The nature of the relationship among the states constituting a confederation varies considerably. Likewise, the relationship between the member states, the central government, and the distribution of powers among them is highly variable. Some looser confederations are similar to intergovernmental organizations and even may permit secession from the confederation. Other confederations with stricter rules may resemble federations. A unitary state or federation may decentralize powers to regional or local entities in a confederal form.
In a non-political context, confederation is used to describe a type of organization which consolidates authority from other autonomous (or semi-autonomous) bodies. Examples include sports confederations or confederations of pan-European trades unions.
In the context of the history of the indigenous peoples of the Americas, a confederacy may refer to a semi-permanent political and military alliance consisting of multiple nations (or "tribes", "bands", or "villages") which maintained their separate leadership. One of the most well-known is the Iroquois Confederacy, but there were many others during different eras and locations across North America; these include the Wabanaki Confederacy, Western Confederacy, Powhatan Confederacy, Seven Nations of Canada, Pontiac's Confederacy, Illinois Confederation, Tecumseh's Confederacy, Great Sioux Nation, Blackfoot Confederacy, Iron Confederacy and Council of Three Fires.
Many scholars have proposed that Belgium has some characteristics of a confederation. For example, C. E. Lagasse declared that Belgium was "near the political system of a Confederation" regarding the agreements between Belgian regions and communities, while Centre de recherche et d'information socio-politiques (CRISP) director Vincent de Coorebyter called Belgium "undoubtedly a federation...[with] some aspects of a confederation" in Le Soir. Also in Le Soir, Professor Michel Quévit of the Catholic University of Leuven wrote that the "Belgian political system is already in dynamics of a Confederation".
Nevertheless, the Belgian regions and communities lack the necessary autonomy to leave the Belgian state. As such, the federal aspects seem to dominate. Also for fiscal policy and public finances, the federal state dominates the other levels of government.
The limited confederal aspects appear to be a meager political reflection of the profound sociological, cultural and economic differences between Flemings and Walloons (or French-speaking Belgians). As an example, in the last several decades, over 95% of the Belgians have voted for political parties that represent voters from only one community. Parties that advocate Belgian unity and appeal to voters of both communities systematically get only a few percent of the votes.
This makes Belgium fundamentally different from federal countries like Switzerland, Canada, Germany and Australia. In those countries, national parties get over 90% of the votes. The only comparable places with Belgium are Catalonia, the Basque Country, Northern Ireland and Scotland, where there is majority voter turnout for local political parties, while national parties draw less (sometimes much less) than half of the votes.
In modern terminology, Canada is a federation and not a confederation. However, at the time of the Constitution Act, 1867, confederation was the normal British and Canadian term for a single sovereign nation-state of federating provinces. Canadian Confederation generally refers to the Constitution Act, 1867, which formed the Dominion of Canada from three of the colonies of British North America, and to the subsequent incorporation of other colonies and territories. Therefore, on July 1, 1867, Canada became a self-governing dominion of the British Empire with a federal structure under the leadership of Sir John A. Macdonald. The provinces involved were the Province of Canada (comprising Canada West, now Ontario, formerly Upper Canada; and Canada East, now Quebec, formerly Lower Canada), Nova Scotia, and New Brunswick. Later participants were Manitoba, British Columbia, Prince Edward Island, Alberta and Saskatchewan (the latter two created as provinces from the Northwest Territories in 1905), and finally Newfoundland (now Newfoundland and Labrador) in 1949. Canada is an unusually decentralized federal state and not a confederate association of sovereign states (the usual meaning of confederation in modern terms). A Canadian law, the Clarity Act, and a court ruling, Reference re Secession of Quebec, set forth the conditions for negotiations to allow Canadian provinces (though not territories) to leave the Canadian federal state; however, as this would require a constitutional amendment, there is no current "constitutional" method for withdrawal.
Due to its unique nature, and the political sensitivities surrounding it, there is no common or legal classification for the European Union (EU). However, it does bear some resemblance to both a confederation (or "new" type of confederation) and a federation. The EU operates common economic policies with hundreds of common laws, which enable a single economic market, open internal borders, a common currency and allow for numerous other areas where powers have been transferred and directly applicable laws are made. However, unlike a federation, the EU does not have exclusive powers over foreign affairs, defence and taxation. Furthermore, laws sometimes must be transcribed into national law by national parliaments; decisions by member states are taken by special majorities with blocking minorities accounted for; and treaty amendment requires ratification by every member state before it can come into force.
However, academic observers more usually discuss the EU in terms of its being a federation. As international law professor Joseph H. H. Weiler (of the Hague Academy and New York University) wrote, "Europe has charted its own brand of constitutional federalism". Jean-Michel Josselin and Alain Marciano see the European Court of Justice as being a primary force behind building a federal legal order in the Union, with Josselin stating that a "complete shift from a confederation to a federation would have required to straightforwardly replace the principalship of the member states vis-à-vis the Union by that of the European citizens...As a consequence, both confederate and federate features coexist in the judicial landscape". Rutgers political science professor R. Daniel Kelemen observed: "Those uncomfortable using the 'F' word in the EU context should feel free to refer to it as a quasi-federal or federal-like system. Nevertheless...the EU has the necessary attributes of a federal system. It is striking that while many scholars of the EU continue to resist analyzing it as a federation, most contemporary students of federalism view the EU as a federal system". Thomas Risse and Tanja A. Börzel claim that the "EU only lacks two significant features of a federation. First, the Member States remain the 'masters' of the treaties, i.e., they have the exclusive power to amend or change the constitutive treaties of the EU. Second, the EU lacks a real 'tax and spend' capacity, in other words, there is no fiscal federalism."
The Iroquois League, historically the Iroquois Confederacy, is a group of Native Americans (in what is now the United States) and First Nations (in what is now Canada) that consists of six nations: the Mohawk, the Oneida, the Onondaga, the Cayuga, the Seneca and the Tuscarora. The Iroquois have a representative government known as the Grand Council. The Grand Council is the oldest governmental institution still maintaining its original form in North America. The League has been functioning since prior to major European contact. Each tribe sends chiefs to act as representatives and make decisions for the whole nation.
Serbia and Montenegro
Serbia and Montenegro (2003–06) was a confederation formed by the two remaining republics of the Socialist Federal Republic of Yugoslavia (SFR Yugoslavia): Montenegro and neighboring Serbia, the sole legal successors to FR Yugoslavia, which consequently ceased to exist. The country was reconstituted as a very loose political union called the State Union of Serbia and Montenegro, established on February 4, 2003.
As a confederation, Serbia and Montenegro was united in only a few realms, such as defense, foreign affairs, and a very weak common presidency. The two constituent republics functioned separately throughout the period of its short existence, and continued to operate under separate economic policies, as well as using separate currencies (the euro was and still is the only legal tender in Montenegro, while the dinar remained legal tender in Serbia). On 21 May 2006, the Montenegrin independence referendum was held. Final official results indicated on 31 May that 55.5% of voters had voted in favor of independence. The state union effectively came to an end after Montenegro's formal declaration of independence on June 3, 2006, and Serbia's formal declaration of independence on June 5.
Switzerland, officially known as the Swiss Confederation, is an example of a modern country that refers to itself as a confederation. However, at the time Switzerland adopted the Latin name "Confoederatio Helvetica", no distinction existed in Europe between the words "confederation" and "federation" regarding the strength of federal authority. After the Swiss civil war of 1847 (the Sonderbundskrieg), fought when some of the Catholic cantons tried to set up a separate alliance (the Sonderbund), the resulting political system acquired all the characteristics of a federation. It had been a confederacy since its inception in 1291 as the Old Swiss Confederacy, originally created as an alliance among the valley communities of the central Alps, and retains the confederal name. The confederacy facilitated management of common interests (free trade) and ensured peace along the important mountain trade routes.
Historical confederations (especially those predating the 20th century) may not fit the current definition of a confederation, may be proclaimed as a federation but be confederal (or the reverse), and may not show any qualities that 21st-century political scientists might classify as those of a confederation.
Arabia during Muhammad era
Early in 627, during the Battle of the Trench, a confederation of tribes was formed to fight the Islamic prophet Muhammad. The Jews of the Banu Nadir met with the Arab Quraysh of Makkah. Huyayy ibn Akhtab, along with other leaders from Khaybar, traveled to swear allegiance with Safwan at Makkah.
The Banu Nadir began rousing the nomads of Najd. The Nadir enlisted the Banu Ghatafan by paying them half of their harvest. This contingent, the second largest, added a strength of about 2,000 men and 300 horsemen led by Unaina bin Hasan Fazari. The Bani Assad also agreed to join, led by Tuleha Asadi. From the Banu Sulaym, the Nadir secured 700 men, though this force would likely have been much larger had not some of its leaders been sympathetic towards Islam. The Bani Amir, who had a pact with Muhammad, refused to join.
Other tribes included the Banu Murra, with 400 men led by Hars ibn Auf Murri, and the Banu Shuja, with 700 men led by Sufyan ibn Abd Shams. In total, the strength of the Confederate armies, though not agreed upon by scholars, is estimated to have included around 10,000 men and six hundred horsemen. At the end of March 627 the army, which was led by Abu Sufyan, marched on Medina.
In accordance with the plan the armies began marching towards Medina, Meccans from the south (along the coast) and the others from the east. At the same time horsemen from the Banu Khuza'a left to warn Medina of the invading army.
Some have more of the characteristics of a personal union, but appear here because of their self-styling as a "confederation":
- a Confederated personal union.
- b De facto confederation.
- Oxford English Dictionary
- "How Canadian Govern Themselves, First Edition, 1980 by Eugene Forsey, Ch. on A Federal State p.1". .parl.gc.ca. Retrieved 2011-02-19.
- French Le confédéralisme n'est pas loin Charles-Etienne Lagasse, Les nouvelles institutions politiques de la Belgique et de l'Europe, Erasme, Namur 2003, p. 405 ISBN 2-87127-783-4
- Belgian research center whose activities are devoted to the study of decision-making in Belgium and in Europe
- French: "La Belgique est (...) incontestablement, une fédération : il n’y a aucun doute (...) Cela étant, la fédération belge possède d’ores et déjà des traits confédéraux qui en font un pays atypique, et qui encouragent apparemment certains responsables à réfléchir à des accommodements supplémentaires dans un cadre qui resterait, vaille que vaille, national." Vincent de Coorebyter "La Belgique (con)fédérale" in Le Soir 24 June 2008
- French: "Le système institutionnel belge est déjà inscrit dans une dynamique de type confédéral." Michel Quévit Le confédéralisme est une chance pour les Wallons et les Bruxellois, Le Soir, 19 September 2008
- Robert Deschamps, Michel Quévit, Robert Tollet, "Vers une réforme de type confédéral de l'État belge dans le cadre du maintien de l'union monétaire," in Wallonie 84, n°2, pp. 95-111
- P.W. Hogg, Constitutional Law of Canada (5th ed. supplemented), para. 5.1(b).
- How Canadians Govern Themselves, 7th ed
- Kiljunen, Kimmo (2004). The European Constitution in the Making. Centre for European Policy Studies. pp. 21–26. ISBN 978-92-9079-493-6.
- Burgess, Michael (2000). Federalism and European union: The building of Europe, 1950–2000. Routledge. p. 49. ISBN 0-415-22647-3. "Our theoretical analysis suggests that the EC/EU is neither a federation nor a confederation in the classical sense. But it does claim that the European political and economic elites have shaped and moulded the EC/EU into a new form of international organization, namely, a species of "new" confederation."
- Josselin, Jean Michel; Marciano, Alain (2006). "The Political Economy of European Federalism". Series: Public Economics and Social Choice. Centre for Research in Economics and Management, University of Rennes 1, University of Caen. p. 12. WP 2006-07; UMR CNRS 6211.
A complete shift from a confederation to a federation would have required to straightforwardly replace the principalship of the member states vis-à-vis the Union by that of the European citizens.... As a consequence, both confederate and federate features coexist in the judicial landscape.
- "How the Court Made a Federation of the EU" [referring to the European Court of Justice]. Josselin (U. de Rennes-1/CREM) and Marciano (U. de Reims CA/CNRS).
- J.H.H. Weiler (2003). "Chapter 2, Federalism without Constitutionalism: Europe's Sonderweg". The federal vision: legitimacy and levels of governance in the United States and the European Union. Oxford University Press. ISBN 0-19-924500-2.
Europe has charted its own brand of constitutional federalism. It works. Why fix it?
- Bednar, Jenna (2001). A Political Theory of Federalism. Cambridge University. pp. 223–270.
- Thomas Risse and Tanja A. Börzel, Who is Afraid of a European Federation? How to Constitutionalise a Multi-Level Governance System, Section 4: The European Union as an Emerging Federal System, Jean Monnet Center at NYU School of Law
- Evans-Pritchard, Ambrose (2003-07-08). "Giscard's 'federal' ruse to protect Blair". The Daily Telegraph. Retrieved 2008-10-15.
- Jennings, p.94
- "Startseite". admin.ch. 2011-02-13. Retrieved 2011-02-19.
- "Federal Chancellery - The Swiss Confederation – a brief guide". Bk.admin.ch. 2010-03-01. Retrieved 2011-02-19.
- Swiss Confederation Institute, swissconfederationinstitute.org. Retrieved 2013-07-12.
- Haller/Kölz, p. 147
- Lings, Muhammad: his life based on the earliest sources, pp. 215f.
- al-Halabi, al-Sirat al-Halbiyyah, p. 19.
- Nomani, Sirat al-Nabi, p. 368-370.
- Watt, Muhammad at Medina, p. 34-37.
- Rodinson, Muhammad: Prophet of Islam, p. 208.
- P.-J. Proudhon, The Principle of Federation, 1863.
- The Fathers of Confederation
- Confederation: The Creation of Canada
- South Africa at worldstatesmen.org.
- United Confederation of Taino People
|
A painting by Rudolf Bohunek depicting a man and a barrel of whiskey, entitled "Ould Irish Whiskey." The image was painted during the Prohibition era, around 1910.
In the early twentieth century, Louisiana reluctantly became subject to the abolition, or prohibition, of alcoholic drinks as a result of federal law. The ban on alcohol mandated by the Eighteenth Amendment to the US Constitution was approved by Congress in December 1917, ratified with the approval of thirty-six of the forty-eight states in January 1919, and became law on January 16, 1920; enforcement was provided by the Volstead Act of 1919.
The effort to eliminate alcohol, or the “dry” movement, as it was often known, actually began in earnest in the 1840s. In 1869 an official Prohibition Party emerged to coordinate dry efforts. By 1873 the Women’s Christian Temperance Union (WCTU) eclipsed other national antialcohol movements, emerging as a potent political force across much of the South and Midwest. The Louisiana branch of the WCTU boasted a large membership of politically active “drys.” During the Progressive era (1890–1920), the Anti-Saloon League established itself as an umbrella organization to coordinate prohibition efforts.
Attitudes about Prohibition
Louisiana proved an uneasy partner in the “noble experiment.” Much of north central Louisiana, portions of the Florida Parishes, the Black Belt parishes along the Mississippi River, and the western Sabine River parishes were dominated by strongly pro-prohibition Baptist and Methodist adherents. Yet the majority of Louisiana residents opposed the prohibition of alcohol. Many Roman Catholics, Episcopalians, and German Lutherans across the southern part of the state intensely resisted the new law. Business interests in New Orleans and other southern Louisiana urban areas also opposed prohibition due to the economic implications it entailed. Many “wets” enjoyed the opportunity to consume alcohol, while others resisted the law in the belief that government had no right to legislate morality.
Unlike some states that exhibited near homogeneity in their stance on drinking hard liquor, Louisiana endured deeply entrenched divisions. Many residents accepted prohibition arguments that alcohol contributed to moral depravity and illicit behavior. Others believed it advanced the misery and debauchery of the poor, especially among newly freed African Americans. Still others claimed alcohol served as a root cause of Louisiana’s exceptionally high rates of violence. Yet, alcohol had always played a prominent role in Louisiana life. Whether celebrating Mardi Gras, joining in a corn shucking, or enjoying an elegant meal at a New Orleans restaurant, alcoholic drinks proved a normal concomitant to Louisiana culture.
Prohibition Takes Effect
When the law took effect in January 1920, Louisianans quickly perfected numerous methods to circumvent it. Indeed, some have argued that drinking liquor became even more popular in Louisiana after it was declared illegal. Rumrunning became a major industry; smugglers brought so many shiploads of illegal liquor to Louisiana that the price actually began to decline. A 1926 survey of social workers nationwide identified New Orleans as the “wettest” city in America. A similar 1924 report issued by the US Attorney General’s office declared southern Louisiana to be 90 percent wet. In rural upstate parishes, illicit stills concealed in the vast woodlands provided customers with prodigious quantities of moonshine liquor, ranging from “Blind Tiger” to “Busthead” whiskey.
The political atmosphere of the state facilitated Louisianans’ ability to resist prohibition. Some members of the New Orleans city council sought to have alcohol declared a food supplement in order to circumvent the law. When asked by the mayor of Atlanta what his administration was doing to enforce prohibition, Louisiana governor Huey P. Long famously responded “not a damn thing.” Scores of Louisiana residents, whether they consumed alcohol or not, simply resented the intrusion of government into what they perceived as private affairs. Thus, they refused to support enforcement of the law.
Despite such obstacles, federal agents worked aggressively to enforce their mandate. Coast Guard cutters stepped up interdiction of rumrunning throughout the 1920s, even firing on and sinking some runners such as the Canadian-registered schooner, I’m Alone, in the Gulf of Mexico. Liquor raids peaked in New Orleans in 1925 when 200 agents uncovered and destroyed more than 10,000 cases of liquor. By December 1926, New Orleans had more padlocked speakeasies or saloons illegally selling alcohol than any city in the nation. When repeal of the prohibition law occurred in April 1933, more than nine hundred retail beer permits were issued in the Crescent City in the first week.
In Louisiana, as elsewhere, prohibition did reduce the overt consumption of alcohol. Yet it also contributed to social degradation by promoting multiple forms of criminal behavior such as rumrunning, moonshining, and violent turf wars, not to mention increasing hostility toward government agents and the legal authority they represented. In the end, prohibition may have been a noble experiment, but its legacy was short lived in Louisiana. As recently as the 1980s, the Louisiana legislature proved willing to temporarily forgo federal highway construction appropriations rather than raise the drinking age to twenty-one. Consuming alcoholic beverages has proven a deeply ingrained dynamic of Louisiana culture that even the federal government cannot mitigate.
Cite This Entry
Chicago Manual of Style
Hyde, Samuel C. "Prohibition." In KnowLA Encyclopedia of Louisiana, edited by David Johnson. Louisiana Endowment for the Humanities, 2010–. Article published February 7, 2011. http://www.knowla.org/entry/847/.
Hyde, Samuel C. "Prohibition." KnowLA Encyclopedia of Louisiana. Ed. David Johnson. Louisiana Endowment for the Humanities, 7 Feb 2011. Web. 5 Feb. 2016.
|
Apollo Space Freighter (1963)
When first proposed in 1959, the spacecraft that would eventually become known as the Apollo Command and Service Module (CSM) was envisioned as a three-man Earth-orbital vehicle upgradable to lunar-orbital capability. On November 15, 1960, NASA awarded six-month feasibility study contracts for just such an Apollo spacecraft to the Martin Company, the Convair Division of General Dynamics, and the General Electric (GE) Company Defense Electronic Division, Missile and Space Vehicle Department. The CSM at that time was to include a Command Module (CM), a Service Module (SM), and an orbital module, a kind of mini-space station. The three companies submitted their final study reports on May 15, 1961.
Ten days later, President John F. Kennedy redirected Apollo – and, indeed, the entire U.S. civilian space program – toward the goal of landing a man on the moon by the end of the 1960s. On November 28, 1961, NASA awarded North American Aviation (NAA) the contract to build the Apollo CSM, the initial design of which included two modules: the conical CM and the drum-shaped SM. At the time, the method by which NASA would carry out the President’s mandate remained uncertain, though it was widely assumed that it would soon award a contract for a third Apollo spacecraft module: a landing propulsion module for lowering the CSM to the lunar surface. NAA went so far as to design the Service Propulsion System (SPS) main engine, mounted at the base of the SM, with enough thrust to launch the CSM off the moon using the propulsion module as a launch pad.
The Apollo CSM would never land on the moon, however. On July 11, 1962, as part of an ongoing debate that was not finally settled until November of that year, NASA selected the Lunar-Orbit Rendezvous (LOR) mode for accomplishing the Apollo mission. A contract for a third Apollo module was indeed awarded (to Grumman, on November 7, 1962), but it was for the Lunar Excursion Module (LEM), a bug-like two-man lander that would detach from the CSM in lunar orbit and land. The Apollo CSM thus became the mother ship for delivering astronauts and LEM to lunar orbit and returning astronauts and moon rocks to Earth.
If some within NASA had had their way, then the Apollo CSM would also have become the primary crew and cargo delivery vehicle for a 24-man Earth-orbiting space station beginning as early as 1968. In April 1963, NASA’s Manned Spacecraft Center (MSC) awarded NAA a contract for a seven-month, two-phase study of a Modified Apollo (MODAP) Logistics Spacecraft. At the time, MSC personnel, who had moved from NASA’s Langley Research Center in Virginia beginning in early 1962, were housed in temporary offices scattered across Houston, Texas. By the time NAA completed the MODAP study in November 1963, MSC had officially opened its new facilities on Houston’s southern outskirts.
Not surprisingly, the Apollo CSM design in 1963 had yet to reach its final form. No docking unit design had been selected, for example, though the probe-and-drogue system eventually chosen was already the leading candidate. A peculiar phased-array high-gain antenna had yet to be replaced by the familiar four-dish Apollo high-gain. The overall layout and many other details were, however, firmly in place, giving NAA a meaningful point of departure for its MODAP design.
The Apollo CSM’s crew-carrying CM included three astronaut couches, a control panel, small windows at strategic locations, a side-mounted hatch, a docking tunnel and parachutes in its nose, a bowl-shaped heat shield at its base, and thrusters for orienting it for atmospheric reentry. An umbilical linked the CM to the SM. The SM included seven major internal compartments. A central cylindrical compartment housed helium pressurant tanks for pushing rocket propellants to the SPS main engine. Arrayed around the central compartment were six triangular compartments containing tanks of fuel and oxidizer for the SPS and four attitude-control thruster quads, fuel cells for making electricity and water, and tanks of liquid oxygen and liquid hydrogen for supplying the fuel cells.
The MODAP CSM would have a stripped-down SM and a beefed-up CM. Because it would spend a limited amount of time in free flight before docking with an Earth-orbiting space station that could supply it with air, electricity, and cooling, it could dispense with or downgrade many lunar mission SM systems. Batteries would replace the lunar SM’s fuel cells, for example, and a compact, less powerful LEM descent engine would replace the SPS. The LEM engine would draw propellants from a pair of spherical tanks in the central cylindrical compartment. This would free up the triangular compartments for cargo containers.
NAA assumed that the MODAP CSM would launch on a two-stage Saturn IB rocket capable of placing 32,500 pounds into a 105-nautical-mile-high circular parking orbit (NAA also looked at launching the MODAP CSM on a four-stage Titan-IIIC). Pre-launch preparations, launch operations, and ascent to parking orbit would need from five to 10 days, from five to eight hours, and 11 minutes, respectively. The spacecraft would remain in parking orbit for less than five hours before igniting the LEM descent engine to place itself into an elliptical transfer orbit with a 260-mile apogee (highest point above the Earth). Upon reaching this apogee 45 minutes later, it would again ignite its engine to circularize its orbit. Rendezvous and docking with the space station in 260-mile-high orbit would need up to 17.5 hours.
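The 45-minute coast from the parking-orbit burn to apogee is simply half the period of the transfer ellipse, which can be checked with Kepler's third law. A minimal sketch (the constants and the assumption that the 260-mile apogee is in statute miles are mine, not from the NAA report; nautical miles give nearly the same answer):

```python
import math

MU = 398600.4418   # Earth's gravitational parameter, km^3 / s^2
R_EARTH = 6378.14  # Earth's equatorial radius, km

# Orbit altitudes from the study: 105-nautical-mile parking orbit,
# 260-mile apogee (taken here as statute miles).
r_perigee = R_EARTH + 105 * 1.852      # km
r_apogee = R_EARTH + 260 * 1.609344    # km

# Kepler's third law: period from the semi-major axis of the ellipse.
a = (r_perigee + r_apogee) / 2
period_s = 2 * math.pi * math.sqrt(a**3 / MU)

# The perigee-to-apogee coast is half the transfer-orbit period.
coast_min = period_s / 2 / 60
print(f"coast to apogee: {coast_min:.1f} minutes")  # ~45 minutes
```

The result lands within a few seconds of the 45 minutes NAA quoted, which suggests the study's timeline was built on exactly this kind of two-impulse Hohmann-style transfer.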
Though MSC was on the move throughout the MODAP study and busy with Apollo moon program preparations, its engineers had already found time to design the 24-man space station to which the MODAP CSM would deliver crews and cargo. Designed for launch on a single two-stage Saturn V rocket, MSC’s station would reach orbit unmanned and unfold three “arms” from a central hub. The hub would include a docking port for MODAP CSM spacecraft and three berthing ports for MODAP CMs without their SMs.
NAA calculated that a 24-man space station with full crew rotation every six months would need to receive a MODAP CSM bearing six astronauts and 5855 pounds of cargo eight times per year, or once every 45 days. The cargo manifest would include 1620 pounds of food, 1035 pounds of breathing oxygen, 505 pounds of buffering nitrogen, 1450 pounds of propellants, and 1245 pounds of spare parts. Water would not be carried because the space station was expected to recycle all of its water.
The company estimated that containers for solid and liquid cargo – which it called Cargo Modules, or CAMs – would have a combined empty mass of 1970 pounds. The volume required to accommodate the cargo and containers would total 202.4 cubic feet, meaning that all necessary cargo could be carried in four of the SM’s six triangular compartments. NAA noted that the MODAP CSM launched on a Saturn IB would have surplus cargo capacity equal to 1302 pounds of mass and 52 cubic feet of volume that might be applied to additional cargo, such as science instruments. In all, one MODAP CSM could transport 9127 pounds of cargo and CAMs.
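The study's mass figures reconcile arithmetically: the per-item masses sum to the stated 5855-pound manifest, and the 9127-pound "cargo and CAMs" total is recovered when the 1302-pound surplus capacity is counted alongside the baseline cargo and the empty CAMs. A quick check (the dictionary labels are descriptive, not taken from the report):

```python
# Per-flight cargo manifest from the 1963 NAA study, in pounds.
manifest = {
    "food": 1620,
    "breathing oxygen": 1035,
    "buffering nitrogen": 505,
    "propellants": 1450,
    "spare parts": 1245,
}

cargo_lb = sum(manifest.values())
print(cargo_lb)  # 5855 lb, matching the stated per-flight total

cam_empty_lb = 1970   # combined empty mass of the Cargo Modules (CAMs)
surplus_lb = 1302     # unused Saturn IB mass capacity noted by NAA

# Baseline cargo + CAM tare mass + surplus capacity.
print(cargo_lb + cam_empty_lb + surplus_lb)  # 9127 lb "cargo and CAMs"
```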
NAA proposed that the MODAP SM include hinged doors for unloading cargo at the space station, a process that would have to be completed in 44 days or less to make way for the next MODAP CSM to dock. Small doors near the top of the SM, where it joined the CM, would provide access to four CAMs holding liquid cargo, while large doors below those would expose four solid-cargo CAMs.
NAA envisioned that the three-armed MSC space station would include a hangar for either the MODAP CM alone or for the entire MODAP CSM. If the hangar housed the CM alone, then the SM would protrude into open space following docking. A robot arm on the station would grip each CAM in turn and transfer it to a pipe-like loading chute on the station’s exterior. After all cargo was transferred, the MODAP SM would be cast off and the hangar closed to protect the MODAP CM, which would remain attached to the station for up to six months. If, on the other hand, the hangar accommodated the entire MODAP CSM, then cargo transfer would occur within the hangar. The SM would still be cast off within 44 days of docking to make room for the next MODAP CSM.
After the MODAP SM was discarded, the MODAP CM would be pivoted using a manipulator arm to a berthing port to free up the main docking port. It would remain parked there, undergoing periodic inspection and maintenance but otherwise dormant, for up to six months.
Discarding the MODAP SM meant that the MODAP CM would need to carry a separate de-orbit propulsion module. NAA proposed a cluster of six solid-propellant retrorocket motors, any five of which would be adequate to deorbit the MODAP CM. The retro package would also include batteries for powering the MODAP CM during free flight prior to reentry. NAA expected that, under normal conditions, the MODAP CM would need 30 minutes for checkout and undocking, after which the retro motors would fire immediately. Twenty-five minutes later, shortly after de-orbit module separation, it would reenter Earth’s atmosphere. Because the MODAP CM would encounter the atmosphere moving at about half the speed of the lunar CM, its heat shield could be about half as thick. Descent and splashdown would need 11 minutes. Because the MODAP CM would be heavier than the lunar CM, it would descend on four parachutes, one more than the lunar CM; its crew could still splash down safely if one parachute failed.
Under normal circumstances, the MODAP CM would splash down in the Gulf of Mexico, not far from Houston, and crew recovery would take place within a few hours. NAA acknowledged, however, that emergencies might occur. Because of this, the MODAP CM could fly free of the space station for up to 10.5 hours while its orbit carried it into position for reentry and splashdown at any of three landing sites. These were the prime site in the Gulf of Mexico, a site near Okinawa in the western Pacific Ocean, and a site near Hawaii. To cut costs, fleets of recovery ships would not remain on standby at the landing sites; because of this, recovery might be delayed for up to 24 hours following an emergency splashdown near Okinawa or Hawaii.
An abort during ascent to orbit could cause the MODAP CM to land in southern Africa; that is, on land. To protect its three-man crew during a land landing, the lunar CM would include shock absorbers in its supporting seat struts. These would enable the crew couches to move vertically up to five inches to dissipate the force of impact.
Because the MODAP CM would carry six men arrayed in two rows of three couches, one row above the other, vertical couch movement was not an option. Insufficient room would exist within the MODAP CM to permit vertical movement totaling at least 10 inches (five inches per row). The lunar CM would also rely on crushable material in the CM heatshield; this would be inadequate to soften the blow for the greater mass of six men.
NAA proposed to solve this problem by, in effect, moving the shock absorbers from the seat supports to the MODAP CM’s heat shield and by adding four solid-propellant landing rockets. In the event of a land landing, the heat shield would deploy downward on shock absorbing struts and the landing rockets would ignite and pivot out from behind the shield.
NAA assumed a MODAP CSM design and test program spanning from early 1964 to mid-1968, and that operational MODAP CSMs would deliver crews and cargoes to the 24-man space station from mid-1968 through 1973. The company anticipated that five MODAP CSMs would be used in ground tests and unmanned test flights, and that 40 MODAP CSMs would fly during the five-year space station program. Of these, perhaps two would fail, requiring assembly of at least two backup spacecraft. NAA placed the total cost of the MODAP CSM program (including $861 million for Saturn IB rockets) at $1,881,350,000.
Final Technical Presentation: Modified Apollo Logistics Spacecraft, Contract NAS 9-1506, North American Aviation, Inc., Space and Information Systems Division, November 1963.
Beyond Apollo chronicles space history through missions and programs that didn’t happen.
I research and write about the history of space exploration and space technology with an emphasis on missions and programs planned but not flown (that is, the vast majority of them). Views expressed are my own.
|
Let's find out how we know our shapes!
Introduce or review geometric shapes such as circles, triangles, squares, rectangles, etc. with students. Hold a class discussion of the similarities and differences between the shapes. Have students use construction paper and Crayola® Construction Paper™ Crayons to illustrate mastery of each of the discussed shapes.
Students collect clean, recycled boxes, such as shoe boxes, milk cartons, etc. Remind students that each box needs to have an attached lid.
Students will be painting the background of each recycled cardboard box and then add illustrations of circles. Have students spread newspaper over their work area and put on a Crayola® Art Smock before beginning this process. Using Crayola® Washable Paint and So Big® Brushes, students paint their boxes. Allow paint to dry overnight.
Encourage students to add the same number of circles to their box blocks so that matching circle (or dot) sides can be stacked face to face. Use a different color paint for the circles/dots. Add Crayola® Glitter Glue to the dots. Dry overnight.
Allow class time for students to manipulate their blocks, matching circles/dots face to face as they build. Have students tell a story about what they build.
|
The Early Cases
The Supreme Court adopted its present law of affirmative action after an extended period of experimentation. In a series of plurality decisions, various justices and coalitions of justices toyed with a variety of legal standards to govern the use of racial classifications for the benefit of racial minorities. In the course of the deliberations that occurred among the justices, a number of legal issues emerged as having potential constitutional significance. Finally, after a series of Reagan and Bush appointments, the Supreme Court was able to speak through majority opinions in issuing its formula for constitutionally acceptable affirmative action. Under current law, most forms of race-conscious affirmative action now appear to be unconstitutional.
The Court's constitutional decisions are directly applicable only to governmental use of affirmative action programs because the state-action requirement of the Fourteenth Amendment—the legal provision on which the Court's decisions are based—does not apply to purely private action. Private affirmative action programs, however, must comply with the congressional statutes that govern the private use of racial classifications, such as the Civil Rights Act of 1964, which prohibits, inter alia, discrimination in education, employment, and public accommodations. It is unclear whether the Supreme Court will ultimately adopt the same standards for statutory and constitutional analysis of affirmative action.1
The problems posed by affirmative action are directly traceable to Brown v. Board of Education.2 In Brown I, the Supreme Court invalidated the separate-but-equal doctrine of Plessy v. Ferguson,3 and held that, under the equal protection clause of the Fourteenth Amendment, governmental use of racial classifications was constitutionally suspect.4 Then, in Brown II and its progeny, the Court not only held that the Constitution required a remedy for the continuing effects of past discrimination but stressed that the use of racial classifications was constitutionally compelled where necessary to provide an effective remedy for the prior constitutional violation. As a result,
|
You may have heard of the Richter scale used to study earthquakes. In 1935 Charles Richter developed a system to measure the magnitude (the amount of energy released) of an earthquake. Each whole number on the Richter scale indicates a tenfold increase in amplitude (greatness in size). Thus, a 7.5 earthquake on the Richter scale actually has ten times the amplitude of a 6.5 earthquake. There is no upper limit on the Richter scale, meaning it could be used to measure an earthquake of magnitude ten or more if one ever occurred. The most devastating earthquakes we know of measure 8 or 9 on the Richter scale.
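The tenfold-per-step rule translates directly into arithmetic. Here is a minimal illustrative sketch (not from the source; `amplitude_ratio` is our own name):

```python
# Illustrative sketch: each whole-number step on the Richter scale
# means a tenfold increase in amplitude.

def amplitude_ratio(magnitude_a, magnitude_b):
    """How many times larger the amplitude of quake A is than quake B."""
    return 10 ** (magnitude_a - magnitude_b)

print(amplitude_ratio(7.5, 6.5))  # -> 10.0 (one whole step: ten times)
print(amplitude_ratio(8.5, 6.5))  # -> 100.0 (two steps: a hundred times)
```

This is why the scale needs no upper limit: any larger quake just extends the same exponent.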
Scientists also measure seismic waves, or movements in the earth's crust. Special machines called seismographs record movement in the earth, including earthquakes that are so low in magnitude that people cannot feel them. You can make a simple seismograph to demonstrate how this machine works.
Fill a 2-liter soda bottle with water and use wire to suspend it about 1' above the surface of a table, using a sturdy stick or ruler set across stacks of heavy books. (The bottle should hang freely between the books.)
Tape a sheet of paper with aluminum foil underneath it to the table, beneath the bottle.
Roll a felt-tip pen in a piece of paper, and tape it loosely enough for the pen to slide up and down.
Tape this roll to the side of the bottle so that the pen's tip touches the paper.
Shake the table back and forth, gently at first and then a little harder. (Don't move the table's legs; shaking is enough.)
When you're done, examine the paper. What kind of record is there of the 'quake'?
There's another measurement used for earthquakes: the Modified Mercalli Scale is used to measure intensity, or how strong the effects of the quake are. The intensity varies based on position relative to the epicenter of the earthquake, so one earthquake does not have a set number from the scale assigned to it as with the Richter scale. Intensity is measured in Roman numerals I-XII. For a list of effects at each level, visit www.geo.mtu.edu/UPSeis/Mercalli.html
If it isn't confusing enough with so many things to measure, there's one more method for determining magnitude. The Moment Magnitude scale is more accurate than the Richter scale for measuring large earthquakes. In the world's worst recorded earthquake--the 1960 one in Chile--the magnitude on the Richter scale was 8.5, but on the Moment Magnitude (Mw) scale it was 9.5.
|
Collecting rocks and minerals is a fun way to learn about geology! Most kids are naturally inclined to pick up any "pretty" rock that they see, which provides a great learning opportunity. Start by keeping the interesting rocks you find on walks and hikes. You might want to wrap larger specimens in newspaper and put small ones in plastic ziplocks until you get home.
You can use a brush (old toothbrushes work well) to "clean" your specimens: brush the dirt off carefully, so that fragile rocks or minerals are not damaged.
Sort through the specimens that you've collected, putting similar stones together. Once you have done that, use a rocks and minerals guide and try to identify each rock. Match up color and description as best as possible. If you don't have a good guide, check one out at a local library. Use the tests in the next section for more properties that will help you identify a rock or mineral.
As you identify each specimen, make a label for it. If possible, write down the location and the date when the rock or mineral was collected. For rocks, you might also want to note information such as what minerals it is made of and whether it is igneous, sedimentary, or metamorphic. For minerals, note the elements that they are composed of: for example, Galena is PbS (lead sulfide).
Keep your specimens in a compartmentalized box or in cardboard egg cartons. If you want to number your specimens, use a permanent marker to mark each one consecutively. Write the number on each label as well. That way you won't worry about getting the rocks confused with each other.
Identifying Specimens: Testing Rocks & Minerals
There are many tests you can perform to help you identify your rock and mineral specimens. The first step is to examine your specimen with a magnifying glass and take note of its outside appearance. Look for the mineral's transparency. If you can see through the specimen, it is transparent. If light can pass through, but the specimen cannot be seen through, your mineral is translucent. Minerals that do not let light through are called opaque.
Next, test your specimen for hardness. Mineral hardness is measured on the Mohs Hardness Scale. On each level of the scale a mineral can be scratched by something of the same or higher level, but nothing lower. Number one on the Mohs scale is talc, because it is soft and very easy to scratch. Number ten is the diamond, because it is the hardest natural substance and can only be scratched by another diamond. Test your mineral specimen by trying to scratch it with your fingernail. Next try a copper penny, and then a steel nail. A fingernail has a hardness of 2.5, a penny is 3.5, and a steel nail is 5.5. If you are able to scratch your specimen with the penny but not with your fingernail, it has a hardness between 2.5 and 3.5. Also try scratching your specimen with another rock to see which one is harder.
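The scratch tests above bracket a specimen's hardness between the hardest tool that fails to scratch it and the softest tool that succeeds. A small illustrative sketch, assuming the tool hardness values given in the text (`hardness_range` and `TOOLS` are our own names):

```python
# Illustrative sketch: bracketing a specimen's Mohs hardness from
# scratch tests. Tool values follow the text: fingernail 2.5,
# copper penny 3.5, steel nail 5.5.

TOOLS = [("fingernail", 2.5), ("copper penny", 3.5), ("steel nail", 5.5)]

def hardness_range(scratched_by):
    """scratched_by: set of tool names that scratch the specimen.
    Returns (lower, upper) bounds on the specimen's Mohs hardness."""
    lower, upper = 0.0, 10.0
    for name, hardness in TOOLS:
        if name in scratched_by:
            upper = min(upper, hardness)  # a tool scratches only softer minerals
        else:
            lower = max(lower, hardness)  # specimen resists it, so it's harder
    return lower, upper

# Penny scratches it, fingernail does not: hardness between 2.5 and 3.5.
print(hardness_range({"copper penny", "steel nail"}))  # -> (2.5, 3.5)
```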
One last test that is commonly used is called a streak test. A mineral's "streak," or color when it is finely powdered, is always the same, even when the color of the mineral varies. (Sometimes the streak can be very different from the color of the mineral itself.) Rub your specimen across a piece of porcelain tile (called a streak plate) and examine the color it leaves behind. You can also rub it across smooth cement if you don't have a tile.
|
You Decide: Should the American space program send a manned mission to Mars?
This educational guide focuses on whether the American space program should send a manned mission to Mars. Students are invited to examine the arguments on both sides of the debate, developing critical thinking skills as they work through the activities. Students will learn how to support their arguments with evidence and reason. By the end of this guide, students are expected to determine where they stand on this controversial issue.
On the Case: An Introduction to the Genre of Mysteries
In this lesson, students will view a video from the series Reading Rainbow, "Mystery on the Docks" by Thacher Hurd. Mysteries provide an opportunity to teach reading strategies such as questioning, prediction and problem solving. This lesson will also focus on the characteristics common to all mysteries and the devices that authors use to create setting, characters, plot and suspense.
Will the Real Cinderella Please Stand Up?
Students learn that folk stories can be told in many ways and learn to write their own Cinderella story and script according to their own gender or culture. They also become aware of the steps that are necessary to make a film as they learn the various parts that go into the process.
Civil Disobedience Action Plan
This lesson acquaints students with historical and current concepts of civil disobedience. They will also consider issues that affect their own lives in relation to civil disobedience.
Modeling Research Skills
The fifth lesson in the Family, History and Memory module centers on developing students' research skills. Using the book The Diary of Anne Frank as a starting point, it guides students through the necessary steps for conducting good-quality research and developing a subsequent presentation. Students work as a group to develop their presentation. The lessons can be delivered as a module or as individual units.
Establishing Borders: The Expansion of the United States, 1846-48
This site offers geography and history activities showing how two years in history had an indelible impact on American politics and culture. Students interpret historical maps, identify territories acquired by the U.S., identify states later formed from these territories, examine the territorial status of Texas, and identify political, social, and economic issues related to the expansion of the U.S. in the 1840s.
West Point in the Making of America, 1802-1918
These activities have students look at the history of the U.S. Military Academy at West Point, its contributions to American history, and the accomplishments of selected West Point graduates. Proposed by George Washington in 1783 and created 20 years later, West Point became an important American institution before the Civil War.
Welcome to My Room, for Grade 3
UNIT 11 - Welcome to My Room. Grade 3, first school stage. Ten worksheets, prepared in Word, for learning vocabulary and grammar. Final project for the "Arvuti koolis" (Computers at School) course.
Interactive models for learning about biological processes, created by teachers who took the relevant courses (Visualising the Natural Sciences) run by the science education lectureship at the University of Tartu. Although the models cannot be considered fully polished, they are suitable for visualising the learning process, if only as a source of ideas. Users are advised to save the file to their own computer before use and to supplement it as needed.
Building Molecular Models of DNA, Protein, and Lipids
Molecular models of DNA, protein (alpha-helix and beta-pleated sheet), and lipids are built to scale. With a minimum of scientific jargon, these laboratory exercises effectively display the important aspects of three-dimensional shape and spatial orientation that are poorly presented in textbook illustrations and demonstrate how the shape of molecules and weak chemical associations like hydrogen bonds and hydrophobic/hydrophilic interactions combine to form the macromolecular associations fundam
Moral and ethical principles in end of life care
In many areas of health care, and especially in such areas as palliative care, increasing attention has been paid in recent years to patient autonomy, and the need to respect it. Autonomy has come to be seen as a very important aspect of the interaction between patients and those looking after them, and forms the basis for many ethical commitments, such as telling the truth to patients, and seeking their consent for health care interventions. In this unit we look at quite a wide range of ethical
Arctic Ozone from February 1, 2003 through March 30, 2003
This visualization shows the northern hemisphere ozone hole from February 1, 2003 through March 30, 2003.
The Middle East Dust Storm
Dust storms are an everyday occurrence in Saudi Arabia. This storm is of an unusual size.
Cloud Cover over Borneo: March 1, 1998
Cloud cover over Borneo for March 1, 1998 superimposed over a topographic image
Great Zoom into Siberia
Using data from different spacecraft and some powerful computer technology, visualizers at the Goddard Space Flight Center present you with a collection of American cities in a way you have never seen them before. Starting with our camera high above the Earth, we rush in towards the surface at what would be an impossible speed for any known vehicle. Passing through layers of atmosphere, the colors of our destinations shimmer with their own unique characteristics, and suddenly we find ourselves fl
North America NDVI 1993 August
NDVI in North America for August 1993, based on data collected over the 1981-2000 time frame.
After Hurricane Floyd: East Coast Flyover September 16, 1999 from SeaWiFS
Flying up the east coast of the United States from Florida to North Carolina using a SeaWiFS image taken September 16, 1999
Objectives:
1. Understanding of the fundamental concepts related to transmission media and wave propagation.
2. Basic concepts, similarities, and differences between electrostatic and dynamic fields.
3. Wave propagation in various materials.
4. The mathematical tools needed for the analysis of transmission media.
5. Skills for teamwork and for independent work.
6. Analysis and synthesis of the most commonly used transmission media.
UW 360 January 2011: Bebe Miller
The UW's Chamber Dance Company has a mission: to present, record and archive modern dance works of historical and artistic significance. In the fall of 2011, members of the company performed dances by choreographer Bebe Miller and had the opportunity to work with her directly. UW 360 profiles the fascinating people, programs and community connections that define the University of Washington. The show looks at a wide range of UW topics from solar energy, to heart tissue regeneration, to neighb
“Securing the International Oil Supply”
A panel featuring David Goldwyn, President of Goldwyn International Strategies LLC; Senior Fellow in the Energy Program at the Center for Strategic and International Studies; former Assistant Secretary of Energy for International Affairs; Scott Nauman, Manager of Economics and Energy in Corporate Planning for ExxonMobil Corporation; and Michael Klare, Five College Professor
|
A healthy ear emits soft sounds in response to the sounds that travel in. Detectable with sensitive microphones, these otoacoustic emissions help doctors test newborns' hearing. A deaf ear doesn't produce these echoes.
New research involving the University of Michigan and Oregon Health and Science University shows that, contrary to the current scientific thought, the emissions don't leave the ear the same way they entered. The findings give new insight into a phenomenon that researchers study to better understand hearing loss, and they reinforce a previous controversial study that came to a similar conclusion.
A paper on the research is published in the current issue of Proceedings of the National Academy of Sciences.
"The former wisdom on how otoacoustic emissions left the ear was that there was a backward-traveling wave going along the structure of the cochlea in the same way as the forward-traveling sound wave," said Karl Grosh, a professor in the U-M departments of Mechanical Engineering and Biomedical Engineering and an author of the paper. "These measurements show that is not the case."
Grosh said the next step is to develop tools to find out where hearing damage is occurring. "If we want to try to infer from the emission what's wrong with the ear, we have to understand how the emission is produced," Grosh said.
The experiment, performed at the Oregon Health and Science University in associate professor Tianying Ren's lab, showed that the sound waves coming out travel through the fluid of the inner ear, rather than rippling along the basilar membrane of the cochlea.
The cochlea, located deep in the ear, is shaped like a snail. The basilar membrane essentially cuts the inner channel of the cochlea diametrically in half into two chambers. Both chambers are filled with liquid.
Sound waves going into the ear undulate along the basilar membrane through the cochlea and eventually excite the organ of Corti, which senses and sends the sound signals to the brain through the auditory nerve.
Sounds coming out of the ear, according to results from this experiment, likely travel through the fluid on either side of the basilar membrane.
For this experiment, the researchers used laser interferometers, which detect waves, to measure vibrations of the basilar membrane in response to sound at two locations in the cochlea of gerbils. They detected evidence of sound waves traveling forward on the membrane, but they found no evidence of backward-traveling waves.
"Our new method can detect vibrations of less than a picometer, 1,000 times smaller than the diameter of an atom. The new data demonstrate that there is no detectable backward-traveling wave at physiological sound levels across a wide frequency range," said Ren, principal investigator of this project. "This knowledge will change scientists' fundamental thinking on how waves propagate inside the cochlea, or how the cochlea processes sounds."
The paper is called "Reverse wave propagation in the cochlea."
Source: University of Michigan
|
(Phys.org)—Two teams working independently have succeeded in entangling a single electron spin with a single photon in a solid-state platform. Both teams describe their process and results in papers published in the journal Nature. The two teams used laser pulses fired at quantum dots to entangle pairs of electrons and photons, then used different techniques to remove either the resulting shared color or the shared polarization.
To create a quantum computer, scientists believe it will be necessary to combine or connect stationary quantum bits (qubits) with mobile or "flying" qubits. Thus, research has been focused on building a system in which this is possible. In this new research, quantum dots were used to represent the stationary particles while photons were used to represent those that fly. To connect them, the researchers relied on entanglement between pairs of particles and the properties they share.
In their labs, both teams used very small semiconductors, i.e. quantum dots, to trap a single electron. They then fired a laser at the dot to set its spin state to either up or down (representing "0" or "1"). Next, both teams fired another laser pulse at the dot to force it to a higher energy level. Doing so caused an entangled photon to be released as the energy decayed. The photon was emitted as either horizontally or vertically polarized, with a wavelength corresponding to either a red or a blue color. It was at this stage that the work of the two teams diverged. To use the information from a qubit in a quantum system, only one of the two shared properties can be allowed to remain; the other must be removed. The first team's work involved removing the color; the second's, the polarization.
To remove the color the first team ran the photon through a crystal that was also shot with a laser beam. Doing so caused the colors to smear which was enough to remove that property from the entangled particles. The second team removed the polarization by allowing the photon to pass through a polarizing filter which forced it into an anticlockwise state which effectively erased the shared property from the particles.
More information: 1. Quantum-dot spin–photon entanglement via frequency downconversion to telecom wavelength, Nature, 491, 421–425 (15 November 2012) doi:10.1038/nature11577
Long-distance quantum teleportation and quantum repeater technologies require entanglement between a single matter quantum bit (qubit) and a telecommunications (telecom)-wavelength photonic qubit. Electron spins in III–V semiconductor quantum dots are among the matter qubits that allow for the fastest spin manipulation and photon emission, but entanglement between a single quantum-dot spin qubit and a flying (propagating) photonic qubit has yet to be demonstrated. Moreover, many quantum dots emit single photons at visible to near-infrared wavelengths, where silica fibre losses are so high that long-distance quantum communication protocols become difficult to implement. Here we demonstrate entanglement between an InAs quantum-dot electron spin qubit and a photonic qubit, by frequency downconversion of a spontaneously emitted photon from a singly charged quantum dot to a wavelength of 1,560 nanometres. The use of sub-10-picosecond pulses at a wavelength of 2.2 micrometres in the frequency downconversion process provides the necessary quantum erasure to eliminate which-path information in the photon energy. Together with previously demonstrated indistinguishable single-photon emission at high repetition rates, the present technique advances the III–V semiconductor quantum-dot spin system as a promising platform for long-distance quantum communication.
2. Observation of entanglement between a quantum dot spin and a single photon, Nature, 491, 426–430 (15 November 2012) doi:10.1038/nature11573
Entanglement has a central role in fundamental tests of quantum mechanics1 as well as in the burgeoning field of quantum information processing. Particularly in the context of quantum networks and communication, a main challenge is the efficient generation of entanglement between stationary (spin) and propagating (photon) quantum bits. Here we report the observation of quantum entanglement between a semiconductor quantum dot spin and the colour of a propagating optical photon. The demonstration of entanglement relies on the use of fast, single-photon detection, which allows us to project the photon into a superposition of red and blue frequency components. Our results extend the previous demonstrations of single-spin/single-photon entanglement in trapped ions, neutral atoms and nitrogen–vacancy centres to the domain of artificial atoms in semiconductor nanostructures that allow for on-chip integration of electronic and photonic elements. As a result of its fast optical transitions and favourable selection rules, the scheme we implement could in principle generate nearly deterministic entangled spin–photon pairs at a rate determined ultimately by the high spontaneous emission rate. Our observation constitutes a first step towards implementation of a quantum network with nodes consisting of semiconductor spin quantum bits.
© 2012 Phys.org
|
Oxidation and Reduction
Oxidation is the loss of electrons or loss of hydrogen and addition of oxygen.
Reduction is the gain of electrons or gain of hydrogen and loss of oxygen.
Redox refers to a reaction where both of these happens. Remember it with OIL RIG (Oxidation Is Loss, Reduction Is Gain)
The oxidation state of a simple ion is simply its charge. For example, the oxidation state of Mg2+ is +2. However, we also assign oxidation states to atoms in compounds: an atom's oxidation state is the charge it would have if it were a simple ion rather than bonded. To work out the oxidation states in some compounds we need to use a few rules.
|Rule||Example|
|The oxidation state of an atom in an element is 0.||Br in Br2 is 0.|
|Fluorine is always -1; O is nearly always -2; Cl is usually -1.||(no example)|
|The oxidation states in a polyatomic ion sum to the charge on the ion.||In PO43-, the four oxygens contribute -8 and the overall charge is -3, so P must be -3 - (-8) = +5.|
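The polyatomic-ion rule is simple arithmetic: the unknown oxidation state is the overall charge minus the sum of the known states. A minimal illustrative sketch (the function name is our own):

```python
# Illustrative sketch of the polyatomic-ion rule: the oxidation states
# in an ion must sum to the ion's overall charge.

def unknown_oxidation_state(overall_charge, known_states):
    """known_states: list of (oxidation_state, atom_count) for known atoms."""
    return overall_charge - sum(state * count for state, count in known_states)

# P in the phosphate ion PO4(3-): four O at -2 each, overall charge -3.
print(unknown_oxidation_state(-3, [(-2, 4)]))  # -> 5, i.e. P is +5
```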
The oxidation state gives a compound its name: for example, iron(IV) oxide means the oxidation state of iron is +4, so there must be two oxygen atoms (each -2) bonded to it.
S-block metals lose their electrons in reactions, so they are good reductants (they are oxidised easily). P-block elements, by contrast, can have several different oxidation states; those in group VII gain electrons in reactions, so they are good oxidants (they gain electrons easily).
Have a look at the reaction below.
Fe2O3 + 2Al → Al2O3 + 2Fe
In this reaction iron is reduced (is an oxidant) because it gains electrons and goes from ion to element, and the reverse is true for aluminium which is oxidised (is a reductant). To show this we use half equations as below.
Fe3+ + 3e- → Fe
Al → Al3+ + 3e-
Another skill you need is combining half-equations. Use the following example, where concentrated nitric acid is added to copper. To do this you simply need to have the same number of electrons in each half-equation and then add them together; because the electron counts match, the electrons cancel.
Remember that in acid, you can balance with H+ ions as well.
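The electron-matching step can be sketched as a small arithmetic helper (illustrative; `scale_factors` is our own name). For the copper and concentrated nitric acid example, we assume the standard half-equations Cu → Cu2+ + 2e- and NO3- + 2H+ + e- → NO2 + H2O:

```python
# Illustrative sketch: to combine two half-equations, scale each so the
# electron counts match (their lowest common multiple); the electrons
# then cancel when the equations are added.
from math import lcm  # Python 3.9+

def scale_factors(e_reduction, e_oxidation):
    """Electrons in each half-equation -> multipliers that equalize them."""
    m = lcm(e_reduction, e_oxidation)
    return m // e_reduction, m // e_oxidation

# Fe3+ + 3e- -> Fe (3 electrons) with Al -> Al3+ + 3e- (3 electrons):
print(scale_factors(3, 3))  # -> (1, 1): add directly, electrons cancel
# NO3- reduction (1 electron) with Cu oxidation (2 electrons):
print(scale_factors(1, 2))  # -> (2, 1): double the reduction half-equation
```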
|
Kids learn the appearance and value of pennies, count pennies, and write the number of cents for the pennies on this first grade math worksheet.
Counting quarters is simple once you learn how. Practice counting quarters with your child using this math worksheet.
Let's go shopping! In this math worksheet, your kid will add up the value of groups of coins, then draw lines from coins to the items that cost the same amount.
Let your first grader flex his financial savvy with this fun worksheet. Your child must circle the coins that add up to the value of each present.
This worksheet features a piggy bank and is a fun, familiar way to help preschoolers begin counting money.
Your child is shopping for gifts, and it's time to make her purchases. This fun worksheet asks your child to circle the coins she needs to purchase each gift.
Exact change only, please! In this first grade worksheet, your child will match each group of coins to an item with a price tag of equal value.
Being able to identify and count coins is a valuable skill for kids. In this worksheet, your child will count coins, then write their total values on the lines.
Kids learn the appearance and value of dimes, count dimes, and write the number of cents for the dimes on this first grade math worksheet.
|
Analyze Air Quality with Lichens
Lichens are composite organisms consisting of a fungus (the mycobiont) and a photosynthetic partner (the photobiont or phycobiont) growing together in a symbiotic relationship. The photobiont is usually either a green alga (commonly Trebouxia) or cyanobacterium (commonly Nostoc).
The morphology, physiology and biochemistry of lichens are very different from those of the isolated fungus and alga in culture. Lichens occur in some of the most extreme environments on Earth—arctic tundra, hot deserts, rocky coasts, and toxic slag heaps.
However, they are also abundant as epiphytes on leaves and branches in rain forests and temperate woodland, on bare rock, including walls and gravestones, and on exposed soil surfaces (e.g., Collema) in otherwise mesic habitats.
The roofs of many buildings have lichens growing on them. Lichens are widespread and may be long-lived; however, many are also vulnerable to environmental disturbance, and may be useful to scientists in assessing the effects of air pollution, ozone depletion, and metal contamination.
Lichens are informally classified by growth form into:
- crustose (crusty) lichens, which grow as a thin crust tightly attached to the surface;
- foliose (leafy) lichens, which have flat, leaf-like lobes;
- fruticose (shrubby) lichens, which are branched and bush-like.
Lichens are sensitive to air pollution, especially the acidity of the air. Therefore, their presence or absence can be used to gauge how clean the air is. Shrubby and leafy lichens only survive in clean air, and when an area is really polluted you will not find any lichens at all.
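The rule of thumb above can be sketched as a tiny lookup. This is purely illustrative (our own function name and categories, not part of the EpiCollect project described below):

```python
# Illustrative sketch: which lichen growth forms survive at a site
# gives a rough, qualitative air-quality reading.

def air_quality_estimate(growth_forms_present):
    """growth_forms_present: set drawn from {'shrubby', 'leafy', 'crusty'}."""
    if "shrubby" in growth_forms_present or "leafy" in growth_forms_present:
        return "clean air"        # sensitive forms survive only in clean air
    if growth_forms_present:
        return "some pollution"   # only hardier crusty forms remain
    return "heavily polluted"     # no lichens at all

print(air_quality_estimate({"shrubby", "crusty"}))  # -> clean air
print(air_quality_estimate(set()))                  # -> heavily polluted
```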
The goal of this application is to help to analyze, classify and measure the size of the lichens in order to study the quality of air in different areas of the cities.
Look for lichens on walls, stones and trees take pictures with your phone and submit the data using this EpiCollect Plus Lichens project. Then, you can help in measuring the size of the lichen in this web application!
|
In living and past peoples, there is a wide range of variability. Despite this variability, our bones have features that can be clues to ancestry. Many of these features reflect evolutionary processes, including adaptation to the environment.
Bone cells retain "biogeographical" information that is found in our DNA. These inherited markers are due to mutational changes that gradually accumulate and differentiate populations over time. DNA can help associate an individual with a region of the world.
We can also assess ancestral origins by looking at the skeleton itself. The bones of the skull express inherited features from one generation to the next. Measuring the cranium gives us information that is similar to that from DNA. By comparing a skull's measurements with data from populations worldwide, scientists can statistically evaluate that individual's relationship to a world group.
Identifying Ancestry in the Colonial Chesapeake
The archaeological cases in the Written in Bone exhibition focus on identifying skeletal remains from only three groups who were here in the 1600s and early 1700s — individuals of American Indian, European, and African origins.
- Individuals with American Indian ancestry have proportionately wider faces and shorter, broader cranial vaults.
- Individuals with European ancestry tend to have straight facial profiles and narrower faces with projecting, sharply angled nasal bones.
- Individuals with sub-Saharan African ancestry generally show greater facial projection in the area of the mouth, wider distance between the eyes, and a wider nasal cavity.
Illustrations by Diana Marques
- The color of a bone does not reveal ancestry. Bone color has more to do with what happens to a body after death than in life.
|
Oh no! The mayor finally finished building the city, but all the signs got mixed up! Can your child help him sort out the mess?
Help your second grader learn how to read a math table by using this math farm table to answer a set of questions.
Can your second grader make her own bar chart? Use this pretend survey of 38 people and their favorite cities to find out!
Give your second grader some practice working with data with this fun-to-complete favorite veggie survey.
Kids practice making a Venn diagram about kinds of gift wrap by sorting the gifts in their correct spaces in this 2nd grade math worksheet.
If your second grader is stumped by bar graphs, clear up the confusion with this worksheet that helps kids learn how to read and interpret a bar graph.
Celebrate the winter Olympics with this 2nd grade math worksheet in which children practice reading, analyzing, and computing data in a bar graph.
Pictographs are a great introduction to working with data and graphs. Kids help the hamburger cafe compare the number of hamburgers they sold using pictographs.
If your child has ever played Battleship, chances are he's familiar with grids. Can your child help build a new town by drawing buildings on the grid map?
Help your second grader learn how to interpret a pie graph with this worksheet all about kids and their pets.
|
Primary Sources and Research – 4
Students' research will use the History Detective Form to examine artifacts and primary source documents, making observations about them and generalizing about what they might mean, and how they might be used by historians. The primary source documents are authentic and come from the life of Elizabeth K. Steen, an anthropologist and explorer whose life is largely forgotten (this in itself is a good history lesson about how history is written and the odd quirks that influence what is recorded for future generations). This lesson takes two 50-minute sessions to accomplish.
Grade 4 State Standards
Grade 4 Information Literacy Standards
Objectives
1. Students will recognize primary sources and secondary sources and be able to describe the difference.
2. Students will experience how historians conduct their research.
3. Students will have the opportunity to learn collaboratively from their peers as they examine primary source items in teams.
4. Students will select one primary source and one secondary source to use in their own slide show teaching other students the difference between primary and secondary sources.
5. Students will produce a three-slide PowerPoint presentation about what they know about primary and secondary sources, with illustrations of the two, using the template.
Materials and Resources
1. Letters, artifacts, old photographs, newspaper articles, postcards, and other items which offer clues about Elizabeth K. Steen's life.
2. Secondary resources for comparison purposes.
3. White gloves for the junior archivists to wear when touching the aged documents.
4. Students come equipped with worksheets and pencils.
5. Graphic resources provided on the web for students to select from for their slide presentations, including a template for PowerPoint.
Students will assemble as the Media Specialist begins the overview PowerPoint and describes the housekeeping aspects of the lesson. Embedded in this lesson is a film from the National Park Service, Down Mobile Way (1935), which the Media Specialist will use to work through the Primary/Secondary Resource Inquiry Form as an example of how it should be completed.
|
Multiplying Decimals Teacher Resources
Find Multiplying Decimals educational ideas and activities
Developing the Concept: Exponents and Powers of Ten
Here is an exponents lesson which invites learners to examine visual examples of multiplication and division using powers of 10. They also practice solving problems that their instructors model. If you are new to teaching these skills,...
5th - 7th Math CCSS: Adaptable
Graphing Calculator Activity: Multiplying and Dividing Mixed Numbers
In this graphing calculator worksheet, students explore necessary steps to multiply and divide mixed numbers using a graphing calculator. In groups, students measure objects in their classroom and write equations using their collected...
8th - 9th Math
Fraction Equivalence, Ordering, and Operations
Need a unit to teach fractions to fourth graders? Look no further than this well-developed and thorough set of lessons that take teachers through all steps of planning, implementing, and assessing their lessons. Divided into eight...
3rd - 5th Math CCSS: Designed
|
Fossil galaxy reveals clues to early universe
A tiny galaxy has given astronomers a glimpse of a time when the first bright objects in the universe formed, ending the dark ages that followed the birth of the universe.
Astronomers from Sweden, Spain and the Johns Hopkins University used NASA's Far Ultraviolet Spectroscopic Explorer (FUSE) satellite to make the first direct measurement of ionizing radiation leaking from a dwarf galaxy undergoing a burst of star formation. The result, which has ramifications for understanding how the early universe evolved, will help astronomers determine whether the first stars -- or some other type of object -- ended the cosmic dark age.
The team presented its results Jan. 12 at the American Astronomical Society's 207th meeting in Washington, D.C.
Considered by many astronomers to be relics from an early stage of the universe, dwarf galaxies are small, very faint galaxies containing a large fraction of gas and relatively few stars. According to one model of galaxy formation, many of these smaller galaxies merged to build up today's larger ones. If that is true, any dwarf galaxies observed now can be thought of as "fossils" that managed to survive -- without significant changes -- from an earlier period.
Led by Nils Bergvall of the Astronomical Observatory in Uppsala, Sweden, the team observed a small galaxy, known as Haro 11, which is located about 281 million light years away from Earth in the southern constellation of Sculptor. The team's analysis of FUSE data produced an important result: between 4 percent and 10 percent of the ionizing radiation produced by the hot stars in Haro 11 is able to escape into intergalactic space.
Ionization is the process by which atoms and molecules are stripped of electrons and converted to positively charged ions. The history of the ionization level is important to understanding the evolution of structures in the early universe, because it determines how easily stars and galaxies can form, according to B-G Andersson, a research scientist in the Henry A. Rowland Department of Physics and Astronomy at Johns Hopkins and a member of the FUSE team.
"The more ionized a gas becomes, the less efficiently it can cool. The cooling rate in turn controls the ability of the gas to form denser structures, such as stars and galaxies," Andersson said. The hotter the gas, the less likely it is for structures to form, he said.
The ionization history of the universe therefore reveals when the first luminous objects formed, and when the first stars began to shine.
The Big Bang occurred about 13.7 billion years ago. At that time, the infant universe was too hot for light to shine. Matter was completely ionized: atoms were broken up into electrons and atomic nuclei, which scatter light like fog. As it expanded and then cooled, matter combined into neutral atoms of some of the lightest elements. The imprint of this transition today is seen as cosmic microwave background radiation.
The present universe is, however, predominantly ionized; astronomers generally agree that this reionization occurred between 12.5 and 13 billion years ago, when the first large-scale galaxies and galaxy clusters were forming. The details of this ionization are still unclear, but are of intense interest to astronomers studying these so-called "dark ages" of the universe.
Astronomers are unsure if the first stars or some other type of object ended those dark ages, but FUSE observations of "Haro 11" provide a clue.
The observations also help increase understanding of how the universe became reionized. According to the team, likely contributors include the intense radiation generated as matter fell into black holes that formed what we now see as quasars and the leakage of radiation from regions of early star formation. But until now, direct evidence for the viability of the latter mechanism has not been available.
"This is the latest example where the FUSE observation of a relatively nearby object holds important ramifications for cosmological questions," said Dr. George Sonneborn, NASA/FUSE Project Scientist at NASA's Goddard Space Flight Center, Greenbelt, Md.
This result has been accepted for publication by the European journal Astronomy and Astrophysics.
Bergvall will be available to answer questions from the media about this research at poster #175.21, during the poster-viewing sessions at the AAS meeting on Thursday, January 12. A high resolution image is available from http://www.jhu.edu/news/home06/jan06/haro.html or from Lisa De Nike at [email protected] or 443-287-9960.
The FUSE project is a NASA Explorer mission developed in cooperation with the French and Canadian space agencies by the Johns Hopkins University, Baltimore, Md., the University of Colorado, Boulder, and the University of California, Berkeley. The mission is operated out of Johns Hopkins University's Homewood campus in Baltimore. NASA Goddard manages the program for NASA's Science Mission Directorate. For more information about the FUSE mission, visit http://fuse.pha.jhu.edu
Last reviewed: By John M. Grohol, Psy.D. on 21 Feb 2009
Published on PsychCentral.com. All rights reserved.
|
The first printing press designed to use the newly invented Cherokee alphabet arrives at New Echota, Georgia.
The General Council of the Cherokee Nation had purchased the press with the goal of producing a Cherokee-language newspaper. The press itself, however, would have been useless had it not been for the extraordinary work of a young Cherokee named Sequoyah, who invented a Cherokee alphabet.
As a young man, Sequoyah had joined the Cherokee volunteers who fought under Andrew Jackson against the British in the War of 1812. In dealing with the Anglo soldiers and settlers, he became intrigued by their “talking leaves”: printed books that he realized somehow recorded human speech. In a brilliant leap of logic, Sequoyah comprehended the basic nature of symbolic representation of sounds and in 1809 began working on a similar system for the Cherokee language.
Ridiculed and misunderstood by most of the Cherokee, Sequoyah made slow progress until he came up with the idea of representing each syllable in the language with a separate written character. By 1821, he had perfected his syllabary of 86 characters, a system that could be mastered in less than a week. After obtaining the official endorsement of the Cherokee leadership, Sequoyah’s invention was soon adopted throughout the Cherokee nation. When the Cherokee-language printing press arrived on this day in 1828, the lead type was based on Sequoyah’s syllabary. Within months, the first Indian language newspaper in history appeared in New Echota, Georgia. It was called the Cherokee Phoenix.
One of the so-called “five civilized tribes” native to the American Southeast, the Cherokee had long embraced the United States’ program of “civilizing” Indians in the years after the Revolutionary War. In the minds of Americans, Sequoyah’s syllabary further demonstrated the Cherokee desire to modernize and fit into the dominant Anglo world. The Cherokee used their new press to print a bilingual version of their republican constitution, and they took many other steps to assimilate Anglo culture and practice while still preserving some aspects of their traditional language and beliefs.
Sadly, despite the Cherokee’s sincere efforts to cooperate and assimilate with the Anglo-Americans, their accomplishments did not protect them from the demands of land-hungry Americans. Repeatedly pushed westward in order to make room for Anglo settlers, the Cherokee lost more than 4,000 of their people (nearly a quarter of the nation) in the 1838-39 winter migration to Oklahoma that later became known as the Trail of Tears. Nonetheless, the Cherokee people survived as a nation in their new home, thanks in part to the presence of the unifying written language created by Sequoyah.
In recognition of his service, the Cherokee Nation voted Sequoyah an annual allowance in 1841. He died two years later on his farm in Oklahoma. Today, his memory is also preserved in the scientific name for the giant California redwood tree, Sequoia.
|
The conquest of Central America is primarily the story of the conquest of the Maya states in northern Central America (1551–1697). There were, however, other tribes further south. Rodrigo de Bastidas established Spain's claim to the isthmus of Panama. He sailed along the Darién coast (March 1501). Christopher Columbus, on his fourth voyage, sailed along the Caribbean coast of Central America from the Bay of Honduras to Panama. The next forays to Central America were launched from the growing Spanish colony of Cuba. Vasco Núñez de Balboa was the first European to cross the Isthmus of Panama. Balboa claimed the Pacific Ocean and all the lands adjoining it for the Spanish Crown. The next Spanish expedition from Cuba landed on the Yucatán Peninsula looking for slaves to work the Cuban plantations, as the Native American population on the island had been decimated.
The focus of the Spanish, however, turned north to the Aztec Empire. After defeating the Aztecs, the Spanish turned their attention south. The first Conquistador to lead an expedition south was Pedro de Alvarado, one of the most ambitious and cruel of the Conquistadores. The principal campaigns to control Central America were fought in the north by Alvarado. The strongest tribes were located in the highlands of Guatemala and El Salvador. These were the Maya and related states. Alvarado reached Guatemala traveling down the Pacific coast (1523). He commanded a relatively small force made up of a few hundred Spanish horsemen and soldiers, but backed by Native American allies he prevailed in a bloody campaign.
The Maya are one of the best studied of the major pre-Columbian Native American civilizations. Unlike the Aztecs and Incas, the Maya were a much older civilization which had passed its peak by the time of the encounter with the Europeans. The Maya first appear in the Yucatán Peninsula about 2600 B.C. They became a civilization of major importance about 250 AD in what is now southern Mexico, Guatemala, western Honduras, El Salvador, and northern Belize. Unlike the Inca and Aztecs, the Maya were not a centralized imperial state. Their virtually independent city-states were connected by extensive trade routes. The Maya show evidence of assimilating the technology and culture of previous civilizations which had developed to the north in modern Mexico, especially the Olmecs. The Maya are especially noteworthy for their achievements in astronomy, mathematics, accurate calendars, hieroglyphics, and architecture. Mayan hieroglyphics, probably of Olmec origin, was the most sophisticated writing system in Meso-America. The Mayan architectural heritage is especially impressive. Many sites in the Yucatán and northern Central America include temple-pyramids, palaces, and observatories. The Maya especially venerated the jaguar and built temple-pyramids to the being they saw as the Lord of the Underworld. As with the other Meso-American civilizations, these edifices were built without metal tools, beasts of burden, or even the wheel. Mayan agriculture was especially impressive, as methods such as storing rainwater in underground reservoirs dealt with the limited available groundwater. The Maya were also accomplished weavers and potters. The Spanish encountered the Maya centuries after their classical era, unlike the Aztec and Inca, who were in their ascendancy. The decline of the Maya is one of the great mysteries in archaeology. There are numerous theories. Increasingly, archaeologists are coming to believe that the decline was a more gradual process than was once believed. The process appears to have involved expanding populations which required overcultivation of available land, resulting in declining yields that could not support dense populations.
Rodrigo de Bastidas established Spain's claim to the isthmus of Panama. He sailed along the Darién coast (March 1501). Christopher Columbus, on his fourth and last voyage, sailed along the Caribbean coast of Central America from the Bay of Honduras to Panama (1502-03). When, after some difficulty, he finally made it back to Spain, he reported seeing natives wearing gold ornaments in Costa Rica. The first effort at colonization occurred in what is now Costa Rica, but failed (1506). The next forays to Central America were launched from the growing Spanish colony of Cuba. Vasco Núñez de Balboa was the first European to cross the Isthmus of Panama and view the Pacific Ocean (1513). Balboa claimed the Pacific and all the lands adjoining it for the Spanish Crown. The next Spanish expedition from Cuba landed on the Yucatán Peninsula looking for slaves to work the Cuban plantations, as the Native American population on the island had been decimated. The focus of the Spanish, however, turned north to the Aztec Empire. Only gradually did the conquest of Central America take place where no rich empires were found.
Spain then colonized the Caribbean and then, hearing rumors of a rich inland empire, began to plan to colonize the mainland. The Aztec were a war-like people located in the central valley of Mexico and dominated much of southern Mexico during the 15th and early 16th centuries until the arrival of the Spanish conquistadores. Their capital Tenochtitlan was unknown to Europe, but was one of the great cities of the world. Diego Velasquez, Spanish Governor of Cuba, put a trusted soldier, Hernando Cortés, in charge of an expedition to the mainland. Hernando Cortés sailed from Cuba in 1519. Cortés's campaign against the Aztecs is one of the most dramatic events of history, brilliantly told by several historians. The golden booty helped make Spain the leading European power. It also provided a secure base for the further conquest of the Americas, and this meant Central America.
Although Yucatán is a part of modern Mexico, it was at the time of Cortés's conquest of Mexico (the Aztecs) not a part of the Aztec Empire, but rather populated by the Maya. When the Spaniards first reached Yucatán (1517-19), much of the peninsula was controlled by ruling castes of central-Mexican origin. They rebuilt Chichén Itzá into a powerful Early Postclassic center. During the Late Postclassic period (1250-1520) the center of Maya leadership in Yucatán had shifted to Mayapán, which briefly reunified the region. Mayapán was defeated (1441). As a result, northern Yucatán split into sixteen small city-states. This fragmentation meant that the Spanish did not encounter a strong, centrally organized Mayan state. Francisco de Montejo, a Cortés ally, became a wealthy nobleman in Mexico. He lobbied the crown to grant him a Capitulación (royal contract) to raise an army and conquer Yucatán. The Crown hesitated for several years and finally issued the Capitulación (1526). The Spanish conquest consisted of three campaigns (1527-46).
The pre-Conquest population of Nicaragua is not known with any certainty. Some estimates suggest it may have been as high as 1 million people. One report speculates about an Aztec trading post. Columbus passed along the Caribbean coast (1502). The Spanish made no attempt to colonize what is now modern Nicaragua until two decades later. Gil Gonzalez Davila led the first Spanish expedition (1520). Francisco Fernandez de Cordoba conquered the area (1524). He founded the modern cities of Granada and Leon. The Native American population was decimated by the Spanish conquest. The Spanish killed some Native Americans in the conquest. European diseases killed even more. The population was largely enslaved. As many as 0.2 million were sent to work in Spanish mines in the new South American colony carved out of the Inca lands. The royal governor in Nicaragua, Francisco de Castañeda, favored slave hunting. Rodrigo de Contreras became governor of Nicaragua (1532). Reformer Bartolomé de Las Casas and Emperor Carlos V forced Contreras to cancel a slave trading expedition (1536). The Governor expelled the reformer. A Spanish census about three decades after the conquest showed only 11,137 Native Americans left in the country’s heartland of western Nicaragua (1548).
The Maya dominated western Honduras, but declined (early 9th century). The most important Mayan city-state was Copán. Christopher Columbus landed on the coast of modern Honduras near modern Trujillo on his fourth and last voyage (1502). He named the country Honduras because the waters were so deep along the coast. Conquistador Hernán Cortés landed in Honduras (1525). He left 6 months later after failing to find another rich Native American empire to plunder, returning to Spain. Spanish planters on Cuba raided the northern coast attempting to capture Native Americans they could enslave. Pedro de Alvarado began the actual conquest of Honduras. He defeated the resistance led by Çiçumba near Ticamaya (1536). Alvarado divided the conquered native lands among his men. The natives living there essentially became slaves in the repartimiento system. Native resistance to Spanish brutality flared up in Gracias a Dios, Comayagua, and Olancho (1537-38). Lempira led the uprising in Gracias a Dios.
Northern Central America, including Belize, was dominated by the Maya. The Maya appear to have begun to move into coastal Belize from the Guatemalan Highlands (about 1500 BC). The high point of Mayan civilization in Belize was a few centuries before the arrival of the Spanish (1200 AD). Important Maya sites include: Caracol, Lamanai, Lubaantun, Altun Ha, and Xunantunich. Archaeologists describe Maya cities with high population densities. Columbus sailed along the coast of Central America, including Belize (1502). There was for a long time, even after Spanish settlement of Mexico and elsewhere in Central America, no settlement along the coast of what is now Belize. The first European settlement seems to have been inadvertent: shipwrecked English seamen (1638). More English settlements followed, but the English hold on the coast was precarious.
Christopher Columbus was the first European to find what is now Costa Rica. He landed near modern Puerto Limón during his fourth and last voyage (1502). He found a friendly population and noted the gold decorations that some of the local population wore. This was the origin of the name Costa Rica (rich coast). Columbus speculated that there might be a rich empire farther inland. Spanish King Ferdinand, as a result, appointed Diego de Nicuesa governor of the region and dispatched him to colonize it (1506). The Native Americans this time were not friendly. The diseases the Spanish brought may have been a factor. De Nicuesa was confronted by the tropical jungle, disease, and unfriendly natives. With half his party dead, he returned to Spain. The Spanish persisted. The major effort, led by Gil González Dávila, founded a colony on the Golfo de Nicoya (1522).
He launched a bloody conquest of the natives, killing and torturing them into submission. González returned with some gold, but many in his expedition died and there was still no settlement in Costa Rica. Juan Vásquez de Coronado arrived as another royal governor (1562). He decided that to found a permanent settlement he needed to move inland to the central highlands. Here he founded Cartago, which became the first permanent Spanish colony in Costa Rica (1563). The Spanish tended to found cities along the coast where they could be supported by their navy. Tropical diseases and the fertility of the central highlands with their rich volcanic soil caused Vásquez to move inland.
|
Nibbling by herbivores can have a greater impact on the width of tree rings than climate, new research has found. The study, published this week in the British Ecological Society's journal Functional Ecology, could help increase the accuracy of the tree ring record as a way of estimating past climatic conditions.
Many factors in addition to climate are known to affect the tree ring record, including attack from parasites and herbivores, but determining how important these other factors have been in the past is difficult.
Working high in the mountains of southern Norway, midway between Oslo and Bergen, a team from Norway and Scotland fenced off a large area of mountainside and divided it into different sections into each of which a set density of domestic sheep was released every summer.
After nine summers, cross sections of 206 birch trees were taken and tree ring widths were measured. Comparing these with local temperature and the numbers of sheep at the location where the tree was growing allowed the team to disentangle the relationship between temperature and browsing by sheep and the width of tree rings.
According to lead author Dr James Speed of the NTNU Museum of Natural History and Archaeology: "We found tree ring widths were more affected by sheep than the ambient temperature at the site, although temperatures were still visible in the tree ring records. This shows that the density of herbivores affects the tree ring record, at least in places with slow-growing trees."
The impact of large herbivores on tree rings has, until now, been largely unknown, so these findings could help increase the accuracy of the tree ring record as a way of estimating past climatic conditions, says Dr Speed: "Our study highlights that other factors interact with climate to affect tree rings, and that to increase the accuracy of the tree ring record to estimate past climatic conditions, you need to take into account the history of wild and domestic herbivores. The good news is that past densities of herbivores can be estimated from historic records, and from the fossilised remains of spores from fungi that live on dung."
"This study does not mean that using tree rings to infer past climate is flawed as we can still see the effect of temperatures on the rings, and in lowland regions tree rings are less likely to have been affected by herbivores because they can grow out of reach faster," he explains.
Tree rings give us a window into the past, and have been widely used as climate recorders since the early 1900s. The growth rings are visible in tree trunk cross sections, and are formed in seasonal environments as the wood is laid down faster in summer than winter. In years with better growing conditions (in cool locations this usually means warmer) tree rings are wider, and because trees can be very long-lived and wood is easily preserved, for example in bogs and lakes, this allows very long time-series to be established, and climatic conditions to be estimated from the ring widths.
More information: James D. M. Speed, Gunnar Austrheim, Alison J. Hester and Atle Mysterud (2011), 'Browsing interacts with climate to determine tree ring increment', doi: 10.1111/j.1365-2435.2011.01877.x, is published in Functional Ecology on 27 July 2011.
|
In economics, a cost curve is a graph of the costs of production as a function of total quantity produced. In a free market economy, productively efficient firms use these curves to find the optimal point of production (minimizing cost), and profit maximizing firms can use them to decide output quantities to achieve those aims. There are various types of cost curves, all related to each other, including total and average cost curves, and marginal ("for each additional unit") cost curves, which are equal to the derivative of the total cost curves. Some are applicable to the short run, others to the long run.
- 1 Short-run average variable cost curve (SRAVC)
- 2 Short-run average total cost curve (SRATC or SRAC)
- 3 Long-run average cost curve (LRAC)
- 4 Short-run marginal cost curve (SRMC)
- 5 Long-run marginal cost curve (LRMC)
- 6 Graphing cost curves together
- 7 Cost curves and production functions
- 8 Relationship between different curves
- 9 Relationship between short run and long run cost curves
- 10 U-shaped curves
- 11 Cost curves in reality
- 12 See also
- 13 Notes
- 14 References
Short-run average variable cost curve (SRAVC)
Average variable cost (which is a short-run concept) is the variable cost (typically labor cost) per unit of output: SRAVC = wL / Q where w is the wage rate, L is the quantity of labor used, and Q is the quantity of output produced. The SRAVC curve plots the short-run average variable cost against the level of output and is typically drawn as U-shaped.
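As a quick numerical sketch (the production function and wage below are illustrative assumptions, not taken from the article), the U shape of SRAVC emerges from any technology whose average product of labor first rises and then falls:

```python
# Assumed short-run production function with increasing, then diminishing,
# returns to labor (illustrative numbers only).
def output(L):
    return 6 * L**2 - 0.2 * L**3

# SRAVC = w*L / Q: total labor cost divided by output.
def sravc(L, w=10.0):
    return w * L / output(L)

# Cost per unit falls, bottoms out, then rises again: the U shape.
for L in (5, 15, 25):
    print(f"L={L:2d}  Q={output(L):7.1f}  SRAVC={sravc(L):.3f}")
```

Since SRAVC = w / (Q/L), the curve bottoms out exactly where output per worker peaks (L = 15 with these assumed numbers).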
Short-run average total cost curve (SRATC or SRAC)
The average total cost curve is constructed to capture the relation between cost per unit of output and the level of output, ceteris paribus. A perfectly competitive and productively efficient firm organizes its factors of production in such a way that the average cost of production is at the lowest point. In the short run, when at least one factor of production is fixed, this occurs at the output level where it has enjoyed all possible average cost gains from increasing production. This is at the minimum point in the diagram on the right.
Short-run total cost is given by
- STC = PKK + PLL,
where PK is the unit price of using physical capital per unit time, PL is the unit price of labor per unit time (the wage rate), K is the quantity of physical capital used, and L is the quantity of labor used. From this we obtain short-run average cost, denoted either SATC or SAC, as STC / Q:
- SRATC or SRAC = PKK/Q + PLL/Q = PK / APK + PL / APL,
where APK = Q/K is the average product of capital and APL = Q/L is the average product of labor.
Short-run average cost equals average fixed cost plus average variable cost. Average fixed cost continuously falls as production increases in the short run, because K is fixed in the short run. The shape of the average variable cost curve is directly determined by increasing and then diminishing marginal returns to the variable input (conventionally labor).
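A minimal sketch of this decomposition, with assumed prices and an assumed production function (none of the numbers come from the text): average fixed cost falls continuously as output grows, and SRATC is the sum of the two components.

```python
P_K, K, w = 4.0, 50.0, 10.0   # assumed capital price, fixed capital stock, wage

def Q(L):                     # assumed short-run production function
    return 6 * L**2 - 0.2 * L**3

def afc(L):                   # average fixed cost: P_K*K spread over output
    return P_K * K / Q(L)

def avc(L):                   # average variable cost: w*L / Q
    return w * L / Q(L)

def sratc(L):                 # SRATC = AFC + AVC
    return afc(L) + avc(L)

for L in (5, 10, 15):
    print(f"Q={Q(L):6.1f}  AFC={afc(L):.3f}  AVC={avc(L):.3f}  SRATC={sratc(L):.3f}")
```

The fixed-cost term shrinks monotonically over this range because the same P_K*K is divided by ever more output, which is the "average fixed cost continuously falls" claim above.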
Long-run average cost curve (LRAC)
The long-run average cost curve depicts the cost per unit of output in the long run—that is, when all productive inputs' usage levels can be varied. All points on the line represent least-cost factor combinations; points above the line are attainable but unwise, while points below are unattainable given present factors of production. The behavioral assumption underlying the curve is that the producer will select the combination of inputs that will produce a given output at the lowest possible cost. Given that LRAC is an average quantity, one must not confuse it with the long-run marginal cost curve, which is the cost of one more unit. The LRAC curve is created as an envelope of an infinite number of short-run average total cost curves, each based on a particular fixed level of capital usage. The typical LRAC curve is U-shaped, reflecting increasing returns to scale where negatively sloped, constant returns to scale where horizontal, and decreasing returns (due to increases in factor prices) where positively sloped. Contrary to the assertion of Canadian economist Jacob Viner, the envelope is not created by the minimum point of each short-run average cost curve. This mistake is recognized as Viner's Error.
In a long-run perfectly competitive environment, the equilibrium level of output corresponds to the minimum efficient scale, marked as Q2 in the diagram. This is due to the zero-profit requirement of a perfectly competitive equilibrium. This result implies that production is at a level corresponding to the lowest possible average cost, but it does not imply that production levels other than that at the minimum point are not efficient. All points along the LRAC are productively efficient, by definition, but not all are equilibrium points in a long-run perfectly competitive environment.
In some industries, the bottom of the LRAC curve is large in comparison to market size (that is to say, for all intents and purposes, it is always declining and economies of scale exist indefinitely). This means that the largest firm tends to have a cost advantage, and the industry tends naturally to become a monopoly, and hence is called a natural monopoly. Natural monopolies tend to exist in industries with high capital costs in relation to variable costs, such as water supply and electricity supply.
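The envelope construction of the LRAC can be sketched numerically. Assuming a Cobb-Douglas technology Q = sqrt(K*L) and made-up input prices (neither is from the article), each fixed K gives one SRATC curve, and the LRAC at each output is the cheapest of them:

```python
P_K, w = 4.0, 9.0                        # assumed input prices

def sratc(Q, K):
    """SRATC with capital fixed at K; Q = sqrt(K*L) implies L = Q^2 / K."""
    L = Q**2 / K
    return (P_K * K + w * L) / Q

def lrac(Q, K_grid):
    """Lower envelope: cheapest plant size K for each output level Q."""
    return min(sratc(Q, K) for K in K_grid)

K_grid = [k / 2 for k in range(1, 201)]  # candidate plant sizes 0.5 .. 100.0
for Q in (2.0, 5.0, 10.0):
    print(f"Q={Q:4.1f}  LRAC={lrac(Q, K_grid):.3f}  SRATC(K=3)={sratc(Q, 3.0):.3f}")
```

Because this assumed technology has constant returns to scale, the envelope comes out flat at 2*sqrt(P_K*w) = 12 for every Q; a technology with scale economies followed by diseconomies would instead trace out the U shape described above.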
Short-run marginal cost curve (SRMC)
A short-run marginal cost curve graphically represents the relation between marginal (i.e., incremental) cost incurred by a firm in the short-run production of a good or service and the quantity of output produced. This curve is constructed to capture the relation between marginal cost and the level of output, holding other variables, like technology and resource prices, constant. The marginal cost curve is usually U-shaped. Marginal cost is relatively high at small quantities of output; then as production increases, marginal cost declines, reaches a minimum value, then rises. The marginal cost is shown in relation to marginal revenue (MR), the incremental amount of sales revenue that an additional unit of the product or service will bring to the firm. This shape of the marginal cost curve is directly attributable to increasing, then decreasing marginal returns (and the law of diminishing marginal returns). Marginal cost equals w/MPL. For most production processes the marginal product of labor initially rises, reaches a maximum value and then continuously falls as production increases. Thus marginal cost initially falls, reaches a minimum value and then increases. The marginal cost curve intersects both the average variable cost curve and (short-run) average total cost curve at their minimum points. When the marginal cost curve is above an average cost curve the average curve is rising. When the marginal cost curve is below an average curve the average curve is falling. This relation holds regardless of whether the marginal curve is rising or falling.
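These relations can be checked numerically. With an assumed wage, fixed cost, and production function (illustrative numbers only, not from the text), MC = w/MPL is U-shaped and sits below ATC while ATC is falling, then crosses above it once ATC turns up:

```python
F, w = 200.0, 10.0                 # assumed fixed cost and wage

def Q(L):                          # assumed production function
    return 6 * L**2 - 0.2 * L**3

def mpl(L):                        # marginal product of labor, dQ/dL
    return 12 * L - 0.6 * L**2

def mc(L):                         # marginal cost = w / MPL
    return w / mpl(L)

def atc(L):                        # average total cost = (F + w*L) / Q
    return (F + w * L) / Q(L)

for L in (4, 10, 16, 19):
    side = "MC < ATC (ATC falling)" if mc(L) < atc(L) else "MC > ATC (ATC rising)"
    print(f"L={L:2d}  MC={mc(L):.3f}  ATC={atc(L):.3f}  {side}")
```

With these assumed numbers MC bottoms out where MPL peaks (L = 10), and the switch from MC below ATC to MC above ATC brackets the ATC minimum, consistent with the intersection-at-the-minimum property stated above.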
Long-run marginal cost curve (LRMC)
The long-run marginal cost curve shows, for each unit of output, the added total cost incurred in the long run, that is, the conceptual period in which all factors of production are variable so as to minimize long-run average total cost. Stated otherwise, LRMC is the minimum increase in total cost associated with an increase of one unit of output when all inputs are variable.
The long-run marginal cost curve is shaped by returns to scale, a long-run concept, rather than by the law of diminishing marginal returns, which is a short-run concept. The long-run marginal cost curve tends to be flatter than its short-run counterpart because of greater input flexibility in cost minimization. The long-run marginal cost curve intersects the long-run average cost curve at the latter's minimum point. When long-run marginal cost is below long-run average cost, long-run average cost is falling (with respect to additional units of output); when long-run marginal cost is above long-run average cost, average cost is rising. Long-run marginal cost equals short-run marginal cost at the level of production that minimizes long-run average cost. LRMC is the slope of the long-run total-cost function.
Graphing cost curves together
Cost curves can be combined to provide information about firms. In this diagram for example, firms are assumed to be in a perfectly competitive market. In a perfectly competitive market the price that firms are faced with would be the price at which the marginal cost curve cuts the average cost curve.
Cost curves and production functions
Assuming that factor prices are constant, the production function determines all cost functions. The variable cost curve is the inverse of the short-run production function (total product curve), and its behavior and properties are determined by the production function.[nb 1] Because the production function determines the variable cost function, it necessarily determines the shape and properties of the marginal cost curve and the average cost curves.
If the firm is a perfect competitor in all input markets, and thus the per-unit prices of all its inputs are unaffected by how much of the inputs the firm purchases, then it can be shown that at a particular level of output, the firm has economies of scale (i.e., is operating in a downward sloping region of the long-run average cost curve) if and only if it has increasing returns to scale. Likewise, it has diseconomies of scale (is operating in an upward sloping region of the long-run average cost curve) if and only if it has decreasing returns to scale, and has neither economies nor diseconomies of scale if it has constant returns to scale. In this case, with perfect competition in the output market the long-run market equilibrium will involve all firms operating at the minimum point of their long-run average cost curves (i.e., at the borderline between economies and diseconomies of scale).
If, however, the firm is not a perfect competitor in the input markets, then the above conclusions are modified. For example, if there are increasing returns to scale in some range of output levels, but the firm is so big in one or more input markets that increasing its purchases of an input drives up the input's per-unit cost, then the firm could have diseconomies of scale in that range of output levels. Conversely, if the firm is able to get bulk discounts of an input, then it could have economies of scale in some range of output levels even if it has decreasing returns in production in that output range.
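The perfect-competition case described above can be sketched numerically. The snippet below assumes a hypothetical Cobb-Douglas technology (the exponents and input prices are illustrative, not from the text): with constant input prices and returns to scale greater than one, long-run average cost falls with output.

```python
# Hypothetical technology: q = K**a * L**b with a = b = 0.6,
# so returns to scale = a + b = 1.2 > 1 (increasing returns).
# With constant input prices r = w = 1, symmetry gives K = L at the
# cost-minimizing input mix, so q = K**(a+b) and cost(q) = 2*K.
a = b = 0.6

def lr_cost(q):
    k = q ** (1 / (a + b))   # the K = L that produces output q
    return 2 * k             # r*K + w*L with r = w = 1

def lrac(q):
    return lr_cost(q) / q    # long-run average cost

# Increasing returns to scale -> falling long-run average cost:
assert lrac(16) < lrac(8) < lrac(4)
```

With a + b < 1 the same construction yields a rising LRAC, matching the decreasing-returns case in the text.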
Relationship between different curves
- Total Cost (TC) = Fixed Costs (FC) + Variable Costs (VC)
- Marginal Cost (MC) = dTC/dQ; MC equals the slope of the total cost function and of the variable cost function
- Average Total Cost (ATC) = TC/Q
- Average Fixed Cost (AFC) = FC/Q
- Average Variable Cost (AVC) = VC/Q
- ATC = AFC + AVC
- The MC curve is related to the shape of the ATC and AVC curves:
- At a level of Q at which the MC curve is above the average total cost or average variable cost curve, the latter curve is rising.
- If MC is below average total cost or average variable cost, then the latter curve is falling.
- If MC equals average total cost, then average total cost is at its minimum value.
- If MC equals average variable cost, then average variable cost is at its minimum value.
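These identities can be checked numerically. The sketch below uses a hypothetical cubic cost function (the coefficients are illustrative, not from the text) and verifies that MC passes through the minimum of AVC:

```python
# Illustrative cost structure: TC(q) = FC + VC(q),
# with VC(q) = q^3 - 6q^2 + 15q and FC = 50 (hypothetical numbers).
FC = 50.0

def vc(q):           # variable cost
    return q**3 - 6*q**2 + 15*q

def tc(q):           # total cost = fixed + variable
    return FC + vc(q)

def mc(q, h=1e-6):   # marginal cost = dTC/dq, by central difference
    return (tc(q + h) - tc(q - h)) / (2 * h)

def atc(q): return tc(q) / q   # average total cost
def avc(q): return vc(q) / q   # average variable cost
def afc(q): return FC / q      # average fixed cost

# Find the minimum of AVC on a grid and confirm MC = AVC there:
qs = [q / 100 for q in range(1, 2001)]
q_min_avc = min(qs, key=avc)
assert abs(mc(q_min_avc) - avc(q_min_avc)) < 0.1
```

Because MC is the derivative of both TC and VC (they differ only by the constant FC), one `mc` function serves for both curves, and the grid search confirms the tangency condition listed above.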
Relationship between short run and long run cost curves
Basic concept: for each quantity of output there is one cost-minimizing level of capital, and a unique short-run average cost curve associated with producing that quantity.
- Each STC curve can be tangent to the LRTC curve at only one point; the STC curve cannot cross (intersect) the LRTC curve. An STC curve can also lie wholly above the LRTC curve, with no tangency point.
- One STC curve is tangent to LRTC at the long-run cost-minimizing level of production. At the point of tangency LRTC = STC; at all other levels of production STC exceeds LRTC.
- Average cost functions are the total cost function divided by the level of output. Therefore the SATC curve is also tangent to the LRATC curve at the cost-minimizing level of output. At the point of tangency LRATC = SATC; at all other levels of production SATC > LRATC. To the left of the point of tangency the firm is using too much capital and fixed costs are too high; to the right, the firm is using too little capital and diminishing returns to labor are causing costs to increase.
- The slope of a total cost curve equals marginal cost. Therefore, when STC is tangent to LRTC, SMC = LRMC.
- At the long-run cost-minimizing level of output, LRTC = STC, LRATC = SATC and LRMC = SMC.
- The long-run cost-minimizing level of output may differ from the level that minimizes SATC.
- With fixed unit costs of inputs, if the production function has constant returns to scale, then at the minimum of the SATC curve we have SATC = LRATC = SMC = LRMC.
- With fixed unit costs of inputs, if the production function has increasing returns to scale, the minimum of the SATC curve lies to the right of the point of tangency between the LRATC and SATC curves, where LRTC = STC, LRATC = SATC and LRMC = SMC.
- With fixed unit costs of inputs and decreasing returns to scale, the minimum of the SATC curve lies to the left of the point of tangency between LRATC and SATC, where LRTC = STC, LRATC = SATC and LRMC = SMC.
- With fixed unit input costs, a firm that is experiencing increasing (decreasing) returns to scale and is producing at its minimum SATC can always reduce average cost in the long run by expanding (reducing) its use of the fixed input.
- LRATC is always equal to or less than SATC.
- If the production process exhibits constant returns to scale, minimum SRAC equals minimum long-run average cost; the LRAC and SRAC curves intersect at their common minimum values. Thus under constant returns to scale SRMC = LRMC = LRAC = SRAC.
- If the production process exhibits increasing or decreasing returns to scale, minimum short-run average cost does not equal minimum long-run average cost. With increasing returns to scale, the point of tangency between LRAC and a given SRAC occurs at a level of output below that SRAC's minimum. This is because there are economies of scale that have not been exploited, so in the long run a firm could always produce a quantity at an average cost lower than minimum short-run average cost simply by using a larger plant.
- With decreasing returns, minimum SRAC occurs at a lower production level than minimum LRAC, because a firm could reduce average costs simply by decreasing the size of its operations.
- The minimum of an SRAC curve occurs where its slope is zero. Thus points of tangency between a U-shaped LRAC curve and the minima of SRAC curves can coincide only along the portion of the LRAC curve exhibiting constant economies of scale. With increasing returns to scale, the point of tangency between the LRAC and an SRAC must occur at a level of output below the level associated with the minimum of that SRAC curve.
These statements assume that the firm is using the optimal level of capital for the quantity produced. If not, then the SRAC curve would lie "wholly above" the LRAC and would not be tangent at any point.
Both the SRAC and LRAC curves are typically drawn as U-shaped. However, the shapes of the two curves are not due to the same factors. For the short-run curve, the initial downward slope is largely due to declining average fixed costs; increasing returns to the variable input at low levels of production also play a role. The upward slope is due to diminishing marginal returns to the variable input. For the long-run curve, the shape by definition reflects economies and diseconomies of scale. At low levels of production, long-run production functions generally exhibit increasing returns to scale, which, for firms that are perfect competitors in input markets, means that long-run average cost is falling; the upward slope of the long-run average cost function at higher levels of output is due to decreasing returns to scale at those output levels.
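The envelope relationship between the short-run and long-run curves can be illustrated numerically. The functional forms below are hypothetical: each plant size k has its own U-shaped SRAC, and LRAC is taken as the lower envelope over plant sizes.

```python
# Hypothetical short-run average cost for plant size k:
# SRAC_k(q) = k/q + q/k  (fixed cost k, variable cost q^2/k),
# which is U-shaped in q for any fixed k.
def srac(q, k):
    return k / q + q / k

# Long-run average cost as the lower envelope over available plant sizes:
def lrac(q, sizes):
    return min(srac(q, k) for k in sizes)

sizes = [s / 10 for s in range(1, 201)]   # candidate plant sizes 0.1..20.0

for q in (1.0, 2.0, 5.0):
    # The envelope lies on or below every short-run curve at every output.
    assert all(lrac(q, sizes) <= srac(q, k) + 1e-12 for k in sizes)
```

With this particular family the envelope is flat (constant returns), so each SRAC touches LRAC exactly at its own minimum, the tangency pattern described above for the constant-returns portion of the LRAC curve.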
Cost curves in reality
The U-shaped cost curves have no basis in fact. In a 1952 survey by Wilford J. Eiteman and Glenn E. Guthrie, managers of 334 companies were shown a number of different cost curves and asked to specify which one best represented the company's cost curve. 95% of the managers responding to the survey reported cost curves with constant or falling costs.
Alan Blinder, former vice president of the American Economic Association, conducted the same type of survey in 1998, involving 200 US firms in a sample intended to be representative of the US economy at large. He found that about 40% of firms reported falling variable or marginal cost, and 48.4% reported constant marginal/variable cost.
See also
- Economic cost
- General equilibrium
- Joel Dean (economist)
- Partial equilibrium
- Point of total assumption
Notes
- The slope of the short-run production function equals the marginal product of the variable input, conventionally labor. The slope of the variable cost function is marginal cost. The relationship between MC and the marginal product of labor MPL is MC = w/MPL. Because the wage rate w is assumed to be constant, the shape of the variable cost curve is completely dependent on the marginal product of labor. The short-run total cost curve is simply the variable cost curve plus fixed costs.
References
- Perloff, J. Microeconomics, 5th ed. Pearson, 2009.
- Perloff, J., 2008, Microeconomics: Theory & Applications with Calculus, Pearson. ISBN 978-0-321-27794-7
- Lipsey, Richard G. (1975). An introduction to positive economics (fourth ed.). Weidenfeld & Nicolson. pp. 57–8. ISBN 0-297-76899-9.
- Viner, Jacob (1931). "Costs Curves and Supply Curves". Zeitschrift für Nationalökonomie 3 (1): 23–46. doi:10.1007/BF01316299. Reprinted in Emmett, R. B., ed. (2002). The Chicago Tradition in Economics, 1892–1945 6. Routledge. pp. 192–215.
- Sexton, Robert L., Philip E. Graves, and Dwight R. Lee, 1993. "The Short- and Long-Run Marginal Cost Curve: A Pedagogical Note", Journal of Economic Education, 24(1), pp. 34–37.
- Gelles, Gregory M., and Mitchell, Douglas W., "Returns to scale and economies of scale: Further observations," Journal of Economic Education 27, Summer 1996, 259-261.
- Frisch, R., Theory of Production, Drodrecht: D. Reidel, 1965.
- Ferguson, C. E., The Neoclassical Theory of Production and Distribution, London: Cambridge Univ. Press, 1969.
- Pindyck, R., and Rubinfeld, D., Microeconomics, 5th ed., Prentice-Hall, 2001.
- Nicholson: Microeconomic Theory 9th ed. Page 238 Thomson 2005
- Kreps, D., A Course in Microeconomic Theory, Princeton Univ. Press, 1990.
- Binger, B., and Hoffman, E., Microeconomics with Calculus, 2nd ed., Addison-Wesley, 1998.
- Frank, R., Microeconomics and Behavior 7th ed. (Mc-Graw-Hill) ISBN 978-0-07-126349-8 at 321.
- Melvin & Boyes, Microeconomics, 5th ed., Houghton Mifflin, 2002
- Perloff, J. Microeconomics Theory & Application with Calculus Pearson (2008) p. 231.
- Nicholson: Microeconomic Theory, 9th ed., Thomson, 2005.
- Boyes, W., The New Managerial Economics, Houghton Mifflin, 2004.
- Wilford J. Eiteman and Glenn E. Guthrie, The Shape of the Average Cost Curve, American Economic Review, 42.5: 832–838
- Alan Stuart Blinder, Asking about Prices: A New Approach to Understanding Price Stickiness, Russell Sage Foundation, New York, 1998
Having analyzed Mars from afar via orbiting satellite, Los Alamos National Laboratory instruments will next be on their way to get out and play in the Martian dirt. Two of the eight instruments aboard NASA's planned Mars Science Laboratory rover, scheduled for launch in 2009, include Los Alamos technology.
Image: Mars Science Laboratory rover using ChemCam to analyze a rock. Artist's conception, courtesy French Space Agency (CNES) and Los Alamos National Laboratory.
The laboratory's contribution to the new Mars effort is two-fold, providing a laser unit to measure elemental composition of rocks and soils, plus an x-ray diffraction device to analyze minerals in complex soil and rock samples from a different perspective. The rolling Mars Science Laboratory rover will be designed to operate for a full Martian year, or two Earth years, exploring potential habitats for evidence of past or present life.
The Los Alamos laser unit, called ChemCam, uses laser-induced breakdown spectroscopy (LIBS) to measure the chemical content of target samples. ChemCam works by firing an intense pulse of laser light at a surface from as far as 13 meters away.
The laser beam zaps a pinhead-sized area on the target, ablating or vaporizing it. A spectral analyzer then peers closely at the light from the vaporized sample. Atoms ablated in ionized states emit light and each sample yields a unique spectral emission of bright lines characteristic of the elements present in the material. Like fingerprints, the emission line wavelengths can be matched to a library of known chemical compounds. Even dust-covered rocks will reveal their inner secrets to the ChemCam interrogation. The laser also can be used to clean dust or weathering coatings from the sample prior to the analysis without the need to drive up to the target rock.
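The matching step, comparing observed emission lines against a library of known lines, can be sketched in a few lines of Python. The wavelengths below are illustrative placeholders, not ChemCam's actual spectral library:

```python
# Toy library: element -> characteristic emission-line wavelengths (nm).
# These values are hypothetical placeholders for illustration only.
LIBRARY = {
    "Fe": [404.6, 438.4],
    "Si": [288.2, 390.6],
    "Ca": [393.4, 422.7],
}

def identify(observed_peaks, tolerance=0.5):
    """Return elements whose every library line matches some observed peak."""
    found = []
    for element, lines in LIBRARY.items():
        if all(any(abs(line - peak) <= tolerance for peak in observed_peaks)
               for line in lines):
            found.append(element)
    return found

peaks = [288.3, 390.5, 393.5, 422.6]   # hypothetical spectrum
assert identify(peaks) == ["Si", "Ca"]
```

Real LIBS analysis also weighs line intensities and corrects for plasma conditions; this sketch shows only the fingerprint-matching idea described in the article.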
"ChemCam is the only instrument that can determine the elemental compositions of dust-covered rocks remotely," said Roger Wiens, Los Alamos' principal investigator on the project. The unit can recognize all known elements, noted Wiens, so detailed information on possible future Mars base sites can then begin flowing back to Earth for analysis.
The other piece of the ChemCam combo, the Remote Micro-Imager, will give very close-up images of the samples being analyzed, with an effective resolution that exceeds MER's Pancam by 5-10 times. The laser and camera are provided by the French space agency. Los Alamos is in charge of the spectrographs, data processing unit, power supply, software and project management.
Another of the rover's planned instruments is CheMin, an x-ray diffraction/x-ray fluorescence instrument for mineralogical analysis. Its principal investigator is David Blake of NASA's Ames Research Center in Moffett Field, Calif. Partnering with him are Los Alamos geologists Steve Chipera and David Vaniman. CheMin will identify and quantify all minerals in complex samples such as basalts, evaporites and soils, one of the principal objectives of Mars Science Laboratory.
"CheMin is named for its ability to obtain chemical and mineralogical data simultaneously from samples of soil or rock. The mineralogical capability is particularly powerful because it is based on x-ray diffraction, the standard method for mineral identification used in laboratories on Earth and required by the International Mineralogical Association for recognition of any new mineral," said Vaniman.
Both ChemCam and CheMin are previous winners in the "R&D 100" competition sponsored by Research and Development Magazine.
Source: Los Alamos National Laboratory
The shape of a proton depends on the speed of the quarks inside. Of the four shapes shown here, the spherical shape (lower right) is the shape most physicists expected to find. The peanut shape (top left) is produced by quarks traveling nearly at light speed and spinning the same direction as the proton. (Gerald A. Miller, University of Washington)
PHILADELPHIA — When Gerald A. Miller first saw the experimental results from the Thomas Jefferson National Accelerator Facility, he was pretty sure they couldn't be right. If they were, it meant that some long-held notions about the proton, a primary building block of atoms, were wrong.
But in time, the findings proved to be right, and led physicists to the conclusion that protons aren't always spherically shaped, like a basketball.
"Some physicists thought they did the experiment wrong," said Miller, a University of Washington physics professor. "Even I thought so initially. And then I remembered that it looked like something else I thought was wrong — our own conclusion in 1995."
In fact, by 1996 he and two colleagues were ready to publish a paper theorizing the angles at which protons would bounce off electrons after collisions in a nuclear accelerator. The measurements would tell a lot about protons' internal electric and magnetic properties, and virtually everyone expected the two effects to cause the same kinds of collisions. But the 1996 paper described collisions that were quite different.
Miller was sure he and his colleagues had gotten it wrong somehow — until he saw the results of the actual experimental work at Jefferson, a national laboratory in Newport News, Va. Researchers at Jefferson published their initial results in 2000 and updated their findings last year.
What Miller discovered from those results is that a proton at rest can be shaped like a ball — the expected shape and the only one described in physics textbooks. Or it can be shaped like a peanut, like a rugby ball or even something similar to a bagel.
He was able to use his model to predict the behavior of quarks, and he discovered that different effects of the quarks could change the proton's shape. The model showed that the highest-momentum quarks, those moving nearly at light speed inside the proton, produced the peanut shape.
"The quarks are like prisoners walking around in a jail cell. They just are walking very fast, and when they come to a wall they have to turn around and we can see that, indirectly, and measure it," Miller said.
If the quarks are moving more slowly, the surface indentations of the peanut shape fill in and the proton takes on a form something like a rugby ball, or a beehive. The slowest quarks produce the spherical shape that physicists generally expected to see. Another shape, a flattened round form like a bagel, is a sort of cousin to the peanut shape with its high-momentum quarks. In the peanut shape the quarks spin in the same direction as the proton, while in the bagel shape they spin in the direction opposite to the proton's.
The variety of shapes is nearly limitless and depends on the speed of the quarks inside the proton and what direction they are spinning, said Miller, who presents his findings today (April 5) during a news conference and an invited talk at the American Physical Society meeting in Philadelphia.
The Jefferson results, he said, are a small piece of the puzzle for physicists who are trying to unify the four forces of nature (gravitational, electromagnetic, strong and weak) into a "theory of everything" by which they can understand the form and function of all matter. Taking this step, Miller said, allows physicists to make better predictions so other experiments can get even closer to a unified theory, and it provides clues for how to devise those experiments.
The first implication of the Jefferson findings, he said, is that "a bunch of textbooks will have to have some of their pages updated."
Beyond that, he said, it isn't clear right now whether there will be practical implications. However, he tells the story of Michael Faraday, who presented findings in the 1830s on electromagnetic induction but was at a loss to explain the value of his findings. Yet today, the principles he developed are responsible for all the electric generators sending juice from power stations.
"You just never know until you understand something where it might lead," Miller said.
Submitted: Saturday, April 5, 2003 - 1:00am
Objectives: In this lab you will learn:
- Metric measurements and conversions
- Use of basic laboratory equipment
- Preparation of solutions
- Proper use of the microscope
- Measurement of specimens using the micrometer
Series 1 Lab 1 Scientific Investigation Boot Camp
In this laboratory, we will review the metric system as it applies to laboratory science, and learn how to use some basic laboratory measuring equipment by preparing a cupric chloride or cobalt chloride solution. You will also become familiar with the compound microscope by using it to examine the ciliated protozoan Tetrahymena pyriformis.
PART I: Metric Measurements
Measurements in pounds, miles and gallons are still commonly used in the United States, but the metric system, which was developed in France in the 1790s, has several advantages over the English system and is more convenient for scientific use. The metric system uses decimals and a system of prefixes to define measurements of a variety of parameters. Since it is based on powers of ten, calculations using the metric system are simpler than in the English system.
The metric units you will commonly use in the lab are the meter (length), the gram (mass), the liter (volume), and the degree Celsius (temperature).
To describe objects that are larger or smaller than the base units, a system of prefixes is used. The prefixes and corresponding exponential values you will commonly use to change the size of the base units are kilo- (k, 10³), centi- (c, 10⁻²), milli- (m, 10⁻³), micro- (μ, 10⁻⁶), and nano- (n, 10⁻⁹).
The power of 10 in the exponent column in the table above indicates the placement of the decimal point for that measurement. For example, 10³ is equivalent to 1.0 × 10³ and can be converted to 1000 by moving the decimal point 3 places to the right, corresponding to the exponent value of 3. Similarly, 2.5 × 10³ is converted to 2500 by once again moving the decimal point 3 places to the right. If the exponent is negative, the decimal point must be moved to the left the correct number of places. For example, we can convert 2.5 × 10⁻² to 0.025 by moving the decimal point 2 places to the left. The use of exponents of 10 to place the decimal point is called scientific notation. Scientific notation is used to represent very large or very small numbers in calculations and scientific writing.
It is important to become familiar with measurements expressed in metric units and to be able to convert between these units (from grams to milligrams, for example). Conversions are straightforward since the power of ten between each unit is known. Since there are 1000 milligrams in each gram, a sample that weighs 1 gram also weighs 1000 milligrams. To keep track of the decimal point, 1000 milligrams is best written in scientific notation as 1.0 × 10³ mg.
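This prefix arithmetic is easy to automate. The helper below covers only a common lab subset of prefixes (an assumption, not the complete SI table), using "u" for micro:

```python
# Exponent (power of ten) for each prefix; "" is the unprefixed base unit.
PREFIX_EXP = {"k": 3, "": 0, "c": -2, "m": -3, "u": -6, "n": -9}

def convert(value, from_prefix, to_prefix):
    """Convert between prefixed units of the same base (g, L, m, ...)."""
    return value * 10 ** (PREFIX_EXP[from_prefix] - PREFIX_EXP[to_prefix])

assert convert(1, "", "m") == 1000              # 1 g    -> 1000 mg
assert abs(convert(2500, "m", "") - 2.5) < 1e-9 # 2500 mg -> 2.5 g
assert abs(convert(0.5, "m", "u") - 500) < 1e-9 # 0.5 mL -> 500 uL
```

The same single function handles any pair of prefixes because the conversion is just a shift of the decimal point by the difference of the two exponents.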
PART II: Basic Laboratory Equipment
In the first section of today’s lab, you reviewed the metric system as it is used to describe lengths, weights, volumes and temperatures. This section contains descriptions of some of the laboratory equipment available to make these measurements. With these tools, it is possible to make solutions of known concentrations and to accurately measure portions of these solutions. In Part III of this lab, you will use some of the equipment to prepare a solution needed for Parts IV and V of the lab. You will be using several types of containers and measurement devices in the laboratory this semester. The following pages describe some of the basic tools needed in the first series of laboratories. As the semester progresses, you will be introduced to more complex equipment. Please review the following descriptions before you use the equipment today, and direct any questions to your instructor.
A. Descriptions of Laboratory Equipment
Beakers are used to prepare solutions and can range in size from 10mL to 4L. Any volume markings present on a beaker are only approximate and are not an accurate indication of volume. Beakers are not used to store solutions.
Erlenmeyer Flasks are used to prepare solutions and microbiological media. They range in size from 25mL to 6L. Volume markings on an Erlenmeyer flask are only approximate and are not an accurate measure of volume.
Test tubes will be provided to you throughout the semester for experimentation. Round-bottom glass test tubes will be used for making dilutions and for biochemical assays. These tubes are identified by their diameter, so a 13mm test tube has a diameter of 13mm; a 16mm tube has a diameter of 16mm and can hold a greater volume of liquid than the 13mm tube. Centrifuge tubes can be glass or plastic, are often pointed at the bottom, and are identified by the maximum volume they can contain. They are used in a centrifuge to separate biological materials.
Graduated Cylinders are used to measure volumes from 10mL to 2L. The gradations on the cylinder are an accurate measure of the volume of liquid contained in the cylinder. The liquid in the cylinder will form a meniscus as shown in the diagram below. To measure the proper volume, the bottom of the meniscus must rest on the desired volume mark on the cylinder.
Serological Pipettes accurately measure volumes from about 1.0mL to 10mL using the gradations on the pipettes. They may be made of glass or plastic. Glass pipettes are washed and reused, while plastic pipettes are disposable. The markings on these pipettes are often in descending order, so to deliver 6mL using a 10mL pipette, the liquid must be drawn to the 4 mark on the pipette. To use a serological pipette, a green or blue pipette pump is attached to the top of the pipette; the blue pump is used for smaller pipettes (1mL or 2mL) and the green pump for 5mL and 10mL serological pipettes. Never pipette by mouth. The pointed end of the pipette is then submerged in the liquid, and the liquid is drawn into the pipette by turning the wheel on the pump until the correct volume is measured. To dispense the liquid from the pipette, simply depress the plunger of the pump. Some serological pipettes will dispense all liquid from the pipette tip; others are designed to retain a small amount of liquid in the tip. Remember that the glass pipettes are not disposable; they should be placed tips down in the plastic pipette canisters at each bench after use.
Micropipettes are used to measure volumes less than 1.0mL. They may be fixed volume or adjustable volume pipettes. You are provided with 3 adjustable volume pipettes: one for 200-1000 microliters (μL), one for 20-200 μL, and one for 1-20 μL. To use a micropipette, attach a plastic tip to the end, depress the plunger on the micropipette to the first stop, and insert the tip into the liquid you wish to measure. Slowly release the plunger taking care not to trap any air bubbles in the tip or splatter any liquid onto the pipette base. Withdraw the tip from the liquid and dispense the volume into the desired receptacle by depressing the plunger as far as you can. Be sure to keep the plunger depressed until it is removed from the liquid, or you will remove some of the material you have just measured. Used tips are discarded in the trash at your lab bench.
Pasteur Pipettes are named after Louis Pasteur. They are glass dropper pipettes that require the addition of a separate bulb at the top to create the suction. Pasteur pipettes are not used to measure volume but are used to transfer liquid from one place to another. For example, Pasteur pipettes are often used to remove all the liquid from a tube after centrifugation.
Top Load Balances are used to weigh solid materials greater than 1.0 gram accurately. Either a plastic weigh boat or weighing paper is placed on the balance, and the balance is set to zero (tared). The solid material is then placed in the weigh boat or on the weighing paper for measurement. When the proper amount of solid material has been added, it is transferred to a beaker or Erlenmeyer flask, solvent added, and mixed until the solid material is dissolved.
Magnetic Mixers are used in the laboratory to facilitate the mixing of materials. They are particularly useful when making solutions. The mixer has a magnetic core that can rotate at various speeds. If a beaker filled with materials to be mixed is placed on the mixer and a magnetic stir bar is added to the beaker, the mixer can then be adjusted to mix the material at the proper rate. The mixing continues until the mixer is turned off. A magnetic wand is then used to remove the stir bar from the solution before it is brought to final volume.
Vortex Mixers are used in the laboratory to mix materials contained in test tubes. The force of the mixing is adjustable, and the mixer can be set to run only when a tube is inserted into the mixer receptacle. The force of the mixer should be set so that the liquid in the tube forms a vortex as it mixes, and the tube containing the liquid should always be pointed away from you or your lab partner. If a tube is more than 2/3 full, a vortex mixer should not be used because the liquid will splatter out of the tube.
Centrifuges are instruments that use centrifugal force to separate biological materials. You will be using several types of centrifuges this semester. Clinical centrifuges are bench top models that accommodate various sizes of centrifuge tubes depending upon the type of holder installed. Refrigerated centrifuges are used when it is necessary to maintain the biological material at a constant cool temperature and can also accommodate several sizes of tubes. Microcentrifuges are used for very small plastic centrifuge tubes called microfuge tubes. All centrifuges require that the centrifuge tube containing the biological material be placed across from a tube of the same weight. We call this “balancing the centrifuge.” Tubes cannot be balanced by filling them to equal volumes, as these may not have the same weight. They must be balanced by weight, using a scale.
B. Practice Exercises with Pipettes
- Practice using the 10, 5, 2 or 1 milliliter pipettes by transferring the following volumes of deionized water into 13mm test tubes. Observe the volume differences. To ensure the best accuracy, it is important that the pipette chosen to measure a specific volume has a total volume close to the volume to be measured. For example, the best pipette available to measure a volume of 1.7mL is a 2.0mL pipette. Remember that many serological pipettes have the volume markings in descending order, so be sure to draw the liquid to the appropriate mark on the pipette so the correct volume is dispensed.
- Adjustable micropipettes are very expensive, so please take great care in using them. Follow the micropipetting technique described in Part II: attach a plastic tip, depress the plunger to the first stop, insert the tip into the liquid, release the plunger slowly to avoid trapping air bubbles or splattering liquid onto the pipette base, and dispense by depressing the plunger as far as you can, keeping it depressed until the tip is removed from the liquid. Never hold a pipette upside down or sideways when liquid is in the tip. Also, if you feel resistance when changing the volume setting, stop immediately and ask your instructor for help. The website Using a Micropipette has more detailed information about micropipette use.
Practice using the micropipettes by completing the following exercise. Keep in mind that this is an exercise designed to allow you to practice and test your pipetting technique. When carrying out experiments, it is always important to choose the best pipette for the volume desired.
- To test your micropipetting prowess and/or to calibrate your P1000, P200, and P20 micropipettes, first label 6 microfuge tubes (1-6) and weigh them on the top loading balance. Remember to zero the balance before weighing the tubes. Record the weights in the table below.
- Following the table below, pipet the specified volumes into the pre-weighed microfuge tubes prepared above using your P20, P200, or P1000, as specified, then reweigh them. Record all weights.
- Calculate the weight of the water in each tube in grams. (1000 microliters of water should weigh 1 gram at room temperature.)
- If the water in any tube weighs more or less than 1 gram, your pipetting technique may need revision. Repeat steps 1-3 for the tube that is off the expected weight. If your water weight is significantly off after several repeated attempts, your pipette (or your technique) may need adjustment. Ask your instructor to watch your technique and/or to recalibrate your pipette.
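The arithmetic behind steps 1-4 can be sketched as a short calculation. This is a minimal illustration, not part of the protocol: the function name and the example tube weights are hypothetical, and the 1 g per 1000 µL figure is the room-temperature approximation given above.

```python
# Calibration check: water weighs ~1 g per 1000 uL at room temperature,
# so the weight gained by each tube should match the volume pipetted.

def pipetting_error(empty_g, full_g, volume_ul):
    """Percent error between measured water weight and expected weight."""
    expected_g = volume_ul / 1000.0          # 1000 uL water ~ 1 g
    measured_g = full_g - empty_g
    return 100.0 * (measured_g - expected_g) / expected_g

# Hypothetical example: tube weighed 1.021 g empty, 1.526 g after 500 uL
err = pipetting_error(1.021, 1.526, 500)
print(f"{err:.1f}% error")                   # 1.0% error
```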
PART III: Solutions
When doing biological experiments, it is essential that the exact concentration of every solution be known. It is also important to know the pH of a solution since pH is often critical in biological reactions. Buffered solutions, which can absorb minor challenges to pH, are often used to maintain the pH of a solution. Different ways of describing the concentration of a solution as well as pH and buffers are reviewed below.
A. Classifications of Solutions
1. Percentage by weight (w/v) Solutions
The number of grams in 100mL of solution is indicated by the percentage. For example, a 1% solution has one gram of solid in a final volume of 100mL solution. To make this type of solution properly, you should weigh 1.0g of the solid material and dissolve it in slightly less than 100mL of solvent. Once the solids have dissolved, you can bring the volume up to 100mL. If one liter of a 1% solution is needed, then 10g of solid would be dissolved in 1000mL of solution to maintain the 1% ratio of solid weight to solution volume.
2. Percentage by volume (v/v) Solutions
In this case the percentage indicates the volume of the full strength solution in 100mL of dilute solution. For example, a 60% ethanol solution is made by mixing 60mL of 100% ethanol with 40mL of water. If only 10mL of a 60% ethanol solution is needed, then 6mL of 100% ethanol should be mixed with 4mL of water.
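The v/v rule above is an instance of the familiar dilution equation C1V1 = C2V2. As a sketch (the function name is illustrative, not standard nomenclature):

```python
# Dilution by C1*V1 = C2*V2: how much stock and diluent to mix
# to reach a target v/v percentage.

def vv_dilution(stock_pct, final_pct, final_ml):
    """Return (stock_mL, diluent_mL) to make final_ml of a v/v dilution."""
    stock_ml = final_pct * final_ml / stock_pct
    return stock_ml, final_ml - stock_ml

print(vv_dilution(100, 60, 100))  # (60.0, 40.0) -- the 60% ethanol example
print(vv_dilution(100, 60, 10))   # (6.0, 4.0)
```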
3. Molar Solutions
A 1 molar (1M) solution is a solution in which 1 mole of a compound is dissolved in a final volume of 1 liter (1L). For example, the molecular weight of sodium chloride (NaCl) is 58.44, so one gram molecular weight (1 mole) is 58.44g. If you dissolve 58.44g of NaCl in a final volume of 1 liter, you have made a 1M NaCl solution. To make a 0.1M NaCl solution, you could weigh 5.844g of NaCl and dissolve it in a final volume of 1L of solution.
Convert a 0.2M solution of NaCl into %(w/v) units.
0.2M = 0.2mol/L = (58.44g/mol)(0.2mol)/L = 11.7g/1000mL = 1.17g/100mL = 1.17%(w/v)
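The same conversion can be written as a one-step calculation: grams per 100 mL is molarity × MW ÷ 10, which is the %(w/v) value directly. A small sketch reproducing the worked example (the function name is illustrative):

```python
# Molarity -> %(w/v): grams per liter is molarity * MW; dividing by 10
# gives grams per 100 mL, which is the %(w/v) value.

def molar_to_percent_wv(molarity, mw):
    """Convert molarity to %(w/v), i.e. grams of solute per 100 mL."""
    grams_per_liter = molarity * mw
    return grams_per_liter / 10.0

print(round(molar_to_percent_wv(0.2, 58.44), 2))  # 1.17 -- matches the text
```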
4. Buffered Solutions
A buffered solution resists changes in pH. The pH of a solution is a measure of its acidity, and is defined as the negative log of the hydrogen ion concentration. The pH scale ranges from 0 to 14 where 0 is the most acidic, 14 is the most basic, and 7 is neutral. Since pH is a log scale, the difference between pH 5 and pH 6, for example, is a factor of 10. Buffers are used when biological samples need to be kept within a narrow range of pH to maintain activity. The enzymes involved in biochemical reactions often require a narrow pH range. This range is usually 7.2–7.4 for human and animal tissues. A phosphate buffer is commonly used in the biology laboratory, because it exhibits excellent buffering capacity in the neutral pH range. It is made up of a mixture of sodium monobasic phosphate (NaH2PO4) and sodium dibasic phosphate (Na2HPO4) dissolved in water.
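The definition above (pH as the negative log of the hydrogen ion concentration) can be checked numerically; note how one pH unit corresponds to a tenfold change in [H+]:

```python
# pH = -log10([H+]), with [H+] in mol/L, as defined in the text.
import math

def ph(h_conc):
    """pH from hydrogen ion concentration in mol/L."""
    return -math.log10(h_conc)

print(round(ph(1e-7), 2))   # 7.0 -- neutral
print(round(ph(1e-5), 2))   # 5.0 -- 100x more [H+] than neutral
```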
B. Solution Preparation
To prepare the solutions needed in later exercises, the 4 students at one lab table will prepare two different solutions. One pair of students will prepare 50mL of a 0.11M solution of copper (II) chloride dihydrate (MW=170.5) and the other pair of students will prepare 50mL of a 0.26M solution of cobalt chloride hexahydrate (MW=237.9).
- Calculate the correct weight of the material to yield 50mL of the proper molarity for the cupric or cobalt chloride solutions and record all calculations in your lab notebook. What would be the millimolar concentration of your solution? Have your lab instructor check your calculations before you weigh the material.
- Put on gloves before handling chemicals in the solid state. Nitrile gloves are available on the bench at the front of the laboratory. Please advise your lab instructor if you have a nitrile allergy.
- Using one of the top loading balances, weigh the correct amount of solid material and transfer it to a 250mL beaker.
- Using a graduated cylinder, add 35mL of deionized water to the beaker. Add a magnetic stir bar and mix on a magnetic mixer until all the solid material has dissolved. Remove the magnetic stir bar.
- Transfer the solution into an empty 100mL graduated cylinder and bring the volume to exactly 50mL with deionized water.
- Transfer the solution to an empty storage bottle. Label the bottle with the name and concentration of the solution, your initials, your lab section and date.
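The mass calculation requested in step 1 follows the general rule grams = molarity (mol/L) × volume (L) × molecular weight (g/mol). So as not to give away the answer your instructor will check, the sketch below uses the NaCl values from the molar-solutions section rather than the cupric or cobalt chloride values; the function name is illustrative.

```python
# Mass of solute needed for a molar solution:
# grams = molarity (mol/L) x volume (L) x molecular weight (g/mol).

def grams_needed(molarity, volume_ml, mw):
    """Grams of solute to dissolve for volume_ml of a molar solution."""
    return molarity * (volume_ml / 1000.0) * mw

print(grams_needed(1.0, 1000, 58.44))            # 58.44 -- 1 L of 1 M NaCl
print(round(grams_needed(0.1, 1000, 58.44), 3))  # 5.844 -- 1 L of 0.1 M NaCl
```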
PART IV: Microscopy
Most cells measure about 1–100 micrometers (µm) in diameter. This size is smaller than can be detected by the unaided human eye; therefore, microscopes are needed to visualize cells and their component parts. The compound light microscope can magnify to about 1000 times the actual size of the specimen and can resolve details as fine as 0.2µm. In this part of today’s lab, you will learn to use the compound microscope by examining several types of cells.
A. Care of the Microscopes
A compound microscope and a dissecting microscope are available for each student's use. We will only be using the compound microscope today. Remember at all times that your microscope is a precision optical instrument and must be handled carefully. When removing the microscope from the cabinet, do not jar or drop it; always carry it upright with one hand below the base and the other hand on the arm of the microscope. Place the microscope at least 6 inches from the edge of the bench. When returning the microscope to the cabinet, check that:
- The microscope light is turned off before the microscope is unplugged;
- All lenses have been cleaned with lens paper, especially the oil immersion lens;
- The lowest objective lens is near the stage and the stage itself is lowered; and
- The microscope is covered (if there is a cover available).
B. Parts of the Microscope
Figure 1 contains a diagram of a compound microscope and may help you locate some of the parts referred to in the following explanation. The compound microscope derives its name from the two sets of lenses it uses to magnify objects. These lenses are the objective lens, which can be found on the rotating nosepiece near the stage of the microscope, and the ocular lens, which is in the eyepiece. Your microscopes are equipped with several objective lenses, ranging from low to high magnification, including one oil immersion lens. The microscope magnifies by shining light from the light source through the iris diaphragm that limits the diameter of the light beam. The condenser lens focuses the light through the specimen that is on the stage. The stage is movable in order to view different parts of the specimen. The image we see is formed under the ocular lens by the objective lens and is inverted (rotated 180°) relative to the actual specimen.
C. Regulation of Illumination
- The illumination intensity knob is located on the right side of the microscope just below the on/off switch. It has a setting range of 1-10 with 10 being the brightest level. This knob should ordinarily be set to 7.
- Another way of adjusting illumination is by changing the position of the condenser lens. The condenser lens adjustment knob is located below the specimen stage and on the left side. It allows the user to move the condenser lens assembly up or down. As you move the condenser lens up, closer to the specimen, it concentrates (condenses) more light on your specimen. You will need to make this adjustment as you go up in magnification, so that you will have sufficient illumination.
- The condenser aperture diaphragm is located below the specimen stage on the condenser lens assembly. It is an adjustable opening that allows you to make fine adjustments in illumination. The lever that adjusts the size of the aperture faces the user. By sliding the lever to the left or right, you may adjust the illumination to the correct level for your specimen. Changing the size of this aperture also affects the amount of contrast in the image. Thus, adjusting the condenser aperture involves finding the brightness level that gives you the best combination of illumination and contrast. This is the method used most often in adjusting illumination in the light microscope.
- A final method of adjusting illumination is the field aperture diaphragm, mentioned here for the sake of completeness. Most light microscopes have an adjustable field aperture, which allows the user to obtain illumination that is uniformly bright and free from glare (Köhler illumination); however, the microscopes in the 110 lab lack this adjustment.
D. How to Locate Specimens Using a Compound Light Microscope
- Place the specimen slide on the microscope stage and secure it with the clamping arm. The slide is properly in position if turning each of the stage adjustment knobs moves the slide appropriately. Rotate the 4x or 10x objective into place; it will click when it is positioned directly over the slide. Make sure that there are several inches of clearance between the glass slide and the lens.
- Using the stage adjustment knobs, position the edge of the coverslip in the center of the illuminated area. Look into the microscope and adjust the eyepieces so that you see a single image when viewing with BOTH eyes. While looking through the oculars with the 4x or 10x objective lens in place, rotate the coarse focus knob slowly to reduce the distance between the slide and the objective lens. When the black line that marks the edge of the coverslip begins to come into focus, switch from the coarse to the fine focus adjustment and bring that line into sharp focus. Switch to the 10x objective and refocus. Use the stage adjustment knobs to move away from the edge of the coverslip into the area where your specimen is located, and find the specimen. To increase the magnification of the specimen, rotate the nosepiece to the 40X objective lens and focus using ONLY the fine adjustment knob. Never use the coarse focus knob when the 40x objective is in place.
- Although you will not use the 100x objective to view your specimens today, be aware that immersion oil must be placed on the slide in order to use the 100X objective lens. After a specimen has been focused sharply using the 40x objective, you would move the 40x objective out of the way and place a small drop of immersion oil onto the slide. Then you would rotate the nosepiece until the 100X (oil immersion) lens is selected and the 100X objective is completely immersed in the oil (no air between slide and objective). The specimen could then be focused using the fine adjustment knob only. Never use the coarse adjustment when focusing a specimen with the oil objective, because doing so could damage the 100X objective lens or the slide. All traces of oil must be removed from the lens before putting away the microscope. Only lens paper should be used to remove oil from the 100X objective.
E. Calculation of Total Magnification
Total magnification of the specimen is determined by multiplying the magnifying power of the ocular and objective lenses. For example, a 10X ocular and a 40X objective together give a total magnification of 400X (Table 1). This means that the specimen appears 400 times larger when viewed with a microscope than its actual size.
F. MEASUREMENT OF SIZE
Cell size can be measured using an ocular micrometer. A micrometer has been installed in one of the ocular lenses of each microscope in the laboratory. It looks like a small ruler with both large and small units. The large units are numbered 1, 2, 3, etc. The small units are subdivisions of the large units and are not numbered. There are 10 small units per large unit. The small units represent different lengths depending on the objective lens in use. You will measure cellular structures in small units only, and then convert to metric units (µm = micrometers) using the conversion values below.
Therefore, if you are observing a cell with the 40X objective, and this cell spans 2.5 small units on the ocular micrometer scale, then the size of the cell is 2.5 small units x 2.5µm/small unit = 6.25µm. You should always calculate the size of any object that is the focus of a figure in a photomicrograph for a scientific paper. Because digital imaging allows images to be resized after capture, giving the total magnification alone can be misleading; instead, calculate and give the size of any important object in the figure legend.
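The micrometer conversion can be sketched as a lookup plus a multiplication. Note the assumptions: only the 2.5 µm per small unit value for the 40X objective comes from the text; the calibration values shown for the other objectives are illustrative placeholders, so use the values supplied with your own microscope.

```python
# Ocular micrometer reading -> specimen size in micrometers.
UM_PER_SMALL_UNIT = {
    4: 25.0,    # hypothetical calibration value
    10: 10.0,   # hypothetical calibration value
    40: 2.5,    # given in the text
}

def cell_size_um(small_units, objective):
    """Specimen size in micrometers from an ocular micrometer reading."""
    return small_units * UM_PER_SMALL_UNIT[objective]

print(cell_size_um(2.5, 40))   # 6.25 -- the worked example above
```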
Figure 1. A Cutaway Diagram showing the beam path of the Nikon Eclipse E200 Compound Light Microscope used in the Biological Sciences 110 Laboratory at Wellesley College.
G. Examination of Tetrahymena pyriformis
To help you become familiar with the use of the compound microscope and to prepare for experiments with Tetrahymena pyriformis in Labs 2, 3, and 4, today you will observe live Tetrahymena. Spend some time observing the cells today; in Lab 2 you will learn to take photographs of your cells. This week, draw them in your lab notebook. Make a circle to represent the field of view by tracing the bottom of a beaker or petri dish, and draw some representative organisms, labeling all the organelles that you can identify. Be sure to note the total magnification beside every drawing.
NOTE: You will not use the oil (100X) objective today to view your Tetrahymena.
Obtain a microcentrifuge tube containing live Tetrahymena from the instructor’s bench. Mix it gently (no vortexing). It is best to obtain a sample from the TOP of the tube, since the cells will be more concentrated in that location. Add 20μL of the live Tetrahymena to the center of a clean glass slide, add a cover slip, and then view the cells using the microscope, starting with the lowest power and moving up to 400x magnification. Carefully observe the behavior of the Tetrahymena and record your observations in your lab notebook. If you have trouble locating the cells, adjusting the condenser aperture diaphragm may improve contrast.
- Place used non-disposable glass serological pipettes tips down in pipette canister to soak. Place used micropipette tips in the trash.
- Rinse out all other glassware with water and invert on a paper towel on the lab bench to dry. Ordinarily, the 13mm tubes are disposable, but do not discard them in today’s lab.
- Clean the objective lenses of your microscope using only lens tissue (NOT Kimwipes®), starting with the lowest power (4x) and working up to the highest. Make sure that there is NO oil on any lens.
- Rotate the 4x objective lens into the viewing position.
- The binocular head must be rotated into the storage position, to protect the ocular lenses from damage. Loosen the setscrew on the right, rotate the head 180°, then retighten the screw. Turn off the microscope light.
- Have your instructor check your microscope, before returning it to the cabinet (with its plastic cover on).
- Place the stock solutions of cobalt chloride or cupric chloride in the tray next to the sink near the instructor’s table.
- Put all used microscope slides and cover slips in the glass disposal box.
- Familiarize yourself with the Lab wiki and bookmark it on your computer. Read material relating to course assignments and lab attendance, as well as instructions about lab notebooks in the Introduction To Cell Biology page and in the Resources section. Be sure to familiarize yourself with the safety information (also found in the Resources section).
- Make sure that you understand all the concepts covered in Lab 1. Solve the practice problems below on a piece of paper (NOT IN YOUR LAB NOTEBOOK!) and hand them in at the beginning of lab 2. Your instructor will grade the problems and return them to you in lab 3.
- Before you come to lab next time, read all the material in Lab 2 and outline or make a flowchart of the protocols in your notebook.
- Read about the process of endocytosis in your textbook and familiarize yourself with phagocytosis in Tetrahymena from the reference articles below. Don't try to understand all the complex science; just try to understand when and how Tetrahymena ingest particles or food. Your lab instructor will post pdf copies of these articles on your lab's Sakai site. You should bring a hard copy of the Gronlien et al. article with you to lab next time. We will spend part of Lab 2 talking about how to make effective figures from your data using some of the figures in this article as a model.
References on Tetrahymena and on phagocytosis in Tetrahymena:
Gronlien HK, Berg T, Lovlie AM (2002). In the polymorphic ciliate Tetrahymena vorax, the non-selective phagocytosis seen in microstomes changes to a highly selective process in macrostomes. Journal of Experimental Biology 205, 2089-2097.
McLaughlin NB and Buhse HE Jr. (2004). Localization by indirect immunofluorescence of tetrin, actin, and centrin to the oral apparatus and buccal cavity of the macrostomal form of Tetrahymena vorax. J Eukaryot Microbiol 51(2), 253-257.
Practice Problems (5 points)
Download the assignment in Word format: Media:Lab_1_Practice_Problems110.doc
In the following problems, please show all calculations, including the units. To receive full credit, make sure to answer all parts of each question.
- a) How many grams of sucrose (MW=342) would you need to make 100mL of a 10^-2 M sucrose solution? b) How many milligrams of sucrose would you need to make this same solution? c) Express 10^-2 M as a millimolar (mM) concentration.
- Compound Z has a MW of 100. Your lab partner weighed 25 grams of compound Z and dissolved it in water to a final volume of 1 liter. a) What is the concentration of the solution expressed as a percentage by weight (w/v)? b) What is the concentration of the solution expressed as molarity?
- a) Convert a 0.26M solution of cobalt chloride hexahydrate (MW=237.9) into %(w/v) units. b) How would you make a 5% (v/v) solution of ethanol from 100% ethanol?
- You need to prepare 2 methanol solutions for lab today: a) 300mL of 50% (v/v) methanol and b) 200mL of 25% (v/v) methanol. You have been supplied with 100% methanol and deionized water as well as graduated cylinders and beakers of the appropriate sizes, a magnetic stirrer and Teflon stir bars. How would you use these materials to prepare the two methanol solutions? Note that this question requires more than just showing the math involved. You need to write the steps involved in preparing the solutions, including when and how you use the provided materials.
- Practice converting between the following units by writing your answers in decimal form and in scientific notation:
- For each of the following situations, please describe what you would do.
a. You just spilled Tetrahymena on your arm.
b. You just finished pipetting some water with a 10ml serologic pipet. Where do you put the pipet?
IN THE evolution of organs, skin came first. The discovery that even sponges have a proto-skin shows that the separation of insides from outsides in multicellular animals was key to their evolution.
It has been known since the 1960s that sponges have a distinct outer layer of cells, or epithelium. But because sponges lack the genes involved in expelling molecules, it was assumed that this was not a functional organ. Sally Leys and her team at the University of Alberta in Edmonton, Canada, have now shown otherwise. When they grew flat sponges on thin membranes, with liquid above and below, they found that the epithelium kept some molecules out, sometimes only allowing 0.8 per cent through in 3 hours (PLoS ONE, DOI: 10.1371/journal.pone.0015040).
Sponges were the first multicellular animals to evolve, so the finding means all complex life has a skin. Leys thinks the organ was vital as it isolated animals' insides from their surroundings. As a result, cells could send chemical signals to each other without interference, setting the stage for complex organs to evolve.
Rather than loose clusters of cells, sponges are self-contained animals, meaning they are much more like other animals than we thought, says Scott Nichols of the University of California, Berkeley.
This is an updated version that corrects a misquotation of Scott Nichols' comments
More than just strings
Another surprising revelation was that superstring theories are not just theories of one-dimensional objects. There are higher dimensional objects in string theory, with dimensions ranging from zero (points) to nine, called p-branes. In terms of branes, what we usually call a membrane would be a two-brane, a string is called a one-brane, and a point is called a zero-brane.
What makes a p-brane? A p-brane is a spacetime object that is a solution to the Einstein equation in the low energy limit of superstring theory, with the energy density of the nongravitational fields confined to some p-dimensional subspace of the nine space dimensions in the theory. (Remember, superstring theory lives in ten spacetime dimensions: one time dimension plus nine space dimensions.) For example, in a solution with electric charge, if the energy density in the electromagnetic field were distributed along a line in spacetime, this one-dimensional line would be considered a p-brane with p=1.
A special class of p-branes in string theory are called D branes. Roughly speaking, a D brane is a p-brane where the ends of open strings are localized on the brane; it can be thought of as a collective excitation of the open strings that end on it. These objects took a long time to be discovered in string theory, because they are buried deep in the mathematics of T-duality. D branes are important in understanding black holes in string theory, especially in counting the quantum states that lead to black hole entropy, which was a very big accomplishment for string theory.
How many dimensions?
Before string theory won the full attention of the theoretical physics community, the most popular unified theory was an eleven-dimensional theory of supergravity, which is supersymmetry combined with gravity. The eleven-dimensional spacetime was to be compactified on a small seven-dimensional sphere, for example, leaving four spacetime dimensions visible to observers at large distances.
This theory didn't work as a unified theory of particle physics, because it doesn't have a sensible quantum limit as a point particle theory. But this eleven-dimensional theory would not die. It eventually came back to life in the strong coupling limit of superstring theory in ten dimensions.
How could a superstring theory with ten spacetime dimensions turn into a supergravity theory with eleven spacetime dimensions? We've already learned that duality relations between superstring theories relate very different theories, equate large distance with small distance, and exchange strong coupling with weak coupling. So there must be some duality relation that can explain how a superstring theory that requires ten spacetime dimensions for quantum consistency can really be a theory in eleven spacetime dimensions after all.
Since we know that all string theories are related, and we suspect that they are but different limits of some more fundamental theory, perhaps that more fundamental theory exists in eleven spacetime dimensions? These questions bring us to the topic of M theory.
The theory currently known as M theory
Technically speaking, M theory is the unknown eleven-dimensional theory whose low energy limit is the supergravity theory in eleven dimensions discussed above. However, many people have also taken to using "M theory" to label the unknown theory believed to be the fundamental theory from which the known superstring theories emerge as special limits.
We still don't know the fundamental M theory, but a lot has been learned about the eleven-dimensional M theory and how it relates to superstrings in ten spacetime dimensions.
In M theory, there are also extended objects, but they are called M branes rather than D branes. One class of M branes in this theory has two space dimensions, and is called an M2 brane.
Now consider M theory with the tenth space dimension compactified into a circle of radius R. If one of the two space dimensions that make up the M2 brane is wound around that circle, then we can equate the resulting object with the fundamental string (one-brane) of type IIA superstring theory. The type IIA theory appears to be a ten-dimensional theory in the normal perturbative limit, but reveals an extra space dimension, and an equivalence to M theory, in the limit of very strong coupling.
We still don't know what the fundamental theory behind string theory is, but judging from all of these relationships, it must be a very interesting and rich theory, one where distance scales, coupling strengths, and even the number of dimensions in spacetime are not fixed concepts but fluid entities that shift with our point of view.
Exploration of Neptune
The exploration of Neptune began on August 25, 1989, when Voyager 2 became the first, and so far only, spacecraft to visit the planet. Like the other gas giants, Neptune has no solid surface, so landing on it would be impossible. NASA is considering sending another spacecraft, known as the Neptune Orbiter, to study the planet further; if approved, it would probably launch around 2035.
- See also: Neptune (planet)#Exploration
During Voyager 2's final planetary encounter before leaving the solar system, the spacecraft passed only 3,000 miles above Neptune's north pole, its closest approach to any planet since leaving Earth. Voyager 2 studied Neptune's atmosphere, rings, magnetosphere, and moons. Several discoveries were made, including the Great Dark Spot and Triton's geysers.
Voyager 2 found that Neptune's atmosphere was very active, even though it gets only 3% of the sunlight Jupiter receives. Voyager 2 discovered an anticyclone called the Great Dark Spot, similar to Jupiter's Great Red Spot. However, pictures taken by the Hubble Space Telescope showed that the Great Dark Spot had disappeared. Also seen in Neptune's atmosphere at that time was an almond-shaped spot called "D2", and a bright, quickly moving cloud high above the cloud decks named Scooter.
Voyager 2 also found four rings and found proof for ring arcs, or incomplete rings around Neptune. Neptune's magnetosphere was also studied by Voyager 2. The planetary radio astronomy instrument found that Neptune's day is sixteen hours, seven minutes. Voyager 2 also discovered auroras, like on Earth, but much more complicated.
Voyager 2 discovered six moons orbiting Neptune, but only three were photographed in detail: Proteus, Nereid, and Triton. Proteus turned out to be an ellipsoid, about as large as an ellipsoid can be without gravity pulling it into a sphere, and is very dark in color. Nereid, although discovered in 1949, remained poorly known even after the flyby. Voyager 2 passed about 25,000 miles from Triton, the last object it would ever explore. Triton was shown to have extraordinarily active geysers and polar caps, as well as a very thin atmosphere with thin clouds.
A Neptune Orbiter is being considered to study Neptune in more detail, release atmospheric probes, and possibly release a Triton Lander. On the NASA website, it lists the earliest possible launch date as 2030. This mission is still a proposal, and budget cuts may eliminate it.
A term used to describe the relatively peaceful transfer of power in Czechoslovakia from the Communist Party to the civil rights movement, which the state had unsuccessfully tried to fight for over a decade. The collapse of Communism in Poland and Hungary, and the growing popular protests in Eastern Germany, triggered demonstrations against the Czechoslovak regime in Prague and Brno from August to October 1989. Initially, these were repressed, but the state security forces became powerless against the ever‐growing number of demonstrators. Leading human rights activists from Charter 77 created the Civic Forum on 18 November 1989, in order to coordinate and organize the opposition, and to engage in negotiations with the government. It called a general strike on 27 November, which showed that the old government completely lacked any popular basis. The Communist government collapsed, with the party's political monopoly being withdrawn on 29 November. On 10 December a new government consisting mostly of non‐Communists was formed. Subsequently, the two most consistent and respected critics of the Communist regime of the previous two decades were elevated into office: on 28 December Dubček was elected Speaker of parliament, and on 29 December Havel succeeded Husák as President. The Velvet Revolution was complete, and was confirmed by the free elections of 8–9 June 1990, in which the Revolution's leaders were endorsed.
The pigment reveals that these animals were, at least partially, dark-coloured in life, which is likely to have contributed to more efficient thermoregulation, as well as providing means for camouflage and UV protection. Researchers at Lund University are among the scientists that made the spectacular discovery.
Preserved pigment in fossilized skin from a leatherback turtle, a mosasaur and an ichthyosaur suggests that these animals were, at least partially, dark-colored in life -- an example of convergent evolution. Note that the leatherback turtle and mosasaur have a dark back and light belly (a color scheme also known as countershading), whereas the ichthyosaur, similar to the modern deep-diving sperm whale, is uniformly dark-colored.
Credit: Illustration by Stefan Sølberg
During the Age of the dinosaurs, huge reptiles, such as mosasaurs and ichthyosaurs, ruled the seas. Previously, scientists could only guess what colours these spectacular animals had; however, pigment preserved in fossilised skin has now been analysed at SP Technical Research Institute of Sweden and MAX IV Laboratory, Lund University, Sweden. The unique soft tissue remains were obtained from a 55 million-year-old leatherback turtle, an 85 million-year-old mosasaur and a 196 million-year-old ichthyosaur. This is the first time that the colour scheme of any extinct marine animal has been revealed.
"This is fantastic! When I started studying at Lund University in 1993, the film Jurassic Park had just been released, and that was one of the main reasons why I got interested in biology and palaeontology. Then, 20 years ago, it was unthinkable that we would ever find biological remains from animals that have been extinct for many millions of years, but now we are there and I am proud to be a part of it", said Johan Lindgren about the discovery of the ancient pigment molecules.
Johan Lindgren is a scientist at Lund University in Sweden, and he is the leader of the international research team that has studied the fossils. Together with colleagues from Denmark, England and the USA, he now presents the results of their study in the scientific journal Nature. The most sensational aspect of the investigation is that it can now be established that these ancient marine reptiles were, at least partially, dark-coloured in life, something that probably contributed to more efficient thermoregulation, as well as providing means for camouflage and protection against harmful UV radiation.
The analysed fossils are composed of skeletal remains, in addition to dark skin patches containing masses of micrometre-sized, oblate bodies. These microbodies were previously interpreted to be the fossilised remains of those bacteria that once contributed to the decomposition and degradation of the carcasses. However, by studying the chemical content of the soft tissues, Lindgren and his colleagues are now able to show that they are in fact remnants of the animals' own colours, and that the micrometre-sized bodies are fossilised melanosomes, or pigment-containing cellular organelles.
"Our results really are amazing. The pigment melanin is almost unbelievably stable. Our discovery enables us to make a journey through time and to revisit these ancient reptiles using their own biomolecules. Now, we can finally use sophisticated molecular and imaging techniques to learn what these animals looked like and how they lived", said Per Uvdal, one of the co-authors of the study, and who works at the MAX IV Laboratory.
Mosasaurs (98 million years ago) were giant marine lizards that could reach 15 metres in body length, whereas ichthyosaurs (250 million years ago) could become even larger. Both ichthyosaurs and mosasaurs died out during the Cretaceous Period, but leatherback turtles are still around today. A conspicuous feature of the living leatherback turtle, Dermochelys, is its almost entirely black back, which probably contributes to its worldwide distribution. The ability of leatherback turtles to survive in cold climates has mainly been attributed to their huge size, but it has also been shown that these animals bask at the sea surface during daylight hours. The black colour enables them to heat up faster and to reach higher body temperatures than if they were lightly coloured.
"The fossil leatherback turtle probably had a similar colour scheme and lifestyle as does Dermochelys. Similarly, mosasaurs and ichthyosaurs, which also had worldwide distributions, may have used their darkly coloured skin to heat up quickly between dives", said Johan Lindgren.
If their interpretations are correct, then at least some ichthyosaurs were uniformly dark-coloured in life, unlike most living marine animals. However, the modern deep-diving sperm whale has a similar colour scheme, perhaps as camouflage in a world without light, or as UV protection, given that these animals spend extended periods of time at or near the sea surface in between dives. The ichthyosaurs are also believed to have been deep-divers, and if their colours were similar to those of the living sperm whale, then this would also suggest a similar lifestyle, according to Lindgren.
Johan Lindgren | EurekAlert!
|
Electric potential plays a similar role for charge that pressure does for fluids. It reflects the speed and direction in which a free charged particle would move due to the electric forces caused by the charge that creates this potential.
The electric potential created by a point charge q, at a distance r from the charge, is equal to

V = q / (4π ε0 r)

where ε0 is the electric constant, a property of the free space around the objects. This is also known as the Coulomb potential. It is tied to the potential energy of a charged object in an electric field.
The electric potential due to a system of point charges is equal to the sum of the point charges' individual potentials. This fact simplifies calculations significantly, since addition of potential (scalar) fields is much easier than addition of the electric (vector) fields.
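The scalar superposition of point-charge potentials can be sketched in a few lines of code; the charge values, positions, and function names below are illustrative examples, not from the text:

```python
# Sketch of the superposition principle for electric potential.
# The charges and positions below are made-up example values.
import math

EPSILON_0 = 8.8541878128e-12       # electric constant, F/m
K = 1 / (4 * math.pi * EPSILON_0)  # Coulomb constant, ~8.99e9 N*m^2/C^2

def coulomb_potential(q, source, point):
    """Potential at `point` due to a point charge q located at `source` (2D)."""
    r = math.dist(source, point)
    return K * q / r

def total_potential(charges, point):
    """Plain scalar sum over all point charges -- no vector addition needed."""
    return sum(coulomb_potential(q, pos, point) for q, pos in charges)

# Two opposite charges (a dipole); a point on the x-axis is equidistant
# from both, so the two contributions cancel and the potential is ~0.
charges = [(1e-9, (0.0, 0.1)), (-1e-9, (0.0, -0.1))]
print(total_potential(charges, (0.2, 0.0)))
```

This is exactly why the scalar field is easier to work with: summing the corresponding electric field vectors would require keeping track of components and directions.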
Field applet by Carlo Barraco, Todd Fuller
|
Biasing in electronics is the method of establishing predetermined voltages or currents at various points of an electronic circuit for the purpose of establishing proper operating conditions in electronic components. Many electronic devices, such as transistors and vacuum tubes, whose function is the processing of time-varying (AC) signals, also require a steady (DC) current or voltage to operate correctly; this is called bias. The AC signal applied to them is superposed on this DC bias current or voltage. The operating point of a device, also known as the bias point, quiescent point, or Q-point, is the steady-state voltage or current at a specified terminal of an active device (a transistor or vacuum tube) with no input signal applied.
The term is also used for a steady (AC) signal applied to some electronic devices which is similarly required for correct operation, such as the tape bias signal applied to magnetic recording heads used in magnetic tape recorders.
In electrical engineering, the term bias has the following meanings:
- A systematic deviation of a value from a reference value.
- The amount by which the average of a set of values departs from a reference value.
- Electrical, mechanical, magnetic, or other force (field) applied to a device to establish a reference level to operate the device.
- In telegraph signaling systems, the development of a positive or negative DC voltage at a point on a line that should remain at a specified reference level, such as zero.
- Note: A bias may be applied or produced by (i) the electrical characteristics of the line, (ii) the terminal equipment, and (iii) the signaling scheme.
Most often, bias simply refers to a fixed DC voltage applied to the same point in a circuit as an alternating current (AC) signal, frequently to select the desired operating response of a semiconductor or other electronic component (forward or reverse bias). For example, a bias voltage is applied to a transistor in an electronic amplifier to allow the transistor to operate in a particular region of its transconductance curve. For vacuum tubes, a (much higher) grid bias voltage is also often applied to the grid electrodes for precisely the same reason.
A hot bias can lower the tube life span, but a "cool" bias can induce crossover distortion.
Bias is used in direct broadcast satellite systems such as DirecTV and Dish Network: the integrated receiver/decoder (IRD) box actually powers the feedhorn or low-noise block converter (LNB) mounted on the dish arm. This bias is changed from a lower voltage to a higher voltage to select the polarization of the LNB, so that it receives signals that are polarized either horizontally or vertically, thereby allowing it to receive twice as many channels.
The optimal values for the DC biasing must still be determined in order to choose resistors and other components. This bias point is called the quiescent point, or Q-point, as it gives the values of the voltages when no input signal is applied. To determine the Q-point, we need to look at the range of values for which the transistor is in the active region.
Importance in linear circuits
Linear circuits involving transistors typically require specific DC voltages and currents for correct operation, which can be achieved using a biasing circuit. As an example of the need for careful biasing, consider a transistor amplifier. In linear amplifiers, a small input signal gives larger output signal without any change in shape (low distortion): the input signal causes the output signal to vary up and down about the Q-point in a manner strictly proportional to the input. However, because a transistor is nonlinear, the transistor amplifier only approximates linear operation. For low distortion, the transistor must be biased so the output signal swing does not drive the transistor into a region of extremely nonlinear operation. For a bipolar transistor amplifier, this requirement means that the transistor must stay in the active mode, and avoid cut-off or saturation. The same requirement applies to a MOSFET amplifier, although the terminology differs a little: the MOSFET must stay in the active mode (or saturation mode), and avoid cut-off or ohmic operation (or triode mode).
Bipolar junction transistors
For bipolar junction transistors the bias point is chosen to keep the transistor operating in the active mode, using a variety of circuit techniques, establishing the Q-point DC voltage and current. A small signal is then applied on top of the Q-point bias voltage, thereby either modulating or switching the current, depending on the purpose of the circuit.
The quiescent point of operation is typically near the middle of the DC load line. The process of obtaining a certain DC collector current at a certain DC collector voltage by setting up the operating point is called biasing.
After establishing the operating point, when an input signal is applied, the output signal should not move the transistor either to saturation or to cut-off. However, this unwanted shift still might occur, due to the following reasons:
- Parameters of transistors depend on junction temperature. As junction temperature increases, leakage current due to minority charge carriers (ICBO) increases. As ICBO increases, ICEO also increases, causing an increase in collector current IC. This produces heat at the collector junction. This process repeats, and, finally, the Q-point may shift into the saturation region. Sometimes, the excess heat produced at the junction may even burn the transistor. This is known as thermal runaway.
- When a transistor is replaced by another of the same type, the Q-point may shift, due to changes in the parameters of the transistor, such as the current gain (β), which varies slightly from one individual transistor to another.
To avoid a shift of Q-point, bias-stabilization is necessary. Various biasing circuits can be used for this purpose.
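One common bias-stabilization arrangement is the voltage-divider bias with an emitter resistor, whose Q-point is nearly independent of β. Here is a back-of-the-envelope estimate; all component values, the supply voltage, and the 0.7 V base-emitter drop are assumed for the example, not taken from the text:

```python
# Rough Q-point estimate for a voltage-divider-biased BJT stage.
# All values are hypothetical, chosen only to illustrate the standard
# approximation (stiff divider, V_BE ~ 0.7 V, large current gain).

VCC = 12.0             # supply voltage, V
R1, R2 = 47e3, 10e3    # base divider resistors, ohms
RC, RE = 2.2e3, 1.0e3  # collector and emitter resistors, ohms
V_BE = 0.7             # base-emitter drop, V

v_b = VCC * R2 / (R1 + R2)    # base voltage (ignoring base current)
i_e = (v_b - V_BE) / RE       # emitter current set by RE
i_c = i_e                     # beta large -> I_C ~ I_E
v_ce = VCC - i_c * (RC + RE)  # collector-emitter voltage at the Q-point

print(f"I_C = {i_c*1e3:.2f} mA, V_CE = {v_ce:.2f} V")
```

Note that β never appears in the calculation: because RE fixes the emitter current, swapping in a transistor with a different current gain barely moves the Q-point, which is the stabilization property described above.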
Electret microphone elements typically include a junction field-effect transistor as an impedance converter to drive other electronics within a few meters of the microphone. The operating current of this JFET is typically 0.1 to 0.5 mA and is often referred to as bias, which is different from the phantom power interface which supplies 48 volts to operate the backplate of a traditional condenser microphone. Electret microphone bias is sometimes supplied on a separate conductor.
- Bipolar junction transistor
- Bipolar transistor biasing
- Idling current
- Small signal model
- Tape bias
- This article incorporates public domain material from the General Services Administration document "Federal Standard 1037C" (in support of MIL-STD-188).
- IEC Standard 61938
- Sedra, Adel; Smith, Kenneth (2004). Microelectronic Circuits. Oxford University Press. ISBN 0-19-514251-9.
- P.K. Patil;M.M. Chitnis (2005). Basic Electricity and Semiconductor Devices. Phadke Prakashan.
- Robert L. Boylestad;Louis Nashelsky (2005). Electronic Devices and Circuit Theory. Prentice-Hall Career & Technology.
- Bias - from Sci-Tech Encyclopedia
|
Sassy Sally Snake
It is important for students to understand letter-to-sound correspondences in order to become successful readers and spellers. Children need to be able to identify letters when they hear the sound orally or when reading words that incorporate their phoneme. The purpose of this lesson is to help children gain confidence in using the phoneme /s/, represented by the letter S. Through hand gestures and pronunciation practice, students will gain confidence in identifying and writing the letter symbol S, as well as recognizing /s/ in written and spoken language.
*Primary paper and pencils
*Tongue twister on sentence strip (Sassy Sally sang songs for Sammy snail and Scotty seal.)
*Assessment worksheet identifying pictures with /s/
*Bear Snores On
*Cards with words (snow, snail, team, sun)
*Have you ever noticed that each letter of our alphabet is important for making up the words we say every day? It is very important that we learn the sounds that go with each letter of the alphabet so we can use it correctly when we are speaking to one another and writing. Today we are going to be working on the sound /s/. We spell /s/ using the letter S.
*Have you ever heard a snake speak? When they talk they make the sound /s/. During this lesson whenever we hear the sound /s/ we are going to pretend to be snakes and make our hands slither like snakes. *Demonstrate gesture. Let's make the /s/ sound and focus on the way your mouth and tongue move to make the sound. *Have students as a whole class make the /s/. Notice how when we make the /s/ sound our lips and teeth are barely open. Our tongue is also supposed to be lightly touching the front of our mouth and the back of our teeth. Let's make the /s/ sound one more time. This time I want you to make sure your tongue is touching the back of your teeth and make the snake gesture as you say the sound.
*Now we are going to practice saying our tongue twister. It may sound strange saying this sentence out loud, but the purpose of this sentence is to continue practicing saying the /s/ sound correctly. I will say the sentence first and then I will have you repeat after me. *Use pointer to point to each word when saying it aloud on the sentence strip. Sassy Sally sang songs for Sammy snail and Scotty seal. Now you are going to say this sentence together as a class. Make sure to make your hand gesture when you are saying words that make the /s/. Now we are going to say the tongue twister again, but I want you to stretch the /s/ sound. We are doing this so you can really hear where the /s/ is placed in the words of this sentence. Don't forget to make your snake hands! (Sssasssyyy Ssssallyy sssang sssongss for Sssammy sssnail and Ssscotty ssseal.)
*Now I am going to show you how to find the /s/ in words. The first word I am going to demonstrate with is sleep. Ssss-l-ee-p. Can you hear it? Ssss-l-eep. I can hear it. Sometimes you can hear the /s/ in the middle or at the end of a word. I am going to say a word and I want you to make the snake hands when you hear /s/. (house, mouse)
*Now we are going to practice writing the /s/. We write the /s/ sound with the letter S. Show them the letter on the board. Notice how the letter S even looks like a snake! First we are going to write a lower case s. Watch first, then I will let you try. We start just below the fence and make a c, then we curve down to the sidewalk. I want everyone to practice and raise your paper up after you have written one. When I put a check on your paper, then you can continue writing your lower case s. Now we are going to practice an upper case S. The upper case S looks exactly like the lower case except bigger. We start just below the rooftop and make our c that stops at the fence, then we curve down to the sidewalk. A good way to tell them apart is that the upper case S looks like the mommy snake, and the lower case s looks like a baby snake.
*Now I am going to say some words, and you are going to raise your hand and tell me in which word you hear the /s/. Ready? Snake v. cow, Santa v. bunny, crow v. snow, pillow v. salami. Now I want you to make your snake hands when you hear the /s/: fur, scream, crouch, house, seal, smoke
*I am now going to read a book to you called Bear Snores On. One by one, different animals and birds find their way into Bear's cozy cave. They make different kinds of snacks and treats to keep warm in the cold. But even after they make all of their yummy snacks, Bear continues to snore! Let's read and find out what happens when Bear wakes up to a cave full of uninvited friends. *As I read the book, I want you to listen for the /s/. If you hear it, without speaking, make your snake hands so I know that you have heard the sound we are focusing on in this lesson.
*I am going to show you some cards with words on them. With the first card, I will show you how to decide whether the /s/ is in the word. Example: snow v. blow. Sss-n-ow. Do you hear the /s/ and see the baby snake on the card? That is how I was able to decide that this card spelled snow and not blow. Now you are going to try some: SNAIL: snail v. pail, TEAM: scream v. team, SUN: sun v. fun
*For assessment, I am going to pass out a worksheet that has different pictures and partial spellings on them. Students will circle the pictures that make the /s/ and complete the partial spellings.
*Daniel, Collier. Silly Silly Snake
*Bear Snores On
Return to Doorways Index
|
This issue is particularly critical on islands where feral cats have been implicated in the extinction of a number of native species.
Some animal rights proponents have promoted "trap, neuter, and return" programs as a potential humane solution to the problem.
Under this approach, feral cats are trapped, sterilized, and then released back to local areas where volunteer "colony caretakers" provide periodic care.
One of the assumptions underlying the approach is that neutered cats will have smaller home ranges and will stick around the areas where they are being fed, which will keep them from preying on native wildlife in natural areas.
However, a new study from Santa Catalina Island off the coast of California challenges this assumption and raises questions about the effectiveness of using sterilization programs alone.
Efforts to control feral cats on Santa Catalina include trap, neuter and return (TNR) programs at the two villages at the ends of the island.
As part of their study published in the Journal of Mammalogy, researchers Darcee Guttilla and Paul Stapp of California State University, Fullerton conducted their own TNR experiment. They captured feral cats using traps, neutered half the sample, and released both sterilized and non-sterilized individuals outfitted with GPS collars.
They found little difference in the home ranges of sterilized and non-sterilized cats. Neutered and non-neutered individuals traveled long distances between human-populated areas and the interior of the island, which consists primarily of wildlands.
They estimated the island-wide feral cat population at 600-750 individuals and found trappability to be very low.
The authors write, "The influx of subsidized cats to natural habitats, combined with their high vagility and low trappability, makes TNR an unlikely solution for controlling feral cats on a large, rugged island like Catalina and, more generally, in other locations where human populations abut ecologically sensitive areas."
This is bad news for native prey on the island, including birds, reptiles, and small mammals.
This is also problematic for the highly threatened Catalina Island fox, which has little immunity from diseases brought from the mainland and nearly went extinct after an outbreak of canine distemper virus.
Scientists worry that feral cats on the island - in addition to putting the fox at risk to disease - may also compete with the foxes for food and displace them from their habitat. Based on their findings, the authors write,
"Until resources are available to implement more proactive control measures in these areas, cats trapped in the island interior should be removed and delivered to a shelter; if they are deemed adoptable, cats should be sterilized and added to the adoption pool on the mainland. If they are not adoptable or if there are insufficient resources to support relocation, they should be euthanized."
--by Rob Goldstein
Guttilla, D., & Stapp, P. (2010). Effects of sterilization on movements of feral cats at a wildland–urban interface Journal of Mammalogy, 91 (2), 482-489 DOI: 10.1644/09-MAMM-A-111.1
|
Heart murmurs and other sounds
A heart murmur is a blowing, whooshing, or rasping sound heard during a heartbeat. The sound is caused by turbulent (rough) blood flow through the heart valves or near the heart.
Chest sounds - murmurs; Heart sounds - abnormal; Murmur - innocent; Innocent murmur; Systolic heart murmur; Diastolic heart murmur
The heart has four chambers:
- Two upper chambers (atria)
- Two lower chambers (ventricles)
The heart has valves that close with each heartbeat, causing blood to flow in only one direction. The valves are located between the chambers.
Murmurs can happen for many reasons, such as:
- When a valve does not close tightly and blood leaks backward (regurgitation)
- When blood flows through a narrowed or stiff heart valve (stenosis)
There are several ways in which your doctor may describe a murmur:
- Murmurs are classified ("graded") depending on how loud the murmur sounds with a stethoscope. The grading is on a scale of 1 to 6; a grade I murmur can barely be heard. An example of a murmur description is a "grade II/VI murmur" (meaning the murmur is grade 2 on a scale of 1 to 6).
- In addition, a murmur is described by the stage of the heartbeat when the murmur is heard. A heart murmur may be described as systolic or diastolic.
When a murmur is more noticeable, the doctor may be able to feel it with the palm of the hand over the heart.
Things the doctor will look for in the exam include:
- Does the murmur occur when the heart is resting or contracting?
- Does it last throughout the heartbeat?
- Does it change when you move?
- Can it be heard in other parts of the chest, on the back, or in the neck?
- Where is the murmur heard the loudest?
Many heart murmurs are harmless. These types of murmurs are called innocent murmurs. They will not cause any symptoms or problems. Innocent murmurs do not need treatment.
Other heart murmurs may indicate an abnormality in the heart. These abnormal murmurs can be caused by:
- Problems of the aortic valve (aortic regurgitation, aortic stenosis)
- Problems of the mitral valve (chronic or acute mitral regurgitation, mitral stenosis)
- Hypertrophic cardiomyopathy (idiopathic hypertrophic subaortic stenosis)
- Pulmonary regurgitation (backflow of blood into the right ventricle, caused by failure of the pulmonary valve to close completely)
- Pulmonary valve stenosis
- Problems of the tricuspid valve (tricuspid regurgitation, tricuspid stenosis)
Significant murmurs in children are more likely to be caused by:
- Anomalous pulmonary venous return (an abnormal formation of the pulmonary veins)
- Atrial septal defect (ASD)
- Coarctation of the aorta
- Patent ductus arteriosus (PDA)
- Ventricular septal defect (VSD)
Multiple murmurs may result from a combination of heart problems.
Children often have murmurs as a normal part of development. These murmurs do not need treatment. They may include:
- Pulmonary flow murmurs
- Still's murmur
- Venous hum
What to Expect at Your Office Visit
A doctor or nurse can listen to your heart sounds by placing a stethoscope over your chest. You will be asked questions about your medical history and symptoms, such as:
- Have other family members had murmurs or other abnormal heart sounds?
- Do you have a family history of heart problems?
- Do you have chest pain, fainting, shortness of breath, or other breathing problems?
- Have you had swelling, weight gain, or bulging veins in the neck?
- Does your skin have a bluish color?
The doctor may ask you to squat, stand, or hold your breath while bearing down or gripping something with your hands to listen to your heart.
The following tests may be done:
Reviewed By: Michael A. Chen, MD, PhD, Associate Professor of Medicine, Division of Cardiology, Harborview Medical Center, University of Washington Medical School, Seattle, WA. Also reviewed by David Zieve, MD, MHA, Isla Ogilvie, PhD, and the A.D.A.M. Editorial team.
|
Choose a symbol to put into the number sentence.
Imagine a pyramid which is built in square layers of small cubes. If we number the cubes from the top, starting with 1, can you picture which cubes are directly below this first cube?
Can you make a cycle of pairs that add to make a square number
using all the numbers in the box below, once and once only?
Starting with the number 180, take away 9 again and again, joining up the dots as you go. Watch out - don't join all the dots!
Can you see why 2 by 2 could be 5? Can you predict what 2 by 10
The idea of this game is to add or subtract the two numbers on the dice and cover the result on the grid, trying to get a line of three. Are there some numbers that are good to aim for?
Can you use the numbers on the dice to reach your end of the number line before your partner beats you?
Start by putting one million (1 000 000) into the display of your
calculator. Can you reduce this to 7 using just the 7 key and add,
subtract, multiply, divide and equals as many times as you like?
Place the numbers from 1 to 9 in the squares below so that the difference between joined squares is odd. How many different ways can you do this?
Can you put the numbers 1 to 8 into the circles so that the four
calculations are correct?
Place six toy ladybirds into the box so that there are two ladybirds in every column and every row.
Find your way through the grid starting at 2 and following these
operations. What number do you end on?
Place the numbers 1 to 10 in the circles so that each number is the
difference between the two numbers just below it.
We start with one yellow cube and build around it to make a 3x3x3
cube with red cubes. Then we build around that red cube with blue
cubes and so on. How many cubes of each colour have we used?
Use the information about Sally and her brother to find out how many children there are in the Brown family.
Make one big triangle so the numbers that touch on the small triangles add to 10. You could use the interactivity to help you.
Can you hang weights in the right place to make the equaliser
In a square in which the houses are evenly spaced, numbers 3 and 10
are opposite each other. What is the smallest and what is the
largest possible number of houses in the square?
Choose four of the numbers from 1 to 9 to put in the squares so that the differences between joined squares are odd.
Place the numbers 1 to 6 in the circles so that each number is the
difference between the two numbers just below it.
Make your own double-sided magic square. But can you complete both
sides once you've made the pieces?
There are to be 6 homes built on a new development site. They could
be semi-detached, detached or terraced houses. How many different
combinations of these can you find?
If you have only four weights, where could you place them in order
to balance this equaliser?
If you have ten counters numbered 1 to 10, how many can you put into pairs that add to 10? Which ones do you have to leave out? Why?
Find all the numbers that can be made by adding the dots on two dice.
Sweets are given out to party-goers in a particular way. Investigate the total number of sweets received by people sitting in different positions.
There are 4 jugs which hold 9 litres, 7 litres, 4 litres and 2
litres. Find a way to pour 9 litres of drink from one jug to
another until you are left with exactly 3 litres in three of the
Can you each work out the number on your card? What do you notice?
How could you sort the cards?
How could you put eight beanbags in the hoops so that there are
four in the blue hoop, five in the red and six in the yellow? Can
you find all the ways of doing this?
Ahmed is making rods using different numbers of cubes. Which rod is twice the length of his first rod?
This challenge extends the Plants investigation so now four or more children are involved.
This challenge is about finding the difference between numbers which have the same tens digit.
This magic square has operations written in it, to make it into a
maze. Start wherever you like, go through every cell and go out a
total of 15!
Some Games That May Be Nice or Nasty for an adult and child. Use your knowledge of place value to beat your oponent.
In this game, you can add, subtract, multiply or divide the numbers
on the dice. Which will you do so that you get to the end of the
number line first?
Can you put plus signs in so this is true? 1 2 3 4 5 6 7 8 9 = 99
How many ways can you do it?
First Connect Three game for an adult and child. Use the dice numbers and either addition or subtraction to get three numbers in a straight line.
This challenging activity involves finding different ways to distribute fifteen items among four sets, when the sets must include three, four, five and six items.
In how many ways could Mrs Beeswax put ten coins into her three
puddings so that each pudding ended up with at least two coins?
This problem is based on a code using two different prime numbers
less than 10. You'll need to multiply them together and shift the
alphabet forwards by the result. Can you decipher the code?
Here you see the front and back views of a dodecahedron. Each
vertex has been numbered so that the numbers around each pentagonal
face add up to 65. Can you find all the missing numbers?
Zumf makes spectacles for the residents of the planet Zargon, who
have either 3 eyes or 4 eyes. How many lenses will Zumf need to
make all the different orders for 9 families?
There are 44 people coming to a dinner party. There are 15 square
tables that seat 4 people. Find a way to seat the 44 people using
all 15 tables, with no empty places.
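One common reading of this puzzle (an assumption, since the text does not say how tables may be joined) is that k square tables pushed together in a row seat 2k + 2 people. Under that assumption, seats = 2×15 + 2r for r rows, so 44 seats forces exactly 7 rows, and a short search lists the row sizes:

```python
from itertools import combinations_with_replacement

# Assumption: a row of k square tables seats 2k + 2 people. With r rows
# using all 15 tables, seats = 2*15 + 2r, so 44 seats requires r = 7 rows.
arrangements = [rows
                for rows in combinations_with_replacement(range(1, 16), 7)
                if sum(rows) == 15]
```

Each tuple gives one way to split the 15 tables into 7 rows.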
This article gives you a few ideas for understanding the Got It! game and how you might find a winning strategy.
Arrange eight of the numbers between 1 and 9 in the Polo Square
below so that each side adds to the same total.
Use these head, body and leg pieces to make Robot Monsters which
are different heights.
How have the numbers been placed in this Carroll diagram? Which
labels would you put on each row and column?
Use the interactivities to fill in these Carroll diagrams. How do you know where to place the numbers?
Can you make a train the same length as Laura's but using three differently coloured rods? Is there only one way of doing it?
A game for 2 people using a pack of cards. Turn over 2 cards and try
to make an odd number or a multiple of 3.
|
Theorem 1.3: There is a uniquely determined line going through a given point with a given slope. If the given point is (x1, y1), and the line is vertical, then the equation is

x = x1

If the line is not vertical, then the equation can be written as

y - y1 = m(x - x1)

which can be simplified to

y = mx + (y1 - mx1)

Proof: If the point is (x1, y1) and the line is a vertical line then its equation is

x = x1

which all of the points will satisfy, since by definition, all points on a vertical line have the same x-coordinate.

If the line is not vertical, then by Theorem 1.2, it satisfies an equation of the form y = mx + b. If (x, y) is any other point on the line, then by Theorem 1.1, the slope between it and (x1, y1) will be m. That is to say,

(y - y1) / (x - x1) = m

Clear the denominator:

y - y1 = m(x - x1)

To get this into the form y = mx + b, solve for y. Remove parentheses:

y = mx - mx1 + y1

which is of the form y = mx + b if b = y1 - mx1.
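The point-slope construction in the proof is easy to check numerically; this small sketch (not part of the original text) builds the line y = mx + b with b = y1 - m*x1:

```python
# Build the non-vertical line of Theorem 1.3 through (x1, y1) with slope m.
def line_through(x1, y1, m):
    b = y1 - m * x1            # the intercept derived in the proof
    return lambda x: m * x + b

f = line_through(2.0, 3.0, 4.0)   # the line of slope 4 through (2, 3)
```

The resulting line passes through the given point, and the slope between any two of its points is m, as Theorem 1.1 requires.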
|
The former Colonies were able to expand into North America because they had military backing and a desire for additional resources and land. As colonial immigration increased, the growing population, landing on the East Coast, kept pushing westward.
After the formation of the United States, the displacement of Native Americans occurred wholesale: they no longer had any European power, British or French, to support them, and ultimately they could not resist the onslaught of immigration. As more territories became states, boundaries kept pushing across the continent.
By 1890, the Frontier was gone; the expansionist tendency then left the continental United States and began to push into the Caribbean and Pacific. After World War II, the creation of new states and territories was no longer required, but only the incorporation of those areas into the American Sphere of Influence.
|
Turtle embryology said to connect the evolutionary dots and reveal transitional turtle.
The origin of the turtle shell has long been a bone of contention among evolutionists. The latest fossil to make an appearance in this saga is the South African Eunotosaurus, which at a purported age of 260 million years is being billed as the transitional turtle connection to other reptiles.
Embryologically, the parts of a turtle shell form separately. The ribs become broader, flatter, and cross-sectionally T-shaped. The neural spines of the vertebrae broaden. Bony plates develop around the edge of the developing carapace. Finally, the parts fuse. Some evolutionists argue that turtle evolution must have followed this pattern. Others believe the shell evolved as external bony scales fused with internal ribs. The two most recent additions to the turtle evolution tale are claimed as evidence for the evolutionary pathway “predicted” by embryology. “It was very contentious,” says Tyler Lyson, lead author of “Evolutionary Origin of the Turtle Shell” published in Current Biology. “For the past 200 years, there’s been a lot of ink spilled on the question.”
Unlike other armored animals, a turtle’s protective covering consists of fused bones, not external scales. A turtle shell is a fusion of vertebrae, ribs, and pelvic bones. The carapace—the convex upper shell—is a fusion of broad flattened ribs and vertebrae with dermal bone. The shell is typically covered with scutes made of keratin, but because these scutes have a bony base in the dermis (the living part of the skin), they are also called osteoderms. The plastron is the flatter shell covering a turtle’s underbelly. Among living turtles, the leatherback marine turtle lacks a solid carapace, and even among turtles with hard shells, there are variations.
Until recently every known turtle fossil looked like today’s turtles, essentially unchanged for 200 million years (according to evolutionary reckoning). There is no anatomical structure similar to a turtle shell in other animals to help evolutionists guess at evolutionary relationships. A confusing array of anatomical, fossil, and genetic data has left turtles in a clade by themselves, without a clear-cut evolutionary ancestor.
This sequence is from an animation prepared to show how shells could have gradually evolved in these four varieties of turtles. The Eunotosaurus lacks a plastron but has ribs and vertebrae typical of turtles. Odontochelys is thought to have been a marine turtle and has a well-developed plastron; the broad ribs of its carapace are unfused. Proganochelys has both plastron and carapace typical of most modern turtles. Chelydra is a modern snapping turtle. The number and shape of the broad flat trunk vertebrae and ribs, seen in all these fossils, suggests that all are turtles but does not demonstrate evolutionary transitions, only variation. Images: T. Lyson through www.popsci.com
Evolutionary paleontologists trying to piece together the puzzle of the turtle shell also debate whether the turtle evolved on land and then lost its hard shell to produce some marine turtles or, alternatively, made its evolutionary appearance in the water and crawled out onto land as the shell evolved. Proganochelys, with a heavy shell, short toes, and a spiked tail—dated 210 million years by evolutionists—appears to have been terrestrial. But the turtle’s evolutionary origin shifted to the sea in 2008 with the discovery of Odontochelys. Said to be 220 million years old, the toothed Odontochelys had a well-developed plastron but no fused carapace. Modern leatherback turtles also lack a fused carapace. Nevertheless, its discoverer argued that its broad ribs but lack of a solid carapace made Odontochelys a transitional turtle, “an ideal missing link for turtle evolution,” supporting the embryologic theory.
This approach to coming up with an origins story is of course just a version of the fallacious “ontogeny recapitulates phylogeny” mantra. The existence of animals, apparently fully functional ones, with various traits resembling an embryologic stage is no proof that embryologic development retraces an evolutionary pathway.
The latest fossil relevant to the turtle story is Eunotosaurus africanus. Dated 260 million years, Eunotosaurus now takes its place as the earliest known turtle. Lyson and colleagues report Eunotosaurus—like modern turtles, Proganochelys, and Odontochelys—had 9 elongated trunk vertebrae, 9 broad T-shaped ribs that were touching, and no evidence of intercostal muscles between them.1 Based on the fossilized marks of fibers that connect muscle to bone, the muscles that would have assisted its breathing appeared to have been located beneath these ribs,2 as seen in modern turtles. Unlike modern turtles and Odontochelys, Eunotosaurus lacked broad protrusions from its vertebral bones. No fused plastron was evident.
Lyson considers the broad ribs to be the equivalent of a primitive carapace. In the absence of broad vertebral protrusions and external scales, he considers Eunotosaurus as a sort of proto-turtle at the base of an evolutionary path dictated by turtle embryology. “The first thing we see in a developing turtle embryo,” he says, “is the broadening of its ribs, followed by the broadening of its vertebrae, and finally by the acquisition of the osteoderms along the perimeter of the shell.”3 Lyson believes that Eunotosaurus and Odontochelys, because they conform somewhat to turtle embryologic stages, vindicate his version of the turtle evolutionary story.
“The turtle shell is a complex structure whose initial transformations started over 260 million years ago in the Permian period. The shell evolved over millions of years and was gradually modified into its present-day shape,”4 Lyson explains. “Eunotosaurus neatly fills an approximately 30-55-million year gap in the turtle fossil record. There are several anatomical and developmental features that indicate Eunotosaurus is an early representative of the turtle lineage; however, its morphology is intermediate between the specialised shell found in modern turtles and primitive features found in other vertebrates.”5 Lyson therefore believes “Eunotosaurus is a good transitional fossil which bridges the morphological gap between turtles and other reptiles.”4
Embryology reveals a continuous sequence of steps that successfully produce a fully developed organism. It is only natural that evolutionists trying to construct a step-by-step just-so story for an animal’s origins look to embryology for guidance. Embryology, however, is observable; the evolution of complexity is not. And the epic difference is that the developing embryo already possesses the genetic blueprint for all the structural steps through which its body passes. A hypothetical evolutionary ancestor would not possess the information for the next step (or for the infinite number of next steps required in a random trial-and-error process), and there is no observable mechanism in biology to show how it could acquire the information for increasing complexity. Mutations lose genetic information; they do not add the complexity needed to even incrementally produce a more complex creature.
What these turtle fossils reveal is not a series of non-turtles evolving into turtles but just varieties of turtles.
The biblical account of origins fits observable biology. God created all “kinds” of animals about 6,000 years ago. Many creation scientists suspect the original creation included at least two kinds of turtles. Mutations, reshuffling of the genetic information in those turtles, natural selection, and other ordinary genetic processes explain the variations and even loss of some traits in their descendants. Loss of genetic information, not evolutionary gain, explains the biodiversity we see in turtles today and in the fossil record, where some extinct and some modern types of turtles were buried during the global Flood.
|
© Teachers’ Curriculum Institute
Organizational Text Patterns
Expository texts, such as chapters in textbooks, have different organizational patterns. These patterns, or structures, can often be identified by signal words.
Text Pattern: Cause and Effect
What is it?
Text organized to show cause and effect identifies the reasons that events occur and their results.
How to do it.
Signal words that help identify a cause and effect pattern include the following: as a result, because, consequently, due to, effects of, for, for this reason, hence, how, if . . . then, in order to, is caused by, leads to, may be due to, so, so that, thereby, therefore, thus, when . . . then.
Read the following passage. Then list causes and effects in a graphic organizer like the one below.
As a result of the Civil War, many Americans began thinking of the United States as one country, rather than as a collection of sovereign states. Slavery no longer existed because of the war. There were terrible costs, though. Due to the war, more than 620,000 soldiers lay dead. Croplands lay in ruins. It would take generations for the South to recover.
Graphic Organizer: Cause and Effect
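The organizer itself is not reproduced here; as a minimal sketch, the cause-and-effect pairs from the sample passage can be listed directly:

```python
# Cause-and-effect pairs drawn from the sample Civil War passage above.
cause = "the Civil War"
effects = [
    "Americans began thinking of the United States as one country",
    "slavery no longer existed",
    "more than 620,000 soldiers lay dead",
    "croplands lay in ruins",
]
```

In a classroom organizer, the cause sits in one box with an arrow to each effect.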
|
This map shows the five ocean drainage areas in Canada, the major river basins, the internal drainage areas and the diverted drainage areas. A drainage basin, sometimes called a watershed, is an area where all surface water shares the same drainage outlet. Surface water consists of the tiny trickles of water flowing on the surface of the earth that develop into larger streams and eventually combine to form a river. The boundary of a watershed is called a drainage divide.
|
The trigonometric functions (also called the circular functions) are functions of an angle.
They are used to relate the angles of a triangle to the lengths of its sides.
Trigonometric functions are important in the study of triangles and modeling periodic phenomena, among many other applications.
The most familiar trigonometric functions are the sine, cosine, and tangent.
In the context of the standard unit circle with radius 1, where a triangle is formed by a ray originating at the origin and making some angle with the x-axis, the sine of the angle gives the length of the y-component (rise) of the triangle, the cosine gives the length of the x-component (run), and the tangent function gives the slope (y-component divided by the x-component).
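A quick numeric check of the unit-circle description above (a sketch using Python's math module):

```python
import math

# On the unit circle, cos(angle) is the x-component (run), sin(angle) the
# y-component (rise), and tan(angle) their ratio (slope).
angle = math.pi / 3
run, rise = math.cos(angle), math.sin(angle)
slope = rise / run
```

The point (run, rise) lies on the circle of radius 1, and slope agrees with tan(angle).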
|
A pattern rule contains the character ‘%’ (exactly one of them) in the target; otherwise, it looks exactly like an ordinary rule. The target is a pattern for matching file names; the ‘%’ matches any nonempty substring, while other characters match only themselves.
For example, ‘%.c’ as a pattern matches any file name that ends in ‘.c’. ‘s.%.c’ as a pattern matches any file name that starts with ‘s.’, ends in ‘.c’ and is at least five characters long. (There must be at least one character to match the ‘%’.) The substring that the ‘%’ matches is called the stem.
‘%’ in a prerequisite of a pattern rule stands for the same stem that was matched by the ‘%’ in the target. In order for the pattern rule to apply, its target pattern must match the file name under consideration and all of its prerequisites (after pattern substitution) must name files that exist or can be made. These files become prerequisites of the target.
Thus, a rule of the form
%.o : %.c ; recipe…
specifies how to make a file n.o, with another file n.c as its prerequisite, provided that n.c exists or can be made.
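The stem mechanics can be illustrated outside make itself; this Python sketch (an illustration, not make's actual implementation) mimics how a target pattern yields a stem and a substituted prerequisite:

```python
# Mimic pattern matching: '%' matches any nonempty substring (the stem),
# which is then substituted into the prerequisite pattern.
def match_stem(pattern, name):
    prefix, _, suffix = pattern.partition("%")
    if (name.startswith(prefix) and name.endswith(suffix)
            and len(name) > len(prefix) + len(suffix)):
        return name[len(prefix):len(name) - len(suffix)]
    return None

stem = match_stem("%.o", "foo.o")        # "foo"
prereq = "%.c".replace("%", stem)        # "foo.c"
```

So a target foo.o matched by ‘%.o’ produces the stem foo, and the prerequisite pattern ‘%.c’ becomes foo.c.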
There may also be prerequisites that do not use ‘%’; such a prerequisite attaches to every file made by this pattern rule. These unvarying prerequisites are useful occasionally.
A pattern rule need not have any prerequisites that contain ‘%’, or in fact any prerequisites at all. Such a rule is effectively a general wildcard. It provides a way to make any file that matches the target pattern. See Last Resort.
More than one pattern rule may match a target. In this case
make will choose the “best fit” rule. See How Patterns Match.
Pattern rules may have more than one target. Unlike normal rules,
this does not act as many different rules with the same prerequisites
and recipe. If a pattern rule has multiple targets, make knows
that the rule’s recipe is responsible for making all of the targets.
The recipe is executed only once to make all the targets. When
searching for a pattern rule to match a target, the target patterns of
a rule other than the one that matches the target in need of a rule
are incidental: make worries only about giving a recipe and
prerequisites to the file presently in question. However, when this
file’s recipe is run, the other targets are marked as having been
updated themselves.
|
Congress of Tucumán
The Congress of Tucumán was the representative assembly, initially meeting in San Miguel de Tucumán, that declared the independence of the United Provinces of South America (modern-day Argentina, Uruguay, part of Bolivia) on July 9, 1816, from the Spanish Empire.
Following the May Revolution of 1810, the Viceroy had been replaced by the Primera Junta. The provinces had been moving towards full independence, but royalist forces from the Viceroyalty of Peru had the upper hand in Upper Peru and were threatening the revolution.
On April 15, 1815, a revolution ended the mandate of Carlos María de Alvear and called a General Congress. Delegate deputies, each representing 15,000 inhabitants, were sent from all the provinces to the sessions that started on March 24, 1816. Nevertheless, some territories that formerly belonged to the Viceroyalty of the River Plate did not take part in the Congress: the delegates from the Banda Oriental ('Eastern Bank', today Uruguay) and other Liga Federal provinces, faithful to the democratic federalist project of José Gervasio Artigas were rejected based on formalities; Paraguay had already proclaimed its independence from Spain and remained isolated from the United Provinces politics. Representatives from Upper Peru Provinces (current Bolivia) were, however, present.
The congress was inaugurated in the house of Francisca Bazán de Laguna, consisted of 33 deputies, and its presidency rotated on a monthly basis. Because the congress had freedom to select the agenda, there were endless discussions. On July 9, it declared the independence of the United Provinces of South America, a name that was intended to appeal and eventually incorporate other Spanish American independentist regions that were not represented at the Congress.
At that time, the President of the Congress was Francisco Narciso de Laprida, delegate from San Juan Province. Subsequent discussions centred on the form of government that the young state should have and where the Congress and the executive power should reside.
The congress continued its work in Buenos Aires from 1817 and issued a Constitution in 1819, but the Constitution was rejected and the Congress was dissolved in 1820 after the Federal League provinces of Santa Fe and Entre Ríos defeated a diminished Directorship army at the Battle of Cepeda, which brought the Unitarian (i.e., Centralist) versus Federal conflict onto the battlefield.
The house where the declaration was made was rebuilt and is now a museum and monument. It is known as the "House of Tucumán".
Signatories of the declaration
- Francisco Narciso de Laprida, Deputy for San Juan, President
- Mariano Boedo, Deputy for Salta, Vice-president
- José Mariano Serrano, Deputy for Charcas (present Bolivia), Secretary
- Juan José Paso, Deputy for Buenos Aires, Secretary
- Dr. Antonio Sáenz, Deputy for Buenos Aires
- Dr. José Darragueira, Deputy for Buenos Aires
- Friar Cayetano José Rodríguez, Deputy for Buenos Aires
- Dr. Pedro Medrano, Deputy for Buenos Aires
- Dr. Manuel Antonio Acevedo, Deputy for Catamarca
- Dr. José Ignacio de Gorriti, Deputy for Salta
- Dr. José Andrés Pacheco de Melo, Deputy for Chichas (present Bolivia)
- Dr. Teodoro Sánchez de Bustamante, Deputy for Jujuy
- Eduardo Pérez Bulnes, Deputy for Córdoba
- Tomás Godoy Cruz, Deputy for Mendoza
- Dr. Pedro Miguel Aráoz, Deputy for Tucumán
- Dr. Esteban Agustín Gazcón, Deputy for Buenos Aires
- Pedro Francisco de Uriarte, Deputy for Santiago del Estero
- Pedro León Gallo, Deputy for Santiago del Estero
- Pedro Ignacio Rivera, Deputy for Mizque (present Bolivia)
- Dr. Mariano Sánchez de Loria, Deputy for Charcas (present Bolivia)
- Dr. José Severo Malabia, Deputy for Charcas (present Bolivia)
- Dr. Pedro Ignacio de Castro Barros, Deputy for La Rioja
- Lic. Gerónimo Salguero de Cabrera y Cabrera, Deputy for Córdoba
- Dr. José Colombres, Deputy for Catamarca
- Dr. José Ignacio Thames, Deputy for Tucumán
- Friar Justo de Santa María de Oro, Deputy for San Juan
- José Antonio Cabrera, Deputy for Córdoba
- Dr. Juan Agustín Maza, Deputy for Mendoza
- Tomás Manuel de Anchorena, Deputy for Buenos Aires
|
In Roman religion, Terminus was the god who protected boundary markers; his name was the Latin word for such a marker. Sacrifices were performed to sanctify each boundary stone, and landowners celebrated a festival called the "Terminalia" in Terminus' honor each year on February 23. The Temple of Jupiter Optimus Maximus on the Capitoline Hill was thought to have been built over a shrine to Terminus, and he was occasionally identified as an aspect of Jupiter under the name "Jupiter Terminalis".
Ancient writers believed that the worship of Terminus had been introduced to Rome during the reign of the first king Romulus (traditionally 753–717 BC) or his successor Numa (717–673 BC). Modern scholars have variously seen it as the survival of an early animistic reverence for the power inherent in the boundary marker, or as the Roman development of proto-Indo-European belief in a god concerned with the division of property.
The name of the god Terminus was the Latin word for a boundary stone, and his worship as recorded in the late Republic and Empire centred on this stone, with which the god could be identified. Siculus Flaccus, a writer on land surveying, records the ritual by which the stone was sanctified: the bones, ashes, and blood of a sacrificial victim, along with crops, honeycombs, and wine, were placed into a hole at a point where estates converged, and the stone was driven in on top. On February 23 annually, a festival called the Terminalia was celebrated in Terminus' honor, involving practices which can be regarded as a reflection or "yearly renewal" of this foundational ritual. Neighboring families would garland their respective sides of the marker and make offerings to Terminus at an altar—Ovid identifies these, again, as crops, honeycombs, and wine. The marker itself would be drenched in the blood of a sacrificed lamb or pig. There followed a communal feast and hymns in praise of Terminus.
These rites were practised by private landowners, but there were also related public ceremonies. Ovid refers to the sacrifice of a sheep on the day of the Terminalia at the sixth milestone from Rome along the Via Laurentina; it is likely this was thought to have marked the boundary between the early Romans and their neighbors in Laurentum. Also, a stone or altar of Terminus was located in the Temple of Jupiter Optimus Maximus on Rome's Capitoline Hill. Because of a belief that this stone had to be exposed to the sky, there was a small hole in the ceiling directly above it. On occasion Terminus' association with Jupiter extended to regarding Terminus as an aspect of that god; Dionysius of Halicarnassus refers to "Jupiter Terminalis", and one inscription names a god "Juppiter Ter."
There is some evidence that Terminus' associations could extend from property boundaries to limits more generally. Under the Republican calendar, when the intercalary month Mercedonius was added to a year, it was placed after February 23 or February 24, and some ancient writers believed that the Terminalia on February 23 had once been the end of the year. Diocletian's decision in 303 AD to initiate his persecution of Christians on February 23 has been seen as an attempt at enlisting Terminus "to put a limit to the progress of Christianity".
Ancient authors agreed that the worship of Terminus was of Sabine origin, ascribing its introduction to Rome either to Titus Tatius, the Sabine colleague of Rome's founding king Romulus (traditional reign 753–717 BC), or to Romulus' successor Numa Pompilius (717–673 BC). Those authors who gave the credit to Numa explained his motivation as the prevention of violent disputes over property. Plutarch further states that, in keeping with Terminus's character as a guarantor of peace, his earliest worship did not involve blood sacrifices.
The stone in the Capitoline Temple was believed to have been among the altars located on the Capitoline Hill before the Temple was built under Tarquinius Priscus (traditional reign 616–579 BC) or Tarquinius Superbus (535–510 BC). When the augurs took the auspices to discover whether the god or goddess of each altar was content for it to be moved, Terminus refused permission, either alone or along with Juventas the goddess of youth. The stone was therefore included within the Capitoline Temple, and its immovability was regarded as a good omen for the permanence of the city's boundaries.
According to the dominant scholarly view during the late 19th and much of the 20th century, Roman religion was originally animistic, directed towards spirits associated with specific objects or activities which were only later perceived as gods with independent personal existence. Terminus, with his lack of mythology and his close association with a physical object, seemed a clear example of a deity who had developed little from such a stage.
This view of Terminus retains some recent adherents, but other scholars have argued from Indo-European parallels that the personalised gods of Roman religion must have preceded the city's foundation. Georges Dumézil regarded Jupiter, Juventas and Terminus as the Roman form of a proto-Indo-European triad, comparing the Roman deities respectively to the Vedic Mitra, Aryaman, and Bhaga. In this view the sovereign god (Jupiter/Mitra) was associated with two minor deities, one concerned with the entry of men into society (Juventas/Aryaman) and the other with the fair division of their goods (Terminus/Bhaga).
Notes and references
- Herbert Jennings Rose; and John Scheid (2003). "Terminus". In Simon Hornblower and Antony Spawforth. The Oxford Classical Dictionary (3rd edition, revised ed.). Oxford: Oxford University Press. pp. 1485–1486. ISBN 0-19-860641-9.
- Ovid, Fasti 2.639–684.
- Siculus Flaccus, De Condicionibus Agrorum 11.
- W. Warde Fowler (1899). The Roman Festivals of the Period of the Republic: An Introduction to the Study of the Religion of the Romans. London: Macmillan and Co. pp. 324–327. Retrieved 2007-03-24.
- H. H. Scullard (1981). Festivals and Ceremonies of the Roman Republic. London: Thames and Hudson. pp. 79–80. ISBN 0-500-40041-5.
- Samuel Ball Platner; and Thomas Ashby (1929). "Terminus, Fanum". A Topographical Dictionary of Ancient Rome. London: Oxford University Press. p. 512. Retrieved 2007-03-19.
- Dionysius of Halicarnassus, Roman Antiquities 2.74.2–5.
- Georges Dumézil (1996). Archaic Roman Religion: Volume One. trans. Philip Krapp. Baltimore: Johns Hopkins University Press. pp. 200–203. ISBN 0-8018-5482-2.
- Herbert Jennings Rose; and Simon R. F. Price (2003). "Calendar, Roman". In Simon Hornblower and Antony Spawforth. The Oxford Classical Dictionary (3rd edition, revised ed.). Oxford: Oxford University Press. p. 274. ISBN 0-19-860641-9.
- Varro, De Lingua Latina 6.3; Ovid, Fasti 2.47–54.
- J. H. W. G. Liebeschuetz (1979). Continuity and Change in Roman Religion. Oxford: Oxford University Press. p. 247. ISBN 0-19-814822-4.
- Varro, De Lingua Latina 5.10.
- Plutarch, Roman Questions 15; Numa 16.
- Livy 1.55; Dionysius of Halicarnassus, Roman Antiquities 3.69.3–6.
- Piccaluga, Giulia (1974). Terminus: I segni di confine nella religione romana (in Italian). Rome: Edizioni dell'Ateneo. OCLC 1989261.
- Woodard, Roger D. (2006). Indo-European Sacred Space. Vedic and Roman Cult. Urbana-Chicago: University of Illinois Press. ISBN 0-252-02988-7.
- Reviewed by Marco V. García-Quintela (2007), Bryn Mawr Classical Review 2007.02.36. Retrieved on June 13, 2007.
|
How it Happens: The Physics Behind the Urban Heat Island Effect
To understand the urban heat island effect, we first need to understand a few simple rules of physics. Most importantly, we should understand that objects can absorb and reflect light. In fact, the color of an object depends on what kind of light it reflects. For example, a green object reflects green light and absorbs all the other visible colors of light. When we see a green object, we perceive it as green because it reflects the green wavelength of color back to our eyes. Darker colored objects are excellent absorbers of light. In fact, black surfaces absorb almost all light. On the other hand, lighter colored surfaces do not absorb much light at all -- rather they reflect almost all of it.
So what does the absorption of light have to do with heat? When an object absorbs light, it converts that light to thermal energy, and emits it back out as heat. So, because black objects absorb more light, they also emit more heat. That's why wearing a black shirt on a hot, sunny day will only make you hotter. The black shirt absorbs light and emits it as heat onto your skin. Wearing a white shirt, on the other hand, will help reflect the sunlight and keep you cooler.
The fraction of solar radiation that an object reflects is called its albedo [source: Budikova]. The higher the albedo, the more radiation the object reflects. Traditional asphalt has a low albedo, which means it reflects radiation poorly and instead absorbs it.
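The absorbed fraction is 1 minus the albedo; with illustrative (assumed, not measured) albedo values, the asphalt-versus-lighter-surface difference is easy to see:

```python
# Illustrative numbers only: absorbed solar power per square metre is
# incoming * (1 - albedo).
incoming = 1000.0            # W/m^2, a typical peak solar irradiance
albedo_asphalt = 0.08        # assumed low albedo for dark asphalt
albedo_concrete = 0.35       # assumed higher albedo for a lighter surface
absorbed_asphalt = incoming * (1 - albedo_asphalt)
absorbed_concrete = incoming * (1 - albedo_concrete)
```

Under these assumed values, the darker surface absorbs hundreds of watts more per square metre, heat it later re-emits into the city.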
When we build and expand cities, we tend to erect buildings with dark surfaces and lay down asphalt pavement. The buildings and the pavement absorb a significant amount of light and radiation and emit it as heat, warming the city. Because more than half of the surfaces in cities are man-made, cities heat up more than rural areas, where structures are less concentrated [source: EPA]. This heat absorption is why the temperature difference between cities and rural areas is highest a few hours after sunset. Cities hold on to more heat for a longer period of time than rural areas do [source: EPA].
But that's not the only thing that causes the urban heat island effect. Scientists believe that vegetation plays a large part in keeping an area cool through a process called evaporative cooling. Evaporation is when liquid turns into gas. Plants take in water through their roots and depend on it to live. But after the plant is done with it, dry air absorbs that water by turning it into gaseous water vapor. The air provides the heat that drives this process, so during the process, the air loses heat and becomes cooler. We experience the same type of thing when we sweat -- when air hits your sweaty skin, it absorbs the moisture and cools the air around you [source: Asimakopoulos]. Because building a city means replacing vegetation with structures, the city loses the evaporative cooling advantages of vegetation.
Other factors also contribute to the effect. For instance, cars and air conditioners, which are ubiquitous in urban areas, convert energy to heat and release this heat into the air.
Now that we know what's causing this phenomenon, let's learn the steps to reduce it.
In population genetics, the founder effect is the loss of genetic variation that occurs when a new population is established by a very small number of individuals from a larger population. It was first fully outlined by Ernst Mayr in 1942, using existing theoretical work by those such as Sewall Wright. As a result of the loss of genetic variation, the new population may be distinctively different, both genotypically and phenotypically, from the parent population from which it is derived. In extreme cases, the founder effect is thought to lead to the speciation and subsequent evolution of new species.
As a simple illustration, imagine an original population with nearly equal numbers of blue and red individuals. Small founder populations drawn from it may end up dominated by one color or the other (the founder effect), purely through random sampling of the original population. A population bottleneck may also cause a founder effect, even though the bottlenecked group is not strictly a new population.
The founder effect occurs when a small group of migrants that is not genetically representative of its source population establishes itself in a new area. In addition to founder effects, the new population is often very small and so shows increased sensitivity to genetic drift, an increase in inbreeding, and relatively low genetic variation. This can be observed in the limited gene pools of Icelanders, Faroe Islanders, Easter Islanders, and the natives of Pitcairn Island. Another example is the historically high rate of deafness on Martha's Vineyard, which led to the development of Martha's Vineyard Sign Language.
In genetics, a founder mutation is a mutation that appears in the DNA of one or more individuals who are founders of a distinct population. Founder mutations initiate with changes that occur in the DNA and can get passed down to other generations.
Founder mutations originate in long stretches of DNA on a single chromosome—indeed, the original haplotype is the whole chromosome. As the generations progress, the proportion of the haplotype that is common to all carriers of the mutation is shortened (due to genetic recombination). This shortening allows scientists to roughly estimate the age of the mutation.
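The age estimate mentioned above can be sketched with a deliberately crude model (an illustrative assumption, not a published estimator): if recombination breakpoints accumulate independently each generation, the ancestral segment conserved on each side of the mutation shrinks to roughly 1/g Morgans after g generations, so the haplotype length shared by all carriers suggests a rough age.

```python
# Back-of-the-envelope sketch: treat recombination breakpoints as accruing
# independently each generation, so the ancestral segment conserved on EACH
# side of the mutation is expected to be about 1/g Morgans after g
# generations. A total shared haplotype of L Morgans then suggests an age
# of roughly g = 2 / L generations. This is a toy estimator for intuition.
def rough_mutation_age(shared_haplotype_cm):
    length_morgans = shared_haplotype_cm / 100.0  # 100 centimorgans = 1 Morgan
    return 2.0 / length_morgans

# A mutation whose carriers all share a 4 cM haplotype around it:
print(rough_mutation_age(4.0))  # about 50 generations
```

At roughly 25 years per human generation, 50 generations corresponds to a mutation on the order of a thousand years old.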
The founder effect is a special case of genetic drift, occurring when a small group in a population splinters off from the original population and forms a new one. The new colony may have less genetic variation than the original population, and through the random sampling of alleles during reproduction of subsequent generations, continue rapidly towards fixation. This consequence of inbreeding makes the colony more vulnerable to extinction.
When a newly formed colony is small, its founders can strongly affect the population's genetic makeup far into the future. In humans, which have a slow reproduction rate, the population will remain small for many generations, effectively amplifying the drift effect generation after generation until the population reaches a certain size. Alleles that were present but relatively rare in the original population can move to one of two extremes. Most commonly, the allele is soon lost altogether; the other possibility is that the allele survives and within a few generations becomes much more widespread throughout the population. The new colony can also experience an increase in the frequency of recessive alleles, and as a result an increased number of individuals homozygous for certain recessive traits.
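The two extremes described above (loss or spread of a rare allele) can be sketched with a minimal Wright-Fisher simulation. The population size, starting frequency, and random seed below are illustrative choices, not values from the text:

```python
import random

# Minimal Wright-Fisher sketch: a rare allele (starting frequency 5%) in a
# small founding colony usually disappears, but occasionally drifts all the
# way to fixation -- the two extremes described above.
def drift(pop_size, start_freq, generations, rng):
    freq = start_freq
    for _ in range(generations):
        # Each of the 2N gene copies in the next generation is drawn at random
        # from the current generation's allele frequency.
        copies = sum(rng.random() < freq for _ in range(2 * pop_size))
        freq = copies / (2 * pop_size)
        if freq in (0.0, 1.0):  # allele lost or fixed: drift is over
            break
    return freq

rng = random.Random(1)  # fixed seed so the sketch is reproducible
outcomes = [drift(pop_size=20, start_freq=0.05, generations=200, rng=rng)
            for _ in range(1000)]
lost = sum(f == 0.0 for f in outcomes)
fixed = sum(f == 1.0 for f in outcomes)
print(f"lost: {lost/10:.1f}%  fixed: {fixed/10:.1f}%")
```

For a neutral allele, the theoretical fixation probability equals its starting frequency, so loss should dominate by roughly 95% to 5%.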
The variation in gene frequency between the original population and the colony may also cause the two groups to diverge significantly over the course of many generations. As the variance, or genetic distance, increases, the two separated populations may become distinctively different, both genetically and phenotypically, although not only genetic drift but also natural selection, gene flow and mutation contribute to this divergence. This potential for relatively rapid changes in the colony's gene frequency led most scientists to consider the founder effect (and by extension, genetic drift) a significant driving force in the evolution of new species. Sewall Wright was the first to attach this significance to random drift and small, newly isolated populations with his shifting balance theory of speciation. Following Wright, Ernst Mayr created many persuasive models to show that the decline in genetic variation and the small population size accompanying the founder effect were critically important for new species to develop. However, there is much less support for this view today, since the hypothesis has been tested repeatedly through experimental research and the results have been equivocal at best. Speciation by genetic drift is a specific case of peripatric speciation, which itself occurs only rarely. It takes place when a random change in the gene frequency of a population favours the survival of a few organisms carrying rare alleles that affect reproduction. These surviving organisms then breed among themselves over a long period of time, creating a whole new species whose reproductive systems or behaviors are no longer compatible with the original population.
Serial founder effect
Serial founder effects have occurred when populations migrate over long distances. Such long distance migrations typically involve relatively rapid movements followed by periods of settlement. The populations in each migration carry only a subset of the genetic diversity carried from previous migrations. As a result, genetic differentiation tends to increase with geographic distance as described by the "Isolation by distance" model. The migration of humans out of Africa is characterized by serial founder effects. Africa has the highest degree of genetic diversity, which is consistent with an African origin of modern humans. After the initial migration from Africa, the Indian subcontinent was the first major settling point for modern humans. Consequently, India has the second highest genetic diversity in the world. In general, the genetic diversity of the Indian subcontinent is a subset of Africa, and the genetic diversity outside Africa is a subset of India.
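The stepwise loss of diversity can be sketched with a standard population-genetics approximation: a founding event of N individuals is expected to retain a fraction 1 - 1/(2N) of the source population's heterozygosity. The founder sizes and starting heterozygosity below are illustrative, not measured values:

```python
# Sketch of the serial founder idea: each founding event of N individuals
# retains, in expectation, a fraction (1 - 1/(2N)) of the source
# population's heterozygosity, so genetic diversity declines with each
# successive long-distance migration.
def heterozygosity_after_founders(h0, founder_sizes):
    h = h0
    for n in founder_sizes:
        h *= 1.0 - 1.0 / (2 * n)  # expected loss from one founding event
    return h

# Five successive colonizations, each founded by 50 individuals:
h = heterozygosity_after_founders(h0=0.30, founder_sizes=[50] * 5)
print(f"remaining heterozygosity: {h:.4f}")
```

Each population along the route carries a strict subset of the diversity of the one before it, which is the pattern the out-of-Africa data show.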
Founder effects in island ecology
Founder populations are essential to the study of island biogeography and island ecology. A natural "blank slate" is not easily found, but a classic series of studies on founder population effects followed the catastrophic 1883 eruption of Krakatoa, which erased all life on the island. Another continuing study has followed the biocolonization of Surtsey, a new volcanic island off Iceland that erupted between 1963 and 1967. An earlier event, the Toba eruption in Sumatra about 73,000 years before present, covered parts of India with 3–6 metres (10–20 ft) of ash and must have coated the Nicobar Islands and Andaman Islands, much nearer in the ash fallout cone, with life-smothering layers, forcing their biodiversity to restart from zero. A four-year experiment studied the genetic diversity of seven founding populations of Anolis lizards, each on its own island, and found that though there was evidence of natural selection, the differences in genetic variation between the islands were better explained by the genetic variation of each island's founders.
Founder effects in human populations
Due to various migrations throughout human history, founder effects are fairly common among humans in different times and places. The French Canadians of Quebec are a classic example of a founder population. Over 150 years of French colonization, between 1608 and 1760, an estimated 8,500 pioneers married and left at least one descendant on the territory. After the British crown took over the colony in 1760, immigration from France effectively stopped, but the descendants of the French settlers continued to grow in number, mainly because of a high fertility rate. Intermarriage occurred mostly with deported Acadians and migrants from the British Isles. Since the 20th century, immigration to Quebec and mixing of French Canadians have involved people from all over the world. While the French Canadians of Quebec today may be partly of other ancestries, the genetic contribution of the original French founders is predominant, explaining about 90% of regional gene pools, while Acadians (descended from other French settlers in eastern Canada) account for 4%, the British for 2%, and Native American and other groups contributed less.
Founder effects can also occur naturally as competing genetic lines die out. This means that an effective founder population consists only of those individuals whose genetic print is identifiable in subsequent populations. Because in sexual reproduction only half of a parent's genetic material is passed to each offspring, some genetic lines may die out entirely even when there are numerous progeny. The misinterpretations of "Mitochondrial Eve" are a case in point: it can be hard to explain that a "mitochondrial Eve" was not the only woman of her time.
In humans, founder effects can arise from cultural isolation, and inevitably, endogamy. For example, the Amish populations in the United States exhibit founder effects. This is because they have grown from a very few founders, have not recruited newcomers, and tend to marry within the community. Though still rare, phenomena such as polydactyly (extra fingers and toes, a symptom of Ellis-van Creveld syndrome) are more common in Amish communities than in the American population at large. Maple syrup urine disease (MSUD) affects approximately 1 out of 180,000 infants in the general population. Due in part to the founder effect, however, the disease has a much higher prevalence in children of Amish, Mennonite, and Jewish descent. Similarly, there is a high frequency of fumarase deficiency among the 10,000 members of the Fundamentalist Church of Jesus Christ of Latter Day Saints, a community which practices both endogamy and polygyny, where it is estimated 75 to 80 percent of the community are blood relatives of just two men—founders John Y. Barlow and Joseph Smith Jessop.
In 1814, 15 British colonists founded a settlement on Tristan da Cunha, a group of small islands in the Atlantic Ocean, midway between Africa and South America. One of the early colonists apparently carried a recessive allele for retinitis pigmentosa, a progressive form of blindness that afflicts homozygous individuals. Of the founding colonists' 240 descendants on the island in the late 1960s, 4 had retinitis pigmentosa. The frequency of the allele that causes this disease is ten times higher on Tristan da Cunha than in the populations from which the founders came.
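As a back-of-the-envelope check of the figures above, assuming Hardy-Weinberg proportions (a simplification, since such an inbred island population deviates from them), the recessive allele frequency can be estimated from the fraction of affected homozygotes:

```python
import math

# If 4 of 240 islanders are affected (homozygous for the recessive allele),
# then under Hardy-Weinberg proportions q^2 = 4/240, so the allele
# frequency q is the square root of that fraction. This is a simplified
# estimate; real inbred populations violate Hardy-Weinberg assumptions.
q = math.sqrt(4 / 240)
print(f"estimated allele frequency: {q:.3f}")  # ≈ 0.129
```

An allele frequency around 13 percent, versus roughly 1 percent or less in a large outbred population, is the kind of tenfold elevation the paragraph describes.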
More severe illnesses exist among certain Jewish groups. Ashkenazi Jews, for example, have a particularly high chance of suffering from Tay-Sachs disease, a fatal condition in young children (see Medical genetics of Ashkenazi Jews).
Black Codes was a name given to laws passed by southern governments established during the presidency of Andrew Johnson. These laws imposed severe restrictions on freed slaves, such as prohibiting their right to vote, forbidding them to sit on juries, limiting their right to testify against white men, and barring them from carrying weapons in public places or working in certain occupations.
After the American Civil War the Radical Republicans advocated the passing of the Civil Rights Bill, legislation designed to protect freed slaves from these Southern Black Codes.
In April 1866, President Andrew Johnson vetoed the Civil Rights Bill. Johnson told Thomas C. Fletcher, the governor of Missouri: "This is a country for white men, and by God, as long as I am President, it shall be a government for white men." His views on racial equality were clearly expressed in a letter to Benjamin B. French, the commissioner of public buildings: "Everyone would, and must admit, that the white race was superior to the black, and that while we ought to do our best to bring them up to our present level, that, in doing so, we should, at the same time raise our own intellectual status so that the relative position of the two races would be the same."
Radical Republicans passed the Civil Rights Bill over Johnson's veto and were also able to get the Reconstruction Acts passed in 1867 and 1868. Despite these acts, white control over Southern state governments was gradually restored as organizations such as the Ku Klux Klan frightened blacks away from voting in elections.
A government organized by Congress and appointed by the President is to enforce laws and institutions, some of which are abhorrent to civilization. Take for instance, the Revised Code of North Carolina, which I have before me. "Any free person, who shall teach, or attempt to teach, any slave to read or write, the use of figures excepted, or shall give or sell to such slave any book or pamphlet, shall be deemed guilty of a misdemeanor, if a white man or woman, shall be fined not less than one hundred nor more than two hundred dollars, or imprisoned, and if a free person of colour, shall be fined, imprisoned, or whipped not exceeding thirty-nine nor less than twenty lashes."
Here is another specimen: "If any person shall willfully bring into the State, with an intent to circulate, or shall aid or abet the bringing into, or the circulation or publication within, the State, any written or printed matter, published in or out of the State, the evident tendency whereof is to cause slaves to become discontented with the bondage in which they are held by their masters and the laws regulating the same, and free negroes to be dissatisfied with their social condition and the denial to them of political privileges, and thereby to excite among the said slaves and free negroes a disposition to make conspiracies, insurrections, or resistance against the peace and quiet of the public, such person so offending shall be deemed guilty of felony, and on conviction thereof shall, for the first offence, be imprisoned not less than one year, and be put in the pillory and whipped, at the discretion of the court, and for the second offence shall suffer death."
If you could extend the elective franchise to all persons of color who can read the Constitution of the United States in English and write their names, and to all persons of color who own real estate valued at not less than two hundred and fifty dollars and pay taxes thereon, you would completely disarm the adversary. This you can do with perfect safety. And as a consequence, the radicals, who are wild upon negro franchise, will be completely foiled in their attempts to keep the Southern States from renewing their relations to the Union.
The abolition of slavery and the establishment of freedom are not the one and the same thing. The emancipated negroes were not yet really freemen. Their chains had indeed been sundered by the sword, but the broken links still hung upon their limbs. The question, "What shall be done with the negro?" agitated the whole country. Some were in favour of an immediate recognition of their equal and political rights, and of conceding to them at once all the prerogatives of citizenship. But only a few advocated a policy so radical, and, at the same time, generally considered revolutionary, while many, even of those who really wished well to the negro, doubted his capacity for citizenship, his willingness to labour for his own support, and the possibility of his forming, as a freeman, an integral part of the Republic.
The idea of admitting the freedmen to an equal participation in civil and political rights was not entertained in any part of the South. In most of the States they were not allowed to sit on juries, or even to testify in any case in which white men were parties. They were forbidden to own or bear firearms, and thus were rendered defenceless against assault. Vagrant laws were passed, often relating only to the negro, or, where applicable in terms of both white and black, seldom or never enforced except against the latter.
In some States any court - that is, any local Justice of the Peace - could bind out to a white person any negro under age, without his own consent or that of his parents. The freedmen were subjected to the punishments formerly inflicted upon slaves. Whipping especially, which in some States disfranchised the party subjected to it and rendered him for ever infamous before the law, was made the penalty for the most trifling misdemeanor.
These legal disabilities were not the only obstacles placed in the path of the freed people. Their attempts at education provoked the most intense and bitter hostility, as evincing a desire to render themselves equal to the whites. Their churches and schoolhouses were in many places destroyed by mobs. In parts of the country remote from observation, the violence and cruelty engendered by slavery found free scope for exercise upon the defenceless negro. In a single district, in a single month, forty-nine cases of violence, ranging from assault and battery to murder, in which whites were the aggressors and blacks the sufferers, were reported.
My own opinion is that, at this time, they cannot vote intelligently, and that giving them the right of suffrage would open the door to a great deal of demagogism, and lead to embarrassments in various ways. What the future may prove, how intelligent they may become, with what eyes they may look upon the interests of the state in which they may reside, I cannot say more than you can.
I repose in this quiet and secluded spot, not from any natural preference for solitude; but finding other cemeteries limited as to race, by charter rules, I have chosen this that I might illustrate in my death the principles which I advocated through a long life, equality of man before the Creator.
Can it be possible that the Northern people have made the negro free, but to be returned, the slave of society, to bear in such slavery the vindictive resentments that the satraps of Davis maintain today towards the people of the north? Better a thousand times for the negro that the government should return him to the custody of the original owner, where he would have a master to look after his well being, than that his neck should be placed under the heel of a society, vindictive towards him because he is free.
Collision With A Protoplanet?
At the same time, new findings about the formation of the solar system suggested that large numbers of massive objects, called protoplanets or planetesimals, had been orbiting the sun at the time the Earth formed. Gradually, scientists came to believe, these objects collided with one another and were broken up, or massed together to make the planets, or were flung out of the solar system.
At the second major conference on the origin of the moon, held in Kona, Hawaii, in 1984, the giant-impact theory was the center of attention. It became the prevailing view of most astronomers.
As more researchers tested the giant-impact theory, they had to contend with the fact that many of the early calculations of the theory were relatively crude. Although the calculations showed that an enormous collision could have blown enough matter out of the Earth (and from the vaporized outer layers of the impacting object itself) to make the moon, they did not trace the events in detail. So while the theory was very promising, astronomers did not know how realistic it was.
This project will start off by describing the basics of electronics. After that, the fundamentals of binary and boolean logic will be described. Lastly, we will move on to the function of the various parts of a simple-as-possible computer (with a few modifications) as described in Malvino's text Digital Computer Electronics. This means that the end product of this Instructable will be a computer that you can program with a unique instruction set. This project also leaves many of the design aspects of the computer up to you and serves as a guide for building your own computer, because there are many ways to approach this project. If you already have a sound understanding of boolean logic and the workings of binary, feel free to skip to the meat of the project. I hope that you all enjoy and get something out of a build like this; I know that I sure did.
For this project you will need:
1.) A power supply
2.) Breadboards + lots of wires
3.) LEDs for output
4.) Various logic ICs (discussed later)
5.) Free time
6.) A willingness to mess up and learn from mistakes
7.) A lot of patience
Optional (but very useful):
1.) Digital multimeter
2.) EEPROM programmer
3.) Sonic screwdriver
Useful Links for a Project Like This:
Digital Computer Electronics: http://www.amazon.com/Digital-computer-electronics-Albert-Malvino/dp/007039861
TTL Cookbook: http://www.amazon.com/TTL-Cookbook-Understanding-Transistor-Transistor-Integrated/dp/B0049UUV38
Step 1: What Is a Computer?
What is a Turing Machine? A Turing Machine consists of four parts: the tape, the head, the table and the state register. To visualize the operation of such a machine, first imagine a film strip spanning infinitely in each direction. Now imagine that each cell of this film strip can contain only one symbol from a defined set (like an alphabet). For this example, let us imagine that each cell can contain only a "0" or a "1". These cells can be rewritten an infinite number of times but retain their information indefinitely until they are changed again. The part of the Turing Machine known as the head can write symbols to the cells as well as increment or decrement its position on the film strip by a given integer (whole number) of cells. The next part is the table, which holds a given set of instructions for the head to execute, such as "move right 4 cells" and "set cell to 1". The fourth and final part is the state register, which holds the current state of the machine. The state includes the current instruction as well as the data on the tape.
That is how simple the operation of a computer is. When your computer operates, it is acting as a Turing machine: it processes the data it holds according to a given set of instructions and algorithms. The computer described in this Instructable is a very simplistic model of a computer, but it still operates as one that you can program with a set of instructions that it will follow and execute.
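The four parts described above can be sketched as a minimal simulator. The instruction table, symbol set, and example "flip" program below are invented for illustration:

```python
# Minimal Turing machine sketch with the four parts described above: an
# (effectively) infinite tape, a head, an instruction table, and a state
# register. The example program flips 0s to 1s until it reads a blank.
from collections import defaultdict

def run(table, tape_input, state="start", blank="_", max_steps=1000):
    tape = defaultdict(lambda: blank)          # the tape: blank by default
    for i, sym in enumerate(tape_input):
        tape[i] = sym
    head = 0                                   # the head's position
    for _ in range(max_steps):
        if state == "halt":
            break
        # The table maps (state, symbol read) -> (write, move, next state).
        write, move, state = table[(state, tape[head])]
        tape[head] = write
        head += {"R": 1, "L": -1, "N": 0}[move]
    cells = [tape[i] for i in sorted(tape)]
    return "".join(cells).strip(blank)

flip = {
    ("start", "0"): ("1", "R", "start"),   # flip a 0 to 1, move right
    ("start", "1"): ("1", "R", "start"),   # leave a 1 alone, move right
    ("start", "_"): ("_", "N", "halt"),    # hit a blank: halt
}

print(run(flip, "0101"))  # -> 1111
```

Swapping in a different table gives a different "program", which is exactly the programmability the rest of this build works toward in hardware.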
Wikipedia on Turing Machines: http://en.wikipedia.org/wiki/Turing_machine
Feverishly hot ocean surface waters potentially reaching more than 104 degrees Fahrenheit (40 degrees Celsius) may have helped cause the greatest mass extinction in Earth's history, researchers say.
"We may have found the hottest time the world has ever had," researcher Paul Wignall, a geologist at the University of Leeds in England, told LiveScience.
The mass extinction at the end of the Permian period about 250 million years ago was the greatest die-off in Earth's history. The cataclysm killed as much as 95 percent of the planet's species. One key factor behind this disaster was probably catastrophic volcanic activity in what is now Siberia, which spewed out lava covering as much as 2.7 million square miles (7 million square kilometers), an area nearly as large as Australia. These eruptions might have released gases that damaged Earth's protective ozone layer.
After the end-Permian mass extinction came a period called the "dead zone," Wignall said. "It's this 5-million-year period where there's no recovery, where there is a very low diversity of life."
The dead zone apparently experienced a serious case of global warming, but the extremes this global warming reached were uncertain. To find out, scientists analyzed fossils dating from 253 million to 245 million years ago, shortly before and after the mass extinction.
Unraveling an isotope mystery
The researchers focused on isotopes, or atomic variants, of oxygen within these fossils. All isotopes of oxygen have eight protons in their atomic nuclei but differ in the number of neutrons they possess — oxygen-16 has eight neutrons, while oxygen-18 has ten.
As marine creatures form shells, bones and teeth, "they tend to use lighter isotopes of oxygen under warmer conditions," Wignall said. "You can still see this today when looking at modern-day sea creatures. The ratios of oxygen isotopes in their shells are entirely controlled by temperature."
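The isotope-temperature relationship can be illustrated with a small sketch. The equation below is one widely quoted carbonate paleotemperature calibration, not the calibration this study itself used (conodont apatite uses analogous equations with different constants); it shows the key behavior the quote describes: isotopically lighter fossils imply warmer water.

```python
# Illustrative sketch, not the study's method: paleotemperature equations
# relate the oxygen-isotope composition of a fossil (delta_shell, per mil)
# and of the seawater it grew in (delta_water) to growth temperature.
# This is one widely quoted carbonate calibration; conodont apatite work
# uses analogous calibrations with different constants.
def carbonate_paleotemp_c(delta_shell, delta_water=0.0):
    d = delta_shell - delta_water
    return 16.9 - 4.38 * d + 0.10 * d * d

# Lighter (more negative) fossil values imply warmer water:
for delta in (-2.0, -4.0, -6.0):
    print(f"delta18O = {delta:+.1f} per mil -> {carbonate_paleotemp_c(delta):.1f} C")
```

The important point is the direction of the relationship, which is what let the team read extreme warmth off the conodont microfossils.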
The researchers analyzed strange eel-like creatures called conodonts, which are known mainly from their elaborate mouthparts. The fossils came from the Nanpanjiang Basin in south China, helping reconstruct what temperatures were like around the equator at the end of the Permian.
Different groups of conodonts shed light on what temperatures were at different depths. For instance, one group, Neospathodus, lived down about 230 feet (70 meters) deep, while others, such as Pachycladina, Parachirognathus and Platyvillosus lived near the surface.
"We had to go through several tons of rock to look at tiny conodont fossils," Wignall said. "People always thought the end-Permian extinctions were related to temperature increases, but they never measured the temperature then in much detail before, since it involves a lot of hard work looking at these microfossils."
Extreme case of warming
The fruits of this labor? "We've got a case of extreme global warming, the most extreme ever seen in the last 600 million years," Wignall said. "We think the main reason for the dead zone after the end-Permian is a very hot planet, particularly in equatorial parts of the world."
The upper part of the ocean may have reached about 100 degrees F (38 degrees C), and sea-surface temperatures may have exceeded 104 degrees F (40 degrees C). For comparison, today's average annual sea-surface temperatures around the equator are 77 to 86 degrees F (25 to 30 degrees C).
"Photosynthesis starts to shut down at about 35 degrees C [95 degrees F], and plants often start dying at temperatures above 40 degrees C [104 degrees F]," Wignall said. "This would explain why there's not much fossil record of plants at the end-Permian — for instance, there are no peat swamps forming, no coal-forming whatsoever. This was a huge, devastating extinction."
Without plants to absorb carbon dioxide, more of this heat-trapping gas would stay in the atmosphere, driving up temperatures further. "There are other ways of taking carbon dioxide out of the atmosphere, but the planet lost a key way for millions of years," Wignall said.
These lethally hot temperatures may explain why the regions at and near the equator were nearly uninhabited. Nearly all fish and marine reptiles were driven to higher latitudes, and those creatures that remained were often smaller, making it easier for them to shed any heat from their bodies.
"I'm sure there will be questions as to whether sea-surface temperatures really did get this extreme," Wignall said. "But I think extreme temperatures would explain quite a lot with the fossils we see showing major losses of animal and plant life."
These findings show that global warming can directly cause extinctions. Still, although the world is currently warming, "we're not going to get anywhere near the level seen after the end-Permian," Wignall said. "We need to worry about global warming, but it's not going to get to this stage."
The scientists detailed their findings in the Oct. 19 issue of the journal Science.
- Top 10 Ways to Destroy Earth
- 50 Amazing Facts About Planet Earth
- Image Gallery: One-of-a-Kind Places on Earth
© 2012 LiveScience.com. All rights reserved.
President Obama has outlined a broad strategy to reduce pollution and slow the effects of climate change.
The President's strategy relies on deploying the low-carbon technologies we already have as well as developing new ones. One of these emerging technologies is called "chemical looping."
While chemical looping may not be a household term just yet, it represents one promising path forward for using fossil fuels as part of a clean energy future.
At most coal-fired power plants, the coal is mixed with air so that it can burn. When it burns, the chemical reaction combines carbon from the coal with oxygen from the air to produce carbon dioxide. It also combines nitrogen gas from the air with oxygen to produce nitrogen oxides, which cause acid rain and smog. Separating the nitrogen oxides from the carbon dioxide and other pollutants in the exhaust stream is both difficult and expensive.
But if you burn coal in pure oxygen instead of air, no nitrogen oxide is formed because there is no nitrogen present. The exhaust stream is almost entirely carbon dioxide and water.
This can be accomplished in a "chemical looping" system in which the coal is mixed with a metal oxide, such as iron oxide (the main component of iron ore). Instead of drawing oxygen from the air, the coal takes oxygen directly from the iron oxide as it combusts. The reduced iron oxides, which have given up some of their oxygen, flow into a separate chamber where they mix with air and reoxidize. The replenished iron oxides are then "looped" back into the fuel chamber, and the cycle repeats continuously.
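As an illustration only (the actual oxygen-carrier chemistry varies with the fuel, the carrier, and the operating conditions), a simplified reaction pair for an iron-based carrier can be written as:

```latex
\underbrace{6\,\mathrm{Fe_2O_3} + \mathrm{C} \;\rightarrow\; 4\,\mathrm{Fe_3O_4} + \mathrm{CO_2}}_{\text{fuel reactor: carrier gives up oxygen}}
\qquad
\underbrace{4\,\mathrm{Fe_3O_4} + \mathrm{O_2} \;\rightarrow\; 6\,\mathrm{Fe_2O_3}}_{\text{air reactor: carrier reoxidizes}}
```

Both equations balance (12 Fe and 18 O on each side), and together they close the loop: the carrier shuttles oxygen from the air reactor to the fuel, so the fuel never contacts air directly.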
This type of system can use coal -- or other fossil fuels -- nearly free of emissions, and produces a pure stream of carbon dioxide that would then need to be compressed and permanently stored underground.
The Department of Energy's National Energy Technology Laboratory has constructed a demonstration Chemical Looping Reactor, which is the only one of its kind in the Western Hemisphere, and is pioneering the development of this technology.
While chemical looping could be a low-cost approach to capturing carbon dioxide, the process is still experimental compared to other common technologies for capturing carbon dioxide. We hope that will change as our work progresses over the next few years. We are experimenting with different mixes of metal oxides that could make the process much more efficient and allow for large scale use of the technology.
Ultimately, the market will decide which technologies succeed, but our goal is to create as many viable options as we can so that our abundant fossil fuel resources can be part of a clean energy future. Chemical looping is one of many promising technologies the Energy Department will pursue over the next few years, and we look forward to seeing the results.
A-level Chemistry/OCR (Salters)/Alkenes
ALKENES

Alkenes are unsaturated hydrocarbons containing at least one double bond. What should be the general formula of alkenes? If there is one double bond between two carbon atoms, an alkene must possess two hydrogen atoms fewer than the corresponding alkane; hence, the general formula for alkenes is CnH2n. Alkenes are also known as olefins (oil-forming), since the first member, ethylene or ethene (C2H4), was found to form an oily liquid on reaction with chlorine.

Structure of the Double Bond

The carbon–carbon double bond in alkenes consists of one strong sigma (σ) bond (bond enthalpy about 397 kJ mol–1), formed by head-on overlap of sp2 hybridised orbitals, and one weak pi (π) bond (bond enthalpy about 284 kJ mol–1), formed by lateral (sideways) overlap of the two 2p orbitals of the two carbon atoms. The double bond is shorter in bond length (134 pm) than the C–C single bond (154 pm), and the overall strength of the double bond (bond enthalpy 681 kJ mol–1) is greater than that of the carbon–carbon single bond in ethane (bond enthalpy 348 kJ mol–1).

As you have already read, the pi (π) bond is the weaker bond because the sideways overlap between the two 2p orbitals is poor. Its presence makes alkenes behave as sources of loosely held, mobile electrons, so alkenes are easily attacked by reagents in search of electrons; such reagents are called electrophilic reagents. The weaker π bond also makes alkenes less stable than alkanes, and alkenes can be converted into singly bonded compounds by combining with electrophilic reagents.
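Note that the quoted bond enthalpies are mutually consistent: to the precision given, the strength of the double bond is simply the sum of its σ and π components:

```latex
E_{\mathrm{C{=}C}} \approx E_{\sigma} + E_{\pi}
= 397\,\mathrm{kJ\,mol^{-1}} + 284\,\mathrm{kJ\,mol^{-1}}
= 681\,\mathrm{kJ\,mol^{-1}}
```

This also makes clear why breaking only the π bond (as in electrophilic addition) costs far less energy than breaking the bond completely.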
Posted 04-25-2012 at 10:22 PM
Sockets provide a communication mechanism between two computers using TCP. A client program creates a socket to connect to the server.

Once the connection is made, the server creates a socket object at its end of the communication. The client and server can then communicate by reading from and writing to their sockets.

A socket is represented by the java.net.Socket class, and the java.net.ServerSocket class gives server programs a mechanism for accepting connections from clients.
The following steps occur when a TCP connection is established between two computers using sockets:

1. The server instantiates a ServerSocket object, specifying the port number on which communication is to occur.
2. The server invokes the accept() method of the ServerSocket class. This method blocks until a client connects to the server on the given port.
3. The client then instantiates a Socket object, specifying the server name and the port number.
4. The Socket class constructor attempts to connect the client to the server on that port. If communication is established, the client now has a Socket object capable of communicating with the server.
5. On the server side, the accept() method returns a reference to a new socket on the server that is connected to the client's socket.
After the connection is established, communication takes place using I/O streams. Every socket has both an InputStream and an OutputStream: the client's OutputStream is connected to the server's InputStream, and the server's OutputStream is connected to the client's InputStream.

TCP is a two-way communication protocol, so data can be sent across both streams at the same time. The java.net.Socket and java.net.ServerSocket classes provide the methods used to implement sockets.
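The five steps above can be sketched in one minimal, self-contained Java program. The class name, the echo behavior, and the use of port 0 (which asks the OS for any free port) are all illustrative choices, not part of the sockets API itself; the server side runs in a separate thread only so that a single program can demonstrate both ends of the connection.

```java
import java.io.*;
import java.net.*;

public class EchoDemo {
    // Runs a one-shot echo server and a client against it; returns the client's reply.
    static String roundTrip(String message) throws Exception {
        ServerSocket server = new ServerSocket(0);          // step 1: server binds a port (0 = any free port)
        Thread serverThread = new Thread(() -> {
            try (Socket client = server.accept()) {         // step 2: accept() blocks until a client connects
                BufferedReader in = new BufferedReader(
                        new InputStreamReader(client.getInputStream()));
                PrintWriter out = new PrintWriter(client.getOutputStream(), true);
                out.println("echo: " + in.readLine());      // server reads one line, then writes the reply
            } catch (IOException e) {
                e.printStackTrace();
            }
        });
        serverThread.start();

        // steps 3 and 4: client creates a Socket with the server name and port
        try (Socket socket = new Socket("localhost", server.getLocalPort())) {
            PrintWriter out = new PrintWriter(socket.getOutputStream(), true);
            BufferedReader in = new BufferedReader(
                    new InputStreamReader(socket.getInputStream()));
            out.println(message);       // client's OutputStream feeds the server's InputStream
            String reply = in.readLine();
            serverThread.join();
            server.close();
            return reply;
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println(roundTrip("hello"));   // prints "echo: hello"
    }
}
```

Note how step 5 is implicit: accept() hands the server a second Socket dedicated to this client, so the original ServerSocket stays free to accept further connections in a real server.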
Pronouns have traditionally been regarded as one of the parts of speech, but some modern theorists would not consider them to form a single class, in view of the variety of functions they perform. Subtypes include personal pronouns, reflexive and reciprocal pronouns, possessive pronouns, demonstrative pronouns, relative pronouns, interrogative pronouns, and indefinite pronouns.:1–34
The use of pronouns often involves anaphora, where the meaning of the pronoun is dependent on an antecedent. This applies especially to third-person personal pronouns, and to relative pronouns. For example, in the sentence That poor man looks as if he needs a new coat, the antecedent of the pronoun he is the noun phrase that poor man.
The adjective associated with pronoun is pronominal. A pronominal is also a word or phrase that acts as a pronoun. For example, in That's not the one I wanted, the phrase the one (containing the prop-word one) is a pronominal.
Personal pronouns may be classified by person, number, gender and case. English has three persons (first, second and third) and two numbers (singular and plural); in the third person singular there are also distinct pronoun forms for male, female and neuter gender.:52–53 Principal forms are shown in the table to the right (see also English personal pronouns).
English personal pronouns have two cases, subject and object. Subject pronouns are used in subject position (I like to eat chips, but she does not). Object pronouns are used for the object of a verb or preposition (John likes me but not her).:52–53
Other distinct forms found in some languages include:
- Second person informal and formal pronouns (the T-V distinction), like tu and vous in French. There is no such distinction in standard modern English, though Elizabethan English marked the distinction with thou (singular informal) and you (plural or singular formal), and this is preserved in some dialects.
- Inclusive and exclusive first person plural pronouns, which indicate whether or not the audience is included, that is, whether "we" means "you and I" or "they and I". There is no such distinction in English.
- Intensive (emphatic) pronouns, which re-emphasize a noun or pronoun that has already been mentioned. English uses the same forms as the reflexive pronouns; for example: I did it myself (contrast reflexive use, I did it to myself).
- Direct and indirect object pronouns, such as le and lui in French. English uses the same form for both; for example: Mary loves him (direct object); Mary sent him a letter (indirect object).
- Prepositional pronouns, used after a preposition. English uses ordinary object pronouns here: Mary looked at him.
- Disjunctive pronouns, used in isolation or in certain other special grammatical contexts, like moi in French. No distinct forms exist in English; for example: Who does this belong to? Me.
- Strong and weak forms of certain pronouns, found in some languages such as Polish.
Some special uses of personal pronouns include:
- Generic you, where second person pronouns are used in an indefinite sense: You can't buy good old-fashioned bulbs these days.
- Generic they: In China they drive on the right.
- Gender non-specific uses, where a pronoun needs to be found to refer to a person whose sex is not specified. Solutions sometimes used in English include generic he and singular they.
- Dummy pronouns (expletive pronouns), used to satisfy a grammatical requirement for a noun or pronoun, but contributing nothing to meaning: It is raining.
- Resumptive pronouns, "intrusive" personal pronouns found (for example) in some relative clauses where a gap (trace) might be expected: This is the girl that I don’t know what she said.
Reflexive and reciprocal
Reflexive pronouns are used when a person or thing acts on itself, for example, John cut himself. In English they all end in -self or -selves and must refer to a noun phrase elsewhere in the same clause.:55
Reciprocal pronouns refer to a reciprocal relationship (each other, one another). They must refer to a noun phrase in the same clause.:55 An example in English is: They do not like each other. In some languages, the same forms can be used as both reflexive and reciprocal pronouns.
Possessive pronouns are used to indicate possession (in a broad sense). Some occur as independent noun phrases: mine, yours, hers, ours, yours, theirs. An example is: Those clothes are mine. Others must accompany a noun: my, your, her, our, your, their, as in: I lost my wallet. (His and its can fall into either category, although its is nearly always found in the second.) Those of the second type have traditionally also been described as possessive adjectives, and in more modern terminology as possessive determiners. The term "possessive pronoun" is sometimes restricted to the first type. Both types replace possessive noun phrases. As an example, Their crusade to capture our attention could replace The advertisers' crusade to capture our attention.:55–56
Demonstrative pronouns (in English, this, that and their plurals these, those) often distinguish their targets by pointing or some other indication of position; for example, I'll take these. They may also be anaphoric, depending on an earlier expression for context, for example, A kid actor would try to be all sweet, and who needs that?:56
Indefinite pronouns, the largest group of pronouns, refer to one or more unspecified persons or things. One group in English includes compounds of some-, any-, every- and no- with -thing, -one and -body, for example: Anyone can do that. Another group, including many, more, both, and most, can appear alone or followed by of.:54–55 In addition,
- Distributive pronouns are used to refer to members of a group separately rather than collectively. (To each his own.)
- Negative pronouns indicate the non-existence of people or things. (Nobody thinks that.)
- Impersonal pronouns normally refer to a person, but are not specific as to first, second or third person in the way that the personal pronouns are. (One does not clean one's own windows.)
Interrogative pronouns ask which person or thing is meant. In reference to a person, one may use who (subject), whom (object) or whose (possessive); for example, Who did that? In colloquial speech, whom is generally replaced by who. English non-personal interrogative pronouns (which and what) have only one form.:56–57
In English and many other languages (e.g. French, Russian and Czech), the sets of relative and interrogative pronouns are nearly identical. Compare English: Who is that? (interrogative) and I know the woman who came (relative). In some other languages, interrogative pronouns and indefinite pronouns are frequently identical; for example, Standard Chinese 什么 shénme means "what?" as well as "something" or "anything".
The use of pronouns often involves anaphora, where the meaning of the pronoun is dependent on another referential element. The referent of the pronoun is often the same as that of a preceding (or sometimes following) noun phrase, called the antecedent of the pronoun. The following sentences give examples of particular types of pronouns used with antecedents:
- Third-person personal pronouns:
- That poor man looks as if he needs a new coat. (the noun phrase that poor man is the antecedent of he)
- Julia arrived yesterday. I met her at the station. (Julia is the antecedent of her)
- When they saw us, the lions began roaring (the lions is the antecedent of they; as it comes after the pronoun it may be called a postcedent)
- Other personal pronouns in some circumstances:
- Terry and I were hoping no-one would find us. (Terry and I is the antecedent of us)
- You and Alice can come if you like. (you and Alice is the antecedent of the second – plural – you)
- Reflexive and reciprocal pronouns:
- Jack hurt himself. (Jack is the antecedent of himself)
- We were teasing each other. (we is the antecedent of each other)
- Relative pronouns:
- The woman who looked at you is my sister. (the woman is the antecedent of who)
Some other types, such as indefinite pronouns, are usually used without antecedents. Relative pronouns are used without antecedents in free relative clauses. Even third-person personal pronouns are sometimes used without antecedents ("unprecursed") – this applies to special uses such as dummy pronouns and generic they, as well as cases where the referent is implied by the context.
Pronouns (antōnymía) are listed as one of eight parts of speech in The Art of Grammar, a treatise on Greek grammar attributed to Dionysius Thrax and dating from the 2nd century BC. The pronoun is described there as "a part of speech substitutable for a noun and marked for a person." Pronouns continued to be regarded as a part of speech in Latin grammar (the Latin term being pronomen, from which the English name – through Middle French – ultimately derives), and thus in the European tradition generally.
In more modern approaches, pronouns are less likely to be considered to be a single word class, because of the many different syntactic roles that they play, as represented by the various different types of pronouns listed in the previous sections.
Certain types of pronouns are often identical or similar in form to determiners with related meaning; some English examples are given in the table on the right. This observation has led some linguists, such as Paul Postal, to regard pronouns as determiners that have had their following noun or noun phrase deleted. (Such patterning can even be claimed for certain personal pronouns; for example, we and you might be analyzed as determiners in phrases like we Brits and you tennis players.) Other linguists have taken a similar view, uniting pronouns and determiners into a single class, sometimes called "determiner-pronoun", or regarding determiners as a subclass of pronouns or vice versa. The distinction may be considered to be one of subcategorization or valency, rather like the distinction between transitive and intransitive verbs – determiners take a noun phrase complement like transitive verbs do, while pronouns do not. This is consistent with the determiner phrase viewpoint, whereby a determiner, rather than the noun that follows it, is taken to be the head of the phrase.
The grammatical behavior of certain types of pronouns, and in particular their possible relationship with their antecedents, has been the focus of studies in binding, notably in the Chomskyan government and binding theory. In this context, reflexive and reciprocal pronouns (such as himself and each other) are referred to as anaphors (in a specialized restricted sense) rather than as pronominal elements.
In other languages
- Bulgarian pronouns
- Cantonese pronouns
- Chinese pronouns
- Dutch grammar: Pronouns and determiners
- Esperanto grammar: Pronouns
- French pronouns
- German pronouns
- Ido pronouns
- Interlingua pronouns
- Irish morphology: Pronouns
- Italian grammar: Pronouns
- Japanese pronouns
- Korean pronouns
- Macedonian pronouns
- Novial: Pronouns
- Portuguese personal pronouns
- Proto-Indo-European pronouns
- Slovene pronouns
- Spanish grammar: Pronouns
- Vietnamese pronouns
- Bhat, Darbhe Narayana Shankara (2007). Pronouns (Paperback ed.). Oxford: Oxford University Press. ISBN 978-0199230242.
- Börjars, Kersti; Burridge, Kate (2010). Introducing English grammar (2nd ed.). London: Hodder Education. pp. 50–57. ISBN 978-1444109870.
- Loos, Eugene E.; Susan Anderson; Dwight H. Day, Jr.; Paul C. Jordan; J. Douglas Wingate. "What is a pronominal?". Glossary of linguistic terms. SIL International.
- For example, Vulf Plotkin (The Language System of English, Universal Publishers, 2006, pp. 82–83) writes: "[...] Pronouns exemplify such a word class, or rather several smaller classes united by an important semantic distinction between them and all the major parts of speech. The latter denote things, phenomena and their properties in the ambient world. [...] Pronouns, on the contrary, do not denote anything, but refer to things, phenomena or properties without involving their peculiar nature."
- Postal, Paul (1966). Dinneen, Francis P., ed. "On So-Called "Pronouns" in English". Report of the Seventeenth Annual Round Table Meeting on Linguistics and Language Studies (Washington, D.C.: Georgetown University Press): 177–206.
- For detailed discussion see George D. Morley, Explorations in Functional Syntax: A New Framework for Lexicogrammatical Analysis, Equinox Publishing Ltd., 2004, pp. 68–73.
- English pronouns exercises, by Jennifer Frost
Sensory Integration (SI) is the neurological process that occurs in the central nervous system and involves receiving sensory information and turning it into functional responses. Sensory Processing Disorder (SPD) is a diagnostic term which describes an individual who is not able to effectively process and integrate sensory information from their environment. These pages will describe in more detail why the ability to integrate sensory information is key to our development and interaction with our world and help you to identify if we at OTA The Koomar Center can help you address challenges that you or your child may be experiencing.
Sensory Integration is a dynamic process that occurs in the central nervous system and involves receiving sensory information and turning it into functional responses. All day, every day, we receive sensory information through touch, hearing, sight, taste, smell, body position, and movement and balance. Sensory Integration “sorts, orders, and eventually puts all of the individual sensory inputs together into a whole brain function” (Ayres 1979). “When the functions of the brain are whole and balanced, body movements are highly adaptive, learning is easy, and good behavior is a natural outcome” (Ayres 1979), resulting in successful interaction in all aspects of daily life, at home, at school, at play, at work, and during social interactions.
Our Seven Senses
Information is received through seven primary senses that work in combination to allow us to feel safe, have fun, to learn and to interact successfully within our environment.
Touch – The tactile system provides information about the shape, size, and texture of objects. This information helps us to understand our surroundings, manipulate objects, and use tools proficiently. When you put your hand in your pocket and select a quarter from an assortment of change, you are using tactile discrimination.
Body Awareness - Proprioception, or information from the muscles and joints, contributes to the understanding of body position. This system also tells us how much force is needed for a particular task, such as picking up a heavy object, throwing a ball, or using a tool correctly.
Movement and Balance - Located in the inner ear, the vestibular system is the foundation for the development of balance reactions. It provides information about the position and movement of the head in relation to gravity and, therefore, about the speed and direction of movement. The vestibular system is also closely related to postural control. For example, when the brain receives a signal that the body is falling to the side, it, in turn, sends signals that activate muscle groups to maintain balance.
Hearing - We use our auditory system to identify the quality and directionality of sound. Our auditory sense tells us to turn our heads and look when we hear cars approaching. It also helps us to understand speech.
Sight - Our visual system interprets what we see. It is critical to recognizing shapes, colors, letters, words, and numbers. It is also important in reading body language and other non-verbal cues during social interactions. Vision guides our movements, and we continually monitor our actions with our eyes in order to move safely and effectively.
Taste and Smell- The gustatory and olfactory systems are closely linked. They allow us to enjoy tastes and smells of foods and cause us to react negatively to unpleasant or dangerous sensations.
Integrating Information from the Senses
Considering all of the sensory modalities involved, it is truly amazing that one brain can organize all of the information flooding in simultaneously and respond to the demands of the environment. The complex nature of this interaction is illustrated in the following example:
Michael receives the instruction "Please put on your coat." In order to comply, he must hear and attend to the words, interpret their meaning, locate the coat visually, plan and sequence the arm movements needed to put it on, and maintain his balance while doing so.
In order to accomplish this seemingly simple task, the nervous system must integrate (focus, screen, sort, and respond to) sensory information from many different sources. Imagine the amount of sensory integration needed to ride a bicycle, drive a car, participate in a soccer game, or pay attention in an active classroom. Individuals who have difficulties with all or part of this process face significant challenges when engaging in daily functional activities.
Sensory Processing Disorder (SPD) is a diagnostic term which describes an individual who is not able to effectively process and integrate sensory information from their environment. Information from one's senses (e.g. sight, sound, touch, taste, smell, movement, proprioceptive and vestibular inputs) is not organized appropriately for the individual to carry out activities and interact with the environment as we would expect. An individual may have difficulty integrating information from one sensory system or a variety of sensory systems. Which sensory systems are impacted and how an individual responds to this sensory information results in how the disorder presents itself in any given person.
Most of us carry out our daily activities with ease, often without even thinking about them. We constantly detect, regulate, interpret and respond to sensory input. Through no fault of their own, individuals with SPD are not able to do this successfully. They consequently often have immense difficulty with the simplest daily task and need to exert much effort throughout their day to carry out the demands that are placed on them. This may be a child attending a playgroup who has difficulty engaging in exploration and social interaction; to an adult who struggles to function in her office environment and meet the work and social demands faced each day. Imagine yourself in a world where something as basic as the pull of gravity or the touch of other people is perceived as unreliable, inconsistent, or threatening. You would not feel secure and safe, you might not be able to have fun, and your self-esteem might be compromised as you realized that you were not able to do things as well as your peers.
As individuals we all like different things, dislike some other things, and avoid certain things, but for individuals with SPD their difficulty integrating sensory information often leads to feeling of discomfort and fear, or may lead to a need to seek out more sensory experiences to feel organized and able to engage. SPD can result in delays in motor skills and problems with self-regulation, attention, and behavior that can affect performance in school, at home, with peers, and during leisure and work activities. The diagnosis of SPD should only be made if the sensory processing difficulties impair daily routines, roles or functional skilled performance.
Diagnosing SPD can be challenging as this disorder includes a variety of different manifestations. Areas of difficulty include sensory modulation dysfunction (also known as sensory defensiveness), sensory discrimination difficulties, praxis disorders and postural-ocular challenges. These are not clear cut subgroups and many individuals experience difficulties in a number of these areas. Many researchers and clinicians are involved in identifying subgroups of Sensory Processing Dysfunction to aid in its recognition and to establish the most effective treatment models.
Sensory modulation is the ability to assign meaning and value to sensory experiences in order to screen out irrelevant sensory information and to respond appropriately to meaningful sensory input. It also involves the ability to habituate quickly following a sensory input that is arousing, so that the individual can rapidly return to involvement in age appropriate activities. An individual who attaches too much relevance to non-essential input, is over-sensitive to sensory inputs, or perceives inputs others typically find benign or pleasurable as negative or painful is considered to be sensory defensive. Often individuals will have problems with modulation of several sensory inputs such as touch and auditory inputs. They may respond to these inputs with distractibility or defensiveness resulting in flight, fright or fight behaviors.
Sensory discrimination is the ability to accurately identify and understand the specific types and qualities of sensory input and then interpret the information for skill development. Problems with discrimination may be exhibited as gross and fine motor skill delays, postural control, difficulties in motor planning and coordination, and contribute to problems in social interactions. Discrimination difficulties may also impact an individual’s arousal level, especially when encountering a challenging skillful activity. Problems in sensory discrimination are usually sensory specific, although an individual may demonstrate problems in more than one sensory area. Individuals typically respond to problems in discrimination with decreased functional skill performance and decreased self-esteem.
Postural-ocular control is the ability to stabilize the trunk and proximal joints during motor action. It is the foundation for the development of both gross and fine motor skill and allows for safety and security while moving. It allows one to have a stable base of support for functional activities and skills, including the ability to use the eyes to gather information from the environment.
Praxis disorders refer to the ability to generate, organize, sequence, and execute motor activities. Praxis is necessary to respond adaptively and effectively to changes in the environment. It is essential for planning motor actions, exploratory play, and problem solving interactions with the physical and social environment. Effective praxis results from efficient sensory discrimination since the body must have appropriate sensory information to interact with the environment.
SPD can manifest in many different ways depending on the sensory systems that are impacted and how information is processed. The age of an individual will impact behaviors and performance as different expectations and demands are placed on individuals depending on age and developmental level. Some forms of SPD, particularly those that are classified as modulation difficulties, can result in inconsistent behaviors and responses which can be difficult for a parent or caregiver to understand. An individual may need a referral for an occupational therapy evaluation if difficulties are seen in several of the areas listed below or if one area causes major functional problems.
An infant or toddler may:
· Have difficulty consoling self and/or be unusually fussy
· Be unable to bring hands together and bang toys
· Be slow to roll over, creep, sit or stand
· Cry or becomes tense when moved through space
· Have difficulty tolerating tummy time
· Be overly active, seeking excessive movement
· Be unable to settle down and/or have sleep difficulties
A preschool child (3 to 5 years) may:
A school aged child may:
An adult may:
The term sensory integration dysfunction (DSI) was first used in 1963 by Dr. A. Jean Ayres, an occupational therapist and developmental psychologist who also had postdoctoral training in neuroscience. She explored and researched the association between sensory processing and the behavior of children with learning, developmental, emotional, and other disabilities which she reported in numerous scientific journals and later in her groundbreaking book, Sensory Integration and Learning Disorders. Ayres theorized that impaired sensory processing might result in various functional problems, which she labeled sensory integration dysfunction. The condition was initially based on studies using the Southern California Sensory Integration Tests and later from studies using the Sensory Integration and Praxis Tests (SIPT) and related clinical observations.
Since Dr. Ayres first proposed the theory of sensory integration, many theorists, researchers, and clinicians have further developed it. Ayres's original term, sensory integration dysfunction (DSI), was previously used to refer to the disorder itself. However, this term was often confused with the theoretical frame of reference, the assessment process, and the intervention model associated with the problem. As knowledge of sensory processing grew, it became evident that the terminology for diagnosing sensory integration deficits should be differentiated from that used for intervention theory and techniques. Sensory Processing Disorder was therefore proposed as a diagnostic term referring to the disorders that result from poor sensory processing and sensory integration. This new diagnostic terminology, along with a diagnostic typology, was submitted for inclusion in the next edition of the American Psychiatric Association's Diagnostic and Statistical Manual of Mental Disorders (the successor to the DSM-IV-TR of 2000), due out in 2012. The hope is that recognition of SPD as a formal diagnosis will lead to more opportunities for research funding, more effective interventions, and more comprehensive insurance coverage.
Because of the evolving nature of sensory integration theory and practice, other terms related to SPD may be familiar and found in the literature.
Sensory integration theory refers to the neurologically based theoretical constructs that describe how the brain processes sensation and how that processing affects motor, behavioral, emotional, and attentional responses. This is a brain-behavior theory.
Sensory integration assessment is a specialized occupational therapy assessment which is conducted from a sensory integration theory frame of reference. The evaluation process assesses how a person processes (discriminates and modulates) sensory information; how that sensory processing impacts on foundational mechanisms such as postural-ocular skills, visual perceptual skills, hand skills and handwriting; and how it affects fine and gross motor skills, as well as praxis abilities for daily life functioning.
Sensory integration intervention is a specific intervention model based on sensory integration theory, whereby the provision of enhanced sensory information, in the context of meaningful and purposeful activities, is believed to enhance the development of an individual's nervous system functioning. Ayres® Sensory Integration intervention is a unique intervention that is child/person directed and takes place in a playful, loving, and fun environment.
Developmental Coordination Disorder (DCD) is a DSM-IV diagnosis for a motor coordination disorder. This term is used frequently in research on motor coordination problems in children and is increasingly used by physicians. It is very commonly used in Great Britain and in Europe. DCD is characterized by a motor coordination problem which results in functional difficulties. Currently, this diagnosis cannot be given in conjunction with autism spectrum disorder. Within the sensory integration framework, DCD is viewed as an umbrella term which includes praxis disorders of motor planning, bilateral coordination and projected action sequences.
Proprioception is the sensory information generated by a person’s joints and muscles. It tells a person where their body parts are in space. It is important for force regulation, control of posture and body awareness. It is also an important sensory input for promoting self-regulation. Proprioception works in conjunction with both the tactile and the vestibular sensory systems.
Vestibular sensory inputs refer to a person’s movement sense. This is sensory information from the inner ear that is responsible for balance. It detects and processes information in all planes of movement. In addition to balance, the vestibular system controls one’s protective responses, one’s posture, and works in tandem with one’s visual system. It also has a strong influence on emotions and self-regulation.
|
Developed in the 19th century in Denmark and in Britain, the single transferable vote formula—or Hare system, after one of its English developers, Thomas Hare—employs a ballot that allows the voter to rank candidates in order of preference. When the ballots are counted, any candidate receiving the necessary quota of first preference votes—calculated by dividing the total number of votes by the number of seats plus one, then adding one—is awarded a seat. In the electoral calculations, votes received by a winning candidate in excess of the quota are transferred to other candidates according to the second preference marked on the ballot. Any candidate who then achieves the necessary quota is also awarded a seat. This process is repeated, with subsequent surpluses also being transferred, until all the remaining seats have been awarded. Five-member constituencies are considered optimal for the operation of the single transferable vote system.
Because it involves the aggregation of ranked preferences, the single transferable vote formula necessitates complex electoral computations. This complexity, as well as the fact that it limits the influence of political parties, probably accounts for its infrequent use; it has been used in Northern Ireland, Ireland, and Malta and in the selection of the Australian and South African senates. The characteristic of the Hare formula that distinguishes it from other proportional representation formulas is its emphasis on candidates, not parties. The party affiliation of the candidates has no bearing on the computations. The success of minor parties varies considerably; small centrist parties usually benefit from the vote transfers, but small extremist parties usually are penalized.
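As an illustrative sketch only (not any jurisdiction's official counting rules), the quota computation and a single surplus transfer might look like this in Python, using a made-up three-candidate, two-seat election:

```python
from collections import Counter

def droop_quota(total_votes, seats):
    # One more than votes/(seats+1), floored: the smallest total that
    # only `seats` candidates could simultaneously reach.
    return total_votes // (seats + 1) + 1

# Toy election: 2 seats, 100 ballots ranking candidates by preference.
ballots = [("A", "B", "C")] * 60 + [("B", "C", "A")] * 25 + [("C", "B", "A")] * 15
quota = droop_quota(len(ballots), seats=2)   # 100 // 3 + 1 = 34

# First preferences: A has 60 >= 34, so A is elected with a surplus of 26.
firsts = Counter(b[0] for b in ballots)
surplus = firsts["A"] - quota                # 26
transfer_value = surplus / firsts["A"]       # each A ballot passes on 26/60 of a vote

# Transfer A's surplus at fractional value to each A ballot's next choice.
seconds = Counter(b[1] for b in ballots if b[0] == "A")
totals = {c: firsts[c] + seconds.get(c, 0) * transfer_value
          for c in ("B", "C")}
# B now has 25 + 60*(26/60) = 51 >= 34, so B takes the second seat.
```

A real count repeats this loop, eliminating the lowest candidate and transferring further surpluses until all seats are filled.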
|
Can you recreate these designs? What are the basic units? What
movement is required between each unit? Some elegant use of
procedures will help - variables not essential.
See if you can anticipate successive 'generations' of the two
animals shown here.
These are pictures of the sea defences at New Brighton. Can you
work out what a basic shape might be in both images of the sea wall
and work out a way they might fit together?
You have 27 small cubes, 3 each of nine colours. Use the small cubes to make a 3 by 3 by 3 cube so that each face of the bigger cube contains one of every colour.
I found these clocks in the Arts Centre at the University of
Warwick intriguing - do they really need four clocks and what times
would be ambiguous with only two or three of them?
How can you make an angle of 60 degrees by folding a sheet of paper?
The triangle ABC is equilateral. The arc AB has centre C, the arc
BC has centre A and the arc CA has centre B. Explain how and why
this shape can roll along between two parallel tracks.
Bilbo goes on an adventure, before arriving back home. Using the
information given about his journey, can you work out where Bilbo
This article explores the history of theories about the shape of our planet. It is the first in a series of articles looking at the significance of geometric shapes in the history of astronomy.
Starting with four different triangles, imagine you have an
unlimited number of each type. How many different tetrahedra can
you make? Convince us you have found them all.
The aim of the game is to slide the green square from the top right
hand corner to the bottom left hand corner in the least number of moves.
A game for 2 people. Take turns joining two dots, until your opponent is unable to move.
Here is a solitaire type environment for you to experiment with. Which targets can you reach?
This task depends on groups working collaboratively, discussing and
reasoning to agree a final product.
Players take it in turns to choose a dot on the grid. The winner is the first to have four dots that can be joined to form a square.
Use the interactivity to listen to the bells ringing a pattern. Now
it's your turn! Play one of the bells yourself. How do you know
when it is your turn to ring?
Generate three random numbers to determine the side lengths of a triangle. What triangles can you draw?
Use the interactivity to play two of the bells in a pattern. How do
you know when it is your turn to ring, and how do you know which
bell to ring?
Problem solving is at the heart of the NRICH site. All the problems
give learners opportunities to learn, develop or use mathematical
concepts and skills. Read here for more information.
Square It game for an adult and child. Can you come up with a way of always winning this game?
The second in a series of articles on visualising and modelling shapes in the history of astronomy.
This article introduces the idea of generic proof for younger children and illustrates how one example can offer a proof of a general result through unpacking its underlying structure.
Can you fit the tangram pieces into the outlines of Mai Ling and Chi Wing?
Can you fit the tangram pieces into the outlines of the candle and sundial?
A game for 2 players. Given a board of dots in a grid pattern, players take turns drawing a line by connecting 2 adjacent dots. Your goal is to complete more squares than your opponent.
Euler discussed whether or not it was possible to stroll around Koenigsberg crossing each of its seven bridges exactly once. Experiment with different numbers of islands and bridges.
Is it possible to rearrange the numbers 1,2......12 around a clock
face in such a way that every two numbers in adjacent positions
differ by any of 3, 4 or 5 hours?
Here are four tiles. They can be arranged in a 2 by 2 square so that this large square has a green edge. If the tiles are moved around, we can make a 2 by 2 square with a blue edge... Now try to. . . .
Can you fit the tangram pieces into the outlines of the watering can and man in a boat?
If you can copy a network without lifting your pen off the paper and without drawing any line twice, then it is traversable.
Decide which of these diagrams are traversable.
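The traversability rule has a compact test: a connected network is traversable exactly when zero or two of its vertices have an odd number of lines meeting there. A short sketch in Python (the example networks are made up):

```python
from collections import Counter

def odd_vertices(edges):
    # Count how many edges meet at each vertex, then keep the odd ones.
    degree = Counter()
    for a, b in edges:
        degree[a] += 1
        degree[b] += 1
    return [v for v, d in degree.items() if d % 2 == 1]

# A square with both diagonals: every vertex has degree 3, so four odd
# vertices -> not traversable without lifting the pen.
square_with_diagonals = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2), (1, 3)]
print(len(odd_vertices(square_with_diagonals)))  # 4

# A simple path 0-1-2 has two odd vertices (its endpoints) -> traversable.
print(len(odd_vertices([(0, 1), (1, 2)])))       # 2
```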
Which of these dice are right-handed and which are left-handed?
A train leaves on time. After it has gone 8 miles (at 33mph) the driver looks at his watch and sees that the hour hand is exactly over the minute hand. When did the train leave the station?
Charlie and Alison have been drawing patterns on coordinate grids. Can you picture where the patterns lead?
Choose a couple of the sequences. Try to picture how to make the next, and the next, and the next... Can you describe your reasoning?
Lyndon Baker describes how the Mobius strip and Euler's law can
introduce pupils to the idea of topology.
Find a cuboid (with edges of integer values) that has a surface
area of exactly 100 square units. Is there more than one? Can you
find them all?
A bus route has a total duration of 40 minutes. Every 10 minutes,
two buses set out, one from each end. How many buses will one bus
meet on its way from one end to the other end?
This article for teachers describes how modelling number properties
involving multiplication using an array of objects not only allows
children to represent their thinking with concrete materials,. . . .
Can you cross each of the seven bridges that join the north and south of the river to the two islands, once and once only, without retracing your steps?
A Hamiltonian circuit is a continuous path in a graph that passes through each of the vertices exactly once and returns to the start.
How many Hamiltonian circuits can you find in these graphs?
Can you fit the tangram pieces into the outlines of the workmen?
Can you fit the tangram pieces into the outline of Little Ming and Little Fung dancing?
Slide the pieces to move Khun Phaen past all the guards into the position on the right from which he can escape to freedom.
This article for teachers discusses examples of problems in which
there is no obvious method but in which children can be encouraged
to think deeply about the context and extend their ability to. . . .
Investigate how the four L-shapes fit together to make an enlarged
L-shape. You could explore this idea with other shapes too.
A circle rolls around the outside edge of a square so that its circumference always touches the edge of the square. Can you describe the locus of the centre of the circle?
You have been given three shapes made out of sponge: a sphere, a cylinder and a cone. Your challenge is to find out how to cut them to make different shapes for printing.
Can you maximise the area available to a grazing goat?
Reasoning about the number of matches needed to build squares that
share their sides.
An extension of noughts and crosses in which the grid is enlarged
and the length of the winning line can be altered to 3, 4 or 5.
|
The Pauli Exclusion Principle states that, in an atom or molecule, no two electrons can have the same four electronic quantum numbers. As an orbital can contain a maximum of only two electrons, the two electrons must have opposing spins. This means if one is assigned an up-spin ( +1/2), the other must be down-spin (-1/2).
Electrons in the same orbital have the same first three quantum numbers, e.g., \(n=1\), \(l=0\), \(m_l=0\) for the 1s subshell. Only two electrons can have these numbers, so their spin moments must differ: \(m_s = -1/2\) or \(m_s = +1/2\). If the 1s orbital contains only one electron, we have one \(m_s\) value and the electron configuration is written as 1s¹ (corresponding to hydrogen). If it is fully occupied, we have both \(m_s\) values, and the electron configuration is 1s² (corresponding to helium). Visually, these two cases can be represented as
As you can see, the 1s subshell can hold only two electrons and, when filled, the electrons have opposite spins.
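As a small illustration of the counting argument above, one can enumerate every allowed \((n, l, m_l, m_s)\) combination for a shell; by the Pauli principle each combination holds at most one electron, so the count gives the shell's capacity (the function below is just for illustration):

```python
def allowed_states(n):
    # List every allowed (n, l, m_l, m_s) combination for principal
    # quantum number n: l runs 0..n-1, m_l runs -l..+l, m_s is +/- 1/2.
    states = []
    for l in range(n):
        for m_l in range(-l, l + 1):
            for m_s in (-0.5, +0.5):
                states.append((n, l, m_l, m_s))
    return states

print(len(allowed_states(1)))  # 2  -> the 1s orbital holds two electrons
print(len(allowed_states(2)))  # 8  -> 2s + 2p together hold eight
```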
This material is based upon work supported by the National Science Foundation under Grant Number 1246120
|
Ancient astronomers thought that the Sun was a ball of fire, but astronomers now know that it is nuclear fusion in the cores of stars that allows them to output so much energy. Let's take a look at the conditions necessary to create nuclear fusion in stars and some of the different kinds of fusion that can go on.
The core of a star is an intense environment. The pressures are enormous, and the temperature can exceed 15 million Kelvin. These are exactly the conditions needed for nuclear fusion to take place. Once they are reached in the core of a star, nuclear fusion converts hydrogen into helium through a multi-stage process.
To begin the process, two hydrogen nuclei (protons) merge into a deuterium nucleus. The deuterium can then merge with another proton to form a light isotope of helium, helium-3. Finally, two helium-3 nuclei can merge to form a helium-4 nucleus. The whole reaction is exothermic, releasing a tremendous amount of energy in the form of gamma rays. These gamma rays must make the long, slow journey out through the star, being absorbed and re-emitted from atom to atom. This brings the energy of the radiation down to the visible spectrum that we see streaming off the surfaces of stars.
This fusion cycle is known as the proton-proton chain, and it’s the reaction that happens in stars with the mass of our Sun. If stars have more than 1.5 solar masses, they use a different process called the CNO (carbon-nitrogen-oxygen) cycle. In this process, four protons fuse using carbon, nitrogen and oxygen as catalysts.
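A rough way to see the scale of the energy release is the mass defect: four hydrogen nuclei outweigh one helium-4 nucleus by about 0.7%, and that missing mass emerges as energy via E = mc². A back-of-the-envelope sketch (standard atomic masses assumed; neutrino losses ignored):

```python
# Standard atomic masses in unified atomic mass units (u),
# and the usual conversion factor of ~931.494 MeV per u.
MASS_H1   = 1.007825   # hydrogen-1
MASS_HE4  = 4.002602   # helium-4
MEV_PER_U = 931.494

mass_defect = 4 * MASS_H1 - MASS_HE4   # ~0.0287 u, about 0.7% of the input mass
energy_mev  = mass_defect * MEV_PER_U  # ~26.7 MeV per helium-4 nucleus produced
print(round(mass_defect, 4), round(energy_mev, 1))  # 0.0287 26.7
```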
Stars can emit energy as long as they have hydrogen fuel in their core. Once this hydrogen runs out, the fusion reactions shut down and the star begins to shrink and cool. Some stars will just turn into white dwarfs, while more massive stars will be able to continue the fusion process using helium and even heavier elements.
|
In English, it is common to use more than one adjective before a noun — for example, “He's a silly young fool,” or “She's a smart, energetic woman.” When you use more than one adjective, you have to put them in the right order, according to type. This page will explain the different types of adjectives and the correct order for them.
1. The basic types of adjectives
Opinion: An opinion adjective explains what you think about something (other people may not agree with you).
For example: silly, beautiful, horrible, difficult
Size: A size adjective, of course, tells you how big or small something is.
For example: large, tiny, enormous, little
Age: An age adjective tells you how young or old something or someone is.
For example: ancient, new, young, old
Shape: A shape adjective describes the shape of something.
For example: square, round, flat, rectangular
Colour: A colour adjective, of course, describes the colour of something.
For example: blue, pink, reddish, grey
Origin: An origin adjective describes where something comes from.
For example: French, lunar, American, eastern, Greek
Material: A material adjective describes what something is made from.
For example: wooden, metal, cotton, paper
Purpose: A purpose adjective describes what something is used for. These adjectives often end with "-ing".
For example: sleeping (as in "sleeping bag"), roasting (as in "roasting tin")
2. Some examples of adjective order
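The type order above can be sketched as a tiny sorting rule; the helper function and its type tags below are invented for this illustration, not a standard library:

```python
# Conventional English adjective order, from the table above.
ORDER = ["opinion", "size", "age", "shape", "colour", "origin", "material", "purpose"]

def order_adjectives(tagged):
    # Sort (adjective, type) pairs by where the type sits in ORDER.
    return [adj for adj, kind in sorted(tagged, key=lambda t: ORDER.index(t[1]))]

phrase = order_adjectives([
    ("wooden", "material"), ("old", "age"),
    ("lovely", "opinion"), ("round", "shape"),
])
print(phrase)  # ['lovely', 'old', 'round', 'wooden'] -> "a lovely old round wooden table"
```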
|
Inflation means that the general level of prices is going up; it is the opposite of deflation. More money is needed to buy goods (like a loaf of bread) and services (like a haircut at the hairdresser's). Economists measure inflation regularly to know the state of an economy. Inflation changes the ratio of money to goods and services: more money is needed to buy the same amount of a good or service, or the same amount of money buys less of it. To measure inflation, economists track the prices of a defined basket of consumer goods. Inflation can have both positive and negative effects.
Causes of inflation[change | change source]
When the total amount of money in an economy (the money supply) increases too rapidly, the value of the currency often decreases. Economists generally think that this increase in the money supply (monetary inflation) causes the increase in the prices of goods and services (price inflation) over the longer term. They disagree about the causes over shorter periods.
Demand-Pull inflation[change | change source]
The demand-pull theory can be summed up as "too much money chasing too few goods." In other words, if demand for goods grows faster than the supply of goods being produced, prices will go up. This most often happens in economies that are growing fast, where buyers competing for scarce goods bid prices above what the goods previously sold for.
Cost-Push inflation[change | change source]
The cost-push theory says that when the costs of making goods (which are paid by the company) go up, the company has to raise prices to keep making a profit on the product. Higher production costs can include workers' wages, taxes paid to the government, or increased costs of raw materials from other countries.
However, Austrian School economists think this is wrong, because if people have to pay higher prices, this just means they have less to spend on other things.
Costs of inflation[change | change source]
Almost everyone thinks inflation is bad. Inflation affects different people in different ways. It also depends on whether inflation is expected or not. If the inflation rate is equal to what most people are expecting (anticipated inflation), then we can adjust and the cost is not as high. For example, banks can change their interest rates and workers can negotiate contracts that include automatic wage hikes as the price level goes up.
Problems arise when there is unanticipated inflation:
- Creditors lose and debtors gain if the lender does not guess inflation correctly. For those who borrow, this is similar to getting an interest-free loan.
- Uncertainty about what will happen next makes corporations and consumers less likely to spend. This hurts economic output in the long run.
- People with a fixed income, such as retirees, see a decline in their purchasing power and, consequently, their standard of living.
- The entire economy must absorb repricing costs ("menu costs") as price lists, labels, menus and so forth have to be updated.
- If the inflation rate is greater than in other countries, domestic products become less competitive.
- Nominal interest rates rise when inflation is anticipated.
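The link between anticipated inflation and interest rates can be made concrete with the standard Fisher relation between nominal and real rates; a minimal sketch:

```python
def real_rate(nominal, inflation):
    # Exact Fisher relation: (1 + real) = (1 + nominal) / (1 + inflation).
    return (1 + nominal) / (1 + inflation) - 1

# A saver earning 5% nominal while prices rise 3% gains only ~1.94% in
# purchasing power; if inflation is unexpectedly 7%, the real return turns negative.
print(round(real_rate(0.05, 0.03), 4))  # 0.0194
print(round(real_rate(0.05, 0.07), 4))  # -0.0187
```

This is why lenders build the inflation they expect into the nominal rate, and why unanticipated inflation transfers wealth from creditors to debtors.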
|
Regina Public Schools Unit Comparison for Grade 7/8 Math Makes Sense

Unit 1
Grade 7 – Patterning: 1.1 Patterns in Division; 1.2 More Patterns in Division; 1.3 Algebraic Expressions; 1.4 Relationships in Patterns; 1.5 Patterns and Relationships in Tables; 1.6 Graphing Relations; 1.7 Reading and Writing Equations; 1.8 Solving Equations Using Algebra Tiles
Grade 8 – Square Roots and the Pythagorean Theorem: 1.1 Square Numbers and Area Models; 1.2 Squares and Square Roots; 1.3 Measuring Line Segments; 1.4 Estimating Square Roots; 1.5 The Pythagorean Theorem; 1.6 Exploring the Pythagorean Theorem; 1.7 Applying the Pythagorean Theorem

Unit 2
Grade 7 – Integers: 2.1 Representing Integers; 2.2 Adding Integers with Tiles; 2.3 Adding Integers on a Number Line; 2.4 Subtracting Integers with Tiles; 2.5 Subtracting Integers on a Number Line
Grade 8 – Integers: 2.1 Using Models to Multiply Integers; 2.2 Developing Rules to Multiply Integers; 2.3 Using Models to Divide Integers; 2.4 Developing Rules to Divide Integers; 2.5 Order of Operations with Integers

Unit 3
Grade 7 – Fractions, Decimals, and Percents: 3.1 Fractions to Decimals; 3.2 Comparing and Ordering Fractions and Decimals; 3.3 Adding and Subtracting Decimals; 3.4 Multiplying Decimals; 3.5 Dividing Decimals; 3.6 Order of Operations with Decimals; 3.7 Relating Fractions, Decimals and Percents; 3.8 Solving Percent Problems
Grade 8 – Operations with Fractions: 3.1 Using Models to Multiply Fractions and Whole Numbers; 3.2 Using Models to Multiply Fractions; 3.3 Multiplying Fractions; 3.4 Multiplying Mixed Numbers; 3.5 Dividing Whole Numbers and Fractions; 3.6 Dividing Fractions; 3.7 Dividing Mixed Numbers; 3.8 Solving Problems with Fractions; 3.9 Order of Operations with Fractions

Unit 4
Grade 7 – Circles and Area: 4.1 Investigating Circles; 4.2 Circumference of a Circle; 4.3 Area of a Parallelogram; 4.4 Area of a Triangle; 4.5 Area of a Circle; 4.6 Interpreting Circle Graphs; 4.7 Drawing Circle Graphs
Grade 8 – Measuring Prisms and Cylinders: 4.1 Exploring Nets; 4.2 Creating Objects from Nets; 4.3 Surface Area of a Right Rectangular Prism; 4.4 Surface Area of a Right Triangular Prism; 4.5 Volume of a Right Rectangular Prism; 4.6 Volume of a Right Triangular Prism; 4.7 Surface Area of a Right Cylinder; 4.8 Volume of a Right Cylinder

Unit 5
Grade 7 – Operations with Fractions: 5.1 Using Models to Add Fractions; 5.2 Using Other Models to Add Fractions; 5.3 Using Symbols to Add Fractions; 5.4 Using Models to Subtract Fractions; 5.5 Using Symbols to Subtract Fractions; 5.6 Adding with Mixed Numbers; 5.7 Subtracting with Mixed Numbers
Grade 8 – Percent, Ratio, and Rate: 5.1 Relating Fractions, Decimals, and Percents; 5.2 Calculating Percents; 5.3 Solving Percent Problems; 5.4 Sales Tax and Discount; 5.5 Exploring Ratios; 5.6 Equivalent Ratios; 5.7 Comparing Ratios; 5.8 Solving Ratio Problems; 5.9 Exploring Rates; 5.10 Comparing Rates

Unit 6
Grade 7 – Equations: 6.1 Solving Equations; 6.2 Using a Model to Solve Equations; 6.3 Solving Equations Involving Integers; 6.4 Solving Equations Using Algebra; 6.5 Using Different Methods to Solve Equations
Grade 8 – Linear Equations and Graphing: 6.1 Solving Equations Using Models; 6.2 Solving Equations Using Algebra; 6.3 Solving Equations Involving Fractions; 6.4 The Distributive Property; 6.5 Solving Equations Involving the Distributive Property; 6.6 Creating a Table of Values; 6.7 Graphing Linear Relations

Unit 7
Grade 7 – Data Analysis: 7.1 Mean and Mode; 7.2 Median and Range; 7.3 The Effects of Outliers on Averages; 7.4 Applications of Averages
Grade 8 – Data Analysis and Probability: 7.1 Choosing an Appropriate Graph; 7.2 Misrepresenting Data; 7.3 Probability of Independent Events; 7.4 Solving Problems Involving Independent Events; 7.5 Different Ways to Express Probability; 7.6 Tree Diagrams

Unit 8
Grade 7 – Geometry: 8.1 Parallel Lines; 8.2 Perpendicular Lines; 8.3 Constructing Perpendicular Bisectors; 8.4 Constructing Angle Bisectors; 8.5 Graphing on a Coordinate Grid; 8.6 Graphing Translations and Reflections; 8.7 Graphing Rotations
Grade 8 – Geometry: 8.1 Sketching Views of Objects; 8.2 Drawing Views of Rotated Objects; 8.3 Building Objects from their Views; 8.4 Identifying Transformations; 8.5 Constructing Tessellations; 8.6 Identifying Transformations in Tessellations

(Christie Drever/2008)

Other Notes and Suggestions:
1. Often the Grade 7 concepts can serve as a brief review for the Grade 8 students before focusing on Grade 8 content.
2. Because the new curriculum has reduced content, there is no longer a spiraling curriculum with overlap from grade to grade. Teachers with a split grade will need to look at their instructional practices for mathematics and consider different approaches. One option when the content for each grade differs is to adjust scheduling: one grade works independently in another subject area (social project, art) while the teacher works on mathematics with the other grade; alternatively, one grade works on the "Practice" while the teacher leads the "Explore" or "Investigate" and the "Connect" with the other grade.
|
Temporal range: 299–251 Ma, Early to Late Permian
Class: "Amphibia" (in the broad sense)
Diplocaulus (meaning "double caul") is an extinct genus of lepospondyl amphibians from the Permian period of North America. It is one of the largest lepospondyls, with a distinctive boomerang-shaped skull. Remains attributed to Diplocaulus have been found from the Late Permian of Morocco and represent the youngest known occurrence of a lepospondyl.
Diplocaulus had a stocky, salamander-like body, but was relatively large, reaching up to 1 m (3.3 ft) in length. Its most distinctive features were the long protrusions on the sides of its skull, giving the head a boomerang shape. Judging from its weak limbs and relatively short tail, it is presumed to have swum with an up-and-down movement of its body, similar to modern whales and dolphins. The wide head could have acted like a hydrofoil, helping the creature glide through the water. Another possibility is that the shape was defensive, since even a large predator would have a hard time trying to swallow a creature with such a wide head. Rare trace fossils of Diplocaulus-like amphibians show that the tips of the boomerang-shaped head were connected to the body by flaps of skin.
A close relative of Diplocaulus is Diploceraspis.
Diplocaulus on display
- The fossilized skeleton of a Diplocaulus is on display at the University of Michigan Museum of Natural History in Ann Arbor. The display presents art of the Diplocaulus with the controversial skin extending from the tips of the head to the tail.
- The fossilized skeleton of a Diplocaulus is on display at the Houston Museum of Natural Science in Houston.
Skull of Diplocaulus magnicornis at the Museum für Naturkunde Berlin
|
Stages of Wilms' Tumor
Staging refers to the extent of a cancer within the body, in particular whether the disease has spread from the original site to other parts of the body. To stage a disease, doctors perform exams and tests, especially to see whether it has spread. There are five stages of Wilms' tumor.
Stage I: Cancer is found in the kidney only and can be completely removed by surgery.
Stage II: Cancer has spread to tissue near the kidney, to blood vessels, or to the renal sinus (a part of the kidney through which blood and fluid enter and exit). The cancer can be completely removed by surgery.
Stage III: Cancer has spread to tissues near the kidney and cannot be completely removed by surgery. The cancer may have spread to blood vessels or organs near the kidney or throughout the abdomen. The cancer may also have spread to lymph nodes near the kidney.
Stage IV: Cancer has spread to organs farther from the kidney, such as the lungs, liver, bone and brain.
Stage V: Cancer cells are found in both kidneys.
|
Differentiated instruction is the practice of modifying and adapting instruction, materials, content, student projects and products, and assessment to meet the learning needs of individual students. In a differentiated classroom, teachers recognize that all students are different and require varied teaching methods to be successful in school. They see their role as creating that environment for their students. Differentiation includes a wide range of strategies and methods such as:
Back to Special Education and Learning Disability Terms
Also Known As:
accommodations, modifications, modified instruction, responsive teaching
Differentiated instruction helps students learn in ways that are most effective for them.
|
The acceleration of a moving object describes its change in velocity, which can be a change in speed, a change in direction, or both. You can find the average acceleration to describe how the object's velocity changed over a period of time. Acceleration is an important concept in physics, and you can learn how to find average acceleration by using the formula below.
Part 1 of 3: Physics Preparation
Part 2 of 3: Find Velocity
1Find the initial and final velocity of the object. Make sure you do not confuse velocity with speed.
- Speed is a scalar quantity: simply how fast an object is moving. Velocity is a vector quantity, which describes the rate at which an object changes position, including direction. If an object returns to its starting point, its average velocity over the trip is zero, because its net change in position is zero.
Part 3 of 3: Average Acceleration Formula
1. Write down the average acceleration formula: Average Acceleration = Change in Velocity / Change in Time.
2. Gather your initial velocity, your final velocity, and the time the object was in motion. Make sure your velocities are in the same units, such as meters per second, and that the time unit is consistent with them.
- If your velocity figures and time are in different units, convert them by multiplying or dividing by the appropriate conversion factor.
- The time is often given simply as minutes or hours; however, you may have to calculate the elapsed time if you are given times from a clock.
3. Plug your numbers into the formula, written as: Average Acceleration = (Final Velocity - Initial Velocity)/Time Elapsed.
- For example, if the object is in motion for 15 seconds with an initial velocity of zero and a final velocity of 4 m/s, you would write Average Acceleration = (4 m/s - 0 m/s)/15 seconds.
4. Solve for your answer.
- In our example, the answer is about 0.27 m/s^2 (4 ÷ 15 ≈ 0.267). The answer is written in meters per second squared.
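The steps above can be sketched as a short function (a minimal illustration; the function name and the rounding are our own additions, not part of the original article):

```python
def average_acceleration(v_initial, v_final, time_elapsed):
    """Average acceleration = (final velocity - initial velocity) / time elapsed.

    With velocities in m/s and time in seconds, the result is in m/s^2.
    """
    if time_elapsed <= 0:
        raise ValueError("time elapsed must be positive")
    return (v_final - v_initial) / time_elapsed

# The worked example: initial velocity 0 m/s, final velocity 4 m/s, 15 seconds.
a = average_acceleration(0.0, 4.0, 15.0)
print(round(a, 2))  # 0.27
```

The guard against a zero or negative elapsed time is just defensive: dividing by zero would fail anyway, but the message here is clearer.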
Things You'll Need
- Physics problem
- Paper and pencil
- Initial velocity
- Final velocity
|
Shearing, also known as die cutting, is a process that cuts stock without the formation of chips or the use of burning or melting. In strict technical terms, "shearing" involves the use of straight cutting blades; if the cutting blades are curved, the process is instead considered a "shearing-type operation." The most commonly sheared materials are in the form of sheet metal or plates; however, rods can also be sheared. Shearing-type operations include blanking, piercing, roll slitting, and trimming. Shearing is used in metalworking and also with paper and plastics.
A punch (or moving blade) pushes the workpiece against a fixed die (or fixed blade). The clearance between the two is usually 5 to 40% of the thickness of the material, depending on the material. Clearance is defined as the separation between the blades, measured at the point where the cutting action takes place and perpendicular to the direction of blade movement. It affects the finish of the cut (burr) and the machine's power consumption. The punch causes the material to experience highly localized shear stresses between the punch and die. The material fails when the punch has moved 15 to 60% of the way through the material's thickness, because the shear stresses exceed the shear strength of the material; the remainder of the material is torn. Two distinct sections can be seen on a sheared workpiece: the first shows plastic deformation, and the second is fractured. Because of normal inhomogeneities in materials and inconsistencies in the clearance between punch and die, the shearing action does not occur uniformly. The fracture begins at the weakest point and progresses to the next weakest point until the entire workpiece has been sheared; this is what causes the rough edge. The rough edge can be reduced if the workpiece is clamped from the top with a die cushion, and above a certain pressure the fracture zone can be completely eliminated. However, the sheared edge of the workpiece will usually experience work hardening and cracking. If the clearance is too large, the workpiece may experience roll-over or heavy burring.
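As a rough illustration of the clearance rule of thumb above (the function and its 10% default are our own assumptions for the sketch, not values from a handbook):

```python
def shear_clearance(thickness, fraction=0.10):
    """Punch-die clearance as a fraction of stock thickness.

    Typical clearances run from 5% to 40% of the material's
    thickness, depending on the material; the 10% default here
    is only a placeholder.
    """
    if not 0.05 <= fraction <= 0.40:
        raise ValueError("fraction outside the typical 5-40% range")
    return thickness * fraction

# Clearance for 3 mm sheet at a 10% allowance:
print(round(shear_clearance(3.0), 3))  # 0.3
```

In practice the fraction would come from material tables, not a fixed default.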
The process of straight shearing is performed on sheet metal, coils, and plates using a guillotine shear.
- Low alloy steel is used in low production of materials that range up to 0.64 cm (1/4 in) thick
- High-carbon, high chromium steel is used in high production of materials that also range up to 0.64 cm (1/4 in) in thickness
- Shock-resistant steel is used in materials that are equal to 0.64 cm (1/4 in) thick or more
Tolerances and surface finish
When shearing a sheet, the typical tolerance is ±0.1, but a tolerance of ±0.005 is feasible. When shearing a bar or angle, the typical tolerance is ±0.06, but ±0.03 is possible. Surface finishes typically fall in the range of 250 to 1000 microinches, but can range from 125 to 2000 microinches. A secondary operation is required if a better surface finish is needed.
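A quick way to see what these tolerance bands mean in practice (a hypothetical check of our own; the units follow whatever the drawing uses, since the source does not state them):

```python
def within_tolerance(nominal, measured, tol):
    """True if a measured dimension lies within nominal +/- tol."""
    return abs(measured - nominal) <= tol

# A sheared sheet dimension checked against the typical +/-0.1 band,
# then against the tighter +/-0.005 band:
print(within_tolerance(100.0, 100.08, 0.1))    # True
print(within_tolerance(100.0, 100.08, 0.005))  # False
```

A part passing the loose band but failing the tight one would need the more precise process described above.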
- Wick & Veilleux 1984, p. 6‐20
- Degarmo, p. 424.
- Degarmo, p. 425.
- Degarmo, E. Paul; Black, J T.; Kohser, Ronald A. (2003), Materials and Processes in Manufacturing (9th ed.), Wiley, ISBN 0-471-65653-4.
- Todd, Robert H.; Allen, Dell K.; Alting, Leo (1994), Manufacturing Processes Reference Guide, Industrial Press Inc., ISBN 0-8311-3049-0.
- Wick, Charles; Veilleux, Raymond F. (1984), Tool and Manufacturing Engineers Handbook: Forming (4th ed.), SME, ISBN 978-0-87263-135-9.
|
1. Activate students' prior knowledge about extreme natural events.
Ask: What do you already know about extreme natural events? Have students brainstorm a list of extreme natural events around the world, such as:
- flooding or drought
- snowstorms or blizzards
- severe thunderstorms, hail
Ask: What type is most likely to happen in our area? Then look at the photo gallery of extreme natural events. As you look at each photo, ask students if they or their families have ever experienced any of these conditions. Invite volunteers to share their experiences. Ask: How did you protect yourself? How do you think you could have been better prepared?
2. Have pairs write descriptions of extreme natural events.
Divide students into pairs. Show students photographs of natural disasters on the National Geographic Natural Disasters web page. For each image, ask pairs to write two captions to describe the event the image shows. Have pairs share their captions as you look at each photo as a class. As you look at each photo, ask:
- What makes this event "extreme"?
- What could be dangerous about this event?
3. Discuss how extreme natural events are the same and different.
After students have looked at all of the photos, ask:
- How are some of the events the same?
- How are some of the events different?
For example, students may point out that hurricanes, tornadoes, and thunderstorms all have strong winds, and that snowstorms, avalanches, and blizzards all involve snow. Ask: Which extreme natural event do you think is most dangerous? Why?
Subjects & Disciplines
- Earth science
Learning Objectives
Students will:
- discuss types of extreme natural events
- compare characteristics of extreme natural events
Teaching Methods
- Multimedia instruction
Connections to National Standards, Principles, and Practices
National Council for Social Studies Curriculum Standards
- Theme 3: People, Places, and Environments
National Geography Standards
- Standard 7: The physical processes that shape the patterns of Earth's surface
National Science Education Standards
- (K-4) Standard F-4: Changes in environments
Materials You Provide
- Internet Access: Required
- Tech Setup: 1 computer per classroom, Projector
Extreme natural events like hurricanes, floods, and wildfires can cause damage and harm to people, animals, and environments. Humans are better able to prepare for and recover from extreme natural events if they understand the dangers.
Vocabulary
- extreme natural event (noun): short-term changes in the weather or environment that can have long-term effects, like a storm or earthquake.
- natural disaster (noun): an event occurring naturally that has large-scale effects on the environment and people, such as a volcano, earthquake, or hurricane.
For Further Exploration
|
A new understanding of how glass is formed may assist with our understanding of everything from the design of golf club heads to the structure of the early universe.
Princeton chemists have found that the formation of glass -- a familiar substance that nonetheless retains some elusive scientific mysteries -- always occurs differently depending on how quickly a liquid substance is cooled into its solid form.
Though the findings will likely dash the hopes of condensed matter physicists who have long sought in vain for what is known as an "ideal" glass transition, they may also one day contribute to industrialists' efforts to create better plastics and other useful polymers.
"Glasses can be formed from any substance, and the way their molecules interact places them somewhere at the border between solids and liquids, giving them some properties that manufacturers can exploit," said Sal Torquato, a professor of chemistry who is also affiliated with the Princeton Center for Theoretical Physics. "Golf club heads made of metallic glasses, for example, can make golf balls fly farther. While our research could be utilized by industry, it can actually help us understand any 'glassy' multi-particle system, such as the early universe -- which cosmologists have described as a glass."
Torquato emphasized that it would probably be years before such practical applications become a reality, and that the findings were most significant for advancing our fundamental understanding of how the state of matter known as glasses behaves.
Citation: Do Binary Hard Disks Exhibit an Ideal Glass Transition? Torquato et al., Phys. Rev. Lett., 84, 2064 (2000).
Source: Princeton University
|
Scientists collected specimens from up to 6,500 feet (2,000 meters) beneath the surface of the Southern Ocean as part of an international project to take a census of Antarctic marine life.
Some of the animals far under the sea grow to unusually large sizes, a phenomenon called gigantism that scientists still do not fully understand.
"Gigantism is very common in Antarctic waters," Martin Riddle, the Australian Antarctic Division scientist who led the expedition, said in a statement. "We have collected huge worms, giant crustaceans, and sea spiders the size of dinner plates."
The specimens were being sent to universities and museums around the world for identification, tissue sampling, and DNA studies.
"Not all of the creatures that we found could be identified and it is very likely that some new species will be recorded as a result of these voyages," said Graham Hosie, head of the census project.
The expedition is part of an ambitious international effort to map life forms in the Antarctic's Southern Ocean and to study the impact of forces such as climate change on the undersea environment.
The work is part of a larger project to map the biodiversity of the world's oceans.
The French and Japanese ships sought specimens from the mid- and upper-level environment, while the Australian ship plumbed deeper waters with remote-controlled cameras.
"Fins in Various Places"
"In some places every inch of the sea floor is covered in life," Riddle said. "In other places we can see deep scars and gouges where icebergs scour the sea floor as they pass by."
Among the bizarre-looking creatures the scientists spotted were tunicates, plankton-eating animals that resemble slender glass structures up to a yard tall "standing in fields like poppies," Riddle said.
Other animals were equally baffling.
"They had fins in various places; they had funny dangly bits around their mouths," Riddle told reporters. "They were all bottom dwellers so they were all evolved in different ways to live down on the seabed in the dark. So many of them had very large eyes—very strange-looking fish."
Scientists are planning a follow-up expedition in 10 to 15 years to examine the effects of climate changes on the region's environment.
|
Lesson Segment 1: How can I represent equivalent forms for decimals, fractions and percents?
Put “Representing Equivalent Rational Numbers” and “Fractions, Decimals, and Percents With Candy” on transparencies so you can discuss them with the class.
As a class discuss and work to complete “Representing Equivalent Rational Numbers”
Apply: Have students work to complete “Fractions, Decimals, and Percents With Candy” one question at a time. Use a Board Talk protocol.
Board Talk Protocol
Two or three students are randomly selected to come to the board to individually sketch and show reasoning for the first problem. The students work in separate spaces on the board, so the seated class members will be able to see and compare separate responses.
While the three students are working at the board, the remaining students work in their seats to complete the first item on their individual papers. Teacher selects a student at the board to explain to the class what they have done. The class is told they must each write one GOOD QUESTION about the explanation the student at the board is giving. A good question starts with how, why, what if, or can you clarify… Write these GOOD QUESTION starters on the board. Students must write their good question on their assignment paper as the student is explaining.
After the explaining student finishes, the teacher selects one or two from the class to ask their GOOD QUESTION to the explaining student.
The teacher may select a second or third student at the board to then explain their approach, especially if they have a different response. The seated students again write a GOOD QUESTION for that explaining student. Or, the teacher may ask the class members to look at all responses on the board and prepare to describe how they are similar or different.
We know from our last lesson that there are times when one form for a rational number is better than another. For example, we wouldn’t want to use the percent form for ½ in a recipe, and we wouldn’t want to use the fraction form for $1.35 at the store.
Lesson Segment 2: How can a rational number be converted to a different form? How does the fraction a/b relate to a divided by b?
Q. Why would we want to be able to convert from one rational number form to another?
There are many procedures for converting rational numbers. One of these is to use the decimal form as the “Middle Man”: percents and fractions are first converted to decimal form, and the decimal can then be converted to any other form.
Sketch this graphic on the board. The idea here is that fractions can be converted to decimals by dividing denominator into numerator, and percents can be converted to a decimal by moving the decimal two places to the left.
Ask students if they have ever had to go through a “middle man”. For example, when I was younger, I always went to my mother to ask her to get something I wanted from my father because she was easier to work with.
Fraction to Decimal: Demonstrate using the TI-73 to write a fraction as a decimal by dividing the denominator into the numerator. Use common fractions such as ½, ⅓, ⅔, ⅛, ⅝, ¼, ¾, and the fifths. Discuss the repeating decimals for ⅓ and for ⅔ pointing out that the calculator rounds the last digit for ⅔.
Percent to Decimal: Show students how to use the calculator to divide any number written in percent form by 100 to get a decimal.
Once the number is written in decimal form, we can use the Decimal Conversion Procedures:
Help students make a Three-Flap Foldable for their journal that looks like this. Clip on the dotted lines up to the fold line.
Write the decimal conversion procedure under the center flap. Write the procedures for converting fractions and percents to decimal form under the two designated flaps.
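The “Middle Man” idea can also be sketched in a few lines of code for checking answers away from the TI-73 (the function names are ours, and `limit_denominator` is used only to recover a simple fraction from a decimal):

```python
from fractions import Fraction

def fraction_to_decimal(numerator, denominator):
    """Divide the denominator into the numerator."""
    return numerator / denominator

def percent_to_decimal(percent):
    """Move the decimal point two places to the left."""
    return percent / 100

def decimal_to_percent(decimal):
    """Move the decimal point two places to the right."""
    return decimal * 100

def decimal_to_fraction(decimal):
    """Recover a simple fraction from the decimal 'Middle Man'."""
    return Fraction(decimal).limit_denominator(1000)

# 3/4 -> 0.75 -> 75%, and 135% -> 1.35 -> 27/20:
print(decimal_to_percent(fraction_to_decimal(3, 4)))  # 75.0
print(decimal_to_fraction(percent_to_decimal(135)))   # 27/20
```

Every conversion passes through the decimal form, just as in the graphic sketched on the board.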
Sing the Equivalent Forms of Rational Numbers Song with them (attached)
Another way of finding equivalent rational numbers is to use the conversion feature on the TI-73. Show the students how to use the key on the TI-73.
Fraction to Decimal and Decimal to Fraction: Type number then push and .
Percent to Fraction: Type the number then push
Fraction to percent: Type in the fraction then push 100.
Percent to Decimal: Type the number then push
Having more than one strategy for finding equivalent rational numbers will be helpful.
Lesson Segment 3: Practice Game:
Assign any text practice as needed.
|
On this day in 1948, T.S. Eliot wins the Nobel Prize in literature, for his profound effect on the direction of modern poetry.
Eliot was born in St. Louis, Missouri, to a long-established family. His grandfather had founded Washington University in St. Louis, his father was a businessman, and his mother was involved in local charities. Eliot took an undergraduate degree at Harvard, studied at the Sorbonne, returned to Harvard to learn Sanskrit, and then studied at Oxford. He became lifelong friends with fellow poet Ezra Pound and later moved permanently to England. In 1915, he married Vivian Haigh-Wood, but the marriage was unhappy, partly due to her mental instability. She died in an institution in 1947.
Eliot began working at Lloyd's Bank in 1917, writing reviews and essays on the side. He founded a critical quarterly, Criterion, and quietly developed a new style of poetry. His first major work, The Love Song of J. Alfred Prufrock, was published in 1917 and hailed as the invention of a new kind of poetry. His long, fragmented images and use of blank verse influenced nearly all future poets, as did his masterpiece The Waste Land, published in Criterion and the American review Dial in 1922. While Eliot is best known for revolutionizing modern poetry, his literary criticism and plays were also successful.
Eliot lectured in the United States frequently in the 1930s and 40s, a time when his own worldview was undergoing rapid change as he converted to Christianity. In 1957, he married his assistant, Valerie Fletcher. He died in 1965.
|
Climate change and the developments it spurs carry the narrative of the Quaternary, the most recent 2.6 million years of Earth's history. Glaciers advance from the Poles and then retreat, carving and molding the land with each pulse. Sea levels fall and rise with each period of freezing and thawing. Some mammals get massive, grow furry coats, and then disappear. Humans evolve to their modern form, traipse around the globe, and make a mark on just about every Earth system, including the climate.
At the start of the Quaternary, the continents were just about where they are today, slowly inching here and there as the forces of plate tectonics push and tug them about. But throughout the period, the planet has wobbled on its path around the sun. These slight shifts cause ice ages to come and go. By 800,000 years ago, a cyclical pattern had emerged: ice ages last about 100,000 years, followed by warmer interglacials of 10,000 to 15,000 years each. The last ice age ended about 10,000 years ago. Sea levels rose rapidly, and the continents achieved their present-day outlines.
When the temperatures drop, ice sheets spread from the Poles and cover much of North America and Europe, parts of Asia and South America, and all of Antarctica. With so much water locked up as ice, sea levels fall. Land bridges form between the continents like the currently submerged connector across the Bering Strait between Asia and North America. The land bridges allow animals and humans to migrate from one landmass to another.
During warm spells, the ice retreats and exposes reshaped mountains striped with new rivers draining to giant basins like today's Great Lakes. Plants and animals that sought warmth and comfort toward the Equator return to the higher latitudes. In fact, each shift alters global winds and ocean currents that in turn alter patterns of precipitation and aridity around the world.
Since the outset of the Quaternary, whales and sharks have ruled the seas, topping a food chain with otters, seals, dugongs, fish, squid, crustaceans, urchins, and microscopic plankton filling in the descending rungs.
On land, the chilliest stretches of the Quaternary saw mammals like mammoths, rhinos, bison, and oxen grow massive and don shaggy coats of hair. They fed on small shrubs and grasses that grew at the ever moving edges of the ice sheets. About 10,000 years ago, the climate began to warm, and most of these so-called megafauna went extinct. Only a handful of smaller, though still impressively large, representatives remain, such as Africa's elephants, rhinoceroses, and hippopotamuses. Scientists are uncertain whether the warming climate is to blame for the extinction at the end of the last ice age. At the time, modern humans were rapidly spreading around the globe and some studies link the disappearance of the big mammals with the arrival of humans and their hunting ways.
In fact, the Quaternary is often considered the "Age of Humans." Homo erectus appeared in Africa at the start of the period, and as time marched on the hominid line evolved bigger brains and higher intelligence. The first modern humans evolved in Africa about 190,000 years ago and dispersed to Europe and Asia and then on to Australia and the Americas. Along the way the species has altered the composition of life in the seas, on land, and in the air—and now, scientists believe, we're causing the planet to warm.
|
Definition of Anglo-Saxon in English:
1Relating to or denoting the Germanic inhabitants of England from their arrival in the 5th century up to the Norman Conquest.
- In 10th Century Anglo-Saxon England, this dynamic had been complicated by a highly chequered history.
- In theory all freemen of Anglo-Saxon England were under an obligation to serve in the fyrd when called upon.
- To judge from the surviving manuscripts, these texts found a large audience in Anglo-Saxon England during the tenth and eleventh centuries.
1.1Of English descent.
- Alternatively, we can kick out all these immigrants, starting with those who claim Anglo-Saxon descent!
- After all, only so-called mainstream American authors counted, and almost all of them were of Anglo-Saxon descent.
- Secondly, the Anglo-Saxon background and common English language remain of profound importance to the relationship.
1.2Of, in, or relating to the Old English language.
- The two-man chorus is lent an alliterative, Anglo-Saxon form reminiscent of Heaney's Beowulf.
- The vast majority of all Anglo-Saxon name variants are included.
- The name Frome comes from the Anglo-Saxon word 'frum', meaning rapid, vigorous.
1.3 informal (Of an English word or expression) plain, in particular vulgar: using a lot of good old Anglo-Saxon expletives
More example sentences
- It is also used to label vernacular English, especially when considered plain, monosyllabic, crude, and vulgar: Anglo-Saxon words.
- Spelled out in simple Anglo-Saxon words ‘Patriotism’ reads ‘Women and children first!’
- There have been a few raised eyebrows in recent days about the use, by the third in line to the throne, of one particular Anglo-Saxon expression.
noun
1A Germanic inhabitant of England between the 5th century and the Norman Conquest.
- After the departure of the Romans in about 420, there were many wars in England involving Scots, Picts, Britons and Saxons, Anglo-Saxons and Danes, and, in 1066, the Norman conquest.
- The English, who are a synthesis of Celts, Anglo-Saxons, Vikings and Norman French, provided the seed for this distinct culture.
- Most notable amongst these were the counties or shires which the Normans inherited from the Anglo-Saxons.
1.1A person of English descent.
- To what category such terms were juxtaposed was also unclear: was it the English, the Anglo-Saxons, the British?
- The women of nineteenth-century Germany have been strikingly absent in almost any kind of historical work on this period whether written by Germans or Anglo-Saxons.
- For the Anglo-Saxons, the Germans, and the Slavs do not possess, and will never possess, what the Latins, with the French at their head, have given and will continue to give to the civilized world.
1.2chiefly North American Any white, English-speaking person.
- There's the fiery passion of the Latins, the cold implied fetishism of the Eastern European, and the faith-based frigidity of white Anglo-Saxons.
- This may have something to do with the fact that bigger is more acceptable in African-American culture than among the mighty white uptight Anglo-Saxons.
- What level does she calculate the immigrant population must exceed before the racist problem kicks in for her, a white woman among what she imagines to be fellow Anglo-Saxons?
2 [mass noun] The Old English language.
- We do not go back and study French to study the roots of the English language; we go back and study Old English and Anglo-Saxon - or, at least, we used to in the time that I was at university.
- The common tongue was by then very different from Old English or Anglo-Saxon.
- In the following, italics are used for words in Swedish, while bold text indicates Old English, or Anglo-Saxon.
2.1 informal Plain English, in particular vulgar slang.
- In Gaelic, apparently, one word serves for both - but unfortunately this war of words has been conducted in English, albeit with some ripe Anglo-Saxon thrown in.
- At first glance that hardly seems likely, given that Romanov has to speak through an interpreter - and how do you translate Lithuanian into basic Anglo-Saxon?
- Note the pedigree beasts understand very loud Anglo-Saxon.
|
Programmes work by manipulating data stored in memory. These storage areas come under the general heading of Variables. In this section, you'll see how to set up and use variables. You'll see how to set up both text and number variables. By the end of this section, you'll have written a simple calculator programme. We'll start with something called a String variable.
String Variables in C#.NET
The first type of variable we'll take a look at is called a String. String variables always hold text. We'll write a little programme that takes text from a text box, stores the text in a variable, and then displays the text in a message box.
But bear in mind that a variable is just a storage area for holding things that you'll need later. Think of them like boxes in a room. The boxes are empty until you put something in them. You can also place a sticker on the box, so that you'll know what's in it. Let's look at a programming example.
If you've got your project open from the previous section, click File from the menu bar at the top of Visual C#. From the File menu, click Close Solution. Start a new project by clicking File again, then New Project. From the New Project dialogue box, click on Windows Application. For the Name, type String Variables.
Click OK, and you'll see a new form appear. Add a button to the form, just like you did in the previous section. Click on the button to select it (it will have the white squares around it), and then look for the Properties Window in the bottom right of Visual Studio. Set the following Properties for your new button:
Location: 90, 175
Size: 120, 30
Text: Get Text Box Data
Your form should then look like this:
We can add two more controls to the form, a Label and a Text Box. When the button is clicked, we'll get the text from the text box and display whatever was entered in a message box.
A Label is just that: a means of letting your users know what something is, or what it is for. To add a Label to the form, move your mouse over to the Toolbox on the left. Click the Label item under Common Controls:
Now click once on your form. A new label will be added:
The Label has the default text of label1. When your label is selected, it will have just the one white square in the top left. When it is selected, the Properties Window will have changed. Notice that the properties for a label are very similar to the properties for a button - most of them are the same!
Change the following properties of your label, just like you did for the button:
Location: 10, 50
You don't really need to set a size, because Visual C# will automatically resize your label to fit your text. But your Form should look like this:
Move your mouse back over to the Toolbox. Click on the TextBox entry. Then click on your form. A new Text Box will be added, as in the following image:
Instead of setting a location for your text box, simply click it with your left mouse button. Hold your left mouse button down, and then drag it just to the right of the Label.
Notice that when you drag your text box around, lines appear on the form. These are so that you can align your text box with other controls on the form. In the image below, we've aligned the text box with the left edge of the button and the top of the Label.
OK, time to add some code. Before you do, click File > Save All from the menu bar at the top of Visual C#. You can also run your programme to see what it looks like. Type some text in your text box, just to see if it works. Nothing will happen when you click your button, because we haven't written any code yet. Let's do that now. Click the red X on your form to halt the programme, and you'll be returned to Visual C#.
|
Digging Up Details on Worms: Using the Language of Science in an Inquiry Study
|Grades||K – 2|
|Lesson Plan Type||Standard Lesson|
|Estimated Time||Four 50-minute sessions|
Foregrounding scientific vocabulary, this integrated lesson invites students to research worms in order to create a classroom habitat. Students are first introduced to inquiry notebooks and then use them to record what they already know about worms. Next, students observe the cover of a fiction book about worms, make a hypothesis about whether the book is fact or fiction, and then check their hypotheses after the book is read aloud. After an introduction to related scientific words such as hypothesis, habitat, attribute, predator, and prey, students conduct research and record their findings in their inquiry notebooks. Once they have gathered the necessary information, students plan and build a worm habitat, which becomes the springboard for further scientific exploration, observation, and experimentation.
Animal Inquiry: Using this online tool, students create graphic organizers by answering a series of questions about an animal they are studying. Students can select animal facts, babies, interactions, or habitats organizers.
Students are naturally curious about the world around them. It is therefore important to give students the opportunity to pose questions and discover answers on their own. Working across the disciplines helps to reinforce facts, skills, and information. In her article, "Science Text Sets: Using Various Genres to Promote Literacy and Inquiry," Margaretha Ebbers suggests that "in elementary classrooms, the scientific practices of observing, questioning, predicting, describing, explaining, and investigating should be woven together with the literacy practices of reading, writing, speaking, and listening." In this lesson plan, students actively participate in scientific practices and use scientific vocabulary while reading, writing, and researching.
Ebbers, Margaretha. "Science Text Sets: Using Various Genres to Promote Literacy and Inquiry." Language Arts 80.1 (September 2002): 40-50.
|
The renaissance flute, the type of instrument used in Europe from roughly 1500 to 1650 and later, was designed to blend well; it was often played with other flutes in a consort, or with voices or other soft instruments. Yet it can also be seen in medium-sized and large ensembles, where it presumably played in its high range. It had military uses as well, though the instruments used in that role were quite likely of a somewhat different design, perhaps more fife-like.
Shown below are three renaissance flutes by modern makers. The first is after flutes stamped with a trefoil in Verona and is at the original pitch of A=408; the second is modeled on instruments in Brussels, but made at A=440 for the convenience of modern players; the third is a copy of a flute by Rafi and is at the original pitch of A=388.
The renaissance flute is acoustically quite different from six-hole flutes of other traditions and parts of the world. It has a narrow cylindrical bore and a small embouchure hole and finger holes. This makes the instrument somewhat quiet and a bit sluggish in the lowest octave, but it is responsive and can be played lightly and delicately in its highest notes. The combination of narrow cylindrical bore and small holes also greatly affects the fingering and resulting sound (see below).
Renaissance flutes were made in several sizes. Three sizes of instruments (that we can call descant, tenor, and bass flutes) are described in several 16th century treatises, though only the two lower sizes are mentioned by Philibert Jambe de Fer (1556). These differed in pitch by a fifth. The lowest notes of the descant, tenor, and bass are, respectively, a', d', and g (though at a pitch somewhat below modern pitch). The middle size, the tenor in d', was the most common size, judging from depictions. But many of the surviving instruments from the 16th century are basses. And no true descants survive, indicating that they were less common (or perhaps more easily lost or broken). Shown below is a modern consort of renaissance flutes (a bass, three tenors, and a descant).
The renaissance flute had a large range for a renaissance wind instrument (at least this is the case for the tenor and descant). Most fingering charts show a range of d' to a''' for the tenor flute. The chart below is taken from Virgiliano (after 1600). What looks like the note B in the chart is not an English language B, but B flat. This can be seen from the fingerings given (which, admittedly, are hard to see in the chart). So the notes F natural and B flat are shown, and not the notes F sharp and B natural that belong to the D major scale.
In the 17th century, one finds evidence of a small renaissance-style flute in g', a tone above the 16th century descant. In particular, a fingering chart for flute inserted in an edition of Jacob van Eyck's Der Fluyten Lust-hof is for such a small flute. Van Eyck's work was intended primarily for a small recorder with low note c'' (the modern soprano size); the three lowest notes of the small flute in g' are thus not used at all, while it must regularly play into its third octave.
The design of the renaissance flute, with its small holes on a cylindrical bore, tends to favor harmonic fingerings. The small holes work well as vent holes, but are not ideal for tone holes once you get to the high notes. Much of the second octave is flat if one attempts to use the same fingerings as in the first octave. The fingerings start to diverge significantly with second octave a''. Fingered as 12----, this note is usually intolerably flat unless forced. The correct fingering for a'' is 12-456 (a harmonic of the low d', with hole 3 acting as a vent hole). It does tend to be sharp (shading hole 3 helps to keep it down), but speaks easily and is easy to sustain and shape. One must practice the transition from g'' (fingered 123---, and which tends to be flat) to a''.
(Larger holes, like on the Indian bansuri and modern Boehm flute, even though they have cylindrical bores, allow the fingering 12---- to work for A in both the first and second octaves.)
Why was a narrow-bore, small-hole design chosen? One can surmise that renaissance listeners preferred the sound that way; harmonics can sound sweeter. It may also be that this was simply the best solution renaissance makers found for a flute able to handle the music of the time, given the materials and technology they had to work with, and the designs they inherited. In any case, if we wish to hear the music as it might have sounded on flutes then, we must use this type of flute.
Renaissance music is based on a variety of 'modes'. The notes appearing in these modes are the naturals and B flat. (Modes with B flat work better on the renaissance flute than those with B natural.) Other notes are required as musica ficta. While not in most fingering charts, these are easy to produce with 'forked fingerings', except for the E flat in the first two octaves. That note must be produced by covering the first five holes completely and the sixth hole about half way, and is difficult. Lowering the breath pressure somewhat, and practice, will help.
Tuning tends towards meantone (true meantone, 1/4 comma meantone). For example, the difference between F# and G in the first two octaves is quite large, while the difference between F# and F natural is small. The difference between c'''sharp fingered as --34-- and d''' as -23456 is large, i.e. more than an equal tempered semitone. There is no fingering that produces a note below d''' but closer to d'''. So one may say that the renaissance flute has a c'''sharp but no d''' flat. Similarly, the fingering 12-4-6 used for a note between d''' and e''' is sharp, i.e. closer to e'''. So we may say that the renaissance flute has an e'''flat, but no d'''sharp.
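The large and small semitones described above can be checked with a short calculation. This sketch (an editorial illustration, not part of the original article) derives quarter-comma meantone interval sizes from the tempered fifth:

```python
import math

def cents(ratio):
    """Size of an interval (frequency ratio) in cents."""
    return 1200 * math.log2(ratio)

# In quarter-comma meantone the fifth is narrowed so that four fifths
# make a pure major third (5/4): fifth = 5 ** (1/4).
FIFTH = cents(5 ** 0.25)          # ~696.6 cents (vs 700 in equal temperament)

# Chromatic semitone (e.g. F -> F#): 7 fifths up, 4 octaves down.
chromatic = 7 * FIFTH - 4 * 1200  # ~76.0 cents
# Diatonic semitone (e.g. F# -> G): 3 octaves up, 5 fifths down.
diatonic = 3 * 1200 - 5 * FIFTH   # ~117.1 cents

print(round(chromatic, 1), round(diatonic, 1))  # 76.0 117.1
```

The diatonic semitone (F# to G) comes out near 117 cents and the chromatic semitone (F to F#) near 76 cents, both well away from the equal-tempered 100 cents, matching the large and small steps described above.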
For more information on renaissance flutes, see
|
Messy play is important for young children, giving them endless ways to develop and learn. All types of play are crucial for children’s development and early learning. Play helps children to improve physical skills and co-ordination, work co-operatively and collaboratively, use all their senses to discover and explore their environment, and develop their imagination, creative thinking and ability to problem solve.
Playing with toys alone can limit opportunities to develop imagination, creativity and critical thinking. Messy play is inexpensive and open ended. Children will discover enormous numbers of opportunities for learning and play, through timeless and accessible messy play activities.
Ideas for messy play
You don’t just need brushes and an easel. Be inventive!
- Use sponges, fingers, hands, feet and other various objects to make marks.
- Roll out old wallpaper in the garden and encourage children to make footprints across – mix colours, compare feet sizes etc.
- Use washing up bottles filled with watery paint to squeeze and spray across paper.
- Flick brushes across paper to make patterns.
- Bubble painting - Blow bubbles in pots of watery paint and lay paper across the top of the pot to catch the pattern.
- Marble painting - Dip marbles in paint and roll them over paper in a tray to explore lines and patterns.
- Blow painting - Make up different coloured runny paint to drop onto paper. Use a straw to blow the paint in different directions. Watch what happens when two colours mix.
- String painting - Drop string in paint then pull it across paper like a snake in different directions.
- Mirror image painting - Paint one half of paper, then fold it, press down and open to create a mirror image. This is good for butterflies and other symmetrical objects.
- Potato prints - Great effects can be achieved by printing with potatoes. Cut the potato in half and cut patterns into the flat side before you dip it into paint. This is great for encouraging repeating patterns.
- Welly boot printing - on rolls of wall paper.
- Magic painting – draw over white paper with a white candle. Make the picture appear by painting over with watered down paint.
Use with or without cutters and moulds. Encourage imaginary play such as rolling sausages for dinner or making cakes with decorations. Dough scissors are also good for fine motor control.
Basic recipe for play doh:
1 cup flour,
1 cup water,
½ cup salt,
1 tbsp oil,
2 tbsp cream of tartar.
Put all ingredients into a pan over a medium heat until the mixture starts to bind, stirring all the time. Remove from heat.
You could try adding:
- Food colouring
- Powder paint
- Uncooked rice
- Food essences e.g. strawberry or mint
- Oat meal
Keep in the fridge and change on a regular basis.
Remember: Dough harbours bacteria – if in doubt, throw it out!
Gloop (cornflour mixed with water)
Mix an amount of cornflour gradually with water until it binds. Place in a tray or shallow container and try to pick it up! Vary the consistency occasionally and for more exploratory experiences, let the children make it themselves and feel the cornflour dry and mix it up themselves.
You also could try exploring:
- Dry or cooked spaghetti or pasta
- Cold custard
- Shaving foam
- Shredded paper
- Scents such as cinnamon or lemon juice
- Food colouring
Drawing doesn’t just have to be with a pencil and paper. Messy play offers valuable pre-writing skills. There are many ways to make marks from patterns in the gloop with your finger to lines in the sand with a lolly stick.
Try some of these suggestions:
- Use coloured pencils/crayons/pens on different papers
- Use paintbrushes and water to ‘draw’ on the pavement – watching the marks disappear on a sunny day
- Use paint with brushes or fingers to make marks
- Use twigs, lolly sticks or rough surfaced materials to make marks
Cutting and sticking is always a favourite. Getting used to using scissors will really help improve co-ordination. Encourage your child to use the hand most comfortable to cut with. Children can cut out their own shapes and have some pre-cut shapes they can use. It’s good to use a range of materials such as paper, card, magazines, felt, foam shapes, feathers, glitter, natural materials such as twigs, leaves and shells. Encourage your child to talk about their creation and praise them.
Sand and Water play
If using sand remember to only use play sand and sterilise with hot water regularly.
Use various bottles, jugs, scoops, sieves, funnels, tools and containers. Filling various containers with water or dry sand gives children the experience of feeling different weights. Pouring from one container to another introduces the relationships of capacity and volume. Children love to explore floating and sinking. Damp sand feels different to dry sand, let your child explore both. Remember water play can be explored in the bath!
You could try adding:
- Baby bath for bubbles to your water tray
- Rice/pasta in sand or water
- Animals, cars, dinosaurs etc
- Shredded paper in the sand
- Damp sand is good for building – lolly sticks make great slicers.
- Spades, buckets and trowels
- Shells and other natural objects
When children work out how to balance one box onto another they are problem solving.
- Let children choose what they need from: packets, cardboard tubes, yogurt pots, different sized empty boxes and paper.
- If using cardboard egg boxes ensure there are no remnants of the egg in the box – if in doubt microwave the box.
- Use sticky tape as well as glue.
- Decorate with paint, glitter, stickers etc.
Messy play in the home
It may take a bit more time and thought when planning messy play activities but it is well worth the effort. Here are some helpful suggestions when planning messy play at home:
- Giving children craft aprons to wear will prevent getting their clothes messy. Or use big old t-shirts pulled together at the back with hair/bulldog clips for total cover.
- Take as much outdoors as possible (weather permitting!)
- Use dust sheets over furniture to protect them. Use plastic tablecloths or shower curtains on the floor depending on the activity.
- If you are playing outside, make tidying up a fun activity by letting the children wash away the chalk/paint etc on the patio using soapy water and brushes.
- If you are very anxious about the mess…think small. Messy play could just be a simple activity. E.g. a bowl of water and different containers, scented play doh or finger painting.
- The key thing to remember is to allow your child to become fully engaged with their activity and let them lead it. You can support their learning with vocabulary, posing questions and showing that you are interested and value what they are doing.
|
Did you know that without the atmosphere, Earth's surface would be covered with meteor craters and life on this planet would be non-existent? Protecting us from meteorites, regulating temperature, and providing the air we breathe are only some of the ways that the atmosphere makes Earth the home it is.
Earth's atmosphere contains many components that can be measured in different ways. This module describes these different components and shows how temperature and pressure change with altitude. The scientific developments that led to an understanding of these concepts are discussed.
The fact that the moon's surface is covered with meteorite impact craters is obvious to us today. Yet on Earth, the moon's close neighbour, impact craters are few and far between. As it turns out, Earth has received just as many incoming meteorites as the moon, but the presence of the atmosphere has determined the fate of many of them. Small meteorites burn up in the atmosphere before ever reaching Earth. Those that do hit the surface and create an impact crater are lost to us in a different way: the craters are quickly eroded by weather generated in the atmosphere, and the evidence is washed away. The moon, on the other hand, has no atmosphere, and thus every meteor aimed at the moon hits it, and the craters have remained essentially unchanged for 4 billion years (Figure 1).
Composition of Earth's atmosphere
The early Greeks considered "air" to be one of four elementary substances; along with earth, fire, and water, air was viewed as a fundamental component of the universe. By the early 1800s, however, scientists such as John Dalton recognized that the atmosphere was in fact composed of several chemically distinct gases, which he was able to separate and whose relative amounts in the lower atmosphere he was able to determine. He easily discerned the major components of the atmosphere: nitrogen, oxygen, and a small amount of something incombustible, later shown to be argon.
The development of the spectrometer in the 1920s allowed scientists to detect gases that exist in much smaller concentrations in the atmosphere, such as ozone and carbon dioxide. The concentrations of these gases, while small, vary widely from place to place. In fact, atmospheric gases are often divided into the major, constant components and the highly variable components, as shown in Table 1 and Table 2.
|Nitrogen (N2)||78.08%|
|Oxygen (O2)||20.95%|
|Argon (Ar)||0.93%|
|Neon, Helium, Krypton||0.0001%|
Table 1: Constant Components. Proportions remain the same over time and location.
|Carbon dioxide (CO2)||0.038%|
|Water vapor (H2O)||0-4%|
|Sulfur dioxide (SO2)||trace|
|Nitrogen oxides (NO, NO2, N20)||trace|
Table 2: Variable Components. Amounts vary over time and location.
Although both nitrogen and oxygen are essential to human life on the planet, they have little effect on weather and other atmospheric processes. The variable components, which make up far less than 1 percent of the atmosphere, have a much greater influence on both short-term weather and long-term climate. For example, variations in water vapor in the atmosphere are familiar to us as relative humidity. Water vapor, CO2, CH4, N2O, and SO2 all have an important property: They absorb heat emitted by Earth and thus warm the atmosphere, creating what we call the "greenhouse effect." Without these so-called greenhouse gases, the Earth's surface would be about 30 degrees Celsius cooler – too cold for life to exist as we know it. Though the greenhouse effect is sometimes portrayed as a bad thing, trace amounts of gases like CO2 warm our planet's atmosphere enough to sustain life. Global warming, on the other hand, is a separate process that can be caused by increased amounts of greenhouse gases in the atmosphere.
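The "about 30 degrees Celsius cooler" figure can be reproduced with a standard Stefan-Boltzmann energy-balance estimate. The solar constant and albedo below are common textbook values, not numbers taken from this module:

```python
# Estimate Earth's temperature without a greenhouse effect by balancing
# absorbed sunlight against radiated heat: S(1 - a)/4 = sigma * T^4.
SOLAR_CONSTANT = 1361.0   # W/m^2, sunlight arriving at Earth
ALBEDO = 0.30             # fraction of sunlight reflected back to space
SIGMA = 5.670e-8          # Stefan-Boltzmann constant, W/m^2/K^4

t_eff = (SOLAR_CONSTANT * (1 - ALBEDO) / (4 * SIGMA)) ** 0.25

surface_mean = 288.0      # observed mean surface temperature, K
print(round(t_eff))                  # 255 (K) without greenhouse gases
print(round(surface_mean - t_eff))   # 33 (K) of greenhouse warming
```

The balance gives roughly 255 K, about 33 degrees below the observed 288 K mean surface temperature, consistent with the figure quoted in the text.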
In addition to gases, the atmosphere also contains particulate matter such as dust, volcanic ash, rain, and snow. These are, of course, highly variable and are generally less persistent than gas concentrations, but they can sometimes remain in the atmosphere for relatively long periods of time. Volcanic ash from the 1991 eruption of Mt. Pinatubo in the Philippines, for example, darkened skies around the globe for over a year.
Though the major components of the atmosphere vary little today, they have changed dramatically over Earth's history, about 4.6 billion years. The early atmosphere was hardly the life-sustaining blanket of air that it is today; most geologists believe that the main constituents then were nitrogen gas and carbon dioxide, but no free oxygen. In fact, there is no evidence for free oxygen in the atmosphere until about 2 billion years ago, when photosynthesizing bacteria evolved and began taking in atmospheric carbon dioxide and releasing oxygen. The amount of oxygen in the atmosphere has risen steadily from 0 percent 2 billion years ago to about 21 percent today.
Measuring the atmosphere
We now have continuous satellite monitoring of the atmosphere and Doppler radar to tell us whether or not we will experience rain anytime soon; however, atmospheric measurements used to be few and far between. Today, measurements such as temperature and pressure not only help us predict the weather, but also help us look at long-term changes in global climate (see our Temperature module). The first atmospheric scientists were less concerned with weather prediction, however, and more interested in the composition and structure of the atmosphere.
The two most important instruments for taking measurements in Earth's atmosphere were developed hundreds of years ago: Galileo is credited with inventing the thermometer in 1593, and Evangelista Torricelli invented the barometer in 1643. With these two instruments, temperature and pressure could be recorded at any time and at any place. Of course, the earliest pressure and temperature measurements were taken at Earth's surface. It was a hundred years before the thermometer and barometer went aloft.
While many people are familiar with Ben Franklin's kite and key experiment that tested lightning for the presence of electricity, few realize that kites were the main vehicle for obtaining atmospheric measurements above Earth's surface. Throughout the 18th and 19th centuries, kite-mounted instruments collected pressure, temperature, and humidity readings; unfortunately, scientists could only reach up to an altitude of about 3 km with this technique.
Unmanned balloons were able to take measurements at higher altitudes than kites, but because they were simply released with no passengers and no strings attached, they had to be retrieved in order to obtain the data that had been collected. This changed with the development of the radiosonde, an unmanned balloon capable of achieving high altitudes, in the early 1930s. The radiosonde included a radio transmitter among its many instruments, allowing data to be transmitted as it was being collected so that the balloons no longer needed to be retrieved. A radiosonde network was developed in the United States in 1937, and continues to this day under the auspices of the National Weather Service.
Temperature in the atmosphere
Through examination of measurements collected by radiosonde and aircraft (and later by rockets), scientists became aware that the atmosphere is not uniform. Many people had long recognized that temperature decreased with altitude – if you've ever hiked up a tall mountain, you might learn to bring a jacket to wear at the top even when it is warm at the base – but it wasn't until the early 1900s that radiosondes revealed a layer, about 18 km above the surface, where temperature abruptly changed and began to increase with altitude. The discovery of this reversal led to division of the atmosphere into layers based on their thermal properties.
The lowermost 12 to 18 km of the atmosphere, called the troposphere, is where all weather occurs – clouds form and precipitation falls, wind blows, humidity varies from place to place, and the atmosphere interacts with the surface below. Within the troposphere, temperature decreases with altitude at a rate of about 6.5° C per kilometer. At 8,856 m high, Mt. Everest still reaches less than halfway through the troposphere. Assuming a sea level temperature of 26° C (80° F), that means the temperature on the summit of Everest would be around -31° C (-24° F)! In fact, temperature at Everest's summit averages -36° C, whereas temperatures in New Delhi (in nearby India), at an elevation of 233 m, average about 28° C (82.4° F).
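The Everest estimate above is just the lapse rate multiplied by the altitude; a minimal sketch of the arithmetic:

```python
# Summit-temperature estimate from the average tropospheric lapse rate.
LAPSE_RATE = 6.5        # degrees C lost per km of altitude
sea_level_temp = 26.0   # degrees C (80 F), as assumed in the text
everest_km = 8.856      # summit altitude in km

summit = sea_level_temp - LAPSE_RATE * everest_km
print(round(summit, 1))  # -31.6, matching the "around -31 C" in the text
```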
At the uppermost boundary of the troposphere, air temperature reaches roughly -60° C and then begins to increase with altitude. This layer of increasing temperature is called the stratosphere. The cause of the temperature reversal is a layer of concentrated ozone. Ozone's ability to absorb incoming ultraviolet (UV) radiation from the sun had been recognized in 1881, but the existence of the ozone layer at an altitude of 20 to 50 km was not postulated until the 1920s. By absorbing UV rays, the ozone layer both warms the air around it and protects us on the surface from the harmful short-wavelength radiation that can cause skin cancer.
It is important to recognize the difference between the ozone layer in the stratosphere and ozone present in trace amounts in the troposphere. Stratospheric ozone is produced when energy from the sun breaks apart O2 gas molecules into O atoms; these O atoms then bond with other O2 molecules to form O3, ozone. This process was first described in 1930 by Sydney Chapman, a geophysicist who synthesized many of the known facts about the ozone layer. Tropospheric ozone, on the other hand, is a pollutant produced when emissions from fossil-fuel burning interact with sunlight.
Above the stratosphere, temperature begins to drop again in the next layer of the atmosphere called the mesosphere, as seen in the previous figure. This temperature decrease results from the rapidly decreasing density of the air at this altitude. Finally, at the outer reaches of Earth's atmosphere, the intense, unfiltered radiation from the sun causes molecules like O2 and N2 to break apart into ions. The release of energy from these reactions actually causes the temperature to rise again in the thermosphere, the outermost layer. The thermosphere extends to about 500 km above Earth's surface, still a few hundred kilometers below the altitude of most orbiting satellites.
Pressure in the atmosphere
Atmospheric pressure can be imagined as the weight of the overlying column of air. Unlike temperature, pressure decreases exponentially with altitude. Traces of the atmosphere can be detected as far as 500 km above Earth's surface, but 80 percent of the atmosphere's mass is contained within the 18 km closest to the surface. Atmospheric pressure is generally measured in millibars (mb); one millibar is approximately the pressure exerted by a mass of 1 gram resting on an area of 1 square centimeter (1 g/cm2). Other units are occasionally used, such as bars, atmospheres, or millimeters of mercury. The correspondence between these units is shown in the table below.
|bars||millibars||atmospheres||millimeters of mercury|
|1.013 bar||=||1013 mb||=||1 atm||=||760 mm Hg|
Table 3: Correspondence of atmospheric measurement units.
At sea level, pressure ranges from about 960 to 1,050 mb, with an average of 1,013 mb. At the top of Mt. Everest, pressure is as low as 300 mb. Because gas pressure is related to density, this low pressure means that there are approximately one-third as many gas molecules inhaled per breath on top of Mt. Everest as at sea level – which is why climbers experience ever more severe shortness of breath the higher they go, as less oxygen is inhaled with every breath.
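The unit correspondences in Table 3, and the exponential falloff of pressure with altitude, can be sketched as follows. The 8.5 km scale height is a typical textbook value for an isothermal atmosphere, not a figure from this module:

```python
import math

# Unit conversion based on Table 3: 1013 mb = 760 mm Hg = 1 atm.
MB_PER_ATM = 1013.0
MMHG_PER_ATM = 760.0

def mb_to_mmhg(mb):
    """Convert millibars to millimeters of mercury."""
    return mb / MB_PER_ATM * MMHG_PER_ATM

SCALE_HEIGHT_KM = 8.5  # assumed; height over which pressure falls by 1/e

def pressure_at(km, sea_level_mb=1013.0):
    """Approximate pressure aloft: p = p0 * exp(-z / H)."""
    return sea_level_mb * math.exp(-km / SCALE_HEIGHT_KM)

print(round(mb_to_mmhg(1013.0)))   # 760
print(round(pressure_at(8.856)))   # 357 (mb) near Everest's summit altitude
```

This simple isothermal model gives roughly 357 mb at Everest's summit altitude; the real atmosphere there is colder, with a smaller scale height, which is why observed values run closer to the 300 mb quoted above.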
Though other planets host atmospheres, the presence of free oxygen and water vapor makes our atmosphere unique as far as we know. These components both encouraged and protected life on Earth as it developed, not only by providing oxygen for respiration, but by shielding organisms from harmful UV rays and by incinerating small meteors before they hit the surface. Additionally, the composition and structure of this unique resource are important keys to understanding circulation in the atmosphere, biogeochemical cycling of nutrients, short-term local weather patterns, and long-term global climate changes.
|
NASA's Fermi Telescope peers deep into a microquasar
Cygnus X-3 is the first microquasar for which scientists can prove high-energy gamma-ray emission.
November 30, 2009
Provided by Goddard Space Flight Center, Greenbelt, Maryland
November 30, 2009
In Cygnus X-3, an accretion disk surrounding a black hole or neutron star orbits close to a hot, massive star. Gamma rays (purple, in this illustration) likely arise when fast-moving electrons above and below the disk collide with the star's ultraviolet light. Fermi sees more of this emission when the disk is on the far side of its orbit.
Photo by NASA's Goddard Space Flight Center
NASA's Fermi Gamma-ray Space Telescope has made the first unambiguous detection of high-energy gamma-rays from an enigmatic binary system known as Cygnus X-3. The system pairs a hot, massive star with a compact object — either a neutron star or a black hole — that blasts twin radio-emitting jets of matter into space at more than half the speed of light.
Astronomers call these systems microquasars. Their properties — strong emission across a broad range of wavelengths, rapid brightness changes, and radio jets — resemble miniature versions of distant galaxies (called quasars and blazars) whose emissions are thought to be powered by enormous black holes.
"Cygnus X-3 is a genuine microquasar, and it's the first for which we can prove high-energy gamma-ray emission," said Stephane Corbel at Paris Diderot University in France.
The system, first detected in 1966 as among the sky's strongest X-ray sources, was also one of the earliest claimed gamma-ray sources. Efforts to confirm those observations helped spur the development of improved gamma-ray detectors, a legacy culminating in the Large Area Telescope (LAT) aboard Fermi.
At the center of Cygnus X-3 lies a massive Wolf-Rayet star. With a surface temperature of 180,000° Fahrenheit (100,000° Celsius), or about 17 times hotter than the Sun, the star is so hot that its mass bleeds into space in the form of a powerful outflow called a stellar wind. "In just 100,000 years, this fast, dense wind removes as much mass from the Wolf-Rayet star as our Sun contains," said Robin Corbet at the University of Maryland, Baltimore County.
Every 4.8 hours, a compact companion embedded in a disk of hot gas wheels around the star. "This object is most likely a black hole, but we can't yet rule out a neutron star," Corbet said.
Fermi's LAT detects changes in Cygnus X-3's gamma-ray output related to the companion's 4.8-hour orbital motion. The brightest gamma-ray emission occurs when the disk is on the far side of its orbit. "This suggests that the gamma rays arise from interactions between rapidly moving electrons above and below the disk and the star's ultraviolet light," Corbel said.
When ultraviolet photons strike particles moving at an appreciable fraction of the speed of light, the photons gain energy and become gamma rays. "The process works best when an energetic electron already heading toward Earth suffers a head-on collision with an ultraviolet photon," said Guillaume Dubus at the Laboratory for Astrophysics in Grenoble, France. "And this occurs most often when the disk is on the far side of its orbit."
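The energy gain described here is inverse-Compton scattering: in the Thomson limit, a head-on collision boosts the photon energy by roughly a factor of 4γ². A rough order-of-magnitude sketch, where the electron Lorentz factor is an assumed illustrative value, not a measurement from the article:

```python
# Inverse-Compton boost of a stellar UV photon off a relativistic electron.
UV_PHOTON_EV = 10.0   # a stellar ultraviolet photon carries ~10 eV
GAMMA = 1e3           # assumed electron Lorentz factor for illustration

# Head-on Thomson-limit scattering: E' ~ 4 * gamma^2 * E.
boosted_ev = 4 * GAMMA**2 * UV_PHOTON_EV
print(boosted_ev / 1e6)  # 40.0 (MeV), within Fermi LAT's gamma-ray band
```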
Through processes not fully understood, some of the gas falling toward Cygnus X-3's compact object instead rushes outward in a pair of narrow, oppositely directed jets. Radio observations clock gas motion within these jets at more than half the speed of light.
Between October 11 and December 20, 2008, and again between June 8 and August 2, 2009, Cygnus X-3 was unusually active. The team found that outbursts in the system's gamma-ray emission preceded flaring in the radio jet by roughly 5 days, strongly suggesting a relationship between the two.
The findings will provide new insight into how high-energy particles become accelerated and how they move through the jets.
|
From Wikipedia, the free encyclopedia
Monsoon is traditionally defined as a seasonal reversing wind accompanied by corresponding changes in precipitation, but is now used to describe seasonal changes in atmospheric circulation and precipitation associated with the asymmetric heating of land and sea. Usually, the term monsoon refers to the rainy phase of a seasonally changing pattern, although technically there is also a dry phase.
The major monsoon systems of the world consist of the West African and Asia-Australian monsoons. The inclusion of the North and South American monsoons with incomplete wind reversal has been debated.
The term was first used in English in British India (now India, Bangladesh and Pakistan) and neighbouring countries to refer to the great seasonal winds blowing from the Bay of Bengal and the Arabian Sea in the southwest, bringing heavy rainfall to the area. The south-west monsoon winds are called 'Nairutya Maarut' in India.
Strengthening of the Asian monsoon has been linked to the uplift of the Tibetan Plateau after the collision of the Indian sub-continent and Asia around 50 million years ago. On the basis of records from the Arabian Sea and of wind-blown dust in the Loess Plateau of China, many geologists believe the monsoon first became strong around 8 million years ago. More recently, studies of plant fossils in China and new long-duration sediment records from the South China Sea point to the monsoon beginning 15–20 million years ago, linked to early Tibetan uplift. Testing of this hypothesis awaits deep ocean sampling by the Integrated Ocean Drilling Program. The monsoon has varied significantly in strength since this time, largely in step with global climate change, especially the cycle of the Pleistocene ice ages. A study of marine plankton suggests that the Indian monsoon strengthened around 5 million years ago. During subsequent glacial periods, sea level fell and the Indonesian Seaway closed, impeding cold Pacific waters from flowing into the Indian Ocean. The resulting increase in sea surface temperatures in the Indian Ocean is believed to have increased the intensity of monsoons.
Five episodes during the Quaternary, at 2.22 Ma (PL-1), 1.83 Ma (PL-2), 0.68 Ma (PL-3), 0.45 Ma (PL-4) and 0.04 Ma (PL-5), have been identified that show a weakening of the Leeuwin Current (LC). A weakened LC would affect the sea surface temperature (SST) field in the Indian Ocean, since the Indonesian Throughflow generally warms the Indian Ocean. These five intervals therefore probably correspond to periods of considerably lowered SST in the Indian Ocean, which would have influenced Indian monsoon intensity. During a weak LC, the Indian winter monsoon may have been reduced in intensity and the summer monsoon strengthened, because of a change in the Indian Ocean Dipole driven by the reduced net heat input to the Indian Ocean through the Indonesian Throughflow. A better understanding of the possible links between El Niño, the Western Pacific Warm Pool, the Indonesian Throughflow, wind patterns off western Australia, and ice volume expansion and contraction can thus be obtained by studying the behaviour of the LC during the Quaternary at close stratigraphic intervals.
The impact of the monsoon on local weather differs from place to place. In some places there is only a likelihood of slightly more or less rain; in others, semi-deserts are turned into vivid green grasslands where all sorts of plants and crops can prosper.
The Indian Monsoon turns large parts of India from a kind of semi-desert into green land; photographs of the Western Ghats taken only three months apart show the transformation. In such places it is crucial for farmers to time the sowing of their fields correctly, as it is essential to use all the rain that is available for growing crops.
Monsoons are large-scale sea breezes which occur when the temperature on land is significantly warmer or cooler than the temperature of the ocean. These temperature imbalances happen because oceans and land absorb heat in different ways. Over oceans, the air temperature remains relatively stable for two reasons: water has a relatively high heat capacity (3.9 to 4.2 J g−1 K−1), and both conduction and convection will equilibrate a hot or cold surface with deeper water (up to 50 metres). In contrast, dirt, sand, and rocks have lower heat capacities (0.19 to 0.35 J g−1 K−1), and they can transmit heat into the earth only by conduction, not by convection. Therefore, bodies of water stay at a more even temperature, while land temperatures are more variable.
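The difference in heat capacity can be made concrete with a little arithmetic. Below is a minimal sketch, assuming an equal (and purely illustrative) energy input to equal masses of water and sand, and applying the standard relation ΔT = Q / (m · c) with the specific heats quoted above; the energy and mass figures are assumptions, not measured values.

```python
# Sketch: how far the same energy input raises the temperature of water
# versus dry sand, given their specific heat capacities.

def temp_rise(energy_j, mass_g, specific_heat_j_per_g_k):
    """Temperature change from delta_T = Q / (m * c)."""
    return energy_j / (mass_g * specific_heat_j_per_g_k)

energy = 100_000.0   # J, assumed equal solar input to both surfaces
mass = 1_000.0       # g of surface material

water_rise = temp_rise(energy, mass, 4.2)   # water: ~4.2 J g^-1 K^-1
sand_rise = temp_rise(energy, mass, 0.3)    # sand/rock: ~0.3 J g^-1 K^-1

print(f"water: +{water_rise:.1f} K, sand: +{sand_rise:.1f} K")
# Sand warms roughly 14x as much for the same input, which is why land
# surfaces heat and cool so much faster than the ocean.
```

The ratio of the two heat capacities (about 4.2 / 0.3 ≈ 14) alone explains most of the land–sea temperature contrast that drives the monsoon circulation.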
During warmer months sunlight heats the surfaces of both land and oceans, but land temperatures rise more quickly. As the land's surface becomes warmer, the air above it expands and an area of low pressure develops. Meanwhile, the ocean remains at a lower temperature than the land, and the air above it retains a higher pressure. This difference in pressure causes sea breezes to blow from the ocean to the land, bringing moist air inland. This moist air rises to a higher altitude over land and then it flows back toward the ocean (thus completing the cycle). However, when the air rises, and while it is still over the land, the air cools. This decreases the air's ability to hold water, and this causes precipitation over the land. This is why summer monsoons cause so much rain over land.
In the colder months, the cycle is reversed. Then the land cools faster than the oceans and the air over the land has higher pressure than air over the ocean. This causes the air over the land to flow to the ocean. When humid air rises over the ocean, it cools, and this causes precipitation over the oceans. (The cool air then flows towards the land to complete the cycle.)
Most summer monsoons have a dominant westerly component and a strong tendency to ascend and produce copious amounts of rain (because of the condensation of water vapor in the rising air). The intensity and duration, however, are not uniform from year to year. Winter monsoons, by contrast, have a dominant easterly component and a strong tendency to diverge, subside and cause drought.
Similar rainfall is caused when moist ocean air is lifted upwards by mountains, surface heating, convergence at the surface, divergence aloft, or from storm-produced outflows at the surface. However the lifting occurs, the air cools due to expansion in lower pressure, and this produces condensation.
The monsoon of western Sub-Saharan Africa is the result of the seasonal shifts of the Intertropical Convergence Zone and the great seasonal temperature and humidity differences between the Sahara and the equatorial Atlantic Ocean. It migrates northward from the equatorial Atlantic in February, reaches western Africa on or near June 22, then moves back to the south by October. The dry, northeasterly trade winds, and their more extreme form, the harmattan, are interrupted by the northern shift in the ITCZ and resultant southerly, rain-bearing winds during the summer. The semiarid Sahel and Sudan depend upon this pattern for most of their precipitation.
The North American monsoon (NAM) occurs from late June or early July into September, originating over Mexico and spreading into the southwest United States by mid-July. It affects Mexico along the Sierra Madre Occidental as well as Arizona, New Mexico, Nevada, Utah, Colorado, West Texas and California. It pushes as far west as the Peninsular Ranges and Transverse Ranges of Southern California, but rarely reaches the coastal strip (a wall of desert thunderstorms only a half-hour's drive away is a common summer sight from the sunny skies along the coast during the monsoon). The North American monsoon is known to many as the Summer, Southwest, Mexican or Arizona monsoon. It is also sometimes called the Desert monsoon, as a large part of the affected area is covered by the Mojave and Sonoran deserts.
The Asian monsoons may be classified into a few sub-systems, such as the South Asian Monsoon which affects the Indian subcontinent and surrounding regions, and the East Asian Monsoon which affects southern China, Korea and parts of Japan.
The southwestern summer monsoons occur from July through September. The Thar Desert and adjoining areas of the northern and central Indian subcontinent heat up considerably during the hot summers, creating a low-pressure area over the northern and central Indian subcontinent. To fill this void, moisture-laden winds from the Indian Ocean rush into the subcontinent. These winds, rich in moisture, are drawn towards the Himalayas. The Himalayas act like a high wall, blocking the winds from passing into Central Asia and forcing them to rise. As the clouds rise, their temperature drops and precipitation occurs. Some areas of the subcontinent receive up to 10,000 mm (390 in) of rain annually.
The southwest monsoon is generally expected to begin around the beginning of June and fade away by the end of September. The moisture-laden winds on reaching the southernmost point of the Indian Peninsula, due to its topography, become divided into two parts: the Arabian Sea Branch and the Bay of Bengal Branch.
The Arabian Sea Branch of the Southwest Monsoon first hits the Western Ghats of the coastal state of Kerala, India, thus making this area the first state in India to receive rain from the Southwest Monsoon. This branch of the monsoon moves northwards along the Western Ghats (Konkan and Goa) with precipitation on coastal areas, west of the Western Ghats. The eastern areas of the Western Ghats do not receive much rain from this monsoon as the wind does not cross the Western Ghats.
The Bay of Bengal Branch of the Southwest Monsoon flows over the Bay of Bengal heading towards North-East India and Bengal, picking up more moisture from the Bay of Bengal. The winds arrive at the Eastern Himalayas bearing large amounts of rain. Mawsynram, situated on the southern slopes of the Khasi Hills in Meghalaya, India, is one of the wettest places on Earth. After arriving at the Eastern Himalayas, the winds turn towards the west, travelling over the Indo-Gangetic Plain at a rate of roughly one to two weeks per state, pouring rain all along the way. June 1 is regarded as the date of onset of the monsoon in India, as indicated by its arrival in the southernmost state of Kerala.
The monsoon accounts for 80% of the rainfall in India. Indian agriculture (which accounts for 25% of the GDP and employs 70% of the population) is heavily dependent on the rains, especially for crops such as cotton, rice, oilseeds and coarse grains. A delay of a few days in the arrival of the monsoon can badly affect the economy, as evidenced by the numerous droughts in India in the 1990s.
The monsoon is widely welcomed and appreciated by city-dwellers as well, for it provides relief from the climax of summer heat in June. However, the roads take a battering every year. Houses and streets are often waterlogged and slums are flooded despite drainage systems. A lack of city infrastructure, coupled with changing climate patterns, causes severe economic loss, including damage to property and loss of lives, as evidenced by the 2005 flooding that brought Mumbai to a standstill. Bangladesh and certain regions of India, like Assam and West Bengal, also frequently experience heavy floods during this season. Recently, areas in India that used to receive scanty rainfall throughout the year, like the Thar Desert, have surprisingly ended up receiving floods due to the prolonged monsoon season.
The influence of the Southwest Monsoon is felt as far north as in China's Xinjiang. It is estimated that about 70% of all precipitation in the central part of the Tian Shan Mountains falls during the three summer months, when the region is under the monsoon influence; about 70% of that is directly of "cyclonic" (i.e., monsoon-driven) origin (as opposed to "local convection").
Around September, with the sun fast retreating south, the northern land mass of the Indian subcontinent begins to cool off rapidly. With this, air pressure begins to build over northern India, while the Indian Ocean and its surrounding atmosphere still hold their heat. This causes cold wind to sweep down from the Himalayas and the Indo-Gangetic Plain towards the vast spans of the Indian Ocean south of the Deccan peninsula. This is known as the Northeast Monsoon or Retreating Monsoon.
While travelling towards the Indian Ocean, the dry cold wind picks up some moisture from the Bay of Bengal and pours it over peninsular India and parts of Sri Lanka. Cities like Chennai, which get less rain from the Southwest Monsoon, receive rain from this monsoon. About 50% to 60% of the rain received by the state of Tamil Nadu comes from the Northeast Monsoon. In Southern Asia, the northeastern monsoons take place from December to early March, when the surface high-pressure system is strongest. The jet stream in this region splits into the southern subtropical jet and the polar jet. The subtropical flow directs northeasterly winds across southern Asia, creating dry air streams which produce clear skies over India. Meanwhile, a low-pressure system develops over South-East Asia and Australasia, and winds are directed toward Australia, forming what is known as a monsoon trough.
The East Asian monsoon affects large parts of Indo-China, Philippines, China, Korea and Japan. It is characterised by a warm, rainy summer monsoon and a cold, dry winter monsoon. The rain occurs in a concentrated belt that stretches east-west except in East China where it is tilted east-northeast over Korea and Japan. The seasonal rain is known as Meiyu in China, Changma in Korea, and Bai-u in Japan, with the latter two resembling frontal rain.
The onset of the summer monsoon is marked by a period of premonsoonal rain over South China and Taiwan in early May. From May through August, the summer monsoon shifts through a series of dry and rainy phases as the rain belt moves northward, beginning over Indochina and the South China Sea (May), to the Yangtze River Basin and Japan (June) and finally to North China and Korea (July). When the monsoon ends in August, the rain belt moves back to South China.
Also known as the Indo-Australian Monsoon. The rainy season occurs from September to February and it is a major source of energy for the Hadley circulation during boreal winter. The Maritime Continent Monsoon and the Australian Monsoon may be considered to be the same system, the Indo-Australian Monsoon.
It is associated with the development of the Siberian High and the movement of the heating maxima from the Northern Hemisphere to the Southern Hemisphere. North-easterly winds flow down Southeast Asia, are turned north-westerly/westerly by Borneo topography towards Australia. This forms a cyclonic circulation vortex over Borneo, which together with descending cold surges of winter air from higher latitudes, cause significant weather phenomena in the region. Examples are the formation of a rare low-latitude tropical storm in 2001, Tropical Storm Vamei, and the devastating flood of Jakarta in 2007.
The onset of the monsoon over the Maritime Continent tends to follow the heating maxima down Vietnam and the Malay Peninsula (September), to Sumatra, Borneo and the Philippines (October), to Java and Sulawesi (November), and to Irian Jaya and Northern Australia (December, January). However, the monsoon is not a simple response to heating but a more complex interaction of topography, wind and sea, as demonstrated by its abrupt rather than gradual withdrawal from the region. The Australian monsoon (the "Wet") occurs in the southern summer, when the monsoon trough develops over Northern Australia. Over three-quarters of annual rainfall in Northern Australia falls during this time.
The European Monsoon (more commonly known as the Return of the Westerlies) is the result of a resurgence of westerly winds from the Atlantic, where they become loaded with wind and rain. These Westerly winds are a common phenomenon during the European winter, but they ease as spring approaches in late March and through April and May. The winds pick up again in June, which is why this phenomenon is also referred to as "the return of the westerlies".
The rain usually arrives in two waves, at the beginning of June and again in mid to late June. The European monsoon is not a monsoon in the traditional sense in that it doesn't meet all the requirements to be classified as such. Instead the Return of the Westerlies is more regarded as a conveyor belt that delivers a series of low pressure centres to Western Europe where they create unsettled weather. These storms generally feature significantly lower than average temperatures, fierce rain or hail, thunder and strong winds.
Evidence of Plate Tectonics http://www.mnh.si.edu/earth/text/4_1_2_3.html
Evidence of Plate Tectonics
• 1980s: Ten years of telescope-based measurements from Earth to distant quasars confirmed that the North American and Eurasian Plates are moving away from each other at a rate of 1.7 cm (0.7 in) per year.
• Geological studies of ocean-floor rocks suggest the plates have moved at this same average speed for tens of millions of years.
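The measured rate can be put in perspective with a quick calculation (my own arithmetic, not from the slides): at 1.7 cm per year, held constant as the ocean-floor evidence suggests, how far apart would the plates drift over geological time?

```python
# Sketch: cumulative plate separation at a constant 1.7 cm/year.

RATE_CM_PER_YEAR = 1.7

def separation_km(years):
    """Total separation in kilometres, assuming the rate stays constant."""
    return RATE_CM_PER_YEAR * years / 100.0 / 1000.0  # cm -> m -> km

print(separation_km(10))           # the 10-year telescope baseline: 0.00017 km (17 cm)
print(separation_km(50_000_000))   # over 50 million years: 850.0 km
```

Even a rate of centimetres per year, sustained for tens of millions of years, adds up to hundreds of kilometres of ocean floor.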
Evidence for Plate Tectonics
• Two telescopes — one in Massachusetts, the other in Germany — measured their distance to a pulsing quasar at the same time. This measurement was repeated 803 times over 10 years. Each point on the graph indicates how far apart the two stations were at that time. As the graph shows, they slowly moved apart.
Plate Tectonics and Volcanoes
• A volcano is an opening, or rupture, in a planet's surface or crust, which allows hot magma, volcanic ash and gases to escape from below the surface.
• Volcanoes are generally found where tectonic plates are diverging or converging.
Earthquakes
• An earthquake is the result of a sudden release of energy in the Earth's crust that creates seismic waves.
• The seismic activity of an area refers to the frequency, type and size of earthquakes experienced over a period of time.
1989 San Francisco Earthquake
• http://www.youtube.com/watch?v=zkx-vO9I8r8&feature=related
Tsunami
• A tsunami is a series of water waves caused by the displacement of a large volume of a body of water, typically an ocean or a large lake.
• Earthquakes, volcanic eruptions and other underwater explosions, landslides, glacier calvings, meteorite impacts and other disturbances above or below water all have the potential to generate a tsunami.
MATHS PROJECT Quadrilaterals ppt
A plane figure bounded by four line segments AB, BC, CD and DA is called a quadrilateral.
In geometry, a quadrilateral is a polygon with four sides and four vertices. Sometimes the term quadrangle is used, for etymological symmetry with triangle, and sometimes tetragon for consistency with pentagon.
There are many kinds of quadrilaterals. Quadrilaterals are either simple (not self-intersecting) or complex (self-intersecting). Simple quadrilaterals are either convex or concave.
Is a square a rectangle?
Some people define categories exclusively, so that a rectangle is a quadrilateral with four right angles that is not a square. This is appropriate for everyday use of the words, as people typically use the less specific word only when the more specific word will not do. Generally a rectangle which isn't a square is an oblong.
But in mathematics, it is important to define categories inclusively, so that a square is a rectangle. Inclusive categories make statements of theorems shorter, by eliminating the need for tedious listing of cases. For example, the visual proof that vector addition is commutative is known as the "parallelogram diagram". If categories were exclusive it would have to be known as the "parallelogram (or rectangle or rhombus or square) diagram"!
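The inclusive definition maps naturally onto subtyping. Below is a minimal sketch (the class names are my own illustration, not from the source) in which Square is a special case of Rectangle, so anything stated for rectangles automatically holds for squares:

```python
# Inclusive categories as an inheritance hierarchy: every Square
# is-a Rectangle, mirroring the mathematical definition.

class Rectangle:
    def __init__(self, width, height):
        self.width = width
        self.height = height

    def area(self):
        return self.width * self.height

class Square(Rectangle):
    def __init__(self, side):
        super().__init__(side, side)  # a square is a rectangle with equal sides

s = Square(3)
print(isinstance(s, Rectangle), s.area())  # True 9
```

Just as the inclusive definition spares mathematicians from listing cases in theorem statements, the subclass relation here means any code written for rectangles works on squares unchanged.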
These properties describe a kite: it has two pairs of sides, where each pair is made up of adjacent sides (the sides meet) that are equal in length. The angles are equal where the pairs meet. The diagonals (shown as dashed lines in the original slide) meet at a right angle, and one diagonal bisects (cuts equally in half) the other.
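The diagonal properties can be checked numerically. Below is a sketch using an example kite of my own choosing (the vertices are not from the source): the axis diagonal is perpendicular to the cross diagonal, and it bisects it.

```python
# Verifying kite diagonal properties with coordinate geometry.

def dot(u, v):
    """Dot product of two 2-D vectors; zero means perpendicular."""
    return u[0] * v[0] + u[1] * v[1]

def midpoint(p, q):
    return ((p[0] + q[0]) / 2, (p[1] + q[1]) / 2)

# A kite with two pairs of equal adjacent sides: AB = AD and CB = CD.
A, B, C, D = (0, 3), (2, 0), (0, -4), (-2, 0)

ac = (C[0] - A[0], C[1] - A[1])  # axis diagonal AC
bd = (D[0] - B[0], D[1] - B[1])  # cross diagonal BD

print(dot(ac, bd) == 0)   # True: the diagonals are perpendicular
print(midpoint(B, D))     # (0.0, 0.0): AC passes through BD's midpoint, bisecting it
```

Note that only the cross diagonal is bisected; the axis diagonal here runs from (0, 3) to (0, -4) and is cut into unequal parts, exactly as the slide states.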
The Cell Functions And Actions Guide
A cell is the primary building block of life, and cell functions are required for everything that happens inside the human body. While an organism may consist of only one cell, as bacteria do, a human body is thought to contain over 100 trillion cells. The function of cells is determined by the genetic code that is present in each and every cell. All human cells are classified as eukaryotic and are the same type as the cells found in any organism that consists of more than one cell.
Perhaps the most important function of cells is metabolism. While plant cells are able to convert the energy from the sun into energy that they can use, human cells must have external nutrients to provide energy to the body. Cells are capable of breaking down sugars and other molecules to provide the necessary energy to keep a body alive. This process of metabolism allows all the other normal functions that a cell must complete. By breaking down specific molecules, cells create ATP, or adenosine triphosphate. Using energetic bonds, this substance is how cells transport energy to other cells within a person's body. The energy is then used by organs in the production of hormones and by the brain to control these organs.
If cellular metabolism is the cell function that is most important, then the second most important function is reproduction. This is the process by which cells promote the synthesis of life. There are two methods of reproduction: mitosis and meiosis. For generation of new cells within a body, mitosis is performed by somatic cells. Meiosis only occurs in reproductive cells. These cells have a slightly different genetic code than the cells that created them and they are used to propagate the advancement of a species through sexual reproduction. Without the ability to create these types of cells, the human population would simply die out as no reproduction would occur. In this sense, the cell function of reproduction may be seen as even more important than cellular metabolism.
The last major cell function is the transport of molecules throughout the body. This is the important process by which blood is circulated and oxygen reaches the various organs in the body. There are two types of molecular transport: active and passive. In active transport, cells move macromolecules, such as proteins, to their destination. Passive transport occurs when cells absorb molecules by allowing them to cross the cellular membrane.
The relationship between a cell's type and its function is absolutely necessary for the successful maintenance of life. As cells take different forms, like skin cells, white blood cells, and brain cells, their individual functions can differ greatly, but they are all important in keeping the body working. There is a vast number of cell functions, and it will be many years before the complete relationship between each type of cell and overall human health is understood.