id int64 110 1.16M | edu_score float64 3.5 5.1 | url stringlengths 21 286 | text stringlengths 507 485k | timestamp stringdate 2026-01-18 07:33:45 2026-02-05 07:22:54 |
|---|---|---|---|---|
22,989 | 3.699284 | http://www.ruralvt.com/ancientroads/bearingDistanceCalculations.php |
The surveyor's bearing and distance values, known as "survey pairs", found in the record of new roads, often use a notation and units not familiar to us 21st century folks. To acquaint you with the style of compass bearings and units of distance measure, the following explanations and interactive calculators are offered.
The compass bearings found in town roadway surveys often use a format consisting of a letter followed by a numeric value followed by another letter. The format looks like this: [N or S][angle][E or W], for example, N15E.
The first portion is the letter 'N' or 'S'. The second portion is an angle between 0 and 90 degrees. The third and final portion is the letter 'E' or 'W'.
|N15E| Starting at North, turn 15 degrees toward the East. The resulting angle is 15 degrees.|
|S15E| Starting at South, turn 15 degrees toward the East. The resulting angle is 180 (South) minus 15 degrees equals 165 degrees.|
|S15W| Starting at South, turn 15 degrees toward the West. The resulting angle is 180 (South) plus 15 degrees equals 195 degrees.|
|N15W| Starting at North, turn 15 degrees toward the West. The resulting angle is 360 (North) minus 15 degrees equals 345 degrees.|
Occasionally, when the direction is due (exactly) North, East, South, or West, the entry may simply be the letter 'N', 'E', 'S', or 'W' respectively.
Here are the resulting numeric angles:
|N| 0 or 360 degrees|
|E| 90 degrees|
|S| 180 degrees|
|W| 270 degrees|
The page provides three interactive calculators:
- Bearing converter: select North or South and East or West, and enter a numeric angle; it converts that surveyor bearing to a numeric compass angle.
- Angle-format converter: converts angles from "DDD MM.mmmm" format to "DDD.dddd" (see the sketch after this list).
- Distance converter: enter the number of chains, rods, and links; the number of feet will be calculated.
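The angle-format conversion is simple arithmetic: decimal degrees equal the whole degrees plus the decimal minutes divided by 60. As a minimal sketch (the cell layout is an assumption, not from the original page), with whole degrees in A1 and decimal minutes in B1:

=A1 + B1/60

For example, 72 degrees 30.6 minutes becomes 72.51 decimal degrees.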
Following are some tools useful in Microsoft Excel workbooks for making the bearing and distance conversions. First is a Visual Basic for Excel macro. Enter it into a macro sheet. Then on a worksheet cell, refer to this macro and three worksheet cells (or constants) containing the "NS", angle, and "EW" information. Second is a simple Excel formula for converting a distance in chains, rods, and links to feet.
Here is the Visual Basic macro for converting a surveyor bearing to a numeric value.
Option Explicit
'
' This is a collection of macros written for use on the Hartland Ancient Roads project.
'
' Macros and functions were written by Gary Trachier.
'
'
' Name: SurveyBearing2Angle
'
' Purpose: Converts a surveyor bearing of the form [NS]ddd.d[EW] to a decimal angle with 0 degrees at North.
'
' Revision history
' 28 June 2007 First written.
'
Public Function SurveyBearing2Angle(ByVal ns As String, ByVal numericAngle As Single, ByVal ew As String)
    Const errNS = -1         ' error value for a problem with the NS parameter
    Const errNumeric = -2    ' error value for a problem with the numeric parameter
    Const errEW = -3         ' error value for a problem with the EW parameter
    Const errUntrapped = -99 ' error value for all miscellaneous errors

    ns = LCase(ns)
    ew = LCase(ew)

    If (ns <> "n" And ns <> "s") Then       ' error-check the NS value
        SurveyBearing2Angle = errNS         ' set the error return value
        Exit Function                       ' bail out
    End If
    If (ew <> "e" And ew <> "w") Then       ' error-check the EW value
        SurveyBearing2Angle = errEW         ' set the error return value
        Exit Function                       ' bail out
    End If
    If (numericAngle < 0 Or numericAngle > 90) Then ' error-check the numeric portion
        SurveyBearing2Angle = errNumeric    ' set the error return value
        Exit Function                       ' bail out
    End If

    ' handle each quadrant separately for the calculation
    Select Case ns
        Case "n"
            Select Case ew
                Case "e"
                    SurveyBearing2Angle = 0 + numericAngle
                Case "w"
                    SurveyBearing2Angle = 360 - numericAngle
                Case Else
                    SurveyBearing2Angle = errUntrapped
            End Select
        Case "s"
            Select Case ew
                Case "e"
                    SurveyBearing2Angle = 180 - numericAngle
                Case "w"
                    SurveyBearing2Angle = 180 + numericAngle
                Case Else
                    SurveyBearing2Angle = errUntrapped
            End Select
        Case Else
            SurveyBearing2Angle = errUntrapped
    End Select
End Function
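As a usage sketch (assuming the function has been placed in a standard module of the workbook): entering =SurveyBearing2Angle("S", 15, "E") in a worksheet cell returns 165, matching the S15E row in the table above, while an invalid direction such as =SurveyBearing2Angle("X", 15, "E") returns the error value -1.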
Here is the Excel equation for converting a distance in chains, rods, and links to feet.
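As a minimal sketch of such a formula (assuming chains, rods, and links in cells A1, B1, and C1; 1 chain = 66 feet, 1 rod = 16.5 feet, 1 link = 0.66 feet):

=A1*66 + B1*16.5 + C1*0.66

| 2026-01-18T16:15:45.699159 |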
981,745 | 4.094966 | http://nature.nps.gov/geology/parks/tapr/ | Few places in the country demonstrate the connection between landscape and people better than the tallgrass prairie of the Flint Hills. The hills of the Tallgrass Prairie National Preserve and the surrounding area are shaped by the rocks that lie directly beneath the vegetation and soil— the same rocks which made cultivation difficult and led to the use of native prairie grasses for ranching. This rocky terrain, then, is closely tied to today’s ranching culture. This area, the Flint Hills, is characterized by thin soils, limestone outcrops, vegetation-covered shale intervals between the limestones, deeply incised valleys, and dissected topography. The Flint Hills cross east-central Kansas from the north near the Nebraska border, and extend into Oklahoma to the south. Many of the limestones contain nodules and layers of flint (also called chert)—a hard, dense rock that resists erosion. As the limestones erode, angular fragments of flint accumulate at the surface, giving the Flint Hills their name. The thin, rocky soils and steep slopes of the Flint Hills have precluded cultivation, effectively preserving the native grasslands. Historically, only deep ravines and the floodplains of streams were forested. Most cultivation is limited to river and stream bottoms, such as Fox Creek, just east of the ranch headquarters area; there, the bedrock is covered by a layer of river-deposited sediments that have developed thick soils that are especially valuable for cultivation.
280-Million-Year-Old Rocks
Limestone ranges in color from nearly white to brown. It is hard, and much more resistant to erosion than the softer shales, which are usually gray or tan. The alternating beds of limestone and shale produce hillsides with a steplike appearance. Many of the limestone layers create notable benches on the hillsides; the shales form the steep slopes between the benches. The hills themselves are created by a process called differential erosion. Tougher, more resistant limestones and flint cap the tops of hills, while the land between them has been worn away and slowly removed.
The rocks of this area—alternating beds of limestone and shale—were deposited during the Permian Period of geologic history, about 280 million years ago. At that time, the climate here was hot, and the surface was covered by ocean water most of the time. The limestones represent periods when the region’s surface was covered by shallow, tropical oceans which teemed with life; shales represent times when mud was deposited on the ocean floor. Each of these sedimentary rock layers has been named after towns, creeks, or other nearby landmarks; the names are based on the location where each rock layer was first found and described by geologists.
A closer look at the rock reveals many fossils. Most of these marine fossils are invertebrates—animals without backbones—such as corals, clams, snails, bryozoans (colonies of animals resembling sea fans), sea urchins, crinoids (a stalked animal that is distantly related to the starfish and sea urchin), and clam-like animals called brachiopods. All of these organisms at one time lived in a shallow, warm, tropical ocean. Particularly abundant in some limestones are fusulinids—fossils shaped like wheat grains; these were one-celled animals that floated in the water. When they died, their skeletons drifted to the bottom of the ocean and were preserved in the lime mud of the ocean floor. These lime muds eventually became limestone. Fusulinids can be seen in many of the limestone blocks used for building on the preserve.
Wood was scarce when the prairie was settled primarily by Anglo-American emigrants in the mid-1800s, so the abundant limestone became important for constructing buildings, bridges, and fences. The Cottonwood Limestone, a rock layer that occurs on the preserve near the base of the hills in the Fox Creek valley, is a common building stone in Kansas. The Cottonwood is thick, nearly white in color, even textured, durable, and contains numerous fusulinids. Blocks of stone three or more feet thick, and several feet in length and width, can be taken from a single ledge. The ranch house, portions of the schoolhouse and barn, and many other structures on the preserve are built with Cottonwood Limestone. Many other buildings in the State, including the Chase County Courthouse in Cottonwood Falls, and most of the State Capitol in Topeka, are constructed with Cottonwood Limestone.
The general park map handed out at the visitor center is available on the park's map webpage. For information about topographic maps, geologic maps, and geologic data sets, please see the geologic maps page.
A geology photo album has not been prepared for this park. For information on other photo collections featuring National Park geology, please see the Image Sources page.
A list of publications available about Tallgrass Prairie can be found here.
Parks and Plates: The Geology of Our National Parks, Monuments & Seashores.
Lillie, Robert J., 2005.
W.W. Norton and Company.
9" x 10.75", paperback, 550 pages, full color throughout
The spectacular geology in our national parks provides the answers to many questions about the Earth. The answers can be appreciated through plate tectonics, an exciting way to understand the ongoing natural processes that sculpt our landscape. Parks and Plates is a visual and scientific voyage of discovery!
Ordering from your National Park Cooperative Associations' bookstores helps to support programs in the parks. Please visit the bookstore locator for park books and much more.
For information about permits that are required for conducting geologic research activities in National Parks, see the Permits Information page.
The NPS maintains a searchable database of research needs that have been identified by parks.
A bibliography of geologic references is being prepared for each park through the Geologic Resources Evaluation Program (GRE). Please see the GRE website for more information and contacts.
NPS Geology and Soils Partners
Association of American State Geologists
Geological Society of America
Natural Resource Conservation Service - Soils
U.S. Geological Survey
General information about the park's education and interpretive programs is available on the park's education webpage. For resources and information on teaching geology using National Park examples, see the Students & Teachers pages. | 2026-02-02T11:06:21.491361 |
24,628 | 3.613995 | https://scienceafrica.co.ke/tress-in-african-drylands-critical-for-holding-groundwater-deep-soil/ | By HENRY OWINO (Senior Correspondent)
Climate change projections indicate that West Africa will experience an increase in the number of extreme rainfall events in the 21st century.
Results from a study carried out by scientists suggest that tree cover is key not only to turning this increase in heavy rainfall into an opportunity for greater soil and groundwater recharge, but also to avoiding escalated land degradation.
If tree cover is absent, there is a higher risk for rainfall to be lost either through evaporation or overland flow. In contrast, when trees are present, rainfall is more likely to infiltrate into the soil and contribute to deep soil and groundwater recharge.
These were the findings of research by scientists from World Agroforestry (ICRAF), the Swedish University of Agricultural Sciences (SLU), and the Université de Ouagadougou, funded by the Swedish Research Council (Vetenskapsrådet) and the Swedish Research Council Formas.
These results suggest that maintaining or promoting appropriate tree cover in tropical African drylands may be crucial to improving deep soil and groundwater recharge under a future climate with more heavy rainfall.
Knowing how changes in tree cover, whether climate- or human-induced, will affect soil and groundwater recharge under different scenarios of rainfall intensity is vital for planning sound strategies for adaptation to climate change.
Moreover, this information should be of great interest for large scale, tree‐based landscape restoration programs in the region, such as the African Forest Landscape Restoration Initiative or the Great Green Wall of the Sahara and the Sahel.
Specific role of tree cover in enhancing deep soil-water drainage
In semi-arid West Africa, rainfall is characterized by high spatial, intra-annual, and inter-annual variability. Annual rainfall is concentrated in a single, relatively short, rainy season that occurs between May and October.
Rainfall intensities are high and a large proportion of annual rain falls during very intense storms.
Soils in the semi-arid tropics and in semi-arid West Africa in particular, are typically sensitive and vulnerable to degradation, which is mainly a result of their low structural stability, especially when soil organic matter is low.
The prevalence of high rainfall intensities, coupled with the physical characteristics of these soils, frequently leads to the formation of crusts on the soil surface. These crusts reduce water infiltration, resulting in enhanced overland flow and limited soil and groundwater recharge, which can negatively affect primary production, local water supplies, ecosystem services and the livelihoods of local people.
In most soils, the recharge of soil and groundwater occurs via a two‐domain flow process, that is, both through the soil matrix and through macropores. The soil matrix consists of solid particles and voids filled with water and air.
Macropores are large soil pores, generally greater than 0.08 mm in diameter. Macropores drain freely by gravity and allow easy movement of water and air. They provide habitat for soil organisms and the roots of plants can grow into them.
When water flow occurs primarily via matrix flow, the recharge process is typically slow. By contrast, water flow along macropores, also known as preferential flow, is much faster and leads to deeper water drainage.
In a previous study in the same area, the researchers found that the degree of preferential flow decreased with increasing distance to the nearest tree stem and that it was higher in small as compared to large open areas among trees. They concluded that this was likely the result of trees increasing the amount of macropores through the combined effect of leaf litter, root and faunal activity, and microclimate.
Macropores serve as pathways for the preferential flow of water; therefore, the degree of preferential flow will often decrease gradually from the vicinity of a tree towards an open area. This is particularly so in the case of ‘funneled preferential flow’, which occurs around the base of tree stems where stemflow concentrates.
The radius of influence of individual trees on enhancing preferential flow will largely depend on their root system and canopy architecture, in particular, on the radial extent of their lateral roots.
For example, in Burkina Faso, roots of Sarcocephalus latifolius, a native tree of West Africa, were found up to a distance of 20 metres from the trunk. It is therefore very likely that in the area the researchers studied, tree roots extended well beyond the canopy edge of trees into the open areas, which would explain why small open areas (radius 6–13 m) had a higher degree of preferential flow compared with large ones (radius 22–30 m).
That small open areas have a higher degree of preferential flow means that a larger portion of the infiltrating soil-water moves faster through the soil profile and penetrates more rapidly to deeper soil depths. This could explain why small open areas received more soil-water drainage when rainfall intensity increased.
In contrast, in large open areas, which were further away from the influence of trees, soil-water flowed mainly through the soil matrix and penetrated more slowly.
Implications for landscape management
In conditions typical of the semi-arid tropics, macropores are needed to enable the recharge of deep soil-water under increased rainfall intensities. In the absence of macropores, more intense rainfall events could lead to increased recharge of topsoil water but because matrix flow is a slow process, a large fraction of this water would likely return to the atmosphere as evaporation and never contribute to deep recharge, especially if evaporation increases as a result of global warming.
Because trees and associated soil fauna enhance macroporosity and preferential flow, maintaining and promoting a moderate tree cover might be a good strategy to improve deep soil and groundwater recharge under a future climate with more frequent heavy rainfall events.
However, tree cover should not be too high; otherwise, transpiration and interception losses from trees would counteract any beneficial effects they might have on deep soil and groundwater recharge.
In the same study area, the researchers found that there was an optimum tree cover that maximized groundwater recharge, which reflects the balance between the positive and negative effects of trees.
The optimum tree cover represents a threshold below which increasing tree cover leads to improved groundwater recharge whereas above this threshold more trees result in reduced water yields. In water limited environments, understanding the potential thresholds in the relationship between tree cover and water availability is critical.
In a similar study in China’s semi-arid Loess Plateau, researchers have estimated the threshold at which additional revegetation in the area will cause a shortage in the water supply for human activities. Additionally, they found that this threshold could be significantly reduced in the future owing to climate change and increased water withdrawals and called for a better match of species and planting density in large‐scale restoration programs.
In line with the growing awareness of the important relationship between tree cover and groundwater recharge, more research is needed to better understand how this relationship will change in response to projected changes in rainfall intensity. | 2026-01-18T16:53:26.121317 |
964,176 | 3.625414 | http://www.merckmanuals.com/pethealth/exotic_pets/guinea_pigs/disorders_and_diseases_of_guinea_pigs.html | Health problems among guinea pigs that live alone are usually related to aging, dental disease, reproductive disorders, injury, or improper care. Infectious diseases such as certain viruses and bacteria usually occur only in guinea pigs that live with other guinea pigs. Intestinal parasites are not common. Tumors are rare in young guinea pigs, but are more common in guinea pigs that are more than 5 years old. Treatment of infectious diseases can be complicated by the fact that guinea pigs are more sensitive to antibiotics than other types of pets.
Prevention of health problems in guinea pigs is key. A proper diet that does not change from day to day, clean water, bedding materials that are gentle on your pet's skin, frequent cleaning and disinfecting of the cage, a low-stress environment, and sufficient exercise all help to prevent illness.
Sickness causes guinea pigs to be stressed; if your pet is sick, hold it as little as possible. Antibiotics can cause problems in guinea pigs' digestive tracts, so your pet may not tolerate these medications. Most disease treatments should include extra vitamin C. Diarrhea and other illnesses may cause your guinea pig to become dehydrated. Signs of dehydration include dry stools, dark urine, or skin “tenting” (if you pinch the skin it does not settle back to normal immediately but instead remains standing up for a few seconds). If your pet is dehydrated, your veterinarian may provide fluid treatment. Animals that will not eat may require a stomach tube.
Guinea pigs are very sensitive to the effects of many antibiotics. These toxic effects may occur directly as a result of the medication (as in the case of the antibiotics streptomycin and dihydrostreptomycin). The antibiotics may also upset the balance of the bacteria that usually live in your pet's intestines. Many antibiotics, including penicillin, ampicillin, lincomycin, clindamycin, vancomycin, erythromycin, tylosin, tetracycline, and chlortetracycline, can cause this problem. If a guinea pig takes certain antibiotics, it may develop diarrhea, loss of appetite, dehydration, or a drop in body temperature. If treatment continues, it may die in less than a week. Inadequate nutrition and vitamin C deficiency can make your pet more likely to develop these problems. Even guinea pigs that do not show signs of problems with antibiotics may die suddenly. Your veterinarian can diagnose the toxic effects of antibiotics in your pet by examining the animal and testing its feces.
There is no effective treatment for this condition other than general support and stopping the antibiotics. In general, you should avoid giving your guinea pig any antibiotics unless specifically directed by a veterinarian familiar with these animals. If your guinea pig must take antibiotics, you will need to monitor its health carefully. If your pet develops diarrhea or stops eating during treatment, contact your veterinarian immediately. Antibiotic ointments used on the skin can be toxic if your pet licks or eats them.
Digestive disorders in guinea pigs may be caused by infections or by an improper diet.
Many types of bacteria, viruses, and parasites can upset a guinea pig's digestive system. Some signs that your pet's digestive system is upset are: diarrhea, weight loss, loss of energy, lack of appetite, and dehydration. Guinea pigs affected by these illnesses may die suddenly without seeming sick. Others may have a range of signs such as lack of energy, lack of appetite, rough fur coat, staining of the fur around the genital area with feces, loose stools, hunched posture, dull eyes, dehydration, weight loss, pain when the abdomen is touched or pressed, fever, or a low body temperature.
Treatment for diarrhea is usually the same, no matter what the cause. Roughage (fiber in the diet) should be increased and grains and sugars decreased. One way to do this is to provide hay in addition to commercial guinea pig feed. Feeding your guinea pig plain yogurt with active cultures, or a commercial supplement called a probiotic with live cultures, may help to restore the healthy balance of “good” bacteria in its digestive tract. Check with your veterinarian regarding the use of yogurt. It is important that your pet drink enough water. If your guinea pig will not voluntarily drink sufficient water, your veterinarian may provide additional fluids by injection. Antibiotics should only be used when absolutely necessary because their use can worsen the imbalance of bacteria in the digestive tract. Follow the treatment program prescribed by your veterinarian carefully. Keeping your guinea pig's bedding, water bottle, and housing clean and sanitized and promptly removing uneaten food can help prevent infection by reducing the level of disease-causing organisms.
Guinea pigs drool whenever there is a problem with chewing or swallowing. This condition is sometimes referred to as slobbers. The cause is usually a problem with the alignment of the teeth (called malocclusion). Malocclusion may occur due to heredity, lack of vitamin C, injury, or imbalances of certain minerals in the diet. The teeth of guinea pigs grow continuously throughout the animal's life. If the teeth or jaws do not meet properly, the teeth often become overgrown and chewing food becomes difficult. As a result, your pet may develop weight loss, bleeding from the mouth, or abscesses in the roots of its teeth that may spread infection to the animal's sinuses. These kinds of problems are very common in guinea pigs.
If your pet is slobbering or drooling, your veterinarian will evaluate this problem carefully. The molars in the back of the mouth are often the cause of this problem, even though teeth in the front of the mouth may seem normal. Some teeth may need to be clipped or filed to help your pet's jaw close properly. If the problem continues, monthly dental visits with your veterinarian may be necessary.
Eye and Ear Disorders
Signs of conjunctivitis (pink eye) include fluid oozing or dripping from the eye, inflammation of the lining of the eye, and redness around the edge of the eyelids. These infections are usually caused by bacteria, such as Bordetella or Streptococcus species, that cause general upper respiratory system disease (see Guinea Pigs: Lung and Airway Disorders). Treatment may include antibiotic eye drops and antibiotics that affect your pet's whole body. An easy way to administer eye drops is to wrap the guinea pig securely in a towel first. As always with guinea pigs, watch your pet's reactions to the medication carefully.
Ear infections are rare in guinea pigs. When they do occur, they are usually the result of bacterial infection. They may occur at the same time as pneumonia or other respiratory disease. Signs of infection may include pus or discharge from the ears; however, sometimes there are no signs of infection. In severe cases, the animal may become deaf. If the infection spreads from the middle ear to the inner ear, your pet may show signs of problems with its nervous system, such as imbalance, tilting head, walking in circles, or rolling on the ground. The usual treatment is to help alleviate signs. Treatment for the ear infection itself does not usually work.
The most common nutritional disorder in guinea pigs is a lack of vitamin C. Loss of appetite also occurs and is usually a sign of another problem such as disease or problems with the teeth.
Vitamin C Deficiency (Scurvy)
Like people, apes, and monkeys, guinea pigs cannot produce their own vitamin C. If they do not get enough of this vitamin in their diet, their bodies' supply of vitamin C disappears quickly. This can cause problems with blood clotting and with the production of collagen, a protein necessary for healthy skin and joints. Reduced collagen can cause problems walking, swollen joints, and bleeding under the skin, in the muscles, in the membranes around the skull, in the brain, and in the intestines. Guinea pigs with a vitamin C deficiency may be weak, lack energy, and walk gingerly or with a limp. They may have a rough hair coat, lose their appetite, lose weight, have diarrhea, become ill, or die suddenly. Your veterinarian can diagnose vitamin C deficiency by finding out what your pet's diet is like, and by examining your pet, looking especially for bleeding or joint problems.
Some guinea pigs may develop a vitamin C deficiency even when they get enough vitamin C in their diets. This can happen if they have other illnesses or problems that prevent them from eating enough or prevent their bodies from absorbing vitamin C properly. Treatment includes giving your pet vitamin C daily, either by mouth (as directed by your veterinarian) or by injection at your veterinarian's office for 1 to 2 weeks. Multivitamins are not recommended because your pet may have problems with some of the other vitamins contained in them. To prevent vitamin C deficiency, a guinea pig's diet should provide at least 10 milligrams of vitamin C daily (30 milligrams for pregnant females).
Loss of appetite can happen for many reasons, including disease, recovery from surgery, exposure to drafts, not having access to enough fresh water, not being able to chew properly because of an underbite or overbite, and a condition called ketosis, in which your pet's body produces too much of one of the byproducts of digestion. Changes in the type of feed or water, or in the bowl or bottle that your pet eats or drinks from, may also trigger loss of appetite. If nothing is done for a guinea pig that is not eating, its condition may worsen very quickly, resulting in liver problems and death. Ketosis, which may be irreversible, can develop even in guinea pigs that begin to eat again. Your veterinarian will determine appropriate treatment, which may include giving your pet special foods such as a commercial hand-feeding formula or regular pelleted chow that has been ground up, vegetable baby foods, and vitamin C. Guinea pigs that refuse to eat may temporarily need to be force-fed by your veterinarian or by you if longer-term care is needed.
The most common metabolic disorders in guinea pigs involve abnormal metabolism of the mineral calcium.
Hardening of the Organs (Metastatic Calcification)
Guinea pigs that suffer from metastatic calcification (a hardening of the internal organs that spreads throughout the body) often die suddenly without any signs of illness. This condition usually occurs in male guinea pigs that are more than 1 year old. If your pet does have signs, they can include weight loss, muscle or joint stiffness, or increased urination (as part of kidney failure). The cause of this condition is uncertain, but is probably related to diets that have too much of the minerals calcium and phosphorus and not enough of the minerals magnesium and potassium. Most high-quality commercial guinea pig feed is formulated to contain the correct amounts of these minerals. Check the nutrition information on the package label before buying pellets for your guinea pig, and do not give additional vitamin or mineral supplements.
Pregnancy Toxemia (Ketosis)
Ketosis, also known as pregnancy toxemia, occurs when a guinea pig's body produces too many ketones, which are a normal byproduct of metabolism. There are many causes of pregnancy toxemia in guinea pigs. These include obesity, large litter size, loss of appetite during the late stages of pregnancy, not eating enough, not exercising enough, environmental stress, and underdeveloped blood vessels in the uterus (an inherited condition). This problem usually happens in the last 2 to 3 weeks of pregnancy, or in the first week after a guinea pig gives birth. It most commonly affects guinea pigs that are pregnant with their first or second litters.
Although it occurs most often in pregnant female guinea pigs, ketosis can also happen in obese guinea pigs (male or female). A guinea pig may die suddenly of ketosis without ever demonstrating signs of illness. In other cases, a sick guinea pig has worsening signs that can include loss of energy, lack of appetite, lack of desire to drink, muscle spasms, lack of coordination or clumsiness, coma, and death within 5 days. Ketosis may cause fetal guinea pigs to die in the uterus.
Your veterinarian can diagnose ketosis by a blood test, and may also be able to identify a fatty liver and bleeding or cell death in the uterus or placenta. Treatment does not usually help, but options include giving your pet the medications propylene glycol, calcium gluconate, or steroids. However, once a guinea pig starts showing signs of this illness, the outcome is usually not good. To prevent ketosis, make sure your pet eats a high quality food throughout pregnancy, but limit the amount of food you give your pet in order to prevent obesity. Preventing exposure to stress in the last few weeks of pregnancy may also help.
Calcium Deficiency (Pregnant Females)
Because pregnancy and nursing require extra nutrients, pregnant guinea pigs may develop a sudden calcium deficiency. This happens most often in obese or stressed guinea pigs, or guinea pigs that have already been pregnant several times. The deficiency usually develops in the 1 to 2 weeks before, or shortly after, giving birth. In much the same way as in guinea pigs with pregnancy toxemia (see Guinea Pigs: Pregnancy Toxemia (Ketosis)), guinea pigs with this condition may die suddenly without signs, or may get sick slowly, with signs such as dehydration, depression, loss of appetite, muscle spasms, and convulsions. Your veterinarian will be able to identify similar problems as in a guinea pig with pregnancy toxemia, except they will likely be more severe. Guinea pigs with calcium deficiency should be treated with the mineral calcium gluconate. To prevent calcium deficiency, feed your pet only high-quality commercial guinea pig feed.
Lung and Airway Disorders
Respiratory diseases in guinea pigs can quickly become serious. If you notice that your guinea pig is having difficulty breathing, see your veterinarian as soon as possible.
Pneumonia, or inflammation of the lungs, is the most frequent cause of death in guinea pigs. Pneumonia in guinea pigs is usually caused by bacterial infection (most often Bordetella bronchiseptica, but other bacteria such as Streptococcus pneumoniae or Streptococcus zooepidemicus may also be the cause). In rare cases, it may be caused by a type of virus known as adenovirus. All of these infectious agents can cause illness without leading to pneumonia (see below).
Signs of pneumonia include oozing or discharge from the nose, sneezing, and difficulty breathing. In addition, guinea pigs with pneumonia often suffer from inflammation of the eyes (commonly called pink eye), fever, weight loss, depression, or loss of appetite. Sudden death can occur when there are outbreaks among groups of guinea pigs. Your veterinarian can diagnose pneumonia from an examination or from special tests performed on the fluid that may be oozing from your pet's eyes or nose. X-rays may also show pneumonia in the lungs.
In general, treatment for a guinea pig with pneumonia is really treatment for the signs of pneumonia instead of the pneumonia itself. This can include administering fluids (to ward off dehydration), forced feeding if necessary, oxygen therapy to help with breathing, and vitamin C. If the pneumonia is caused by bacterial infection, your veterinarian will likely prescribe longterm antibiotics. Although they can be toxic in guinea pigs (see Guinea Pigs: Antibiotics), certain antibiotics are safer than others, and your veterinarian may select one of these if needed. Commonly, the antibiotic is compounded into an oral suspension, which should then be given as directed. Watch any guinea pig receiving antibiotic treatment carefully. If the antibiotics cause diarrhea, the treatment should be stopped immediately and your veterinarian contacted. If you have more than 1 guinea pig, preventing and controlling outbreaks of pneumonia requires keeping your pets and their cages or tanks clean at all times, and removing guinea pigs that are sick from the company of the others.
Bordetella bronchiseptica Infection
Guinea pigs without signs of illness may be infected with these bacteria in their nose or throat. Sometimes there can be an outbreak among groups of guinea pigs, during which all get sick and die quickly. Infection can be transmitted from one guinea pig to another when droplets are sprayed into the air by sneezing or coughing; in its genital form, infection can also be transmitted by sexual contact. Other animals, such as dogs, cats, rabbits, and mice, may be infected with these bacteria without showing any signs of illness, so pet owners should avoid letting their guinea pigs come into contact with other animals.
Guinea pigs may be infected with the Streptococcus pneumoniae bacteria without seeming sick. The bacteria can cause a sudden illness in previously healthy guinea pigs when they become stressed or stop eating; this can lead to death. One guinea pig can infect another by direct contact or by sneezing or coughing. Signs of streptococcosis include enlarged lymph nodes and difficulty breathing. Your veterinarian can spot other signs of infection with this bacteria, such as inflammation of the inner ear or eardrum (otitis media), inflammation of the joints (arthritis), and inflammation of the lining of the lungs, heart, abdomen, or uterus. He or she can diagnose streptococcosis based on these signs, other examination findings, and laboratory tests. Certain antibiotics can prevent one sick guinea pig from spreading the infection to other guinea pigs, but guinea pigs that do not seem sick may still be infected.
There is a type of adenovirus that is specific to guinea pigs. It may cause pneumonia (see Guinea Pigs: Pneumonia), but many guinea pigs have this virus without any signs of illness and are called carriers. Carriers can suddenly become sick as a result of stress or anesthesia. This occurs more often in guinea pigs that are young, old, or that have immune systems that are not working properly. Guinea pigs do not usually die from this virus, but those that do die often die suddenly without seeming sick. Signs of illness are similar to those seen in other viral or bacterial infections and include breathing difficulties, discharge from the nose, and weight loss.
Common reproductive problems in guinea pigs may involve the ovaries or breasts. There is also a metabolic disorder associated with improper calcium levels during pregnancy.
Ovarian cysts are very common in female guinea pigs between 18 months and 5 years of age. The cysts usually occur in both ovaries, but occasionally only the right ovary is affected. The cysts can often be felt in the abdomen. Other signs may include loss of appetite, energy, and sometimes hair loss on or around the abdomen. To confirm the diagnosis, your veterinarian may use ultrasonography or x-rays. The only effective treatment is spaying (removing the ovaries and the uterus). If left untreated, the cysts may continue to grow and could potentially burst, placing the guinea pig's life in danger.
Mastitis is inflammation of the mammary glands. It is usually caused by a bacterial infection. This often occurs during the period when a female guinea pig's offspring are suckling. Injury—such as cuts or scrapes in the skin—can make it easier for bacteria in the environment to enter the body and cause infection. Mastitis is a painful and serious condition. The milk glands become painful and enlarged, warm, firm, and bluish in color. Without prompt treatment, the infection may spread to the guinea pig's bloodstream and cause fever, lack of appetite, depression, dehydration, a lack of milk production, neglect of offspring, and death. Milk may be thick or bloody and clotted. Your veterinarian may treat mastitis with appropriate antibiotics. To prevent this condition, make sure your pet is well taken care of, its living quarters are clean and sanitary, and its bedding does not cause irritation.
Bordetella bacteria can infect guinea pig genitals and can be spread by sexual contact. Infection can cause infertility, stillbirth, or sudden death of guinea pig fetuses in the uterus.
Because pregnancy and nursing require extra nutrients, pregnant guinea pigs may develop a sudden calcium deficiency. (For a more detailed discussion of Calcium Deficiency, see Guinea Pigs: Calcium Deficiency (Pregnant Females).)
Dystocia (difficulty giving birth) in female guinea pigs is caused by the normal stiffening of the tough fibrous cartilage which joins the 2 pubic bones. When the cartilage (the symphysis) stiffens, it limits the spread of the pubic bones. If the symphysis has not been stretched by a previous birth, the female will be unable to deliver her offspring normally. Cesarean sections are very risky for guinea pigs and the survival rate for the mother is poor. The safest option is to either breed the female between 4 and 5 months of age or prevent pregnancy altogether by housing male and female guinea pigs separately or by spaying and neutering.
Skin problems in guinea pigs are often first noticed as patches of hair loss. Several underlying problems can lead to hair loss, including infestations of fur mites or lice, ringworm, or fighting between incompatible animals. Another skin problem, pododermatitis, affects the feet.
Severe infestation by fur mites may cause hair loss or itching along the rear end of a guinea pig's body. Some types of mites cause no signs, others cause hair loss but do not seem to affect the skin, and still others burrow into the skin and may cause intense itching, hair loss, and skin inflammation. This latter type of mite usually infects the inner thighs, shoulders, and neck. The skin underneath the affected fur may be dry or oily and thickened or crusty. In severely affected animals, the affected areas may become infected, which can cause the animals to lose weight, have low energy, or run around the cage. Left untreated, convulsions and death may result. Guinea pigs catch fur mites from other guinea pigs or from objects that are contaminated such as bedding. Your veterinarian can diagnose this condition either by examining your pet's fur or by looking at scrapings from your pet's skin under a microscope. To treat fur mites, your veterinarian will probably prescribe a powder or spray to be applied to your pet's skin or give your pet a series of injections. Infestations can be minimized or prevented by making sure that living quarters are clean and sanitary, and minimizing your pet's stress levels.
Guinea pigs that are infested with lice do not usually have signs, but in severe cases lice can cause itching, hair loss, and inflammation of the skin around the neck and ears. You can see the lice by looking at a piece of your pet's hair under a magnifying glass. To treat lice, your veterinarian will probably prescribe a powder or spray to be applied to your pet's skin. To prevent this condition, keep the guinea pig's cage clean and sanitary.
Skin infections in guinea pigs are most often caused by the fungus Trichophyton mentagrophytes, and less often by Microsporum species. The primary sign of ringworm is bald patches, usually starting at the head. The bald patches generally have crusty, flaky, red patches within them. When these patches appear on the face, it is usually around the eyes, nose, and ears. The disease may also spread to the back. A guinea pig can catch ringworm from another guinea pig or from contaminated objects such as bedding.
Your veterinarian can tell if your pet is infected with this condition by looking at the red patches on its skin, by shining a special ultraviolet light on its skin, or by a laboratory test. Ringworm usually goes away on its own if you take good care of your pet and keep its cage or tank clean and sanitary. The red, flaky patches can become infected, which causes them to become inflamed and pus-filled. Treatment is a 5- to 6-week course of an antifungal medicine called griseofulvin given by mouth. If there are only 1 or 2 bald patches or red, flaky areas that have not spread, they can be treated by applying an antifungal ointment recommended by your veterinarian every day for 7 to 10 days.
Ringworm is highly contagious to humans and other animals. If handling an infected guinea pig is necessary, you should wear disposable gloves or wash your hands thoroughly with soap and warm water after handling.
Guinea pigs may chew or tear their own or each other's hair as a result of conflicts between adult males or between adults and juveniles. This is referred to as barbering. When this happens, the hair loss tends to be in patches, and there may be evidence of bite marks or skin inflammation underneath the fur. Barbering may be prevented by separating affected animals, minimizing stress, weaning baby guinea pigs from their mothers early, and feeding animals long-stemmed hay. Hair loss can also be caused by genetic problems or problems in metabolism, or the body's breakdown of food into energy; this is especially true in female guinea pigs that have been used for breeding. Young guinea pigs that are weaning from their mothers may have hair thinning as their coat changes to coarser adult fur, or if their diet does not have enough protein.
Your pet's footpads can become inflamed, develop sores, or become overgrown over the course of many months. Staphylococcus aureus bacteria are often the cause and can enter your pet's feet through tiny cuts or scrapes. Factors that increase the risk of infection include obesity, wire floor caging, poor sanitation, and injury. When pododermatitis lasts for many months, it can lead to serious complications such as swelling of the lymph nodes, arthritis, inflammation of the tendons, and a buildup of a protein called amyloid in the kidney, liver, hormone glands, spleen, and pancreas. Your veterinarian can diagnose this condition by examining your guinea pig and by doing laboratory tests. If it is detected early, the condition may be treated simply by switching your pet's living quarters to ones with a smooth bottom, improving sanitation, and changing the bedding to softer material. Your veterinarian will likely clean any wounds, clip the hair around the affected areas, and trim any overgrown nails. Affected feet should be soaked in an antibiotic solution, and antibiotic ointment should be applied. In severe cases, animals may need antibiotics and pain medications. Guinea pigs that do not respond to therapy may require amputation of the affected area to avoid more serious complications.
Disorders Affecting Multiple Body Systems
Some guinea pig diseases affect more than one body system. These are also known as multisystemic or generalized diseases.
Enlarged Lymph Nodes (“Lumps” or Lymphadenitis)
Lymph nodes are glands that are located throughout the body that help fight infection. The lymph nodes around the neck often become enlarged or inflamed in guinea pigs. The usual cause of this problem is bacteria, most often Streptococcus zooepidemicus. The infected lymph nodes may become swollen and filled with pus (abscesses), sometimes only on one side. The infection can spread and cause an ear infection, inflammation of the eye, pneumonia, and toxins in the blood in younger animals. Other signs that you or your veterinarian might notice depend on which lymph nodes are affected, but may include tilting of the head, inflammation of the sinuses, inflammation of the eye, trouble breathing, skin that is pale or has a blue tint, blood or protein in the urine, fetal death or stillbirth in pregnant guinea pigs, arthritis, or inflammation of certain internal organs or tissues.
Guinea pigs catch this illness from other infected guinea pigs that are sneezing or coughing, by genital contact, or through cuts or scrapes in the skin or in the mouth. Your veterinarian can diagnose this condition by examination and laboratory tests. Antibiotics may or may not eliminate the infection. Abscesses might break open on their own, or they may be surgically opened and drained or removed. However, this may cause the bacteria to enter your pet's bloodstream. To help prevent infection of the lymph nodes, avoid any harsh or irritating bedding or food. Jaws that do not close properly or overgrown teeth should be fixed. Infections of the respiratory tract should be treated. Your pet's living quarters should be kept clean and sanitary, and sick animals should be housed away from other animals to prevent the spread of disease.
Although occurrences are rare, Salmonella bacteria can infect guinea pigs. Some signs of infection include inflammation of the eye, fever, lack of energy, poor appetite, rough hair coat, enlarged spleen and liver, and swollen lymph nodes around the neck. The bacteria are spread by direct contact with infected guinea pigs or wild mice or rats or by sharing food, water, or bedding with infected animals. Fresh vegetables may also carry Salmonella. Because an animal that is treated may still continue to infect other animals even when it does not seem sick, treatment may not be recommended. Guinea pigs can spread Salmonella infection to humans by direct contact, so appropriate sanitation measures (such as wearing disposable gloves and washing hands thoroughly) should be taken when handling any sick guinea pig.
Guinea pigs occasionally become infected with Yersinia pseudotuberculosis bacteria through contaminated food, bedding, or water. The bacteria can also enter a guinea pig's body through cuts or scrapes in the skin or through inhalation. If a guinea pig becomes infected, the illness may take several courses: 1) infection may spread to the bloodstream and cause sudden death; 2) infected guinea pigs may lose weight, develop diarrhea, and die over the course of 3 to 4 weeks; 3) swollen lymph nodes develop in the neck or shoulder; or 4) your pet may be infected without seeming sick. Veterinarians diagnose this infection by laboratory tests and examination of the sick guinea pig. All guinea pigs that are infected with these bacteria, or that have lived in close quarters with an infected guinea pig, must be euthanized (put to sleep), and the living quarters must be thoroughly sanitized and disinfected.
Cancers and Tumors
Younger guinea pigs may develop skin tumors or leukemia (a cancer of the blood), but most types of cancer are not common in guinea pigs until they are 4 to 5 years old. After that age, between one-sixth and one-third of guinea pigs will develop a tumor. Tumors are more common in strains of guinea pigs that have been inbred. Treatment, if recommended, will depend on the type and location of the tumor.
Benign skin tumors called trichoepitheliomas often occur in guinea pigs, commonly at the base of the tail. These can be easily removed with a simple surgical procedure.
Lymphosarcoma is the most common tumor in guinea pigs; it causes what is sometimes referred to as cavian leukemia. Signs may include a scruffy hair coat and occasionally masses in the chest area and/or an enlarged liver or spleen. The diagnosis is confirmed by a blood count and examination of fluids from the lymph nodes or chest cavity. The outlook for survival is poor; most guinea pigs only live a few weeks after diagnosis.
Last full review/revision July 2011 by Katherine E. Quesenberry, DVM, MPH, DABVP (Avian); Kenneth R. Boschert, DVM, DACLAM | 2026-02-02T05:22:27.890751 |
497,986 | 3.549439 | http://teachers.mam.org/resource/in-depth-discussion-thomas-gainsborough/ | Thomas Gainsborough (1727–1788) was a painter who loved landscapes, but who made his money through portraiture.
Mary, Countess Howe (ca. 1764) is commonly heralded as one of the great masterpieces of British painting. Gainsborough was paid to paint her portrait, along with that of her husband, Richard Howe, while the artist was living in Bath in the 1760s. Mary is dressed in the current fashion, shown especially by her accessories, such as her expensive Italian hat.
The countess was in Bath to help heal her husband’s gout. Bath was considered a “spa town” at the time and was a popular destination for those who were sick. Gainsborough was much more interested in Mary’s portrait than he was in her husband’s: X-rays show that he tweaked and adjusted many parts of the painting, whereas her husband’s portrait shows no signs of such meticulousness. These two paintings would have been hung next to each other—the upright, strong woman stepping forward, contrasted with her husband, who is merely leaning against a rock.
In fact, some art historians think that Gainsborough might have been making a statement about Mary as a powerful woman. She stands outside in a private, expansive, and fenced-in park; deer in the background allude to the English pastime of hunting. Mary was a known landowner, rare for women at the time—perhaps Gainsborough is celebrating her independence.
Gainsborough took inspiration from Anthony van Dyck for his portraits, particularly the full-length style (from head to toe) and fashionable dress. Take a look at Van Dyck’s Princess Henrietta of Lorraine Attended by a Page (1634), also in this exhibition, which depicts an exiled princess who was suspected of treason and perhaps even murder.
- What do we know about this woman from the portrait? What can we assume from what she is wearing, from her pose, and from her expression? What about the background or the objects around her—do they tell us anything about her?
- Explain to your students a bit about Mary’s background, particularly what art historians think about Gainsborough’s intent when depicting her. Do they agree or disagree? Why?
Though Gainsborough was well known in his time for his portraits, his true love was landscape painting. In Going to Market, he seems to have turned landscape into social commentary.
At the time, because of the Enclosure Movement, farmers in England were being forced to work for landowners rather than working independently. As their incomes changed so rapidly, many farmers were left without a job or a home.
Art historians think that Gainsborough, who himself grew up on a farm, might have been referencing this phenomenon in this painting of the English countryside. The work contrasts a prosperous family—their cottage sunlit, fields green and bright—with a homeless one, huddled in darkness and mud on the lower-right part of the canvas.
- Explain the historic context of this painting to your students. Do they agree with art historians’ suspicions? What evidence is in the work of art that supports and does not support the claim?
- What landscapes might students depict in Milwaukee? What might they include—what landmarks, people, or objects would be important? What stories would these landscapes tell the future? | 2026-01-26T00:44:09.404064 |
701,309 | 3.86138 | http://www.quackwatch.org/01QuackeryRelatedTopics/lyme.html | Lyme Disease: Questionable Diagnosis and Treatment
Edward McSweegan, Ph.D.
Lyme disease is the most common tick-borne disease in the United States. In 2011, the Centers for Disease Control and Prevention (CDC) recorded 24,364 confirmed cases and 8,733 probable cases. The infection is caused by Borrelia burgdorferi, a spiral-shaped bacterium (spirochete) named after Dr. Willy Burgdorfer, the public health researcher who discovered it in 1982. The infection is often contracted during warm-weather months when ticks are active. The spirochete enters the skin at the site of the tick bite. After incubating for 3-30 days, the bacteria migrate through the skin and may spread to lymph nodes or disseminate through the bloodstream to organs or distant skin sites.
Ixodes scapularis is the most common tick vector in the northeastern and midwestern U.S. The picture shows (from left to right) the nymph, adult male, and adult female. The spot on the thumb shows the relative size of the nymph.
Lyme disease frequently presents with a skin rash called erythema migrans (EM) and common flu-like symptoms of fever, malaise, fatigue, and muscle and joint pains. The characteristic EM rash is a flat or raised red area that expands, often with clearing at the center, to a diameter of up to 20 inches. However, it does not always occur, which can make the diagnosis more difficult, especially when the patient is not aware of having been bitten by a tick. Other early signs may include small skin lesions, facial nerve paralysis, lymphocytic meningitis, and heart-rhythm disturbances. Early infections usually are cured by two to four weeks of orally administered antibiotics (amoxicillin or doxycycline). However, if untreated or inadequately treated, neurologic, cardiac, or joint abnormalities may follow. Worldwide, Lyme disease has been directly responsible for fewer than two dozen deaths .
The disease is named after the town of Old Lyme, Connecticut, where researchers recognized its nature in 1975. In Europe, associations between tick bites and several skin diseases had been known for decades, but it was not understood that various conditions were part of a single illness. Since its nature was clarified, Lyme disease has emerged as a significant source of public controversy . Some people claim to be persistently infected with B. burgdorferi and suffering from debilitating symptoms as a result.
Many infectious agents can cause chronic infections or can be difficult to eradicate with standard antibiotic treatments. Unfortunately, it is often difficult to diagnose such infections and, in the case of Lyme disease, it is difficult to know what percent of cases persist in the form of chronic infections. Other possibilities for persistent symptoms include: autoimmune-like reactions in which the body attacks its own organs and tissues; physically damaged or scarred organs and tissues from an earlier infection; another tick-borne infection such as babesiosis or ehrlichiosis; and re-infection by B. burgdorferi .
Of course, symptoms occurring long after the onset of Lyme disease also can be coincidental. A long-term study of 212 Connecticut residents suspected of having Lyme disease found incidences of pain, fatigue, and difficulty with daily activities to be similar to 212 age-matched controls without Lyme disease . As noted in an accompanying editorial:
After a median follow-up of 51 months, patients with a diagnosis of Lyme disease that met the national surveillance case definition developed by the Centers for Disease Control and Prevention (CDC) had the same profile of symptoms and the same quality-of-life indicators as age-matched controls without Lyme disease. Thus, recognition and treatment of clear-cut Lyme disease resulted in a return to baseline with no measurable sequelae. On the other hand, patients who were reported to have Lyme disease but who did not meet the CDC's case definition of Lyme disease had increased symptoms and worsening quality-of-life indicators. The implication is that many of these individuals really did not have Lyme disease and therefore did not respond to the treatment.
Limitations of Laboratory Tests
The diagnosis of Lyme disease should be based primarily on an evaluation of a patient's symptoms and the probability of exposure to the Lyme spirochete. Laboratory evaluation is appropriate for patients who have arthritic, neurologic, or cardiac symptoms associated with Lyme disease, but it is not warranted in patients who have nonspecific symptoms, such as those of chronic fatigue syndrome or fibromyalgia.
Matthew J. Rusk, MD, and Stephen J. Gluckman, MD, of the University of Pennsylvania summarized the diagnostic situation, noting that “A true-positive test result consists of a positive enzyme-linked immunosorbent assay [ELISA] or immunofluorescent assay [IFA] followed by a positive Western blot. However, positive results do not prove that [the] patient has Lyme disease and have little predictive value in the absence of characteristic symptoms.”
The FDA agreed and outlined a two-step algorithm for laboratory testing. In a 1997 FDA Public Health Advisory, it advised physicians that:
“The results of commonly marketed assays for detecting antibody to Borrelia burgdorferi (anti Bb) . . . may be easily misinterpreted . . . . Although package inserts for some commercial assays describe their intended use ‘to aid in the diagnosis of Lyme disease,’ this statement does not fully reflect current knowledge . . . and many such assays yield potentially misleading results. . . . Assays for anti-Bb frequently yield false-positive results because of cross-reactive antibodies associated with autoimmune diseases or from infection with other spirochetes, rickettsia, ehrlichia, or other bacteria such as Helicobacter pylori.”
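To make the two-step logic, and Rusk and Gluckman's point about predictive value, concrete, here is a minimal sketch in Python. Everything in it is an assumption for illustration: the function names are invented, and the sensitivity, specificity, and pretest-probability figures are round numbers, not the measured characteristics of any marketed assay.

# A minimal sketch of two-tier Lyme serology interpretation, and of why a
# positive result means little when pretest probability is low. All numbers
# below are illustrative assumptions, not properties of any real assay.

def two_tier_result(elisa_positive, western_blot_positive):
    # Two-step rule: the Western blot counts only after a positive
    # first-tier ELISA/IFA; a negative first tier ends the work-up.
    if not elisa_positive:
        return "negative"
    return "positive" if western_blot_positive else "negative"

def positive_predictive_value(sensitivity, specificity, prevalence):
    # Bayes' rule: P(disease | positive result).
    true_pos = sensitivity * prevalence
    false_pos = (1.0 - specificity) * (1.0 - prevalence)
    return true_pos / (true_pos + false_pos)

print(two_tier_result(True, False))  # "negative": the blot did not confirm

sens, spec = 0.85, 0.99  # assumed performance of the combined two-tier protocol
for label, pretest in [("characteristic symptoms", 0.50),
                       ("nonspecific symptoms", 0.01)]:
    print(label, round(positive_predictive_value(sens, spec, pretest), 2))

Under these assumed numbers, a positive two-tier result implies about a 99% chance of infection when characteristic symptoms put the pretest probability at 50%, but only about a 46% chance when the pretest probability is 1%. That base-rate arithmetic is why a positive result, by itself, "has little predictive value in the absence of characteristic symptoms."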
Several years ago, a diagnostic laboratory marketed a one-step Lyme Urine Antigen Test (LUAT), data for which were presented at Lyme advocacy meetings and published in the journal of a Lyme advocacy group. The LUAT, however, was found to return a high rate of false-positive test results and has been discredited.
In February 1999, the FDA approved the PreVue B. burgdorferi Antibody Detection Assay, an "in-office" test that provided results within an hour. The test results are similar in accuracy to those of the ELISA test, but must be confirmed with a Western blot test done by a laboratory.
In May 2001, the FDA approved another ELISA test called C6. It was the first diagnostic tool to use a synthetic hybrid molecule derived from the surface of the Lyme spirochete. A positive C6 test appears to correlate well with acute cases of Lyme disease. In addition, it detects all B. burgdorferi genotypes, but does not appear to cross-react with a related tick-borne pathogen, B. lonestari, which is associated with a Lyme-like infection called Southern Tick-Associated Rash Illness (STARI).
In 2005, concerns about inappropriate laboratory testing prompted the CDC and FDA to issue a warning about “commercial laboratories that conduct testing for Lyme disease by using assays whose accuracy and clinical usefulness have not been adequately established. These tests include urine antigen tests, immunofluorescent staining for cell wall-deficient forms of Borrelia burgdorferi, and lymphocyte transformation tests. In addition, some laboratories perform polymerase chain reaction tests for B. burgdorferi DNA on inappropriate specimens such as blood and urine or interpret Western blots using criteria that have not been validated and published in peer-reviewed scientific literature.” The CDC noted, “patients are encouraged to ask their physicians whether their testing…was performed using validated methods and whether results were interpreted using appropriate guidelines.”
Some private practice physicians incorrectly diagnose Lyme disease in patients and subsequently treat them with inappropriate and ineffective regimens. Some of these treatments are described below.
Malariotherapy and ICHT
Malaria is a parasitic disease that typically involves bouts of fever that may reach 40°-41°C (104°-106°F). Before the antibiotic era, patients in the late stages of syphilis were sometimes given malaria in the hope that a high fever would kill the spirochetes responsible for syphilis. The practice was never subjected to controlled studies and was abandoned decades ago when antibiotics became widely available. Over the years, some Lyme patients have allowed themselves to be injected with blood containing a malaria parasite, Plasmodium vivax. Persons seeking such treatments usually had to travel to Mexico. However, in one case, a Texas resident acquired P. vivax-contaminated blood from an unknown source, injected himself, then treated himself with the antimalarial drug chloroquine. There is no evidence that malaria cures Lyme disease (or any other disease for that matter). Moreover, patients who receive malaria-containing blood face significant risks of serious illness or death caused by malaria itself, transfusion reactions, or an infection by other pathogens that might be in the blood. Malariotherapy is far more dangerous than a common case of Lyme disease.
Another form of fever therapy administered to patients who are alleged to have chronic Lyme disease is intracellular hyperthermia therapy (ICHT), in which a chemical such as 2,4-dinitrophenol (DNP) is given to the patient. According to one Web site devoted to Lyme disease and ICHT:
The net result of ICHT uncoupler therapy causes the mitochondria to be converted from efficient "powerhouses" of energy production to "chemical furnaces", heating cells from the "inside-out." The Lyme spirochetes are subjected to such an amount of heat over a prescribed time that they cannot survive. In essence, ICHT may be considered a form of therapeutic "pasteurization."
Hyperbaric Oxygen Therapy (HBOT)
High-pressure (hyperbaric) oxygen is legitimately used to treat deep sea divers suffering from decompression sickness ("the bends") and smoke inhalation, and to help treat several other conditions. There are 300 hyperbaric facilities in the United States. Some of these facilities have been used to treat AIDS, chronic fatigue syndrome, and Lyme disease. The Lyme patients subjecting themselves to long hours in these small chambers apparently hope that high-pressure oxygen will enhance oxygen-dependent immune mechanisms and kill spirochetes lurking beyond the reach of antibiotics.
Is HBOT effective against Lyme disease? As far as I know, it has not been subjected to clinical testing for that purpose. One Lyme patient writing on the Internet said, "After 30 hours of therapy [at $4,000] in 90-minute doses I had no positive results for chronic Lyme treatment." Another online patient wrote, "The director of the clinic is refusing to refund . . . me. This money was for some of the dives I was scheduled to take, but was unable to because I was sick." At best, HBOT is an experimental treatment for some infectious diseases. It is one of several treatments not recommended by the Infectious Diseases Society of America (IDSA).
Many colloidal silver and silver salt preparations have been touted as cures for AIDS, chronic fatigue, herpes, TB, syphilis, lupus, malaria, plague, acne, impetigo, and many other diseases. Lyme disease is just the latest target. A 1996 Federal Register notice stated the "FDA is not aware of any substantial scientific evidence that supports the use of . . . colloidal silver ingredients or silver salts for these disease conditions." The same notice stated that "human consumption of silver may result in argyria—a permanent ashen-gray or blue discoloration of the skin, conjunctiva, and internal organs." Despite these warnings, some websites devoted to Lyme disease or colloidal silver products display misleading reports about laboratory experiments in which colloidal silver killed spirochetes. One such report is a letter from Dr. Burgdorfer, the discoverer of the Lyme spirochete. The letter merely reports on a pilot study using colloidal silver to kill spirochetes in a test tube and states that additional laboratory and human studies are underway. Many silver and Lyme advocates have used the letter to suggest that colloidal silver has been proven effective against Lyme disease. However, no study has shown that colloidal silver is safe or effective for treating people with Lyme disease (or anything else).
A quack electromagnetic frequency device from the 1930s also has been resurrected for use in treating Lyme disease. These rife machines are marketed through the Internet. As one website claims, “rife machine therapy is an affordable, life-saving treatment option. Lyme Literate Medical Doctors (LLMDs) often refer their patients to rife machines if antibiotics fail.” These devices are based on the false notion that disease-causing agents and diseased tissues emit characteristic radio-like frequencies, and that disease can be detected and cured by matching those frequencies.
Other quack treatments for Lyme disease include injections of hydrogen peroxide and bismacine. In 2006, the FDA issued a warning about the use of "bismacine" for treating Lyme disease. The situation came to the FDA's attention after a patient died as a result of bismacine treatment. The errant doctor, John R. Toth, M.D., of Topeka, Kansas, surrendered his medical license in 2005 and is serving a 40-month prison sentence for manslaughter related to the death. FDA inspectors eventually uncovered a network of shady practitioners who were making bogus diagnoses of Lyme disease and using illegal "dietary supplements" to treat their victims. The situation came to light in December 2008 when Toth; Robert W. Bradford; C.R.B., Inc. (d/b/a American Biologics); and C.R.B.'s chief operating officer Brigitte G. Bird were charged with a total of 25 counts of conspiring to violate federal food and drug laws and defraud individuals seeking medical care. The indictment states that Bradford, C.R.B., and Bird marketed bogus Lyme disease products and a microscope falsely claimed to diagnose the disease and that Toth had used the system in his office. In 2009, Carole Bradford was added as a co-defendant. In a separate case, Carl E. Haese, owner/operator of The Haese Clinic of Integrative Medicine in Las Cruces, New Mexico, has been charged with fraud in connection with using Bradford's system.
Overuse of Intravenous Antibiotics
Many Lyme disease activists insist Lyme disease is a difficult-to-treat, chronic infection that requires long-term consumption of powerful antibiotics. (See common beliefs about Lyme disease at the American College of Physicians.) Although decades of medical practice and recent clinical trials suggest otherwise, many Lyme patients still undergo expensive, long-term intravenous antibiotic treatments.
Outpatient intravenous therapy is a multibillion-dollar-a-year business. It remains largely unregulated and can cost patients thousands of dollars per week. Price gouging, drug markups, kickbacks, and self-referral of patients by physicians with financial ties to infusion companies have occurred. In 1995, for example, Caremark, Inc., pled guilty to mail fraud charges for entering into illegal contracts with physicians by paying them to refer Medicaid patients to use Caremark's infusion products. In Michigan, prosecutors charged a physician and Caremark employees with scheming to overbill Blue Cross/Blue Shield for drugs and equipment for patients with Lyme disease.
More recently, Forbes magazine reported on the dangerous and expensive practices of so-called “Lyme Literate Doctors” (LLMDs) who rely on powerful, long-term antibiotics to treat patients for presumptive Lyme disease.
The long-term intravenous antibiotic therapy administered to Lyme patients sometimes has disastrous results. During the early 1990s, the CDC described 25 cases of antibiotic-associated biliary complications among persons with suspected disseminated Lyme disease. All patients had received intravenous ceftriaxone for an average of 28 days for suspected Lyme disease. (Ceftriaxone can form precipitates in the presence of bile salts. The resulting "sludge" can block the bile duct.) Twelve patients subsequently developed gallstones. Fourteen underwent cholecystectomy to correct bile blockage. Twenty-two developed catheter-associated bloodstream infections. Yet most of the patients lacked documented evidence of disseminated Lyme disease or even antibodies to B. burgdorferi. In 2000, physicians reported the case of a 30-year-old woman who died from an infected intravenous set-up that had been left in place for more than two years. She was being treated for a case of "chronic Lyme disease" that could not be substantiated.
The risks and costs associated with such treatments were analyzed in a 1993 report whose authors concluded that for most patients with a positive Lyme antibody titer and only symptoms of fatigue or nonspecific muscle pains, the risks and costs of intravenous antibiotic therapy exceed the benefits. Yet fourteen years later, these conclusions continue to be ignored by patients and physicians alike.
In an Internet newsgroup posting, a woman described being on the intravenous antibiotic Rocephin for four weeks, developing gallstones, and switching to another antibiotic regimen for three more weeks. She also described a sudden high fever, anemia, low white cell count, systemic pain, heart rhythm disturbance, and neurologic symptoms. Such descriptions are common among devout Lyme patients and provide an unsettling view into the desperate and dangerous measures some people will take to treat suspected Lyme disease. The woman ended her account by writing that she had switched her medication to ciprofloxacin. This is a powerful antibiotic with side effects that may include acute psychosis and other neuropsychiatric reactions. Other online “antibiotic addicts” have confessed to using veterinary and aquarium antibiotics when they could not get physician prescriptions.
Another patient writing on the Internet said he was treated at a Mexican clinic where the doctor admitted that he and his staff knew little about Lyme disease. The patient wrote, "I started on IV Rocephin (two grams a day), and later added oral azithromycin. My symptoms did improve, but I soon hit a treatment plateau. We then tried IV doxycycline, but this made me sick to my stomach." He went on to describe a long list of other drugs (IV Claforan, Cefobid/Unisyn, Primaxin, a second round of Cefobid/Unisyn, and IV Zithromax), followed by bouts of "severe diarrhea" and phlebitis. Three months and some $25,000 later, DMSO was added to another infusion of Zithromax.
Yet, the drug-seeking behaviors of self-described chronic Lyme patients and the prescribing practices of many “Lyme Literate doctors” remain at odds with published research. Investigators carried out two treatment trials of patients claiming to suffer from chronic Lyme disease. They reported that “treatment with intravenous and oral antibiotics for 90 days did not improve symptoms more than placebo.” Additional studies in Europe and the U.S. similarly found that oral doxycycline is as effective as intravenous ceftriaxone in treating late-stage central nervous system infections [29,30], and that additional antibiotics are not beneficial in improving cognitive function in patients with post-treatment chronic Lyme disease.
In October 2006, the Infectious Diseases Society of America published guidelines for effective intravenous (and oral) antibiotic regimens to treat various manifestations of Lyme disease. The European Concerted Action on Lyme Borreliosis (EUCALB) also has published recommendations for treating Lyme disease with various oral and intravenous antibiotics.
Published treatment guidelines provide important navigation aids for both physicians and patients. If you don’t like the guidelines, however, there is nothing to stop you from making up your own. That’s what one group of doctors did recently. They posted a set of Lyme disease diagnosis and treatment guidelines on a website, and then proceeded to follow the guidelines they had drafted. They referred to their guidelines as “evidence-based,” but there is no evidence that the rationale for the guidelines has ever been validated in clinical trials or published in the professional literature, and there is no evidence that the guidelines have been endorsed by recognized medical societies such as the American Academy of Pediatrics or the American College of Physicians.
Indeed, the composers of these guidelines (copies of which they offered for $15) are a handful of private practice physicians and Lyme patient advocates. Some of these doctors have been disciplined by their state medical licensing boards. Many are not trained in infectious diseases and most have no research experience with Lyme disease. Still, that did not prevent them from having their guidelines listed in the National Guideline Clearinghouse or using that web listing to suggest their guidelines are clinically appropriate and professionally endorsed. (The Clearinghouse listing is a directory, much like a phonebook, which neither endorses nor evaluates those listed. Unfortunately, this fact is not readily evident to patients looking for online information.)
Many patients who believe they have a chronic or persistent Lyme infection are willing to endure considerable discomfort in their effort to get rid of their symptoms. This behavior is fostered, in part, by the misguided belief that antibiotic therapies are not working unless they make the patient feel worse. These patients typically refer to this condition as "herxing," a colloquial term for the Jarisch-Herxheimer (J-H) reaction. This reaction is an acute response to the release of toxic or biologically active molecules from certain types of bacteria in the presence of some antibiotics.
About 10% of patients treated for early Lyme disease experience a J-H reaction involving chills, fever, muscle pains, rapid heartbeat, and slight lowering of blood pressure during the first 24 hours of antibiotic therapy. These symptoms usually last for several hours, and require little more than aspirin and bed rest. Yet many Lyme newsgroup participants write about a "herx" beginning days or weeks after the start of antibiotic therapy, and "herxing" for weeks at a time, often in a cyclic fashion. "Herxing" events have even been likened to an "exorcism" that is "a necessary evil to be endured." Some of these patients are likely to be suffering from the side effects of their inappropriately prescribed antibiotics. It is also safe to assume that the mistaken belief that Lyme treatment involves temporary worsening will lead some people to neglect other illnesses. Neurological symptoms, blurred vision, gastrointestinal upset, vomiting, and palpitations, for example, should be reported to a physician, not posted on the Internet with a request for comments.
Fear, ignorance and Internet rumors have also created an environment for expanding the mythology of Lyme's protean properties far beyond scientific fact or medical observation. For example, some Internet postings and websites suggest that Lyme can be acquired through sexual contact.
"I think that Lyme is also a STD [sexual-transmitted disease]," said one newsgroup poster. Another wrote, "I've talked to many couples who claim they transmitted to each other through sexual contact. I believe I gave it to my wife."
At least a few LLMDs appear to be telling patients that Lyme is sexually transmitted and that their family members should therefore be tested. One person reported to Quackwatch that a family member had been tested and told that the test was positive and that a 4-5 month course of antibiotics was necessary.
There is no basis for such advice or beliefs. Lyme infections are acquired from the bite of an infected tick. People are "dead end" hosts and do not spread Lyme infections to others.
The topic of pregnancy and Lyme is also rife with rumor and unnecessary fear. A recent review of case reports and other research found no specific patterns of fetal malformation or adverse events in pregnancy. In addition, the authors noted that “larger epidemiological and serological series have consistently failed to demonstrate an increased risk to pregnant women who develop Lyme disease if they receive appropriate antimicrobial therapy.” Attempts to demonstrate venereal, transplacental and contact transmission of Lyme spirochetes in hamsters also have failed.
The risk of acquiring Lyme disease from a blood transfusion is also very low. This was demonstrated in a study of patients in Connecticut whose antibodies were measured six weeks after they received multiple transfusions during cardiothoracic surgery. Of 155 subjects, 149 received a total of units of packed red blood cells and 48 received a total of 371 units of platelets. No patient developed antibodies to B. burgdorferi or clinical evidence of Lyme disease.
In contrast, a case of perinatal transmission of human granulocytic ehrlichiosis (HGE) was reported in the New England Journal of Medicine. Like B. burgdorferi, the agent of HGE is transmitted by Ixodes ticks, and simultaneous infections with both pathogens have been reported.
The fact that Lyme disease is readily curable has not discouraged the formation of over a hundred support groups and nonprofit foundations, some with ties to intravenous services, Lyme diagnostic labs, and physicians specializing in private Lyme disease practices. These groups and their ardent followers have used the Internet and other media to barrage politicians and the general public with misinformation, dire personal stories, rumors, and exaggerated claims about thousands of people being maimed, killed and bankrupted each year by Lyme disease. The core message is that Lyme is a deadly chronic disease that requires long-term antibiotic therapy paid for by insurance companies.
Despite the alleged frequency of chronic Lyme disease, clinical trials funded by the National Institutes of Health (NIH) were hampered by a lack of patients who met evidence-based medical criteria for Lyme disease. A third trial at Columbia University had to modify its patient entry criteria in order to find enough patients to carry out the study. The reality is that Lyme remains a common bacterial infection that is antibiotic-responsive, nonfatal, non-communicable, and geographically and seasonally limited in range.
Still, support groups and individual patients have created numerous websites that contain unsubstantiated claims, inaccurate medical information, and personal testimonies for the dubious treatments described above. Indeed, the Internet has provided a powerful mechanism for organizing patients and presenting poorly documented information to the public and the press [36,37]. And as the owner of one Lyme diagnostic lab recently said, “Patients, because of the Internet, have become my best salesmen.”
Internet newsgroups also have posted violent polemics against physicians and researchers who disagree with their claims and concerns. Research reports that run counter to the claims of Lyme activists are denounced and their authors accused of incompetence and financial conflicts of interest. Magazines and news organizations whose stories on Lyme disease are not sufficiently hysterical are barraged with e-mail complaints and urged to contact certain organizations for "the truth." Protests have been organized to denounce Yale University because, according to the protesters, Yale "ridicules people with Lyme disease, presents misleading information, minimizes the severity of the illness, endorses inadequate, outdated treatment protocols, excludes opposing viewpoints, and ignores conflicts of interest."
Researchers have been harassed, threatened, and stalked. A petition circulated on the Web called for changes in the way the disease is routinely treated and the way insurance companies cover those treatments. Less radical groups have had their meetings invaded and disrupted by militant Lyme protesters. In October 2006, the New Jersey-based Lyme Disease Association (LDA) led a series of protests at NY Medical College to denounce the updated Lyme disease treatment guidelines published by the IDSA. The LDA organized another online petition against the guidelines, and a related LLMD organization demanded the treatment guidelines be retracted. Evidently, they were worried the guidelines would be accepted by insurance companies and therefore cut into their private practice profits. In 2012, the LDA's online referral directory included 65 medical doctors and 12 osteopaths.
In November 2006, Lyme activists persuaded the Connecticut Attorney General, Richard Blumenthal, to file a Civil Investigative Demand (CID) to look into possible anti-trust violations by the IDSA during the drafting of the treatment guidelines. A few weeks later, the activists persuaded Congressional Representative Chris Smith (R-NJ) to write a letter to the CDC Director questioning the CDC’s support of the IDSA guidelines and suggesting CDC needed to show support for alternative guidelines developed by activists and the private practice, for-profit physicians who treat them.
The CDC declined to do so. Moreover, few lawyers and physicians believe Blumenthal’s quixotic use of anti-trust law to intimidate a nonprofit professional society into changing or withdrawing voluntary clinical guidelines is going to affect the treatment of Lyme disease. Yet, these events demonstrate the power of Internet-connected activists to mobilize political power in order to question evidence-based medicine and peer-reviewed scientific research.
Not content to suppress evidence-based medicine through litigation and legislation, some Lyme organizations also have tried to raise funds for their own research on hyperbaric oxygen treatments, pregnancy-related Lyme, and a clinical trial of chronic Lyme patients. Others have organized "scientific" meetings and journals to present anecdotal reports and opinions from physicians friendly to their cause.
Several years ago, a Lyme Disease Buyers Club marketed vitamin and nutrient supplements (e.g., flax seed oil, evening primrose oil, coenzyme Q10, garlic, B-complex) to Lyme patients. The club indicated, "10 percent of each sale will go to Lyme disease research and advocacy projects." However, the initial proceeds went to a Lyme disease advocacy group (Lyme Alliance) in Michigan. The group filed an amicus brief supporting a court appeal by Joseph Natole, Jr., M.D., whose state medical board had sanctioned him for inappropriately managing patients with actual or suspected Lyme disease. According to a report on the Alliance's website: the court ruled against the doctor; his license was suspended for three months; he was fined $50,000; and he was subsequently indicted and pleaded guilty to federal charges of overbilling insurance companies.
The Alliance later circulated a petition stating that, "Lyme disease can and does exist as a chronic illness with persisting infection, and that the disease is greatly underdiagnosed and undertreated." The petition demanded that, "Physicians who are on the front lines of Lyme disease patient care not be harassed, persecuted or made to fear for their medical practices because they do not adhere to the conservative "short term" care for Lyme disease."
That defiant battle cry still echoes today. Activists continue to file petitions and organize protests in support of any physician willing to provide them with a positive Lyme disease diagnosis and long-term access to antibiotics. Activists often attend state medical board hearings in support of their physicians and flood state legislators with demands for legislative protection of their doctors.
Currently, the Connecticut Department of Public Health is investigating a doctor for violating "the applicable standard of care," alleging that he diagnosed Lyme disease in children without examining them, that he failed to consider other causes for their symptoms, and that he improperly prescribed antibiotics. Since 2005, several other so-called “Lyme Literate” doctors have been accused of wrongdoing:
- A Kansas doctor was accused of murder after he gave one of his patients bismacine injections to treat her Lyme disease. Bismacine contains high amounts of bismuth, a metallic chemical that can be poisonous and is not approved by the FDA.
- In South Carolina, the widow of a man who died of prostate cancer filed suit against a Lyme doctor who gave her husband intravenous hydrogen peroxide and falsely diagnosed him as having Lyme disease. The doctor also prescribed testosterone, which caused his cancer to rapidly advance, resulting in his death about six weeks later.
- In North Carolina, the Medical Board suspended a physician's license for one year after finding he departed from prevailing methods of treating Lyme disease. The 12-member board also concluded he did not adequately inform patients that his treatment method, which includes months or years of intravenous antibiotics, is unorthodox. Five patients, including the widower of a woman who died of morphine poisoning while under his care, testified for the prosecution.
- In New Jersey, a doctor and former member of the Governor's Lyme Disease Advisory Council took money from Lou Gehrig's disease patients for a stem-cell treatment that she could not—and did not—perform, according to a federal indictment. The doctor and her assistant were charged with 11 counts of conspiracy, mail fraud, wire fraud and money laundering.
- Another New Jersey doctor was indicted on charges of conspiracy to defraud the United States, income tax evasion, and willful failure to account for and pay IRS employee taxes at two Lyme disease treatment centers. After a jury convicted him of income tax evasion, he was sentenced to 41 months in prison and ordered to pay a fine of $7,500 plus restitution of $246,791.
Despite the deaths and the prosecutions, support for LLMDs remains strong among activist groups, even as some of these doctors attempt to expand the range of diseases that can be blamed on B. burgdorferi and, therefore, treated with long-term antibiotics. Some of these diseases include complex or degenerative illnesses such as autism, multiple sclerosis and amyotrophic lateral sclerosis.
A Lyme Vaccine
After a decade of research, and pressure from patient advocates and Congress, the FDA licensed the first vaccine for Lyme borreliosis on December 21, 1998. The vaccine, called Lymerix, was derived from a recombinant version of the bacterium’s OspA lipoprotein. Lymerix was intended for "at-risk" individuals between the ages of 15 and 70 years. Given in three separate injections, the vaccine appeared to be effective in preventing infections.
Yet, after years of pre-license clinical trials and three years of commercial sales, the manufacturer, GlaxoSmithKline, pulled the vaccine off the market on February 26, 2002. The company cited poor sales and a projected low demand as the basis for their decision to end production and distribution of Lymerix.
The demise of Lymerix has not ended research on new Lyme vaccine candidates and vaccines against tick vectors. It may be difficult, however, to field-test new vaccines due to anti-vaccination organizations and the lingering hostility of Lyme activists to a vaccine.
Ironically, many of the original advocates for a vaccine turned against Lymerix as soon as it hit the market. Citing its less than perfect efficacy and anecdotal evidence of vaccine-induced arthritis and other injuries, they crowded FDA hearings with tales of personal injury, flooded the Internet with anti-vaccine tautologies, and joined lawsuits seeking compensation from Glaxo and Pasteur Mérieux Connaught, the maker of a second, but never licensed vaccine.
Despite the lawsuits and the website tales of personal anguish, repeated studies failed to find any evidence of specific adverse events associated with Lymerix. A CDC study published in the February 2002 issue of Vaccine also failed to detect any "unexpected or unusual patterns" of adverse reactions to vaccination. (Reports of adverse reactions to Lymerix, and other vaccines, can be searched for on the Vaccine Adverse Event Reporting System (VAERS) website.)
It is interesting to note the results of a 2002 survey of parental attitudes toward Lymerix. The survey authors found that respondents in Nassau County, New York indicated they would "definitely" or "likely" request Lymerix for their children (23% and 65%, respectively). The positive response to Lymerix may be because most survey respondents got their information about Lyme disease and the vaccine from a friend or an advertisement (49% and 44%, respectively). The Internet was not identified as a source of information. Yet, the survey found that most respondents were "surprisingly misinformed" about Lyme infections. For example, they considered Lyme infections to be a chronic, difficult-to-treat disease.
"Chronic Lyme disease" remains the favored term of support groups and patient advocates, but has no basis in medical fact or practice . The endless public repetition of this misleading mantra may have influenced parental opinions in favor of vaccination as a means of preventing a chronic infection that does not exist. The option to vaccinate ended on February 26, 2002.
Interestingly, a recent study of patients presenting with recurrent signs of Lyme disease suggests that they are being repeatedly infected and not “relapsing” from a persistent infection. The study authors noted, “People experiencing recurrent episodes [of Lyme] tended to have frequent contact with vector ticks. Prompt administration of standard antibiotic therapy…reliably eliminates persistent infection and prevents relapse.” This is just what the vaccine was designed to do.
In 2004, two infectious disease specialists at the University of Connecticut reviewed the quality of online information about Lyme disease. Most of the websites they surveyed contained inaccurate or incomplete information. One of the authors, Henry Feder, told Reuters Health, “The problem is that some of these sites may have had an agenda other than education. They make the unusual seem common.” As the authors noted in their paper, “The challenge for medical providers is to convince worried patients . . . that some of the Internet-recommended testing and treatment . . . is inappropriate. This convincing can take multiple visits, debate, compromise and time.”
The steady flood of Lyme disease misinformation prompted Kent Sepkowitz, the director of infection control at Sloan-Kettering, to vent similar feelings in the New York Times (May 10, 2005). He wrote: “The vast, lumpy terrain of Lyme disease is a confusing place for doctor and patient alike. According to some, Lyme is able to cause any imaginable symptom, yet laboratory diagnosis remains famously elusive. This combination of plasticity and stealth makes it a convenient explanation for any ailment that otherwise makes no sense.”
Despite the Internet noise, fundraising letters from activists, and intimidating lawsuits, it is not difficult to find accurate information about Lyme disease. Most state health departments provide free brochures or direct online information. Good sources include:
- U.S. Centers for Disease Control and Prevention (CDC)
- American Lyme Disease Foundation (ALDF)
- Infectious Diseases Society of America’s Lyme Disease Treatment Guidelines
- University of Rhode Island's Tick Encounter Resource Center
- European Union Concerted Action on Lyme Borreliosis (EUCALB)
- Aetna Coverage Policy Bulletin 0215: Lyme Disease
The Bottom Line
- Lyme disease, when diagnosed early, is readily treatable with oral antibiotics.
- Positive antibody tests, by themselves, do not provide a sufficient basis for diagnosing Lyme disease. The diagnosis should be based on the overall clinical picture, including medical history and physical findings.
- Negative antibody testing after the first few weeks strongly suggests that the patient does not have Lyme disease.
- Many patients with chronic, nonspecific symptoms (such as headaches, fatigue, muscle aches, mental confusion, or sleep disturbances) mistakenly believe they have Lyme disease.
- Intravenous antibiotic therapy, when given appropriately, should not last more than a month. It should not be given unless oral antibiotic therapy has failed and persistent active infection has been demonstrated by culture, biopsy, or other bacteriologic technique.
- Malariotherapy, intracellular hyperthermia therapy, hyperbaric oxygen therapy, colloidal silver, dietary supplements, and herbs are not appropriate measures for treating Lyme disease. Doctors who recommend them should be avoided.
- Reported Lyme disease cases by state, 2002-2011. CDC Web site, accessed Oct 7, 2012.
- Barbour AG. Lyme Disease: The Cause, the Cure, the Controversy. Baltimore: Johns Hopkins University Press, 1996.
- Aronowitz R. Making Sense of Illness: Studies in Twentieth Century Medical Thought. New York: Cambridge University Press, 1998.
- Krause PJ and others, Tick-Borne Study Group. Reinfection and relapse in early Lyme disease. American Journal of Tropical Medicine and Hygiene 75:1090-1094, 2006.
- Seltzer EG and others. Long-term outcomes of persons with Lyme disease. JAMA 283:609-615, 2000.
- Gardner P. Long-term outcomes of persons with Lyme disease (editorial). JAMA 283:658-659, 2000.
- Sigal LH and Hassett AL. Contributions of societal and geographical environments to "chronic Lyme disease": The psychopathogenesis and aporology of a new "Medically Unexplained Symptoms" Syndrome. Environmental Health Perspectives 110:607-611, 2002.
- Rusk MJ, Gluckman SJ. Serologic testing for Lyme disease. When—and when not—to order, and how to interpret results. Consultant 38:966-972, 1998.
- FDA Public Health Advisory: Assays for antibodies to Borrelia burgdorferi; limitations, use, and interpretation for supporting a clinical diagnosis of Lyme disease. July 7, 1997.
- Klempner MS and others. Intralaboratory reliability of serologic and urine testing for Lyme disease. American Journal of Medicine 110:217-219, 2001.
- Tjernberg I and others. C6 peptide ELISA test in the serodiagnosis of Lyme borreliosis in Sweden. European Journal of Clinical Microbiology and Infectious Disease 26:37-42, 2007.
- Philipp MT and others. Serologic evaluation of patients from Missouri with erythema migrans-like skin lesions with the C6 Lyme test. Clinical and Vaccine Immunology 13:1170-1171, 2006.
- Caution regarding testing for Lyme disease. MMWR 54:125, 2005.
- Imported malaria associated with malariotherapy of Lyme disease—New Jersey. MMWR 39:873-875, 1990.
- Update: Self-induced malaria associated with malariotherapy for Lyme Disease—Texas. MMWR 40:665-666, 1991.
- Wormser GP and others. The clinical assessment, treatment, and prevention of Lyme disease, human granulocytic anaplasmosis, and babesiosis: clinical practice guidelines by the Infectious Diseases Society of America. Clinical Infectious Diseases 43:1089-1094, 2006.
- Federal Register 61:53685-53688, 1996. To access the full text, search the Federal Register for "colloidal silver."
- Consent order. In the matter of John R. Toth, M.D., before the Kansas State Board of Healing Arts. Docket No. 05-HA-79, Dec 12, 2005.
- Barrett S. Lyme disease quack arrested. Casewatch, June 30, 2009.
- Klempner MS. Two controlled trials of antibiotic treatment in patients with persistent symptoms and a history of Lyme disease. New England Journal of Medicine 345:85-92, 2001.
- Attorney General Montgomery stops Medicaid fraud and returns $2.3 million to state. Press release, June 19, 1995, Attorney General of Ohio.
- Kelly F. Lyme disease alleged to be false diagnosis. Ann Arbor News, June 19, 1998.
- Whelan D. Lyme Inc.: Ticks aren’t the only parasites living off patients in borreliosis-prone areas. Forbes, March 12, 2007.
- Ettestad PJ and others. Biliary complications in the treatment of uncomplicated Lyme disease. Journal of Infectious Disease 171:356-361, 1995.
- Patel R and others. Death from inappropriate therapy for Lyme disease. Clinical Infectious Disease 31:1107-1109, 2000.
- Lightfoot RW Jr and others. Empiric parenteral antibiotic treatment of patients with fibromyalgia and fatigue and a positive serologic result for Lyme disease. A cost-effectiveness analysis. Annals of Internal Medicine 119:503-509, 1993.
- Mulhall JP, Bergmann LS. Ciprofloxacin-induced acute psychosis. Urology 46:102-103, 1995.
- McSweegan E. Why people go fishing for drugs. Washington Post, August 27, 2002, p. H02.
- Borg R and others. Intravenous ceftriaxone compared with oral doxycycline for the treatment of Lyme neuroborreliosis. Scandinavian Journal of Infectious Disease 37:449-454, 2005.
- Ogrinc K and others. Doxycycline versus ceftriaxone for the treatment of patients with chronic Lyme borreliosis. Wiener klinische Wochenschrift 118:696-701, 2006.
- Kaplan RF and others. Cognitive function in post-treatment Lyme disease: do additional antibiotics help? Neurology 60:1916-1922, 2003.
- Walsh CA and others. Lyme disease in pregnancy: case report and review of the literature. Obstetrical and Gynecological Survey 62:41-50, 2007.
- Woodrum JE, Oliver JH Jr. Investigation of venereal, transplacental, and contact transmission of the Lyme disease spirochete, Borrelia burgdorferi, in Syrian hamsters. Journal of Parasitology 85:426-430, 1999.
- Gerber MA and others. The risk of acquiring Lyme disease or babesiosis from a blood transfusion. Journal of Infectious Disease 170:231-234, 1994.
- Horowitz HW and others. Perinatal transmission of the agent of human granulocytic ehrlichiosis. New England Journal of Medicine 339:375-378, 1998.
- Cooper JD, Feder HM Jr. Inaccurate information about Lyme disease on the internet. Pediatric Infectious Disease Journal 23:1105-1108, 2004.
- Sood SK. Effective retrieval of Lyme disease information on the Web. Clinical Infectious Disease 35:451-464, 2002.
- Grann D. Stalking Dr. Steere over Lyme disease. New York Times Magazine, June 17, 2001.
- Warner S. State official subpoenas infectious disease group. The Scientist, February 7, 2007.
- Santaniello G. A schism over treatment philosophies puts a Connecticut pediatrician's license on the line. Northeast Magazine. Sept 17, 2006.
- Robinson MB. Senators urge haste on Lyme vaccines. Bergen Record, Dec 7, 1997.
- Associated Press. Lyme vaccine pulled off market. Feb 26, 2002.
- Abbott A. Lyme disease: Uphill struggle. Nature 439:524-525, 2006.
- McSweegan E. The Lyme vaccine: A cautionary tale. Epidemiology and Infection 135:9-10, 2007.
- Nigrovic LE, Thompson KM. The Lyme vaccine: a cautionary tale. Epidemiology and Infection 8:1-8, 2006.
- Lathrop SL and others. Adverse event reports following vaccination for Lyme disease, December 1998-July 2000. Vaccine 20:1603-1608, 2002.
- Barone SR and others. Parental knowledge of and attitudes toward LYMErix (Recombinant OspA Lyme vaccine). Clinical Pediatrics 41:33-36, 2002.
Dr. McSweegan is a microbiologist who lives and works in Maryland, but has spent many summers in Old Lyme, Connecticut. Between 1993 and 1995, he managed a federal Lyme disease research program. The original version of this article was reviewed by Judith N. Barrett, M.D., Luther Rhodes III, M.D., Marvin Rosenthal, M.D., and the late John H. Renner, M.D.
This article was revised on January 20, 2013. | 2026-01-29T01:42:30.217993 |
401,455 | 3.726161 | http://faculty.marianopolis.edu/c.belanger/quebechistory/encyclopedia/Conthistcan.htm | L’Encyclopédie de l’histoire du Québec / The Quebec History Encyclopedia
Constitutional History of Canada
[This text was written in 1948. For the full citation, see the end of the text]
There are four well-defined periods in the constitutional development of Canada. The first of these was the period of arbitrary government, covering the whole of the French régime and the first third of a century of British rule in Canada proper. The second was the period of representative but irresponsible government, dating from 1759 in the case of Nova Scotia and from 1784 in the case of New Brunswick, but beginning with 1791 in the case of Upper and Lower Canada. The third was the period of responsible government, which began between 1840 and 1850, but which, in the case of old Canada, was based on the negation of the principle of "representation by population." The last was the period of Confederation since 1867, in which the adoption of a federal system of government has been combined with both representative and responsible government, in ever increasing measure, both in the provinces and in the Dominion.
1. Period of Arbitrary Government.
From the day when Jacques Cartier planted on the shores of the Gaspé peninsula the cross and the fleur-de-lys, and took possession of the country in the name of the king of France, until the year 1663, when the king of France set up a system of royal government in Canada, the colony was governed by a series of commercial companies chartered by the king. These companies, which usually obtained a monopoly of trading rights in Canada on condition that they brought out a stipulated number of colonists, were granted almost unlimited powers of government in the colony. The head of the company, who was generally appointed the king's lieutenant-general, had the right of making laws and ordinances, of granting lands, of appointing officials, and of exercising justice, with the power of life and death. Champlain, who represented several successive companies as governor at Quebec, had virtually despotic powers, though the colony was so small that his powers were scarcely exercised. Under the Company of New France, which received its charter in 1627, these powers were more clearly evident. This company, which was organized by Cardinal Richelieu, with the support of the court and the great merchants of Paris and Rouen (as distinct from the merchants of the sea-port towns, who had been the backers of previous companies), was empowered to control the whole political and economic life of the colony; and its resident governor was, by his commission, given authority "de juger souverainement et en dernier ressort" (to judge with sovereign authority and without appeal). These arbitrary powers provoked resentment, and in 1647 a council was appointed as a curb on the governor. In 1648 the syndics of Quebec, Three Rivers, and Montreal, who owed their appointment to popular election, were added to the council, thus introducing into it a representative element. But this system did not last long, because of the financial collapse of the Company of New France; and in 1663 the king took the government of the colony back into his own hands.
The system of royal government set up in New France in 1663 was modeled on that of the provinces of old France. It was a system of checks and balances. The official representative of the king was the governor; but while the governor of New France exercised, owing to local conditions, more power than the governor of a French province, whose duties were mainly ceremonial, he was only one of several officers who took their instructions direct from Paris. Beside him was the intendant, who was really the king's business agent in the colony, and whose powers in some respects exceeded those of the governor. There was also the bishop, whose duty it was to supervise the religious and moral life of the colony. Finally, there was appointed in 1663 a Superior or Sovereign Council, corresponding to the parlements found in some of the provinces of France; and this was intended as a check on the powers of the governor, the intendant, and the bishop, just as these were intended to be a check on one another. In 1672, it is true, Frontenac attempted to introduce into the government of New France a popular element. He called together at Quebec an assembly of the nobles, the clergy, and the third estate - a sort of replica of the States General of France. But for this he received a severe snub from the king's minister. "It is well," wrote the latter, "that each one should speak for himself, and no one for all." Nothing could illustrate more clearly than this incident the principles on which the government of New France was based. Paternalism was its keynote. The king's minister was not only unwilling to see any popular element introduced into the government of Canada, but he corresponded direct, not only with the governor, the intendant, and the bishop, but with minor officials in the colony. No detail of government was too small for him to regulate, though he was two thousand miles away. To say that the government of New France was arbitrary is only half the truth; it was so arbitrary that the king's minister in Paris constantly interfered with the officers of the king in New France.
The conquest of Canada by British arms in 1760 might have been expected to put an end to arbitrary government in Canada. But it did not. From 1760 to 1763, when the Peace of Paris was signed, Canada was of course under military rule; but the Proclamation of 1763, which inaugurated civil rule in Canada, set up a system of government under a governor and council which was just as arbitrary, if not quite so paternal, as the system of government under the French regime. Authority was granted, it is true, for the calling of an Assembly; but this Assembly was not called. Under the Quebec Act of 1774 this system of government was continued. The Council was enlarged, and was given more extensive powers; but despite the agitation of the English merchants in the province for an Assembly, there was no introduction of a representative element in the government of old Canada until the passing of the so-called Constitutional Act.
2. Period of Representative but Irresponsible Government. Representative institutions, in the form of an elected Assembly, were introduced into Nova Scotia in 1759, and into New Brunswick on its creation in 1784; but it was not until 1791 that the Constitutional Act, which divided the old province of Quebec into Upper and Lower Canada, gave these provinces representative institutions. All these provinces were granted constitutions of the old colonial type, which had existed in the American colonies prior to the American Revolution. The executive government was placed in the hands of a governor or lieutenant-governor appointed by the Crown, acting in conjunction with an Executive Council also appointed by the Crown. The legislature was composed of a Legislative Council, or upper house, appointed by the Crown, and a Legislative Assembly elected by the people. The people were thus given a voice in legislation; but unless the Legislative Assembly was in harmony with the Legislative and Executive Councils, its voice was largely nugatory. In legislation, it was unable to achieve anything without the concurrence of the Legislative and Executive Councils; and it had no influence over the executive government. The revenues of the Crown - arising from customs duties, the sale of crown lands, and other sources - were in the hands of the executive government; and the moneys voted by the Legislative Assembly were mainly devoted to the building of roads and bridges. Under these circumstances, there grew up in each province a governing class or clique - known in Upper Canada as the "Family Compact", in Lower Canada as the "Château clique", and in Nova Scotia as the "Council of Twelve" - which normally controlled the executive government, and which, by its control of nominations to the Legislative Council, was able to impose a veto on legislation as well.
"It is difficult to understand," wrote Lord Durham in his famous Report on the affairs of British North Americain 1839, "how any English statesman could have imagined that representative and irresponsible government could be successfully combined . . . To suppose that such a system could work well here implies a belief that the French Canadians have enjoyed representative institutions for half a century without acquiring any of the characteristics of a free people; that Englishmen renounce every political. opinion and feeling when they enter a colony, or that the spirit of Anglo-Saxons is utterly changed and weakened among those who are transplanted across the Atlantic ." As a matter of fact, it was not long before the system of government established by the Constitutional Act roused opposition both in Upper and in Lower Canada . The beginnings of a liberal movement were discernible in both these provinces before the outbreak of the War of 1812, though in Lower Canada the constitutional issue was complicated by the racial issue. In the Canadas this liberal movement led to armed rebellion in 1837. What William Lyon Mackenzie and Louis Joseph Papineau and their associates really rebelled against was not the British Crown, but the absurd and illogical system of government embodied in the constitution which William Pitt gave the Canadas in 1791. In the Maritime provinces , the tradition of loyalty among the loyalists of the American revolution who had settled these provinces prevented a similar outbreak; but here too, under Joseph Howe and others, opposition to the governing class under the old colonial constitution developed at a fairly early date. It was, however, the rebellion in the Canadas that brought things to a head. Public opinion in Great Britain was profoundly shocked that in 1837, the year of the accession of the young Queen Victoria, there should have been armed rebellion in the Canadas; and the British government appointed in 1838 Lord Durham as governor-in-chief of British North America and lord high commissioner to inquire into the affairs of the British North American provinces. Lord Durham, in his Report , recommended the union of Upper and Lower Canada, with a view to the submergence of the French Canadians in a province in which the English Canadians should be, in a majority; but he recommended also the adoption in British North America of the principle of responsible government - that is, the principle that the executive government should be responsible to the legislature. This principle had first been advocated by a Canadian statesman, Robert Baldwin, who later had the satisfaction of seeing that the principle was put into effect; but it was Lord Durham who introduced the idea into the arena of practical politics.
3. Period of Responsible Government, without "Representation by Population". In recommending that responsible government should be brought into effect in the united province, Lord Durham expressly warned against any union of Upper and Lower Canada which should not be based on the principle of representation by population. "I am averse," he said, "to every plan that has been proposed for giving an equal number of members to the two provinces, in order to attain the temporary end of outnumbering the French, because I think the same object will be obtained without any violation of the principles of representation." When Poulett Thomson, afterwards Lord Sydenham, was sent out to carry into effect Lord Durham's recommendations, this warning was, however, ignored. The Act of Union of 1841 gave to each part of the united province an equal representation in the united legislature; and thus introduced into the government of the colony a dualism or quasi-federalism that ultimately brought about the breakdown of all government. Under these circumstances, the inauguration of responsible government in united Canada did not proceed under the best auspices. Sydenham, it is true, set up in Canada the machinery of responsible government. He transformed the old Executive Council, the members of which seldom sat in the Legislative Assembly, and sometimes not even in the Legislative Council, into a counterpart of the British cabinet, the members of which were not only (as a rule) heads of departments, but were also members of parliament. But, because he was unwilling to admit the rebellious majority in French Canada to a share in government, he was unwilling to admit the principle of responsible government. With him the Council was "a council to be consulted and no more." He presided over the meetings of council, and in fact dominated it, so that he became virtually his own prime minister. His dexterity enabled him to preserve the unstable equilibrium of this position during his short period of office; but his system of government broke down under his successor Sir Charles Bagot. Bagot's ill-health compelled him to absent himself frequently from the council board, so that the office of prime minister began to emerge; and in 1843 he was compelled to accept a ministry reflecting the majority in the Legislative Assembly, including a number of the rebels of 1837. The principle of responsible government had a brief set-back under Bagot's successor, Sir Charles (afterwards Lord) Metcalfe, who, like Sydenham, regarded the Council as a body "to be consulted, and no more," and who, after a disagreement with his council, appealed to the country in 1845, and won a temporary triumph at the polls. But in 1848 there came out to Canada as governor-general Lord Elgin, the son-in-law of Lord Durham, who was resolved to put the principle of responsible government into full operation; and the triumph of this principle was achieved by Elgin's admittance to office of the Baldwin-Lafontaine administration in 1848 and his assent to the Rebellion Losses Bill in 1849 [See the British reaction to the Bill]. In the other provinces of British North America, responsible government was introduced shortly after this.
The sphere in which responsible government operated was, however, at first circumscribed. Durham had recommended that it should be operative "except on points involving strictly imperial interests"; and these imperial interests were deemed to include such important matters as crown lands, trade relations, defence, and foreign policy. It was not long before the crown lands were handed over to the Canadian parliament for administration; the control of the tariff, and hence of trade relations, was successfully asserted by the Canadian government in 1859; and most of the British troops in Canada were withdrawn in 1862. But the control of foreign policy was postponed for over half a century; and there were still in 1874 so many shackles on the will of the Canadian people that Edward Blake was able to describe them as "four millions of Britons who are not free." The constitutional history of Canada since 1849 has, indeed, been the story of the way in which these shackles have been gradually struck off.
4. Confederation. The provision in the Act of Union whereby the two parts of the united province were to have equal representation in the Legislative Assembly was the rock on which the union split. It brought about in the government of united Canada a dualism or quasi-federalism which ultimately proved unworkable. This dualism was reflected not only in the double-barrelled names of the administrations during this period (such as the Baldwin-Lafontaine administration, the Hincks-Morin, and the Macdonald-Cartier), but also in what was known as "the double-majority system" - a sort of convention by which a majority from the part of the province particularly affected was necessary for the passage of legislation. It was reflected even in the realm of public finance, so that if a sum of money was voted for Canada East, an equal sum of money had to be voted for Canada West.
When the Act of Union was passed, the population of Lower Canada was greater than that of Upper Canada; and there was therefore no complaint from Upper Canada with regard to equal representation. But within a decade the situation was reversed, and Upper Canada had a larger population than Lower Canada. The consequence was that there soon sprang up in Upper Canada a demand for "representation by population". This demand naturally roused opposition in Lower Canada; and Colonel (afterwards Sir) Etienne Taché declared that the surplus population of Upper Canada had no more right to representation than "so many codfish in Gaspé bay". The two parts of the province were thus set at variance; and it was not long before government came to an impasse. No government was able to command a majority in both parts of the province, and parties were so evenly divided that the fate of the government often hung by a thread. Between 1861 and 1864 there were four ministries formed, and two general elections held, yet without any decisive result.
This deadlock was, as Goldwin Smith said, "the true parent of Confederation." The idea of the federation of British North America was, it is true, not new. It was a favourite idea with the United Empire Loyalists; and it was later espoused by persons so different as John Strachan and John Beverley Robinson on the one hand and Robert Gourlay and William Lyon Mackenzie on the other. It was advocated by Lord Durham, though he did not think it was feasible in his time; and it became the theme of some of the most stirring oratory of Thomas D'Arcy McGee. "I see in the not remote distance," said McGee, "one great nationality, bound, like the shield of Achilles, by the blue rim of ocean." But it was not only the breakdown of government in United Canada that brought the idea of the union of British North America within the sphere of practical politics. It became necessary to submerge or subordinate the rival animosities of Upper and Lower Canada in a larger arena; and in 1864 the leading politicians in both parties in Canada were brought together in a coalition government with a view to bringing about this result.
It so happened that at this time the idea of the union of the Maritime provinces was in the air; and in September, 1864, a meeting of delegates from the Maritime provinces was called at Charlottetown, Prince Edward Island, in order to discuss this project. Delegates from Canada were sent to this conference to invite the delegates from the Maritime provinces to meet at Quebec in October to discuss the larger union. The invitation was accepted; and on October 10, 1864, there met at Quebec the famous Quebec Conference, which, after deliberations lasting two weeks, framed seventy-two resolutions embodying the basis of Confederation. Owing to political difficulties in the Maritime provinces, Confederation was not immediately consummated. It was not until 1866 that representatives of Canada, Nova Scotia, and New Brunswick met in London to discuss with representatives of the Colonial Office the terms of union; but the result of their deliberations was the British North America Act, passed by the British parliament in 1867, and by this Act Upper Canada (henceforth Ontario), Lower Canada (henceforth Quebec), New Brunswick, and Nova Scotia were united in a federal union to be known as the Dominion of Canada.
The outstanding feature of the new Dominion was that it combined the advantages of central government with those of local autonomy. A set of governmental machinery was created, with its headquarters in Ottawa; but at the same time the individual provinces retained their identity and their control of local affairs. To the Dominion was given oversight of such general matters as customs and excise, trade and commerce, militia and defence, railways and canals, and criminal justice; whereas the provinces retained control of education, property and civil rights, municipal affairs, and other matters of local concern. This arrangement enabled the province of Quebec, for example, to preserve its peculiar institutions, such as its language, its civil laws, and its schools; while it gave it at the same time the backing of the other provinces in matters of general concern, such as military and naval defence, the building of railways, postal facilities, and so forth. There has been at times difficulty in drawing the line between the spheres of the Dominion and the provinces, and a good deal of litigation has resulted; but, on the other hand, the application of the federal principle to Canadian government has gone a long way toward solving the problem of "the two races" in Canada. Federalism has removed most of the sources of friction between the French and the English in Canada, and while no one can pretend that friction has disappeared, it has been reduced to a minimum, and has never since 1867 been serious.
The federation of 1867 included only Ontario, Quebec, New Brunswick, and Nova Scotia. But, with astonishing rapidity, the infant Dominion proceeded to extend itself from the Atlantic to the Pacific. In 1869 the Dominion acquired the vast extent of the Hudson's Bay Company's territories, out of which have been carved since that time the provinces of Manitoba, Saskatchewan, and Alberta; in 1871 British Columbia came into federation, and in 1873 Prince Edward Island. Finally, in 1895, Canada took over from the mother country the islands of the Arctic archipelago. These accessions of territory gave the Dominion an area greater than that of the United States.
Since 1867 the internal changes in the constitution of the Dominion of Canada have been slight [Reminder: this text was written in 1948; there have been significant changes made to the Canadian constitution, notably through the process of patriation and the inclusion of a Charter of Rights in 1982]. There has been some diminution in the powers of the governor-general, introduced in 1878 at the instance of Edward Blake; and there have been various amendments to the British North America Act, such as that changing the number of members in the Senate of Canada [This page provides the text of the constitutional documents]. There is, however, a growing feeling that the British North America Act, passed in a generation preceding the last, is now out-of-date and due for extensive revision. Many problems, such as unemployment relief, the control of marketing, and the regulation of radio and aviation, were not contemplated by those who framed the British North America Act; and it is desirable that the constitution should be amended so as to indicate clearly where the responsibility for dealing with such matters really lies.
Bibliography. For the constitutional history of the French period, see Sir William Ashley, Nine lectures on the earlier constitutional history of Canada (Toronto, 1889); E. Salone, La colonisation de la Nouvelle France (Paris, 1906); and R. D. Cahall, The Sovereign Council of New France (New York, 1915). Reference may also be made to F. Parkman, The old regime in Canada (Boston, 1880). The important documents are printed in Collection des documents relatifs à la Nouvelle France (4 vols., Quebec, 1883-5); Édits, ordonnances royaux (2 vols., Quebec, 1854-6); Jugements et délibérations du Conseil Souverain de la Nouvelle France (4 vols., Quebec, 1885-8); Jugements et délibérations du Conseil Supérieur de Quebec (2 vols., Quebec, 1889-91); Ordonnances des intendants (4 vols., Quebec Archives, 1919); and Insinuations du Conseil Souverain (Quebec Archives, 1921). For the constitutional history of the British period, see Sir John G. Bourinot, A manual of the constitutional history of Canada (Toronto, 2nd ed., 1901), and W. P. M. Kennedy, The constitution of Canada: An introduction to its development and law (Oxford, 1922). The chief documents are to be found in W. Houston, Documents illustrative of the Canadian constitution (Toronto, 1891); H. E. Egerton and W. I. Grant, Canadian constitutional development (London, 1907); W. P. M. Kennedy, Statutes, treaties, and documents of the Canadian constitution (Oxford, 1930); A. Shortt and A. G. Doughty, Documents relating to the constitutional history of Canada, 1759-1791 (2 vols., Ottawa, 1918); A. G. Doughty and D. A. McArthur, Documents relating to the constitutional history of Canada, 1791-1818 (Ottawa, 1914); and A. G. Doughty and Norah Story, Documents relating to the constitutional history of Canada, 1819-1828 (Ottawa, 1935).
[The reader should consult the following texts from the Quebec History site: a biography of Durham; large extracts from his Report; a discussion of the nature of Responsible government; the section on Canadian federalism and the Canadian Constitution contains many pertinent texts as well.]
Source: W. Stewart WALLACE, "History, Constitutional", in The Encyclopedia of Canada, Vol. 3, Toronto, University Associates of Canada, 1948, 396p., pp. 147-153.
© 2004 Claude Bélanger, Marianopolis College
Array.IndexOf Method (Array, Object)
December 03, 2013
Searches for the specified object and returns the index of the first occurrence within the entire one-dimensional Array.
Assembly: mscorlib (in mscorlib.dll)
Parameters

value
- Type: System.Object
- The object to locate in array.

Return Value

Type: System.Int32
The index of the first occurrence of value within the entire array, if found; otherwise, the lower bound of the array minus 1.
The one-dimensional Array is searched forward starting at the first element and ending at the last element.
The elements are compared to the specified value using the Object.Equals method. If the element type is a nonintrinsic (user-defined) type, the Equals implementation of that type is used.
Since most arrays will have a lower bound of zero, this method would generally return –1 when value is not found. In the rare case that the lower bound of the array is equal to Int32.MinValue and value is not found, this method returns Int32.MaxValue, which is System.Int32.MinValue - 1 after the subtraction wraps around.
This method is an O(n) operation, where n is the Length of array.
In the .NET Framework version 2.0, this method uses the Equals and CompareTo methods of the Array to determine whether the Object specified by the value parameter exists. In earlier versions of the .NET Framework, this determination was made by using the Equals and CompareTo methods of the value Object itself.
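To make the conventions above concrete, here is a minimal C# sketch. The sample array and search values are invented for illustration; they do not come from the original reference page.

```csharp
using System;

class IndexOfDemo
{
    static void Main()
    {
        // Hypothetical sample data; any one-dimensional array works.
        string[] animals = { "cat", "dog", "emu", "dog" };

        // Forward search: the index of the FIRST occurrence is returned.
        Console.WriteLine(Array.IndexOf(animals, "dog"));  // 1

        // Not found: the lower bound (0) minus 1, i.e. -1.
        Console.WriteLine(Array.IndexOf(animals, "yak"));  // -1
    }
}
```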
Theme: Celebrate the Sea: Clean Oceans
Seaweek '98 coordinator: Barbara Jensen
Sample Materials from the Seaweek '98 kits
Seaweek Primary Activity Booklet
This activity is one from the Seaweek '98 Primary Activity booklet. There is background information and activities in outline form. Each activity explores the issues of 'Clean Oceans' or helps celebrate the sea. The booklet is printed by Scholastic Australia.
The Primary and Secondary booklets, along with a new community section and other fact sheets, posters and information flyers make up the Seaweek Educational Kit.
Have you ever found a 'cuttlefish bone' on the beach? These lightweight skeletons are part of a cuttlefish. Cuttlefish are relatives of octopus and squid. They are 'intelligent' hunters. They can change colour to match their surroundings and slowly approach prey before catching it with their arms.
Cuttlefish make tasty meals for fish, penguins, dolphins, sea lions and seals. Many are eaten. Most predators do not eat the skeleton and release it to float on the surface of the sea. When a cuttlefish dies, the body rots away or is eaten, and the skeleton floats to the surface and is often washed up.
Some people collect them for birds to peck at. What most people do not notice is that about one in every five or ten cuttlefish bones has been imprinted with the teeth or beak of the animal that ate it.
Next time you visit an ocean beach look carefully at all the cuttlefish bones you can see and try to figure out what creature caught them.
Seaweek Secondary Activity Booklet
This activity is one from the Seaweek Secondary Activity Booklet which aims to make the connection between catchments and oceans and to provide activities that will develop appropriate strategies for individuals to act towards minimising their impact.
There is background information with activities and resource lists.
Preventing and Managing Oil Spills
No matter how much legislation and policy is in place, oil spills will continue to occur. Therefore systems need to be in place to limit the intensity and damage of any spill.
Australia has a National Plan to Combat Pollution of the Sea. This is a national integrated government and industry organisational framework for responding to oil pollution incidents in the marine environment.
The objectives of the National Plan are based on Australia's responsibility to protect natural and artificial environments from the adverse effects of oil pollution, and to minimise those effects where protection is not, or has not been, possible.
The Australian Maritime Safety Authority (AMSA) administers the National Plan, supported by the Australian Marine Oil Spill Centre, State/NT Governments, and the shipping, petroleum and exploration industries.
The Australian Marine Oil Spill Centre (AMOSC) in Geelong, Victoria has the capability to respond to all industry spills and to assist government in responding to spills around Australia.
AMOSC was established in 1990 to provide the equipment and training in order to respond to major oil spills off Australia's coast. This is a subsidiary company of the Australian Institute of Petroleum.
The National Plan is implemented through effective co-ordination of relevant organisations, efficient and strategically located oil spill combat equipment and training. The Plan incorporates State and Local contingency plans developed for particular sites; for example, TORRESPLAN is developed specifically for Torres Strait.
The Plan utilises a variety of means to bring about an effective response to oil spills. Some of these means include:
- an On Scene Spill Model computer simulation that predicts the movement of oil spills using the prevailing weather conditions;
- the use of registers of specific equipment and their locations;
- a Coastal Resource Atlas, developed with each State and the Northern Territory, that identifies sensitive areas with high priority for protection.
The National Plan is funded through the principle that the 'potential polluter pays' and as a result, a small levy is imposed on commercial shipping operators using Australian ports.
No two spills are the same and often a combination of management approaches will be used. AMSA and AMOSC implement the following methods.
- Leave the spill alone but monitor if the oil is at sea and poses no threat to coastal areas. The natural processes of dispersion and biodegradation through the action of wind, sun, waves and currents eventually destroy the spill.
- Use dispersants to break up the oil. Dispersants act by reducing the surface tension that stops oil and water from mixing. The oil then forms into small particles that are dispersed through the water column increasing the rate of biodegradation.
- Contain and recover the oil through the use of booms and skimmers. A physical barrier or boom is used to contain or to divert the flow. An absorbent material can then be added, such as peat, cotton, wool or pine bark, which may absorb up to 20 times its own weight. However, these procedures are sometimes limited by the prevailing weather conditions.
- Introducing biological agents to hasten biodegradation is called bioremediation. These agents include bacteria and other micro-organisms that occur naturally in oceans. Fertilisers can also be applied to the spill to stimulate bacterial growth and hasten biodegradation.
Source: International Petroleum Industry Environmental Conservation Association. "A Guide to Contingency Planning for Oil Spills on Water."
- Recent oil spills
- Map their location.
- Determine if there are any 'hotspots'.
- What were the conditions that allowed the spill to occur, for example the weather and the condition of the vessel?
- How were these spills managed?
- Who was involved and how?
- What was the impact of the spills?
- What suggestions can you make to minimise the impact of the spill?
- Map the protected area in and around the Great Barrier Reef Marine Park (GBRMP).
- Research major shipping routes.
- Is there any correlation between these two areas?
- Have there been any oil spills in this area?
- What impact did they cause?
- What other activities threaten the waters around the GBRMP?
- What suggestions can you make to minimise the impact of the spill?
- What plans are in place to prevent spills?
- Bioremediation requires the use of oil-degrading microbes to neutralise the effects of oil.
- How does the process work?
- What recent spills used bioremediation?
- What microbes are used?
- What are the constraints and advantages of using microbes?
- Develop an experiment to illustrate how dispersants work.
- Design and make other methods for dealing with oil spills.
- The laws enacted under the international convention MARPOL 73/78 may be wide-reaching, but can you think of any additional annexes that may need to be included now or in the future?
Additional and up-to-date information concerning legislation and managing oil spills can be obtained by contacting the AMSA through:
National Plan; Australia's National Plan to Combat Pollution of the Sea by Oil. AMSA
California scientists say water stored naturally underground in the Central Valley is disappearing at a rapid rate. The likely cause is irrigation.
UC Irvine and NASA scientists monitor tiny month-to-month differences in Earth's gravity field for the Gravity Recovery and Climate Experiment. That gravity field changes, in part, because of where water is and where it moves over the planet.
Figuring out where water weighs more and less over the Earth can tell scientists about climate change's general effects on the water cycle. In California’s Central Valley, they've documented a regionally specific phenomenon.
The scientists are examining aquifers – spaces between rock and sediment underground where water percolates. In the last six years, Central Valley aquifers have lost enough water to fill the equivalent of Lake Mead.
In that part of the state, farming relies on diverted surface water and water pumped from below the surface for irrigation. As surface water has become scarcer, the Central Valley has drunk more deeply from these underground banks.
With groundwater only lightly regulated, and with climate change ongoing, the Irvine scientists point out that the Central Valley will be thirsty for a while.
Deoxyguanosine kinase deficiency is an inherited disorder that can cause liver disease and neurological problems. Researchers have described two forms of this disorder. The majority of affected individuals have the more severe form, which is called hepatocerebral because of the serious problems it causes in the liver and brain.
Newborns with the hepatocerebral form of deoxyguanosine kinase deficiency may have a buildup of lactic acid in the body (lactic acidosis) within the first few days after birth. They may also have weakness, behavior changes such as poor feeding and decreased activity, and vomiting. Affected newborns sometimes have low blood sugar (hypoglycemia) as a result of liver dysfunction. During the first few weeks of life they begin showing other signs of liver disease which may result in liver failure. They also develop progressive neurological problems including very weak muscle tone (severe hypotonia), abnormal eye movements (nystagmus) and the loss of skills they had previously acquired (developmental regression). Children with this form of the disorder usually do not survive past the age of 2 years.
Some individuals with deoxyguanosine kinase deficiency have a milder form of the disorder without severe neurological problems. Liver disease is the primary symptom of this form of the disorder, generally becoming evident during infancy or childhood. Occasionally it first appears after an illness such as a viral infection. Affected individuals may also develop kidney problems. Mild hypotonia is the only neurological effect associated with this form of the disorder.
The prevalence of deoxyguanosine kinase deficiency is unknown. Approximately 100 affected individuals have been identified.
The DGUOK gene provides instructions for making the enzyme deoxyguanosine kinase. This enzyme plays a critical role in mitochondria, which are structures within cells that convert the energy from food into a form that cells can use. Mitochondria each contain a small amount of DNA, known as mitochondrial DNA or mtDNA, which is essential for the normal function of these structures. Deoxyguanosine kinase is involved in producing and maintaining the building blocks of mitochondrial DNA.
Mutations in the DGUOK gene reduce or eliminate the activity of the deoxyguanosine kinase enzyme. Reduced enzyme activity leads to problems with the production and maintenance of mitochondrial DNA. A reduction in the amount of mitochondrial DNA (known as mitochondrial DNA depletion) impairs mitochondrial function in many of the body's cells and tissues. These problems lead to the neurological and liver dysfunction associated with deoxyguanosine kinase deficiency.
Deoxyguanosine kinase deficiency is inherited in an autosomal recessive pattern, which means both copies of the gene in each cell have mutations. In most cases, the parents of an individual with this condition each carry one copy of the mutated gene, but they typically do not show signs and symptoms of the condition.
Other Names for This Condition
- DGUOK-related mitochondrial DNA depletion syndrome
- Hepatocerebral mitochondrial DNA depletion syndrome
- Mitochondrial DNA depletion syndrome, hepatocerebral form
The plight of the bee
Bees are under threat like never before, with their numbers declining. There is strong evidence that neonicotinoids – a class of pesticide first used in agriculture in the mid 1990s, at exactly the time when mass bee disappearances started occurring – are involved in the deaths. Another major factor is intensive agriculture – monocultures and the widespread use of pesticides and herbicides contribute to a loss of habitat and food for bees. Organic farming, by contrast, encourages higher levels of wildlife – including bees – on organic farms.
Keep Britain buzzing: ways you can help
The Soil Association has been working to highlight these problems and protect bees for several years. Our Keep Britain Buzzing campaign aims to highlight the threats bees face and encourages us all to take action to protect bees. We want to see all neonicotinoids banned, and promote better farming to help ensure the health and future of our bees. You can get involved and take action today.
- Support our Keep Britain Buzzing campaign. Support our work and protect bees by donating to our campaign today and we'll send you a campaign badge and free packet of bee-friendly organic phacelia seeds so you can create a haven for bees in your own garden.
- Buy organic food. Organic farmers don’t use neonicotinoid pesticides. They also have more complex crop rotations, which means that there is a greater diversity of plants for bees to forage on. Supporting organic farmers at the checkout is an everyday action with a big impact.
- Don't use neonicotinoid pesticides. The EU has decided to suspend three types of neonicotinoid pesticides but there are still other types available. These pesticides appear in a range of common garden products. Avoid them and urge your local retailer to stop stocking them. Click here for a list of products and letter writing hints.
- Use organic techniques in your own garden. Garden pesticides also have the potential to do damage to bees, and good rotations give an extra diversity of flowering crops. Use a wide variety of plants in your garden, and don’t be too tidy. Leave wild flowering plants in place, and ivy is a particularly important source of late season winter food for bees. Find out more about organic growing techniques.
- Take up beekeeping. If you've got the space, then keeping your own colony of bees is a great way of boosting bee numbers. There are some excellent courses available in our Practical courses section. Find out more about beekeeping courses.
- Join the Soil Association. By becoming a member of the Soil Association charity you are helping to fund our campaigning and policy work on this, and a range of other important issues like food security and GM. Join the Soil Association today.
Find out more
Our site contains much more information on the threats facing bees, so to find out more browse the following pages.
In geometry, the major axis of an ellipse is the longest diameter: a line (line segment) that runs through the center and both foci, with ends at the widest points of the shape. The semi-major axis is one half of the major axis, and thus runs from the centre, through a focus, and to the edge of the ellipse; essentially, it is the radius of an orbit at the orbit's two most distant points. For the special case of a circle, the semi-major axis is the radius. One can think of the semi-major axis as an ellipse's long radius.
The semi-major axis of a hyperbola is, depending on the convention, plus or minus one half of the distance between the two branches. Thus it is the distance from the center to either vertex (turning point) of the hyperbola.
A parabola can be obtained as the limit of a sequence of ellipses where one focus is kept fixed as the other is allowed to move arbitrarily far away in one direction, keeping ℓ fixed. Thus a and b tend to infinity, a faster than b.
The semi-major axis is the mean value of the smallest and largest distances from one focus to the points on the ellipse. Now consider the equation in polar coordinates, with one focus at the origin and the other on the positive x-axis:

\[ r\,(1 - e\cos\theta) = \ell \]
The mean value of $r_{\min} = \ell/(1+e)$ (at $\theta = \pi$) and $r_{\max} = \ell/(1-e)$ (at $\theta = 0$) is

\[ a = \frac{\ell}{1 - e^{2}} \]
In an ellipse, the semimajor axis is the geometric mean of the distance from the center to either focus and the distance from the center to either directrix.
If the semi-major axis $a$ lies in the x-direction, the equation of the hyperbola is:

\[ \frac{x^{2}}{a^{2}} - \frac{y^{2}}{b^{2}} = 1 \]
In terms of the semi-latus rectum $\ell$ and the eccentricity $e$ we have

\[ a = \frac{\ell}{e^{2} - 1} \]
The transverse axis of a hyperbola coincides with the major axis.
Orbital period

The orbital period $T$ of a small body orbiting a central body in a circular or elliptical orbit is

\[ T = 2\pi\sqrt{\frac{a^{3}}{\mu}} \]

where:
- a is the length of the orbit's semi-major axis
- $\mu$ is the standard gravitational parameter of the central body
Note that for all ellipses with a given semi-major axis, the orbital period is the same, regardless of eccentricity.
The specific angular momentum H of a small body orbiting a central body in a circular or elliptical orbit is:

\[ H = \sqrt{a\,\mu\,(1 - e^{2})} \]

where:
- $a$ and $\mu$ are as defined above
- e is the eccentricity of the orbit
In astronomy, the semi-major axis is one of the most important orbital elements of an orbit, along with its orbital period. For Solar System objects, the semi-major axis is related to the period of the orbit by Kepler's third law (originally empirically derived):

\[ T^{2} = \frac{4\pi^{2}}{G(M+m)}\,a^{3} \]
where G is the gravitational constant, M is the mass of the central body, and m is the mass of the orbiting body. Typically, the central body's mass is so much greater than the orbiting body's, that m may be ignored. Making that assumption and using typical astronomy units results in the simpler form Kepler discovered.
The orbiting body's path around the barycentre and its path relative to its primary are both ellipses. The semi-major axis used in astronomy is always the primary-to-secondary distance; thus, the orbital parameters of the planets are given in heliocentric terms. The difference between the primocentric and "absolute" orbits may best be illustrated by looking at the Earth–Moon system. The mass ratio in this case is 81.30059. The Earth–Moon characteristic distance, the semi-major axis of the geocentric lunar orbit, is 384,400 km. The barycentric lunar orbit, on the other hand, has a semi-major axis of 379,700 km, the Earth's counter-orbit taking up the difference, 4,700 km. The Moon's average barycentric orbital speed is 1.010 km/s, whilst the Earth's is 0.012 km/s. The total of these speeds gives a geocentric lunar average orbital speed of 1.022 km/s; the same value may be obtained by considering just the geocentric semi-major axis value.
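The relation between semi-major axis and period can be checked numerically. The following minimal C# sketch plugs the geocentric lunar semi-major axis quoted above into the period formula; the value of μ = G(M + m) for the Earth–Moon system is an assumed textbook figure, not one given in this article.

```csharp
using System;

class KeplerCheck
{
    static void Main()
    {
        // Geocentric lunar semi-major axis from the text, in metres.
        double a = 3.844e8;

        // Assumed standard gravitational parameter mu = G(M + m)
        // for the Earth-Moon system, in m^3/s^2.
        double mu = 4.035e14;

        // Kepler's third law rearranged: T = 2*pi*sqrt(a^3 / mu).
        double T = 2.0 * Math.PI * Math.Sqrt(a * a * a / mu);

        Console.WriteLine($"{T / 86400.0:F2} days");  // ~27.3 days
    }
}
```

The result lands near 27.3 days, the sidereal month, which is consistent with the average orbital-speed figures quoted above.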
Average distance
It is often said that the semi-major axis is the "average" distance between the primary focus of the ellipse and the orbiting body. This is not quite accurate, as it depends on what the average is taken over.
- averaging the distance over the eccentric anomaly (q.v.) indeed results in the semi-major axis.
- averaging over the true anomaly (the true orbital angle, measured at the focus) results, oddly enough, in the semi-minor axis $b$.
- averaging over the mean anomaly (the fraction of the orbital period that has elapsed since pericentre, expressed as an angle), finally, gives the time-average $a\left(1 + \frac{e^{2}}{2}\right)$.
The time-averaged value of the reciprocal of the radius, $r^{-1}$, is $a^{-1}$.
Energy; calculation of semi-major axis from state vectors
In astrodynamics, the semi-major axis $a$ can be calculated from orbital state vectors:

\[ a = -\frac{\mu}{2\varepsilon} \]

for an elliptical orbit and, depending on the convention, the same or

\[ a = +\frac{\mu}{2\varepsilon} \]

for a hyperbolic trajectory, and

\[ \varepsilon = \frac{v^{2}}{2} - \frac{\mu}{|\mathbf{r}|} \]

(specific orbital energy) and

\[ \mu = G(M + m) \]

(standard gravitational parameter), where:
- v is the orbital speed, from the velocity vector of the orbiting object,
- $\mathbf{r}$ is the Cartesian position vector of the orbiting object in coordinates of a reference frame with respect to which the elements of the orbit are to be calculated (e.g. geocentric equatorial for an orbit around Earth, or heliocentric ecliptic for an orbit around the Sun),
- G is the gravitational constant,
- M and m are the masses of the bodies.
- $\varepsilon$ is the specific orbital energy of the orbiting body.
Note that for a given amount of total mass, the specific energy and the semi-major axis are always the same, regardless of eccentricity or the ratio of the masses. Conversely, for a given total mass and semi-major axis, the total specific orbital energy is always the same.
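As an illustration of the state-vector formulas, this C# sketch recovers the semi-major axis of a hypothetical low Earth orbit. The position magnitude, speed, and gravitational parameter are all assumed sample values, not data from the article.

```csharp
using System;

class SemiMajorFromState
{
    static void Main()
    {
        // Assumed sample state: geocentric distance and speed.
        double r = 7.0e6;        // |r| in metres (~7000 km)
        double v = 7.5e3;        // speed in m/s
        double mu = 3.986e14;    // Earth's mu in m^3/s^2 (assumed)

        // Specific orbital energy: eps = v^2/2 - mu/|r|.
        double eps = v * v / 2.0 - mu / r;

        // Elliptical orbit (eps < 0): a = -mu / (2 * eps).
        double a = -mu / (2.0 * eps);

        Console.WriteLine($"a = {a / 1000.0:F0} km");  // ~6916 km
    }
}
```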
- Semi-major and semi-minor axes of an ellipse (with interactive animation)
Vitamin D deficiency—which is traditionally associated with bone and muscle weakness—may also increase the risk of cardiovascular disease (CVD). A growing body of evidence links low 25-hydroxyvitamin D levels to common CVD risk factors such as hypertension, obesity and diabetes, as well as major cardiovascular events including stroke and congestive heart failure.
In their review article, published in the December, 9, 2008, issue of the Journal of the American College of Cardiology (JACC), the authors issue practical recommendations to screen for and treat low vitamin D levels, especially in patients with risk factors for heart disease or diabetes.
"Vitamin D deficiency is an unrecognized, emerging cardiovascular risk factor, which should be screened for and treated," said James H. O'Keefe, M.D., cardiologist and director of Preventive Cardiology at the Mid America Heart Institute, Kansas City, MO. "Vitamin D is easy to assess, and supplementation is simple, safe and inexpensive."
It is estimated that up to half of U.S. adults and 30 percent of children and teenagers have vitamin D deficiency, which is commonly defined as a 25(OH)D level of less than 20 ng/ml.
Recent data from the Framingham Heart Study suggest patients with vitamin D levels below 15 ng/ml were twice as likely to experience a heart attack, stroke or other CV event within the next five years compared to those with higher levels. This risk remained even when researchers adjusted for traditional CV risk factors.
"Restoring vitamin D levels to normal is important in maintaining good musculoskeletal health, and it may also improve heart health and prognosis," said Dr. O'Keefe. "We need large randomized controlled trials to determine whether or not vitamin D supplementation can actually reduce future heart disease and deaths."
Vitamin D Basics
Vitamin D deficiency is more prevalent than once thought, and greater attention to its treatment is warranted, according to Dr. O'Keefe. Although most of the body's vitamin D requirements can come from sun exposure, indoor lifestyles and the use of sunscreen, which eliminates 99 percent of vitamin D synthesis by the skin, mean many people aren't producing enough.
"We are outside less than we used to be, and older adults and people who are overweight or obese are less efficient at making vitamin D in response to sunlight," said Dr. O'Keefe. "A little bit of sunshine is a good thing, but the use of sunscreen to guard against skin cancer is important if you plan to be outside for more than 15 to 30 of intense sunlight exposure."
Vitamin D can also be consumed through supplements and food intake. Natural food sources of vitamin D include salmon, sardines, cod liver oil, and vitamin D-fortified foods including milk and some cereals.
Major risk factors for vitamin D deficiency include: older age, darkly pigmented skin, increased distance from the equator, winter season, smoking, obesity, renal or liver disease and certain medications.
Treating Vitamin D Deficiency
In the absence of clinical guidelines, the authors outline specific recommendations for restoring and maintaining optimal vitamin D levels in CV patients. These patients should initially be treated with 50,000 IU of vitamin D2 or D3 once weekly for 8 to 12 weeks. Maintenance therapy should be continued using one of the following strategies:
1. 50,000 IU vitamin D2 or D3 every 2 weeks;
2. 1,000 to 2,000 IU vitamin D3 daily;
3. Sunlight exposure for 10 minutes for Caucasian patients (longer for people with increased skin pigmentation) between the hours of 10 a.m. and 3 p.m.
Vitamin D supplements appear to be safe. In rare cases, vitamin D toxicity (causing high calcium levels and kidney stones) is possible, but only when taking in excess of 20,000 units a day.
Source: American College of Cardiology
About the Harvester Puzzles:
Harvester Puzzles are designed to help students learn:
(a) to use the Earth Curriculum Data Harvester, and
(b) to use Earth data to answer questions.
The questions posed in the Harvester Puzzles are questions from everyday life: what clothing should we bring on our camping trip? do we need a tent for our wedding? and so forth. The pedagogical idea behind the Harvester Puzzles is to give students a chance to learn to manipulate and think about data, without asking them to struggle with new concepts in Earth Science at the same time.
Note that there is more than one way to solve many of the Harvester Puzzles. For example, the first puzzle could be solved by making a time series graph of 10cm and 100cm soil temperature over the course of a year. That's how we've done it in the worked-out answer. But the same puzzle could also be solved by making a scatter plot of 10cm soil temp versus 100cm soil temp for the entire data set, and checking to see whether the "cloud" of data points extends past the 0°C point on either axis. Provided that the students can defend their strategy, we think that you should encourage students to explore different approaches. The goal is to dig answers out of data, and there is more than one route to that goal.
On the illustrations below, annotations in green will not show up on the original Harvester screen. The green annotations mark features that the teacher may wish to point out to the students.
Harvester Puzzle #1: Fence posts
Imagine that you are going to put in a fence at the Black Rock Forest Open Lowland site. You want to put in the fence posts deep enough that they will go down below the frost line. Is 10cm deep enough? Is 100cm deep enough?
One route to an answer:
Use the times series facility of the Data Harvester to plot soil temperature at Open Lowland over the course of one year. The 10cm and 100cm data can be plotted on the same graph or on separate graphs, as you choose. Soil temperature varies slowly, so it is only recorded daily, rather than hourly, at BRF.
At 10cm subsurface, the soil temperature is at or below freezing (0°C) for much of January and February. At 100cm subsurface, the soil temperature never came close to freezing in the 1996-1997 winter. If you put the fence posts down to 100cm, they should be safe from freezing.
Harvester Puzzle #2: A Tent for the Wedding?
Imagine that your best friends are planning a June wedding at the Black Rock Forest, out behind the Forest Headquarters. They are trying to decide whether to spend the money to lease a big tent. They decide that if the chance of rain is greater than 25%, they will spend the money to rent a tent. Otherwise, they will skip the tent, take their chances with the rain, and spend the money on better quality champagne. The wedding is going to be in the afternoon. In order to line up the tent rental, they need to make this decision two months in advance, so they can't just listen to the TV weather report at the last minute. Based on the historical record of rainfall in Black Rock Forest in previous Junes, what is the chance that it will rain on the wedding?
One route to an answer:
Use the time series capability of the Data Harvester to plot hourly precipitation for the month of June for one year. The fact that the question is posed in terms of time ("June"; "afternoon") is your clue that a time series is probably the appropriate data display strategy. This graph will tell you how many days in June had rain that year, and which days they were.
This is useful, but we really only care about rain in the afternoon. Use the "zoom" capability of the Data Harvester to zoom in on one of the days that did have rain. On the zoomed display in the illustration below, you can see that there was rain on the afternoon of June 3 (between 12:00 and 18:00 hours), but there was no rain on the afternoon of June 4.
Now, use your browser's "Back" function to return to the time series graph showing the entire month of June. Zoom in on another rainy day. Keep track on a piece of paper of which days had rain in the afternoon.
When you have checked each rainy day, count up the number of rainy afternoons. Divide the number of rainy afternoons by the total number of days. Is it more than one quarter (25%) of the days? If so, better invest in that tent.
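If students want to double-check their arithmetic, the final step can also be written out as a short program. This is a sketch only; the tallies below are made-up numbers, and each class should substitute the counts read off its own graphs.

```csharp
using System;

class TentDecision
{
    static void Main()
    {
        // Assumed tallies for illustration -- read yours off the graphs.
        int rainyAfternoons = 9;
        int totalDays = 30;

        double chanceOfRain = (double)rainyAfternoons / totalDays;

        Console.WriteLine($"Chance of a rainy afternoon: {chanceOfRain * 100:F0}%");
        Console.WriteLine(chanceOfRain > 0.25 ? "Rent the tent."
                                              : "Buy the better champagne.");
    }
}
```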
Teaching Note: You may need to review or teach the use of the 24 hour clock before students will be able to complete the second half of this puzzle. Here are some examples:
0000 = midnight
0600 = 6:00 in the morning
1200 = noon
1500 = 3:00 in the afternoon
1800 = 6:00 in the evening
2100 = 9:00 in the evening
2300 = 11:00 at night
Harvester Puzzle #3: Keep the Rare Books out of the Damp
Imagine that you are planning to site a rare books library at the Black Rock Forest. Moist air harms books, so you wish to select the site with the lower humidity. Which BRF site typically has lower relative humidity?
One route to an answer:
The question asks you to compare two different data sets, and see which of the two kinds of data has a lower value most of the time (lower relative humidity). One very powerful way to make such a comparison is to make a scatter plot, with one data set on one axis and the other data set on the other axis.
Then draw the 1:1 line, a line connecting point (0,0) with point (100,100). Points on this line represent times when the relative humidity at Ridgetop was exactly equal to the relative humidity at Open Lowland. For times represented by points near the 1:1 line, it doesn't much matter where you build the library; the relative humidity is nearly the same at both locations.
Points above the 1:1 line represent times when the relative humidity at Ridgetop was higher than the relative humidity at Open Lowland. Points below the 1:1 line represent times when the relative humidity at Open Lowland was higher than the relative humidity at Ridgetop.
There are a lot of data points that are well below the 1:1 line. At those times, the Open Lowland site would be much worse for a rare books library than the Ridgetop site. There are only relatively few data points that are well above the 1:1 line. In conclusion, the Ridgetop site seems like a better choice for a rare book library, at least as far as protecting the books from humidity is concerned.
Harvester Puzzle #4: The Camping Trip
Imagine that you are planning a week long camping trip to the Black Rock Forest, in October. What kinds of clothes and equipment should you bring? Use data from the forest to defend your answer.
One route to an answer:
The aspects of the weather that have the most impact on the comfort of a camper are air temperature and precipitation. Campers also care about what time of day the different weather conditions occur. For example, cold air temperature at night, when campers can be in their tents and sleeping bags, calls for different preparation than cold air temperature during the day. So let's plot time series of precipitation and air temperature for the month of October:
There was a significant amount of precipitation on 7 out of 31 days. We'd better bring rain gear and a weatherproof tent.
The temperature fluctuated drastically between day and night. A few nights, the temperature reached freezing (0°C). So we'd better bring our all-seasons sleeping bags. The daytime temperatures are usually around 10°C, so we'll need long pants and a sweater. But occasionally, the daytime temperature gets well above normal room temperature (20°C), so let's bring shorts and bathing suits just in case...
Teaching Note: You may need to teach or review the Celsius temperature scale before students will be able to interpret the temperature graph in terms of clothing. With mathematically-adept students, you can give them the conversion formula (a short code sketch follows the table below):

°F = (9/5 × °C) + 32
For younger children, you can just give them a few key conversions:
0°C = 32°F
5°C = 41°F
10°C = 50°F
15°C = 59°F
20°C = 68°F
25°C = 77°F
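For classes that use a programming language alongside the Harvester, the formula can also be checked with a few lines of code. This C# sketch simply reproduces the key conversions listed above; it is illustrative only.

```csharp
using System;

class TempConversion
{
    // Celsius-to-Fahrenheit conversion from the formula above.
    static double ToFahrenheit(double celsius) => 9.0 / 5.0 * celsius + 32.0;

    static void Main()
    {
        // Reproduce the key conversions given for younger children.
        foreach (double c in new[] { 0.0, 5.0, 10.0, 15.0, 20.0, 25.0 })
        {
            Console.WriteLine($"{c}°C = {ToFahrenheit(c)}°F");
        }
    }
}
```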
Harvester Puzzle #5: Build a Sheltered Patio
Imagine that you are designing a house to be built at the Ridgetop site of Black Rock Forest. You wish to build a patio on the house, on the side of the house that will be most sheltered from the winds. Which side of the house would experience the least frequent occurrence of strong winds?
One route to an answer:
This puzzle asks you to consider two different kinds of data, and how they interact. A scatterplot is usually a good technique for exploring how two different kinds of data interact.
We want to find out if there are certain wind directions where the wind speed is nearly always slow. In other words, we want to look at wind speed as a function of wind direction. Wind direction is our independent variable, so it goes on the horizontal axis. Wind speed is our dependent variable, so it goes on the vertical axis.
Notice that the strongest winds, those in excess of 7 or 8 m/s, nearly always come from the northwest (direction approximately 315°). Don't build your patio on that side of the house!
A low dip in the scattered data marks one direction which almost never has strong winds. That direction is 080°-110°; in other words, winds from the east are almost never very strong. Building your patio on the eastern side of the house would be a good plan.
There is one other interesting data display you can make with the Harvester that bears on this problem. In the scatter plot you have just created, set the third parameter (color) to display air temperature.
Notice that the warm balmy days, the days on which you're likely to want to use your patio, are shown in reds and warmer colors. Frigid days, on which you won't want to use your patio anyway, are shown in blues and cooler colors. It turns out that those few scattered days on which there was a strongish easterly wind are all freezing days, on which you wouldn't want to sit out on the patio anyway. So your decision to build on the east side of the house is reconfirmed.
Teaching Note: You may need to review or teach the use of numerical compass directions before students can complete this investigation. Recall that there are 360° in a circle. The convention is to call north "zero," and then circle clockwise around through the other compass points, as follows:
000° = north
045° = northeast
090° = east
135° = southeast
180° = south
225° = southwest
270° = west
315° = northwest
At first glance, these Harvester Puzzles may seem a bit contrived. But, in fact, the problems of finding shelter from the rain, the wind, and extremes of temperature are ones that are faced by all the animals that live in the forest. The problem of finding an appropriately humid microenvironment is one that is faced by many species of moisture-loving plants. By learning to think about environmental data when the questions are posed in terms of human problems, students will gain the skills to think about problems in natural systems.
Created by Kim Kastens (1997), Lamont-Doherty Earth Observatory
May be freely used for educational purposes provided appropriate credit is given.
The fear of anthropogenic global warming is based almost entirely upon computerized climate model simulations of how the global atmosphere will respond to slowly increasing carbon dioxide concentrations. There are now over 20 models being tracked by the IPCC, and they project levels of warming ranging from pretty significant to catastrophic by late in this century. The following graph shows an example of those models’ forecasts based upon assumed increases in atmospheric carbon dioxide this century.
While there is considerable spread among the models, it can be seen that all of them now produce levels of global warming that can not be ignored.
But what is the basis for such large amounts of warming? Is it because we know CO2 is a greenhouse gas, and so increasing levels of atmospheric CO2 will cause warming? NO!…virtually everyone now agrees that the direct warming effect from extra CO2 is relatively small – too small to be of much practical concern.
No, the main reason the models produce so much warming depends upon uncertain assumptions regarding how clouds will respond to warming. Low and middle-level clouds provide a ‘sun shade’ for the Earth, and the climate models predict that those clouds will dissipate with warming, thereby letting more sunlight in and making the warming worse.
[High-altitude (cirrus) clouds have the opposite effect, and so a dissipation of those clouds would instead counteract the CO2 warming with cooling, which is the basis for Richard Lindzen's 'Infrared Iris' theory. The warming in the models, however, is now known to be mostly controlled by the low and middle level clouds – the “sun shade” clouds.]
But is this the way nature works? Our latest evidence from satellite measurements says “no”. One would think that understanding how the real world works would be a primary concern of climate researchers, but it is not. Rather than trying to understand how nature works, climate modelers spend most of their time trying to get the models to better mimic average weather patterns on the Earth and how those patterns change with the seasons. The unstated assumption is that if the models do a better job of mimicking average weather and the seasons, then they will do a better job of forecasting global warming.
But this assumption can not be rigorously supported. To forecast global warming, we need to know how the average climate state — and especially clouds — will change in response to the little bit of warming from the extra CO2. Indeed, the model that best replicates the average climate of the Earth might be the worst one at predicting future warming.
This fact gets glossed over – or totally ignored – as the IPCC dazzles us with the level of effort that has been invested in computer modeling of the climate system over the last 20 years. The IPCC can show how many people they have working on improving the models, how many years and how much money has been invested, how big and fast their computers are, and how many peer-reviewed scientific publications have resulted.
But unless we know how clouds change with warming, it is all a waste of time from the standpoint of knowing how serious manmade global warming will be. Even the IPCC admits this is their biggest uncertainty…so why is so little work being done trying to answer that question?
AN APPEAL TO THE DECISION MAKERS
We now have billions of dollars in satellite assets orbiting the Earth, continuously collecting high-quality data on natural, year-to-year changes in climate. I believe that these satellite measurements contain the key to understanding whether manmade global warming will be catastrophic, or merely lost in the noise of natural climate variability.
That is why I spend as much time as I can spare trying to understand those satellite measurements. But we need many more people working on this effort. Despite its importance, I have yet to meet anyone who is trying to do what I am doing.
To be fair, the modelers do indeed compare their models to satellite measurements. But those comparisons have not been detailed enough to answer the most important questions…like how clouds respond to warming.
The comparisons they have done have been confusing and inconclusive, which is part of the reason why they don’t rely on the satellite measurements very much. The modelers claim that the satellite measurements have been too ambiguous, and so they increasingly rely only upon the models.
But I will continue to assert (until I am blue in the face or die, whichever comes first) that their confusion stems from a very simple issue they have overlooked: mixing up cause and effect. Previous satellite observations showing that clouds tend to decrease with warming do not mean that warming causes clouds to decrease!
We have recently submitted to Journal of Geophysical Research a research paper that shows how one can tell the difference between cause and effect — between clouds causing a temperature change, and temperature causing a cloud change. And when this is done during the analysis of satellite data, it is clear that warming causes an increase in the sunshade effect of clouds. (While the data did suggest strong positive water vapor feedback, which enhances warming, that was far exceeded by the cooling effect of negative feedback from cloud changes.)
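To make the cause-versus-effect point concrete, here is a toy simulation (my own sketch for this writeup, not the analysis in the submitted paper; every parameter value is invented): random cloud variations both drive temperature and show up in the measured radiation, so a naive regression of flux against temperature does not recover the true feedback.

# Toy energy-balance model: C*dT/dt = N - lam*T, where N is random
# cloud-induced radiative forcing and lam is the true feedback parameter.
import numpy as np

rng = np.random.default_rng(0)
C, lam, dt, n = 8.0, 3.0, 1.0 / 12.0, 600   # all values assumed

N = rng.normal(0.0, 1.0, n)        # internal (cloud-driven) radiative noise
T = np.zeros(n)
for i in range(n - 1):             # integrate the energy balance
    T[i + 1] = T[i] + dt * (N[i] - lam * T[i]) / C

R = lam * T - N                    # measured net outgoing flux anomaly

slope = np.polyfit(T, R, 1)[0]     # naive "feedback" estimate from regression
print(f"true lambda = {lam:.2f}, regression slope = {slope:.2f}")
# The slope comes out well below lam: the cloud noise N both causes T
# (forcing) and contaminates R (response), so the regression mixes the two.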
These results suggest that the climate system has a strong thermostatic control mechanism – exactly opposite to the way the IPCC models have been programmed to behave — and that the widespread concern over manmade global warming might well be a false alarm.
The potential importance of this result to the global warming debate demands a reexamination of all of the satellite data that have been collected over the last 25 years, with the best minds the science community can spare. Simply asserting that ‘Dr. Spencer does not know what he is talking about’ will not cut it any more.
We now have two papers in the peer-reviewed scientific literature that paved the way for this work (here and here), and so one cannot simply dismiss the issue based upon some claim that we ‘skeptics’ do not publish our work.
I just presented our latest results at the NASA CERES Team meeting to about 100 attendees, and there were no major objections voiced to my analysis of the results. (CERES is the instrument that monitors how global cloud changes affect the energy balance of the Earth). I was pleased to see that there are still some scientists who are interested in the science.
Rather than simply asserting that I am wrong, why not take a fresh look at the data that have been collected over the years? Given the importance of the issue, it would seem to be the prudent thing to do. A red team-blue team approach is needed here, with the red team specifically looking for evidence that the IPCC has been wrong in their previous evaluation of the satellite data.
I suggested this years ago in congressional testimony, but one thing I’ve learned is that most congressional hearings are not designed to uncover the truth.
Maybe those in control of the research dollars are afraid of what might be found if the research community looked too closely at the satellite measurements. There are now billions — if not trillions — of dollars in future taxes, economic growth, and transfers of wealth between countries that are riding on the climate models being correct.
Scientific debate has been all but shut down. The science of climate change was long ago taken over by political interests, and I am not hopeful that the situation will improve anytime soon. But I will continue to try to change that. | 2026-01-23T01:03:14.953528 |
941,577 | 4.058851 | http://serc.carleton.edu/sp/ssac_home/general/examples/14935.html | How Large is the Great Pyramid of Giza? -- Would it make a wall that would enclose France?
In this Spreadsheets Across the Curriculum module, students do an estimation calculation that sheds light on the size of the Great Pyramid. The calculation was first done by Napoleon during the Battle of the Pyramids in 1798. While members of his party explored the great structure, Napoleon relaxed at its base and did what is now known as a back-of-the-envelope calculation. When his men returned, he announced that there was enough stone in the Pyramid to construct a wall around France. The students build a spreadsheet to recreate this calculation. They find that Napoleon had the magnitude correct.
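A quick way to recreate the calculation outside the spreadsheet is a few lines of Python. The dimensions below are the round numbers usually quoted for this anecdote (a 230 m square base, a 147 m height, and a wall about 3,000 km long, 3 m high, and 0.3 m thick); the module's own slides may use somewhat different figures.

# Pyramid volume: V = (1/3) * base_area * height
base, height = 230.0, 147.0                 # metres, assumed
pyramid_volume = base**2 * height / 3.0     # ~2.6e6 m^3

# Wall around France: length * height * thickness, all assumed
wall_volume = 3_000_000.0 * 3.0 * 0.3       # ~2.7e6 m^3

print(f"pyramid: {pyramid_volume:.2e} m^3")
print(f"wall:    {wall_volume:.2e} m^3")
# Same order of magnitude -- Napoleon had the magnitude correct.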
The module features the writing of the late I.B. Cohen, renowned scholar of the history of science, from his last book The Triumph of Numbers (2005). The module includes links to information about the Pyramid of Giza, the Battle of the Pyramids, Prof. Cohen, and why geologists and geographers know that there are 640 acres in a square mile.
- Gain experience in solving a problem approximately by making rough, order-of-magnitude assumptions and then carrying out the calculation.
- Make use of unit conversions involving acres.
- Recall (or be reminded of) the formula for the volume of a pyramid and use it in a calculation.
- Consider how to get length given the volume and cross-sectional area.
- Develop a spreadsheet to carry out a calculation.
- Use the back-of-the-envelope calculation to marvel at one of the Seven Wonders of the World.
- Be introduced to one of the fine, readable books relating to quantitative literacy.
- Begin to see the value of calculations based on approximate assumptions.
- Increase their skill at unit conversions.
- Distinguish conceptually between areas and volumes.
- From the first part of the problem, see an interesting use of a formula from solid geometry.
- From the second part of the problem, take an important conceptual step toward integration (finding a volume by adding up the areas of cross-sectional slices; see the sketch after this list).
- Be impressed with how useful school mathematics can be in appreciating the world outside of a technical context.
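For the slice-summing idea, here is a minimal sketch (same assumed dimensions as above): approximate the pyramid's volume by adding up thin horizontal cross-sections, which converges to the one-third base-area-times-height formula from solid geometry.

base, height = 230.0, 147.0        # metres, assumed
n_slices = 10_000
dz = height / n_slices

volume = 0.0
for i in range(n_slices):
    z = (i + 0.5) * dz                   # mid-height of slice i
    side = base * (1.0 - z / height)     # cross-section shrinks linearly to 0
    volume += side**2 * dz               # slice area times thickness

print(f"summed slices: {volume:.4e} m^3")
print(f"exact formula: {base**2 * height / 3.0:.4e} m^3")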
Context for Use
I use this module in my Computational Geology course, GLY 4866 (Acrobat (PDF) 39kB Sep25 06). I wrote the module for SSAC with the objective that it be of interest outside the geology curriculum (e.g., geography, history).
In the context of Computational Geology, this module provides the students with their first experience in estimation ("back-of-the-envelope problems," "Fermi problems"). The module comes in the fourth week of the semester, by which time the students have become familiar with spreadsheets and Polya's How to Solve It heuristic. The students work through the module as a homework assignment after an in-class problem-solving session. I start the session by reading the quotation on Slide 3, visiting the links on Slides 3 and 4, and asking the question on Slide 4. The students then divide into 3- to 4-person groups and work out their answers to the question. Along the way, they agonize over the size of an acre, and after a while I review the material in the end note of Slide 14. Students then worry about the size of France. That gives me the opportunity to elaborate on the nature of estimation problems. I do not show them Slide 15. They see that slide during the homework activity in which they build the spreadsheet that does the calculation.
Description and Teaching Materials
If the embedded spreadsheets in the PowerPoint module are not visible, save the file to disk and open it from there.
This PowerPoint file is the student version of the module. An instructor version is available by request. The instructor version includes the completed spreadsheet. Send your request to Len Vacher (email@example.com) by filling out and submitting the Instructor Module Request Form.
Teaching Notes and Tips
The end-of-module questions can be used for assessment.
The instructor version contains a pre-test | 2026-02-01T21:58:24.525534 |
225,740 | 4.407016 | http://www.catlin.edu/curriculum/unit/unit-3 | - What makes a good definition?
- What are some essential definitions in geometry that we will return to all year?
- How do we solve basic problems of coordinate geometry?
- Basic geometric terms
- Midpoint formula
- Parallel and perpendicular slopes
- Triangle coordinate geometry
Skills and Processes:
- Creating a clear and concise definition
- Becoming familiar with basic geometric terms
- Using the midpoint formula
- Finding equations for lines that are parallel/perpendicular to other lines
- Finding equations for the median, perpendicular bisector, and altitude of a triangle (a short sketch of these computations follows this list) | 2026-01-21T16:53:27.772362 |
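A minimal sketch of the computations behind these skills (the function names are my own): the midpoint formula, perpendicular slopes, and the perpendicular bisector of a segment in slope-intercept form.

def midpoint(p, q):
    return ((p[0] + q[0]) / 2, (p[1] + q[1]) / 2)

def slope(p, q):
    return (q[1] - p[1]) / (q[0] - p[0])   # undefined for vertical segments

def perpendicular_bisector(p, q):
    mx, my = midpoint(p, q)
    m = -1 / slope(p, q)                   # perpendicular slopes multiply to -1
    return m, my - m * mx                  # y = m*x + b, returned as (m, b)

# Example: the side of a triangle from (0, 0) to (4, 2)
print(midpoint((0, 0), (4, 2)))                 # (2.0, 1.0)
print(perpendicular_bisector((0, 0), (4, 2)))   # (-2.0, 5.0)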
519,259 | 3.527765 | http://blog.drwile.com/?p=5313 | Posted by jlwile on May 27, 2011
In my previous post, I discussed the cane toad invasion of Australia. While studies of the invasion have shown a new mechanism of selection that is distinct from classic natural selection, they have also shown how limited the range of evolutionary change in cane toads really is. This is consistent with the creationist view and quite contrary to the evolutionist view.
In this post, I want to discuss the changes that the cane toads have produced in other Australian animals. As you might expect, as a foreign species spreads across an ecosystem, it is going to have an effect on the already-established species there. In general, one expects the effects to be negative, but that doesn’t always seem to be the case. Indeed, a large study designed to assess the damage that the cane toad invasion has done to the already-established animals in Australia says:1
Overall, some Australian native species (mostly large predators) have declined due to cane toads; others (especially species formerly consumed by those predators) have benefited; and for yet others, effects are minor or are mediated indirectly rather than through direct interactions with the invasive toads.
So in the end, it’s a bit of a mixed bag. However, what I find interesting are some of the details of how these animals have changed in response to the cane toad invasion.
Cane toads, like many other toads, are toxic to many predators. The level of toxicity varies depending on the predator. Some predators just get sick when they eat cane toads, others die. Many snakes, for example, tend to die when they eat cane toads. This, then, can be a serious problem for the snakes of Australia. What's interesting, however, is that an entire population of snakes can change in order to adapt to this problem.
Such was the case for two different species of toad-eating snakes in Australia. The two species in question are fond of eating toads and are big enough to eat cane toads. However, they both are very sensitive to the poisons produced by cane toads, so the snakes tend to die after eating them. These species, then, are considered “toad vulnerable” species. Two other species that also inhabit areas overrun by the cane toads are not considered vulnerable. One species is too small to eat them, and the other species was already quite resistant to their poisons before the cane toads showed up. A team studied how the two “toad vulnerable” species changed compared to the “not vulnerable to toads” species, and the results were fascinating.
The team took several measurements of preserved snakes from all four species as found in museums. They then went into the field and took the same measurements in the snakes that were exposed to the toads. They also noted the geographical location so they could correlate any changes to the length of time the snakes had been exposed to the toads. They found that in the two “not vulnerable to toads” species, nothing (on average) had changed. However, in the two “toad vulnerable” species, the snakes in the field were (on average) longer than the ones in the museums, and they had smaller heads! The longer the cane toads had been in the area, the more significant the differences were.2
How do the authors explain these changes? Well, the bigger the head, the easier it is for the snake to eat large toads. Cane toads are pretty large, so only the big-headed snakes can eat them. That, of course, resulted in a lot of dead big-headed snakes. The small-headed snakes, however, couldn’t eat the cane toads easily, so they didn’t. As most of the big-headed snakes got killed off, then, the small-headed snakes had less competition and flourished. As a result, the populations of those two species of snakes have, on average, changed in head and body size due to the invading cane toads.
Now of course this explanation makes complete sense, and it shows the value of having a population with many different individual characteristics. If all the snakes had been big-headed, the population might have been severely threatened. Instead, because there were a lot of different head sizes in the population, the population is seemingly not threatened by the invasive species.
Other species just learn to avoid the toads. A small marsupial called the planigale likes to eat toads, and when the cane toads moved in, planigales started eating them. Rather than dying, however, they just got sick. Rather quickly, they learned that it was best to avoid the cane toads, and that’s what they did. As a result, they still go after other toads – just not the cane toad.3
In other species, there seemed to be individual preferences on whether or not to eat toads. In a laboratory study of death adders, for example, some individuals would happily go for a toad if one was presented, and others would ignore the toad. When those snakes were then equipped with radio transmitters and followed in the field, the ones that ignored toads in the lab were more likely to survive than the ones that were happy to eat toads in the lab. The authors suggest that the individual behavior of ignoring toads will be naturally selected and passed on to future generations, making the death adder population less threatened by the cane toad invasion.4
Now note what is happening in each of these cases. The animals aren’t producing new traits to deal with the cane toad invasion. Instead, natural selection is just selecting traits that already exist among the individuals. Those traits that make the animals less vulnerable to the cane toads become the dominant traits in the population. Like the analysis of how the cane toad is changing, then, it shows that evolutionary change is quite limited. Evolution can produce snakes with smaller heads or predators that tend to ignore an invasive species. However, it cannot fundamentally change the snakes or other predators. That’s the “take home” message I get from these interesting studies on the cane toad invasion of Australia.
1. Richard Shine, “The Ecological Impact Of Invasive Cane Toads (Bufo marinus) In Australia,” Quarterly Review of Biology 85 (3):253-291, 2010.
2. Ben L. Phillips and Richard Shine, “Adapting to an invasive species: Toxic cane toads induce morphological change in Australian snakes,” Proceedings of the National Academy of Sciences USA 101:17150-17155, 2004.
3. Webb, J. K., G. P. Brown, T. Child, M. J. Greenlees, B. L. Phillips, and R. Shine, “A native dasyurid predator (common planigale, Planigale maculata) rapidly learns to avoid toxic cane toads,” Austral Ecology 33:821-829, 2008.
4. B. L. Phillips, M. J. Greenlees, G. P. Brown, and R. Shine, “Predator behaviour and morphology mediates the impact of an invasive species: cane toads and death adders in Australia,” Animal Conservation 13 (1):53-59, 2010. | 2026-01-26T09:28:26.932805 |
640,723 | 3.588419 | http://www.engr.wisc.edu/news/ar/1998/me.html | Assistant Professor Nicola Ferrier (right) is working with postdoctoral researcher Kyunghwan Kim (left) to give robots a sense of touch. Currently, a typical robot hand might include two rigid fingers made of parallel plates. If an object is flat and its placement known, the robot can successfully pick it up. But try finding and picking up an egg with the same robotic hand and the task becomes problematic. By incorporating force and shape sensors embedded within deformable robotic fingertips, Ferrier has developed a method of sensing an object's shape and the distribution of forces required to manipulate the object. With both shape and force information, the robotic hand can operate more dexterously. To facilitate locating an object, Ferrier's research group will combine the force-and-shape-sensing system with a visual sensory system. This particular combination will give the robot eye and hand coordination. Before that can be done, however, Ferrier's team must figure out how to instruct the robot to manage various sensory information so that it will know when to look and move in order to successfully find and manipulate an object.
New ways to print computer chips
Powerful computer models that actually simulate the making of computer chips are helping lead manufacturers to a new generation of smaller, faster and better electronics.
Professor Roxann Engelstad is directing a $2 million project funded by Sematech to simulate four competing technologies for making semiconductors in the next century. Sematech, a 10-member consortium of semiconductor manufacturers, will use results of the project to generate data for the industry.
The technology that served the semiconductor industry for decades will hit a wall in a few years, forcing the industry to reinvent the way it builds chips, Engelstad says. Optical lithography, the current approach to making semiconductors, does not appear to have the capability to print future circuitry in the precise dimensions needed.
Four new approaches to lithography are being considered, including the use of X-ray, electron beam, projection ion beam and extreme ultraviolet. Finding which competing technology is most cost-effective, Engelstad says, is being hailed by some in the industry as the "decision of the century."
Creating a Collaborative Learning Environment (CCLE)
What can a professor of thermodynamics learn about teaching from a professor of South African history and vice versa? Quite a bit. At least that's what faculty in the CCLE program have found. What started out as an effort to improve teaching in the College of Engineering has grown to include the entire Madison campus under the direction of Associate Scientist Katherine Sanders.
The intent, says faculty advisor and program participant Professor Patrick Farrell, is to provide a venue for faculty who are interested in developing a deep understanding of learning and teaching. The focus is on understanding how students and faculty learn and in particular how they learn in a collaborative environment. Farrell says most participants enter the year-long program convinced that faculty from such diverse fields can't possibly have anything in common with regard to teaching. But by looking at the simplest elements of how new knowledge forms, faculty across campus have found that the process of learning and the things that stop the process are essentially the same no matter what subject is being studied.
From this common viewpoint, faculty can help each other analyze the way they teach. Farrell says many find ways to make changes in their courses and interactions with students that can help foster better learning.
Kenneth W. Ragland, Chair
240 Mechanical Engineering
1513 University Avenue
Madison, WI 53706-1572
| 2026-01-28T04:03:33.968565 |
16,124 | 3.507132 | http://nautil.us/issue/46/balance/a-brief-history-of-the-grand-unified-theory-of-physics?utm_source=frontpage&utm_medium=mshare&utm_campaign=a-brief-history-of-the-grand-unified-theory-of-physics | Particle physicists had two nightmares before the Higgs particle was discovered in 2012. The first was that the Large Hadron Collider (LHC) particle accelerator would see precisely nothing. For if it did, it would likely be the last large accelerator ever built to probe the fundamental makeup of the cosmos. The second was that the LHC would discover the Higgs particle predicted by theoretical physicist Peter Higgs in 1964 ... and nothing else.
Each time we peel back one layer of reality, other layers beckon. So each important new development in science generally leaves us with more questions than answers. But it also usually leaves us with at least the outline of a road map to help us begin to seek answers to those questions. The successful discovery of the Higgs particle, and with it the validation of the existence of an invisible background Higgs field throughout space (in the quantum world, every particle like the Higgs is associated with a field), was a profound validation of the bold scientific developments of the 20th century.
However, the words of Sheldon Glashow continue to ring true: The Higgs is like a toilet. It hides all the messy details we would rather not speak of. The Higgs field interacts with most elementary particles as they travel through space, producing a resistive force that slows their motion and makes them appear massive. Thus, the masses of elementary particles that we measure, and that make the world of our experience possible, are something of an illusion—an accident of our particular experience.
As elegant as this idea might be, it is essentially an ad hoc addition to the Standard Model of physics—which explains three of the four known forces of nature, and how these forces interact with matter. It is added to the theory to do what is required to accurately model the world of our experience. But it is not required by the theory. The universe could have happily existed with massless particles and a long-range weak force (which, along with the strong force, gravity, and electromagnetism, make up the four known forces). We would just not be here to ask about them. Moreover, the detailed physics of the Higgs is undetermined within the Standard Model alone. The Higgs could have been 20 times heavier, or 100 times lighter.
Why, then, does the Higgs exist at all? And why does it have the mass it does? (Recognizing that whenever scientists ask “Why?” we really mean “How?”) If the Higgs did not exist, the world we see would not exist, but surely that is not an explanation. Or is it? Ultimately to understand the underlying physics behind the Higgs is to understand how we came to exist. When we ask, “Why are we here?,” at a fundamental level we may as well be asking, “Why is the Higgs here?” And the Standard Model gives no answer to this question.
Some hints do exist, however, coming from a combination of theory and experiment. Shortly after the fundamental structure of the Standard Model became firmly established, in 1974, and well before the details were experimentally verified over the next decade, two different groups of physicists at Harvard, where both Sheldon Glashow and Steven Weinberg were working, noticed something interesting. Glashow, along with Howard Georgi, did what Glashow did best: They looked for patterns among the existing particles and forces and sought out new possibilities using the mathematics of group theory.
In the Standard Model the weak and electromagnetic forces of nature are unified at a high-energy scale, into a single force that physicists call the “electroweak force.” This means that the mathematics governing the weak and electromagnetic forces are the same, both constrained by the same mathematical symmetry, and the two forces are different reflections of a single underlying theory. But the symmetry is “spontaneously broken” by the Higgs field, which interacts with the particles that convey the weak force, but not the particles that convey the electromagnetic force. This accident of nature causes these two forces to appear as two separate and distinct forces at scales we can measure—with the weak force being short-range and electromagnetism remaining long-range.
Georgi and Glashow tried to extend this idea to include the strong force, and discovered that all of the known particles and the three non-gravitational forces could naturally fit within a single fundamental symmetry structure. They then speculated that this symmetry could spontaneously break at some ultrahigh energy scale (and short distance scale) far beyond the range of current experiments, leaving two separate and distinct unbroken symmetries left over—resulting in separate strong and electroweak forces. Subsequently, at a lower energy and larger distance scale, the electroweak symmetry would break, separating the electroweak force into the short-range weak and the long-range electromagnetic force.
They called such a theory, modestly, a Grand Unified Theory (GUT).
At around the same time, Weinberg and Georgi along with Helen Quinn noticed something interesting—following the work of Frank Wilczek, David Gross, and David Politzer. While the strong interaction got weaker at smaller distance scales, the electromagnetic and weak interactions got stronger.
It didn’t take a rocket scientist to wonder whether the strength of the three different interactions might become identical at some small-distance scale. When they did the calculations, they found (with the accuracy with which the interactions were then measured) that such a unification looked possible, but only if the scale of unification was about 15 orders of magnitude in scale smaller than the size of the proton.
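The flavor of that calculation can be sketched in a few lines using the standard one-loop running of the inverse couplings, 1/alpha_i(mu) = 1/alpha_i(M_Z) - (b_i/2pi) ln(mu/M_Z), with textbook Standard Model coefficients and rough measured inputs at the Z mass (GUT-normalized hypercharge). This is an illustration with round numbers, not the 1974 calculation itself.

import numpy as np

alpha_inv_mz = np.array([59.0, 29.6, 8.5])   # rough 1/alpha_i at the Z mass
b_sm = np.array([41 / 10, -19 / 6, -7.0])    # one-loop SM beta coefficients

def alpha_inv(mu_over_mz):
    # 1/alpha_i(mu) = 1/alpha_i(M_Z) - (b_i / 2*pi) * ln(mu / M_Z)
    return alpha_inv_mz - b_sm / (2 * np.pi) * np.log(mu_over_mz)

for k in (6, 10, 13, 15):                    # mu = 10^k times M_Z
    print(f"10^{k} M_Z:", alpha_inv(10.0**k).round(1))
# The three inverse couplings converge toward one another at enormous
# scales, though with modern inputs they miss a single crossing point --
# the mismatch discussed later in the article.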
This was good news if the unified theory was the one proposed by Howard Georgi and Glashow—because if all the particles we observe in nature got unified this way, then new particles (called gauge bosons) would exist that produce transitions between quarks (which make up protons and neutrons), and electrons and neutrinos. That would mean protons could decay into other lighter particles, which we could potentially observe. As Glashow put it, “Diamonds aren’t forever.”
Even then it was known that protons must have an incredibly long lifetime. Not just because we still exist almost 14 billion years after the big bang, but because we all don’t die of cancer as children. If protons decayed with an average lifetime smaller than about a billion billion years, then enough protons would decay in our bodies during our childhood to produce enough radiation to kill us. Remember that in quantum mechanics, processes are probabilistic. If an average proton lives a billion billion years, and if one has a billion billion protons, then on average one will decay each year. There are a lot more than a billion billion protons in our bodies.
However, with the incredibly small proposed distance scale and therefore the incredibly large mass scale associated with spontaneous symmetry breaking in Grand Unification, the new gauge bosons would get large masses. That would make the interactions they mediate be so short-range that they would be unbelievably weak on the scale of protons and neutrons today. As a result, while protons could decay, they might live, in this scenario, perhaps a million billion billion billion years before decaying. Still time to hold onto your growth stocks.
With the results of Glashow and Georgi, and Georgi, Quinn, and Weinberg, the smell of grand synthesis was in the air. After the success of the electroweak theory, particle physicists were feeling ambitious and ready for further unification.
How would one know if these ideas were correct, however? There was no way to build an accelerator to probe an energy scale a million billion times greater than the rest mass energy of protons. Such a machine would have to have a circumference of the moon’s orbit. Even if it was possible, considering the earlier debacle over the Superconducting Super Collider, no government would ever foot the bill.
Happily, there was another way, using the kind of probability arguments I just presented that give limits to the proton lifetime. If the new Grand Unified Theory predicted a proton lifetime of, say, a thousand billion billion billion years, then if one could put a thousand billion billion billion protons in a single detector, on average one of them would decay each year.
Where could one find so many protons? Simple: in about 3,000 tons of water.
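The arithmetic is easy to check with rounded numbers (my own back-of-the-envelope, not the book's exact figures):

AVOGADRO = 6.022e23
grams = 3_000 * 1e6                      # 3,000 metric tons of water
molecules = grams / 18.0 * AVOGADRO      # molar mass of H2O ~ 18 g/mol
protons = molecules * 10                 # 10 protons per H2O (2 H + 8 in O)

lifetime_years = 1e33                    # assumed average proton lifetime
print(f"protons   ~ {protons:.1e}")      # ~1e33
print(f"decays/yr ~ {protons / lifetime_years:.1f}")   # ~1 per year
# With about as many protons as there are years in the assumed lifetime,
# on average about one decay per year -- the logic described above.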
So all that was required was to get a tank of water, put it in the dark, make sure there were no radioactivity backgrounds, surround it with sensitive phototubes that can detect flashes of light in the detector, and then wait for a year to see a burst of light when a proton decayed. As daunting as this may seem, at least two large experiments were commissioned and built to do just this, one deep underground next to Lake Erie in a salt mine, and one in a mine near Kamioka, Japan. The mines were necessary to screen out incoming cosmic rays that would otherwise produce a background that would swamp any proton decay signal.
Both experiments began taking data around 1982–83. Grand Unification seemed so compelling that the physics community was confident a signal would soon appear and Grand Unification would mean the culmination of a decade of amazing change and discovery in particle physics—not to mention another Nobel Prize for Glashow and maybe some others.
Unfortunately, nature was not so kind in this instance. No signals were seen in the first year, the second, or the third. The simplest elegant model proposed by Glashow and Georgi was soon ruled out. But once the Grand Unification bug had caught on, it was not easy to let it go. Other proposals were made for unified theories that might cause proton decay to be suppressed beyond the limits of the ongoing experiments.
On Feb. 23, 1987, however, another event occurred that demonstrates a maxim I have found is almost universal: Every time we open a new window on the universe, we are surprised. On that day a group of astronomers observed, in photographic plates obtained during the night, the closest exploding star (a supernova) seen in almost 400 years. The star, about 160,000 light-years away, was in the Large Magellanic Cloud—a small satellite galaxy of the Milky Way observable in the southern hemisphere.
If our ideas about exploding stars are correct, most of the energy released should be in the form of neutrinos, even though the visible light released is so great that supernovas are the brightest cosmic fireworks in the sky when they explode (at a rate of about one explosion per 100 years per galaxy). Rough estimates then suggested that the huge IMB (Irvine-Michigan-Brookhaven) and Kamiokande water detectors should see about 20 neutrino events. When the IMB and Kamiokande experimentalists went back and reviewed their data for that day, lo and behold IMB displayed eight candidate events in a 10-second interval, and Kamiokande displayed 11 such events. In the world of neutrino physics, this was a flood of data. The field of neutrino astrophysics had suddenly reached maturity. These 19 events produced perhaps 1,900 papers by physicists, such as me, who realized that they provided an unprecedented window into the core of an exploding star, and a laboratory not just for astrophysics but also for the physics of neutrinos themselves.
Spurred on by the realization that large proton-decay detectors might serve a dual purpose as new astrophysical neutrino detectors, several groups began to build a new generation of such dual-purpose detectors. The largest one in the world was again built in the Kamioka mine and was called Super-Kamiokande, and with good reason. This mammoth 50,000-ton tank of water, surrounded by 11,800 phototubes, was operated in a working mine, yet the experiment was maintained with the purity of a laboratory clean room. This was absolutely necessary because in a detector of this size one had to worry not only about external cosmic rays, but also about internal radioactive contaminants in the water that could swamp any signals being searched for.
Meanwhile, interest in a related astrophysical neutrino signature also reached a new high during this period. The sun produces neutrinos due to the nuclear reactions in its core that power it, and over 20 years, using a huge underground detector, physicist Ray Davis had detected solar neutrinos, but had consistently found an event rate about a factor of three below what was predicted using the best models of the sun. A new type of solar neutrino detector was built inside a deep mine in Sudbury, Canada, which became known as the Sudbury Neutrino Observatory (SNO).
Super-Kamiokande has now been operating almost continuously, through various upgrades, for more than 20 years. No proton-decay signals have been seen, and no new supernovas observed. However, the precision observations of neutrinos at this huge detector, combined with complementary observations at SNO, definitely established that the solar neutrino deficit observed by Ray Davis is real, and moreover that it is not due to astrophysical effects in the sun but rather due to the properties of neutrinos. The implication was that at least one of the three known types of neutrinos is not massless. Since the Standard Model does not accommodate neutrinos’ masses, this was the first definitive observation that some new physics, beyond the Standard Model and beyond the Higgs, must be operating in nature.
Soon after this, observations of higher-energy neutrinos that regularly bombard Earth as high-energy cosmic-ray protons hit the atmosphere and produce a downward shower of particles, including neutrinos, demonstrated that yet a second neutrino has mass. This mass is somewhat larger, but still far smaller than the mass of the electron. For these results team leaders at SNO and Kamiokande were awarded the 2015 Nobel Prize in Physics—a week before I wrote the first draft of these words. To date these tantalizing hints of new physics are not explained by current theories.
The absence of proton decay, while disappointing, turned out to be not totally unexpected. Since Grand Unification was first proposed, the physics landscape had shifted slightly. More precise measurements of the actual strengths of the three non-gravitational interactions—combined with more sophisticated calculations of the change in the strength of these interactions with distance—demonstrated that if the particles of the Standard Model are the only ones existing in nature, the strength of the three forces will not unify at a single scale. In order for Grand Unification to take place, some new physics at energy scales beyond those that have been observed thus far must exist. The presence of new particles would not only change the energy scale at which the three known interactions might unify, it would also tend to drive up the Grand Unification scale and thus suppress the rate of proton decay—leading to predicted lifetimes in excess of a million billion billion billion years.
As these developments were taking place, theorists were driven by new mathematical tools to explore a possible new type of symmetry in nature, which became known as supersymmetry. This fundamental symmetry is different from any previous known symmetry, in that it connects the two different types of particles in nature, fermions (particles with half-integer spins) and bosons (particles with integer spins). The upshot of this is that if this symmetry exists in nature, then for every known particle in the Standard Model at least one corresponding new elementary particle must exist. For every known boson there must exist a new fermion. For every known fermion there must exist a new boson.
Since we haven’t seen these particles, this symmetry cannot be manifest in the world at the level we experience it, and it must be broken, meaning the new particles will all get masses that could be heavy enough so that they haven’t been seen in any accelerator constructed thus far.
What could be so attractive about a symmetry that suddenly doubles all the particles in nature without any evidence of any of the new particles? In large part the seduction lay in the very fact of Grand Unification. Because if a Grand Unified theory exists at a mass scale of 15 to 16 orders of magnitude higher energy than the rest mass of the proton, this is also about 13 orders of magnitude higher than the scale of electroweak symmetry breaking. The big question is why and how such a huge difference in scales can exist for the fundamental laws of nature. In particular, if the Standard Model Higgs is the true last remnant of the Standard Model, then the question arises, Why is the energy scale of Higgs symmetry breaking 13 orders of magnitude smaller than the scale of symmetry breaking associated with whatever new field must be introduced to break the GUT symmetry into its separate component forces?
The problem is a little more severe than it appears. When one considers the effects of virtual particles (which appear and disappear on timescales so short that their existence can only be probed indirectly), including particles of arbitrarily large mass, such as the gauge particles of a presumed Grand Unified Theory, these tend to drive up the mass and symmetry-breaking scale of the Higgs so that it essentially becomes close to, or identical to, the heavy GUT scale. This generates a problem that has become known as the naturalness problem. It is technically unnatural to have a huge hierarchy between the scale at which the electroweak symmetry is broken by the Higgs particle and the scale at which the GUT symmetry is broken by whatever new heavy scalar field breaks that symmetry.
The mathematical physicist Edward Witten argued in an influential paper in 1981 that supersymmetry had a special property. It could tame the effect that virtual particles of arbitrarily high mass and energy have on the properties of the world at the scales we can currently probe. Because virtual fermions and virtual bosons of the same mass produce quantum corrections that are identical except for a sign, if every boson is accompanied by a fermion of equal mass, then the quantum effects of the virtual particles will cancel out. This means that the effects of virtual particles of arbitrarily high mass and energy on the physical properties of the universe on scales we can measure would now be completely removed.
If, however, supersymmetry is itself broken (as it must be or all the supersymmetric partners of ordinary matter would have the same mass as the observed particles and we would have observed them), then the quantum corrections will not quite cancel out. Instead they would yield contributions to masses that are the same order as the supersymmetry-breaking scale. If it was comparable to the scale of the electroweak symmetry breaking, then it would explain why the Higgs mass scale is what it is.
And it also means we should expect to begin to observe a lot of new particles—the supersymmetric partners of ordinary matter—at the scale currently being probed at the LHC.
This would solve the naturalness problem because it would protect the Higgs boson masses from possible quantum corrections that could drive them up to be as large as the energy scale associated with Grand Unification. Supersymmetry could allow a “natural” large hierarchy in energy (and mass) separating the electroweak scale from the Grand Unified scale.
That supersymmetry could in principle solve the hierarchy problem, as it has become known, greatly increased its stock with physicists. It caused theorists to begin to explore realistic models that incorporated supersymmetry breaking and to explore the other physical consequences of this idea. When they did so, the stock price of supersymmetry went through the roof. For if one included the possibility of spontaneously broken supersymmetry into calculations of how the three non-gravitational forces change with distance, then suddenly the strength of the three forces would naturally converge at a single, very small-distance scale. Grand Unification became viable again!
Models in which supersymmetry is broken have another attractive feature. It was pointed out, well before the top quark was discovered, that if the top quark was heavy, then through its interactions with other supersymmetric partners, it could produce quantum corrections to the Higgs particle properties that would cause the Higgs field to form a coherent background field throughout space at its currently measured energy scale if Grand Unification occurred at a much higher, superheavy scale. In short, the energy scale of electroweak symmetry breaking could be generated naturally within a theory in which Grand Unification occurs at a much higher energy scale. When the top quark was discovered and indeed was heavy, this added to the attractiveness of the possibility that supersymmetry breaking might be responsible for the observed energy scale of the weak interaction.
All of this comes at a cost, however. For the theory to work, there must be two Higgs bosons, not just one. Moreover, one would expect to begin to see the new supersymmetric particles if one built an accelerator such as the LHC, which could probe for new physics near the electroweak scale. Finally, in what looked for a while like a rather damning constraint, the lightest Higgs in the theory could not be too heavy or the mechanism wouldn’t work.
As searches for the Higgs continued without yielding any results, accelerators began to push closer and closer to the theoretical upper limit on the mass of the lightest Higgs boson in supersymmetric theories. The value was something like 135 times the mass of the proton, with details to some extent depending on the model. If the Higgs could have been ruled out up to that scale, it would have suggested all the hype about supersymmetry was just that.
Well, things turned out differently. The Higgs that was observed at the LHC has a mass about 125 times the mass of the proton. Perhaps a grand synthesis was within reach.
The answer at present is ... not so clear. The signatures of new supersymmetric partners of ordinary particles should be so striking at the LHC, if they exist, that many of us thought that the LHC had a much greater chance of discovering supersymmetry than it did of discovering the Higgs. It didn’t turn out that way. Following three years of LHC runs, there are no signs of supersymmetry whatsoever. The situation is already beginning to look uncomfortable. The lower limits that can now be placed on the masses of supersymmetric partners of ordinary matter are getting higher. If they get too high, then the supersymmetry-breaking scale would no longer be close to the electroweak scale, and many of the attractive features of supersymmetry breaking for resolving the hierarchy problem would go away.
But the situation is not yet hopeless, and the LHC has been turned on again, this time at higher energy. It could be that supersymmetric particles will soon be discovered.
If they are, this will have another important consequence. One of the bigger mysteries in cosmology is the nature of the dark matter that appears to dominate the mass of all galaxies we can see. There is so much of it that it cannot be made of the same particles as normal matter. If it were, for example, the predictions of the abundance of light elements such as helium produced in the big bang would no longer agree with observation. Thus physicists are reasonably certain that the dark matter is made of a new type of elementary particle. But what type?
Well, the lightest supersymmetric partner of ordinary matter is, in most models, absolutely stable and has many of the properties of neutrinos. It would be weakly interacting and electrically neutral, so that it wouldn’t absorb or emit light. Moreover, calculations that I and others performed more than 30 years ago showed that the remnant abundance today of the lightest supersymmetric particle left over after the big bang would naturally be in the range so that it could be the dark matter dominating the mass of galaxies.
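The flavor of those calculations can be conveyed by the standard freeze-out rule of thumb (a one-line estimate, not the detailed computation the text refers to): the relic density of a thermally produced particle is inversely proportional to its annihilation cross-section.

# Omega * h^2 ~ (3e-27 cm^3/s) / <sigma v>, the usual approximation
sigma_v = 3e-26               # cm^3/s, a typical weak-scale annihilation rate
omega_h2 = 3e-27 / sigma_v
print(f"Omega*h^2 ~ {omega_h2:.1f}")   # ~0.1, near the observed dark matter
                                       # density of roughly 0.12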
In that case our galaxy would have a halo of dark matter particles whizzing throughout it, including through the room in which you are reading this. As a number of us also realized some time ago, this means that if one designs sensitive detectors and puts them underground, not unlike, at least in spirit, the neutrino detectors that already exist underground, one might directly detect these dark matter particles. Around the world a half dozen beautiful experiments are now going on to do just that. So far nothing has been seen, however.
So, we are in potentially the best of times or the worst of times. A race is going on between the detectors at the LHC and the underground direct dark matter detectors to see who might discover the nature of dark matter first. If either group reports a detection, it will herald the opening up of a whole new world of discovery, leading potentially to an understanding of Grand Unification itself. And if no discovery is made in the coming years, we might rule out the notion of a simple supersymmetric origin of dark matter—and in turn rule out the whole notion of supersymmetry as a solution of the hierarchy problem. In that case we would have to go back to the drawing board, except if we don’t see any new signals at the LHC, we will have little guidance about which direction to head in order to derive a model of nature that might actually be correct.
Things got more interesting when the LHC reported a tantalizing possible signal due to a new particle about six times heavier than the Higgs particle. This particle did not have the characteristics one would expect for any supersymmetric partner of ordinary matter. In general the most exciting spurious hints of signals go away when more data are amassed, and about six months after this signal first appeared, after more data were amassed, it disappeared. If it had not, it could have changed everything about the way we think about Grand Unified Theories and electroweak symmetry, suggesting instead a new fundamental force and a new set of particles that feel this force. But while it generated many hopeful theoretical papers, nature seems to have chosen otherwise.
The absence of clear experimental direction or confirmation of supersymmetry has thus far not bothered one group of theoretical physicists. The beautiful mathematical aspects of supersymmetry encouraged, in 1984, the resurrection of an idea that had been dormant since the 1960s when Yoichiro Nambu and others tried to understand the strong force as if it were a theory of quarks connected by string-like excitations. When supersymmetry was incorporated in a quantum theory of strings, to create what became known as superstring theory, some amazingly beautiful mathematical results began to emerge, including the possibility of unifying not just the three non-gravitational forces, but all four known forces in nature into a single consistent quantum field theory.
However, the theory requires a host of new spacetime dimensions to exist, none of which has been, as yet, observed. Also, the theory makes no other predictions that are yet testable with currently conceived experiments. And the theory has recently gotten a lot more complicated so that it now seems that strings themselves are probably not even the central dynamical variables in the theory.
None of this dampened the enthusiasm of a hard core of dedicated and highly talented physicists who have continued to work on superstring theory, now called M-theory, over the 30 years since its heyday in the mid-1980s. Great successes are periodically claimed, but so far M-theory lacks the key element that makes the Standard Model such a triumph of the scientific enterprise: the ability to make contact with the world we can measure, resolve otherwise inexplicable puzzles, and provide fundamental explanations of how our world has arisen as it has. This doesn’t mean M-theory isn’t right, but at this point it is mostly speculation, although well-meaning and well-motivated speculation.
It is worth remembering that if the lessons of history are any guide, most forefront physical ideas are wrong. If they weren’t, anyone could do theoretical physics. It took several centuries or, if one counts back to the science of the Greeks, several millennia of hits and misses to come up with the Standard Model.
So this is where we are. Are great new experimental insights just around the corner that may validate, or invalidate, some of the grander speculations of theoretical physicists? Or are we on the verge of a desert where nature will give us no hint of what direction to search in to probe deeper into the underlying nature of the cosmos? We’ll find out, and we will have to live with the new reality either way.
Lawrence M. Krauss is a theoretical physicist and cosmologist, the director of the Origins Project and the foundation professor in the School of Earth and Space Exploration at Arizona State University. He is also the author of bestselling books including A Universe from Nothing and The Physics of Star Trek.
Copyright © 2017 by Lawrence M. Krauss. From the forthcoming book The Greatest Story Ever Told—So Far: Why Are We Here? By Lawrence M. Krauss to be published by Atria Books, a Division of Simon & Schuster, Inc. Printed by permission. | 2026-01-18T13:39:31.138025 |
310,657 | 3.868502 | http://westerndiatoms.colorado.edu/about/what_are_diatoms | The Division Bacillariophyta, or diatoms, are algae with a distinct silica cell wall called a frustule.
The division Bacillariophyta is distinguished by the presence of an inorganic cell wall composed of silica (hydrated SiO2). The wall, or frustule, consists of two parts called “valves”. Diatoms have evolved elaborate silica cell walls that reflect the types of habitat to which each species is adapted. Diatoms are abundant in nearly every habitat where water is found – oceans, lakes, streams, mosses, soils, even the bark of trees. Diatoms grow as single cells, or form simple filaments or colonies. They form the base of aquatic food webs in marine and freshwater habitats. Estimates of the number of diatom species on earth range widely, from 20,000 to 1–2 million. The range is so large because scientists are still working to understand basic aspects of what a diatom species is, and new and diverse forms are still being discovered and described in scientific publications. Diatoms are photosynthetic, gaining energy from the sun using chlorophylls a and c. Their accessory pigments fucoxanthin and β (beta) carotene give them a characteristic golden color. Cells store energy from photosynthesis in the form of chrysolaminarin and lipids. The high production of lipids in many diatom species has created great interest in diatoms as a source of biofuels. Indeed, as one of the important global sources of carbon fixation, diatoms already are an important “biofuel” for aquatic food webs. It is estimated that 40% of the earth’s oxygen (O2) is produced through the photosynthetic activities of diatoms.
Diatom frustules are characteristically highly ornamented, forming an amazing range of forms. The shape of the diatom frustule is species specific. In other words, the evolutionary relationships of diatoms and their names (diatom taxonomy) has been based on the silica frustule, at least until recently (although there are exceptions). Two major groups are recognized within the diatoms: 1) Coscinodiscophyceae, or centric diatoms, cells with radial symmetry (about a point) and 2) Bacillariophyceae, or pennate diatoms, cells with bilateral symmetry (about a line). The centric diatoms are not able to move, but some pennate diatoms may move across surfaces or up and down within sediments. Cells are able to move by a structure termed the raphe.
The navigation of this site is organized around shape categories (morphological groups), into which we have grouped all of the genera.
Nearly all diatoms are microscopic - cells range in size from about 2 microns to about 500 microns (0.5 mm), or about the width of a human hair (note that one micron is equal to 10⁻⁶ meter). Scientists use light microscopes (LM) or scanning electron microscopes (SEM) to view diatom structures. When diatoms are viewed with a light microscope, the frustules appear clear (we are seeing through glass). When diatoms are viewed with a scanning electron microscope, the frustules appear opaque.
For most of their life history, diatom cells divide by vegetative division, also called vegetative reproduction: a single cell divides and forms two new cells. The new valve (cell) walls, however, are formed inside the parent cell. Because new cells must form within the parent, and the rigid, inorganic silica walls cannot expand, the daughter cells are constrained to be smaller than the parent. Furthermore, each daughter cell has one valve from the parent and one valve that is newly formed and smaller in size. This biological constraint has important implications: with each cell division, diatom cells become progressively smaller. In addition, as the cells of many species within a population become smaller, their relative dimensions change. The range of size and shape within a population is termed a "size diminution series". In this website, we demonstrate the size diminution series of each species (a schematic illustration of the mechanism follows).
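Here is a schematic simulation of that constraint (purely illustrative; the shrink factor and population handling are invented, and real species differ): each division yields one cell at the parent valve's size and one slightly smaller cell, so mean cell size drifts downward generation by generation.

import random

random.seed(1)
sizes = [100.0] * 50                 # starting population, arbitrary units
SHRINK = 0.98                        # new valve slightly smaller (assumed)

for generation in range(1, 61):
    next_gen = []
    for s in random.sample(sizes, 25):     # cap the population size
        next_gen += [s, s * SHRINK]        # parent-valve cell + smaller cell
    sizes = next_gen
    if generation % 20 == 0:
        print(generation, round(sum(sizes) / len(sizes), 1))
# Mean size falls steadily; in real populations, auxospore formation
# (described below) resets cells to their maximum size.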
Diatoms regain their maximum size through the formation of auxospores, which may be formed through sexual or asexual reproduction. An auxospore is a unique type of cell that possesses silica bands (perizonia) rather than a rigid silica cell wall. The perizonium allows the cell to expand to its maximum size, then produces a frustule of the normal cell morphology.
Diatoms live in aquatic and semi-aquatic habitats. Some diatoms live as free floating cells in the plankton of ponds, lakes and oceans. Planktonic species often have morphological adaptations that allow them to remain suspended in water. These adaptions to prevent sinking include forming long chains, linked by silica spines. Other species form zig-zag or stellate (star-shaped) colonies that resist sinking.
Other diatom species grow attached to surfaces. They may lie attached to a rock or aquatic plant. Many frustules of these species are shaped in such a way to aid in attachment. Their frustules may be arched or curved to fit nicely to the stem of a piece of aquatic moss.
Some diatom species form a stalk which is attached to a surface. While some species form short stalks, or mucilage pads, others form long branching stalks. The stalks function to hold the cells in place and are resistant to waves or high flow in rivers. Stalks also appear to function to obtain nutrients from the water.
Diatoms that have a raphe system are able to move over benthic surfaces, whether the surfaces are fine grains of sand, or within the mud of a tidal zone, or even on other diatoms. Some diatoms form mucilage tubes and move up and down inside the tubes. Diatoms have differing abilities to move, depending on the species. They are able to travel at different maximum speeds, related to the degree to which the raphe system is developed.
In general, diatom species are very particular about the water chemistry in which they live. In particular, species have distinct ranges of pH and salinity where they will grow. Diatoms also have ranges and tolerances for other environmental variables, including nutrient concentration, suspended sediment, flow regime, elevation, and different types of human disturbance. As a result, diatoms are used extensively in environmental assessment and monitoring. Furthermore, because the silica cell walls do not decompose, diatoms in marine and lake sediments can be used to interpret conditions in the past. Paleoecology is a field that utilizes both living and subfossil diatom valves that are preserved in marine and freshwater sediments. Scientists use living cells to understand the environmental factors that determine the modern presence and abundance. Then, scientists can apply the knowledge of species preferences in modern conditions to interpret the diatom species from the past, and the historical conditions that those species imply.
An excellent DVD with stunning video and still images of diatom biology is available. It is appropriate for more advanced students:
Pickett-Heaps, Jeremy D. and Pickett-Heaps, Julianne. 2003. Diatoms: Life in Glass Houses. Cytographics, 58 minutes. ISBN: 0-9586081-6-4
Useful references include:
Round, F.E., Crawford, R.M. and Mann, D.G. 1990. The Diatoms, Biology and Morphology of the Genera. Cambridge University Press. 747 p.
Smol, J.P. and Stoermer, E.F. 2010. The Diatoms: Applications for Environmental and Earth Sciences. Second Edition, Cambridge University Press. 667 p. | 2026-01-22T23:21:44.519165 |
152,324 | 3.565221 | http://www.chemeurope.com/en/encyclopedia/Maize.html | To use all functions of this page, please activate cookies in your browser.
With an accout for my.chemeurope.com you can always see everything at a glance – and you can configure your own website and individual newsletter.
- My watch list
- My saved searches
- My saved topics
- My newsletter
Maize (IPA: /ˈmeɪz/) (Zea mays L. ssp. mays), known as corn in some countries, is a cereal grain that was domesticated in Mesoamerica and then spread throughout the American continents. Maize spread to the rest of the world after European contact with the Americas in the late 15th century and early 16th century.
The term maize derives from the Spanish form (maíz) of the indigenous Taino term for the plant, and is the form most commonly heard in the United Kingdom. In the United States, Canada and Australia, the usual term is corn, which originally referred to any grain (and still does in Britain), but which now refers exclusively to maize, having been shortened from the form "Indian corn".
Maize is the largest crop in all of the Americas (270 million metric tons annually in the U.S. alone). Hybrid maize is preferred by farmers over conventional varieties for its high grain yield, due to heterosis ("hybrid vigour"). While some maize varieties grow 7 metres (23 ft) tall at certain locations, commercial maize has been bred for a height of 2.5 metres (8 ft). Sweet corn is usually shorter than field-corn varieties.
The stems superficially resemble bamboo canes and the joints (nodes) can reach 20–30 centimetres (8–12 in) apart. Maize has a very distinct growth form; the lower leaves being like broad flags, 50–100 centimetres long and 5–10 centimetres wide (2–4 ft by 2–4 in); the stems are erect, conventionally 2–3 metres (7–10 ft) in height, with many nodes, casting off flag-leaves at every node. Under these leaves and close to the stem grow the ears. They grow about 3 centimetres a day.
The ears are female inflorescences, tightly covered over by several layers of leaves, and so closed in by them to the stem that they do not show themselves easily until the emergence of the pale yellow silks from the leaf whorl at the end of the ear. The silks are elongated stigmas that look like tufts of hair, at first green, and later red or yellow. Certain varieties of maize have been bred to produce many additional developed ears, and these are the source of the "baby corn" that is used as a vegetable in Asian cuisine.
Maize is a facultative long-night plant: it flowers after a certain number of growing degree days (above 50 °F / 10 °C) in the environment to which it is adapted. The magnitude of the influence that long nights have on the number of days that must pass before maize flowers is genetically prescribed and regulated by the phytochrome system. Photoperiodicity can be eccentric in tropical cultivars; in the long days at higher latitudes the plants will grow so tall that they will not have enough time to produce seed before they are killed by frost. These characteristics, however, may prove useful in using tropical maize for biofuels.
The apex of the stem ends in the tassel, an inflorescence of male flowers. Each silk may become pollinated to produce one kernel of corn. Young ears can be consumed raw, with the cob and silk, but as the plant matures (usually during the summer months) the cob becomes tougher and the silk dries to inedibility. By the end of the growing season, the kernels dry out and become difficult to chew without cooking them tender first in boiling water. Modern farming techniques in developed countries usually rely on dense planting, which produces on average only about 0.9 ears per stalk because it stresses the plants. Plantings for silage are even denser, and achieve an even lower percentage of ears and more plant matter.
The kernel of corn has a pericarp of the fruit fused with the seed coat, typical of the grasses. It is close to a multiple fruit in structure, except that the individual fruits (the kernels) never fuse into a single mass. The grains are about the size of peas, and adhere in regular rows round a white pithy substance, which forms the ear. An ear contains from 200 to 400 kernels, and is from 10–25 centimetres (4–10 inches) in length. They are of various colors: blackish, bluish-gray, red, white and yellow. When ground into flour, maize yields more flour, with much less bran, than wheat does. However, it lacks the protein gluten of wheat and therefore makes baked goods with poor rising capability.
A genetic variation that accumulates more sugar and less starch in the ear is consumed as a vegetable and is called sweetcorn.
Immature maize shoots accumulate a powerful antibiotic substance, DIMBOA (2,4-dihydroxy-7-methoxy-1,4-benzoxazin-3-one). DIMBOA is a member of a group of hydroxamic acids (also known as benzoxazinoids) that serve as a natural defense against a wide range of pests, including insects, pathogenic fungi and bacteria. DIMBOA is also found in related grasses, particularly wheat. A maize mutant (bx) lacking DIMBOA is highly susceptible to attack by aphids and fungi. DIMBOA is also responsible for the relative resistance of immature maize to the European corn borer (family Crambidae). As maize matures, DIMBOA levels and resistance to the corn borer decline.
Many forms of maize are used for food, sometimes classified as various subspecies:
This system has been replaced (though not entirely displaced) over the last 60 years by multi-variable classifications based on ever more data. Agronomic data was supplemented by botanical traits for a robust initial classification, then genetic, cytological, protein and DNA evidence was added. Now the categories are forms (little used), races, racial complexes, and recently branches.
Maize has 10 chromosomes (n=10). The combined length of the chromosomes is 1500 cM. Some of the maize chromosomes have what are known as "chromosomal knobs": highly repetitive heterochromatic domains that stain darkly. Individual knobs are polymorphic among strains of both maize and teosinte. Barbara McClintock used these knob markers to prove her transposon theory of "jumping genes", for which she won the 1983 Nobel Prize in Physiology or Medicine. Maize is still an important model organism for genetics and developmental biology today.
There is a stock center of maize mutants, The Maize Genetics Cooperation — Stock Center, funded by the USDA Agricultural Research Service and located in the Department of Crop Sciences at the University of Illinois at Urbana-Champaign. The total collection has nearly 80,000 samples. The bulk of the collection consists of several hundred named genes, plus additional gene combinations and other heritable variants. There are about 1000 chromosomal aberrations (e.g., translocations and inversions) and stocks with abnormal chromosome numbers (e.g., tetraploids). Genetic data describing the maize mutant stocks as well as myriad other data about maize genetics can be accessed at MaizeGDB, the Maize Genetics and Genomics Database.
In 2005, the U.S. National Science Foundation (NSF), Department of Agriculture (USDA) and the Department of Energy (DOE) formed a consortium to sequence the maize genome. The resulting DNA sequence data will be deposited immediately into GenBank, a public repository for genome-sequence data. Sequencing the corn genome has been considered difficult because of its large size and complex genetic arrangements. The genome has 50,000–60,000 genes scattered among the 2.5 billion bases – molecules that form DNA – that make up its 10 chromosomes. (By comparison, the human genome contains about 2.9 billion bases and 26,000 genes.)
There are several theories about the specific origin of maize in Mesoamerica:
The third model (actually a group of hypotheses) is unsupported. The second parsimoniously explains many conundrums but is dauntingly complex. The first model was proposed by Nobel Prize winner George Beadle in 1939. Though it has experimental support, it has not explained a number of problems, among them:
The domestication of maize is of particular interest to researchers—archaeologists, geneticists, ethnobotanists, geographers, etc. The process is thought by some to have started 7,500 to 12,000 years ago (corrected for solar variations). Recent genetic evidence suggests that maize domestication occurred 9000 years ago in central Mexico, perhaps in the highlands between Oaxaca and Jalisco. The wild teosinte most similar to modern maize grows in the area of the Balsas River. Archaeological remains of early maize ears, found at Guila Naquitz Cave in the Oaxaca Valley, date back roughly 6,250 years (corrected; 3450 BCE, uncorrected); the oldest ears from caves near Tehuacan, Puebla, date ca. 2750 BCE. Little change occurred in ear form until ca. 1100 BCE when great changes appeared in ears from Mexican caves: maize diversity rapidly increased and archaeological teosinte was first deposited.
Perhaps as early as 1500 BCE, maize began to spread widely and rapidly. As it was introduced to new cultures, new uses were developed and new varieties selected to better serve in those preparations. Maize was the staple food, or a major staple, of most of the pre-Columbian North American, Mesoamerican, South American, and Caribbean cultures. Mesoamerican civilization was built upon the field crop of maize: its harvest, its religious and spiritual importance, and its impact on diet made maize central to the Mesoamerican people's identity. During the 1st millennium CE (AD), maize cultivation spread from Mexico into the Southwest, and a millennium later into the Northeast and southeastern Canada, transforming the landscape as Native Americans cleared large forest and grassland areas for the new crop.
It is unknown what precipitated its domestication, because the edible portion of the wild variety is too small and hard to obtain to be eaten directly, as each kernel is enclosed in a very hard bi-valve shell. However, George Beadle demonstrated that the kernels of teosinte are readily "popped" for human consumption, like modern popcorn. Some have argued that it would have taken too many generations of selective breeding in order to produce large compressed ears for efficient cultivation. However, studies of the hybrids readily made by intercrossing teosinte and modern maize suggest that this objection is not well-founded.
In 2005, research by the USDA Forest Service indicated that the rise in maize cultivation 500 to 1,000 years ago in the southeastern United States contributed to the decline of freshwater mussels, which are very sensitive to environmental changes.
Maize is widely cultivated throughout the world, and a greater weight of maize is produced each year than any other grain. While the United States produces almost half of the world's harvest, other top producing countries are as widespread as China, Brazil, France, Indonesia, India and South Africa. Worldwide production was over 600 million metric tons in 2003 — just slightly more than rice or wheat. In 2004, close to 33 million hectares of maize were planted worldwide, with a production value of more than $23 billion.
Because it is cold-intolerant, in the temperate zones maize must be planted in the spring. Its root system is generally shallow, so the plant is dependent on soil moisture. As a C4 plant (a plant that uses C4 photosynthesis), maize is a considerably more water-efficient crop than C3 plants like the small grains, alfalfa and soybeans. Maize is most sensitive to drought at the time of silk emergence, when the flowers are ready for pollination. In the United States, a good harvest was traditionally predicted if the corn was "knee-high by the Fourth of July", although modern hybrids generally exceed this growth rate. Maize used for silage is harvested while the plant is green and the fruit immature. Sweet corn is harvested in the "milk stage", after pollination but before starch has formed, between late summer and early to mid-autumn. Field corn is left in the field very late in the autumn in order to thoroughly dry the grain, and may, in fact, sometimes not be harvested until winter or even early spring. The importance of sufficient soil moisture is shown in many parts of Africa, where periodic drought regularly causes famine by causing maize crop failure.
Maize was planted by the Native Americans in hills, in a complex system known to some as the Three Sisters: beans used the corn plant for support, and squashes provided ground cover to stop weeds. This method was replaced by single-species hill planting, where each hill 60–120 cm (2–4 ft) apart was planted with 3 or 4 seeds, a method still used by home gardeners. A later technique was checked corn, where hills were placed 40 inches apart in each direction, allowing cultivators to run through the field in two directions. In more arid lands this was altered, and seeds were planted in the bottom of 10–12 cm (4–5 in) deep furrows to collect water. The modern technique plants maize in rows, which allows for cultivation while the plant is young, although the hill technique is still used in the cornfields of some Native American reservations.
In North America, fields are often planted in a two-crop rotation with a nitrogen-fixing crop, often alfalfa in cooler climates and soybeans in regions with longer summers. Sometimes a third crop, winter wheat, is added to the rotation. Fields are usually plowed each year, although no-till farming is increasing in use. Many of the maize varieties grown in the United States and Canada are hybrids. Over half of the corn area planted in the United States has been genetically modified using biotechnology to express agronomic traits such as pest resistance or herbicide resistance.
Before World War II, most maize in North America was harvested by hand (as it still is in most of the other countries where it is grown). This often involved large numbers of workers and associated social events. Some one- and two-row mechanical pickers were in use, but the corn combine was not adopted until after the war. By hand or mechanical picker, the entire ear is harvested, which then requires a separate operation of a corn sheller to remove the kernels from the ear. Whole ears of corn were often stored in corn cribs, and these whole ears are a sufficient form for some livestock feeding uses. Few modern farms store maize in this manner. Most harvest the grain from the field and store it in bins. The combine with a corn head (with points and snap rolls instead of a reel) does not cut the stalk; it simply pulls the stalk down. The stalk continues downward and is crumpled into a mangled pile on the ground. The ear of corn is too large to pass through a slit in a plate, and the snap rolls pull the ear of corn from the stalk so that only the ear and husk enter the machinery. The combine separates out the husk and the cob, keeping only the kernels.
When maize was first introduced outside of the Americas it was generally welcomed with enthusiasm by farmers everywhere for its productivity. However, a widespread problem of malnutrition soon arose wherever maize was introduced. This was a mystery since these types of malnutrition were not seen among the indigenous Americans under normal circumstances.
It was eventually discovered that the indigenous Americans learned long ago to add alkali — in the form of ashes among North Americans and lime (calcium carbonate) among Mesoamericans — to corn meal to liberate the B-vitamin niacin, the lack of which was the underlying cause of the condition known as pellagra. This alkali process is known by its Nahuatl (Aztec)-derived name: nixtamalization.
Besides the lack of niacin, pellagra was also characterized by protein deficiency, a result of the inherent lack of two key amino acids in pre-modern maize, lysine and tryptophan. Nixtamalization was also found to increase the lysine and tryptophan content of maize to some extent, but more importantly, the indigenous Americans had learned long ago to balance their consumption of maize with beans and other protein sources such as amaranth and chia, as well as meat and fish, in order to acquire the complete range of amino acids for normal protein synthesis.
Since maize had been introduced into the diet of non-indigenous Americans without the necessary cultural knowledge acquired over thousands of years in the Americas, the reliance on maize elsewhere was often tragic. In the late 19th century pellagra reached endemic proportions in parts of the deep southern U.S., as medical researchers debated two theories for its origin: the deficiency theory (eventually shown to be true) posited that pellagra was due to a deficiency of some nutrient, and the germ theory posited that pellagra was caused by a germ transmitted by stable flies. In 1914 the U.S. government officially endorsed the germ theory of pellagra, but rescinded this endorsement several years later as evidence grew against it. By the mid-1920s the deficiency theory of pellagra was becoming scientific consensus, and the theory was proved in 1932 when niacin deficiency was determined to be the cause of the illness.
Once alkali processing and dietary variety were understood and applied, pellagra disappeared. The development of high-lysine maize and the promotion of a more balanced diet have also contributed to its demise.
Pests of maize
The susceptibility of maize to the European corn borer, and the resulting large crop losses, led to the development of transgenic maize expressing the Bacillus thuringiensis toxin. "Bt corn" is widely grown in the United States and has been approved for release in Europe.
Uses for maize
In the United States and Canada, the primary use for maize is as a feed for livestock, whether as forage, silage, or grain. "Feed corn" is also increasingly used for heating; specialized corn stoves (similar to wood stoves) are available and use either feed corn or wood pellets to generate heat. Silage is made by fermentation of chopped green cornstalks. The grain also has many industrial uses, including transformation into plastics and fabrics. Some is hydrolyzed and enzymatically treated to produce syrups, particularly high fructose corn syrup, a sweetener, and some is fermented and distilled to produce grain alcohol. Grain alcohol from maize is traditionally the source of bourbon whiskey. Increasingly, ethanol is used at low concentrations (10% or less) as an additive in gasoline (gasohol) for motor fuels to increase the octane rating, lower pollutants, and reduce petroleum use. Such "biofuels" have generated an intense debate that weighs the need for new sources of energy against the need to maintain, in regions such as Latin America, the food habits and culture that have been the essence of civilizations such as the one that originated in Mesoamerica. The entry of maize, in January 2008, into the commercial agreements of NAFTA has sharpened this debate, given the poor labor conditions of workers in the fields and, chiefly, the fact that NAFTA "opened the doors to the import of corn from the United States, where the farmers who grow it receive multi-million dollar subsidies and other government supports. (...) According to OXFAM UK, after NAFTA went into effect, the price of maize in Mexico fell 70% between 1994 and 2001. The number of farm jobs dropped as well: from 8.1 million in 1993 to 6.8 million in 2002. Many of those who found themselves without work were small-scale maize growers." However, the introduction into the northern latitudes of the U.S. of tropical maize intended for biofuels, and not for human or animal consumption, may potentially alleviate this.
Human consumption of corn and cornmeal constitutes a staple food in many regions of the world. Corn meal is made into a thick porridge in many cultures: from the polenta of Italy, the angu of Brazil, and the mămăligă of Romania, to mush in the U.S. and the foods called sadza, nshima, ugali, and mealie pap in Africa. It is the main ingredient for tortillas, atole, and many other dishes of Mexican food, and for chicha, a fermented beverage of Central and South America. The eating of corn on the cob varies culturally; it is common in the United States but virtually unheard of in some European countries.
Sweetcorn is a genetic variation that is high in sugars and low in starch that is served like a vegetable. Popcorn is kernels of certain varieties that explode when heated, forming fluffy pieces that are eaten as a snack.
Maize can also be prepared as hominy, in which the kernels are bleached with lye; or grits, which are coarsely ground corn. These are commonly eaten in the Southeastern United States, foods handed down from Native Americans. Another common food made from maize is corn flakes. The floury meal of maize (cornmeal or masa) is used to make cornbread and Mexican tortillas. Teosinte is used as fodder, and can also be popped as popcorn.
Some forms of the plant are occasionally grown for ornamental use in the garden. For this purpose, variegated and coloured leaf forms as well as those with colourful ears are used. Additionally, size-superlative varieties, having reached 31 ft (9.4 m) tall or with ears 24 inches (60 cm) long, have been popular for at least a century.
Corncobs can be hollowed out and treated to make inexpensive smoking pipes, first manufactured in the United States in 1869. Corncobs are also used as a biomass fuel source. Maize is relatively cheap and home-heating furnaces have been developed which use maize kernels as a fuel. They feature a large hopper which feeds the uniformly sized corn kernels (or wood pellets or cherry pits) into the fire.
An unusual use for maize is to create a "maize maze" as a tourist attraction: a maze cut into a field of maize. The idea was introduced by Adrian Fisher, one of the most prolific designers of modern mazes, with The American Maze Company, which created a maze in Pennsylvania in 1993. Traditional mazes are most commonly grown using yew hedges, but these take several years to mature. The rapid growth of a field of maize allows a maze to be laid out using GPS at the start of a growing season and for the maize to grow tall enough to obstruct a visitor's line of sight by the start of the summer. In Canada and the U.S., these are called "corn mazes" and are popular in many farming communities.
Maize is increasingly used as a feedstock for biomass fuels such as ethanol; as researchers search for innovative ways to reduce fuel costs, this demand has unintentionally caused a rapid rise in food costs and made the 2007 harvest one of the most profitable corn crops in modern history for farmers. A biomass gasification power plant in Strem near Güssing, Burgenland, Austria, was begun in 2005. Research is being done to make diesel out of the biogas by the Fischer-Tropsch method.
Maize is also used as fish bait called "dough balls". It is particularly popular in Europe for coarse fishing.
Stigmas from female corn flowers, known popularly as corn silk, are sold as herbal supplements.
Corn kernels can be used in place of sand in a sandbox-like enclosure for children's play.
Maize and art
Maize has been an essential crop in the Andes since the pre-Columbian era. The Moche culture of northern Peru made ceramics from earth, water, and fire. This pottery was a sacred substance, formed in significant shapes and used to represent important themes. Maize was represented anthropomorphically as well as naturally.
| 2026-01-20T14:21:10.118891 |
675,872 | 3.746591 | http://www.nature.nps.gov/Geology/parks/kefj/ | Glacier-carved Valleys Filled with Ocean Waters
The Kenai Fjords are coastal mountain fjords whose placid seascapes reflect scenic icebound landscapes and whose salt spray mixes with mountain mist. Located on the southeastern Kenai Peninsula, the national park is a pristine and rugged land supporting many unaltered natural environments and ecosystems. The land boasts
- an icefield wilderness,
- unnamed waterfalls in unnamed canyons,
- glaciers that sweep down narrow mountain valleys, and
- a coastline along which thousands of seabirds and marine mammals raise their young each year.
Kenai Fjords National Park derives its name from the long, steep-sided, glacier-carved valleys that are now filled with ocean waters. The seaward ends of the Kenai Mountains are slipping into the sea, being dragged under by the collision of two tectonic plates of the Earth's crust. What were once alpine valleys filled with glacier ice are now deepwater mountain-flanked fjords. The forces that caused this land to submerge are still present. In 1964, the Alaskan Good Friday earthquake dropped the shoreline another six feet in just one day. As the land sinks into the ocean, glacier-carved cirques are turned into half-moon bays and mountain peaks are reduced to wave-beaten islands and stacks.
Though the land is subsiding, a mountain platform one mile high still forms the coast's backdrop. The mountains are mantled by the 300-square-mile Harding Icefield, the park's dominant feature. The icefield was not discovered until early in the twentieth century, when a mapping team realized that several coastal glaciers belonged to the same massive system. Today's icefield measures some 35 miles long by 20 miles wide. Only isolated mountain peaks interrupt its nearly flat, snowclad surface. These protruding nunataks (an Eskimo word meaning "lonely peaks") rise dramatically from the frozen clutches of the Ice Age.
The mountains intercept moisture-laden clouds, which replenish the icefield with 35-65 feet of snow annually. Time and the weight of overlying snow transform the snow into ice. The pull of gravity and the weight of the snowy overburden make the ice flow out in all directions. It is squeezed into glaciers that creep downward like giant bulldozers, carving and gouging the landscape. Along the coast eight glaciers reach the sea, and these tidewater glaciers calve icebergs into the fjords. The thunderous boom of calving ice can sometimes be heard 20 miles away.
The park's wildlife is as varied as its landscape.
- Mountain goats, moose, bears, wolverines, marmots, and other land mammals have re-established themselves on a thin life zone between marine waters and the icefield's frozen edges.
- Bald eagles nest in the tops of spruce and hemlock trees.
- Steller sea lions haul out on rocky islands at the entrances to Aialik and Nuka Bays.
- Harbor seals ride the icebergs.
- Dall porpoises, sea otters, and gray, humpback, killer, and minke whales ply the fjord waters.
- Halibut, lingcod, and black bass lurk deep in these waters, through which salmon return for inland spawning runs.
- Thousands of seabirds, including horned and tufted puffins, black-legged kittiwakes, common murres, and the ubiquitous gulls, seasonally inhabit steep cliffs and rocky shores.
Exit Glacier, the remnant of a larger glacier once extending to Resurrection Bay, is one of several rivers of ice flowing off the icefield. Active, yet retreating, it provides the perfect setting to explore. Here are found newly exposed, scoured, and polished bedrock and a regime of plant succession from the earliest pioneer plants to a mature forest of Sitka spruce and western hemlock.
Humans have had little lasting impact on this environment, although the park includes a few Native American archeological sites and isolated gold extraction locations. The park's overwhelming significance is as a living laboratory of change. Plants and wildlife subsist here amidst dynamic interactions of water, ice, and a glacier-carved landscape relentlessly pulled down by the Earth's crustal movements. The Harriman Expedition, a steamship-borne venture visiting the fjords in 1899, predicted this area's future value as a scenic tourist attraction. To protect this life and landscape, a national monument was proclaimed in 1978, and the 580,000-acre Kenai Fjords National Park was established in 1980.
The general park map handed out at the visitor center is available on the park's map webpage. For information about topographic maps, geologic maps, and geologic data sets, please see the geologic maps page.
A photo album for this park can be found here. For information on other photo collections featuring National Park geology, please see the Image Sources page.
Currently, we do not have a listing for a park-specific geoscience book. The park's geology may be described in regional or state geology texts.
Parks and Plates: The Geology of Our National Parks, Monuments & Seashores.
Lillie, Robert J., 2005.
W.W. Norton and Company.
9" x 10.75", paperback, 550 pages, full color throughout
The spectacular geology in our national parks provides the answers to many questions about the Earth. The answers can be appreciated through plate tectonics, an exciting way to understand the ongoing natural processes that sculpt our landscape. Parks and Plates is a visual and scientific voyage of discovery!
Ordering from your National Park Cooperative Associations' bookstores helps to support programs in the parks. Please visit the bookstore locator for park books and much more.
Information about the park's research program is available on the park's research webpage.
For information about permits that are required for conducting geologic research activities in National Parks, see the Permits Information page.
The NPS maintains a searchable database of research needs that have been identified by parks.
A bibliography of geologic references is being prepared for each park through the Geologic Resources Evaluation Program (GRE). Please see the GRE website for more information and contacts.
NPS Geology and Soils Partners
Association of American State Geologists
Geological Society of America
Natural Resource Conservation Service - Soils
U.S. Geological Survey
Currently, we do not have a listing for any park-specific geology education programs or activities.
General information about the park's education and interpretive programs is available on the park's education webpage. For resources and information on teaching geology using National Park examples, see the Students & Teachers pages. | 2026-01-28T16:32:44.056725 |
270,771 | 3.554424 | http://ldpride.net/emotions.htm |
Adults growing up with a learning disability often feel a sense of shame. For some, it is a great relief to receive the diagnosis, while for others the label only serves to further stigmatize them. For many adults, especially older adults, an accurate diagnosis was unavailable. These individuals were frequently labeled as mentally retarded, written off as being unable to learn, and most passed through the school system without acquiring basic academic skills.
These feelings of shame often cause the individual to hide their difficulties. Rather than risk being labeled as stupid or accused of being lazy, some adults deny their learning disability as a defense mechanism. Internalized negative labels of stupidity and incompetence usually result in a poor self-concept and lack of confidence (Gerber, Ginsberg, & Reiff, 1992).
Some adults feel ashamed of the type of difficulties they are struggling to cope with, such as basic literacy skills, slow processing, attention difficulties, chronic forgetfulness, organizational difficulties, etc.
Many myths about learning disabilities have perpetuated the general public's negative perception of learning disabilities:
People with learning disabilities have below average intelligence and cannot learn.
People with learning disabilities have average to above average intelligence (Gerber. 1998). In fact, studies indicate that as many as 33% of students with LD are gifted (Baum, 1985; Brody & Mills, 1997; Jones, 1986). With proper recognition, intervention and lots of hard work, children and adults with learning disabilities can learn and succeed!
Learning disabilities are just an excuse for irresponsible, unmotivated or lazy people.
Learning disabilities are caused by neurological impairments, not character flaws. In fact, the National Information Centre for Children and Youth with Disabilities makes a point of saying that people with learning disabilities are not lazy or unmotivated (NICHCY, 2002).
Learning disabilities only affect children. Adults grow out of learning disabilities.
It is now known that LD continues throughout the individual's lifespan and may even intensify in adulthood as tasks and environmental demands change (Michaels, 1994a). Sadly, many adults, especially older adults, have never been diagnosed with a learning disability. In fact, the majority of people with learning disabilities are not diagnosed until they reach adulthood (LDA, 1996).
Dyslexia and learning disability are the same thing.
Dyslexia is a type of learning disability; it is not another term for learning disability. It is a specific language-based disorder affecting a person's ability to read, write, and verbally express themselves. Unfortunately, careless use of the term has expanded it so that it has become, for some, an equivalent for "learning disability".
Learning disabilities are only academic in nature. They do not affect other areas of a person's life.
Some people with learning disabilities have isolated difficulties in reading, writing or mathematics. However, most people with learning disabilities have more than one area of difficulty. Dr. Larry Silver asserts that "learning disabilities are life disabilities". He writes, "The same disabilities that interfere with reading, writing, and arithmetic also will interfere with sports and other activities, family life, and getting along with friends" (Silver, 1998).
Typically, students with LD have other major difficulties in one or more of the following areas:
Many adults with learning disabilities have difficulty in performing basic everyday living tasks such as shopping, budgeting, filling out a job application form or reading a recipe. They may also have difficulty with making friends and maintaining relationships. Vocational and job demands create additional challenges for young people with learning disabilities.
Adults with learning disabilities cannot succeed in higher education.
More and more adults with learning disabilities are going to college or university and succeeding (Gerber and Reiff, 1994). With the proper accommodations and support, adults with learning disabilities can be successful in higher education.
Another emotional difficulty for adults with learning disabilities is fear. This emotion is often masked by anger or anxiety. Tapping into the fear behind the anger and/or the anxiety response is often the key for adults to cope with the emotional fallout of learning disabilities.
- fear of being found out
- fear of failure
- fear of judgment or criticism
Fear of Being Found Out
Adults with learning disabilities live with the fear of being found out. They develop coping strategies to hide their disability. For example, an adult who can hardly read might pretend to read a newspaper. Other adults may develop gregarious personalities to hide their difficulties or focus on other abilities that do not present learning barriers. Unfortunately, some adults will have developed negative strategies, such as quitting their job rather than risking the humiliation of being terminated because their learning disability makes it difficult for them to keep up with the demands of the job.
The fear of being found out is particularly troublesome for many older adults who have never been diagnosed with a learning disability or those who received inappropriate support. Such adults were frequently misdiagnosed with mental retardation, inappropriately placed in programs for the mentally disabled, and/or stigmatized by teachers and classmates. In later life, these adults often return to learning through adult literacy programs in order to make up for lost educational opportunities. Seeking help is a difficult step forward for these adults because it requires them to stop hiding their disability. The simple act of entering a classroom can be an anxiety-producing experience for adults who have been wrongly labeled and/or mistreated by the educational system. For these adults, returning to a learning environment is truly an act of courage!
Low literacy skills and academic difficulties are not the only types of learning disabilities adults try to hide. Adults with social skill difficulties may live in constant fear of revealing social inadequacies. For example, an adult who has trouble understanding humour may pretend to laugh at a joke even though they don't understand it. They may also hide their social difficulties by appearing to be shy and withdrawn. On the other hand, hyperactive adults may cover up their attention difficulties by using a gregarious personality to entertain people.
Fear of Failure
The National Adult Literacy Survey, 1992, found that 58% of adults with self-reported learning disabilities lacked the basic functional reading and writing skills needed to experience job and academic success (Kirsch, 1993). Most of these adults have not graduated high school due to the failure of the school system to recognize and/or accommodate their learning disability. Needless to say, adult literacy programs are a second chance to learn the basic academic skills missed in public school. As mentioned above, going back into an educational environment is often a fearful experience for adults with learning disabilities. One of the main reasons for this is the fear of failure. Many adults reason that, if they have failed before, what is to stop them failing again, and, if they do fail again, then this failure must mean they, themselves, are failures. The tendency for adults with learning disabilities to personalize failure (i.e., failure makes ME a failure) is perhaps the biggest self-esteem buster for adult learners. Educators need to be aware of these fears to help learners understand that failure does not make them a failure and that making mistakes is a part of the learning process.
For many people, anxiety about failing is what motivates them to succeed, but for people with learning disabilities this anxiety can be paralyzing. Fear of failure may prevent adults with learning disabilities from taking on new learning opportunities. It might prevent them from participating in social activities, taking on a new job opportunity or enrolling in an adult education program.
One characteristic that often helps adults overcome their fear of failure is their ability to come up with innovative strategies to learn and solve problems. These strategies are often attributed to the "learned creativity" that many adults with learning disabilities develop in order to cope with the vocational, social and educational demands in their everyday lives (Gerber, Ginsberg, & Reiff, 1992).
Fear of Ridicule
Adults with learning disabilities frequently fear the ridicule of others. Sadly, these fears often develop after the individual has been routinely ridiculed by teachers, classmates or even family members. The most crushing of these criticisms usually relate to a perceived lack of intelligence or unfair judgments about the person's degree of motivation or ability to succeed. For example, comments such as "you'll never amount to anything" or "you could do it if you only tried harder," or the taunting of classmates about being in the "mental retard" class, have enormous emotional effects on individuals with learning disabilities. For many of these adults, especially those with unidentified learning disabilities, these and other negative criticisms continue to affect their emotional well-being into their adult years. It is not uncommon for adults to internalize the negative criticisms and view themselves as dumb, stupid, lazy, and/or incompetent. Such negative criticisms often fuel the fear adults with learning disabilities have about being found out.
Fear of Rejection
Adults with learning disabilities frequently fear rejection if they are not seen to be as capable as others. If they come from a middle- to upper-class family where academic achievement is a basic expectation for its members, fear of rejection may be a very real concern. They may also fear that their social skill deficits will preclude them from building meaningful relationships with others and may lead to social rejection. Prior experiences of rejection will likely intensify this sense of fear.
3. Environmental and Emotional Sensitivity
Adults are often overwhelmed by too much environmental stimulation (e.g., background noise, more than one person talking at a time, side conversations, reading and listening at the same time). Many people with LD and ADD have specific sensitivities to their environment, such as certain fabrics they cannot wear, foods they cannot tolerate, etc.
Many adults with learning disabilities see themselves as more emotionally sensitive than other people. In its most extreme form, a high level of emotional sensitivity is both a blessing and a weakness. The positive features of this trait help adults with learning disabilities build meaningful relationships with others. For example, they are often very intuitive and in tune with both their own and other people's emotions. Sometimes they are actually able to perceive others' thoughts and feelings. However, this strength also serves as a weakness due to its propensity to overwhelm the individual. Emotional difficulties occur when they are unable to cope with the onslaught of emotions they are feeling. Highly sensitive adults with LD may be moved to tears more easily or feel their own and other people's pain more deeply. For example, Thomas West, author of "In the Mind's Eye", not only gives a thorough explanation of Winston Churchill's learning disability but also describes his sensitive nature. West details Churchill's tendency to "break into tears quite easily" (West, 1997), even out in the public eye. He notes one incident in which Churchill was moved to tears after witnessing the devastating effects of a bomb.
This description of Churchill also serves to highlight the strong sense of justice that many adults with learning disabilities possess. Unfortunately, this sense of justice often serves as a double-edged sword. On one hand, it is refreshing to behold the passion of many of these individuals in their fight to overcome injustice. On the other hand, this very passion, when it crosses the line into aggression, can cause social rejection and/or emotional overload. Often the individual may be unaware that their behavior has turned aggressive. They only wish to make their point known and have others understand it. This type of overreaction is not a purposeful attempt to hurt anybody. It is more likely to be caused by a difficulty with monitoring their emotions and consequent behavior.
4. Emotional Regulation
Difficulties with regulating emotions are common for highly sensitive adults with learning disabilities. Dr. Kay Walker describes the connection between learning disabilities and self-regulation problems in her paper "Self Regulation and Sensory Processing for Learning, Attention and Attachment." She asserts that self-regulation problems frequently occur in those with learning disabilities (Walker, 2000). In its most extreme form, individuals may easily shift from one emotion to the next. Others may experience difficulty regulating impulsive thoughts or actions.
Fortunately, most adults have learned to handle their emotional sensitivity to avoid becoming overwhelmed or engaging in negative social interactions. Nevertheless, some adults may be so deeply affected that they become depressed or suffer from anxiety. A lack of school, job and/or social success will likely add to this emotional burden. Some adults with LD, especially those who have been ridiculed by their family members, teachers and/or peers, may be more apt to take criticism to heart because of their experiences and/or their ultra-sensitive nature. Emotional wounds from childhood and youth may cause heightened emotional responses to rejection. In turn, social anxiety and social phobia may result.
5. Difficulty Adjusting to Change
Change is scary for everyone, but for people with learning disabilities and other neurological disabilities, change may be particularly difficult. Children with learning disabilities may prefer procedures to stay the same and have a hard time moving from one activity to another. Usually this difficulty becomes less of an issue as the child matures. However, adults with learning disabilities may still experience difficulty adjusting to change in more subtle ways. For example, some adults will have trouble moving from one work task to another without completely finishing the first task before moving on to the next one. Adults with learning disabilities are frequently described as inflexible when it comes to considering another person's viewpoint or a different way of doing something.
Change is difficult for adults with LD partly because it brings the unexpected. In general, people with learning disabilities are less prepared for the unexpected. The unexpected may bring new learning hurdles, new job demands or new social challenges. Since all these areas can be affected by learning disabilities, it is no wonder that change can produce so much anxiety for adults with learning disabilities.
To avoid the tendency to blame the person for their lack of flexibility, it is important to understand the neurological basis for this difficulty with adjusting to change. With this said, through social skills practice, adults with learning disabilities can improve their ability to tolerate change. In addition, parents, instructors, and other professionals can help adults with learning disabilities by making transition processes easier through understanding and accommodating the adults' needs.
LDA (1996). They Speak for Themselves: A Survey of Adults with Learning Disabilities. Pittsburgh, PA: Shoestring Press.
Baum, S. (1985). Learning disabled students with superior cognitive abilities: A validation study of descriptive behavior. Unpublished doctoral dissertation, University of Connecticut, Storrs.
Brody, L. E. & Mills, C. J. (1997). Gifted children with learning disabilities: A review of the issues. Journal of Learning Disabilities, 30(3), 282-296.
Gerber, P. J., Ginsberg, R., & Reiff, H. B. (1992). Identifying alterable patterns in employment success for highly successful adults with learning disabilities. Journal of Learning Disabilities, 25(8), 475-487.
Gerber, P. J. (1998). Trials and tribulations of a teacher with learning disabilities through his first two years of employment. In R. J. Anderson, C. E. Keller, & J. M. Carp (Eds.), Enhancing diversity: Educators with disabilities (pp. 41-59). Washington, DC: Gallaudet University Press.
Gerber, P. J., and Reiff, H., eds. (1994). Learning Disabilities in Adulthood: Persisting Problems and Evolving Issues. Stoneham, MA: Butterworth-Heinemann.
Jones H. B., (1986). The gifted Dyslexic. Annals of Dyslexia, 36, 301-317
Kirsch, Irwin S., Ann Jungeblut, Lynn Jenkins, et al. (1993) Adult Literacy in America: A First Look at the Results of the National Adult Literacy Survey, (pg. 44) U.S. Department of Education, NCES, Washington, DC.
Michaels, C. A. (1994a) Transition strategies for persons with learning disabilities. San Diego, CA.
National Information Centre for Children and Youth with Disabilities. (2002). General information about learning disabilities (pg. 1). Fact sheet #7. Retrieved November 2, 2002, from http://www.ldonline.org/ld_indepth/general_info/nichcy_fs7.pdf
Silver, L. B. (1998) The Misunderstood Child: Understanding and Coping With Your Child's Learning Disabilities 3rd edition, NY: Random House Books.
Walker, K. (2000) Self Regulation and Sensory Processing for Learning, Attention and Attachment . Occupational Therapy Department, University of Florida.
West, T. G. (1997). In the Mind's Eye: Visual Thinkers, Gifted People with Dyslexia and Other Learning Difficulties, Computer Images, and the Ironies of Creativity. Amherst, NY: Prometheus Books. | 2026-01-22T08:34:11.421324 |
835,022 | 3.723813 | http://historymatters.gmu.edu/mse/maps/what.html | Maps can be an important source of primary information for historic investigation. But what is a map? This is a deceptively simple question, until you're asked to provide an answer -- you may find it far more difficult than you think. Yet we encounter maps on a daily basis. The media uses them to pinpoint the location of the latest international crisis, many textbooks include them as illustrations, and we consult maps to help us navigate from place to place. Maps are so commonplace that we tend to take them for granted. Yet sometimes the familiar is far more complex than it appears. "What is a map?" has more than one answer.
Norman Thrower, an authority on the history of cartography, defines a map as, "A representation, usually on a plane surface, of all or part of the earth or some other body showing a group of features in terms of their relative size and position."* This seemingly straightforward statement represents a conventional view of maps. From this perspective, maps can be seen as mirrors of reality. To the student of history, the idea of a map as a mirror image makes maps appear to be ideal tools for understanding the reality of places at different points in time. However, there are a few caveats concerning this view of maps. True, a map is an image of a place at a particular point in time, but that place has been intentionally reduced in size, and its contents have been selectively distilled to focus on one or two particular items. The results of this reduction and distillation are then encoded into a symbolic representation of the place. Finally, this encoded, symbolic image of a place has to be decoded and understood by a map reader who may live in a different time period and culture. Along the way from reality to reader, maps may lose some or all of their reflective capacity or the image may become blurred.
So what is a map? A map is text. John Pickles, a geographer with interests in social power and maps, suggests:
In this view, maps are a form of symbolization, governed by a set of conventions, that aim to communicate a sense of place. To fully understand a map we need to know how to decode its message and place it within its proper spatial, chronological, and cultural contexts. Maps, even modern maps, are historic. They represent a particular place at a particular point in time. This definition of a map (which, like the mirror-image idea, is also problematic) suggests that maps can afford the viewer a great opportunity to gain insights into the nature of places.
Why do relatively few scholars outside of geography use maps and why do maps intimidate people? Michael Peterson, a cartographer and professor of Geography at the University of Nebraska, Omaha, raises a critical issue that may also help to explain why maps are not utilized. He asserts that even highly educated people have trouble using maps and that more than half lack "basic" map competency. Peterson concludes that, "Most people are essentially map illiterate." (See Michael P. Peterson's article, "Cartography and the Internet: Implications for Modern Cartography"). My own experience teaching geography courses for more than thirty years substantiates Peterson's assertions. Students often lack the basic skills necessary to read maps, much less the analytical skills needed to grasp the insights that maps can afford. This guide aims to help provide those basic skills. | 2026-01-31T06:00:38.986265 |
487,108 | 3.582522 | http://www.teacherspayteachers.com/Product/Scientist-Lingo-Science-vocabulary-used-by-scientists-scientific-method-342605 | Academic vocabulary can be quite difficult for students, especially in science. This product uses many of the common words that scientists use. The list of words can be seen in the bingo thumbnail picture. Some include: measure, model, hypothesis, predict, results, etc. Some of these materials can be reused if you have a substitute teacher. Great review!
Supplementary materials. This product includes:
* One printable bingo board, which the teacher photocopies and students copy the words themselves onto the board (mixing them up).
* Words for teachers to display on an overhead/Elmo/pocket chart while calling out the words from bingo. These words can also be used for a word wall.
* Teacher hints to use for the words.
* A Mad Lab Science Game: a partner game that has students apply some of the words. This will require reading on the part of the students.
* A simple matching game.
* A word scramble with 8 of these words.
* A word search.
Please ask any and all questions before (or after) purchasing. I hope your students greatly benefit from Scientist Lingo and that it enables your student scientists to use this terminology throughout the school year. :)
I hope your class enjoys it. The product is intended for 2nd grade and above. The academic vocabulary is getting harder, and my 2nd graders have to know these words. They are hard, but they are part of the curriculum. With all the standards from my state and also Common Core, a more robust vocabulary is expected. The thumbnail (in the description) shows the words used in the product. Thank you for your feedback.
November 27, 2012
This is a great way to teach students science terms. Thank you!
You are welcome! After teaching the terms, I am reusing some of the games as a review for my students if they should have a substitute teacher. They already know how to play and these are words that need to be reinforced throughout their science year. | 2026-01-25T20:03:39.075340 |
15,869 | 4.212487 | http://edu.lva.virginia.gov/virginia-women-in-history/for-educators/ | Using the Primary Resources Associated with the Virginia Women in History Program
The Library of Virginia hopes that the Virginia Women in History program can be a part of your classroom curriculum. Each honoree’s page features a concise biography along with an image. These tools can help your students sharpen their analytical skills as they learn about and interpret the lives and contributions of each honoree.
Middle School Interviews
This is a creative writing exercise for middle and high school students. It will allow them to generate questions and to conduct interviews of their classmates based on the lives of Virginia Women honorees.
- Virginia Standards of Learning: VS.8(b), VS.9(c), English 8.1(a-d)
- National History Standards: Era 9 4A Grades 7–12
Prompt: Choose one of the Virginia Women honorees that you would like to learn more about. Imagine that you are going to interview this person for the local newspaper. What questions would you ask? Consider the person's background and achievements and decide which questions are most relevant to you and your community. Think about what people in your community would want to know about this person. After you have written down your questions, imagine that you are the person. How would you answer the interview questions?
For an additional activity, team up with a partner to take turns being the honoree and answering each other’s interview questions.
Create a Display Using the Educational Poster
Are you looking for a Women’s History Month display for your classroom or library? Have your students print the poster featuring this year’s eight honorees, and then use that to create a display. Just click to download, print, and post the pages on the bulletin board in your class, library, or office.
Drawing from the biographies and supporting materials of three of the honorees, develop an investigative process that summarizes their accomplishments and provides students with information that they can use in a summary discussion.
- Which three women faced the most challenges to their achievements? Why do you think their obstacles were greater? Consider factors such as an honoree’s race or socioeconomic class.
- How do you think these women perceived their accomplishments? Is it within the sphere of what was expected of women during the era in which they lived? What is your evidence? Would they agree with your perception of their accomplishments?
- Do modern honorees have as many obstacles to face as historic ones? Why or why not? Use specific examples from the biographies and outside historic knowledge.
Don’t Forget Next Year! Nominate an Honoree
We want to partner with your class and encourage your students to nominate an outstanding woman from Virginia for the 2019 Virginia Women in History program. We are offering an award to a school and teacher that submit a successful nomination for the 2019 program. If your nominee is chosen, the nominating teacher will be eligible to win $250 toward school supplies, along with a complimentary three-volume set of the Dictionary of Virginia Biography (a $120 value) and other Library of Virginia publications for the school library. The teacher will also be recognized at the Virginia Women in History reception. Honoree Nomination. | 2026-01-18T13:34:19.763064 |
40,800 | 4.672068 | http://www.newworldencyclopedia.org/entry/Acceleration | In physics, acceleration is defined as the rate of change of velocity—that is, the change of velocity with time. An object is said to undergo acceleration if it is changing its speed or direction or both. A device used for measuring acceleration is called an accelerometer.
An object traveling in a straight line undergoes acceleration when its speed changes. An object traveling in a uniform circular motion at a constant speed is also said to undergo acceleration because its direction is changing.
The term "acceleration" generally refers to the change in instantaneous velocity. Given that velocity is a vector quantity, acceleration is also a vector quantity. This means that it is defined by properties of magnitude (size or measurability) and direction.
In the strict mathematical sense, acceleration can have a positive or negative value. A negative value for acceleration is commonly called deceleration.
The dimension for acceleration is length/time². In SI units, acceleration is measured in meters per second squared (m·s⁻²).
Then, for the definition of instantaneous acceleration, we have

$\mathbf{a} = \frac{d\mathbf{v}}{dt} = \frac{d^2\mathbf{x}}{dt^2}$

and, equivalently, $\mathbf{v} = \int \mathbf{a}\,dt$; that is, velocity can be thought of as the integral of acceleration with respect to time. (Note: this can be a definite or an indefinite integral.) Here:
- $\mathbf{a}$ is the acceleration vector (as acceleration is a vector, it must be described by both a magnitude and a direction)
- v is the velocity function
- x is the position function (also known as displacement or change in position)
- t is time
- d is Leibniz's notation for differentiation
When velocity is plotted against time on a velocity vs. time graph, the acceleration is given by the slope, or the derivative of the graph.
If used with SI standard units (metres per second for velocity; seconds for time), this equation gives the units of m/(s·s), or m/s² (read as "metres per second per second," or "metres per second squared").
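To make the slope-of-the-graph idea concrete, here is a minimal Python sketch (an illustration added here, not part of the original article); the velocity function v(t) = 3t² and the sample time are arbitrary choices:

```python
# Minimal sketch: estimate instantaneous acceleration a = dv/dt
# with a central finite difference. The velocity function is an
# arbitrary example, not taken from the article.

def velocity(t):
    """Example velocity in m/s at time t (seconds): v(t) = 3*t**2."""
    return 3.0 * t**2

def acceleration(v, t, h=1e-6):
    """Central-difference estimate of a = dv/dt at time t, in m/s^2."""
    return (v(t + h) - v(t - h)) / (2.0 * h)

t = 2.0
print(f"a({t} s) ~ {acceleration(velocity, t):.3f} m/s^2")
# analytically dv/dt = 6*t, so the exact answer at t = 2 s is 12 m/s^2
```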
An average acceleration, or acceleration over time, ā, can be defined as

$\bar{a} = \frac{v - u}{t}$

where:
- u is the initial velocity (m/s)
- v is the final velocity (m/s)
- t is the time interval (s) elapsed between the two velocity measurements (also written as "Δt")
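A short worked example of this definition (the numbers are invented for illustration): a body accelerating from u = 10 m/s to v = 30 m/s over t = 5 s has an average acceleration of (30 − 10)/5 = 4 m/s².

```python
def average_acceleration(u, v, t):
    """Average acceleration in m/s^2, given initial velocity u (m/s),
    final velocity v (m/s), and elapsed time t (s): a_bar = (v - u) / t."""
    return (v - u) / t

print(average_acceleration(10.0, 30.0, 5.0))  # -> 4.0 m/s^2
```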
Transverse acceleration (perpendicular to velocity), as with any acceleration which is not parallel to the direction of motion, causes a change in direction. If it is constant in magnitude and changing in direction with the velocity, we get circular motion. For this centripetal acceleration we have

$a = \frac{v^2}{r} = \omega^2 r$

where v is the speed, r is the radius of the circular path, and ω is the angular velocity.
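As a quick numerical sketch of the relation above (the speed and radius are illustrative values only):

```python
def centripetal_acceleration(v, r):
    """Centripetal acceleration a = v**2 / r (m/s^2), directed toward
    the centre, for speed v (m/s) on a circle of radius r (m)."""
    return v**2 / r

G_N = 9.80665  # standard gravity in m/s^2 (defined in the next paragraph)

a = centripetal_acceleration(10.0, 20.0)
print(f"{a} m/s^2 = {a / G_N:.3f} g")  # 5.0 m/s^2 ~ 0.510 g
```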
One common unit of acceleration is g: one g (more specifically, gₙ or g₀) is the standard acceleration of free fall, 9.80665 m/s², caused by the gravitational field of Earth at sea level at about 45.5° latitude.
Jerk is the rate of change of an object's acceleration over time.
As a result of its invariance under the Galilean transformations, acceleration is an absolute quantity in classical mechanics.
Relation to relativity
After formulating his theory of special relativity, Albert Einstein realized that forces felt by objects undergoing constant proper acceleration are indistinguishable from those in a gravitational field, and went on to develop general relativity, which also explained how gravity's effects are limited by the speed of light.
If you accelerate away from your friend, you could say (given your frame of reference) that it is your friend who is accelerating away from you, although only you feel any force. This is also the basis for the famous twin paradox, which asks why only one twin ages less when moving away from his sibling at near light-speed and then returning, since the travelling twin can say that it is the other twin who was moving.
General relativity solved the "why does only one object feel accelerated?" problem which had plagued philosophers and scientists since Newton's time (and caused Newton to endorse absolute space). In special relativity, only inertial frames of reference (non-accelerated frames) can be used and are equivalent; general relativity considers all frames, even accelerated ones, to be equivalent. With changing velocity, accelerated objects exist in warped space (as do those that reside in a gravitational field). Therefore, frames of reference must include a description of their local spacetime curvature to qualify as complete.
An accelerometer inherently measures its own motion (locomotion). It thus differs from a device based on remote sensing. Accelerometers can be used to measure vibration on cars, machines, buildings, process control systems and safety installations. They can also be used to measure seismic activity, inclination, machine vibration, dynamic distance and speed with or without the influence of gravity.
One application for accelerometers is to measure gravity, wherein an accelerometer is specifically configured for use in gravimetry. Such a device is called a gravimeter. Accelerometers are being incorporated into more and more personal electronic devices such as mobile phones, media players, and handheld gaming devices. In particular, more and more smartphones are incorporating accelerometers for step counters, user interface control, and switching between portrait and landscape modes.
Accelerometers are used along with gyroscopes in inertial guidance systems, as well as in many other scientific and engineering systems. One of the most common uses for micro electro-mechanical system (MEMS) accelerometers is in airbag deployment systems for modern automobiles. In this case, the accelerometers are used to detect the rapid negative acceleration of the vehicle to determine when a collision has occurred and the severity of the collision.
Accelerometers are perhaps the simplest MEMS device possible, sometimes consisting of little more than a suspended cantilever beam or proof mass (also known as seismic mass) with some type of deflection sensing and circuitry. MEMS accelerometers are available in a wide variety of ranges, up to thousands of gₙ. Single-axis, dual-axis, and three-axis models are available.
The widespread use of accelerometers in the automotive industry has pushed their cost down dramatically.
The Wii Remote for the Nintendo Wii console contains accelerometers for measuring movement and tilt to complement its pointer functionality.
Within the last several years, Nike, Polar and other companies have produced and marketed sports watches for runners that include footpods, containing accelerometers to help determine the speed and distance for the runner wearing the unit.
More recently, Apple Computer and Nike have combined the footpod with Apple's iPod nano to provide real-time audio feedback to the runner on his or her pace and distance. It is known as the Nike+iPod Sport Kit.
A small number of modern notebook computers feature accelerometers to automatically rotate the screen depending on the direction the device is held, a feature especially relevant in Tablet PCs and smartphones such as the iPhone.
Some laptops' hard drives utilize an accelerometer to detect when falling occurs. When low-g condition is detected, indicating a free-fall and an expected shock, the write current is turned off so that data on other tracks is not corrupted. When the free-fall and shock ends, the data can be rewritten to the desired track, thus negating the effects of the shock.
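A minimal sketch of that threshold logic, assuming a hypothetical accelerometer reading in units of g; the threshold value is invented, and real drive firmware is considerably more sophisticated:

```python
# Hypothetical sketch of hard-drive free-fall protection; shows the idea only.

FREE_FALL_THRESHOLD_G = 0.3  # invented low-g threshold, in g

def should_park_heads(accel_magnitude_g):
    """Return True when the measured acceleration magnitude (in g) drops
    near zero, indicating free fall, so writing can be suspended."""
    return accel_magnitude_g < FREE_FALL_THRESHOLD_G

print(should_park_heads(1.0))   # resting on a desk -> False
print(should_park_heads(0.05))  # falling -> True
```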
Camcorders use accelerometers for image stabilization.
Still cameras use accelerometers for anti-blur capturing. The camera holds off snapping the CCD "shutter" when the camera is moving. When the camera is still (if only for a millisecond, as could be the case for vibration), the CCD is "snapped."
Some digital cameras contain accelerometers to determine the orientation of the photo being taken and some also for rotating the current picture when viewing.
The Segway and balancing robots use accelerometers for balance. | 2026-01-18T23:13:32.797974 |
362,529 | 3.842683 | http://www.centuryinter.net/tjs11/hist/clovis.htm | Mystery of the Clovis Point
What happened to Clovis?
During the last ice age, 15,000 years ago, Asia and North America were connected near the Bering Sea by a 1,000 mile wide grassy plain. Hunters followed herds of large game animals across the land bridge to spread over North America during the next ten centuries. These people were named 'Clovis Point People' by archeologists because their distinctive stone tools were first found in Clovis, New Mexico. Descendants of the Clovis People moved south and east of the Bering Strait eventually reaching Lake Erie.
People first inhabited the area that is now Ohio about 12,000 BC. About 8000 BC, the peoples of the Archaic tradition began to occupy the land, followed by the Mound Builders. The Mound Builders were extinct by the time the first European explorers reached Lake Erie in the 1600's, and in their place were the Eastern Woodlands tribes, known as the Iroquoian-speaking people. The Erie Indians were located along the Southern shore of Lake Erie beginning near Buffalo, New York and then west to the vicinity of Sandusky, Ohio. Their homeland may also have extended far inland to include large parts of the upper Ohio River Valley and its branches in northern Ohio, western Pennsylvania and West Virginia.
The Iroquois consisted of five distinct nations linked by language and culture. By the fifteenth century, however, they had allied in a powerful confederation. This kept internal peace and allowed mutual defense against outsiders. The country of the Five Nations stretched across New York from the Mohawk River to the Niagara River. Ranged from east to west were the Mohawk, Oneida, Onondaga, Cayuga, and Seneca. These people became known as the Six Nations during the eighteenth century when the original five accepted the refugee Tuscarora from North Carolina.
The Eries were traditional enemies of the Iroquois, and there had been many wars between them before the Europeans. After the arrival of the Europeans, the Erie needed beaver for trade and probably encroached on other tribal territories to get it. The result was a war with an unknown Algonquin enemy in 1635 that forced the Erie to abandon some of their western villages. The Beaver Wars reached the western Great Lakes during the 1640s.
English traders along the Connecticut River in 1640 had tried to lure the Mohawk away from the Dutch with offers of firearms. To counter this, the Dutch reversed their previous policy and began selling large guns and ammunition to the Mohawk in whatever amounts they wanted. This dramatically escalated the violence in the Beaver Wars in the St. Lawrence Valley and Great Lakes.
The Neutral Nation were an Iroquoian people (the Attiwandaron 'people who speak a slightly different language'). The Neutrals were an agricultural people, growing corn, beans, squash and tobacco, and supplementing their diet with wild game and fish. Their palisaded riverside villages were moved as the soil became exhausted. The term Neutral was used by French explorers because of the peoples refusal to become involved in warfare between the Huron Nation to the north and the Iroquois Nation to the east. The Hurons and Iroquois visited and traded with the Neutrals, and at times would wage war in Neutral territory if they were not accompanied by members of the Neutral tribe.
An alliance between the Erie and Neutrals continued until 1648, when it ended after the Erie failed to support the Neutrals during a short war with the Iroquois. The failure of this alliance occurred just as the war between the Huron Confederacy and Iroquois League was reaching its final stage, and its timing could hardly have been worse. Huronia was overrun in the winter of 1648-49; the Tionontati met the same fate later that year; and in 1650 the Iroquois turned on the Neutrals. Defeated by 1651, large numbers of Neutral and Huron (several thousand) escaped and fled to the Erie. The Erie accepted these refugees but did not treat them well. Apparently, there were still bad feelings from the break-up of the past alliance. They were allowed to stay in the Erie villages but only in a condition of subjugation.
Meanwhile, the Iroquois League demanded the Erie surrender the refugees, but with hundreds of new warriors, the Erie refused. The dispute simmered for two years of strained diplomacy. The western Iroquois (Seneca, Cayuga, and Onondaga) continued to view the refugees as a threat and were not willing to let the matter drop. The Erie were just as determined not to be intimidated by Iroquois threats. Their position, however, was becoming precarious, since the Mohawk and Oneida in 1651 had begun a long war against the Susquehannock (Pennsylvania), isolating the Erie from their only possible ally. The violence grew, and an Erie raid into the Seneca homeland killed the Seneca sachem Annencraos in 1653. In an attempt to avoid open warfare, both sides agreed to a peace conference. However, in the course of a heated argument, one of the Erie warriors killed an Onondaga. The enraged Iroquois killed all 30 of the Erie representatives, and after this, peace was impossible.
Although they had the advantage of firearms, the Iroquois considered the Erie as dangerous opponents, so they took the precaution of first making peace with the French before beginning the war. With their native allies and trading partners either dead or scattered by the Iroquois, the French did not need much encouragement to sign. Assured the French would not intervene, the western Iroquois attacked and destroyed two Erie fortified villages in 1654. However, the Erie inflicted heavy losses on the Iroquois during these battles. It took the Seneca, Cayuga, and Onondaga until 1656 before the Erie were defeated. Many survivors were incorporated into the Seneca to replace their losses in the war, and the Erie ceased to exist as a separate tribe.
No European explored the Ohio Valley until the 1670s, and they did not find any Erie (or anyone else for that matter). Some of the Erie, Neutrals, Tionontati, and Huron escaped (the Wyandot are the best example). Most of these were small groups, but some may have been fairly large. It took the Iroquois many years to track these people down, and the last group of Erie (southern Pennsylvania) did not surrender to the Iroquois until 1680. Where they had been hiding during the intervening 24 years is a mystery.
Many of the descendants of the Erie that were adopted by the Seneca began leaving the Iroquois homeland during the 1720s and returned to Ohio. Known as the Mingo (Ohio Iroquois), they were removed to the Indian Territory during the 1840s. It is very likely that many of the Seneca in Oklahoma today have Erie ancestors.
A legend of the Lenni Lenape (Delaware), recorded on the "Lenape Stone," may be an account of the movement of the Iroquoian peoples into Ohio and the land that became Avon. It reads as follows: "The Lenni Lenape (according to the traditions handed down to them by their ancestors) resided many hundred years ago in a very distant country, in the western part of the American continent. For some reason, they determined on migrating to the eastward, and accordingly set out together in a body. After a very long journey, and many nights' encampments by the way, they at length arrived at the Mississippi. The tradition goes on to say that at this river the Delawares fell in with the Mengwe (Iroquois, or Five Nations), who had likewise emigrated from a distant country, and had struck upon this river somewhat higher up. Their object was the same with that of the Delawares; they were proceeding on to the eastward until they should find a country that pleased them.
The spies which the Lenape had sent forward for the purpose of reconnoitring, had long before their arrival discovered that the country east of the Mississippi was inhabited by a very powerful nation, who had many large towns built on the great rivers flowing through their land. Those people were called Alligewi, and traces of their name may still remain in the country, the Allegheny river and mountains having been named after them. Many wonderful things are told of this famous people. They are said to have been remarkably tall and stout, and there is a tradition that there were giants among them, people of a much larger size than the tallest of the Lenape. It is related that they had built regular fortifications, possibly the works of the mound-builders.
When the Lenape arrived on the banks of the Mississippi, they sent a message to the Alligewi to request permission to settle themselves in their neighborhood. This was refused them, but they obtained leave to pass through the country and seek a settlement farther to the eastward. This agreement, that the Lenape should cross in peace, might have been symbolized in the rock writings and historical song records of the tribe, by the figure of the pipe on the left of the stone, just above the water, and opposite the fish. "They accordingly began to cross the Mississippi," continues the account, "when the Alligewi, seeing that their numbers were so very great, and in fact they consisted of many thousands, made a furious attack on those who had crossed, threatening them all with destruction if they dared to persist in coming over to their side of the river. Enraged at the treachery of these people, and the great loss of men they had sustained, and besides, not being prepared for a conflict, the Lenape consulted on what was to be done, whether to retreat in the best manner they could, or try their strength, and let the enemy see they were not cowards, but men, and too high-minded to suffer themselves to be driven off before they had made a trial of their strength.
The Iroquois, who had been satisfied with being spectators from a distance, offered to join them on condition that, after conquering the country, they should be entitled to share it with them. Their proposal was accepted, and the resolution was taken by the two nations to conquer or die. Having thus united their forces, the Lenape and Iroquois declared war against the Alligewi, and great battles were fought, in which many warriors fell on both sides."
NOVA TRANSCRIPT, 11-9-04
[Mystery of the Clovis Point]
NARRATOR: Stone Age America, 13,000 years ago: a virgin land, a world great beasts have ruled for millions of years, and for early human settlers it is an age in which stone weapons can be the difference between life and death.
This is a Clovis spear point. It is the greatest technological breakthrough of the Stone Age and long thought to be the oldest human artifact unearthed in the Americas. For years, these Stone Age weapons of mass destruction were thought to represent a culture of prehistoric big game hunters who came over a land bridge from Asia to become the first Americans. But new clues are forcing scientists to rewrite an epic story that, until now, had been considered the gospel.
Can these magnificent Clovis spear points, over 13,000 years old, help solve one of the greatest riddles of North American archaeology? Who were the first Americans and where did they come from? America's Stone Age explorers, up next on NOVA.
NARRATOR: The ancestors of modern humans originated in Africa at least 150,000 years ago. By 40,000 years ago, they had radiated out of Africa and were occupying most of Europe, Asia and Australia. But half the Earth, humans had yet to explore. How people first came to America remains one of the greatest mysteries of our past.
PAUL MARTIN (University of Arizona): Archaeologists have been looking for the earliest for a long time. It's been a Holy Grail for them. Who was first?
MICHAEL COLLINS (University of Texas): The whole question of the peopling of the Americas is a huge piece of the total human experience. That's just a question we can't leave unanswered.
NARRATOR: Who were these earliest explorers? Where did they come from? How did they make this epic journey to the New World?
The first clue to the mystery was found in a dried up lake in Clovis, New Mexico. Here, in 1933, archaeologists uncovered a stone tool made by human hands, an ancient spearhead. It became known as the Clovis point.
Alongside the Clovis point was the skeleton of a mammoth, which, evidently, the spear point had been used to kill. Later, scientists were able to date the bones, establishing the age of the spearhead as 13,500 years old. It made the Clovis point the oldest human artifact ever found in America. Archaeologists have now discovered thousands of Clovis spear points across much of the continent.
MICHAEL COLLINS: There's Clovis in every one of the 48 states in the United States, Mexico, Belize, Costa Rica, in all kinds of environments.
NARRATOR: So many spear points, spreading widely across the continent, suggested a rapid expansion of a weapon crucial to the lives of the earliest Stone Age American explorers.
KENNETH TANKERSLEY (Northern Kentucky University): The Clovis point was the fundamental basis for survival in Ice Age America.
DAVID KILBY (University of New Mexico): Clovis points, arguably, represent the state of the art in hunting weapons on Earth at the time and are probably capable of taking down just about any animal on the late Pleistocene landscape.
NARRATOR: In an age defined by its most valuable resource, stone, the Clovis spear point represented a great technological breakthrough, transforming rock into a killing machine.
DENNIS STANFORD (Smithsonian Institution): It's a very distinctive type of artifact. As you can see here, it has a flake that's been taken out of the base and there's also a flake on the other side removed from the base, and these are called flutes. And beyond that the projectile point is flaked on both sides. You see it's worked here and it's worked on this side, which is what we call "bifacial."
NARRATOR: The bifacial design transforms a rough stone into a projectile with a serrated sharp edge. The fluting, some archaeologists speculate, allows Clovis hunters to rapidly load and reload the deadly blades onto spear shafts.
DENNIS STANFORD: And when you throw this at an animal, this goes in and sticks in the animal and this comes back out so you can put a new one on it and start hunting again.
DAVID KILBY: There have been some experiments carried out by archaeologists, using replicas of Clovis points and other stone tools, in which they were used to penetrate the hides of modern elephants, elephants which were already deceased. And it's found, in, in all these cases, that they actually are all very efficient weapons and could potentially kill mammoth where you'd get them into the soft, vulnerable underbelly and then quickly back away.
NARRATOR: Testimony to the deadliness of the Clovis spear point is that, in a dozen cases, they were discovered in the remains of butchered mammoths. This led scientists to connect the spear point to a catastrophe that befell these Stone Age giants, for around 13,500 years ago all the megafauna in the Americas went extinct: the mammoths, the giant armadillo, the giant sloth, and the short-faced bear all disappeared within a few hundred years.
But who were these big game hunters with their Stone Age weapons of mass destruction? Where did they come from?
When archaeologists looked for an answer, they found an important clue in the climate of the ancient world. [The last great Ice Age ended 13,000 years ago.] Huge swaths of the northern hemisphere lay frozen under ice. These giant ice sheets locked up vast quantities of water, causing sea levels to drop far lower than they are today.
DAVID MELTZER (Southern Methodist University): When you've got that much ice on land, what happens is that it draws, essentially, water out of the oceans. So with that much ice on land, sea levels worldwide are lowered. By lowering sea levels, you expose the continental shelf between Siberia and Alaska, and that made it possible for people to walk to the Americas.
NARRATOR: Asia and North America were essentially one great continent, joined by a land bridge more than a thousand miles wide. But although it was possible to walk from Siberia to Alaska, giant ice sheets barred entrance to the rest of the continent. Then, as the climate warmed at the end of the Ice Age, the glaciers receded, opening up an ice-free corridor through the center of the continent. For the first time, it seemed, the door was open to the virgin landscape of the New World.
DAVID MELTZER: As that corridor opens up, that's just about the time when Clovis appears in the lower 48. So it all seemed to work out very, very beautifully in terms of the timing of getting these New World peoples from Asia into the Americas.
NARRATOR: The timing of the land bridge, the ice-free corridor and the Clovis dates all seemed to fit together in a simple elegant theory: 13,500 years ago, Clovis people, big game hunters from Asia, armed with their lethal Clovis spear point, walked across the land bridge to the Americas, followed the ice-free corridor down into the lower continent and spread across the land, killing all the great beasts. As ice age glaciers melted, the seas rose, submerging the land bridge. The descendants of the Clovis people, the Native Americans, remained isolated until their first contact with Columbus.
The theory became known as Clovis First. It was written into the textbooks and taught for the better part of a century. The Clovis spear point became the icon of the first Americans.
Clovis First was such a powerful story that, for years, few archaeologists looked back beyond 13,500 years ago. But then a few did. Jim Adovasio has spent the past 30 years excavating at Meadowcroft, a prehistoric site near Pittsburgh, Pennsylvania. The deeper he dug, the further back he descended in time.
JAMES ADOVASIO (Mercyhurst College): On these surfaces that you see before us, we have signs of repeated visits by Native Americans to this site. These discolorations literally represent a moment frozen in time.
NARRATOR: Each tag marks ancient fire pits that can be carbon dated, creating a cross section of who lived here and when, stretching back 13,500 years.
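The carbon dating mentioned here rests on the exponential decay of carbon-14. As a rough illustration (not from the program): conventional radiocarbon ages use the Libby mean life of 8,033 years, so a sample retaining about 18.5 percent of its original carbon-14 works out to roughly 13,500 radiocarbon years, which must still be calibrated to calendar years:

```python
import math

LIBBY_MEAN_LIFE = 8033.0  # years; basis of conventional radiocarbon ages

def radiocarbon_age(fraction_c14_remaining):
    """Conventional radiocarbon age in years: t = -8033 * ln(N/N0)."""
    return -LIBBY_MEAN_LIFE * math.log(fraction_c14_remaining)

print(round(radiocarbon_age(0.185)))  # ~13,555 radiocarbon years
```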
JAMES ADOVASIO: Just below the surface I'm standing on is where the conventional Clovis First model says that the earliest material should stop, basically, that there ought not to be anything beneath it, no matter how much deeper we dug.
NARRATOR: But then, Adovasio did go deeper, below 13,500 years, to a time in the Americas, when no trace of humans should exist, according to the Clovis First theory. He was astounded by what he found.
JAMES ADOVASIO: The artifacts simply continued, and we recovered blades like this all the way down to 16,000 B.C.
NARRATOR: When he published his findings, he was immediately attacked.
JAMES ADOVASIO: The majority of the archaeological community was acutely skeptical, and they invented all kinds of reasons why these dates couldn't possibly be right.
NARRATOR: Some claimed that nearby coal deposits had contaminated Adovasio's samples, but he was known to be a meticulous excavator. Eventually, a few other archaeologists began to report evidence questioning the Clovis First theory, and they too were attacked.
MICHAEL COLLINS: The best way in the world to get beaten up, professionally, is to claim you have a pre-Clovis site.
DENNIS STANFORD: When you dig deeper than Clovis, a lot of people do not report it, because they're worried about the reaction of their colleagues.
MICHAEL COLLINS: I've been accused of planting artifacts. People will reject radiocarbon dates just simply because there's not supposed to be any people here at those times, and it just goes on and on and on.
NARRATOR: Even faced with evidence to the contrary, Clovis First supporters refused to accept that people could have arrived in America earlier than 13,500 years ago. For, as they pointed out, although it was possible to walk across the land bridge into present day Alaska, ice sheets blocked entry to the rest of the continent until at least that time. As they put it, "If people were coming to the New World before then, how could they get past the ice?"
Some archaeologists began to defy the dogma and search for an alternative route down the coast of Alaska.
JAMES DIXON (University of Colorado at Boulder): Well, when I was a student, we learned that the entire northwest coast of North America was covered by glacial ice all the way out to the continental shelf, so really, there was no opportunity for plants or animals, much less humans, to exist along that coastline during the last Ice Age.
NARRATOR: Today, Jim Dixon and Tim Heaton are finding evidence of abundant plants and animals at a time when the northwest coast was thought to be a lifeless, frozen wasteland.
TIM HEATON (University of South Dakota): We just cleaned up this caribou antler I want you to take a look at.
NARRATOR: Here, along the coast, the glaciers destroyed most traces of the Ice Age world, but Heaton and Dixon have investigated a rare undisturbed site, deep underground in an ancient bear cave.
The cave floor is excavated, inch by inch, from dated layers of soil going back tens of thousands of years ...
NARRATOR: This excavation has uncovered a record of caribou, fox and bear bones dating back 50,000 years.
TIM HEATON: What this suggests is that bears survived the entire last period of glaciation, and if bears could have survived here, it's certainly clear that humans could have also.
JAMES DIXON: We now realize that those early portrayals of this massive continental glacier, all the way out to the ocean really is, is not accurate, and that by, oh, 14- to 16,000 years ago, this ice had retreated sufficiently to create habitat for plants and animals and ice-free areas that could have been used by humans.
NARRATOR: Abundant vegetation, temperate coastal climate and bear survival are all evidence of a possible Ice Age route to the Americas along the Alaska coast, by sea. But still no evidence that humans had actually made the voyage down the coast.
Then another surprise, from deep in the southern hemisphere, at a place called Monte Verde, this site of human habitation in Chile, 40 miles from the Pacific coast, was claimed to date back earlier than Clovis.
In 1997, a group of highly regarded archaeologists went to examine the evidence with their own eyes. They saw weapons, tools and other objects, the result of two decades of excavation. After intensely scrutinizing the dating, they confirmed the artifacts were older than Clovis by over a thousand years.
KENNETH TANKERSLEY: It wasn't until Monte Verde that we saw the first unambiguous, unquestionable evidence of people here before Clovis. It allowed us to think that perhaps the initial peopling of the New World was beyond 12-, 13,000 years ago and allowed us to look further.
NARRATOR: But even as more archaeologists allowed themselves to consider that Clovis might not have been first, the pillars of the Clovis First theory could not be completely toppled; Clovis First remained the entrenched answer to the question of the peopling of Americas.
And so it could have stayed until a remarkable discovery. Doug Wallace takes a different approach to the mystery of the first Americans. Instead of archaeology, he's using DNA to reveal traces of ancient migrations. Stored in his lab are DNA samples of indigenous people collected from all corners of the globe. DNA is the molecule of our genetic endowment expressed in a code of four letters representing four different chemical bases.
Every cell in these samples contains DNA. But Wallace studies a specific kind of DNA, not from the nucleus, which is a random mix of genes from both parents, but from the mitochondria, the cell's energy factories outside the nucleus.
This kind of DNA is inherited only from the mother and is passed intact from generation to generation as lineages diverge. But at a steady and predictable rate, tiny mutations creep, like spelling mistakes, into specific stretches of DNA. The amount of genetic variation between any two lineages can reveal how far back in time they shared a common ancestor.
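As a toy illustration of that molecular-clock reasoning (the mutation rate and the number of differences below are invented for the example and are not Wallace's figures): two lineages that differ at k sites, each mutating at a rate r per year, diverged roughly k/(2r) years ago, because both lineages accumulate changes independently after the split.

```python
def divergence_time_years(num_differences, mutations_per_year):
    """Toy molecular-clock estimate: lineages differing at num_differences
    sites, each accumulating mutations at mutations_per_year, split roughly
    num_differences / (2 * mutations_per_year) years ago."""
    return num_differences / (2.0 * mutations_per_year)

# Invented numbers, for illustration only:
print(divergence_time_years(8, 2e-4))  # -> 20000.0 years
```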
DOUGLAS WALLACE (University of California, Irvine): So what we've been able to do, using genetic variation and comparing the genetic variation of aboriginal populations from all the major continents of the world, we've literally been able to reconstruct the history of migration.
NARRATOR: When Wallace and his team analyzed the mitochondrial DNA of Native Americans, they found four distinctive lineages that he labeled A, B, C and D. All four turned out to share common ancestors back in Siberia and northeast Asia.
So far, these findings were consistent with the Clovis First theory that the first Americans came from Asia. But when Wallace calculated how long ago the Asian and Native American DNA diverged, he was shocked. He repeated his work, as did other labs. The results were consistent. Three of the four main ancestral groups, A, C and D, diverged from their Asian forebears at least 20,000 years ago. And even more striking, the first Americans didn't all come at once, but in at least three waves of migration.
DOUGLAS WALLACE: All of the papers that have been published have come to a very similar conclusion: that the first migration was in the order of 20- to 30,000 years ago.
NARRATOR: The DNA results made the Clovis First theory even more unlikely. Together with the evidence from Monte Verde, Meadowcroft and other sites, it now seemed as if Clovis people could not be the first Americans. The Pacific coast route offered a possible alternative to the Bering land bridge and the ice-free corridor, and the DNA suggested that humans had been coming to America in waves and far earlier than ever imagined.
Only one last pillar of the epic Clovis First theory was still standing: the artifact that inspired the theory, the icon of Stone Age America, the Clovis spear point itself. Where did it come from?
Archaeologist Dennis Stanford decided to search for its origins along the route from Asia to America. But as he worked back from Alaska to Siberia, the trail went cold. The weapons and tools he found in Asia were quite different.
DENNIS STANFORD: After looking at the collections, we were disappointed that we didn't find what we thought we would find, and I was surprised to find that the technologies were so much different.
NARRATOR: The Clovis spear point is a single stone, bifacial, or shaped on both sides, with a flute, or groove, at its base. The spear points in Asia are made from lots of small razor-like flints called micro-blades embedded in a bone handle.
DENNIS STANFORD: Microblade technology is making a projectile point or a knife blade out of bone and then cutting a slot in it and then putting the microblades in the slot. And that's a totally different philosophy, entirely, than using the bifacial projectile point, as you can see here. It's just a total different mindset.
NARRATOR: Now there was a real puzzle. The DNA says the earliest Americans are from Asia, yet the Clovis point is nowhere to be found in Asia. It was a puzzle not only for Stanford, but also for his colleague Bruce Bradley. Bradley is an anthropologist and a skilled flint knapper, an expert at crafting stone tools.
One day, while making a Clovis point, he had a moment of inspiration. He remembered a popular science book he had seen when he was a student. It showed pictures of ancient spearheads made by the Solutreans, people who lived in Ice Age France and Spain. Their spear points resembled Clovis points. It seemed unbelievable, but Stanford and Bradley posed the question, "Could the Clovis point and some of the earliest Americans be from Europe?"
DENNIS STANFORD: I was going through the old arguments: "Yeah, well, Solutreans... 5,000 years older than Clovis." And "You've got the Atlantic Ocean out there." So I wasn't convinced that we really ought to push forward on it.
BRUCE BRADLEY (University of Exeter): I remember it a little bit differently. You said, "Are you out of your mind?"
NARRATOR: Despite the unlikelihood of the connection, Stanford and Bradley decided to pursue the idea. Bradley thought an important clue might lie in the specific technique involved in making Clovis points.
BRUCE BRADLEY: You can see how this, starting from this side, went and took off this whole other side. This is what we call an overshot or outrepassé flake, a very intentional process.
NARRATOR: Overshot flaking was an unusual technique that left behind a distinctive byproduct, big flakes, at ancient Clovis stone working sites. Bradley wondered if traces of this technique might show up in southwestern France, where the Solutreans had lived 20,000 years ago.
When he went there to investigate, one thing soon became clear: the Solutreans were a remarkable people. The Solutreans were responsible for much of the great Stone Age art of Europe and were the forefathers of the artists who painted the Sistine Chapel of the Ice Age, the Caves of Lascaux.
DAVID MELTZER: They did a lot of carving in bone and in antler and in ivory, they fashioned spear throwers, they painted on cave walls; they had a fairly complex means of expressing themselves through their art.
NARRATOR: Could these remarkable Stone Age Europeans have brought the Clovis spear point to the Americas?
Bradley's research took him to the local museum in the town of Les Eyzies, France. What he saw were hundreds of what looked very much like Clovis points.
BRUCE BRADLEY: What we're seeing here is only the finished objects, only the things that museum people thought were really good for display. It doesn't always show you how things were made.
NARRATOR: To connect the Solutreans and Clovis, he needed to find out if they produced their spearheads using the same big flake technique.
BRUCE BRADLEY: So what we do is we go back to the collections of the broken materials, which is probably 99 percent of what there is here, and in that we're seeing the various ways that the Solutreans were making the things, not just the finished objects. And so it's the pieces that are hidden away that are going to tell us the most.
NARRATOR: And there in the drawers were big flakes, a clear sign that the Solutreans had made their spearheads in an identical technique to that of Clovis.
BRUCE BRADLEY: This is a good example here that shows a kind of flaking that...where the flake is struck from one side and went across the surface...removed some of the other side. And these pieces show it over and over and over again. I mean just about any piece you pick up shows this very special technique. I just knew there had to be some kind of a connection.
NARRATOR: Clovis and Solutrean spear points not only look alike, they are made the same unusual way. To Stanford and Bradley, this was a powerful clue that prehistoric explorers had come from Europe and brought with them the technology that transformed Stone Age America: the Clovis Spear Point.
It was an outrageous idea with a few big problems. The Solutrean culture ended in Europe around 18,000 years ago, and the Clovis point would not arrive in America for another 5,000 years. If the Solutreans brought the Clovis point to America, where had they been?
Stanford and Bradley needed to find some artifact in the Americas to bridge the time gap. They scoured Clovis sites across the continent, places where other archaeologists had been digging for years. Then, from a site called Cactus Hill, in Virginia, a possibility, a point that resembled the Solutrean style, and it dated far earlier than the Clovis.
DENNIS STANFORD: Here we have a projectile point from a feature that dates right at 15,900 years or 16,000 years ago, which is clearly right in the middle between Clovis and Solutrean. And what's really exciting about it is that the technology here is very similar to Solutrean. In fact it's closer to Solutrean than Clovis where you can see that it's in a progression between Solutrean and Clovis, so you have Solutrean, Cactus Hill and Clovis.
NARRATOR: For Stanford and Bradley, the Cactus Hill point bridged the 5,000-year gap, connecting Solutreans in France and Clovis in America. But their fledgling theory now confronted another massive problem almost 3,000 miles wide: the Atlantic Ocean.
At the time of the Solutreans, ice sheets stretched down as far as southern France, where winter temperatures were 50 degrees colder than today. Unlike the more temperate Pacific coast, the Atlantic would, at times, have been thick with icebergs and blizzards.
LAWRENCE GUY STRAUS (University of New Mexico): There are 5,000 kilometers of open North Atlantic Ice Age conditions to be crossed. There are icebergs floating around in the Bay of Biscay, and it's a polar desert.
NARRATOR: Could the Solutreans, a Stone Age people, have made such a voyage?
Stanford flew to a place where he thought he might find the answer: Barrow, Alaska, on the edge of the continent at the northern most tip of the United States. Here he hopes the native people of Alaska, the Inupiat, might reveal how, thousands of years ago, the Solutreans could have made an epic transatlantic journey.
Today the Inupiat survive temperatures of minus 35 degrees. For warm waterproof clothing, traditionalists prefer caribou skin and sinew, the same materials available to their Stone Age ancestors. And for food on their seasonal hunting trips, the Inupiat turn to an age old resource, the sea.
RONALD BROWER (Inupiat Heritage Center): The sea has been our garden. We don't have any growth...growing things. There's nothing growing, up here, so we depend on the sea for our livelihood, and most of our hunting is based on sea mammal hunting. We have the great whales, polar bears, walrus, seals and fish.
NARRATOR: Even with warm clothing and food, could the Solutreans have made boats capable of crossing thousands of miles of treacherous, icy water? Today, traditional Inupiat build umiaks, whaling boats, using sealskin and caribou sinew stretched on wood frames and waterproofed with oil applied directly from seal blubber. These same techniques and materials would have been available to prehistoric people.
DENNIS STANFORD: Boats like these can...could have made the journey that we're hypothesizing for Solutrean people quite well. In fact, I was noticing on the distance signs here in the middle of town, they say it's about 1,500 miles to Greenland. And we know that, prehistorically, Eskimo peoples moved that distance from here to there several times.
NARRATOR: In Arctic seas filled with pack ice conditions similar to the Ice Age Atlantic, the boats pass the test as the Inupiat paddle from ice floe to ice floe.
DENNIS STANFORD: Well, it certainly is exactly the way I think the Solutrean guys were dealing with the ice edge, because you can get in and off of the ice real rapidly and, and if the weather gets a little, little nasty then you just pull up off...out of the water and onto the ice.
NARRATOR: For Stanford and Bradley, this ability to travel great distances in Arctic conditions suggested how the Solutreans could have made their epic journey during the Ice Age.
They had now gathered a broad range of evidence: physical similarities between the Solutrean and Clovis spear points, a similar technique used to make them, and the Cactus Hill point connecting Solutrean and Clovis in time. All added up to a radical and provocative theory, that the Solutreans invented the Clovis point technology, and Ice Age Europeans were amongst America's earliest explorers.
Immediately, the theory was attacked. The close resemblance of the spear points was not enough.
DAVID MELTZER: You can always find, if you're careful in your selection, you can always find one or two things that look alike. I'm not looking for one or two things. I'm looking for lots of things: the artwork, the antler spear throwers, where are they? Did they get left behind? There's no reason why they shouldn't be there, but we don't see it.
NARRATOR: Can one spear point bridge a 5,000 year gap?
KENNETH TANKERSLEY: Although Cactus Hill, its radiocarbon date and artifact have been used to bridge the gap between the Solutrean and Clovis, in reality, it will take a lot of sites, a lot of radiocarbon dates and a large assemblage of artifacts to make that connection.
NARRATOR: And although the Solutreans may have been capable of making a cross-Atlantic journey, there's little archeological evidence that they did.
LAWRENCE GUY STRAUS: There is absolutely no evidence of deep sea fishing. There's absolutely no evidence, for that matter, of boats.
NARRATOR: But Stanford argues that crucial evidence is missing, submerged under 300 feet of water as rising sea levels inundated the Solutrean coastline at the end of the Ice Age.
The debate raged on, with arguments for and against the Solutrean theory. Then came evidence that, again, seemed like it might end the battle: DNA.
It was the latest report from colleagues of Doug Wallace who were investigating early human migrations. They were puzzling over mitochondrial DNA samples from a Native American tribe called the Ojibwa.
DOUGLAS WALLACE: When we studied the mitochondrial DNA of the Ojibwa we found, as we had anticipated, the four primary lineages (A, B, C and D), but there was about a quarter of the mitochondrial DNAs that was not A, B, C and D.
NARRATOR: There was a fifth source of DNA of mysterious origin. They called it X, and unlike A, B, C and D, they couldn't find it anywhere in Siberia or eastern Asia. But it was similar to an uncommon lineage in European populations today. At first, they thought it must be the result of interracial breeding within the last 500 years, sometime after Columbus.
DOUGLAS WALLACE: We naturally assumed that perhaps there had been European recent mixture with the Ojibwa tribe and that some European women had married into the Ojibwa tribe and contributed their mitochondrial DNAs.
NARRATOR: But that assumption proved wrong. When they looked at the amount of variation in the X lineage, it pointed to an origin long before Columbus, in fact, to at least 15,000 years ago. It appeared to be evidence of Ice Age Europeans in America.
DOUGLAS WALLACE: Well, what it says is that a mitochondrial lineage that is predominantly found in Europe somehow got to the Great Lakes region of the Americas 14,000 to 15,000 years ago.
NARRATOR: Could X be genetic evidence of the Solutreans in America? Further investigation raised another possibility. The ancient X lineage may have existed in Siberia, but died out, though not before coming over to America with ancient migrations.
DOUGLAS WALLACE: And so the DNA data itself cannot distinguish between those two alternatives. It could be either from Europe or from Siberia, of a population that is now lost.
NARRATOR: So X could have reached the Americas through Asia, or across the Atlantic directly from Europe. The DNA could not provide a storybook ending.
MICHAEL COLLINS: The hypothesis that Clovis may derive from Solutrean, it's going to be ... it's going to take years to sort that out. That's, that's not the most important thing right now. The very fact that that hypothesis is being articulated forces us to think in, in much broader terms about the problem of the peopling of the Americas.
NARRATOR: With Clovis First in ruins and the Solutrean theory still hotly contested, now archaeologists must pull together their discoveries into an all-encompassing new theory of the peopling of the Americas. And central to that quest is the origin of the Clovis point.
KENNETH TANKERSLEY: Although the technology needed to produce a Clovis point was found among other cultures during the Ice Age, the actual Clovis point itself is unique to the Americas, suggesting that it was invented here in the New World.
NARRATOR: Perhaps the Clovis spear point was not brought by big game hunters from Asia or seafaring Solutreans from Europe. Could the Clovis point be the first great American invention?
A prime place for investigating Clovis culture in America is the Gault Site, in central Texas. Unlike its hot, arid surroundings, Gault is a shady park-like oasis. Michael Collins, from the University of Texas, started excavating at Gault in 1998.
MICHAEL COLLINS: As you can see, the Gault site is really a special place. It's well watered, got lush vegetation, an abundance of resources, both plant and animal. It's an ideal place for people who are hunters and gatherers.
NARRATOR: Gault is the best of both worlds: nearby is a parched plateau for hunting game, while down in a cool stream-fed valley, are pecans, walnuts and berries. And not far from the streambed is a natural resource so crucial to the survival of prehistoric people that it defines the whole age, stone.
MICHAEL COLLINS: We're at an outcropping here, a rich outcropping of cretaceous chert. This was the choice material for making stone tools for at least 13,000 years. It's pretty good stuff when you break it open. It...see how it breaks. You get nice flakes of it out of there.
NARRATOR: To a Stone Age craftsman, this particular rock was perfect for fashioning stone tools and may have drawn people for hundreds of miles. To date, nearly half a million Clovis artifacts have been found at Gault, but curiously, very few are spear points.
MICHAEL COLLINS: The Clovis spear point is, sort of, the icon of Clovis culture. But what we see at the Gault site is we only have about 30 projectile points, mostly broken and worn out and discarded Clovis points, in comparison to the several thousand other tools.
NARRATOR: What can explain the lack of spear points at one of Stone Age America's premiere stone quarries? And why would big game hunters need any other tools beside the spear point?
At the Texas Archeological Research Lab, Marilyn Shoberg examines the Clovis tools under a microscope. By studying the scratches on the tool she hopes to discover its function. The last hand to use this tool did so some 13,000 years ago.
MARILYN SHOBERG (Texas Archeological Research Laboratory): Very fine striations that are running parallel to the edge of the blade and these striations all parallel to the edge, indicate that it was used primarily in a longitudinal motion, sort of slicing, as in slicing grass.
NARRATOR: To test her idea, Collins and his colleagues created replica tools, made from the same Gault stone and used them at the site.
MICHAEL COLLINS: In cutting just this little bit of grass here I've already developed a bright sheen right along the edge and under the microscope that'll be a very bright polish built up on that edge, and it'll have striations in it going this way, because of my cutting motion.
NARRATOR: Under the microscope, the replica tool has the same sheen and pattern as the Clovis tool. Perhaps Clovis people were cutting grass or reeds for baskets, bedding or thatched roofs for shelter.
Shoberg examines other types of tools found at the site.
MARILYN SHOBERG: Deep troughed grooves, characteristic of contact with bone.
NARRATOR: A spear point used for hunting.
MARILYN SHOBERG: All along the edge of this artifact there is polish that's characteristic of contact with a soft material, like meat.
NARRATOR: A knife used for slicing food.
MARILYN SHOBERG: This is the hide punch.
NARRATOR: A punch or awl for sewing tailored clothing.
MARILYN SHOBERG: This little blade fragment was used to engrave or incise bone.
NARRATOR: Small pieces of limestone have been discovered at Gault, etched with mysterious geometric patterns, among the only examples of Ice Age art in America. Art, tailored clothes, baskets and thatched roofs for shelter all contradict the old Clovis First image of nomadic mammoth murderers. And although the remains of a mammoth were found at Gault, Collins and colleagues have found far more bones of turtles, birds and small mammals. This menu suggests more variety than a big game hunter's diet of wooly mammoth and bison.
MICHAEL COLLINS: What emerges from the totality of all that information is these people were generalized hunters and gatherers. They were living on a variety of animals, staying in one place for quite a while and not simply pursuing large game as their primary way of life.
NARRATOR: There's even evidence of trade networks between Clovis people at different sites across the continent. It's not uncommon to find Clovis points hundreds of miles from the source of the original rock. And different bands of Clovis people probably traded more than just tools; they may have been exchanging potential spouses.
DAVID KILBY: Although we tend to think, sometimes, of hunter gatherers as being fairly simple in adaptation, it's actually a pretty complicated world in which they live. There have to be social mechanisms in place that allow you to sort of share information and relate to surrounding groups in some systematic way and to be on good enough terms with them that you're able to, sort of, exchange mates, and therefore genetic viability, across an otherwise, sort of, sparsely populated landscape.
NARRATOR: A clue that Clovis people had intimate knowledge of the landscape lies, once again, with the Clovis point. Many have been found in caches, bundles of spear points, hidden away for later use by Clovis hunters.
David Kilby has traveled the United States and studied all of the nearly two dozen known caches.
DAVID KILBY: This strategy of caching suggests intimate familiarity with the landscape and sort of a complex understanding of the distribution of different resources around the landscape. The fact that they're putting tools and raw material in specific places on the landscape and leaving them behind, suggests that they knew with some confidence where they were going to be in the future.
NARRATOR: Caching, trade and travel must have involved patterns of seasonal migration developed over dozens of generations. This emerging picture of the Clovis lifestyle contradicts the old image of Clovis as a single people, nomadic big game hunters, sweeping rapidly across the continent with their lethal spear, wiping out all the great beasts.
MICHAEL COLLINS: The longstanding notion of the rapid spread, the archaeologically rapid spread, of Clovis across the continent, has been taken to mean the spread of a people across the continent. An alternative to that might be that the spread of Clovis is actually the expansion of a technology across existing populations, a little bit analogous to the fact you can go anywhere in the world and find people driving John Deere tractors. Technology can spread across different languages, different cultures, quite readily.
NARRATOR: Perhaps this is the birth of an intriguing new theory for the peopling of America: the first Stone Age explorers arrive on this continent more than 20,000 years ago, much earlier than scientists ever imagined. They come from Asia, and maybe even Europe, by land and by sea. Tenuously, at first, these different groups spread across the virgin land, and over thousands of years they develop an intimate knowledge of the New World.
Around 13,500 years ago, a stone weapon is invented, so powerful, so crucial to survival that it spreads swiftly across all the people of the Americas. With this new technology they take root, proliferate and prosper. Clovis is the first great invention of the New World and the icon of the peoples who may rightfully be called the first Americans ...
What happened to Clovis?
Commentator 1 wrote on 7-7-08:
My reading of the Wikipedia reference is that the Younger Dryas was confined to the Northern Hemisphere and is thus consistent with a Tunguska event as opposed to, for example, a cyclical reduction in solar output. The evidence seems to indicate extensive destruction of human life in North America.
[The End of Clovis]
ARTICLE from ScienceDaily, 7-3-08
Exploding Asteroid Theory Strengthened By New Evidence Located In Ohio, Indiana
ScienceDaily (July 3, 2008) -- Geological evidence found in Ohio and Indiana in recent weeks is strengthening the case to attribute what happened 12,900 years ago in North America -- when the end of the last Ice Age unexpectedly turned into a phase of extinction for animals and humans -- to a cataclysmic comet or asteroid explosion over top of Canada.
A comet/asteroid theory advanced by Arizona-based geophysicist Allen West in the past two years says that an object from space exploded just above the earth’s surface at that time over modern-day Canada, sparking a massive shock wave and heat-generating event that set large parts of the northern hemisphere ablaze, setting the stage for the extinctions.
Now University of Cincinnati Assistant Professor of Anthropology Ken Tankersley, working in conjunction with Allen West and Indiana Geological Society Research Scientist Nelson R. Schaffer, has verified evidence from sites in Ohio and Indiana, including, locally, Hamilton and Clermont counties in Ohio and Brown County in Indiana that offers the strongest support yet for the exploding comet/asteroid theory.
Samples of diamonds, gold and silver that have been found in the region have been conclusively sourced through X-ray diffractometry in the lab of UC Professor of Geology Warren Huff back to the diamond fields region of Canada.
The only plausible scenario available now for explaining their presence this far south is the kind of cataclysmic explosive event described by West's theory. "We believe this is the strongest evidence yet indicating a comet impact in that time period," says Tankersley ...
Tankersley was familiar through years of work in this area with the diamonds, gold and silver deposits, which at one point could be found in such abundance in this region that the Hopewell Indians who lived here about 2,000 years ago engaged in trade in these items.
Prevailing thought said that these deposits, which are found at a soil depth consistent with the time frame of the comet/asteroid event, had been brought south from the Great Lakes region by glaciers ...
Additional sourcing work is being done at the sites looking for iridium, micro-meteorites and nano-diamonds that bear the markers of the diamond-field region, which also should have been blasted by the impact into this region.
Much of the work is being done in Sheriden Cave in north-central Ohio's Wyandot County, a rich repository of material dating back to the Ice Age.
Tankersley first came into contact with West and Schaffer when they were invited guests for interdisciplinary colloquia presented by UC's Department of Geology this spring.
West presented on his theory that a large comet or asteroid, believed to be more than a mile in diameter, exploded just above the earth at a time when the last Ice Age appeared to be drawing to a close.
The timing attached to this theory of about 12,900 years ago is consistent with the known disappearances in North America of the woolly mammoth population and the first distinct human society to inhabit the continent, known as the Clovis civilization. At that time, climatic history suggests the Ice Age should have been drawing to a close, but a rapid change known as the Younger Dryas event instead ushered in another 1,300 years of glacial conditions. A cataclysmic explosion consistent with West's theory would have the potential to create the kind of atmospheric turmoil necessary to produce such conditions ...
Evidence Acquits Clovis People Of Ancient Killings, Archaeologists Say (Feb. 25, 2003) -- Archaeologists have uncovered another piece of evidence that seems to exonerate some of the earliest humans in North America of charges of exterminating 35 genera of Pleistocene epoch ...
From Wikipedia, the free encyclopedia
The Younger Dryas stadial, named after the alpine/tundra wildflower Dryas octopetala, and also referred to as the Big Freeze, was a brief (approximately 1,300 ± 70 years) cold climate period ... at the end of the Pleistocene, between approximately 12,800 and 11,500 years Before Present, and preceding the Preboreal of the early Holocene ...
Abrupt climate change
The Younger Dryas saw a rapid return to glacial conditions in the higher latitudes of the Northern Hemisphere between 12,900 [and] 11,500 years before present ...
Was the Younger Dryas global?
Answering this question is hampered by the lack of a precise definition of "Younger Dryas" in all the records. In western Europe and Greenland, the Younger Dryas is a well-defined synchronous cool period. But cooling in the tropical North Atlantic may have preceded this by a few hundred years.
South America shows a less well defined initiation but a sharp termination. The Antarctic Cold Reversal appears to have started a thousand years before the Younger Dryas, and has no clearly defined start or end; Huybers has argued that there is fair confidence in the absence of the Younger Dryas in Antarctica, New Zealand and parts of Oceania ...
ARTICLE from About.com (a part of The New York Times Company), 4-28-08,
Kris's Archaeology Blog
By K. Kris Hirst, About.com Guide to Archaeology since 1997
[Clovis and Black Mats]
"Although a lot of the hot news in archaeology these days is centered on Pre-Clovis, many scholars are focused on the end of the Clovis big game hunters.
Since the appearance of Clovis big game hunters on the North American continent has been redated to a span of a mere 300-500 years, researchers have been trying to trace the reasons for their disappearance. One possible reason is the death of all of the big game Clovis was hunting -- the so-called Pleistocene megafauna extinctions. The megafauna that disappeared between 15,000 and 10,000 years ago include mastodons, horses, camels, sloths, dire wolves, tapirs, and short-faced bears.
Because these megafauna disappeared at roughly the same time as the Clovis people (or at least their lifestyle), it has long been debated whether the Clovis people were the cause of the disappearance through overkill or merely the stressed-out survivors of a difficult climate change.
The Younger Dryas and the End of Clovis
Climatically, the end of Clovis coincides with the onset of the Younger Dryas period (abbreviated YD), which was substantially colder, drier and windier compared to the late Pleistocene and the early Holocene on either side of the YD. The end of the Pleistocene was a warming trend as the glaciers retreated; the YD was an abrupt and nasty surprise, a 1,000-year-long return to tundra conditions. The YD was one of our ancestors' occasional struggles with abrupt climate change ...
One of the geological markers of the Younger Dryas climatic episode is an organic-rich layer of soil called "sapropelic silt", "peaty muds", "paleo-aquolls" and most commonly, "black mat".
Black Mats and Clovis
The first archaeologist to describe the "black mat" on an archaeological site was C. Vance Haynes, who in the 1960s was working on the Murray Springs Clovis site in southeastern Arizona, where he noticed a black mat directly overlying the Clovis occupation.
A black mat is a thin layer of organic material, sometimes described as 'peaty', that Haynes thought at the time represented evidence of a drought. Black mats vary in color and content, but they are always found to have been created in moist to wet conditions such as ponds, elevated or perched water tables, boggy areas, wet meadows and the margins of spring pools. Scientists debate the genesis of these mats, but one possible theory is that in a shallow pond, for example, dead plants and animals filter through the water and drop to the bottom, creating a thick organic layer ...
None of the black mats investigated at these Clovis sites contained any Clovis artifacts or any evidence of Pleistocene fauna, although beneath the mat, occasionally immediately beneath it, Clovis mammoth kills can be found. The mat at Murray Springs was dated to between 9,800 and 10,800 uncalibrated radiocarbon years before the present.
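(A note on units: an "uncalibrated radiocarbon year" is a conventional age computed from a sample's remaining carbon-14 using the Libby half-life, with no calendar calibration applied. Below is a minimal sketch of that conversion in Python; the fraction-modern value is illustrative, not a measurement from Murray Springs.)

import math

LIBBY_MEAN_LIFE = 8033.0  # years; follows from the conventional Libby half-life of 5,568 years

def uncalibrated_radiocarbon_age(fraction_modern):
    # Conventional (uncalibrated) age in radiocarbon years before present (BP).
    # fraction_modern is the sample's 14C activity relative to the AD 1950 standard.
    return -LIBBY_MEAN_LIFE * math.log(fraction_modern)

# A sample retaining about 26 percent of modern 14C dates to roughly 10,800
# radiocarbon years BP, near the older end of the Murray Springs mat dates.
print(round(uncalibrated_radiocarbon_age(0.26)))  # -> 10821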
Haynes did some poking around and discovered that black mats are fairly common, and in fact they are extremely common immediately above Clovis sites or Clovis-aged natural deposits. Similar black mats have been identified at other Clovis sites, each of which has been similarly dated.
A Summary of Black Mat Research
Further investigation in the late 1990s, reported in the peer-reviewed journal Quaternary Research, revealed that in the Great Basin at least, black mats actually occur between 11,000 and 6,300 years before present, and again between 2,300 years before present and the modern day; in fact they are still being created today at the margins of springs. However, an extensive cluster of them does date to near 10,000 years before present. Organic material identified within the YD black mat clearly supports the notion that these are wetland deposits.
In 2007 at the American Geophysical Union meetings, a session was given explaining the black mat as having followed the explosive destruction of a comet which was postulated to have broken into pieces over the Laurentide ice shield.
A formal paper, published in the Proceedings of the National Academy of Sciences in September of that year, described a thin sedimentary deposit immediately beneath the black mat, which contained high concentrations of magnetic grains with iridium, magnetic microspherules, glass-like carbon containing nanodiamonds, and fullerenes with extra-terrestrial helium.
Firestone et al. argue that the material underneath the black mat represents the detritus of an explosive low-density object -- a comet -- which destabilized the Laurentide Ice Sheet. Widespread fires ensued, followed by an accelerated melting of the ice sheet and then a cooling period (the YD), brought on perhaps by persistent cloudiness. This combination, they claim, led to the megafaunal extinctions and the end of the Clovis big game adaptation ...
The evidence seems to be very strong. In a (geological) instant, the extinction of horses, camels, mammoths, mastodons, dire wolves, American lions, and tapirs occurred. At the same time, Clovis patterns of big game hunting end ..."
TRANSCRIPT from the News Hour, 6-30-08
Oregon Discovery Challenges Beliefs About First Humans
Until recently, most scientists believed that the first humans came to the Americas 13,000 years ago. But new archaeological findings from a cave in Oregon are challenging that assumption. Lee Hochberg of Oregon Public Television reports on the controversial discovery.
LEE HOCHBERG, NewsHour correspondent: What archaeologist Dennis Jenkins found in the Paisley Caves in south central Oregon may turn on its head the theory of how and when the first people came to North America.
Many scientists believe humans first came to this continent 13,000 years ago across a land bridge from Asia and they started the so-called Clovis culture. But Jenkins says they may have been living in these caves 1,000 years earlier, toward the end of the last ice age.
DENNIS JENKINS, archeologist: We certainly knew that people had lived in the caves, but we did not have adequate dating to prove that they were here at the end of the ice age.
LEE HOCHBERG: In 2002, he and his students at the University of Oregon began excavating the caves looking for proof. They discovered 14,000-year-old camel bones and signs they'd been butchered by humans. And then, they found artifacts of the humans themselves.
DENNIS JENKINS: It even includes on the top of it what's probably a chunk of feces.
LEE HOCHBERG: Although it was hardly the stuff of Indiana Jones.
DENNIS JENKINS: We were looking and hoping, of course, to find spear points, evidence of their technology. Instead, what we found was the perfect human signature, their coprolites. It was, if you will, the perfect artifact.
LEE HOCHBERG: "Coprolites" is the archaeological term for fossilized feces. Jenkins says these are from humans, and they're more than 14,000 years old.
DENNIS JENKINS: So this was the evidence we had dug all summer to get to.
LEE HOCHBERG: It's not the first time this area of Oregon has given up clues suggesting humans were here earlier. Seventy years ago, another Oregon archaeologist, Luther Cressman, found these sandals, woven from sagebrush bark, in the cave.
LUTHER CRESSMAN, archaeologist: Now, the interesting thing here is that we have a toe flap. The toe fit in here.
LEE HOCHBERG: And he found stone tools that carbon dating suggested were from the Pleistocene age, more than 13,000 years old.
LUTHER CRESSMAN: And to find these things down here at Fort Rock Cave, at 13,200 years ago, means that the people were down in the great basin before the last glaciation. That's why these things are so important.
Human diet clues from DNA tests
LEE HOCHBERG: But other scientists said the evidence wasn't definitive enough to prove humans were here at that time. And instead, the theory of the land bridge took hold.
DENNIS JENKINS: I'm going to pull it out. It looks just like what it is.
LEE HOCHBERG: In 2004, Jenkins and his colleagues took their new evidence, the coprolites, to the university lab to see if modern science could offer more answers. They found the coprolites reflected a human diet.
DENNIS JENKINS: Here we have bone, some hair, vegetation, material. Those are all good indicators that it's a human coprolite.
LEE HOCHBERG: Carbon dating showed that three of the coprolites and the animal bones found with them were 14,300 years old. And DNA tests showed six samples with distinct markers of ancient Native Americans.
Three hundred additional coprolites the team recovered are now being analyzed. Jenkins says he's confident he's found the earliest evidence of humans in North America, who looked like either current Native Americans or Paleo-Indian people.
DENNIS JENKINS: They were probably somewhat shorter than we are, 5'5", 5'6", perhaps. They would have been wearing clothing like we are that was made out of hides or perhaps bulrush.
We found little tiny threads that were .04 millimeters, I mean, so tiny they're as small as the threads in your shirt. Clearly, people were sewing their clothing, form-fitting clothing just like we have, shirts, pants, those kinds of things, perhaps moccasins.
LEE HOCHBERG: And he says their coprolites show they ate desert parsley, which grows six inches under the ground.
DENNIS JENKINS: The fact that they were exploiting that plant just like the Native Americans of this region were doing at later times tells us that they were very well-adapted to their environment. These were not explorers. These were people who were living in this area. They were at home here.
LEE HOCHBERG: And perhaps most importantly, they would have had to have come here in a different way than long believed. Since at that time the continent was covered by an ice sheet miles thick, land travel south from the land bridge would have been impossible.
The early humans would have had to come by boat to the Pacific coast and then traveled inland through a strip of warmer swampland. Early peoples are thought to have arrived in Australia by boat, but it's a new idea for America ...
LEE HOCHBERG: Jenkins believes his theory, published recently in Science magazine, will become widely accepted, but it will take a few years to erode archaeology's deeply entrenched Clovis-first bias.
DENNIS JENKINS: For 60 or more years, we have had the concept that Clovis was first. And it made such a nice package that it was very believable. And the Clovis door has now been jarred apart. And if this evidence holds and is not disproven, then there's no way you're going to close it again.
LEE HOCHBERG: Jenkins is going to gather more Paisley Cave samples this fall and try to link his discovery to other pre-Clovis finds in Chile.
HISTORICAL SKETCH OF AVON, OHIO, TO 1974 | 2026-01-23T18:47:47.659854 |
762,746 | 3.729007 | http://www.newworldencyclopedia.org/entry/Fern | Polystichum setiferum showing unrolling young frond
A fern, or pteridophyte, is any one of a group of plants classified in the Division Pteridophyta, formerly known as Filicophyta. A fern is a vascular plant that differs from the more primitive lycophytes in having true leaves (megaphylls) and from the more advanced seed plants (gymnosperms and angiosperms) in lacking seeds, and instead reproducing with spores.
There are an estimated 10,000-15,000 known species of ferns, classified in about 40 families (Swale 2000). There are also plants known as "fern allies" that are also vascular plants and reproduce via spores, but are not true ferns. Hassler and Swale (2001) compiled a list of 12,838 ferns and fern allies in three classes, 19 orders, 58 families, and 316 genera.
Ferns are among the oldest land plants, dating back to the Carboniferous period (359 to 299 million years ago), when they are considered to have been the dominant type of vegetation. The fronds of some Carboniferous ferns are almost identical to those of living species. Reproduction via spores preceded the development of angiosperm reproduction.
Ferns range in size from some aquatic species a few centimeters high to some tree ferns that can grow more than 20 meters high, with fronds over three meters long.
Ferns are distributed throughout the world, including tropical, temperate, and Arctic environments, although most species are located in tropical regions. They tend to grow in shady, damp areas, but are also found on rocks and dry ground. Some species grow on trees.
Families such as Marattiaceae, Gleicheniaceae, Grammitidaceae, Schizaeaceae, Cyatheaceae, Blechnaceae, and Davalliaceae are almost exclusively tropical, and the genera Athyrium, Cystopteris, Dryopteris, and Polystichum are exclusive to temperate and Arctic regions.
Many fern species occur as disjunct populations across a geographic range, which is thought to be the result of long-distance spore dispersal; however, disjunct populations split across continents have also been found. These are thought to be ancient remnant populations dating back to a time when the continents were arranged differently and the populations were linked together.
Like the sporophytes of seed plants, those of ferns consist of:
- Stems: Most often an underground creeping rhizome, but sometimes an above-ground creeping stolon (an aerial shoot able to produce adventitious roots and new offshoots of the same plant; e.g., Polypodiaceae), or an above-ground erect semi-woody trunk (e.g., Cyatheaceae) reaching up to 20 m in a few species (e.g., Cyathea brownii on Norfolk Island and Cyathea medullaris in New Zealand).
- Leaf: The green, photosynthetic part of the plant. In ferns, it is often referred to as a frond, but this is because of the historical division between people who study ferns and people who study seed plants, rather than because of differences in structure. New leaves typically expand by the unrolling of a tight spiral called a crozier or fiddlehead. This uncurling of the leaf is termed circinate vernation. Leaves are divided into two types:
- Trophophyll: A leaf that does not produce spores, instead only producing sugars by photosynthesis. Analogous to the typical green leaves of seed plants.
- Sporophyll: A leaf that produces spores. These leaves are analogous to the scales of pine cones or to the stamens and pistils of gymnosperms and angiosperms, respectively. Unlike in the seed plants, however, the sporophylls of ferns are typically not very specialized, looking similar to trophophylls and producing sugars by photosynthesis as the trophophylls do.
- Roots: The underground non-photosynthetic structures that take up water and nutrients from soil. They are always fibrous and are structurally very similar to the roots of seed plants.
The gametophytes of ferns, however, are very different from those of seed plants. They typically consist of:
- Prothallus: A green, photosynthetic structure that is one cell thick, usually heart- or kidney-shaped, 3-10 mm long and 2-8 mm broad. The thallus produces gametes by means of:
- Antheridia: Small spherical structures that produce flagellate sperm.
- Archegonia: Flask-shaped structures that each produce a single egg at the bottom, reached by sperm swimming down the neck.
- Sporangia: The reproductive structures of ferns. These are small sacs or capsules containing the spores by which ferns reproduce. They are found on the underside of the frond, arranged in a pattern associated with the venation of the leaf. A cluster of sporangia is called a sorus; in some ferns a protective covering called the indusium covers the sorus.
- Rhizoids: root-like structures that consist of single greatly-elongated cells that take up water and nutrients.
Like all vascular plants, ferns have a life cycle often referred to as alternation of generations, characterized by a diploid sporophytic and a haploid gametophytic phase. Unlike the gymnosperms and angiosperms, in ferns the gametophyte is a free-living organism. The life cycle of a typical fern is as follows:
- A sporophyte (diploid) phase produces haploid spores by meiosis;
- A spore grows by cell division into a gametophyte, which typically consists of a photosynthetic prothallus, a short-lived and inconspicuous heart-shaped structure typically two to five millimeters wide, with a number of rhizoids (root-like hairs) growing underneath, and the sex organs.
- The gametophyte produces gametes (often both sperm and eggs on the same prothallus) by mitosis;
- A mobile, flagellate sperm fertilizes an egg that remains attached to the prothallus;
- The fertilized egg is now a diploid zygote and grows by mitosis into a sporophyte (the typical "fern" plant).
Evolution and classification
Ferns first appear in the fossil record in the early Carboniferous. By the Triassic, the first evidence of ferns related to several modern families appeared. The "great fern radiation" occurred in the late Cretaceous, when many modern families of ferns first appeared.
Ferns have traditionally been grouped in the Class Filices, but modern classifications assign them their own division in the plant kingdom, called Pteridophyta.
Two related groups of plants, commonly known as ferns, are actually more distantly related to the main group of "true" ferns. These are the whisk ferns (Psilotophyta) and the adders-tongues, moonworts, and grape-ferns (Ophioglossophyta). The Ophioglossophytes were formerly considered true ferns and grouped in the Family Ophioglossaceae, but were subsequently found to be more distantly related. Some classification systems include the Psilotophytes and Ophioglossophytes in Division Pteridophyta, while others assign them to separate divisions. Modern phylogeny indicates that the Ophioglossophytes, Psilotophytes, and true ferns together constitute a monophyletic group, descended from a common ancestor.
Recent phylogenetic studies suggest that horsetails, Equisetaceae, are derived "ferns." More recently (Pryer et al. 2004), clubmosses, spikemosses, and quillworts have been grouped as lycophytes. All ferns, whisk ferns, and horsetails have been grouped as monilophytes.
The true ferns may be subdivided into four main groups, or classes (or orders if the true ferns are considered as a class):
- Marattiopsida
- Osmundopsida
- Gleicheniopsida
- Pteridopsida
The last group includes most plants familiarly known as ferns. The Marattiopsida are a primitive group of tropical ferns with a large, fleshy rhizome, and are now thought to be a sibling taxon to the main group of ferns, the leptosporangiate ferns, which include the other three groups listed above. Modern research suggests that the Osmundopsida diverged first from the common ancestor of the leptosporangiate ferns, followed by the Gleicheniopsida.
A more complete classification scheme follows:
- Division: Pteridophyta
- Class: Marattiopsida
- Order: Marattiales
- Order: Christenseniales
- Class: Osmundopsida
- Order: Osmundales (the flowering ferns)
- Class: Gleicheniopsida
- Subclass: Gleicheniatae
- Order: Gleicheniales (the forked ferns)
- Order: Dipteridales
- Order: Matoniales
- Subclass: Hymenophyllatae
- Order: Hymenophyllales (the filmy ferns)
- Subclass: Hymenophyllopsitae
- Order: Hymenophyllopsidales
- Class: Pteridopsida
- Subclass: Schizaeatae
- Order: Schizaeales (including the climbing ferns)
- heterosporous ferns
- Order: Marsileales (Hydropteridales) (the water-clovers, mosquito fern, water-spangle)
- Subclass: Cyatheatae
- Order: Cyatheales (the tree ferns)
- Order: Plagiogyriales
- Order: Loxomales
- Subclass: Pteriditae
- Order: Lindseales
- Order: Pteridales (including the brakes and maidenhair ferns)
- Order: Dennstaedtiales (the cup ferns, including bracken)
- Subclass: Polypoditae
- Order: Aspleniales (the spleenworts)
- Order: Athyriales (including the lady ferns, ostrich fern, maiden ferns, etc.)
- Order: Dryopteridales (the wood ferns and sword ferns)
- Order: Davalliales (including the rabbits-foot ferns and Boston ferns)
- Order: Polypodiales (including the rock-cap ferns or Polypodies)
Fern ally is a general term covering a somewhat diverse group of vascular plants that are not flowering plants (angiosperms) and not true ferns. Like ferns, these plants reproduce by shedding spores to initiate an alternation of generations. There are three or four groups of plants considered to be fern allies. In various classification schemes, these may be grouped as classes or divisions within the plant kingdom. The more traditional classification scheme is as follows (here, the first three classes are the "fern allies"):
- Kingdom: Plantae
- Division Tracheophyta (vascular plants)
- Class Lycopsida, (fern-allies) the clubmosses and related plants
- Class Sphenopsida or Equisetopsida, (fern-allies) the horsetails and scouring-rushes
- Class Psilopsida, (fern-allies) the whisk ferns
- Class Filices, the true ferns
- Class Spermatopsida (or sometimes as several different classes of seed-bearing plants)
A more modern or newer classification scheme is:
- Kingdom Plantae
- Division Lycopodiophyta
- Class Lycopodiopsida, the clubmosses
- Class Selaginellopsida, the spikemosses
- Class Isoetopsida, the quillworts
- Division Equisetophyta, the horsetails and scouring-rushes
- Division Psilotophyta, the whisk ferns
- Division Ophioglossophyta, the adders'-tongues and moonworts
- Division Pteridophyta, the ferns
- Division Spermatophyta (or as several different divisions of seed-bearing plants)
Note that in either scheme, the basic subdivision of the fern allies is preserved, with the exception that the Ophioglossophyta (Ophioglossopsida), once thought to be true ferns, are now generally regarded by many to be a distinct group of fern allies.
Ferns are not of major direct economic importance, with one possible exception: ferns of the genus Azolla, very small floating plants that do not look like ferns and are commonly called mosquito ferns, are used as a biological fertilizer in the rice paddies of southeast Asia, taking advantage of their ability to fix nitrogen from the air into compounds that can then be used by other plants.
Other ferns with some economic significance include:
- Dryopteris filix-mas (male fern), used as a vermifuge
- Rumohra adiantiformis (floral fern), extensively used in the florist trade
- Osmunda regalis (royal fern) and Osmunda cinnamomea (cinnamon fern), the root fiber being used horticulturally; the fiddleheads of O. cinnamomea are also used as a cooked vegetable
- Matteuccia struthiopteris (ostrich fern), the fiddleheads used as a cooked vegetable in North America
- Pteridium aquilinum (bracken), the fiddleheads used as a cooked vegetable in Japan
- Diplazium esculentum (vegetable fern), a source of food for some native societies
- Pteris vittata (Brake fern), used to absorb arsenic from the soil
- Tree ferns, used as building material in some tropical areas
In some cases, ferns provide negative value, such as in their role as weeds in agriculture.
Several non-fern organisms are called "ferns" and are sometimes mistakenly believed to be ferns. These include:
- "Asparagus fern" - This may apply to one of several species of the monocot genus Asparagus, which are flowering plants. A better name would be "fern asparagus."
- "Sweetfern" - This is a shrub of the genus Comptonia.
- "Air fern" - This is an unrelated aquatic animal that is related to a coral; it is harvested, dried, dyed green, then sold as a plant that can "live on air." It looks like a fern but is actually a skeleton.
In addition, the book Where the Red Fern Grows has elicited many questions about the mythical "red fern" named in the book. There is no such known plant, although there has been speculation that the Oblique grape-fern, Sceptridium dissectum, could be referred to here, because it is known to appear on disturbed sites and its fronds may redden over the winter.
- May, L. W. 1978. "The economic uses and associated folklore of ferns and fern allies." Bot. Rev. 44: 491-528.
- Moran, R. C. 2004. A Natural History of Ferns. Portland, OR: Timber Press. ISBN 0881926671.
- Pryer, K. M., E. Schuettpelz, P. G. Wolf, H. Schneider, A. R. Smith, and R. Cranfill. 2004. "Phylogeny and evolution of ferns (monilophytes) with a focus on the early leptosporangiate divergences." American Journal of Botany 91: 1582-1598.
- Pryer, K. M., H. Schneider, A. R. Smith, R. Cranfill, P. G. Wolf, J. S. Hunt, and S. D. Sipes. 2001. "Horsetails and ferns are a monophyletic group and the closest living relatives to seed plants." Nature 409: 618-622.
All links retrieved October 20, 2013.
- Croft, J. 1999. Checklist of World Ferns.
- Hassler, M., and B. Swale. 2001. Checklist of Ferns and Fern Allies.
- Knouse, J. A. Bibliography of Major Pteriodological Works.
New World Encyclopedia writers and editors rewrote and completed the Wikipedia article in accordance with New World Encyclopedia standards. This article abides by the terms of the Creative Commons CC-by-sa 3.0 License (CC-by-sa), under which it may be used and disseminated with proper attribution crediting both New World Encyclopedia contributors and the volunteer contributors of the Wikimedia Foundation.
Note: Some restrictions may apply to use of individual images which are separately licensed. | 2026-01-30T01:12:20.588717 |
538,436 | 3.600562 | http://wordpress.clarku.edu/tlivdahl/research/egg-hatch/egg-hatch-inhibition/ | Larva-induced egg hatch inhibition
Eggs within the genera Ochlerotatus and Aedes are capable of refraining from hatching when their environment is crowded with large larvae. We have shown this in laboratory situations (Livdahl et al. 1984) and field experiments (Livdahl and Edgerly 1987) with O. triseriatus. We also have laboratory evidence that this phenomenon occurs in A. albopictus and A. aegypti, and that larvae inhibit eggs interspecifically, with differing intensities of inhibition and egg sensitivities (Edgerly et al. 1993).
Hatch rates of Ochlerotatus triseriatus eggs immersed in treeholes that had been stocked with fixed densities of O. triseriatus larvae in August 1986, pooled across four immersion periods (2, 4, 8, 16 d). There were no significant differences among the immersion periods, probably because the eggs quickly entered diapause.
This novel interspecies interaction may have implications for the invasion of North America by A. albopictus, whose eggs showed the lowest sensitivity to high larval densities and whose larvae were the most inhibitory. The mechanism of this inhibition is thought to be larval grazing of microbes from egg surfaces, which may remove the source of the oxygen depletion that is necessary to stimulate egg hatch (Edgerly and Marvier 1991). Inhibition of egg hatching may provide a mechanism for the unhatched egg to choose a time for hatching that minimizes potential competition with, and possible cannibalism by, larger larvae. | 2026-01-26T17:27:28.493566 |
1,095,357 | 3.710434 | http://astronomy.com/news-observing/news/2006/04/prometheus%20pull | Saturn's moon Prometheus borrows material from the planet's faint F ring.
April 12, 2006
Saturn's faint outer ringlets have been a source of fascination since the Voyager spacecraft flybys in the early 1980s first showed them to have mysterious kinks and twists. While scientists think these rings are gravitationally corralled and kept in line by neighboring small moons, interactions between one particularly tangled strand known as the F ring and a small moon named Prometheus have remained a puzzle. New high-resolution photos beamed back from the Cassini spacecraft now orbiting Saturn are revealing even stranger structures never before seen in any planetary ring.
The latest images show what look like bare channels carved out within Saturn's F ring. Also, streamers of ring material appear to fly off periodically toward the 60-mile-wide (100 kilometers) Prometheus, which lies just inside the twisted ringlet.
Cassini has seen the streamers, which give the ring its knotted appearance, on previous passes. Astronomers think Prometheus' gravity pulls sand-size rock and ice material away from the ring every time the moon's elliptical orbit brings it in for a close encounter.
A team of researchers at the University of London, England, working with computer models of hundreds of thousands of virtual ring particles, has created a simulation of this ring-moon interaction. The model helps explain features seen in the latest images of the gas giant's outer ring. "The models are in excellent agreement with structures observed in the Cassini images," says team member Carlos Chavez.
Until the Cassini flyby of the rings last spring, researchers thought this thievery of ring material was permanent. But now they know these gaps are due to Prometheus' gravity temporarily pulling particles away from the main stream during close encounters.
"It is like a crowd of people walking in a number of lines in the same direction down a street. Suddenly, someone else comes from the other side of the street and collides with a few of them. He then tells them to come with him, and walks away. Only people in the closest lines follow him, which produces gaps in the crowd. However, they return back to the main group shortly afterwards," explains Chavez.
Presenting his team's results at the 2006 Royal Astronomical Society's National Meeting, held last week in Leicester, England, Chavez also revealed that less than 1 percent of the stolen ring particles actually end up colliding with Prometheus. This is a surprisingly low number, as astronomers previously thought the moon was bombarded by most of the material pulled out of the F ring.
Because Prometheus has a synchronous rotation — the same face is locked toward Saturn — the research team suspects this ring-moon interaction affects the satellite's surface albedo, or reflectivity. Already, their computer model suggests these collisions occur on the moon's trailing face and around the equatorial region.
The team hopes to get a ringside seat with Cassini and see if its predictions are correct in 2009, when one of the most dramatic encounters occurs. That's when Prometheus will be farthest from Saturn, the nearby particles of the F ring will be closest to Saturn, and the moon and ring will be closest to each other.
Andrew Fazekas is an astronomy columnist and lecturer based in Montreal. | 2026-02-04T03:38:20.732706 |
165,588 | 3.660825 | http://news.discovery.com/human/psychology/walking-rooms-forget-111123.htm | Ever get up to retrieve something from another room only to completely forget what you needed after crossing the doorway?
You’re not alone, and scientists think forgetful trips between rooms result from how our brains interpret spatial information.
Researchers already know that walking from one space to another makes people more likely to forget tasks, compared with people who don't make such a transition. Called the "location-updating effect," the phenomenon also causes people transitioning between rooms (even virtual ones) to take more time when attempting to recall items from memory.
Moving from one space to another seems to cue the brain to refresh itself and pay attention to the new space, making it harder to recall information from the previous space. By then, the previous experience is already filed away in the brain’s working memory, which is why recalling what you need can seem unnecessarily arduous.
The new research, led by Gabriel Radvansky at the University of Notre Dame, aims to find out whether the way a person experiences the environment alters memory in the same way. For instance, does a person who is more immersed in the surrounding environment remember differently?
In three experiments, the group sought to find out. In some setups, college-aged students were immersed in virtual environments similar to what users would see in the game Half-Life. In others, they moved between real rooms. Participants carried objects while moving between spaces and were then asked to recall the items they were currently carrying or had recently put down.
The team discovered that immersion didn’t really affect reactions, as people seemed to forget in both scenarios. They also looked at whether returning to familiar rooms helped people recall what they recently forgot. It didn’t.
Another study on the topic showed that participants forgot information after crossing through a doorway, regardless of whether it was something made easier to remember by looking at the surrounding environment.
The authors point out that the location-updating effect may also tie in with humans’ tendency to remember events in segments rather than on a continuum.
The findings also hint that moving between spaces appears to have a greater effect on memory than a person’s immersion or engagement in the room itself.
| 2026-01-20T19:09:09.620007 |
1,061,911 | 3.565742 | http://www.webelements.com/nexus/search/results/taxonomy%3A257?page=1 | The Royal Swedish Academy of Sciences has decided to award the Nobel Prize in Chemistry for 2006 to Prof Roger D. Kornberg of Stanford University (Stanford, CA, USA) "for his studies of the molecular basis of eukaryotic transcription".
In order for our bodies to make use of the information stored in the genes, a copy must first be made and transferred to the outer parts of the cells. There it is used as an instruction for protein production; it is the proteins that, in their turn, actually construct the organism and carry out its functions. The copying process is called transcription. Roger Kornberg was the first to create an actual picture of how transcription works at a molecular level in the important group of organisms called eukaryotes (organisms whose cells have a well-defined nucleus). Mammals like ourselves are included in this group, as is ordinary yeast.
Transcription is necessary for all life. This makes the detailed description of the mechanism that Roger Kornberg provides exactly the kind of "most important chemical discovery" referred to by Alfred Nobel in his will.
If transcription stops, genetic information is no longer transferred into the different parts of the body. Since these are then no longer renewed, the organism dies within a few days. This is what happens in cases of poisoning by certain toadstools, like the death cap, since the toxin stops the transcription process. Understanding how transcription works also has fundamental medical importance. Disturbances in the transcription process are involved in many human illnesses such as cancer, heart disease and various kinds of inflammation.
The capacity of stem cells to develop into different types of specialized cells with well-defined functions in different organs is also linked to how transcription is regulated. Understanding more about the transcription process is therefore important for the development of different therapeutic applications of stem cells.
Forty-seven years ago, the then twelve-year-old Roger Kornberg came to Stockholm to see his father, Arthur Kornberg, receive the Nobel Prize in Physiology or Medicine (1959) for his studies of how genetic information is transferred from one DNA-molecule to another. Kornberg senior had described how genetic information is transferred from a mother cell to its daughters. What Roger Kornberg himself has now done is to describe how the genetic information is copied from DNA into what is called messenger-RNA. The messenger-RNA carries the information out of the cell nucleus so that it can be used to construct the proteins.
Kornberg's contribution has culminated in his creation of detailed crystallographic pictures describing the transcription apparatus in full action in a eukaryotic cell. In his pictures (all of them created since 2000) we can see the new RNA-strand gradually developing, as well as the role of several other molecules necessary for the transcription process. The pictures are so detailed that separate atoms can be distinguished and this makes it possible to understand the mechanisms of transcription and how it is regulated.
Earth's most severe mass extinction - an event 250 million years ago that wiped out 90 percent of all marine species and 70 percent of land vertebrates - was triggered by a collision with a comet or asteroid, according to a team led by researchers at the University of Washington, Seattle, USA. The evidence is based on findings involving carbon molecules called buckminsterfullerenes (C60, or Buckyballs) with the gases helium and argon trapped inside their cage structures.
The scientists do not know the site of the impact 250 million years ago, when all Earth's land formed a supercontinent called Pangea. However, the space body left a calling card - a much higher level of complex carbon molecules called buckminsterfullerenes, or Buckyballs, with the noble (or chemically nonreactive) gases helium and argon trapped inside their cage structures. Fullerenes, which contain 60 or more carbon atoms and have a structure resembling a soccer ball or a geodesic dome, are named for Buckminster Fuller, who invented the geodesic dome.
The researchers know these particular Buckyballs are extraterrestrial because the noble gases trapped inside have an unusual ratio of isotopes. For instance, terrestrial helium is mostly helium-4 and contains only a small amount of helium-3, while extraterrestrial helium - the kind found in these fullerenes - is mostly helium-3.
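A back-of-the-envelope sketch of that isotope argument, in Python (the counts and the decision threshold below are hypothetical illustrations, not data from the study): atmospheric helium has a 3He/4He ratio of roughly 1.4 x 10^-6, so expressing a measured ratio in multiples of the air value makes a strong helium-3 enrichment stand out immediately.

# Illustrative sketch of the isotope test described above; the counts are
# hypothetical and the threshold is a loose rule of thumb, not the paper's method.
AIR_RATIO = 1.4e-6  # commonly quoted atmospheric 3He/4He ratio ("Ra")

def ratio_relative_to_air(he3, he4):
    # Express a measured 3He/4He ratio in multiples of the atmospheric ratio.
    return (he3 / he4) / AIR_RATIO

r = ratio_relative_to_air(he3=5.0, he4=10_000.0)  # hypothetical ion counts
origin = "extraterrestrial" if r > 100 else "terrestrial or crustal"
print(f"3He/4He = {r:.0f} x air ratio -> helium looks {origin}")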
"These things form in carbon stars. That's what's exciting about finding fullerenes as a tracer," according to Luann Becker, one of scientific team involved. The extreme temperatures and gas pressures in carbon stars are perhaps the only way extraterrestrial noble gases could be forced inside a fullerene, she said. These gas-laden fullerenes were formed outside the Solar System, and their concentration at the Permian-Triassic boundary means they were delivered by a comet or asteroid.
This is interesting. NASA scientists are examining a seemingly magical way to produce high-quality crystals.
Perhaps a NASA laboratory is an unlikely setting for a magic show. Nevertheless, this is where Frank Szofran and colleagues are growing high-quality crystals using a method as amazing as any conjuring trick. By carefully cooling a molten germanium-silicon mixture inside a cylindrical container, they coax it into forming a single large and extraordinarily well-ordered crystal. Such crystals have very few defects because, remarkably, they never touched the walls of the very container in which they grew.
Scientists at the Lawrence Berkeley National Laboratory in California, USA, have discovered that nanocrystals of germanium embedded in silica glass don't melt until the temperature rises almost 200 degrees Kelvin above the melting temperature of germanium in bulk. What's even more surprising, these melted nanocrystals have to be cooled more than 200 K below the bulk melting point before they resolidify. Such a large and nearly symmetrical "hysteresis" -- the divergence of melting and freezing temperatures above and below the bulk melting point -- has never before been observed for embedded nanoparticles.
"Melting and freezing points for materials in bulk have been well understood for a long time," says Eugene Haller (one of the authors) , "but whenever an embedded nanoparticle's melting point goes up instead of down, it requires an explanation. With our observations of germanium in amorphous silica and the application of a classical thermodynamic theory that successfully explains and predicts these observations, we've made a good start on a general explanation of what have until now been regarded as anomalous events."
The research was conducted because the properties of germanium nanoparticles embedded in amorphous silicon dioxide matrices have promising applications. "Germanium nanocrystals in silica have the ability to accept charge and hold it stably for long periods, a property which can be used in improved computer memory systems. Moreover, germanium dioxide (germania) mixed with silicon dioxide (silica) offers particular advantages for forming optical fibers for long-distance communication."1
- 1. Large Melting-Point Hysteresis of Ge Nanocrystals Embedded in SiO2, Physical Review Letters, Volume 97, Number 15, p. 155701 (2006).
Nobel laureate Richard Smalley, the Rice University professor who helped discover buckyballs (buckminsterfullerene, C60, the soccer-ball-shaped form of carbon), has died at the age of 62. Richard Smalley shared the 1996 Nobel Prize in Chemistry with Sir Harold Kroto (Sussex) and Robert Curl (also Rice) for the identification of the new form of carbon, known as buckminsterfullerene because of its similarity to Buckminster Fuller's geodesic domes. The Richard E. Smalley Institute for Nanoscale Science and Technology continues to champion the efforts of Smalley through research, educational and community programs, corporate partnerships, and government relations.
Workers at The University of Wisconsin-Madison in the USA have managed to release thin membranes of semiconductors from a substrate and transfer them to new surfaces. The freed membranes which are just tens of nanometers thick retain all the properties of silicon in wafer form but the nanomembranes are flexible. By varying the thicknesses of the silicon and silicon-germanium layers composing them, membrane shapes are possible ranging from flat to curved to tubular.
Potential applications include flexible electronic devices, faster transistors, nano-size photonic crystals that steer light, and lightweight sensors for detecting toxins in the environment or biological events in cells.
The scientists made a three-layer nanomembrane composed of a thin silicon-germanium layer sandwiched between two silicon layers of similar thinness. The membrane sat upon a silicon dioxide layer in a silicon-on-insulator substrate. The oxide layer may then be etched away with hydrofluoric acid to free the nanomembrane.
Although the Wisconsin team grew their nanomembranes on silicon-on-insulator substrates, the method should apply to many substances beyond semiconductors, such as ferroelectric and piezoelectric materials. The key requirement is a layer, like an oxide, that can be removed to free the nanomembranes.1
- 1. Elastically relaxed free-standing strained-silicon nanomembranes, Nature Materials, Volume 5, Issue 5, pp. 388-393 (May 2006).
Workers in the USA have verified the production of element 114 in the reaction of 244-MeV 48Ca with 242Pu. Two chains of time- and position-correlated decays were assigned to 286114 and 287114. The observed decay modes, half-lives, and decay energies agree with the original claims of researchers at the Joint Institute for Nuclear Research at Dubna in Russia, first reported in 1999; such independent confirmation is vital. The measured cross sections at a center-of-target energy of 244 MeV for the 242Pu(48Ca,3–4n)287,286114 reactions were 1.4 (+3.2, -1.2) pb each, which is lower than the previously reported values.1
- 1. Independent Verification of Element 114 Production in the Ca-48 + Pu-242 Reaction, Physical Review Letters, Volume 103, Number 13, p. 132502 (2009).
The team of Berkeley Lab scientists that announced two years ago (1999) the observation of what appeared to be element 118 (at the time, the heaviest transuranic element yet claimed) has retracted its original paper after several confirmation experiments failed to reproduce the results. This means that the pages for element 118 and parts of the data for element 116 are wrong. | 2026-02-03T15:24:58.795952 |
1,017,656 | 3.814957 | http://www.crystalinks.com/prehistoric_mining.html | Since the beginning of civilization, people have used stone, ceramics and, later, metals found on or close to the Earth's surface. These were used to manufacture early tools and weapons; for example, high-quality flint found in northern France and southern England was used to create flint tools. Flint mines have been found in chalk areas where seams of the stone were followed underground by shafts and galleries.
The mines at Grimes Graves are especially famous, and like most other flint mines, are Neolithic in origin (ca 4000 BC-ca 3000 BC). Other hard rocks mined or collected for axes included the greenstone of the Langdale axe industry based in the English Lake District.
The oldest known mine in the archaeological record is the Lion Cave in Swaziland, which radiocarbon dating shows to be about 43,000 years old. At this site, Paleolithic humans mined the mineral hematite, which contains iron and was ground to produce the red pigment ochre. Mines of a similar age in Hungary are believed to be sites where Neanderthals may have mined flint for weapons and tools.
Ancient Egyptians mined malachite at Maadi. At first, Egyptians used the bright green malachite stones for ornamentation and pottery. Later, between 2613 and 2494 BC, large building projects required expeditions abroad to the area of Wadi Maghara in order to secure minerals and other resources not available in Egypt itself. Quarries for turquoise and copper were also found at Wadi Hammamat, Tura, Aswan and various other Nubian sites, on the Sinai Peninsula, and at Timna.
Mining in Egypt occurred in the earliest dynasties, and the gold mines of Nubia were among the largest and most extensive of any in Ancient Egypt; they are described by the Greek author Diodorus Siculus, who mentions that fire-setting was one method used to break down the hard rock holding the gold. One of the complexes is shown on one of the earliest known maps. The miners crushed the ore and ground it to a fine powder before washing the powder for the gold dust.
The Romans used hydraulic mining methods on a large scale to prospect for the veins of ore, especially a now obsolete form of mining known as hushing. It involved building numerous aqueducts to supply water to the minehead where it was stored in large reservoirs and tanks. When a full tank was opened, the wave of water sluiced away the overburden to expose the bedrock underneath and any gold veins. The rock was then attacked by fire-setting to heat the rock, which would be quenched with a stream of water. The thermal shock cracked the rock, enabling it to be removed, aided by further streams of water from the overhead tanks. They used similar methods to work cassiterite deposits in Cornwall and lead ore in the Pennines.
These methods had been developed by the Romans in Spain in 25 AD to exploit large alluvial gold deposits, the largest site being at Las Medulas, where seven long aqueducts were built to tap local rivers and sluice the deposits. Spain was one of the most important mining regions, but all regions of the Roman Empire were exploited. The Romans used reverse overshot water-wheels for dewatering their deep mines, such as those at Rio Tinto. In Great Britain the natives had mined minerals for millennia, but when the Romans came, the scale of the operations changed dramatically.
The Romans needed what Britain possessed, especially gold, silver, tin and lead. Roman techniques were not limited to surface mining. They followed the ore veins underground once opencast mining was no longer feasible. At Dolaucothi they stoped out the veins, and drove adits through barren rock to drain the stopes. The same adits were also used to ventilate the workings, especially important when fire-setting was used.
At other parts of the site, they penetrated the water table and dewatered the mines using several kinds of machine, especially reverse overshot water-wheels. These were used extensively in the copper mines at Rio Tinto in Spain, where one sequence comprised 16 such wheels arranged in pairs, and lifting water about 80 feet (24 m). They were worked as treadmills with miners standing on the top slats. Many examples of such devices have been found in old Roman mines and some examples are now preserved in the British Museum and the National Museum of Wales.
Mining in Europe has a very long history, examples including the silver mines of Laurium, which helped support the Greek city state of Athens. However, it is the Romans who developed large scale mining methods, especially the use of large volumes of water brought to the minehead by numerous aqueducts. The water was used for a variety of purposes, including using it to remove overburden and rock debris, called hydraulic mining, as well as washing comminuted or crushed ores, and driving simple machinery.
Mining as an industry underwent dramatic changes in medieval Europe. The mining industry in the early Middle Ages was mainly focused on the extraction of copper and iron. Other precious metals were also used, mainly for gilding or coinage. Initially, many metals were obtained through open-pit mining, and ore was primarily extracted from shallow depths rather than through the digging of deep mine shafts. Around the 14th century, the demand for weapons, armor, stirrups, and horseshoes greatly increased the demand for iron. Medieval knights, for example, were often laden with up to 100 pounds of plate or chain-link armor in addition to swords, lances and other weapons. The overwhelming dependency on iron for military purposes helped to spur increased iron production and extraction processes.
These new military applications coincided with a population explosion throughout Europe in the 11th-14th centuries, which increased the demand for precious metals to ease a currency shortage. The silver crisis of 1465 occurred when the mines had all reached depths at which the shafts could no longer be pumped dry with the available technology. Although the increased use of bank notes and of credit during this period did decrease the dependence on, and value of, precious metals, these forms of currency still remained vital to the story of medieval mining. Use of water power in the form of water mills was extensive; mills were employed in crushing ore, raising ore from shafts and ventilating galleries by powering giant bellows.
Black powder was first used in mining in Selmecbánya, Kingdom of Hungary (present-day Banská Štiavnica, Slovakia), in 1627. Black powder allowed the blasting of rock and earth to loosen and reveal ore veins, which was much faster than fire-setting, in which rock was exposed to heat and then doused with cold water. Black powder allowed the mining of previously impenetrable metals and ores. In 1762, the world's first mining academy was established in the same town.
The widespread adoption of agricultural innovations such as the iron plowshare, as well as the growing use of metal as a building material, was also a driving force in the tremendous growth of the iron industry during this period. Inventions like the arrastra were often used by the Spanish to pulverize ore after being mined. This device employed animal power and utilized mechanical principles similar to that of the ancient Middle Eastern technology of grain threshing.
Much of our knowledge of medieval mining techniques comes from books such as Biringuccio's De la pirotechnia and, probably most importantly, from Georg Agricola's De re metallica (1556). These books detail many different mining methods used in German and Saxon mines. One of the prime issues confronting medieval miners (and one which Agricola explains in detail) was the removal of water from mining shafts. As miners dug deeper to access new veins, flooding became a very real obstacle. As a result, the mining industry became dramatically more efficient and prosperous as various mechanical and animal-driven pump systems were implemented.
In North America there are ancient, prehistoric copper mines along Lake Superior. Indians availed themselves of this copper starting at least 5000 years ago, and copper tools, arrowheads, and other artifacts that were part of an extensive native trade network have been discovered. In addition, obsidian, flint, and other minerals were mined, worked, and traded. While the early French explorers that encountered the sites made no use of the metals due to the difficulties of transporting them, the copper was eventually traded throughout the continent along major river routes. In Manitoba, Canada, there are also ancient quartz mines near Waddy Lake and surrounding regions.
In the early colonial history of the Americas, native gold and silver was quickly expropriated and sent back to Spain in fleets of gold- and silver-laden galleons mostly from mines in Central and South America. Turquoise dated at 700 A.D. was mined in pre-Columbian America; in the Cerillos Mining District in New Mexico, estimates are that about 15,000 tons of rock had been removed from Mt. Chalchihuitl using stone tools before 1700.
Mining in the United States became prevalent in the 19th century, and the General Mining Act of 1872 was passed to encourage mining of federal lands. As with the California Gold Rush in the mid 19th century, mining for minerals and precious metals, along with ranching, was a driving factor in the Westward Expansion to the Pacific coast. With the exploration of the West, mining camps were established and expressed a distinctive spirit, an enduring legacy to the new nation; Gold Rushers would experience the same problems as the Land Rushers of the transient West that preceded them. Aided by railroads, many traveled West for work opportunities in mining. Western cities such as Denver and Sacramento originated as mining towns.
Oldest mine in the Americas found MSNBC - May 20, 2011
Archaeologists uncover oldest mine in the Americas PhysOrg - May 19, 2011
Archaeologists have discovered a 12,000-year-old iron oxide mine in Chile that marks the oldest evidence of organized mining ever found in the Americas, according to a report in the June issue of Current Anthropology.
A team of researchers led by Diego Salazar of the Universidad de Chile found the 40 meter trench near the coastal town of Taltal in northern Chile. It was dug by the Huentelauquen people - the first settlers in the region - who used iron oxide as pigment for painted stone and bone instruments, and probably also for clothing and body paint, the researchers say.
The remarkable duration and extent of the operation illustrate the surprising cultural complexity of these ancient people. "It shows that mining was a labor-intensive activity demanding specific technical skills and some level of social cooperation transmitted through generations," Salazar and his team write.
An estimated 700 cubic meters and 2,000 tons of rock were extracted from the mine. Carbon dates for charcoal and shells found in the mine suggest it was used continuously from around 12,000 years ago to 10,500 years ago, and then used again around 4,300 years ago. The researchers also found more than 500 hammerstones dating back to the earliest use of the mine.
"The regular exploitation of the site for more than a millennium ... indicates that knowledge about the location of the mine, the properties of its iron oxides, and the techniques required to exploit and process these minerals were transmitted over generations within the Huentelauquen Cultural Complex, thereby consolidating the first mining tradition yet known in America," the researchers write. The find extends by several millennia the mining sites yet recorded in the Americas.
Before this find, a North American copper mine dated to between 4,500 and 2,600 years ago was the oldest known in the Americas.
| 2026-02-02T23:22:12.346012 |
944,222 | 3.706111 | http://www.slashgear.com/researchers-discover-mice-have-complex-singing-skills-11251363/ | A recent study shows that mice have pretty sophisticated singing skills, including the ability to change tunes. Scientists have long known that dolphins and various birds share with humans the ability to learn and change tunes, but the vocal abilities of mice were previously believed to be innate.
Scientists believe that mice possess a rudimentary motor control center in their brain, which works in conjunction with the vocal cords to provide voluntary control over pitch and tune. Assuming this hypothesis is correct, the information could lead to more effective studies of speech disorders found in humans. Perhaps most surprising, this connection between the brain and vocal cords is not present in chimps and monkeys.
Mice have the ability to sing in different pitches. As with humans, some mice are tenors, while others are basses. Using this knowledge, scientists placed mice in a cage with other mice that exhibited a different pitch. After several weeks, the pitch of the mice had changed to better match the pitch of the other mice in the cage. For example, the tenors developed a deeper sound, while some of the basses took on a slightly higher pitch.
Not all scientists agree with the results of the study, however. Some researchers claim that rather than learning new tunes, the observed effects were the result of pitch convergences. In order to gain a better understanding of what this study has shown, plans are under way to examine the brain connections in the mice, as well as the genes and what effects they may have on the vocalization.
| 2026-02-01T22:49:22.241914 |
210,603 | 3.595655 | http://www.sierraclub.ca/en/node/4448 | Schools of fish show engineers how to squeeze much more power from wind farms
A new wind farm design mimics a school of fish to exploit wind turbulence, and could dramatically improve power output.
Familiar propeller-style wind turbines with large sweeping blades have almost reached their limit of efficiency.
But in a wind farm, they must be spaced widely apart to avoid turbulence from the other turbines.
This has limited wind farm power output to around two watts per square metre of land at favourable sites.
But redesigned wind farms could perhaps get up to 10 times more power from the same land.
A test array in the California desert takes a whole new approach to the problem, according to a study published in the Journal of Renewable and Sustainable Energy.
This new study uses "vertical axis" wind turbines that resemble upright, spinning egg whisks. Although they are less efficient individually than the propeller-style turbines, they are able to use turbulent winds from many directions.
But the big step forward comes from the layout of the array which is based on fluid dynamics around schooling fish.
"Organising the arrangement of wind turbines based upon the vortices shed by schooling fish is definitely a new approach," said aeronautical engineer Robert Whittlesey of the California Institute of Technology (Caltech).
"The fish aim to align themselves to optimise their forward propulsion," he writes, and this can be adapted in a turbine array to maximise energy extraction.
The new design uses closely-spaced pairs of counter-rotating turbines that funnel air to their neighbours, with little energy lost to turbulence.
The funnelling effect benefits more than just the neighbours: the power generated by the paired turbines can actually be greater than that from the same turbines working independently. In tests, a turbine five rows back still generated 95% of the power of the one on the front row.
A wind farm of this closely-packed design could produce 20 to 30 watts per square metre of land, around 10 times that of current wind farms.
Author of the study, Professor John Dabiri of Caltech, said: "While the connection between fish schooling and wind farms might seem non-intuitive at first, it is in fact a logical inference from the underlying flow physics."
The advantages don't stop there. At 10m high, the turbines used in this study were only around one tenth of the height of typical propeller-style turbines.
This means that they are less intrusive in the landscape, less visible to air-traffic control radar and could be less harmful to birds and bats.
The vertical-axis turbines are also "significantly more robust and probably less expensive. There are still some problems to be solved but they really deserve a second look" added Professor Charles Meneveau of Johns Hopkins University who was not involved in the study.
The big question now is whether this design works as a full sized wind farm. To work on this scale, energy from wind passing above the farm must be transferred to the turbines below by turbulence.
"It's a very interesting idea but this hasn't yet been shown," said Professor Loughhead of the UK Energy Research Centre. "Also, vertical-axis turbines face a lot of stress. It's difficult to make a tall turbine light enough to spin but rigid enough to stand up to the forces and vibrations that they're exposed to," he added.
"In this research field, the work seems to be met with great interest and a bit of healthy scepticism," observed Mr Whittlesey.
Further tests look promising though. "We have collected additional wind measurements this summer on an array of 18 turbines...The results suggest that the wind flow rates required for enhanced performance relative to horizontal-axis [propeller-style] wind turbines are regularly attained," said Professor Dabiri. | 2026-01-21T11:35:33.091367 |
920,927 | 3.819763 | https://news.nationalgeographic.com/news/2013/13/130517-billion-year-old-water-mine-canada-ancient-microbes-science/ |
Published May 17, 2013
Pockets of water trapped in rocks from a Canadian mine are over a billion years old, and the water could contain life forms that can survive independently from the sun, scientists said this week.
The ancient water was collected from boreholes at Timmins Mine beneath Ontario, Canada, at a depth of about 1.5 miles (2.4 kilometers).
"When these rocks formed, this part of Canada was the ocean floor," said study co-author Barbara Sherwood Lollar, an Earth scientist at Canada's University of Toronto.
"When we go down [into the mine] with students, we like to say imagine you're walking on the seafloor 2.6 billion years ago."
Working with U.K. colleagues Chris Ballentine and Greg Holland, Sherwood Lollar and her team found that the water was rich in dissolved gases such as hydrogen and methane, which could provide energy for microbes like those found around hydrothermal vents in the deep ocean.
In addition, the water contained different rare gases that include the elements helium, neon, argon, and xenon, which were created through interactions with the surrounding radioactive rock. By measuring the concentrations of isotopes of these "noble gases"—so called because they rarely interact with other elements—the team could estimate how long the water had been trapped underground and whether it had been isolated.
Depending on the noble gas analyzed, the age estimates for the water varied between 1.1 billion years old and 2.6 billion years old—or as old as the rocks in the mine itself.
"It shows us that there's been very little mixing between this water and the surface water," Sherwood Lollar said. "What we want to do with further work is see if we can narrow that [age range] down."
Teeming With Life?
Geologists have long known that a lot of water can be present in continental crust, locked away in microscopic voids in minerals, pore spaces between minerals, and veins and fractures in the rock. But what's been unclear is the age of such water, said geochemist Steven Shirey, a senior scientist at the Carnegie Institution for Science.
"The question is how old is it? Is it water that's part of current circulation with surface water? Or is it water that retains old chemistry and potential biota?" said Shirey, who was not involved in the study.
The new findings, detailed in this week's issue of the journal Nature, are evidence that ancient pockets of water can remain isolated in the Earth's crust for billions of years.
"That's the really exciting part about this study," Shirey said.
Sherwood Lollar and her team are testing the mine water to see if they can find evidence of living microbes. If life does exist in the water, she said, it could be similar to microbes previously found in far younger water flowing from a mine located 1.74 miles (2.8 kilometers) beneath South Africa.
Those microbes could survive without light from the sun, subsisting instead on chemicals created through the interactions between water and rock.
Such "buried" microbial communities are rare, and fascinating for scientists because they are often not interconnected.
"Each one of them may have a different age and a different history," Sherwood Lollar said. "It will be fascinating for us to look at the microbiology in each of them ... It'll tell us something about the evolution of life and the colonization of the subsurface."
The Timmins Mine water could also help scientists understand how much of the subsurface of the Earth is actually inhabited by life. The answer to that question has implications for life on other planets, such as Mars, scientists say.
"It opens up your horizons for what's possible," Shirey said. "If you think that you can have microbial life throughout the entire crust of the Earth, then all of a sudden it becomes very possible that life could live on other planets under the right condition."
That raises questions about potential life in relatively warm rock located beneath the cold surface of Mars, where liquid water could still exist.
"We're looking at billion-year-old rock here and we can still find flowing water that's full of the kind of energy that can support life," Sherwood Lollar said.
"If we find Martian rocks of the same age and in places of similar geology and mineralogy to our site, then there's every reason to think that we might be able to find the same thing in the deep subsurface of Mars."
There seems to be a debate over whether this is true or not. That's not a bad topic for the debate club at my school. I, for one, think this is not only possible but likely. Yes, rocks are porous, and it depends upon the type of rock, but think of how much time it would take for water to seep through them.
I understand the claim, but to say that the pocket of water is a billion years old seems impossible to me. The water is gathered in rock, and rock is very porous; any water gathered there, even with extreme isolation, would have seeped through or evaporated many times over.
I am excited about the possibility of microbes and other sustained life being found there, even though I suspect those are now tainted by the opening of the pocket.
they don't know it was trapped there for billions of years. It's all a wild guess. A wrong wild guess.
@John Cotter since you and others are contradicting it
@John Cotter where do you derive your expertise? not trying to be snarky i really want to know if you're in the field or something
@j. winchell You don't know it wasn't trapped there for billions of years. You've just made a wild guess. A wrong wild guess.
@j. winchell We can know by analyzing the water and measuring the depth in the earth's crust.
@Simon Willett @j. winchell While there may be proof that it is billions of years old, humans should probably remember that it takes a very long time before we actually know anything for certain. Who knows if it has actually been there for billions of years or not. It's just a guess based not on wild assumptions, but on what scientists have discovered so far. There are very few things we can be 100% certain about, but we can do our best to make sense of the world around us. In other words, claiming that either one of you is wrong is a wild guess in and of itself.
| 2026-02-01T15:16:22.515052 |
468,143 | 3.870726 | http://lejos.sourceforge.net/nxt/nxj/tutorial/WheeledVehicles/WheeledVehicles.htm | Controlling Wheeled Vehicles
A common type of robot is the two wheeled vehicle with independently controlled motors. This design uses differential steering and can turn in place. There are other possible mechanical designs, which are controlled by other classes. The classes that control vehicles work at several levels of abstraction. At the bottom are the motors that turn the wheels, controlled by the NXTRegulatedMotor class. The DifferentialPilot class uses the motors to control elementary moves: rotate in place, travel in a straight line, or travel in an arc. At the next level, the Navigator uses a DifferentialPilot to move the robot through a complicated path in a plane. To do navigation, the navigator needs the robot's location and the direction it is heading. It uses an OdometryPoseProvider to keep this information up to date. The relationships among these classes are shown in the table.
The flow of control is from the top down: the navigator controls the pilot, which controls the motors. But the flow of information is from the bottom up. The pilot uses information from the motors to control them. The pose provider uses odometry information from the pilot to update its current estimate of the robot pose. The pose consists of the robot's coordinates (x and y) and its heading angle (the direction it is facing). The navigator uses this data to calculate the distance and direction to its destination.
The flow of information uses the listener and event model. The pilot registers as a listener with the motors, which inform it when a motor rotation is started or completed. The pose provider registers as a listener with the pilot, which informs it of the start and completion of every movement. This event-driven information flow is automatic. In addition to it, the navigator can also request a pose estimate from the pose provider at any time, even while the robot is moving. This chain of listeners is established as the DifferentialPilot and Navigator are constructed.
The DifferentialPilot class controls a vehicle that has two driving wheels, each with its own motor. It steers the vehicle by controlling the speed and direction of rotation of its motors. It is one of several Move Controllers that are based on different mechanical designs, but the differential steering design is the most common.
Straight line movement
To control the robot moving in a straight line, use forward() or backward(). To control the distance the robot moves, use travel(double distance) or travel(double distance, boolean immediateReturn).
You can cause the robot to begin rotating in place by using rotate(double angle) or rotate(double angle, boolean immediateReturn).
If angle is positive, the robot turns to the left.
The immediateReturn parameter works as in the Motor methods, allowing the calling thread to do other work while the rotation task is in progress. If another method is called on the pilot while the rotation is in progress, the rotation will be terminated.
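As a concrete illustration, here is a minimal leJOS NXJ sketch of these calls. The wheel diameter (5.6) and track width (16.0) are placeholder values in centimetres that must be measured on your own robot, and Motor.A/Motor.C are an assumed wiring.

import lejos.nxt.Motor;
import lejos.robotics.navigation.DifferentialPilot;

public class PilotDemo {
    public static void main(String[] args) {
        // distances are in the same units as wheelDiameter (centimetres here)
        DifferentialPilot pilot = new DifferentialPilot(5.6, 16.0, Motor.A, Motor.C);
        pilot.travel(50);         // drive 50 cm in a straight line (blocking)
        pilot.rotate(90);         // turn 90 degrees to the left, in place
        pilot.travel(-50, true);  // back up; immediateReturn = true
        while (pilot.isMoving()) Thread.yield(); // do other work, or just wait
    }
}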
Write a program that uses a DifferentialPilot to trace out a square, using the travel(double distance) and rotate(double degrees) methods.
Write a program that traces 2 squares with increasing angle at the corners, then retraces the same path in the opposite direction. Modify the traceSquare method of program DifferentialPilot 1 so it can trace a square in either direction, and use it in this program (a sketch of such a method follows). This is a stringent test of the accuracy of the wheel diameter and track width constants you use in your pilot.
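One possible shape for the square-tracing method, as a sketch rather than an official solution (it assumes a pilot constructed as in the sketch above):

// Trace a square with the given side length. Pass clockwise = true to
// reverse direction, since rotate() turns left for positive angles.
static void traceSquare(DifferentialPilot pilot, double side, boolean clockwise) {
    double turn = clockwise ? -90 : 90;
    for (int i = 0; i < 4; i++) {
        pilot.travel(side);
        pilot.rotate(turn);
    }
}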
Traveling in a circular path.
DifferentialPilot can also control the robot to move in a circular path using the steer() methods: steer(double turnRate), steer(double turnRate, double angle), and steer(double turnRate, double angle, boolean immediateReturn).
The turnRate parameter determines the radius of the path. A positive value means that the center of the circle is to the left of the robot (so the left motor drives the inside wheel). A negative value means the left motor drives the outside wheel. The absolute value is between 0 and 200, and this determines the ratio of inside to outside motor speed. The outside motor runs at the set speed of the robot; the inner motor is slowed down to make the robot turn. At turn rate 0, the speed ratio is 1.0 and the robot travels in a straight line. At turn rate 200, the speed ratio is -1 and the robot turns in place. Turn rate 100 gives speed ratio 0, so the inside motor stops. The formula is: speed ratio = (100 - abs(turnRate)) / 100.
The angle parameter determines the rotation angle at which the robot stops. If the angle is negative, the robot follows the circular path with the center defined by the turn rate, but it moves backwards.
Write a program that uses the ButtonCounter to enter the turn rate and angle variables, and then calls the steer() method. It does this in a loop so you can try different values of these parameters to control the robot path.
Methods that start the robot moving in a circular arc path: arcForward(double radius) and arcBackward(double radius). If the radius is positive, the center of rotation is on the left side of the robot. Methods that complete a circular arc: arc(double radius, double angle) and travelArc(double radius, double distance).
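A brief sketch of these steering calls (the parameter values are arbitrary examples):

pilot.steer(50);      // curve to the left; the inside wheel runs at half speed
pilot.stop();
pilot.steer(100, 90); // arc left until the heading has changed by 90 degrees
pilot.arc(30, 180);   // half circle about a center 30 cm to the left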
Communicating with OdometryPoseProvider
The OdometryPoseProvider keeps track of the robot position and heading. To do this, it needs to know about every move made by the DifferentialPilot. So the pose provider needs to register as a listener with the pilot, by calling the pilot's addMoveListener() method; the OdometryPoseProvider constructor that takes a pilot performs this registration for you.
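A minimal sketch, reusing the pilot from the earlier example:

import lejos.robotics.localization.OdometryPoseProvider;
import lejos.robotics.navigation.Pose;

OdometryPoseProvider posep = new OdometryPoseProvider(pilot); // registers itself as a move listener
pilot.travel(40);
pilot.rotate(90);
Pose here = posep.getPose(); // kept up to date automatically as the pilot moves
System.out.println(here.getX() + " " + here.getY() + " " + here.getHeading());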
Other methods for DifferentialPilot
If you need very accurate pilot movement, you might need to use speed and acceleration values less than the defaults.
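For example (the numbers are illustrative, and setAcceleration should be checked against your leJOS version):

pilot.setTravelSpeed(10);  // cm per second, below the default
pilot.setRotateSpeed(45);  // degrees per second for in-place turns
pilot.setAcceleration(20); // gentler ramps make odometry more accurate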
The CompassPilot is an extension of the DifferentialPilot class. It implements the same methods, but uses a Compass Sensor to ensure that the pilot does not deviate from the correct angle of robot heading.
It needs a HiTechnic or Mindsensors compass sensor plugged in to one of the sensor ports. Its constructors are similar to those of DifferentialPilot, but with the additional information of the compass sensor port.
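A sketch of construction; the constructor signature, sensor port, and dimensions below are assumptions to verify against your leJOS version.

import lejos.nxt.Motor;
import lejos.nxt.SensorPort;
import lejos.robotics.navigation.CompassPilot;

CompassPilot cpilot = new CompassPilot(SensorPort.S1, 5.6f, 16.0f, Motor.A, Motor.C);
cpilot.calibrate();  // rotate slowly in place to calibrate the compass
cpilot.travel(100);  // compass feedback holds the heading on this leg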
Additional methods in CompassPilot:
Write a program that does these steps:
Suggestion: while the robot is moving, nudge it off course and watch it steer back to the correct heading.
The responsibility of this class is to maintain a current estimate of the robot location and the direction in which it is heading. The robot heading uses Cartesian coordinates, with angles in degrees; 0 degrees is the direction of the positive x axis, 90 degrees is the positive y axis. The heading and the x and y coordinates are stored in a Pose object; the APIs for Pose and for OdometryPoseProvider are in the leJOS documentation.
The only methods you are likely to need to use are getPose() and setPose(aPose).
If you want to know about the inner workings of this class, read the source code. The odometry data is contained in a Move object; the API of this data carrier class is also in the leJOS documentation.
The Navigator class uses a pilot to control the robot movement, and a PoseProvider to keep track of the robot position. The navigator follows a route, a sequence of locations. Each location is an instance of the Waypoint class, and the route is an instance of the Path class; the APIs for both, and the documentation for the Navigator itself, are in the leJOS docs. The route behaves as a first-in, first-out queue. When a waypoint is reached, it is removed from the route and the robot goes to the next one. New waypoints can be added to the end of the route at any time.
Both constructors will register the pose provider as a MoveListener with the pilot.
All the methods (with one exception) in this class are non-blocking, i.e. they return immediately.
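A minimal route-following sketch (the coordinates are arbitrary; waitForStop() is presumably the one blocking exception mentioned above):

import lejos.robotics.navigation.Navigator;
import lejos.robotics.navigation.Waypoint;

Navigator nav = new Navigator(pilot);    // creates and registers its own pose provider
nav.goTo(100, 0);                        // queue a waypoint and start moving
nav.addWaypoint(new Waypoint(100, 100)); // extend the route while under way
nav.addWaypoint(new Waypoint(0, 0));
nav.waitForStop();                       // block until the whole path is finished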
Suppose your robot is in a known location and needs to get to a destination, but there are obstacles in the way, and those obstacles are shown on a map. What route should it follow? This class can find the shortest path to the destination and produce a route (a collection of Waypoints) that the Navigator can use. The map this class needs is a LineMap which, as its name suggests, consists of straight lines. The complete documentation for this class is in the leJOS docs. Using this class is very simple: its constructor takes the LineMap as its argument.
After you have constructed the path finder, you can get the route by using either of the route finding methods. They both use a Pose as the starting point, which might be returned by a pose provider, and a Waypoint as the destination. Both will throw a DestinationUnreachableException if no route can be found.
The shortest path, if not a direct line, will have waypoints at the ends of lines, such as the corners of obstacles on the map. But the physical robot is not a point, so if the center of the robot tries to pass through a corner, there will be a crash. One solution to this difficulty is to enlarge the map obstacles to allow clearance for the robot. However, this may be tedious to do by hand, and complex to do with software. A simpler scheme is to extend all the lines on the original map so that a corner is now represented by two points, each some distance from the corner. To make this modification of the map, use the lengthenLines(delta) method, which adds a segment of length delta to each line on the map. The extension delta should probably be at least the track width of the robot, plus an allowance for uncertainties in the robot location.
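Putting the pieces together, as a sketch based on the leJOS pathfinding package (class and package names should be checked against your version; map is a LineMap assumed to have been built elsewhere):

import lejos.robotics.navigation.DestinationUnreachableException;
import lejos.robotics.navigation.Waypoint;
import lejos.robotics.pathfinding.Path;
import lejos.robotics.pathfinding.ShortestPathFinder;

ShortestPathFinder finder = new ShortestPathFinder(map);
finder.lengthenLines(20); // clearance: at least the robot's track width, in map units
try {
    Path route = finder.findRoute(posep.getPose(), new Waypoint(200, 150));
    nav.followPath(route);
    nav.waitForStop();
} catch (DestinationUnreachableException e) {
    System.out.println("No route found");
}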
There are several other path finding classes that use various map representations; see the classes in the pathfinding package in the leJOS API documentation.
806,943 | 4.240938 | http://www.antennex.com/preview/Dec03/Dec603/Loopantennas.htm | By David Jefferies
What is a loop antenna?
A loop antenna has a continuous conducting path leading from one conductor of a two-wire transmission line to the other conductor. You may think of it as a "coil that radiates". The coil may have only a single turn. It may have an arbitrarily shaped perimeter, but the essence of a coil is that the defining wire encloses an area. Thus, a folded dipole is not a loop antenna in this sense, since the area inside the conductor path is vanishingly small.
Symmetric loop antennas have a plane of symmetry running along the feed and through the loop. Planar loop antennas lie in a single plane which also contains the conductors of the feed.
Three-dimensional loop antennas have wire which runs in all of the x,y, and z directions (in a rectangular Cartesian system). By definition they are not planar. They may, however, be symmetric about planes which contain the feed.
It is possible for the loop antenna plane not to contain the run of the feed. This matters for situations where the feed currents are not perfectly balanced.
What size is a loop antenna?
There are at least two distances which define the "notion of size" in a loop antenna. These are, the total length of wire between the "go" and "return" of the feed, and the largest distance from one point on the loop conductor to another, measured in a straight line (as light would propagate). One might also think of another distance that "matters", namely the distance from the feed junction with the loop, to the most remote point on the loop conductor. All these distances need to be thought of in units of a wavelength at the carrier frequency handled by the antenna.
Loops and probes in waveguide
If one wants to couple radiation from a two-wire feed (possibly coax) to a waveguide, one commonly does this by means of a probe (which couples to the electric field in the guide, and is the equivalent of a monopole) or by means of a loop (which couples to the magnetic field in the guide; the maximum number of magnetic field lines passes through the loop). Waveguide may be regarded as a microcosm of the great outside world.
Phase delay across a loop
Critical to the functioning of any loop antenna is the concept of the "phase delay" that occurs for em radiation to get from one point on the loop to another, some distance away. In the case of vanishingly small loops, the traditional calculation assumes that the current is the same everywhere around the loop perimeter. In this case, the radiation along any loop diameter arrives from an oppositely directed but parallel element of current after a short time delay, which puts in a phase shift so that the radiation contributions do not entirely cancel.
This traditional argument quickly leads to the result that the radiation resistance of a small circular loop rises as the (ratio of loop diameter to wavelength) raised to the fourth power. A small increase in loop diameter therefore results in a greatly increased radiation resistance.
Just as with a small rod antenna, where the radiation resistance rises as the square of the length of the exposed radiating wire (see radimp.html), so also in a loop antenna the radiation resistance, were it not for the cancellation effects, might be expected to rise as the square of the circumference, and therefore as the square of the loop radius or diameter in the case of a small circular loop. However, there are additional cancellation effects, and this puts in an additional factor proportional to the square of the diameter, radius, or circumference.
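For reference, the standard textbook statement of this fourth-power law, for a small loop of circumference C, is R_rad approximately equal to 20 pi^2 (C/lambda)^4 ohms; this figure is quoted from standard antenna references as a cross-check, not from this article.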
One of the most significant attributes of a loop antenna is that "go" current in one part of the loop is offset by "return" current in another. It is only because these go and return paths are physically separated in space that a small loop antenna can radiate at all. Otherwise the radiation from one little current element would exactly cancel that from the other. In fact, this does happen for radiation directions normal to the plane of a vanishingly small planar loop. In such directions there is a deep radiation null.
Quantitatively, for a circular loop of radius R, when R/lambda = 0.25, the diameter is half a wavelength and the 180 degree phase shift, for the radiation to get from the "go" current at one end of the diameter to the oppositely-directed "return" current at the other end of the diameter, results in an enhancement factor of 2 over the radiation from just a single current element. The radiation from one element arrives "in phase" with the contribution from the other element, half a wavelength away but opposite in sign. The perimeter is then (2 pi 0.25) wavelengths which is 1.57 wavelengths and so the assumption of constant current around the loop perimeter has broken down.
For R/lambda = 1/100 or 0.01, the field contributions nearly cancel. The expression for the "enhancement factor" is [2 sin(2 pi R/lambda)] which then evaluates to 0.126 very nearly. This is a lot less (1/16th) than that due to the quarter-wave radius loop (enhancement of 2) and will make (1/16)^2 = 1/256 difference to the contribution of these little elements to the radiated power and to the radiation resistance.
For R/lambda = 1/50 (or 0.02), the enhancement factor is 0.25, and for R/lambda = 1/20 (or 0.05) the enhancement factor is 0.61 and at this point the perimeter has got to 0.3142 of a wavelength.
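These values are easy to reproduce from the formula; here is a small self-contained sketch (the language is incidental, chosen only for illustration):

public class LoopEnhancement {
    // enhancement factor 2 sin(2 pi R/lambda) for a circular loop of radius R
    static double enhancement(double rOverLambda) {
        return 2.0 * Math.sin(2.0 * Math.PI * rOverLambda);
    }
    public static void main(String[] args) {
        for (double r : new double[] {0.01, 0.02, 0.05, 0.25})
            System.out.printf("R/lambda = %.2f -> factor %.3f%n", r, enhancement(r));
        // prints approximately 0.126, 0.251, 0.618, 2.000 -- the values in the text
    }
}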
Of course, in certain loop structures the size of the currents in different elements of length along the loop wire will vary. Thus, loop antennas which have a total wire length approaching or exceeding an appreciable fraction of a wavelength can be efficient radiators with radiation resistance that approaches a match to common feed-line impedances. It is only in vanishingly small loop antennas that we are justified in assuming that the current is the same at every point along the loop wire. In intermediate cases, this may sometimes be a justifiable approximation, but certain textbooks which treat a circular loop antenna of radius lambda/25 (which has a loop wire length of about lambda/4) as if the approximation were sufficiently valid, may be in serious error. It is partly for this reason that there is some controversy about the radiation resistance of intermediate-sized loop antennas.
The folded-dipole approximation
For a loop where the perimeter is about a whole wavelength, the folded dipole analogy may be better. We imagine the loop as being formed from a "bulged-out" folded half-wave dipole; the current distribution (shown in a figure in the original article) is described below.
The current in the element of wire diametrically opposite the feed is now directed from right to left, rather than from left to right as it would be in the vanishingly small loop. The currents up and down in the elements on the horizontal diameter have vanished, and it is for this reason that there is no radiation from this scenario in the plane of the loop in the horizontal direction.
The radiation resistance of this mode is quite high because the cancellation between currents on opposite ends of a diameter is no longer so complete. For a circular loop of perimeter one wavelength, the radius is 1/(2 pi) wavelengths, or about 0.16 wavelengths. We are still talking about a physically small-sized loop therefore. For example, at a wavelength of 20 metres such a loop would have a diameter of 6.4 metres and would be an effective wideband radiator.
The quarter-wave shorted line approximation
Now consider a loop which has a perimeter of just one half of a wavelength. At 20 metres wavelength this would be a loop of diameter 3.2 metres. We may consider a bulged-out length of transmission line having the same total wire length. A little thought shows that the transmission line model is a short circuited quarter-wave section of line. The input current is zero in the parallel line approximation, as the line presents an open circuit to the generator. Of course, this will not be exactly true when we have "bulged out" the line section; for one thing, the short circuit point will have moved physically closer to the feed.
What is clear, however, is that in this approximation the current in the element diametrically opposite the feed runs from left to right, not from right to left as it did in the folded dipole example. It is also going to be significantly larger than the current supplied by the feed.
As we increase the perimeter of the loop from a quarter wavelength to a half wavelength, there must therefore be a region where the current opposite the feed is smaller than the feed current, and indeed it must at some point pass through zero. Thus it is apparent that for all intermediate loops of diameter greater than 0.16 wavelength, the "small loop" approximation is not valid, and very significant radiation occurs. The Q will be reasonably low and the bandwidth and radiation resistance will have usefully large values. Those people with simulation packages may like to quantify these "general statements".
Phase shifts in the current distribution
Now, it is known that the radiation resistance for a small loop antenna is often swamped by the loss resistance due to the current being confined to a small skin depth of conductor at the wire or tube surface. Thus we have to consider phase shifts between oppositely-directed currents on opposite ends of a loop diameter (for the special case of a circular loop) brought about by the distributed inductance, resistance, and capacitance of the loop line. Loop radiation is often (usually) measured with the loop mounted in a vertical plane, and one goes away a significant distance on a flat ground so that the radiation is measured on a horizontal (level) path in the plane of the loop. It is easy to see that for a symmetric loop (as discussed above) the current elements contributing to the radiation, up and down on opposite sides of the loop, have balanced amplitudes and phases. Thus we expect the traditional formula for the radiation from a vanishingly small loop to be approximately correct for intermediate loops, in this scenario of horizontal path radiation. For those loops of this kind of size where radiated field strengths have been measured and reported, it is said that this was the measurement geometry.
For other directions of the diameters of the loop, which are at a slant (intermediate between horizontal and vertical) angle to the ground, there will be phase shifts and amplitude differences between the little elements of current flow at the ends of this diameter. As stated, these phase shifts are due to the combined effects of distributed series inductance and shunt capacitance, and series skin-effect loss resistance. However, for loops where the phase shifts have a significant effect, the total wire perimeter will probably be long enough so that the amplitudes of the current elements change as well, and this will also generate a contribution to the radiation.
Quantifying these phase and amplitude shifts would appear to be quite a difficult problem. In terms of the current flow through a continuous conductor having distributed inductance and resistance per unit length, Kirchhoff's current law indicates that the current is the same everywhere and that there are no phase shifts. However, if we then allow for the shunt capacitance between elements of the loop tube or wire (which has non-vanishing surface area), then the phasing of current flow around the loop becomes a function of the loop wire diameter (or tube diameter) as well as the skin depth loss. A simulation might sort out some of these issues, but as it would return global values for the antenna properties, the local behaviour might not be transparent.
It is not unreasonable to expect, therefore, that intermediate-sized loops will radiate more strongly along such slant diameters than the traditional theory might predict. This effect is expected to be quite small, overall. For, the cancellation of the oppositely-directed current elements is no longer so complete: they have differing amplitudes and phases. This will put up the radiation resistance of the loop. Paradoxically, therefore, the presence of loss resistance in the loop due to joule heating in the skin depth where the current flows, may enhance the total radiation over what it would be for a lossless conductor having the same geometry.
The loop (intermediate size) will therefore radiate up and down preferentially. If we mount the loop with its plane horizontal, it should be possible to check on this effect by moving around the loop at constant range, measuring the fields radiated as we go. The prediction is that there will be some anisotropy in the radiation, symmetrically disposed with reference to the feed axis.
For the case of non-constant amplitudes and phases, there will also be radiation normal to the plane of the loop. This forms the basis of a simple and sensitive experimental method of deciding whether a loop antenna is functionally "small", or in the "intermediate size" range. In the case of a truly "small" antenna, there should be a very deep null in the far field region at directions on the axis of rotation of the loop. This null progressively fills in as one makes the loop diameter larger. By the time the loop diameter is about lambda/10 there should be appreciable radiation along the loop axis. As remarked above, this will be accompanied by anisotropy in the radiation in the plane of the loop.
The folded dipole approximation to an intermediate loop antenna has deep nulls along the horizontal diameter (if the feed runs in from underneath) and for this reason radiation in this mode is not detected in the standard loop field-strength measurements reported by some others.
Inductance and self-resonance
Loop antennas have area, and generate magnetic fields which thread this area. These changing magnetic fields generate a back emf at the loop terminals which provides the loop with inductive impedance. Generally speaking, the larger the area, the larger the inductance. However, as the loop wire becomes longer, the phase shift between induced voltage and the current that gives rise to it changes. At a certain wire length, generally held for circular loops to be about 1/3 wavelength, the loop becomes "self resonant". Another way of looking at this phenomenon is to consider a loop to be a "bulged-out" length of parallel wire transmission line, shorted at the end remote from the feed point. In the case of a true parallel wire line, self resonance may be defined to occur when the total wire length (go and return) is 1/2 a wavelength plus the length of the short at the end. The line is then a quarter wave shorted stub.
A self-resonant antenna might be thought of as being "optimally efficient". Smaller loops require additional series or shunt capacitance to tune them to resonance so that the impedance presented to the feed becomes real.
In a self-resonant loop, then, it is clear that the standard small-loop theory breaks down. As we have indicated, this happens for total wire length of between 1/3 and 1/2 of a wavelength. The current distribution around the loop will be very non-uniform; the radiation resistance will be significantly large, and will swamp the loss resistance in all likelihood. As we gradually increase the dimensions of the loop antenna, nothing suddenly happens to the radiation properties. Therefore, we propose that the small loop limit really needs the loop radius to be very much less than the reported 1/25 of a wavelength. The controversy about small loops, however, deals with loops of precisely this size. We are not surprised.
Recently there have been reports in antenneX magazine about designs for three-dimensional small loops. In such loops, the ratio of wire length to maximum linear dimension of the antenna may be made significantly larger. Therefore, the small-loop limit will apply only for even yet smaller overall dimensions. The phase shifts, as we travel along the loop conductor, will result in less cancellation for oppositely-directed current elements and there will be enhanced radiation resistance and efficiency.
In three-dimensional small loops, it becomes easier to make the total wire runs longer than a wavelength, and to make adjacently-placed wire runs carry currents which run in the same directions, whose radiation therefore reinforces rather than subtracts.
Also, by wrapping up the wire runs, into a folded structure, the total current-carrying (and therefore radiating) elements inside the compact antenna volume may be significantly increased in length. This may be done without endlessly increasing the loop inductance, and so reaching self resonance at too short a length of radiating wire. For, the wire in 3-d may be run in such a way that the local magnetic fields generated subtract, from the contributions of different parts of the wire run. It appears, therefore, that we are still in the early stages of finding out what may be achieved, in small linear dimensions, with this exciting new class of antenna structures. -30-
~ antenneX ~ December 2003 Online Issue #80 ~
| 2026-01-30T18:33:59.882020 |
872,188 | 3.571428 | http://web.nickshanks.com/history/medieval/manor |
Feudalism and Manorialism
What is Manorialism?
Manorialism, otherwise known as the Manorial System, is the political, economic, and social system by which the peasants of medieval Europe were made dependent on their land and on their lord; the name derives from the word 'manor.' Its basic unit was the manor, a self-sufficient landed estate, or fief, that was under the control of a lord who enjoyed a variety of rights over it and over the peasants attached to it by means of serfdom. The manorial system was the most convenient device for organizing the estates of the aristocracy and the clergy in the European Middle Ages, and it made feudalism possible: the system that granted the upper-class clergy and nobles their power. Under other names the manorial system was found not only in France, Germany, Italy, Spain, and England, where it is known as Seignorialism, but also, in varying degrees, in the Byzantine Empire, Russia, Japan, and elsewhere. The manorial system's importance as an institution varied in different parts of Europe at different times. In western Europe it was flourishing by the 8th century and had begun to decline by the 13th century, while in eastern Europe it achieved its greatest strength after the 15th century.
Manorialism had its origins in the late Roman Empire, when large landowners had to consolidate their hold over both their lands and the labourers who worked them. This was a necessity in the midst of the civil disorders, enfeebled governments, and barbarian invasions that wracked Europe in the 5th and 6th centuries AD. In such conditions, small farmers and landless labourers exchanged their land or their freedom and pledged their services in exchange for the protection of powerful landowners who had the military strength to defend them. In this manner, the poor, defenseless, landless, and weak were ensured permanent access to plots of land which they could work in return for the rendering of economic services to the lord who held that land, allowing a sort of bartering of one service for another. This arrangement developed into the manorial system, which in turn supported the feudal aristocracy of kings, lords, and vassals.
The typical western European manor in the 13th century consisted partly of the cottages, huts, barns, and gardens of its peasants or serfs, which were usually clustered together to form a small village. There might also be a church, a mill, and a wine or oil press in the village. Close by was the fortified dwelling, or manor house, of the lord, which might be inhabited by him or merely by his steward if the lord happened to hold more than one manor. The village was surrounded by the arable land, divided into three large fields that were farmed in rotation, with one allowed to lie fallow each year. There were also usually meadows for supplying hay, pastures for livestock, pools for fish, and forests and wastelands for wood gathering and foraging.
The manorial system was also an important feature of the social structure of the Middle Ages. It resulted in the division of plant cultivation practices into what we now recognize as horticulture (gardens close to the manor house and enclosed for protection, containing fruits, vegetables, and herbs), agronomy (the cultivation of grains and forages in open fields farther away from the manor house), and forestry (the wild lands that contained game and forests and were not managed to any extent).
Manorialism is simply a way of describing the system that allowed stability in these dark times, generally known as the Middle Ages. Although the system is named for the manor, the castle and the fief were two very important features of it. A manor is generally more comfort-oriented than a castle, and the word 'manor' often refers to a large luxury home that is not built for protection or defense. A castle, by contrast, constructed almost completely as a stronghold for use in war, ensured that its lord would have many serfs under his rule: the extensive protection (not to mention the menacing appearance) attracted peasants and serfs eager to become subjects of the most powerful lord. In case of attack, the lord of the castle would allow his loyal serfs to retreat to his stronghold(s) in exchange for their services; otherwise they earned a living in a nearby town in order to pay taxes to the lord. The Manorial System provided stability in those dark times, when the only safety lay behind the thick, impenetrable walls of a mighty manor or, even more effective, a castle.
What is feudalism?
Medieval Europe characterizes what we think of as feudalism, but many of its inner workings, such as who owns what and why, go unnoticed. Originally, feudums were just military items and goods, such as armor, weaponry, and horses. By A.D. 1000, feudums had evolved into pieces of land known as fiefs. Since feudal Europe relied heavily on its agriculture, wealth was derived from land. Land, therefore, became a means of improving one's status. That meant that a fief was not only a gift bestowed on a vassal by his lord, but a way to turn a vassal into a member of the upper class, as became prevalent over time. In the marriage of nobles, a dowry of land was given to the husband, so that he would receive land from his family as well as from his wife's family. Fiefs were granted to a vassal only for the lifetime of that vassal. However, it became the norm for a son to inherit the title of his father. In fact, this became so common that primogeniture, the practice of bequeathing land and duties to the eldest son of a family, became established.
The decline of feudalism can be marked by the Crusades. After the Crusades, a demand was put on the production of goods, and a money system was introduced. Many peasants that worked the fiefs of nobles moved to the cities and towns in order to seek out a better future. This left the vassals of smaller fiefs unable to compensate the remaining peasants on the land for their work. As a result, vassals had to return to fighting as knights in the service of nobles in exchange for the fees that the nobles would give. Even this was difficult due to the advent of money, since it was much easier to hire someone to organize an army than to hire a knight, whose services were required for only 40 days out of the year. Also, land was becoming scarce, so money became a natural substitute. With the remaining vassals, lords found it more effective to pay a vassal an annual fee. Vassals then had loyalties to more than one lord in order to receive more money, and confusion among loyalties occurred. However, feudalism was in decline and would soon be replaced by the money system.
275,001 | 3.661204 | http://www.newton.dep.anl.gov/askasci/eng99/eng99161.htm | Liquid Density Applications
Name: Sonya H.
My nine year old son and I are working on his science
project. We have mixed three different types of liquids together to prove
that one liquid is more dense than the other, and will separate. In other
words, one liquid is heavier than the other.
I am having a problem applying this concept to everyday life, on a level
where he can understand.
I would appreciate it, if you would give me some examples of why we need
to know the density of a liquid, and how is it used in everyday life.
I am an engineer who deals with fluid systems. To me density is very
important when it comes to moving a fluid around. When I say fluid, I
mean both liquids and gases. Imagine if I want to pump water from point
A to point B. One of the important properties of the water is density,
because it determines what size pump I need to move it down the
pipeline. Now, imagine I want to pump ketchup from point A to point B.
It is more dense, so I need a more powerful pump to get it down the
line. Understand that density is just one of the properties that
determines how easy it is to pump a fluid from one place to another.
One example that your 9 year old would probably understand the best when
it comes to density of gases is a helium filled balloon. Helium trapped
in a balloon is less dense than the surrounding air and thus it rises.
Hope this helped.
Chris Murphy PE
Wow!! Where to start. 1. Many items in the grocery store are packaged by
volume, but sold by weight (potato chips for example). So are you getting
ripped off by just buying the biggest bag? 2. The WEIGHT of any object
floated on water is buoyed up by a force equal to the WEIGHT of the fluid
displaced (Archimedes' Principle). So some objects sink in pure water but
float in salt water -- an egg for example. It's a matter of the density of
the object (egg) and the fluid (salt water).
The conversion from one to the other involves the density. 3. Submarines
float or sink depending upon how much their density is increased by taking
on ocean water ballast. 4. All "shots" in a doctor's office are delivered by
volume but are formulated by the weight of the active ingredients. The
conversion between the two is density. 5. A back yard grill that uses
propane as a fuel. The tank is filled with a certain weight of propane, but
the temperature of the grill is determined by the relative volume of gaseous
propane and air. Here we do not even do the calculation but still use the
density. Any place where one needs to be able to inter-convert from the
volume of something to its weight -- or the other way around -- uses the
density to do the conversion (see the sketch below).
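As a tiny sketch of that inter-conversion (the density figures are approximate room-temperature values, assumed for illustration):

public class DensityDemo {
    // mass = density * volume is the conversion every example above relies on
    static double massKg(double densityKgPerLitre, double volumeLitres) {
        return densityKgPerLitre * volumeLitres;
    }
    public static void main(String[] args) {
        System.out.println("2 L of water:   " + massKg(1.00, 2) + " kg");
        System.out.println("2 L of ketchup: " + massKg(1.14, 2) + " kg (assumed density)");
        // An object floats when its average density is less than the fluid's.
    }
}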
Update: June 2012 | 2026-01-22T10:07:01.809601 |
191,169 | 3.50111 | http://programmers.stackexchange.com/questions/125883/are-there-any-languages-that-have-both-high-and-low-level-facilities/125888 | C++ is the canonical example of a language that combines low-level and high-level features1. It doesn't simulate anything, it provides native support for almost every high-level construct you'll usually find in a common high-level language and almost every low-level construct you'll find in C.
But of course the terms are highly relative, there was a point in time (not that long ago2) where C was considered a very high level language. And there are quite a few other languages that offer considerable low-level functionalities while still commonly regarded as high-level, and vice versa, the lines are kind of fuzzy.
As for the syntax, that's something that is naturally affected by the language's level of abstraction. Low-level generally means:
In computer science, a low-level programming language is a programming language that provides little or no abstraction from a computer's instruction set architecture. Generally this refers to either machine code or assembly language. The word "low" refers to the small or nonexistent amount of abstraction between the language and machine language; because of this, low-level languages are sometimes described as being "close to the hardware."
So naturally a low-level language adopts a syntax that's closer to machine code, which is inherently non human friendly. Quite a few languages, like C++, have adopted a wide variety of syntactic sugar, as a mechanism to make things easier to read or to express. But syntactic sugar is something that almost every high level language has opted for, C++'s sugar alone doesn't make it a low-level language.
As for the complexity of a low & high-level language, it's also natural: it's a tool with multiple goals, and every single goal adds to its complexity. That's unavoidable regardless of the goal. High-level languages are not "better" than low-level ones, they are just more concentrated on one goal. Languages that are designed with ease of use as a primary goal tend to be high-level, but that's only important if the necessary trade-offs to achieve the goal don't affect your applications.
Low or high level doesn't really matter, languages are primarily tools. You should choose the one that best fits whatever you're building in combination with what skills you have. Most popular languages are multi-purpose and Turing complete, in theory they are valid choices for building almost anything. There are no absolutes, of course, you may win in some areas if you opt for a high-level language and in others if you opt for a lower-level one, even within the same application.
Most large scale applications mix and match, following the "right tool for the job" mentality, and that's a more efficient approach, imho, than trying to have your cake and eat it too.
1 But please note that there isn't a definitive answer on what's considered a strictly high-level feature and what a low-level one.
2 In human years, in software years it was long ago... | 2026-01-21T04:49:04.587162 |
955,282 | 3.922036 | http://en.wikipedia.org/wiki/Alogia | In psychology, alogia (Greek ἀ-, “without”, and λόγος, “speech”), or poverty of speech, is a general lack of additional, unprompted content seen in normal speech. As a symptom, it is commonly seen in patients suffering from schizophrenia, and is considered as a negative symptom. It can complicate psychotherapy severely because of the considerable difficulty in holding a fluent conversation.
Alogia is often considered a form of aphasia, which is a general impairment in linguistic ability. It often occurs with mental retardation and dementia as a result of damage to the left hemisphere of the brain. People can also resort to alogia as a form of reverse psychology or as a way of avoiding questions.
Alogia is characterized by a lack of speech, often caused by a disruption in the thought process. Usually, an injury to the left hemisphere of the brain will cause alogia to appear in an individual. In conversation, alogic patients will reply very sparsely and their answers to questions will lack spontaneous content; sometimes, they will even fail to answer at all. Their responses will be brief, generally only appearing as a response to a question or prompt.
Apart from the lack of content in a reply, the manner in which the person delivers the reply is affected as well. Patients affected by alogia will often slur their responses, and not pronounce the consonants as clearly as usual. The few words spoken usually trail off into a whisper, or are just ended by the second syllable. Studies have shown a correlation between alogic ratings in individuals and the amount and duration of pauses in their speech when responding to a series of questions posed by the researcher.
The inability to speak stems from a deeper mental disability that causes alogic patients to have difficulty grasping the right words mentally, as well as formulating their thoughts. A study investigating the performance of alogic patients on the category fluency task showed that schizophrenics suffering from alogia display a more disorganized semantic memory than controls. While both groups produced the same number of words, the words produced by schizophrenics were much more disorderly, and the results of cluster analysis revealed bizarre coherence in the alogia group.
Alogia can be brought on by frontostriatal dysfunction which causes degrading of the semantic store, the center located in the temporal lobe that processes meaning in language. A subgroup of chronic schizophrenic patients in a word generation experiment generated fewer words than the unaffected subjects and had limited lexicons, evidence of the weakening of the semantic store. Another study found that when given the task of naming items in a category, schizophrenic patients displayed a great struggle but improved significantly when experimenters employed a second stimulus to guide behavior unconsciously. This conclusion was similar to results produced from patients with Huntington's and Parkinson's disease, ailments which also involve frontostriatal dysfunction.
Medical studies conclude that certain adjunctive drugs effectively palliate the negative symptoms of schizophrenia, mainly alogia. In one study, Maprotiline produced the greatest reduction in alogia symptoms with a 50% decrease in severity. Of the negative symptoms of schizophrenia, alogia had the second best responsiveness to the drugs, surpassed only by attention deficiency. D-amphetamine is another drug that has been tested on people with schizophrenia and found success in alleviating negative symptoms. This treatment, however, has not been developed greatly as it seems to have adverse effects on other aspects of schizophrenia such as increasing the severity of positive symptoms.
Relation to schizophrenia
Although alogia is found as a symptom in a variety of health disorders, it is most commonly found as a negative symptom of schizophrenia.
The negative symptoms of schizophrenia have previously been considered to be related to a psychiatric form of the Dysexecutive Syndrome (also known as frontal lobe syndrome). Studies show that the symptoms of schizophrenia do indeed correlate with frontal lobe syndrome.
Previous studies and analyses conclude that there are three factors that include both the positive and negative symptoms of schizophrenia. These three factors are: alogia, attentional impairment, and inappropriate affect. Studies suggest that inappropriate affect is strongly associated with bizarre behavior and positive formal thought disorder, while attentional impairment correlates significantly with psychotic, disorganization, and negative symptom factors. However, alogia is seen to contain both positive and negative symptoms, with the poverty of content of speech as the disorganization factor, and poverty of speech, latency, and blocking as the negative symptom factor. These results suggest that three dimensions are needed to categorize schizophrenia's negative and positive symptoms.
- "MedTerms Medical Dictionary: Alogia Definition." Archived from the original on 22 October 2006. Retrieved 30 September 2006.
- American Psychiatric Association (2000). Diagnostic and Statistical Manual of Mental Disorders, Fourth Edition: DSM-IV-TR. American Psychiatric Pub. p. 301. ISBN 978-0-89042-025-6. Retrieved 29 April 2012.
- "Alogia: Definition."
- Alpert, M., Kotsaftis, A., & Pouget, E. R. (1997). Speech fluency and schizophrenic negative signs. Schizophrenia Bulletin, 23, 171-177.
- Alpert, M., Clark, A., & Pouget, E. R. (1994). The syntactic role of pauses in the speech of schizophrenic patients with alogia. Journal of Abnormal Psychology, 103, 750-757.
- Sumiyoshi, C., Sumiyoshi, T., Nohara, S., Yamashita, I., Matsui, M., Kurachi, M., & Niwa, S. (2005). Disorganization of semantic memory underlies alogia in schizophrenia: An analysis of verbal fluency performance in Japanese subjects. Schizophrenia Research, 74(1), 91-100. doi:10.1016/j.schres.2004.05.011. PMID 15694758.
- Chen, R. Y., Chen, E. Y., Chan, C. K., Lam, L. C., & Lieh-Mak, E. (2000). Verbal fluency in schizophrenia: Reduction in semantic store. Australian and New Zealand Journal of Psychiatry, 34, 43-48.
- Shafti, S. S., Rey, S., & Abad, A. (2005). Drug-specific responsiveness of negative symptoms. International Journal of Psychosocial Rehabilitation, 10(1), 43-51.
- Desai, N., Gangadhar, B. N., Pradhan, N., & Channabasavanna, S. M. (1984). Treatment of negative schizophrenia with d-amphetamine. The American Journal of Psychiatry, 141, 723-724.
- Barch, D. M., & Berenbaum, H. (1996). Language production and thought disorder in schizophrenia. Journal of Abnormal Psychology, 105, 81-88.
- Miller, D., Arndt, S., & Andreasen, N. (2004). Alogia, attentional impairment, and inappropriate affect: Their status in the dimensions of schizophrenia. Comprehensive Psychiatry, 34, 221-226. | 2026-02-02T02:25:29.767733 |
1,060,833 | 3.916428 | http://learner.org/courses/teachingmath/grades3_5/session_05/section_03_b.html |
Using a Variety of Representations
Much of students' mathematical learning involves expanding their understanding of a mathematical idea or relationship by shifting from one type of representation to a different representation of the same relationship. This is one of the reasons that it is important for students to use a variety of manipulative materials, which are then carefully related to paper-and-pencil methods of solving problems. Through this work, they move from informal representations to the more formal and abstract representations that more advanced work will require.
A third-grade teacher introduced a unit of study on two-digit multiplication with an open-ended assignment for student pairs. The students were asked to think of a story problem in which someone would want to know the product of 15 x 12 and then to show a method for finding that product.
Students suggested several different contexts and manipulative materials to be used. For example, one suggestion was 15 plastic bags with 12 crayons each (represented by 15 rectangles with "12 crayons" written on each rectangle). Another suggestion was a dot array of 15 teams lined up on the playground, with 12 players on each team. Another group of students worked first on their solution method and struggled to match 15 x 12 to a story problem. They used a method that had been used to introduce multiplication in the prior grade: making 15 towers of linking cubes with 12 cubes each and then finding an efficient method for counting them all.
The methods used for finding the product were somewhat dependent on the representation that was used. The group with the crayons added 12 plus 12 plus 12, etc. -- first mentally, then on paper -- to find the total; they had some difficulty keeping track of the number of times that 12 was added.
The two groups with the array of team members and with the towers of linking cubes both experimented with a variety of ways of finding the product before partitioning their rows of dots or towers of cubes into two parts, 10 and 2. They worked first with the rows or towers of 10, finding that partial product easily, and then found the partial product of 15 times 2.
As students invent their own way to show a relationship, such as 15 x 12, with materials, pictures, or diagrams, they engage in thought that helps strengthen their understanding of the operation of multiplication. When a number of different representations for a given problem are shared and discussed, similarities in the mathematical structure of each representation can be highlighted. For example, a student with 15 towers with 12 linking cubes in each tower can point out the similarity to a representation that uses 15 plastic bags with 12 crayons in each. Similarly, an area diagram can be related to an array that is made of 12 rows with 15 objects per row.
Notice that while an array representation may initially encourage students to use simple counting to solve the problem, an area model clearly shows the advantage of making smaller, easier-to-calculate groups or areas. This also corresponds very clearly to the standard algorithm for multiplication.
During the following week, the teacher extended the example of the lines of team players and connected it to using base-ten blocks to represent 1, 10, or 100 players instead of drawing individual dots. Over the course of the next several weeks, through class discussion and guidance from the teacher, the class developed connections between this manipulative model, arrays drawn on grid paper, and symbolic methods for finding the product of two two-digit numbers. They also practiced mental-math methods for finding products by breaking a problem into two parts, such as (15 x 10) + (15 x 2).
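To spell out the partial-products idea in one more notation (a sketch of ours, not from the article), here is the students' partition of 15 x 12 written as a tiny Python function:

def partial_products(a, b):
    # Split the two-digit factor b into tens and ones, mirroring the
    # students' partition of each row or tower into 10 and 2.
    tens, ones = divmod(b, 10)    # 12 -> (1, 2)
    first = a * tens * 10         # 15 x 10 = 150
    second = a * ones             # 15 x  2 =  30
    return first, second, first + second

print(partial_products(15, 12))   # (150, 30, 180)

The same idea extends place by place to larger factors, which is exactly what the standard multiplication algorithm does.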
"[D]ifferent representations support different ways of thinking about and manipulating mathematical objects. An object can be better understood when viewed through multiple lenses." (NCTM, 2000, p. 360)
A short video segment (duration 0:27) accompanies this section with a reflection from Pam Hardaway, a middle school teacher in California. Her ideas about manipulative materials are applicable in grades 3-5 as well.
| 2026-02-03T15:01:35.503443 |
935,530 | 3.596875 | http://docs.scipy.org/doc/numpy-1.5.x/reference/generated/numpy.random.multivariate_normal.html | Draw random samples from a multivariate normal distribution.
The multivariate normal, multinormal or Gaussian distribution is a generalisation of the one-dimensional normal distribution to higher dimensions.
Such a distribution is specified by its mean and covariance matrix, which are analogous to the mean (average or “centre”) and variance (standard deviation squared or “width”) of the one-dimensional normal distribution.
mean : (N,) ndarray
    Mean of the N-dimensional distribution.
cov : (N,N) ndarray
    Covariance matrix of the distribution; it must be symmetric and non-negative definite.
size : tuple of ints, optional
    Given a shape of, for example, (m, n), m * n samples are drawn; because each sample is N-dimensional, the output shape is (m, n, N). If no shape is specified, a single N-dimensional sample is returned.
out : ndarray
    The drawn samples; if size was provided the shape is size + (N,), otherwise (N,).
The mean is a coordinate in N-dimensional space, which represents the location where samples are most likely to be generated. This is analogous to the peak of the bell curve for the one-dimensional or univariate normal distribution.
Covariance indicates the level to which two variables vary together. From the multivariate normal distribution, we draw N-dimensional samples, X = [x_1, x_2, ..., x_N]. The covariance matrix element C_ij is the covariance of x_i and x_j. The element C_ii is the variance of x_i (i.e. its “spread”).
Instead of specifying the full covariance matrix, popular approximations include the following (a short code sketch follows the list):
- Spherical covariance (cov is a multiple of the identity matrix)
- Diagonal covariance (cov has non-negative elements, and only on the diagonal)
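A minimal sketch of the two approximations in NumPy (the variance values below are illustrative choices of ours, not from the original page):

import numpy as np

n = 3
sigma2 = 2.0                             # assumed common variance
variances = np.array([1.0, 2.0, 0.5])    # assumed per-dimension variances

# Spherical covariance: a multiple of the identity matrix.
cov_spherical = sigma2 * np.eye(n)

# Diagonal covariance: non-negative entries, only on the diagonal.
cov_diagonal = np.diag(variances)

samples = np.random.multivariate_normal(np.zeros(n), cov_diagonal, 1000)
print(samples.shape)                     # (1000, 3)

Both forms are automatically valid (symmetric, non-negative definite) covariance matrices, which is why they are convenient stand-ins for a full matrix.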
This geometrical property can be seen in two dimensions by plotting generated data-points:
>>> mean = [0, 0]
>>> cov = [[1, 0], [0, 100]]  # diagonal covariance; samples spread about ten times farther along the y-axis
>>> import matplotlib.pyplot as plt
>>> x, y = np.random.multivariate_normal(mean, cov, 5000).T
>>> plt.plot(x, y, 'x'); plt.axis('equal'); plt.show()
Note that the covariance matrix must be non-negative definite.
[R225] A. Papoulis, "Probability, Random Variables, and Stochastic Processes," 3rd ed., McGraw-Hill, 1991.
[R226] R. O. Duda, P. E. Hart, and D. G. Stork, "Pattern Classification," 2nd ed., Wiley, 2001.
>>> mean = (1, 2)
>>> cov = [[1, 0], [0, 1]]
>>> x = np.random.multivariate_normal(mean, cov, (3, 3))
>>> x.shape
(3, 3, 2)
With the unit variances used here, each component satisfies (x_i - mean_i) < 0.6 with probability of about 0.73, so the following holds for both components a little more than half the time:
>>> print list( (x[0,0,:] - mean) < 0.6 )
[True, True] | 2026-02-01T20:01:08.570065 |
225,507 | 4.137381 | http://www.history.navy.mil/branches/teach/pearl/infamy/infamy.htm | A Date Which Will Live in Infamy
- Use President Franklin D. Roosevelt's radio address following the attack on Pearl Harbor as a primary source to understand American reaction following the attacks.
- Synthesize knowledge from all lesson plans to understand how different Americans reacted to FDR's call for war.
Resources / Materials
- Primary Document: President Franklin D. Roosevelt's Pearl Harbor Speech
- Student Worksheet: Reaction to War
- If possible, try to obtain either a video or audio recording of President Franklin D. Roosevelt's Pearl Harbor speech. Some suggested sources:
- The Century of Warfare "Japanese Blitzkrieg: Pacific Theater 1939-1942" Time Life Series
The day after Pearl Harbor was attacked, President Roosevelt addressed a joint session of Congress and the nation listened via radio. Congress responded with a unanimous vote in support of the war. Later that day, President Roosevelt signed a Declaration of War.
1. Distribute a copy of President Franklin D. Roosevelt's speech. Read the speech and ask the students to follow along. If possible, try to obtain either video footage or an audio clip of this speech to allow students to gain a first-hand experience of the speech's impact.
2. Discuss the power of language and Roosevelt's use of strong words to enhance the power of his speech. Ask students to locate examples in the speech of techniques for enhancing a speech, such as the use of repetition, emotionally charged words, appeal to self-preservation, and the assurance of moral superiority.
3. Divide the class into three groups and assign them to be civilians, Navy personnel, or Congress to understand the impact of Pearl Harbor on different groups of Americans. The Student Worksheet: Reaction to War will help them understand their roles. Students will then get together as a class and discuss the impact of Pearl Harbor on their group.
4. Ask the class how the attack on Pearl Harbor is viewed today. Does this event help your understanding of the recent attacks in New York City, at the Pentagon, and in Pennsylvania?
Upon your visit to the Navy Museum, you and your students will be able to listen to segments of President Franklin D. Roosevelt's address to the nation in the "In Harm's Way: The Navy in World War II" exhibit.
| 2026-01-21T16:48:51.368783 |
295,827 | 3.943143 | http://www.icr.org/article/human-evolution-story-stumbles-over/ | Human Evolution Story Stumbles Over Footprints
by Brian Thomas, M.S. *
Sometime in the distant past, two or three individuals walked across wet volcanic ash, leaving a trail that continues to puzzle scientists. When the Laetoli footprints were discovered over 30 years ago in Tanzania, the tracks looked like they were caused by the feet of modern humans, which supposedly did not “emerge” until 2.5 million years ago. But the footprint-containing rock had been assigned an older age of 3.6 million years.
This problem was “solved” by attributing human-like bipedal features to australopiths. These extinct apes, like the famous “Lucy” fossil, were long considered ancestral to humans. This conclusion was reached, however, before the discovery of actual australopith fossil feet and before australopith remains were found in a rock layer dated at 2.2 million years. The australopith foot bones did not at all match those represented by the human-like Laetoli prints,1 and the australopith remains were dated as more recent than even known human remains, showing that australopiths had nothing to do with human evolution.2
Despite these observations, museums still portray Australopithecus as man’s ancestor, and they even use the Laetoli tracks as evidence that these apes walked like man!3 But a new study published in PLoS One has shown that the equal depression depth of the heel and big toe of the Laetoli prints “is a cardinal sign of a humanlike gait.”4 This confirms an in-depth analysis by University of Chicago professor Russell Tuttle, who concluded in 1990 that the “footprint trails at Laetoli site G resemble those of habitually unshod modern humans.”5
The researchers in the PLoS study compared three-dimensional scans of select Laetoli tracks with scans of modern tracks made by volunteers who walked normally across wet sand, as well as some who walked in an apelike crouch. They found that whoever made the Laetoli tracks “walked with weight transfer most similar to the economical extended limb bipedalism of humans.”6
Modern apes “walk” for short spans with bent and outward-pointed knees, whereas mankind’s unique gait involves knees pointing forward and legs straightening out with each step.
Because the footprints look like they were made by perfectly modern humans, the 3.6 million-year-old age assigned to the tracks constrained the researchers to conclude “that extended limb bipedalism evolved long before the appearance of the genus Homo.”6 So, rather than following the evidence where it leads--which in this case is that the Laetoli tracks were made by “genus Homo” (modern man)--these scientists quickly modified the eternally plastic story of human evolution.7
The researchers reasoned that “human-like bipedalism clearly evolved within the first three to four million years of hominin evolution.”6 But this puts evolution in a very tough spot, because now man’s distinctive way of walking had to have “emerged” faster than the neo-Darwinian concept of natural selection can reasonably account for. Plus, there is no known fossil to represent the creature that supposedly walked just like a man, but was in fact not a man.
These scientists evidently refuse to consider the idea that the Laetoli tracks were made by genus Homo because it would challenge their evolutionary assumptions. But in the creation/Flood model, mankind has existed alongside animals since the sixth day of the creation week. Accordingly, there is no surprise that people, who only thousands of years ago lived alongside australopiths, left modern footprints in supposedly ancient rock layers.
- Wong, K. Footprints to Fill: Flat feet and doubts about makers of the Laetoli tracks. Scientific American, August 1, 2005, 18-19.
- Walker, J., R. A. Cliff and A. G. Latham. 2006. U-Pb Isotopic Age of the StW 573 Hominid from Sterkfontein, South Africa. Science. 314 (5805): 1592-1594.
- Thomas, B. Museum’s ‘Science’ Exhibit Leaves More Questions than Answers. ICR News. Posted on icr.org January 11, 2010, accessed March 26, 2010.
- Bower, B. African Footprint Fossils Are Oldest Evidence of Upright Walk. Wired Science. Posted on wired.com March 23, 2010, accessed March 26, 2010.
- Tuttle, R. H. 1990. The Pitted Pattern of Laetoli Feet. Natural History. 99: 64.
- Raichlen, D. A. et al. 2010. Laetoli Footprints Preserve Earliest Direct Evidence of Human-Like Bipedal Biomechanics. PLoS One. 5 (3): e9769.
- Sherwin, F. 2010. Darwinism’s Rubber Ruler. Acts & Facts. 39 (2): 17.
Image credit: PLoS
* Mr. Thomas is Science Writer at the Institute for Creation Research.
Article posted on April 6, 2010. | 2026-01-22T17:50:27.291699 |
385,337 | 3.87707 | http://io9.com/5903221/meet-xna-the-first-synthetic-dna-that-evolves-like-the-real-thing | New research has brought us closer than ever to synthesizing entirely new forms of life. An international team of researchers has shown that artificial nucleic acids - called "XNAs" - can replicate and evolve, just like DNA and RNA.
We spoke to one of the researchers who made this breakthrough, to find out how it can affect everything from genetic research to the search for alien life.
The researchers, led by Philipp Holliger and Vitor Pinheiro, synthetic biologists at the Medical Research Council Laboratory of Molecular Biology in Cambridge, UK, say their findings have major implications in everything from biotherapeutics, to exobiology, to research into the origins of genetic information itself. This represents a huge breakthrough in the field of synthetic biology.
The "X" Stands for "Xeno"
Every organism on Earth relies on the same genetic building blocks: the information carried in DNA. But there is another class of genetic building block called "XNA" — a synthetic polymer that can carry the same information as DNA, but with a different assemblage of molecules.
The "X" in XNA stands for "xeno." Scientists use the xeno prefix to indicate that one of the ingredients typically found in the building blocks that make up RNA and DNA has been replaced by something different from what we find in nature — something "alien," if you will.
Strands of DNA and RNA are formed by stringing together long chains of molecules called nucleotides. A nucleotide is made up of three chemical components: a phosphate (labeled here in red), a five-carbon sugar group (labeled here in yellow, this can be either a deoxyribose sugar — which gives us the "D" in DNA — or a ribose sugar — hence the "R" in RNA), and one of five standard bases (adenine, guanine, cytosine, thymine or uracil, labeled in blue).
The molecules that piece together to form the six XNAs investigated by Pinheiro and his colleagues (pictured here) are almost identical to those of DNA and RNA, with one exception: in XNA nucleotides, the deoxyribose and ribose sugar groups of DNA and RNA (corresponding to the middle nucleotide component, labeled yellow in the diagram above) have been replaced. Some of these replacement molecules contain four carbon atoms instead of the standard five. Others cram in as many as seven carbons. FANA (pictured top right) even contains a fluorine atom. These substitutions make XNAs functionally and structurally analogous to DNA and RNA, but they also make them alien, unnatural, artificial.
Information Storage vs Evolution
But scientists have been synthesizing XNA molecules for well over a decade. What makes the findings of Pinheiro and his colleagues so compelling isn't the XNA molecules themselves, it's what they've shown these alien molecules are capable of, namely: replication and evolution.
"Any polymer can store information," Pinheiro tells io9. What makes DNA and RNA unique, he says, "is that the information encoded in them [in the form of genes, for example] can be accessed and copied." Information that can be copied from one genetic polymer to another can be propagated; and genetic information that can be propagated is the basis for heredity — the passage of traits from parent to offspring.
In DNA and RNA, replication is facilitated by molecules called polymerases. Using a crafty genetic engineering technique called compartmentalized self-tagging (or "CST"), Pinheiro's team designed special polymerases that could not only synthesize XNA from a DNA template, but actually copy XNA back into DNA. The result was a genetic system that allowed for the replication and propagation of genetic information.
A simplified analogy reveals the strengths and weaknesses of this novel genetic system: You can think of a DNA strand like a classmate's lecture notes. DNA polymerase is the pen that lets you copy these notes directly to a new sheet of paper. But let's say your friend's notes are written in the "language" of XNA. Ideally, your XNA-based genetic system would have a pen that could copy these notes directly to a new sheet of paper. What Pinheiro's team did was create two distinct classes of writing utensil — one pen that copies your friend's XNA-notes into DNA-notes, and a second pen that converts those DNA notes back into XNA-notes.
Is it the most efficient method of replication? No. But it gets the job done. What's more, it does all this copying to and from DNA with a high degree of accuracy (after all, what good is replication if the copy looks nothing like the original?). The researchers achieved a replication fidelity ranging from 95% in LNA to as high as 99.6% in CeNA — the kind of accuracy Pinheiro says is essential for evolution:
"The potential for evolution is closely tied with how much information is being replicated and the error in that process," he explains. "The more error-prone… a genetic system is, the less information can be feasibly evolved." A genetic system as accurate as theirs, on the other hand, should be capable of evolution.
The researchers put this claim to the test by showing that XNA strands made up of the HNA xeno-nucleotides like the one pictured here could evolve into specific sequences capable of binding target molecules (like an RNA molecule, or a protein) tightly and specifically. Researchers call this guided evolution, and they've been doing it with natural DNA for some time. The fact that it can also be accomplished in the lab with synthetic DNA indicates that such a system could, in theory, work in a living organism.
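A rough back-of-the-envelope sketch (our illustration, not the researchers' analysis) shows why the fidelity figures above matter so much: if the cited 95% and 99.6% values are read as independent per-base fidelities f, an L-base sequence survives one replication round unchanged with probability roughly f to the power L.

# Illustrative only: treats the cited 95% (LNA) and 99.6% (CeNA)
# figures as independent per-base copying fidelities.
def error_free_copy_prob(f, length):
    return f ** length

for f in (0.95, 0.996):
    print(f, error_free_copy_prob(f, 100))
# 0.95  -> ~0.006: fewer than 1 in 100 hundred-base copies are perfect
# 0.996 -> ~0.67:  roughly two-thirds of such copies are perfect

Under these assumptions, a few percentage points of fidelity separate a system whose information is quickly scrambled from one stable enough to evolve.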
"The HNA system we've developed," explains Pinheiro, is "robust enough for meaningful information to be stored, replicated and evolved."
A Step Toward Novel Lifeforms
The implications of the team's findings are numerous and far-reaching. For one thing, the study sheds significant light on the origins of life itself. In the past, investigations into XNA have been largely driven by the question of whether simpler genetic systems may have existed before the emergence of RNA and DNA; the fact that these XNAs appear to be capable of evolution adds to an ever-growing body of evidence of a genetic system predating DNA and RNA both.
Practical and therapeutic applications abound, as well. "The methodologies [we've developed] are a major step forward in enabling the development of nucleic acid treatments," says Pinheiro. Natural nucleic acids [i.e. DNA and RNA] can be forced to evolve so that they bind tightly and specifically to specific molecular targets. The problem is that these nucleic acids are unsuitable for therapeutic use because they are rapidly broken down by enzymes called nucleases. As a result, these evolved nucleic acid treatments have a short lifespan and have a difficult time reaching their therapeutic targets.
To get around this, Pinheiro says medicinal chemistry is used to modify evolved DNA sequences in an attempt to create a functional molecule that can still bind to a therapeutic target but resist nuclease degradation. But doing this is tough:
"Overall, this leads to high cost and a high failure rate for potential therapies - there is still only a single licenced [nucleic acid-based] drug on the market (Macugen)."
But all six of the XNAs studied by Pinheiro and his team are stronger than regular DNA or RNA, in that they're more resistant to degradation by biological nucleases.
As a result, these molecules would need little or no adaptation for therapeutic (or diagnostic) use. "Since these molecules can now be selected directly on XNA, medicinal chemistry should no longer be limiting," says Pinheiro. You could select a suitable XNA for its biocompatibility and therapeutic potential, and not worry about having it rapidly degrade inside the body.
Pinheiro also says the outcome of the research could even have a strong impact on exobiology:
In my view, exobiology looks for life in regions it cannot physically visit. In that context, it searches for tell tale signs of life that can be remotely monitored but it has only life on Earth as examples to identify such suitable markers. Based on extant biology, DNA and RNA are good candidates for such a search. However, by showing that other nucleic acids can also store information, replicate and evolve, our research may force a rethink as to whether DNA and RNA are the most suitable tell tale signs of life.
Of course, nothing would call the indispensability of DNA- or RNA-based life into question more than the generation of an entirely synthetic, alternative life form, built from the ground up entirely by XNA. Such an organism would require XNA capable of driving its own replication, without the aid of any biological molecules. Pinheiro says that's still a ways off. "Even in its simplest setup... it would be very challenging to develop an XNA system within a cell." Such a system would require XNA capable of self-replication, and capable of undergoing evolution in a self-sustained manner.
That said, his team's work represents a major step in the right direction. As the molecular machinery designed to manipulate XNAs grows, so, too, will the capacity for synthetic genetic systems to stand and operate on their own. | 2026-01-24T03:24:03.477817 |
458,128 | 3.719285 | http://dgcorner.ifpri.info/2012/07/26/responding-to-drought-to-prevent-another-global-food-crisis/ | According to the Economic Research Service of the USDA, 62 percent of US farms are located in areas experiencing drought. About 40 percent of maize and soybeans and 44 percent of livestock are produced in areas experiencing severe drought. As a result, national crop yield and harvest estimates for maize and soybeans have been lowered considerably. Experts suggest that crop losses for maize are coming close to 20 percent and could reach 30 percent or more if extreme drought conditions persist. Crop prices have already started to rise rapidly and could increase further depending on the degree of severity and extent of the drought. Just between June and mid-July, US Gulf port prices for maize and soybean increased by 22 and 14 percent respectively, with prices for both crops reaching record highs.
Poor and vulnerable groups in developing countries are hard hit by high and volatile prices of the agricultural commodities they depend on for their primary daily caloric intake. As experienced during the 2007-08 global food price crisis, price movements in domestic markets can have significant impacts on global markets, and vice versa. This is especially true, as the United States is the top producer and exporter of maize and soybeans. As of 2011, US production of the two crops accounted for more than 30 percent of total world production, and US exports represented over 40 percent of total world exports. Also, increases in maize prices, for example, can greatly affect the prices of such foods as meat and dairy, due to higher feed costs.
Several urgent actions must be taken to address the current situation in order to prevent a potential global food price crisis:
- Monitor the situation. Key institutions, including USDA, FAO, UNCTAD, World Bank, and WFP, in collaboration with local partners, should pay close attention to developments in food supply, consumption, prices, and trade, as well as agricultural commodity speculation. This will help quickly detect any imbalances and facilitate swift responses.
- Halt biofuel production from maize. Food crop demand for biofuels, particularly in the United States and EU, must be cut substantially to help relieve the pressures on both domestic and global food markets. Currently, about 40 percent of total maize production in the United States is used to produce ethanol.
- Avoid export bans and panic purchases. Countries must stay away from imposing export restrictions when food prices increase because they lead to tighter market conditions and panic purchases by food-importing countries, thereby exacerbating food price hikes.
- Prepare to use national grain reserves. Large food-producing countries must be ready to deploy some of their grain reserves to address food emergencies, with emphasis on vulnerable populations.
- Ensure the WFP has sufficient access to food purchases for emergency relief efforts. WFP’s access to food purchases must be enhanced in order to facilitate effective responses during times of crises. Such emergency preparedness is crucial as rising food prices have implications for the effectiveness of WFP’s food assistance programming, as well as the availability of funds for resilience building activities.
- Boost developing country agricultural output and productivity. Developing country crop production for the next season, as well as productivity, must be enhanced in order to reduce the effect of high and volatile prices on their national food security. In the long run, boosts to smallholder productivity, including enhanced access to high-quality/stress-tolerant seeds, fertilizer, new and affordable technologies, and rural infrastructure, must be made top priority. Innovations in financial services, for example, the use of modern communication technologies; risk-management mechanisms, such as weather-based index crop insurance; and institutional arrangements like social and rural knowledge networks are also imperative.
| 2026-01-25T08:04:27.296000 |
879,655 | 3.684069 | http://www.britannica.com/print/topic/146212 | cultural evolution, also called sociocultural evolution, the development of one or more cultures from simpler to more complex forms. The subject may be viewed as a unilinear phenomenon that describes the evolution of human behaviour as a whole, or it may be viewed as a multilinear phenomenon, in which case it describes the evolution of individual cultures or societies (or of given parts of a culture or society).
Unilinear cultural evolution was an important concept in the emerging field of anthropology during the 18th and 19th centuries but fell out of favour in the early 20th century. Scholars began to propagate theories of multilinear cultural evolution in the 1930s, and these neoevolutionist perspectives continue, in various forms, to frame much of the research undertaken in physical anthropology and archaeology, the branches of anthropology that focus on change over time.
The Age of Discovery introduced 15th- and 16th-century Europeans to a wide variety of “primitive” cultures. Almost immediately, European intellectuals began efforts to explain how and why the human condition had come to be so diverse. Although the 17th-century English philosopher Thomas Hobbes was very much mistaken when he described indigenous peoples as living in conditions in which there were “no arts, no letters, no society” and experiencing life as “solitary, poor, nasty, brutish, and short,” his description encapsulates the era’s popular conception of the “savage.” Ignoring or unaware of a variety of facts—many indigenous peoples enjoyed a much better standard of living than European peasants, for instance—Hobbes and other scholars posited that everything that was good and civilized resulted from the slow development away from this “lowly” state and toward the “higher” state represented by the cultures of Europe. Even rationalistic philosophes such as Voltaire implicitly assumed that the “upward” progress of humankind was part of the natural order.
This Enlightenment notion that there was, in fact, a “natural order” derived from the philosophers of ancient Greece, who had described the world as comprising a Great Chain of Being—a view in which the world is seen as complete, orderly, and susceptible to systematic analysis. As a result, scholarship during the Enlightenment emphasized categorization and soon produced various typologies that described a series of fixed stages of cultural evolution.
Most focused on three major stages, but some posited many more categories. For instance, in his Esquisse d’un tableau historique des progrès de l’esprit humain (1795; Sketch for a Historical Picture of the Progress of the Human Mind), the Marquis de Condorcet listed 10 stages, or “epochs,” of cultural evolution. He posited that the final epoch had begun with the French Revolution and was destined to usher in universal human rights and the perfection of the human race. The Danish archaeologist Christian Jürgenson Thomsen is widely acknowledged as the first scholar to have based such a typology on firm data rather than speculation. In Ledetraad til nordisk Oldkyndighed (1836; A Guide to Northern Antiquities), he categorized ancient European societies on the basis of their tools, calling the developmental stages the Stone, Bronze, and Iron ages.
In the later 19th century, theories of cultural evolution were enormously influenced by the wide acceptance of the theory of biological evolution put forward by Charles Darwin in The Origin of Species (1859). Social scientists found that the framework suggested by biological evolution offered an attractive solution to their questions regarding the origins and development of social behaviour. Indeed, the idea of a society as an evolving organism was a biological analogy that was taken up by many anthropologists and sociologists and that persisted in some quarters even into the 20th century.
The English philosopher Herbert Spencer was among the first to work out a general evolutionary scheme that included human societies from across the globe. He held that human cultures evolved from less-complex “species” to those that were more so: people at first lived in undifferentiated hordes; then developed social hierarchies with priests, kings, scholars, workers, and so forth; and later accumulated knowledge that was differentiated into the various sciences. In short, human societies evolved, by means of an increasing division of labour, into complex civilizations.
The anthropologists E.B. Tylor in England and Lewis H. Morgan in the United States were the chief exponents of cultural stages in the evolution of humankind. They emphasized the analysis of culture in general, not that of individual cultures, except as the latter might illustrate their theories of the overall evolution of humanity and civilization. Morgan summed up the precepts of the unilineal approach quite well:
Since mankind were one in origin, their career has been essentially one, running in different but uniform channels upon all continents, and very similarly in all the tribes and nations of mankind down to the same status of advancement. It follows that the history and experience of the American Indian tribes represent, more or less nearly, the history and experience of our own remote ancestors when in corresponding conditions.
This passage is from Morgan’s masterwork Ancient Society (1877), in which he also described seven stages of cultural evolution: lower, middle, and upper savagery; lower, middle, and upper barbarism; and civilization. He supported his ideas by citing contemporary societies characteristic of each stage except lower savagery, of which there were no extant examples.
Morgan’s work was very widely read and became the basis for further developments in anthropology, perhaps most notably its emphasis on cross-cultural comparison and its preoccupation with the mechanisms of change. His work underlay debates on matters, such as the relative importance of technological innovation (versus diffusion), that were of serious concern for the remainder of the 19th century and persisted well into the 20th. However, although it is considered important in the history of anthropology, Morgan’s work, and indeed unilineal cultural evolution as a whole, no longer hold credence in the field.
A widespread reaction against sweeping generalizations about culture began in the late 19th century in the United States and somewhat later in Europe. Theories and descriptions of hypothetical stages of evolution generally, and of unilinear evolution specifically, were heavily criticized as racist; instead of presuming that some peoples were more evolved than others, the new trend was to regard all cultures as unique in time and place. In the United States this movement, known as cultural particularism, was led by the German-born anthropologist Franz Boas.
Boas and several generations of his students—including A.L. Kroeber, Ruth Benedict, and Margaret Mead—turned completely away from broad generalizations about culture and concentrated on fieldwork among traditional peoples, harvesting a great variety of facts and artifacts as empirical evidence of cultural processes within existing societies. The creation of encyclopaedic lists of cultural traits and changes therein led to the development of “culture histories” and dominated American anthropology for the first half of the 20th century. The culture history movement so influenced anthropology that grand theories of “Man” became far less common than in the past.
By mid-century, however, a number of American anthropologists, including Leslie A. White, Julian H. Steward, Marshall D. Sahlins, and Elman R. Service, had revived theoretical discussions regarding cultural change over time. They rejected universal stages outright, instead conceptualizing cultural evolution as “multilinear”—that is, as a process consisting of a number of forward paths of different styles and lengths. They posited that while no specific evolutionary changes are experienced by all cultures universally, human societies do generally evolve or progress. They further suggested that the primary mechanism for such progress involved technological breakthroughs that make societies more adaptable to and dominant over the environment; technology, in this case, was quite broadly conceived, and included such developments as improvements in tool forms or materials (as with the transition through the Stone, Bronze, and Iron ages and later the Industrial Revolution), transportation (as from pedestrian to equestrian to motorized forms), and food production (as from hunting and gathering to agriculture). Proponents of multilinear evolution hold that only in this sense can the whole of world culture be viewed as the product of a unitary process. | 2026-02-01T00:20:49.073504 |
998,379 | 3.716734 | http://www.enotes.com/topics/crime-and-punishment/reference | Crime and Punishment (American History Through Literature)
Scenes of transgressions and consequences inform Western cultural discourse going back to the first story of humankind in the Bible, so it is not surprising that crime and punishment form the basis of a number of literary works in the mid-nineteenth century. Under this rubric one finds memorable criminals such as the narrator of "The Tell-Tale Heart" by Edgar Allan Poe (1809-1849), scenes of imprisonment in fictions such as "Bartleby, the Scrivener" and Billy Budd by Herman Melville (1819-1891), and women described in fictions by Lydia Maria Child (1802-1880) and Catharine Sedgwick (1789-1867) who find themselves unjustly incarcerated. Perhaps one of the most celebrated literary efforts to document crime and punishment occurs in The Scarlet Letter by Nathaniel Hawthorne (1804-1864). The story of Hester Prynne begins at the prison door with a crowd of men and women, apparently waiting. The narrator provides a historical footnote to accompany the scene: "The founders of a new colony, whatever Utopia of human virtue and happiness they might originally project, have invariably recognized it among their earliest practical necessities to allot a portion of the virgin soil as a cemetery, and another portion as the site of a prison" (p. 53). The inevitability of death and punishment mark their incorporation into both the Puritan community and this narrative of Hester's complex relation to those who punish her. Likewise, crime and its consequences occupied the minds of American reformers and writers during the antebellum period.
Eager to demonstrate the success of the republican revolution, Americans in the late eighteenth century and early nineteenth century worked toward reconfiguring a criminal justice system deemed inefficient and cruel in its procedures and punishments. Some abjured the death penalty as despotic, linking it with monarchy. Many reformers considered corporal punishments inhumane, preferring to effect a program of work and solitude for the convict as a means of rehabilitating character and habit. Supporting methods used in particular prisons, prison associations in Philadelphia (formed in the 1780s), Boston (1826), and New York (1844) advocated rehabilitating prisoners by instruction, silent reflection, and work. They differed over whether to allow convicts to see one another in the penitentiary, how to incorporate study of the Bible and other texts, whether to promote solitary or congregate work, and whether to depend on convict labor to subsidize the penitentiary. Even Americans not involved with criminal procedures or reforms read about crimes, trials, prisons, and disciplinary methods in periodicals and in sensational, sentimental, and realist fictions.
CULTURAL INFLUENCES ON PRISON REFORM
Antebellum anxieties about crimes reflected general unease concerning social change, including immigration, slavery, and class mobility. Carroll Smith-Rosenberg explains that the difficulties of establishing social cohesion for a mobile, increasingly diverse population prompted Jacksonian reformers in cities to develop institutional mechanisms of preventing crime, notably almshouses and workhouses. Prisoners were disproportionately black, Indian, and immigrant, groups assumed more likely to be deviant. As David Rothman discerns, whether fears of increasing crimes and ideas about predispositions toward criminal behavior were justified by actual numbers or not remains an open question because statistics from the period are suspect.
Penitentiary reforms were also affected by other cultural formations, including other reform movements, theological arguments about sin, emerging social scientific theories of moral character, and the national project of information diffusion. Whitney Cross describes how diverse religious sects promoted contradictory doctrines concerning free will and the sovereignty of God. Religious groups joined with philanthropists to advocate regulating the social environment in ways that would appropriately shape individual moral character. The Second Great Awakening (beginning in the 1790s and at its height from 1822 to 1844) encouraged the formation of missionary organizations and tract societies, which in turn supported institutions for the prevention and amelioration of poverty, unemployment, and juvenile delinquency.
Concerned citizens in the antebellum period pressed for new laws regarding abolition, women's rights, temperance, and prison discipline. Reformers were enjoined to employ a spirit of sympathy in their benevolence. In an 1811 speech to the Humane Society of Massachusetts, Lemuel Shaw, who was appointed in 1830 as chief justice of the Supreme Judicial Court of Massachusetts, promoted "a habitual compassion for the wants and sufferings of others" (p. 6) as the necessary motivation for reformers. The Philadelphia publisher and bookseller Mathew Carey, whose politics exiled him to France from his native
Some reformers argued that environmental influences produced crimes. In a column in the abolitionist newspaper the National Anti-Slavery Standard, the activist and journalist Lydia Maria Child described visiting New York's Blackwell's Island prison in 1842. She responded to a companion's inquiry of "Would you have them [the prisoners] prey on society?" by affirming "I am troubled that society has preyed upon them. I will not enter into an argument about the right of society to punish these sinners; but I say she made them sinners" (pp. 202-203). She posits that similar instincts motivate the soldier killing Indians, the frontier resident vindicating an insult, and a New York professional shooting someone who accuses him of dishonor, but that society nominates the first (Andrew Jackson) for the presidency, hails the second for bravery, and hangs the third. Sara Payson Willis Parton (1811-1872), writing as Fanny Fern about Blackwell's Island in 1858, pointedly asked readers if they were any less guilty in being "politic enough to commit only those [crimes] that a short-sighted, unequal human law sanctions?" (p. 305). She criticized the inefficacy of prison discipline: "I don't believe the way to restore a man's lost self-respect is to degrade him before his fellow creatures; to brand him, and chain him, and poke him up to show his points, like a hyena in a menagerie. No wonder that he growls at you, and grows vicious" (p. 306).
Reformers closely connected with institutions expressed greater confidence in penal techniques, arguing that because instinctive emotions influenced some to do good and others to transgress, habitual offenders ought to be carefully controlled. In her footnote to a criminal psychology text published in 1846, Eliza Farnham, women's matron at Sing Sing, describes "the inheritance of propensities" leading to "criminal indulgence" (p. 28). The book's author, M. B. Sampson, blames its possible biological cause: "a defective form of brain" (p. 7).
Most reformers were optimistic about the power of a controlled environment to improve individuals. The early nineteenth century witnessed a reading revolution, a popular lyceum movement, and a general disposition of Americans toward self-improvement and social progress. Richard Brown characterizes the diffusion of information during this period as "a great national enterprise," for the "seemingly inexhaustible market for this sort of personal improvement information . . . was driven by a popular desire to enjoy such material and psychological benefits as gentility afforded" (pp. 289, 274). Society's interest in encouraging moral improvement included developing rehabilitation methods in prisons to stop recidivism.
ORIGINS OF THE PENITENTIARY IN THE UNITED STATES
While the century witnessed a number of innovations relevant to the topics of crime and punishment, including establishing metropolitan police forces along with specialized detective units, the most celebrated landmarks of reform involved the construction of penitentiaries by various states in the Northeast. The transformation from prison to penitentiary began with eighteenth-century experiments and arguments advocated by Benjamin Rush and others in Philadelphia, where Eastern State Penitentiary, designed by John Haviland, was built in 1821-1823. New York also developed penal techniques and facilities, instituting a contract labor system in Auburn penitentiary in 1819; convicts worked silently in groups during the day and slept in solitary cells. Auburn's brief experiment with solitary cells permitting some convicts to work separately in the early 1820s was abandoned as unworkable, likely due to the poor conditions of the cells.
By 1828 the congregate Auburn penitentiary turned a profit, but its system of contract labor invited criticism from prisoners, most notably regarding the physical punishments employed to increase productivity. The ex-convict William Coffey's first-person account, Inside Out; or, An Interior View of the New-York State Prison (1823), indicted harsh disciplinary methods used during his incarceration and called for separating prisoners instead of requiring convict labor, a recommendation agreeing with the conclusions of an 1822 New York Senate report on prisons. Two works by Horace Lane, The Wandering Boy, Careless Sailor (1839) and Five Years in State's Prison (1835), respectively a first-person didactic account of his criminal life and a dialogue between two prisoners of Auburn and Sing Sing, also explore the inefficacy of corporal punishment.
Many reformers also objected to harsh corporal punishments as a way of forcing convicts to work. Philadelphia's inspector Richard Vaux, a frequent critic, argued in 1855: "It is believed that the congregation of convicts during their incarceration for crime-punishment, and their sale to the highest bidder as human machines, out of which profit is to be made, is of far greater evil to society, than society yet fully comprehends" (Staples, p. 33). Convicts were subdued with straitjackets, iron gags, the lash, and the cold shower-bath, punishments applied frequently in New York penitentiaries to improve productivity. Proponents of extreme punishments sometimes depicted blacks, immigrants, Indians, and certain white recidivists as more likely to be inured to pain, an argument also advanced by slaveholders. John W. Edmonds, a judge who founded the New York Prison Association in 1844, was one of many humanitarians opposed to flogging, which was finally outlawed by New York penitentiaries in 1847.
THE MODEL AMERICAN PENITENTIARY
Officials and philanthropists in the first half of the century built, managed, and theorized about penitentiaries based on principles of separation and solitary confinement. Americans acknowledged European predecessors, including Cesare Beccaria, who argued that "it is better to prevent crimes than to punish them" (pp. 104-105) and John Howard, whose writings improved British prisons. Elizabeth Fry's work in British prisons and Alexander Maconochie's in Australian institutions were also lauded by Americans. The 10 April 1847 issue of the Literary World praised Fry as having "gone forth with a mission to complete the unfinished labor of Howard," reminding readers that "our own country is taking the lead, most honorably, in this humane science" ("Review of Memoirs," p. 226).
Penitentiaries provided architectural and pro-grammatic models for visitors who wished to observe reforms in action. Scott Christianson describes the "inspection avenues" that allowed officials and visitors to Auburn to secretly observe convicts at work. Approved visitors to Eastern State, colloquially termed "Cherry Hill" because of its site in a former cherry orchard, were permitted to engage solitary inmates in conversations focused on moral rehabilitation. Norman Johnston argues that "over three hundred prisons worldwide show the direct or indirect imprint of Haviland's Philadelphia and Trenton prisons. It is on the basis of both contributions that Cherry Hill must be considered the most influential prison ever built and arguably the American building most widely imitated in Europe and Asia in the nineteenth century" (p. 105).
Touring the United States in the early 1830s as representatives of the French government, Gustave de Beaumont and Alexis de Tocqueville (who would also produce Democracy in America from this visit) reported on the American models of penal discipline in On the Penitentiary System in the United States and Its Application to France (1833); they were succeeded in 1837 by their countrymen Frédéric-Auguste Demetz and Guillaume-Abel Blouet and the Spaniard Ramón de la Sagra, who visited in the mid-1830s. Visitors noted that prison discipline societies associated with the penitentiaries emphasized the virtues of their own system and the vices of the other. Eastern State's solitary system, with individual cells used for work, contemplation, and sleeping, was more expensive to administer and appeared to induce mental degeneration in some inmates. The congregate Auburn system was profitable but appeared less humanitarian in its reliance on corporal punishments and on independent contractors to supervise silent convict laborers. As Beaumont and Tocqueville state, "the Philadelphia system produces more honest men, and that of New York more obedient citizens" (p. 60).
In 1841 Dorothea Dix (1802-1887), a schoolteacher and a writer of children's didactic literature, became an advocate for prisoners and the mentally ill after teaching a Sunday school class for women in an East Cambridge jail. In 1843, after surveying conditions for the incarcerated and institutionalized in the state, she reported her findings to the Massachusetts legislature, and in subsequent legislative addresses she identified abuses in other states' facilities. In Remarks on Prisons and Prison Discipline in the United States (1845), Dix connected rehabilitating criminals and diminishing poverty as "the two great questions" and argued for making paupers "useful" citizens and for paying convicts for work to help them improve habits and conscience (p. 5).
The reformer Samuel Gridley Howe (1801–1876) noted in 1846 that the debate between the supporters of separate and congregate establishments reflected cultural differences: Europeans, reluctant to endorse corporal punishment, preferred the Philadelphia system, while most Americans approved the profitable Auburn system. Howe counted the Americans Francis Lieber (1800–1872) and Dorothea Dix as recommending Eastern State, while George Combe and Charles Dickens, visiting from Britain, were horrified by the degenerative effects of solitary confinement there.
Others also kept prisons in the public eye. Chaplains, philanthropists, and officials reported on penitentiaries in publications issued by prison discipline societies; excerpts from such reports and from related books were often reprinted in reviews appearing in popular periodicals. Francis Lieber, a German immigrant and professor of political science who translated Beaumont and Tocqueville's report on penitentiaries into English, later wrote his own book on the subject. In an 1847 book on prison discipline, the American Francis Gray responded to Howe's 1846 book, which had expressed a preference for solitary confinement, by arguing on behalf of the congregate Auburn system. John Luckey, a chaplain, published Life in Sing Sing in 1860, a brief history documenting how moral instruction improved several convicts.
In the mid-1840s Eliza Farnham and Georgiana Bruce reorganized the previously badly managed women's prison at Mount Pleasant, associated with Sing Sing. Nicole Hahn Rafter notes that their educational program permitted some conversation and allowed fictional texts. Barbara Packer notes that, during Farnham and Bruce's four-year tenure, the prominent transcendentalist Margaret Fuller (1810–1850) read excerpts from prisoners' journals sent by Bruce. With Caroline Sturgis and W. H. Channing, Fuller visited Mount Pleasant in fall 1844 and motivated friends to donate books to supplement religious tracts; the same year she spent Christmas with the female convicts.
Other writers also advocated for prisoners by noting the economic circumstances that drive individuals to crime, the cruelty of particular punishments, and the poor prospects facing released convicts. As noted above, Lydia Maria Child advocated improvements in the criminal justice system in her journalism. In her story "The Irish Heart: A True Story," published in Fact and Fiction (1846), she tells of the young Irish immigrant James, unfairly sentenced as a forger to Sing Sing, which fails to provide him with adequate skills to earn a living; upon release he receives tools and other help from the New York Prison Association. In the same anthology's "Rosenglory," Child describes Susan, a domestic servant who, after being corrupted by one employer's son, steals money from another employer who has withheld her wages. Susan is sent to prison and later receives assistance from a home for discharged women convicts. In Harriet Beecher Stowe's (1811–1896) Uncle Tom's Cabin (serialized in 1851–1852), Marie St. Clare sends her slave Rosa to be whipped in a New Orleans jail despite her sister-in-law Ophelia's protests that to put a girl under a man's lash degrades her body and soul.
Other writers represented more positive views of how the law protects the innocent, emphasizing the speedy, efficient resolution of crimes. Catharine Sedgwick was involved with the Women's Prison Association and the Isaac Hopper Home for discharged women prisoners. She alluded to the unfortunate situation of the innocent convict in her novel Married or Single? (1857), in which Alice Clifford visits her brother Max in the Tombs because he has been falsely indicted for forgery. After Alice spends a night in a jail cell adjacent to his, Max is acquitted because several individuals help Alice prove his innocence at trial. Horatio Alger's (1832–1899) prototypical rags-to-riches tale Ragged Dick (1868) contains a subplot detailing how a fellow boarder steals Dick's bankbook; tipped off by the victim, the police set a trap for the thief, who is arrested, convicted, and sentenced to nine months on Blackwell's Island.
As Andie Tucher notes, American periodicals began printing crime news in 1820, shortly after certain London papers started columns reporting petty crimes. Economic fluctuations encouraged the editors James Gordon Bennett of the New York Herald and Horace Greeley of the New York Tribune to develop a newspaper readership among the working class by castigating greedy, fraudulent entrepreneurs and corrupt politicians as well as alleged murderers and thieves. Periodicals printed descriptions of jails and prisons based on reporters' visits and collected reports of domestic and foreign crimes. The National Police Gazette (published 1845 to 1933 and claiming a circulation of forty thousand readers in its first decade) described famous crimes in history and noted recent crimes reported in the popular press in the United States and abroad. The Gazette summarized criminal trials, editorialized about political crimes, and printed articles about historical and contemporary criminals; the editor, George Wilkes, also published the latter in pamphlet form, some of which were reprinted decades later. Patricia Cline Cohen argues that newspaper articles about the clerk Richard Robinson, acquitted of the 1836 murder of his lover, the New York prostitute Helen Jewett, increased sales of the papers and fueled competition in the 1840s by retailing obsessions about sex and death.
Sensational, dime, and western novels incorporated lurid details also used in newspaper crime stories. According to David Reynolds and Kimberly Gladman, the first American city novel was George Lippard's The Quaker City; or, The Monks of Monk Hall: A Romance of Philadelphia Life (1844–1845). Inspired by Eugène Sue's Mystères de Paris, The Quaker City describes upstanding citizens and criminals conspiring to seduce young women. Reynolds characterizes journalistic aspects of the prolific George Foster's New York in Slices (1849), Fifteen Minutes around New York (1854), and New York Naked (1854) as "realistic exposés" of everyday life in the city. George Thompson's many novels, including Venus in Boston (1849) and City Crimes (1849), lasciviously describe the degenerative effects of drinking and promiscuous sexual habits, unveiling seemingly virtuous individuals as deviants working closely with vicious criminal gangs. Dime novels, including some published in Erastus Beadle's series beginning in 1860, and some westerns were also criticized as morally pernicious. Bret Harte's "The Luck of Roaring Camp" (1868) more optimistically depicts the rehabilitation of those inclined toward transgression in a plot in which a baby civilizes miners on the frontier.
REPRESENTING CRIME AND PUNISHMENT
In addition to the works already cited, images of incarceration and punishment appear in a number of other texts, including Indian captivity narratives and Revolution-era captivity narratives such as Thomas Dring's Recollections of the Jersey Prison Ship (1829); narratives of slavery, including Harriet Jacobs's Incidents in the Life of a Slave Girl (1861) and Harriet Wilson's novel Our Nig (1859); and anti-Catholic convent literature, such as Rebecca Reed's Six Months in a Convent (1835) and satires of the latter such as Six Months in a House of Correction (1835). After Henry David Thoreau resisted paying his poll tax in protest of the Mexican-American War, he described his night in the Concord jail in "Resistance to Civil Government" (1849).
Several American fiction writers refer to issues associated with historical and contemporary practices of crime and punishment, and some criticize contemporary reforms and reformers. Edgar Allan Poe's fictions represent transgressive anxieties and fears of incarceration; particularly chilling are the depiction of murder in "The Tell-Tale Heart" (1843) and that of an old man in London who "is the type and the genius of deep crime" (p. 272) in "The Man of the Crowd" (1840). In the first American detective stories, Poe describes Auguste Dupin's ratiocination in solving criminal cases in "The Murders in the Rue Morgue" (1841), "The Purloined Letter" (1844), and "The Mystery of Marie Rogêt" (1842–1843).
Nathaniel Hawthorne's works are more critical of criminal stereotypes and reform motivations. "Endicott and the Red Cross" (1838) and The Scarlet Letter (1850) characterize the cruelty of Puritan punishments directed at those who are different (Episcopalians, women, Indians). The House of the Seven Gables (1851) denounces social conventions falsely accusing Clifford Pyncheon, foreigners, deviants, women, and the poor of transgressive behaviors. The Blithedale Romance (1852) suggests that reforms focused on moral rehabilitation fail in its representation of Hollingsworth's "impracticable plan for the reformation of criminals through an appeal to their higher instincts" (p. 36).
Herman Melville mentions the prison reformer Elizabeth Fry in The Confidence-Man (1857) and Eugène-François Vidocq, the French criminal turned detective, in White-Jacket (1850) and Moby-Dick (1851). Melville's other fictions note the imprisoning aspects of civilizing society (Typee, 1846) and colonialism (Omoo, 1847; Mardi, 1849). Images of captivity recur in Redburn (1849), The Piazza Tales (1856), and Israel Potter (1855), as protagonists experience diverse forms of captivity on land and sea. The last scenes of "Bartleby, the Scrivener" (1853) and Pierre (1852) take place in the Tombs, a New York jail, and in Billy Budd (1924) the title character is summarily executed after a hasty trial at sea.
Foreshadowing naturalistic depictions of crime and punishment, Life in the Iron Mills (1861), by Rebecca Harding Davis (1831–1910), depicts the wretched circumstances endured by poor ironworkers, who labor like convicts in that their lives and work are constrained by those in authority. In the novella, Deb picks the pocket of the rich observer who admires her cousin Hugh's ironwork, an action resulting in Hugh's conviction as a thief after he tries to return the money. Davis portrays the inevitable, cruel punishment heaped on the honest worker. Like Hawthorne and Melville, she suggests that Americans countenance social and economic inequalities as the byproduct of entrepreneurial spirit. Decades of investment in penitentiary programs and thousands of words endorsing moral rehabilitation in the antebellum period reflect and reconfigure reform ideals as inextricably tied to ideas of American progress.
See also Individualism and Community; Life in the Iron Mills; Psychology; Reform; Sensational Fiction
Beaumont, Gustave de, and Alexis de Tocqueville. On the Penitentiary System in the United States and Its Application to France. Translated by Francis Lieber. Philadelphia: Carey, Lea, and Blanchard, 1833.
Beccaria, Cesare. An Essay on Crimes and Punishments. Translated by E. D. Ingraham. Philadelphia: H. Nicklin, 1819. http://www.fordham.edu/halsall/mod/18beccaria.html.
Carey, Mathew. Appeal to the Wealthy of the Land. 2nd ed. Philadelphia: L. Johnson, 1833.
Child, Lydia Maria. Fact and Fiction: A Collection of Stories. New York: C. S. Francis, 1846.
Child, Lydia Maria. Letters from New York. 1845. Freeport, N.Y.: Books for Libraries, 1970.
Dix, Dorothea. Remarks on Prisons and Prison Discipline in the United States. Boston: Munroe and Francis, 1845.
Farnham, Eliza. Preface to The Rationale of Crime, by M. B. Sampson. New York and Philadelphia: D. Appleton and George S. Appleton, 1846.
Fern, Fanny. Ruth Hall and Other Writings. Edited by Joyce W. Warren. New Brunswick, N.J.: Rutgers University Press, 1991.
Hawthorne, Nathaniel. The Blithedale Romance. 1852. Columbus: Ohio State University Press, 1964.
Hawthorne, Nathaniel. The Scarlet Letter. 1850. Edited by Ross Murfin. Boston: Bedford Books, 1991.
"Review of Memoirs of Mrs. Elizabeth Fry." Literary World, 10 April 1847, pp. 22627.
Poe, Edgar Allan. Great Short Works of Edgar Allan Poe. Edited by G. R. Thompson. New York: Harper and Row, 1970.
Sampson, M. B. The Rationale of Crime, and Its Appropriate Treatment. New York and Philadelphia: D. Appleton and George S. Appleton, 1846.
Sedgwick, Catharine. Married or Single? New York: Harper, 1857.
Shaw, Lemuel. A Discourse Delivered before the Members of the Humane Society of Massachusetts, 11 June 1811. Boston: John Eliot, 1811.
Six Months in a House of Correction; or, The Narrative of Dorah Mahony. Boston: Benjamin B. Mussey, 1835.
Brown, Richard. Knowledge Is Power: The Diffusion of Information in Early America, 1700–1865. New York: Oxford University Press, 1989.
Carlton, Frank. "Abolition of the Imprisonment for Debt in the United States." Yale Review 17 (November 1908): 339–344.
Christianson, Scott. With Liberty for Some: 500 Years of Imprisonment in America. Boston: Northeastern University Press, 1998.
Cohen, Patricia Cline. The Murder of Helen Jewett: The Life and Death of a Prostitute in Nineteenth-Century New York. New York: Knopf, 1998.
Colatrella, Carol. Literature and Moral Reform: Melville and the Discipline of Reading. Gainesville: University Press of Florida, 2002.
Cross, Whitney R. The Burned-Over District: The Social and Intellectual History of Enthusiastic Religion in Western New York, 1800–1850. New York: Harper and Row, 1950.
Davis, David Brion. Homicide in American Fiction, 1798–1860. Ithaca, N.Y.: Cornell University Press, 1957.
Friedman, Lawrence M. Crime and Punishment in American History. New York: Basic, 1993.
Johnston, Norman, with Kenneth Finkel and Jeffrey A. Cohen. Eastern State Penitentiary: Crucible of Good Intentions. Philadelphia: Philadelphia Museum of Art, 1994.
Masur, Louis P. Rites of Execution: Capital Punishment and the Transformation of American Culture. New York: Oxford University Press, 1989.
Packer, Barbara. "Diaspora." In Cambridge History of American Literature, vol. 2, 1820–1865, edited by Sacvan Bercovitch, pp. 495–547. Cambridge, U.K.: Cambridge University Press, 1995.
Rafter, Nicole Hahn. Partial Justice: Women, Prisons, and Social Control. 2nd ed. New Brunswick, N.J.: Transaction, 1990.
Reynolds, David S. Beneath the American Renaissance: The Subversive Imagination in the Age of Emerson and Melville. Cambridge, Mass.: Harvard University Press, 1989.
Reynolds, David S., and Kimberly R. Gladman. "Introduction." In Venus in Boston and Other Tales of Nineteenth-Century City Life, by George Thompson, pp. i–xiv. Amherst: University of Massachusetts Press, 2002.
Rothman, David J. "Perfecting the Prison." In The Oxford History of the Prison: The Practice of Punishment in Western Society, edited by Norval Morris and David J. Rothman. New York: Oxford University Press, 1998.
Smith-Rosenberg, Carroll. Religion and the Rise of the American City: The New York City Mission Movement, 1812–1870. Ithaca, N.Y.: Cornell University Press, 1971.
Staples, William G. Castles of Our Conscience: Social Control and the American State, 1800–1985. New Brunswick, N.J.: Rutgers University Press, 1990.
Thomas, Brook. Cross-Examinations of Law and Literature: Cooper, Hawthorne, Stowe, and Melville. Cambridge, U.K.: Cambridge University Press, 1987.
Tucher, Andie. Froth and Scum: Truth, Beauty, Goodness, and the Ax-Murder in America's First Mass Medium. Chapel Hill: University of North Carolina Press, 1994. | 2026-02-02T16:39:05.058626 |
936,720 | 3.860343 | http://www.beliefnet.com/healthandhealing/getcontent.aspx?cid=%0D%0A%09%09%09%09%09%09311800 | Gastroesophageal Reflux Disease—Child
(GERD—Child; Chronic Heartburn—Child; Reflux Esophagitis—Child; Gastro-oesophageal Reflux Disease—Child; GORD—Child; Heartburn—Child; Reflux—Child)
Pronounced: Gas-tro-ee-sof-a-geal re-flux disease
Gastroesophageal reflux disease (GERD) is a disorder that results from food and stomach acid backing up into the esophagus from the stomach.
GERD is different from gastroesophageal reflux (GER). GER is a common disorder seen in infants, which causes them to spit up. Most infants outgrow GER within 12 months.
GERD can occur at any age and typically requires lifestyle changes, medications, and sometimes surgery. GERD can cause serious health issues and the sooner it is treated, the better the outcome.
Gastroesophageal Reflux Disease
The exact cause of GERD is unknown. Several factors contribute to the condition, including:
- Abnormal pressure to the lower esophageal sphincter (LES), a valve that keeps food in the stomach
- Narrow or short esophagus
- Delayed emptying of the stomach
The following factors increase the chances of developing GERD. If your child has any of these risk factors, tell the doctor:
If your child has any of these symptoms, do not assume it is due to GERD. These symptoms may be caused by other conditions. Tell the doctor if your child has any of these:
- Regurgitation or vomiting
- Bloody vomit
- Weight loss or poor weight gain
- Difficulty swallowing
- Pain in the abdomen or chest
- Recurrent pneumonia or respiratory problems
- Cough or wheezing
- Dental problems (due to the effect of stomach acid on tooth enamel)
- Feeling full almost immediately after eating
- Irritation to esophagus
- Chronic heartburn
Your doctor will ask about your child’s symptoms and medical history, and perform a physical exam. Your child may need to see a pediatric gastroenterologist, a doctor who specializes in gastrointestinal diseases.
Tests may include:
- Upper GI series—a series of x-rays of the upper digestive system taken after drinking a barium solution
- Upper endoscopy with biopsy—a tube is inserted into the esophagus to look at the lining and a piece of tissue is taken for testing
- 24-hour pH monitoring—a probe is placed in the esophagus to keep track of the level of acidity in the lower esophagus
- Short trial of medicine
Talk with your doctor about the best treatment plan for your child. Treatment options include the following:
Your child's doctor may suggest making lifestyle changes before trying medication. These changes may include:
- Eating small, frequent meals
- Not eating two to three hours before bedtime
- Raising the head of your child’s bed
- Instructing your child to lie on the left side when sleeping
Your child may also need to avoid certain foods, such as:
- Fried foods
- Spicy foods
- Caffeine products
- Carbonated beverages
- Foods high in fat and acid
If your child is obese, your doctor may recommend weight loss. Avoiding second-hand smoke is also important.
Medications may include:
- Histamine-2 receptor blockers—to decrease acid production (eg, Tagamet, Pepcid, Zantac)
- Proton pump inhibitors—to heal the esophagus lining and relieve symptoms (eg, Prilosec, Prevacid, Protonix, Nexium)
- Promotility drugs—to help speed stomach emptying (eg, Reglan)
- Over-the-counter antacids—to relieve heartburn (eg, Tums, Maalox)
Many of these are over-the-counter medications. Talk to your child's doctor about any new medication.
In severe cases, the doctor may recommend surgery. The most common treatment is called fundoplication. During this procedure, the surgeon wraps part of the stomach around the lower esophageal sphincter. This makes the sphincter stronger and prevents stomach acid from backing up into the esophagus.
Children’s Digestive Health and Nutrition Foundation
National Digestive Diseases Information Clearinghouse (NDDIC)
About Kids Health
Canadian Digestive Health Foundation
Dente K. Quick lesson about gastroesophageal reflux disease in children and adolescents. EBSCO Nursing Reference Center website. Available at: http://www.ebscohost.com/thisTopic.php?marketID=16topicID=860. Accessed May 19, 2008.
Gastroesophageal reflux in children and adolescents. National Digestive Diseases Information Clearinghouse website. Available at: http://digestive.niddk.nih.gov/ddiseases/pubs/gerinchildren/index.htm. Accessed May 19, 2008.
GERD in children and adolescents. Children’s Digestive Health and Nutrition Foundation website. Available at: http://gerd.cdhnf.org/User/Docs/PDF/AdolesGERDFlier.pdf. Accessed May 19, 2008.
GERD in children with an underlying structural anomaly. Children’s Digestive Health and Nutrition Foundation website. Available at: http://gerd.cdhnf.org/User/Docs/PDF/CUSA_Brochure.pdf. Accessed May 19, 2008.
Pediatric gastroesophageal reflux, clinical practice guideline summary. Children’s Digestive Health and Nutrition Foundation website. Available at: http://gerd.cdhnf.org/User/Docs/PDF/GERD_8_pg_brochure_031604.pdf. Accessed May 19, 2008.
Pediatric GE reflux clinical practice guidelines. J Pediatr Gastroenterol Nutr. 2001;32:S1-S31.
Last reviewed May 2008 by Kari Kassir, MD
Please be aware that this information is provided to supplement the care provided by your physician. It is neither intended nor implied to be a substitute for professional medical advice. CALL YOUR HEALTHCARE PROVIDER IMMEDIATELY IF YOU THINK YOU MAY HAVE A MEDICAL EMERGENCY. Always seek the advice of your physician or other qualified health provider prior to starting any new treatment or with any questions you may have regarding a medical condition.
Copyright © 2011 EBSCO Publishing All rights reserved. | 2026-02-01T20:25:00.131670 |
953,497 | 3.669912 | http://www.oxfordbiblicalstudies.com/resource/lessonplan_15.xhtml | Lesson Plan: Exploring Biblical Poetry
Anne W. Stewart
Audience: Undergraduate Introduction to Bible course
By completing this lesson, the student will:
- 1. Gain an introduction to the nature of poetry in the Hebrew Bible, including its common features and forms
- 2. Explore the diverse genres of poetry in the Hebrew Bible
- 3. Understand the background of common images, metaphors, and motifs
- 4. Consider the range of tools needed to interpret biblical poetry
- 5. Analyze individual poems
Guide to this lesson
As presented here, the lesson is too ambitious to be accomplished in one class period. The material could be used in one of several ways:
- 1. One session: Use the material in section I, including the activity with Judges 4–5. End the class period by using the discussion questions in the Conclusion below, about the nature of poetry. As a follow-up assignment, the instructor may wish to assign a project on ancient and modern poetry (see Conclusion, paragraph 2).
- 2. Two sessions: After doing the introductory session (above), plan a second session with the material in 2–3 of the other sections in the lesson plan (The Psalms, The Prophets, Didactic Poetry, The Song of Songs, Lamentations). Focus the second session on comparing the poetry in different biblical books (such as the Psalms and the Prophets) or in different genres of poetry (such as wisdom poetry [including wisdom psalms] and poetry of lament [including Lamentations and lament psalms]).
- 3. Multiple sessions: For an entire unit on biblical poetry, use the material in section I for an introductory session. Focus additional sessions on different genres of biblical poetry, selecting one or all of the genres outlined below. The instructor may wish to incorporate relevant modern poems throughout the unit (see, for example, Chapters into Verse) and/or encourage students to write reaction papers or blog entries in response to the poems studied.
- 4. Multiple sessions: Alternatively, the instructor may focus additional sessions on features of biblical poetry, such as figurative language (using material from the Psalms and Song of Songs), form (using material from the Psalms and Lamentations), and parallelism (using material from Proverbs and Prophetic Poetry).
This lesson could be adapted for high school, college, or seminary students. High school instructors may wish to simplify the assigned readings, abbreviate the analysis of the textual units, and focus particularly on the interface between ancient and modern poetry and music. Seminary instructors may wish to assign additional reading, include some analysis of the text in Hebrew (particularly where the translation is disputed, as noted below), and incorporate discussion of theological interpretation of certain poems (see suggestions for Further Reading).
Introduction and Overview: What is poetry? What is poetry doing in the Bible?
Reading to assign in preparation (all available on OBSO): [The instructor should assign the combination of texts that will best suit the students' context. For an introductory high-school or undergraduate level, the first two entries provide brief overviews. For more advanced students, see Berlin's essay in the Jewish Study Bible, as well as the suggested Further Reading.]
- "poetry," in A Dictionary of the Bible
- Adele Berlin, "Poetry, Biblical Hebrew" in The Oxford Companion to the Bible
- "The Characteristics of Biblical Poetry," in The New Oxford Annotated Bible
- Adele Berlin, "Reading Biblical Poetry," in The Jewish Study Bible
- For example, use the side-by-side biblical text tool to have the students compare Genesis 1:27 in the New Oxford Annotated Bible (NRSV) and the Jewish Study Bible (TANAKH). They may wish to check other translations, as well. Note that some translations display this as poetry (in lines) and others display it as prose. Ask the students to discuss which decision they prefer and why.
[The instructor may wish to solicit the students' ideas about how to define poetry. For example, supply a Shakespearian sonnet, a list, and a paragraph of prose and ask the students to discuss which they would classify as poetry and why.]
There are innumerable ways to define poetry. Samuel Taylor Coleridge famously quipped that poetry is "the best words in the best order." Poetry is often identified visually by the blank space left on the page when the poem is printed in lines, as opposed to prose, which is usually printed in paragraph form. Much Western poetry is characterized by meter, rhyme, and lineation. Yet these are not foolproof markers. Not all poems have all of these features, and some poems do not have any of them.
What most commonly distinguishes poetry from prose is its organization into lines. However, poetry is of course much more than its visual arrangement. Poetry is often associated with an expressive quality, conveying heightened forms of perception, experience, or emotion. This feature of poetry makes it both rewarding and challenging to study, for it often requires the reader to read slowly and repeatedly, pausing over vivid imagery, novel uses of words, or heightened modes of perception.
Defining Poetry in the Bible
Poetry makes up a significant portion of the Hebrew Bible, including the book of Psalms, most of the wisdom books, the Song of Songs, Lamentations, many prophetic texts, and poems set within prose accounts (e.g., Exodus 15; Judges 5). However, defining poetry in the Bible is a particularly difficult task because unlike much poetry in the Western canon, Hebrew poetry does not have a characteristic meter, nor is it always arranged visually in lines. For this reason, scholars have vigorously debated how to differentiate poetry from prose (and whether or not such a distinction is even relevant to the Hebrew Bible).
It is helpful to recognize the features that are common to Hebrew poetry. While not every poem contains all of these features, most poems in the Hebrew Bible have a density of these features:
Parallelism: Many scholars have pointed to parallelism as the key feature of Hebrew poetry. Parallelism is the repetition of similar syntactical patterns in adjoining lines. For example, Proverbs 12:5 contains two parallel lines that mirror one another in syntax and sound: "The thoughts of the righteous are just; the advice of the wicked is treacherous" (NRSV). In Hebrew, each line has the same number of words (3 words/line), and each word in the first line is the same part of speech and similar in sound to its corresponding term in the second line. The verse could literally be translated: "plans (maḥšĕbôt) of the righteous (ṣaddîqîm)–justice (mišpāṭ) // counsels (taḥbūlôt) of the wicked (rĕšāʿîm)–deceit (mirmâ)." Interestingly, in this saying the words that sound so similar actually convey contrasting meanings about the righteous and the wicked. These two lines thus stand in a certain tension. The syntactical parallelism functions to hold the two lines together, even as their meaning holds the two apart.
There has been much debate among scholars about how best to describe and classify Hebrew parallelism. In the 18th century, Robert Lowth proposed that there are three kinds of Hebrew parallelism: synonymous, in which the lines mirror one another in meaning or imagery; antithetic, in which the meaning or imagery of the lines contrasts; and synthetic, in which the second line neither repeats nor contrasts the first. These categories are still commonly used today, though their deficiencies have also been noted. Not all instances of Hebrew parallelism fit within these types, and the synthetic type is especially unhelpful because it essentially contains all instances of parallelism that do not fit into the first two categories. Taking a different approach, James L. Kugel argued that biblical parallelism is characterized by the general pattern: A, and what is more, B. That is, the second clause in some way parallels, echoes, extends, or qualifies the first. For Robert Alter, the study of biblical parallelism is less about quantifying the types than identifying the dynamic movement of the poetic line. He says that in parallelism, "meaning emerges from some complicating interaction" between the lines (Alter: 164).
Lines: Not all Hebrew poetry contains parallelism, and parallelism is not the sole feature of poetry. For this reason, other scholars have suggested that the line, not parallelism, is the key feature of Hebrew poetry (see Dobbs-Allsopp). While poetic lines are not always displayed visually in Hebrew manuscripts, the lines are evident by patterns of parallelism, rhythm, and rhyme scheme. That is, the language and syntax, not the format of the text, indicates the presence of lines. In acrostic poems, the lines are evident by the pattern of each line beginning with a different letter of the alphabet (see below, "Lamentations and the Acrostic Form"). It is important to note that when biblical poetry is arranged in lines in contemporary English translations, this reflects the editorial decisions of the translators. While these decisions are usually based on a long history of manuscript traditions and enjoy wide consensus, at times they differ among translations. Certain poems may be lineated differently or may not be rendered in lines at all.
Terseness: As Coleridge said, poetry is the best words in the best order. There are no extraneous words in poetry. Similarly, Hebrew poetry is often characterized by its brevity. It lacks many of the prose particles (such as relative clause markers, conjunctions, and definite articles) and repetitive syntax of Hebrew prose. For example, compare excerpts from the prose and poetry versions of the story of Jael and Sisera in the book of Judges.
Prose: "He [Sisera] said to her [Jael], 'Please let me have a little water because I am thirsty.' And she opened the skin of the milk, and she gave him a drink. And she covered him." (Judges 4:19)
Poetry: "Water he asked; milk she gave. She brought curds in a grand bowl." (Judges 5:25)
Word Play: Just as English poetry often uses rhyme or word play to link different lines, make connections between certain themes or ideas, or simply to delight the ear of the reader, so Hebrew poetry uses language to similar effect. For example, consider the rhyme between the two lines in Proverbs 12:5, discussed above: plans (maḥšĕbôt) of the righteous (ṣaddîqîm)—justice (mišpāṭ) // counsels (taḥbūlôt) of the wicked (rĕšāʿîm)—deceit (mirmâ). This saying may also use word play between the final word of each line to make an ironic point. The ways of the righteous and the wicked that are diametrically opposed sound strikingly similar. One must listen carefully to hear the difference between mišpāṭ (justice) and mirmâ (deceit).
Figurative Language: Hebrew poetry often makes extensive use of figurative language, such as metaphor, simile, and personification. Figurative language conveys an idea or image by means of language put to another use than its common or plain sense. Similes compare two images or ideas. For example, Psalm 1 declares that the one who delights in the teaching of God is "like a tree planted by streams of water" (Psalm 1:3), while the wicked one is "like chaff that wind blows away" (Psalm 1:4). Metaphors compare two images or ideas by stating that one thing is the other thing. For example, Proverbs 11:30a describes the condition of the righteous in terms of the metaphor of flourishing plant life: "the fruit of the righteous is a tree of life," while Proverbs 10:20a states: "the tongue of the righteous is choice silver," using an image of a valuable commodity to convey the surpassing worth of righteous speech. Such comparisons between the righteous and plants or silver are at once striking for both their incongruity—a tongue is not literally money, nor is a person a plant—and their fittingness. In this sense, metaphors and similes communicate something meaningful about the image or idea, but they also have the ability to shock or startle as they connect two seemingly incompatible images, such as a righteous person and a plant. Personification is also a frequent device in biblical poetry. Proverbs 1–9, for example, develops the image of wisdom as a woman through several poems. Wisdom is described as a figure whom the student should love, embrace, and seek (e.g., Proverbs 4:8). Such figurative language functions to make meaning more imaginative and vivid. Wisdom comes to life as one who can speak, both rebuking and enticing the student. Like metaphor and simile, personification has both a certain appropriateness and an inappropriateness. Wisdom is like a woman in some ways but not in other ways. Moreover, figurative language is not simply a clever way to say what could be said otherwise. Rather, it communicates novel ways of looking at the world. It requires the reader to consider ideas in a different perspective.
Studying Poetry in the Bible
However one defines Hebrew poetry, a definition by itself will only go so far. There is not just one type of poetry in the Bible. For this reason, it is more useful to study particular genres of biblical poetry that share similar features, forms, or other characteristics. While genres can often be distinguished in particular divisions of biblical texts (e.g., the Prophets), genres sometimes cut across different books or sections of the canon (e.g., wisdom poetry can be found in both the Psalms and in Proverbs). This lesson will focus on several of the most prominent genres in the Hebrew Bible.
Studying poetry in the Bible is similar to studying any other kind of poem. It requires slow, careful reading and attention to metaphor, imagery, and literary artistry. Studying biblical poetry also requires a familiarity with the imagery and conventions in which these poems were steeped. This lesson thus encourages close reading of particular poems in conversation with imagery and literary forms from the ancient Near Eastern world.
Activity: Prose vs. Poetry in Judges 4 and 5
Dividing the class into small groups of students, give each group the text of Judges 5:24–31. Ask the students to identify features of Hebrew poetry.
Next ask each group to compare the prose version of the story in Judges 4:17–22 with the poem. What is different between the two versions? Do they share the same perspective on the action? Do they use the same literary tools? What does the poetry capture that the prose version does not? How does it do this?
Among other features, the students may observe:
- 1. Extensive parallelism in the poetry.
- 2. Different perspectives on the scene between the prose and poetry versions. Where the prose proceeds by narration between the characters, the poem focuses on praise of Jael and the lyric voice of Sisera's mother. In this sense, it gives more attention to the emotion of the characters.
- 3. The poem embellishes more details (e.g., Jael gives Sisera "curds in a lordly bowl" [5:25] vs. simply "a skin of milk" [4:19]; the description of spoils of war by Sisera's mother uses parallelism to heighten and dramatize the image [5:30]).
- 4. The prose version includes certain details that are missing from the poem (e.g., the description of the alliance between Jabin and Heber [4:17] and Barak's pursuit of Sisera [4:22]).
The Form of Poetry in the Psalms
Many psalms contain references to musical notations, instruments, or musical leaders, indicating that these poems may have functioned as songs. Indeed, many psalms are titled šîr, "song," and others reference singing or musical instruments. For example, Psalm 98:5-6 proclaims: "Sing to the Lord with a lyre, with a lyre and the sound of a song! With trumpets and the blast of a horn shout praise before the King, the Lord."
The book of Psalms contains poems with particular forms. While much attention has been focused on what these forms may reveal about the original context or setting of the psalms, identifying the forms is also relevant to poetic analysis. The primary forms include hymns (e.g., Psalm 8), laments (e.g., Psalm 13), thanksgiving psalms (e.g., Psalm 150), wisdom psalms (e.g., Psalm 1), and royal psalms (e.g., Psalm 2).
For example, the lament psalm typically has the following components in the following order: (1) address to God; (2) complaint; (3) petition; (4) confession of trust in God; and (5) vow of praise to God.
- Using Psalm 13, ask the students to identify the components of the lament form [(1): v. 2 ("O Lord"); (2): vv. 2–3; (3): vv. 4–5; (4): v. 6a; (5): v. 6b].
Yet poems may depart from the expected forms. Psalm 88, for example, intentionally breaks the lament form, which is a jarring and quite striking departure if one knows the typical form. This psalm ends not with a confession of trust and praise but with additional complaint!
The Function of Poetry in the Psalms
The psalms are prayers, usually in a first-person voice. Often, there is little historical or social context provided as to the specific circumstances of an original speaker. Yet this is part of the poetic function of the psalms. The first-person voice allows any individual who reads or sings the psalm to make the prayer his or her own. Furthermore, the form of the psalms points to another feature of poetry, more generally. It is episodic. It highlights a first-person perspective and often taps into deep emotion with expressive language. In the psalms, these features are often on full display. Individual psalms privilege discrete moments and emotions. Taken together as one collection, the psalms capture a range of emotions from exuberant joy to agonizing suffering, from jubilant praise to bitter complaint.
Activity: Metaphor in the Psalms
The psalms are a particularly fitting place to consider metaphor in biblical poetry, for the psalms are filled with a variety of metaphors that provide vivid imagery and novel perspectives. One of the most well-known metaphors from the psalms is the image of God as a shepherd in Psalm 23.
- 1. This poem turns on the metaphor of God as a shepherd. According to the poem, what does the image of a shepherd capture about God?
- 2. What about a shepherd is not fitting to what the psalmist conveys about God?
- 3. If the students have additional time, use OBSO to research shepherding in Israel and the ancient Near East. What was the life of a shepherd like? What other references are there to shepherds in the Bible (either metaphorical or literal)?
The Nature of Prophetic Poetry
The overwhelming majority of prophetic speech is in the form of poetry. While prophetic poetry evidences features of poetry similar to other genres of poetry in the Hebrew Bible, it is often characterized, in particular, by the dynamic of speech from the divine world (either in the voice of God or the voice of God through the prophet) to the human world, and it frequently addresses a historical audience with vocative language (see Alter: 139–140). For example, the book of Amos, which begins by offering a particular time and location for the prophet's speech (Amos 1:1), contains a series of oracles that address the audience in the voice of God, as in Amos 3:1–2: "Hear this word that the Lord has spoken against you, O people of Israel, against the whole family that I brought up out of the land of Egypt: You only have I known of all the families of the earth; therefore I will punish you for all your iniquities."
The Function of Prophetic Poetry
Through its vivid imagery and language of direct address, prophetic poetry functions to influence the perceptions, emotions, and actions of the audience. For example, Isaiah 55 layers various images from the natural world and metaphors of water and wine to emphasize the theme of comfort, nourishment, and hope: "Ho, everyone who thirsts, come to the waters; and you that have no money, come, buy and eat! Come, buy wine and milk without money and without price" (Isaiah 55:1). Amos 5:18–24 uses images of light and darkness to inspire fear of God's judgment: "Is not the day of the Lord darkness, not light, and gloom with no brightness in it?" (Amos 5:20).
Activity: Imagery and Poetry in Nahum
Ask the students to read Nahum 3 and to discuss the following questions in small groups:
- 1. What event does the text describe?
- 2. What kinds of emotions or ideas does it invoke? How does it convey those sentiments (e.g., what kinds of images or metaphors does it use)?
- 3. Verses 1–3 are a series of images that are not in complete sentences. What is the function of this style? What effect does it produce?
- 4. Using OBSO, find images of warfare from the ancient Near East. Compare the images in the poem to the images from the website. What is similar about the images of warfare in both sources? What is different? Are there elements of warfare (or the experience of warfare) which the poetry captures that the images cannot?
The Nature of Didactic Poetry
Robert Frost said that poetry "begins in delight and ends in wisdom." Israel's wisdom literature frequently uses poetry not only to delight the senses but to instruct the audience. The book of Proverbs comprises both longer poems in chapters 1–9 and short lines of poetry in the proverbial sayings. The book of Ecclesiastes contains several discrete poems (e.g., Ecclesiastes 3:1–8), and the dialogues in the book of Job are in the form of poetry. Moreover, poetry is a vital aspect of how these books communicate their wisdom. Proverbs, for example, relies upon the personification of wisdom and foolishness to portray vividly the desirability and danger of the opposing paths. The dialogues in Job make extensive use of metaphor and vivid imagery to convey the unsearchable nature of wisdom and the anguish of human suffering. Job himself, for example, compares God to an archer who assaults him, proclaiming: "For the arrows of the Almighty are in me; my spirit drinks their poison. God's terrors are arrayed against me" (Job 6:4).
Terseness and Poetry: the Function of Proverbs
Poetry makes a decisive difference to the didactic function of wisdom literature, which is particularly evident in the proverbial sayings of the book of Proverbs. The terse nature of proverbial sayings is not only a feature of their form, but it also serves their function of imparting wisdom and cultivating discernment in the student.
Activity: Ambiguity in Poetic Lines—Proverbs 22:6
- 1. The Hebrew text of this verse can be translated literally as, "Train a child in his way, and when he is old, he will not depart from it."
- 2. The ambiguity of the phrase "his way" allows multiple interpretations of the verse. Ask the students to discuss what this saying means. They might mention such possibilities as:
- Train a child in his way, that is, the way of wisdom, the way of one who is trained correctly
- Train a child in his way, that is, according to his aptitude, according to the student's age and ability
- Train a child in his way, that is, according to his social position or future role (e.g., a scribe)
- Train a child in his own way, that is, the way that he (foolishly) desires
The Song of Songs
The Song of Songs represents some of the most sublime poetry in the Hebrew Bible. Indeed, the very name of the book indicates its beauty. In Hebrew, the construction "song of songs" expresses superlative meaning. It might also be translated as "the best song." What makes this poetry so beautiful and unique is the way in which it captures the range of emotion involved in desire and erotic love. From joy to anguish, from delight to pain, from experiences of pleasure to situations of danger, the Song touches upon a range of emotions, senses, and images.
There is a debate among scholars about whether the poems in the Song were originally independent or were written as part of a unified work. In its final form, the Song is a sequence of poems. Although there are various vignettes within the poems, the book as a whole does not proceed as a clear narrative tale across the eight chapters. Rather, a series of recurring themes and emotions frequently circle back upon each other. The poems do not narrate a past event from the perspective of an omniscient speaker. Rather, the lovers' experience of seeking after and longing for the other unfolds in the present, alternating between each of their unique voices.
One of the most important themes in the book is the praise of the beloved. This is also a common trope in Egyptian love poetry. This style of poetry is known as a waṣf, a descriptive poem or song that describes the beloved's body part-by-part.
This style of poetry results in some metaphors that may be quite surprising to modern readers. As already discussed, for a metaphor to work well, it needs both congruence and dissonance. That is, it captures a certain aspect of the compared object, but it is not fitting in all respects. For example, in Song 4:2 the lover proclaims of his beloved: "Your teeth are like a flock of shorn ewes that have come up from the washing, all of which bear twins and not one among them is bereaved." This metaphor compares the color and evenness of the woman's teeth to young sheep, but of course the metaphor is limited. It does not imply that her teeth are hairy or smell like animals! This, too, is part of the function of the metaphor. The surprise or dissonance of comparing two unlike things is a provocative image. It prompts the reader to consider, in this case, an image of teeth in a new way. While teeth and sheep may be a surprising comparison, it is effective because there is a common element between the two. One would probably not say, for example, that teeth are giraffes. In that case, there is no bridge between the parts.
Voice and Poetry
The Song of Songs highlights the voices of both the male and female lovers. One of the most striking things about the Song is the mutuality it presents between male and female voices. For example, in Song 1:15–16, in English translation it is very difficult to determine who is speaking to whom: "Ah, you are beautiful, my love; ah, you are beautiful; your eyes are doves. Ah, you are beautiful, my beloved, truly lovely." In Hebrew, the pronoun "you" has different masculine and feminine forms, which makes clear that the lovers' voices alternate. But the English translation nicely captures the back-and-forth nature of their speech. Neither lover's voice dominates the conversation. Moreover, the poetry unfolds by the alternation of these individual voices. It is not mediated by a third-person narrator. The poetry highlights the individual voice and experience of the speaker.
As the poetry highlights the individual voices of each lover, it functions to convey an important point about the particularity of love. The mutuality of the lovers does not imply that they are identical. Both figures desire one another with the same intensity; both seek out the other; and both send the other away. However, the two figures have unique ways of conceptualizing their experience. As J. Cheryl Exum notes, male and female have distinct personalities and different ways of viewing the beloved and describing their experience of love (see Exum: 14–17).
Activity: The Voices of Two Lovers
- 1. Divide the class into groups, giving half of the groups the text of Song of Songs 4 and the other half the text of Song of Songs 2. Ask each group to answer the following questions:
- How does the lover describe his/her beloved?
- How does the lover describe the experience of love?
- What emotions, senses, and images does the lover invoke?
Among other things, the students may observe:
- The lovers have different modes of speaking about love for the other. The man describes the woman's body, part by part. But she often speaks of love through stories about the lover's approach or his absence (but see Song 5:10–16 for her description of the lover's body).
- The lovers seem to be differently affected by love. While the woman speaks about what love does to her—it makes her sick or faint (Song 2:5; see also 5:8), the man speaks about what the woman does to him ("you have captured my heart," 4:9; see also "your eyes overwhelm me," 6:5). As Exum explains, "She is lovesick, he is awestruck" (Exum: 15).
Lamentations
The book of Lamentations contains several poems that present a gripping and emotional response to the experience of the destruction of Jerusalem by the Babylonians (587 BCE) and the exile of its inhabitants. With vivid imagery, the poems describe the aftermath of the siege, its effect on women and children, the hostility of the enemies, and the desolation of the city. Like the Song of Songs, this book is a sequence of poems. It does not tell a coherent narrative from beginning to end, but it unfolds through various first-person voices who represent different responses to exile.
Lamentations as Poetic Sequence
There has been some debate among scholars about whether the poems in the book were originally separate or were written as a whole by one author. On the one hand, the poems represent different perspectives on the experience of exile through different voices, which may indicate that they were originally separate poems. But on the other hand, these different perspectives do not necessarily require different authors.
While the individual poems in the book do not form a linear narrative across the book, the sequence of the poems contributes to their meaning and effect. F. W. Dobbs-Allsopp, among others, has argued that Lamentations can be read as a lyric sequence, a form in which individual poems are organized into a larger collection (see Dobbs-Allsopp, Lamentations: 21–23). There are several ancient examples of poems collected into meaningful sequences, and this is also a technique of modern poets, such as Walt Whitman in Leaves of Grass or T. S. Eliot in The Waste Land. As Dobbs-Allsopp explains, poetic sequences have both centrifugal and centripetal elements. That is, there are elements that function to bind the poems together, even as the episodicity of the individual poems resists a completely unified or continuous perspective.
Within Lamentations, there are both thematic and formal centripetal forces. The use of the acrostic form (see below) in each poem binds the individual poems together, as does the repetition of certain themes, including grief (e.g., Lamentations 1:2), complaint (e.g., Lamentations 3:43–48), and expressions of faith (e.g., Lamentations 3:22–33). Within the book as a whole, there is not a linear progression from, for example, despair to hope. Rather, the various emotions circle back among themselves as the sequence advances.
At the same time, the different voices, different modes of address, and multiple emotions throughout the poetry serve as centrifugal forces that produce a very fragmentary effect. Lamentations does not present a carefully reasoned argument about the cause of destruction or a narrative report of destruction like we might find in a newspaper. Instead, it offers a disjointed set of images that shift back and forth between description, accusation, and anguished prayer. The effect is disorientation. It is often hard to pick out exactly what is happening, where, and when. But the result is that the readers are just as affected by the chaos of the scene as the poet.
Lamentations and the Acrostic Form
An acrostic poem is one in which each stanza or line begins with a different letter of the alphabet. (For other acrostic poems in the Bible, see Psalms 9; 10; 25; 119; Proverbs 31:10–31; Nahum 1.) In Lamentations, there are several variations of the alphabetic acrostic, which proceeds through the alphabet in order from aleph to tav, the equivalent of "a" to "z" in the English alphabet. In chapters 1, 2, and 4 of Lamentations, each verse begins with a different letter of the Hebrew alphabet in sequence. In chapter 3, the form is heightened, with each letter repeated three times. Chapter 5 does not have the acrostic form, but the poem contains twenty-two verses, which is the number of letters in the Hebrew alphabet. In that sense, there is still a certain continuity of form with the preceding poems, though the absence of the form may also be significant (see below).
Within Lamentations, the acrostic form may have several different functions:
- 1. The acrostic is a "container" for the poetry. In the absence of narrative, the form holds the poetry together and guides the reader from beginning to end.
- 2. It functions to hold together the otherwise scattered and chaotic content of the poems, which is especially true of Lamentations' fragmented images.
- 3. The variation in form of the acrostic gives the entire collection a sense of development and dynamism. As indicated above, the form reaches its height in the middle of the book (chapter 3), and it breaks down by the end of the book, as chapter 5 is missing the alphabetic organization. In fact, the absence of the acrostic in the final chapter may be a commentary on the experience of conquest and exile: the breakdown of expected order.
Activity: Emotion and Poetry in Lamentations
- 1. What does the text convey about the experience of exile?
- 2. What kinds of emotions, ideas, or events does it describe?
- 3. How does it convey those sentiments (e.g., what kinds of images or metaphors does it use)?
If time permits, ask the students to use OBSO to research the experience of exile. What do we know, historically, about the exile? What sources are available, both in the Bible and in the form of texts and artifacts from the ancient Near East? Given the variety and diversity of such sources, what unique contributions does the poetry of Lamentations offer to understanding exile?
Conclusion
What is poetry? Why does it matter? While this lesson has provided only a glimpse of the diversity and delight of poetry in the Bible, it should be evident that biblical poetry rewards close study. The forms and features of poetry often make a significant contribution to the meaning of the text and its interpretation. Hebrew poetry is beautiful literature, which is rich in imagery, emotion, and complex thought. At the conclusion of the lesson, it may be helpful to return to the beginning and discuss some of the following questions:
- 1. What is poetry? How does one identify poetry in the Bible? What factors influence how or whether one considers a particular text to be a poem?
- 2. Why does it matter? Does it make a difference for reading or interpreting the text?
Biblical poetry can also be studied alongside modern poetry, including the music that students may listen to every day. Ask the students to bring in a favorite poem or the lyrics to a favorite song that resonates with them differently after studying biblical poetry. Or ask the students to analyze a biblical poem alongside a modern poem. For example,
- 1. Psalm 121 and D. H. Lawrence, "The Hills"
- 2. Ecclesiastes 3:1–8 and The Byrds, "Turn! Turn! Turn!"
- Alter, Robert. The Art of Biblical Poetry. New York: Basic Books, 1985.
- Atwan, Robert and Laurance Wieder, eds. Chapters into Verse: Poetry in English Inspired by The Bible, Vol. 1: Genesis to Malachi. New York: Oxford University Press, 1993.
- Dobbs-Allsopp, F. W. Lamentations (Interpretation: A Bible Commentary for Preaching and Teaching). Louisville, KY: Westminster John Knox Press, 2002.
- Dobbs-Allsopp, F. W. "Poetry, Hebrew." In The New Interpreter's Dictionary of the Bible, edited by Katherine Doob Sakenfeld, vol. 4, 550–558. Nashville, TN: Abingdon Press, 2009.
- Exum, J. Cheryl. The Song of Songs. Old Testament Library. Louisville, KY: Westminster John Knox, 2005.
- Kugel, James L. The Idea of Biblical Poetry: Parallelism and Its History. New Haven, CT: Yale University Press, 1981.
- Petersen, David L. and Kent Harold Richards. Interpreting Hebrew Poetry. Minneapolis, MN: Fortress Press, 1992.
- Strawn, Brent A. "Lyric Poetry." In Dictionary of the Old Testament: Wisdom, Poetry, and Writings, edited by Tremper Longman III and Peter Enns, 437–446. Downers Grove, IL: InterVarsity Press, 2008.
Internal Radiation Therapy
What is internal radiation therapy? Internal radiation therapy, also called brachytherapy, is a type of radiation to treat cancer. The source of radiation is placed in your body or on an area of your body close to the tumor. It is used to shrink the tumor or kill the cancer cells. Brachytherapy may be used with other treatments such as external radiation therapy, medicines, and surgery.
How is brachytherapy done? The way that brachytherapy is given depends on many things, such as where the tumor or tumors are in your body. During brachytherapy, radioactive seeds are placed inside or around the tumor. Seeds are small objects that give off radiation (x-ray energy) in all directions. They can be placed on the skin, in an organ, or in a body cavity. Body cavities are openings in your body, such as your nose, mouth, and vagina. Some seeds can be left in your body permanently, while others will be removed. Brachytherapy may be given in several treatments. You may need to stay in the hospital during this procedure. You may need to return to have treatment every day for about a week.
What are the risks of brachytherapy?
- Radiation kills cancer cells, but it can also harm healthy cells. You may feel very tired during brachytherapy treatment. You may cough up blood or have blood in your saliva. You may be at an increased risk for urinary tract infections. You may have swelling and pain in organs or tissues. Women may have trouble getting pregnant. Your stomach, bowels, or other organs may not work as well as before, or they may stop working.
- Without internal radiation therapy, tumors can grow bigger and damage tissues around them. You can get very weak, lose weight, and have pain. It may be very hard for your body to heal. Cancer cells may spread and grow into new tumors in other parts of your body. These tumors can cause organ failure.
When should I contact my caregiver? Contact your caregiver if:
- You have a fever.
- You get a cold or flu.
- You cannot make it to your procedure on time.
- You have questions or concerns about your condition or care.
When should I seek immediate care? Seek care immediately or call 911 if:
- You get new or severe abdominal or pelvic pain.
- You feel weak, dizzy, or faint.
- You have a seizure.
- You have chest pain or shortness of breath.
- You have sudden memory changes.
You have the right to help plan your care. Learn about your health condition and how it may be treated. Discuss treatment options with your caregivers to decide what care you want to receive. You always have the right to refuse treatment.
© 2013 Truven Health Analytics Inc. Information is for End User's use only and may not be sold, redistributed or otherwise used for commercial purposes. All illustrations and images included in CareNotes® are the copyrighted property of the Blausen Databases or Truven Health Analytics.
The above information is an educational aid only. It is not intended as medical advice for individual conditions or treatments. Talk to your doctor, nurse or pharmacist before following any medical regimen to see if it is safe and effective for you.
For more information about the YES program please visit www.youthexploringscience.com.
Working and building the domes has been exciting, but also very beneficial. We traveled around St. Louis teaching kids and adults about the purpose and functions of the greenhouses. Also, we supervised the building of domes at different community centers. With a geodesic dome greenhouse, you can extend the growing season of your plants and protect them from the harsh weather outside.
Greenhouses and how they work:
Here's what we learned about why our dome (and other greenhouses) help plants live for a longer season:
Plants germinate (sprout) from seeds and grow through their life cycle depending on light and soil temperature. We couldn't do much about how much the sun was shining, but our dome made the air and the soil in the dome warmer than outside the dome. From what we saw, it seems that the growing season depends more on soil temperature than light, because some of our tougher plants like cabbage and lettuce kept growing in our dome most of the winter. They slowed down a lot, though.
But we wanted to know, why is it so hot inside??? We can tell a difference when we step inside right away, even though the plastic is not that thick. It's much warmer and the air feels sticky sometimes. It feels really nasty in there sometimes in the summer. There are 2 main things that our dome does to help the temperature stay warmer than the outside air.
1. The air inside the dome is separated from the air outside of the dome.
2. The clear (or semi-clear) skin lets light energy in, but traps heat energy.
Even though our dome skin is thin, it keeps the air inside the dome from mixing with the outside air when the wind blows, or a bus drives by. When the sun shines on the dome, lots of the high-energy light can come through the skin. Light goes through space in waves, and the light that helps us see can go right through clear objects, like glass or our dome skin. When the light bounces off the ground inside the dome and the plants and the tools, it loses some of its energy. The waves that bounce off carry less energy (they come back as longer, infrared heat waves), and those can't pass back out through the skin, so they get trapped inside. So while the sun shines, the dome gets hotter and hotter as the energy from the sun gets trapped. And this hot air can't mix with all the other air outside and level out. That's why it feels so different in the dome. We feel it right away. It's nice in the wintertime, but when it gets hot, we start sweating right away when we walk in.
At night, when it is colder, the air in the dome has to cool off before the ground can start getting colder. We buried a digital light and temperature reader in the middle of our dome, and also hung one up in the air on a string using a pipe cleaner to make a hook. We noticed that the air got colder, then the ground got colder. Also, when the air warmed up, the ground got warmer too.
The hotter air in the dome can also hold more water, so the air is more humid. Humidity means that there is more water in the air, and it can make it seem even hotter than the real temperature. That's why we seal the wood for the dome real carefully. All the water in the air can make the wood get moldy and rot. When we open our vent flaps all the way, it gets cooler fast. The hot air rises out of the vents, and mixes with the outside air. The water in the air also leaves too, and it feels much better in there. We always see little drops of water by the vents when they are closed. That's because the water condenses out of the air when it meets the colder surfaces near the vents.
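Those droplets form when humid dome air meets a surface at or below its dew point. As a rough illustration (our addition, not part of the original write-up), here is a small Python sketch using the standard Magnus approximation for the dew point; the example temperature and humidity values are made up.

import math

def dew_point_c(temp_c, rel_humidity_pct):
    """Approximate dew point (deg C) using the Magnus formula."""
    a, b = 17.62, 243.12  # standard Magnus coefficients
    gamma = (a * temp_c / (b + temp_c)) + math.log(rel_humidity_pct / 100.0)
    return (b * gamma) / (a - gamma)

# Example: dome air at 30 deg C and 85% relative humidity has a dew point
# near 27 deg C, so plastic cooled below that by outside air collects drops.
print(round(dew_point_c(30.0, 85.0), 1))  # -> 27.2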
Last thing: We want to warn people who think they can grow anything they want all year round. You can try, but you will end up killing a lot of plants (like we did in our first-year dome). There are plants that are good for planting in the cold season, like all the plants in the cabbage family. Their family name is Brassicaceae, so if a plant has Brassica in its scientific name, it is probably good for putting in your dome in the fall.
If you decide to use our instructable and make your own dome, we want to know what you planted!!!!! We grow food for people who can't always buy food on their own. We also go to different community groups and build new domes for them. We are getting pretty good at making these, which is why we thought we would share our experience with everybody. How hot does your dome get? What did you add to make our dome even better? Please let us know.
YES-2-Tech Teens from the Youth Exploring Science Program
Step 1: Materials Needed
40 pieces of 1" x 2" x 8' wood
1 gallon waterproof sealant
10 6' flat perforated metal straps
25 coarse thread bolts 1/4" diameter x 3/4" length
25 1/4" hex nuts
For putting everything together:
250 1-1/4" drywall screws
250 #8 washers
box of 10' x 100' 6 mil plastic sheeting
2000 5/16" staples that fit your staple gun
For rebar bender:
4' x 6" x 6" piece of wood
4 spikes or large bolts
4' piece of 3/4" plumbing pipe or conduit
2"-3" general purpose paint brushes
hand wood saw
bench vise (used to bend metal strap)
drill hammer (baby sledge)
2 socket wrench sets
Phillips screwdriver drill bit
set of multi-sized drill bits
Step 2: Clearing the Ground
Step 3: Cutting the Wood
2) Once you have those pieces cut, measure and cut 35 pieces of wood that are 48" in length. These are your "B" pieces. Mark these with the letter B.
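The 42" (A) and 48" (B) lengths match the two strut sizes of a standard 2-frequency (2V) geodesic dome fairly closely. If you want a bigger or smaller dome, one common approach (our sketch, not part of the original instructions) is to multiply your dome radius by the published 2V chord factors:

# Strut lengths for a 2-frequency (2V) icosahedral geodesic dome.
A_FACTOR = 0.54653  # chord factor for the shorter "A" struts
B_FACTOR = 0.61803  # chord factor for the longer "B" struts

def strut_lengths(radius_in):
    """Return (A, B) strut lengths in inches for a dome of the given radius."""
    return radius_in * A_FACTOR, radius_in * B_FACTOR

# Working backward from the 48" B pieces gives a radius near 77.7",
# which implies A pieces near 42.4" -- close to the 42" cut list used here.
radius = 48 / B_FACTOR
a_len, b_len = strut_lengths(radius)
print(f"radius ~ {radius:.1f} in, A ~ {a_len:.1f} in, B ~ {b_len:.1f} in")

Note that these are hub-center to hub-center lengths; the connector hardware absorbs a little of each strut in practice.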
Step 4: Weatherproofing the Wood
Step 5: Making the Connectors
1) Begin by cutting ten strips that have ten holes each.
2) Next cut twenty strips that have seven holes each.
3) Once you have all of your strips cut, you will use a vise to bend the pieces. Each bend should be approximately a 25 degree angle. On the strips that have ten holes, bend them twice, at the fourth hole from each end. Do this for all ten strips. For the seven-hole strips, bend them once, at the second hole from one end.
4) Once you have all of the pieces bent, you will assemble them. Attach two of the seven-hole strips to the middle (fifth hole from either side) of the ten-hole strips. Place the seven-hole strips on top of each other. Then, using a bolt and a nut, connect them together. You do not need to fully tighten the bolts at this time.
5) Repeat step four until you have a total of ten connectors.
Step 6: 5-Way Connectors
2) Once you have all of these strips cut, use a vise to bend the pieces after the first hole from one end. Make 25 degree angles on all bends. You must do this for all thirty strips.
3) Once all of your strips have been bent, it is time to assemble them. Line up the first hole for five of the strips and connect them with a bolt and nut. For assistance on how they should be assembled, refer to the picture.
4) Repeat step three until you have a total of six connectors.
Step 7: 6-Way Connectors
2) Once you have all of the strips cut, you need to bend them using a 25 degree angle. Bend at the fourth hole from each end (leaves five holes in center space). You will bend each strip twice. Do this for all of the strips.
3) Assemble them by connecting them at the center hole (the seventh hole) with a nut and a bolt. Do this for all nine of the connectors. Once you have all of these steps completed, you are finished fabricating the connectors.
Step 8: Make the Pentagons
Using your drill with a screwdriver attachment, fasten one arm of a 5-way connector to a 42" (A) piece of wood using two screws (with the washer around each of the screws). Look at the picture below for the placement of the screws. Once you have your first "A" piece of wood attached, you will repeat this step for the remaining four arms of the 5-way connector. Repeat for all five 5-way connectors.
Step 9: Adding the 4-way connectors
Step 10: Adding the 6-way connectors
Step 11: Attaching the "B" pieces
Step 12: Finish the pentagons
Step 13: Connect the Pentagons
Step 14: Connect The Bottom of the Pentagon
Step 15: Connect The Top of the Pentagon
Step 16: Preparing To Make the Roof
Step 17: Attaching The Roof
Step 18: Tighten all the Connectors
Step 19: Making the Door Frame
Step 20: Connect the door frame to the dome
Step 21: Adding the top of the door frame
Step 22: Adding the side supports
Step 23: Adding the top support for the door
Step 24: Make a Rebar Bender
To see a visual of how to make a rebar bender (jig) and bend the rebar to hold down the dome, visit this website:
Impacts on downstream water quality
This research area is closely aligned with research on sediment/contaminant delivery from mine sites to stream systems. Once the quantity of sediment eroded from a mine site, transported through a catchment and delivered to the stream system has been determined, the impact of this sediment on stream water quality needs to be assessed.
Location of the gauging stations upstream and downstream of Ranger (Left) and Jabiluka (Right). The direction of stream flow along the creeks is also shown.
The most obvious impact of suspended sediment is to reduce light penetration through the water column and therefore reduce the level of photosynthetic activity. Turbidity, which is related to suspended sediment concentration, can cause adverse effects at levels as low as 5 NTU. Elevated suspended sediment loads can affect fish and benthic organism respiration, feeding, and reproduction, and can change community structure. Local studies at Jim Jim Creek in Kakadu National Park found that macroinvertebrate community structure was impacted at suspended sediment increases of 17 mg/L. Impacts on fish populations were detected at concentrations of 100 mg/L.
A gauging station network has been implemented within the Magela Creek catchment to determine baseline sediment transport characteristics in the catchment against which the impact of future mine rehabilitation can be assessed. Flow and fine suspended sediment (mud) are monitored at gauging stations located upstream and downstream of Ranger (along Magela and Gulungul Creeks) and Jabiluka (along Ngarradj). Turbidimeters have been installed at all stations to indirectly monitor mud concentration in the streams.
Along Magela Creek, the equipment was installed on floating pontoons, which means that turbidity data were collected approximately 0.2 m below the surface for all flow conditions. Along Gulungul Creek and Ngarradj, turbidity sensors were contained within a plastic tube which extended from the stream bank to the water channel, and positioned approximately 0.2 m above the bed level at each station.
Water samples are collected at each station by an automatic pump sampler over a range of flow conditions. Sample collection is triggered by changes in stream turbidity. The mud concentration (silt and clay fraction, <0.45 mm) in each sample is determined by sieving, filtering and oven drying techniques (Erskine et al., 2001). These data, along with concurrent in situ turbidity measurements, are used to derive statistically significant relationships between mud concentration and turbidity for each station.
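Such rating relationships are commonly fitted in log-log space, giving a power-law curve from turbidity to mud concentration. The sketch below shows one plausible way to do this with NumPy; the sample values are invented for illustration, so this is not the Supervising Scientist's actual data or code.

import numpy as np

# Paired observations: in situ turbidity (NTU) and lab-determined mud
# concentration (mg/L). These values are invented for illustration.
turbidity = np.array([2.0, 5.0, 12.0, 30.0, 80.0, 150.0])
mud = np.array([3.1, 7.4, 16.0, 42.0, 95.0, 180.0])

# Fit log10(mud) = slope * log10(turbidity) + intercept, i.e. a
# power-law rating curve: mud = 10**intercept * turbidity**slope.
slope, intercept = np.polyfit(np.log10(turbidity), np.log10(mud), 1)

def mud_from_turbidity(ntu):
    """Estimate mud concentration (mg/L) from a turbidity reading (NTU)."""
    return 10.0 ** intercept * ntu ** slope

print(f"mud ~ {10.0 ** intercept:.2f} * NTU ** {slope:.2f}")
print(f"estimated mud at 50 NTU: {mud_from_turbidity(50.0):.1f} mg/L")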
Monitoring station along Magela Creek
The flow and fine suspended sediment (mud) concentration data collected at these stations have been used to derive both mud concentration and event-based mud load trigger levels which can be used for impact assessment. The impact assessment technique we have used is a Before-After-Control-Impact Paired Difference design (BACIP), which is the methodology recommended in the Australian Water Quality Guidelines (AWQG). Mud concentration trigger values can be applied on a day-to-day basis to identify/flag a need for immediate action. Event-based mud load trigger values will be used to assess the behaviour of the rehabilitated landform on an annual basis.
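In a BACIP design, the test statistic is the difference between paired upstream (control) and downstream (impact) observations, with trigger values set from the baseline ("before") distribution of those differences. A minimal sketch of the idea follows; the invented load values and the use of the 95th percentile as a trigger are our assumptions for illustration, and the actual guideline derivation is more involved.

import numpy as np

# Event mud loads (tonnes) at paired upstream (control) and downstream
# (impact) stations during the baseline period -- illustrative numbers.
upstream = np.array([12.0, 8.5, 20.1, 5.3, 15.7, 9.9])
downstream = np.array([13.1, 9.0, 21.5, 5.1, 17.2, 10.4])

baseline_diff = downstream - upstream

# One simple trigger: the 95th percentile of the baseline paired differences.
trigger = np.percentile(baseline_diff, 95)

def exceeds_trigger(up_load, down_load):
    """Flag a new paired observation whose difference exceeds the trigger."""
    return (down_load - up_load) > trigger

print(f"trigger: {trigger:.2f} t")
print(exceeds_trigger(10.0, 14.5))  # difference of 4.5 t -> True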
- Monitoring stream sediment movement in Ngarradj
- The impact of Cyclone Monica (April 2006) on riparian vegetation, in-channel large wood loadings, channel erosion and tree fall in the Ngarradj catchment
- Monitoring sediment movement in Gulungul Creek
- The impact of Cyclone Monica (April 2006) on stream sediment loading and tree fall in the Gulungul catchment
- Assessment of continuous Magela Creek turbidity data upstream and downstream of Ranger mine
Monitoring station along Gulungul Creek
Relevant Supervising Scientist reports
Moliere DR, Boggs GS, Evans KG, Saynor MJ & Erskine WD 2002. Baseline hydrology characteristics of the Ngarradj catchment, Northern Territory. Supervising Scientist Report 172, Supervising Scientist, Darwin NT.
Evans KG, Moliere DR, Saynor MJ, Erskine WD & Bellio MG 2004. Baseline suspended-sediment, solute, EC and turbidity characteristics for the Ngarradj catchment, Northern Territory, and the impact of mine construction. Supervising Scientist Report 179, Supervising Scientist, Darwin NT.
Moliere D 2005. Analysis of historical streamflow data to assist sampling design in Gulungul Creek, Kakadu National Park, Australia. Supervising Scientist Report 183, Supervising Scientist, Darwin NT.
Evans KG, Martin P, Moliere DR, Saynor MJ, Prendergast JB & Erskine WD 2004. Erosion risk assessment of the Jabiluka mine site, Northern Territory, Australia. Journal of Hydrologic Engineering 9(6), 512–522.
Saynor MJ, Erskine WD, Evans KG & Eliot I 2004. Gully initiation and implications for management of scour holes in the vicinity of the Jabiluka Mine, Northern Territory, Australia. Geografiska Annaler 86 (2), 191–203.
Moliere DR, Saynor MJ & Evans KG 2005. Suspended sediment concentration-turbidity relationships for Ngarradj – a seasonal stream in the wet-dry tropics. Australian Journal of Water Resources 9(1), 37–48.
In our first year (2007) our scientific goal was to search for “The Lost Experiments” – a set of recruitment, colonization and predation experiments that were started in the early 1960s by Dr. Paul Dayton at Scripps Institution of Oceanography. Those were the earliest days of Scuba diving, when depth limitations had not been established, and Dr. Dayton’s experiments extended into 60 m water depth. Safety limits were later established at 40 m, so these experiments were never completed. The locations of these experiments were “lost” during the intervening years. In 2007, SCINI and a VideoRay ROV searched the seafloor and found every one of the experimental sites! Relocating this scientific underwater treasure trove opens the door to assessing decadal shifts in Antarctica, an important issue as we struggle with the increasing impacts of recent climate changes.
This year, our scientific goal is to expand the known ecological space in McMurdo Sound. We have three specific target locations, guided by intriguing hints that these places host special communities.
Target 1. Bay of Sails. This area is an “iceberg graveyard” where, because of the combination of wind, currents, and bathymetry, icebergs collect in high numbers. The first explorers to sight this area in the very early 1900s thought the pointed outlines of the ‘bergs looked like the sails on their own ships, and so named it. The icebergs are driven by wind and currents and plow through the seafloor, destroying communities in their path – and opening up new areas for colonization. With SCINI, we will dive to 300 m to trace the iceberg-driven patterns of destruction and rebirth on the seafloor.
Target 2. Cape Armitage. The southern point of Ross Island, where the annual sea ice contacts the permanent Ross Ice Shelf, was named after Lieutenant Armitage, part of Scott’s Discovery Expedition in 1901-1904. Here, currents are strong and bathymetry is steep, and divers have been tempted by glimpses of vast fields of the giant volcano sponge, Anoxycalyx joubini, in the depths beyond diving limits. These huge sponges are large enough to surround a diver, and are likely thousands of years old. SCINI will map the boundaries of the sponge field, and perhaps offer some clues to solve the mystery of why the sponges are in this location, but not in others.
Target 3. Heald Island. The island is embedded in the permanent Ross Ice Shelf; the surrounding ice is up to 200 m thick, but cracks form at stress points from tidal motion of the ocean water beneath. SCINI will dive through these natural access points to the mysterious depths below. Beneath permanent ice shelves remains one of the few unexplored regions of our planet; we have had only a few tantalizing peeks at what may live in the unending darkness there. Reports of massive, rapid scavenger response to baited traps placed through holes (Slattery and Oliver 1986), and notes on chemosynthetic communities under the recently disintegrated Larsen ice shelf (Domack et al. 2005), are intriguing hints that unexpected and perhaps novel communities remain undiscovered.
Cephalopods, the class of mollusks in which scientists classify octopuses, squid, cuttlefish and nautiluses, can change color faster than a chameleon. They can also change texture and body shape, and if those camouflage techniques don't work, they can still "disappear" in a cloud of ink, which they use as a smoke-screen or decoy. Cephalopods are also fascinating because they have three hearts that pump blue blood, they're jet powered, and they're found in all oceans of the world, from the tropics to the poles, the intertidal to the abyss. Cephalopods have inspired legends and stories throughout history and are thought to be the most intelligent of invertebrates. Some can squeeze through the tiniest of cracks. They have eyes and other senses that rival those of humans.
The class Cephalopoda, whose name means "head foot", consists of mollusks, and cephalopods are therefore related to bivalves (scallops, oysters, clams), gastropods (snails and slugs), scaphopods (tusk shells), and polyplacophorans (chitons). Some mollusks, such as bivalves, don't even have a head, much less something large enough to be called a brain! Yet cephalopods have well-developed senses and large brains. Most mollusks are protected by a hard external shell and many of them are not very mobile. Although the nautilus has an external shell, the trend in cephalopods is to internalize and reduce the shell. The shell in cuttlefish is internal and is called the cuttlebone, which is sold in many pet shops to supply calcium to birds. Squid also have a reduced internal shell called a pen. Octopuses lack a shell altogether.
Cephalopods are found in all of the world's oceans, from the warm water of the tropics to the near freezing water at the poles. They are found from the wave swept intertidal region to the dark, cold abyss. All species are marine, and with a few exceptions, they do not tolerate brackish water.
Cephalopods are an ancient group that appeared in the late Cambrian period several million years before the first primitive fish began swimming in the ocean. Scientists believe that the ancestors of modern cephalopods (Subclass Coleoidea: octopus, squid, and cuttlefish) diverged from the primitive externally-shelled Nautiloidea (Nautilus) very early - perhaps in the Ordovician, some 438 million years ago. How long ago was this? To put this into perspective, this is before the first mammals appeared, before vertebrates invaded land and even before there were fish in the ocean and upright plants on land!
Cephalopods were once one of the dominant life forms in the world's ocean. Today there are only about 800 living species of cephalopods (compare that with 30,000 living species of bony fish, see FishBase). However, in terms of productivity, some scientists believe that cephalopods are still giving fish a run for their money.
Many species of cephalopods grow very fast, reproduce over a short period of time, and then die. Scientists classify this weed-like life history as "r-selection" - the r refers to the intrinsic rate of increase in exponential growth. If you were to clear-cut an oak forest, the first plants to grow would not be more oak trees - it would be weeds. In life history terms, cephalopods are the weeds of the seas. With over-fishing and climate change, there may be more biomass of cephalopods now than at any time in recent history.
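To make the "r" concrete: in the exponential growth model dN/dt = rN, the population follows N(t) = N0 * e**(r*t). A tiny worked example of ours, with made-up numbers:

import math

def population(n0, r, t):
    """Unchecked exponential growth: N(t) = N0 * e**(r * t)."""
    return n0 * math.exp(r * t)

# Illustrative only: a cohort of 1,000 squid with an intrinsic rate of
# increase r = 1.5 per year would, unchecked, near 4 million in 5.5 years.
for years in (1, 3, 5.5):
    print(years, round(population(1000, 1.5, years)))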
Probably the most common emergency we'll see as survivalists is some type of environmental emergency, such as hypothermia or hyperthermia. Being outdoors, training, or doing other vigorous activities can lead to temperature control problems within the body.
Hypothermia is caused by prolonged exposure to cold. It can occur in Southern climates as easily as Northern. Although the elderly and the young are usually the most affected, anyone can suffer from it. Hyperthermia can strike anyone who is outdoors in hot weather or working in a hot confined area. These conditions aren't selective of whom they strike, but fortunately, many steps can be taken to prevent them.
There are several methods of heat and cold transfer. Naturally, heat transfers to cold, but that can be worked to your advantage. Conduction is when two objects of different temperatures touch. The warmer object will transfer heat to the cooler object. This can work in lowering or raising body temperature. If a person is hot, sitting on a cold rock will lower the body temperature. If a person is cold, body-to-body heating will raise their temperature. This technique will be discussed in depth later. Convection must have air currents to work. Wind blowing, a ceiling fan, or using a strong piece of paper or cardboard to manually fan someone are examples. The air carries the heat away from the source. Radiation is a heating method and is as simple as the sunshine, or a woodstove emitting heat. Evaporation is a cooling measure and takes place when sweat or water evaporates off of the skin. It's not the act of sweating that cools; it's the evaporation of the sweat. The last method of cooling is breathing. Inhaling cold air can cool the body. When the body temperature begins to fall, inhaling cold air hastens the cooling process. Usually the heating or cooling process is a combination of these methods, and if you understand them you can better prevent or treat these types of emergencies.
Hypothermia can be generalized or specific. Generalized is when the core body temperature, which is normally 98.6 degrees F, drops below 90 degrees F. This condition affects the entire body and if left untreated will result in death. It can be caused by prolonged exposure to a cold climate, submersion in water, shock, hypoglycemia, burns, and closed head injuries. In early stages, when the core temp is between 96-99 degF, shivering will occur. Next, when the core temp is between 91-95 degF, there will be intense shivering, numbness in fingers and toes, the skin will appear bright red and chapped, and there may be difficulty speaking. The vital signs will increase in the body's attempt to compensate for temperature. When the temp drops to 86-90 degF the shivering decreases or stops and the muscles become rigid. Coordination is affected, and movements become jerky and erratic. Mentation is decreased, comprehension lessened, memory failing. At 81-85 degF, the patient has become irrational and stuporous. Muscular rigidity continues and may worsen. The pulse rate begins to slow and cardiac arrhythmias may develop. Other vital signs will slow down as the body stops trying to fix itself. This is called decompensation. At 78-80 degF the patient becomes unconscious and doesn't respond to verbal or painful stimuli. Most reflexes will not respond. Respiration may be extremely slow or absent, and the pulse rate will be slow and irregular. Cardiac arrest is imminent. Specific cold injuries are usually called frostbite. This is when certain body parts, usually the ears, nose, cheeks, chin, and extremities, actually form ice crystals in the tissues. Smoking or use of any other tobacco can exacerbate frostbite because it causes vasoconstriction. In early stages the skin will appear white and waxy; as the injury progresses it will turn mottled (uneven colored), then grayish-yellow, then finally grayish-blue. Generalized hypothermia will almost always be present with frostbite.
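The staging just described maps core-temperature bands to expected signs, and that mapping can be written out directly. Here is a study-aid sketch of ours built only from the ranges above - an educational illustration, not a clinical tool:

def hypothermia_stage(core_temp_f):
    """Map a core temperature (deg F) to the stage described above.

    Educational sketch only -- not for clinical decision making.
    """
    if core_temp_f >= 96:
        return "96-99 degF: early stage, shivering"
    if core_temp_f >= 91:
        return "91-95 degF: intense shivering, numb extremities, speech difficulty"
    if core_temp_f >= 86:
        return "86-90 degF: shivering stops, rigid muscles, jerky movements"
    if core_temp_f >= 81:
        return "81-85 degF: irrational, stuporous, slowing pulse, arrhythmias"
    if core_temp_f >= 78:
        return "78-80 degF: unconscious, reflexes absent, cardiac arrest imminent"
    return "below 78 degF: profound hypothermia"

print(hypothermia_stage(88))  # -> the 86-90 degF stage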
Treatment for hypothermia can be done on two levels, basic and advanced. As soon as a patient is found suffering from hypothermia, establish baseline vital signs, including mentation, and core temperature (best taken rectally if possible). Initially employ passive warming techniques by removing all clothing and drying the patient off completely. Wrap the patient in warm dry blankets. Try to raise the temperature of the ambient air (fireplace, woodstove, etc.). Another method of passive warming, mentioned earlier, is body-to-body heating. With this method, the rescuer also removes his or her clothing and lays their naked body against the naked body of the patient and warms them with their body heat. The two people then wrap blankets around each other. In any warming, remember to insulate the patient from the ground or the efforts to warm will be fruitless. Body-to-body warming is the most effective method of warming without employing active warming measures. In more severe cases, active warming will be necessary. Active warming utilizes all of the above measures but also includes starting an IV and running warmed IV fluids into the patient. This must be done slowly in more severe cases. If a severely hypothermic person is rewarmed too quickly there is a possibility of cardiac arrest. Warming must be done slowly with careful monitoring of the patient's vital signs.
Treating frostbite is similar, because the patient will also be suffering from hypothermia. The big difference is how to treat the frostbitten areas. In any situation oxygen should be administered if it is available, but if it isn't, do the best you can with what you've got. Once again do not allow the patient to use any type of tobacco products, due to vasoconstriction. Active rewarming of a frostbitten area is tricky and must be done carefully to prevent further damage to the area. First find a container in which the entire affected area will fit WITHOUT touching the sides or bottom. Medium-sized trashcans work nicely. If you can't find something, use a trash bag supported with a crate or box. Then heat water to between 100 degF and 105 degF. You should be able to put your fingers into the water without discomfort. Next, fill the container with the heated water and prepare the injured part by removing clothing, jewelry, bands, straps, etc. Fully immerse the part in the heated water. DO NOT allow the injured area to touch the sides or bottom of the container. Do not place any pressure on the affected area and do not rub or massage the affected area. Continuously stir the water with a spoon or other clean utensil, without touching the part (you are actually stirring the top of the water). When the water cools below 100 degF remove the injured part, and then add more warmed water until it reaches 100 degF to 105 degF. Reimmerse the entire area again. The patient may complain of moderate to severe pain. This is usually a good indication of successful warming. Keep repeating this procedure until the area turns a red or purple/blue color. Gently dry the area without rubbing, only patting. Next apply a dry sterile dressing to the area; if the area is a hand or foot, place sterile dressings between the fingers or toes before wrapping it completely. Now gently cover the area with covers, but once again, no pressure can be placed on the area. It's best to build some type of frame (can be done with pillows) around the area. Keep the patient at rest. Don't allow them to walk if it was a lower extremity that was frozen. Keep the patient's whole body warm. Continue to monitor the patient's condition and vital signs. Reassess the affected area often and check for pulses below or on the area. Do not allow the patient to be re-exposed to the cold and especially do not let the injured area become refrozen. Slightly elevate the injured area to prevent edema (swelling), but maintain good circulation (about six inches is adequate).
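Holding the bath inside the narrow 100-105 degF window is the fiddly part of this procedure. The check itself is trivial, as this small sketch of ours shows (the readings would come from an ordinary thermometer):

REWARM_LOW_F, REWARM_HIGH_F = 100.0, 105.0

def bath_action(water_temp_f):
    """What to do at the current rewarming-bath temperature (deg F)."""
    if water_temp_f < REWARM_LOW_F:
        return "remove the part, add freshly warmed water, then re-immerse"
    if water_temp_f > REWARM_HIGH_F:
        return "too hot -- let the bath cool before immersing the part"
    return "in range -- keep the part immersed and keep stirring"

print(bath_action(98.5))  # below the window, so refresh the warm water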
The next environmental emergency is hyperthermia, more commonly known as heat cramps, heat exhaustion, and heat stroke. A person exposed to high temperatures for prolonged periods will have excessive sweating and can lose up to one liter of fluid an hour. When high temperature is combined with high humidity, it's harder for the body to cool itself. The sweat doesn't evaporate because the air is already saturated with water. There are other conditions, such as age, heart disease, obesity, fatigue, diabetes, etc., that can increase the possibility of heat related problems.
Heat cramps are the least serious, but nonetheless painful, of the three types of heat related emergencies. The patient will have moist, pale, normal-to-cool skin. They will be sweating profusely, and very thirsty, drinking a lot of water. As they drink the water to rehydrate, they lose essential salts (electrolytes) from the body. This causes painful muscle cramps that can occur in any muscle, but most often in the legs.
Heat exhaustion is the next step up and is a mild form of hypovolemic shock usually seen in firefighters, construction workers, or anyone who does strenuous outdoor work. The patient will have pale, normal-to-cool skin, and the skin will be moist. They will have weakness and/or dizziness and possibly faintness. The breathing will be rapid and shallow, the pulse weak. The treatment for both of the above conditions is to remove the patient from the hot environment into a cool area. If oxygen is available, administer at 10-15 liters per minute with a non-rebreather mask. Loosen or remove clothing to aid in cooling. Fan the patient but watch for shivering. Lay the patient on his back with feet slightly elevated (about 12 inches), called the Trendelenburg position. If the patient is conscious, give him small amounts of water or electrolyte solution (Gatorade or Powerade) by mouth. Have them sip it slowly to prevent vomiting.
Heat stroke is the most serious of heat emergencies and can lead to many other problems. The skin will be red and feel hot to the touch. If the patient is still sweating it's a good sign, but most often with heat stroke, the patient has stopped sweating. Without sweat the body cannot cool itself. The breathing will be rapid and shallow, the pulse full and rapid. There will be generalized weakness and possibly decreased mentation. The pupils will be dilated. If cooling isn't started soon, seizures are imminent. Treatment is the same as above, in addition to active cooling. Place cool packs on the back of the neck, under the armpits, and in the groin area. These are the areas where major arteries are close to the surface and where heat collects. Keep the skin wet by applying water with sponges or wet towels. IV access with Lactated Ringers solution is necessary for rehydration. The fluids may be cool, but not cold. Do not submerge the patient in an ice bath (too extreme) or rub alcohol on the skin (ineffective). Do not take the patient from heat stroke to hypothermia.
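The three conditions differ mainly in the sign clusters listed above, so a rough triage order falls out naturally. The sketch below is ours and encodes only what this article describes; it is a teaching illustration, not a diagnostic tool, and when in doubt you treat for the more serious condition.

def heat_emergency(hot_red_skin, sweating, cramps):
    """Rough triage of the three heat conditions described above.

    Teaching sketch built only from this article's sign lists.
    """
    if hot_red_skin and not sweating:
        return "heat stroke -- begin active cooling immediately"
    if cramps:
        return "heat cramps -- cool area, rest, electrolyte fluids"
    if sweating:
        return "heat exhaustion -- cool area, elevate feet, slow sips of fluid"
    return "unclear -- monitor vital signs and reassess"

print(heat_emergency(hot_red_skin=True, sweating=False, cramps=False))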
Ways to avoid heat related emergencies: Drink plenty of water or electrolyte solution. Take breaks often and rest in the shade or another cool spot. If you feel like you are getting too hot, stop the activity and place a cool cloth on the back of your neck.
These are the situations that will be very common, but are often overlooked as medical problems. Hopefully with this information everyone will be aware of them and know how to treat them should they occur.
The First World War and then immigration restrictions instituted afterward cut off the flow of European immigrants into the United States. Between 1920 and 1930 the number of Russian-born people had fallen by 16%, Irish-born by 11%; Germans and Czechs also registered slight declines. Blacks continued to move north as they had done before the war. New York and Illinois (New York City and Chicago) received the largest numbers (Virginia, South Carolina and Georgia lost the greatest number), and in proportion to the total population blacks doubled in both. By 1950, 9.8% of New York City's population was black (1920, 2.9%); in Chicago 14.1% was black (1920, 4.2%).
The jobs the newly arriving blacks took were the ones the European immigrants had traditionally taken. By that criterion they were the worst jobs. In the slaughter houses and meat-packing plants, for example, this was the picture in 1909. Foreign-born white workers outnumbered the native-born whites and blacks by nearly four to one; blacks were only 3%. The largest ethnic group was Polish (28%) and after them came Lithuanians (12%), Germans and Czechoslovaks (10%). In 1928 the situation was quite different. Blacks were now the largest group in the work force (30%) and native-born whites came after them (27%). The number of Polish-born had dropped to a third of what it had been (12%), Lithuanians were down to 8%, Germans to 3%, and Czechoslovaks to 2% (Taylor 1932:40).
Quite quickly blacks became concentrated in the jobs at the bottom of the occupational structure, and foreign-born ethnics who had been there before them moved out. In part this reflected some very real gains which the white working class had made through union organization and agitation (cf. Rosenbaum 1972) - gains, incidentally, which were taken at the expense of the black workforce, which was excluded from union membership and shut out of the wage bargains the membership could achieve. In part it also reflected broader obstacles of institutional racism and prejudice which stopped blacks getting the education or skill enhancement needed to justify higher wages or jobs in industries with relatively high technological development, expanding productivity and stability of employment.
Early migration effects, income differences and effective residential segregation worked together to imprison blacks in central-city neighborhoods, while the white working class was able to, and chose to, move out. It was in these neighborhoods that narcotics were to be found in this period, just as the trade in drugs had flourished when Jews and Italians (etc.) had lived there in the teens of the century. The spatial displacement paralleled occupational displacement, and both are linked to the substitution of black for white narcotics users through the 1940s and 1950s.
The birth rate among blacks arriving in the North during the 1930s remained higher than among whites, notwithstanding a significant fall-off during the Depression. Both fell, but the black rates fell less steeply. The effect was to widen the disparity between the two birth rates. Thus in 1920 the non-white rate was 35.0 (live births per 1000 population), just over 30% higher than the white (26.9). In 1936, however, the year when rates for both racial groups reached their lowest point in recorded history, the black rate was nearly 43% higher than the white; in 1934 the gap was even greater - 45%.
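The disparity figures here are simple ratios of the two rates; the 1920 numbers given in the text check out directly (a worked illustration of ours):

def pct_higher(rate_a, rate_b):
    """How much higher rate_a is than rate_b, as a percentage."""
    return 100.0 * (rate_a - rate_b) / rate_b

# 1920 figures from the text: non-white 35.0 vs. white 26.9 live births
# per 1,000 population -- "just over 30% higher," as stated.
print(f"{pct_higher(35.0, 26.9):.1f}%")  # -> 30.1%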
Since a person born in 1934 turned sixteen in 1950, as compared with earlier years the numerical gap between blacks and whites reached a high point from 1949 to 1951, and there were bound to be relatively more blacks aged sixteen than whites of the same age. This is the age at which regular heroin use typically starts.
Since the age of onset of narcotics use corresponds with age of initial entry into the labor force - this has evidently been true going back as far as 1910 (Bloedorn 1917) - and since our orienting hypothesis predicted that periods of severe labor surplus would be periods of increased narcotics use, we have examined labor market conditions for the 16-19 year group, by race, for the two peak periods of narcotics use in the last two decades, 1949-53 and 1969-73.
The first point to observe is that special discriminating forces operate in the labor market against teenagers, although only recently have economists been able to identify these forces as pure discrimination, in the sense that labor market outcomes for teenagers (wage rates, for example) are unresponsive to parity, difference, or simply change in the economic characteristics of teenage labor vis-à-vis adult labor (Kalachek 1969; Bureau of Labor Statistics 1970).
The second point is that the demographic factors already cited, in particular the differential birth rates, produced an unusual situation around 1950, when the teenage share of the general population fell (by 25%) while the black share of the total teenage population rose (by 9%). What resulted was complex: there was a significant increase (14%) in the overall numbers of teenagers entering the labor market, but this increase was entirely absorbed by whites. Black labor force participation actually fell during the decade and, among males, continued to fall through 1970. This is the first sign that a deteriorating labor market was directly connected to the narcotics epidemics of the period. There are several others.
The fall in labor force participation among black teenagers might have led to more people staying in school, only this is not what happened.
Table 9 indicates that since 1940 white teenagers overall have increasingly taken advantage of longer periods of schooling, and consequently the ratio of school non-attenders to attenders has shown a consistent decline. Between 1940 and 1950, however, black teenagers, blocked from entering the job market, did not stay in school, and in fact the relative number of dropouts increased. Unemployment rates reached a peak between 1949 and 1950, and again between 1971 and 1972, and the relative severity of unemployment for blacks grew at the same time, as measured by the ratio of black to white rates. Other evidence indicates that among black teenagers, even supposing they could find jobs, a longer period of schooling, including high school graduation and even undergraduate training, would not significantly alter their long-term income prospects (Davis 1972), nor in the short term their particular occupational chances and (in the aggregate) their occupational distribution (Stevenson 1972; Strauss 1972).
One of the inadequacies of the data we have used is that the racial groups were not disaggregated by class indices, so at this stage our discussion must be limited to black-white differences as they relate to narcotics use, and not to whatever differences or similarities may exist between blacks and working-class whites. To the extent that, as indicated before, the rate of recruitment of the latter to narcotics use has been much faster than that of the former since 1960, it is reasonable to suppose that these economic conditions have become increasingly similar for both groups.
For terminological convenience we speak of these conditions together as labor market dualism. From a series of recent studies we are beginning to understand more about how elements of the labor force are channeled into distinct compartments of industrial work, across which mobility is severely restricted by technological and social barriers (for a summary and review of research, see Gordon, Reich and Edwards 1973). Only one of these compartments conforms to prevailing popular conceptions of what is "normal," or to the normative assumptions made by sociologists seeking to measure deviance among the groups locked into the bottom compartment.
The primary labor market. The upper compartment contains those workers who achieve family living standards of minimum decency or better. This compartment functions with skill and effort rewarded by higher wages, with jobs having reasonably high stability and employees having a reasonably low turnover rate, and with technological levels, management efficiency, and labor productivity steadily advancing. While the primary labor market is by no means immune from crisis, as witnessed by the middle-class employment calamities that have recently befallen such aerospace centers as Long Island or Seattle, crisis is the exception, and sailing along on an even keel is the rule. Employers and employees on the whole have a mutual stake in cooperating to increase stability, which reduces the costs of job search for the employee and training for the employer; they also have a mutual interest in increasing productivity, which is the basis of both better wages and profits. While for contrast this picture is overdrawn on the side of harmony, there is no question that the primary labor market achieves at least one important social end. It provides its workers with earned incomes that are adequate to finance socially sanctioned living levels, and under modern conditions of national economic management and unemployment compensation it does so with no more than tolerable dislocations over the working life of the average individual. Most persons who are in the primary labor market are firmly convinced that this is the way the labor market as a whole operates. Yet this is not so.
The secondary labor market. In the lower compartment, low wage levels, technological backwardness, and low skills form a vicious circle. Technology and management methods are either archaic by prevailing standards, as in the case of subcontracting firms on the fringes of an established industry, or a sector as a whole stagnates, such as garment manufacturing or retail or personal services. Even otherwise advanced and well-managed firms may harbor a corner of backwardness and stagnation, for example in the area of janitorial services. The stagnant firms or activities are under-capitalized, have lower productivity, and pay substandard wages. Wage levels average at least one-third below family living incomes, making it impossible for workers in the secondary labor market to aspire to stable, settled patterns of family living.
This market is, moreover, locked in a state of permanent crisis. Marginal firms are fighting for survival and have neither the resources nor the will to upgrade their technology, management methods, or wage levels. While the skills and productivity of the secondary labor force are low, they are all that the backward production methods can effectively utilize; higher education and skills are neither desired nor rewarded in this market. Nor is labor stability prized; the low technologies require next to no labor training and therefore the employer has no stake at all in holding on to his workers. On the contrary, he prefers a high turnover since this reduces the chance of wage demands or union organization. And ironically, the motivations of the workers are such as to reinforce these conditions. They live a floating existence of crisis, and they lack any drive to improve their skills, productivity or employment stability. "Turnover" amounts to several job changes per year interspersed with periods of unemployment. Adaptability to training is minimal, not surprisingly since officially sponsored training programs are near-total failures in opening the doors to the primary labor market. They are seen as simply another dead-end temporary job (Vietorisz and Harrison 1972).
This briefly describes the institutional structures of labor market opportunity to which teenage recruits must adapt. One of the adaptations is recruitment to crime as an alternative income source when labor mobility is blocked and when either the real return on labor in the secondary market declines (e.g., in periods of rapid inflation) or unemployment in that sector rises - or both. In this light we identify recruitment to narcotics use as recruitment to the narcotics trade and industry, and as economically rational in these circumstances. Although a research project is underway to test a series of hypotheses related to this central theoretical idea, the data in the historical record are worthy of publication. They are persuasive, perhaps, but still far from decisive.
When Darwin's The Origin of Species was published in 1859, it was believed that he had put forward a theory that could account for the extraordinary variety of living things. He had observed that there were different variations within the same species. For instance, while wandering through England's animal fairs, he noticed that there were many different breeds of cow, and that stockbreeders selectively mated them and produced new breeds. Taking that as his starting point, he continued with the logic that "living things can naturally diversify within themselves," which means that over a long period of time all living things could have descended from a common ancestor.
However, this assumption of Darwin's about "the origin of species" was not actually able to explain their origin at all. Thanks to developments in genetic science, it is now understood that increases in variety within one species can never lead to the emergence of another new species. What Darwin believed to be "evolution," was actually "variation."
The Meaning of Variations
Variation, a term used in genetics, refers to a genetic event that causes the individuals or groups of a certain type or species to possess different characteristics from one another. For example, all the people on earth carry basically the same genetic information, yet some have slanted eyes, some have red hair, some have long noses, and others are short of stature, all depending on the extent of the variation potential of this genetic information.
Variation does not constitute evidence for evolution because variations are but the outcomes of different combinations of already existing genetic information, and they do not add any new characteristic to the genetic information. The important thing for the theory of evolution, however, is the question of how brand-new information to make a brand-new species could come about.
Variation always takes place within the limits of genetic information. In the science of genetics, this limit is called the "gene pool." All of the characteristics present in the gene pool of a species may come to light in various ways due to variation. For example, as a result of variation, varieties that have relatively longer tails or shorter legs may appear in a certain species of reptile, since information for both long-legged and short-legged forms may exist in the gene pool of that species. However, variations do not transform reptiles into birds by adding wings or feathers to them, or by changing their metabolism. Such a change requires an increase in the genetic information of the living thing, which is certainly not possible through variations.
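The claim here - that recombination reshuffles existing alleles but never enlarges the pool - can be illustrated with a toy simulation (our sketch, not the author's): with random mating and no mutation, the alleles seen in any generation are always a subset of the starting gene pool.

import random

def alleles_ever_seen(pool, generations=1000, pop_size=100):
    """Randomly recombine alleles at one locus; return every allele observed.

    Toy model: each offspring draws one allele from each of two random
    parents. With no mutation, nothing can create a new allele.
    """
    population = [tuple(random.sample(pool, 2)) for _ in range(pop_size)]
    seen = {a for pair in population for a in pair}
    for _ in range(generations):
        population = [
            (random.choice(random.choice(population)),
             random.choice(random.choice(population)))
            for _ in range(pop_size)
        ]
        seen.update(a for pair in population for a in pair)
    return seen

pool = ["long-leg", "short-leg", "dark", "light"]
print(alleles_ever_seen(pool) <= set(pool))  # always True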
Darwin was not aware of this fact when he formulated his theory. He thought that there was no limit to variations. In an article he wrote in 1844 he stated: "That a limit to variation does exist in nature is assumed by most authors, though I am unable to discover a single fact on which this belief is grounded."28 In The Origin of Species he cited different examples of variations as the most important evidence for his theory.
For instance, according to Darwin, animal breeders who mated different varieties of cattle in order to bring about new varieties that produced more milk, were ultimately going to transform them into a different species. Darwin's notion of "unlimited variation" is best seen in the following sentence from The Origin of Species:
I can see no difficulty in a race of bears being rendered, by natural selection, more and more aquatic in their structure and habits, with larger and larger mouths, till a creature was produced as monstrous as a whale.29
The reason Darwin cited such a far-fetched example was the primitive understanding of science in his day. Since then, in the 20th century, science has posited the principle of "genetic stability" (genetic homeostasis), based on the results of experiments conducted on living things. This principle holds that, since all mating attempts carried out to transform a species into another have been inconclusive, there are strict barriers among different species of living things. This meant that it was absolutely impossible for animal breeders to convert cattle into a different species by mating different variations of them, as Darwin had postulated.
Norman Macbeth, who disproved Darwinism in his book Darwin Retried, states:
The heart of the problem is whether living things do indeed vary to an unlimited extent... The species look stable. We have all heard of disappointed breeders who carried their work to a certain point only to see the animals or plants revert to where they had started. Despite strenuous efforts for two or three centuries, it has never been possible to produce a blue rose or a black tulip.30
Luther Burbank, considered the most competent breeder of all time, expressed this fact when he said, "there are limits to the development possible, and these limits follow a law."31 In his article titled "Some Biological Problems With the Natural Selection Theory," Jerry Bergman comments by quoting from biologist Edward Deevey who explains that variations always take place within strict genetic boundaries:
Deevey concludes, "Remarkable things have been done by cross-breeding ... but wheat is still wheat, and not, for instance, grapefruit. We can no more grow wings on pigs than hens can make cylindrical eggs." A more contemporary example is the average increase in male height that has occurred the past century. Through better health care (and perhaps also some sexual selection, as some women prefer taller men as mates) males have reached a record adult height during the last century, but the increase is rapidly disappearing, indicating that we have reached our limit.32
In short, variations only bring about changes which remain within the boundaries of the genetic information of species; they can never add new genetic data to them. For this reason, no variation can be considered an example of evolution. No matter how often you mate different breeds of dogs or horses, the end result will still be dogs or horses, with no new species emerging. The Danish scientist W. L. Johannsen sums the matter up this way:
The variations upon which Darwin and Wallace placed their emphasis cannot be selectively pushed beyond a certain point, that such variability does not contain the secret of 'indefinite departure'.33
Confessions About "Microevolution"
As we have seen, genetic science has discovered that variations, which Darwin thought could account for "the origin of species," actually do no such thing. For this reason, evolutionary biologists were forced to distinguish between variation within species and the formation of new ones, and to propose two different concepts for these different phenomena. Diversity within a species - that is, variation - they called "microevolution," and the hypothesis of the development of new species was termed "macroevolution."
These two concepts have appeared in biology books for quite some time. But there is actually a deception going on here, because the examples of variation that evolutionary biologists have called "microevolution" actually have nothing to do with the theory of evolution. The theory of evolution proposes that living things can develop and take on new genetic data by the mechanisms of mutation and natural selection. However, as we have just seen, variations can never create new genetic information, and are thus unable to bring about "evolution." Giving variations the name of "microevolution" is actually an ideological preference on the part of evolutionary biologists.
The impression that evolutionary biologists have given by using the term "microevolution" is the false logic that over time variations can form brand new classes of living things. And many people who are not already well-informed on the subject come away with the superficial idea that "as it spreads, microevolution can turn into macroevolution." One can often see examples of that kind of thinking. Some "amateur" evolutionists put forward such examples of logic as the following: since human beings' average height has risen by two centimeters in just a century, this means that over millions of years any kind of evolution is possible. However, as has been shown above, all variations such as changes in average height happen within specific genetic bounds, and are trends that have nothing to do with evolution.
In fact, nowadays even evolutionist experts accept that the variations they call "microevolution" cannot lead to new classes of living things-in other words, to "macroevolution." In a 1996 article in the leading journal Developmental Biology, the evolutionary biologists S.F. Gilbert, J.M. Opitz, and R.A. Raff explained the matter this way:
The Modern Synthesis is a remarkable achievement. However, starting in the 1970s, many biologists began questioning its adequacy in explaining evolution. Genetics might be adequate for explaining microevolution, but microevolutionary changes in gene frequency were not seen as able to turn a reptile into a mammal or to convert a fish into an amphibian. Microevolution looks at adaptations that concern only the survival of the fittest, not the arrival of the fittest. As Goodwin (1995) points out, "the origin of species - Darwin's problem - remains unsolved."34
The fact that "microevolution" cannot lead to "macroevolution," in other words that variations offer no explanation of the origin of species, has been accepted by other evolutionary biologists, as well. The noted author and science expert Roger Lewin describes the result of a four-day symposium held in November 1980 at the Chicago Museum of Natural History, in which 150 evolutionists participated:
The central question of the Chicago conference was whether the mechanisms underlying microevolution can be extrapolated to explain the phenomena of macroevolution. …The answer can be given as a clear, No.35
We can sum up the situation like this: Variations, which Darwinism has seen as "evidence of evolution" for some hundred years, actually have nothing to do with "the origin of species." Cows can be mated together for millions of years, and different breeds of cows may well emerge. But cows can never turn into a different species - giraffes or elephants, for instance. In the same way, the different finches that Darwin saw on the Galapagos Islands are another example of variation that is no evidence for "evolution." Recent observations have revealed that the finches did not undergo the unlimited variation that Darwin's theory presupposed. Moreover, most of the different types of finches which Darwin thought represented 14 distinct species actually mated with one another, which means that they were variations belonging to the same species. Scientific observation shows that the finch beaks, which have been mythicized in almost all evolutionist sources, are in fact an example of "variation"; therefore, they do not constitute evidence for the theory of evolution. For example, Peter and Rosemary Grant, who spent years observing the finch varieties of the Galapagos Islands looking for evidence of Darwinistic evolution, were forced to conclude that "the population, subjected to natural selection, is oscillating back and forth," a fact which implied that no "evolution" leading to the emergence of new traits ever takes place there.36
So for these reasons, evolutionists are still unable to resolve Darwin's problem of the "origin of species."
The Origin of Species in the Fossil Record
The evolutionist assertion is that each species on earth came from a single common ancestor through minor changes. In other words, the theory considers life as a continuous phenomenon, without any preordained or fixed categories. However, the observation of nature clearly does not reveal such a continuous picture. What emerges from the living world is that life forms are strictly separated into very distinct categories. Robert Carroll, an evolutionist authority, admits this fact in his Patterns and Processes of Vertebrate Evolution:
Although an almost incomprehensible number of species inhabit Earth today, they do not form a continuous spectrum of barely distinguishable intermediates. Instead, nearly all species can be recognized as belonging to a relatively limited number of clearly distinct major groups, with very few illustrating intermediate structures or ways of life.37
Therefore, evolutionists assume that "intermediate" life forms constituting links between living organisms lived in the past. This is why paleontology, the science of the study of fossils, is considered the fundamental discipline that can shed light on the matter. Evolution is alleged to be a process that took place in the past, and the only scientific source that can provide us with information on the history of life is fossil discoveries. The well-known French paleontologist Pierre-Paul Grassé has this to say on the subject:
Naturalists must remember that the process of evolution is revealed only through fossil forms... only paleontology can provide them with the evidence of evolution and reveal its course or mechanisms.38
In order for the fossil record to shed any light on the subject, we shall have to compare the hypotheses of the theory of evolution with fossil discoveries.
According to the theory of evolution, every species has emerged from a predecessor. One species which existed previously turned into something else over time, and all species have come into being in this way. According to the theory, this transformation proceeds gradually over millions of years.
If this were the case, then innumerable intermediate species should have lived during the immense period of time when these transformations were supposedly occurring. For instance, there should have lived in the past some half-fish/half-reptile creatures which had acquired some reptilian traits in addition to the fish traits they already had. Or there should have existed some reptile/bird creatures, which had acquired some avian traits in addition to the reptilian traits they already possessed. Evolutionists refer to these imaginary creatures, which they believe to have lived in the past, as "transitional forms."
If such animals had really existed, there would have been millions, even billions, of them. More importantly, the remains of these creatures should be present in the fossil record. The number of these transitional forms should have been even greater than that of present animal species, and their remains should be found all over the world. In The Origin of Species, Darwin accepted this fact and explained:
If my theory be true, numberless intermediate varieties, linking most closely all of the species of the same group together must assuredly have existed... Consequently evidence of their former existence could be found only amongst fossil remains.39
Even Darwin himself was aware of the absence of such transitional forms. He hoped that they would be found in the future. Despite his optimism, he realized that these missing intermediate forms were the biggest stumbling-block for his theory. That is why he wrote the following in the chapter of The Origin of Species entitled "Difficulties of the Theory":
…Why, if species have descended from other species by fine gradations, do we not everywhere see innumerable transitional forms? Why is not all nature in confusion, instead of the species being, as we see them, well defined?… But, as by this theory innumerable transitional forms must have existed, why do we not find them embedded in countless numbers in the crust of the earth?… But in the intermediate region, having intermediate conditions of life, why do we not now find closely-linking intermediate varieties? This difficulty for a long time quite confounded me.40
The only explanation Darwin could come up with to counter this objection was the argument that the fossil record uncovered so far was inadequate. He asserted that when the fossil record had been studied in detail, the missing links would be found.
The Question of Transitional Forms and Stasis
Believing in Darwin's prophecy, evolutionary paleontologists have been digging up fossils and searching for missing links all over the world since the middle of the nineteenth century. Despite their best efforts, no transitional forms have yet been uncovered. All the fossils unearthed in excavations have shown that, contrary to the beliefs of evolutionists, life appeared on earth all of a sudden and fully-formed.
Robert Carroll, an expert on vertebrate paleontology and a committed evolutionist, admits that fossil discoveries have not satisfied the Darwinist hope:
Despite more than a hundred years of intense collecting efforts since the time of Darwin's death, the fossil record still does not yield the picture of infinitely numerous transitional links that he expected.41
Another evolutionary paleontologist, K. S. Thomson, tells us that new groups of organisms appear very abruptly in the fossil record:
When a major group of organisms arises and first appears in the record, it seems to come fully equipped with a suite of new characters not seen in related, putatively ancestral groups. These radical changes in morphology and function appear to arise very quickly…42
Biologist Francis Hitching, in his book The Neck of the Giraffe: Where Darwin Went Wrong, states:
If we find fossils, and if Darwin's theory was right, we can predict what the rock should contain; finely graduated fossils leading from one group of creatures to another group of creatures at a higher level of complexity. The 'minor improvements' in successive generations should be as readily preserved as the species themselves. But this is hardly ever the case. In fact, the opposite holds true, as Darwin himself complained; "innumerable transitional forms must have existed, but why do we not find them embedded in countless numbers in the crust of the earth?" Darwin felt though that the "extreme imperfection" of the fossil record was simply a matter of digging up more fossils. But as more and more fossils were dug up, it was found that almost all of them, without exception, were very close to current living animals.43
The fossil record reveals that species emerged suddenly, with totally different structures, and remained exactly the same over the longest geological periods. Stephen Jay Gould, a Harvard University paleontologist and well-known evolutionist, first admitted this fact in the late 1970s:
The history of most fossil species includes two features particularly inconsistent with gradualism: 1) Stasis - most species exhibit no directional change during their tenure on earth. They appear in the fossil record looking much the same as when they disappear; morphological change is usually limited and directionless; 2) Sudden appearance - in any local area, a species does not arise gradually by the steady transformation of its ancestors; it appears all at once and 'fully formed'.44
Further research only strengthened the facts of stasis and sudden appearance. Stephen Jay Gould and Niles Eldredge wrote in 1993 that "most species, during their geological history, either do not change in any appreciable way, or else they fluctuate mildly in morphology, with no apparent direction."45 Robert Carroll was forced to agree in 1997 that "Most major groups appear to originate and diversify over geologically very short durations, and to persist for much longer periods without major morphological or trophic change."46
At this point, it is necessary to clarify just what the concept of "transitional form" means. The intermediate forms predicted by the theory of evolution are living things falling between two species which possess deficient or semi-developed organs. Sometimes, however, the concept of intermediate form is misunderstood, and living structures which do not possess the features of transitional forms are seen as actually doing so. For instance, if one group of living things possesses features which belong to another, this is not an intermediate-form feature. The platypus, a mammal living in Australia, reproduces by laying eggs just like reptiles. In addition, it has a bill similar to that of a duck. Scientists describe creatures such as the platypus as "mosaic creatures." That mosaic creatures do not count as intermediate forms is also accepted by such foremost paleontologists as Stephen Jay Gould and Niles Eldredge.47
The Adequacy of the Fossil Record
Some 140 years ago Darwin put forward the following argument: "Right now there are no transitional forms, yet further research will uncover them." Is this argument still valid today? In other words, considering the conclusions from the entire fossil record, should we accept that transitional forms never existed, or should we wait for the results of new research?
The wealth of the existing fossil record will surely answer this question. When we look at the paleontological findings, we come across an abundance of fossils. Billions of fossils have been uncovered all around the world.48 Based on these fossils, 250,000 distinct species have been identified, and these bear striking similarities to the 1.5 million identified species currently living on earth.49 (Of these 1.5 million species, 1 million are insects.) Despite the abundance of fossil sources, not a single transitional form has been uncovered, and it is unlikely that any transitional forms will be found as a result of new excavations.
A professor of paleontology from Glasgow University, T. Neville George, admitted this fact years ago:
There is no need to apologize any longer for the poverty of the fossil record. In some ways it has become almost unmanageably rich and discovery is outpacing integration… The fossil record nevertheless continues to be composed mainly of gaps.50
And Niles Eldredge, the well-known paleontologist and a curator at the American Museum of Natural History, explains why Darwin's claim - that the insufficiency of the fossil record is the reason no transitional forms have been found - is invalid:
The record jumps, and all the evidence shows that the record is real: the gaps we see reflect real events in life's history - not the artifact of a poor fossil record.51
Another American scholar, Robert Wesson, states in his 1991 book Beyond Natural Selection that "the gaps in the fossil record are real and meaningful." He elaborates on this claim as follows:
The gaps in the record are real, however. The absence of a record of any important branching is quite phenomenal. Species are usually static, or nearly so, for long periods, species seldom and genera never show evolution into new species or genera but replacement of one by another, and change is more or less abrupt.52
This situation invalidates the above argument, which has been stated by Darwinism for 140 years. The fossil record is rich enough for us to understand the origins of life, and explicitly reveals that distinct species came into existence on earth all of a sudden, with all their distinct forms.
The Truth Revealed by the Fossil Record
But where does the "evolution-paleontology" relationship, which has taken subconscious root in society over many decades, actually stem from? Why do most people have the impression that there is a positive connection between Darwin's theory and the fossil record whenever the latter is mentioned? The answer to these questions is supplied in an article in the leading journal Science:
A large number of well-trained scientists outside of evolutionary biology and paleontology have unfortunately gotten the idea that the fossil record is far more Darwinian than it is. This probably comes from the oversimplification inevitable in secondary sources: low-level textbooks, semipopular articles, and so on. Also, there is probably some wishful thinking involved. In the years after Darwin, his advocates hoped to find predictable progressions. In general these have not been found; yet the optimism has died hard, and some pure fantasy has crept into textbooks.53
N. Eldredge and I. Tattersall also make an important comment:
That individual kinds of fossils remain recognizably the same throughout the length of their occurrence in the fossil record had been known to paleontologists long before Darwin published his Origin. Darwin himself, ...prophesied that future generations of paleontologists would fill in these gaps by diligent search ...One hundred and twenty years of paleontological research later, it has become abundantly clear that the fossil record will not confirm this part of Darwin's predictions. Nor is the problem a miserably poor record. The fossil record simply shows that this prediction is wrong.
The observation that species are amazingly conservative and static entities throughout long periods of time has all the qualities of the emperor's new clothes: everyone knew it but preferred to ignore it. Paleontologists, faced with a recalcitrant record obstinately refusing to yield Darwin's predicted pattern, simply looked the other way.54
Likewise, the American paleontologist Steven M. Stanley describes how the Darwinist dogma, which dominates the world of science, has ignored this reality demonstrated by the fossil record:
The known fossil record is not, and never has been, in accord with gradualism. What is remarkable is that, through a variety of historical circumstances, even the history of opposition has been obscured. ... 'The majority of paleontologists felt their evidence simply contradicted Darwin's stress on minute, slow, and cumulative changes leading to species transformation.' ... their story has been suppressed.55
Let us now examine the facts of the fossil record, which have been silenced for so long, in a bit more detail. In order to do this, we shall have to consider natural history from the most remote ages to the present, stage by stage.
28 Loren C. Eiseley, The Immense Journey, Vintage Books, 1958, p. 186.; cited in Norman Macbeth, Darwin Retried: An Appeal to Reason, Harvard Common Press, Boston, 1971, p. 30.
29 Charles Darwin, The Origin of Species: A Facsimile of the First Edition, Harvard University Press, 1964, p. 184.
30 Norman Macbeth, Darwin Retried: An Appeal to Reason, Harvard Common Press, Boston, 1971, pp. 32-33.
31 Norman Macbeth, Darwin Retried: An Appeal to Reason, Harvard Common Press, Boston, 1971, p. 36.
32 Jerry Bergman, Some Biological Problems With the Natural Selection Theory, The Creation Research Society Quarterly, vol. 29, no. 3, December 1992.
33 Loren Eiseley, The Immense Journey, Vintage Books, 1958. p 227., cited in Norman Macbeth, Darwin Retried: An Appeal to Reason, Harvard Common Press, Boston, 1971, p. 33.
34 Scott Gilbert, John Opitz, and Rudolf Raff, "Resynthesizing Evolutionary and Developmental Biology", Developmental Biology, 173, Article no. 0032, 1996, p. 361. (emphasis added)
35 R. Lewin, "Evolutionary Theory Under Fire", Science, vol. 210, 21 November, 1980, p. 883.
36 H. Lisle Gibbs and Peter R. Grant, "Oscillating selection on Darwin's finches," Nature, 327, 1987, p. 513; for more detailed information, see Jonathan Wells, Icons of Evolution, 2000, pp. 159-175.
37 Robert L. Carroll, Patterns and Processes of Vertebrate Evolution, Cambridge University Press, 1997, p. 9.
38 Pierre Grassé, Evolution of Living Organisms, Academic Press, New York, 1977, p. 82.
39 Charles Darwin, The Origin of Species: A Facsimile of the First Edition, Harvard University Press, 1964, p. 179.
40 Charles Darwin, The Origin of Species by Means of Natural Selection, The Modern Library, New York, pp. 124-125. (emphasis added)
41 Robert L. Carroll, Patterns and Processes of Vertebrate Evolution, Cambridge University Press, 1997, p. 25.
42 K. S. Thomson, Morphogenesis and Evolution, Oxford, Oxford University Press, 1988, p. 98.
43 Francis Hitching, The Neck of the Giraffe: Where Darwin Went Wrong, Tichnor and Fields, New Haven, 1982, p. 40.
44 S.J. Gould, "Evolution's Erratic Pace", Natural History, vol. 86, May 1977. (emphasis added)
45 Stephen Jay Gould and Niles Eldredge, "Punctuated Equilibria: The Tempo and Mode of Evolution Reconsidered", Paleobiology, 3 (2), 1977, p. 115.
46 Robert L. Carroll, Patterns and Processes of Vertebrate Evolution, Cambridge University Press, 1997, p. 146.
47 S. J. Gould & N. Eldredge, Paleobiology, vol. 3, 1977, p. 147.
48 Duane T. Gish, Evolution: Fossils Still Say No, CA, 1995, p. 41.
49 David Day, Vanished Species, Gallery Books, New York, 1989.
50 T. Neville George, "Fossils in Evolutionary Perspective," Science Progress, vol. 48, January 1960, pp. 1, 3. (emphasis added)
51 N. Eldredge and I. Tattersall, The Myths of Human Evolution, Columbia University Press, 1982, p. 59. (emphasis added)
52 R. Wesson, Beyond Natural Selection, MIT Press, Cambridge, MA, 1991, p. 45.
53 Science, July 17, 1981, p. 289. (emphasis added)
54 N. Eldredge, and I. Tattersall, The Myths of Human Evolution, Columbia University Press, 1982, pp. 45-46. (emphasis added)
55 S. M. Stanley, The New Evolutionary Timetable: Fossils, Genes, and the Origin of Species, Basic Books Inc., N.Y., 1981, p. 71. (emphasis added)
| 2026-01-24T00:55:58.984644 |
468,818 | 3.647521 | http://en.wikipedia.org/wiki/Thomas_Cavendish | Sir Thomas Cavendish (19 September 1560 – May 1592) was an English explorer and a privateer known as "The Navigator" because he was the first who deliberately tried to emulate Sir Francis Drake and raid the Spanish towns and ships in the Pacific and return by circumnavigating the globe. While members of Magellan's, Loaisa's, Drake's, and Loyola's expeditions had preceded Cavendish in circumnavigating the globe, it had not been their intent at the outset. His first trip and successful circumnavigation made him rich from captured Spanish gold, silk and treasure from the Pacific and the Philippines. His richest prize was the captured 600 ton sailing ship the Manila Galleon Santa Ana (also called Santa Anna). He was knighted by Queen Elizabeth I of England after his return. He later set out for a second raiding and circumnavigation trip but was not as fortunate and died at sea at the age of 32.
Early life
Cavendish was born in 1560 at Trimley St. Martin near Ipswich, Suffolk, England. His father was William Cavendish; he was a descendant of Roger Cavendish, brother to Sir John Cavendish from whom the Dukes of Devonshire and the Dukes of Newcastle derive their family name of Cavendish.
When Cavendish was 12 he inherited a fortune from his deceased father William. At the age of 15 he attended Corpus Christi College, Cambridge University, for two years, but did not take a degree. After leaving school at age 17, he spent most of the next eight years or so on luxurious living. He was a member of the Parliament for Shaftesbury, Dorset, in 1584. In 1585 he sailed with Sir Richard Grenville to Virginia, gaining much valuable experience but losing money on his investments. He was a member of Parliament for Wilton in 1586.
By July 1586 Spain and England were in a war which would culminate in the Spanish Armada and its threatened invasion of England in 1588. Thomas Cavendish determined to follow Sir Francis Drake by raiding the Spanish ports and ships in the Pacific and circumnavigating the globe. After getting permission for his proposed raids, Cavendish built the largest of his three ships, the 120-ton Desire, carrying 18 cannons. He was joined by the 60-ton, 10-cannon ship Content and the 40-ton ship Hugh Gallant.
Departure and Atlantic crossing
He anchored first at the island of Santa Magdalena near present-day Punta Arenas, Chile. There, in two hours, they killed and salted two barrels full of penguins for food. After extensive exploration of the many inlets, labyrinths, and intricate channels of the islands and broken lands of Tierra del Fuego and its environs, they emerged from the strait into the Pacific on 24 February and sailed up the coast of South America.
Exploring and raiding off the west coast of South America
There on the Pacific coast he sank or captured nine Spanish ships and looted several towns of quantities of fresh food, supplies and treasure. He intentionally scuttled the Hugh Gallant, using her crew to replace crew members lost on his other ships.
Capturing a Manila galleon
One of the captured Spanish ships' pilots revealed that a Manila galleon was expected in October or November 1587 and usually stopped at Cape San Lucas on the Baja California peninsula before going on to Acapulco. The Manila galleons were restricted by the Spanish monarch to one or two ships per year, and typically carried all the goods accumulated in the Philippines through a year's worth of trading silver from the mints in Peru and Mexico with the Chinese and others for spices, silk, gold and other expensive goods. In 1587 there were two Manila galleons: the San Francisco and the Santa Ana. Unfortunately, both encountered a typhoon on leaving the Philippines and were wrecked on the coast of Japan. Only the Santa Ana was salvageable, and after repairs she resumed her voyage.
Upon reaching the Gulf of California in October 1587, Cavendish and his two ships put in at an island above Mazatlan, where they careened their ships to clean their hulls and make general repairs. They had to dig wells for water. They then sailed for Cape San Lucas on the Baja Peninsula and set up patrols to watch for the Manila galleon. Early on 4 November 1587 one of Cavendish's lookouts spotted the 600-ton galleon, manned with over 200 men. After a chase of several hours the English ships overhauled the Santa Ana, which conveniently carried no cannons on board in order to allow more cargo. After several hours of battle, during which Cavendish used his cannon to fire ball and grape shot into the galleon while the Spanish tried to fight back with small arms, the Santa Ana, now starting to sink, finally struck her colours and surrendered.
Because of the great disparity in size, the Content and Desire had to pick and choose what rich cargo they wanted to transfer to their ships from the much larger Santa Ana. One hundred and ninety Spaniards, including Sebastián Vizcaíno (1548–1624), later explorer of the California coast, and Filipino crewmen were set ashore with food and some weapons in a location where water and food were available. Cavendish kept with him two Japanese sailors, three boys from Manila, a Portuguese traveler familiar with China and a Spanish pilot (navigator). They loaded all the gold (about 100 troy pounds, or 122,000 pesos worth) and then picked through the silks, damasks, musks (used in perfume manufacture), spices, wines, and ship's supplies for what they could carry. Some in Mexico claimed that the total value of the cargo was about 2,000,000 pesos. After setting fire to the Santa Ana, the Desire and Content sailed away on 17 November 1587 to begin their voyage across the Pacific Ocean.
While burning, the Santa Ana drifted onto the coast where the Spanish survivors extinguished the flames, re-floated the ship and limped into Acapulco.
The Content was never heard from again. The Desire tried to avoid conflict for the rest of her voyage.
Crossing the Pacific Ocean and exploring the islands of South-east Asia
After crossing the Pacific Ocean, Cavendish and the Desire arrived at the island of Guam on 3 January 1588. There he traded iron tools for fresh supplies, water and wood, supplied by the natives. On further landings in the Philippines, Java and other islands he traded some of his captured linen and other goods for fresh supplies, water and wood, and collected information about the Chinese and Japanese coasts. He hoped to use this information to augment existing English knowledge of the area and for a possible second voyage. His crew of about 48 men replaced their worn out clothing and bedding with uniforms made out of silken damask.
Return to England
Cavendish's first voyage was a huge success, both financially and otherwise; Cavendish was only 28. The circumnavigation of the globe had been completed in two years and 49 days, nine months faster than Drake, although, like Drake, Cavendish returned with only one of his ships - the Desire, with a crew of about 48 men. He was knighted by Queen Elizabeth I of England, who was invited to a dinner aboard the Desire. England celebrated both the return of the Desire and the defeat of the Spanish Armada earlier that year.
Second voyage
Cavendish sailed on a second expedition in August 1591, accompanied by the navigator John Davis. They went further south to the Strait of Magellan and then returned to Brazil, where they lost most of the crew in a battle against the Portuguese at the village of Vitória. One abandoned sailor, Anthony Knivet, later wrote about his adventures in Brazil. Cavendish set off across the Atlantic towards Saint Helena with the remainder of the crew, but died of unknown causes at age 32, possibly off Ascension Island in the South Atlantic in 1592. The last letter of Cavendish, written to his executor a few days before his death, accuses John Davis of being a "villain" who caused the "decay of the whole action". John Davis continued on with Cavendish's crew and ships and discovered the Falkland Islands before returning to England, having lost most of his crew to starvation and illness.
| 2026-01-25T12:24:18.843027 |
594,530 | 3.886091 | http://en.wikipedia.org/wiki/Reciprocity_(Canadian_politics) | Reciprocity (Canadian politics)
In nineteenth and early twentieth century Canadian politics, reciprocity meant the removal of protective tariffs on all natural resources being imported and exported between Canada and the United States. Reciprocity and free trade have been emotional issues in Canadian history, as they pitted two conflicting impulses against each other: the desire for beneficial economic ties with the United States, and the fear that closer economic ties would lead to American domination and annexation.
1880s to 1910s
After Confederation, reciprocity was initially promoted as an alternative to Prime Minister John A. Macdonald's National Policy. Removing tariffs would give prairie grain farmers access to the larger American market and allow them to make more money on their exports. In the 1890s, it also meant that Western Canadian farmers could obtain cheaper American farm machinery and manufactured goods, which otherwise had to be bought at higher prices from central Canada.
In the 1891 election, the Liberal Party of Canada ran on a reciprocity platform. It lost to Macdonald, who won with his nationalist slogan, "The Old Flag, The Old Policy, The Old Leader." The Liberals temporarily shelved the concept. When reciprocity came up again in 1896, it was the Americans who proposed it to Wilfrid Laurier's Liberals. The idea excited them, and they immediately began to campaign for it. The Conservatives, fearing that the popular agreement would cost them the election, began to campaign against it despite their general belief that it would do Canada good.
The Liberal Party went on to win the 1896 election, and some years later, in 1911, it negotiated an elaborate reciprocity agreement with the United States. In the 1911 election, however, reciprocity again became a major issue, with the Conservatives saying that it would be a "sell out" to the United States. The Liberals were defeated by the Conservative party, whose slogan was "No truck or trade with the Yankees".
Free trade in the 1980s
The concept of reciprocity with the United States was revived in 1985 when the Royal Commission on the Economic Union and Development Prospects for Canada, headed by former Liberal Minister of Finance Donald S. Macdonald, issued a report calling for free trade with the US. The Progressive Conservative government of Brian Mulroney acted on the recommendation by negotiating the Canada-US Free Trade Agreement and successfully fighting the 1988 election on the issue.
| 2026-01-27T12:31:26.309732 |
531,722 | 3.583409 | http://www.healthline.com/galecontent/allergic-bronchopulmonary-aspergillosis | Allergic Bronchopulmonary Aspergillosis
Allergic bronchopulmonary aspergillosis, or ABPA, is one of four major types of infections in humans caused by Aspergillus fungi. ABPA is a hypersensitivity reaction that occurs in asthma patients who are allergic to this specific fungus.
ABPA is an allergic reaction to a species of Aspergillus called Aspergillus fumigatus. It is sometimes grouped together with other lung disorders characterized by eosinophilia—an abnormal increase of a certain type of white blood cell in the blood—under the heading of eosinophilic pneumonia. These disorders are also called hypersensitivity lung diseases.
ABPA appears to be increasing in frequency in the United States, although the reasons for the increase are not clear. The disorder is most likely to occur in adult asthmatics aged 20-40. It affects males and females equally.
Causes and symptoms
ABPA develops when the patient breathes air containing Aspergillus spores. These spores are found worldwide, especially around riverbanks, marshes, bogs, forests, and wherever there is wet or decaying vegetation. They are also found on wet paint, on construction materials, and in air conditioning systems. ABPA is a nosocomial infection, which means that a patient can get it in a hospital. When Aspergillus spores reach the bronchi, which are the branches of the windpipe that lead into the lungs, the bronchi react by contracting spasmodically. As a result, the patient has difficulty breathing and usually wheezes or coughs. Many patients with ABPA also run a low-grade fever and lose their appetites.
Patients with ABPA sometimes cough up large amounts of blood, a condition that is called hemoptysis. They may also develop a serious long-term form of bronchiectasis, a chronic bronchial disorder in which the walls of the bronchial tubes are abnormally stretched, enlarged, or destroyed, usually as a result of recurrent inflammation of the airway.
The diagnosis of ABPA is based on a combination of the patient's history and the results of blood tests, sputum tests, skin tests, and diagnostic imaging. The doctor will need to distinguish between ABPA and a worsening of the patient's asthma, cystic fibrosis, or other lung disorders. There are seven major criteria for a diagnosis of allergic bronchopulmonary aspergillosis:
- a history of asthma.
- an accumulation of fluid in the lung that is visible on a chest x ray.
- bronchiectasis (abnormal stretching, enlarging, or destruction of the walls of the bronchial tubes).
- skin reaction to Aspergillus antigen.
- eosinophilia in the patient's blood and sputum.
- Aspergillus precipitins in the patient's blood. Precipitins are antibodies that react with the antigen to form a solid that separates from the rest of the solution in the test tube.
- a high level of IgE in the patient's blood. IgE refers to a class of antibodies in blood plasma that activate allergic reactions to foreign particles.
Other criteria that may be used to support the diagnosis include the presence of Aspergillus in samples of the patient's sputum, the coughing up of plugs of brown mucus, or a late skin reaction to the Aspergillus antigen.
The laboratory tests that are done to obtain this information include a complete blood count (CBC), a sputum culture, a blood serum test of IgE levels, and a skin test for the Aspergillus antigen. In the skin test, a small amount of antigen is injected into the upper layer of skin on the patient's forearm about four inches below the elbow. If the patient has a high level of IgE antibodies in the tissue, he or she will develop what is called a "wheal and flare" reaction in about 15-20 minutes. A "wheal and flare" reaction is characterized by the eruption of a reddened, itching spot on the skin. Some patients with ABPA will develop the so-called late reaction to the skin test, in which a red, sore, swollen area develops about six to eight hours after the initial reaction.
Aspergillus can sometimes be seen in a microscope slide made from the patient's sputum, but the diagnosis is considered definite only when the fungus is cultured in the laboratory. Aspergillus is easy to culture, and can be identified when it is stained with periodic acid-Schiff (PAS), Calcofluor, or potassium hydroxide (KOH) preparations.
Patients with ABPA should be given periodic checkups with chest x rays and a spirometer test. A spirometer is an instrument that evaluates the patient's lung capacity.
Most patients with ABPA respond well to corticosteroid treatment. Others have a chronic course with gradual improvement over time. The best indicator of a good prognosis is a long-term fall in the patient's IgE level. Patients with lung complications from ABPA may develop severe airway obstruction.
ABPA is difficult to prevent because Aspergillus is a common fungus; it can be found in the saliva and sputum of most healthy individuals. Patients with ABPA can protect themselves somewhat by avoiding haystacks, compost piles, bogs, marshes, and other locations with wet or rotting vegetation; by avoiding construction sites or newly painted surfaces; and by having their air conditioners cleaned regularly. Some patients may be helped by air filtration systems for their bedrooms or offices.
"Aspergillosis." In Professional Guide to Diseases. 5th ed. Springhouse, PA: Springhouse Corporation, 1995.
Beavis, Kathleen G. "Systemic Mycoses." In Current Diagnosis. Vol. 9. Ed. Rex B. Conn, et al. Philadelphia: W. B. Saunders Co., 1997.
Hamill, Richard J. "Infectious Diseases: Mycotic." In Current Medical Diagnosis and Treatment, 1998. 37th ed. Ed. Stephen McPhee, et al. Stamford: Appleton & Lange, 1997.
Harrison's Principles of Internal Medicine. Ed. Anthony S. Fauci, et al. New York: McGraw-Hill, 1997.
Larsen, Gary L., et al. "Respiratory Tract & Mediastinum." In Current Pediatric Diagnosis & Treatment, ed. William W. Hay Jr., et al. Stamford: Appleton & Lange, 1997.
Physicians'Guide to Rare Diseases. Ed. Jess G. Thoene. Montvale, NJ: Dowden Publishing Co., Inc., 1995.
"Pulmonary Disorders: Hypersensitivity Diseases of the Lungs." In The Merck Manual of Diagnosis and Therapy. 17th ed. Ed. Robert Berkow. Rahway, NJ: Merck Research Laboratories, 1997.
Stauffer, John L. "Lung." In Current Medical Diagnosis and Treatment, 1998. 37th ed. Ed. Stephen McPhee, et al. Stamford: Appleton & Lange, 1997.
Centers for Disease Control and Prevention. 1600 Clifton Rd., NE, Atlanta, GA 30333. (800) 311-3435, (404) 639-3311. <http://www.cdc.gov>.
National Organization for Rare Disorders. P.O. Box 8923, New Fairfield, CT 06812-8923. (800) 999-6673. <http://www.rarediseases.org>.
National Institute of Allergy and Infectious Disease. Building 31, Room 7A-50, 31 Center Drive MSC 2520, Bethesda, MD 20892-2520. (301) 496-5717. <http://www.niaid.nih.gov/default.htm>.
Rebecca J. Frey
Antifungal—A medicine used to treat infections caused by a fungus.
Antigen—A substance that stimulates the production of antibodies.
Bronchiectasis—A disorder of the bronchial tubes marked by abnormal stretching, enlargement, or destruction of the walls. Bronchiectasis is usually caused by recurrent inflammation of the airway and is a diagnostic criterion of ABPA.
Bronchodilator—A medicine used to open up the bronchial tubes (air passages) of the lungs.
Eosinophil—A type of white blood cell containing granules that can be stained by eosin (a chemical that produces a red stain).
Eosinophilia—An abnormal increase in the number of eosinophils in the blood.
Hemoptysis—The coughing up of large amounts of blood. Hemoptysis can occur as a complication of ABPA.
Hypersensitivity—An excessive response by the body to a foreign substance.
Immunoglobulin E (IgE)—A type of protein in blood plasma that acts as an antibody to activate allergic reactions. About 50% of patients with allergic disorders have increased IgE levels in their blood serum.
Nosocomial infection—An infection that can be acquired in a hospital. ABPA is a nosocomial infection.
Precipitin—An antibody in blood that combines with an antigen to form a solid that separates from the rest of the blood.
Spirometer—An instrument used to test a patient's lung capacity.
Wheezing—A whistling or musical sound caused by tightening of the air passages inside the patient's chest. | 2026-01-26T14:35:31.876916 |
969,491 | 3.804676 | http://en.wikipedia.org/wiki/Great_Highland_Bagpipe | Great Highland Bagpipe
The Great Highland Bagpipe (Scottish Gaelic: a' phìob mhòr; often abbreviated GHB in English) is a type of bagpipe native to Scotland. It has achieved widespread recognition through its usage in the British military and in pipe bands throughout the world.
The bagpipe is first attested in Scotland around 1400 AD, having previously appeared in European artwork in Spain in the 13th century. The earliest references to bagpipes in Scotland are in a military context, and it is in that context that the Great Highland Bagpipe became established in the British military and achieved the widespread prominence it enjoys today, whereas other bagpipe traditions throughout Europe, ranging from Portugal to Russia, almost universally went into decline by the late 19th and early 20th centuries.
Though widely famous for its role in military and civilian pipe bands, the Great Highland Bagpipe is also used for a solo virtuosic style called piobaireachd, or simply pibroch.
Though popular belief sets varying dates for the introduction of bagpipes to Scotland, concrete evidence is limited until approximately the 15th century. The Clan Menzies still owns a remnant of a set of bagpipes said to have been carried at the Battle of Bannockburn in 1314, though the veracity of this claim is debated. There are many ancient legends and stories about bagpipes which were passed down through minstrels and oral tradition, whose origins are now lost. However, textual evidence for Scottish bagpipes is more definite in 1396, when records of the Battle of the North Inch of Perth reference "warpipes" being carried into battle. These references may be considered evidence as to the existence of particularly Scottish bagpipes, but evidence of a form peculiar to the Highlands appears in a poem written in 1598 (and later published in The Complaynt of Scotland) which refers to several types of pipe, including the Highland: "On hieland pipes, Scotte and Hybernicke / Let heir be shraichs of deadlie clarions."
In 1746, after the forces loyal to the Hanoverian government had defeated the Jacobites in the Battle of Culloden, King George II attempted to assimilate the Highlands into Great Britain by weakening Gaelic culture and the Scottish clan system, though the oft-repeated claim that the Act of Proscription 1746 banned the Highland bagpipes is not substantiated by the text itself, nor by any record of prosecutions under this act for playing or owning bagpipes. However, the loss of the clan chiefs' power and patronage and widespread emigration did contribute to its decline. It was soon realised that Highlanders made excellent troops, and a number of regiments were raised from the Highlands over the second half of the eighteenth century. Although the early history of pipers within these regiments is not well documented, there is evidence that these regiments had pipers at an early stage, and there are numerous accounts of pipers playing into battle during the 19th century, a practice which continued into World War I, when it was abandoned after the early battles due to the high casualty rate.
The custom was revived by the 51st Highland Division for their assault on the enemy lines at the start of the Second Battle of El Alamein on 23 October 1942. Each attacking company was led by a piper, playing tunes that would allow other units to recognise which Highland regiment they belonged to. Although the attack was successful, losses among the pipers were high, and they were not used in combat again during the war. A final use of the pipes in combat was in 1967 during the Aden Emergency, when the 1st Battalion, The Argyll and Sutherland Highlanders were led into the rebel-held Crater district by their Pipe Major playing the regimental marches.
The Great Highland Bagpipe is classified as a woodwind instrument, like the bassoon, oboe, or clarinet. Although it is further classified as a double-reed instrument, the reeds are all closed inside the wooden stocks, instead of being played directly by mouth as other woodwinds are. The GHB actually has four reeds: the chanter reed (double), two tenor drone reeds (single), and one bass drone reed (single).
A modern set has a bag, a chanter, a blowpipe, two tenor drones, and one bass drone.
The scale of the chanter is in Mixolydian mode, which has a flattened 7th scale degree. It has a range from one whole tone lower than the tonic to one octave above it. Bagpipers call the nine resulting notes low G, low A, B, C, D, E, F, high G, and high A.
The key is close to B-flat major; however, bagpipe music is written in the key of D major, where the C and F are sharp (the key signature is usually omitted from scores). This means that a bagpipe note is more than a semitone sharper than the similarly named note in common music. For example, the bagpipe low A is normally tuned to around 480 Hz, which is sharper than the standard B♭ at 466.16 Hz.
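To make these pitch relationships concrete, the offsets can be expressed in cents (hundredths of an equal-tempered semitone). The short Python sketch below uses only the frequencies quoted above; the 480 Hz figure is a typical modern value rather than a fixed standard, so the results are illustrative.

```python
# Minimal sketch: cents offsets between the chanter's low A and concert
# pitches. Assumes low A = 480 Hz, the typical modern value quoted above.
import math

def cents(f_ref: float, f: float) -> float:
    """Interval from f_ref up to f in cents (100 cents = 1 semitone)."""
    return 1200 * math.log2(f / f_ref)

LOW_A = 480.0       # typical modern chanter low A
CONCERT_A = 440.0   # concert pitch A
B_FLAT = 466.16     # concert B-flat above A440

print(f"low A vs concert A : {cents(CONCERT_A, LOW_A):5.1f} cents")  # ~150.6
print(f"low A vs concert Bb: {cents(B_FLAT, LOW_A):5.1f} cents")     # ~50.6
```

As the printed values show, a 480 Hz low A sits roughly 151 cents above concert A (more than a semitone) and about 51 cents above concert B♭.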
Traditionally, certain notes were sometimes tuned slightly off from just intonation. For example, on some old chanters the D and high G would be somewhat sharp. According to Forsyth (1935), the C and F holes were traditionally bored exactly midway between those for B and D and those for E and G, respectively, resulting in approximately a quarter-tone difference from just intonation, somewhat like a "blue" note in jazz. Today, however, the notes of the chanter are usually tuned in just intonation to the Mixolydian scale. The two tenor drones are generally an octave below the keynote of the chanter (low A), and the bass drone two octaves below, but they may be retuned to suit the mode of the melody. Forsyth lists three traditional drone tunings: Ellis, A3/A3/A2; Glen, A4/A4/A2; and Mackay, G3/B3/C2.
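To illustrate what just-intonation tuning implies for actual frequencies, the sketch below builds a nine-note chanter scale on a low A of 480 Hz. The frequency ratios are one commonly cited just tuning for this scale, chosen here purely for illustration; real chanters and reeds vary. The drone arithmetic simply follows the octave relationships described above.

```python
# Sketch of a just-intonation chanter scale on low A = 480 Hz.
# The ratios are one commonly cited just tuning (assumed for illustration).
from fractions import Fraction

LOW_A = 480.0  # chanter low A in Hz (typical modern value)

SCALE = [
    ("low G",  Fraction(8, 9)),   # a whole tone below the tonic
    ("low A",  Fraction(1, 1)),   # the tonic
    ("B",      Fraction(9, 8)),
    ("C",      Fraction(5, 4)),   # sounds near concert C-sharp
    ("D",      Fraction(4, 3)),
    ("E",      Fraction(3, 2)),
    ("F",      Fraction(5, 3)),   # sounds near concert F-sharp
    ("high G", Fraction(16, 9)),  # the flattened 7th of the Mixolydian mode
    ("high A", Fraction(2, 1)),   # one octave above the tonic
]

for name, ratio in SCALE:
    print(f"{name:6} {LOW_A * float(ratio):6.1f} Hz")

# Drones: tenors an octave below low A, bass two octaves below.
print(f"tenor drones {LOW_A / 2:.0f} Hz, bass drone {LOW_A / 4:.0f} Hz")
```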
Modern developments have included reliable synthetic drone reeds as well as synthetic bags that deal with moisture arguably better than hide bags.
Highland pipes were originally constructed of such locally available woods as holly, laburnum, and boxwood. Later, as expanding colonisation and trade provided access to more exotic woods, tropical hardwoods including cocuswood (from the Caribbean), ebony (from West Africa and South and Southeast Asia) and African blackwood (from Sub-Saharan Africa) became standard in the late 18th and 19th centuries. In the modern day, synthetic materials, particularly Polypenco, have become quite popular, especially among pipe bands where uniformity of chanters is desirable.
The Gaelic word pìobaireachd simply means "pipe music", but it has been adapted into English as piobaireachd or pibroch. In Gaelic, this, the "great music" of the GHB is referred to as ceòl mòr, and "light music" (such as marches and dance tunes) is referred to as ceòl beag.
Ceòl mòr consists of a slow "ground" movement (Gaelic ùrlar) which is a simple theme, then a series of increasingly complex variations on this theme, and ends with a return to the ground. Ceòl Beag includes marches (2/4, 4/4, 6/8, 3/4, etc.), dance tunes (particularly strathspeys, reels, hornpipes, and jigs), slow airs, and more. The ceòl mòr style was developed by the well-patronized dynasties of bagpipers - MacArthurs, MacGregors, Rankins, and especially the MacCrimmons - and seems to have emerged as a distinct form during the 17th century.
Compared to many other musical instruments, the GHB is limited by its range (nine notes), lack of dynamics, and the enforced legato style due to the continuous airflow from the bag. The GHB is a closed-reed instrument, which means that the four reeds are completely encased within the instrument and the player cannot change the sound of the instrument via mouth position or tonguing. As a result, notes cannot be separated by simply stopping blowing or tonguing, so gracenotes and combinations of gracenotes, called embellishments, are used for this purpose. These more complicated ornaments using two or more gracenotes include doublings, taorluaths, throws, grips, and birls. There is also a set of ornaments usually used for pìobaireachd, for example the dare, vedare, chedare, darado, taorluath and crunluath. Some of these embellishments have found their way into light music over the course of the 20th century. Embellishments are also used for note emphasis, for example to emphasize the beat note or other phrasing patterns. The three single gracenotes (G, D, and E) are the most commonly used and are often played in succession. All gracenotes are performed rapidly, by quick finger movements, giving an effect similar to tonguing or articulation on modern wind instruments. Due to the lack of rests and dynamics, all expression in GHB music comes from the use of embellishments and, to a larger degree, from varying the duration of notes. Although most GHB music is highly rhythmically regimented and structured, proper phrasing relies heavily on the ability of the player to stretch specific notes within a phrase or measure. In particular, the main beats and off-beats of each phrase are structured; subdivisions within each beat, however, are flexible.
"Few attempts have been made hitherto to combine the bagpipes with classical orchestral instruments, due mainly to conflicts of balance and tuning," said composer Graham Waterhouse about his work Chieftain’s Salute op. 34a for Great Highland Bagpipe and String Orchestra (2001). "A satisfactory balance was achieved in this piece by placing the piper at a distance from the orchestra." Peter Maxwell Davies' Orkney Wedding, With Sunrise (1985) also features a GHB solo towards the end.
The GHB plays a role as both a solo and ensemble instrument. In ensembles, it is generally played as part of a pipe band. One notable form of solo employment is the position of Piper to the Sovereign, a piper tasked to perform for the British sovereign, a position dating back to the time of Queen Victoria.
The GHB is widely used by both soloists and pipe bands, civilian and military, and is now played in countries around the world. It is particularly popular in areas with large Scottish and Irish emigrant populations, mainly England, Canada, the United States, Australia, New Zealand and South Africa.
Former British Empire
The GHB has also been adopted by many countries that were formerly part of the British Empire, despite their lack of a Scottish or Irish population. These countries include India, Pakistan and Nepal.
The GHB also spread to parts of Africa and the Middle East, where the British military's use of pipes made a favourable impression. Piping spread to Arabic countries such as Jordan, Egypt and Oman, some of which had pre-existing bagpipe traditions. In Oman, the instrument is called habban and is used in cities such as Muscat, Salalah, and Sohar. In Uganda during the 1970s, President Idi Amin forbade the export of African blackwood so as to encourage local bagpipe construction.
The GHB was also adopted in Thailand; around 1921, King Rama VI ordered a set to accompany the marching exercises of the Sua Pa, or Wild Tiger Corps. This was a royal guard unit which had previously practiced to the sounds of an oboe called pi chawa.
Although the bagpipes arrived from the British Isles with a user's manual, no one was able to figure out how to play them, so bassoon player Khun Saman Siang-prajak went to the British Embassy and learned how to play the instrument with the British soldiers, and then became instructor to the rest of the Corps. The band, which plays Thai as well as Scottish tunes, still practices at Vachiravuth High School in Bangkok, which is named for Rama VI.
During the First World War, some Breton pipers serving in the French Army came in contact with the pipers of Scottish regiments and brought back home a few GHBs, which Breton pipe-makers started copying. Polig Monjarret led the introduction of the GHB to Brittany during the Celtic revival of the 1920s Breton folk music scene, inventing the bagad, a pipe band incorporating a biniou braz section, a bombarde section, and a drums section, and in recent years almost any added grouping of wind instruments, e.g. saxophones and brass instruments such as the trumpet and trombone.
Well known bagadou include Bagad Kemper, Kevrenn Alre, Bagad Brieg and Bagad Cap Caval. In Brittany, the GHB is known as the biniou braz, in contrast to the biniou kozh, the small traditional Breton bagpipe.
Some of the most famous pipe bands in the world are the Simon Fraser University Pipe Band (SFUPB), the Field Marshal Montgomery Pipe Band, and the St. Laurence O'Toole Pipe Band, some of which have won the World Pipe Band Championships.
- Practice chanter, a bagless and droneless double-reeded pipe with the same fingerings as the GHB. These are meant to serve as practice instruments which are more portable and less expensive than a set of pipes.
- Practice goose, a small, single-chanter, droneless bag used to transition between the practice chanter and full pipes
- Reel pipes (or "kitchen" or "parlour" pipes), smaller versions of the GHB for indoor playing
- Border pipes are similar to the GHB, but quieter and thus suited to playing for dances and sessions. Rather than being inflated by mouth, their air is provided by bellows under the arm.
- Scottish smallpipes are a modern interpretation of extinct smaller Scottish pipes used for recreational music. They were revived in the late 20th century by pipemakers such as Colin Ross.
- Electronic bagpipes are electronic instruments with a touch-sensitive "chanter" which senses finger position and modifies its tone accordingly. Some models also produce a drone sound, and the majority are made to simulate GHB tone and fingering.
- Great Irish Warpipes are similar to the GHB, but have two drones instead of the GHB's three.
- Brian Boru bagpipes, based on GHB but with a keyed chanter to extend the range and add chromatic notes.
- Hugh Cheape. The Book of the Bagpipe (Belfast: The Appletree Press, 1999).
- Francis Collinson. The Traditional and National Music of Scotland (London: Routledge & Kegan Paul, 1966).
- Francis Collinson. The Bagpipe (London and Boston: Routledge & Kegan Paul, 1975).
- William Donaldson. The Highland Pipe and Scottish Society 1750-1950 (Edinburgh: Tuckwell Press, 1999).
- John Gibson. Old and New World Highland Bagpiping (Montreal & Kingston: McGill-Queen’s University Press, 2002).
- Edinburgh Research Archive. The Bagpipe: perceptions of a national instrument. Hugh Cheape.
- Alexander Ellis's early (1885) measurements of the Bagpipe scale, and its relation to Arabian scales.
- Francis M. Collinson. The Bagpipe: The History of a Musical Instrument (London: Routledge, 1975), p. 132. ISBN 0-7100-7913-3, ISBN 978-0-7100-7913-8.
- Collinson, p. 135.
- Collinson, p. 141.
- "History of the Great Highland Bagpipes". Celtic-Instruments.com. 2005. Retrieved 12 September 2010.
- Colonel David Murray, The 51st Highland Division at El Alamein
- 1st Battalion The Argyll and Sutherland Highlanders - Aden 1967 - The Re-entry into Crater
- Cecil Forsyth. Orchestration, 2nd Edition (London: MacMillan & Co. Ltd., 1935, 1948)
- Joshua Dickson (9 October 2009). The Highland bagpipe: music, history, tradition. Ashgate Publishing, Ltd. pp. 50–. ISBN 978-0-7546-6669-1. Retrieved 27 April 2011.
- William Donaldson (September 2005). Pipers: a guide to the players and music of the Highland bagpipe. Birlinn. p. 7. ISBN 978-1-84158-411-9. Retrieved 27 April 2011.
- "Graham Waterhouse on Chieftain’s Salute". Retrieved 20 August 2009.
- Roongruang, Panya (1999). "Thai Classical Music and its Movement from Oral to Written Transmission, 1930-1942: Historical Context, Method, and Legacy of the Thai Music Manuscript Project." Ph.D. dissertation. Kent, Ohio: Kent State University, p. 146.
- As shown, for instance, by P.M. W. Lawrie's tune "The 8th Argyll's Farewell to the 116th Régiment de Ligne", published in Vol. 2 of the Scots Guards Standard Settings; the 116th was a line infantry regiment based in Vannes during the Great War. For details of the relief of the 116th Régiment d'Infanterie by the 8th Argyll Regiment on 30 July 1915, and of the subsequent composing of the tune by P.M. Lawrie, see Major H.W. Brewsher, History of the 51st (Highland) Division (Edinburgh: Blackwood, 1921), p. 31. | 2026-02-02T07:05:43.001231 |
1,016,369 | 3.846082 | http://pandasthumb.org/archives/2013/05/press-release-t.html | Introgression or genetic exchange between crops and their wild relatives is of broad interest due to concern regarding the escape of transgenes from genetically engineered crops. Many fear the potential deleterious effects of such introgression including decreased fitness or diversity of wild relatives and/or the creation of “superweeds” that are resistant to the current arsenal of herbicides. But there is another side to crop-wild gene exchange. A paper published in the open-access journal PLoS Genetics this week reveals that crop-wild introgression is likely a longstanding and potentially beneficial phenomenon in some agroecosystems. Matthew Hufford, Jeffrey Ross-Ibarra and colleagues describe how introgression from wild relatives has shaped the genome of corn, potentially providing essential adaptations as it spread from a narrow center of domestication into novel environments.
Corn was domesticated in the lowlands of southwest Mexico ~10,000 years ago from a wild grass known as teosinte. A few thousand years later corn colonized the high altitudes of the Mexican Central Plateau. There, it came into contact with a different wild teosinte, one presumably well adapted to the new environment. Both corn and teosinte in the highlands have characteristics such as purple pigmentation and hairy stalks and leaves that are believed to help these plants tolerate the lower temperatures and higher ultra-violet radiation of the highlands. For some time, biologists have been stumped as to whether corn and teosinte obtained these highland adaptations independently or through introgression, with some arguing that the shared characteristics were a good example of maize genes escaping into the wild.
Through analysis of genetic markers from across the corn genome, Hufford and co-authors provide evidence that the shared characteristics of teosinte and highland corn are due to introgression, but with gene exchange occurring predominantly in one direction — from the wild teosinte into corn. Introgression was particularly common in regions of the genome previously linked to highland adaptation traits. When corn with and without introgression in these genomic regions was grown at low temperature, plants with teosinte introgression showed pigmentation and hairy leaves, consistent with highland adaptation, whereas those without introgression lacked these traits. These results suggest that the successful spread of corn to the highlands may have been enabled by gene exchange with highland teosinte.
In contrast, regions of the corn genome selected by early farmers during domestication appeared particularly resistant to introgression in either direction of gene flow, indicating continued selection against wild traits in corn as well as selection against corn genes in the wild teosinte.
In addition to corn, a number of crops have spread from small domestication centers into novel environments, often populated with locally-adapted wild relatives. The work by Hufford and coauthors suggests genomic data in other systems may reveal that introgression from wild relatives has provided crops with adaptations such as drought tolerance, acclimation to extreme temperatures and disease resistance. Rather than focus solely on the biosafety aspects of crop-wild introgression, we can also begin to assess how crop-wild introgression may have contributed to, and may continue to be harnessed for, crop improvement.
Reference: Hufford MB, Lubinsky P, Pyhäjärvi T, Devengenzo MT, Ellstrand NC, et al. (2013) The Genomic Signature of Crop-Wild Introgression in Maize. PLoS Genet 9(5): e1003477. doi:10.1371/journal.pgen.1003477 | 2026-02-02T22:54:16.292641 |
642,181 | 3.562742 | http://www.understandingrace.org/resources/glossary.html |
ABO blood system: a human blood typing system that consists of 4 distinct types: A, B, AB, and O. ABO blood type is determined by the alleles present at a single locus, which are inherited from parents.
abolition: the abolition movement consisted of organized efforts to do away with legalized slavery in the United States. Emancipation was gained gradually in northern states, and slavery was abolished throughout the country by the Thirteenth Amendment to the US Constitution.
acculturation: cultural exchange and change that results from sustained contact between different groups.
Affirmative Action: first established by the Federal government in 1965, this legal mandate consists of special actions in recruitment, hiring, and other areas designed to eliminate the effects of past discrimination.
African replacement model: the hypothesis that modern humans evolved as a new species in Africa between 150,000 and 200,000 years ago and then spread throughout the Old World, replacing archaic populations; sometimes called the recent African origin model.
allele: the alternative form of a gene or DNA sequence that occurs at a given locus. Some loci have only one allele, some have two, and some have many alternative forms. Alleles occur in pairs, one on each chromosome.
Allen’s rule: states that mammals in cold climates tend to have shorter and bulkier limbs, allowing less loss of body heat, whereas mammals in hot climates tend to have long, slender limbs, allowing greater loss of body heat.
anatomically modern Homo sapiens: the modern form of the human species, which evolved in Africa between 150,000 and 200,000 years ago.
anthropology: the study of humans and their cultures, both past and present. The field of anthropology includes archaeology, biological anthropology, cultural anthropology, linguistic anthropology, and applied anthropology.
anthropometrics: measurements of the human body.
anti-miscegenation laws: U.S. laws that forbade sexual relations or marriage between people of different races. Declared unconstitutional in 1967 (Loving v. Virginia).
anti-Semitism: prejudice or discrimination against Jews.
applied anthropology: the subfield of anthropology that applies the knowledge and methods of anthropology to present-day problems.
archaeology: the subfield of anthropology that focuses on cultural variation and power relations in past populations by analyzing material remains (material culture or artifacts).
assimilation: the process of change that occurs when an individual or group adopts the characteristics of the dominant culture and is fully incorporated into that culture’s social, economic, and political institutions.
base pairs: the rungs of the DNA "ladder" (see double helix) are composed of four bases in pairs that specify genetic instructions – adenine (A), thymine (T), guanine (G) and cytosine (C). "A" always pairs with "T", and "G" always pairs with "C".
behaviorism: a school of thought in psychology emphasizing the importance of overt behavior responses over conscious experience for understanding human social interactions
Bergmann’s rule: states that (1) among mammals of similar shape, the larger mammal loses heat less rapidly than the smaller mammal, and that (2) among mammals of similar size, the mammal with a linear shape will lose heat more rapidly than the mammal with a nonlinear shape.
biocultural approach: The use of biological and cultural research methods and interdisciplinary theory to study human biological variation and other factors such as health in relationship to social and cultural practices, environment and change.
biological anthropology: the subfield of anthropology that focuses on the biological evolution of humans and human ancestors, the relationship of humans to other organisms and to their environment, and patterns of biological variation within and among human populations. Also referred to as physical anthropology.
biological determinism: the philosophy or belief that human behavior and social organization are fundamentally determined by innate biological characteristics, so that differences in behavior within and between groups are attributed to genetic variation rather than influences of environment and learning.
Blumenbach, Johann Friedrich (1752-1840): German naturalist who developed one of the earliest, non-scientific human racial classification systems, which included geographically defined "Caucasian," "Mongolian," "Ethiopian," "American," and "Malay" races. See also Linnaeus, Carolus.
caste system: closed, hereditary system of hierarchy, often dictated by religion and occupation; status is ascribed at birth, so that people are locked into their parents' social and economic position.
Caucasian: a non-scientific term invented by German physician Johann Blumenbach in 1795 to describe light-skinned people from Europe (and, originally, from western Asia and North Africa as well) whom Blumenbach mistakenly thought came from the Caucasus Mountains. The term became synonymous with "white."
cell: the smallest unit of life. Our human bodies are composed of more than 100 trillion cells. Inside the cell membrane is the nucleus. The cell nucleus is surrounded by cytoplasm.
census: an official count of a population and collection of demographic data. The United States Census is conducted every 10 years.
chromosome: long strands of DNA found inside the cell nucleus. Human cells each contain 23 pairs of chromosomes, inherited from our parents.
Civil Rights movement: legal and other efforts led by African Americans against racism and segregation and for the enactment of legislation ensuring their full civil and human rights. The modern Civil Rights movement dates to the mid-1950s and proceeded in earnest throughout the 1960s.
classification: the ordering of items into groups on the basis of shared attributes. Classifications are cultural inventions and different cultures develop different ways of classifying the same phenomena (e.g. colors, plants, relatives, and other people).
cline: a gradual, continuous change in a particular trait or trait frequency over space.
codominant: both alleles affect the phenotype of a heterozygous genotype, and neither is dominant over the other. For example, in the ABO blood type system, alleles A and B are codominant and, together, produce blood type AB.
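As a small illustration of these dominance rules (my own sketch in Python, assuming only the simple one-locus ABO model described above), here is a genotype-to-phenotype mapping:

```python
# A minimal sketch of the ABO dominance rules described in this entry;
# genotype -> phenotype under the simple one-locus model (an assumption
# of this illustration, not a full account of blood-group genetics).
def abo_phenotype(allele1: str, allele2: str) -> str:
    alleles = {allele1, allele2}
    if alleles == {"A", "B"}:
        return "AB"   # A and B are codominant: both are expressed
    if "A" in alleles:
        return "A"    # A masks the recessive O
    if "B" in alleles:
        return "B"    # B masks the recessive O
    return "O"        # homozygous O

print(abo_phenotype("A", "B"))  # AB
print(abo_phenotype("A", "O"))  # A
print(abo_phenotype("O", "O"))  # O
```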
complex trait: a physical trait affected by more than one locus, interacting with environmental conditions. Most studied human traits are complex (e.g. height, body size and skin color).
continuous trait: a characteristic that is measured on a scale that is ordered and does not have gaps or divisions (e.g., skin color).
creationism: the belief that the universe was created by God.
cultural anthropology: the subfield of anthropology that focuses on describing and understanding human cultures, including human cultural variability (over time, throughout the world).
cultural construct: an idea or system of thought that is rooted in culture. It can include an invented system for classifying things or for classifying people, such as a racial system of classification.
cultural determinism: the belief that human behavior and social organization are fundamentally determined by cultural factors.
culture: the full range of shared, learned, patterned behaviors, values, meanings, beliefs, ways of perceiving, systems of classification, and other knowledge acquired by people as members of a society; the processes or power dynamics that influence whether meanings and practices can be shared within a group or society.
culture shock: the disorienting experience of realizing that the perspectives, behaviors and experiences of an individual, group or society are not shared by another individual, group or society.
cross-cultural comparison: the method of comparing characteristics of one culture to another. This is one of the hallmarks of anthropological knowledge.
cultural relativism or cultural relativity: the belief that the values and standards of cultures differ and cannot be easily compared with the values and standards of other cultures.
discordance: disagreement, see nonconcordance.
discrete trait: a biological characteristic that takes on distinct values and properties (such as ABO blood type).
discrimination: policies and practices that harm and disadvantage a group and its members.
DNA (Deoxyribonucleic acid): the molecule that encodes heredity information.
dominant allele: an allele that masks the effect of the other allele (which is recessive) in a heterozygous genotype.
double helix: the shape of the DNA molecule, which looks like a long twisted ladder. The sides of the ladder are composed of phosphates and sugars.
Emancipation: freedom from legalized slavery gained by most enslaved persons of African descent immediately following the Civil War. The Emancipation Proclamation made slavery illegal in Confederate states.
essentialism: the idea that all things have an underlying or true essence. Racial essentialists argue that all members of a specific racial group share certain basic characteristics or qualities that mark them as inherently different from members of other racial groups.
ethnicity: an idea similar to race that groups people according to common origin or background. The term usually refers to social, cultural, religious, linguistic and other affiliations although, like race, it is sometimes linked to perceived biological markers. Ethnicity is often characterized by cultural features, such as dress, language, religion, and social organization.
ethnocentrism: the deeply felt belief that your own cultural ways are universal, natural, normal, and even superior to other cultural ways.
ethnography: anthropological research in which one learns about the culture of a society through fieldwork, the data-gathering methods that are combined with and/or built upon first-hand participation and observation in that society.
eugenics: from Greek eugenes meaning wellborn; The eugenics movement of the late nineteenth and early twentieth centuries sought to "improve" the human species and preserve racial "purity" through planned human breeding. Eugenicists supported anti-miscegenation laws and other, sometimes more extreme measures such as sterilization.
evolution: the transformation of a species of organic life over long periods of time (macroevolution) or from one generation to the next (microevolution) due to four evolutionary forces. Anthropologists study both the cultural and biological evolution of the human species.
evolutionary forces: the four mechanisms that can cause changes in allele frequencies across generations: mutation, natural selection, genetic drift, and gene flow.
exogamy: choosing mates and marriage partners from outside the local population.
fieldwork: a form of data collection. Anthropological fieldwork involves a number of techniques and strategies that rely upon the firsthand observation of social interaction (in cultural anthropology) or the conducting of excavations (in archaeology).
founder effect: a type of genetic drift that occurs when all individuals in a population trace back to a small number of founding individuals. The small size of the founding population may result in very different allele frequencies from its original population. Examples of populations exhibiting founder effect include the French Acadians, the Amish and the Hutterites.
gene: a unique combination of bases (see base pairs) that creates a specific part of our body.
gene flow: a mechanism for evolutionary change involving genetic exchange across local populations. Gene flow introduces new alleles into a population and makes populations more similar genetically to one another.
genetic distance: an average measure of relatedness between populations based on various traits. Genetic distances are used for understanding effects of genetic drift and gene flow, which should affect all loci to the same extent.
genetic drift: a mechanism for evolutionary change resulting from the random fluctuations of gene frequencies (e.g. from one generation to the next). In the absence of other evolutionary forces, genetic drift results in the eventual loss of all variation. See founder effect.
genetics: the study of human heredity, its mechanisms and related biological variation. Heredity may be studied at the molecular, individual (organism) or population level.
genome: one complete copy of all the genes and DNA for a species.
genotype: the genetic endowment of an individual from the two alleles present at a given locus. See phenotype.
HapMap: an international research effort to find genes associated with human diseases and response to pharmaceuticals.
heritability: in biology, the proportion of variation of a trait due to genetic variation in a population.
heterozygous: the two alleles at a given locus are different.
holistic: the perspective that understanding human variation requires understanding how its different aspects (e.g. biological and cultural) are interrelated. This is one of the hallmarks of anthropological knowledge.
homozygous: both alleles at a given locus are identical.
Human Genome Project: an international research effort to sequence and map the human genome, all of the genes on every chromosome. The project was completed in 2003.
human variation: the differences that exist among individuals or among groups of individuals regarded as populations. Anthropologists study both cultural and biological variation.
human biological variation: refers to observable differences among individuals and groups that have resulted from the processes of human migration, marriage and environmental adaptations. Human biological variation is often referred to as human biological diversity.
hypothesis: a proposed explanation of observed facts. A scientific hypothesis must be testable.
immigration: the act of entering a country of which one is not a native to become a permanent resident. In the United States and elsewhere, immigration and immigration policies are often racially-charged issues.
intelligence: the innate potential to learn and solve novel problems.
intelligent design creationism: the idea that the biological world was created by an intelligent entity and did not arise from natural processes. This idea is somewhat different from that proposed by "creation scientists."
interfertility: the ability to interbreed or mate and produce fertile offspring. All humans are members of the same species and are interfertile.
institutional racism: the embeddedness of racially discriminatory practices in the institutions, laws, and agreed upon values and practices of a society.
linguistic anthropology: the subfield of anthropology that focuses on the nature of human language and the relationship of language to culture.
linguistics: the comparative study of the function, structure, and history of languages and the communication process in general. Linguistics is also referred to as linguistic anthropology.
Linnaeus, Carolus (1707-1778): Swedish botanist and physician who developed the system for sorting living organisms into major (genus) and then more specific (species) categories (e.g. Homo sapiens). In the 1758 tenth edition of Systema naturae (Natural System), Linnaeus created the first formal, non-scientific human racial classification scheme. It included five varieties of Homo sapiens – "Americanus," "Europaeus," "Asiaticus," "Afer," and "Ferus" – based on physical and cultural descriptions that favored Europeans. Linnaeus’ human classification system influenced the way race is conceptualized in the US. See also Blumenbach, Johann Friedrich.
locus: the location of a particular gene or DNA sequence on a chromosome.
macroevolution: the study of macroevolution focuses on biological evolution over many generations and on the origin of higher taxonomic categories, such as species.
malaria: a group of diseases caused by any of four different microorganisms called plasmodia (Plasmodium falciparum, vivax, ovale, and malariae), which are transmitted by certain species of mosquitoes. Malaria is potentially life-threatening and is found mostly in tropical and subtropical regions of the world.
Mendelian genetics: the branch of genetics concerned with inheritance. This field was named after Gregor Mendel who discovered the basic laws of inheritance in the nineteenth century.
meritocracy: the idea that merit and individual effort, rather than one’s family or social background (including race, gender, class and legacy), determine one’s success, one’s social and economic position. Similarly, the idea that social inequalities are the result of individual differences in merit and effort.
microevolution: the study of microevolution focuses on changes in allele frequencies from one generation to the next.
mitochondrial DNA: a small amount of DNA that is located in the mitochondria of cells. Mitochondrial DNA is inherited only through the mother.
monogeny: pre-evolutionary scientific argument that human biological "races" all descended from a single source (or biblical "Adam"). See polygeny.
Mulatto: originally from Spanish mulato meaning hybrid; an offspring of European and African parentage or a descendant of European and African ancestors; also used to refer to a person whose phenotype suggests mixed African and European ancestry.
multiregional evolution model: the hypothesis that modern humans evolved throughout the Old World as a single species after the first dispersion of Homo erectus out of Africa. According to this model, the transition from Homo erectus to archaic humans to modern Homo sapiens occurred within a single evolutionary line throughout the Old World.
mutation: a mechanism for evolutionary change resulting from a random change in the base sequence of a DNA molecule. Mutations are the ultimate source of all genetic variation but must occur in sex cells to cause evolutionary change.
natural selection: a mechanism for evolutionary change favoring the survival and reproduction of some organisms over others because of their particular biological characteristics under specific environmental conditions. Natural selection does not create variation, but acts on existing variation.
nonconcordance: the tendency of some human traits to vary independently, often in response to environmental or selective conditions. For example, skin color and ABO blood type are nonconcordant.
nonrandom mating: deliberate patterns of mate choice that influence the distributions of genotype and phenotype frequencies. Non-random mating does not lead to changes in allele frequencies. Arranged marriage is a form of nonrandom mating.
phenotype: the observable or detectable characteristics of an individual organism. A person's phenotype includes easily visible traits such as hair or eye color as well as abilities such as tongue-rolling/curling.
philology: the comparative study of human speech and literature, especially those aspects useful for understanding population movements and cross-cultural interactions in the past. See also linguistics and linguistic anthropology.
physical anthropology: the study of the non-cultural, or biological, aspects of humans and our fossil ancestors. Physical anthropologists are usually involved in one of three different kinds of research: 1) non-human primate studies (usually in the wild), 2) recovering the fossil record of human evolution, and 3) studying human biological diversity, inheritance patterns, and biological adaptation to environmental stresses, and cultural means of adapting to environmental stressors that impact biology. Physical anthropology is also referred to as biological anthropology.
physiology: referring to the organic or bodily processes of an organism.
polygenic: affected by two or more loci. See complex trait.
polygeny: pre-evolutionary scientific argument that human biological "races" are separate species, each descended from different biblical "Adams." See monogeny.
polymorphism: a discrete genetic trait in which there are at least two alleles at a locus having frequencies greater than 0.01.
polytypic: a species with physically distinguishable regional populations. The human species (Homo sapiens) is polytypic.
populational model: in reference to humans, an outdated classification system based on the assumption that the only biologically distinct groups are long-isolated breeding populations with distinct evolutionary lineages. In practice, populations are difficult to define scientifically.
primary African origin model: a variant of the multiregional evolution model of the origin of modern humans that suggests most of the transition from archaic to modern humans took place first in Africa and then spread throughout the rest of the species across the Old World by gene flow.
race: a recent idea created by western Europeans following exploration across the world to account for differences among people and justify colonization, conquest, enslavement, and social hierarchy among humans. The term is used to refer to groupings of people according to common origin or background and associated with perceived biological markers. Among humans there are no races except the human race. In biology, the term has limited use, usually associated with organisms or populations that are able to interbreed. Ideas about race are culturally and socially transmitted and form the basis of racism, racial classification and often complex racial identities.
racial classification: the practice of classifying people into distinct racial groups based on certain characteristics such as skin color or geographic region, often for the purpose of ranking them based on believed innate differences between the groups.
racial endogamy: marriage within one’s own racial group (see also anti-miscegenation laws).
racial identity: this concept operates at two levels: (1) self identity or conceptualization based upon perceptions of one’s race and (2) society’s perception and definition of a person’s race.
racialization: the process by which individuals and groups of people are viewed through a racial lens, through a culturally invented racial framework. Racialization is often referred to as racialism.
racial profiling: the use of race (and often nationality or religion) to identify a person as a suspect or potential suspect. Racial profiling is one of the ways that racism is manifested and perpetuated.
racial stratification: a system of stratification and inequality in which access to resources (political, economic, social) depends largely upon one’s racial classification.
racism: the use of race to establish and justify a social hierarchy and system of power that privileges, preferences or advances certain individuals or groups of people usually at the expense of others. Racism is perpetuated through both interpersonal and institutional practices.
recessive allele: an allele whose effect is masked by the other allele (which is dominant) in a heterozygous genotype.
RNA (Ribonucleic acid): the molecule that functions to carry out the instructions for protein synthesis specified by the DNA molecule.
selective pressure: environmental pressure on individuals within a population that results in evolutionary change; the driving force of natural selection. Extreme temperature and ultraviolet radiation are examples of selective pressure.
sickle cell allele: an allele of the hemoglobin locus. Individuals homozygous for this allele have sickle cell anemia, while heterozygotes have sickle cell trait. In areas of the world where malaria is endemic, people with the sickle cell trait have a selective advantage (see natural selection).
sickle cell anemia: a genetic disease that occurs in a person homozygous for the sickle cell allele, which alters the structure of red blood cells, giving it a "sickled" shape. These abnormally-shaped red blood cells are less efficient in transporting oxygen throughout the body, which can cause pain and even organ damage.
single-nucleotide polymorphisms (SNP; pronounced "snip"): a single base pair within a DNA sequence that can vary among individuals. An example of a SNP is the change from A to T in the sequences AATGCT and ATTGCT.
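A tiny sketch (my own illustration, not part of the glossary) that locates the SNP in the two example sequences given above:

```python
# Compare the two example sequences base by base; positions are 1-indexed.
seq1, seq2 = "AATGCT", "ATTGCT"
snps = [(i + 1, a, b) for i, (a, b) in enumerate(zip(seq1, seq2)) if a != b]
print(snps)   # [(2, 'A', 'T')]: the A-to-T change at the second base
```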
slavery: an extreme form of human oppression whereby an individual may "own" another person and the rights to his or her labor. In the colonial Americas, a form of racial slavery evolved that would eventually distinguish only persons of African descent as "slaves."
social class: a social grouping of people based on common economic and other characteristics determined by society and reflecting a social hierarchy.
stereotype: the process of attributing particular traits, characteristics, behaviors or values to an entire group or category of people, who are, as a consequence, monolithically represented; includes the process of negative stereotyping.
stratification: in reference to society, a system by which social, economic and political inequalities are structured in society.
subspecies: physically distinguishable populations that are genetically distinct within a species. Humans do not conform to the subspecies criteria.
symbol: a sign or attribute that stands for something else, to which it may or may not have any relationship. For example, the bald eagle or "Uncle Sam" are symbols of the United States.
taxonomy: the science of describing and classifying organisms.
trait: a characteristic or aspect of one's phenotype or genotype.
typological model: in reference to humans, an attempt to classify people based on the false assumption that humans can be unambiguously placed into discrete groupings on the basis of selected traits such as skin color, hair form, and body shape.
universalism: the belief that values and standards are commonly shared among cultures.
white privilege: A consequence of racism in the United States that has systematically, persistently, and extensively given advantages to so-called white populations, principally of European origin, at the expense of other populations.
Whiteness studies: the investigation of white racial identity, defined differently throughout United States history, but usually based on the maintenance or pursuit of white privilege.
| 2026-01-28T04:33:45.553562 |
1,148,687 | 3.542144 | http://www.britannica.com/print/topic/334981 | Adrien-Marie Legendre, (born September 18, 1752, Paris, France—died January 10, 1833, Paris), French mathematician whose distinguished work on elliptic integrals provided basic analytic tools for mathematical physics.
Little is known about Legendre’s early life except that his family wealth allowed him to study physics and mathematics, beginning in 1770, at the Collège Mazarin (Collège des Quatre-Nations) in Paris and that, at least until the French Revolution, he did not have to work. Nevertheless, Legendre taught mathematics at the École Militaire in Paris from 1775 to 1780. In 1782 he won a prize offered by the Berlin Academy of Sciences for his effort to “determine the curve described by cannonballs and bombs, taking into consideration the resistance of air[, and] give rules for obtaining the ranges corresponding to different initial velocities and to different angles of projection.” The next year he presented research on celestial mechanics to the French Academy of Sciences, and he was soon rewarded with membership. In 1787 he joined the French team, led by Jacques-Dominique Cassini and Pierre Mechain, in the geodetic measurements jointly conducted with the Royal Greenwich Observatory in London. At this time he also became a member of the British Royal Society. In 1791 he was named along with Cassini and Mechain to a special committee to develop the metric system and, in particular, to conduct the necessary measurements to determine the standard metre. He also worked on projects to produce logarithmic and trigonometric tables.
The Academy of Sciences was forced to close in 1793 during the French Revolution, and Legendre lost his family wealth during the upheaval. Nevertheless, he married at this time. The following year he published Éléments de géométrie (Elements of Geometry), a reorganization and simplification of the propositions from Euclid’s Elements that was widely adopted in Europe, even though it is full of fallacious attempts to defend the parallel postulate. Legendre also gave a simple proof that π is irrational, as well as the first proof that π² is irrational, and he conjectured that π is not the root of any algebraic equation of finite degree with rational coefficients (i.e., π is a transcendental number). His Éléments was even more pedagogically influential in the United States, undergoing numerous translations starting in 1819; one such translation went through some 33 editions. The French Academy of Sciences was reopened in 1795 as the Institut Nationale des Sciences et des Arts, and Legendre was installed in the mathematics section. When Napoleon reorganized the institute in 1803, Legendre was retained in the new geometry section. In 1824 he refused to endorse the government’s candidate for the Institut and lost his pension from the École Militaire, where he had served from 1799 to 1815 as the mathematics examiner for graduating artillery students.
Legendre’s Nouvelles méthodes pour la détermination des orbites des comètes (1806; “New Methods for the Determination of Comet Orbits”) contains the first comprehensive treatment of the method of least squares, although priority for its discovery is shared with his German rival Carl Friedrich Gauss.
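As a modern illustration of the least-squares idea (the code, variable names and sample data below are my own, not drawn from Legendre's treatise), here is a straight-line fit computed from the normal equations:

```python
# A minimal sketch: fit y = m*x + b to observed points by ordinary least squares.
def least_squares_line(xs, ys):
    n = len(xs)
    sx, sy = sum(xs), sum(ys)
    sxx = sum(x * x for x in xs)
    sxy = sum(x * y for x, y in zip(xs, ys))
    m = (n * sxy - sx * sy) / (n * sxx - sx * sx)  # slope from the normal equations
    b = (sy - m * sx) / n                          # intercept
    return m, b

# Illustrative data scattered around y = 2x + 1:
print(least_squares_line([0, 1, 2, 3], [1.1, 2.9, 5.2, 6.8]))  # ~(1.94, 1.09)
```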
In 1786 Legendre took up research on elliptic integrals. In his most important work, Traité des fonctions elliptiques (1825–37; “Treatise on Elliptic Functions”), he reduced elliptic integrals to three standard forms now known by his name. He also compiled tables of the values of his elliptic integrals and showed how they can be used to solve important problems in mechanics and dynamics. Shortly after his work appeared, the independent discoveries of Niels Henrik Abel and Carl Jacobi completely revolutionized the subject of elliptic integrals.
Legendre published his own researches in number theory and those of his predecessors in a systematic form under the title Théorie des nombres, 2 vol. (1830). This work included his flawed proof of the law of quadratic reciprocity. The law was regarded by Gauss, the greatest mathematician of the day, as the most important general result in number theory since the work of Pierre de Fermat in the 17th century. Gauss also gave the first rigorous proof of the law. | 2026-02-05T01:12:55.018403 |
29,617 | 4.604753 | http://www.borenson.com/Standards/tabid/866/Default.aspx | “Without any doubt, the foundational skill of
algebra is fluency in the use of symbols.”
Final Report of the National Mathematics Advisory Panel
Report of the Task Group on Conceptual Knowledge and Skills, Page 17
The NCTM Principles and Standards for School Mathematics notes that “a strong foundation in algebra should be in place by the end of the eighth grade.” Hands-On Equations is the unique program that is able to provide students with that foundation beginning in the 3rd and 4th grades. One of the greatest stumbling blocks that students have to the learning of algebra is the abstract nature of the symbolism that is used. Indeed, the final report of the National Mathematics Advisory Panel noted (page 60) that, “Many students have difficulty grasping the syntax or structure of algebraic expressions.” This is another way of saying that for many students algebra is a foreign language.
Hands-On Equations performs the essential function of demystifying algebraic notation through its unique visual representation of equations using pawns and numbered cubes. Almost instantly students understand the elements or makeup of an equation such as 4x + 3 = 3x + 9. They understand, for example, the essential way in which the constant 3 on the left side differs from the coefficient 3 (the multiplier of x) on the right side. Once students understand what an equation means, and once they understand a few basic principles made very clear through the use of physical actions or gestures, they attain a very high level of success with algebraic linear equations normally presented only in an algebra course. Students are then able to apply that learning to the algebraic solution of verbal problems.
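To make those steps concrete, here is a minimal sketch in Python (my own illustration; the program itself uses physical pawns and cubes, not code) that solves the example equation above by the same subtraction property of equality taught in Level I:

```python
# Solve a*x + b = c*x + d by the "legal moves" described above (assumes a > c).
def solve_linear(a, b, c, d):
    # Subtraction property of equality: remove c x-pieces from both sides.
    print(f"{a - c}x + {b} = {d}   (subtracted {c}x from both sides)")
    a = a - c
    # Remove the constant b from both sides.
    print(f"{a}x = {d - b}   (subtracted {b} from both sides)")
    d = d - b
    # Finally, divide both sides by the coefficient of x.
    x = d / a
    print(f"x = {x}   (divided both sides by {a})")
    return x

solve_linear(4, 3, 3, 9)   # the example from the text: 4x + 3 = 3x + 9 -> x = 6.0
```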
In Level I, the first seven lessons of Hands-On Equations, students learn:
- the concept of an unknown
- the relational meaning of the equal sign (both sides have the same value)
- the meaning of an equation
- how to balance equations (using the subtraction property of equality)
- the concept of the check of an equation
- the ability to solve one and two-step equations with unknowns on both sides
- how to combine like terms
- how to work with a multiple of a parenthetical expression, i.e., the distributive property
- how to evaluate an expression (when they check each side of an equation)
In Levels II and III of Hands-On Equations, students learn:
- the concept of the opposite of an unknown
- how to evaluate algebraic expressions involving x and (-x).
- the additive property of inverses
- the addition property of equality
- the additive identity property
- the concept that subtracting an entity gives the same result as adding its opposite
- addition and subtraction of integers
Hands-On Equations is an Essential Component
of Middle School Mathematics
Dr. Borenson, the inventor of Hands-On Equations, strongly urges all districts to provide, at a very minimum, Level I of Hands-On Equations, which consists of the first seven lessons of the program, followed by several lessons where the students apply this learning to the algebraic solution of verbal problems -- all before students enter an Algebra 1 course. This will be the best investment districts can make to help their students succeed with the abstract world of algebra!
"A strong foundation in algebra should be in place by the end of eighth grade...".
Principles and Standards for School Mathematics, NCTM
Hands-On Equations Objectives
The above link will provide the objectives for Hands-On Equations, Level I (the red manual), and for the Hands-On Equations Introductory Verbal Problems Workbook and the Hands-On Equations Verbal Problems Book, Level 1.
NCTM Math Standards Correlation
Common Core State Standards
GRADES 3 - 5
GRADES 6 - 8
Our two-day training will provide your teachers with the skills to help their students meet these Common Core algebra standards:
- Solve two-step word problems and represent those problems using equations with a letter standing for the unknown. (3.OA.8). Click here for video demo.
- Solve multistep word problems using drawing and equations with a symbol for the unknown (4.OA.2). Click here for video demo.
- Use visual models and equations to represent and solve word problems involving addition and subtraction of fractions (4.NF.3d). Click here for video demo.
- Apply the properties of operations to generate equivalent expressions. For example, apply the distributive property to the expression 3(2+x) to produce the equivalent expression 6+3x; apply the distributive property to the expression 24x + 18y to produce the equivalent expression 6(4x + 3y) (6.EE.3)
- Understand that positive and negative numbers have opposite values (6.NS.5)
- Solve mathematical problems using algebraic expressions and equations (7.EE.4). Click here for video demo.
- Recognize linear equations which have no solutions, one solution, or infinitely many solutions (8.EE.7); see the sketch after this list
- Strategically choose process for solving equations in one variable (grade 8)
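For the 8.EE.7 item above, here is a small illustrative check (my own sketch, not part of the Hands-On Equations materials; it assumes the sympy library is available) showing the three solution cases for linear equations in one variable:

```python
from sympy import Eq, S, solveset, symbols

x = symbols("x")
print(solveset(Eq(4*x + 3, 3*x + 9), x, domain=S.Reals))    # {6}: exactly one solution
print(solveset(Eq(2*x + 1, 2*x + 5), x, domain=S.Reals))    # EmptySet: no solution
print(solveset(Eq(3*(x + 2), 3*x + 6), x, domain=S.Reals))  # Reals: infinitely many
```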
IMPORTANT: The staff training we provide to your teachers will enable your students to excel algebraically and go beyond the Common Core standards. For example, we make it possible for third and fourth graders to meet the above 7th and 8th grade standard and solve word problems using algebraic equations. Not only will your students have the skill and knowledge to solve algebraic equations with unknowns on both sides, they will experience the joy of learning and the enhanced self-confidence that comes from real achievement. Click here to see what district leaders say about our training. | 2026-01-18T19:21:36.809620 |
1,098,270 | 3.968345 | http://www.eurekalert.org/pub_releases/2010-06/uog-sbn062310.php |
The separation of Neanderthals and Homo sapiens might have occurred at least one million years ago, more than 500,000 years earlier than previously believed on the basis of DNA analyses. A doctoral thesis conducted at the National Center for Research on Human Evolution (Centro Nacional de Investigación sobre la Evolución Humana, CENIEH), associated with the University of Granada, analysed the teeth of almost all species of hominids that have existed during the past 4 million years. Quantitative methods were employed, and the study managed to identify Neanderthal features in ancient European populations.
The main purpose of this research, whose author is Aida Gómez Robles, was to reconstruct the evolutionary history of the human species using the information provided by teeth, which are the most numerous and best preserved remains in the fossil record. To this end, a large sample of dental fossils from different sites in Africa, Asia and Europe was analysed. The morphological differences of each dental class were assessed, and the ability of each tooth to identify the species to which its owner belonged was analysed.
The researcher concluded that it is possible to correctly determine the species to which an isolated tooth belonged with a success rate ranging from 60% to 80%. Although these values are not very high, they increase as different dental classes from the same individual are added. That means that if several teeth from the same individual are analysed, the probability of correctly identifying the species can reach 100%.
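As a rough illustration of why accuracy climbs when several teeth from one individual are combined (this is my own sketch of the general statistical point, not the method used in the thesis; it assumes each tooth is classified independently with the same per-tooth accuracy p):

```python
from math import comb

def majority_vote_accuracy(p: float, n: int) -> float:
    """P(majority of n independent per-tooth classifications is correct), n odd."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(n // 2 + 1, n + 1))

for n in (1, 3, 5, 7):
    print(n, round(majority_vote_accuracy(0.7, n), 3))
# 1 0.7, 3 0.784, 5 0.837, 7 0.874 -> adding teeth pushes accuracy toward 1
```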
Aida Gómez Robles explains that, of all the species of hominids currently known, "none of them has a probability higher than 5% of being the common ancestor of Neanderthals and Homo sapiens. Therefore, the common ancestor of this lineage has probably not yet been discovered".
What is innovative about this study is that computer simulation was employed to observe the effects of environmental changes on morphology of the teeth. Similar studies had been conducted on the evolution and development of different groups of mammals, but never on human evolution.
Additionally, the research conducted at CENIEH and at the University of Granada is pioneering, together with recent studies based on the shape of the skull, in using mathematical methods to estimate the tooth morphology of common ancestors in the evolutionary tree of the human species. "However, in this study, only dental morphology was analysed. The same methodology can be used to reconstruct other parts of the skeleton of those species, which would provide other models to serve as a reference for future comparative studies of new fossil finds."
To carry out this study, Gómez Robles employed fossils from a number of archaeological-paleontological sites, such as the Gran Dolina and the Sima de los Huesos, located in the Atapuerca range (Burgos, Spain), and the site of Dmanisi in the Republic of Georgia. She also studied different fossil collections by visiting international institutions such as the National Museum of Georgia, the Institute of Human Paleontology and the Museum of Mankind in Paris, the European Research Centre at Tautavel (France), the Senckenberg Institute in Frankfurt, the Museum of Natural History in Berlin, the Institute of Vertebrate Paleontology and Paleoanthropology in Beijing, and the museums of natural history in New York and Cleveland.
Although the results of this research were presented in two articles published in one of the most prestigious journals in the field of human evolution, the Journal of Human Evolution (2007 and 2008), they will be presented in full within a few months.
Contact: Aida Gómez Robles. Group of Dental Anthropology. Centro Nacional de Investigación sobre la Evolución Humana (Burgos). Physical Anthopology Laboratory of the University of Granada. Phone: +34 947 04 50 63. E-mail: firstname.lastname@example.org
| 2026-02-04T04:50:15.986473 |
395,839 | 3.662946 | http://www.cartage.org.lb/en/themes/sciences/Earthscience/Geology/WaterCycles/EfficientUseofWater/EfficientUseofWater.htm | Efficient Use of Water in the Garden and Landscape
During 1984, an estimated 1.25 million acre-feet of water were used by Texans in the care and maintenance of residential landscapes. Texas is expected soon to become the second most populous state in the U.S., with two-thirds of the population located in urban/suburban areas. With this growth, conservative estimates indicate water needs will increase 75 percent by the year 2000. Thus, conservation, reclamation and efficient use of water resources will become increasingly important.
Essentially all water used in Texas is derived from precipitation. Part of the precipitation flows into streams, ponds, lakes and reservoirs, and some of this eventually reaches the Gulf; another portion infiltrates the soil to the rooting zone of plants; a third portion percolates below the rooting zone and becomes groundwater.
Surface water sources are recharged rapidly, but groundwater reservoirs such as the Ogallala Aquifer, are recharged very slowly. The Ogallala Aquifer is slowly being exhausted in some areas of heavy pumping. The proportion of precipitation received in Texas that is returned to the atmosphere as water vapor is estimated to be 70 percent from non-irrigated land areas and 2 percent from irrigated areas. Most of this loss represents evaporation or transpiration from plant surfaces.
Efficient, Responsible Water Use
The danger of exhausting valuable aquifers by excessive pumping is paralleled by the threat of polluting the groundwater with industrial, agricultural and home landscape contaminants. Nitrates from excessive and untimely fertilization are especially threatening.
Plants, Soils and Water
When water is applied to the soil it seeps down through the root zone very gradually. Each layer of soil must be filled to "field capacity" before water descends to the next layer. This advancing boundary of water movement is referred to as the wetting front. Water moves downward through a sandy, coarse soil much faster than through a fine-textured soil such as clay or silt.
If only one-half the amount of water required for healthy growth of your garden or landscape is applied at a given time, it only penetrates the top half of the root zone; the area below the point where the wetting front stops remains as dry as if no irrigation had been applied at all.
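As a toy illustration of the wetting front just described (my own sketch; the per-layer holding capacity is an assumed figure for illustration, not a measured soil value):

```python
# Each soil layer must reach "field capacity" before water moves to the next
# layer down; a light application therefore wets only the top of the root zone.
def wet_layers(applied_in, capacity_per_layer=0.25, n_layers=12):
    wetted = 0
    while applied_in >= capacity_per_layer and wetted < n_layers:
        applied_in -= capacity_per_layer   # fill this layer to field capacity
        wetted += 1                        # the wetting front advances one layer
    return wetted

print(wet_layers(1.5))   # 6 layers (inches) wetted by a deep soaking
print(wet_layers(0.75))  # 3: a light sprinkling stops high in the root zone
```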
Once enough water is applied to move the wetting front into the root zone, moisture is absorbed by plant roots and moves up through the stem to the leaves and fruits. Leaves have thousands of microscopic openings, called stomates, through which water vapor is lost from the plant. This continual loss of water called transpiration, causes the plant to wilt unless a constant supply of soil water is provided by absorption through the roots.
The total water requirement is the amount of water lost from the plant plus the amount evaporated from the soil. These two processes are called evapotranspiration. Evapotranspiration rates vary and are influenced by day length, temperature, cloud cover, wind, relative humidity, mulching, and the type, size and number of plants growing in a given area.
Water is required for the normal physiological processes of all plants. It is the primary medium for chemical reactions and movement of substances through the various plant parts. Water is an essential component in photosynthesis and plant metabolism, including cell division and enlargement. It is important also in cooling the surfaces of land plants by transpiration.
Water is a primary yield-determining factor in crop production. Plants with insufficient water respond by closing the stomata, leaf rolling, changing leaf orientation and reducing leaf and stem growth and fruit yield.
Not all water is suitable for use as an irrigation source. Prior to implementing an irrigation system, the water source should be tested for water quality. The instructions for testing and the testing results may be obtained from the Texas Agricultural Extension Service or an independent water lab. The results of the test will determine if the water is suitable for irrigation or reveal if any special tactics will be required to overcome quality deficiencies.
Major factors in determining water quality are its salinity and sodium contents. Salinity levels are expressed as categories based on conductivity.
Category C-1 represents a low salinity hazard. Water in this category has a conductivity of less than 2.5 millimhos/cm. It can be used for most crops without any special tactics.
Category C-2 reflects salinity that results in a conductivity of 2.5 - 7.5 millimhos/cm. The water in this category can be used for tolerant plants if adequate leaching occurs.
Category C-3 is high-salinity water with conductivity in the 7.5-22.5 millimhos/cm range. It cannot be used effectively on poorly drained soils. On well-drained, low-salt soils the water can be used for salt-tolerant plants if it is well managed.
Category C-4 water is very high salinity and cannot be used for irrigation on a regular basis.
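A minimal sketch that encodes the four salinity categories above (the function name and messages are my own; the conductivity thresholds are the ones quoted in the text, and a water lab or the Extension Service should be consulted for actual interpretation):

```python
# Map an irrigation-water conductivity reading (millimhos/cm, per the text)
# onto the C-1..C-4 salinity hazard categories described above.
def salinity_category(conductivity: float) -> str:
    if conductivity < 2.5:
        return "C-1: low hazard; usable for most crops"
    elif conductivity <= 7.5:
        return "C-2: usable for tolerant plants if adequate leaching occurs"
    elif conductivity <= 22.5:
        return "C-3: salt-tolerant plants on well-drained, low-salt soils only"
    else:
        return "C-4: very high salinity; not for regular irrigation"

print(salinity_category(1.8))   # C-1
print(salinity_category(9.0))   # C-3
```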
Sodium is a major component of the salts in most saline waters but its impact can be detrimental to soil structure and plant growth beyond its status as a component of salinity. The level of sodium (Na) in irrigation water is another important factor of quality.
Table 1. Determination of soil moisture content.
Sodium levels are expressed as categories based on concentration and impact on soils.
The S-1 category denotes low-sodium water. Water in this category can be used for most plants without any special tactics.
S-2 water has a medium level of sodium. Its use may be a problem on some fine textured soils.
S-3 water has high levels of sodium and will produce harmful effects in most situations. Sometimes it is useful on soils with high gypsum levels and in low salinity situations where it can be chemically treated.
S-4 water has very high sodium levels and is generally unsatisfactory as irrigation water.
There are critical growth periods when water stress is most detrimental. It is imperative that a good moisture supply be maintained during seed germination and seedling emergence from the soil. Water transplants immediately. Many shallow-rooted plants and newly planted trees and shrubs suffer water stress. Wilting followed by browning leaf tips and edges are signs of water stress.
To determine if irrigation is needed, feel the soil in the soil zone where most roots are located. Table 1 explains how to determine the soil's moisture by feel. As you gain experience feeling the soil and observing plant symptoms, it will help you time irrigations.
Proper watering methods are seldom practiced by most gardeners. They either under- or over-water when irrigating.
The person who under-waters usually doesn't realize the time needed to adequately water an area; instead he applies light, daily sprinklings. It is actually harmful to lightly sprinkle plants every day. Frequent light applications wet the soil to a depth of less than 1 inch. Most plant roots go much deeper. Light sprinkling only settles the dust and does little to alleviate drought stress of plants growing in hot, dry soil. Instead of light daily waterings, give plants a weekly soaking. When watering, allow the soil to become wet to a depth of 5 to 6 inches.
This type of watering allows moisture to penetrate into the soil area where roots can readily absorb it. A soil watered deeply retains moisture for several days, while one wet only an inch or so is dry within a day.
In contrast, there are those who water so often and heavily that they drown plants. Symptoms of too much water are the same as for too little. Leaves turn brown at the tips and edges, then brown all over and drop from the plant. These symptoms should be the same, since they result from insufficient water in the plant tissue.
Too much water in a soil causes oxygen deficiency, resulting in damage to the root system. Plant roots need oxygen to live. When a soil remains soggy little oxygen is present in the soil. When this condition exists roots die and no longer absorb water. Then leaves begin to show signs of insufficient water. Often gardeners think these signs signal lack of water, so they add more. This further aggravates the situation and the plant usually dies quickly.
Thoroughly moisten the soil at each watering, and then allow plants to extract most of the available water from the soil before watering again.
A mulch is a layer of material covering the soil surface around plants. This covering befriends plants in a number of ways.
It moderates soil temperature, thus promoting greater root development. Roots prefer to be cool in summer and warm in winter. This is possible under a year-round blanket of mulch.
Mulch conserves moisture by reducing evaporation of water vapor from the soil surface. This reduces water requirements.
Mulching prevents compaction by reducing soil crusting during natural rainfall or irrigation. Falling drops of water can pound the upper 1/4 inch of soil, especially a clay soil, into a tight, brick-like mass that retards necessary air and water movement to the root zone.
Mulching also reduces disease problems. Certain types of diseases live in the soil and spread when water splashes bits of infested soil onto a plant's lower leaves. Mulching and careful watering reduce the spread of these diseases. Mulching also keeps fruit clean while reducing rot disease by preventing soil-fruit contact.
Most weed seeds require light to germinate, so a thick mulch layer shades them and reduces weed problems by 90 percent or more.
Any plant material that is free of weed seed and not diseased is suitable for mulch. Weed-free hay or straw, leaves, grass clippings, compost, etc., are all great. Fresh grass clippings are fine for use around well-established plants, but cure them for a week or so before placing them around young seedlings.
Mulch vegetable and flower gardens the same way. First get plants established, then mulch the entire bed with a layer 3 to 4 inches thick. Work the mulch material up around plant stems.
Organic mulches decompose or sometimes wash away, so check the depth of mulches frequently and replace when necessary.
Recent research indicates that mulching does more to help newly planted trees and shrubs become established than any other factor except regular watering. Grasses and weeds, especially bermuda grass, which grow around new plants rob them of moisture and nutrients. Mulch the entire shrub bed and mulch new trees in a 4-foot circle.
Four distinct methods of irrigating are sprinkling, flooding, furrow-irrigation and drip irrigation. Consider the equipment and technique involved in each method before selecting the "right" system. Select a system that will give plants sufficient moisture without wasting water.
Sprinkler irrigation, or "hose-end overhead sprinkling" as it is sometimes called, is the most popular and most common watering method. Sprinkler units can be set up and moved about quickly and easily. They are inexpensive to buy, but if used incorrectly they can be extremely wasteful of water.
Sprinkler equipment varies in cost from a few dollars for a small stationary unit to $50 or more for units that move themselves. A solid-set sprinkler system for a small garden could cost more than $100, although it is not necessary to spend that much. The best investment is an impact-drive sprinkler that can be set to water either a full or partial circle.
Sprinkler irrigation has its advantages. The system can be used on sloping as well as level areas. Salt does not accumulate because water percolates downward from the surface carrying salts with it. Different amounts of water can be applied to separate plantings to match plant requirements.
However, there are some drawbacks. Use sprinkler irrigation early in the day to allow time for the soil surface to dry before nightfall. Irrigation in a wind of more than 5 miles per hour distributes the water unevenly. If you have poor quality water, the mist which dries on leaves may deposit enough salt to injure them. Strong winds may carry the water away to neighbors' yards. Some water also is wasted by attempting to cover a square or rectangular area with a circular pattern. Move the sprinkler unit at regular intervals if the garden is larger than the sprinkler pattern. With caged tomatoes or trellised crops, set the sprinkler on a stand to allow the spray to arch up and over the top of the leaf canopy. Improper timing and operating in wind or at night can damage plants and waste water.
Flooding is one of the oldest irrigation methods. It is often used in areas with extreme summer heat, especially in large farming operations. It can also be used in the home garden.
First, a shallow dam is raised around the entire perimeter of the area to be watered. Then, water is allowed to flow over the soil until the dammed area is completely covered. Beneficial flooding is possible only if the area is level and the soil contains enough clay to cause the water to spread out over the surface and penetrate slowly and evenly. The soil must not remain flooded with water for more than a few hours.
Flood irrigation is useful where alkaline water causes a buildup of salts to toxic levels in the soil. Flooding leaches (flushes down) these excess soluble salts out of the soil. It is best to do this type of flooding before spring fertilizing, tilling and planting.
However, flood irrigation has its drawbacks. It can waste water because it is easy to apply much more water than is required to meet normal plant needs. Runoff is hard to avoid. Also, rapidly growing plants are injured by the low oxygen level present (oxygen starvation) in flooded soil, and fruits resting on flooded soil stay wet, often rotting as a result.
Furrow irrigation is a popular method of applying water, primarily to vegetable gardens. Successful furrow irrigation requires soil with enough clay so that water flows along shallow ditches between the rows and sinks in slowly. The water must reach the low end of the rows before much has soaked in at the high end. Many sandy or open soils are so porous that water seeps in too quickly, never reaching the end of the row. To solve this problem, use short rows in gardens with sandy soil.
Most gardens can be irrigated easily with the furrow method by using a hoe or shovel to make shallow ditches. To test furrow irrigation, make one shallow ditch from end to end and run water down it. If the water runs 20 to 30 feet in a few minutes, that's fine. If the water sinks in too fast at the high end, divide the garden lengthwise into two or more runs and irrigate each run separately. Make a serpentine ditch to guide the water up and down short rows in small gardens on level ground. The number of rows which can be irrigated at the same time depends on the volume of water available and your ingenuity.
Leaves and fruit of erect plants such as beans and peppers will stay dry during furrow irrigation. New seedlings can be watered by running water as often as needed to keep the seedbed moist. The surface soil of a raised bed does not pack as with sprinkler irrigation, so there is less crusting. Only a hoe or shovel and a length of hose are needed to get the water from the house faucet to the garden.
But, furrow irrigation does have some disadvantages. Mature fruits of vine and tomato crops usually rest on the soil. Some will become affected with a soil rot after repeated wetting. And it is difficult, if not impossible, to protect them with mulch. Train vining plants away from furrows even though it is not an easy task. In areas with salty water, salts accumulate near the center of the row and can injure plants. If only a small volume of water is available, water a few rows at a time and then change to a new set. This can be time consuming and wasting water at the ends of the rows is a common problem.
Trickle or drip irrigation is an improvement over all the above as a watering technique. It applies a small amount of water over a long period of time, usually several hours. This is discussed in detail later in this publication.
Using Water Around Home Trees and Shrubs
Grass and/or weeds growing under and around trees and shrubs compete for the same nutrients and water. When summer rainfall is low and less than adequate watering occurs, competition for water and nutrients imposed by weeds or grass substantially reduces tree growth, bud development and fruit size. When competition from grass is eliminated, roots are more evenly distributed, root numbers increase and they utilize a larger volume of soil. Effective soil utilization by a large root system means that fertilizer and moisture will be used more efficiently.
Remove grass and/or weeds from beneath newly planted trees and shrubs as soon as possible. The longer turfgrass grows under trees and shrubs, the greater the reduction of new growth. There is also a cumulative effect which may decrease tree growth for several years. For instance, if the growth of a tree is reduced by 20 percent for one year because of grass competition, growth during the second year automatically starts 20 percent behind. Grass competition can reduce growth by as much as 50 percent.
If trees and shrubs are surrounded closely by tenacious grasses such as bermuda, remove or kill the turf. The safest grass killer for use near young trees and shrubs is glyphosate, which is sold as Roundup, Kleenup, Doomsday or Weed and Grass Killer.
This herbicide totally eliminates grasses and roots, yet is inactivated upon soil contact. Use a piece of wood, cardboard, etc., as a shield to prevent spray droplets from touching trunks or foliage of desirable plants. Use only the amount of glyphosate suggested on the product label.
Liberal watering offsets the retarding effect of grass. If the competition of grass for water can be overcome by extra watering, plants will grow much better.
Trees need a deep, thorough soaking once a week in the growing season, either from natural rainfall or supplemental irrigation. When irrigating, be thorough and allow the water to penetrate deeply. To water large trees let water flow slowly onto an area under the dripline of the tree for several hours.
Professionals indicate that large trees require more deep watering than homeowners can imagine. Remember that watering which is adequate for lawn grasses growing under trees is not adequate for actively growing trees.
Young and mature pecans, which are popular lawn trees in many areas, respond positively to irrigation. Irrigation can be very beneficial, if not necessary, in June, July and August. Irrigation often means the difference between a marketable and unmarketable product. A dry June and July may cause many or all nutlets to drop. Drought during July and early August can decrease nut size. Pecans fill during August and September; drought during these months may result in poorly filled nuts. A dry September and October may prevent shuck opening and cause a high proportion of "sticktights". Drought-induced sticktights can be a serious problem.
Growth of young, nonbearing pecan trees depends on a regular supply of water from April bud break to mid-August. The frequency of irrigation varies with the system used. However, avoid applying too much water. An understanding of internal soil drainage prevents overwatering. When too much water is supplied, oxygen is forced out of the root zone and many serious problems result.
A guide for young tree irrigation is shown in Table 2. If soil drainage is poor, apply 50 percent of this volume.
All bearing pecan trees respond positively to irrigation. In general, pecans in good soil bear with only 32 inches of rainfall from August to October. However, more water increases tree health and regular production.
Table 2. Average weekly water requirements in gallons per tree.
Pecans require 1 inch of water each week from April to October; the optimum amount is 2 inches per week.
A bearing pecan tree has its greatest water needs during the following periods:
March, immediately before growth begins.
Severe drought during one of these four periods can cause complete crop failure or serious loss. If these occur during the last period, a poor crop results the following year.
Pecan roots can dry out and die if no rain occurs from September to April. Therefore, consider a mid-winter irrigation to ensure good tree health and regular production.
Water needs vary considerably among the turfgrasses. Consider this when establishing a lawn, for it may significantly reduce irrigation needs during the summer. Of the common turfgrasses tall fescue requires the most water and buffalo-grass the least. St. Augustine, hybrid bermuda grass and common bermuda grass have intermediate water needs.
Lightly water newly seeded or sprigged lawns at frequent intervals. Keep the seed or sprigs moist but not saturated during this initial growth period. This may require watering four or five times on hot, windy days.
The first 10 days to 2 weeks are especially critical. If young plants dry out, they may die. After a couple of weeks root system development should be well under way and the watering frequency can be slowly reduced. About 1 month after seeding or sprigging, the lawn should be treated as an established lawn. Purple or red colored bermuda grass may indicate seedlings are overwatered. If this occurs, reduce watering and plants usually recover.
Water newly sodded lawns much like established lawns except more frequently. After the sod is applied, soak it with enough water so that the soil under the sod is wet to a depth of 2 to 3 inches. Each time the sod begins to dry out, resoak it. Roots develop fairly rapidly and within 2 weeks or so the sod can be treated like an established lawn.
Ideally, a lawn should be watered just before it begins to wilt. Most grasses take on a dull purplish cast and leaf blades begin to fold or roll. Grass under drought stress also shows evidence of tracks after someone walks across the lawn. These are the first signs of wilt. With careful observation and experience, one can determine the correct number of days between waterings. Common bermuda grass lawns can go 5 to 7 days or longer between waterings without loss of quality.
Early morning is considered the best time to water. The wind is usually calm and the temperature is low so less water is lost to evaporation. The worst time to water is late evening because the lawn stays wet all night, making it more susceptible to disease.
When watering a lawn, wet the soil to a depth of 4 to 6 inches. Soil type affects the amount of water needed to wet soil to the desired depth.
It takes about 1/2 inch of water to achieve the desired wetting depth if the soil is high in sand, and about 3/4 inch of water if the soil is a loam. For soils high in clay, an inch of water is usually necessary to wet the soil to the desired depth.
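Expressed as a quick calculation, the rule of thumb above amounts to a lookup by soil type. Here is a minimal Python sketch using only the figures from the preceding paragraph; the function name is ours, not part of any standard library.

# Approximate inches of water needed to wet soil 4 to 6 inches deep,
# using the figures given above for each soil type.
WATER_NEEDED_INCHES = {"sand": 0.5, "loam": 0.75, "clay": 1.0}

def inches_to_apply(soil_type):
    """Return approximate inches of water to wet soil to the desired depth."""
    return WATER_NEEDED_INCHES[soil_type.lower()]

print(inches_to_apply("loam"))  # 0.75 inch for a loam soil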
If waterings are too light or too frequent the lawn may become weak and shallow-rooted, which in turn makes it more susceptible to stress injury.
Use the following steps to determine the amount of water your sprinkler or sprinkler system puts out and check its distribution pattern at the same time.
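The step list itself did not survive in this copy of the bulletin, but the usual approach is a catch-can test: set several straight-sided cans across the sprinkler pattern, run the sprinkler for a fixed time, and measure the depth of water caught in each. The sketch below assumes that method; the depths and run time are hypothetical readings.

# Catch-can test for sprinkler output and distribution (hypothetical readings).
depths_inches = [0.30, 0.25, 0.35, 0.20, 0.28]  # water depth caught in each can
run_time_minutes = 30

average_depth = sum(depths_inches) / len(depths_inches)
rate_in_per_hour = average_depth * 60 / run_time_minutes

print(f"Average application: {average_depth:.2f} inch in {run_time_minutes} minutes")
print(f"Application rate: {rate_in_per_hour:.2f} inch per hour")
# A large gap between the wettest and driest can indicates an uneven pattern.
print(f"Wettest minus driest can: {max(depths_inches) - min(depths_inches):.2f} inch")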
Many soils will not take an inch of water before runoff occurs. If this is a problem with your lawn, try using a wetting agent, also called a surfactant, which reduces the surface tension of water making it "wetter." This "wetter" water runs into the soil at a faster rate and goes deeper than water in a non-treated soil.
There are a number of wetting agents available; apply them according to directions on their labels. If this does not solve the runoff problem, it may be necessary to apply 1/2 inch one day and 1/2 inch the next day.
Generally speaking, if you keep your tomatoes happy, the rest of the vegetables will receive enough water. Obviously, irrigating a garden containing many kinds of vegetables is not simple. Early in the season when plants are young and have small root systems, they remove water from the soil near the center of the row. As the plants grow larger, roots penetrate into more soil volume and withdraw greater quantities of water faster.
In sandy loam soils, broccoli, cabbage, celery, sweet corn, lettuce, potatoes and radishes have most of their roots in the top 6 to 12 inches of soil (even though some roots go down 2 feet) and require frequent irrigation of about 3/4 to 1 inch of water. Vegetables which have most of their root systems in the top 18 inches of soil include beans, beets, carrots, cucumbers, muskmelons, peppers and summer squash. These vegetables withdraw water from the top foot of soil as they approach maturity and can profit from 1 to 2 inches of water per irrigation.
A few vegetables, including the tomato, cantaloupe, watermelon and okra, root deeper. As these plants grow they profit from irrigations of up to 2 inches of water.
For fruiting crops, the most critical growth stage regarding water deficit is at flowering and fruit set. Moisture shortage at this stage may cause abscission of flowers or young fruits, resulting in insufficient fruit for maximum yield.
The longer the flowering period, the less sensitive a species is to moisture deficits. For example, the relative drought resistance of beans during flowering and early pod formation is the result of the lengthy flowering period --30 to 35 days with most varieties. Slight deficits during part of this period can be partially compensated for by subsequent fruit set when the water supply is adequate. More determinate crops such as corn or processing tomatoes are highly sensitive to drought during the flowering period.
In terms of food production, the period of yield formation or enlargement of the edible product (fruit, head, root, tuber, etc.) is critical for all vegetables and is the most critical for non-fruiting crops. Moisture deficits at the enlargement stage normally result in a smaller edible portion because nutrient uptake and photosynthesis are impaired.
Irrigation, especially overirrigation, during the ripening period may reduce fruit quality. Ample water during fruit ripening reduces the sugar content and adversely affects the flavor of such crops as tomatoes, sweet corn and melons. Since moisture deficits at ripening do not significantly reduce yield of most fruit crops, irrigate at this time with extreme caution.
Drip Irrigation for the Home Landscape, Garden and Orchard
One of the best techniques to use in applying water to home landscapes, gardens and orchards is drip irrigation. This is the controlled, slow application of water to soil. The water flows under low pressure through plastic pipe or hose laid along each row of plants. The water drops out into the soil from tiny holes called orifices which are either precisely formed in the hose wall or in fittings called emitters that are plugged into the hose wall at a proper spacing.
Use drip irrigation for watering vegetables, ornamental and fruit trees, shrubs, vines and container grown plants outdoors.
Drip irrigation is not well suited for solid plantings of shallow-rooted plants such as grass and some ground covers.
The basic concepts behind the successful use of drip irrigation are that soil moisture remains relatively constant, and that air, as essential as water to the plant root system, is always available. In other watering methods there is an extreme fluctuation in soil water content, temperature and aeration of the soil.
Soil, when flooded or watered by sprinkler, is filled to capacity. It is then left to dry out, and often it is not until the plant begins to show signs of stress that it is watered again. When the soil is saturated in this way, there is little or no available oxygen; at the end of the cycle there is insufficient water. Drip irrigation overcomes this traditional watering problem by keeping water and oxygen levels within absorption limits of the plants. It frequently (even daily) replaces the water lost through evaporation and transpiration (evapotranspiration). In addition to maintaining ideal water levels in the soil, this also prevents extreme temperature fluctuations which result from wet-dry cycles associated with other watering methods.
With proper management, drip irrigation reduces water loss by 60 percent or more as compared to traditional watering methods. These methods deliver water at a faster rate than most soils can absorb. Water applied in excess of this penetration rate can only run off the surface, removing valuable topsoil and nutrients. With drip irrigation the water soaks in immediately when the flow is adjusted correctly. There is neither flooding nor run-off, so water is not wasted. With a properly used drip irrigation system, all of the water is accessible to the roots. Watering weed patches, walkways and other areas between plants and rows is avoided. Wind does not carry water away as it can with sprinkler systems, and water lost to evaporation is negligible.
Drip irrigation requires little or no time for changing irrigation sets and only about half as much water as furrow or sprinkler irrigation because water is delivered drop by drop at the base of the plants.
Water shortage and high energy costs motivate gardeners to harvest the greatest possible yield from every precious drop of water. If you have shied away from installing a drip irrigation system because it looked too complicated or too costly, this publication explains how to have one easily and economically.
The financial investment is reasonably small if you are willing to spend a few hours to plan, assemble and install the system. Savings in water combined with increased yield and quality of vegetables and flowers more than pays for the cost of parts to maintain a drip system.
The life of a drip system is extended by proper design, proper filtering, avoiding puncture with tillage tools, mulching over plastic lateral driplines to shield them from sunlight, and flushing and draining lines and storing system components inside a warm building before hard freezing temperatures arrive.
The 3- to 5-gallon-per-minute flow from a typical house faucet limits the area which can be adequately irrigated to usually not more than 1,500 to 2,000 square feet.
From $15 to more than $30 per 100 feet of row can be spent for equipment in an average sized home garden, depending on whether it is simple or has fancy automatic controls, pressure regulators and fertilizer injectors. As with most tools and machines, the simpler the better.
The two basic kinds of drip irrigation systems which have worked best for Texas growers are the two-channel plastic tubing represented by IRS Bi-Wall and Chapin Twin-Wall, and the plastic pipe with insert emitters represented by Submatic, Melnor Tirosh, Spot, Microjet and many others. Some insert emitters are simply made by cutting 1-foot lengths of microtubing.
When planning a drip system, consider your needs, one at a time:
Most water does not contain enough salt to be injurious to plants. However, irrigation water adds salt to the soil, where it remains unless it is removed in drainage water or the harvested crop. When the amount of salt added to the soil exceeds the amount removed, salt accumulates until the concentration in the soil may become harmful to plants.
The principal effect of salinity is to reduce the availability of water to the plant; however, certain salts or ions may produce specific toxic effects. Poor quality irrigation water containing moderate amounts of salt often can be used more successfully with drip irrigation than with sprinkler or surface irrigation. Less total salt is added with drip irrigation since less water is applied. In addition, a uniformly high soil moisture level is maintained with drip irrigation, which keeps the salt concentration in the soil at a lower level.
Salts accumulate in the soil around the edges of the wet area under drip irrigation emitters, and some leaching (removal of salts with drainage water) may be required. Sufficient rainfall is received in much of the state to accomplish any required leaching of salts. However, extra irrigation water may be required in some areas to leach accumulated salts from the root zone. Operating the system when the crop's water requirement is low can probably accomplish required leaching of salts in most cases.
If fruit and ornamental trees are to be drip-irrigated, use insert emitters. The number of emitters per tree or plant depends on plant size. A large fruit or ornamental tree having a canopy spread of 15 feet or more in diameter needs six emitters. A smaller tree or shrub needs one emitter for each 2 1/2 feet of canopy diameter. The number of emitters multiplied by the rated output per emitter gives the flow rate needed to irrigate all the trees and shrubs simultaneously. For example, if there are 12 trees on which 72 emitters will be used, each with a rated output of 1 gallon per hour at 15 pounds per square inch, the flow rate will be 72 gallons per hour or 1.2 gallons per minute. A 1/2 inch main line is sufficient according to the following guidelines.
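The emitter counts and flow rate in this example follow from two small formulas: one emitter per 2 1/2 feet of canopy diameter, and total emitters times rated output divided by 60 for gallons per minute. A minimal Python sketch of that arithmetic (the function names are ours):

import math

def emitters_needed(canopy_diameter_ft):
    """One emitter per 2.5 feet of canopy diameter, per the guideline above."""
    return max(1, math.ceil(canopy_diameter_ft / 2.5))

def flow_rate_gpm(total_emitters, gph_per_emitter):
    """Flow needed to run every emitter at once, in gallons per minute."""
    return total_emitters * gph_per_emitter / 60

print(emitters_needed(15))        # 6 emitters for a 15-foot canopy
print(flow_rate_gpm(12 * 6, 1))   # 1.2 gpm for the 12-tree example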
Make a sketch of the area to be irrigated. Use graph or grid paper to draw the area's shape using a scale of 1 inch to 5 to 10 feet.
Measure the length and width of the area. The distance from the water source to the edge of the area to be irrigated is the length of garden hose or plastic pipe needed to connect to the irrigation system.
Draw in the actual lines of drip hose required. If planning a garden, a drip hose will be run down each row. Count the number of rows and multiply the number of major rows by the row length to get the total length of drip hoses needed. If you run several rows close together (only a few inches apart) to create a bed culture, consider using one drip hose if it is up to 18 inches wide and two drip hoses if it is 24 to 36 inches wide. If wide beds are used for planting flowers, use one drip hose every 18 inches.
Other helpful facts involve the direction of downward slope in the garden and the gallons per minute delivered by your faucet. Use a container of known volume, such as a 5-gallon pail, and a watch to estimate gallons per minute.
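The faucet's delivery rate follows directly from the bucket volume and the fill time. A minimal sketch, with a hypothetical stopwatch reading:

# Bucket test: time how long the faucet takes to fill a pail of known volume.
bucket_gallons = 5.0
seconds_to_fill = 75.0  # hypothetical reading

gpm = bucket_gallons / (seconds_to_fill / 60)
print(f"Faucet delivers about {gpm:.1f} gallons per minute")  # 4.0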
Installing A Drip System
When buying irrigation equipment avoid mixing brands of fittings, hoses and emitters unless they are compatible. The design and installation of Bi-Wall and Twin-Wall drip tubing and the design and installation of Submatic, Melnor, Spot and Microjet emitter systems are discussed separately so that the instructions are easier to understand.
Table 3. Plastic line sizes for lengths less than 100 feet.
When planning a Bi-Wall or Twin-Wall system, use a 1/2-inch (16 millimeter) main water supply plastic hose (header) to feed the water into the drip tubing which runs alongside each row. Most house faucets supply enough water to run 200 to 300 feet of drip tubing at once. Divide irrigation systems for larger areas into two or more sets when the water volume is insufficient to cover the whole area at once.
Parts needed for a drip tubing system with a header are a hose long enough to reach from the house faucet to the header, a 1/2-inch female hose connector, a 1/2-inch diameter header long enough to connect all the drip tubes, an ear tee for each drip tube, a drip tube for every row, a nylon string or strong wire to tie the ends of the drip tubing and a sharp knife.
When a header is used, begin the installation by running a hose from the house faucet to a female hose connector which is installed in the end of the header closest to the faucet. The other end of the header is plugged or folded back and tied off. Be sure the header spans the entire width of the area to be irrigated on the high side.
Place the correct lengths of Bi-Wall drip tubing along each row. Plan rows to make the best use of water.
Small plants such as carrots, onions, radishes, lettuce, bush beans, etc., can be double-rowed; that is, seed can be planted on each side of the drip tubing.
To join the Bi-Wall tubing to the header pipe (the main water supply), use a connecting attachment called an ear tee. At each row, punch a small hole in the side of the 16-millimeter header tubing facing down the row. Use a blunt eight penny nail to punch the holes. Push the ear tee into the hole and wrap the two ears around the header. To secure the far end of the Bi-Wall, fold back 2 inches and tie with a string. If the water contains sand or dirt particles, screw a filter to the hose connector as sand particles and other trash can clog openings in the Bi-Wall tubing.
All of the drip irrigation fittings are connected to the plastic tubing in the same manner. For the hose connector, push the 16-millimeter header over the shaft and under the locking collar. When the header is as far as you can push it, pull back on the tubing. This binds the tubing under the locking collar. To disassemble, reverse the procedure. For installing Bi-Wall tubing, push it on the ear tee as far as it will go; push the collar outward, then grasp the Bi-Wall tubing and pull back on it while holding the ear tee in place with the other hand. This binds the Bi-Wall tubing under the locking collar. Note the difference in the locking collar for the Bi-Wall and the header. If irrigating only one row with Bi-Wall, put a wide Bi-Wall collar on the hose connector, install it in the Bi-Wall and fasten it to a water hose or faucet just as for the header. It may be necessary to twist the locking collar to allow the Bi-Wall to go all the way up. Work the locking collar down on the Bi-Wall, then hold the ear tee in one hand and pull on the Bi-Wall tubing with the other hand. If it leaks around the collar on the ear tee, push the Bi-Wall farther up on the ear tee, twist the locking collar again and pull on the tubing. The notch on the collar should be over the top of the Bi-Wall.
The second type of drip irrigation system involves the use of insert emitters. When designing a drip system with insert emitters, strive to have the same amount of water flowing out of all emitters in the system. Secondly, have the flow rate regulated so that water drips into the soil without puddles forming on the surface. Insert emitter systems are ideally suited for irrigating trees, which are planted farther apart than garden crops, flowers or shrubs.
Trees previously irrigated by the other methods change their root systems when drip irrigation is used. New feeder roots concentrate near the emitters and become major suppliers. It is best to start drip irrigation at the beginning of spring growth to allow time for new roots to develop before hot weather arrives. If drip irrigation is initiated in midsummer, an occasional supplemental irrigation by the old method is recommended to avoid plant stress.
Soil texture is of primary importance in the design and use of drip irrigation. It directly affects the number or placement of emitters. In sandy soil where spaces between sand grains are relatively large, gravitational forces affect water movement more than capillary action. As a result, water moves down rather than laterally through the soil. In finer soils such as clay, capillary action is much stronger and water spreads laterally before penetrating very deeply. An emitter in sandy soil will water an area with a diameter of about 15 inches, while in clay soil the same emitter will water an area up to 2 feet in diameter. Since the same amount of water is released in both cases, the sandy soil obviously receives deeper watering than the clay.
The following chart on emitter placement suggests a 1-gallon-per-hour emitter at the base of the plant, assuming you have a low shrub in sandy soil. In fact, placing two 1/2-gallon-per-hour emitters, each about 9 inches from the base, increases the area of coverage while using the same amount of water. Increasing the wet area encourages wider development of the root system, and watering time is reduced somewhat. However, remember that smaller volume emitters clog more easily than larger volume emitters.
When working with vegetable crops and sandy soil, use closer spacing (12 inches) to ensure that all shallow roots receive sufficient moisture. With finer soils, use greater distances between emitters while still ensuring proper coverage. To get a better idea of soil structure experiment with slow water applications to observe lateral movement and depth of water penetration. Observe the application rate and time so better decisions on emitter placement, as well as watering practices, can be made. Be sure that a sufficient percentage of the root zone is watered. Shallow root zones require emitters with closer spacing; deep roots allow wider spacing. The widest spacing to use safely on vegetables and ground cover is closer than the narrowest required by tree crops. This is shown in the table on the number and placement of emitters.
Water quality may be a factor in emitter location since salts concentrate at the edges of the wet area. It may be necessary to locate emitters so that wet areas overlap the tree trunk to prevent harmful salt accumulations near the trunk.
A popular emitter arrangement for large trees such as pecans uses a loop which circles the tree between the trunk and the dripline. The lateral pipeline which carries water along each row of trees is under ground. A 1/2-inch or 3/8-inch polyethylene pipe connected to the lateral near each tree extends to the soil surface and circles the tree. The tree loop is usually 6 to 12 feet long initially and contains one or two emitters. Additional lengths of pipe 8 to 12 feet long, each containing another emitter, are connected to the initial loop as the trees grow and require more water. Large pecan trees may require tree loops with five to nine emitters.
In-line emitter arrangements have been used satisfactorily for smaller trees such as apples, peaches and citrus. Install two or four emitters in the lateral so that wet areas overlap in line with the tree row.
Emitter selection and performance are keys to the success of all drip irrigation systems. Some emitters perform satisfactorily underground while others must be used only above ground. Emitter clogging is still a major problem in drip irrigation. Emitter openings must be small to release small amounts of water; consequently, they clog easily.
Table 4. Selection, number and spacing of emitters and orifices.
Emitters are more easily observed, cleaned and oriented near the tree when they are located on the soil surface, although drip systems with underground emitters are out of the way. Some emitters can be flushed easily to remove sand or other particles which cause clogging, while others are more difficult to clean.
Ease of installation and durability are important considerations in emitter selection. Most emitters are either connected in-line or by attaching to the lateral. In-line connections are made by cutting the pipe and connecting the emitter to the pipeline at the cut. Clamps, which increase costs, are required for connecting emitters in some pipes. Check the pipe and in-line emitters for correct fit before purchasing. Emitters which attach to the lateral are either inserted into the pipe or clamped to it.
The flexibility of a drip irrigation system makes it ideal for most landscapes. When native plants are transplanted they often require watering for the first year or so until they establish a root system. After that they usually survive on natural rainfall.
As plants grow and watering needs increase, more emitters can be installed very easily. Or, 1-gallon-per-hour emitters can be replaced with 2- or 4-gallon-per-hour emitters.
In landscaping, plants with different watering requirements must frequently be mixed together. Some ornamentals require occasional deep watering, while others prefer more frequent shallow watering. Differing needs can be satisfied either by placing a greater number of emitters or by using emitters with a greater flow rate for plantings requiring extra water. In clay soils it is best to increase the number of emitters rather than the rate of flow since soil density limits absorption rates.
Once the system is set up this way, maximum benefit for all plants is achieved by several shallow waterings (20 minutes to 2 hours each), with an occasional deep watering (several hours) as needed, depending on season, plants and soil type.
Burial of the drip system is usually preferred by landscapers and ornamental gardeners. Generally 3 to 4 inches deep is sufficient. This not only hides the tubing from view but also adds to the system's life expectancy. Most emitters can also be buried, but check them occasionally. Rodent damage (sometimes they chew through the tubing) and accidental damage from shovels or tillers are problems associated with buried systems. Repairing cut or punctured laterals is easy with a couple of connectors and a new section of tubing.
Drip irrigation is the best method for watering landscape trees also. A tree with only 25 percent of its roots wet regularly will do as well as a tree with 100 percent wetting at 14-day intervals. This saves water in drought situations by wetting only part of the root zone. Thus a single lateral line is often sufficient for even large trees.
Remember that the root system grows more vigorously in moist soil. If emitters are placed on only one side of a tree, the root system is not balanced and stability is threatened. In one experiment with drip irrigation, a large crop of trees was blown over in a storm because the roots had been watered on one side only.
When watering closely spaced plants such as garden crops, flowers or shrubs using insert emitters, a system must have the capability to maintain uniformly moist soil near the surface along any row where you wish to germinate seeds.
It is not feasible to place an emitter where each plant will grow. You do not use the same spacing for all vegetables and flowers and you must not grow the same kind of plant in the same spot year after year. All things considered, a spacing of 2 feet between emitters is best for most closely spaced plants and soils; a spacing of 18 inches might be better in very sandy soil.
Water is not wasted with 2-foot spaces even if plants are set 4 or 5 feet apart. Roots soon penetrate the soil around the plant in a radius several feet from the stem, and absorb water from every cubic inch of this soil.
Knowing the total length of a drip hose required allows you to buy a ready-made kit with emitters already inserted in the hose. Usually, hose length in these kits is either 50 or 100 feet. The better kits have a filter and flow control of some sort.
Installing these kits is simple. Lay enough garden hose to reach from the house faucet to the area to be irrigated, attach the hose end to the coupling on the emitter hose and unroll the hose down the first row. At the end of the row, curve the hose back up along the second row and so on for remaining rows. If the kit has a Y connector with equal lengths of emitter hose attached to each leg of the Y, put the Y near the center row at the high end. If there is extra hose, run the excess back over the last row.
Taking one step at a time in customizing a drip system to fit your planting area is fun and easy. First, select an emitter that delivers 1 to 2 gallons per hour when operated in a pressure range of 2 to 10 pounds per square inch. One emitter commonly used in Texas is rated at 2 gallons per hour when operated at a pressure of 10 pounds per square inch. When operated at 2 pounds per square inch, this same emitter delivers 1 gallon per hour. In actual practice the emitter would be operating at a pressure somewhere between these two extremes. Emitter systems with inserts irrigate most uniformly when the pressure in the hose along the row is maintained in a range of 3 to 6 pounds per square inch. The lower the pressure, the greater the effect of elevation changes.
Water flow through a pipe is slowed by the friction it creates. That is why water flows fastest from the emitter nearest the header and slowest from the emitter farthest from the header. Keep this difference as small as possible. Well-designed small systems can be operated with no more than 10 to 15 percent variation in flow rate. Design your system for a uniform flow rate by limiting the emitter hose length to less than 50 feet when the emitters are 2 feet apart on 3/8-inch hose.
With row lengths of 60 to 100 feet select 1/2-inch diameter hose. If the 3/8-inch hose is used for runs up to 100 feet, a drop in flow rate of more than 25 percent from the head to tail of the hose will occur. Water is wasted at the beginning of the row to get enough water into the soil at the end of the row. If the garden is level, it is easy to shorten the length of run by placing the header in the center (halfway down the length of the garden). To keep the water volume adequate increase the diameter of the supply hose or main to 3/4 inch.
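The sizing rules in the last two paragraphs can be collected into a small selection function. This is only a sketch of the stated guidelines (with emitters 2 feet apart); the text does not say what to do between 50 and 60 feet, so the 50-foot boundary here is an assumption.

def lateral_hose_size(row_length_ft):
    """Suggest a drip-hose diameter from the guidelines above."""
    if row_length_ft < 50:
        return "3/8 inch"
    if row_length_ft <= 100:
        return "1/2 inch"
    # Beyond 100 feet, shorten the run, e.g. by centering the header.
    return "split the run or center the header"

print(lateral_hose_size(40))   # 3/8 inch
print(lateral_hose_size(80))   # 1/2 inch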
If the garden slope is only slight and there are only a few rows, put the header on the high end. For steep slopes where rows must be contoured, run the header down the slope and the emitter hose across the slope with the contour.
Now determine if the water supply is sufficient for the drip system to work properly. Count the number of emitters and multiply by the rated gallons per hour of the emitter. Divide this number by 60 to get the gallons per minute your water source must supply to allow the system to irrigate uniformly. For example, 100 emitters multiplied by 2 gallons per hour per emitter equals 200 gallons per hour, 200 gallons per hour divided by 60 equals 3.3 gallons per minute. If your water supply is 5 gallons per minute, design the header hose to irrigate the garden in one set; if your water supply is only 2 to 3 gallons per minute, divide the header into two sets using a tee with two shutoffs to permit irrigating each half of the garden separately.
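The same check can be written out in a few lines of Python; the numbers below reproduce the example in the preceding paragraph.

import math

def required_gpm(emitter_count, gph_per_emitter):
    """Gallons per minute the water source must supply."""
    return emitter_count * gph_per_emitter / 60

def sets_needed(required, supply_gpm):
    """Number of separate irrigation sets the header must be divided into."""
    return math.ceil(required / supply_gpm)

need = required_gpm(100, 2)      # 100 emitters at 2 gph
print(round(need, 1))            # 3.3 gpm
print(sets_needed(need, 5.0))    # 1 set with a 5 gpm supply
print(sets_needed(need, 2.5))    # 2 sets with a 2 to 3 gpm supply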
Select the proper size main and submain (header) hoses next. For flow rate up to 3 gallons per minute, 1/2-inch diameter hose is adequate for the main hose from the faucet to the header and for the header, too. When a flow of 3 to 6 gallons per minute is required to satisfy the emitter hose, the main hose carrying water to the header should be 3/4 inch in diameter and the header can be 1/2-inch diameter hose.
For example, here is a hypothetical garden 20 feet wide and 30 feet long, with 25 feet from the house faucet. It has six drip emitter hoses with emitters 2 feet apart in the hose. Starting at the house faucet, a drip system would require one 80-mesh hose strainer, 25 feet of 1/2-inch supply hose with threaded coupling, one 1/2-inch female swivel hose thread poly compression tee, 20 feet of 1/2-inch header hose, four male hose thread poly compression tees, six 1/2-gallon-per-minute flow control valves, 180 feet of 3/8-inch drip hose with male hose compression couplings and caps, 100 emitters which deliver 1 to 2 gallons per hour and one twist punch. Include several repair couplings and a dozen hole or 'goof' plugs to help repair accidents. Row shutoffs and flow control valves can be omitted, but the system would be less versatile and less uniform in flow rate.
Installing this emitter hose system requires only a knife to cut the hose and a twist punch or hand punch to install insert emitters. Some hose comes with emitters already installed, and the cost is only slightly more.
Assemble the system starting at the house faucet. Lay hose from the faucet to the soil at the edge of the garden, leaving it slack. Sink wooden stakes in the soil to hold the hose and fittings where you place them. Measure pieces of header hose and push them into the compression fittings (tees) so that the drip hose lines up exactly with the center of the row. Then, punch a hole with the twist punch along the top side of the drip hose every 2 feet and press an emitter into each hole. Turn on the water to flush any foreign particles out of the end of the hoses. When the lines are cleaned, stop the water and cap the end of each drip hose. Now it's ready to irrigate.
Operating a Drip System
Operating a drip system is a matter of deciding how often to turn it on and how long to leave it on. The object is to maintain adequate soil moisture without wasting water by applying too much.
Anyone can turn on a faucet for an hour or two every day, and some drip system manufacturers advise leaving systems on continuously for the entire growing season. Not all gardens, however, use the same amount of water daily. Knowing how often and how long to water depends on the system's rate of delivery, soil type, varying weather conditions, kinds of plants, their growth stage and cultural practices in use. Irrigating trees has the same restrictions. Water requirements are influenced by tree size and growth as well as rainfall, temperature, relative humidity and wind velocity. Ideal system operation applies just enough water to replace the amount used by the plants the previous day. Uniform soil moisture content is maintained and the volume of moistened soil neither increases nor decreases.
Estimate daily operating time in hours by dividing the daily water requirement of each plant in gallons by the application rate to each plant in gallons per hour. Continuous irrigation may be required for short periods when water use by the plants is maximum, but continuous operation when it is not required offsets the basic advantage of minimum water application with drip irrigation.
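The operating-time rule in this paragraph reduces to a single division. A minimal sketch, with hypothetical figures for a young tree served by two 1-gallon-per-hour emitters:

def daily_run_hours(gallons_per_day, gph_to_plant):
    """Daily operating time: daily water requirement over application rate."""
    return gallons_per_day / gph_to_plant

# Hypothetical: a tree using 6 gallons a day, watered by two 1-gph emitters.
print(daily_run_hours(6.0, 2.0))  # 3.0 hours per day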
The object of each watering is to bring the moisture level in the root zone up to a satisfactory level. Any more means cutting off necessary oxygen along with the loss of water and nutrients below the root zone. The system is then run again before the satisfactory moisture level in the soil is lost. If plants are showing signs of insufficient moisture and watering duration is long enough (see Table 5), then shorten intervals between waterings.
Table 5. Watering time (in hours) per irrigation.*
Table 6 gives the amount of water various plants need under a range of temperature conditions. This is evapotranspiration; it covers the water used by the plant as well as the water evaporated from the soil. Plants need three to four times as much water in hot weather as they do in cool weather. Both tables are needed to calculate the number of waterings each week.
Table 6. Irrigation time needed each week.*
Divide the amount of water needed per week by the watering time to determine the number of waterings weekly. For example, a closely spaced vegetable garden in medium soil needs to be watered for 2 hours at each watering, and with warm weather the garden needs 6 hours of water each week. Divide six by two and the answer is three waterings per week. The formula makes it easier to figure weekly waterings.
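As a sketch of that formula in Python (the values reproduce the example above):

import math

def waterings_per_week(hours_needed_weekly, hours_per_watering):
    """Divide the Table 6 weekly hours by the Table 5 hours per watering."""
    return math.ceil(hours_needed_weekly / hours_per_watering)

print(waterings_per_week(6, 2))  # 3 waterings per week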
Most home gardens have plants with various watering needs. This makes it difficult to give each type of planting optimum watering, but with some care results can be more than satisfactory. Plants with shallow root zones and shorter watering times benefit from more frequent applications. Other plants requiring deeper watering are satisfied by emitters with greater outputs, or in the case of clay soils, a greater number of emitters.
Knowing the number of gallons delivered per hour by a drip system is also vitally important. If the delivery rate of a system is known, one can easily decide how long to leave it on to get the desired amount of water.
For example, a typical system which delivers 15 gallons per hour to each 100 square feet of area irrigates at the rate of 1/4 inch per hour. Thus, you would leave the system on for 4 hours to get a 1-inch irrigation. To apply a 1-inch irrigation to a garden, run the system long enough to deliver about 60 gallons for each 100 square feet of garden area. Likewise, a system with a 30-gallon-per-hour rate of delivery would do the same job in 2 hours.
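These conversions all rest on one figure: roughly 60 gallons (62.3 exactly) equals 1 inch of water over 100 square feet. A minimal sketch:

GAL_PER_INCH_PER_100_SQFT = 60  # the text's working figure; 62.3 is exact

def inches_per_hour(gph_per_100_sqft):
    """Convert delivery rate to inches of water applied per hour."""
    return gph_per_100_sqft / GAL_PER_INCH_PER_100_SQFT

def hours_for_one_inch(gph_per_100_sqft):
    """Run time needed to apply a 1-inch irrigation."""
    return GAL_PER_INCH_PER_100_SQFT / gph_per_100_sqft

print(inches_per_hour(15))     # 0.25 inch per hour
print(hours_for_one_inch(15))  # 4.0 hours
print(hours_for_one_inch(30))  # 2.0 hours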
To calculate the delivery rate of a particular drip system, read the water meter, run the system for exactly 1 hour, then read the meter again. Subtract the first reading from the second and divide the total gallons per hour by the approximate number of units of 100 square feet in the garden. Divide the gallons per hour per 100 square feet by 60 to see what fraction of an inch is applied in 1 hour.
Another method is to measure the volume delivered by one emitter in 1 minute using a measuring cup or graduated cylinder. Repeat this for several emitters and take the average. Multiply this volume by the number of emitters in the system to get the volume per minute. Multiply this volume by 60 to get volume per hour and convert this to gallons per hour. Again, divide your gallons per hour by the number of units of 100 square feet in the garden to get gallons per hour per 100 square feet.
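Here is that chain of multiplications as a sketch; the cup readings, emitter count and garden size are hypothetical.

# Emitter-by-emitter method with a measuring cup (hypothetical readings).
samples_oz_per_min = [4.2, 4.0, 4.4]  # fluid ounces caught in 1 minute
emitter_count = 90
garden_sqft = 600

avg_oz_per_min = sum(samples_oz_per_min) / len(samples_oz_per_min)
system_gph = avg_oz_per_min * emitter_count * 60 / 128  # 128 fl oz per gallon
gph_per_100_sqft = system_gph / (garden_sqft / 100)

print(f"System delivers {system_gph:.0f} gph, "
      f"{gph_per_100_sqft:.1f} gph per 100 square feet")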
Probably the easiest method is to install an inexpensive water meter with automatic shutoff on the faucet. Then attach the hose which carries the water to the header pipe. Set the meter to deliver the number of gallons needed to apply an inch of water. This volume would be 60 times the number of units of 100 square feet in the garden.
Turn on the water and stay nearby to record the time it shuts off. The elapsed time is how long it took the system to deliver the inch of water.
For newly seeded gardens the system should be run only a short time every day for a few days to keep the surface soil from drying out. Plants loaded with fruit will need an inch of water every other day.
Most people new to drip irrigation notice immediately that the soil surface is dry except for a circle of moist soil right around the emitter. The wet circles overlap where emitter holes are closely spaced. Two examples are the Bi-Wall and Twin-Wall hoses.
Moist surface soil is desirable only when germinating seed. At other times it is a waste of water because tremendous quantities evaporate from a wet soil surface. The small circle of moist surface soil around a drip irrigation emitter is like the tip of an iceberg, because after a few hours of irrigating a great volume of water under the emitter has spread out through the soil for several feet in all directions.
The water which falls gently from the drip hose into the soil is pulled downward by gravity. It is also pulled sideways, moving from one tiny soil particle to the next by a force known as capillary attraction. The slower the water flows into the soil, the greater is its sideways flow relative to its downward flow.
It is easy to see why water from a drip hose in the row spreads out several feet in all directions even though only a small circle of wetness on the soil surface is visible. Actually, the dry surface soil prevents moisture from evaporating into the air, thus conserving water.
Very often after spring or fall tillage, especially rototilling, the soil is fluffy and very loose. This soil will not conduct drip irrigation water properly. Instead of spreading out and wetting the entire soil volume in the garden, the water travels almost straight down. A narrow column of soil will be waterlogged, but most of the surrounding soil remains dry.
For tilled soil to regain its ability to conduct the water sideways, soil particles must settle back together after each spading, plowing or rototilling. Sprinkle irrigate an inch of water on the entire garden after spring and fall tillage to settle soil particles so that the soil will conduct water laterally as well as downward. An inch or two of rain also settles the soil.
Sandy loam soils hold less water per foot of depth than clay loam soils. Water moves downward faster in sandy soils than in those with high clay content. Generally, water spreads sideways more in clay loam than in sandy loam soils, but there are exceptions. Some homeowners have added so much organic matter to their sandy soil that the water from an emitter travels outward in a circular pattern, wetting soil 3 feet away from the emitter to within 3 inches of the soil surface.
In Texas, spring rainfall is often adequate to get plants started. In June and July rainfall is less, and higher air temperatures and longer days cause plants and soil to lose much more water into the air. Watch the weather and record the amount and frequency of rainfall, remembering that supplemental irrigation may be necessary even in a rainy week if the required amount has not been supplied naturally.
The frequency of irrigation should increase as hot summer weather approaches. When temperatures reach the high 90's and humidity is low, fruiting tomato plants require irrigation every other day with at least an inch of water for maximum production. In the fall, with the return of more frequent rainfall and cooler temperatures, allow more time between irrigations. An inch every 5 to 7 days is adequate then.
Inspect plants regularly to determine necessary adjustments in daily irrigation time. If the zone of moistened soil is increasing in size, reduce operating time; if the moistened soil zone is decreasing in size, increase operating time.
The frequency and duration of drip irrigation also depend on the kinds of plants being grown. For instance, tomatoes use more water than any other vegetable in the garden when full grown and laden with fruit.
Three to 6 gallons of water daily usually is sufficient for a tree during the first and second year after planting. Only 3 to 6 hours of irrigation time are required daily during maximum water use months if one 1-gallon-per-hour emitter is used at each tree.
Water is a limited and fragile resource. Each gardener utilizes a small part of the total water consumed, but the total use by all gardeners is significant. Irrigating home gardens and landscapes is considered a luxury use of water by many people. Non-essential use of water implies a special responsibility on the part of gardeners to efficiently use the resource and to protect its quality.
This responsibility is fulfilled by following the recommendations in this bulletin concerning water conservation and by avoiding practices that contribute to surface and groundwater contamination. Among the threats to pure water are improper use of fertilizers and pesticides and soil erosion. Label instructions on all pesticides and fertilizers must be followed faithfully, and water run-off due to excess irrigation should be minimized.
Abscission - The falling off or breaking off of a leaf or fruit as the result of a weak point which forms on the petiole or stem.
Bi-Wall drip tubing - A brand of drip tubing which has a small diameter plastic tube fused to the top side of a large diameter plastic tube. Water flows through the large tube and into the small tube through holes spaced every 4 to 6 feet. Water drips out of the small tubing onto the soil from holes spaced about 1 foot apart. This system allows water to be distributed evenly along a relatively long row of up to several hundred feet.
Drip irrigation - The slow application of water, usually drop by drop, to the soil.
Ear tee - A fitting used to conduct water from a given point along a header pipe into a length of Bi-Wall or Twin-Wall tubing. The ears are two semi-rigid loops of plastic that are looped over the header pipe to prevent the tee from being pushed out by water pressure.
Emitter - A small fitting (usually in the size range of an aspirin to a spark plug) with a precisely formed orifice or channel in it. This emitter is plugged into flexible plastic pipe permitting water to flow out of the pipe at a very slow rate at any point along its length.
Evapotranspiration - The combined loss of water from the soil by evaporation and from leaves by transpiration.
Filter - A device which captures particles of sand or other matter which might plug orifices in the lateral drip lines.
Fittings - Collectively, the parts of a drip system; pipe, connecting tees, valves, emitters, etc.
Flow rate - The volume of water passing through a pipe or out of an emitter.
Flushing - The process of washing captured particles out of a filter.
GPH - Gallons per hour, a term which specifies the rate of water flow through a pipe or the amount of water delivered by a pump.
GPM - Gallons per minute, a term which specifies the rate of water flow through a pipe or the amount of water delivered by a pump.
Header - The length of pipe placed along the high side of the garden to conduct the water into the drip hoses, tubes or lateral driplines that are laid down along the row.
Hose connector - The fitting connected to a plastic pipe or garden hose which has hose threads that match the threads on the house faucet.
Irrigation - Application of water to the soil surface.
Lateral drip lines - Lengths of plastic pipe or tubing, containing emitters or precisely formed orifices, laid down along the center of a row of plants.
Line - Another term for plastic pipe or plastic tubing that is used to transport water along rows of plants or from tree to tree in a drip system.
Line size - Usually the diameter of a particular pipe or tubing used to conduct water in a drip system.
Moisture deficit - A condition in which a plant's requirement for water is greater than the supply available to it, thereby preventing the plant from reaching its full potential of beauty, yield and quality.
Mulch - Generally, any organic or inorganic substance such as hay, lawn clippings, paper or plastic applied to the soil surface to prevent weed growth and water loss.
Orifice - A precisely formed hole in a plastic pipe or tube or in a small fitting (known as an emitter) plugged into plastic pipe through which water flows out in drops or a tiny stream.
Photosynthesis - The formation of glucose by the reaction of carbon dioxide and water in the green leaf.
PSI - Pounds per square inch, a term used to specify water pressure, the amount of force pushing on the water in the pipe.
Root zone - The location of most of a plant's root system in terms of lateral spread and depth.
Run-off - Water that flows over the surface of the ground rather than penetrating the soil.
Salts - Chemical elements in the form of dissolved ions that are carried in irrigation water and deposited in the soil when water moves into plants or evaporates from the soil surface.
Soil texture - The relative amounts of sand, silt and clay present in a soil which places it in one of the textural classes: sand, loamy sand, sandy loam, silty loam, clay loam or clay.
Soil tube - A hollow metal tube that is forced into the soil to remove a sample of soil.
Soluble salts - Various naturally occurring or introduced salts, such as sodium chloride and calcium salts, which are dissolved in water.
Sprinkler - A device attached to a hose to propel streams of water into the air, thereby distributing water evenly over a lawn or garden surface.
Stomates - Tiny pores in the leaf surfaces (more on the underside) which open and close to allow carbon dioxide gas to enter and oxygen and water vapor to exit.
Transpiration - The process by which water moves from the leaf into the air in vapor form.
Twin-Wall drip tubing - A brand of drip tubing which consists of two plastic tubes, one inside the other, joined by a seam that runs along the length. The inner tube conducts the water along the length of the row. It flows into the outer layer of tubing through tiny holes spaced 4 to 6 feet apart. Then the water drips out of tiny holes formed every 12 to 18 inches in the walls of the outer tubing. | 2026-01-24T07:34:55.687790 |
163,169 | 3.582091 | http://www.colby.edu/personal/r/rmscheck/GermanyB4.html | The Social Democrats:
The anti-socialist laws had strengthened but also radicalized and ostracized the socialist workers' movement. The fall of the anti-socialist laws in 1890 allowed the SPD to build up a centrally organized mass party. Membership grew impressively: 100,000 in 1890; 1.1 million in 1914. Votes rose as well: 1.4 million in 1891, 4.25 million in 1912 - from 19 to 34 percent of the overall vote. The SPD drew its strength from the big cities and industrial areas such as the Ruhr, Saxony, and Berlin. In Berlin 75% voted for the SPD in 1912. The SPD was underrepresented in rural and Catholic areas. The free (socialist) trade unions had 2.5 million members in 1914, more than twice the party's membership.
The SPD was a distinctive party. Whereas most bourgeois parties were rather informal associations with few permanent members and a minimal bureaucracy, the SPD became a home to its members and, together with the trade unions, formed a state within the state. The SPD and the socialist trade unions built up an extensive bureaucracy and formed an alternative cultural and social network. Working men and women joined Socialist clubs, sports teams, men's and women's choirs, and poetry groups; socialist associations and institutions existed for almost everything, from party or union-sponsored child care centers to funeral homes; working-class people read the party newspaper and many of the theoretical works by their leaders printed by socialist publishing houses; whenever they felt that the state-supported social security system proved insufficient they could join the union's health and accident insurance or draw from the union's poverty funds.
This alternative structure was made possible because members of the SPD and the free trade unions were generally disciplined and willing to sacrifice time and money for the sake of the whole organization. It mirrored the exclusion of the Socialists from the regular channels of political power in the Second Empire. The bourgeois parties formed alliances against the SPD, the Conservatives mobilized the countryside against them, and the Catholic Center Party attacked them as a godless party. Wilhelm II and his government chastised them for allegedly denying and betraying their fatherland, and careers in the bureaucracy and army were almost impossible for Socialists.
Even when the SPD became stronger in the Reichstag and local parliaments, there was no question of letting it participate in government. Discrimination was widespread, but one must also admit that the SPD's own revolutionary rhetoric and internationalist posture often antagonized the bourgeois parties and state authorities. Nevertheless, one could only wish that the bourgeois parties would have opposed the Nazis in 1930-33 as tenaciously as they resisted Socialist influence before 1914!
To some ironic observers, the SPD seemed like a mirror image of the Prussian state: bureaucratically organized, disciplined, hierarchic (though more democratic), with a venerated leader at the top. August Bebel, the party chief, was often called the "Workers' Kaiser." The goals of his party, however, were contradictory. The party program (1891) had two parts, a declaration of principle and a practical plan. These parts, written by different party leaders, contradicted each other, at least in political practice.
The first part was Marxist. It aimed at the socialization of the means of production and a classless society. The second part was pragmatic and demanded social and democratic reform, a democratic constitution and women's suffrage. It was unclear whether the reform path was meant to lead to the classless society implied by the first part -- and would thus make a revolution superfluous -- or whether it aimed at creating a social welfare state within the framework of a still capitalist society. Marx considered revolution necessary for the overthrow of bourgeois society, but a "revisionist" group of Socialists around Eduard Bernstein disagreed. The discrepancy between the revolutionary and reformist path became the party's main inner conflict. The left wing considered revolution indispensable and wanted to prepare for it (although this would have led the party into illegality). The right wing believed that reforms were possible in the Wilhelmine Empire and that a gradual improvement of the legal, political, and social situation of the proletariat might result from peaceful political work. The party's impressive electoral success and the growth of the socialist trade unions supported this view. A centrist group, finally, used the revolutionary rhetoric to distance themselves from the existing state and to increase the cohesion of the SPD while pursuing a reformist course in their everyday activity.
Socialists, not only in Germany, did not agree on what to make of the development of modern industrial societies. Many of them thought that Marx and Engels believed in the inevitable immiseration of the proletariat leading automatically to revolution. After the end of a long depression in 1896, however, the situation of the working people grew better. Combined pressure of party and trade unions could help to improve conditions without revolution. Marx had claimed that this reformist path would fail because the capitalists would be unable to make the necessary concessions in the long run. Unions, Marx argued, would be able to wring wage increases from the employers up to a certain point; then the whole system would turn against them and only revolution could prevent the workers from falling back into bondage. Marx's most loyal heirs feared that successful reformism would mitigate class conflicts and thus give the doomed capitalist system a new lease on life.
The SPD and the socialist trade unions, however, had too much to lose to build up a radical revolutionary party like the Russian Bolsheviks, a party run in exile and in the underground by a handful of dedicated revolutionaries. The German Social Democrats were caught in the dilemma between successful reformism and revolutionary principle. Karl Kautsky, the party's chief ideologue, put it as follows: "the SPD is a revolutionary party, but not one that organizes revolutions." Although this conflict was never resolved, the inherent dualism did not do as much harm to the SPD as could have been expected. Until 1914 it proved to be more of an integrating than a splitting force. The revolutionary appeal attracted frustrated workers, while the reformist program steadily and visibly increased the party's wealth and parliamentary strength. Many socialists, moreover, needed the revolutionary legitimation for their reformist practice.
This became clear in the party debate on Bernstein's revisionism. In opposition to Marx, Bernstein advocated a socialism that allowed for cooperation with left-wing liberals and, if feasible, for participation in government. He suggested that the SPD drop the revolutionary claim and integrate itself into the existing state, trying to democratize it from within. This was too much for the party doctrinaires, however. Bernstein's revisionism caused a party scandal. But after the party had condemned revisionism one of the party leaders told Bernstein in private: "Look, dear Ede, we all practice reformism. Always do it, but never speak about it!"
That the reformists did not stand up more forcefully and failed to win control of the party was to a large degree the effect of continuing pressure from the political system and the right-to-center parties. Threats of renewed repression of the SPD never ended, and many other parties tried to build a stable bloc against the Social Democrats, excluding the SPD from all political influence.
The dualism within the party became more accentuated in the wake of the Russian revolution of 1905. After the Russian armed forces had lost the war with Japan social and political tensions erupted in the Tsarist Empire. Bourgeois liberals together with socialists fought for a constitutional system and a national parliament, the Duma. The workers went on a general strike, and the Tsar, left without much armed support, made concessions, most of which he withdrew in the following two years. The outburst of revolutionary activity in Russia nevertheless inspired socialists all over the world. German socialists, in particular, thought about the general strike as a means of political struggle. But while reformists wanted to use it merely as a defensive weapon in case Wilhelm II tried to carry out a coup d'état, the radicals on the Left hoped to use the general strike as a prelude to revolution. This alienated the trade unions, which did not want to risk their achievements and funds in a revolutionary gamble for power. The conflict was not resolved when the First World War polarized the socialist movement further, pitting a patriotic majority against an initially small pacifist and revolutionary minority.
It also remained unclear how the reformists would hope to win power in the state. A socialist Reichstag majority would not have been able to bring down the chancellor and to change the constitution - it would probably have provoked more repression. Moreover, the SPD's doctrinairism set limits on its growth at the polls. The Social Democrats, for instance, never managed to appeal to the farmers, not even to the poor rural laborers on the estates of the Junkers. Marxist theory predicted that the agrarian sector would be mostly absorbed by the industrial sector. After the proletarian revolution large agricultural collectives would ensure the essential supplies. This program -- upheld by the SPD -- could not appeal to farmers, who felt that the SPD threatened their property rights. Altering the program, however, would have shaken the SPD's theoretical foundation.
Altogether, despite its limitations, the rise of organized labor in Wilhelmine Germany was an impressive success story. Germany had by far the strongest worker party and free trade unions in the world. The debates within the SPD were closely followed by socialists everywhere. Bebel was an authority venerated or at least respected by socialists around the world, and the Second International, the organization of all socialist parties, was virtually dominated by the SPD. But international observers could not help but wonder whether the SPD's revolutionary and internationalist rhetoric was serious. Josef Stalin, later the dictator of the Soviet Union, once watched German workers waiting to board a train that was to take them to a neighboring city where a big workers' demonstration was scheduled. However, the railroad official whose duty it was to invalidate the train tickets before people could board was not at his post. The German workers got upset but remained outside the gate until the train left. Nobody broke through the gate without an invalidated ticket or stormed the train. Stalin watched with amazement and wondered how such workers would ever be able to undertake a revolution.
For the text of the Socialist International Hymn, see H-German: Die Internationale (1888)
In the light of the Nazi crimes, Jews in Wilhelmine Germany have received special attention. Most German Jews had become highly assimilated. This often resulted in a conspicuous modernity of their forms of life. Jewish families had few children, the women were relatively emancipated, and some Jews showed a predilection for modern art and technology. Assimilation proceeded so rapidly that Jews often seemed the epitome of modernity to outsiders. Conservatives with their cultural criticism of modernity therefore singled out the Jews as a target. Although many Jews remained traditionalist, those who became "modern" were more conspicuous than the others. (Assimilation thus did not lead to a "normalization" of the Jews.)
Latent and open anti-Semitism existed. Toward the end of the long depression, in the early 1890s, an anti-Semitic party was founded that drew its support mostly from rural areas. But it received no more than 3.4% at the Reichstag elections of 1893 and faltered not much later. Other manifestations of anti-Semitism, however, forced many Jews reluctantly to identify themselves as a group again, much as they wanted to be German and consider religion and descent a private matter. Exclusion from fraternities, for instance, compelled Jewish students to found their own organizations. Like Social Democrats, Jews had limited access to the administration, the highest academic rank, and the officer corps, although it proved possible for some of them to reach the rank of a state secretary (which would have been impossible for a Social Democrat).
All separate Jewish organizations, however, always stressed that German Jews had no political agenda in common with non-German Jews. Jewish ethnic ties, they argued, were no more than an historical memory and did not matter in the present. Politically, German Jews predominantly adhered to the liberals, mostly to their left wing. Some Jews supported the Social Democrats, but the conspicuous position of Jewish intellectuals in the SPD's leadership made Jewish support for socialism look much bigger than it was. Zionism did not find many followers; in any case, the German Zionists until shortly before 1914 regarded Palestine as a homeland for Eastern European Jews rather than for themselves.
German Jews shared the patriotic enthusiasm triggered by the outbreak of the war in 1914. They were particularly inspired by the fight against the Jew-baiting Tsarist autocracy. Almost exactly the same proportion of the Jewish and non-Jewish population fought in the German front lines, and the death toll was comparable, too. In Eastern Europe, Jews greeted the advancing German troops as liberators from Russia. Germans, in turn, "discovered" Yiddish as a language related to an older form of German. But the strains of war after 1916 fanned anti-Semitism mainly on the Right and, in particular, among the Pan-Germans. Rightist newspapers started questioning the Jews' commitment to national defense, and popular resentment of war profiteers often mixed with anti-Semitism. Later, under the "threat" of democratization and socialist revolution, rightists denounced democracy and socialism as "Jewish inventions" meant to undermine the strength of the German people. Many Jews were alarmed at the rise of anti-Semitism but hoped that it would calm down after the war. During the Weimar Republic, the Zionist movement thus gained momentum and centered on Germany, while more Jewish intellectuals emphasized a Jewish culture separate from Germany. The majority of German Jews, however, continued to identify with Germany. Even after 1933 many German Jews, having emigrated after the first wave of anti-Jewish terror in early 1933, returned to Germany.
In any case, it would be wrong to see German-Jewish relations only under the aspect of the Holocaust. There was confrontation, but there was also a lot of productive coexistence. There were many mixed marriages: Tirpitz's wife was half-Jewish, and Stresemann and General Seeckt, both important figures of the 1920s, were married to Jewish women. Altogether, German (and Austrian) Jews made some of the greatest cultural and intellectual contributions to world history, if one considers the achievements of Karl Marx, Heinrich Heine, Sigmund Freud, Gustav Mahler, Arnold Schoenberg, Albert Einstein, Max Reinhardt, Theodor Adorno, and many, many others.
No doubt, anti-Semitism did exist in Wilhelmine Germany, but I see no reason to point to German anti-Semitism as having been any more prevalent, nasty, or eliminationist than anti-Semitism in other countries. In many ways, France, Austria, and Russia seemed more openly anti-Semitic than Germany. The Dreyfus Affair in France sparked some of the worst tirades against Jews. Vienna elected an anti-Semite as city mayor in the late 1890s (one should mention, however, that the Austrian emperor Franz Joseph disliked anti-Semitism and refused to counter-sign the appointment of the elected mayor for several years). There was a lot of anti-Jewish feeling in the non-German parts of the Austro-Hungarian Empire, too. Pogroms were common, moreover, in the Russian Empire, where they often received support from state officials. The "Protocols of the Wise Men of Zion," an alleged plan for a Jewish world conspiracy, was a vicious Russian forgery that at times received governmental support in Russia. Pogroms continued in Eastern Europe even after the First World War; I once found a protest of the German women's movement against a Polish pogrom in early 1919. The protest also contained an admonition to Germans not to let anti-Semitism thrive in their own country.
The women's movement:
With some roots going back to the revolution of 1848, a German women's movement constituted itself in the last decade of the nineteenth century. As in other maturing industrial states, women got increasingly involved in work within the "tertiary sector" (office clerks, administrative jobs). They started to gain economic power while the law still subordinated them strictly to husbands and fathers. Bourgeois women, mostly schoolteachers, believed that women in these new professions needed to be protected and organized. A broad range of women's clubs was founded, many of which joined an umbrella organization, the League of German Women's Associations (Bund deutscher Frauenvereine; BDF). The BDF advocated equal rights and better access of women to education. Some of its member associations wanted female suffrage. But the BDF also contained non-political organizations who hoped to provide support for women in the new professions or to organize women for voluntary auxiliary work in society.
Outside the BDF a vocal socialist women's movement emerged, but it saw women's liberation more as an ultimate outcome of a socialist revolution than as a goal that it could pursue together with bourgeois women in the BDF. To the right of the BDF existed a spectrum of patriotic women's associations, for instance a Navy League of German Women hoping to instill in German women an enthusiasm for Tirpitz's fleet building. The Navy League of German Women organized, for example, a savings campaign for the building of a new battleship. A women's colonial league tried to prevent inter-racial marriages of German men in the colonies by sending German women there. Other women's organizations did voluntary social work or prepared for auxiliary services in war (nursing, supplies). Both Catholic and Protestant women's organizations emerged as well. Some of them tried to broaden the opportunities of women to work within the church and the local administration (mostly in stereotypically female roles, for example as providers of poor relief), but many of them saw the question of women's work only as an issue for unmarried women. (There was a surplus of roughly one million women in Germany in 1914, and it more than doubled during the war.) The confessional women's leagues sometimes combined moderate feminism with outspoken nationalism.
Altogether, the Socialist women and the BDF were the most visible and openly political parts of the German women's movement, particularly after a reform in 1908 legalized political activity of women (party membership, attendance of party meetings). But the quiet majority of German women shared more conservative attitudes than the SPD women and the BDF. Nevertheless, Germany had a thriving and complex women's movement by 1914 on a scale comparable only to the United States and England. The German women's movement has been bashed for being less political, feminist, and demanding than its Anglo-Saxon sister movements, but this criticism often downplays the context in which the German women's movement worked. Until 1908 all political activity of women was forbidden, and the undemocratic structure of the German states made political reform look rather hopeless (unlike in the United States or in Britain). Improving educational and social welfare opportunities for women thus appeared as a more feasible and promising immediate goal that might lead to greater rights later on.
The Wilhelmine Empire on the eve of the First World War appears as both a modern and conservative society. It had a thriving industry, a flourishing intellectual and artistic life, and probably the best universities and schools of the world. On the other hand, access to education and upper-level jobs was restricted to men of middle and upper-class background, many of the new approaches in the arts were rejected by the state authorities (which often made them more notable), and there was some discrimination. The landed nobility of the regions east of the Elbe River held privileged positions in the army and the state apparatus and enjoyed a degree of political power incommensurate with its numerical and economic importance. Elections to the diet of the single states privileged property owners, and the democratically elected Reichstag had little power. But there was much talk of suffrage reform above all in Prussia, and the Reichstag became more vocal and influential after 1890 by making better use of its powers. In an age of millions of industrial workers and mass armies it was probably impossible to maintain a semi-autocratic government system in the long run. But how would change have come about? Could the Second Empire have become more constitutional and democratic through peaceful reform, as the moderate SPD members and the liberals hoped? Or was a violent clash inevitable, as the more radical socialists and some conservatives believed? Historians still disagree. The Empire was reformed in October 1918, but democratic concessions were made under the threat of military defeat and revolution.
Compared to other states around 1914, however, living conditions in Germany were safe and stable. There was rather little repression, and the Wilhelmine Empire seemed a livable place for the vast majority of the population. Diplomatically, Germany was not in an enviable position (having no strong allies), and politically it became clear that at some point a reform of the system would be hard to postpone. Many conservative critics were alarmed at the new trends in the arts, in thinking, and in the development of the socialist and the women's movements. But to see Wilhelmine Germany as a state in severe crisis intent on "escaping" into war, as some historians have done, would probably have seemed strange to most contemporaries.
| 2026-01-20T18:18:20.132419 |
428,800 | 4.370888 | http://www.teachervision.fen.com/musical-notation/lesson-plan/4862.html | Jazz and Math: Improvisation Permutations
Grade Levels: 9 - 12
After an introduction to improvisation through a drama game, a discussion, and a video clip, students explore how many different rhythmic combinations can be improvised in a jazz/blues piece of music. They use trial and error techniques, derive a mathematical formula, and apply the formula to calculate the number of possible rhythmic combinations.
- Students will make connections between various types of improvisation and uses for improvisation.
- Students will observe that there are myriad combinations of rhythms to choose from when improvising jazz and blues music, and recognize that while the variations seem infinite, they are in fact finite.
- Students will estimate the number of possible variations given a number of rhythms to choose from to fill one 4 beat measure.
- Students will experiment with creating various combinations in an attempt to verify estimates (trial and error).
- Students will derive a mathematical relationship that will allow them to calculate the actual number of possible musical permutations given the limited set of options to choose from.
- Students will compare the number of actual permutations with previous estimates and account for discrepancies.
- Students will gain understanding of scales and chords so that they can estimate the possible number of notes in a given measure of a 12 bar blues progression (any given measure uses one chord only).
- Students will calculate the actual number of permutations of notes per measure.
- Students will combine calculations for rhythms with calculations for notes to find the overall permutations possible in one measure of a 12 bar blues progression.
- Students will notate a 12 bar blues progression using a different combination of notes and rhythms for each of the 12 bars, and then perform it on a keyboard or virtual piano online.
- VCR, TV, and PBS Ken Burns JAZZ documentary, Episode One "Gumbo." Verbal cue is "…a form called the Blues. And it's a useful form that is elastic because it is simple… " (24:48 - 25:17).
- Examples of improvisations on tape, CD, or from the PBS JAZZ Web site (http://www.pbs.org/jazz/lounge/101_improvisation.htm). Suggestions: For an example of early stages of improvisation, see "Afro" on Erykah Badu's Baduizm album; for more polished examples, try Ella Fitzgerald's Ella Live in Berlin "How High the Moon" which has a long and impressive scat section. Other suggested pieces include: Ella Fitzgerald's The Intimate Ella "Black Coffee", or The Best of Ella Fitzgerald First Lady of Song "Can't We Be Friends?" and "I Won't Dance" (both with Louis Armstrong).
- Fraction Notation Chart (http://www.pbs.org/jazz/classroom/printerfriendlyfractionsworksheet.html)
- Rhythms Worksheet (http://www.pbs.org/jazz/classroom/printerfriendlyrhythms.html)
- List of Ways To Notate One Beat
- White board and dry erase markers AND/OR overhead projector, transparencies and marker
For the last activity before assessment: access to a classroom with headphones so students can listen individually to a piece of jazz music and try their hand at improvising various rhythms along with the music percussively (snapping, tapping, drumming on table).
- Ask students to describe what improvisation means. Discuss improvisation in drama and explain that to get a feel for improvising, students will play a drama game. The game is entitled "You shouldn't have!" and it involves two people who will be improvising dialogue and pantomime. The first person mimes passing a gift to the other person (the gift needs to have a definite shape and size), who receives it, opens it and declares, "You shouldn't have!" followed by an improvised line telling what the gift is. The actors need to play off of each other because the person receiving the gift needs to make up a reasonable thing for the gift to be, considering how the giver pantomimed the shape, size, weight, etc. of the gift.
- Next link the theme of improvisation to students' lives by eliciting examples of improvisation in everyday life (e.g., conversations with other people, dancing, etc.). Discuss what the prerequisites for good improvisation are. For example, to converse well, one needs to know the language fluently—think about how hard it is to improvise a conversation in a language that you are learning in school. To dance well one needs to have learned some dance steps and moves—think of how nerve-racking it can be to get on the dance floor when you don't know how to do a certain type of dance.
- Make the link to improvisation in music by listening to a piece of music that contains improvising, such as Ella Fitzgerald's How High The Moon. Ask for student impressions about the improvisation. How difficult do they think it would be to do that? Have they ever had to make up a story on the spot? Discuss how important and difficult it is to make up good details in both a story and a scat song.
- Then watch the PBS JAZZ video clip from Episode One about the blues being simple and elastic, which allows for an infinite number of variations. After watching, discuss two key points:
- How many different variations can students imagine for Ella's scat, their dramatic improvisations, or for how the teacher conducts class?
- What does the concept of infinite really mean? How many infinite things can they think of? How do you know if something is infinite or just really extensive?
Doing The Math (Small Groups, Experimentation, Teacher Guidance)
- Let's just consider the number of different rhythms that the jazz musician has to decide between when improvising. Look over the rhythmic notation and fraction note chart from the lesson entitled "Rhythmic Innovations." For practice, try the Rhythms Worksheet (http://www.pbs.org/jazz/classroom/printerfriendlyrhythms.html). Estimate how many different combinations of rhythms could be made to fill one measure.
- Have students work in small groups to experiment with writing various combinations of rhythms. Encourage them to look for an efficient and systematic way to find how many possible permutations exist. Suggest that they begin by breaking down the measure into beats and try to figure out how many different ways one can notate a single beat.
- They should come up with a fairly comprehensive listing of all the possible ways to notate one beat. If they miss something, supplement their list (a List of Ways To Notate One Beat [http://www.pbs.org/jazz/classroom/printerfriendlyonebeat.html] may be printed and distributed). This list should go up on the board and be copied down by each group. (This list will act as the finite number of rhythmic possibilities that will be considered in making combinations of measures. A brute-force way of generating a simplified version of such a list is sketched in the code just after these steps.)
- Once there is a class list of the different ways to notate one beat, the groups are to experiment with combinations of four different beats to create unique measures of music.
- After the groups have worked for a while, but before they are thoroughly frustrated, ask groups to read off their measures to the teacher or a scribe who will write them on the board. Nudge them along through the problem solving process by beginning to organize types of combinations. For example, when students give measures starting with a quarter note, write those on one side of the board separate from measures that start with 2 eighth notes. Then begin to ask the class, "how many of you have measures that start with quarter notes? Let's put all of those together..." (and so on).
- Discuss any observable patterns in the measures. Ask, "How many different ways can a measure be completed if it starts out with a quarter note? What about if it starts with two eighth notes?" (This is rather overwhelming to figure out, so tell them that we can break it down and make it a simpler version of the same problem so that we can figure it out.)
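As a cross-check on the class list, the one-beat possibilities can also be enumerated by brute force. Here is a minimal Python sketch, under the simplifying assumption that one beat is four sixteenth-note "slots" filled in order by plain note values of one to four sixteenths; rests, ties, and triplets are left out, so it only approximates the printed list.

# Enumerate every ordered way to fill 4 sixteenth-note slots with note values
# of 1-4 sixteenths (1 = sixteenth, 2 = eighth, 3 = dotted eighth, 4 = quarter).
# Simplifying assumption: no rests, ties, or triplets.

def fill(slots):
    if slots == 0:
        return [[]]  # one way to fill nothing: the empty sequence
    ways = []
    for length in range(1, slots + 1):
        for rest_of_beat in fill(slots - length):
            ways.append([length] + rest_of_beat)
    return ways

one_beat_ways = fill(4)
print(len(one_beat_ways))  # 8 ordered ways under these assumptions
for way in one_beat_ways:
    print(way)             # e.g. [3, 1] = dotted-eighth sixteenth

Under these assumptions the sketch finds 8 ways; the class list will be longer once rests, ties, and triplets are allowed.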
Explicit Problem Solving
- Show students how to break the problem down into smaller chunks, and how to create a simpler mathematical model to test the concept. For example, tell them that we will concern ourselves with only 4 different ways to notate a beat and then figure out how many different combinations are possible for a 2 beat measure.
Four simple ways to notate one beat are: a quarter note, two eighth notes, four sixteenth notes, or a dotted-eighth sixteenth.
If we label the four different notations for a beat A (quarter note), B (two eighth notes), C (four sixteenth notes) and D (a dotted-eighth sixteenth), then we can arrange them into groups of two systematically:
A-A  B-B  C-C  D-D
A-B  B-A  C-A  D-A
A-C  B-C  C-B  D-B
A-D  B-D  C-D  D-C
Essentially, there are 2 beats and one can choose between 4 different notations for each beat; therefore to find the number of possible combinations you multiply 4 x 4 = 16, or 4².
Now that the problem has been broken down and we have created a mathematical model and tested it out, we can apply our model to the problem at hand.
We created a finite and comprehensive list of all the ways to notate one beat. Instead of just four ways, now we have as many ways as are on our class-created list (M). Also, we want to make measures with 4 beats instead of 2.
Essentially, there are four beats and one can choose between M different notations for each beat; therefore to find the number of possible combinations, multiply M x M x M x M, or M⁴.
- Now students will apply this method to the problem at hand. Given a certain number of ways to notate one beat, how many different ways can one notate one measure?
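The counting argument itself can also be verified by brute force. A short Python sketch, assuming the simplified four-item list from the model above; swapping in the class-created list of M notations confirms M⁴ the same way.

import itertools

# Simplified model: four ways to notate one beat, as in the A/B/C/D example.
one_beat = ["quarter", "two eighths", "four sixteenths", "dotted-eighth sixteenth"]

# Every ordered choice of one notation per beat, for 2-beat and 4-beat measures.
two_beat = list(itertools.product(one_beat, repeat=2))
four_beat = list(itertools.product(one_beat, repeat=4))

M = len(one_beat)
print(len(two_beat), M ** 2)   # 16 16
print(len(four_beat), M ** 4)  # 256 256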
Student Improvisation And Reflection (Individual Work)
- The final part of the lesson will be for students to try their hand at improvising. This should be done with headphones so each student may listen closely to a selected jazz piece and improvise rhythms by clapping, tapping, humming, etc. (The headphones allow for less distraction.) Students will process this experience in part by writing a reflection piece about how a jazz musician must decide on rhythms in a split second when they are improvising. What does this tell us about the base of knowledge that gifted jazz musicians must have?
Reapply the method to the problem after adding the condition that any and all types of rests may be used (whole rest, half rest, quarter rest, eighth rest, and sixteenth rest). Students will need to figure out how many ways one beat can be filled with rests and then extend that to all four beats in the manner that they have been taught.
For students who benefit from more visual and/or hands-on activities, the possible ways to notate one beat can be written on notecards. Then the students can rearrange the cards to create different permutations while another student, a scribe, or an aide records the permutations on a sheet of paper. For students with poor motor control, the notecards can be mounted on a thicker medium to make them easier to pick up and rearrange (foam core board or just thin foam rubber are two suggestions).
If students have difficulty organizing their combinations, the teacher can suggest creating a chart and model how to do it.
This lesson correlates to the following math and technology standards established by the Mid-continent Regional Educational Laboratory (McREL) at http://www.mcrel.org/standards-benchmarks/index.asp:
- Formulates a problem, determines information required to solve the problem, chooses methods for obtaining this information, and sets limits for acceptable solutions.
- Generalizes from a pattern of observations made in particular cases, makes conjectures, and provides supporting arguments for these conjectures (i.e., uses inductive reasoning).
- Uses formal mathematical language and notation to represent ideas, to demonstrate relationships within and among representation systems, and to formulate generalizations.
- Understands various sources of discrepancy between an estimated and a calculated answer.
- Uses recurrence relations (i.e., formulas expressing each term as a function of one or more of the previous terms, such as the Fibonacci sequence or the compound interest equation) to model and to solve real-world problems (e.g., home mortgages, annuities).
- Understands counting procedures and reasoning (e.g., use of the Addition Counting Principle to find the number of ways of arranging objects in a set, the use of permutations and combinations to solve counting problems).
- Understands that mathematics is the study of any pattern or relationship, but natural science is the study of those patterns that are relevant to the observable world.
- Understands that theories in mathematics are greatly influenced by practical issues; real-world problems sometimes result in new mathematical theories and pure mathematical theories sometimes have highly practical applications.
- Understands that science and mathematics operate under common principles: belief in order, ideals of honesty and openness, the importance of review by colleagues, and the importance of imagination.
- Understands that mathematics provides a precise system to describe objects, events, and relationships and to construct logical arguments.
In partnership with PBS.
|Provided in partnership with NAfME| | 2026-01-24T20:31:05.375460 |
537,035 | 3.540358 | http://phys.org/news/2011-04-complex-protein-networks-cells-insights.html | (PhysOrg.com) -- Scientists have developed a way of studying cells by comparing how proteins inside them bind with one another.
The team, from Imperial College London, have developed an algorithm called MI-GRAAL that enables them to study protein-protein interaction (PPI) networks, where a cell's proteins bind together in complex networks so that they can carry out their functions. These PPI networks are the building blocks of some of the most significant molecular processes, such as DNA replication, so comparing the PPI networks of different species could give new insights into biology.
In the study, the researchers used MI-GRAAL to compare the protein-protein interaction networks of a range of species, including yeast, human cells and different strains of the herpes virus.
The team found that the PPI networks in the yeast cell and human cell were 78% identical, which surprised them as the species are at opposite ends of the evolutionary spectrum. However, they say their finding suggests that the cells in all life forms have a similar way of organising their internal structures.
The researchers also analysed different strains of the herpes virus and found that it was possible to see that they were from the same family and reconstruct their evolutionary relationships by looking at their PPI networks. Prior to this work, only comparison of DNA sequences had been able to reveal these kinds of patterns.
In addition, the researchers compared Campylobacter jejuni and Escherichia coli (E. coli), which are bacteria that cause food poisoning. They found that 56% of the PPI network in Campylobacter jejuni is present in Escherichia coli. The researchers say their work is the first step towards understanding the role of proteins in these bacteria, which are so far poorly understood.
Dr Natasa Przulj, from the Department of Computing at Imperial College London who is the lead author of the study, says:
Scientists currently compare the genetic sequences of different species or individuals to understand more about how we're put together and the causes of particular diseases, and that has provided us with a wealth of information. However, genes are just the instruction kit that tells networks in our bodies how to produce various types of proteins, and it is these proteins that actually do all the work. Now, we've developed a way of looking at proteins that we hope will give us a completely new perspective on cell biology. Protein-protein interactions are really complex, so we've put a lot of hard work into coming up with an algorithm that can analyse and align them. We think that what we've created should be a fantastically useful tool.
To analyse cells, researchers enter existing data on PPI networks into a computer and they then use the MI-GRAAL algorithm to align and compare the networks across different cells.
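The article does not show MI-GRAAL's internals, but the kind of figure it reports (networks being "78% identical", say) can be illustrated with a much simpler edge-overlap score. Below is a hedged Python sketch using the networkx library; the toy edge lists and the identity node mapping are assumptions for illustration, since discovering a good mapping between networks is exactly the hard problem an aligner like MI-GRAAL solves.

import networkx as nx

# Two toy "PPI networks": nodes are proteins, edges are interactions.
# These edge lists are invented for illustration, not real PPI data.
net_a = nx.Graph([("p1", "p2"), ("p2", "p3"), ("p3", "p4")])
net_b = nx.Graph([("p1", "p2"), ("p2", "p3"), ("p2", "p4")])

# Assume a node mapping between the networks is already known (here, identity).
mapping = {n: n for n in net_a.nodes}

# Score: what fraction of net_a's interactions is preserved in net_b?
preserved = sum(1 for u, v in net_a.edges
                if net_b.has_edge(mapping[u], mapping[v]))
print(100.0 * preserved / net_a.number_of_edges(), "% of edges preserved")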
MI-GRAAL should help scientists to work out the functions of particular proteins in cells, many of which are still poorly understood. It could also be used to compare the PPI networks of diseased and healthy cells, in order to determine which proteins are damaged in particular diseases and to design new treatments that target these proteins.
The study was published in the March 2011 edition of the journal Bioinformatics.
More information: bioinformatics.oxfordjournals.org/ | 2026-01-26T16:50:38.827904 |
302,402 | 4.121898 | http://ecmweb.com/nec/grounding-bonding-definitions | 250.2 Definitions Find more on Grounding and Bonding
Why is grounding so difficult to understand? One reason is that many people do not understand the definitions of several important terms. So let’s review a few of the important definitions contained in Articles 100 and 250.
Bonding. The permanent joining of metal parts together to form an electrically conductive path that has the capacity to conduct safely any fault current likely to be imposed on it.
Author’s comment: Bonding is accomplished by the use of conductors, metallic raceways, connectors, couplings, metallic-sheathed cables with fittings, and other devices recognized for this purpose [250.118].
Bonding jumper. A conductor properly sized in accordance with Article 250 that ensures electrical conductivity between metal parts of the electrical installation.
Effective ground-fault current path [250.2]. An intentionally constructed, permanent, low-impedance conductive path designed to carry fault current from the point of a ground fault on a wiring system to the electrical supply source. The effective ground-fault current path is intended to help remove dangerous voltage from a ground fault by opening the circuit overcurrent protective device.
Equipment grounding conductor. The low-impedance fault-current path used to bond metal parts of electrical equipment, raceways, and enclosures to the effective ground-fault current path at service equipment or the source of a separately derived system.
Author’s comment: The purpose of the equipment grounding (bonding) conductor is to provide the low-impedance fault-current path to the electrical supply source to facilitate the operation of circuit overcurrent protection devices in order to remove dangerous ground-fault voltage on conductive parts [250.4(A)(3)]. Fault current returns to the power supply (source), not the earth! Refer to 250.118 for acceptable types of equipment grounding conductors.
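To see why the Code insists on a low-impedance path back to the source, a rough Ohm's-law estimate helps. The Python sketch below and its 120 V and impedance figures are illustrative assumptions, not values taken from Article 250.

# Rough Ohm's-law estimate of ground-fault current for different path impedances.
# Assumptions for illustration only: 120 V line-to-ground, a 20 A breaker, and
# round-number impedances (25 ohms approximates an earth-only return path).
voltage = 120.0  # volts, line to ground

for impedance in (0.1, 1.0, 25.0):  # ohms, total fault-loop impedance
    current = voltage / impedance
    print(f"{impedance:5.1f} ohm path -> {current:7.1f} A of fault current")

# 0.1 ohm path -> 1200 A: a 20 A breaker opens almost instantly.
# 25.0 ohm path -> 4.8 A: below the breaker's rating, so it never opens and the
# metal parts stay energized -- which is why the earth alone is never an
# effective ground-fault current path.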
Ground (Earth). Earth or a conductive body that is connected to earth.
Ground fault. An unintentional connection between an ungrounded conductor and metal parts of enclosures, raceways, or equipment.
Ground-fault current path [250.2]. An electrically conductive path from a ground fault to the electrical supply source.
Author’s comment: The fault-current path of a ground fault is not to the earth! It’s to the electrical supply source, typically the XO terminal of a transformer. The difference between an “effective ground-fault current path” and “fault-current path” is that the effective ground-fault current path is “intentionally” constructed to provide the low-impedance fault-current path to the electrical supply source for the purpose of clearing the ground fault. A ground-fault current path is simply all of the available conductive paths over which fault current flows on its return to the electrical supply source during a ground fault.
Grounded (Earthed). Connected to earth.
Grounded neutral conductor. The conductor that terminates at the terminal that is intentionally grounded to the earth.
Grounding (Earthing) conductor. The conductor that connects equipment to the earth via a grounding electrode.
Author’s comment: An example would be the conductor used to connect equipment to a supplementary grounding electrode [250.56].
Grounding (Earthing) electrode. A device that establishes an electrical connection to the earth. (See 250.50 through 250.70.)
Grounding electrode (earth) conductor. The conductor that connects the grounded neutral conductor at service equipment [250.24(A)], the building or structure disconnecting means enclosure [250.32(A)], or a separately derived system's enclosure [250.30(A)] to an electrode (earth).
Main bonding jumper. A conductor, screw, or strap that bonds the equipment grounding (bonding) conductor at service equipment to the grounded neutral service conductor in accordance with 250.24(B). (For more details, see 250.24(A)(4), 250.28, and 408.3(C).)
Solidly grounded. The intentional electrical connection of one system terminal to the equipment grounding (bonding) conductor in accordance with 250.30(A)(1).
Author’s comment: The industry calls a system that has one terminal bonded to its metal case a solidly grounded system.
System bonding jumper. The conductor, screw, or strap that bonds the metal parts of a separately derived system to a system winding in accordance with 250.30(A)(1).
Author’s comment: The system bonding jumper provides the low-impedance fault-current path to the electrical supply source for the purpose of clearing the ground fault. For more information see 250.4(A)(5), 250.28, and 250.30(A)(1).
Editor’s note: This information was extracted from Mike Holt’s textbook, Understanding the National Electrical Code | 2026-01-22T20:18:14.281521 |
719,690 | 3.59639 | http://www.theregister.co.uk/2013/10/24/boffins_hide_supercapacitors_on_silicon_chips/ | Boffins hide supercapacitors on silicon chips
We don't need no STEEENKING BATTERIES
Scientists at Vanderbilt University have created a silicon-based supercapacitor they say could scale all the way from grid-level storage down to consumer electronics.
The reason they're trumpeting it as a breakthrough is that silicon, while abundant and with well-established fabrication techniques, doesn't work well in capacitors (or as the authors describe it in their paper, available in full at Nature, "double-layer charge storage"). Its highly reactive nature makes it much better as an anode in a metal-ion battery, they write; in supercapacitors "it reacts readily with some of the chemicals in the electrolytes that provide the ions that store the electrical charge".
As noted in this university press release: "Instead of storing energy in chemical reactions the way batteries do, 'supercaps' store electricity by assembling ions on the surface of a porous material. As a result, they tend to charge and discharge in minutes, instead of hours, and operate for a few million cycles, instead of a few thousand cycles like batteries."
To do this in silicon, the researchers used porous silicon, created by etching the surface of a silicon wafer. To combat the reactivity of the silicon, they then coated the surface with carbon, baked at between 600 and 700°C. The result was a graphene surface coating created at temperatures far lower than the 1400°C-plus usually needed to produce graphene.
Graphene coating creates high-capacity supercap. Image: Cary Pint, Vanderbilt University
The graphene coating both stabilised the surface of the silicon, and allowed them to construct a supercapacitor with energy densities “over two orders of magnitude” better than uncoated porous silicon, “and significantly better than commercial supercapacitors.”
Research leader Cary Pint believes the approach could allow unused silicon on wafers to become the power source for devices like mobile phones. At a larger scale, supercapacitors could be built onto the back of solar cells, again using otherwise-unused silicon. ® | 2026-01-29T08:25:27.409408 |
950,023 | 3.691441 | http://edition.tefl.net/ideas/grammar/present-simple-continuous/ | 15 fun activities for Present Simple/Present Continuous
The best way of teaching the present tenses is to compare and contrast them. These ideas will show you how to do the even more difficult task of combining them in practice activities, all of them done in simple and entertaining ways.
There are many well-known and fun activities for the Present Continuous, such as ones involving miming and ones using pictures of crowded street scenes. There are also quite a few things you can find in photocopiable activity books for the Present Simple, such as timetables where students have to fill the gaps in by asking each other questions. However, by far the easiest and clearest way of showing the meanings and uses of the Present Simple and Present Continuous tenses is to contrast them. Perhaps the main reason why this approach isn’t used more in the classroom is that it can be difficult to find speaking and writing activities with a natural mix of the two tenses. These activities aim to fill that gap once and for all!
1. Mimes plus
Give students a list of Present Continuous sentences that they can mime to their partners for them to guess, e.g. “You are eating bread and jam.” You can add the Present Simple to this by choosing actions that some people do every day (e.g. “You are eating spicy food” and “You are blowing your nose”) and asking them to go on to discuss how often they do those things and why. This is more interesting if it is a topic that is linked to cultural differences, e.g. table manners.
2. Mimes plus Two
Another way of combining Present Continuous mimes with the Present Simple is to ask students to mime actions that they do in their real lives (perhaps choosing from a list with sentences like “You are taking a shower”). The people watching the mimes have to make a Present Continuous sentence to describe the action and also make a true Present Simple sentence about the person miming and that action (e.g. “You take a shower every morning” or “You sometimes take a shower but you usually take a bath”).
3. Definitions game
Give students a list of words and ask them to choose one and describe it with just sentences using the Present Simple and Present Continuous. For example, if the word is “breathe” they could say “I do this many many times every day” and “Everyone in the world is doing this now except some divers.”
4. 20 questions
With the same list of words as in Definitions Game above, students ask each other Present Simple and Present Continuous Yes/No questions until they guess which of the words their partner chose. Possible questions include “Are you doing this now?”, “Is anyone in this class doing this now?”, “Are many people in this city doing this now?”, “Do you do this every day?” and “Do you do this more than twice a week?”
5. Postcards
Ask students to imagine that they are writing a postcard while they are sitting on the balcony of their hotel room, on the beach or outside a café. They should naturally use the Present Continuous to describe what is happening at the moment they are writing (e.g. “The sun is shining” or “The children are playing beach volleyball”) and the Present Simple for their daily routine while on holiday (e.g. “I spend most of the day next to the swimming pool” or “I have breakfast in the same café every morning”), but you could also specifically ask them to stick to those tenses. Alternatively, you could give them sentence stems that should get them using those two tenses, e.g. “All around me…” or “In the evenings…” You can then get students to read other people’s postcards with a task to do as they are reading, for example to guess which place the person writing was supposed to be in or to choose the best holiday.
6. Chain postcards
Especially if you have prepared sentence stems for the start of each line of the postcard, you can combine the ideas in Postcards above with the famously fun game Chain Writing (= Consequences). Each person fills in the first line of a postcard, e.g. completing “I am writing to you from…” with “… the best holiday resort ever” or “… the hills of Tuscany”. They fold over the paper so that the next person can’t see what they have written and pass it to the next person for them to continue the postcard. They continue writing and passing until the postcards are finished, then they are passed one last time and opened for general hilarity and a discussion about which postcards make most sense, sound like the best holiday and/or are funniest.
7. Present Simple and Continuous taboo topics
The strange thing about the use of the Present Continuous to talk about the present is that we actually rarely use it in conversation, and least of all to ask typical textbook questions like “What are you wearing?” In fact, questions like “What kind of underwear are you wearing?” are basically taboo. We can take advantage of this by giving a list of such taboo Present Continuous questions mixed up with similarly taboo Present Simple questions like “How often do you shave your armpits?” If we sprinkle in a few more typical and harmless questions such as “What time do you usually get up?”, we can ask students to rank the questions from 5 points (taboo) to 1 point (easy to answer), then decide on which ranking of question they want to be asked. How many points they actually get depends on how well they answer the question. For example, if they ask for a four point question (usually uncomfortable to answer but not really taboo) and kind of answer it but with lots of pausing and some avoiding of the question, their partners can decide to reward them with two points (half the total of four points that they could have got).
8. Ask and tell
Students make Present Continuous and Present Simple questions, then flip a coin to see whether they will have to answer the question themselves (tails = tell) or be allowed to ask the question to someone else (heads = ask). This is more fun than it sounds because many present tense questions are quite personal and the person who has made the question will often be dismayed by having to answer their own question. You can make this more risqué and add vocabulary by suggesting words and expressions that they can or must include in their questions, e.g. “snore” and “itchy”. Alternatively, they could roll a dice to decide which tense they should use in their questions (e.g. Present Simple if they throw a one, two or three), or the topic they should ask about (e.g. families if they throw a one).
9. Time zones
If you give students a list of countries in different time zones, they should be able to make sentences about what is probably happening there right now, as well as their impressions of what daily life is like, e.g. “People are probably coming home from bars about now. I think they often stay up until very late but sleep after lunch” to describe their picture of Spanish life. Their partners should listen and guess the country.
10. Guess the person
You can also get the students to describe and guess different kinds of people from what they are (probably) doing now and their routines, e.g. “your mother-in-law” from “She texts my husband several times a day” and “At this time she is probably doing a flower arrangement class.”
11. Describe a photo
Perhaps the most natural situation in which to use a mix of the two tenses is to describe a photo containing people that you know, for example “The person standing next to my brother is his girlfriend. She lives in Canada, so they only meet a few times a year.”
12. Tour guides
A group of people who probably use the two tenses together more than the rest of us is tour guides, for example to explain what is happening in a painting and how many people come to see it every day. The same language is fairly natural to describe Tower Bridge opening, Big Ben striking twelve, and a herd of wildebeest running across the plains. You can use this situation by asking students to guess the tourist site from the descriptions and then make up their own descriptions for other people to guess from, or with roleplays in which the people on the tour keep on asking more and more questions.
13. Test your classmates
Students test each other on the present dress and actions and routines of their classmates with questions like “What is George wearing on his feet?” and “Does Ronaldo often wear glasses?” Students will need to have their eyes closed when they are being tested, and they might need to check some of the answers with the person who the question is about.
14. Sentence completion
Give students incomplete sentences for them to complete to give true personal information, e.g. I am feeling __________, I often feel __________, I rarely __________ and My brother is __________. Students read out just the part they have filled in (e.g. “cook” or “hungover”) and their partners guess which sentence they put those words in.
15. Discussion questions
You can easily make discussion questions with the Present Simple and Present Continuous, e.g. “What things are getting better in your country?” and “Do people in your country pay attention to government campaigns? Why/why not?” You can also use both tenses for sentences that students should agree or disagree with, e.g. “People buy brands because they think they are better quality” and “People are slowly becoming more ecologically friendly in their lifestyles.” Alternatively, you can give questions which aren’t written in those tenses but should elicit answers that are, e.g. “Describe the changes in the economy of your country at the moment.” | 2026-02-02T00:41:42.693386 |
194,139 | 3.535885 | http://www.timesrepublican.com/page/content.detail/id/556562/A-woodpecker-as-big-as-a-crow.html?nav=5127 | PILEATED WOODPECKERS (Dryocopus pileatus) are big birds that do draw attention to themselves. They are not particularly common for us in central Iowa, but they do call our mature forest areas home. Almost the same size as a crow, this woodpecker makes a lot of noise whether it is its load calls or its hammer loud chiseling on dead trees. The drumming sound of pecking at a tree is not just for finding food, but it also plays a large part in defining territories between other pileated woodpeckers.
This scribe sees them every year but most often during wildlife observation times while sitting in an elevated tree stand. I'm primarily there to watch and wait for deer. However, I'm entertained by a large variety of other critters big and small as I pass the time. If one wants to hear pileateds make more noise, get two of them together. I've seen an acrobatic forest flight of two from my observation posts. The birds are fast flyers, very agile, and able to navigate a forest full of trees with great ease.
A dead tree may hold carpenter ants. If so, the pileated woodpecker will find them. And for some dead trees of just the right age, a large rectangular hole will be excavated to enter a chamber that is carved out to a depth of 24 inches. Each spring a pair will work together to raise a brood of young offspring. Four white eggs will be laid. Incubation is accomplished by both birds, with the male sitting on the eggs at night. Hatching takes place after 18 days. Twenty-four to 30 days later, the young will take their first flight. Learning takes place during the summer and fall as the parent birds demonstrate what it takes to survive. Come late fall, the young birds are on their own.
Today’s wildlife photo is courtesy of Ruth Knutson of Le Grand. She made this photo of a female Pileated woodpecker while it was perched at the suet feeder of Don and Jane Hays. Pileated woodpeckers are big and mostly covered with black feathers. Their body length is about 16.5 to 17 inches. Their wingspan is 24 inches. And the wings have a large white patch, bottom side only, that shows up distinctly if one sees the bird in flight. A flame-red head crest is also present. For females like the one pictured here, red covers the top and back side of the head. Males have a red patch from the base of the bill all the way over the head. Ruth used a Canon camera and a 55-270 mm lens to capture this image. Thanks Ruth.
Several years ago in the forested wetlands of east central Arkansas, an observer claimed to have seen the big cousin of the Pileated, the Ivory-billed Woodpecker. Teams of expert researchers explored the flooded woodlands of Mississippi River backwaters for several summers. A lot of work and time went into determining whether the birds were indeed Ivory-billed. The reason for all the excitement is that the last known living Ivory-billed was from the 1930s. One can hope the Ivory-billed is still alive with a small but viable population. Even in the best of times, this species is very wary and secretive. This species may be extinct. Good bird books qualify its status as perhaps extinct.
OUTDOOR ACTIVITIES are big business in Iowa. One way to gauge the impact and importance of fishing and hunting is to look at the numbers. Here is a tabulation of licenses purchased in Iowa during 2012. On the fishing side of the equation, residents of the Hawkeye State laid down cash for 328,718 fishing licenses. Of these, 318,003 were for the annual fishing license, 6,938 were lifetime, 1,215 were seven-day permits and 2,562 were for just a one-day license. For those interested in trout, there were 39,351 trout stamps sold. Non-residents purchased 40,090 fishing licenses, and trout enthusiasts paid for 4,306 trout stamps. Most of the trout fishing takes place in the streams of northeast Iowa, where cold spring water is vital to rainbow, brown and brook trout.
Hunting licenses show a total of 164,194 for residents, of which 161,843 were the annual tag and 2,351 were lifetime licenses. The habitat fee, required in most cases for persons age 16 to 64, sold 161,228 times. Migratory game bird fees, which waterfowl hunters need, numbered 24,301. However, anyone can buy this game bird fee, as the funds are vitally important for wetland habitat research and management. Fur harvesters bought 19,219 licenses to legally pursue their passion. Spring turkey tags numbered 16,919 just for seasons 1, 2 and 3. Season 4 tags came in at 12,368. In addition, archers bought 5,295 spring tags. Youth turkey licenses were 3,789. Landowner turkey numbers came in at 5,017.
Deer are a big part of the total picture in Iowa. Residents wanting shotgun season 1 totaled 57,783, and an additional 45,971 chose shotgun season 2. Archers purchased 55,075 bow tags. Muzzleloader sales were 20,425. Landowners wanting to take bucks or does paid for 33,707. If they wanted antlerless deer only, that total was 32,613. Non-residents going for deer had to buy the regular hunting license and habitat fee first, and this came in at 23,421. (Non-residents also paid for 2,162 migratory game bird fees.) Deer tag applications came to 4,073. The shotgun season 1 total was 2,760, season 2 accounted for 961, and muzzleloaders were 1,291. Those willing to wait for any-sex deer tags applied for annual preference points. How many people was that? ... 9,257. It takes three years on average for a non-resident to draw an Iowa deer tag if they want to legally take a buck deer.
A bigger picture emerges from the data I reviewed for you. As big as hunting, fishing and trapping are, as indicated by license sales, add in other outdoor recreational pursuits such as camping, hiking and family gatherings. Where do many of these events take place? In state, county and city parks. According to a recently completed economic impact study regarding natural resources, Iowa's state and county parks, lakes, rivers and trails generate over 56.5 million visits each year. That generates over $3 billion in spending, $700 million in income and nearly 31,000 jobs. It is quite apparent that the people of Iowa want natural resources, in all of their variety and utilized in many different ways, to continue contributing to our well-being and quality of life. This is something to think about. Our forefather conservation leaders had a vision for what could be in Iowa. Will we have the foresight to carry on that vision? Iowa will be a better place if we do.
WINTER. The air temperatures are cold. Some days are a bit warmer and some a lot colder than what we think of as average for January. By the time February and March get here, winter's holdout will be gradually losing the race to an advancing spring. Consider these facts of natural history: Just a few weeks ago, on Jan. 2, our Earth was at the closest point of its orbit around the Sun. We were a mere 91,402,560 miles away from that shining orb. As the earth continues on its orbit, we will gradually get farther from the sun until the whole process starts to repeat itself after June 22. The angle of the northern hemisphere's tilt toward the sun is increasing each day. More solar insolation will build as we approach a new summer season. Photons of light from the sun take about eight minutes to reach Earth. Just one of the sun's solar cycles, the 11-year phenomenon, is a natural variation in the number of sunspots and flares that affect solar irradiance levels on Earth. The current cycle began in 2008 and is expected to peak in May 2013.
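That eight-minute figure is easy to check with the distance quoted above and the standard value for the speed of light, about 186,282 miles per second (the only number here not taken from the column):

$t = \dfrac{d}{c} = \dfrac{91{,}402{,}560\ \text{mi}}{186{,}282\ \text{mi/s}} \approx 491\ \text{s} \approx 8.2\ \text{min}$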
WINTER AND EAGLES also go together. Jan. 26 is one of Iowa's Bald Eagle Day celebrations, at Lake Red Rock, Pella. Hours for eagle watching are sunrise to sunset. Eagle programs, however, will be held from 10 a.m. until 5 p.m. To learn more about eagles at Red Rock, call 641-828-7522. If you don't want to travel to Pella, then make a short drive to Three Bridges County Park, located between Marshalltown and Le Grand. Eagles like to perch in the tall cottonwood trees overlooking the open water of the riffles.
PHOTO CONTEST entries to the Marshall County Conservation Board are due by Feb. 1. A $3 entry fee will be charged per photo to help defray the cost of a chili supper on Feb. 7. For details on submission of photos, call 752-5490 at the Conservation Center at the Grimes Farm.
"People rarely succeed unless they have fun in what they are doing." -Dale Carnegie
Garry Brandenburg is a graduate of Iowa State University with a BS degree in Fish & Wildlife Biology. He is the retired director of the Marshall County Conservation Board. Contact him at PO Box 96, Albion, IA 50005. | 2026-01-21T05:51:52.152141 |
1,143,174 | 3.652355 | http://www.thinknovation.com/section-blog/38-global-warming/48-whats-wrong-with-the-weather.html | The average temperature of Earth's atmosphere reached the highest level ever recorded in the first two-thirds of 1998, jumping off the charts. Six of the first eight months of the year were the warmest since records began in 1866. The warning signs are clear.
Even before 1998, the 14 hottest years on record had all occurred since 1980. University of Massachusetts researchers say that recent temperatures are the warmest in 600 years. A more unstable climate is causing record-breaking heat waves. One hundred Texans died in a prolonged summer heat spell during which temperatures rose above 35 degrees centigrade for weeks on end.
An estimated 3,000 people died in India's most intense heat wave on record. One of the planet's most prominent "hot spots" lies south of the southern tip of Argentina, in Antarctica. The peninsula has warmed up by 2.5 degrees centigrade since the mid-1940s. According to research by the U.S. Geological Survey, Antarctica is warmer now than at any time in the last 4,000 years.
In March, a 200-square-kilometer block of ice broke away from the Larsen B ice shelf, shrinking the shelf to its smallest size on record. Last month an iceberg of 7,125 square kilometers in area separated from the Ronne ice shelf.
Scientists with the British Antarctic Survey believe the Larsen B ice shelf may be on the verge of entering an "irreversible retreat phase." They are also concerned about the collapse of the larger West Antarctic ice sheet, which would raise sea levels by as much as 5 meters and inundate coastal regions.
Half the glacier ice in the European Alps has disappeared in the last century. The famous ice field in America's Glacier National Park is shrinking fast, as are the glaciers in the Patagonian Andes along the Argentine border.
Many scientists would have us believe that this fluctuation in the earth's environment is solely a result of mankind's adding billions of tons of carbon dioxide into the atmosphere each year. While humanity's callous disregard of the atmosphere is a factor in the extreme weather phases we are experiencing, certain researchers are proposing that this is just a cover-up for a much larger problem, a phenomenon that has far-reaching implications for all of civilisation.
In 1992 everything was normal with our sun. It had a magnetic north and south pole. It was functioning normally by scientific standards. In December of 1994, the Ulysses spacecraft from NASA arrived at the sun to measure its magnetic field. NASA was astonished to find out that the magnetic field of the sun no longer had a north and south pole.
The sun's magnetic field had changed dramatically into a homogeneous field. There was, of course, no scientific explanation. No one had ever seen anything like this before. Then the SOHO satellite was launched to study the sun for a two-year period. Early in June 1998, two comets entered the sun.
This is not unusual. As many as twenty-five or more comets or asteroids a year will either enter the sun or graze it. Nothing had ever happened before when the sun was struck by a cosmic body. But this time the sun reacted in a way no one had ever seen before. Approximately 30 to 35 solar flares erupted on the surface of the sun. If even two or three solar flares were to erupt at one time, this would be a great concern because of the magnetic storms that could be caused on earth.
Thirty or 35 is outrageous.
Further, according to the researcher Gregg Braden, the solar proton flux, which is measured in PUI, rose to about 2,500 PUI in the late 1980s. The scientific community was very concerned about this much energy reaching the earth. Do you know what it was a few days ago? 42,000 PUI. No one is saying anything. What can they say?
Another interesting point: on June 25th, 1998, the SOHO satellite that was watching the sun suddenly became inoperative, according to NASA. No more information reached us. Could this be real, or a man-made problem to stop the flow of information to the public? Another interesting point: on June 26th, 1998, we had a major magnetic storm on earth that reached magnitude 6 or 7. Usually the whole world is informed in advance, preparing us for this potential problem. Why did NASA not inform the public?
This is one of the real reasons for the man-made global warming media meme. It makes people depressed and hides the reality of our solar seasons. We won't go into the details of the how and why; that's for other sites. We recommend www.disinfo.com and www.whatdoesitmean.com. | 2026-02-04T23:03:26.150169 |
1,017,750 | 3.56857 | http://www.livinganthropologically.com/anthropology/many-origins-of-agriculture/ | Many Origins of Agriculture – Anthropology 2.4
Just as there is no single way to gather and hunt, there are many different ways to cultivate plants and interact with non-human animals. Major forms of agriculture arose independently in at least seven different areas of the world. Each depended on a different mix of plants and non-human animals. Moreover, there are different mixes of horticulture, or small-scale gardening, with other activities, like gathering and hunting. So instead of talking about the origin of agriculture, we need to think about the many origins of agriculture.
The notion that there was one period of transition, from one form of life to another, takes the experience of one part of the world and projects it on everyone else. It is acceptable to use broad classificatory terms like hunting-and-gathering, horticulture, agriculture, and pastoralism to describe a predominant mode of subsistence. However, these are generalizations and abstractions. They are classifications that may hide a broad range of activities, as people create different historically changing mixes of plants and animals.
Inevitably a change in the mix of plants and animals will have complex effects, which can lead to different forms of social organization. Such changes are not automatic, and they are rarely once-and-for-all.
Putting together the many ways to hunt and gather with many forms of cultivating makes it clear Jared Diamond was playing fast and loose with the evidence when he wrote “The Worst Mistake in the History of the Human Race.” Returning to his headline: “With agriculture came the curses of social and sexual inequality, disease, and despotism” (1987:66). Each of those terms is questionable, and some are blatantly untrue. The evidence is sometimes in the very sources Diamond uses. Going through the curses:
a) Social inequality
At the time of Diamond’s “Worst Mistake,” anthropologists and archaeologists were reassessing the view that agriculture was invariably linked to inequality. Citing sources from the early and mid-1980s, Gary Feinman documents the reassessment:
Although institutionalized inequality has long been recognized among a few select hunter-gatherer populations, recent edited collections have documented that such cases were more prevalent than previously realized. These studies have repeatedly illustrated that agricultural production cannot be considered a necessary precondition for unequal or hierarchically organized social formations, since such sociopolitical transitions have been evidenced in a wide array of nonagricultural populations. (1995:257)
Feinman goes on to say inequality cannot simply be an “epiphenomenon” of agriculture–inequality must be examined on its own terms, since social inequality can show up in non-agricultural societies. Moreover, agricultural societies can be relatively egalitarian.
b) Sexual inequality
Similarly with regard to sexual inequality. Ernestine Friedl’s oft-reprinted article, “Society and Sex Roles” (1978) demonstrated a range of variation among hunters and gatherers, from the relatively egalitarian to pronounced inequality. A range of variation is also seen in horticultural and agricultural societies.
c) Disease
Most of the diseases associated with agriculture are actually linked to closeness with domesticated animals. In the Americas, sophisticated agricultural systems developed, but without the infectious diseases plaguing the Old World.
Diamond specifically cites the work by Armelagos and colleagues. However, although “Disease and Death at Dr. Dickson’s Mounds” is an ominous title, the article actually explains this as a specific case, with specific problems. In the concluding paragraph:
Agriculture is not invariably associated with declining health. A recent volume . . . analyzed health changes in twenty-three regions of the world where agriculture developed. In many of these regions there was a clear, concurrent decline in health, while in others there was little or no change or slight improvements in health. (Goodman and Armelagos 1985)
Agriculture does not inevitably mean declining health–it can also improve health.
As for Diamond’s dramatic example of skeletons from Greece and Turkey, where “height crashed” after agriculture (1987:66), more careful reviews of specific areas show this is not a general or absolute trend: “Several geographic settings show stature declines with agriculture adoption or intensification in prehistoric societies. In contrast, other regions show increases or no change in stature. Thus, there is evidence for stature reduction with agriculture in selected settings, but this is not a universal pattern” (Larsen 1995:191). Again, there is not one universal transition to agriculture, nor a universal pattern of a height decline or increase.
d) Despotism
What exactly Diamond has in mind by despotism is unclear, but there are certainly many non-state horticultural and agricultural societies. Anthropologists had long documented herders and agriculturists featuring non-government forms of political organization, where leaders had no abusive power: the best leaders could persuade, but had no coercive authority.
Domesticating plants and animals was not one distinct event. It was not in itself a watershed moment or definitive transition. In different parts of the world, domesticating specific plants and animals initiated processes with uncertain outcomes and consequences. But there was no such key turning point that inevitably put humanity down a pre-destined path. As Tim Ingold writes about domesticating animals, “when hunters became pastoralists they began to relate to animals, and to one another, in different ways. But they were not taking the first steps on the road to modernity” (2000:75 and see Domestication of plants and animals opens relational pathways).
Next: 2.5 – More than Guns, Germs, and Steel
Previous: 2.3 – Domestication of Plants and Animals Opens Relational Pathways
To cite: Antrosio, Jason, 2011. The many origins of agriculture. Living Anthropologically, http://www.livinganthropologically.com/anthropology/many-origins-of-agriculture/. Last updated July 7, 2011. | 2026-02-02T23:24:22.280252 |
634,658 | 3.785539 | http://www.thefullwiki.org/History_of_Sicily | The history of Sicily has seen it usually controlled by greater powers—Roman, Vandal, Byzantine, Islamic, Hohenstaufen, Catalan, Spanish—but also experiencing periods of independence, as under the Greeks and later as the Emirate then Kingdom of Sicily. Although today part of the Republic of Italy, it has its own distinct culture.
Sicily is both the largest region of the modern state of Italy and the largest island in the Mediterranean Sea. Its central location and natural resources ensured that it has been considered a crucial strategic location due in large part to its importance for Mediterranean trade routes. For example, the area was highly regarded as part of Magna Graecia, with Cicero describing Siracusa as the greatest and most beautiful city of all Ancient Greece.
At times the island has been at the heart of great civilizations, at other times it has been nothing more than a colonial backwater. Its fortunes have often waxed and waned depending on events out of its control, in earlier times a magnet for immigrants, in later times a land of emigrants. On rare occasions, the people of Sicily have been able to wrest control of their island and live through fleeting moments of political independence.
The indigenous peoples of Sicily, long absorbed into the population, were tribes known to ancient Greek writers as the Elymians, the Sicani and the Siculi or Sicels (from which the island gets its name). Of these, the last were clearly the latest to arrive on this land and were related to other Italic peoples of southern Italy, such as the Italoi of Calabria, the Oenotrians, Chones, and Leuterni (or Leutarni), the Opicans, and the Ausones. It is possible, however, that the Sicani were originally an Iberian tribe. The Elymi, too, may have distant origins outside of Italy, in the Aegean Sea area. Complex urban settlements become increasingly evident from around 1300 BC.
From the 11th century BC, Phoenicians began to settle in western Sicily, having already started colonies on the nearby parts of North Africa. Within a century we find major Phoenician settlements at Soloeis (Solunto), present-day Palermo and Motya (an island near present-day Marsala). As Carthage grew in power, these settlements came under its direct control.
Sicily was colonized by Greeks in the 8th century BC. Initially, this was restricted to the eastern and southern parts of the island. The most important colony was established at Syracuse in 734 BC. Other important Greek colonies were Gela, Akragas, Selinunte, Himera, Kamarina and Zancle or Messene (modern-day Messina, not to be confused with the ancient city of Messene in Messenia, Greece). These city states were an important part of classical Greek civilization, which included Sicily as part of Magna Graecia - both Empedocles and Archimedes were from Sicily.
These Greek city-states enjoyed long periods of democratic government, but in times of social stress, in particular, with constant warring against Carthage, tyrants occasionally usurped the leadership. The more famous include: Gelon, Hiero I, Dionysius the Elder and Dionysius the Younger.
As the Greek and Phoenician communities grew more populous and more powerful, the Sicels and Sicanians were pushed further into the centre of the island. By the 3rd century BC, Syracuse was the most populous Greek city in the world. Sicilian politics was intertwined with politics in Greece itself, leading Athens, for example, to mount the disastrous Sicilian Expedition in 415 BC during the Peloponnesian War.
The Greeks came into conflict with the Punic trading communities, by now effectively protectorates of Carthage, with its capital on the African mainland not far from the southwest corner of the island. Palermo was a Carthaginian city, founded in the 8th century BC, named Zis or Sis ("Panormos" to the Greeks). Hundreds of Phoenician and Carthaginian grave sites have been found in a necropolis over a large area of Palermo, now built over, south of the Norman palace, where the Norman kings had a vast park. In the far west, Lilybaeum (now Marsala) was never thoroughly Hellenized. In the First and Second Sicilian Wars, Carthage was in control of all but the eastern part of Sicily, which was dominated by Syracuse. However, the dividing line between the Carthaginian west and the Greek east moved backwards and forwards frequently in the ensuing centuries.
The constant warfare between Carthage and the Greek city-states eventually opened the door to an emerging third power. In the 3rd century BC the Messanan Crisis motivated the intervention of the Roman Republic into Sicilian affairs, and led to the First Punic War between Rome and Carthage. By the end of the war (242 BC), and with the death of Hiero II, all of Sicily except Syracuse was in Roman hands, becoming Rome's first province outside of the Italian peninsula.
The success of the Carthaginians during most of the Second Punic War encouraged many of the Sicilian cities to revolt against Roman rule. Rome sent troops to put down the rebellions (and it was during the siege of Syracuse that Archimedes was killed). Carthage briefly took control of parts of Sicily, but in the end was driven off. Many Carthaginian sympathizers were killed - in 210 BC the Roman consul M. Valerian told the Roman Senate that "no Carthaginian remains in Sicily".
For the next six centuries Sicily was a province of the Roman Republic and later Empire. It was something of a rural backwater, important chiefly for its grain fields which were a mainstay of the food supply of the city of Rome until the annexation of Egypt after the Battle of Actium largely did away with that role. The empire made little effort to Romanize the region, which remained largely Greek. One notable event of this period was the notorious misgovernment of Verres as recorded by Cicero in 70 BC in his oration, In Verrem. Another was the Sicilian revolt under Sextus Pompeius, which liberated the island from Roman rule for a brief period.
A lasting legacy of the Roman occupation, in economic and agricultural terms, was the establishment of the large landed estates, often owned by distant Roman nobles (the latifundia).
Despite its largely neglected status, Sicily was able to make a contribution to Roman culture through the historian Diodorus Siculus and the poet Calpurnius Siculus. The most famous archeological remains of this period are the mosaics of a nobleman's villa in present day Piazza Armerina.
It was also during this period that in Sicily we find one of the very first Christian communities. Amongst the very earliest Christian martyrs were the Sicilians Saint Agatha of Catania and Saint Lucy of Syracuse.
As the Roman Empire was falling apart, a Germanic tribe known as the Vandals took Sicily in 440 AD under the rule of their king Geiseric. The Vandals had already invaded parts of Roman France and Spain, inserting themselves as an important power in western Europe. However, they soon lost these newly acquired possessions to another East Germanic tribe in the form of the Goths. The Ostrogothic conquest of Sicily (and Italy as a whole) under Theodoric the Great began in 488; although the Goths were Germanic, Theodoric sought to revive Roman culture and government and allowed freedom of religion.
The Gothic War took place between the Ostrogoths and the Eastern Roman Empire, also known as the Byzantine Empire. Sicily was the first part of Italy to be taken, under general Belisarius, who was commissioned by Eastern Emperor Justinian I. Sicily was used as a base for the Byzantines to conquer the rest of Italy, with Naples, Rome, Milan and the Ostrogoth capital Ravenna falling within five years. However, a new Ostrogoth king, Totila, drove down the Italian peninsula, plundering and conquering Sicily in 550. Totila, in turn, was defeated and killed in the Battle of Taginae by the Byzantine general Narses in 552.
When Ravenna fell to the Lombards in the middle of the 6th century, Syracuse became Byzantium's main western outpost. Latin was gradually supplanted by Greek as the national language and the Greek rites of the Eastern Church were adopted.
Byzantine Emperor Constans II decided to move from the capital Constantinople to Syracuse in Sicily in 663. The following year he launched an assault from Sicily against the Lombard Duchy of Benevento, which then occupied most of Southern Italy. The rumours that the capital of the empire was to be moved to Syracuse, along with small raids, probably cost Constans his life, as he was assassinated in 668. His son Constantine IV succeeded him, a brief usurpation in Sicily by Mezezius being quickly suppressed by the new emperor. Contemporary accounts report that the Greek language was widely spoken on the island during this period.
In 826, Euphemius, the commander of the Byzantine fleet of Sicily, forced a nun to marry him. Emperor Michael II caught wind of the matter and ordered that general Constantine end the marriage and cut off Euphemius' nose. Euphemius rose up, killed Constantine and then occupied Syracuse; he in turn was defeated and driven out to North Africa. He offered rule of Sicily to Ziyadat Allah, the Aghlabid Emir of Tunisia, in return for a place as a general and safety; an Islamic army of Arabs, Berbers, Moors, Cretans and Persians was sent. The conquest was a see-saw affair: the invaders met much resistance and had internal struggles amongst themselves, and it took over one hundred years for the conquest of Byzantine Sicily to be completed, with Syracuse holding out for a long time. Taormina fell in 902, and all of the island was conquered by 965.
Throughout this reign, continued revolts by Byzantine Sicilians occurred, especially in the east, and parts of the lands were even re-occupied before being quashed. Agricultural items such as oranges, lemons, pistachios and sugar cane were brought to Sicily, and the native Christians were allowed nominal freedom of religion on payment of the jizya (a tax imposed on non-Muslims by Muslim rulers) for the right to practise their own religion. However, the Emirate of Sicily began to fragment as intra-dynastic quarrels broke out within the Muslim regime. By the 11th century, mainland southern Italian powers were hiring ferocious Norman mercenaries, Christian descendants of the Vikings, and it was the Normans under Roger I who conquered Sicily from the Muslims. After taking Apulia and Calabria, he occupied Messina with an army of 700 knights. In 1068, Roger and his men defeated the Muslims at Misilmeri, but the most crucial battle was the siege of Palermo, which led to Sicily being completely under Norman control by 1091.
Palermo continued on as the capital under the Normans. Roger's son, Roger II of Sicily, was ultimately able to raise the status of the island, along with his holdings of Malta and Southern Italy, to a kingdom in 1130. During this period the Kingdom of Sicily was prosperous and politically powerful, becoming one of the wealthiest states in all of Europe, even wealthier than England.
The Norman kings relied mostly on the local Sicilian population for the more important government and administrative positions. Initially, Greek for the most part remained the language of administration, while Norman was the language of the royal court. Significantly, immigrants from Northern Italy and Campania arrived during this period, and linguistically the island would eventually become Latinised; in terms of church, it would become completely Roman Catholic, whereas previously, under the Byzantines, it had been more Eastern Christian.
The most significant change the Normans were to bring to Sicily was in the areas of religion, language and population. Almost from the moment Roger I controlled much of the island, immigration was encouraged from both Northern Italy and Campania. For the most part these consisted of Lombards who were Latin speaking and more inclined to support the Western church. With time, Sicily would become overwhelmingly Roman Catholic and a new vulgar Latin idiom would emerge that was distinct to the island.
Roger II's grandson, William II (also known as William the Good) reigned from 1166 to 1189. His greatest legacy was the building of the Cathedral of Monreale, perhaps the best surviving example of siculo-Norman architecture. In 1177 he married Joan of England (also known as Joanna). She was the daughter of Henry II of England and the sister of Richard the Lion Heart. When William died in 1189 without an heir, this effectively signalled the end of the Hauteville succession. Some years earlier, Roger II's daughter, Constance of Sicily (William II's aunt) had been married off to Henry VI of Hohenstaufen, meaning that the crown now legitimately transferred to him. Such an eventuality was unacceptable to the local barons, and they voted in Tancred of Sicily, an illegitimate grandson of Roger II.
Tancred died in 1194 just as Henry VI and Constance were travelling down the Italian peninsula to claim their crown. Henry rode into Palermo at the head of a large army unopposed and thus ended the Norman Hauteville dynasty, replaced by the south German (Swabian) Hohenstaufen. Just as Henry VI was being crowned as King of Sicily in Palermo, Constance gave birth to Frederick II (sometimes referred to as Frederick I of Sicily).
Frederick, like his grandfather Roger II, was passionate about science, learning and literature. He created one of the earliest universities in Europe (in Naples), wrote a book on falconry (De arte venandi cum avibus, one of the first handbooks based on scientific observation rather than medieval mythology). He instituted far-reaching law reform formally dividing church and state and applying the same justice to all classes of society, and was the patron of the Sicilian School of poetry, the first time an Italianate form of vulgar Latin was used for literary expression, creating the first standard that could be read and used throughout the peninsula.
Many repressive measures were passed by Frederick II in order to please the Popes, who could not tolerate Islam being practiced in the heart of Christendom; this resulted in a rebellion by Sicily's Muslims. The rebellion in turn triggered organized resistance and systematic reprisals and marked the final chapter of Islam in Sicily. The Muslim problem characterized Hohenstaufen rule in Sicily under Henry VI and his son Frederick II. The rebellion abated, but direct papal pressure induced Frederick to transfer all his Muslim subjects en masse deep into the Italian hinterland, to Lucera. In 1224, Frederick II, Holy Roman Emperor and grandson of Roger II, expelled the few remaining Muslims from Sicily.
Frederick was succeeded firstly by his son, Conrad, and then by his illegitimate son, Manfred, who essentially usurped the crown (with the support of the local barons) while Conrad's son, Conradin, was still quite young. A unique feature of all the Swabian kings of Sicily, perhaps inherited from their Siculo-Norman forefathers, was their preference for retaining a regiment of Saracen soldiers as their personal and most trusted troops. Such a practice, amongst others, ensured an ongoing antagonism between the papacy and the Hohenstaufen. Hohenstaufen rule ended with the death of Manfred at the battle of Benevento (1266).
Throughout Frederick's reign, there had been substantial antagonism between the Kingdom and the Papacy, that was part of the Guelph Ghibelline conflict. This antagonism was transferred to the Hohenstaufen house, and ultimately against Manfred.
In 1266 Charles I, duke of Anjou, with the support of the Church, led an army against the Kingdom. They fought at Benevento, just to the north of the Kingdom's border. Manfred was killed in battle and Charles was crowned King of Sicily by Pope Clement IV.
Growing opposition to French officialdom and high taxation led to an insurrection in 1282 (the Sicilian Vespers), which succeeded with the support of Peter III of Aragón, who was crowned King of Sicily by the island's barons. Peter III had previously married Manfred's daughter, Constance, and it was for this reason that the Sicilian barons effectively invited him. This victory split the Kingdom in two, with Charles continuing to rule the mainland part (still known as the Kingdom of Sicily as well). The ensuing War of the Sicilian Vespers lasted until the peace of Caltabellotta in 1302, although it was to continue on and off for a period of 90 years. With two kings both claiming to be the King of Sicily, the separate island kingdom became known as the Kingdom of Trinacria. It is this very split that ultimately led to the creation of the Kingdom of the Two Sicilies some 500 years on.
Peter III's son, Frederick III of Sicily (also known as Frederick II of Sicily) reigned from 1298 to 1337. For the whole of the 14th century, Sicily was essentially an independent kingdom, ruled by relatives of the kings of Aragon, but for all intents and purposes they were Sicilian kings. The Sicilian parliament, already in existence for a century, continued to function with wide powers and responsibilities.
During this period a sense of a Sicilian people and nation emerged; that is to say, the population was no longer divided between Greek, Arab and Latin peoples. Catalan was the language of the royal court, and Sicilian was the language of the parliament and the general citizenry. These circumstances continued until 1409, when, owing to the failure of the Sicilian line of the Aragonese dynasty, the Sicilian throne passed to the Crown of Aragon.
With the union of the crowns of Castile and Aragon in 1479, Sicily was ruled directly by the kings of Spain via governors and viceroys. In the ensuing centuries, authority on the island was to become concentrated amongst a small number of local barons.
Sicily suffered a ferocious outbreak of the Black Death in 1656, followed by a damaging earthquake in the east of the island in 1693; the subsequent rebuilding created the distinctive architectural style known as Sicilian Baroque. The island was also frequently attacked by Barbary pirates from North Africa. Periods of rule by the crown of Savoy (1713-1720) and then the Austrian Habsburgs gave way to union (1734) with the Bourbon-ruled kingdom of Naples, under the rule of Don Carlos of Bourbon (later Charles III of Spain).
The Bourbon kings officially resided in Naples, except for a brief period during the Napoleonic Wars between 1806 and 1815, when the royal family lived in exile in Palermo. The Sicilian nobles welcomed British military intervention during this period, and a new constitution was developed specifically for Sicily based on the Westminster model of government. The Kingdoms of Naples and Sicily were officially merged in 1816 by Ferdinand I to form the Kingdom of the Two Sicilies (although the term had already come into use in the previous century). This single act effectively put an end to Sicilian aspirations of independent responsible government.
Simmering discontent with Bourbon rule and hopes of Sicilian independence were to give rise to a number of major revolutions in 1820 and 1848 against Bourbon denial of constitutional government. The 1848 revolution resulted in a sixteen-month period of independence from the Bourbons before their armed forces took back control of the island on 15 May 1849. The bombardments of Messina and Palermo earned Ferdinand II the name "King Bomba".
Sicily was joined with the Kingdom of Sardinia in 1860 following the expedition of Giuseppe Garibaldi's Mille; the annexation was ratified by a popular plebiscite. The Kingdom of Sardinia became in 1861 the Kingdom of Italy, in the context of the Italian Risorgimento.
In 1866, Palermo revolted against Italy. The city was bombarded by the Italian navy, whose forces disembarked on September 22 under the command of Raffaele Cadorna. Italian soldiers summarily executed the civilian insurgents and took possession once again of the island.
A limited but long guerrilla campaign against the unionists (1861-1871) took place throughout southern Italy and in Sicily, provoking a severe military response from the Italian government. These insurrections were unorganized and were regarded by the government as the work of "brigands" ("Brigantaggio"). Ruled under martial law for several years, Sicily (and southern Italy) was the object of a harsh repression by the Italian army, which summarily executed thousands of people, took tens of thousands prisoner, destroyed villages, and deported people.
The Sicilian economy did not adapt easily to unification, and in particular competition from Northern industry made attempts at industrialization in the South almost impossible. The masses suffered from the introduction of new forms of taxation and, especially, from the new Kingdom's extensive military conscription, and the Sicilian economy declined, leading to an unprecedented wave of emigration.
In 1894 labour agitation through the radical left-wing Fasci dei lavoratori led again to the imposition of martial law.
Ongoing government neglect in the late 19th century ultimately enabled the establishment of organised crime networks commonly known as the mafia. These were gradually able to extend their influence across all sectors over much of the island (and many of their operatives also emigrated to other countries, particularly the United States). The mafia was partly contained under the Fascist regime beginning in the 1920s, but recovered quickly following the World War II Allied invasion of Sicily in July 1943.
Following some political agitation, Sicily became an autonomous region in 1946 under the new Italian constitution, with its own parliament and elected President. Sicily benefited to some extent from the partial Italian land reform of 1950-1962 and special funding from the Cassa per il Mezzogiorno, the Italian government's development Fund for the South (1950-1984). Sicily returned to the headlines in 1992, however, when the assassination of two anti-mafia magistrates, Giovanni Falcone and Paolo Borsellino triggered a general upheaval in Italian political life.
In the past decade, Sicily, and its surrounding islets, has become a target destination for illegal immigrants and people-smuggling operations. | 2026-01-28T01:58:13.221822 |
743,474 | 3.952898 | http://www.mnn.com/earth-matters/space/stories/far-side-of-the-moon-explained | Far side of the moon explained
The moon always keeps the same side turned toward Earth, and now scientists have a way to describe the far side.
Thu, Nov 11 2010 at 3:26 PM
NOW YOU SEE IT: This graphic (not to scale) shows that the moon's crust is thickest on the central far side and becomes thinner towards the north pole in a manner described with a simple math formula. (Photo: Science/AAAS)
The far side of the moon is forever hidden from the naked eye on Earth, but now scientists have developed a simple way to describe how it looks, and in doing so could shed light on its enigmatic history.
The simple mathematical formula they devised "explains at least a quarter of the moon's geography and geology," including the lunar far side's highlands, Ian Garrick-Bethell, a lunar scientist at the University of California, Santa Cruz, said. [Graphic: The moon's far side explained]
The near and far sides of the moon are very different — for instance, elevations on the far side are some 1.2 miles (1.9 km) higher, on average — and understanding the roots of those differences could shed light on the mysterious early days of the moon.
Far side of the moon
The moon always keeps the same side turned toward Earth, which means one cannot see its far side — often erroneously referred to as its "dark side" — from Earth's surface. Humanity saw its first pictures of the far side of the moon in 1959 from unmanned probes, and human eyes first directly observed it during the Apollo 8 mission in 1968.
Researchers discovered the formula while analyzing sets of lunar topography and gravity data, Garrick-Bethell told SPACE.com.
The stretch of the moon's far side surface explained by the new formula has to be the oldest lunar feature seen yet, since it lies beneath the ancient South Pole-Aitken Basin. The mathematics of it is similar to what applies to Jupiter's tidal effects on its moon Europa.
"Europa is in many ways different from the moon, but early on, the moon had a liquid ocean under its crust, and it likely shares that in common with present-day Europa," Garrick-Bethell said. "The ocean for the moon was of liquid rock, however, not water."
Moon's magma ocean
Just as the moon tugs on Earth's oceans, generating tides, so does Earth pull at the moon. The researchers suggest that roughly 4.4 billion years ago, when the moon was less than 100 million years old and its crust floated on an ocean of molten rock, these tidal effects caused distortions that were later frozen in place.
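The story doesn't reproduce the formula itself. Purely as an illustration of what a degree-2 tidal description looks like (an assumption on our part, not necessarily the paper's exact expression), the classic equilibrium tidal bulge varies with the angle $\theta$ from the Earth-moon axis as

$h(\theta) \propto P_2(\cos\theta) = \tfrac{1}{2}\left(3\cos^2\theta - 1\right),$

which is thickest along the axis ($\theta = 0$ or $180^\circ$) and thinnest on the ring $90^\circ$ away. A pattern of that general shape could plausibly be frozen into a crust floating on a magma ocean, consistent with the crustal-thickness graphic above.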
"People have been thinking about tidal explanations for the large-scale structure and shape of the moon for at least 100 years," Garrick-Bethell said. "The new thing here was to look at only one specific region of the moon that is very old, rather than to test the hypothesis over the moon as a whole, which was done previously.
"As a whole, the moon exhibits a wide range of geologic processes, some young and some old, so I don't think it's fair to explore it as a whole."
These findings yield insight into the fundamental processes that built the lunar crust, Garrick-Bethell said. "I would like to map out how this terrain may actually extend to other parts of the moon, and encompass even more surface area than we initially report," he added.
The scientists detail their findings in the Nov. 12 issue of the journal Science.
This article was reprinted with permission from SPACE.com
| 2026-01-29T17:38:25.535254 |
20,529 | 3.665148 | https://tedi31.com/2010/08/16/theory-of-psychological-types/ | Jung’s Theory of Psychological Types
Berens (1999) describes Jung’s theory of Psychological Types as a departure from the works of Sigmund Freud and Alfred Adler: Freud’s focus on his patients “seemed to be on the external world of adjustment to the outside world,” while Adler’s practice “seemed to be more focused on the primacy of the patients’ inner world in determining their behaviors.” Jung then conceptualized two fundamental concepts known as the extraverted and introverted attitudes. He believed that the orientation of individuals could gravitate toward either “the world outside your world (extroversion) or the world inside your world (introversion)” (Gerke, 2006).
Mental or Cognitive Processes (Functions)
After a period of study, Jung came to the conclusion that the differences in people weren’t limited to “just the inner world or outer world” but also took into consideration the content of the “mental activities which they were engaged in when they were in these worlds” (Berens, 1999). Gerke (2006) adds that Jung referred to these mental activities, or cognitive/mental processes, as functions, a term derived from the function performed. Berens and Nardi (2004) described the two cognitive processes of Jung as perception and judgment, wherein each cognitive process is divided into two categories, with Sensation and Intuition falling under perception while Thinking and Feeling highlight judgment. The authors add that Jung’s theory focused on the idea that “every mental act consists of using at least one of these four cognitive processes in either an extraverted or introverted way,” thereby producing eight processes.
Perception and Judgment
Berens (1999) defined Jung’s perception as a stimulus wherein an individual “becomes aware of something” and in the process is able to “gather or access information.” Jung considered this to be an “irrational process,” as the recognition of the stimulus was brought about by external factors. Briggs-Myers et al. (2003) define Jung’s two kinds of perception as Sensation and Intuition. Sensing is defined as information which is assimilated through the senses (tangible information), while Intuiting focuses on a person’s ability to use “possibilities, meanings and relationships in gaining insight” (conceptual information) (Briggs-Myers et al., 2003).
The other core psychological process is Judgment, or the ability to “organize information and draw conclusions from it.” Briggs-Myers et al. (2003) define Jung’s two kinds of judgment as Thinking and Feeling, with Thinking defined as the “function that comes to a decision by linking ideas together through logical connections,” whereas Feeling is the function wherein decisions are reached “by weighing relative values and merits of the issues.” Lastly, Berens (1999) adds that each of these four functions (Sensation, Intuition, Thinking, and Feeling) can operate in either the “extraverted world or introverted world.”
Carl Jung considers functions to be dichotomous opposites in nature. Dichotomous opposites are similar to water and fire, wherein the utilization of one is in direct opposition to the other but does not depreciate their value and importance. Berens (1999) considers Sensing and iNtuiting to also be opposite in nature, but despite this an individual has the ability to “shift their attention from one kind of information to another” on a number of occasions. A good example would be assimilating sensory information, such as a beautiful painting of the ocean, and then visualizing its representation in the form of intuitive information like the season of summer, clarity, and peace of mind. Similarly, Thinking and Feeling judgments are polar opposites as well. Berens (1999) believes that to be both “value-based and criterion-based” simultaneously is unattainable. But in certain circumstances, both may be used to some extent. One example would be a traveler determining what he or she may need for a particular trip. Through a predetermined criterion, the traveler would be able to assess what is essential for the impending trip.
Briggs-Myers et al. (2003) add that the creation of the Judging-Perceiving dichotomy by Isabel Briggs Myers and Katharine Cook Briggs was brought about “to identify the dominant and auxiliary functions for each type” in the Myers-Briggs Type Indicator (MBTI). Jung believed that it was important to have one function above all others that is “dominant or trusted and developed” in order to facilitate the “characterization of one’s personality” (Berens, 1999).
Berens (1999) found the following:
Jung also indicated that there was more to a personality type than the dominant function. The dominant process gives a person only one mental process to rely on: if the dominant process is a perceiving process (Sensing or iNtuiting), there would be no way to evaluate information, so there must also be a preference for a judging process (Thinking or Feeling); likewise, if the dominant process is a judging process, there would be no way to access information. So the personality is also characterized by having another process play an “auxiliary” role that provides support to the dominant. The idea of a dominant and auxiliary is often referred to as the hierarchy of functions.
The auxiliary process provides balance to the dominant process in two ways (a short sketch after this list shows how these rules play out in practice).
1) The kind of process (perception or judgment) is different. If the dominant process is a perceiving process, then the auxiliary process is a judging process, or vice versa.
2) The attitudes or orientations of the processes are different. If the dominant process is focused on the outer world (extraverted), then the auxiliary process is focused on the inner world (introverted) or vice versa.
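Here is a minimal Python sketch of how those two balance rules determine a dominant and auxiliary process for each four-letter type. It leans on one extra convention not quoted above, assumed here: in Myers-Briggs practice, the J-P letter indicates which of the two middle functions is used in the extraverted (outer-world) attitude.

# A sketch (not from Berens or Briggs-Myers) applying the two balance rules,
# plus the assumed convention that the J/P letter names the function used
# in the extraverted attitude.

def dominant_and_auxiliary(type_code: str) -> tuple[str, str]:
    """Return (dominant, auxiliary), e.g. ('Ne', 'Fi') for 'ENFP'."""
    attitude, perceiving, judging, pointer = type_code.upper()
    # J points to the judging function (T/F) as the outer-world function;
    # P points to the perceiving function (S/N).
    extraverted = judging if pointer == "J" else perceiving
    introverted = perceiving if extraverted == judging else judging
    if attitude == "E":   # extraverts lead with the outer-world function
        return extraverted + "e", introverted + "i"
    return introverted + "i", extraverted + "e"

for code in ["ENFP", "ISTJ", "INTJ", "ESFJ"]:
    dom, aux = dominant_and_auxiliary(code)
    print(f"{code}: dominant {dom}, auxiliary {aux}")

Running it prints, for example, "ENFP: dominant Ne, auxiliary Fi" and "ISTJ: dominant Si, auxiliary Te"; note that in every output the two processes differ both in kind (one perceiving, one judging) and in attitude, exactly as the two rules above require.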
Aspects of Personality
Berens, Ernst, & Smith (2004) share that since we are “complex, adaptable beings,” an MBTI personality type is only capable of predicting the ways we might prefer to behave in a given situation, not determining them. Gerke (2006) adds that when viewing personality types, three interrelated areas must be taken into consideration: the contextual self, the developed self, and the core self. The author defines the contextual self as a person’s behavior in relation to the given situation; the developed self then emerges as the individual is able to “adapt and grow based on the choices and decisions we have made as well as by interactions and roles;” while the core self is described as an individual’s innate tendency “to behave in certain ways,” which influences how one adapts, grows, and develops.
Distinctions between types
Results from the Myers-Briggs Type Indicator (MBTI) can be broken down into three types: the reported type, the best-fit type, and the true type (Briggs-Myers et al., 2003). The reported type refers to the initial personality type extracted from the answers provided in the indicator. This is followed by the best-fit type, which is the type pattern, selected by those tested from the “themes and preferred processes” of their reported type, that suits them best (Berens et al., 2004). Lastly, Berens (1999) describes the true type as “the pattern of tendencies inherent in the individual.” The author believes that since “patterns cannot be measured and can only be mapped or described,” true discovery can only come from the individual’s personal cognitive resources. Briggs-Myers et al. (2003) add that although an individual’s type “does not change over time,” they may express their preferences “in somewhat different ways at different times and at different ages, and stages of life.”
Berens, L.V., Ernst, L.K., & Smith, M.A. (2004). Quick guide to the 16 personality types and teams: applying team essentials to create effective teams. Canada: Telos Publications.
Berens, L.V., & Nardi, D. (2004). Understanding yourself and others: an introduction to the personality type code. USA: Telos Publications.
Gerke, S.K. (Speaker). (2006). Jung’s theory of extroversion and introversion (Cassette Recording No. 1). Huntington Beach, California: Ramon Eduardo Gustilo Villasor. | 2026-01-18T15:21:09.121719 |
231,081 | 3.546728 | http://www.bio.davidson.edu/courses/genomics/2005/Cowell/MFYG1.html | If dogs hadn't gotten it first, Saccharomyces cerevisiae would probably have been bequeathed the title of "Man's Best Friend." Although they don't fetch or bark or radiate a sense of everlasting companionship, the unicellular fungi can be counted on to metabolize sugar by fermentation. Scientists discovered that this process generates free ethanol and carbon dioxide around the dawn of agricultural civilization, and humankind has been domesticating yeasts like S. cerevisiae for the production of bread and beer ever since.1
Science has changed a lot over the last 3,500 years. Beer and bread have become staples of our diet, more often found in refrigerators than on the cutting edge, and yeast itself has become something of a staple for our science. S. cerevisiae is one of the best-characterized organisms for studying genetics, and in 1996 became the first eukaryote to have its complete genome sequenced.2
But sequence is only the first step to understanding. The S. cerevisiae genome consists of 12,156,590 base-pairs divided amongst 16 chromosomes, encoding a predicted 6,591 ORFs.3 While functional information has been determined for a majority of these, roughly one-third of the ORFs remain totally unannotated. To explore what the differences in data are between annotated and unannotated genes, I present as much information as I could find, without using microarray and protein-interaction data sources or resorting to actual experimentation, on two genes that are located directly next to each other on chromosome 13.
Yeast prefers to metabolize glucose by fermentation, but if environmental sugars run out, it can switch to a nonfermentative metabolism and use the ethanol (or other C2 or C3 energy substrate) it may have previously produced as an energy source.4 This change is called the diauxic shift, and as one might expect for a process that drastically alters metabolic input, it changes the expression of a wide variety of genes. CAT8 encodes a transcriptional activator zinc-cluster protein designated Cat8p, which derepresses at least 34 other genes that are highly induced during the initial stages of the diauxic shift.5 Almost all 34 of these genes have the CSRE (carbon source-responsive element) motif somewhere in their promoter region, which is the suspected site through which Cat8p regulates their expression.5
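A motif like the CSRE can be hunted for computationally by translating an IUPAC consensus string into a regular expression and scanning promoter sequences with it. The Python sketch below illustrates the idea only: the consensus and the example promoter are placeholders, not the published CSRE sequence or real SGD data.

import re

# IUPAC nucleotide codes mapped to regex character classes.
IUPAC = {"A": "A", "C": "C", "G": "G", "T": "T",
         "R": "[AG]", "Y": "[CT]", "S": "[CG]", "W": "[AT]",
         "K": "[GT]", "M": "[AC]", "N": "[ACGT]"}

def iupac_to_regex(consensus):
    """Compile an IUPAC consensus string into a regular expression."""
    return re.compile("".join(IUPAC[base] for base in consensus.upper()))

def scan(sequence, consensus):
    """Return 0-based start positions where the consensus matches."""
    pattern = iupac_to_regex(consensus)
    return [m.start() for m in pattern.finditer(sequence.upper())]

# Placeholder consensus and promoter sequence, for illustration only.
hits = scan("AACCGGTTAACCGGAA", "CCGGNNNNCCGG")
print(hits)  # -> [2]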
To gauge how CAT8 is conserved among different species, a PSI-BLAST search was conducted with the gene's amino acid sequence against the UniRef database. Figure 5 and table 2 summarize the results, and indicate that CAT8 is conserved only in fungi.
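The page doesn't record exactly how the search was run; one way to reproduce it today is with NCBI's BLAST+ psiblast program driven from Python. The file names, database name, and parameter values below are assumptions for illustration, and the database would first need to be formatted with makeblastdb.

import subprocess

# Run a local PSI-BLAST search; every path and parameter here is an
# assumed example, not the procedure actually used for figure 5.
cmd = [
    "psiblast",
    "-query", "cat8.fasta",       # assumed file holding the CAT8 protein sequence
    "-db", "uniref90",            # assumed local UniRef database name
    "-num_iterations", "3",       # PSI-BLAST profile iterations
    "-evalue", "0.001",           # expect-value cutoff for reported hits
    "-outfmt", "6",               # tabular output (query, subject, % identity, e-value, ...)
    "-out", "cat8_psiblast.tsv",
]
subprocess.run(cmd, check=True)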
Unfortunately, there are no solved structures for CAT8 in the protein structure database, and the closest homolog (ID66) only has 51 amino acids in common (4.1%), with an e-value of 1.7e-05. An NCBI6 search for conserved domains (fig. 6) only produced three hits, two of which were found in GAL4-like transcription regulators that had a zinc-finger, and one of which was simply called "Fungal specific transcription factor." A Kyte-Doolittle hydropathy plot (fig. 7) revealed several regions with hydropathicity scores greater than 1.8, but none were of realistic extent for a true transmembrane protein. Besides, CAT8 has been well characterized as a transcriptional controller, so perhaps the peaks have to do with certain regions that interact with DNA.
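The Kyte-Doolittle calculation itself is simple enough to reproduce: average per-residue hydropathy values over a sliding window and look for stretches above a cutoff. Below is a minimal sketch using the published 1982 scale; the 19-residue window and 1.8 cutoff are the commonly recommended choices for transmembrane prediction, not necessarily the exact settings behind figs. 7 and 11, and the example sequence is invented.

# Kyte-Doolittle (1982) hydropathy values for the 20 standard amino acids.
KD = {"A": 1.8, "R": -4.5, "N": -3.5, "D": -3.5, "C": 2.5,
      "Q": -3.5, "E": -3.5, "G": -0.4, "H": -3.2, "I": 4.5,
      "L": 3.8, "K": -3.9, "M": 1.9, "F": 2.8, "P": -1.6,
      "S": -0.8, "T": -0.7, "W": -0.9, "Y": -1.3, "V": 4.2}

def hydropathy_profile(seq, window=19):
    """Mean hydropathy over a sliding window; one value per window start."""
    values = [KD[aa] for aa in seq.upper()]
    return [sum(values[i:i + window]) / window
            for i in range(len(values) - window + 1)]

def candidate_tm_windows(seq, window=19, cutoff=1.8):
    """(start, score) pairs for windows whose mean hydropathy exceeds cutoff."""
    return [(i, round(s, 2))
            for i, s in enumerate(hydropathy_profile(seq, window))
            if s > cutoff]

# Invented toy sequence: a hydrophobic stretch flanked by charged residues.
toy = "MKKDDEE" + "LIVFAVLLIVAGLIVVAIL" + "RKDEEKK"
print(candidate_tm_windows(toy))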
As indicated in the overview section, there currently is no accepted gene ontology information for YMR279C. What can we find out using the awesome array of public databases online? Let's start with a PSI-BLAST query (Fig. 9 & Table 3). Interestingly, YMR279C seems to be much more conserved than CAT8. Perhaps this is because CAT8 is nearly 3 times as large as YMR279C and hence, from a purely statistical perspective, less likely to retain its integrity over time. On the other hand, CAT8 may just be a very specific regulatory gene for yeast, and YMR279C of more general function. The BLAST results indicate YMR279C may potentially have a role as a transporter of some kind, possibly a drug resistance transporter, in which case it would probably have a number of transmembrane domains. Let's investigate the domains conserved in our potential transporter, and then judge its potential to be embedded in a phospholipid bilayer by calculating a Kyte-Doolittle plot.
The conserved domain search (fig. 10) presents interesting results, correlating YMR279C with two other transport proteins, one apparently involved in pumping toxins. However, neither has enough sequence in common with YMR279C to generate significant e-values. The Kyte-Doolittle hydropathy plot (fig. 11) indicates the protein is predicted to have a substantial number (at least 7) of transmembrane domains, which supports the hypothesis that YMR279C is a transmembrane protein.
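For readers who want to reproduce the hydropathy analysis behind figures 7 and 11, the sliding-window arithmetic of a Kyte-Doolittle plot is simple enough to sketch in a few lines of Python. The hydropathy values below are the published Kyte-Doolittle scale; the window length of 19 and the 1.8 cutoff are common conventions for flagging candidate transmembrane segments, not values specified in this paper, and the sequence shown is a placeholder rather than YMR279C.

KD = {
    'A': 1.8, 'R': -4.5, 'N': -3.5, 'D': -3.5, 'C': 2.5,
    'Q': -3.5, 'E': -3.5, 'G': -0.4, 'H': -3.2, 'I': 4.5,
    'L': 3.8, 'K': -3.9, 'M': 1.9, 'F': 2.8, 'P': -1.6,
    'S': -0.8, 'T': -0.7, 'W': -0.9, 'Y': -1.3, 'V': 4.2,
}  # Kyte-Doolittle (1982) hydropathy values, one entry per amino acid

def kyte_doolittle(seq, window=19):
    """Mean hydropathy of each full-length window along seq."""
    scores = [KD[aa] for aa in seq]
    return [sum(scores[i:i + window]) / window
            for i in range(len(scores) - window + 1)]

# Placeholder sequence for illustration only (not YMR279C).
seq = "MKTLLVLAVIACALAISLWPQQA" * 3
for pos, mean in enumerate(kyte_doolittle(seq), start=1):
    flag = "  <- candidate transmembrane segment" if mean > 1.8 else ""
    print(f"window at residue {pos}: {mean:+.2f}{flag}")

Peaks that stay above 1.8 for roughly a window's length are the kind of signal read off figures 7 and 11; a single high-scoring residue is not enough.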
Almost no direct data are available for the non-verified yeast protein YMR279C, and nothing can be certain where experimental evidence is lacking. Nonetheless, by searching for other, known genes that encode similar amino acid sequences, it was possible to generate some preliminary data. It appears as if YMR279C is a transmembrane protein, possibly involved in the transport of a moderately complex molecule. The evidence for this comes from its conserved domains, which were also present in a protein involved in transporting sugar, and another involved in transporting a toxin.
CAT8, on the other hand, has been studied rather well, and aside from its actual structure, for which there is no definitive information, its function is well understood. While not the direct initiator of the diauxic shift, CAT8 becomes active rather early as an essential transcription regulator and is responsible for derepressing at least 34 genes as it participates in the expression cascade that ultimately reverses yeast's metabolism.
1Wikipedia. 2005 3 Oct. Fermentation#History.<http://en.wikipedia.org/wiki/Fermentation#History>. Accessed 2005 8 Oct.
2NCBI. 2003. National Center for Biotechnology Information. <http://www.ncbi.nih.gov/>. Accessed 2005 8 Oct.
3Saccharomyces Genome Database. 2005. <http://www.yeastgenome.org/cache/genomeSnapshot.html>. Accessed 2005 8 Oct.
4Randez-Gil, F., Bojunga, N., Proft, M., & Entian, K-D. 1997. Glucose Derepression of Gluconeogenic Enzymes in Saccharomyces cerevisiae Correlates with Phosphorylation of the Gene Activator Cat8p. Molecular and Cellular Biology 17(5):2502-2510. Free full-text PDF available at http://mcb.asm.org/cgi/reprint/17/5/2502?view=reprint&pmid=9111319. PMID:9111319. Accessed 2005 8 Oct.
5Haurie, V. et al. 2001. The Transcriptional Activator Cat8p Provides a Major Contribution to the Reprogramming of Carbon Metabolism during the Diauxic Shift in Saccharomyces cerevisiae. The Journal of Biological Chemistry 276(1):76-85. Free full-text PDF available at http://www.jbc.org/cgi/content/full/276/1/76. PMID:11024040. Accessed 2005 8 Oct.
6Marchler-Bauer A, Bryant SH (2004), "CD-Search: protein domain annotations on the fly.", Nucleic Acids Res. 32:W327-331.
Kyte-Doolittle Hydropathy Plot. 2003. <http://occawlonline.pearsoned.com/bookbind/pubbooks/bc_mcampbell_genomics_1/medialib/activities/kd/kyte-doolittle.htm>. Accessed 2005 8 Oct.
[PREDATOR] <http://npsa-pbil.ibcp.fr/cgi-bin/npsa_automat.pl?page=/NPSA/npsa_preda.html>. Accessed 2005 8 Oct.
© Copyright 2005 Department of Biology, Davidson College, Davidson, NC 28035 | 2026-01-21T18:41:36.613926 |
481,624 | 3.675947 | http://mathequalslove.blogspot.com/2012_05_01_archive.html | Tuesday, May 1, 2012
After my 8th graders finished their state testing, my cooperating teacher and I decided to plan a mini-unit on probability. Probability is an 8th grade math standard in Oklahoma, but it is not tested by the state.
To be clear, I did not make up this game. I discovered this probability game a few weeks ago on a (new to me) blog, Walking in Mathland. There, the game is referred to as Beano. And, each student is given twelve dried beans to play with. Knowing my middle school students, I decided that I didn't want to hear jokes all day long about the name of the game. And, I could just picture students pelting dried beans at one another.
So, I modified the game to be played with twelve small game pieces. I had a bag of small washers that made perfect game pieces, but any small object could be substituted. Each student will need twelve game pieces.
Rules of the Game
1. Place your twelve pieces on the game board. You may place all your pieces on one rectangle or spread them out however you wish.
2. Roll two dice. Find the sum.
3. If you had a playing piece on the sum rolled, you may remove one playing piece.
4. Continue rolling the dice and removing playing pieces one at a time until one person empties their board.
8th Grade Pre-Algebra (50 minute class period)
Step 1: Hand out game boards and playing pieces. Instruct students to place their playing pieces on the board. Go over basic rules.
Step 2: Jump straight into the first game. I used the "Multiple Dice" feature from the Smart Notebook Gallery. Students will soon realize that their strategy for placing their pieces could be revised. Once a person has won, instruct students to think about their strategy for placing their game pieces. Remind them that this is a game of probability.
Step 3: Play the second round.
Step 4: Next, I told my students that I wanted to help them win this game. I handed out the worksheet posted on Walking in Mathland. Of course, I modified it slightly since we weren't playing with beans. Students completed the chart at the top asking them to list all of the possible sums that result from rolling two dice.
Step 5: Tell students to use this chart to help them place their game pieces for the third round. Play the third round.
Step 6: Have students complete the bottom half of the worksheet. The instructions say to create a bar graph, but I had students ask if they could create other types of graphs. Of course! We had a short conversation about what types of data displays would be appropriate (another 8th grade math standard!). So, I ended up having students create bar graphs, line plots, and scatter plots.
Step 7: Play again. By now, students start to realize that a winning strategy is to place their game pieces in a similar manner to the graph they created (the theoretical distribution is sketched in code after these steps).
Step 8: Have students answer the 4 questions provided on the back of the worksheet.
Step 9: Continue playing as time allows.
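For teachers who want the theoretical answer in hand before class, here is a minimal Python sketch of the distribution the worksheet builds by hand: it enumerates the 36 equally likely rolls of two dice, counts each sum, and scales the counts to a 12-piece board. The largest-remainder rounding at the end is just one reasonable way to turn probabilities into whole pieces; it is not part of the original game.

from collections import Counter
from itertools import product

counts = Counter(a + b for a, b in product(range(1, 7), repeat=2))

# Scale the 36 outcomes down to 12 pieces: floor first, then hand the
# remaining pieces to the sums with the largest rounding remainders.
pieces = {s: 12 * c // 36 for s, c in counts.items()}
by_remainder = sorted(counts, key=lambda s: 12 * counts[s] % 36, reverse=True)
for s in by_remainder[:12 - sum(pieces.values())]:
    pieces[s] += 1

for s in range(2, 13):
    print(f"sum {s:2}: {counts[s]}/36 of rolls, suggested pieces: {pieces[s]}")

The printout mirrors what students discover at the dice table: load up on 6, 7, and 8, and leave 2 and 12 empty.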
My students really enjoyed this game. It was really a fun experience to watch them develop strategies for winning the game. I had one student whose initial strategy was to place all of his twelve playing pieces on the number 12. He soon realized that the number 12 didn't appear much in the game. We had lots of opportunities to talk about the probability behind the game. We discussed the differences between theoretical and experimental probability. The students were fully engaged in the activity. | 2026-01-25T17:45:18.601833 |
43,654 | 3.660064 | http://www.trust.org/item/?map=do-earthworms-contribute-to-climate-change/ | By Neil Palmer
We often hear about livestock being a major cause of greenhouse gas emissions. Picture them: Formidable herds of flatulent quadrupeds munching their way – figuratively speaking – through millions of hectares of rainforest.
But it seems we need to look below the surface, literally, to find another climate change culprit: earthworms.
That’s because a new study published in the scientific journal Nature Climate Change shows that worms may be a major cause of greenhouse gas emissions, creating what it describes as the “earthworm dilemma”.
The findings could soil the reputation of a creature which, as one of nature's ugliest, has beaten all the odds to find its way into the hearts and minds of gardeners and ecologists the world over, by virtue of its much-championed role as a custodian of soil fertility.
In the study, scientists from Wageningen University, the International Center for Tropical Agriculture (CIAT) and the University of California-Davis reviewed existing literature and found that earthworms, through the supposedly benevolent act of breaking down organic matter and boosting soil health, could be responsible for as much as one-third of the carbon dioxide emissions from soil, and 42 percent of soil-based emissions of nitrous oxide, a greenhouse gas 300 times more potent than carbon dioxide.
Crucially, they also bring into question the long-established idea that earthworms help trap carbon dioxide in the soil, thought to have at least partially negated their greenhouse gas footprint (well, drilosphere). Instead the study found that worms may contribute to global warming by helping release carbon dioxide from the soil into the atmosphere.
And that’s not all: The study also questions the increasingly popular “no-till” farming practices that eschew ploughing in order to protect soil structure. That helps preserve earthworm habitats, enabling them to thrive. Couple that with the increasing use of organic fertiliser – a veritable banquet for earthworms – and their greenhouse gas emissions could be set to rise further still.
A SLIPPERY SUBJECT
But according to CIAT soil scientist Steven Fonte, one of the authors of the study, when it comes to dishing the dirt on earthworms, the creatures still have a significant amount of, well, wriggle room.
“These are really important findings that challenge a long-held consensus of the precise role of worms in climate change mitigation,” he said.
“But earthworms should definitely not be seen as pests. They’re still vital to farm productivity and food security. They help to move nutrients through the soil, providing food for plants, and by improving soil fertility they can also reduce the need for chemical fertilisers.
“They can help quickly restore seriously degraded land to make it productive again, which, for a smallholder farmer, can mean the difference between a failed harvest and a bountiful one.”
For Fonte, the jury is definitely still out: “While our findings are provocative, they are mostly based on laboratory studies and largely ignore the potential for earthworm benefits to plant growth and nitrogen use, which could counteract the negative trends observed here”.
While further research might help vindicate the humble earthworm, it could be some time before they grow back their good reputation.
Neil Palmer is a photographer and writer for the International Center for Tropical Agriculture. This blog first appeared on the CIAT website. | 2026-01-19T00:11:23.130583 |
235,525 | 3.658766 | http://www.lavidalocavore.org/showDiary.do?diaryId=2789 | Famine in Pre-British India
An 1878 study published in the Journal of the Statistical Society found that there were 31 serious famines in 120 years of British rule compared to 17 famines in 2000 years of Indian rule. And that doesn't even count two more major famines, in 1888 and in the late 1890s. How can this be?
Prior to British rule, Indians kept larger village-level grain reserves and they were generally free of grain price speculation.
According to the book, Mogul rulers saw protecting peasants as their obligation, and used 4 methods for relief:
- Embargoes on grain exports
- Anti-speculative price regulation
- Tax relief
- Distribution of free food without a forced labor component
A very important component of Mogul famine prevention was their investment in well construction via generous tax breaks for anyone who built a well. In another example, under Maratha rule, between 1770 and 1820 only three bad seasons hit Maratha lands. The rulers dealt with it by forcing local elites to feed the poor. Furthermore, Indian rulers tied taxation rates to actual harvests. While this may sound similar to our idea of an income tax today (you are only taxed on what you earn), the British drastically changed the system of taxation, to the detriment of the Indian people.
[Table: Shares of World GDP (percent), from p. 293; the figures are not reproduced here.]
Setting the Stage for Disaster
So what did the British do, leading up to the eve of the first famine in 1877? Step one was an enormous capital drain out of India to England.
Robbing the Indian People Blind
First of all, they forced Indians, and the Indian government in particular, to buy British-made goods.
India, of course, was the greatest captive market in world history, rising from third to first place among consumers of British exports in the quarter century after 1870. "British rulers," writes Marcello de Cecco in his study of the Victorian gold standard system, "deliberately prevented Indians from becoming skilled mechanics, refused contracts to Indian firms which produced materials that could be got from England, and generally hindered the formation of an autonomous industrial structure in India." p. 298
By 1910, India purchased 40% of the UK's finished cotton goods and 60% of its exports of electrical products, railway equipment, books, and pharmaceuticals.
Add to that massive exports FROM India, even in the middle of famines when millions of Indians were starving. The opening of the Suez Canal improved the economics of exporting goods from India to the UK, and exports from India increased eightfold between 1840 and 1886. In addition to opium, India exported indigo, cotton, wheat, and rice. These crops were grown in monocultures, supplanting acres upon acres of subsistence grains.
Between 1875 and 1900, years that included the worst famines in Indian history, annual grain exports increased from 3 million to 10 million tons: a quantity that, as Romesh Dutt pointed out, was equivalent to the annual nutrition of 25 million people. By the turn of the century, India was supplying nearly a fifth of Britain's wheat consumption as well as allowing London grain merchants to speculate during shortages on the Continent. - p. 299
What must be considered in addition to that is the role the Gold Standard played in the bankrupting of India. Britain itself adopted the Gold Standard in 1821; at that time, the rest of the world used silver or both silver and gold. In 1871, Germany shifted to the Gold Standard and the US soon followed. So did the rest of Europe and Japan. England insisted that India remain on its silver-backed currency until 1893, when it began to move to gold. The result of this shift was an immense depreciation of silver. That meant that the British were able to buy low and sell high to the Indians... and the Indians suffered from the reverse situation.
If you had a pound's worth of Indian rupees in 1873, by 1895 they were only worth 64 pence. This devaluation of the rupee cost Indians an extra 105 million pounds between 1874 and 1894. Unfortunately for Indian peasants, who stored their savings in silver ornaments, the Gold Standard stole 25% of the value of their savings. During this time, the price of Indian grains remained stable for the British while increasing rapidly for the Indians. The inflation was instrumental in helping the Brits convince Indian peasants to grow export crops.
As Sir William Wederburn pointed out: "Indian peasants in general had three safeguards against famine: (a) domestic hoards of grain; (b) family ornaments; and (c) credit with the village moneylender, who was also the grain dealer. But towards the close of the nineteenth century all were lost by the peasants." - p. 303-304
Put quite literally, the British taxed the Indian people to death. The reason for much of the taxation was England's military adventures around the developing world. India, instead of the British people, paid the cost of these expensive campaigns. During British rule, India never spent less than 25% of its annual budget on the British army.
The most significant change between Indian rule and British rule was the way in which taxes were assessed. Under the British, taxes were set based on your land's average expected harvest. The colonial budget, mostly financed by taxes on farm land, gave less than 2% to agriculture and education and barely 4% to public works of all kinds. A third went to the army and police. By making taxes high and by fixing them to average production without regard for changes in weather, they made sure that a certain number of taxpayers would lose their land every year. A farmer would have his grain impounded upon harvest and then had to borrow money to pay taxes in order to eat from his own harvest.
In one of the top wheat-growing districts that I will discuss later, Narmada, the government reassessed land values in 1887 when the area was at the height of a wheat boom. Land values were sky high, so taxes and rents went up as well. This worked well for a few years, as moneylenders gave the landowners more credit. Then, in 1891-92, the British suddenly switched to wheat from Argentina and elsewhere in India. When the rains stopped in the mid-1890's, Narmada's wheat growers had huge debts, high taxes, and no market for their wheat.
"...the revenue collectors' inflexible claims on a high 'average' harvest compelled the peasants to cultivate marginal lands, and also forced them to 'mine' their land in a situation where most of them had few investible resources left to improve its productivity." - p. 307
Prior to British rule, Indians augmented their crops with free things they could gather - grass to feed animals and make rope, wood and dung for fuel, leaves and forest debris for fertilizer, clay for plastering houses, and clean water. These were most important to the poorest households, where they were often literally the difference between life and death. The British transferred these resources from the village community to the state.
In 1870, India's forests were enclosed by "armed agents of the state." The Brits needed the forests for shipbuilding, urban construction, railroads, and fuel.
The British also dissolved an important relationship ("ecological interdependence") between nomadic pastoralists and farmers. In the dry western interior of India, large areas of uncultivated grassland separated settled communities of farmers and bands of nomads. After 1857, the British began a "relentless campaign" against nomads, whom they labeled "criminal tribes." Although the agroecology of this area was dependent on the symbiosis of peasant and nomad, valley agriculture and hillslope pastoralism, the Brits' voracious appetite for tax revenue generated irresistible pressure on the peasants to convert "waste" into taxable agriculture. (p. 328-329)
Traditionally, farmers practiced extensive crop rotation and long fallow periods. This required large farms and lots of manure, which was impossible to maintain with more people on the land (living on smaller farms) and fewer cattle. The expert nomad cattlebreeders were "deliberately squeezed out of the economy." (p. 329)
Between 1843 and 1873, the estimated cattle population fell by 5 million. Numbers fell further during the droughts, and by 1896-97, women were pulling ploughs. Fewer cattle meant less manure. The soil converted from pasture could only produce 1/3 as much millet as the soil traditionally used for crops, and ultimately became so degraded it was useless for agriculture or even grazing.
Cotton depletes soil nutrients very rapidly and must be rotated with nitrogen fixing legumes. However, crop rotation became impossible due to taxes and debt, forcing people to maximize short term income at the expense of long term soil fertility.
The British also upset traditional Indian water management, by enforcing British common law, which said that the landowner also owns water rights. The result was water scarcity for those who didn't own land.
When the British did finance irrigation projects, they were concentrated in areas important for export crops like cotton, opium, sugar cane, and wheat. By 1921, only 11% of cropped areas were irrigated. Not to mention that the irrigation projects done by the Brits were ecological disasters.
They might have produced short-term bonanzas in wheat and cane, but at huge, unforeseen social costs. Without proper underground drainage, for example, the capillary action of irrigation brought toxic alkali salts to the surface, leading to such extensive saline efflorescence... that the superintendent of the Geological Survey warned in 1877 that once-fertile plains were on the verge of becoming a "howling wilderness." Indeed, fifteen years later it was estimated that somewhere between 4,000 and 5,000 square miles of farmland - an immense area - was blighted by salinity "with 'valuable' crops isolated in clumps upon its surface." - p. 333
Where the new irrigation ran alongside the old, traditional system, the new system undermined the old. This led to wells collapsing, or to water tables falling and wells becoming brackish and unpotable. Canals also blocked natural drainage, creating breeding grounds for mosquitoes and high rates of malaria. Taxes on irrigated land were so high that it was impossible to use it for anything but cash crops (if it was used at all). Villagers often abandoned irrigated fields for lower-taxed unirrigated fields. Peasants who built their own wells were taxed on them, too. Modern studies of industrial vs. indigenous irrigation in India found that indigenous irrigation systems avoided the problems of salinization and mosquito-borne disease. Indigenous systems are more efficient and supply more stable yields over the long term. However, these indigenous irrigation systems were neglected and fell into decay in the years leading up to the famines.
Switching Indian Farmers From Subsistence to Export Crops
In this section, I'll give you two case studies, cotton and wheat.
The Cotton Supply Association (an arm of the Manchester Chamber of Commerce) selected Berar and Nagpore for cotton monoculture. In Berar, the Association dismantled the traditional administrative system of the area, purging "disloyal" leading families who would not cooperate. Then the Brits spent 17 years (1861-1877) reorganizing the peasants of Berar's 7,000 villages and 10.5 million acres of cultivable land into a system that was easy to tax.
In reality the government became the supreme landlord with peasant tenure, unlike Tudor England, strictly conditional upon punctual payment of revenue. - p.313
An important new class cropped up - moneylenders who also served as grain merchants. One important contribution of the Brits to India was the railway system, making it possible to export India's grain easily and also making grain price speculation possible.
The railways put traditional porters and carters out of work, turning them into propertyless laborers. Also put out of work were artisans, ruined by taxes on local woven goods and a "flood" of cheap English imports.
What's important to remember about cotton is that the world market was impacted by the American Civil War. American cotton exports ground to a halt and other countries increased production to take their place. When American cotton came back on the scene, the other cotton producing countries were often decimated.
Perhaps the British foresaw this, as they got Berar to grow cotton in the first place to create a buffer to the supply of premium American cotton and to keep prices stable. In 1867, Berar exported as much cotton to Manchester as all of Egypt.
As mentioned before, a new class of moneylenders cropped up. So did another group, who split their land into smaller parcels and rented them out to "bhagindars" who paid exorbitantly high rents. By the 1890s, at least 70% of the population were bhagindars or landless laborers.
Although massive sums of capital were sunk into the Association's export infrastructure, including railroad spurs, cotton yards, and metalled feeder roads, none of it percolated to the village level where degraded sanitary conditions, especially contamination of drinking water by human waste, spread cholera and gastrointestinal disease as well as tuberculosis. - p. 315
During the famine of 1899-1900, when 143,000 Beraris died directly from starvation, the province exported not only tens of thousands of bales of cotton but an incredible 747,000 bushels of grain. Despite heavy labor immigration into Berar in the 1890s, the population fell by 5 percent and the "life expectation at birth" twice dipped into the 15-year range before finally falling to less than 10 years during the "extremely bad year" of 1900. - p. 315
Without irrigation, Indian families needed more land than they had to grow grain (to eat) plus cotton and pay their taxes. Many opted to just grow cotton and then buy grain, even as cotton prices went down. One reason for this was that cotton was more responsive than grain (millet) to additional labor, provided for free by their families. In the end, the cotton-growing Beraris went naked.
The Narmada Valley, in the Central Provinces (today part of Madhya Pradesh), had a wheat boom from 1861-1890. Local handicrafts were ruined by cheap imports that flooded central India after the construction of the railroad. The Brits aggressively pushed landowners into commercial production of cotton and especially wheat. Farmers were told to save themselves by growing the soft wheat preferred in Britain instead of millet and gram. In the main export districts, wheat displaced 2/3 of the acreage once used for subsistence grains.
However, the high tax demands drained the money from the area, and small landholders defaulted on debt to moneylenders, losing their land to the moneylenders. By 1889, this had happened to more than half of the land in the Central Provinces. Absentee landowners did not reinvest money into irrigation or cattle.
Even more than in the cotton districts, the Narmada wheat boom was built upon precarious climatic and ecological foundations. - p. 319
High demand for wheat in 1880s pushed people into inferior soil (traditionally used for hardy millets) where harvests only succeeded due to unusually good monsoons from 1884-1894. Railroads used up the lumber in the forests, and wheat used up pasture lands that traditionally fed cattle. This made bulls too expensive to keep, leading to a manure shortage (which was made worse by the high price of coal and the subsequent use of manure as fuel) that increased the rate at which the soil was depleted. The government also did not do any irrigation projects in the area. Remember also that just as Narmada's exports boomed, the British changed their preferred source of wheat to Argentina and elsewhere in India. The people of Narmada were left without a market. Just as the people of Berar went naked, the people of Narmada lived on imported millet and rice at the beginning of the 20th century.
[Table: Wheat Exports from the Central Provinces (millions of rupees); the figures are not reproduced here.]
Between 1876 and 1879, an estimated 6.1-10.3 million people died. A second (smaller) famine occurred in 1888-1891. A third famine hit India from 1896-1902, killing an estimated 6.1-19.0 million people.
The descriptions of the famine are simply unspeakable. At this point the stories told in India, China, and Brazil have blurred together in my mind. Stories of mothers swapping their children because neither could bear to eat their own. Stories of wild animals eating weakened, starving people in the streets. Stories of pigeons eating spilled grains from railroad cars guarded by armed guards as starving people looked on. In some places, people literally ate their homes and their beds so that when cold weather came, they had no protection nor any food leftover. In these famines, often epidemic disease (cholera, typhoid, malaria) accompanied starvation.
And all the while, India was producing and exporting plenty of food. In areas that were not affected by shortages and drought, grain prices often went up due to speculation, pricing out the poor so that a famine occurred all the same. During the first famine (1876-1878), India's wheat exports to the UK increased from 308,000 quarters in 1875 to 757,000 in 1876 and 1,409,000 in 1877. Only in 1878 did exports decrease, to 420,000 quarters.
A century earlier, Adam Smith had said (during a terrible Bengal famine in 1770) that "famine has never arisen from any other cause but the violence of government attempting, by improper means, to remedy the inconveniences of a dearth." In this frame of mind, the viceroy of India ordered that "there is to be no interference of any kind on the part of the Government with the object of reducing the price of food." (p. 31) Quoting other great minds of the time like Thomas Malthus and ideologies like social Darwinism, the viceroy made the case that aid to the Indian people would practically hurt them more than it helped. He and others frequently parroted talking points we in modern-day America have heard too many times, saying that the lazy Indians did not know how to work hard and that if they were given aid in times of drought and famine, they would expect a free handout during the good times as well. The difference between now and then is that then there were tens of millions of people dying as the government made these proclamations.
The aid that was given by the British was administered in a way that makes life in a Nazi concentration camp look good. To make sure that people would not show up to work and slack off, the British imposed "distance tests," forcing people to walk at least 10 miles from their homes to reach work camps. At the work camps, they could perform heavy labor and receive food. However, in some cases, the amount of food provided at the work camps was literally fewer calories per day than was provided to prisoners at Nazi concentration camps.
This is the calamity that set the stage for the modern day "Third World." Today there are an estimated 1 billion people going hungry, more than ever before. We must ask ourselves whether or not we are making human misery worse and then standing helplessly by as we watch people suffer, as the British did a century ago. | 2026-01-21T20:12:21.076602 |
978,006 | 3.922406 | http://www.reference.com/browse/re-absorb | In aerodynamics, hypersonic speeds are speeds that are highly supersonic. Since the 1970s, the term has generally been assumed to refer to speeds of Mach 5 (5 times the speed of sound) and above. The hypersonic regime is a subset of the supersonic regime.
Supersonic airflow is decidedly different from subsonic flow. Nearly everything about the way an aircraft flies changes dramatically as it accelerates to supersonic speeds. Even with this strong demarcation, there is still some debate as to the definition of "supersonic". One definition is that the aircraft, as a whole, is traveling at Mach 1 or greater. More technical definitions state that the aircraft is only supersonic if the airflow over the entire aircraft is supersonic, which occurs around Mach 1.2 on typical designs. The range Mach 0.75 to 1.2 is therefore considered transonic.
Considering the problems with this simple definition, the precise Mach number at which a craft can be said to be fully hypersonic is even more elusive, especially since physical changes in the airflow (molecular dissociation, ionization) occur at quite different speeds. Generally, a combination of effects become important "as a whole" around Mach 5. The hypersonic regime is often defined as speeds where ramjets do not produce net thrust. This is a nebulous definition in itself, as there exists a proposed change to allow them to operate in the hypersonic regime (the Scramjet).
Characteristics of flow
While the definition of hypersonic flow can be quite vague and is generally debatable (especially due to the lack of discontinuity between supersonic and hypersonic flows), a hypersonic flow may be characterized by certain physical phenomena that can no longer be analytically discounted as in supersonic flow. These phenomena include:
Thin shock layer
As Mach numbers increase, the density behind the shock also increases, which corresponds to a decrease in volume behind the shock wave due to conservation of mass. Consequently, the shock layer, that volume between the body and the shock wave, is thin at high Mach numbers.
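The finite limit behind that thinning is easy to compute. Below is a minimal Python sketch, assuming a calorically perfect gas with gamma = 1.4: the normal-shock (Rankine-Hugoniot) density ratio rises with Mach number but saturates at (gamma + 1)/(gamma - 1) = 6, which is why the compressed layer hugs the body. Real dissociating hypersonic flows can exceed this ideal-gas value.

def density_ratio(mach, gamma=1.4):
    # Rankine-Hugoniot density jump across a normal shock (perfect gas).
    return (gamma + 1) * mach**2 / ((gamma - 1) * mach**2 + 2)

for mach in (1, 2, 5, 10, 25):
    print(f"M = {mach:2}: rho2/rho1 = {density_ratio(mach):.3f}")
print(f"limit as M -> infinity: {(1.4 + 1) / (1.4 - 1):.1f}")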
Entropy layer
As Mach numbers increase, the entropy change across the shock also increases, which results in a strong entropy gradient and highly vortical flow that mixes with the boundary layer.
Viscous interaction
A portion of the large kinetic energy associated with flow at high Mach numbers transforms into internal energy in the fluid due to viscous effects. The increase in internal energy is realized as an increase in temperature. Since the pressure gradient normal to the flow within a boundary layer is zero, the increase of temperature through the boundary layer coincides with a decrease in density. Thus, the boundary layer over the body grows and can often merge with the thin shock layer.
High temperature flow
The high temperatures discussed previously as a manifestation of viscous dissipation cause non-equilibrium chemical flow properties, such as dissociation and ionization of molecules, resulting in convective and radiative heating.
The hypersonic flow regime is characterized by a number of effects which are not found in typical aircraft operating at low subsonic Mach numbers. The effects depend strongly on the speed and type of vehicle under investigation.
The categorization of airflow relies on a number of similarity parameters, which allow the simplification of a nearly infinite number of test cases into groups of similarity. For transonic and compressible flow, the Mach and Reynolds numbers alone allow good categorization of many flow cases.
Hypersonic flows, however, require other similarity parameters. Firstly, the analytic equations for the oblique shock angle become nearly independent of Mach number at high (~>10) Mach numbers. Secondly, the formation of strong shocks around aerodynamic bodies means that the freestream Reynolds number is less useful as an estimate of the behavior of the boundary layer over a body (although it is still important). Finally, the increased temperature of hypersonic flows means that real gas effects become important. For this reason, research in hypersonics is often referred to as aerothermodynamics, rather than aerodynamics.
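The first point is easy to demonstrate numerically. The sketch below evaluates the standard theta-beta-M oblique-shock relation for a calorically perfect gas and finds the weak-shock angle for a wedge by coarse scanning; gamma = 1.4 and the 15-degree wedge angle are arbitrary assumptions, not values from this article.

import math

def deflection(beta, mach, gamma=1.4):
    # Flow deflection angle theta (radians) from the theta-beta-M relation.
    return math.atan(
        2 / math.tan(beta)
        * (mach**2 * math.sin(beta)**2 - 1)
        / (mach**2 * (gamma + math.cos(2 * beta)) + 2)
    )

def weak_shock_angle(theta, mach, steps=20000):
    # Smallest shock angle whose deflection reaches theta (weak solution).
    for i in range(1, steps):
        beta = math.pi / 2 * i / steps
        if deflection(beta, mach) >= theta:
            return beta
    return None  # detached shock: no attached solution exists

theta = math.radians(15)  # assumed wedge angle
for mach in (2, 5, 10, 20, 50):
    beta = weak_shock_angle(theta, mach)
    print(f"M = {mach:3}: weak shock angle = {math.degrees(beta):6.2f} deg")

Running this shows the shock angle collapsing from about 45 degrees at Mach 2 toward a fixed value near 18 degrees by Mach 20, after which further Mach increases barely move it.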
The introduction of real gas effects means that more variables are required to describe the full state of a gas. Whereas a stationary gas can be described by three variables (pressure, temperature, adiabatic index), and a moving gas by four (adding velocity), a hot gas in chemical equilibrium also requires state equations for the chemical components of the gas, and a gas in nonequilibrium solves those state equations using time as an extra variable. This means that for a nonequilibrium flow, somewhere between 10 and 100 variables may be required to describe the state of the gas at any given time. Additionally, rarefied hypersonic flows (usually defined as those with a Knudsen number above one) do not follow the Navier-Stokes equations.
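The Knudsen number itself is cheap to estimate. A rough sketch using the hard-sphere mean free path lambda = k_B * T / (sqrt(2) * pi * d^2 * p); the effective molecular diameter for air (about 3.7e-10 m) and the altitude conditions used below are approximate assumed values.

import math

K_B = 1.380649e-23   # Boltzmann constant, J/K
D_AIR = 3.7e-10      # assumed effective molecular diameter of air, m

def knudsen(length_m, temp_k, pressure_pa):
    # Kn = mean free path / characteristic length of the vehicle.
    mfp = K_B * temp_k / (math.sqrt(2) * math.pi * D_AIR**2 * pressure_pa)
    return mfp / length_m

# A 1 m body at sea level versus roughly 100 km altitude (values approximate).
print(f"sea level (288 K, 101325 Pa): Kn = {knudsen(1.0, 288.0, 101325.0):.1e}")
print(f"~100 km   (200 K, 0.03 Pa):   Kn = {knudsen(1.0, 200.0, 0.03):.1e}")

The jump from roughly 1e-7 at sea level to order 0.1-1 near 100 km marks where continuum (Navier-Stokes) modeling gives way to rarefied-gas methods.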
Hypersonic flows are typically categorized by their total energy, expressed as total enthalpy (MJ/kg), total pressure (kPa-MPa), stagnation pressure (kPa-MPa), stagnation temperature (K), or velocity (km/s).
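These energy measures are roughly interchangeable because, at hypersonic speeds, the total enthalpy is dominated by kinetic energy, h0 ~ V^2 / 2. A tiny sketch of the conversion; it neglects the static enthalpy of the incoming air (a few hundred kJ/kg, small by comparison).

# Convert flight velocity to an approximate total enthalpy, h0 ~ V^2 / 2.
for v_km_s in (3.0, 5.0, 7.9, 11.0):   # 7.9 ~ orbital entry, 11 ~ lunar return
    h0_mj_per_kg = (v_km_s * 1000.0) ** 2 / 2 / 1e6
    print(f"V = {v_km_s:4.1f} km/s  ->  h0 ~ {h0_mj_per_kg:5.1f} MJ/kg")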
Wallace D. Hayes developed a similarity parameter, similar to the Whitcomb area rule, which allowed similar configurations to be compared.
Hypersonic flow can be approximately separated into a number of regimes. The selection of these regimes is rough, due to the blurring of the boundaries where a particular effect can be found.
Perfect gas
In this regime, the gas can be regarded as an ideal gas. Flow in this regime is still Mach number dependent. Simulations start to depend on the use of a constant-temperature wall, rather than the adiabatic wall typically used at lower speeds. The lower border of this region is around Mach 5, where ramjets become inefficient, and the upper border around Mach 10-12.
Two-temperature ideal gas
This is a subset of the perfect gas regime, where the gas can be considered chemically perfect, but the rotational and vibrational temperatures of the gas must be considered separately, leading to two-temperature models. See particularly the modeling of supersonic nozzles, where vibrational freezing becomes important.
Dissociated gas
In this regime, multimolecular gases begin to dissociate as they come into contact with the bow shock generated by the body. The type of gas selected begins to have an effect on the flow. Surface catalycity plays a role in the calculation of surface heating, meaning that the selection of the surface material also begins to have an effect on the flow. The lower border of this regime is where the first component of a gas mixture begins to dissociate in the stagnation point of a flow (nitrogen, ~2000 K). The upper border of this regime is where the effects of ionization start to have an effect on the flow.
Ionized gas
In this regime the ionized electron population of the stagnated flow becomes significant, and the electrons must be modeled separately. Often the electron temperature is handled separately from the temperature of the remaining gas components. This region occurs for freestream velocities around 10-12 km/s. Gases in this region are modeled as non-radiating plasmas.
Radiation-dominated regime
Above around 12 km/s, the heat transfer to a vehicle changes from being conductively dominated to radiatively dominated. The modeling of gases in this regime is split into two classes:
- Optically thin: where the gas does not re-absorb radiation emitted from other parts of the gas
- Optically thick: where the radiation must be considered as a separate source of energy.
The modeling of optically thick gases is extremely difficult since, because the radiation must be computed at each point, the computational load theoretically grows exponentially as the number of points considered increases.
Other Flow Regimes
- Anderson, John (2006). Hypersonic and High-Temperature Gas Dynamics Second Edition. AIAA Education Series. ISBN 1563477807. | 2026-02-02T09:50:03.563548 |
945,631 | 3.58617 | http://scienceblogs.com/tetrapodzoology/2009/10/19/sex-among-toads/ | After a brief hiatus we return to the remarkable world of toads, and this time round we look at reproductive biology. As a western European person, the toad species I’m most familiar with (the Common toad Bufo bufo and Natterjack Epidalea calamita [see later articles for details on the name changes]) are seasonal breeders that turn up at ponds early on in the year [Common toad mating ball shown here, photo by Neil Phillips] and produce strings of hundreds or thousands of eggs (between 400 and 7500). There are other toad species that are even more fecund, with individuals of some species (like the American toad Anaxyrus americanus and Cane toad Rhinella marina) producing more than 20,000 eggs on occasion: if you put an egg string from one of these species into a straight line, it would be up to 20 m long. You might think that no-one will ever see an egg string even approaching that length, given that the toads wind the strings around plants and debris. However, the egg strings of species that lay their eggs in streams or rivers sometimes become un-entangled by heavy rains and are then swept downstream: Shannon & Werler (1955) reported seeing a doomed string belonging to a Mountain toad Incilius cavifrons that was about 14 m long.
Having mentioned streams and rivers, it’s interesting that even those species that normally breed in lakes or ponds will sometimes lay their eggs in fast-flowing water. How successful this strategy might be is a good question, given that the tadpoles lack adaptations for this habitat and would presumably be swept rapidly downstream [Cane toad eggstring shown below, from the Kimberley Cane toad-busting site].
The jet black, poisonous tadpoles of species like the Common toad and Natterjack take about 12 weeks to develop, and by July or August hundreds of metamorphosed toadlets are leaving the water. This pattern will be familiar to people who know the toads of Europe, Asia, northern Africa and North America. A common, and reasonable, assumption is that temperate-zone toads emerge from hibernation and only then start heading towards the breeding pond. You might be surprised to learn that at least some toads of temperate regions actually start moving toward their breeding ponds in September: it’s just that cold snaps in the following months stop them from progressing, and cause them to seek out refugia for hibernation (Sinsch 1992).
Toads are diverse in reproductive biology, and some produce small or very small clutches compared to the more familiar species. Examples include the Asian stream toads (Ansonia*), where females may only produce 75-85 eggs per ovary, and the southeast Asian flathead toads or dwarf toads (Pelophryne) whose clutches (laid in shallow puddles, leaf axils, tree holes and even in such places as broken bottles) may consist of less than 20 eggs (in cases between 5 and 10). Various tropical species, including many of the South American stubfoot toads (Atelopus), lay their eggs in fast-flowing streams and have torrent-adapted tadpoles equipped with sucker-like mouthparts or belly suckers [adjacent image shows ventral surface of a Rio Viego toad Rhinella chrysophora tadpole, with belly sucker indicating strong adaptation for stream environment. From McCranie et al. (1989)].
* The monophyly of which has recently been contested. More on that later.
Some species native to arid regions are opportunistic breeders that mate and produce eggs whenever it rains. In such species, the tadpoles may metamorphose in an incredibly rapid time. A total development time of less than two weeks has been reported for the African Taita dwarf toad Mertensophryne taitana (Müller et al. 2005). Viviparity has evolved more than once in toads: read on.
Some toads engage in amplexus long before they reach the breeding pond, with the males grabbing hold of animate objects while on their way to the pond, each time hoping for the best (female anurans must be very strong as, in cases, they’re able to surmount high obstacles and cling to the bottom of fast-flowing streams, all the while carrying a male who is at least half as heavy as she is). Females therefore typically arrive with a male already attached, though males may then fight over the females within the pond and sometimes drown them in the process. These species are typically explosive breeders that have very short breeding seasons. Other species have longer breeding seasons, and in these the males usually have large vocal sacs and call loudly to attract females [Asian common toads Duttaphrynus melanostictus in amplexus shown here, image by Andrew Johnson ©, used with permission].
Ear loss and semaphore
At least some toads (like the Cameroon toad Amietophrynus superciliaris) are entirely voiceless, and the repeated loss of hearing organs (the tympanum and its associated structures) among toads suggests that airborne sounds are unimportant to several taxa. It's worth noting here that the development of the tympanum is delayed in post-metamorphic toads, so a juvenile toad that lacks a tympanum does not necessarily grow into an adult that lacks one too (De la Riva 2004).
However, some species definitely do lack eardrums and other ear structures as adults. How then do the males and females find each other? That’s a good question and, for most earless taxa, no-one yet knows. However, it’s recently been discovered that earless stubfoot toads that inhabit cascade stream environments with high levels of ambient noise (specifically, the Panamanian gold frog Atelopus zeteki: shown here*) use semaphore – waving actions of their forelimbs – in order to attract attention (Lindquist & Hetherington 1998 and references therein). Also worth noting is that the loss of ear components is not necessarily such a problem in species that are still vocal, as some anurans have co-opted the body wall and lungs as sound-transmission organs (Ehret et al. 1990). We clearly need more answers on how the males and females of non-vocal, earless species find each other.
* This species now seems to be extinct in the wild. Many South American highland toad species are threatened with extinction by the spread of the Bd chytrid fungus. For an introduction to the global amphibian crisis please go here.
How to grab and inseminate females
Amplexus (the action whereby males clasp hold of females) is usually axillary in toads (that is, males grab females around the chest), but inguinal amplexus (where the male grabs the female in front of her hindlimbs) is practised by a few species, including the African tree toads (Nectophrynoides), the Pico Blanco toad Incilius fastidiosus, Holdridge's toad I. holdridgei, the plump toads (Osornophryne) and the Colombian species Rhinella cristinae (Graybeal & de Queiroz 1992, Vélez-R. & Ruiz-C. 2002). In the inguinal amplexus used by some Nectophrynoides African tree toads (Malcolm's Ethiopian toad N. malcolmi) and by some Nimba toads (Nimbaphrynoides), the male clings to the female's ventral surface (Mattison 1987). In the larger clade that includes Bufonidae, axillary amplexus is the norm, indicating that inguinal amplexus is derived for toads, not primitive.
Some species of South American stubfoot toad (Atelopus) remain in amplexus for an amazing length of time: a few weeks is apparently fairly common; more incredibly, Dole & Durant (1974) reported individuals of the Rednose stubfoot A. oxyrhynchus to join in amplexus in December, to begin migration towards the breeding streams in April or May, and to spawn in May or June. In one case, a pair remained in amplexus for a staggering 125 days. During this period of prolonged amplexus, the male declines in condition, becoming thinner and thinner. It’s been suggested that prolonged amplexus has evolved because the individuals of the species concerned (inhabitants of high-altitude moorland) are widely spaced out and that meetings must be taken advantage of (Mattison 1987).
As is fairly common in anurans, male toads have stronger arms than females, and they also have special roughened or spicule-covered patches on the hands or fingers that help them to maintain amplexus [those of B. bufo are shown here, from Mattison (1987)]. These ‘nuptial excrescences’ become larger and more obvious during the breeding season and are hence seasonal secondary sexual characteristics. Skin texture may also change for the breeding season (usually becoming smoother), and the vocal sac and associated vocal slits of males also enlarge at this time.
Viviparity has evolved more than once within toads. It's practised by the African tree toads Nectophrynoides liberiensis and N. occidentalis and by Loveridge's snouted toad Mertensophryne micranotis. African tree toads as a whole are sometimes said to be viviparous, but in fact some have a tadpole stage, and others (N. tornieri and N. viviparus) are ovoviviparous (that is, the eggs develop into froglets inside the body of the mother, but are nourished by their own yolk rather than by uterine means). Again it's worth making the point that reproductive diversity in anurans is staggering, with just about everything and anything conceivable being practised by some group or other [new species of Tanzanian Nectophrynoides shown here: photo by P. Whitehorn. Check out those poison glands].
Obviously, viviparity can only evolve in species where internal fertilisation occurs. How did internal fertilisation evolve in toads? We don’t really know, but there’s some indication that it might be an ‘accidental’ consequence of inguinal amplexus: the implication being that inguinal amplexus brings the male’s vent into closer, more enduring contact than does axillary amplexus. In plump toads, a ‘cloacal tube’ is said to project from the male’s vent (the only descriptions are vague and unhelpful), and it has been suggested that this might represent a step towards internal fertilisation.
More toads soon. You know you love it.
For previous articles in the monumental, ground-breaking toad series see…
For previous articles on hyloid anurans see…
- Britain’s lost tree frogs: sigh, not another ‘neglected native’
- Ghost frogs, hyloids, arcifery.. what more could you want?
- Green-boned glass frogs, monkey frogs, toothless toads
- It’s the Helmeted water toad!
- Horn-headed biting frogs and pouches and false teeth
- More wide-mouthed South American horned frogs
- We need MORE FROGS
Refs – –
De la Riva, I. 2004. Taxonomic status of Bufo simus O. Schmidt, 1857 (Anura: Bufonidae). Journal of Herpetology 38, 431-434.
Dole, J. W. & Durant, P. 1974. Movements and seasonal activity of Atelopus oxyrhynchus (Anura: Atelopidae) in a Venezuelan cloud forest. Copeia 1974, 230-235.
Ehret, G. Tautz, J. & Schmitz, B. 1990. Hearing through the lungs: lung-eardrum transmission of sound in the frog Eleutherodactylus coqui. Naturwissenschaften 77, 192-194.
Graybeal, A. & de Queiroz, K. 1992. Inguinal amplexus in Bufo fastidiosus, with comments on the systematics of bufonid frogs. Journal of Herpetology 26, 84-87.
Lindquist, E. D. & Hetherington, T. E. 1998. Semaphoring in an earless frog: the origin of a novel visual signal. Animal Cognition 1, 83-87.
Mattison, C. 1987. Frogs & Toads of the World. Blandford, London.
McCranie, J. R., Wilson, L. D. & Williams, K. L. 1989. A new genus and species of toad (Anura: Bufonidae) with an extraordinary stream-adapted tadpole from northern Honduras. Occasional Papers of the Museum of Natural History, the University of Kansas 129, 1-18.
Müller, H., Measey, G. J. & Malonza, P. K. 2005. Tadpole of Bufo taitanus (Anura: Bufonidae) with notes on its systematic significance and life history. Journal of Herpetology 39, 138-141.
Shannon, F. A. & Werler, J. E. 1955. Notes on amphibians of the Los Tuxtlas range of Veracruz, Mexico. Transactions of the Kansas Academy of Science 58, 360-386.
Sinsch, U. 1992. Seasonal changes in the migratory behaviour of the toad Bufo bufo: direction and magnitude of movements. Oecologia 76, 390-398.
Vélez-R., C. M. & Ruiz-C., P. M. 2002. A new species of Bufo (Anura: Bufonidae) from Colombia. Herpetologica 58, 453-462. | 2026-02-01T23:17:33.549759 |
449,933 | 3.69905 | http://www.saintmarksschool.org/academics/media-literacy/index.aspx | Saint Mark's Media Literacy program teaches students to become both critical consumers and inventive creators of mass media. This exciting, innovative program - honored by the National Association of Independent Schools Leading Edge Award in 2005 - raises students' awareness of how mass media influences individuals and society.
Students learn to identify and deconstruct the codes and conventions the media uses to sell, persuade, entertain and educate. They also learn fundamental critical thinking, communication, research and technological skills.
The program, which is constantly evolving and expanding, includes integrated lessons and projects in many grade levels. It culminates in grade eight when students create their own original media productions in an intensive week.
Becoming media creators - and having to articulate the thinking behind their projects both orally and in writing - greatly deepens students' media literacy.
Student productions have covered a broad range of topics in a wide variety of media formats. Past examples include: digital self-portraits that challenge media stereotypes of teenagers, public service videos that reveal the media's role in underage alcohol and tobacco use and websites that examine the link between television and obesity. | 2026-01-25T04:49:57.651418 |
813,862 | 3.760162 | http://history1800s.about.com/od/innovators/p/darwinbio.htm | Charles Darwin's Great Achievement:
As the foremost proponent of the theory of evolution, the British naturalist Charles Darwin holds a unique place in history. While he lived a relatively quiet and studious life, his writings were controversial in their day, and can still inspire controversy in the modern world.
Early Life of Charles Darwin:
Charles Darwin was born on February 12, 1809 at Shrewsbury, England. His father was a medical doctor, and his mother was the daughter of the famous potter Josiah Wedgwood. Darwin’s mother died when he was eight, and he was essentially raised by older sisters. He was not a brilliant student as a child, but went on to university at Edinburgh, Scotland, at first intending to become a doctor.
Darwin took a strong dislike to medical education, and eventually studied at Cambridge. He planned to become an Anglican minister before becoming intensely interested in botany. He received a degree in 1831.
Voyage of the Beagle:
On the recommendation of a college professor, Darwin was accepted to travel on the second voyage of the H.M.S. Beagle. The ship was embarking on a scientific expedition to South America and islands of the South Pacific, leaving in late December 1831. The Beagle returned to England nearly five years later, in October 1836.
Darwin spent more than 500 days at sea and about 1,200 days on land during the trip. He studied plants, animals, fossils, and geological formations and wrote his observations in a series of notebooks. During long periods at sea he organized his notes.
Early Writings of Charles Darwin:
Three years after returning to England, Darwin published Journal of Researches, an account of his observations during the expedition aboard the Beagle. The book was an entertaining account of Darwin's scientific travels and was popular enough to be published in successive editions.
Darwin also edited five volumes titled Zoology of the Voyage of the Beagle, which included contributions by other scientists. Darwin himself wrote sections dealing with the distribution of animal species and geological notes on fossils he had seen.
Development of Charles Darwin's Thinking:
The voyage on the Beagle was, of course, a highly significant event in Darwin’s life, but his observations on the expedition were hardly the only influence on the development of his theory of natural selection. He was also greatly influenced by what he was reading.
In 1838 Darwin read an Essay on the Principle of Population, which the British philosopher Thomas Malthus had written 40 years earlier. The ideas of Malthus helped Darwin refine his own notion of “survival of the fittest.”
Charles Darwin Refines His Ideas of Natural Selection:
Malthus had been writing about overpopulation, and discussed how some members of society were able to survive difficult living conditions. After reading Malthus, Darwin kept collecting scientific samples and data, eventually spending 20 years refining his own thoughts on natural selection.
Darwin married in 1839. Illness prompted him to move from London to the country in 1842. His scientific studies continued, and he spent years studying barnacles, for instance.
Publication of Darwin's Masterpiece:
Darwin’s reputation as a naturalist and geologist had grown throughout the 1840s and 1850s, yet he had not revealed his ideas about natural selection widely. Friends urged him to publish them in the late 1850s. And it was the publication of an essay by Alfred Russell Wallace expressing similar thoughts that encouraged Darwin to write a book setting out his own ideas.
In July 1858 Darwin and Wallace appeared together at the Linnean Society of London. And in November 1859 Darwin published the book that secured his place in history, On the Origin of Species By Means of Natural Selection.
Darwin Inspires Controversy:
Charles Darwin was not the first person to propose that plants and animals adapt to circumstances and evolve over eons of time. But Darwin's book put forth his hypothesis in an accessible format, and led to controversy.
Darwin's theories had an almost immediate impact upon religion, science, and society at large.
Charles Darwin's Later Life:
On the Origin of Species was published in several editions, with Darwin periodically editing and updating material in the book.
And while society debated Darwin's work, he lived a quiet life in the English countryside, content to conduct botanical experiments. He was highly respected, regarded as a grand old man of science. He died on April 19, 1882, and was honored by being buried in Westminster Abbey in London. | 2026-01-30T21:21:18.653397 |
1,124,377 | 4.334073 | http://ebookily.org/pdf/articles-definite-and-indefinite-worksheets | Articles Definite And Indefinite Worksheets PDF
GRAMMAR / Definite and Indefinite Articles INSTRUCTIONS FOR THE TEACHER This exercise is a supplement to the exercises of In Charge 1, Unit 9, pages 109 through 112. 1. Distribute the Student Worksheet to your students. Ask them if they have seen the
Write the correct form of the definite article for each of the ... Write the correct form of the indefinite article for each of the following nouns ... 18. _____ lecciones 19. _____ hacha 20. _____ hachas. This material is the property of the AR Dept. of ... (Page 1 of 1, articles_wksht)
Definite and Indefinite Articles. The English definite article is the. It is used to identify a particular person or thing. If you are speaking about someone or something you are already familiar with, you use the with the noun. Look at
ARTICLES Exercises Indefinite article 1. This is ..... orange. 2. That is ..... book. 3. This is ... B. Insert definite or indefinite articles, the an, a, where necessary: 1. Greeks like ..... coffee. 2. English like ..... tea. 3 ...
The indefinite articles A/AN and the definite article THE The indefinite article ... * The definite article THE is used when it is clear which thing or person we are talking about, for example when the noun is mentioned for a
Definite vs. indefinite nouns (i.e., Is the noun referring to a specific object, place, or thing?) Countability ... It is not unusual for an ESL writer to place indefinite articles (“A” and “An”) correctly, but to have trouble using definite articles (“The”).
Definite Articles (The/Those) Masculine Feminine Singular: Singular: Plural: Plural: Indefinite Articles (A/An) Masculine Feminine Singular: Singular: Plural: Plural: Write the correct definite article in the space.
• Worksheet: definite articles, indefinite articles, pronouns • Activity Sheet: choosing appropriate pronouns, guessing gender of new nouns • Check Your Knowledge: vocabulary, gender rules, articles, pronouns; 50 points Background
DEFINITE & INDEFINITE ARTICLES Instructions: Fill in the blank with the correct form of the article. Group A: Definite Articles 1. ... Group B : Indefinite Articles 1. Le voy a traer _____ libro. 2. Tomás trabaja en _____ oficina. 3. ...
Articles: Definite The Indefinite A An Some Rules to remember: 1). When using a singular count noun, it MUST always be modified by something, i.e., articles, possessives, demonstratives. Examples: The job My job That job 2).
ARTICLES AND OTHER DETERMINERS 1. ARTICLES: BASIC INFORMATION 1. ... definite or indefinite. If they are definite (in other words, if our hearer or reader knows exactly which ones we mean), we normally use the. If we are talking about indefinite things (which
Definite / Indefinite articles – pg.17 ... Microsoft Word - GRAMMAR WORKSHEETS 1BCHTO.doc Author: Profesor Created Date: 4/16/2008 2:39:33 PM ...
Which are indefinite? • We use definite articles when talking about a _____ person, place, or thing. • We use ... definite articles in Spanish. • Use your worksheets to complete the chart. Masculine Feminine Singular Plural Singular Plural
Use the definite article the to indicate a specific singular or plural noun. I ate the apple in my lunch. (not just any apple, ... Use the indefinite articles, a and an, to indicate non-specific, singular, countable nouns. I ate an apple. (no specific apple)
The words ‘a’ and ‘an’ are called indefinite articles. They are used with singular countable nouns. Example: Have you a pencil? The word ‘the’ is called the definite article. It is used before a noun which refers to something or someone definite.
Circle the indefinite pronoun in each of the following sentences. Then, underline the correct ... The following indefinite pronouns are singular: anybody, ... A pronoun that does not refer to a definite person, place, thing, or idea is called an indefinite pronoun.
There are two types of articles, definite and indefinite. However, in your choice about whether to use an article, or which one to use, you have four possible choices: the, a, an, or no article. ‘the’ is known as the DEFINITE ARTICLE
What are definite and indefinite articles? Which nouns have which gender? How to say it, someone and no one ... 6 slides, 10 Flash activities, 4 worksheets. This presentation covers: two articles with accompanying comprehension ..., the layout of formal German letters, formal greetings and endings
Articles, definite / indefinite Nouns, gender and number Adjectives, Day of the Dead, Poem and Music, Vocabulary: adjectives, the ... Worksheets Flash cards, visual aids, book Viaje a España, Videos, Conversation, Worksheets, Presentations, Drills, Written exercises.
“A” and “an” are indefinite articles, and “the” is the only definite article. ... Articles Grammar Practice Worksheets. with the names of oceans, seas, deserts, ... noun is definite: The wisdom of that woman is amazing. with the names of roads,
The most important rule about the use of articles is that an article is required with a singular ... (Definite Article) A (Indefinite Article) Exercises : Supply the article if it is required.
Goal: Learn about definite and indefinite articles. Then practice using ... Definite articles (in English, the) are used with nouns to indicate specific persons, places, ... -handouts/worksheets-quiz/test dates: Homework Policy:
... definite / indefinite / zero articles; modal verbs; contrasting with although; ... them swap worksheets and read each other’s ideas. ... cats), the middle level, the indefinite article, e.g. I saw a cat (= one cat, but not defined), and the bottom level,
Review SER w/ Definite & Indefinite Articles [speaking]. Created worksheets from: Spanish is Fun, En Español 1 & Buen Viaje 1 ... -Teacher created worksheets & worksheets from: Buen Viaje 2 workbook & from the Web. -PowerPoint Presentation.
January: Definite & indefinite articles, singular & plural forms of nouns, mid term review & testing ... Worksheets and book exercises to practice all essential content & objectives Games, such as Bingo, for number & alphabet practice
... the articles are the (definite article) and a/an (indefinite article). Masculine Feminine Definite article Singular el diccionario la computadora Plural los diccionarios las computadoras ... indefinite article, then, the definite: En mi calle hay un restaurante. El restaurante es grande.
of exercises and worksheets to practice spanish. the activities should be completed in class unless assigned as homework. 2 . 1 querer (to want) ... definite and indefinite articles write the correct article in front of the nouns (un/unos/una/unas)
worksheets skit share/pair talks group discussion guided notes culture research ... definite articles interrogatives writing the date gustar keynote presentation/epson video ... indefinite articles telling time definite article replacement adjective agreement numbers 31-199
Students will be introduced to Indefinite/ Definite Articles in content. Spanish Grade 5: 1. Students will recite the Spanish Alphabet and vowel sounds. 2. Students will recognize and respond to greetings. 3.
Reinforcement worksheets DVD’s with a cultural component CDs of Spanish music Spanish children’s books Objectives: The student will: ... • Use definite articles. • Use indefinite articles. • Use possessive pronouns and subject pronouns.
Definite and Indefinite articles. Adjectives The letter H The y sound Accents Being able to use vocabulary and tenses to combine ... worksheets to re-enforce the material taught in class. Test Chapter 4 Quizzes Chapter 4.A and 4.B Oral questioning
Worksheets Language Lab activities Oral questioning Oral and written proficiency quizzes. Course Curriculum Document FORM B ... recognize and match the definite and indefinite articles and noun endings. correctly use adjective agreement (gender and number).
Articles (definite, indefinite) Nationalities, School/Classroom, Clothing, Colors, Numbers- 0-1,000,000 House & Home, Possessive Adjectives ... writing worksheets. Aiming for Proficiency: Unit 6, Unit 10 tests LOTE2-K1-1A LOTE1-K1-1A LOTE1-K1-1B LOTE1-K1-1C
Begin to use definite and indefinite articles with their respective nouns ... Supplemental worksheets from other work books. Author: lolivas Created Date: 7/3/2013 3:39:24 PM ...
definite/indefinite articles . Day of the Dead . literature: Teacher-made quizzes and worksheets : Verbal checks . Supersite: www.vhlcentral.com . Teacher-generated/ modified Chapter 1 Enfoques test. Std. 1: Students will be able to use a language
number of nouns and adjectives, definite/indefinite articles, pronouns, possessive adjectives, questions, informal ... Ejercicios (“Exercises” ‐ Work done in class, worksheets) ...
verbs, present tense of basic irregular verbs, negation, indefinite and definite articles, and gender. 3. Culture: friends, family, school, sports, food, money, and travel. 4. Notre Père ... Worksheets, puzzles, games 6. Quizzes, tests, homework, and oral participation . Department
Definite and Indefinite articles Cultural differences among Hispanics Identify cultural and language differences among ... Worksheets Student Created Interview Dictations Buen Viaje textbook Transparencies Worksheets Authentic Literature
A demonstrative adjective points out a definite person, place, thing, or ... A and an are the indefinite articles. Each is used to refer to a single member ...
Skill building worksheets Classroom discussions Composition about a past vacation Comprehension questions about reading texts (in German) ... Convert definite and indefinite articles to the genitive case Identify situations and phrases where genitive would be necessary (possession, ...
Articles (definite and indefinite) 2. Gender recognition 3. Informal commands 4. Plurals (only with definite articles) E. Suggested Activities ... 6. Tear sheet vocabulary 7. Matching games 8. Symtalk 9. Students give and act out commands . 10. Illustrated worksheets 11. Word search 12 ...
Elicit the basic grammatical rules for using indefinite and definite articles when describing something in a paragraph. (i.e., ... • Put test takers into groups and give out the worksheets. Ask them to use their understanding of articles to
worksheets Getting re ... Introduce the definite and indefinite articles
All the work done in the notebook and worksheets will be tested. Kindly refer to the handouts given in the class. Campus: Balestier Session ... Revision of indefinite articles (un/une/des) and definite articles (le/la/l’/les) Forms of color names for masculine and feminine nouns and for ...
understand definite and indefinite articles and when to use them understand how to use pronouns, ... Session 3 Articles: the, a, an ... worksheets . Author: Jessica Ritchie
Correct usage of the definite article, the indefinite article and selected quantifiers Function or Place 1 Function or Place 2 Indefinite article 1 ... The four worksheets in this area are in pdf format, while the 17 web pages
ARTICLES Definite Article: THE Indefinite Article: A, AN PRONOUNS Personal subject pronouns: I, YOU, HE, SHE, IT, WE, YOU, THEY. e.g. ... It is more definite than the other conditionals. e.g. If water is heated to 212 degrees, it boils.
definite/indefinite articles, nouns, pronouns and adjectives -To conjugate être in the present tense ... worksheets/translations and listening activities CD program : CD #3 Transparencies: B2.1-2.8, V 2.1-2.5, P2, C2 Situation Cards
textbooks, but Spanish I worksheets and grammar guides are available to help you ... articles, etc. ... Definite and indefinite articles Word order: placement of adjectives Ordinal numbers
Use definite and indefinite articles k. ... Paper and pencil worksheets, quizzes and tests. (vocabulary, ... online articles), games such as Simon Says e. Oral assessments (daily, informal and formal). Gauging level of pronunciation and proficiency through speaking proficiency assessments on ... | 2026-02-04T15:28:02.871802 |
1,042,815 | 3.866618 | http://psychology.wikia.com/wiki/Biomedical_intervention_for_autism |
Autism therapies attempt to lessen the deficits and family distress associated with autism and other autism spectrum disorders (ASD), and to increase the quality of life and functional independence of autistic individuals, especially children. No single treatment is best, and treatment is typically tailored to the child's needs. Treatments fall into two major categories: educational interventions and medical management. Training and support are also given to families of those with ASD.
Studies of interventions have methodological problems that prevent definitive conclusions about efficacy. Although many psychosocial interventions have some positive evidence, suggesting that some form of treatment is preferable to no treatment, the methodological quality of systematic reviews of these studies has generally been poor, their clinical results are mostly tentative, and there is little evidence for the relative effectiveness of treatment options. Intensive, sustained special education programs and behavior therapy early in life can help children with ASD acquire self-care, social, and job skills, and often can improve functioning, and decrease symptom severity and maladaptive behaviors; claims that intervention by around age three years is crucial are not substantiated. Available approaches include applied behavior analysis (ABA), developmental models, structured teaching, speech and language therapy, social skills therapy, and occupational therapy. Educational interventions have some effectiveness in children: intensive ABA treatment has demonstrated effectiveness in enhancing global functioning in preschool children, and is well-established for improving intellectual performance of young children. Neuropsychological reports are often poorly communicated to educators, resulting in a gap between what a report recommends and what education is provided. The limited research on the effectiveness of adult residential programs shows mixed results.
Many medications are used to treat problems associated with ASD. More than half of U.S. children diagnosed with ASD are prescribed psychoactive drugs or anticonvulsants, with the most common drug classes being antidepressants, stimulants, and antipsychotics. Aside from antipsychotics, there is scant reliable research about the effectiveness or safety of drug treatments for adolescents and adults with ASD. A person with ASD may respond atypically to medications, the medications can have adverse effects, and no known medication relieves autism's core symptoms of social and communication impairments.
Many alternative therapies and interventions are available, ranging from elimination diets to chelation therapy. Few are supported by scientific studies. Treatment approaches lack empirical support in quality-of-life contexts, and many programs focus on success measures that lack predictive validity and real-world relevance. Scientific evidence appears to matter less to service providers than program marketing, training availability, and parent requests. Even if they do not help, conservative treatments such as changes in diet are expected to be harmless aside from their bother and cost. Dubious invasive treatments are a much more serious matter: for example, in 2005, botched chelation therapy killed a five-year-old autistic boy.
Treatment is expensive; indirect costs are more so. For someone born in 2000, a U.S. study estimated an average discounted lifetime cost of about $3.2 million (2003 dollars), with about 10% going to medical care, 30% to extra education and other care, and 60% to lost economic productivity. A UK study estimated discounted lifetime costs at approximately £1.23 million and £0.80 million for an autistic person with and without intellectual disability, respectively (2005/06 estimate). Legal rights to treatment are complex, vary with location and age, and require advocacy by caregivers. Publicly supported programs are often inadequate or inappropriate for a given child, and unreimbursed out-of-pocket medical or therapy expenses are associated with a greater likelihood of family financial problems; one 2008 U.S. study found a 14% average loss of annual income in families of children with ASD, and a related study found that ASD is associated with a higher probability that child care problems will greatly affect parental employment. After childhood, key treatment issues include residential care, job training and placement, sexuality, social skills, and estate planning.
Educational interventions attempt to help children not only to learn academic subjects and gain traditional readiness skills, but also to improve functional communication and spontaneity, enhance social skills such as joint attention, gain cognitive skills such as symbolic play, reduce disruptive behavior, and generalize learned skills by applying them to new situations. Several model programs have been developed, which in practice often overlap and share many features, including:
- early intervention that does not wait for a definitive diagnosis;
- intense intervention, at least 25 hours/week, 12 months/year;
- low student/teacher ratio;
- family involvement, including training of parents;
- interaction with neurotypical peers;
- structure that includes predictable routine and clear physical boundaries to lessen distraction; and
- ongoing measurement of a systematically planned intervention, resulting in adjustments as needed.
Several educational intervention methods are available, as discussed below. They can take place at home, at school, or at a center devoted to autism treatment; they can be done by parents, teachers, speech and language therapists, and occupational therapists. A 2007 study found that augmenting a center-based program with weekly home visits by a special education teacher improved cognitive development and behavior.
As noted above, methodological problems in studies of these interventions prevent definitive conclusions about efficacy. Concerns about outcome measures, such as their inconsistent use, most greatly affect how the results of scientific studies are interpreted. A 2009 Minnesota study found that parents follow behavioral treatment recommendations significantly less often than they follow medical recommendations, and that they adhere more often to reinforcement than to punishment recommendations. Intensive, sustained special education programs and behavior therapy early in life can help children acquire self-care, social, and job skills, and often improve functioning and decrease symptom severity and maladaptive behaviors.
Applied behavior analysis
- Further information: Applied behavior analysis
Applied behavior analysis (ABA) is the applied research field of the science of behavior analysis, and it underpins a wide range of techniques used to treat autism and many other behaviors and diagnoses. ABA-based interventions focus on teaching tasks one-on-one using the behaviorist principles of stimulus, response and reward, and on reliable measurement and objective evaluation of observed behavior. There is wide variation in the professional practice of behavior analysis and among the assessments and interventions used in school-based ABA programs. Many interventions rely heavily on discrete trial teaching (DTT) methods, which use stimulus-response-reward techniques to teach foundational skills such as attention, compliance, and imitation. However, children have problems using DTT-taught skills in natural environments. In functional assessment, a common technique, a teacher formulates a clear description of a problem behavior, identifies antecedents, consequents, and other environmental factors that influence and maintain the behavior, develops hypotheses about what occasions and maintains the behavior, and collects observations to support the hypotheses. A few more-comprehensive ABA programs use multiple assessment and intervention methods individually and dynamically.
ABA-based techniques have demonstrated effectiveness in several controlled studies: children have been shown to make sustained gains in academic performance, adaptive behavior, and language, with outcomes significantly better than control groups. A 2009 review of educational interventions for children, whose mean age was six years or less at intake, found that the higher-quality studies all assessed ABA, that ABA is well-established and no other educational treatment is considered probably-efficacious, and that intensive ABA treatment, carried out by trained therapists, is demonstrated effective in enhancing global functioning in pre-school children. A 2008 evidence-based review of comprehensive treatment approaches found that ABA is well-established for improving intellectual performance of young children with ASD. A 2009 comprehensive synthesis of early intensive behavioral intervention (EIBI), a form of ABA treatment, found that EIBI produces strong effects, suggesting that it can be effective for some children with autism; it also found that the large effects might be an artifact of comparison groups with treatments that have yet to be empirically validated, and that no comparisons between EIBI and other widely recognized treatment programs have been published. A 2009 systematic review came to the same principal conclusion that EIBI is effective for some but not all children, with wide variability in response to treatment; it also suggested that any gains are likely to be greatest in the first year of intervention. A 2009 meta-analysis concluded that EIBI has a large effect on full-scale intelligence and a moderate effect on adaptive behavior. However, a 2009 systematic review and meta-analysis found that applied behavior intervention (ABI), another name for EIBI, did not significantly improve outcomes compared with standard care of preschool children with ASD in the areas of cognitive outcome, expressive language, receptive language, and adaptive behavior.
Pivotal response therapy
- Main article: Pivotal response therapy
Pivotal response therapy or treatment (PRT) is a naturalistic intervention derived from ABA principles. Instead of individual behaviors, it targets pivotal areas of a child's development, such as motivation, responsivity to multiple cues, self-management, and social initiations; it aims for widespread improvements in areas that are not specifically targeted. The child determines the activities and objects that will be used in a PRT exchange. Deliberate attempts at the target behavior are rewarded with a natural reinforcer: for example, if a child attempts a request for a stuffed animal, the child receives the animal, not a piece of candy or other unrelated reinforcer.
Treatment and education of autistic and related communication handicapped children (TEACCH), which has come to be called "structured teaching", emphasizes structure by using organized physical environments, predictably sequenced activities, visual schedules and visually structured activities, and structured work/activity systems where each child can practice various tasks. Parents are taught to implement the treatment at home. A 1998 controlled trial found that children treated with a TEACCH-based home program improved significantly more than a control group.
Communication interventions fall into two major categories. First, many autistic children do not speak, have little speech, or have difficulty using language effectively. Interventions that attempt to improve communication are commonly conducted by speech and language therapists, and work on joint attention, communicative intent, and augmentative and alternative communication (AAC) methods such as visual methods. Little solid research supports the efficacy of speech therapy for autism; AAC methods do not appear to impede speech and may result in modest gains. A 2006 study reported benefits both for joint attention intervention and for symbolic play intervention, and a 2007 study found that joint attention intervention is more likely than symbolic play intervention to lead children to engage later in shared interactions.
Second, social skills treatment attempts to increase social and communicative skills of autistic individuals, addressing a core deficit of autism. A wide range of intervention approaches is available, including modeling and reinforcement, adult and peer mediation strategies, peer tutoring, social games and stories, self-management, pivotal response therapy, video modeling, direct instruction, visual cuing, circle of friends, and social-skills groups. A 2007 meta-analysis of 55 studies of school-based social skills intervention found that they were minimally effective for children and adolescents with ASD, and a 2007 review found that social skills training has minimal empirical support for children with Asperger syndrome or high-functioning autism.
Unusual responses to sensory stimuli are more common and prominent in children with autism, although there is not good evidence that sensory symptoms differentiate autism from other developmental disorders. Several therapies have been developed to treat Sensory Integration Dysfunction. Some of these treatments (for example, sensorimotor handling) have a questionable rationale and have no empirical evidence. Other treatments have been studied, with small positive outcomes, but few conclusions can be drawn due to methodological problems with the studies. These treatments include prism lenses, physical exercise, auditory integration training, and sensory stimulation or inhibition techniques such as "deep pressure"—firm touch pressure applied either manually or via an apparatus such as a hug machine or a pressure garment. Weighted vests, a popular deep-pressure therapy, have only a limited amount of scientific research available, which on balance indicates that the therapy is ineffective. Although replicable treatments have been described and valid outcome measures are known, gaps exist in knowledge related to sensory integration dysfunction and therapy. Because empirical support is limited, systematic evaluation is needed if these interventions are used.
Music therapy uses the elements of music to let people express their feelings and communicate. Two small studies have reported short-term improvement in the verbal and gestural communication skills of autistic children after a week of daily sessions; no significant effects on behavior problems were observed.
Animal-assisted therapy, where an animal such as a dog or a horse becomes a basic part of a person's treatment, is a controversial treatment for some symptoms. A 2007 meta-analysis found that animal-assisted therapy is associated with a moderate improvement in autism spectrum symptoms. Reviews of published dolphin-assisted therapy (DAT) studies have found important methodological flaws and have concluded that there is no compelling scientific evidence that DAT is a legitimate therapy or that it affords any more than fleeting improvements in mood.
Neurofeedback has been hypothesized to improve focusing and decrease anxiety in individuals with ASD. One pilot study investigated this hypothesis in ten adolescent boys diagnosed with Asperger syndrome. Five boys dropped out during the study; results on the remaining boys were positive but were not statistically significant.
- Main article: Son-Rise
Son-Rise is a home-based program that emphasizes eye contact, accepting the child without judgment, and joining in with the child's repetitive and restricted behaviors. Proponents claim that children will decide to become non-autistic after parents accept them for who they are and engage them in play. Initially, parents and their child go to live at the Autism Treatment Center of America, which is based at the Option Institute, for a week and sometimes longer. Staff from the center help parents with their personal problems in order to teach them how to set aside their judgments and beliefs, and encourage families to remain hopeful about their child's future.
The program was started by the parents of Raun Kaufman, who is claimed to have gone from being autistic to normal via the treatment in the early 1970s. No independent study has tested the efficacy of the program, but a 2003 study found that involvement with the program led to more drawbacks than benefits for the involved families over time, and a 2006 study found that the program is not always implemented as it is typically described in the literature, which suggests it will be difficult to evaluate its efficacy.
In packing, children are wrapped tightly for up to an hour in wet sheets that have been refrigerated, with only their heads left free. The treatment is repeated several times a week, and can continue for years. It is intended as treatment for autistic children who harm themselves; most of these children cannot speak. Similar envelopment techniques have been used for centuries, such as to calm violent patients in Germany in the 19th century; its modern use in France began in the 1960s, based on psychoanalytic theories such as the theory of the refrigerator mother. Packing is currently used in hundreds of French clinics. There is no scientific evidence for the effectiveness of packing, and some concern about risk of adverse health effects.
The Judge Rotenberg Educational Center uses aversion therapy, notably contingent shock (electric shock delivered to the skin for a few seconds), to control the behavior of its patients, many of whom are autistic. The practice is controversial.
Patterning is a set of exercises that attempts to improve the organization of a child's neurologic impairments. It has been used for decades to treat children with several unrelated neurologic disorders, including autism. The method, taught at The Institutes for the Achievement of Human Potential, is based on oversimplified theories and is not supported by carefully designed research studies.
Parent-mediated interventions
Parent-mediated interventions offer support and practical advice to parents of autistic children. Randomized and controlled studies suggest that parent training leads to reduced maternal depression, improved maternal knowledge of autism and communication style, and improved child communicative behavior. A 2006 randomized controlled trial (RCT) found that a twenty-week parent education and behavior management (PEBM) program provided significant improvements in parental mental health and well-being, particularly for parents with preexisting mental health problems. A 2008 pilot trial of Parent-Child Interaction Therapy, a parent coaching intervention model, for boys aged 5–12 with high-functioning ASD and behavioral problems, found increases in child adaptability and reductions in parent perceptions of child problem behaviors.
Drugs, supplements, or diets are often used to alter physiology in an attempt to relieve common autistic symptoms such as seizures, sleep disturbances, irritability, and hyperactivity that can interfere with education or social adaptation or (more rarely) cause autistic individuals to harm themselves or others. There is plenty of anecdotal evidence to support medical treatment; many parents who try one or more therapies report some progress, and there are a few well-publicized reports of children who are able to return to mainstream education after treatment, with dramatic improvements in health and well-being. However, this evidence may be confounded by improvements seen in autistic children who grow up without treatment, by the difficulty of verifying reports of improvements, and by the lack of reporting of treatments' negative outcomes. Only a very few medical treatments are well supported by scientific evidence using controlled experiments.
As noted above, more than half of U.S. children diagnosed with ASD are prescribed psychoactive drugs or anticonvulsants, with the most common drug classes being antidepressants, stimulants, and antipsychotics. Only the antipsychotics have clearly demonstrated efficacy.
Research has focused on atypical antipsychotics, especially risperidone, which has the largest amount of evidence that consistently shows improvements in irritability, self-injury, aggression, and tantrums associated with ASD. Risperidone is approved by the Food and Drug Administration (FDA) for treating symptomatic irritability in autistic children and adolescents. In short-term trials (up to six months) most adverse events were mild to moderate, with weight gain, drowsiness, and high blood sugar requiring monitoring; long term efficacy and safety have not been fully determined. It is unclear whether risperidone improves autism's core social and communication deficits. The FDA's decision was based in part on a study of autistic children with severe and enduring problems of tantrums, aggression, and self-injury; risperidone is not recommended for autistic children with mild aggression and explosive behavior without an enduring pattern.
Other drugs are prescribed off-label in the U.S., which means they have not been approved for treating ASD. Large placebo-controlled studies of olanzapine and aripiprazole were underway in early 2008. Some selective serotonin reuptake inhibitors (SSRIs) and dopamine blockers can reduce some maladaptive behaviors associated with ASD. Although SSRIs reduce levels of repetitive behavior in autistic adults, a 2009 multisite randomized controlled study found no benefit and some adverse effects in children from the SSRI citalopram, raising doubts whether SSRIs are effective for treating repetitive behavior in autistic children. One study found that the psychostimulant methylphenidate was efficacious against hyperactivity associated with ASD, though with less response than in neurotypical children with ADHD. Of the many medications studied for treatment of aggressive and self-injurious behavior in children and adolescents with autism, only risperidone and methylphenidate demonstrate results that have been replicated. A 1998 study of the hormone secretin reported improved symptoms and generated tremendous interest, but several controlled studies since have found no benefit.
Oxytocin may play a role in autism and may be an effective treatment for repetitive and affiliative behaviors; two related studies in adults found that oxytocin decreased repetitive behaviors and improved interpretation of emotions, but these preliminary results do not necessarily apply to children. The experimental drug STX107 has stopped overproduction of metabotropic glutamate receptor 5 in rodents, and it has been hypothesized that this may help in about 5% of autism cases, but this hypothesis has not been tested in humans.
Beyond the antipsychotics, there is scant reliable research on the effectiveness or safety of drug treatments for adolescents and adults with ASD. Results of the handful of randomized controlled trials that have been performed suggest that risperidone, the SSRI fluvoxamine, and the typical antipsychotic haloperidol may be effective in reducing some behaviors, that haloperidol may be more effective than the tricyclic antidepressant clomipramine, and that the opiate antagonist naltrexone hydrochloride is not effective. Individuals with ASD may respond atypically to medications, the medications can have adverse side effects, and no known medication relieves autism's core deficits in social interaction and communication.
Many parents give their children vitamin and other nutritional supplements in an attempt to treat autism or to alleviate its symptoms. The range of supplements given is wide; few are supported by scientific data, but most have relatively mild side effects.
Proponents of orthomolecular psychiatry have claimed that nutritional supplementation with high-dose pyridoxine (vitamin B6) and magnesium (HPDM) alleviates the symptoms of autism; this is one of the most popular complementary and alternative medicine choices for autism. Three small randomized controlled trials have studied this therapy; the smallest one (with 8 individuals) found improved verbal IQ in the treatment group, and the other two (with ten and fifteen individuals, respectively) found no significant difference. Due to the limited data it is difficult to tell whether this treatment approach has effects greater than placebo. The short-term side effects seem to be mild, but there may be significant long-term side effects: high doses of pyridoxine cause peripheral neuropathy in adults, high doses of magnesium can cause reduced heart rate and weakened reflexes, and high magnesium concentrations are associated with seizures. High-dose pyridoxine can cause side effects such as irritability and sensitivity to sound, which can be managed through the use of magnesium.
Dimethylglycine (DMG) is hypothesized to improve speech and reduce autistic behaviors, and is a commonly used supplement. Two double-blind, placebo-controlled studies found no statistically significant effect on autistic behaviors, and reported few side effects. No peer-reviewed studies have addressed treatment with the related compound trimethylglycine.
Vitamin C decreased stereotyped behavior in a small 1993 study. The study has not been replicated, and vitamin C has limited popularity as an autism treatment. High doses might cause kidney stones or gastrointestinal upset such as diarrhea.
Probiotics containing potentially beneficial bacteria are hypothesized to relieve some symptoms of autism by minimizing yeast overgrowth in the colon. The hypothesized yeast overgrowth has not been confirmed by endoscopy, the mechanism connecting yeast overgrowth to autism is only hypothetical, and no clinical trials to date have been published in the peer-reviewed literature. No negative side effects have been reported.
Melatonin is sometimes used to manage sleep problems in developmental disorders. Adverse effects are generally reported to be mild, including drowsiness, headache, dizziness, and nausea; however, an increase in seizure frequency is reported among susceptible children. A 2008 open trial found that melatonin appears to be a safe and well-tolerated treatment for insomnia in children with ASD, and suggested controlled trials to determine efficacy; a small 2009 retrospective study had similar results for adults.
Although omega-3 fatty acids, which are polyunsaturated fatty acids (PUFA), are a popular treatment for children with ASD, there is very little scientific evidence supporting their effectiveness, and further research is needed.
Several other supplements have been hypothesized to relieve autism symptoms, including carnosine, cholesterol, cyproheptadine, D-cycloserine, folic acid, glutathione, metallothionein promoters, other PUFA such as omega-6 fatty acids, tryptophan, tyrosine, thiamine (see Chelation therapy), vitamin B12, and zinc. These lack reliable scientific evidence of efficacy or safety in treatment of autism.
- Further information: Gluten-free, casein-free diet
Atypical eating behavior occurs in about three-quarters of children with ASD, to the extent that it was formerly a diagnostic indicator. Selectivity is the most common problem, although eating rituals and food refusal also occur; this does not appear to result in malnutrition. Although some children with autism also have gastrointestinal (GI) symptoms, there is a lack of published rigorous data to support the theory that autistic children have more or different GI symptoms than usual; studies report conflicting results, and the relationship between GI problems and ASD is unclear.
In the early 1990s, it was hypothesized that autism can be caused or aggravated by opioid peptides like casomorphine that are metabolic products of gluten and casein. Based on this hypothesis, diets that eliminate foods containing either gluten or casein, or both, are widely promoted, and many testimonials can be found describing benefits in autism-related symptoms, notably social engagement and verbal skills. Studies supporting these claims have had significant flaws, so the data are inadequate to guide treatment recommendations.
Other elimination diets have also been proposed, targeting salicylates, food dyes, yeast, and simple sugars. No scientific evidence has established the efficacy of such diets in treating autism in children. An elimination diet may create nutritional deficiencies that harm overall health unless care is taken to assure proper nutrition. For example, a 2008 study found that autistic boys on casein-free diets have significantly thinner bones than usual, presumably because the diets contribute to calcium and vitamin D deficiencies.
Based on the speculation that heavy metal poisoning may trigger the symptoms of autism, particularly in small subsets of individuals who cannot excrete toxins effectively, some parents have turned to alternative medicine practitioners who provide detoxification treatments via chelation therapy. However, evidence to support this practice has been anecdotal rather than rigorous. Strong epidemiological evidence refutes links between environmental triggers, in particular thimerosal-containing vaccines, and the onset of autistic symptoms. No scientific data support the claim that the mercury in the vaccine preservative thimerosal causes autism or its symptoms, and there is no scientific support for chelation therapy as a treatment for autism.
Intravenous chelation with edetate calcium disodium (calcium disodium EDTA) has been used safely for over 40 years to treat lead-poisoned children and is approved by the FDA for that purpose. Between 1971 and 2007, however, the FDA received reports of 11 deaths associated with the use of edetate disodium, the non-calcium form of the drug, including two reports in 2003, two in 2005, and one in 2007. Nine of the deaths were reported following administration of edetate disodium by name; the other two reports referred only to use of "EDTA" without identifying a specific drug. http://www.fda.gov/Drugs/DrugSafety/PostmarketDrugSafetyInformationforPatientsandProviders/ucm113738.htm
Thiamine tetrahydrofurfuryl disulfide (TTFD) is hypothesized to act as a chelating agent in children with autism. A 2002 pilot study administered TTFD rectally to ten children on the autism spectrum and reported beneficial clinical effects. This study has not been replicated, and a 2006 review of thiamine by the same author did not mention thiamine's possible effect on autism. There is not sufficient evidence to support the use of thiamine (vitamin B1) to treat autism.
Chiropractic is an alternative medical practice whose main hypothesis is that mechanical disorders of the spine affect general health via the nervous system, and whose main treatment is spinal manipulation. A significant portion of the profession rejects vaccination, as traditional chiropractic philosophy equates vaccines to poison. Most chiropractic writings on vaccination focus on its negative aspects, claiming that it is hazardous, ineffective, and unnecessary, and in some cases suggesting that vaccination causes autism or that chiropractors should be the primary contact for treatment of autism and other neurodevelopmental disorders. Chiropractic treatment has not been shown to be effective for medical conditions other than back pain, and there is insufficient scientific evidence to make conclusions about chiropractic care for autism.
Craniosacral therapy is based on the theory that restrictions at cranial sutures of the skull affect rhythmic impulses conveyed via cerebrospinal fluid, and that gentle pressure on external areas can improve the flow and balance of the supply of this fluid to the brain, relieving symptoms of many conditions. There is no scientific support for major elements of the underlying model, there is little scientific evidence to support the therapy, and research methods that could conclusively evaluate the therapy's effectiveness have not been applied.
Studies indicate that 12–17% of adolescents and young adults with autism satisfy diagnostic criteria for catatonia, a syndrome marked by abnormally decreased or excessive motor activity. Electroconvulsive therapy (ECT) has been used to treat cases of catatonia and related conditions in people with autism. However, no controlled trials of ECT in autism have been performed, and there are serious ethical and legal obstacles to its use.
Hyperbaric oxygen therapy
Hyperbaric oxygen therapy (HBOT) can compensate for decreased blood flow by increasing the oxygen content in the body. It has been postulated that HBOT might relieve some of the core symptoms of autism. A small 2009 double-blind study of autistic children found that 40 hourly treatments of 24% oxygen at 1.3 atmospheres provided significant improvement in the children's behavior immediately after treatment sessions. The study has not been independently confirmed; further studies are planned or in progress.
Unlike conventional neuromotor prostheses, neurocognitive prostheses would sense or modulate neural function in order to physically reconstitute cognitive processes such as executive function and language. No neurocognitive prostheses are currently available but the development of implantable neurocognitive brain-computer interfaces has been proposed to help treat conditions such as autism.
Affective computing devices, typically with image or voice recognition capabilities, have been proposed to help autistic individuals improve their social communication skills. These devices are still under development. Robots have also been proposed as educational aids for autistic children.
The Table Talk of Martin Luther contains the story of a 12-year-old boy who may have been severely autistic. According to Luther's notetaker Mathesius, Luther thought the boy was a soulless mass of flesh possessed by the devil, and suggested that he be suffocated. In 2003 an autistic boy in Wisconsin suffocated during an exorcism in which he was wrapped in sheets.
Ultra-Orthodox Jewish parents sometimes use spiritual and mystical interventions such as prayers, blessings, recitations of religious text, holy water, amulets, changing the child's name, and exorcism.
One study suggested that the spirituality, but not the religious activities, of mothers of autistic children was associated with better outcomes for the child. Pargament has also studied religion as an aid in helping families cope with autism.
- Powell K (2004). Opening a window to the autistic brain. PLoS Biol 2 (8): E267.
- Myers SM, Johnson CP, Council on Children with Disabilities (2007). Management of children with autism spectrum disorders. Pediatrics 120 (5): 1162–82.
- Ospina MB, Krebs Seida J, Clark B et al. (2008). Behavioural and developmental interventions for autism spectrum disorder: a clinical systematic review. PLoS ONE 3 (11): e3755.
- Krebs Seida J, Ospina MB, Karkhaneh M, Hartling L, Smith V, Clark B (2009). Systematic reviews of psychosocial interventions for autism: an umbrella review. Dev Med Child Neurol 51 (2): 95–104.
- Rogers SJ, Vismara LA (2008). Evidence-based comprehensive treatments for early autism. J Clin Child Adolesc Psychol 37 (1): 8–38.
- Howlin P, Magiati I, Charman T (2009). Systematic review of early intensive behavioral interventions for children with autism. Am J Intellect Dev Disabil 114 (1): 23–41.
- Eikeseth S (2009). Outcome of comprehensive psycho-educational interventions for young children with autism. Res Dev Disabil 30 (1): 158–78.
- Kanne SM, Randolph JK, Farmer JE (2008). Diagnostic and assessment findings: a bridge to academic planning for children with autism spectrum disorders. Neuropsychol Rev 18 (4): 367–84.
- Van Bourgondien ME, Reichle NC, Schopler E (2003). Effects of a model treatment approach on adults with autism. J Autism Dev Disord 33 (2): 131–40.
- Leskovec TJ, Rowles BM, Findling RL (2008). Pharmacological treatment options for autism spectrum disorders in children and adolescents. Harv Rev Psychiatry 16 (2): 97–112.
- Medications for U.S. children with ASD:
  - Oswald DP, Sonenklar NA (2007). Medication use among children with autism spectrum disorders. J Child Adolesc Psychopharmacol 17 (3): 348–55.
  - Mandell DS, Morales KH, Marcus SC, Stahmer AC, Doshi J, Polsky DE (2008). Psychotropic medication use among Medicaid-enrolled children with autism spectrum disorders. Pediatrics 121 (3): e441–8.
- Posey DJ, Stigler KA, Erickson CA, McDougle CJ (2008). Antipsychotics in the treatment of autism. J Clin Invest 118 (1): 6–14.
- Angley M, Young R, Ellis D, Chan W, McKinnon R (2007). Children and autism—part 1—recognition and pharmacological management. Aust Fam Physician 36 (9): 741–4.
- Broadstock M, Doughty C, Eggleston M (2007). Systematic review of the effectiveness of pharmacological treatments for adolescents and adults with autism spectrum disorder. Autism 11 (4): 335–48.
- Buitelaar JK (2003). Why have drug treatments been so disappointing? Novartis Found Symp 251: 235–44; discussion 245–9, 281–97.
- Angley M, Semple S, Hewton C, Paterson F, McKinnon R (2007). Children and autism—part 2—management with complementary medicines and dietary interventions. Aust Fam Physician 36 (10): 827–30.
- Francis K (2005). Autism interventions: a critical update. Dev Med Child Neurol 47 (7): 493–9.
- Herbert JD, Sharp IR, Gaudiano BA (2002). Separating fact from fiction in the etiology and treatment of autism: a scientific review of the evidence. Sci Rev Ment Health Pract 1 (1): 23–43.
- Rao PA, Beidel DC, Murray MJ (2008). Social skills interventions for children with Asperger's syndrome or high-functioning autism: a review and recommendations. J Autism Dev Disord 38 (2): 353–61.
- Schechtman MA (2007). Scientifically unsupported therapies in the treatment of young children with autism spectrum disorders. Pediatr Ann 36 (8): 497–8, 500–2, 504–5.
- Lack of support for interventions:
  - Howlin P (2005). "The effectiveness of interventions for children with autism", in Fleischhacker WW, Brooks DJ (eds.), Neurodevelopmental Disorders, 101–19, Springer. PMID 16355605.
  - Sigman M, Spence SJ, Wang AT (2006). Autism from developmental and neuropsychological perspectives. Annu Rev Clin Psychol 2: 327–55.
  - Williams White S, Keonig K, Scahill L (2007). Social skills development in children with autism spectrum disorders: a review of the intervention research. J Autism Dev Disord 37 (10): 1858–68.
- Burgess AF, Gutstein SE (2007). Quality of life for people with autism: raising the standard for evaluating successful outcomes. Child Adolesc Ment Health 12 (2): 80–6.
- Stahmer AC, Collings NM, Palinkas LA (2005). Early intervention practices for children with autism: descriptions from community providers. Focus Autism Other Dev Disabl 20 (2): 66–79.
- Christison GW, Ivany K (2006). Elimination diets in autism spectrum disorders: any wheat amidst the chaff? J Dev Behav Pediatr 27 (2 Suppl 2): S162–71.
- Hazards of chelation therapy:
  - Brown MJ, Willis T, Omalu B, Leiker R (2006). Deaths resulting from hypocalcemia after administration of edetate disodium: 2003–2005. Pediatrics 118 (2): e534–6.
  - Baxter AJ, Krenzelok EP (2008). Pediatric fatality secondary to EDTA chelation. Clin Toxicol 46 (10): 1083–4.
- Shimabukuro TT, Grosse SD, Rice C (2008). Medical expenditures for children with an autism spectrum disorder in a privately insured population. J Autism Dev Disord 38 (3): 546–52.
- Ganz ML (2007). The lifetime distribution of the incremental societal costs of autism. Arch Pediatr Adolesc Med 161 (4): 343–9.
- Knapp M, Romeo R, Beecham J (2009). Economic cost of autism in the UK. Autism 13 (3): 317–36.
- Aman MG (2005). Treatment planning for patients with autism spectrum disorders. J Clin Psychiatry 66 (Suppl 10): 38–45.
- Sharpe DL, Baker DL (2007). Financial issues associated with having a child with autism. J Fam Econ Iss 28 (2): 247–64.
- Montes G, Halterman JS (2008). Association of childhood autism spectrum disorders and loss of family income. Pediatrics 121 (4): e821–6.
- Montes G, Halterman JS (2008). Child care problems and employment among families with preschool-aged children with autism in the United States. Pediatrics 122 (1): e202–8.
- Case-Smith J, Arbesman M (2008). Evidence-based review of interventions for autism used in or of relevance to occupational therapy. Am J Occup Ther 62 (4): 416–29.
- Rickards AL, Walstab JE, Wright-Rossi RA, Simpson J, Reddihough DS (2007). A randomized, controlled trial of a home-based intervention program for children with autism and developmental delay. J Dev Behav Pediatr 28 (4): 308–16.
- Wheeler D, Williams K, Seida J, Ospina M (2008). The Cochrane Library and Autism Spectrum Disorder: an overview of reviews. Evid Based Child Health 3 (1): 3–15.
- Moore TR, Symons FJ (2009). Adherence to behavioral and medical treatment recommendations by parents of children with autism spectrum disorders. J Autism Dev Disord.
- Dillenburger K, Keenan M (2009). None of the As in ABA stand for autism: dispelling the myths. J Intellect Dev Disabil 34 (2): 193–5.
- Howard JS, Sparkman CR, Cohen HG, Green G, Stanislaw H (2005). A comparison of intensive behavior analytic and eclectic treatments for young children with autism. Res Dev Disabil 26 (4): 359–83.
- Steege MW, Mace FC, Perry L, Longenecker H (2007). Applied behavior analysis: beyond discrete trial teaching. Psychol Schools 44 (1): 91–9.
- Reichow B, Wolery M (2009). Comprehensive synthesis of early intensive behavioral interventions for young children with autism based on the UCLA Young Autism Project model. J Autism Dev Disord 31 (1): 23–41.
- Eldevik S, Hastings RP, Hughes JC, Jahr E, Eikeseth S, Cross S (2009). Meta-analysis of Early Intensive Behavioral Intervention for children with autism. J Clin Child Adolesc Psychol 38 (3): 439–50.
- Spreckley M, Boyd R (2009). Efficacy of applied behavioral intervention in preschool children with autism for improving cognitive, language, and adaptive behavior: a systematic review and meta-analysis. J Pediatr 154 (3): 338–44.
- Pivotal response therapy:
  - Koegel RL, Koegel LK (2006). Pivotal Response Treatments for Autism: Communication, Social, & Academic Development, Brookes.
  - Koegel LK, Koegel RL, Harrower JK, Carter CM (1999). Pivotal response intervention I: overview of approach. J Assoc Pers Sev Handicaps 24 (3): 174–85.
- Ozonoff S, Cathcart K (1998). Effectiveness of a home program intervention for young children with autism. J Autism Dev Disord 28 (1): 25–32.
- Scottish Intercollegiate Guidelines Network (SIGN) (2007). "Assessment, diagnosis and clinical interventions for children and young people with autism spectrum disorders" (PDF). SIGN publication no. 98. Retrieved on 2008-04-02. Lay summary (PDF), SIGN (2008).
- Weber W, Newmark S (2007). Complementary and alternative medical therapies for attention-deficit/hyperactivity disorder and autism. Pediatr Clin North Am 54 (6): 983–1006.
- Schlosser RW, Wendt O (2008). Effects of augmentative and alternative communication intervention on speech production in children with autism: a systematic review. Am J Speech Lang Pathol 17 (3): 212–30.
- Kasari C, Freeman S, Paparella T (2006). Joint attention and symbolic play in young children with autism: a randomized controlled intervention study. J Child Psychol Psychiatry 47 (6): 611–20. Erratum (2007) in J Child Psychol Psychiatry 48 (5): 523.
- Gulsrud AC, Kasari C, Freeman S, Paparella T (2007). Children with autism's response to novel stimuli while participating in interventions targeting joint attention or symbolic play skills. Autism 11 (6): 535–46.
- Matson JL, Matson ML, Rivet TT (2007). Social-skills treatments for children with autism spectrum disorders: an overview. Behav Modif 31 (5): 682–707.
- Bellini S, Peters JK, Benner L, Hopf A (2007). A meta-analysis of school-based social skills interventions for children with autism spectrum disorders. Remedial Spec Educ 28 (3): 153–62.
- Rogers SJ, Ozonoff S (2005). Annotation: what do we know about sensory dysfunction in autism? A critical review of the empirical evidence. J Child Psychol Psychiatry 46 (12): 1255–68.
- Sensory integrative therapy. Research Autism. URL accessed on 2007-10-08.
- Baranek GT (2002). Efficacy of sensory and motor interventions for children with autism. J Autism Dev Disord 32 (5): 397–422.
- Stephenson J, Carter M (2009). The use of weighted vests with children with autism spectrum disorders and other disabilities. J Autism Dev Disord 39 (1): 105–14.
- Schaaf RC, Miller LJ (2005). Occupational therapy using a sensory integrative approach for children with developmental disabilities. Ment Retard Dev Disabil Res Rev 11 (2): 143–8.
- Hodgetts S, Hodgetts W (2007). Somatosensory stimulation interventions for children with autism: literature review and clinical considerations. Can J Occup Ther 74 (5): 393–400.
- Gold C, Wigram T, Elefant C (2006). Music therapy for autistic spectrum disorder. Cochrane Database Syst Rev (2): CD004381.
- Nimer J, Lundahl B (2007). Animal-assisted therapy: a meta-analysis. Anthrozoos 20 (3): 225–38.
- Marino L, Lilienfeld SO (2007). Dolphin-Assisted Therapy: more flawed data and more flawed conclusions. Anthrozoos 20 (3): 239–49.
- Scolnick B (2005). Effects of electroencephalogram biofeedback with Asperger's syndrome. Int J Rehabil Res 28 (2): 159–63.
- redirect Template:Cite web
- ↑ Kaufman BN (1995). Son-Rise: the Miracle Continues, HJ Kramer.
- ↑ Williams KR, Wishart JG (2003). The Son-Rise Program intervention for autism: an investigation into family experiences. J Intellect Disabil Res 47 (4–5): 291–9.
- ↑ Williams KR (2006). The Son-Rise Program intervention for autism: prerequisites for evaluation. Autism 10 (1): 86–102.
- ↑ Spinney L (2007). Therapy for autistic children causes outcry in France. Lancet 370 (9588): 645–6.
- ↑ Gonnerman J (2007). School of shock. Mother Jones 32 (5).
- ↑ American Academy of Pediatrics. Committee on Children with Disabilities (1999). The treatment of neurologically impaired children using patterning. Pediatrics 104 (5): 1149–51.
- ↑ McConachie H, Diggle T (2007). Parent implemented early intervention for young children with autism spectrum disorder: a systematic review. J Eval Clin Pract 13 (1): 120–9.
- ↑ Tonge B, Brereton A, Kiomall M, Mackinnon A, King N, Rinehart N (2006). Effects on parental mental health of an education and skills training program for parents of young children with autism: a randomized controlled trial. J Am Acad Child Adolesc Psychiatry 45 (5): 561–9.
- ↑ Solomon M, Ono M, Timmer S, Goodlin-Jones B (2008). The effectiveness of Parent-Child Interaction Therapy for families of children on the autism spectrum. J Autism Dev Disord 38 (9): 1767–76.
- ↑ 72.0 72.1 72.2 72.3 72.4 72.5 72.6 72.7 Levy SE, Hyman SL (2005). Novel treatments for autistic spectrum disorders. Ment Retard Dev Disabil Res Rev 11 (2): 131–42.
- ↑ Schreibman L (2005). "Critical evaluation of issues in autism" The Science and Fiction of Autism, Harvard University Press.
- ↑ Chavez B, Chavez-Brown M, Sopko MA Jr, Rey JA (2007). Atypical antipsychotics in children with pervasive developmental disorders. Pediatr Drugs 9 (4): 249–66.
- ↑ Scott LJ, Dhillon S (2007). Risperidone: a review of its use in the treatment of irritability associated with autistic disorder in children and adolescents. Pediatr Drugs 9 (5): 343–54.
- ↑ Scahill L (2008). How do I decide whether or not to use medication for my child with autism? should I try behavior therapy first?. J Autism Dev Disord 38 (6): 1197–8.
- ↑ Myers SM (2007). The status of pharmacotherapy for autism spectrum disorders. Expert Opin Pharmacother 8 (11): 1579–603.
- ↑ Volkmar FR (2009). Citalopram treatment in children with autism spectrum disorders and high levels of repetitive behavior. Arch Gen Psychiatry 66 (6): 581–2.
- ↑ King BH, Hollander E, Sikich L et al. (2009). Lack of efficacy of citalopram in children with autism spectrum disorders and high levels of repetitive behavior: citalopram ineffective in children with autism. Arch Gen Psychiatry 66 (6): 583–90.
- ↑ Parikh MS, Kolevzon A, Hollander E (2008). Psychopharmacology of aggression in children and adolescents with autism: a critical review of efficacy and tolerability. J Child Adolesc Psychopharmacol 18 (2): 157–78.
- ↑ Bartz JA, Hollander E (2008). Oxytocin and experimental therapeutics in autism spectrum disorders. Prog Brain Res 170 (451–62): 451.
- ↑ 82.0 82.1 Opar A (2008). Search for potential autism treatments turns to 'trust hormone'. Nat Med 14 (4): 353.
- ↑ Strock M (2007). "Autism spectrum disorders (pervasive developmental disorders)". National Institute of Mental Health. Retrieved on 2007-10-05.
- ↑ Tsai LY (1999). Psychopharmacology in autism. Psychosom Med 61 (5): 651–65.
- ↑ Andersen IM, Kaczmarska J, McGrew SG, Malow BA (2008). Melatonin for insomnia in children with autism spectrum disorders. J Child Neurol 23 (5): 482–5.
- ↑ Galli-Carminati G, Deriaz N, Bertschy G (2009). Melatonin in treatment of chronic sleep disorders in adults with autism: a retrospective study. Swiss Med Wkly 139 (19-20): 293–6.
- ↑ Bent S, Bertoglio K, Hendren RL (2009). Omega-3 fatty acids for autistic spectrum disorder: a systematic review. J Autism Dev Disord.
- ↑ Aneja A, Tierney E (2008). Autism: The role of cholesterol in treatment. Int Rev Psychiatry 20 (2): 165–70.
- ↑ Dominick KC, Davis NO, Lainhart J, Tager-Flusberg H, Folstein S (2007). Atypical behaviors in children with autism and children with a history of language impairment. Res Dev Disabil 28 (2): 145–62.
- ↑ Erickson CA, Stigler KA, Corkins MR, Posey DJ, Fitzgerald JF, McDougle CJ (2005). Gastrointestinal factors in autistic disorder: a critical review. J Autism Dev Disord 35 (6): 713–27.
- ↑ Reichelt KL, Knivsberg A-M, Lind G, Nødland M (1991). Probable etiology and possible treatment of childhood autism. Brain Dysfunct 4: 308–19.
- ↑ Millward C, Ferriter M, Calver S, Connell-Jones G (2008). Gluten- and casein-free diets for autistic spectrum disorder. Cochrane Database Syst Rev (2): CD003498.
- ↑ Hediger ML, England LJ, Molloy CA, Yu KF, Manning-Courtney P, Mills JL (2008). Reduced bone cortical thickness in boys with autism or autism spectrum disorder. J Autism Dev Disord 38 (5): 848–56.
- ↑ Doja A, Roberts W (2006). Immunizations and autism: a review of the literature. Can J Neurol Sci 33 (4): 341–6.
- ↑ Thompson WW, Price C, Goodson B et al. (2007). Early thimerosal exposure and neuropsychological outcomes at 7 to 10 years. N Engl J Med 357 (13): 1281–92.
- ↑ Lonsdale D, Shamberger RJ, Audhya T (2002). Treatment of autism spectrum children with thiamine tetrahydrofurfuryl disulfide: a pilot study. Neuro Endocrinol Lett 23 (4): 303–8.
- ↑ Lonsdale D (2006). A review of the biochemistry, metabolism and clinical benefits of thiamin(e) and its derivatives. Evid Based Complement Alternat Med 3 (1): 49–59.
- ↑ 98.0 98.1 Campbell JB, Busse JW, Injeyan HS (2000). Chiropractors and vaccination: a historical perspective. Pediatrics 105 (4): e43.
- ↑ 99.0 99.1 Busse JW, Morgan L, Campbell JB (2005). Chiropractic antivaccination arguments. J Manipulative Physiol Ther 28 (5): 367–73.
- ↑ Ferrance RJ (2003). Autism—another topic often lacking facts when discussed within the chiropractic profession. J Can Chiropr Assoc 47 (1): 4–7.
- ↑ Ernst E (2008). Chiropractic: a critical evaluation. J Pain Symptom Manage 35 (5): 544–62.
- ↑ Hawk C, Khorsan R, Lisi AJ, Ferrance RJ, Evans MW (2007). Chiropractic care for nonmusculoskeletal conditions: a systematic review with implications for whole systems research. J Altern Complement Med 13 (5): 491–512.
- ↑ 103.0 103.1 Green C, Martin CW, Bassett K, Kazanjian A (1999). A systematic review of craniosacral therapy: biological plausibility, assessment reliability and clinical effectiveness. Complement Ther Med 7 (4): 201–7. An earlier version of the paper is available without a subscription: Green C, Martin CW, Bassett K, Kazanjian A (1999). "A systematic review and critical appraisal of the scientific evidence on craniosacral therapy" (PDF). BCOHTA 99:1J. British Columbia Office of Health Technology Assessment. Retrieved on 2007-10-08.
- ↑ Hartman SE, Norton JM (2002). Interexaminer reliability and cranial osteopathy. Sci Rev Alt Med 6 (1): 23–34.
- ↑ Dhossche DM, Reti IM, Wachtel LE (2009). Catatonia and autism: a historical review, with implications for electroconvulsive therapy. J ECT.
- ↑ Rossignol DA, Rossignol LW, Smith S et al. (2009). Hyperbaric treatment for children with autism: a multicenter, randomized, double-blind, controlled trial. BMC Pediatrics 9.
- ↑ Serruya MD, Kahana MJ (2008). Techniques and devices to restore cognition. Behav Brain Res 192 (2): 149–65.
- ↑ Bishop J (2003). The Internet for educating individuals with social impairments. Journal of Computer Assisted Learning 19 (4): 546–56.
- ↑ el Kaliouby R, Picard R, Baron-Cohen S (2006). Affective computing and autism. Ann N Y Acad Sci 1093: 228–48.
- ↑ Ichim TE, Solano F, Glenn E et al. (2007). Stem cell therapy for autism. J Transl Med 5 (30): 30.
- ↑ Wing L (1997). The history of ideas on autism: legends, myths and reality. Autism 1 (1): 13–23.
- ↑ Miles M (2005). Martin Luther and childhood disability in 16th century Germany: what did he write? what did he say?. Independent Living Institute. URL accessed on 2008-12-23.
- ↑ includeonly>Collins D. "Autistic boy dies during exorcism", CBS News, 2003-08-25.
- ↑ Shaked M, Bilu Y (2006). Grappling with affliction: autism in the Jewish ultraorthodox community in Israel. Cult Med Psychiatry 30 (1): 1–27.
- ↑ Ekas, NV, Whitman TL, Shivers C. (2009 May). Religiosity, spirituality, and socioemotional functioning in mothers of children with autism spectrum disorder. J Autism Dev Disord. 39 (5): 706–19..
- ↑ Tarakeshwar, Nalini, Kenneth I. Pargament (2001). Religious Coping in Families of Children with Autism. Focus on Autism and Other Developmental Disabilities 16 (4): 247–260.
- Ministries of Health and Education (2008). New Zealand Autism Spectrum Disorder Guideline (PDF), Wellington: Ministry of Health.
- Fitzpatrick M (2008). Defeating Autism: A Damaging Delusion, London: Routledge. Reviewed in: Guldberg H. 'Autistic children are now seen as a burden'. spiked.
- Posey DJ, McDougle CJ (2008). Preface. Child Adolesc Psychiatr Clin N Am 17 (4): xv–xviii. This describes a special issue of the journal Child and Adolescent Psychiatric Clinics of North America, titled "Treating Autism Spectrum Disorders" (volume 17, issue 4, pages 713–932) and dated October 2008.
- Bryson SE, Rogers SJ, Fombonne E (2003). Autism spectrum disorders: early detection, intervention, education, and psychopharmacological management. Can J Psychiatry 48 (8): 506–16.
- Erickson CA, Posey DJ, Stigler KA, McDougle CJ (2007). Pharmacologic treatment of autism and related disorders. Pediatr Ann 36 (9): 575–85.
Pervasive developmental disorders / Autism spectrum
| Related |
|This page uses Creative Commons Licensed content from Wikipedia (view authors).| | 2026-02-03T08:28:31.291633 |
474,218 | 3.634469 | http://www.squamishhistory.ca/history-squamish | History of Squamish
Thank you for visiting the History Section of the SHS website.
The Shining Valley of Squamish
This is a shortened edition of the full 26,000-word history in Squamish: The Shining Valley, written by the same author and published by Elaho Press. Kevin McLane. Copyright 2006, 2007, 2008, 2009, 2010, 2011, 2012, 2013
Before the First People
If we could turn back the clock to 60,000 years ago, we would not recognize the familiar landscape of Squamish and Sea to Sky Country. It took a series of ice ages lasting many thousands of years to shape the landscape into how it appeared when the first humans arrived, and it was the nature of that newly-revealed land that determined the entire history of Squamish. Before the decline of the last Ice Age 10,000 years ago, glacial ice covered Sea to Sky Country to a depth of some 2,000 metres, extending down to the Strait of Georgia, even as far south as Seattle. Only the highest craggy peaks like Tantalus Mountain and Mount Garibaldi would have jutted above the rolling sea of ice. As the climate changed and the ice began to recede, the new land which was slowly revealed was very different to that of 60,000 years earlier. The glaciers had gouged out deep valleys, depositing vast moraines of rock, the rivers were steeper and faster, and the Pacific Ocean extended into the upper Squamish River valley. Brackendale was deep in Howe Sound, the Chief would have been sparkling white granite with not a tree in sight, and small glaciers would have lingered near Squamish in places like Shannon Creek and on Mount Murchison.
The First People
As the ice slowly receded up to the alpine regions, vegetation began to flourish in the valley bottoms, and animals would have migrated northward as food became available. It is not known when the first humans entered Howe Sound in search of food and shelter, but it is reasonable to believe it was something like 5,000 years or more ago. To survive, they would have led a nomadic life, travelling wherever resources were best obtained for the time of year. As hundreds of years rolled by, they grew in number and permanent settlements began to evolve. Their society was dependent on salmon, cedar, shellfish, oolichans and their own resourcefulness, all of which were in plentiful supply. Living in the same place for thousands of years has given aboriginal people a sense of stewardship and connection to the land which is almost extinct for Europeans. They did not feel ownership as we understand it today, they felt custodial inheritance, of a land and terrain which belonged by natural right to everyone. To live in a village knowing that your family’s ancestors had lived out their lives in the same soil for thousands of years was a bond of powerful intimacy which still exists today.
The Europeans Arrive
During the first half of the nineteenth century, relations between the first peoples and the earliest Europeans centred on trade. At this time, there were about 70,000 native people in what we now call British Columbia, but only several hundred European traders of the officer-gentleman class. The native people supplied furs and received manufactured goods in return. This brought iron pails, durable fishing nets, metal tools, copper pots and easily-acquired blankets into their culture, but it also brought a rapid loss in the skills of tool manufacture they had developed over thousands of years. This relative equilibrium was shattered by the Cariboo gold rush of 1858, and what was to become an invasion of tens of thousands of fortune-seekers. From this time forward, the new immigrants and their governments began to seek possession of the land itself from the native people, for timber extraction, mining, and agriculture. The relatively peaceful co-existence between the native people and the Europeans had come to an end. The first recorded date of a person of European origin settling in the Squamish Valley was in 1874. Over the course of the two decades that followed, the flow of people coming and going increased, and by 1892 it is said that about 35 families lived in what is now Brackendale. Others in the valley would have included the Chinese labourers who built the dykes, prospectors, itinerant loggers and trappers, of whom there must have been many scattered around the hills and the upper Valley. A small community of Sikhs lived downtown and worked in the waterfront sawmills. For every one of them, survival was foremost in their minds.
Farming was the natural economic mainstay for the European settlers, and as the new century began, agriculture became steadily more diverse and prosperous in the fertile soil of the flat valley bottom. The Squamish Valley Hop Company built a thriving business in the Brackendale area, and their hops were sent around the world to make fine beer. The hay farms in the Estuary continued to flourish, and a twenty-acre potato farm was built near the site of what is now Garibaldi Estates. Other economic activities had a life of their own, like trapping, horse logging, steam-donkey logging, mining at Britannia, and the Woodfibre Pulp Mill, but in the early years they seem to have existed as a secondary force in economic and cultural life, behind the engine of farming and community organization.
The Railroad Loggers
Although it is recorded that horse logging existed in the Squamish area around the late 1890s, such efforts were limited by the slow grinding nature of the work and the massive trees. That gave way at the turn of the century to powerful but cumbersome steam-donkey engines extracting the fallen trees. Things changed again in 1926 when Merrill and Ring, an American company from Washington State, began full-scale operations using the latest railroad and high-lead logging techniques. High-lead techniques lifted the logs into the air, and the tough, narrow-gauge trains rolled them down to Howe Sound. Operations began in the Valleycliffe and Crumpit Woods area, and over the next 10–15 years worked their way north to Alice Lake.
The Pacific Great Eastern Railway
By 1910, railroad development strategies were a focal point of political life in British Columbia, and in June 1914, construction gangs began to lay steel north from Squamish up the Cheakamus Canyon, working at a breakneck pace for Prince George. Incredibly, the line reached Pemberton by that fall. The railway brought great change to the cultural life of the Squamish valley; the southern ferry corridor to Vancouver was now joined with a northern corridor to Pemberton and beyond. It was to be another 42 years before the link into Vancouver was accomplished, in 1956.
The Truck Loggers
By the late 1940s, the woods in the Valley were beginning to echo to new sounds: chainsaws, rubber-tired logging trucks grinding up the valley side-hills, and the dynamite blasts of roadbuilding. This dramatic new advance, spurred by more powerful internal combustion engines, established logging as the major economic force in the Squamish valley for the next half-century. The railroad locomotive engineers and their dangerous lives on steep grades were now consigned to history, and when high-lead towers were added to tank-tracked log yarders, the engineers were joined by the legendary high riggers. By the end of the 1950s, the faller was left alone as the undisputed king of the woods. His job had gone through its own great change when chainsaws emerged, ironically making the job even more hazardous. In the 1950s, the provincial government took steps to restructure the issuing of timber licences, creating a system of Tree Farm Licences which fell into the hands of the largest companies and ensured a lucrative near-monopoly. In the case of Squamish, in 1958 the province gave control of the entire Ashlu, Elaho and upper Squamish Valleys to Empire Mills (eventually to become Interfor) by granting them TFL 38. This caused an uproar of protest from the many traditional small independent sawmill and logging operators. One way or the other, history was rolling on, and now it was the turn of industrial forestry to become the dominant income generator for the Valley.
Squamish Comes Together
Squamish as we know it today, a community of people with a single government and common culture, some 15 kilometres long, has evolved from a couple of farms in Brackendale. The stages of that journey began in 1892 when a road was built to join Howe Sound to Brackendale. By the turn of the twentieth century, almost everyone lived in the two small communities of ‘Squamish’ at the water’s edge and ‘Brackendale’ some 7 kilometres up valley, a situation which was not to change for almost fifty years. For that half-century, between Cleveland Avenue and Judd Road in Brackendale, there were only green fields and forest, linked by a quiet winding road which followed a route dictated by the river beds and sloughs of the time and the great trees which filled the valley floor. It became known as Government Road, following the familiar course of today. Downtown Squamish was incorporated as a Village in 1948, a popular action which was an important step forward in gaining local control of civic affairs, but the Valley as a whole remained essentially a collection of separate communities with no common government. The Middle valley (Garibaldi Estates of today), known simply as ‘Mamquam’, was served by a Water Board and a Sewer Board, and the affairs of Brackendale were directed by the Farmers Institute. Each had their own priorities. For Brackendale, better river bank protection from the certain threat of repeated heavy floods and a domestic water system were pressing matters. In Squamish it was sewers, a secure base for future development, and dykes to protect the downtown area from becoming a canoe lake when really big storms arrived. In August 1958 one of the most dramatic events in Squamish history occurred, the completion of the ‘Seaview Highway’ into Vancouver. It had taken a long time, so long in fact that the Russians had already launched Sputnik, the world’s first satellite, into space. Within a few years it is estimated that a quarter of Squamish wage-earners were commuting into the big city to work. The economy of Squamish began to change rapidly. By 1964 the future of the valley was coming to a time of great decision and major change. The legendary Pat Brennan was Mayor of the Village of Squamish at the time, and was of the opinion that the long-term interests of the Valley were best served by incorporating all the communities into a District Municipality. His view was shared by the other civic leaders of the day: Izzy Boscariol of the Farmers Institute; Pat Goode of the Mamquam Sewer Board; and Art Framboni of the Mamquam Water Board. There was much heated debate among the 3,000 residents as 1964 progressed. The matter went to a vote on November 21st 1964, resulting in a strong 78 per cent in favour of incorporation as a Municipality. The stage was now set for a major upgrading of the infrastructure of the Valley.
A Century of Climbing
As Squamish enjoyed its many decades of quiet isolation from the rest of British Columbia, few people came up Howe Sound for any purpose other than to settle or earn a living. The exceptions were climbers of the British Columbia Mountaineering Club, with their eyes on the unclimbed high peaks which surround the Squamish Valley. So began a century of connection between climbers and the greater Squamish area. More than 100 years ago, early attempts began to reach the summit of Mount Garibaldi, and the centenary of that great achievement is in 2007. Over the following twenty years, most of the high peaks of the area, such as the Black Tusk, Serratus, Tantalus, and Sky Pilot, were climbed in daring assaults by men and women who were among the elite of their day. Those high alpine climbers were also the force that helped establish Garibaldi Park in 1920. Just as it did for everyone in Squamish, the opening of the Seaview Highway from Vancouver in August 1958 marked major change and opportunity for climbers, eyes now firmly fixed on ‘the Chief’. As far back as 1961, the Canadian national media discovered Squamish during the epic first ascent of the Grand Wall, and the ensuing 45 years of development on the superb granite has captured the attention of the entire climbing world.
An Era Ends, Another Begins
For the first peoples, change came hard when Europeans sought new horizons for wealth and opportunity. Then a second era, the pre-eminence of Squamish farming and the horse- and railroad loggers, lasted until World War Two, when new technology brought their culture to an end. The domination of industrial forestry in Squamish’s life was to last for the next half-century. We are now in the middle of a third great change, and in time it will also give way to something different. The Woodfibre pulpmill has recently gone after 94 years of presence in Squamish life, and the great sawmill has gone after almost 50 years rooted in the heart of downtown. If it was the drive for exploration, wealth and opportunity that brought the first era to an end, and technology the second, it is the force of geopolitical change that drives the present one: distant countries undercutting our industrial economy, the rise of international tourism, and a never-before-seen pace of Canadians seeking a better home. For over a century, the Squamish landscape was valued and exploited for its farming, timber, and mineral resources to benefit community life. Now, as Squamish attempts to scale back resource extraction to a long-term sustainable level, we are witnessing a different kind of demand for the land, from a worldwide interest in its natural state: the stunning mountains, the ocean proximity, the climbing, the trail network, the Chief, and Squamish as a centre and a home where a well-balanced lifestyle can be achieved. The strains of the changing order are a challenge, but as the heart of Sea to Sky Country, and one of the most vibrant centres of energy and growth in North America today, Squamish is remarkably well-placed to benefit from finding just the right balance of economic and cultural lifestyle. Why not show the world how well it can be done? Copyright 2006, Kevin McLane | 2026-01-25T14:38:45.359848 |
125,825 | 3.596903 | http://www.criticalthinking.org/resources/articles/an-interview-l-elder-ct-concepts-tools.shtml | Michael F. Shaughnessy
Eastern New Mexico University
Portales, New Mexico
1) You have recently co-authored a miniature guide with Richard Paul on “How to Study and Learn.” Briefly explain the purpose of this guide.
The Miniature Guide for Students on How to Study and Learn is designed to help students become “master students.” It provides students with a variety of practical strategies to improve how they study and how they think about the classes they are in. It places the emphasis for learning on the student, rather than on the teacher. Here is how the table of contents begins:
This miniature guide provides important structures for thinking within any content, based on critical thinking concepts and principles, for example, for analyzing the logic of an author’s reasoning, or the reasoning embedded in a textbook. It introduces students to the idea that to learn any subject well is to learn its most fundamental logic, to be able to think within the subject. In other words, it emphasizes the importance of students learning to think historically, to think sociologically, to think scientifically, to think in a literary way, etc. It also introduces students to the intellectual standards for thought, as well as the intellectual virtues, or defining traits of the disciplined mind.
Critical thinking is integral to learning and studying, if one wants to study effectively and learn deeply. The only way to learn anything well is to actively think it into your thinking. Therefore thinking ideas into one’s thinking is the key to learning any content. Critical thinking provides the tools of mind one needs to do this. When students study without engaging their minds using intellectual tools and standards, they study superficially. They may be adept at memorizing names and places, facts and events, but they miss the important ideas. They are unable to integrate ideas they learn in one class with ideas in another class. Critical thinking provides the foundation for deep learning and integration. In other words, without thinking critically through what they are studying, students cannot learn ideas in a meaningful way, they cannot learn deeply enough to have their thinking altered and improved, they cannot become educated persons.
First, teachers must understand what it means to be intellectually disciplined if they are to teach intellectual discipline. In other words, faculty must themselves have disciplined minds. They must be able to analyze thinking, to assess what they analyzed, and to reconstruct thinking (so as to improve it). One of the misconceptions about critical thinking is that we can somehow easily teach for it without much explicit knowledge of critical thinking on our part, that we can employ strategies that lead our students to think without our having thought through the content we are teaching. But critical thinking is a rich set of concepts that can only be internalized over years of working on one’s mind.
Take, for example, the assessment of thinking. One needs specific intellectual standards to assess thinking, standards such as clarity, accuracy, logicalness, fairness, significance, depth, breadth, relevance, precision, etc. Students need to begin to use these standards in thinking on a daily basis in the classroom. For example, clarity is a gateway standard in that if we are not clear about what someone is saying, we cannot further assess what they are saying. In other words, if someone’s thinking is unclear, all we know about what they are saying is that we don’t know what they are saying. Yet most students don’t know how to identify when their thinking is unclear. To clarify what someone is saying, we can ask questions such as: Can you say that in other words? Can you elaborate on what you have said? Can you give me an example? Can you illustrate?
For students to develop intellectual discipline, they must practice critical thinking on a daily basis for a substantial length of time. There is no reason why we cannot develop our thinking for the whole of our lifetime. However, since most students come to us with virtually no discipline, we have to recognize the limitations of what we can do to foster their development in one semester. We must appreciate the fact that they have bad habits of mind developed over the course of their lives. We need to structure our courses, therefore, so students are regularly engaged in thinking through the content, so that they can’t memorize their way through our courses. Every day, we need to ask ourselves, “What am I doing today in the classroom to foster thinking through the content? What am I doing to help students to learn how to learn?”
When we say “form of thinking” we mean the type of thinking inherent in the discipline, subject, or domain of knowledge upon which one is focused. The basic idea is that when we learn any content well we learn the form of thinking essential to the content. For example, when we study history properly and deeply, we learn to think historically. When we study science properly and deeply, we learn to think scientifically. When we study anthropology properly and deeply, we learn to think anthropologically.
When we say “the logic of” we mean that there is an internal system of meanings that must be understood to understand what we are speaking of. We use the tools of critical thinking to analyze that logic. To figure out “the logic of” any product of reasoning, we need to focus on the elements or structures of reasoning embedded in the reasoning. For example, all reasoning has a purpose, answers some question, uses information, makes inferences (or comes to conclusions), reasons from some viewpoint, takes certain things for granted (or makes assumptions), uses concepts and ideas, and has implications.
Given that a textbook is a product of someone’s reasoning, we can analyze the reasoning embedded in the textbook by focusing on its intellectual parts. We can therefore figure out the purpose of the textbook, the questions that drive the author’s reasoning, the primary information and concepts used in the textbook, the assumptions the author(s) make, the points of view inherent in the textbook, and so forth. The tools of intellectual analysis, then, provide students with an effective way to understand the interrelated system of meanings that underlie and define an author’s reasoning, whether in a textbook, an article, or any other written piece.
The disciplined mind is a mind truly educated, a mind with intellectual dispositions or cultivated tendencies that go beyond basic intellectual skills. For example, the disciplined mind has knowledge of its ignorance, questions its own beliefs, is aware of the need to entertain alternative viewpoints, holds itself to the same intellectual standards it expects of others, is willing to do intellectual work, is confident that reason is the best way to determine what to believe, and thinks for itself. These traits of mind are the critical thinker’s ultimate goal. They are developed gradually, through daily practice in using the tools of thinking. They are also interrelated. As we develop one of these dispositions, or virtues, the others develop as well.
The best way to get students to learn at a deep rather than a superficial level is to structure the course so that they have no choice. If students can make good grades by memorizing for multiple-choice tests, they will. So the key is to design daily activities that require students to think through the content. For example, if we want students to learn to evaluate reasoning, they need lots of practice in doing so. We should begin with simple acts of reasoning and move slowly toward more complicated ones. They might, for example, figure out an author’s purpose, the main questions the author is addressing in the article, the important information the author uses, and the primary conclusions s/he comes to. We can have students analyze the author’s reasoning a number of times until they become relatively proficient in it. Gradually we can add the other elements of reasoning until they can do the full “logic of” the article. At the same time, we can give them practice in thinking through the intellectual standards and applying them to thinking. We are then ready to have students begin to evaluate an author’s reasoning using the criteria in the Miniature Guide for Students on How to Study and Learn.
It is important that students learn to analyze, or take apart, an author’s reasoning before they assess it. Too often students are quick to judge reasoning before they understand it. My rule is this: If you cannot accurately explain an author’s reasoning in your own words, you have no right to evaluate it. But most students are used to taking positions they do not understand about reasoning they do not understand. When we allow them to do so, we are not doing our job. We need to begin slowly to get students on the right track appreciating the discipline that critical thinking demands of them. They need lots of practice in analyzing reasoning first. Only when they analyze well can they evaluate well.
Inert information is information learned at the superficial level. It is comprised of facts crammed into the mind without understanding, as well as trivial facts rotely remembered that serve no useful purpose. Unfortunately, most schooling focuses on storing up rote information in one’s short-term memory – fragmented pieces of this and that, facts memorized for tests and then quickly forgotten, information not valuable to us because we don’t understand it well enough to use it in our lives. The key is that we don’t put inert information to work in our thinking because we don’t understand it well enough even to misapply it.
Activated ignorance, on the other hand, is comprised of all the ideas we actively mislearn. Prejudices, biased misconceptions, and misinterpretations of various kinds are all products of activated ignorance. The key (to activated ignorance) is that we often internalize things that are not true and compound our error by applying them over and over again (falsely) in real-life situations. Activated ignorance results from information wrongly learned, or information that is incorrect, inaccurate, or based in half-truth. Activated ignorance can lead to intellectual righteousness, the tendency to believe one is inherently right, that one’s ideas are better than the ideas of others, and therefore that one has a right to “lord” those ideas over others.
Activated ignorance is a natural state of the human mind. We don’t have to learn to use falsehoods in our experience. The mind naturally sees itself as right, as in possession of “the truth,” even when it is using faulty reasoning. We routinely act on ideas that are irrational or unreasonable. Through self-deceptive tendencies we are able to see ourselves as right when we are wrong. In other words, though we often use faulty reasoning and distorted concepts in thinking, we nevertheless are able to hide the problems in our thinking through self-deception.
Activated ignorance is a problem for a number of reasons. It can lead us to make bad decisions. It can lead to problems in our personal lives. It can impede our ability to learn and think through complex problems. It can lead to great injustice and cruelty in the world. Indeed injustice usually occurs, not because people know they are doing something wrong and do it anyway, but because they wrongly think what they are doing is right. They believe themselves to be perfectly justified, even when engaging in the most egregious of acts.
To exemplify the prevalence of activated ignorance in student thinking, consider the following ideas students routinely use in thinking: “Learning should be easy. Learning should be fun. If I am not learning it is the teacher’s fault. Learning means doing what the teacher says.” Each of these ideas is flawed. Each when believed and acted upon is a form of activated ignorance. Each leads to negative consequences.
To act in accordance with activated knowledge we must possess the intellectual virtues or dispositions we talked about earlier. For example, when we have intellectual humility, we are clear about what we know and what we don’t know. We resist acting on prejudices and superficial understandings. We are not intellectually arrogant in our approach to situations. We don’t presume to know what we do not. We are careful in the conclusions we come to, and we hold them tentatively when necessary.
The stronger the intellectual virtues in the mind, the more prominent the role of activated knowledge.
For the most part, schooling fosters both the rote memorization of information (producing inert information in the mind), and false understandings taken to be true (producing forms of activated ignorance). The former is apparent in the large volume of superficial work required of students throughout the “educational” process.
The latter can be exemplified in all the ways that students come to falsely believe things: through classroom mislearning, through accepting media propaganda as true, and through uncritically accepting the beliefs of their peer group as true.
9) You have a website at www.criticalthinking.org. How do you think websites and the Internet will change critical thinking, reasoning, and problem solving?
The information now available on the internet, like traditional information sources, is useful to us only to the extent that we can accurately assess and apply it. In other words, internet information is not a good in itself. We will not improve our thinking or our knowledge merely because we have more information available to us. In fact, people unskilled at thinking can easily be manipulated through (what is often false or misleading) information that is pre-digested and made available on the internet. To effectively use any information we must utilize the resources of critical thinking. We need to determine the accuracy, relevance, and significance of information.
To teach students to integrate foundational concepts and principles into their thinking, we need first to place the emphasis on the most significant concepts underlying the subject. We need to model thinking using those concepts. We need to model finding connections between foundational and not-so-foundational concepts. We need to proceed slowly, helping students to make their ground sure before they move on. We do this by having them read, write, talk, and use foundational concepts in solving problems. We need to give them time to actively discuss the concepts in class with other students. We need to call on them to explain important concepts in their own words. We need to focus less on coverage and more on depth so that students learn a few things well in our classes, rather than many things badly.
One of the beauties of critical thinking is that the concepts and tools embedded in it can be universally taught and applied. It need not, indeed it should not, be reserved for more “advanced students.” All students need to learn to use their minds more effectively. All students need to learn to think things through in a disciplined way. The vast majority of students are capable of learning critical thinking and using it. Though students will inevitably learn at different paces, we can focus on the same material because foundational concepts can be learned at different levels of depth. By having students work together in groups, they can benefit by the insights of others.
Roughly the same practice is essential for all students in learning to think within a discipline (historically, geographically, biologically, sociologically,…). For example, all students can learn to ask questions of clarification within disciplines (Could you say that in other words? Could you give me an example?). All students can learn to ask questions of relevance (How is what you are saying relevant to the question we are trying to answer?). All students can learn to analyze information for accuracy (How can we check to see if this information is accurate?). All students can learn the intellectual dispositions and begin to work them into their thinking (intellectual perseverance, intellectual humility, etc.).
In fact, questions focused on the elements and standards of thought can be fostered with children at very young ages. For example, I have worked in demonstration classes with elementary level students and have successfully taught basic critical thinking skills to children even at the kindergarten level. I have also written a miniature guide to critical thinking for children. In this miniature guide, you will find the same foundations of critical thinking that are fostered in the Miniature Guide for Students on How to Study and Learn. The point is that students of all levels and abilities can benefit from critical thinking.
To effectively use information available to us on the web, we need basic critical thinking skills to analyze, evaluate, and improve thinking. In other words, we need to be able to figure out the agenda of the website, the questions they are purporting to answer, the information being presented, the assumptions made, the key concepts that drive the positions taken, etc.
But perhaps even more importantly, we need to be able to assess the quality of website material. For example, we need to be able to figure out whether the information is accurate, and hence how we could check to see if it is accurate. We need to be able to figure out whether it is relevant to the issue we are focused on. We need to be able to distinguish between information that is deep and that which is superficial. We need to differentiate between the significant and the insignificant. We need to be able to determine whether the information provided is detailed enough (or precise enough) for our purpose, etc.
In every subject and domain of learning, there are ideas that are seminal and ideas that are peripheral (and many ideas in-between). Essential ideas are seminal. They are at the roots of many derivative ideas. When we know these foundational ideas well, we are able to derive many of the others. They become sources of power in our thinking. For example, one cannot understand physics without understanding the idea of matter and energy. All of physics revolves around these two ideas and their interrelationships. To think like a physicist is to learn how to use these concepts everywhere in one’s thought.
It is essential ideas that form how we see the world, and how we function in it. If you read the pages in the How to Study and Learn guide, you will notice that each page is focused on one essential idea. Notice two things: 1) the essential idea is the most basic point being made on the page or set of pages, 2) if students use this idea in their thinking, they will reason better through the content and function better as learners.
Take, for example, the essential idea on page 20, “To understand our experience and the world itself, we must be able to think within alternative world views. We must question our ideas. We must not confuse our words or ideas with things.” Now imagine a student taking this idea seriously. This student would continually seek out, and seek to master, multiple viewpoints. The student would routinely question the ideas he is using in his thinking. He would recognize that things are often confused with words. Words often hypnotize us and we use them without reflecting on what they represent.
Critical thinking reminds us of the power of essential ideas in human thinking: purpose, question, information, concept, inference, implication, point of view, clarity, precision, accuracy, relevance, depth, breadth, logic, and significance. These are essential ideas for our thinking at a critical level. In this miniature guide, we wanted to distill ideas for students so that they could easily see the most basic ones and begin to learn how to put them to use. We did not assume that students can figure out essential ideas on their own, given that they most likely lack the tools and discipline to do so.
The only way we can do this is to begin to significantly change the way we approach schooling at all levels. Students must learn to use their minds mindfully. They must discover the (liberating) power of intellectual discipline.
But we face many barriers to intellectual development on a large scale. First, teachers and faculty themselves largely lack intellectual discipline and process new material in the same superficial ways their students do. Standardized tests often do not focus attention on foundational concepts and intellectual tools. Coverage “a mile wide and an inch deep” is the rule at most levels of education. Those who should lead are often a significant part of the problem.
When those responsible for educating students misunderstand education as “lecture and test” and do not realize that they are perpetuating the problem, it is difficult to bring change about. In a study conducted by the Center for Critical Thinking for the California Commission on Teacher Credentialing, we found that though college and university faculty overwhelmingly identify critical thinking as of primary importance, few can adequately explain what critical thinking is and how they are teaching for it. In other words they are in essence saying, “We teach critical thinking but don’t ask us to explain what it is or to describe how we teach for it.” This study focused on 38 public universities and 28 private ones. It took a wide sample across the disciplines. The intellectual arrogance prevalent in this study is not confined to faculty in California. It seems natural for teachers and instructors to believe that they are teaching students intellectual discipline.
But this can change if faculty participate in well-designed, long-term professional development activities. When faculty begin to take critical thinking seriously, start to redesign their courses, and focus on essential concepts and basic thinking within the discipline, the light bulbs begin to go on.
For the most part this will not have any significant impact on the teaching of critical thinking given that critical thinking is far from prominent in college classes today. What is crucial is a commitment to critical thinking. When that commitment is there, the rest follows. Committed faculty use the internet for class discussions, peer interaction, and teacher feedback. All things being equal, the ideal is to use faculty with experience teaching students to think within content. With the critical insights in place, the necessary adaptations will follow.
The students are right. Most faculty are attempting to cover entirely too much material in a semester course. We need to move away from content coverage and toward deep understanding of the most fundamental concepts in our courses. When students are taught in the didactic mode, at the end of a semester’s course, most students cannot adequately articulate even the most basic concepts in the course. They forget as fast as they learn. Remember, most students take years of classes in science, history, math, language arts, etc., and yet cannot accurately state what science is and why it is important to think scientifically, what history is and why it is important to think historically, what math is and why it is important to think mathematically, and so on.
We have been on the content coverage bandwagon for many years, and the amount of content we are asking students to “learn” is increasing quickly. Yet even the best students can only learn well a small set of important concepts in one semester. As we design our courses, we should begin with questions such as these: If my students learn nothing else in my classes what would I want them to learn? What concepts within my content are the most crucial for students to understand to utilize for the rest of their lives?
The fact is that we have little time with our students. Most of what we would want to teach, we simply do not have the time to teach. Thus we must begin with the most significant ideas in our subject. We must help students understand those ideas deeply so that they take root and live in their minds, so that ultimately they live differently having learned those ideas.
First, students should come to learn that the most important goal that they should have in college is to acquire the traits and skills of lifelong learners. College should not be seen merely as a means to a job.
But if we want students to change their concept of college, we have to change our concept as well. We have to structure our courses so that students develop intellectual skills that they recognize as useful as they move through their courses. We need to help them see the relationship between developing important skills of mind and functioning well in life. We need to help them make connections between what they learn in the classroom and what is happening in the real world, in their world. For example, we need to show them the importance of ideas in life, how the ideas they hold shape their perspectives, how they can change their ideas and transform their lives for the better as a result.
Critical thinking concepts are a rich set of ideas that enable us to understand our minds, our thinking and emotions, and how to live our lives effectively. Critical thinking “tools” is a metaphor for intellectual skills, abilities and dispositions of mind. Some of the most important concepts are the elements of reasoning (purpose, question, information, inference, assumptions, concepts, point of view, implications), the intellectual standards (such as clarity, accuracy, precision, depth, breadth, relevance, significance, fairness, logicalness, etc.), and intellectual virtues or dispositions, which we have already discussed. The “tools” include intellectual abilities such as the ability to gather accurate information, the ability to come to well-reasoned, logical conclusions, the ability to formulate clear and justifiable purposes. When students have skills and dispositions of mind, they have tools of mind they can use in life situations.
Perhaps the most important question still to be answered is: What are the most significant barriers to the development of critical thinking abilities and traits?
There are a number of barriers to the development of thinking, including the lack of insight into critical thinking on the part of teachers and faculty. But at a deeper level, perhaps the single most significant barrier is the native egocentrism of human thought. This is an important question because it is egocentrism that keeps us from seeking and finding flaws in our thinking. It is egocentrism that leads to intellectual arrogance, or the tendency to think we know more than we do. It is egocentrism that leads to human selfishness and close-mindedness. Therefore as we teach students to think within disciplines, we also need to teach students how the mind normally functions – that it functions to get what it wants, to validate its views, and justify its behavior. We pursue this thesis in our book: Critical Thinking: Tools for Taking Charge of Your Learning and Your Life. This book was written as a textbook for college-level students but is useful as well for faculty interested in developing their thinking. The problem of egocentrism in human thinking is also outlined in our newest miniature guide: The Miniature Guide to The Human Mind. | 2026-01-20T04:36:37.206929 |
164,052 | 4.257794 | http://en.wikipedia.org/wiki/Prime_meridian | A prime meridian is a meridian, i.e. a line of longitude, at which longitude is defined to be 0°. A prime meridian and its opposite in a 360°-system, the 180th meridian (at 180° longitude), form a great circle.
This great circle divides the sphere, e.g. the Earth, into two hemispheres. If one uses directions of East and West from a defined prime meridian, then they can be called Eastern Hemisphere and Western Hemisphere.
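The hemisphere convention is easy to make concrete in code. Here is a minimal Python sketch (the function name and the Ferro-offset example are illustrative assumptions, not taken from this article) that classifies a Greenwich-referenced longitude as Eastern or Western relative to any chosen prime meridian:

```python
def hemisphere(longitude_deg: float, prime_deg: float = 0.0) -> str:
    """Classify a Greenwich-based longitude as Eastern or Western relative
    to a chosen prime meridian (also given as a Greenwich-based longitude).

    The chosen prime and its antimeridian (prime + 180 degrees) together
    form the dividing great circle, so points on it get their own label.
    """
    # Wrap the offset from the chosen prime into [-180, 180).
    offset = (longitude_deg - prime_deg + 180.0) % 360.0 - 180.0
    if offset == -180.0:
        offset = 180.0  # put the antimeridian on the +180 side
    if offset in (0.0, 180.0):
        return "on the dividing great circle"
    return "Eastern" if offset > 0.0 else "Western"


# 10 degrees W of Greenwich lies in the Eastern Hemisphere of the Ferro
# meridian (about 17.66 degrees W of Greenwich; offset value illustrative).
print(hemisphere(-10.0, prime_deg=-17.6628))  # -> Eastern
```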
A prime meridian is ultimately arbitrary, unlike an equator, which is determined by the axis of rotation—and various conventions have been used or advocated in different regions and throughout history.
The notion of longitude was developed by the Greek Eratosthenes (c. 276 BC – c. 195 BC) in Alexandria and Hipparchus (c. 190 BC – c. 120 BC) in Rhodes and applied to a large number of cities by the geographer Strabo (64/63 BC – c. 24 AD). But it was Ptolemy (c. AD 90 – c. AD 168) who first used a consistent meridian for a world map in his Geographia.
Ptolemy used as his basis the "Fortunate Isles", a group of islands in the Atlantic which are usually associated with the Canary Islands (13° to 18°W), although his maps correspond more closely to the Cape Verde islands (22° to 25° W). The main point is to be comfortably west of the western tip of Africa (17.5° W) as negative numbers were not yet in use. His prime meridian corresponds to 18° 40' west of Winchester (about 20°W) today. At this time the chief method of determining longitude was by using the reported times of lunar eclipses in different countries.
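As a worked illustration of the eclipse-timing method just mentioned (the observation times below are invented for the example): a lunar eclipse occurs at the same instant for every observer, so the difference between the local times recorded in two cities converts directly into a longitude difference, at 15° per hour of the Earth's rotation. A short Python sketch:

```python
# The Earth rotates 360 degrees in 24 hours: 15 degrees of longitude per hour.
ROTATION_DEG_PER_HOUR = 360.0 / 24.0

def longitude_difference(local_time_a_h: float, local_time_b_h: float) -> float:
    """Degrees of longitude by which city A lies east of city B, given the
    local clock times (in hours) at which both recorded the same eclipse."""
    return (local_time_a_h - local_time_b_h) * ROTATION_DEG_PER_HOUR

# Hypothetical records: mid-eclipse at 22:30 local time in city A and at
# 20:10 in city B, so A is (22.5 - 20.1667) * 15 = 35 degrees east of B.
print(longitude_difference(22.5, 20.0 + 10.0 / 60.0))  # -> 35.0
```

In practice the limiting factor of the method was how precisely the eclipse could be timed with the clocks of the day, not the arithmetic.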
Ptolemy’s Geographia was first printed with maps at Bologna in 1477 and many early globes in the sixteenth century followed his lead. But there was still a hope that a "natural" basis for a prime meridian existed. Christopher Columbus reported (1493) that the compass pointed due north somewhere in mid-Atlantic and this fact was used in the important Tordesillas Treaty of 1494 which settled the territorial dispute between Spain and Portugal over newly discovered lands. The Tordesillas line was eventually settled at 370 leagues west of Cape Verde. This is shown in Diogo Ribeiro's 1529 map. São Miguel Island (25.5°W) in the Azores was still used for the same reason as late as 1594 by Christopher Saxton, although by this time it had been shown that the zero deviation line did not follow a line of longitude.
In 1541, Mercator produced his famous forty-one centimetre terrestrial globe and drew his prime meridian precisely through Fuerteventura (14°1'W) in the Canaries. His later maps used the Azores, following the magnetic hypothesis. But by the time that Ortelius produced the first modern atlas in 1570, other islands such as Cape Verde were coming into use. In his atlas longitudes were counted from 0° to 360°, not 180°W to 180°E as is common today. This practice was followed by navigators well into the eighteenth century. In 1634, Cardinal Richelieu used the westernmost island of the Canaries, Ferro, 19° 55' west of Paris, as the choice of meridian. Unfortunately, the geographer Delisle decided to round this off to 20°, so that it simply became the meridian of Paris disguised.
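Converting between the 0°–360° count used in Ortelius's atlas and by later navigators and the modern signed 180°W–180°E convention is a one-line wrap in either direction. A small Python sketch (counting is assumed to run eastward from the prime meridian):

```python
def to_signed(lon_0_360: float) -> float:
    """Longitude counted 0..360 eastward -> modern signed convention,
    -180 (180 W) .. +180 (180 E)."""
    lon = lon_0_360 % 360.0
    return lon - 360.0 if lon > 180.0 else lon

def to_unsigned(lon_signed: float) -> float:
    """Inverse conversion, from -180..180 back to 0..360."""
    return lon_signed % 360.0

print(to_signed(340.0))    # -> -20.0, i.e. 20 degrees W
print(to_unsigned(-20.0))  # -> 340.0
```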
In the early eighteenth century the battle was on to improve the determination of longitude at sea, leading to the development of the chronometer by John Harrison. But it was the development of accurate star charts, principally by the first British Astronomer Royal, John Flamsteed, between 1680 and 1719, and disseminated by his successor, Edmond Halley, that enabled navigators to use the lunar method of determining longitude more accurately using the octant developed by Thomas Godfrey and John Hadley. Between 1765 and 1811, Nevil Maskelyne published 49 issues of the Nautical Almanac based on the meridian of the Royal Observatory, Greenwich. "Maskelyne's tables not only made the lunar method practicable, they also made the Greenwich meridian the universal reference point. Even the French translations of the Nautical Almanac retained Maskelyne's calculations from Greenwich—in spite of the fact that every other table in the Connaissance des Temps considered the Paris meridian as the prime."
In 1884, at the International Meridian Conference held in Washington, D.C., 22 countries voted to adopt the Greenwich meridian as the prime meridian of the world. The French argued for a neutral line, mentioning the Azores and the Bering Strait, but eventually abstained and continued to use the Paris meridian until 1911.
List of other prime meridians on Earth
|Locality||GPS longitude||Meridian name||Comment|
|Bering Strait||168°30′ W||
|Washington, D.C.||77°03′56.07″ W (1897) or 77°04′02.24″ W (NAD 27) or 77°04′01.16″ W (NAD 83)||New Naval Observatory Meridian|
|Washington, D.C.||77°02′48.0″ W, 77°03′02.3″, 77°03′06.119″ W or 77°03′06.276″ W (both presumably NAD 27). If NAD27, the latter would be 77°03′05.194″ W (NAD 83)||Old Naval Observatory Meridian|
|Washington, D.C.||77°02′11.56258″ W (NAD 83), 77°02′11.55880″ W (NAD 83), 77°02′11.57375″ W (NAD 83)||White House Meridian|
|Washington, D.C.||77°00′32.6″ W (NAD 83)||Capitol meridian|
|Philadelphia||75° 10′ 12″ W|
|Rio de Janeiro||43° 10′ 19″ W|||
|Fortunate Isles / Azores||~ 25° 40′ 32″ W||Used until the Middle Ages, proposed as one possible neutral meridian by Pierre Janssen at the International Meridian Conference|
|El Hierro (Ferro)||18° 03′ W, later redefined as 17° 39′ 46″ W||Ferro meridian|
|Lisbon||9° 07′ 54.862″ W|
|Madrid||3° 41′ 16.58″ W|
|Greenwich||0° 00′ 05.3101″ W||Greenwich meridian||Airy Meridian|
|Greenwich||0° 00′ 05.33″ W||United Kingdom Ordnance Survey Zero Meridian||Bradley Meridian|
|Greenwich||0° 00′ 00.00″ W||IERS Reference Meridian|
|Paris||2° 20′ 14.025″ E||Paris meridian|
|Brussels||4° 22′ 4.71″ E|
|Antwerp||4° 24′ E||Antwerp Meridian||used by Mercator|
|Bern||7° 26′ 22.5″ E|
|Oslo (Kristiania)||10° 43′ 22.5″ E|
|Florence||11°15′ E||Florence Meridian||used in the Peters projection, antipode of a line running through the Bering Strait|
|Rome||12° 27′ 08.4″ E||meridian of Monte Mario|
|Copenhagen||12° 34′ 32.25″ E||Rundetårn|
|Naples||14° 15′ E|||
|Stockholm||18° 03′ 29.8″ E||at the Stockholm Observatory|
|Warsaw||21° 00′ 42″ E||Warsaw Meridian|
|Oradea||21° 55′ 16″ E|
|Alexandria||29° 53′ E|||
|Saint Petersburg||30° 19′ 42.09″ E||Pulkovo Meridian|
|Great Pyramid of Giza||31° 08′ 03.69″ E||1884 |
|Jerusalem||35° 13′ 47.1″ E||for the small dome of the Church of the Holy Sepulchre|
|Mecca||39° 49′ 34″ E||see Mecca Time; approximately 59° east of Greenwich|
|Ujjain||75° 47′ E||Used from 4th century CE Indian astronomy and calendars.|
|Kyoto||135° 44′ E||Used in 18th- and 19th-century (officially 1779–1871) Japanese maps. Exact place unknown, but in "Kairekisyo" in Nishigekkoutyou-town in Kyoto, then the capital.|
|~ 180°||Opposite of Greenwich; proposed 13 October 1884 at the International Meridian Conference by Sandford Fleming|
International prime meridian
In October 1884 the Greenwich Meridian was selected by the delegates (forty-one delegates representing twenty-five nations) to the International Meridian Conference, held in Washington, D.C., United States, to be the common zero of longitude and standard of time reckoning throughout the world.
Prime meridian at Greenwich
The position of the Greenwich Meridian has been defined by the location of the Airy transit circle ever since the first observation was taken with it by Sir George Airy in 1851. Before that, it was defined by a succession of earlier transit instruments, the first of which was acquired by the second Astronomer Royal, Edmond Halley, in 1721. It was set up in the extreme north-west corner of the Observatory, between Flamsteed House and the Western Summer House. This spot, now subsumed into Flamsteed House, is roughly 43 metres to the west of the Airy transit circle, a distance equivalent to roughly 0.15 seconds of time. It was Airy's transit circle that was adopted in principle (with the French delegates, who pressed for adoption of the Paris meridian, abstaining) as the Prime Meridian of the world in 1884.
All of these Greenwich meridians were located via an astronomic observation from the surface of the Earth, oriented via a plumb line along the direction of gravity at the surface. This astronomic Greenwich meridian was disseminated around the world, first via the lunar distance method, then by chronometers carried on ships, then via telegraph lines carried by submarine communications cables, then via radio time signals. One remote longitude ultimately based on the Greenwich meridian using these methods was that of the North American Datum 1927 or NAD27, an ellipsoid whose surface best matches mean sea level under the United States.
IERS Reference Meridian
Satellites changed the reference from the surface of the Earth to its center of mass, around which all satellites orbit regardless of surface irregularities. The first satellite navigation system, TRANSIT, developed in the 1960s, selected as its reference meridian on an Earth-centered ellipsoid the NAD27 longitude of its development laboratory, halfway between Washington, D.C. and Baltimore, Maryland. Keeping the same numeric longitude at a location remote from Greenwich caused 0° of longitude on the Earth-centered ellipsoid to lie 5.3″ east of the astronomic Greenwich prime meridian through the Airy transit circle. At the latitude of Greenwich, this amounts to 102.5 metres. This was officially accepted by the Bureau International de l'Heure (BIH) in 1984 via its BTS84 (BIH Terrestrial System), which later became WGS84 (World Geodetic System 1984) and the various ITRFs (International Terrestrial Reference Frames).
Due to the movement of Earth's tectonic plates, the line of 0° longitude along the surface of the Earth has slowly moved toward the west from this shifted position by a few centimetres since 1984 (or the 1960s); that is, toward the Airy transit circle (or, depending on your point of view, the Airy transit circle has moved toward the east). With the introduction of satellite technology it became possible to create a more accurate and detailed global map, and with these advances there also arose the need to define a reference meridian that, whilst being derived from the Airy transit circle, would also take into account the effects of plate movement and variations in the way that the Earth spins. As a result, the International Reference Meridian was established and is commonly used to denote Earth's prime meridian (0° longitude) by the International Earth Rotation and Reference Systems Service, which defines and maintains the link between longitude and time. Based on observations of satellites and of celestial compact radio sources (quasars) from various coordinated stations around the globe, Airy's transit circle drifts northeast about 2.5 centimetres per year relative to this Earth-centered 0° longitude. Circa 1999 the International Reference Meridian (IRM) passed 5.31 arcseconds east of Airy's meridian, or 102.5 metres (336.3 feet) at the latitude of the Royal Observatory, Greenwich, London. The IRM is also the reference meridian of the Global Positioning System operated by the United States Department of Defense, and of WGS84 and its two formal versions, the ideal International Terrestrial Reference System (ITRS) and its realization, the International Terrestrial Reference Frame (ITRF). A current convention on the Earth uses the opposite of the IRM as the basis for the International Date Line.
List of places
|Country, territory or sea||Notes|
|Arctic Ocean||
|Greenland Sea||
|Norwegian Sea||
|North Sea||
|United Kingdom||The northernmost land on this meridian is near Tunstall in East Riding, Yorkshire; the southernmost land in the UK is Peacehaven, East Sussex.|
|English Channel||
|France||The northernmost point on this meridian is in Villers-sur-Mer, Calvados; the southernmost point is near Gavarnie.|
|Spain||Passing just west of Monte Perdido, in the Pyrenees|
|Mediterranean Sea||Gulf of Valencia|
|Spain||
|Mediterranean Sea||
|Algeria||
|Mali||
|Burkina Faso||
|Togo||For about 600 m|
|Ghana||For about 16 km|
|Togo||For about 39 km|
|Ghana||Passing through Lake Volta|
|Atlantic Ocean||Passing through the Equator|
|Southern Ocean||
|Antarctica||Queen Maud Land, claimed by Norway|
Prime meridian on other planetary bodies
As on the Earth, prime meridians must be arbitrarily defined. Often a landmark such as a crater is used; at other times a prime meridian is defined by reference to another celestial object or to magnetic fields. The prime meridians of the following planetographic systems have been defined:
- The prime meridian of the Moon lies directly in the middle of the face of the moon visible from Earth and passes near the crater Bruce.
- The prime meridian of Mars is defined by the crater Airy-0.
- The prime meridian of Venus passes through the central peak in the crater Ariadne.
- Two different heliographic coordinate systems are used on the Sun. The first is the Carrington heliographic coordinate system. In this system, the prime meridian passes through the center of the solar disk as seen from the Earth on 9 November 1853, which is when Richard Christopher Carrington started his observations of sunspots. The second is the Stonyhurst heliographic coordinate system.
- Jupiter has several coordinate systems because its cloud tops—the only part of the planet visible from space—rotate at different rates depending on latitude. It is unknown whether Jupiter has any internal solid surface that would enable a more Earth-like coordinate system. Scientific Astronomer uses System II coordinates, based on the mean atmospheric rotation of the north and south Equatorial belts. System III coordinates use Jupiter's magnetic field.
- Titan, like the Earth's Moon, always presents the same face towards Saturn, so the meridian through the middle of that face is 0° longitude.
Maps by prime meridian
- Maps in Wikimedia Commons by prime meridian
- Prime Meridian, geog.port.ac.uk
- Norgate 2006
- Hooker 2006
- e.g. Jacob Roggeveen in 1722 reported the longitude of Easter Island as 268° 45′ (starting from Fuerteventura) in the Extract from the Official Log of Jacob Roggeveen, reproduced in Bolton Glanville Corney, ed. (1908), The Voyage of Don Felipe Gonzalez to Easter Island in 1770–1, Hakluyt Society, p. 3, retrieved 13 Jan 2013
- Speech by Pierre Janssen, director of the Paris observatory, at the first session of the Meridian Conference.
- Sobel & Andrewes 1998, pp. 110–115
- Sobel & Andrewes 1998, pp. 197–199
- International Conference Held at Washington for the Purpose of Fixing a Prime Meridian and a Universal Day. October, 1884. Project Gutenberg
- Atlas do Brazil, 1909, by Barão Homem de Mello e Francisco Homem de Mello, published in Rio de Janeiro by F. Briguiet & Cia.
- Ancient, used in Ptolemy's Geographia. Later redefined 17° 39′ 46″ W of Greenwich to be exactly 20° W of Paris. French "submarin" at Washington 1884.
- The meridian of Ptolemy's Almagest.
- Wilcomb E. Washburn, "The Canary Islands and the Question of the Prime Meridian: The Search for Precision in the Measurement of the Earth"
- Maimonides, Hilchot Kiddush Hachodesh 11:17, calls this point אמצע היישוב, "the middle of the habitation", i.e. the habitable hemisphere. Evidently this was a convention accepted by Arab geographers of his day.
- Burgess c. 2013
- "International Conference Held at Washington for the Purpose of Fixing a Prime Meridian and a Universal Day. October, 1884. Protocols of the proceedings.". Project Gutenberg. 1884. Retrieved 30 November 2012.
- Greenwich Observatory ... the story of Britain's oldest scientific institution, the Royal Observatory at Greenwich and Herstmonceux, 1675-1975 p.10. Taylor & Francis, 1975
- McCarthy, Dennis D.; Seidelmann, P. Kenneth (2009). TIME: From Earth Rotation to Atomic Physics. Weinheim: Wiley-VCH. pp. 244–5.
- ROG Learning Team (23 August 2002). "The Prime Meridian at Greenwich". Royal Museums Greenwich. Retrieved 14 June 2012.
- History of the Prime Meridian - Past and Present
- IRM on grounds of Royal Observatory from Google Earth Accessed 30 March 2012
- The astronomic latitude of the Royal Observatory is 51°28'38"N whereas its latitude on the European Terrestrial Reference Frame (1989) datum is 51°28'40.1247"N.
- Guinot, B., 2011. Solar time, legal time, time in use. Metrologia 48, S181–S185.
- "USGS Astrogeology: Rotation and pole position for the Sun and planets (IAU WGCCRE)". Retrieved 22 October 2009.
- "Carrington heliographic coordinates".
- "Planetographic Coordinates". Retrieved 2009-06-19.
- Burgess, Ebenezer (c. 2013), "Translation of the Surya-Siddhanta", Journal of the American Oriental Society (e-book) 6, Google, p. 185
- Hooker, Brian (2006), A multitude of prime meridians, retrieved 13 Jan 2013
- Norgate, Jean and Martin (2006), Prime meridian, retrieved 13 Jan 2013
- Sobel, Dava; Andrewes, William J. H. (1998), The Illustrated Longitude, Fourth Estate, London
|Wikimedia Commons has media related to: Prime meridian|
- "Where the Earth's surface begins—and ends", Popular Mechanics, December 1930
- scanned TIFFs of the conference proceedings
- Prime meridians in use in the 1880s, by country
- Canadian Prime Meridian | 2026-01-20T18:35:59.350291 |
1,137,168 | 3.854662 | http://www.newworldencyclopedia.org/entry/Sperm | |A sperm cell attempts to penetrate an ovum coat to fertilize it.|
|Human spermatozoön. Diagrammatic. A. Surface view. B. Profile view. In C the head, neck, and connecting piece are more highly magnified.|
|Gray's||subject #258 1243|
A spermatozoon or spermatozoan (pl. spermatozoa), from the ancient Greek σπερμα (seed) and ζων (alive), is more commonly known as a sperm or sperm cell. It is the haploid male gamete cell, meaning it is the reproductive cell containing a single set of chromosomes. A sperm fertilizes an ovum, which serves as the female haploid gamete. Together, the sperm cell and ovum form a zygote, or fertilized egg, which can then grow and develop into a new organism.
Because sperm cells are haploid, they contribute half of the genetic information to the diploid offspring, which contain two sets of chromosomes. In mammals, the sex, or gender, of the offspring is determined by the sperm cell since the ovum always provides an X chromosome. A spermatozoon bearing a Y chromosome will lead to a male (XY) offspring, while a spermatozoon bearing an X chromosome will lead to a female (XX) offspring.
The production of sperm—over 200 million produced in a day in a human—and the process of locating and fertilizing an egg is itself a complex process, involving meiosis, mitosis, delayed maturation, various hormones and enzymes, sense detectors, sperm receptors, and other features. And this is only from the perspective of the sperm. However remarkable the development of such an intricate process in living organisms may be, sexual reproduction is a nearly universal aspect of life. Notably, in creating new life, both male and female parents contribute to the offspring. This reflects the universal biological principle of dual characteristics, or polarity, and some religions hold that this further reflects the characteristics of the Supreme Being as a unified being of both masculinity and femininity.
Sperm cells were first observed by a student of Antonie van Leeuwenhoek in 1677. Semen is the fluid that contains spermatozoa.
Sperm Cell Structure
In humans, a sperm cell consists of a head, which is 5 µm by 3 µm, and a 50 µm long tail, or flagellum. The dense nucleus is covered by a vesicle called the acrosome, which contains enzymes that are crucial for fertilization. The sperm cell contains only a minimal amount of cytoplasm. The midpiece, which is located between the head and tail of the sperm cell, contains centrioles, microtubules, and a mitochondrial spiral. These structures aid in movement and fertilization.
During fertilization, the sperm's mitochondria are destroyed by the egg cell, so only the mother provides the offspring's mitochondrial DNA. This fact plays an important role in tracing maternal ancestry. However, it has recently been discovered that mitochondrial DNA may be recombinant, that is, a combination of genes not found together in either parent.
Spermatozoan streamlines are straight and parallel. The tail flagellates, propelling the human sperm cell at about 1 to 3 mm per minute by rotating like a propeller. The Reynolds number associated with spermatozoa is on the order of 10⁻². The Reynolds number is a ratio of inertial to viscous forces and is used to determine whether flow will be laminar or turbulent. A small number indicates that viscous forces dominate and the flow is laminar, meaning sperm cells experience smooth and constant fluid motion.
In marine invertebrates, the sperm cell is a flagellate cell consisting of a flagellum, acrosome, and perforatorium. Such organisms practice external fertilization (Baccetti 1986).
The largest known spermatozoa belong to the fruit fly.
Sperm cell production in humans
In humans, once a male reaches puberty, the period when the gonads (reproductive organs) mature in the early teen years, sperm cells are produced continuously throughout the rest of the male's lifetime (gonads are inactive after birth until puberty). Sperm production does diminish with age, but never completely ceases. Women, on the other hand, are born with all the eggs they will ever have. After approximately 50 years, their reproductive cycle ends during a period known as menopause.
Spermatozoa are all derived from germ cells, or the embryonic gonadal cells that produce gametes. Gametes are the reproductive cells that unite to form a new individual. In the testes (gonads) of a newborn boy, immature spermatogonia, which are germ cells, are present. Some of these spermatogonia continually duplicate themselves through the process of mitosis. Other spermatogonia undergo meiosis and eventually develop into sperm.
During the process of spermatogenesis, germ cells mature to become sperm cells. The first part of the process occurs in the seminiferous tubules of the male testes and takes approximately 64 days. Final maturation of sperm cells occurs in the epididymis (a hollow duct) over a 12-day period. In the seminiferous tubules, spermatogonia first develop into primary spermatocytes, which enter meiosis. In the first meiotic division, each primary spermatocyte divides into two secondary spermatocytes. Each of the two secondary spermatocytes divides into two spermatids during the second meiotic division. Spermatids are haploid cells and contain 23 single chromosomes. The spermatids then mature into sperm as they lose most of their cytoplasm and develop a flagellated tail. Also, the nucleus' chromatin condenses into a dense structure as a vesicle called the acrosome covers most of the surface of the nucleus. Although sperm cells have been formed at this point in spermatogenesis, they are not yet mature or able to swim freely. The final maturation process takes place once the sperm cells have moved into the epididymis, where they mature over the 12 days or so.
Once the process of spermatogenesis is complete for one primary spermatocyte, the end result is the creation of four sperm cells. The average life span of sperm is between four and six days.
The entire process, from spermatogonium to mobile and functional sperm, takes approximately 76 days. However, at any one time different cells may be in a different stage of the development process. This staggering of developmental stages allows sperm cell production to stay steady at about 200 million sperm per day (Silverthorn 2004). Although this number may seem excessively large, it is about the number of sperm released in a single ejaculation.
Several hormones are required to initiate and maintain gametogenesis, which is the formation of gametes. Without gametogenesis, sperm cells could never form. Follicle stimulating hormone (FSH) from the anterior pituitary, along with sex hormones, is required for gametogenesis. The hypothalamus, a part of the brain, controls the release of FSH through gonadotropin releasing hormone (GnRH).
While in the seminiferous tubules, various cells aid sperm development. Sertoli cells, also called sustentacular cells, are one such type. They regulate sperm development by nourishing the spermatogonia, and they manufacture a variety of proteins, ranging from hormones to growth factors and enzymes.
Fertilization and the acrosomal reaction
The main function of sperm is to fertilize an egg to form a zygote. In order to do so, a sperm cell must locate the egg, penetrate its protective layerings, and then finally fuse its genetic material with the egg. The process through which a sperm cell breaks through the barriers of the egg is called the acrosomal process.
Once sperm has entered the female's vagina or cloaca, the sperm begins its task of locating the egg. Sperm do not swim randomly; they use various clues and factors to help reach the egg.
In humans, the female reproductive tract apparently becomes warmer as the Fallopian tubes are neared. Research at Harvard University has shown that sperm swim from colder to warmer regions (Flam 2006). Research has also indicated that sperm swim towards increasing concentration gradients of a synthetic compound called bourgeonal (Flam 2006). Whether the egg or the female body releases the chemoattractant is as yet unclear, but studies have convincingly shown that sperm can, in effect, smell. Essentially, sperm smell their way from the vagina to the location of the egg in the distal parts of the female's Fallopian tubes (Flam 2006). Once the sperm meets the egg, fertilization can occur.
For successful fertilization to take place, a sperm must first penetrate the various layers surrounding the egg. The outer layer of the egg consists of loosely connected granulosa cells. These cells make up what is known as the corona radiata; they develop with the egg to support its growth and then serve as a physical barrier to fertilization. Once past this outer layer, a sperm must penetrate the protective glycoprotein coat of the zona pellucida. To get past these two major barriers, a sperm must release the powerful enzymes contained in the acrosome of the sperm head. The release of these enzymes begins the acrosomal process.
Once a sperm nears an ovum, capacitation and hyperactivity occur. The sperm begins to swim more rapidly and forcefully. A recent discovery links hyperactivity to a sudden influx of calcium ions into the tail of the sperm. The flagellum contains ion channels formed by a protein called CatSper. These ion channels are selective and allow only calcium ions to flow in. Hence, the opening of CatSper channels leads to the influx of calcium. The sudden rise in calcium levels in the tail causes increased activity in the flagellum, propelling the sperm more forcefully through the viscous environment of the female uterus. Sperm hyperactivity is necessary for breaking through the physical barriers that protect the egg from fertilization. Once a sperm is capacitated and reaches the egg, enzymes are released from the acrosome in order to dissolve cell junctions and the zona pellucida coat (Carlson 2003).
After the sperm has wiggled its way toward the egg, one of the proteins that makes up the zona pellucida binds to a partner molecule receptor on the sperm. The zona pellucida consists of three or four glycoproteins, one of them being ZP3, or zona pellucida glycoprotein 3. It is the sperm receptor on the egg surface and functions in the initial binding and induction of the sperm acrosomal reaction. This lock-and-key type mechanism is species-specific and prevents the sperm and egg of different species from fusing. Once the sperm has bound to ZP3, the fused section of the membranes opens and the nucleus of the sperm is transferred to the egg cytoplasm. Changes in the egg cell follow and help to prevent polyspermy, or the fertilization of the egg by more than one sperm.
Problems with sperm production, motility, or count may lead to infertility. In immotile cilia syndrome, an autosomal recessive defect, the cilia of the airways and the sperm flagella are immobile or poorly motile. Consequently, an egg cannot be fertilized, and male infertility results.
Azoospermia can also lead to infertility. In males afflicted with azoospermia, no measurable amount of sperm is present in the semen. Azoospermia has two forms: obstructive azoospermia, where sperm are created but cannot mix with the rest of the ejaculatory fluid due to a physical obstruction, and non-obstructive azoospermia, where there is a problem with spermatogenesis itself. Obstructive azoospermia can be caused by, for example, cystic fibrosis or obstruction of various sperm pathways, while non-obstructive azoospermia can be caused by chemotherapy and Klinefelter syndrome.
A third pathology, impairment of sperm transport, may lead to infertility as well. It can be caused by a variety of factors such as obstruction of the epididymis or vas deferens and cystic fibrosis (Wilson 1991).
- Baccetti, B. 1986. Evolutionary trends in sperm structure. Comp Biochem Physiol A. 85(1): 29-36. PubMed PMID: 2876819.
- Carlson, A. et al. 2003. CatSper1 required for invoked Ca2+ entry and control of flagellar function in sperm. Proceedings of the National Academy of Sciences 100(25).
- Flam, F. 2006. Researchers delve deep to explore the secret life of sperm. Seattle: The Seattle Times Company. May 17, 2006.
- Silverthorn, D. 2004. Human Physiology, An Integrated Approach (3rd Edition). San Francisco: Benjamin Cummings. ISBN 013102153.
- Wilson, J. D., et al. 1991. Harrison's Principles of Internal Medicine (12th Edition). New York: McGraw-Hill, Inc. ISBN 0070708908
New World Encyclopedia writers and editors rewrote and completed the Wikipedia article in accordance with New World Encyclopedia standards. This article abides by terms of the Creative Commons CC-by-sa 3.0 License (CC-by-sa), which may be used and disseminated with proper attribution. Credit is due under the terms of this license that can reference both the New World Encyclopedia contributors and the selfless volunteer contributors of the Wikimedia Foundation. To cite this article click here for a list of acceptable citing formats.The history of earlier contributions by wikipedians is accessible to researchers here:
- Spermatozoon (May 13, 2006) history
- Reynolds_number (May 13, 2006) history
- Azoospermia (May 13, 2006) history
Note: Some restrictions may apply to use of individual images which are separately licensed. | 2026-02-04T20:37:58.499695 |
160,500 | 4.075963 | http://en.wikipedia.org/wiki/Kingdom_of_Abkhazia | Kingdom of Abkhazia
The Kingdom of Abkhazia (Georgian: აფხაზეთის სამეფო; Aphkhazetis Samepo), also known as the Kingdom of the Abkhazes (აფხაზთა სამეფო) refers to an early medieval feudal state in the Caucasus which lasted from the 780s until being united, through dynastic succession, with the Kingdom of Georgia (see Tao-Klarjeti) in 1008.
Historiographical conundrum
The problem of the Abkhazian Kingdom, particularly the questions of the nature of its ruling family and its ethnic composition, is a main point of controversy between modern Georgian and Abkhaz scholars, largely because of the scarcity of primary sources on these issues. Most Abkhaz historians claim the kingdom was formed as a result of the consolidation of the early Abkhaz tribes, which enabled them to extend their dominance over the neighboring areas. Georgian historians object to this, some of them claiming that the kingdom was entirely Georgian.
Most international scholars agree that it is extremely difficult to judge the ethnic identity of the various population segments, primarily because the terms "Abkhazia" and "Abkhazians" were used in a broad sense during this period—and for some while later—and covered, for all practical purposes, all the population of the kingdom, comprising both the Georgian peoples (including the Mingrelians, Laz, and Svans, with their distinct languages that are sisters to Georgian) and the probable ancestors of the modern Abkhaz (the Abasgoi, Apsilae, and Zygii). It seems likely that a significant (if not predominant) proportion of Georgian-speaking population, combined with a drive of the Abkhazian kings to throw off Byzantine political and cultural dominance, resulted in Georgian replacing Greek as the language of literacy and culture.
Early history
Abkhazia, or Abasgia of classic sources, was a princedom under Byzantine authority. It lay chiefly along the Black Sea coast, in what is now the northwestern part of the modern-day disputed Republic of Abkhazia, and extended northward into the territory of today's Krasnodar Krai of Russia. Its capital was Anacopia. Abkhazia was ruled by a hereditary archon who effectively functioned as a Byzantine viceroy. The country was chiefly Christian, and the city of Pityus was the seat of an archbishop directly subordinate to the Patriarch of Constantinople. The Arabs, pursuing the retreating Georgian princes – the brothers Mir of Egrisi and Archil of Kartli – surged into Abkhazia in 736. Dysentery and floods, combined with the stubborn resistance offered by the archon Leon I and his Kartlian and Egrisian allies, made the invaders retreat. Leon I then married Mir's daughter, and his successor, Leon II, exploited this dynastic union to acquire Egrisi (Lazica) in the 770s. Presumably considered a successor state of Lazica, this new polity continued to be referred to as Egrisi in some contemporary Georgian (e.g., The Vitae of the Georgian Kings by Leonti Mroveli) and Armenian (e.g., The History of Armenia by Hovannes Draskhanakertsi) chronicles.
The successful defense against the Arabs, and new territorial gains, gave the Abkhazian princes enough power to claim more autonomy from the Byzantine Empire. Around 786, Leon won his full independence with the help of the Khazars; he assumed the title of King of the Abkhazians and transferred his capital to the western Georgian city of Kutatisi (modern-day Kutaisi). According to the Georgian annals, Leon subdivided his kingdom into eight duchies: Abkhazia proper, Tskhumi, Bedia, Guria, Racha and Takveri, Svaneti, Argveti, and Kutatisi.
The most prosperous period of the Abkhazian kingdom was between 850 and 950. In the early years of the 10th century it stretched, according to Byzantine sources, three hundred Greek miles along the Black Sea coast, from the frontiers of the thema of Chaldia to the mouth of the river Nicopsis, with the Caucasus behind it. The increasingly expansionist tendencies of the kingdom led to the enlargement of its realm to the east. Beginning with George I (872/73–878/79), the Abkhazian kings also controlled Kartli (central and part of eastern Georgia) and interfered in the affairs of the Georgian and Armenian Bagratids. In about 908, King Constantine III (898/99–916/17) finally annexed a significant portion of Kartli, bringing his kingdom up to the neighborhood of Arab-controlled Tfilisi (modern-day Tbilisi). Under his son, George II (916/17–960), the Abkhazian Kingdom reached the climax of its power and prestige. For a brief period, Kakheti in eastern Georgia and Hereti in the Georgian-Albanian marches also recognized Abkhazian suzerainty. As a temporary ally of the Byzantines, George II patronized the missionary activities of Nicholas Mystikos in Alania.
George’s successors, however, were unable to retain the kingdom’s strength and integrity. During the reign of Leon III (960-969), Kakheti and Hereti emancipated themselves from the Abkhazian rule. A bitter civil war and feudal revolts which began under Demetrius III (969-976) led the kingdom into complete anarchy under the unfortunate king Theodosius III the Blind (976-978). By that time the hegemony in Transcaucasia had finally passed to the Georgian Bagratids of Tao-Klarjeti. In 978, the Bagratid prince Bagrat, nephew (sister’s son) of the sonless Theodosius, occupied the Abkhazian throne with the help of his adoptive father David III of Tao. In 1008, Bagrat succeeded on the death of his natural father Gurgen as the King of Kings of the Georgians. Thus, these two kingdoms unified through dynastic succession, in practice laying the foundation to the unified Georgian monarchy, officially styled then as the Kingdom of Georgians.
Seljuk Invasion
The second half of the 11th century was marked by the disastrous invasion of the Seljuk Turks, who by the end of the 1040s had succeeded in building a vast nomadic empire including most of Central Asia and Iran. In 1071 the Seljuk armies destroyed the united Byzantine-Armenian and Georgian forces in the Battle of Manzikert, and by 1081 all of Armenia, Anatolia, Mesopotamia, Syria, and most of Georgia had been conquered and devastated by the Seljuks.
Only Abkhazia and the mountainous areas of Svanetia, Racha, and Khevi-Khevsureti did not acknowledge Seljuk suzerainty, serving as a relatively safe haven for numerous refugees. By the end of 1099, David IV of Georgia had stopped paying tribute to the Seljuks and had put most of the Georgian lands, except Tbilisi and Hereti, under his effective control, with Abkhazia and Svanetia as his reliable rear bases. In 1105–1124, the Georgian armies under King David undertook a series of brilliant campaigns against the Seljuk Turks and liberated not only the rest of Georgia but also the Christian-populated Ghishi-Kabala area in western Shirvan and a large portion of Armenia.
All Abkhazian kings, with the exception of John and Adarnase of the house of Shavliani (presumably of Svan origin), came from the dynasty which is sometimes known in modern history writing as the Leonids, after the first king Leon, or the Anosids, after the prince Anos from whom the royal family claimed their origin. Prince Cyril Toumanoff relates the name of Anos to the later Abkhaz noble family of Achba, or Anchabadze. By convention, the regnal numbers of the Abkhazian kings continue from those of the archons of Abasgia. There is also some inconsistency about the dates of their reigns; the chronology below is given as per Toumanoff.
House of the Anosids (Achba/Anchabadze)
- Leon II, 767/68–811/12
- Theodosius II, 811/12–837/38
- Demetrius II, 837/38–872/73
- George I of Aghts’epi, 872/73–878/79
House of Shavliani
House of the Anosids (Achba/Anchabadze)
- Bagrat I, 887/88–898/99
- Constantine III, 898/99–916/17
- George II, 916/17–960
- Leon III, 960–969
- Demetrius III, 969–976
- Theodosius III, 976–978
House of Bagrationi
- Bagrat II, 978–1014
Notes
- Graham Smith, Edward A Allworth, Vivien A Law et al., pages 56-58.
- Graham Smith, Edward A Allworth, Vivien A Law et al., pages 56-58; Abkhaz by W. Barthold V. Minorsky in the Encyclopaedia of Islam.
- Alexei Zverev, Ethnic Conflicts in the Caucasus; Graham Smith, Edward A Allworth, Vivien A Law et al., pages 56-58; Abkhaz by W. Barthold [V. Minorsky] in the Encyclopaedia of Islam; The Georgian-Abkhaz State (summary), by George Anchabadze, in: Paul Garb, Arda Inal-Ipa, Paata Zakareishvili, editors, Aspects of the Georgian-Abkhaz Conflict: Cultural Continuity in the Context of Statebuilding, Volume 5, August 26–28, 2000.
- Vakhushti Bagrationi, The History of Egrisi, Abkhazeti or Imereti, part 1.
- Rapp, pages 481-484.
References and further reading
- (English) Alexei Zverev, Ethnic Conflicts in the Caucasus 1988-1994, in B. Coppieters (ed.), Contested Borders in the Caucasus, Brussels: VUBPress, 1996
- Graham Smith, Edward A Allworth, Vivien A Law, Annette Bohr, Andrew Wilson, Nation-Building in the Post-Soviet Borderlands: The Politics of National Identities, Cambridge University Press (September 10, 1998), ISBN 0-521-59968-7
- Encyclopaedia of Islam
- (English) Center for Citizen Peacebuilding, Aspects of the Georgian-Abkhazian Conflict
- (Russian) Vakhushti Bagrationi, The History of the Georgian Kingdom: The Life of Egrisi, Abkhazeti, or Imereti, Part 1 (Вахушти Багратиони, История царства грузинского)
- S. H. Rapp, Studies In Medieval Georgian Historiography: Early Texts And Eurasian Contexts, Peeters Bvba (September 25, 2003) ISBN 90-429-1318-5
- (English) Conflicting Narratives in Abkhazia and Georgia. Different Visions of the Same History and the Quest for Objectivity, an article by Levan Gigineishvili, 2003
- (English) The Role of Historiography in the Abkhazo-Georgian Conflict, an article by Seiichi Kitagawa, 1996
- (English) History of Abkhazia. Medieval Abkhazia: 620-1221 by Andrew Andersen
- Georgiy I Mirsky, G I Mirskii, On Ruins of Empire: Ethnicity and Nationalism in the Former Soviet Union (Contributions in Political Science), Greenwood Press (January 30, 1997) ISBN 0-313-30044-5
- Ronald Grigor Suny, The Making of the Georgian Nation: 2nd edition (December 1994), Indiana University Press, ISBN 0-253-20915-3, page 45
- Robert W. Thomson (translator), Rewriting Caucasian History: The Medieval Armenian Adaptation of the Georgian Chronicles: The Original Georgian Texts and Armenian Adaptation (Oxford Oriental Monographs), Oxford University Press, USA (June 27, 1996), ISBN 0-19-826373-2
- Toumanoff, C., "Chronology of the Kings of Abasgia and other Problems", Le Muséon, 69 (1956), pp. 73–90. | 2026-01-20T17:19:25.515897