| url (string, 17–838 chars) | text (string, 509–588k chars) | meta (dict) |
|---|---|---|
http://sites.bu.edu/ombs/tag/lasers/
|
As a neuroscientist, one typically becomes accustomed to thinking outside the box. After all, the brain is incredibly complex and cryptic, and some creative thought is required to develop methods to uncover its secrets.
Francis Crick once observed that one of the greatest hurdles standing in the way of neuroscience is the inability to stimulate a single neuron without altering any of its surrounding cells. This daunting task seems like pure fantasy when one considers the innumerable intricate connections of which the brain is composed. However, Harvard University’s Samuel Lab has turned that dream into reality with its groundbreaking research on transgenic C. elegans. Recently published in Nature, the research of Dr. Andrew M. Leifer and his team uses an optogenetic manipulation technique called ColBeRT to control the nervous system of a worm with laser light.
Optogenetics is the methodology of employing genetics and visible light to manipulate the activity of living cells. For optogenetics to be applicable in the laboratory, opsin genes, which encode light-sensitive proteins, must be inserted into an organism’s genome. The activity of the resulting opsin-containing cells can then be regulated by exposure to visible light. In the work of Leifer et al., optogenetics provides a platform by which the genetically altered motor and sensory neurons of C. elegans can be controlled with a precise laser. Why use C. elegans? According to Leifer et al., “the nematode C. elegans is particularly amenable to optogenetics owing to its optical transparency, compact nervous system and ease of genetic manipulation.”
The ColBeRT (Controlling Locomotion and Behavior in Real-Time) technique, designed for the optogenetics research being done at The Samuel Lab, provides a way to track a specific worm’s movement. A video camera with real-time feedback follows an illuminated, moving C. elegans under dark-field illumination. The worm is placed on a motorized stage, which keeps the image of the organism centered in the camera’s view. Specialized graphical user interface (GUI) software called MindControl recognizes and registers the worm’s movement, processes each video frame, and divides the worm’s image into 100 evenly spaced segments. From these segments, specific target cells can be chosen, and their locations are transferred to a DMD (digital micromirror device). The DMD projects this pattern onto the worm, illuminating the targeted points with a laser. A simple algorithm designed specifically for analyzing the movement of C. elegans lets the laser pinpoint the location of a target cell even as the worm moves.
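The closed-loop pipeline just described (camera, segmentation, DMD mask) can be sketched in a few dozen lines of Python. This is not the Samuel Lab's MindControl code; the frame source, the segment_worm stub, and the chosen segment indices are hypothetical stand-ins, meant only to illustrate how tracking, 100-point segmentation, and DMD targeting fit together.

```python
import numpy as np

N_SEGMENTS = 100  # the technique divides the worm's image into 100 segments

def segment_worm(frame: np.ndarray) -> np.ndarray:
    """Stand-in for MindControl's image processing: threshold the
    dark-field frame, extract the worm's outline, and return its
    centerline resampled to N_SEGMENTS evenly spaced points.
    Here we fabricate a plausible centerline so the sketch runs."""
    ys = np.linspace(100, 400, N_SEGMENTS)
    xs = 250 + 30 * np.sin(np.linspace(0, 2 * np.pi, N_SEGMENTS))
    return np.stack([xs, ys], axis=1)

def dmd_mask(centerline: np.ndarray, targets: range,
             shape=(512, 512), radius: int = 5) -> np.ndarray:
    """Build the binary illumination pattern sent to the DMD: ones in
    small discs around the chosen body segments, zeros elsewhere."""
    mask = np.zeros(shape, dtype=bool)
    yy, xx = np.mgrid[0:shape[0], 0:shape[1]]
    for i in targets:
        cx, cy = centerline[i]
        mask |= (xx - cx) ** 2 + (yy - cy) ** 2 <= radius ** 2
    return mask

# One iteration of the ~50 Hz loop: grab a frame, find the worm, pick
# target segments (indices 20-30 here are arbitrary, for illustration),
# and project laser light onto only those points.
frame = np.zeros((512, 512))          # stand-in for a camera frame
centerline = segment_worm(frame)
mask = dmd_mask(centerline, targets=range(20, 31))
print(f"illuminating {mask.sum()} DMD pixels")
```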
The impressive spatial and temporal resolution of the ColBeRT technique makes the system scientifically applicable and valid. Running at ~50 frames per second, with a feedback latency of only ~30 ms, the system allowed Leifer et al. to perform a number of manipulations on their transgenic nematodes. When cholinergic motor neurons of transgenic C. elegans were exposed to laser light, forward movement was suppressed and either paralysis or backward movement was evoked. Similarly, single touch receptors of the worms were genetically modified to be light-sensitive. In a normal worm, a gentle touch stimulates these receptors, causing the worm to recoil in the opposite direction. In transgenic C. elegans, illuminating these specific receptors with laser light changed the direction of the worm’s movement just as a physical stimulus would have. Even more astounding, HSNs (hermaphrodite-specific neurons), which innervate the vulval region of C. elegans, could also be genetically modified and stimulated with light. When a thin laser strip was shone on the HSN region of the worms, involuntary egg-laying was evoked.
Although still in its early stages, the ColBeRT technique seems to be a promising solution to one of the primary difficulties standing in the way of neuroscience. ColBeRT not only highlights cell-to-cell interactions but also identifies the precise actions of specific neurons, something previously thought impossible. As technology develops further, perhaps we will soon be able to manipulate the cells of more complex organisms and, eventually, even mammals. Optogenetics and techniques like ColBeRT may be the key to uncovering the subtleties of different neurons and could even help map the human brain.
Single Worm Neurons Remotely Controlled with Lasers – Scientific American
The Samuel Lab videos – Vimeo
|
{
"palladium_score": 3.704738140106201,
"timestamp": "2026-01-18T07:31:35.164823",
"source": "Palladium-STEM (Preview)"
}
|
http://www.healthline.com/health/multiple-sclerosis?globalHeader=yes
|
Table of Contents:
- Symptoms of MS
- What Is Multiple Sclerosis?
- Multiple Sclerosis Treatment
- Early Signs of MS
Part 1 of 11
What are the symptoms of MS?
People with MS experience a wide range of symptoms. Due to the nature of the disease, symptoms can vary widely from one person to another. They can also change in severity from year to year, month to month, and even day to day.
Two of the most common symptoms are fatigue and difficulty walking.
About 80 percent of people with MS report having fatigue. Fatigue that occurs with MS is more than feeling tired. It can become debilitating, affecting your ability to work and perform everyday tasks.
Difficulty walking can occur with MS for a number of reasons:
- numbness of the legs or feet
- difficulty balancing
- muscle weakness
- muscle spasticity
Overwhelming fatigue can also contribute to the problem. Difficulty walking can lead to injuries due to falling.
Other fairly common symptoms of MS include:
- speech disorders
- cognitive issues involving concentration, memory, and problem-solving skills
- acute or chronic pain
Part 2 of 11
What is multiple sclerosis?
MS is a chronic illness involving the central nervous system. The immune system attacks myelin, which is the protective layer around nerve fibers. This causes inflammation and scar tissue, or lesions. This can make it hard for your brain to send signals to the rest of your body. Types of MS include:
Relapsing-remitting MS (RRMS)
RRMS involves clear relapses of disease activity followed by remissions. Symptoms are mild or absent during remission, and there’s no disease progression during the remission period. RRMS is the most common form of MS at onset.
Clinically isolated syndrome (CIS)
CIS involves one episode of symptoms that are due to demyelination in the central nervous system. This episode must last for at least 24 hours.
The two types of episodes are monofocal and multifocal. A monofocal episode means one lesion causes one symptom. A multifocal episode means you have more than one lesion and more than one symptom.
Although these episodes are characteristic of MS, they aren’t enough to prompt a diagnosis. If lesions similar to those that occur with MS are present, you’re more likely to receive a diagnosis of RRMS. If these lesions aren’t present, you’re less likely to develop MS.
Primary-progressive MS (PPMS)
Neurological function becomes progressively worse from the onset of your symptoms if you have PPMS. However, short periods of stability can still occur.
Progressive-relapsing MS was a term people previously used for progressive MS with clear relapses. People now call it PPMS. The words “active” and “not active” are used to describe disease activity.
Secondary-progressive MS (SPMS)
SPMS occurs when RRMS transitions into the progressive form. You may still have noticeable relapses, in addition to gradual worsening of function or disability.
Part 3 of 11
Treatment for multiple sclerosis
No cure is available for MS, but multiple treatment options exist.
If you have relapsing-remitting MS (RRMS), you can choose one of the disease-modifying drugs. These medications are designed to slow disease progression and lower your relapse rate.
Self-injectable disease-modifying drugs include glatiramer (Copaxone, Glatopa) and beta interferons. Oral medications and intravenous infusion treatments are also available for RRMS.
Disease-modifying drugs aren’t effective in treating progressive MS.
Other treatments may ease your symptoms and improve your quality of life. Because the disease is different for everybody, treatment depends on your specific symptoms. For most people, a flexible approach is necessary.
Part 4 of 11
Early signs of MS
MS can develop all at once, or the symptoms can be so mild that you easily dismiss them. Any symptom can occur first. The following are three of the most common early symptoms of MS:
- Strange sensations, such as numbness and tingling of the arms, legs, or one side of your face, can occur. The feeling is similar to the pins and needles you get when your foot falls asleep, but it occurs for no apparent reason.
- Your balance may be a bit off, and your legs may feel weak. You may find yourself tripping easily while walking or doing some other type of physical activity.
- A bout of double vision, blurry vision, or partial vision loss can be an early indicator of MS. You could also have some eye pain.
It isn’t uncommon for these early symptoms to go away only to return at a later date. You may go weeks, months, or even years between symptom flare-ups.
These symptoms can have many different causes. If you have these symptoms, it doesn’t necessarily mean that you have MS.
Part 5 of 11
Your doctor will need to perform a neurological exam, take a clinical history, and order a series of other tests to determine if you have MS.
Diagnostic testing may include the following:
- MRI is the best imaging test for MS. Using a contrast dye allows the MRI to detect active and inactive lesions throughout the brain and spinal cord.
- Evoked potentials require stimulation of nerve pathways to analyze electrical activity in the brain. The three types of evoked potentials doctors use to help diagnose MS are visual, brainstem, and sensory.
- A spinal tap, or lumbar puncture, can help your doctor find abnormalities in your spinal fluid. It can help rule out infectious diseases.
- Doctors use blood tests to eliminate other conditions with similar symptoms.
The diagnosis of MS requires evidence of demyelination in more than one area of the brain, spinal cord, or optic nerves. That damage must have occurred at different times.
It also requires ruling out other conditions that have similar symptoms, such as Lyme disease, lupus, and Sjogren's syndrome.
Part 6 of 11
What causes multiple sclerosis?
If you have MS, the myelin in your body becomes damaged. Myelin is the protective layer that covers nerve fibers throughout the central nervous system.
It’s thought that the damage is the result of an attack by the immune system. As your immune system attacks myelin, it causes inflammation. This leads to scar tissue, or lesions. All of that inflammation and scar tissue disrupts signals between the brain and other parts of your body.
It isn’t clear what may cause the immune system to attack.
Is MS hereditary?
MS isn’t hereditary, but having a parent or sibling with MS raises your risk slightly. Genetics may play a role. Scientists have identified some genes that seem to increase susceptibility to developing MS.
Researchers think there could be an environmental trigger such as a virus or toxin that sets off the immune system attack.
Part 7 of 11
What is the prognosis for people with multiple sclerosis?
It’s almost impossible to predict how MS will progress in any one person.
About 10 to 15 percent of people with MS have only rare attacks and minimal disability ten years after diagnosis. This is sometimes called “benign MS,” though it isn’t a medical diagnosis.
Progressive MS generally advances faster than relapsing-remitting MS (RRMS). People with RRMS can be in remission for many years. A lack of disability after five years is usually a good indicator for the future.
The disease generally progresses faster in men than in women. It may also progress faster in those who receive a diagnosis after age 40 and in those who have a high relapse rate.
About half of people with MS use a cane or other form of assistance within 15 years of receiving an MS diagnosis. At 20 years, about 60 percent are still ambulatory, and less than 15 percent need custodial care.
Your quality of life will depend on your symptoms and how well you respond to treatment. Most people with MS don’t become severely disabled and continue to lead full lives.
This unpredictable disease can change course without warning. It’s rarely fatal, and most people with MS have a lifespan that’s close to normal.
Part 8 of 11
Living with MS
Most people with MS find ways to manage their symptoms and function well. You’ll face unique challenges, and those can change over time. Many people with MS share their struggles and coping strategies through in-person or online support groups.
Having MS means you’ll need to see a doctor experienced in treating MS.
If you take one of the disease-modifying drugs, you’ll have to make sure you adhere to the recommended schedule. Your doctor may prescribe other medications to treat specific symptoms.
A well-balanced diet, low in empty calories and high in nutrients and fiber, will help you manage your overall health.
Regular exercise is important for physical and mental health, even if you have disabilities. If physical movement is difficult, swimming or exercising in a swimming pool can help. Yoga classes range from beginner to advanced levels, and some are designed just for people with MS.
Studies regarding the effectiveness of complementary therapies are scarce, but that doesn’t mean they can’t help in some way.
The following may help you feel less stressed and more relaxed:
- tai chi
- music therapy
MS is a lifelong condition. You should focus on communicating concerns with your doctor, learning all you can about MS, and discovering what things make you feel your best.
Part 9 of 11
Dietary recommendations for people with MS
Diet hasn’t been shown to impact the nature of the disease, but it can help with some of the challenges. If you have fatigue, for instance, a diet high in fats and simple carbohydrates won’t help.
The better your diet, the better your overall health. You’ll not only feel better in the short term, but you’ll be laying the foundation for a healthier future.
Your diet should consist mainly of:
- a variety of vegetables and fruits
- lean sources of protein, such as fish and skinless poultry
- whole grains and other sources of fiber
- low-fat dairy products
- adequate water and other fluids
You should limit or avoid:
- saturated fat
- trans fat
- red meats
- foods and beverages high in sugar
- foods high in sodium
- highly processed foods
Portion control can help you maintain a healthier weight. Read food labels. Foods that are high in calories but low in nutrients won’t help you feel better.
If you have coexisting conditions, ask your doctor if you should follow a special diet or take any dietary supplements.
Part 10 of 11
Statistics about MS
MS is the most widespread disabling neurological condition among young adults worldwide.
About 400,000 people in the United States have MS, though that’s only an estimate. Doctors in the United States aren’t required to report MS to any agency. According to the National MS Society, there hasn’t been a scientifically sound, national study on the prevalence of MS in the United States since 1975.
Most people are between the ages of 20 and 40 when their doctor diagnoses them with MS. Women develop MS 2 to 3 times more often than men, a difference that has grown steadily for five decades. MS is more common among Caucasians of northern European ancestry than other ethnic groups.
Rates of MS tend to be lower in places closer to the equator and higher in places farther from it. This may have to do with sunlight and vitamin D. People who relocate to a new location before age 15 generally acquire the risk factors associated with the new location.
Part 11 of 11
What are the effects of MS?
The lesions from MS can appear anywhere in the central nervous system. This means they can affect any part of your body.
One of the most common symptoms of MS is fatigue, but it’s not uncommon for people with MS to also have some degree of cognitive impairment.
As you age, some disabilities from MS may become more pronounced. If you have mobility issues, you may be at an increased risk for bone fractures and breaks due to falls. Mobility issues can also lead to a lack of physical activity, which can lead to other health problems. Fatigue and mobility issues may also have an impact on sexual function. Having other conditions such as arthritis and osteoporosis can complicate matters.
- About multiple sclerosis. (n.d.). Retrieved from http://multiplesclerosis.ucsf.edu/education_and_support/about_multiple_sclerosis
- Adelman, G., Rane, S. G., & Villa, K. F. (2013, March 7). The cost burden of multiple sclerosis in the United States: A systematic review of the literature. Journal of Medical Economics, 16(5), 639-647. Retrieved from http://www.ncbi.nlm.nih.gov/pubmed/23425293
- Apatoff, B. (2014, March). Multiple sclerosis (MS). Retrieved from http://www.merckmanuals.com/professional/neurologic-disorders/demyelinating-disorders/multiple-sclerosis-(ms)
- Hartung, D. M., Bourdette, D. N., Ahmed, S. M., & Whitham, R. H. (2015, May 26). The cost of multiple sclerosis drugs in the US and the pharmaceutical industry. Neurology, 84(21), 2185-2192. Retrieved from http://www.neurology.org/content/84/21/2185.full
- Marrie, R. A., & Hanwell, H. (2013, August 19). General health issues in multiple sclerosis: Comorbidities, secondary conditions, and health behaviors. Continuum, 1046-1057. Retrieved from http://www.ncbi.nlm.nih.gov/pubmed/23917100
- Mayo Clinic staff. (2015, October 1). Multiple sclerosis: Treatment. Retrieved from http://www.mayoclinic.org/diseases-conditions/multiple-sclerosis/diagnosis-treatment/treatment/txc-20131903
- MS prevalence. (n.d.). Retrieved from http://www.nationalmssociety.org/About-the-Society/MS-Prevalence
- MS symptoms. (n.d.). Retrieved from http://www.nationalmssociety.org/Symptoms-Diagnosis/MS-Symptoms
- Multiple sclerosis and nutrition. (n.d.). Retrieved from http://my.clevelandclinic.org/ccf/media/files/Multiple_sclerosis_center/2 MS and Nutrition.pdf
- Multiple sclerosis in brief. (n.d.). Retrieved from http://msfocus.org/Facts-About-MS.aspx
- Multiple sclerosis FAQs. (n.d.). Retrieved from http://www.nationalmssociety.org/What-is-MS/MS-FAQ-s
- Progressive-relapsing MS. (n.d.). Retrieved from http://www.nationalmssociety.org/What-is-MS/Types-of-MS/Progressive-relapsing-MS
- Types of MS. (n.d.) Retrieved from http://www.nationalmssociety.org/What-is-MS/Types-of-MS
- Wellness for people with MS: What do we know about diet, exercise and mood and what do we still need to learn? (2015, March). Retrieved from http://www.nationalmssociety.org/NationalMSSociety/media/MSNationalFiles/Brochures/WellnessMSSocietyforPeoplewMS.pdf
- What causes multiple sclerosis? (2009, July). Retrieved from http://msfocus.org/causes-multiple-sclerosis.aspx
- Who gets MS? (Epidemiology). (n.d.). Retrieved from http://www.nationalmssociety.org/What-is-MS/Who-Gets-MS
|
{
"palladium_score": 3.5077059268951416,
"timestamp": "2026-01-18T07:31:35.164823",
"source": "Palladium-STEM (Preview)"
}
|
http://www.corestandards.org/Math/Content/5/introduction/
|
In Grade 5, instructional time should focus on three critical areas: (1) developing fluency with addition and subtraction of fractions, and developing understanding of the multiplication of fractions and of division of fractions in limited cases (unit fractions divided by whole numbers and whole numbers divided by unit fractions); (2) extending division to 2-digit divisors, integrating decimal fractions into the place value system and developing understanding of operations with decimals to hundredths, and developing fluency with whole number and decimal operations; and (3) developing understanding of volume.
- Students apply their understanding of fractions and fraction models to represent the addition and subtraction of fractions with unlike denominators as equivalent calculations with like denominators. They develop fluency in calculating sums and differences of fractions, and make reasonable estimates of them. Students also use the meaning of fractions, of multiplication and division, and the relationship between multiplication and division to understand and explain why the procedures for multiplying and dividing fractions make sense. (Note: this is limited to the case of dividing unit fractions by whole numbers and whole numbers by unit fractions.) Brief worked examples of these computations appear after this list.
- Students develop understanding of why division procedures work based on the meaning of base-ten numerals and properties of operations. They finalize fluency with multi-digit addition, subtraction, multiplication, and division. They apply their understandings of models for decimals, decimal notation, and properties of operations to add and subtract decimals to hundredths. They develop fluency in these computations, and make reasonable estimates of their results. Students use the relationship between decimals and fractions, as well as the relationship between finite decimals and whole numbers (i.e., a finite decimal multiplied by an appropriate power of 10 is a whole number), to understand and explain why the procedures for multiplying and dividing finite decimals make sense. They compute products and quotients of decimals to hundredths efficiently and accurately.
- Students recognize volume as an attribute of three-dimensional space. They understand that volume can be measured by finding the total number of same-size units of volume required to fill the space without gaps or overlaps. They understand that a 1-unit by 1-unit by 1-unit cube is the standard unit for measuring volume. They select appropriate units, strategies, and tools for solving problems that involve estimating and measuring volume. They decompose three-dimensional shapes and find volumes of right rectangular prisms by viewing them as decomposed into layers of arrays of cubes. They measure necessary attributes of shapes in order to determine volumes to solve real world and mathematical problems.
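To make the critical areas above concrete, here are brief worked examples (ours, not part of the standard): adding fractions with unlike denominators via a common denominator, relating finite decimals to whole numbers through powers of 10, and finding the volume of a right rectangular prism as layers of unit cubes.

```latex
% Fractions with unlike denominators: rewrite over a common denominator.
\[
\frac{2}{3} + \frac{3}{4} = \frac{8}{12} + \frac{9}{12} = \frac{17}{12} = 1\tfrac{5}{12}
\]
% A finite decimal times an appropriate power of 10 is a whole number,
% which reduces decimal multiplication to whole-number multiplication:
\[
0.4 \times 0.09 = \frac{4}{10} \times \frac{9}{100} = \frac{36}{1000} = 0.036
\]
% Volume of a right rectangular prism as layers of arrays of cubes:
\[
V = \underbrace{(5 \times 4)}_{\text{cubes per layer}} \times \underbrace{3}_{\text{layers}} = 60 \text{ unit cubes}
\]
```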
Grade 5 Overview
Operations and Algebraic Thinking
- Write and interpret numerical expressions.
- Analyze patterns and relationships.
Number and Operations in Base Ten
- Understand the place value system.
- Perform operations with multi-digit whole numbers and with decimals to hundredths.
Number and Operations—Fractions
- Use equivalent fractions as a strategy to add and subtract fractions.
- Apply and extend previous understandings of multiplication and division to multiply and divide fractions.
Measurement and Data
- Convert like measurement units within a given measurement system.
- Represent and interpret data.
- Geometric measurement: understand concepts of volume and relate volume to multiplication and to addition.
Geometry
- Graph points on the coordinate plane to solve real-world and mathematical problems.
- Classify two-dimensional figures into categories based on their properties.
Mathematical Practices
- Make sense of problems and persevere in solving them.
- Reason abstractly and quantitatively.
- Construct viable arguments and critique the reasoning of others.
- Model with mathematics.
- Use appropriate tools strategically.
- Attend to precision.
- Look for and make use of structure.
- Look for and express regularity in repeated reasoning.
|
{
"palladium_score": 4.202691555023193,
"timestamp": "2026-01-18T07:31:37.579201",
"source": "Palladium-STEM (Preview)"
}
|
http://affordsolartech.com/wind-generators-for-sale.html
|
Renewable electricity production from sources such as wind power and solar power is sometimes criticized for being variable or intermittent, but this is not true of concentrated solar, geothermal, and biofuel generation, which provide continuous output. In any case, the International Energy Agency has stated that deployment of renewable technologies usually increases the diversity of electricity sources and, through local generation, contributes to the flexibility of the system and its resistance to central shocks.
Compact Linear Fresnel Reflectors are CSP plants that use many thin mirror strips instead of parabolic mirrors to concentrate sunlight onto two tubes containing a working fluid. This has the advantage that flat mirrors, which are much cheaper than parabolic mirrors, can be used, and that more reflectors can be placed in the same amount of space, allowing more of the available sunlight to be used. Concentrating linear Fresnel reflectors can be used in either large or more compact plants.
A Wind Turbine Generator is what makes your electricity by converting mechanical energy into electrical energy. Let’s be clear here: generators do not create energy or produce more electrical energy than the amount of mechanical energy being used to spin the rotor blades. The greater the “load,” or electrical demand, placed on the generator, the more mechanical force is required to turn the rotor. This is why generators come in different sizes and produce differing amounts of electricity.
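As a back-of-the-envelope illustration of that energy balance, the Python sketch below computes the torque a rotor must supply to meet a given electrical load at a given speed. The turbine size, speed, and efficiency figures are hypothetical, chosen for illustration rather than taken from this page.

```python
import math

def mechanical_power_w(torque_nm: float, rpm: float) -> float:
    """Mechanical power P = torque x angular velocity, in watts."""
    omega = rpm * 2 * math.pi / 60  # rev/min -> rad/s
    return torque_nm * omega

def required_torque_nm(electrical_load_w: float, rpm: float,
                       efficiency: float = 0.9) -> float:
    """Torque the rotor must supply to meet an electrical load.
    The generator can't output more than it takes in, so the shaft
    must deliver load / efficiency watts of mechanical power."""
    omega = rpm * 2 * math.pi / 60
    return electrical_load_w / (efficiency * omega)

# Hypothetical small turbine: a 400 W load at 300 rpm, 90% efficient.
# A bigger load would demand proportionally more torque from the wind.
print(f"{required_torque_nm(400, 300):.1f} N*m of torque needed")
print(f"{mechanical_power_w(14.2, 300):.0f} W mechanical at 14.2 N*m")
```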
However, it has been found that high emissions are associated only with shallow reservoirs in warm (tropical) locales, and recent innovations in hydropower turbine technology are enabling efficient development of low-impact run-of-the-river hydroelectricity projects. Generally speaking, hydroelectric plants produce much lower life-cycle emissions than other types of generation. Hydroelectric power, which underwent extensive development during the growth of electrification in the 19th and 20th centuries, is experiencing a resurgence of development in the 21st century. The areas of greatest hydroelectric growth are the booming economies of Asia. China is the development leader; however, other Asian nations are installing hydropower at a rapid pace. This growth is driven by greatly increased energy costs—especially for imported energy—and widespread desires for more domestically produced, clean, renewable, and economical generation.
Alternatively, SRECs allow for a market mechanism to set the price of the solar-generated electricity subsidy. In this mechanism, a renewable energy production or consumption target is set, and the utility (more technically, the Load Serving Entity) is obliged to purchase renewable energy or face a fine (the Alternative Compliance Payment, or ACP). The producer is credited with an SREC for every 1,000 kWh of electricity produced. If the utility buys this SREC and retires it, it avoids paying the ACP. In principle this system delivers the cheapest renewable energy, since all solar facilities are eligible and can be installed in the most economic locations. Uncertainties about the future value of SRECs have led to long-term SREC contract markets to give clarity to their prices and allow solar developers to pre-sell and hedge their credits.
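The credit-or-fine mechanism is straightforward to express in code. In this sketch, the obligation size, SREC price, and ACP are hypothetical numbers chosen for illustration; only the 1,000 kWh-per-SREC rule comes from the text above.

```python
def srecs_earned(solar_kwh: float) -> int:
    """One SREC is credited per 1,000 kWh of solar electricity produced."""
    return int(solar_kwh // 1000)

def compliance_cost(target_srecs: int, retired_srecs: int,
                    srec_price: float, acp: float) -> float:
    """A utility buys and retires SRECs toward its target; every SREC
    it falls short costs the Alternative Compliance Payment (ACP).
    A rational buyer only pays for SRECs priced below the ACP."""
    shortfall = max(target_srecs - retired_srecs, 0)
    return retired_srecs * srec_price + shortfall * acp

# Hypothetical numbers: a 5,000-SREC obligation, SRECs at $200, ACP $400.
produced = srecs_earned(3_200_000)  # 3.2 GWh of solar -> 3200 SRECs
print(compliance_cost(5000, produced, 200.0, 400.0))  # $640k SRECs + $720k ACP
```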
Solar power is the conversion of energy from sunlight into electricity, either directly using photovoltaics (PV), indirectly using concentrated solar power, or a combination. Concentrated solar power systems use lenses or mirrors and tracking systems to focus a large area of sunlight into a small beam. Photovoltaic cells convert light into an electric current using the photovoltaic effect.
|
{
"palladium_score": 3.6698694229125977,
"timestamp": "2026-01-18T07:31:37.579201",
"source": "Palladium-STEM (Preview)"
}
|
https://www.chemistryworld.com/news/impact-of-extra-terrestrial-glycine-delivery-could-have-created-nucleobase-precursors/3010579.article
|
Quantum simulations show that comets and other celestial bodies may have not only delivered building blocks for life to Earth, but also synthesised them on impact
Researchers from the US have discovered that exposing glycine solutions to sudden heat and pressure – 3000 K and 48 GPa, to mimic a comet impacting Earth – can yield carbon-rich structures, including prebiotically important nitrogen-containing polycyclic aromatic hydrocarbons (NPAHs).1
Glycine is a proteinogenic amino acid – one of 22 that are incorporated biosynthetically into proteins during translation. It is also the simplest amino acid and has been identified on comets such as 67P/Churyumov-Gerasimenko.2
Now, Matthew Kroonblawd and colleagues at the Lawrence Livermore National Laboratory have used semi-empirical quantum simulations to understand what happens when glycine-rich icy materials crash into a planetary surface. The study modelled 10 solutions of glycine under hot and compressed conditions, followed by rapid expansion and cool-down periods. The simulations produced a variety of small organic products and NPAHs with different functionalities, such as carboxyl, amine and alcohol groups, as well as aromatic heterocycles like furans and pyrroles. NPAHs are important precursors to nucleobases.
The researchers say their findings support an alternative pathway to circumstellar gas-phase radical reactions for creating NPAHs. The work also indicates that prebiotic molecules carried by comets can not only survive an impact but also react under such extreme conditions to produce a diverse series of organic products, adding to theories of how life came to exist on Earth.
|
{
"palladium_score": 3.740046262741089,
"timestamp": "2026-01-18T07:31:37.579201",
"source": "Palladium-STEM (Preview)"
}
|
https://restorationchristianculture.org/speaker/jean-jacques-rousseau/
|
Jean-Jacques Rousseau, (born June 28, 1712, Geneva, Switzerland—died July 2, 1778, Ermenonville, France), Swiss-born philosopher, writer, and political theorist whose treatises and novels inspired the leaders of the French Revolution and the Romantic generation.
Rousseau was the least academic of modern philosophers and in many ways was the most influential. His thought marked the end of the Age of Reason. He propelled political and ethical thinking into new channels. His reforms revolutionized taste, first in music, then in the other arts. He had a profound impact on people’s way of life; he taught parents to take a new interest in their children and to educate them differently; he furthered the expression of emotion rather than polite restraint in friendship and love. He introduced the cult of religious sentiment among people who had discarded religious dogma. He opened people’s eyes to the beauties of nature, and he made liberty an object of almost universal aspiration.
Rousseau’s mother died in childbirth, and he was brought up by his father, who taught him to believe that the city of his birth was a republic as splendid as Sparta or ancient Rome. Rousseau senior had an equally glorious image of his own importance; after marrying above his modest station as a watchmaker, he got into trouble with the civil authorities by brandishing the sword that his upper-class pretensions prompted him to wear, and he had to leave Geneva to avoid imprisonment. Rousseau, the son, then lived for six years as a poor relation in his mother’s family, patronized and humiliated, until he, too, at the age of 16, fled from Geneva to live the life of an adventurer and a Roman Catholic convert in the kingdoms of Sardinia and France.
Rousseau was fortunate in finding in the province of Savoy a benefactor, the baroness de Warens, who provided him with a refuge in her home and employed him as her steward. She also furthered his education to such a degree that the boy who had arrived on her doorstep as a stammering apprentice who had never been to school developed into a philosopher, a man of letters, and a musician.
Mme de Warens, who thus transformed the adventurer into a philosopher, was herself an adventuress—a Swiss convert to Catholicism who had stripped her husband of his money before fleeing to Savoy with the gardener’s son to set herself up as a Catholic missionary specializing in the conversion of young male Protestants. Her morals distressed Rousseau, even when he became her lover. But she was a woman of taste, intelligence, and energy, who brought out in Rousseau just the talents that were needed to conquer Paris at a time when Voltaire had made radical ideas fashionable.
Rousseau reached Paris when he was 30 and was lucky enough to meet another young man from the provinces seeking literary fame in the capital, Denis Diderot. The two soon became immensely successful as the centre of a group of intellectuals—or philosophes—who gathered round the great French Encyclopédie, of which Diderot was appointed editor. The Encyclopédie was an important organ of radical and anticlerical opinion, and its contributors were as much reforming and even iconoclastic pamphleteers as they were philosophers. Rousseau, the most original of them all in his thinking and the most forceful and eloquent in his style of writing, was soon also the most conspicuous. He wrote music as well as prose, and one of his operas, Le Devin du village (1752; “The Village Soothsayer”), attracted so much admiration from the king (Louis XV) and the court that he might have enjoyed an easy life as a fashionable composer, but something in his Calvinist blood rejected that type of worldly glory. Indeed, at the age of 37 Rousseau had what he called an “illumination” while walking to Vincennes to visit Diderot, who had been imprisoned there because of his irreligious writings. In the Confessions (1782–89), which he wrote late in life, Rousseau says that it came to him then in a “terrible flash” that modern progress had corrupted people instead of improving them. He went on to write his first important work, a prize essay for the Academy of Dijon entitled Discours sur les sciences et les arts (1750; A Discourse on the Sciences and the Arts), in which he argues that the history of human life on earth has been a history of decay.
That work is by no means Rousseau’s best piece of writing, but its central theme was to inform almost everything else he wrote. Throughout his life he kept returning to the thought that people are good by nature but have been corrupted by society and civilization. He did not mean to suggest that society and civilization are inherently bad but rather that both had taken a wrong direction and become more harmful as they became more sophisticated. That idea in itself was not unfamiliar in Rousseau’s time. Many Roman Catholic writers, for example, deplored the direction that European culture had taken since the Middle Ages. They shared the hostility toward progress that Rousseau had expressed. What they did not share was his belief that people are naturally good. It was, however, just that belief that Rousseau made the cornerstone of his argument.
Rousseau may well have received the inspiration for that belief from Mme de Warens; for although she had become a communicant of the Roman Catholic Church, she retained—and transmitted to Rousseau—much of the sentimental optimism about human purity that she had herself absorbed as a child from the mystical Protestant Pietists who were her teachers in the canton of Bern. At all events, the idea of human goodness, as Rousseau developed it, set him apart from both conservatives and radicals. Even so, for several years after the publication of his first Discourse, he remained a close collaborator in Diderot’s essentially progressive enterprise, the Encyclopédie, and an active contributor to its pages. His speciality there was music, and it was in this sphere that he first established his influence as a reformer.
Controversy With Rameau
The arrival of an Italian opera company in Paris in 1752 to perform works of opera buffa (comic opera) by Giovanni Battista Pergolesi, Alessandro Scarlatti, Leonardo Vinci, and other such composers suddenly divided the French music-loving public into two excited camps, supporters of the new Italian opera and supporters of the traditional French opera. The philosophes of the Encyclopédie—Jean Le Rond d’Alembert, Diderot, and Paul-Henri Dietrich, baron d’Holbach among them—entered the fray as champions of Italian music, but Rousseau, who had arranged for the publication of Pergolesi’s music in Paris and who knew more about the subject than most Frenchmen after the months he had spent visiting the opera houses of Venice during his time as secretary to the French ambassador to the doge in 1743–44, emerged as the most-forceful and effective combatant. He was the only one to direct his fire squarely at the leading living exponent of French operatic music, Jean-Philippe Rameau.
Rousseau and Rameau must at that time have seemed unevenly matched in a controversy about music. Rameau, already in his 70th year, was not only a prolific and successful composer but was also, as the author of the celebrated Traité de l’harmonie (1722; Treatise on Harmony) and other technical works, Europe’s leading musicologist. Rousseau, by contrast, was 30 years younger, a newcomer to music, with no professional training and only one successful opera to his credit. His scheme for a new notation for music had been rejected by the Academy of Sciences, and most of his musical entries for Diderot’s Encyclopédie were as yet unpublished. Yet the dispute was not only musical but also philosophical, and Rameau was confronted with a more-formidable adversary than he had realized. Rousseau built his case for the superiority of Italian music over French on the principle that melody must have priority over harmony, whereas Rameau based his on the assertion that harmony must have priority over melody. By pleading for melody, Rousseau introduced what later came to be recognized as a characteristic idea of Romanticism, namely, that in art the free expression of the creative spirit is more important than strict adherence to formal rules and traditional procedures. By pleading for harmony, Rameau reaffirmed the first principle of French Classicism, namely, that conformity to rationally intelligible rules is a necessary condition of art, the aim of which is to impose order on the chaos of human experience.
In music, Rousseau was a liberator. He argued for freedom in music, and he pointed to the Italian composers as models to be followed. In doing so he had more success than Rameau; he changed people’s attitudes. Christoph Willibald Gluck, who succeeded Rameau as the most-important operatic composer in France, acknowledged his debt to Rousseau’s teaching, and Wolfgang Amadeus Mozart based the text for his one-act operetta Bastien und Bastienne (Bastien and Bastienne) on Rousseau’s Le Devin du village. European music had taken a new direction. But Rousseau himself composed no more operas. Despite the success of Le Devin du village, or rather because of its success, Rousseau felt that, as a moralist who had decided to make a break with worldly values, he could not allow himself to go on working for the theatre. He decided to devote his energies henceforth to literature and philosophy.
Major Works Of Political Philosophy
As part of what Rousseau called his “reform,” or improvement of his own character, he began to look back at some of the austere principles that he had learned as a child in the Calvinist republic of Geneva. Indeed, he decided to return to that city, repudiate his Catholicism, and seek readmission to the Protestant church. He had in the meantime acquired a mistress, an illiterate laundry maid named Thérèse Levasseur. To the surprise of his friends, he took her with him to Geneva, presenting her as a nurse. Although her presence caused some murmurings, Rousseau was readmitted easily to the Calvinist communion, his literary fame having made him very welcome to a city that prided itself as much on its culture as on its morals.
Rousseau had by that time completed a second Discourse in response to a question set by the Academy of Dijon: “What is the origin of the inequality among men and is it justified by natural law?” In response to that challenge he produced a masterpiece of speculative anthropology. The argument follows on that of his first Discourse by developing the proposition that people are naturally good and then tracing the successive stages by which they have descended from primitive innocence to corrupt sophistication.
Rousseau begins his Discours sur l’origine de l’inégalité (1755; Discourse on the Origin of Inequality) by distinguishing two kinds of inequality, natural and artificial, the first arising from differences in strength, intelligence, and so forth, the second from the conventions that govern societies. It is the inequalities of the latter sort that he set out to explain. Adopting what he thought the properly “scientific” method of investigating origins, he attempts to reconstruct the earliest phases of human life on earth. He suggests that original humans were not social beings but entirely solitary, and to that extent he agrees with Thomas Hobbes’s account of the state of nature. But in contrast to the English pessimist’s view that human life in such a condition must have been “poor, nasty, brutish, and short,” Rousseau claims that original humans, although admittedly solitary, were healthy, happy, good, and free. Human vices, he argued, date from the time when societies were formed.
Rousseau thus exonerates nature and blames society. He says that passions that generate vices hardly existed in the state of nature but began to develop as soon as people formed societies. He goes on to suggest that societies started when people built their first huts, a development that facilitated cohabitation of males and females; that in turn produced the habit of living as a family and associating with neighbours. That “nascent society,” as Rousseau calls it, was good while it lasted; it was indeed the “golden age” of human history. Only it did not endure. With the tender passion of love there was also born the destructive passion of jealousy. Neighbours started to compare their abilities and achievements with one another, and that “marked the first step towards inequality and at the same time towards vice.” People started to demand consideration and respect. Their innocent self-love turned into culpable pride, as each person wanted to be better than everyone else.
The introduction of property marked a further step toward inequality, since it made law and government necessary as a means of protecting it. Rousseau laments the “fatal” concept of property in one of his more-eloquent passages, describing the “horrors” that have resulted from the departure from a condition in which the earth belonged to no one. Those passages in his second Discourse excited later revolutionaries such as Karl Marx and Vladimir Ilich Lenin, but Rousseau himself did not think that the past could be undone in any way. There was no point in dreaming of a return to the golden age.
Civil society, as Rousseau describes it, comes into being to serve two purposes: to provide peace for everyone and to ensure the right to property for anyone lucky enough to have possessions. It is thus of some advantage to everyone, but mostly to the advantage of the rich, since it transforms their de facto ownership into rightful ownership and keeps the poor dispossessed. It is a somewhat fraudulent social contract that introduces government, since the poor get so much less out of it than do the rich. Even so, the rich are no happier in civil society than are the poor because people in society are never satisfied. Society leads people to hate one another to the extent that their interests conflict, and the best they are able to do is to hide their hostility behind a mask of courtesy. Thus, Rousseau regards inequality not as a separate problem but as one of the features of the long process by which men become alienated from nature and from innocence.
In the dedication Rousseau wrote for the second Discourse, in order to present it to the republic of Geneva, he nevertheless praised that city-state for having achieved the ideal balance between “the equality which nature established among men and the inequality which they have instituted among themselves.” The arrangement he discerned in Geneva was one in which the best persons were chosen by the citizens and put in the highest positions of authority. Like Plato, Rousseau always believed that a just society was one in which everyone was in his proper place. And having written the second Discourse to explain how people had lost their liberty in the past, he went on to write another book, Du Contrat social (1762; The Social Contract), to suggest how they might recover their liberty in the future. Again Geneva was the model: not Geneva as it had become in 1754 when Rousseau returned there to recover his rights as a citizen, but Geneva as it had once been—i.e., Geneva as Calvin had designed it.
The Social Contract begins with the sensational opening sentence: “Man is born free, and everywhere he is in chains,” and proceeds to argue that men need not be in chains. If a civil society, or state, could be based on a genuine social contract, as opposed to the fraudulent social contract depicted in the Discourse on the Origin of Inequality, people would receive in exchange for their independence a better kind of freedom, namely true political, or republican, liberty. Such liberty is to be found in obedience to a self-imposed law.
Rousseau’s definition of political liberty raises an obvious problem. For while it can be readily agreed that an individual is free if he obeys only rules he prescribes for himself, this is so because an individual is a person with a single will. A society, by contrast, is a set of persons with a set of individual wills, and conflict between separate wills is a fact of universal experience. Rousseau’s response to the problem is to define civil society as an artificial person united by a general will, or volonté générale. The social contract that brings society into being is a pledge, and the society remains in being as a pledged group. Rousseau’s republic is a creation of the general will—of a will that never falters in each and every member to further the public, common, or national interest—even though it may conflict at times with personal interest.
Rousseau sounds very much like Hobbes when he says that under the pact by which people enter civil society everyone totally alienates himself and all his rights to the whole community. Rousseau, however, represents this act as a form of exchange of rights whereby people give up natural rights in return for civil rights. The bargain is a good one, because what is surrendered are rights of dubious value, whose realization depends solely on an individual’s own might, and what is obtained in return are rights that are both legitimate and enforced by the collective force of the community.
There is no more haunting paragraph in The Social Contract than that in which Rousseau speaks of “forcing a man to be free.” But it would be wrong to interpret these words in the manner of those critics who see Rousseau as a prophet of modern totalitarianism. He does not claim that a whole society can be forced to be free but only that an occasional individual, who is enslaved by his passions to the extent of disobeying the law, can be restored by force to obedience to the voice of the general will that exists inside of him. The person who is coerced by society for a breach of the law is, in Rousseau’s view, being brought back to an awareness of his own true interests.
For Rousseau there is a radical dichotomy between true law and actual law. Actual law, which he described in the Discourse on the Origin of Inequality, simply protects the status quo. True law, as described in The Social Contract, is just law, and what ensures its being just is that it is made by the people in their collective capacity as sovereign and obeyed by the same people in their individual capacities as subjects. Rousseau is confident that such laws could not be unjust because it is inconceivable that any people would make unjust laws for itself.
Rousseau is, however, troubled by the fact that the majority of a people does not necessarily represent its most-intelligent citizens. Indeed, he agrees with Plato that most people are stupid. Thus, the general will, while always morally sound, is sometimes mistaken. Hence Rousseau suggests the people need a lawgiver—a great mind like Solon or Lycurgus or Calvin—to draw up a constitution and system of laws. He even suggests that such lawgivers need to claim divine inspiration in order to persuade the dim-witted multitude to accept and endorse the laws it is offered.
That suggestion echoes a similar proposal by Niccolò Machiavelli, a political theorist whom Rousseau greatly admired and whose love of republican government he shared. An even more conspicuously Machiavellian influence can be discerned in Rousseau’s chapter on civil religion, where he argues that Christianity, despite its truth, is useless as a republican religion on the grounds that it is directed to the unseen world and does nothing to teach citizens the virtues that are needed in the service of the state, namely, courage, virility, and patriotism. Rousseau does not go so far as Machiavelli in proposing a revival of pagan cults, but he does propose a civil religion with minimal theological content designed to fortify and not impede (as Christianity impedes) the cultivation of martial virtues. It is understandable that the authorities of Geneva, profoundly convinced that the national church of their little republic was at the same time a truly Christian church and a nursery of patriotism, reacted angrily against that chapter in Rousseau’s Social Contract.
By the year 1762, however, when The Social Contract was published, Rousseau had given up any thought of settling in Geneva. After recovering his citizen’s rights in 1754, he had returned to Paris and the company of his friends around the Encyclopédie. But he became increasingly ill at ease in such worldly society and began to quarrel with his fellow philosophes. An article for the Encyclopédie on the subject of Geneva, written by d’Alembert at Voltaire’s instigation, upset Rousseau partly by suggesting that the pastors of the city had lapsed from Calvinist severity into unitarian laxity and partly by proposing that a theatre should be erected there. Rousseau hastened into print with a defense of the Calvinist orthodoxy of the pastors and with an elaborate attack on the theatre as an institution that could only do harm to an innocent community such as Geneva.
Years Of Seclusion And Exile
By the time his Lettre à d’Alembert sur les spectacles (1758; Letter to Monsieur d’Alembert on the Theatre) appeared in print, Rousseau had already left Paris to pursue a life closer to nature on the country estate of his friend Mme d’Épinay near Montmorency. When the hospitality of Mme d’Épinay proved to entail much the same social round as that of Paris, Rousseau retreated to a nearby cottage, called Montlouis, under the protection of the Maréchal de Luxembourg. But even that highly placed friend could not save him in 1762 when his treatise Émile; ou, de l’éducation (Emile; or, On Education) was published and scandalized the pious Jansenists of the French Parlements even as The Social Contract scandalized the Calvinists of Geneva. In Paris, as in Geneva, they ordered the book to be burned and the author arrested; all the Maréchal de Luxembourg could do was to provide a carriage for Rousseau to escape from France. After formally renouncing his Genevan citizenship in 1763, Rousseau became a fugitive, spending the rest of his life moving from one refuge to another.
The years at Montmorency had been the most productive of his literary career; The Social Contract, Émile, and Julie; ou, la nouvelle Héloïse (1761; Julie; or, The New Eloise) came out within 12 months, all three works of seminal importance. The New Eloise, being a novel, escaped the censorship to which the other two works were subject; indeed, of all his books it proved to be the most widely read and the most universally praised in his lifetime. It develops the Romanticism that had already informed his writings on music and perhaps did more than any other single work of literature to influence the spirit of its age. It made the author at least as many friends among the reading public—and especially among educated women—as The Social Contract and Émile made enemies among magistrates and priests. If it did not exempt him from persecution, at least it ensured that his persecution was observed, and admiring femmes du monde intervened from time to time to help him so that Rousseau was never, unlike Voltaire and Diderot, actually imprisoned.
The theme of The New Eloise provides a striking contrast to that of The Social Contract. It is about people finding happiness in domestic as distinct from public life, in the family as opposed to the state. The central character, Saint-Preux, is a middle-class preceptor who falls in love with his upper-class pupil, Julie. She returns his love and yields to his advances, but the difference between their classes makes marriage between them impossible. Baron d’Étange, Julie’s father, has indeed promised her to a fellow nobleman named Wolmar. As a dutiful daughter, Julie marries Wolmar and Saint-Preux goes off on a voyage around the world with an English aristocrat, Bomston, from whom he acquires a certain stoicism. Julie succeeds in forgetting her feelings for Saint-Preux and finds happiness as wife, mother, and chatelaine. Some six years later Saint-Preux returns from his travels and is engaged as tutor to the Wolmar children. All live together in harmony, and there are only faint echoes of the old affair between Saint-Preux and Julie. The little community, dominated by Julie, illustrates one of Rousseau’s political principles: that while men should rule the world in public life, women should rule men in private life. At the end of The New Eloise, when Julie has made herself ill in an attempt to rescue one of her children from drowning, she comes face-to-face with a truth about herself: that her love for Saint-Preux has never died.
The novel was clearly inspired by Rousseau’s own curious relationship—at once passionate and platonic—with Sophie d’Houdetot, a noblewoman who lived near him at Montmorency. He himself asserted in the Confessions that he was led to write the book by “a desire for loving, which I had never been able to satisfy and by which I felt myself devoured.” Saint-Preux’s experience of love forbidden by the laws of class reflects Rousseau’s own experience; and yet it cannot be said that The New Eloise is an attack on those laws, which seem, on the contrary, to be given the status almost of laws of nature. The members of the Wolmar household are depicted as finding happiness in living according to an aristocratic ideal. They appreciate the routines of country life and enjoy the beauties of the Swiss and Savoyard Alps. But despite such an endorsement of the social order, the novel was revolutionary; its very free expression of emotions and its extreme sensibility deeply moved its large readership and profoundly influenced literary developments.
Émile is a book that seems to appeal alternately to the republican ethic of The Social Contract and the aristocratic ethic of The New Eloise. It is also halfway between a novel and a didactic essay. Described by the author as a treatise on education, it is not about schooling but about the upbringing of a rich man’s son by a tutor who is given unlimited authority over him. At the same time the book sets out to explore the possibilities of an education for republican citizenship. The basic argument of the book, as Rousseau himself expressed it, is that vice and error, which are alien to a child’s original nature, are introduced by external agencies, so that the work of a tutor must always be directed to counteracting those forces by manipulating pressures that will work with nature and not against it. Rousseau devotes many pages to explaining the methods the tutor must use. Those methods involve a noticeable measure of deceit, and although corporal punishment is forbidden, mental cruelty is not.
Whereas The Social Contract is concerned with the problems of achieving freedom, Émile is concerned with achieving happiness and wisdom. In this different context religion plays a different role. Instead of a civil religion, Rousseau here outlines a personal religion, which proves to be a kind of simplified Christianity, involving neither revelation nor the familiar dogmas of the church. In the guise of La Profession de foi du vicaire savoyard (1765; The Profession of Faith of a Savoyard Vicar) Rousseau sets out what may fairly be regarded as his own religious views, since that book confirms what he says on the subject in his private correspondence. Rousseau could never entertain doubts about God’s existence or about the immortality of the soul. He felt, moreover, a strong emotional drive toward the worship of God, whose presence he felt most forcefully in nature, especially in mountains and forests untouched by the hand of man. He also attached great importance to conscience, the “divine voice of the soul in man,” opposing this both to the bloodless categories of rationalistic ethics and to the cold tablets of biblical authority.
That minimal creed put Rousseau at odds with the orthodox adherents of the churches and with the openly atheistic philosophes of Paris, so that despite the enthusiasm that some of his writings, and especially The New Eloise, excited in the reading public, he felt himself increasingly isolated, tormented, and pursued. After he had been expelled from France, he was chased from canton to canton in Switzerland. He reacted to the suppression of The Social Contract in Geneva by indicting the regime of that city-state in a pamphlet entitled Lettres écrites de la montagne (1764; Letters Written from the Mountain). No longer, as in the Discourse on the Origin of Inequality, was Geneva depicted as a model republic but as one that had been taken over by “twenty-five despots”; the subjects of the king of England were said to be free by comparison with the victims of Genevan tyranny.
It was in England that Rousseau found refuge after he had been banished from the canton of Bern. The Scottish philosopher David Hume took him there and secured the offer of a pension from King George III; but once in England, Rousseau became aware that certain British intellectuals were making fun of him, and he suspected Hume of participating in the mockery. Various symptoms of paranoia began to manifest themselves in Rousseau, and he returned to France incognito. Believing that Thérèse was the only person he could rely on, he finally married her in 1768, when he was 56 years old.
The Last Decade
In the remaining 10 years of his life Rousseau produced primarily autobiographical writings, mostly intended to justify himself against the accusations of his adversaries. The most important was his Confessions, modeled on the work of the same title by St. Augustine and achieving something of the same classic status. He also wrote Rousseau juge de Jean-Jacques (1780; Rousseau, Judge of Jean-Jacques) to reply to specific charges by his enemies and Les Rêveries du promeneur solitaire (1782; Reveries of the Solitary Walker), one of the most moving of his books, in which the intense passion of his earlier writings gives way to a gentle lyricism and serenity. And indeed, Rousseau does seem to have recovered his peace of mind in his last years, when he was once again afforded refuge on the estates of great French noblemen, first the Prince de Conti and then the Marquis de Girardin, in whose park at Ermenonville he died.
Duignan, B., & Cranston, M. (2018, October 15). Jean-Jacques Rousseau. Retrieved January 10, 2019, from https://www.britannica.com/biography/Jean-Jacques-Rousseau
https://pmj.bmj.com/content/80/940/72
The first medical article on the hazards of asbestos dust appeared in the British Medical Journal in 1924. Following inquiries by Edward Merewether and Charles Price, the British government introduced regulations to control dangerous dust emissions in UK asbestos factories. Until the 1960s these appeared to have addressed the problem effectively. Only then, with the discoveries that mesothelioma was an asbestos related disease and that workers other than those employed in the dustiest parts of asbestos factories were at risk, were the nature and scale of the hazard reassessed. In Britain, America, and elsewhere new and increasingly strict regulations were enacted.
Asbestos is the generic term for a number of naturally occurring fibrous minerals. Commercially, the most important of these are the white, blue, and brown varieties, otherwise known as chrysotile (a serpentine asbestos), crocidolite, and amosite (both amphiboles). Asbestos is widely distributed, but the largest deposits are found in Canada and Russia. It possesses amazing characteristics. Uniquely among minerals, it can be spun into a thread and then woven into a cloth. Clothes and soft furnishings can be made from asbestos—even though it is literally a rock. But why make such products out of a mineral except as a curiosity? The answer lies mainly in the material’s unparalleled fireproofing and insulating capabilities. However, asbestos possesses other attractive qualities: it is relatively lightweight (an important consideration when fireproofing naval vessels), abundant, cheap to mine and process, resistant to water and acids (and hence corrosion), durable to the point of indestructibility, electrically non-conductive, and unattractive to vermin. Finally, it can be put to an enormous number of uses (usually when blended with resins, plastics, or other materials). In many respects, therefore, asbestos is the perfect material for an industrialising and electrifying world of heat, combustion, and high speed locomotion. Not surprisingly, it came to be viewed, for the first two thirds of the 20th century, as the “indispensable” and even the “magic” mineral.
By the mid-20th century asbestos was an ingredient in all manner of things, including motor cars (as an ingredient of brakes, clutch linings, and gaskets), buildings (for insulation and fireproofing), warships (also for insulation and fireproofing), domestic products (such as ironing boards), and electrical distribution systems. The product ranges of the largest asbestos companies, such as Johns–Manville, the American giant that dominated the industry for many years, ran to scores of pages. So asbestos had many “upsides”. Unfortunately, it also has a very significant “downside” in that exposure to its dust can cause three fatal diseases: asbestosis, lung cancer, and mesothelioma of the pleura and peritoneum.
It has long been known that asbestos dust constitutes a danger to health; however, some issues, including the relative hazards of different types of asbestos and whether there is a safe level of exposure to any of them, remain in scientific dispute.1,2 Since the 1960s crocidolite has been regarded as a particular hazard, chiefly because of its strong association with mesothelioma. Amosite is widely regarded as scarcely less dangerous. In contrast, some have argued that pure chrysotile “may present little or no carcinogenic hazard” if uncontaminated by amphiboles. As recently as 2000, pure chrysotile was termed “a remarkably safe and valuable natural resource”, which could be used to substantial public health advantage in the Third World.3,4 Others dismiss such views and demand an international ban on all forms of asbestos.5 Such scientific disputes and policy uncertainties conform to a long standing pattern whereby medical knowledge about the health hazards of asbestos dust has emerged slowly and sometimes falteringly since the early 20th century. As Irving J Selikoff, one of the foremost authorities on asbestos related disease in late 20th century America, once said, nature long held “some secrets ... rather close to its vest”.6
DISCOVERY OF A LINK BETWEEN ASBESTOS AND DISEASE
The sequence of developing knowledge about asbestos and disease has generated historical controversy.7–13 Some even maintain that the health hazards of asbestos dust were appreciated in the ancient world; such claims have been convincingly refuted.14 Those doyens of occupational medicine, Thomas Arlidge and Thomas Oliver, ignored the hazards of asbestos in the late-Victorian and Edwardian periods (though Oliver addressed them subsequently).15,16 The first medical paper on the subject appeared in the British Medical Journal in 1924.17 Written by William Cooke of Wigan Infirmary’s department of pathology, it briefly dealt with the illness and death from fibrosis of the lungs and tuberculosis of Nellie Kershaw, who had worked in the spinning room of a Rochdale asbestos factory. Following this case report, other papers soon appeared. These included articles by Oliver, who coined the word “asbestosis”, though most observers have mistakenly attributed the term to Cooke, who used it in a 1927 paper that further explored the Kershaw case.18,19 In 1928, following the discovery of a case of pulmonary fibrosis affecting a Glasgow asbestos worker, Britain’s factory inspectorate took up the issue. Edward Merewether, a medical inspector based in Glasgow, was instructed to ascertain “whether the occurrence of this disease in an asbestos worker was merely a coincidence, or evidence of a definite health risk in the [asbestos] industry”.20
At 36 years of age, Merewether was comparatively young when he embarked upon this task. He was also a newcomer to the inspectorate, having taken up his appointment only in 1927. Merewether’s initial survey was soon followed by a full scale investigation, which he completed in October 1929. He found that occupational exposure to asbestos dust, particularly for prolonged periods at high concentrations, constituted a “definite occupational risk among asbestos workers as a class”.21 The fibrosis of the lungs that could result might lead to “complete disablement” and death.21 His report endorsed a view expressed a few months earlier that a “new” disease, pulmonary asbestosis, had been discovered.22
Merewether had confirmed the existence of a new fatal disease, but he also believed that this disease was preventable. Dust control, he anticipated, “will cause, firstly, a great increase in the length of time before workers develop a disabling fibrosis, and secondly, the almost total disappearance of the disease, as the measures for the suppression of dust are perfected”.21 At this point, Merewether’s colleague, the engineering inspector of factories, Charles Price, investigated and recommended practical measures to control dust.
Following negotiations between representatives of the asbestos industry, the Trades Union Congress (largely in the person of its eminent medical adviser, Dr Thomas Legge, the first ever medical inspector of factories), individual unions, the factory inspectorate, and senior Home Office officials, the government enacted the Asbestos Industry Regulations, 1931. Implemented in full in 1933, these required the suppression of dust in the dustiest, and hence, apparently, the hazardous, areas of asbestos factories.23 With these measures in place, and for decades to come, it was widely agreed that the 1931 regulations had solved the problem of asbestosis in British asbestos factories. Thus, in 1955, Richard Doll referred to the infrequency of asbestosis and attributed its rarity to the “great reduction in the amount of dust in asbestos works” since the early 1930s.24 In the same year, Donald Hunter, then one of the leading authorities on occupational health, observed that the legislation had been “effective in controlling the disease” of asbestosis.25 Other distinguished figures, including Georgiana Bonser of Leeds University and Andrew Meiklejohn of Glasgow University, expressed similar views.26,27
All of these opinions referred to the British experience; for years Britain’s efforts to prevent asbestos related disease were not replicated elsewhere. In the USA, asbestos was mined, manufactured, and used in large quantities (table 1), but, apart from the patchy industrial hygiene measures established by some states beginning in the 1930s, little regulation pre-dated the federal Occupational Safety and Health Act of 1970. By this time scientific and regulatory attitudes towards asbestos disease had been transformed in several ways. Most significantly, it had been ascertained that asbestosis was not the only disease associated with exposure to asbestos dust.
LINK WITH LUNG CANCER
Suspicion that asbestosis might be linked with lung cancer began to emerge in the 1930s.28,29 This link became more persuasive in the 1940s, even though doubts remained.30–34 Then, in 1955, Doll established to the satisfaction of most informed observers that a causal association existed between asbestosis and lung cancer.24 He believed, however, that the Asbestos Industry Regulations had greatly reduced the risk of lung cancer for those who worked in Britain’s asbestos factories. As he wrote in 1960, “It seems likely that the risk may now be largely eliminated”.35
At this time, notwithstanding the discovery of a second asbestos related disease, there was every reason to suppose that the asbestos industry could continue to produce the fireproofing, insulation, and friction materials widely regarded as indispensable to modern life, provided that workers were protected from the heavy and prolonged exposures associated with asbestosis and lung cancer. In 1956, Meiklejohn dismissed the notion of a ban as “completely futile and absurd”.27 Such views remained prevalent during the 1970s and even the 1980s. Irving Selikoff, along with the editorial pages of the Lancet, BMJ, and JAMA and other commentators, emphasised precautions over proscription of the mineral.36–39
The 1960s saw several important developments in the story of asbestos and disease. First, a third asbestos related disease, mesothelioma, was discovered. Second, it was shown that the hazards of asbestos dust were not confined to heavily exposed workers in asbestos factories but extended to insulation workers, other users of products containing asbestos, and people who lived close to asbestos factories.40–44 There were even suggestions that urban dwellers, even in towns and cities remote from asbestos mines or factories, might face a hazard simply because they lived among buildings and cars containing asbestos.45 Third, even in Britain, with its well established and relatively high degree of regulation, some evidence suggested that asbestos related disease had ceased to decline and was possibly increasing.12 Fourth, in Britain and America at least, asbestos hazards began to attract increased media attention. Between 1964 and 1967, stories about the health hazards of asbestos appeared in such national newspapers as The Times, Sunday Times, Daily Herald, Guardian, Daily Telegraph, Morning Star, New York Times, and Wall Street Journal, as well as in local and regional papers. In January 1967, the BBC broadcast a film on the subject in its early evening news programme 24 Hours. Thereafter, asbestos health hazards regularly featured in newspaper and television reports. Fifth, in 1969, the first third party products liability suit claiming personal injury from asbestos was launched in the USA, thereby initiating the process that led to the demise of many large, well established, and successful companies.
LINK WITH MESOTHELIOMA
Cases of pleural mesothelioma were apparently detected in the nineteenth century, but the term itself had not appeared, and its occurrence was “so rare that some pathologists doubted its existence”.46 However, during the 1950s, South African researchers J C Wagner, Christopher Sleggs, and Paul Marchand began to identify cases of mesothelioma in the crocidolite mining district of Griqualand West. Curiously, they found no such cases in the vicinity of the Transvaal asbestos mines, even though the asbestos there was the same as in Griqualand West. This discrepancy initially suggested that the mesotheliomas in the Griqualand West district could have had an origin unrelated to asbestos exposure. Wagner, Sleggs, and Marchand first presented their findings at a conference in Johannesburg in 1959. Papers they published between 1960 and 1962 established a “possible association between the development of mesotheliomas of the pleura and exposure to asbestos dust in people living in the Cape asbestos fields”.47–53 As this quotation indicates, the researchers had not established a clear causal association between exposure to crocidolite dust (let alone other forms of asbestos dust) and cases of mesothelioma among people who had never visited the northwest Cape. There was not long to wait. Papers published in 1964 and 1965 resulted in the general medical recognition of mesothelioma as an asbestos related disease.40–44,54–60 Scepticism remained in some quarters, but, as a leading article in the BMJ later put it, by “the end of 1965 it was clear that asbestos workers are at special risk of developing ... mesothelioma”.61,62
Though other causal agents have been identified, for years asbestos dust (at least certain types of it) has been widely considered to be the principal or even the only cause of mesothelioma.63 Recently, however, the longstanding assumption that mesothelioma is solely caused by asbestos exposure has been called into question by the recognition that there are “many cases (>20%) of mesothelioma for which there is little or no known exposure to asbestos”.64 A causal association between asbestos and mesothelioma is not in dispute, but it has been widely proposed that the disease may also be causally linked with poliomyelitis vaccine contaminated with simian virus 40 (SV40), which was administered to millions of people in Europe and the USA between 1955 and 1963. Mayall et al have suggested a “synergistic interaction between asbestos and SV40 in human mesotheliomas”.65 However, much remains in doubt. Carbone et al have warned against premature “conclusions about the possible role of SV40 in mesothelioma development in the general population”.66 Likewise, Jasani et al have observed that the “causal link between SV40 and mesothelioma ... still needs to be examined further”.67 More recently, Gazdar et al have pointed to “considerable evidence that SV40 has a causative role in the pathogenesis of mesothelioma”, but caution that “the evidence is still insufficient to distinguish between association and causation”.68 At present, therefore, it remains unclear whether a causal association exists between SV40 and elevated rates of mesothelioma.64–69
EXTENT OF THE RISKS POSED BY ASBESTOS
Recognition that asbestos related disease was not confined to unprotected workers in the dustiest locations dates from the 1960s and had much to do with the discovery of mesothelioma and the appreciation that relatively brief and light dust exposure could cause the disease years before its manifestation. A few isolated cases of asbestosis in insulation workers were reported in medical journals as early as the 1930s.70–72 Furthermore, as we now know, the US Navy and Maritime Commission appreciated the need to protect heavily exposed shipyard insulation workers in the early 1940s.73 This knowledge was not disseminated to a wider audience, however, and Asbestos Worker, the magazine of the US insulation workers’ union, accurately noted in 1966 that “probably more attention has been focused on these particular health hazards in the last 3 or 4 years, than in the previous 30 or 40 years”.74 The key figure in identifying the dangers of insulation work involving materials containing asbestos and exposure to even intermittent and light dust concentrations was Irving J Selikoff of Mount Sinai Hospital in New York City. Beginning in the early 1960s, with financial support from sources as diverse as the insulation workers’ union and (from 1968) the Johns–Manville Corporation, Selikoff and his colleagues produced a stream of publications indicating, among other things, that insulators who worked with asbestos material in the USA faced an “important risk” of contracting asbestosis, lung cancer or mesothelioma, and possibly also gastrointestinal cancer.40–42,75,76
In Britain the emergence of knowledge about mesothelioma and the hazards of insulation work coincided with the first doubts about the 1931 regulations. As a result, the factory inspectorate began revising these regulations in 1964. Five years later, following extensive consultation with business, scientists, and trades unionists, the Asbestos Regulations, 1969 were established. These allowed the continued use of asbestos only if maximum allowable concentrations of dust were not exceeded and if other precautions were observed. The regulations applied to all work sites and not, as previously, to asbestos factories alone. The maximum allowable concentration for crocidolite was set so low that its use was virtually eliminated. In the 1970s, 1980s, and 1990s further restrictions, both voluntary and statutory, were placed on the importation and use of asbestos. At their peak, in 1973, UK asbestos imports stood at some 190 000 tonnes per annum; by 1997, the amount had fallen to 4820 tonnes of chrysotile, by then the only form allowed.12 Then, in July 1999, with one minor exception, the European Commission announced a European Union ban on all remaining chrysotile use by 1 January 2005. Britain implemented the ban some five years ahead of schedule in October 1999. Other European Union members have also beaten the deadline, and other countries have introduced their own bans.77 Elsewhere, including many parts of Africa, Asia, and South America, asbestos use remains widespread.
In the early 1970s, the US Occupational Safety and Health Administration (OSHA) identified asbestos as one of its first regulatory targets and introduced a range of controls. OSHA reduced permitted exposure limits from 5 ff/ml (fibres per millilitre, time weighted average) in 1972 to 2 ff/ml (time weighted average) in 1976.78–80 As in Britain, stricter measures on the manufacture, importation, and processing of asbestos and products containing asbestos followed in the 1980s and 1990s. A permitted exposure limit of 0.1 ff/ml was introduced in 1994.81 Many of the companies that mined, manufactured, and used asbestos have gone out of business since the early 1980s under the burden of litigation. At present, asbestos use is heavily regulated and banned in most circumstances in the USA. A "comprehensive ban on asbestos in America" is envisaged. However, since exceptions are apparently anticipated if no alternative materials are available and it can be demonstrated that no damage to health or the environment will ensue, it remains to be seen how "comprehensive" this "ban" will be.82
A chronology of medical discovery
1924: W E Cooke publishes the first paper on asbestos related disease.
1925: Thomas Oliver coins the term “asbestosis”.
1930: Edward Merewether confirms that inhalation of asbestos dust can cause a fatal disease.
1935: Kenneth Lynch and W Atmar Smith identify a “possible relationship” between pulmonary asbestosis and carcinoma of the lung.
1955: Richard Doll finds that certain asbestos workers face a “notably higher risk” of contracting lung cancer than the rest of the population.
1960: Wagner, Sleggs, and Marchand publish their first paper indicating a relationship between pleural mesothelioma and asbestos exposure.
1964: Selikoff, Churg, and Hammond demonstrate that insulation contract workers face a health hazard resulting from asbestos exposure.
FUTURE OF ASBESTOS RELATED DISEASE
Even if a worldwide ban on asbestos were to be introduced forthwith, past exposures will ensure that death and disease related to asbestos continue for the foreseeable future. Epidemiologists have predicted that the incidence of male mesothelioma in the USA should peak at about 2300 cases per year at the end of the 20th century and will decline to some 500 cases per year by about 2055.83 In 1995, Julian Peto et al predicted that male deaths from mesothelioma in Britain will peak at between 2700 and 3300 per year around the year 2020.84 A few years later, Peto et al forecast some 250 000 male deaths from mesothelioma in Western Europe as a whole by about 2035. Most of these deaths are expected to occur among roofers, plumbers, electricians, carpenters, gas fitters, and others employed in the building trades.85 Others anticipate figures as high as 10 000 per year among British males alone by 2020.86,87 It appears that the history of asbestos related disease still has some distance to travel.
http://www.reachoutmichigan.org/funexperiments/quick/hawaii/Craters.html
Hawai'i Space Grant College, Hawai'i Institute of Geophysics and
Planetology, University of Hawai'i, 1996
The purpose of this activity is to determine the factors affecting the appearance of impact craters.

The circular features so obvious on the Moon's surface are impact craters formed when impactors smashed into the surface. The explosion and excavation of materials at the impacted site created piles of rock (called ejecta) around the circular hole as well as bright streaks of target material (called rays) thrown for great distances.

Two basic methods form craters in nature: 1) the impact of a projectile on the surface, and 2) the collapse of the top of a volcano, creating a crater termed a caldera.
By studying all types of craters on Earth and by creating impact craters
in experimental laboratories, geologists concluded that the Moon's craters
are impact in origin.
The factors affecting the appearance of impact craters and ejecta are the size and velocity of the impactor, and the geology of the target surface.
By recording the number, size, and extent of erosion of craters,
lunar geologists can determine the ages of different surface units
on the Moon and can piece together the geologic history. This technique
works because older surfaces are exposed to impacting meteorites
for a longer period of time than are younger surfaces.
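As a rough illustration of the crater-counting idea (the counts and areas below are made-up numbers, not real lunar data), relative age comes down to comparing crater densities:

import math  # only the division below is needed; math kept for extensions

# Hypothetical crater counts for two mapped surface units.
units = {
    "mare unit": {"craters": 120, "area_km2": 50000},
    "highland unit": {"craters": 940, "area_km2": 50000},
}

for name, u in units.items():
    density = u["craters"] / u["area_km2"]  # craters per square kilometer
    print(name, round(density, 4), "craters/km^2")

# The more densely cratered unit has been exposed to impacts longer,
# so it is the older surface.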
Impact craters are not unique to the Moon. They are found on all
the terrestrial planets and on many moons of the outer planets.
On Earth, impact craters are not as easily recognized because
of weathering and erosion. Famous impact craters on Earth are Meteor Crater
in Arizona, U.S.A.; Manicouagan in Quebec, Canada; Sudbury in Ontario,
Canada; Ries Crater in Germany, and Chicxulub on the Yucatan coast in Mexico.
Chicxulub is considered by most scientists as the source crater of the
catastrophe that led to the extinction of the dinosaurs at the end of the
Cretaceous period. An interesting fact about the Chicxulub crater is that
you cannot see it. Its circular structure is nearly a kilometer below the
surface and was originally identified from magnetic and gravity data.
Lunar Impact Crater

Typical characteristics of a lunar impact crater are labeled on this photograph of Aristarchus, 42 km in diameter, located west of Mare Imbrium.

Floor: bowl shaped or flat, characteristically below surrounding ground level unless filled in with lava.

Ejecta: blanket of material surrounding the crater that was excavated during the impact event. Ejecta becomes thinner away from the crater.

Rim: rock thrown out of the crater and deposited as a ring-shaped pile of debris at the crater's edge during the explosion and excavation of an impact event.

Walls: characteristically steep and may have giant stairs called terraces.

Rays: bright streaks starting from a crater and extending away for great distances. See Copernicus crater for another example.

Central peaks: mountains formed because of the huge increase and rapid decrease in pressure during the impact event. They occur only in the center of craters that are larger than 40 km in diameter. See Tycho crater for another example.
In this activity, marbles or other spheres such as steel shot,
ball bearings, or golf balls are used as impactors that students drop from
a series of heights onto a prepared "lunar surface." Using impactors of
different mass dropped from the same height will allow students to study
the relationship of mass of the impactor to crater size. Dropping impactors
from different heights will allow students to study the relationship of
velocity of the impactor to crater size.
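A dropped impactor arrives with kinetic energy equal to the potential energy it gives up, E = mgh, so the two experiments really vary impact energy through mass and through velocity. A quick sketch of the comparison (the masses and heights below are illustrative values, not ones prescribed by the activity):

G = 9.81  # gravitational acceleration, m/s^2

def impact_energy(mass_kg, height_m):
    # Kinetic energy at impact equals the potential energy given up: m*g*h.
    return mass_kg * G * height_m

# Same height, different masses (e.g., a glass marble versus a steel ball):
print(impact_energy(0.005, 0.90), "J")  # 5 g marble dropped from 90 cm
print(impact_energy(0.028, 0.90), "J")  # 28 g steel ball dropped from 90 cm

# Same mass, different heights:
print(impact_energy(0.005, 0.30), "J")
print(impact_energy(0.005, 2.00), "J")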
Review and prepare materials listed on the student sheet.

The following materials work well as a base for the "lunar surface." Dust with a topping of dry tempera paint, powdered drink mixes, glitter, or other dry material in a contrasting color, using a sieve, screen, or flour sifter. Choose a color that contrasts with the base material for the most striking results.

all purpose flour
Reusable in this activity and keeps well in a covered container. It can be recycled for use in the lava layering activity or for many other science activities.

baking soda
Reusable in this activity, even if colored, by adding a clean layer of new white baking soda on top. Keeps indefinitely in a covered container. Baking soda mixed (1:1) with table salt also works.

Reusable in this activity but probably not recyclable. Keeps only in freezer in an airtight container.

sand and corn starch
Mixed (1:1), sand must be very dry. Keeps only in freezer in an airtight container.
Pans should be plastic, aluminum, or cardboard. Do not use glass. They
should be at least 7.5 cm deep. Basic 10"x12" aluminum pans or plastic
tubs work fine, but the larger the better to avoid misses. Also, a larger
pan may allow students to drop more marbles before having to resurface
and smooth the target materials.
A reproducible student "Data Chart" is included; students will
need a separate chart for each impactor used in the activity.
Begin by looking at craters in photographs of the Moon and asking students
their ideas of how craters formed.
During this activity, the flour, baking soda, or dry paint may fall onto
the floor and the baking soda may even be dispersed into the air. Spread
newspapers under the pan(s) to catch spills or consider doing the activity
outside. Under supervision, students have successfully dropped marbles
from second-story balconies. Resurface the pan before a high drop.
Have the students agree beforehand on the method they will use to "smooth"
and resurface the material in the pan between impacts. The material need
not be packed down. Shaking or tilting the pan back and forth produces
a smooth surface. Then be sure to reapply a fresh dusting of dry tempera
paint or other material. Remind students that better experimental control
is achieved with consistent handling of the materials. For instance, cratering
results may vary if the material is packed down for some trials and not others.
Allow some practice time for dropping marbles and resurfacing the materials
in the pan before actually recording data.
Because of the low velocity of the marbles compared with the velocity of
real impactors, the experimental impact craters may not have raised rims.
Central uplifts and terraced walls will be absent.
The higher the drop height, the greater the velocity of the marble, so
a larger crater will be made and the ejecta will spread out farther.
If the impactor were dropped from 6 meters, then the crater would be larger.
The students need to extrapolate the graph out far enough to read the predicted crater size.
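One way to make that prediction numerically (a minimal sketch; the measurements below are placeholders for the students' own data chart) is to fit a power law to the height-versus-diameter data and read it out at 6 meters:

import numpy as np

# Placeholder (drop height, crater diameter) data in cm; substitute real measurements.
heights = np.array([30.0, 60.0, 90.0, 200.0])
diameters = np.array([3.1, 3.6, 4.0, 4.9])

# Crater size grows roughly as a power law in drop height, so fit a line
# in log-log space: log(diameter) = a * log(height) + b.
a, b = np.polyfit(np.log(heights), np.log(diameters), 1)

predicted = np.exp(b) * 600.0 ** a  # extrapolate to a 6 m (600 cm) drop
print("Predicted crater diameter at 6 m:", round(predicted, 1), "cm")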
Have the class compare and contrast their hypotheses on what things
affect the appearance of craters and ejecta.
As a grand finale for your students, demonstrate a more forceful impact
using a slingshot.
What would happen if you change the angle of impact? How could this be
tested? Try it! Do the results support your hypothesis?
If the angle of impact is changed, then the rays will be concentrated
and longer in the direction of impact. A more horizontal impact angle produces
a more skewed crater shape.
To focus attention on the rays produced during an impact, place a paper
bulls-eye target with a central hole on top of a large, flour-filled pan.
Students drop a marble through the hole to measure ray lengths and orientations.
Use plaster of Paris or wet sand instead of dry materials.
Videotape the activity.
Some people think the extinction of the dinosaurs was caused by massive
global climate changes because of a meteorite impact on Earth. Summarize
the exciting work that has been done at Chicxulub on the Yucatan coast
Some people think Earth was hit by an object the size of Mars that caused
a large part of Earth to "splash" into space, forming the Moon. Do you
agree or disagree? Explain your answer.
Physics students could calculate the velocities of the impactors from various
heights. (Answers from heights of 30 cm, 60 cm, 90 cm, and 2 m should,
of course, agree with the velocity values shown on the "Impact Craters - Data Chart".)
http://www.databasejournal.com/features/mssql/article.php/3693856/Lesson-3-Working-with-SQL-Server.htm
In this lesson, you'll learn how to connect and log into SQL Server, how to issue SQL Server statements, and how to obtain information about databases and tables.
Making the Connection
Now that you have a SQL Server DBMS and client software to use with it, it would be worthwhile to briefly discuss connecting to the database.
SQL Server, like all client/server DBMSs, requires that you log into the DBMS before being able to issue commands. SQL Server can authenticate users and logins using its own user list, or using the Windows user list (the logins used to start using Windows). As such, depending on how SQL Server is configured, it may log you in automatically using whatever login you used for Windows itself, or it may prompt you for a login name and password.
When you first installed SQL Server, you were probably prompted for an administrative login (often named sa for system administrator) and a password. If you are using your own local server and are simply experimenting with SQL Server, using this login is fine. In the real world, however, the administrative login is closely protected because access to it grants full rights to create tables, drop entire databases, change logins and passwords, and more.
To connect to SQL Server, you need the following pieces of information:
The hostname (the name of the computer). This is localhost or your own computer name if you're connecting to a local SQL Server.
A valid username (if Windows authentication is not being used).
The user password (if required).
If you're using one of the client applications discussed in the previous lesson, a dialog box will be displayed to prompt you for this information.
Note: Using Other Clients - If you are using a client other than the ones mentioned previously, you still need to provide this information in order to connect to SQL Server.
After you are connected, you have access to whatever databases and tables your login name has access to. (Logins, access control, and security are revisited in Lesson 29, "Managing Security.")
Selecting a Database
When you first connect to SQL Server, a default database is opened for you. This will usually be a database named master (which as a rule you should never play with). Before you perform any database operations, you need to select the appropriate database. To do this, you use the USE keyword.
Plain English: Keyword - A reserved word that is part of the T-SQL language. Never name a table or column using a keyword. Appendix E, "T-SQL Reserved Words," lists the SQL Server keywords.
For example, to use the crashcourse database, you would enter the following (in a query window):

USE crashcourse;

SQL Server responds with:

Command(s) completed successfully.
The USE statement does not return any results. Depending on the client used, some form of notification might be displayed (as seen here).
Tip: Interactive Database Selection - In SQL Server Management Studio (or SQL Query Analyzer), you may select a database from the drop-down list in the toolbar to use it. You'll not actually see the USE command being issued (although it is being issued for you), but the database will change and the window title bar will reflect this change.
Remember, you must always USE a database before you can access any data in it.
Learning About Databases and Tables
But what if you don't know the names of the available databases? And for that matter, how do the client applications obtain the list of available databases that are displayed in the drop-down list?
Information about databases, tables, columns, users, privileges, and more, are stored within databases and tables themselves (yes, SQL Server uses SQL Server to store this information). These internal tables are all in the master database (which is why you don't want to tamper with it), and they are generally not accessed directly. Instead, SQL Server includes a suite of prewritten stored procedures that can be used to obtain this information (information that SQL Server then extracts from those internal tables).
Note: Stored Procedures - Stored procedures will be covered in Lesson 23, "Working with Stored Procedures." For now, it will suffice to say that stored procedures are SQL statements that are saved in SQL Server and can be executed as needed.
Look at the following example:

sp_databases;
DATABASE_NAME DATABASE_SIZE REMARKS
----------------- ------------- -------
coldfusion 9096 NULL
crashcourse 3072 NULL
forta 2048 NULL
master 4608 NULL
model 1728 NULL
msdb 5824 NULL
tempdb 8704 NULL
sp_databases; returns a list of available databases. Included in this list might be databases used by SQL Server internally (such as master and tempdb in this example). Of course, your own list of databases might not look like those shown above.
To obtain a list of tables within a database, use sp_tables;.
sp_tables; returns a list of available tables in the currently selected database, and not just your tables; it also includes all sorts of system tables and other entries (possibly hundreds of entries).
To obtain a list of tables (just tables, not views, and not system tables and so on), you can use this statement:
sp_tables NULL, dbo, crashcourse, "'TABLE'";
TABLE_QUALIFIER TABLE_OWNER TABLE_NAME TABLE_TYPE REMARKS
--------------- ----------- ------------ ---------- -------
crashcourse dbo customers TABLE NULL
crashcourse dbo orderitems TABLE NULL
crashcourse dbo orders TABLE NULL
crashcourse dbo products TABLE NULL
crashcourse dbo vendors TABLE NULL
crashcourse dbo productnotes TABLE NULL
crashcourse dbo sysdiagrams TABLE NULL
Here, sp_tables accepts a series of parameters telling it which database to use, as well as what specifically to list ('TABLE' as opposed to 'VIEW' or 'SYSTEM TABLE').
sp_columns can be used to display a table's columns:

sp_columns customers;
Note: Shortened for Brevity - sp_columns returns lots of data. In the output that follows, I have truncated the display because the full output would have been far wider than the pages in this book, likely requiring many lines for each row.
TABLE_QUALIFIER TABLE_OWNER TABLE_NAME COLUMN_NAME DATA_TYPE TYPE_NAME
crashcourse dbo customers cust_id 4 int identity
crashcourse dbo customers cust_name -8 nchar
crashcourse dbo customers cust_address -8 nchar
crashcourse dbo customers cust_city -8 nchar
crashcourse dbo customers cust_state -8 nchar
crashcourse dbo customers cust_zip -8 nchar
crashcourse dbo customers cust_country -8 nchar
crashcourse dbo customers cust_contact -8 nchar
crashcourse dbo customers cust_email -8 nchar
sp_columns requires that a table name be specified (customers in this example), and returns a row for each field, containing the field name, its datatype, whether NULL is allowed, key information, default value, and much more.
Note: What Is Identity? - Column cust_id is an identity column. Some table columns need unique values (for example, order numbers, employee IDs, or, as in the example just shown, customer IDs). Rather than have to assign unique values manually each time a row is added (and having to keep track of what value was last used), SQL Server can automatically assign the next available number for you each time a row is added to a table. This functionality is known as identity. If it is needed, it must be part of the table definition used when the table is created using the CREATE statement. We'll look at CREATE in Lesson 20, "Creating and Manipulating Tables."
Lots of other stored procedures are supported, too, including:
sp_server_info: Used to display extensive server status information
sp_spaceused: Used to display the amount of space used (and unused) by a database
sp_statistics: Used to display usage statistics pertaining to database tables
sp_helpuser: Used to display available user accounts
sp_helplogins: Used to display user logins and what they have rights to
It is worthwhile to note that client applications use these same stored procedures you've seen here. Applications that display interactive lists of databases and tables, that allow for the interactive creation and editing of tables, that facilitate data entry and editing, or that allow for user account and rights management, and more, all accomplish what they do using the same stored procedures that you can execute directly yourself.
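For instance, here is a minimal Python sketch of calling those same stored procedures programmatically over ODBC. It assumes the third-party pyodbc package and a local server; the driver name, login, and password are placeholders, not values from this lesson:

import pyodbc  # third-party package: pip install pyodbc

# Placeholder connection details; substitute your own server and login.
conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=localhost;DATABASE=crashcourse;UID=sa;PWD=your_password"
)
cursor = conn.cursor()

# The same stored procedures used by client applications:
cursor.execute("EXEC sp_databases")
for row in cursor.fetchall():
    print(row.DATABASE_NAME)  # one row per available database

cursor.execute("EXEC sp_columns customers")
for row in cursor.fetchall():
    print(row.COLUMN_NAME, row.TYPE_NAME)  # one row per column

conn.close()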
In this lesson, you learned how to connect and log into SQL Server, how to select databases using USE, and how to introspect SQL databases, tables, and internals using stored procedures. Armed with this knowledge, you can now dig into the all-important SELECT statement.
https://www.everydayhealth.com/skin-and-beauty-photos/how-to-identify-common-bug-bites.aspx
How to Identify Common Bites and Stings
Getting bitten by a bug can be a creepy experience, especially if you don’t know what tiny creature left you with that red, throbbing welt on your skin. Don’t panic. Most bites and stings from common insects are harmless and heal quickly. But some bites and stings, like those from fire ants, wasps, hornets, and bees, may cause intense pain or even an allergic reaction. Others, like poisonous spider bites, require immediate emergency medical care.
Symptoms of bug bites provide clues to the cause and severity. For example, most bug bites cause red bumps with pain, itching, or burning. Some bug bites also feature blisters or welts. Tick bites can transmit Lyme disease, which produces a rash that looks like a bull's-eye. Most bug bites are transmitted directly from the insect, and most occur outdoors. Two exceptions are bedbugs — tiny insects that live in and near beds — and lice, which spread through contact with an infected person, a comb, or clothing.
The best way to prevent insect bites is to avoid insects, wear protective clothing, use pesticide, not eat foods or wear fragrances that attract bugs, and know your own personal risk for having an allergic reaction to a bug bite.
Because certain bites can also spread illnesses such as the Zika virus and West Nile virus (both transmitted by mosquito), Lyme disease (from a black-legged tick), Rocky Mountain spotted fever (from a dog or wood tick), or Chagas disease (from kissing bugs), it's good to know what bit you. Learning to identify a bug bite by how it looks and feels will help you know whether to seek medical care or treat the skin bump at home.
A mosquito bite appears as an itchy round, red, or pink skin bump. It's usually harmless but can sometimes cause a serious illness, such as the Zika virus (particularly harmful in pregnant women), the West Nile virus, or malaria. For most people, Zika causes a brief, flu-like illness. But pregnant women with Zika infection have had an alarming increase in microcephaly birth defects in their newborns — a debilitatingly small head and brain size. The Centers for Disease Control and Prevention (CDC) posted a 2016 travel alert advising pregnant women to delay travel to 50 areas where Zika is active including Latin America and the Caribbean.
About 2,000 U.S. cases of the West Nile virus were reported to the CDC in 2014. Symptoms appear 2 to 14 days after the bite and can include headaches, body aches, fever, vomiting, diarrhea, and a skin rash. People with a more severe West Nile infection may develop meningitis or encephalitis, and have symptoms including neck stiffness, severe headache, disorientation, high fever, and convulsions.
The bite of a parasite-infected mosquito can cause malaria — a rare occurrence in the United States, with only about 1,500 cases reported by the CDC each year. Symptoms are similar to the flu and can include fever, headache, muscle aches, nausea, and vomiting from 10 days to 4 weeks after the bite. Malaria is serious, but it's good to know it is preventable and treatable, according to the CDC.
Bed Bug Bites
You probably won't feel pain when a bed bug bites, but you may see a row of two or more red marks on your skin. Some people develop a mild or severe allergic reaction to the bug's saliva between 24 hours and three days later. This can result in a raised, red skin bump or welt that's intensely itchy and inflamed for several days. This can also include hives, and may mean a trip to your healthcare provider for treatment, notes the American Academy of Dermatology. Bed bug bites can occur anywhere on your body, but typically show up on uncovered areas, such as your neck, face, arms, and hands. It's good to know that although they're common, bed bugs do not carry disease, according to the CDC.
Most spider bites are not poisonous and cause only minor symptoms like red skin, swelling, and pain at the site. Other spider bites are a real emergency. If you develop an allergic reaction to a spider bite, with symptoms such as tightness in the chest, breathing problems, swallowing difficulties, or swelling of the face, you need medical care at once. Because spider bites can get infected with tetanus, the CDC also recommends staying on top of your tetanus booster shots and getting one every 10 years.
A bite from a poisonous spider like the black widow or brown recluse is extremely dangerous and can cause a severe reaction. The black widow's bite, which shows up as two puncture marks, may or may not be painful at first. But 30 to 40 minutes later, you may have pain and swelling in the area. Within eight hours, you may experience muscle pain and rigidity, stomach and back pain, nausea and vomiting, and breathing difficulties. You might not have seen the spider that bit you, but always seek medical attention immediately if there's a possibility you could have been bitten by a poisonous spider. Call 911 or the American Association of Poison Control Centers at 800-222-1222.
Brown Recluse Spider Bites
The brown recluse spider is poisonous and usually lives in dark and unused spaces. Some people feel a small sting followed immediately by a sharp pain, while others don't realize they've gotten a brown recluse bite until hours later. Four to eight hours afterward, the bite may become more painful and look like a bruise or blister with a blue-purple area around it. Later, the bite becomes crusty and turns dark.
Symptoms of a brown recluse spider bite occur within a few hours and include fever, chills, itching, nausea, and sweating. Because some people will have a serious reaction that can lead to kidney failure, seizure, and coma, it's important to get medical care at once, according to the NIH National Library of Medicine. Be sure to seek medical attention immediately if you could have been bitten by a poisonous spider, by calling 911 or the American Association of Poison Control Centers at 800-222-1222.
Some tick bites can be dangerous because they may carry disease. Black-legged ticks, formerly known as deer ticks, may carry Lyme disease, and dog ticks can spread Rocky Mountain spotted fever. Up to 30,000 cases of Lyme disease are reported each year in the United States.
Symptoms of Lyme disease include a skin rash in the pattern of rings, much like a bull’s-eye on a target, that appears up to a month after the tick bite. You may also have fever, fatigue, headaches, muscle and joint aches, as well as irregular heart rhythms. But 20 to 30 percent of people who get infected never develop a rash. Symptoms such as swollen or painful joints, memory loss, or other autoimmune responses that mimic those of other diseases may present themselves later, when Lyme disease is in its advanced stages. A diagnosis may remain elusive because many doctors will not equate these symptoms with Lyme disease.
Rocky Mountain spotted fever from a tick bite is rare, with about 2,500 U.S. cases per year. It causes a fever, headache, muscle aches, and a skin rash. The rash begins on the ankles and wrists after a few days of fever, but later the rash spreads to the rest of the body; in some people, a rash never develops. Although this infection can be severe — and even fatal — it is preventable and can be successfully treated with prompt medical care, according to the CDC.
Symptoms of flea bites may begin within hours after you're bitten, and the bites tend to appear in groups of three or four. You may notice itching, hives, and swelling around an injury or sore, or a rash of small, red bumps that may or may not bleed. Flea bites are more common on your ankles, in your armpits, around your waist, and in the bends of your knees and elbows. A flea-bite rash turns white when you press on it and tends to get larger or spread over time. Scratching the rash can lead to a skin infection, according to the NIH National Library of Medicine, and may need medical attention.
In extremely rare cases, the bites of fleas infected with the bacteria that causes plague can spread the disease from wild rodents to pets and to people. Over the past 10 years, as few as one, and as many as 17, cases of plague were reported in the United States, according to the CDC, most in the rural West. Symptoms of plague include swollen lymph nodes, headache, fever, and chills that appear from one to six days after the bite.
Bee stings cause a sharp pain that may continue for a few minutes, then fade to a dull, aching feeling. The area may still feel sore to the touch a few days later. A red skin bump with white around it may appear around the site of the sting, and the area may itch and feel hot to the touch. If you've been stung by a bee before, your body may also have an immune response to the venom in the sting, resulting in swelling where the sting occurred or in an entire area of your body. If you have this type of allergic response, called anaphylaxis, it is a medical emergency that needs treatment immediately. Symptoms of severe allergy to a bee sting include hives, swelling, trouble breathing, dizziness, cramps, nausea, diarrhea, and even cardiac arrest.
Lice bites are tiny red spots on the shoulders, neck, and scalp from small parasitic insects that can live on your clothes or in your bedding. Because lice bites are so small, they usually don’t hurt, but they do itch. Some people may develop a larger, uncomfortable skin rash from lice bites. Continual scratching of the itchy spots could lead to an infection marked by symptoms including swollen lymph nodes and tender, red skin. An infected lice bite may also ooze and crust over, and will need to be treated by a doctor, but lice are not known to carry other diseases.
Ant Bites and Stings
Ant bites and stings are typically painful and cause red skin bumps. Some types of ants, like fire ants, are venomous and can cause a severe allergy. Fire ants bite first to hold on and then sting, giving a sharp pain and a burning sensation. If you're bitten by fire ants, you may see white, fluid-filled pustules or blisters a day or two after the sting that last three to eight days and may cause scars. The bumps may also be itchy and red, and you may have swelling around the site. It's important not to scratch or break the blisters open, because they can become infected, notes the American College of Allergy, Asthma, and Immunology. Carpenter ant bites are also painful because the ants spray formic acid into the bite, which causes a burning feeling.
Mite and Chigger Bites
Mite bites do not usually spread disease, but they can irritate the skin and cause intense itching. Itch mites usually feed on insects but will bite other animals, including people. The bites usually go unnoticed until itchy, red marks develop that may look like a skin rash.
Chiggers are a form of mite that inject their saliva so that they can liquefy and eat skin. In response to the chigger bite, the skin around the bite hardens. The surrounding skin becomes irritated and inflamed, and an itchy red welt develops.
Mites also cause the condition called scabies which is contagious from person to person, notes the CDC. Female scabies mites burrow into the skin to lay eggs. When the eggs hatch, the larvae come to the skin surface, begin to molt, and burrow back into the skin to feed. This results in a skin rash that may look like acne pimples and create intense itching that gets worse at night. You may also notice light, thin lines on the skin where the mites have burrowed, including between the fingers, in the bends at the wrists and knees, and under jewelry on the wrists and fingers.
Kissing Bug Bites
Kissing bugs, also known as assassin bugs, can pass on the parasites that cause Chagas disease. According to September 2015 research from the University of Texas at El Paso published in Acta Tropica, more than half of these insects carry the parasite. In the United States, Chagas disease affects about 300,000 people, according to the CDC.
Kissing bugs hide in the daytime but emerge at night, often leaving bites on the face and causing a swollen eyelid. In the first few weeks after infection, symptoms of Chagas disease can include fever, fatigue, body aches, headache, rash, a loss of appetite, diarrhea, and vomiting. But in the long term, and even decades later, the CDC notes that about 30 percent of people infected by kissing bugs will develop serious complications of Chagas disease: an enlarged heart, heart failure, abnormal heart rhythm, cardiac arrest, or an enlarged colon, also known as megacolon.
http://docs.python.org/release/1.5.2p2/lib/module-rotor.html
This module implements a rotor-based encryption algorithm, contributed by Lance Ellinghouse. The design is derived from the Enigma device, a machine used during World War II to encipher messages. A rotor is simply a permutation. For example, if the character `A' is the origin of the rotor, then a given rotor might map `A' to `L', `B' to `Z', `C' to `G', and so on. To encrypt, we choose several different rotors, and set the origins of the rotors to known positions; their initial position is the ciphering key. To encipher a character, we permute the original character by the first rotor, and then apply the second rotor's permutation to the result. We continue until we've applied all the rotors; the resulting character is our ciphertext. We then change the origin of the final rotor by one position, from `A' to `B'; if the final rotor has made a complete revolution, then we rotate the next-to-last rotor by one position, and apply the same procedure recursively. In other words, after enciphering one character, we advance the rotors in the same fashion as a car's odometer. Decoding works in the same way, except we reverse the permutations and apply them in the opposite order.
The available functions in this module are:

newrotor(key[, numrotors])
Return a rotor object. key is a string containing the encryption key for the object; numrotors is the number of rotor permutations in the returned object (a small default is used if it is omitted).

Rotor objects have the following methods:

setkey(key)
Set the rotor's key to key.

encrypt(plaintext)
Reset the rotor object to its initial state and encrypt plaintext, returning a string containing the ciphertext.

encryptmore(plaintext)
Encrypt plaintext without resetting the rotor object first.

decrypt(ciphertext)
Reset the rotor object to its initial state and decrypt ciphertext, returning a string containing the plaintext.

decryptmore(ciphertext)
Decrypt ciphertext without resetting the rotor object first.
An example usage:
>>> import rotor
>>> rt = rotor.newrotor('key', 12)
>>> rt.encrypt('bar')
'\2534\363'
>>> rt.encryptmore('bar')
'\357\375$'
>>> rt.encrypt('bar')
'\2534\363'
>>> rt.decrypt('\2534\363')
'bar'
>>> rt.decryptmore('\357\375$')
'bar'
>>> rt.decrypt('\357\375$')
'l(\315'
>>> del rt
The module's code is not an exact simulation of the original Enigma device; it implements the rotor encryption scheme differently from the original. The most important difference is that in the original Enigma, there were only 5 or 6 different rotors in existence, and they were applied twice to each character; the cipher key was the order in which they were placed in the machine. The Python rotor module uses the supplied key to initialize a random number generator; the rotor permutations and their initial positions are then randomly generated. The original device only enciphered the letters of the alphabet, while this module can handle any 8-bit binary data; it also produces binary output. This module can also operate with an arbitrary number of rotors.
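To make that scheme concrete, here is a toy re-creation in modern Python: key-seeded random permutations advanced odometer-style. It is an illustrative sketch only (the function names and byte handling are my own, not the rotor module's), and it is certainly not secure:

import random

def make_rotors(key, numrotors, size=256):
    # The key seeds a random number generator, which generates the permutations.
    rng = random.Random(key)
    rotors = []
    for _ in range(numrotors):
        perm = list(range(size))
        rng.shuffle(perm)
        rotors.append(perm)
    return rotors

def encrypt(rotors, data, size=256):
    positions = [0] * len(rotors)  # rotor origins: the cipher state
    out = bytearray()
    for c in data:
        for perm, pos in zip(rotors, positions):
            c = perm[(c + pos) % size]  # apply each rotor's permutation in turn
        out.append(c)
        # Advance the rotors like a car's odometer: the last rotor moves on
        # every character, carrying into the previous rotor on a full revolution.
        for i in range(len(positions) - 1, -1, -1):
            positions[i] = (positions[i] + 1) % size
            if positions[i] != 0:
                break
    return bytes(out)

print(encrypt(make_rotors(b"key", 12), b"bar"))

Decryption reverses the process: apply the inverse permutations in the opposite order while advancing the rotor positions in the same way.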
The original Enigma cipher was broken in 1944. The version implemented here is probably a good deal more difficult to crack (especially if you use many rotors), but it won't be impossible for a truly skilful and determined attacker to break the cipher. So if you want to keep the NSA out of your files, this rotor cipher may well be unsafe, but for discouraging casual snooping through your files, it will probably be just fine, and may be somewhat safer than using the Unix crypt command.
http://www.icr.org/article/5350/243/
Quasars Quash Big Bang Assumption
by Brian Thomas, M.S. *
According to the most prominent naturalistic theory of origins, the universe began over 13 billion years ago in a "Big Bang" that flung matter, energy, and space outward.
But quasars near some of the most distant galaxies have posed a problem for this view. A new study showed that these objects, some of the brightest in the universe, cast even more darkness on astronomers' current understanding of starlight and time.
Quasars, or "quasi-stellar radio sources," are super-bright, massive, glowing objects that appear to be associated with black holes near galactic cores. Like many stars and galaxies, they are millions of light years away from earth. Light from these distant sources appears shifted toward the red, or low energy, end of the electromagnetic spectrum. This is interpreted to be caused by an expanding universe.1
In the early 1990s, eminent astronomer Halton Arp discovered a thick trail of glowing gas linking a galaxy named NGC 4319 to a nearby quasar. He noted a huge problem with this find--the galaxy's redshift indicated a distance of 107 million light years away, while its quasar's indicated 1.2 billion light years.2 If the degree of light redshift is truly caused by space expanding between earth and the light source, then the amount of redshift between these obviously connected glowing objects ought to be the same--not an order of magnitude different!
This discrepancy caused Arp and some astronomers to doubt the assumption that redshift--at least for quasars--is caused by an expanding universe. Other astronomers seem to have just ignored the contraindicating anomaly. Based on new research, however, they may have trouble continuing to disregard the contradictions.
In a study published online in the Monthly Notices of the Royal Astronomical Society, astronomer Mike Hawkins at the Royal Observatory in Edinburgh presented results from 900 quasars that he observed over the last couple of decades. He found more problems presented by quasars for the standard cosmological model. For example, identical rhythmic light signatures were observed in two quasars, one with a redshift indicating it is 6 billion light years away, and another indicating a 10 billion light year "distance." Physorg.com reported:
Even though the distant quasars were more strongly redshifted than the closer quasars, there was no difference in the time it took the light to reach Earth. This quasar conundrum doesn't seem to have an obvious explanation.3
If redshift is not a reliable indicator of quasar distances, then how reliable is it for other objects? What else could be causing the difference in degree of redshift, and would this alternative cause have implications for other cosmic phenomena? In addition, if the data are ambiguous about the distances to quasars, then science cannot be as sure of the time it took for quasar light to reach earth.
There are even more glaring observations that do not "seem to have an obvious explanation" in the current cosmological context. For example, astronomer Neil deGrasse Tyson stated emphatically that "the universe was born 14 billion years ago,"4 yet this is not nearly enough time for light from separate regions of space to have crossed paths. According to the Big Bang model, in order for the temperature of space to be as remarkably even as it is, light must have intersected to have smoothed out its energy.5 This "Horizon Problem" has not been solved and is devastating to that model.6
Both the Horizon Problem and the anomalous quasar data reveal fundamental gaps in current astronomical understanding. Some of the basic, fundamental questions remain unanswered about the universe's structure, size, expansion dynamics, effects or causes of gravity, and the nature of light. And this precludes confident assertions about its age--if those assertions are based merely on today's observations and naturalistic assumptions.
For example, astronomer Hugh Ross recently stated, "Technological advance provides definitive data on the age of the universe and earth. There's simply no scientific basis for thinking that the universe and earth are not billions of years old."7 Investigating--not simply ignoring--observations like unexpected quasar light behavior and super-smooth cosmic temperatures shows that the data are only definitive in their defiance of mankind's overconfidence that science has solved the secrets of the universe--including its age.
Investigators may never be able to determine the universe's age based on natural parameters alone, but its age in earth terms has been clearly mapped out in the pages of Scripture, a document that carries the highest authority of divine authorship and veracity. And that authority, along with plenty of scientific evidence, points to a recent creation by an omniscient, omnipotent Creator.8
- Evolutionary cosmologists have extrapolated this expansion back in imaginary time to a Big Bang, but creation cosmologists point out that this may corroborate over a dozen Bible references to the Lord stretching out the heavens. See Humphreys, D. R. 2007. Creation Cosmologies Solve Spacecraft Mystery. Acts & Facts. 36 (10): 10.
- Arp, H., G. Burbidge and A. Hewitt. 1995. More on the galaxy-quasar connection. Sky and Telescope. 75 (8): 9.
- Zyga, L. Discovery that quasars don't show time dilation mystifies astronomers. PhysOrg. Posted on physorg.com April 9, 2010, reporting on research published in Hawkins, M. R. S. On time dilation in quasar light curves. Monthly Notices of the Royal Astronomical Society. Published online in advance of print April 9, 2010.
- deGrasse Tyson, N. For the Love of Hubble. Parade, June 22, 2008.
- This use of "temperature" is unique. What has been measured is the blackbody radiation of the cosmos, which ought to be very irregular. Instead, it is measured at 2.7 kelvins whether near or far from galaxy-rich regions of space. See Gish, D. 1991. The Big Bang Theory Collapses. Acts & Facts. 20 (6).
- Coppedge, D. 2007. The Light-Distance Problem. Acts & Facts. 36 (6).
- Ross, H. 2009. More Than a Theory. Grand Rapids, MI: Baker Books, 17.
- Humphreys, D. R. 2005. Evidence for a Young World. Acts & Facts. 34 (6).
Image credit: NASA, ESA, and G. Canalizo (University of California, Riverside)
* Mr. Thomas is Science Writer at the Institute for Creation Research.
Article posted on April 29, 2010.
http://www.educationworld.com/a_lesson/03/lp322-03.shtml
Use crayons or markers to teach note-taking skills.
About the Lesson
This lesson employs a brief biography of Amelia Earhart as the starting point for note-taking exercises. The Earhart biography is only a suggested starting point for this lesson, however. You can substitute any piece of literature for the selection, or provide additional note-taking practice by repeating this lesson with a variety of content-rich, subject-related reading material.
Very often, students read for a specific purpose rather than general information. For example, if students are working on reports about the causes of the Civil War, they will likely skim many Civil War resources to find the sections of those resources about the specific topic. That means students will be "eliminating" a lot of information as they skim for details about the causes of the war.
In this activity, students skim a brief biography of Amelia Earhart -- or another reading selection of your choice -- to locate specific information related to the focus of their search.
If you use the brief biography of Amelia Earhart, provide each student with a copy of that bio page. Then assign each student, or each group of students, a specific topic or theme from the bio to focus on.
Whichever topic students tackle, they skim their copy of the biography for information related to the topic. They then use a crayon to underline -- or a highlighting marker to highlight -- information that supports the topic. The highlighted text provides a visual representation of the "notes" students might write if they were using library resources to research the topic.
When students complete their highlighting, have them use the most important highlighted information to write in their own words a concise paragraph on their assigned topic or theme.
After completing this activity, you might encourage students to go beyond the one-page biography and do more in-depth research using library or Internet resources, providing each student with a different topic to research.
Assessment
Give students a clean copy of a brief biography of Amelia Earhart and have them cross out all but the most important information related to the following topic: Reasons Many People Think Amelia Earhart Is a Hero. The following might be among the phrases in the biography article that will be highlighted in marker or underlined in crayon:
Lesson Plan Source
Note: This lesson is loosely based on the Trash-N-Treasure technique. Teacher Barbara Jansen wrote Reading for Information: The Trash-N-Treasure Method of Teaching Note-Taking, an in-depth article about this unique note-taking strategy.
LANGUAGE ARTS: English
GRADES K - 12
NL-ENG.K-12.1 Reading for Perspective
NL-ENG.K-12.2 Reading for Understanding
NL-ENG.K-12.3 Evaluation Strategies
NL-ENG.K-12.4 Communication Skills
NL-ENG.K-12.5 Communication Strategies
NL-ENG.K-12.6 Applying Knowledge
NL-ENG.K-12.8 Developing Research Skills
NL-ENG.K-12.12 Applying Language Skills
Click here to return to this week's Note Taking lesson plan page.
http://psychcentral.com/news/archives/2005-02/uopm-bcr021405.html
Research represents big step toward development of brain-controlled artificial limbs for people
WASHINGTON, Feb. 17 – Reaching for something you want seems a simple enough task, but not for someone with a prosthetic arm, whose brain has no control over such fluid, purposeful movements. Yet according to research presented at the 2005 American Association for the Advancement of Science (AAAS) Annual Meeting, scientists have made significant strides toward creating a permanent artificial device that can restore deliberate mobility to patients with paralyzing injuries.
The concept is that, through thought alone, a person could direct a robotic arm – a neural prosthesis – to reach and manipulate a desired object.
As a step toward that goal, University of Pittsburgh researchers report that a monkey outfitted with a child-sized robotic arm controlled directly by its own brain signals is able to feed itself chunks of fruits and vegetables. The researchers trained the monkey to feed itself by using signals from its brain that are passed through tiny electrodes, thinner than a human hair, and fed into a specially designed algorithm that tells the arm how to move.
"The beneficiaries of such technology will be patients with spinal cord injuries or nervous system disorders such as amyotrophic lateral sclerosis or ALS," said Andrew Schwartz, Ph.D., professor of neurobiology at the University of Pittsburgh School of Medicine and senior researcher on the project.
The neural prosthesis moves much like a natural arm, with a fully mobile shoulder and elbow and a simple gripper that allows the monkey to grasp and hold food while its own arms are restrained.
Computer software interprets signals picked up by tiny probes inserted into neuronal pathways in the motor cortex, a brain region where voluntary movement originates as electrical impulses. The neurons' collective activity is then fed through the algorithm and sent to the arm, which carries out the actions the monkey intended to perform with its own limb.
The primary motor cortex, a part of the brain that controls movement, has thousands of nerve cells, called neurons, that fire like Geiger counters. These neurons are sensitive to movement in different directions. The direction in which a neuron fires fastest is called its "preferred direction." For each arm movement, no matter how subtle, thousands of motor cortical cells change their firing rate, and collectively, that signal, along with signals from other brain structures, is routed through the spinal cord to the different muscle groups needed to generate the desired movement.
Because of the sheer volume of neurons that fire in concert to allow even the most simple of movements, it would be impossible to create probes that could eavesdrop on them all. The Pitt researchers overcame that obstacle by developing a special algorithm that uses the limited information from relatively few neurons to fill in the missing signals. The algorithm decodes the cortical signals like a voting machine by using each cell's preferred direction as a label and taking a continuous tally of the population throughout the intended movement.
Monkeys were trained to reach for targets. Then, with electrodes placed in the brain, the algorithm was adjusted to assume the animal was intending to reach for those targets. For the task, food was placed at different locations in front of the monkey, and the animal, with its own arms restrained, used the robotic arm to bring the food to its mouth.
"When the monkey wants to move its arm, cells are activated in the motor cortex," said Dr. Schwartz. "Each of those cells activates at a different intensity depending on the direction the monkey intends to move its arm. The direction that produces the greatest intensity is that cell's preferred direction. The average of the preferred directions of all of the activated cells is called the population vector. We can use the population vector to accurately predict the velocity and direction of normal arm movement, and in the case of this prosthetic, it serves as the control signal to convey the monkey's intention to the prosthetic arm."
Because the software had to rely on a small number of the thousands of neurons needed to move the arm, the monkey did the rest of the work, learning through biofeedback how to refine the arm's movements by modifying the firing rates of the recorded neurons.
In recent weeks, Dr. Schwartz and his team were able to improve the algorithms to make it easier for the monkey to learn how to operate the arm. The improvements also will allow them to develop more sophisticated brain devices with smooth, responsive and highly precise movement. They are now working to develop a prosthesis with realistic hand and finger movements. Because of the complexity of a human hand and the movements it needs to make, the researchers expect it to be a major challenge.
Others involved in the research include Meel Velliste, Ph.D., a Pitt post-doctoral fellow in the Schwartz lab, and Chance Spalding, a Pitt bioengineering graduate student; and Anthony Brockwell, Ph.D., Valerie Ventura, Ph.D., Robert Kass, Ph.D., and graduate student Cari Kaufman from the Statistics Department at Carnegie Mellon University.
Source: Eurekalert & others
Last reviewed: By John M. Grohol, Psy.D. on 21 Feb 2009
https://en.wikipedia.org/wiki/Distraction_display
Distraction displays, also known as diversionary displays, or paratrepsis are anti-predator behaviors used to attract the attention of an enemy away from something, typically the nest or young, that is being protected by a parent. Distraction displays are sometimes classified more generically under "nest protection behaviors" along with aggressive displays such as mobbing. These displays have been studied most extensively in bird species, but also have been documented in populations of stickleback fish and in some mammal species.
Distraction displays frequently take the form of injury-feigning. However, animals may also imitate the behavior of a small rodent or alternative prey item for the predator; imitate young or nesting behaviors such as brooding (to cause confusion as to the true location of the nest), mimic foraging behaviors away from the nest, or simply draw attention to oneself.
Distraction displays were once considered to be a sort of "partial paralysis," or uncontrolled, stress-induced movements. On the basis of several observations, David Lack postulated that such displays simply resulted from the bird's alarm at having been flushed from the nest and had no decoy purpose. He noted a case in the European nightjar, when a bird led him around the nest several times but made no attempt to lure him away. He additionally noted courtship displays mixed with the distraction displays of the bird, suggesting that distraction display is not a purposeful action unto itself, and observed that the display became less vigorous the more frequently he visited the nest, as would be expected if the display were a response driven by fear and surprise.
Other researchers, including Edward Allworthy Armstrong, have taken issue with these arguments. While Armstrong acknowledged that displaying animals could make mistakes, as Lack's nightjar seems to have done in leading him around the nest, he attributed such mistakes not to paralytic fear but to a conflict of interest between self-preservation and reproductive or enemy attack impulses: the bird at once experiences a drive to lure the predator away and also to directly guard the young. Armstrong also thought that the incorporation of sexual and threat displays into the distraction display did not necessarily represent a mistake on the part of the animal, but "might make the display more effective by increasing its conspicuousness." Finally, the observation of less vigorous displays due to repeated nest approaches does not preclude the parent animal simply learning that the human is not a threat to its young. Jeffrey Walters provided evidence that lapwings possessed the ability to distinguish between different types of predators of varying threat levels, a behavior which is presumably learned, perhaps through cultural transmission.
Armstrong additionally noted that displaying animals were rarely captured by predators, as would be expected if the display were truly uncontrolled, and that the movements seemed to show signs of some sort of control by the animal, although likely not conscious, intelligent control. One example of apparent control is attention seemingly paid to routes used by the displaying animal when moving away from the nest. Furthermore, researchers have noted parent animals moving towards the predator during the display. While some of these cases could be attributed to mistakes made during "partial paralysis," in the case described by Wiklund and Stigh, snowy owls consistently walked or ran towards the predator while displaying, suggesting that the action was deliberate.
An additional hypothesis in alignment with Armstrong's ideas about conflicting impulses suggests that the incorporation of sexual and threat displays into the distraction display may represent displacement. Displacement occurs when an animal, unable to satisfy two conflicting impulses, may initiate an out-of-context behavior to "vent". If a displacement behavior served an adaptive function, such as increased survival of the young, then it may have experienced positive selection and become ritualized and stereotyped in its new context.
In any case, there are some forms of distraction display which may in fact have evolved from stress responses, an idea more in alignment with Lack's hypothesis. One of these is the "rodent-run" display, in which a bird fluffs its feathers to mimic the fur of a rodent and scurries away from the nest. It is possible that this display originates from a feather ruffling reflex to alarm.
There are several conditions in which distraction display may be advantageous to the animal, such that the incorporation of displacement or stress behaviors into offspring defense will most likely undergo positive selection. Most such cases depend upon the condition or location of the nest: distraction display has tended to evolve in species whose nests alone do not provide a substantial physical barrier to predators, and in those that nest on exposed terrain or close to the ground. If the nest is on open terrain, the parent may perceive predators at a greater distance and be able to leave the nest and begin displaying before the predator is in sufficient proximity to locate the nest. Furthermore, if the nest is on or near the ground, the parent may be able to display more effectively; Armstrong noted the relative rarity in the literature of distraction display in arboreal-nesting species, and attributed this to the difficulty of displaying convincingly while on a branch. Nonetheless, there have been anecdotal reports of warblers, which nest arboreally, dropping to the ground to perform a distraction display when disturbed, as well as displaying along a tree branch. In addition, distraction display tends to be most adaptive when animals nest solitarily, as solitary nesters lack the opportunity for mobbing a predator or otherwise performing communal defense, although some species have been observed to display in groups. Finally, distraction display tends to be adaptive when diurnal predation by visually-stimulated predators takes place (as these predators are most likely to notice the visual display).
Distraction display has been most extensively studied in birds. It has been observed in many species, including passerines and non-passerines, and has been particularly well documented in the Charadriiformes.
Injury-feigning, including broken-wing and impeded flight displays, is one of the more common forms of distraction. In broken-wing displays, birds that are at the nest walk away from it with wings quivering so as to appear as an easy target for a predator. Such injury-feigning displays are particularly well known in nesting waders and plovers, but also have been documented in other species, including snowy owls, the alpine accentor, and the mourning dove. Impeded flight displays additionally may suggest an injured wing, but through an airborne display.
False brooding is an approach used by plovers. The bird moves away from the nest site and crouches on the ground so as to appear to be sitting at a nonexistent nest and allows the predator to approach closely before escaping. Another display seen in plovers, as well as some passerine birds, is the rodent run, in which the nesting bird ruffles its back feathers, crouches, and runs away from the predator. This display resembles the flight response of a small rodent.
It has additionally been postulated that threat displays, such as gaping by the Caprimulgidae and wing-extension by the killdeer, and sexual displays, such as courtship dancing by stilts, can become incorporated into distraction displays where the bird is feigning injury. In both cases the incorporated components may increase conspicuousness, resulting in a more effective distraction display.
Stickleback fish have been documented performing distraction displays. A nesting male three-spined stickleback, when approached by a group of conspecifics, will perform a distraction display by digging or pointing into the substrate away from the nest in order to protect his eggs from cannibalism. There have been two explanations proposed for this behavior. One hypothesis is that the display arose from a courtship behavior in which the male normally "points" an approaching female towards his nest so that she may lay her eggs within it. Therefore, pointing at the sediment away from a nest containing eggs may divert a cannibalistic female's attention through sexual cues. A second hypothesis is that the stickleback distraction display arose from displaced foraging behavior and as such represents faux-foraging. In support of this hypothesis was the finding that all-male, all-female, and mixed foraging groups responded equally to the display, which would not be expected if it were indeed mimicking a sexual display.
Though rarely documented in mammals, a few instances of distraction display have appeared in the literature. One researcher documented a distraction display performed by a female red squirrel in order to protect her young. When the nest was approached, the female attempted to lead the researcher away through the trees using a ventriloquistic call that resembled the cries of the young. An additional study documented distraction display in Mentawai langurs, whereby a male will call loudly and bounce on branches while the female and young are able to quietly hide.
Costs and decision to display
While animals performing distraction displays are rarely documented as being killed, risks to the displaying animal do exist. One researcher observed and documented an instance in which a second predator became attracted to an animal already performing a distraction display that had been triggered by the approach of the first predator. The displaying animal was killed.
Additionally, it has been shown that some predators are “smart,” or have learned to recognize that distraction displays indicate a nearby nest. One study recorded a red fox that increased its searching behavior in response to the distraction display of a grouse and eventually found and killed the grouse nestlings.
Factors influencing decision
Given these risks, an animal must decide when distraction display is an appropriate response to a predator. Researchers have found several important factors that appear to influence the decision to use a distraction display and the intensity of the display, although it is not evident that these factors are taken into consideration consciously by the displaying animal.
Several considerations involving the predator have been shown to be important, including the distance of the predator from the nest. Intensity of display has been shown to decrease as the distance of the predator from the nest increases, perhaps representing the balancing of risk to the displaying parent and to the vulnerable young. The type of predator has also been shown to be of importance, with birds tending to display most intensely to ground-dwelling carnivores and less intensely to humans and flying predators. Finally, the number of potential predators has also been shown to be important in sticklebacks, in which frequency of distraction displaying by the male is positively correlated with the number of conspecifics in a foraging shoal.
In addition, the presence of a second parent at the nest correlates with increased display intensity, perhaps representing a diluted predation risk. The number of potential extra-pair mobbers has also been shown to marginally increase the intensity of the display, again representing a possible dilution of risk to each of the animals engaging in the distraction.
Third, the timing of distraction display as a correlate of nestling age has been a matter of particular interest in birds, with study results showing that the age at which displays are performed differs in species with precocial and altricial young. In species with precocial young, distraction display is most frequent just after hatching, while in altricial young, it is most frequent just before fledging. This may represent a greater tendency to display at the times when parental investment in young is greatest, and the young are still very vulnerable. However, some studies have failed to find any correlation between the cost of replacing a brood (a measure of parental investment) and the frequency of distraction display.
Lastly, game theory has been employed to explain how grouse may decide to display or not based on proxies for the abundance of “smart” predators, such as abundance of rodents in the preceding year. In this particular study, it was assumed that a greater abundance of rodents in one year may result in higher birth rates among foxes, which feed on the rodents, and therefore a greater population of one-year-old foxes in the following year. Yearling foxes are not yet experienced enough grouse hunters to be considered "smart." As such, distraction display may be a profitable strategy for the grouse in years following rodent population booms, as there is less risk of encountering a "smart" predator. However, a low rodent population in a given year may result in lower birth rates among foxes for that year, thereby resulting in a higher proportion of older, more experienced foxes in the population in the following year. In such a case, grouse may profit from not displaying, as they are more likely to encounter a "smart" predator.
- Armstrong, Edward (1949). "Diversionary display.--Part 2. The nature and origin of distraction display". Ibis 91 (2): 179–188. doi:10.1111/j.1474-919X.1949.tb02261.x.
- Armstrong, Edward (1949). "Diversionary display.--Part 1. Connotation and terminology". Ibis 91 (1): 88–97. doi:10.1111/j.1474-919X.1949.tb02239.x.
- Barrows, Edward M. (2001) Animal behavior desk reference. CRC Press. 2nd ed. p. 177 ISBN 0-8493-2005-4
- Armstrong, Edward (1954). "The ecology of distraction display". British Journal of Animal Behaviour 2 (4): 121–135. doi:10.1016/S0950-5601(54)80001-3.
- Caro, Tim (2005). "Nest defense". Antipredator Defenses in Birds and Mammals. Chicago, IL: The University of Chicago Press. pp. 335–379.
- Ruxton, Graeme D; Thomas N. Sherratt; Michael Patrick Speed. (2004) Avoiding attack: the evolutionary ecology of crypsis, warning signals and mimicry. Oxford University Press. ISBN 0-19-852859-0. p. 198
- Foster, Susan (1988). "Diversionary displays of paternal stickleback: Defenses against cannibalistic groups". Behavioral Ecology and Sociobiology 22 (5): 335–340. doi:10.1007/BF00295102.
- Ridgway, Mark; McPhail, John (1987). "Raiding shoal size and a distraction display in male sticklebacks (Gasterosteus)". Canadian Journal of Zoology 66 (1): 201–205. doi:10.1139/z88-028.
- Whoriskey, Frederick (1991). "Stickleback distraction displays: Sexual or foraging deception against egg cannibalism?". Animal Behaviour 41 (6): 989–995. doi:10.1016/S0003-3472(05)80637-2.
- Whoriskey, Frederick; FitzGerald, Gerard (1985). "Sex, cannibalism and sticklebacks". Behavioral Ecology and Sociobiology 18 (1): 15–18. Retrieved October 26, 2015.
- Tilson, Ronald; Tenaza, Richard (1976). "Monogamy and duetting in an Old World monkey". Nature 263 (5575): 320–321. doi:10.1038/263320a0.
- Long, Charles (1993). "Bivocal distraction nest-site display in the red squirrel, Tamiasciurus hudsonicus, with comments on outlier nesting and nesting behavior". Canadian Field-Naturalist 107 (1): 104–106. Retrieved October 13, 2015.
- Byrkjedal, Ingvar (1989). "Nest defense behavior of lesser golden-plovers" (PDF). Wilson Bulletin 101 (4): 579–590.
- Duffey, Eric; Creasey, N. (1950). "The "rodent-run" distraction-behaviour of certain waders". Ibis 92 (1): 27–33. doi:10.1111/j.1474-919X.1950.tb01730.x.
- Rowley, Ian (1962). ""Rodent-run" distraction display by a passerine, the superb blue wren Malurus cyaneus (L.)". Behaviour 19 (1–2): 170–176. doi:10.1163/156853961X00240.
- Lack, David (1932). "Some breeding-habits of the European nightjar". Ibis 74 (2): 266–284. doi:10.1111/j.1474-919X.1932.tb07622.x.
- Walters, Jeffrey (1990). "Anti-predatory behavior of lapwings: Field evidence of discriminative abilities" (PDF). Wilson Bulletin 102 (1): 49–70.
- Curio, E.; Ernst, U.; Vieth, W. (1978). "Cultural transmission of enemy recognition: One function of mobbing". Science 202 (4370): 899–901. doi:10.1126/science.202.4370.899. JSTOR 1747814. PMID 17752463.
- Wiklund, Christer; Stigh, Jimmy (1983). "Nest defense and evolution of reversed sexual size dimorphism in snowy owls Nyctea scandiaca". Ornis Scandinavica 14 (1): 58–62. doi:10.2307/3676252. Retrieved October 26, 2015.
- Tinbergen, Nikolaas (1952). ""Derived" activities: Their causation, biological significance, origin, and emancipation during evolution". The Quarterly Review of Biology 27: 1–32. doi:10.1086/398642. Retrieved October 26, 2015.
- Grimes, A. (1936). ""Injury feigning" by birds". Auk 53 (4): 478–480. doi:10.2307/4078314. JSTOR 4078314.
- Barash, David (1975). "Evolutionary aspects of parental behavior: Distraction behavior of the alpine accentor". Wilson Bulletin 87 (3): 367–373. Retrieved October 26, 2015.
- Pavel, Vaclav; Bures, Stanislav (2001). "Offspring age and nest defence: Test of the feedback hypothesis in the meadow pipit". Animal Behaviour 61: 297–303. doi:10.1006/anbe.2000.1574.
- Hudson, Peter; Newborn, David (1990). "Brood defence in a precocial species: Variations in the distraction displays of red grouse, Lagopus lagopus scoticus". Animal Behaviour 40: 254–261. doi:10.1016/S0003-3472(05)80920-0.
- Sonerud, Geir (1988). "To distract display or not: Grouse hens and foxes". Oikos 51 (2): 233–237. doi:10.2307/3565647. Retrieved October 26, 2015.
- Ristau, Carolyn (1991). "Aspects of the cognitive ethology of an injury-feigning bird, the piping plover". Cognitive Ethology: The Minds of Other Animals. Hillsdale, NJ: Lawrence Erlbaum Associates. pp. 91–126.
- Baskett, Thomas S. and Sayre, Mark W. and Tomlinson, Roy E. (1993) Ecology and Management of the Mourning Dove. Stackpole Books, p. 167, ISBN 0-8117-1940-5.
- Sordahl, Tex (1990). "The risks of avian mobbing and distraction behavior: an anecdotal review" (PDF). Wilson Bulletin 102 (2): 349–352.
http://phys.org/news/2014-08-cordilleran-terrane-collage.html
In the August 2014 issue of Lithosphere, Steve Israel of the Yukon Geological Survey and colleagues provide conclusions regarding the North American Cordillera that they say "are provocative in that they blur the definition of tectonic terranes, showing that many observations of early geologists can be attributed to evolving geologic processes rather than disparate geologic histories."
Western North America is characterized by the Cordilleran accretionary mountain belt, which has seen episodic plate convergence since the early Paleozoic, more than 250 million years ago. Israel and colleagues write that this long-lived accretionary history of the northern Cordillera has resulted in a "collage of terranes" and overlap assemblages that seemingly have quite disparate geologic histories.
Early geologic research in the North American Cordillera identified several tectonic terranes that were considered to be fundamentally different from one another based upon lithologic and age characteristics. Many of these terranes were thought to have traveled great distances before separately accreting to the ancient North American margin.
Two of the largest terranes, the Alexander terrane and Wrangellia, found along western British Columbia, southwest Yukon, and eastern Alaska, have long been considered to be exotic to each other and to North America. However, in their investigation of the relationship between Wrangellia and the Alexander terrane, Israel and colleagues have found evidence suggesting that the two terranes have shared a history since the latest Devonian (about 364 million years ago), and that portions of Wrangellia are built upon a basement composed of the Alexander terrane.
Israel and colleagues note that this conclusion would see the collage transformed into a more coherent picture, with geologic ties between terranes that were previously thought of as completely separate entities. They conclude that this view of terranes adheres to the more traditional ideas of possible links between Laurentia and the accreted terranes, with a few seemingly truly exotic pieces caught up in the collage.
Explore further: Mountain-building and the end of an ancient ocean beneath modern New England
More information: New ties between the Alexander terrane and Wrangellia and implications for North America Cordilleran evolution Steve Israel et al., Yukon Geological Survey, P.O. Box 2703(K-14), Whitehorse, Yukon, Y1A 2C6, Canada. August 2014 issue; online at http://dx.doi.org/10.1130/L364.1.
http://www.technologyreview.com/article/428115/not-your-kids-sandbox/
Smart Pebbles: To test their algorithm, researchers designed tiny cubes with built-in processors and magnets.
Imagine that you have a big box of sand in which you bury a tiny model of a footstool. A few seconds later, you reach into the box and pull out a full-size footstool: the sand has organized itself into a large-scale replica of the model.
Researchers at MIT’s Computer Science and Artificial Intelligence Laboratory have developed an algorithm that could make such “smart sand” possible. The grains of sand would be tiny computational devices that can pass messages to each other and selectively attach to their neighbors; in their research, the MIT team modeled the grains with cubes measuring about 10 millimeters to an edge. The cubes had rudimentary microprocessors inside and switchable magnetic connectors on four of their sides.
Algorithmically, the main challenge in developing smart sand is that such tiny grains would have very few computational resources. “How do you develop efficient algorithms that do not waste any information at the level of communication and at the level of storage?” asks Daniela Rus, a professor of computer science and engineering and a coauthor with her student Kyle Gilpin on a paper presented at the IEEE International Conference on Robotics and Automation in May. Rus and Gilpin’s answer is to convey shape information with a simple physical model.
To see how the algorithm works, picture each grain of sand as a square in a two-dimensional grid. Now imagine that some of the squares—say, forming the shape of a footstool—are missing. That’s where the physical model is embedded.
The grains pass messages to each other to determine which have missing neighbors. Grains with missing neighbors are in one of two places: the perimeter of the sand heap or the perimeter of the embedded shape. Once the grains surrounding the embedded shape identify themselves, they pass messages to other grains a fixed distance away, which in turn identify themselves as defining the perimeter of the duplicate. If the duplicate is supposed to be 10 times the size of the original, each grain surrounding the embedded shape will map to 10 grains of the duplicate’s perimeter. The grains not used to form the duplicate shape detach from their neighbors and simply fall away as the assembled object is lifted from the heap.
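To illustrate the geometry of that mapping step, here is a toy, centralized sketch. It is an illustration only: the real grains discover missing neighbors by distributed message passing, and they also fill in the gaps between mapped perimeter positions, both of which are omitted here.

// Toy, centralized sketch of the duplication geometry: find grains that
// border the embedded (missing) shape, then mark a scaled position on
// the duplicate's perimeter for each of them.
#include <cstdio>
#include <vector>

int main() {
    const int N = 8, SCALE = 2;  // heap size and magnification factor
    // 1 = grain present, 0 = missing (a 2x2 "shape" buried in the heap)
    std::vector<std::vector<int>> heap(N, std::vector<int>(N, 1));
    heap[3][3] = heap[3][4] = heap[4][3] = heap[4][4] = 0;

    // Grains with a missing 4-neighbor identify themselves as the
    // perimeter of the embedded shape; out-of-grid neighbors (the heap's
    // own outer perimeter) are ignored here.
    std::vector<std::vector<int>> dup(N * SCALE, std::vector<int>(N * SCALE, 0));
    const int dr[4] = {1, -1, 0, 0}, dc[4] = {0, 0, 1, -1};
    for (int r = 0; r < N; ++r)
        for (int c = 0; c < N; ++c) {
            if (!heap[r][c]) continue;             // skip missing cells
            for (int k = 0; k < 4; ++k) {
                int nr = r + dr[k], nc = c + dc[k];
                if (nr >= 0 && nr < N && nc >= 0 && nc < N && !heap[nr][nc])
                    dup[r * SCALE][c * SCALE] = 1; // scaled perimeter point
            }
        }

    // Print the duplicate grid: '#' marks mapped perimeter grains.
    for (const auto& row : dup) {
        for (int v : row) std::putchar(v ? '#' : '.');
        std::putchar('\n');
    }
    return 0;
}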
The cubes that Gilpin and Rus used in experiments enact this simplified, two-dimensional version of the system. But computer simulations demonstrate that the algorithm would work with a three-dimensional block of cubes, too, by treating each layer of the block as its own two-dimensional grid.
The same algorithm can be varied to produce multiple similarly sized copies of a sample shape or to produce a single large copy of a large object. “Say the tie rod in your car has sheared,” Gilpin says. “You could duct-tape it back together, put it into your system, and get a new one.”
https://www.programming-techniques.com/2019/04/introduction-to-cpp-programming-language.html
Introduction to C++ Programming language
Definition of C++
C++ is a case-sensitive, general-purpose, free-form programming language that supports procedural, object-oriented, and generic programming. Because it encapsulates both high- and low-level language features, C++ is considered a middle-level language. Many platforms, including Windows, Linux, Unix, and Mac, support the C++ programming language.
Benefits of C++ Over C Language
- In C++, stronger type checking is available.
- The OOP features in C++, such as abstraction, encapsulation, and inheritance, make it more worthy and useful for programmers.
- C++ supports and allows user-defined operators (i.e., operator overloading) and function overloading.
- C++ provides virtual functions, as well as constructors and destructors for objects.
- There is exception handling in C++.
- In C++, variables can be declared anywhere in the program, as long as they are declared before they are used. (Several of these features are illustrated in the short sketch after this list.)
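The short program below illustrates several of the features listed above: function overloading, operator overloading, a constructor and destructor, and exception handling. The Length class is only an illustrative example.

#include <iostream>
#include <stdexcept>

class Length {
public:
    explicit Length(double m) : meters(m) {        // constructor
        if (m < 0) throw std::invalid_argument("negative length");
    }
    ~Length() {}                                   // destructor

    // Operator overloading: '+' gains a meaning for Length objects.
    Length operator+(const Length& other) const {
        return Length(meters + other.meters);
    }
    double value() const { return meters; }

private:
    double meters;
};

// Function overloading: the same name with different parameter types.
void describe(int n)    { std::cout << "int: " << n << '\n'; }
void describe(double d) { std::cout << "double: " << d << '\n'; }

int main() {
    describe(42);
    describe(3.14);
    try {
        Length total = Length(1.5) + Length(2.5);
        std::cout << "total: " << total.value() << " m\n";
        Length bad(-1.0);                          // throws an exception
    } catch (const std::exception& e) {
        std::cout << "caught: " << e.what() << '\n';
    }
    return 0;
}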
History of C++ Programming Language
Bjarne Stroustrup developed the C++ programming language in 1980 at the Bell Laboratories of AT&T (American Telephone & Telegraph), located in the U.S.A.
The founder of C++ language is Bjarne Stroustrup.
It is developed for adding a feature of Object Oriented Programming (OOP) in C without significantly changing the C component.
C++ is often called a superset of C, meaning that any valid C program is also a valid C++ program.
Usage of C++ Programming Language
There are several benefits of using C++ for developing applications, and many products have been developed in this language due to its features and security. Below are the areas where C++ has been widely and effectively used.
It is used in the development of applications based on graphical user interfaces, including heavily used applications such as Adobe Photoshop.
It is also used for developing most of Microsoft's operating systems and a few parts of the Apple operating system. Microsoft Windows 95, 98, 2000, and XP, Office, Internet Explorer, Visual Studio, and the Symbian mobile operating system are mainly written in the C++ language.
The rendering engines of various web browsers are programmed in C++ simply because of the speed that it offers. Rendering engines require fast execution to make sure that users don't have to wait for content to come up on the screen. As a result, such low-latency systems employ C++ as their programming language.
The C++ language is also used for developing games. It handles the complexity of 3D games and helps to optimize resources, and it supports the multiplayer option with networking. C++ allows procedural programming for CPU-intensive functions and provides control over hardware, and the language is very fast, which is why it is widely used in developing games and gaming engines. C++ is also commonly used in developing suites of game tools.
Postgres and MySQL, two of the most widely used databases, are written in C++ and C, the precursor to C++. These databases are used in almost all of the well-known applications that we use in our day-to-day lives, such as Quora and YouTube.
The compilers of various programming languages use C and C++ as their backend implementation languages. This is because both C and C++ are relatively low-level languages, closer to the hardware, and are therefore an ideal choice for such compilation systems.
http://www.akshardhool.com/
What does the future hold? This question has obsessed the human mind ever since the dawn of wisdom awakened mankind. As humans watched the day and night skies, they recognized periods, or cycles, in which celestial or heavenly bodies appeared in the sky. The most basic of these was of course the sun, which appeared all day long and disappeared at night. Other planets followed their own courses, which were not so simple. Then there were groups of stars, or constellations, which at first appeared to the human eye to be stationary. However, observations over longer periods made humans aware that these too have their own cycles of appearance and disappearance. It was natural for early humans to interpret these celestial cycles as some form of divine communication that would affect not only personal behavior, but also the affairs of communities or states. From this basic idea, the subject of astrology subsequently developed. Until the 17th century, astrology was considered a scholarly tradition. It actually led to the development of astronomy as a science and also contributed to other branches of science such as meteorology and medicine. Only by the end of the 17th century did astrology lose its academic standing and become regarded as a pseudoscience.
In the Indian context, we have a long tradition of astrologers or astronomers, who were also called mathematicians, because their work involved many new mathematical concepts. Some of these early mathematicians include Aryabhatta (आर्यभट्ट), Varahamihira (वराहमिहिर), Brahmagupta (ब्रम्हगुप्त), Bhattotpala (भट्टोत्पल) and Bhaskaracharya (भास्कराचार्य). Some of their works can be listed as Aryasiddhanta (आर्यसिद्धांत), Brahmasiddhanta (ब्रम्हसिद्धांत), Brhjjataka (बृहज्जातक), Brhatsamhita (बृहत्संहिता), and Lilavati (लीलावती). Besides these, another literary work stands out, because its author or the period, remains unknown. This work is known as Sooryasiddhanta (सूर्यसिद्धांत). The presently available transcript of this treatise is believed to be from the beginning of last millennium.
The basis of all astrological observations has always been the path followed by the Sun around the earth (called the ecliptic) against a background full of constellations and asterisms. (Asterisms are groups of stars that appear to follow certain patterns, which our fertile minds have managed to associate with figures and outlines of living or non-living things that we see on earth.) For convenience of observation and measurement, Sooryasiddhanta divides the Sun's path around the earth, the ecliptic, in the following fashion.
विकलानां कला षष्ठ्या तत्षष्ट्या भाग उच्यते
तत्त्रिंशता भवेद्राशिर्भगणो द्वादशैव ते II २८ II
“Sixty seconds (vikala) make a minute (kala); sixty of these, a degree (bhaga); of thirty of the latter is composed a sign (rashi); twelve of these are a revolution (bhagana)”.
(Sooryasiddhanta 1.28)
For Astrology purposes, each of the “Rashi” of thirty degrees is associated or belongs to an asterism that is seen in the background of that “Rashi”. These asterisms are known as the “Signs of the Zodiac”. (The zodiac is an area of the sky that extends approximately 8° to north or south of the ecliptic).
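As a quick illustration of this chain of units, the small sketch below (the input longitude is arbitrary) decomposes a count of arc-seconds (vikala) into whole revolutions, signs, degrees, minutes, and seconds:

// Expressing the verse's units in code: 60 vikala = 1 kala, 60 kala =
// 1 bhaga (degree), 30 bhaga = 1 rashi (sign), 12 rashi = 1 bhagana
// (revolution). Purely illustrative.
#include <cstdio>

int main() {
    long vikala = 500000;        // an arbitrary longitude in arc-seconds

    long kala    = vikala / 60;  // total minutes
    long bhaga   = kala / 60;    // total degrees
    long rashi   = bhaga / 30;   // total signs
    long bhagana = rashi / 12;   // whole revolutions

    // 500000 vikala works out to 4 signs, 18 degrees, 53 minutes, 20 seconds.
    std::printf("%ld vikala = bhagana %ld, rashi %ld, bhaga %ld, "
                "kala %ld, vikala %ld\n",
                vikala, bhagana, rashi % 12, bhaga % 30, kala % 60,
                vikala % 60);
    return 0;
}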
It may come as a surprise to many of us that this system of dividing the ecliptic into "Rashis", and the concept of associating prominent asterisms with a "Rashi" as a particular sign of the Zodiac, is virtually identical in ancient Indian Astrology, or "Jyotisha", and in western Astrology, which is based on ancient Greek Astrology. Some astrologers believe that the "Rashi" concept was a purely Indian effort, copied first by the people of the Middle East, from where it propagated to the west. There is another school of thought, which believes that the concept of "Rashi" originated in ancient Greece, from where it was picked up by Indians.
Be that as it may, there is hardly any point in entering the fray, as it would neither lead anywhere nor win any argument. We shall therefore refrain from joining the debate and concentrate on the fact that the concept of "Rashis" appears in ancient works of both Indian and Greek origin. We can therefore conclude that there must have been a common source: a person or a group of knowledgeable persons who were familiar with literary works of both Indian and Greek origin, and who were naturally bilingual (proficient in Sanskrit and Greek).
As it turns out, we have a readymade source of information in the form of two books originally written by the famous Indian mathematician Varahamihira (sixth century CE), who seems to have had a full knowledge of Yavana (Greek) astronomical terms and doctrines1. He even gives the Greek terms for the Sanskrit names of the signs of the Zodiac. Varahamihira, in his books, quotes from treatises written by many other learned men or Pundits. However, he does not reveal to us the sources of his knowledge of Greek doctrines or names. MM P.V.Kane2 gives two instances where Varahamihira, in his treatise Brhatsamhita, has referred to the word "Yavana". MM P.V.Kane says, "The word Yavana appears to be used in two senses by Varahamihira. In verse 14 of chapter 2, this word means the Yavana people in general, but in some other places (such as verse 1 of chapter 11), this word means either Yavana authors or some one writer from among them".
Bhattotpala or Bhatta-Utpala (भट्टोत्पल) was a 10th century astrologer-mathematician. According to Al-Biruni, he was a Pundit from Kashmir2, 3. He wrote commentaries on Varahamihira's two books, Brhjjataka (बृहज्जातक) and Brhatsamhita (बृहत्संहिता). Bhattotpala, besides commenting on Varahamihira's original text, also refers, like Varahamihira, to quotes from treatises written by other learned men or Pundits. His references to "Yavanas" (Greeks) appear to be more extensive. What is of special significance is that Bhattotpala mentions certain "Yavana" Pundits either by their titles or names, such as "Yavanadhipati" (यवनाधिपति), "Yavanendra" (यवनेंद्र), "Yavanacharya" (यवनाचार्य) and finally "Yavaneshvara" (यवनेश्वर).
According to MM P.V.Kane2, of these names, "Yavanacharya" appears to have been an ancient Greek writer. Regarding the names "Yavanadhipati", "Yavanendra" and "Yavaneshvara", there is no clarity on whether Bhattotpala is referring to the same author he calls "Yavanacharya" or to different authors from different centuries. In another literary work known as Saravali (सारावलि) by Kalyanavarma (कल्याणवर्मा), written in the intervening centuries between Varahamihira's books and Bhattotpala's commentary, we do find words like "Yavanaraja" (यवनराजा), "Yavanavrddha" (यवनवृद्ध), and "Yavanendra" (यवनेंद्र). There is also a mention of the name "Purva Yavanendras" (पूर्व यवनेंद्र), implying that its author knew 'early and later' Yavana writers on Astrology. Bhattotpala makes one interesting comment about "Yavaneshwara", though. He says, "Varaha refers to the views of an ancient Yavanacharya, but he (Bhattotpala) has not seen the work. He has only read the work of Yavaneshwara Sphujidhwaja (स्फुजिध्वज), who mentions the views of Yavana writers of a bygone age, and this Sphujidhwaja flourished in an age that was later than the beginning of Shaka-kala (CE 78)". The original Sanskrit comment is given below.
From this rather confusing state of things, we can make out two facts clearly.
1. Firstly, before the time of Varahamihira (6th century CE), there were several Greek or Yavana astrologers known to Indians. They might have been based in India or Greece. One of them, commonly known as "Yavanacharya" or "Vrddhayavana" (वृद्धयवन), was rather well known.
2. In the intervening centuries between Varahamihira and Bhattotpala, another Greek Astrologist “Sphujidhwaja” became well known. It is not clear whether he was commonly referred to by names such as “Yavaneshwara”, “Yavanadhipati”, “Yavanendra” or “Yavanaraja” or these names were used for an earlier Greek writer, who was also a King.
According to Bhau Daji1, "Word Sphujidhvaja is a corruption of the Greek name Speusippus. Diogenes Laertius mentions two authors of this name, one of whom was a physician called Herophileus Alexandrinus, and may, possibly, be the astronomer whose works were translated and studied in India". However, no other modern scholar seems to corroborate this view.
Matters would have rested here, with "Yavanacharya" and "Yavaneshwara" both lost in a sea of obscurity. However, in the summer of the year 1897, an Indian scholar, Pundit HaraPrasad Shastry, Professor of Sanskrit at Presidency College, Kolkata, happened to briefly examine a palm leaf manuscript4 in the library of His Excellency, The Maharaja of Nepal. The manuscript examined by Pundit Shastry was a remarkable one, as it was a complete copy of a book called "Yavana-Jataka". Pundit Shastry made an effort to read the last verse of this manuscript, which said "that in the year 91 of some (unspecified) era, Yavanesvara translated from his own language into Sanskrit prose, Horashastra and that in the year 191, king Sphujidhwaja rendered that shastra into four thousand Indravajra verses"7.
Mahamahopadhyay P.V.Kane managed, in the year 1935, to get a transcript of the above-mentioned manuscript from the Nepal Darbar, and he translates5 the last paragraph as, "The seal of the sentences of the ocean of the knowledge of hora (astrology) was guarded by the veil of his own language and was seen in the year 91. Formerly, the Lord of Yavanas, being endowed with the vision of truth by the favour of the Sun, declared this shastra of the knowledge of hora (astrology) in unblemished sentences for enabling the people to grasp it. There was a talented king named Sphujidhvaja, who turned this (shastra) into Indravajra verses, four thousand in number, in the year 191"7. One thing becomes very clear from both these translations: King Sphujidhwaja (who was probably a Greek king, as Bhau Daji says) was not the original author of the book "Yavana-Jataka" (Greek book of Astrology). He merely converted it from the original prose into verses. MM P.V. Kane also feels that King Sphujidhwaja was probably from the Gupta era (3rd to 6th century CE).
Meanwhile, MM P.V.Kane had turned his attention to the other available manuscripts of the book "Yavana-Jataka". He found that they were different from the Nepal manuscript, and that some of them differed among themselves. However, he found that most of the manuscripts had names implying that the book was actually called "Vrddha-Yavana-Jataka", written by one "Minaraja". Regarding the identification of this "Minaraja", MM P.V.Kane says6, "The name Minaraja is not necessarily non-Indian but it is not possible to shut our eyes to the fact that Menander (165/155–130 BCE), a Greco-Bactrian king, has been identified with Milinda of the Buddhist work 'Questions of Milinda'. Minaraja may be a Sanskrit rendering of a foreign word like Menendra".
Bhattotpala's commentary on Varahamihira's Brhjjataka contains twelve verses about the characteristics of the twelve rashis (signs of the Zodiac), which he quotes as told by Yavanesvara (commentary on Brhjjataka verse 1.5). In Sphujidhwaja's "Yavana-Jataka"5, the same twelve verses about the twelve rashis appear, following a first verse that is corrupt. The same twelve verses also occur in the "Vrddha-Yavana-Jataka"6 composed by Yavanacharya Minaraja, who is described as the overlord of Yavanas. These twelve verses about Rashis are given below in the Appendix.
We can therefore infer the following from the above.
1. A Greek king (most likely Menander), known by various names such as Yavanesvara or Yavanacharya, authored a text in the Greek language, probably around 140 BCE, about the Greek system of Astrology. The book was known first as "Yavana-Jataka" and later as "Vrddha-Yavana-Jataka". This inference finds support from another reference2 in Bhattotpala's commentary on Brhjjataka. Here, he quotes the great Sage Badarayana (second century CE, according to the Marathi Vishva-Kosha of Late Shri Laxmanshastri Joshi), who mentions a certain "Yavanendra". This would mean that the Greek king who wrote a text about the Greek system of Astrology actually preceded Sage Badarayana, which matches well with King Menander's age.
2. Varaha-Mihira, Kalyan-Varma and many other authors referred to this treatise, while authoring their own works.
3. Sometime in the Gupta era, another Greek astrologer, Sphujidhwaja (Greek name Speusippus), converted the original Greek prose into 4000 verses and named the result "Yavana-Jataka".
4. Bhattotpala (10th century CE), though he had no access to the original "Vrddha-Yavana-Jataka", referred to Sphujidhwaja's book in his commentaries on Varahamihira's books, Brhjjataka and Brihat-samhita.
1. Brief Notes on the Age and Authenticity of the Works of Aryabhata, Varahamihira, Brahmagupta, Bhattotpala, and Bhaskaracharya: By Bhau Daji: The Journal of the Royal Asiatic Society of Great Britain and Ireland, New Series, Vol. 1, No. 1/2 (1865), pp. 392-418
2. Varahamihira and Utpala: By P. V. Kane: Journal of the Bombay Branch of the Asiatic Society, Vol. 24-25, 1949, pp. 1-31
3. Sanskrit Literature Known to Al-Biruni: By Ajay Mitra Shastri: Indian Journal of History of Science, Vol. 10, p. 130
4. Notes on Palm-leaf MSS in the Library of His Excellency the Maharaja of Nepal: By Pundit HaraPrasad Shastry: Journal of the Asiatic Society of Bengal, Vol. 66, 1897, pp. 310-316
5. The Yavanajataka of Sphujidhwaja: By MM P. V. Kane: Journal of the Bombay Branch of the Asiatic Society, Vol. 30-2, 1955, pp. 1-5
6. Yavaneshvara and Utpala: By MM P. V. Kane: Journal of the Bombay Branch of the Asiatic Society, Vol. 30-1, 1955, pp. 1-5
7. Note: Bill M. Mak of Kyoto University challenges this translation in his research paper "The Date and Nature of Sphujidhvaja's Yavanajātaka Reconsidered in the Light of Some Newly Discovered Materials", published in 2013. He gives a new translation in which the numbers 91 and 191 are omitted.
15th March 2020
http://ioarvanit.gr/en/archives/2992
Measuring the distance from our robot to other objects is one of the most common measurements we want to obtain. For example, if we are building an autonomous vehicle, we want to check its distance from obstacles to help it make the right decisions about its course. There are also many other examples of robots that we want to activate mechanisms when something or someone gets close to them.
One of the simplest, cheapest and most accurate ways to measure distance is by using ultrasonic sensors. Their working principle is based on the fact that sound is reflected by most objects and materials.
These sensors have a transmitter that sends a short ultrasonic burst and a receiver that senses the ultrasound upon its return. Knowing the speed of sound in air (approximately 343 m/sec), we can calculate the distance it traveled if we measure the time it takes for the ultrasound to return to the sensor.
All ultrasonic sensors operate in a similar way. They send a short (a few microseconds long) ultrasonic burst from the transmitter and measure the time it takes for the sound to return to the receiver.
Let’s say that it took 10 milliseconds for the ultrasound to return to the sensor. That means:
- that the time in seconds is 0.01.
- Knowing that sound travels 343 meters through air every second, we can calculate the distance in meters by simply multiplying seconds by 343. In our case 0.01 × 343 = 3.43 meters.
- This is the distance that the ultrasound traveled to the obstacle and back to the sensor, so the obstacle is 3.43 / 2 = 1.715 meters away from the sensor (see the helper function sketched below).
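The whole conversion fits in one small helper function. This is just a sketch in Arduino-style C++; the function name is mine, not from the original post:

```cpp
// Convert an echo round-trip time (in microseconds) to the obstacle
// distance (in meters), assuming sound travels at 343 m/sec in air.
float echoTimeToMeters(unsigned long microseconds) {
  float seconds   = microseconds / 1000000.0;  // microseconds -> seconds
  float roundTrip = seconds * 343.0;           // total distance the sound traveled
  return roundTrip / 2.0;                      // one-way distance to the obstacle
}

// Example: echoTimeToMeters(10000) returns 1.715,
// matching the 10-millisecond worked example above.
```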
Pros and Cons.
The main advantages of using ultrasonic sensors to measure distance are:
- they are cheap and there is a plethora of choices in the market,
- they are not affected by the color or the transparency of obstacles,
- they are not affected by lighting conditions,
- they are pretty accurate in their measurements.
Their drawbacks are:
- They don’t work well enough on obstacles with small surfaces.
- The angle of the surface of the obstacle is crucial for the sensor.
- Obstacles made from sound-absorbing materials (for example sponges) are hard for the sensor to detect, since they absorb sound.
Choosing a sensor.
There is a wide variety of ultrasonic sensors on the market for most robotics platforms. For those who prefer working with the Lego platform, both the EV3 and the older NXT include ultrasonic sensors, and there are many examples of their use in the classroom.
If you are an Arduino or Raspberry Pi fan and want to dive deeper into how these sensors work, there are several options that you can find online. The most common and affordable choice is the HC-SR04, which costs less than a euro on eBay (August 2018). For more details and comparative tests with various ultrasonic sensors, I advise you to watch two detailed videos (here and here) from the Andreas Spiess channel on YouTube.
Connecting the sensor to Arduino and programming.
The HC-SR04 sensor has 4 pins:
- VCC, that is connected to 5V,
- GND, that is connected to Ground,
- TRIG (Trigger), that is connected to the transmitter to send the ultrasonic burst,
- ECHO, that is connected to the receiver.
There are two ways to connect the sensor to Arduino:
- Connect TRIG and ECHO to different digital pins and do all the hard work and calculations in our program, or
- connect TRIG and ECHO to the same digital pin and use a library to do all the calculations.
I am going to start with the second way (the easy one) and then stay longer on the first, which gives the programmer more control and which, as an educator, I find more interesting.
One pin connection and the NewPing library.
For my program to work, I will need to install the NewPing library in my Arduino IDE, using the Library Manager.
Now I can write a simple program to print the distance obtained by the sensor to the serial monitor.
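The original sketch is embedded in the post and not reproduced here, so below is a minimal example of what such a program could look like with NewPing. The pin number and the range limit are my own choices, not necessarily the author's:

```cpp
#include <NewPing.h>

const int SIGNAL_PIN = 12;        // TRIG and ECHO tied together (hypothetical wiring)
const int MAX_DISTANCE_CM = 200;  // ignore echoes from farther away than this

// One-pin mode: pass the same pin as both trigger and echo.
NewPing sonar(SIGNAL_PIN, SIGNAL_PIN, MAX_DISTANCE_CM);

void setup() {
  Serial.begin(9600);
}

void loop() {
  delay(50);                           // give the sensor time between pings
  unsigned int cm = sonar.ping_cm();   // 0 means no echo within MAX_DISTANCE_CM
  Serial.print(cm);
  Serial.println(" cm");
}
```

Notice that ping_cm() hides all the triggering and timing; that is exactly the "hard work" we will do by hand in the two-pin version.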
I uploaded the program to the board and started testing.
Two pin connection – Calculating distance from time.
As an educator, I find it more interesting to dig into the working principle of things, even if that means more work for my students. In order to do so, in this example we will have to forget the luxury of the NewPing library and make all the calculations ourselves. First of all, I changed the schematic by connecting TRIG and ECHO to different digital pins.
Before I can start coding, there are some things I need to clarify:
- TRIG (Trigger) has a default LOW state; when we change it to HIGH, the sensor sends an ultrasonic burst.
- When ECHO receives the bouncing sound, it returns a HIGH pulse to the Arduino.
- I will use the pulseIn function to measure the time the ECHO pin stays in the HIGH state. This function returns time in microseconds.
Now I can start my algorithm (a sketch implementing it follows the list):
- Set the TRIG pin to HIGH.
- Wait for a short period of time (10 microseconds).
- Set the TRIG pin to LOW. Now I have sent a short ultrasonic burst.
- Get the time from the ECHO pin in microseconds.
- Convert microseconds to seconds (division by 1,000,000).
- Calculate the distance the sound traveled in meters: multiply seconds by 343 m/sec.
- Now I have the distance in meters. I will convert it to centimeters by multiplying by 100.
- This is the distance the sound traveled to the obstacle and back, so the distance of the obstacle from the sensor is half of that. So I divide the distance by 2.
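Since the original program is not reproduced here, the following is a minimal sketch of the algorithm above, assuming TRIG on pin 9 and ECHO on pin 10 (hypothetical wiring):

```cpp
const int TRIG_PIN = 9;   // hypothetical pin choices; adjust to your wiring
const int ECHO_PIN = 10;

void setup() {
  Serial.begin(9600);
  pinMode(TRIG_PIN, OUTPUT);
  pinMode(ECHO_PIN, INPUT);
}

void loop() {
  // Send a short ultrasonic burst: pulse TRIG HIGH for 10 microseconds.
  digitalWrite(TRIG_PIN, LOW);       // make sure TRIG starts LOW
  delayMicroseconds(2);
  digitalWrite(TRIG_PIN, HIGH);
  delayMicroseconds(10);
  digitalWrite(TRIG_PIN, LOW);

  // pulseIn returns how long ECHO stays HIGH, in microseconds.
  float seconds = pulseIn(ECHO_PIN, HIGH) / 1000000.0;  // microseconds -> seconds
  float meters  = seconds * 343.0;                      // distance the sound traveled
  float cm      = meters * 100.0 / 2.0;                 // round trip, so halve it

  Serial.print(cm);
  Serial.println(" cm");
  delay(100);
}
```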
I upload the program to my board and the sensor works just as it did with the NewPing library, returning decimal values since all my variables are float.
Digging even further – Finding the actual speed of sound based on temperature and humidity.
So far I used the speed of sound to calculate distance from time, assuming that it is a constant 343 m/sec. That is not actually true. The speed of sound depends on the "density" of the medium it travels through. In solid materials the speed of sound is greater than in liquids, and in liquids sound travels faster than in gases.
The ultrasonic sensor sends sound through air, which is a gas. In gases the speed of sound is affected mostly by the gas temperature, less by the gas humidity and even less by the gas pressure. For example, in air at a pressure of 1 atm and
- a temperature of 0 degrees Celsius (32 F) and 50% humidity, the speed of sound is 331.61 m/sec,
- a temperature of 20 degrees Celsius (68 F) and 50% humidity, the speed of sound is 343.99 m/sec,
- a temperature of 30 degrees Celsius (86 F) and 50% humidity, the speed of sound is 350.31 m/sec, and
- a temperature of 30 degrees Celsius (86 F) and 90% humidity, the speed of sound is 351.24 m/sec.
There are many online calculators for the speed of sound. I used the one on http://www.sengpielaudio.com/calculator-airpressure.htm to get the above results.
Since I had a cheap temperature and humidity sensor lying around (a DHT11), I decided to improve the calculations in my code, using these two values to estimate a more accurate speed of sound.
First of all, I embedded the new sensor in my schematic. Typical DHT11 sensors have either 3 pins (5V, GND and Signal) or 4 pins (5V, GND, Signal and NULL). The connections are as follows:
- 5V DHT11 –> 5V Arduino
- GND DHT11 –> GND Arduino
- SIGNAL DHT11 –> A0 pin Arduino
The next step was to add the measurement of temperature and humidity to my code using the dht.h library, which you can download from here. I followed the step-by-step tutorial from Brainy Bits, and now the only thing I needed was to calculate the actual speed of sound.
After a long search, I found that the formula needed was originally published in 1993 by Owen Cramer in his work "The variation of the specific heat ratio and the speed of sound in air with temperature, pressure, humidity, and CO2 concentration". I was also happy to find a Java implementation by a research team from the University of São Paulo, Brazil. With a few tweaks to adjust it to my code, the full program is as follows:
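The full program with the ported Cramer formula is embedded in the post and not reproduced here. As a stand-in, here is a minimal sketch of the overall structure. It uses the common linear approximation c ≈ 331.4 + 0.606·T + 0.0124·RH (in m/sec, with T in °C and RH in %) instead of the full Cramer formula, and it assumes the same hypothetical pins as before plus the DHT11 signal on A0; the dht.h calls follow Rob Tillaart's DHT library used in the Brainy Bits tutorial:

```cpp
#include <dht.h>   // Rob Tillaart's DHT library, as in the Brainy Bits tutorial

const int TRIG_PIN = 9;    // hypothetical wiring, as before
const int ECHO_PIN = 10;
const int DHT_PIN  = A0;

dht DHT;

void setup() {
  Serial.begin(9600);
  pinMode(TRIG_PIN, OUTPUT);
  pinMode(ECHO_PIN, INPUT);
}

void loop() {
  // Read temperature (degrees C) and relative humidity (%) from the DHT11.
  DHT.read11(DHT_PIN);
  float t  = DHT.temperature;
  float rh = DHT.humidity;

  // Linear approximation of the speed of sound in air (m/sec);
  // NOT the full Cramer (1993) formula used in the original program.
  float speedOfSound = 331.4 + 0.606 * t + 0.0124 * rh;

  // Trigger the HC-SR04 and time the echo, as in the previous sketch.
  digitalWrite(TRIG_PIN, LOW);
  delayMicroseconds(2);
  digitalWrite(TRIG_PIN, HIGH);
  delayMicroseconds(10);
  digitalWrite(TRIG_PIN, LOW);

  float seconds = pulseIn(ECHO_PIN, HIGH) / 1000000.0;
  float cm = seconds * speedOfSound * 100.0 / 2.0;

  Serial.print(cm);
  Serial.println(" cm");
  delay(2000);   // the DHT11 needs about two seconds between reads
}
```

As a sanity check, at 20 °C and 50% humidity the approximation gives 331.4 + 12.12 + 0.62 ≈ 344.1 m/sec, close to the 343.99 m/sec value from the online calculator above.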
I uploaded the program to my board and started testing. I was happy to have more accurate measurements, even if that does not play a significant role at small distances of a few centimeters.
Using the ultrasonic sensor in class.
The use of ultrasonic sensors in educational robotics is very common and there are hundreds of examples on the internet, using either the Lego platform or Arduino and Raspberry Pi. Recently, Tinkercad added a new command block for getting the distance from an ultrasonic sensor.
I find the analytical way of calculating the distance, from the time the sound takes to travel to the obstacle and back, particularly interesting for educational purposes.
In a previous project (a smart trash can) that we implemented with my students from the evening club Young Hackers, we spent a lot of time fully understanding the algorithm that calculates distance from time, using an analytical worksheet (in Greek). We implemented the algorithm in a block-style language (Ardublockly), which helped the students a lot to understand every step of the way.
http://wwx.wikia.com/wiki/Lebensraum
Lebensraum (German for "living space") was a belief in Germany in the early 20th century that Germany needed new land to expand into, especially toward the east. It became a major motivation for Nazi Germany's territorial aggression after 1937. Lebensraum was a reinterpretation of the by then century-old concept of Drang nach Osten. In his book Mein Kampf, Adolf Hitler detailed his belief that the German people needed Lebensraum – for a Großdeutschland, land, and raw materials – and that it should be taken in the East.
The idea of a Germanic people without sufficient space dates back to long before Adolf Hitler brought it to prominence. Through the Middle Ages, German population pressures led to settlement in Eastern Europe, a practice termed Ostsiedlung. The term Lebensraum in this sense was coined by Friedrich Ratzel in 1901, and was used as a slogan in Germany referring to the unification of the country and the acquisition of colonies, based on the English and French models. Ratzel believed that the development of a people was primarily influenced by its geographical situation and that a people that successfully adapted to one location would proceed naturally to another. These ideas can be seen in his studies of zoology and of biological adaptation. This expansion to fill available space, he claimed, was a natural and necessary feature of any healthy species.
Ratzel himself emphasized the need for overseas colonies, to which Germans ought to migrate, not expansion inside Europe. Wanklyn (1961) argues that Ratzel's theory was designed to advance science, and that politicians distorted it for political goals. Lebensraum was thus picked up and expanded by publicists of the day, including Karl Haushofer and General Friedrich von Bernhardi. In von Bernhardi's 1912 book Germany and the Next War, he expanded upon Ratzel's hypotheses and, for the first time, explicitly identified Eastern Europe as a source of new space. According to him, war, with the express purpose of achieving Lebensraum, was a distinct "biological necessity." As he explained with regard to the Latin and Slavic races, "Without war, inferior or decaying races would easily choke the growth of healthy budding elements." The quest for Lebensraum was more than just an attempt to resolve potential demographic problems: it was a necessary means of defending the German race against stagnation and degeneration.
'Lebensraum' and 'Konzentrationslager,' ("concentration camp") were coined in the 1900-1910 era regarding German policies in its colony of South-West Africa (now Namibia). During the first decade of the 20th century imperial Germany colonized the land and committed genocide against the local Herero and Nama peoples. Later use of these words suggests an important question: did Wilhelmine colonization and genocide in Namibia influence Nazi plans to conquer and settle Eastern Europe, enslave and murder millions of Slavs, and exterminate Gypsies and Jews? Madley (2005) argues that the German experience in Namibia was a crucial precursor to Nazi colonialism and genocide and that personal connections, literature, and public debates served as conduits for communicating colonialist and genocidal ideas and methods from the colony to Germany.
World War I
In September 1914, when victory in the World War seemed at hand, Berlin introduced a Lebensraum plan for postwar peace terms: the concept of Lebensraum was endorsed secretly by the Chancellor Theobald von Bethmann-Hollweg and the rest of the German government as a war aim in World War I. Documents discovered by the German historian Fritz Fischer have suggested that in the event of a German victory, one policy under discussion by the German government as part of its Septemberprogramm was to annex a strip of Poland and replace the population with Germans to set up a defensive barrier in the east. The population policy was never officially adopted nor put into effect. The significance of Fischer's discovery, as the Australian historian John Moses has noted, is that the goal of winning Lebensraum was already in German thinking long before 1933 and thus cannot be seen, as some German historians have argued, as solely Adolf Hitler's personal brain-child. The "September plan" was a proposal that was under discussion but was never adopted, and no movement of people was ever ordered. As the historian Raffael Scheck concluded, "The government, finally, never committed itself to anything. It had ordered the September Program as an informal hearing in order to learn about the opinion of the economic and military elites."
As the British historian A. J. P. Taylor noted in his 1963 foreword "Second Thoughts" to his 1961 book The Origins of the Second World War:
The German Empire planned to annex territory in both Lithuania and Poland for direct colonization by German colonists after the forcible removal of the Polish and Lithuanian populations. As early as April 1915, the Polish Border Strip plan against Poland, first suggested by General Erich Ludendorff in 1914, was approved as a German war aim by the Chancellor Theobald von Bethmann-Hollweg. The German historian Andreas Hillgruber argued that the foreign policy of General Ludendorff, with its demand for Lebensraum to be seized for Germany in Eastern Europe during World War I, was the prototype for German policy in World War II. Lebensraum almost became a reality in 1918. The new Communist regime of Russia concluded the Treaty of Brest-Litovsk with Germany, ending Russian participation in the war in exchange for the surrender of huge swathes of land, including the Baltic territories, Belarus, Ukraine, and the Caucasus. However, unrest at home and defeat on the Western Front forced Germany to abandon these favorable terms under the Treaty of Versailles, by which the newly acquired eastern territories were given up to Poland, to new nations such as Lithuania, Estonia and Latvia, and to a series of short-lived independent states in Ukraine.
The German historian Andreas Hillgruber argued the Treaty of Brest-Litovsk was the prototype for Hitler's vision of a great empire for Germany in Eastern Europe. Hillgruber wrote that:
The desire for Lebensraum was a key tenet of several nationalist and extremist groups in post-World War I Germany, notably the Nazi Party under Adolf Hitler. As the American historian Gerhard Weinberg noted, German demands for territorial revision went beyond merely regaining land lost under the Treaty of Versailles, and instead embraced calls for the German conquest and colonization of all of Eastern Europe, regardless of whether the land in question had belonged to Germany before 1918 or not. Likewise, the British historian Hugh Trevor-Roper argued that the goal of overthrowing Versailles was only a prelude to seizing Lebensraum in Eastern Europe for Germany, with no regard as to where Germany's 1914 frontiers had been. In Mein Kampf, Hitler was to write:
The official German history of World War II concluded that the conquest of Lebensraum was, for Hitler and the rest of the National Socialists, the most important German foreign-policy goal. At his first meeting with all of the leading generals and admirals of the Reich on February 3, 1933, Hitler spoke of "conquest of Lebensraum in the East and its ruthless Germanization" as his ultimate foreign-policy objective. For Hitler, the land which would provide sufficient Lebensraum for Germany was the Soviet Union, which was conveniently both a nation that possessed vast and rich agricultural land and one inhabited by what Hitler saw as Slavic Untermenschen (sub-humans), ruled over by what he regarded as a gang of bloodthirsty but grossly incompetent Jewish revolutionaries. In Hitler's view, the idea of restoring the 1914 borders of the Reich was absurd, as those borders did not provide sufficient Lebensraum; only a foreign policy aimed at the conquest of the proper quantity of Lebensraum would justify the necessary sacrifices that war entailed. In Hitler's view, history was dominated by a merciless struggle between different "races" for survival, and "races" that possessed large amounts of territory were innately stronger than those that did not. Eberhard Jäckel has expressed a Primat der Außenpolitik ("primacy of foreign policy") interpretation of German foreign policy, as opposed to the Primat der Innenpolitik ("primacy of domestic politics") thesis favored by some left-wing historians such as Timothy Mason. Jäckel wrote that since Hitler regarded the conquest of Lebensraum as his most important project, and since that could only be accomplished through war, domestic policy comprised simply preparing the nation for the inevitable struggle for Lebensraum.
There are, however, many historians such as Martin Broszat and Hans Mommsen who dismiss this "intentionalist" approach, and argue that the concept was actually an "ideological metaphor" in the early days of Nazism.
The practical implementation of the Lebensraum concept began in 1939 with Germany's occupation of Poland. Later, the ideology was also a major factor in Hitler's launching of Operation Barbarossa in June 1941. The Nazis hoped to turn large areas of Soviet territory into German settlement areas as part of Generalplan Ost. Developing these ideas, Nazi theorist Alfred Rosenberg proposed that the Nazi administrative organization in lands to be conquered from the Soviets be based upon the following Reichskommissariats:
The Reichskommissariat territories would extend up to the European frontier at the Urals. They were to be early stages in the displacement and dispossession of Russians and other Slavic peoples, and their replacement with German settlers, following the Nazi Lebensraum im Osten plans. When German forces entered Soviet territory, they promptly organized occupation regimes in the first two of these territories, the Reichskommissariats of Ostland and Ukraine. The defeat of the Sixth Army at the Battle of Stalingrad in February 1943, followed by defeat in the Battle of Kursk in July 1943 and the Allied landings in Sicily, put an end to the plans' implementation.
In his book Mein Kampf, Hitler notes that history is an open-ended struggle, and links the concept of Lebensraum with his own brand of racism and social Darwinism. Nevertheless, historians debate whether Hitler's position on Lebensraum was part of a larger program of world domination (the so-called "globalist" position) or a more modest "continentalist" approach, by which Hitler would have been satisfied with the conquest of Eastern Europe. Nor are the two positions necessarily contradictory, given the idea of a broader Stufenplan, or "plan in stages," which many, such as Klaus Hildebrand and the late Andreas Hillgruber, argue lay behind the regime's actions. Historian Ian Kershaw suggests just such a compromise, claiming that while the concept was originally abstract and undeveloped, it took on new meaning with the invasion of the Soviet Union. He goes on to note that even within the Nazi regime, there were differences of opinion about the meaning of Lebensraum, citing Rainer Zitelmann, who distinguishes between the near-mystical fascination with a return to an idyllic agrarian society (for which land was a necessity) as advocated by Darré and Himmler, and an industrial state, envisioned by Hitler, which would be reliant on raw materials and forced labor.
What seems certain is that echoes of lost territorial opportunities in Europe, such as the Treaty of Brest-Litovsk, played an important role in the Hitlerian vision for the distant future:
Racism is not a necessary aspect of expansionist politics in general, nor was it part of the original use of the term "Lebensraum". However, under Hitler, the term came to signify a specific, racist kind of expansionism. Karl Haushofer was an acquaintance of Rudolf Hess, Hitler's deputy, but had only limited influence on Hitler's ideas: "Haushofer primarily provided the academic and scientific support for the expansion of the Third Reich." Haushofer's ideas can be described as the claim that heavily populated countries have the right to expand and gain land from less populated countries. This was his adaptation of Ratzel's Lebensraum.
https://en.m.wikipedia.org/wiki/2014_North_American_polar_vortex
Early 2014 North American cold wave
The 2014 North American cold wave was an extreme weather event that extended through the late winter months of the 2013–2014 winter season, and was also part of an unusually cold winter affecting parts of Canada and parts of the north-central and upper eastern United States. The event occurred in early 2014 and was caused by a southward shift of the North Polar Vortex. Record-low temperatures also extended well into March.
Formed: January 2, 2014
Dissipated: April 10, 2014
Lowest pressure: 939 mb (27.7 inHg) (January 8, 2014 system)
Damage: $5 billion (United States)
Areas affected: Central United States, Eastern United States
Part of the 2013–14 North American winter
On January 2, an Arctic cold front initially associated with a nor'easter tracked across Canada and the United States, resulting in heavy snowfall. Temperatures fell to unprecedented levels, and low temperature records were broken across the United States. Business, school, and road closures were common, as well as mass flight cancellations. Altogether, more than 200 million people were affected, in an area ranging from the Rocky Mountains to the Atlantic Ocean and extending south to include roughly 187 million residents of the Continental United States.
Beginning on January 2, 2014, sudden stratospheric warming (SSW) led to the breakdown of the semi-permanent feature across the Arctic known as the polar vortex. Without an active upper-level vortex to keep frigid air bottled up across the Arctic, the cold air mass was forced southward as upper-level warming displaced the jet stream. With extensive snow cover across Canada and Siberia, Arctic air had no trouble remaining extremely cold as it was forced southward into the United States.
According to the UK Met Office, the jet stream deviated to the south (bringing cold air with it) as a result of an unusual contrast between cold air in Canada and mild winter temperatures in the United States. This produced significant wind where the air masses met, leading to bitter wind chills and worsening the impacts of the record cold temperatures.
On January 6, 2014, Babbitt, Minnesota was the coldest place in the country at −37 °F (−38 °C). The cold air reached as far as Dallas, which experienced a low temperature of 16 °F (−9 °C).
On January 7, 2014, the cold air reached even farther south to Houston, where the low temperature for that morning was 21 °F (−6 °C), two degrees shy of the record-low temperature of 19 °F (−7 °C) for that day.
The low temperature at O'Hare International Airport in Chicago was −16 °F (−27 °C) on January 6. The previous record low for this day was −14 °F (−26 °C), set in 1884 and tied in 1988. The National Weather Service adopted the Twitter hashtag #Chiberia (a portmanteau of Chicago and Siberia) for its cold wave coverage in Chicago, and local media adopted the term as well. Although the cold temperatures and stiff winds exceeded the conditions of 1983 (winds of 29 miles per hour (47 km/h) and a −23 °F (−31 °C) air temperature), when Chicago set its all-time wind chill record of −82 °F (−63 °C), Chicago did not break that record because the NWS had adopted a new wind chill formula in 2001.
The average daily temperature for the United States on January 6 was calculated to be 17.9 °F (−7.8 °C). The last time the average for the country was below 18 °F was January 13, 1997; the 17-year gap was the longest on record.
On January 7, at least 49 record lows for the day were set across the country. On the night of January 6–7, Detroit hit a low temperature of −14 °F (−26 °C), breaking the records for both dates. The high temperature of −1 °F (−18 °C) on January 7 was only the sixth subzero daily high in 140 years of records. On January 7, 2014, the temperature in Central Park in New York City was 4 °F (−16 °C); the previous record low for the day was set in 1896, twenty-five years after records began to be collected by the government. Marstons Mills, Massachusetts bottomed out at −9 °F (−23 °C) on the morning of January 8, just one degree above its record low, as did Pittsburgh, which also bottomed out at −9 °F (−23 °C), setting a new record low on January 6–7. Cleveland also set a record low on those dates, at −11 °F (−24 °C). Temperatures in Atlanta fell to 6 °F (−14 °C), breaking the old record for January 7 of 10 °F (−12 °C), which was set in 1970. Temperatures fell to −6 °F (−21 °C) at Brasstown Bald, Georgia. Although the cold air moderated, the freezing temperatures reached Florida, as far south as Tampa, which had a low of 31 °F (−1 °C) on January 7, 2014.
The coldest parts of Canada were the eastern prairie provinces, Ontario, Quebec, and the Northwest Territories. However, only southern Ontario set temperature records.
During most of the early cold wave, Winnipeg was the coldest major city in Canada. On January 6, it reached a low of −37 °C (−35 °F), while on January 7, the low was −36 °C (−33 °F). On both days, the temperature did not go above −25 °C (−13 °F). Other parts of southern Manitoba recorded lows of below −40 °C (−40 °F). On January 5, the daily high in Saskatoon was −28.4 °C (−19.1 °F) with a wind chill of −46 °C (−51 °F).
On January 7, 2014, a cold temperature record was set in Hamilton, Ontario, at −24 °C (−11 °F); London, Ontario recorded −26 °C (−15 °F). Toronto dropped below −19 °C for the first time in 9 years, with a temperature of −22 °C.
Related extreme weather
Heavy snowfall or rainfall occurred on the leading edge of the weather pattern, which travelled all the way from the American Plains and Canadian prairie provinces to the East Coast. Strong winds prevailed throughout the freeze, making the temperature feel at least ten degrees Fahrenheit colder than it actually was due to the wind chill factor. In addition to rainfall, snowfall, ice, and blizzard warnings, some places along the Great Lakes were also under wind warnings. Europe also saw the 2013–2014 Atlantic winter storms, which have been linked to the cold winter in North America.
On January 3, Boston had a temperature of 2 °F (−17 °C) with a −20 °F (−29 °C) wind chill, and over 7 inches (180 mm) of snow. Boxford, Massachusetts recorded 23.8 inches (600 mm). Fort Wayne, Indiana had a record low of −10 °F (−23 °C). In Michigan, over 11 inches (280 mm) of snow fell outside Detroit and temperatures around the state were near or below 0 °F (−18 °C). New Jersey had over 10 inches (250 mm) of snow, and schools and government offices closed.
On January 5, a storm system crossed the Great Lakes region. In Chicago, where 5 inches (13 cm) to 7 inches (18 cm) of snow had fallen, O'Hare and Midway Airports cancelled 1,200 flights. Freezing rain caused a Delta Air Lines flight to skid off a taxiway and into a snowbank at John F. Kennedy International Airport, with no injuries. The storms associated with the Arctic front caused numerous road closures and flight delays and cancellations.
By January 8, 2014, John F. Kennedy International Airport had canceled about 1,100 flights, Newark Liberty International Airport had about 600 canceled flights and LaGuardia Airport had about 750–850 flights canceled.
In New York City temperatures fell to a record low of 4 °F (−16 °C) on January 7, which broke a 116-year record. The cold came after days of unseasonably warm temperatures, with daytime highs dropping by as much as 50 °F (28 °C) overnight. Also on January 7, the day after a record-setting −16 °F (−27 °C), Chicago recorded −12 °F (−24 °C). Embarrass, Minnesota had the coldest temperature in the lower 48 states in March 2014, at −35 °F (−37 °C).
In Canada, the front brought rain and snow events to most of Canada on January 5 and 6, which became the second nor'easter in less than a week in Nova Scotia and Newfoundland. This weather event ended when the front pushed through, bringing the bitterly cold temperatures with it. Southwestern Ontario experienced a second round of heavy snow in the wake of the front throughout January 6 and 7 and part of January 8 due to lake-effect snow. The Northwest Territories and Nunavut did not experience record-breaking cold, but had a record-breaking blizzard on January 8, when the freeze further south was coming to an end.
Nearly all parts of Canada under the deep freeze experienced steady winds around 30 to 40 kilometres per hour (19 to 25 mph). In some areas along the north shore of Lake Erie, those winds reached 70 km/h (43 mph), with gusts as high as 100 km/h (62 mph). This brought local wind chill levels as low as −48 °C (−54 °F).
Cold air rushing into the Gulf of Mexico behind the front created a Tehuano wind event, with northerly winds from the Bay of Campeche to the Gulf of Tehuantepec in Mexico reaching 41 kn (76 km/h; 47 mph). Saltillo, in the northeast of the country, registered freezing drizzle and a temperature as low as −6 °C (21 °F).
The extreme cold weather grounded thousands of flights and seriously affected other forms of transport. Many power companies in the affected areas asked their customers to conserve electricity.
The weather event played a significant role in the U.S. economy, contributing to a 2.9% drop in GDP. "The bad weather in much of the U.S. in early 2014 was a significant drag on the economy, disrupting production, construction, and shipments, and deterring home and auto sales," wrote PNC Senior Economist Gus Faucher in a note out prior to the release. "But data show growth rebounding in the second quarter, with improvements in home and auto sales and residential construction."
Evan Gold of weather-intelligence firm Planalytics called the storm and the low temperatures the worst weather event for the economy since Hurricane Sandy just over a year earlier. 200 million people were affected, and Gold calculated the impact at $5 billion. $50 to $100 million was lost by airlines, which cancelled a total of 20,000 flights after the storm began on January 2. JetBlue took a major hit because 80 percent of its flights go through New York City or Boston. Tony Madden of the Federal Reserve Bank of Minneapolis said that with so many schools closed, parents had to stay home from work or work from home; even those who could work from home, Madden said, might not have done as much. Not included in the total were the insurance industry and government costs for salting roads, overtime and repairs.
Gold said some industries benefitted from the storm and cold, including video on demand, restaurants which offered delivery services and convenience stores. People also used gift cards to buy online. Hopper Research of Boston observed that searches for flights to Cancun, Mexico increased by about half in northern cities. New England spot natural gas prices hit record levels from January 1 to February 18, with the day-ahead wholesale (spot) natural gas price at the Algonquin Citygate hub serving Boston averaging $22.53 per million British thermal units (MMBtu), a record high for these dates since the Intercontinental Exchange data series began in 2001.
At least 3,600 flights were cancelled on January 6, and several thousand were cancelled over the preceding weekend. Further delays were caused by the weather at airports that did not possess de-icing equipment. At O'Hare International Airport in Chicago, the jet fuel and deicing fluids froze, according to American Airlines spokesman Matt Miller.
Amtrak cancelled scheduled passenger rail service with connections through Chicago due to heavy snow and extreme cold. Three Amtrak trains were stranded overnight on January 6, approximately 80 miles (130 km) west of Chicago, near Mendota, Illinois, due to ice and snowdrifts on the tracks. The 500 passengers were loaded onto buses the next morning for the rest of the trip to Chicago. Another Amtrak train was stuck near Kalamazoo, Michigan for 8 hours while en route from Detroit to Chicago. Chicago's Metra commuter trains reported numerous accidents. Detroit shut down its People Mover due to the low temperatures on January 7.
Between January 5 and 6, temperatures fell 50 °F (28 °C) in Middle Tennessee, dropping to a high of 9 °F (−13 °C) on Monday, January 6 in Nashville. During the cold wave, the strain on the power supply left 1,200 customers in Nashville without power, along with around 7,500 customers in Blount County. The Tennessee Emergency Management Agency declared a state of emergency.
The Weather Channel reported power outages in several states, abandoned cars on highways in North Carolina, and freezing rain in Louisiana.
Early news reporting suggested that the severe cold would cause high mortality among the emerald ash borer, based on the opinion of a US Department of Agriculture spokesperson, who suggested: "The progressive loss of ash trees in North America due to this insect has probably been delayed by this deep freeze." This has since been widely repudiated, based on scientific studies of the under-bark temperature tolerances of the emerald ash borer in Canada.
The weather affected schools, roads, and public offices.
In Minnesota, all public schools statewide were closed on January 6 by order of Governor Mark Dayton. This had not been done in 17 years. In Wisconsin, schools in most (if not all) of the state were closed on January 6 as well as on January 7. When the second cold wave emerged, schools were also closed on January 27 and 28. In Michigan, the mayor of Lansing, Virg Bernero, issued a snow emergency prohibiting all non-essential travel as well as closing down non-essential government offices. In Indiana, more than fifty of the state's ninety-two counties, including virtually everywhere north of Indianapolis, closed all roads to all traffic except emergency vehicles. In Ohio, schools across the entire state were closed on January 6 and 7, including the state's two largest school districts, Columbus City Schools and the Cleveland Metropolitan School District. The Ohio State University completely shut down on January 6 and 7, delaying the start of the spring semester by two days in its first closure on two consecutive days in 36 years.
On March 4, 2014, the United States House of Representatives passed the Home Heating Emergency Assistance Through Transportation Act of 2014 (H.R. 4076; 113th Congress) in reaction to the extreme cold weather. The bill, if it becomes law, would create an emergency exception to existing Federal Motor Carrier Safety Administration regulations. The exceptions would allow truckers to drive for long hours if they are delivering home heating fuels, such as propane, to places where there is a shortage. The exemption would last until May 31, 2014. An existing suspension was scheduled to expire on March 15, 2014. According to Majority Leader Eric Cantor, the issue of household energy costs needs to be addressed because "the Energy Information Administration predicted that 90 percent of U.S. households would see higher home heating costs this year, and low income families already spend 12 percent of their household budget on energy costs." Rep. Shuster argued in favor of the bill, saying that it "will provide relief for millions of Americans suffering from the current propane and home heating fuel emergency." According to the Congressman, an "exceptionally cold winter" increased demand for propane, "which is used for heating approximately 12 million homes in the United States."
Schools in much of Southern Ontario and rural Manitoba were closed on both January 6 and 7 because of the combined threat of extreme cold, strong winds, and heavy snowfall. Outside the areas of heavy snowfall, schools remained open both days.
In Quebec, all public services remained open, with the exception of tutoring and any other services requiring exterior relocation.
Role of climate change
Research on a possible connection between individual extreme weather events and long-term anthropogenic climate change is a new topic of scientific debate. Prior to the events of January 2014, several studies on the connection between extreme weather and the polar vortex were published suggesting a link between climate change and increasingly extreme temperatures experienced by mid-latitudes (e.g., central North America). This phenomenon has been suggested by some to result from the rapid melting of polar sea ice, which replaces white, reflective ice with dark, absorbent open water (i.e., the albedo of this region has decreased). As a result, the region has heated up faster than other parts of the globe. With the lack of a sufficient temperature difference between Arctic and southern regions to drive jet stream winds, the jet stream may have become weaker and more variable in its course, allowing cold air usually confined to the poles to reach further into the mid latitudes.
This jet stream instability brings warm air north as well as cold air south. The patch of unusual cold over the eastern United States was matched by anomalies of mild winter temperatures across Greenland and much of the Arctic north of Canada, and unusually warm conditions in Alaska. A stationary high pressure ridge over the North Pacific Ocean kept California unusually warm and dry for the time of year, worsening ongoing drought conditions there.
Research has led to good documentation of the frequency and seasonality of sudden warmings: just over half of the winters since 1960 have experienced a major warming event in January or February. According to Charlton and Polvani, sudden stratospheric warming (SSW) in the Arctic has occurred during 60% of the winters since 1948, and 48% of these SSW events have led to the splitting of the polar vortex, producing the same type of Arctic cold front that occurred in January 2014.
A 2001 study found that "there is no apparent trend toward fewer extreme cold events in Europe or North America over the 1948–99 period, although a long station history suggests that such events may have been more frequent in the United States during the late 1800s and early 1900s." A 2009 MIT study found that such events are increasing and may be caused by the rapid loss of the Arctic ice pack.
NOAA's National Climatic Data Center found that since modern records began in 1895, the period from December 2013 through February 2014 was the 34th-coldest such period for the contiguous 48 states as a whole.
The average temperature for the contiguous U.S. during the winter season was 31.3 °F (−0.4 °C), one degree below the 20th-century average, and the number of daily record-low temperatures outnumbered the number of record-high temperatures nationally in early 2014.
In contrast, California had its warmest winter on record, at 4.4 °F (2.4 °C) above average, while the first two months of 2014 were the warmest on record in Fresno, Los Angeles, San Francisco, Las Vegas, Nevada, and Phoenix and Tucson, Arizona.
In addition, December through February was the ninth-driest such period on record for the contiguous 48 states dating to 1895, chiefly due to extremely dry conditions in the West and Southwest, yet winter snow-cover areal extent was the 10th-largest on record for the same 48 states, dating to 1966. New York City, Philadelphia, and Chicago all had one of their ten snowiest winters, while Detroit had its snowiest winter on record. Aside from persistence due to lack of melting, the lower temperatures may have had some impact. While snowfall has an average moisture-content ratio of 10:1 (one inch of moisture producing 10 inches of snow), it can range from 3:1 to 100:1, generally rising with falling temperatures. Because of the temperatures in which the snow formed and fell, instances of snowfalls with ratios from 75:1 up toward the maximum possible of 100:1 were documented in the North Central states, and ratios in excess of 30:1 were rather common.
Many cities experienced their coldest February in many years:
- Rochester, Minnesota experienced its fourth-coldest February, with a monthly average temperature of 6.7 °F (−14.1 °C)
- Green Bay, Wisconsin saw its third-coldest February, with a monthly average temperature of 8 °F (−13 °C)
- Minneapolis-St. Paul tied its record for the seventh-coldest February, with a monthly average temperature of 8.6 °F (−13.0 °C)
- Dubuque, Iowa realized its third-coldest February, with a monthly average temperature of 10 °F (−12 °C)
- Madison, Wisconsin saw its tenth-coldest February, with a monthly average temperature of 12.2 °F (−11.0 °C)
- Moline, Illinois tied its record for the fifth-coldest February, with a monthly average temperature of 14.6 °F (−9.7 °C)
- Fort Wayne, Indiana experienced its sixth-coldest February, with a monthly average temperature of 17.6 °F (−8.0 °C)
- Peoria, Illinois saw its sixth-coldest February, with a monthly average temperature of 17.9 °F (−7.8 °C)
- Kansas City, Kansas had its ninth-coldest February, with a monthly average temperature of 24.7 °F (−4.1 °C)
Daily record lows were set on February 28 in Gaylord, Michigan at −29 °F (−34 °C), Green Bay, Wisconsin with −21 °F (−29 °C), Flint, Michigan at −16 °F (−27 °C), Grand Rapids, Michigan −12 °F (−24 °C), and Toledo, Ohio at −7 °F (−22 °C) while Newberry, Michigan dipped to −41 °F (−41 °C).
During the official meteorological winter season of December through February, Brainerd, Minnesota averaged a frigid 1.7 °F (−16.8 °C), its third-coldest winter in recorded history. Similarly, the average temperature in Duluth, Minnesota was 3.7 °F (−15.7 °C), ranking this winter as its second-coldest.
The first week of March 2014 also saw remarkably low temperatures in many places, with 18 states setting all-time records for cold. Among them was Flint, Michigan, which reached −16 °F (−27 °C) on March 3, and Rockford, Illinois, at −11 °F (−24 °C).
Caribou and Bangor, Maine; Barre/Montpelier, Vt.; Glens Falls, N.Y.; Dulles Airport; and Gaylord and Houghton Lake, Mich. experienced their coldest March on record in 2014. In addition, March was the second-coldest on record for Concord, N.H., while Flint, Mich., International Falls, Minn., Watertown, N.Y., and Marquette, Mich. all saw their third-coldest.
The entire December–March period in Chicago was the coldest on record, topping the previous record from 1903–04, even colder than the notoriously cold winters of the late 1970s. The average temperature in Chicago from December 1, 2013 to March 31, 2014 was 22 °F (−6 °C), 10 °F (5.6 °C) below average.
The state of Iowa went through its ninth-coldest winter in 141 years. Only the winters of 1935–36 and 1978–79 in the last century were colder, with the others being back in the 1880s.
March was near-record cold for the Southeastern U.S., where three states—Tennessee, Alabama, and Georgia—had their coldest March on record.
Despite the abnormally cold winter over sections of North America and much of Russia, most of the globe saw either average or above-average temperatures during the first four months of 2014. In fact, during the cold wave, North America saw much colder temperatures than Sochi, Russia which during the time was hosting the 2014 Winter Olympics.
During the last week of March, meteorologists expected average temperatures to return sometime from April to mid-May. On April 10, 2014, a ridge of high pressure moved into the Eastern United States, bringing average and above-average temperatures to the region, which ended the cold wave.
As of February 27, Winnipeg was experiencing its second-coldest winter in 75 years and its coldest in 35 years, with a snowfall total 50 per cent above normal. Saskatoon was experiencing its coldest winter in 18 years; Windsor, Ontario, its coldest winter in 35 years and its snowiest winter on record; Toronto, its coldest winter in 20 years; St. John's, Newfoundland and Labrador, its coldest winter in 20 years, its snowiest winter in seven years, and a record number of stormy days. Vancouver, which is known for its milder weather, was experiencing one of its coldest and snowiest Februaries in 25 years.
In 2010, the U.S. had experienced its coldest December–February winter period in 25 years, while Canada had its warmest winter on record.
- January 1998 North American ice storm
- December 2013 North American storm complex
- January 2014 Gulf Coast winter storm
- February 11–17, 2014 North American winter storm
- 2013–14 North American winter storms
- 2014–15 North American winter
- November 2014 Bering Sea cyclone
- November 2014 North American cold wave
- "Rough Winter to Lag into March for Midwest, East". AccuWeather.
- "North America Zoom-in Surface Weather Map". National Weather Service. January 8, 2014. Archived from the original on January 8, 2014. Retrieved January 8, 2014.
- "Cost of the cold: 'polar vortex' spell cost US economy $5bn". The Guardian. Associated Press. January 9, 2014. Retrieved January 9, 2014.
- "U.S. polar vortex sets record low temps, kills 21". CBC News. January 8, 2014.
- "Polar Air Blamed For 21 Deaths Nationwide". Chicago Defender. January 8, 2014.
- Gutro, Rob. "Polar Vortex Enters Northern U.S." Retrieved January 8, 2014.
- "N America weather: Polar vortex brings record temperatures". BBC News – US & Canada. BBC News Online. January 6, 2014. Retrieved January 6, 2014.
- Calamur, Krishnadev (January 5, 2014). "'Polar Vortex' Brings Bitter Cold, Heavy Snow To U.S." The Two Way. National Public Radio. Retrieved January 6, 2014.
- Preston, Jennifer (January 6, 2014). "'Polar Vortex' Brings Coldest Temperatures in Decades". The Lede. The New York Times. Retrieved January 6, 2014.
- "Arctic Monday for 140 million as 'POLAR VORTEX' barrels across the US: 4,400 flights canceled, schools closed as far south as ATLANTA and the coldest temperatures recorded in 20 years". Daily Mail. London. January 6, 2014. Retrieved January 7, 2014.
- Associated, The. "5 Things To Know About The Record-Breaking Freeze". NPR. Retrieved January 8, 2014.
- Spotts, Pete (January 6, 2014). "How frigid 'polar vortex' could be result of global warming (+video)". The Christian Science Monitor. Retrieved January 9, 2014.
- "Freezing US – Is the Polar Vortex to Blame?". BBC. Archived from the original on January 7, 2014.
- DeMarche, Edmund (January 4, 2014). "'Polar vortex' set to bring dangerous, record-breaking cold to much of US". FoxNews.com. Fox News. Retrieved January 6, 2014.
- "Chicago Gripped By Record Cold; Schools Closed, Public Transit Delayed". WBBM-TV. January 5, 2014. Retrieved January 8, 2014.
- "'It's too darn cold': Historic freeze brings rare danger warning". CNN. January 6, 2014. Retrieved January 8, 2014.
- "Metra plans on normal schedule for evening rush; CPS classes to resume Wednesday". Chicago Sun-Times. January 6, 2014. Retrieved January 8, 2014.
- "'ChiBeria,' Chicago's biggest chill in nearly 20 years, headed our way". WLS. January 5, 2014. Archived from the original on January 8, 2014. Retrieved January 8, 2014.
- "Ask Tom why: What was the lowest wind chill ever recorded in Chicago?". Articles.chicagotribune.com. January 18, 2011. Retrieved January 15, 2014.
- Borenstein, Seth (January 10, 2014). "Weather wimps?". Salisbury Post. Associated Press. p. 1A.
- Doyle Rice (January 7, 2014), List of record-low temperatures set Tuesday USA Today
- Coldest Arctic Outbreak in Midwest, South Since the 1990s Wrapping Up, weather.com, Jon Erdman and Nick Wiltgen, January 8, 2014
- "Extreme Cold Wave Invades Eastern Half of U.S." Weather Underground. January 7, 2014. Retrieved January 7, 2014.
- Record-setting cold turns deadly Archived January 9, 2014, at the Wayback Machine Macon Telegraph January 7, 2014
- "Tampa weather: history | Tampa Bay Times". Tampa Bay Times. Retrieved July 31, 2014.
- Daily Data Report for January 2014 Archived March 4, 2016, at the Wayback Machine Canada government site
- "Daily Data". Climate.weather.gc.ca. November 12, 2013. Archived from the original on January 7, 2014. Retrieved January 8, 2014.
- "At −24 C, Hamilton sets cold temperature record – Latest Hamilton news – CBC Hamilton". Cbc.ca. January 19, 1994. Retrieved January 8, 2014.
- "London January Weather 2014 – AccuWeather Forecast for Ontario Canada". Accuweather.
- "Alerts for: Simcoe – Delhi – Norfolk – Environment Canada". Weather.gc.ca. April 16, 2013. Retrieved January 7, 2014.
- "Snow, cold disrupt large swath of US; more to come". Boston Globe. Associated Press. January 3, 2014. Archived from the original on January 14, 2014. Retrieved January 10, 2014.
- "Winter storm 2014 strikes Midwest, Northeast; Snow storm 'Hercules' followed by 'polar vortex'". WLS-TV. January 5, 2014. Retrieved January 8, 2014.
- Fitzsimmons, Emma G. (January 5, 2014). "Jet Skids into Snowbank at J.F.K. ad Airport". The New York Times. Retrieved January 8, 2014.
- "Fierce weather forces more flight cancellations, delays". USA TODAY. Archived from the original on March 5, 2015. Retrieved March 6, 2018.
- "Dangerously Cold Temperatures Settle into Mid-State". WTVF NewsChannel 5. January 6, 2014. Retrieved January 6, 2014.
- Associated Press, The (January 7, 2013). "Record low temperatures in New York City". Fox News. Archived from the original on January 7, 2014. Retrieved January 8, 2014.
- "Weather History for Chicago, IL". The Weather Channel.
- "US weather: all 50 states fall below freezing". The Telegraph.
- "Deep freeze extends from Winnipeg east to Newfoundland". Cbc.ca. January 2, 2014. Retrieved January 15, 2014.
- nurun.com (January 7, 2014). "Schools closed, blizzard warning continued". Lfpress.com. Retrieved January 15, 2014.
- "Weather Alerts – Environment Canada". Weather.gc.ca. December 5, 2013. Retrieved January 7, 2014.
- North America's big freeze seen from space BBC News, January 9, 2014
- "'Frost quakes' wake Toronto residents on cold night – Toronto – CBC News". Cbc.ca. January 3, 2014. Retrieved January 7, 2014.
- Satellite Blog, CIMSS (January 3, 2014). "Tehuano wind event in the wake of a strong eastern US winter storm". Space Science and Engineering Center. University of Wisconsin-Madison. Retrieved January 8, 2014.
- "Synop report summary". Saltillo: Ogimet. Retrieved January 11, 2014.
- "76393: Monterrey, N. L. (Mexico)". Professional information about meteorological conditions in the world. Ogimet. Retrieved January 30, 2014.
- "NERC on Polar Vortex Performance: Good, Could be Better". rtoinsider.com. September 30, 2014. Retrieved February 7, 2017.
- "U.S. GDP Dropped 2.9% In The First Quarter 2014, Down Sharply From Second Estimate". Forbes.com. June 25, 2014.
- "Deep freeze may have cost economy about $5 billion, analysis shows". Durangoherald.com. January 10, 2014. Retrieved January 15, 2014.
- Karnowski, Stevework (January 9, 2014). "Deep freeze may have cost economy about $5 billion". News & Observer. Associated Press. Retrieved January 10, 2014.
- EIA. "Today in Energy". U.S. Energy Information Administration. Retrieved March 1, 2014.
- Castellano, Anthony (January 3, 2013). "At Least 13 Died in Winter Storm That Dumped More Than 2 Feet of Snow Over Northeast". ABC News.
- "North America arctic blast creeps east". BBC News. January 7, 2014. Retrieved January 7, 2014.
- "OIA Can't Deice Frozen Jets". AviationPros.com. January 7, 2010. Retrieved January 7, 2014.
- KYLE ARNOLD (January 6, 2014). "Sky Writer: Frozen aircraft fuel stalling nationwide air travel". Tulsa World. Retrieved January 13, 2014.
American Airlines said Monday that it's so cold in Chicago that airline fuel is freezing and they can't refuel planes. "Fuel and glycol supplies are frozen – at ORD (Chicago's O'Hare) and other airports in the Midwest and Northeast," said American Airlines spokesman Matt Miller.
- "Service Alert". Amtrak. Retrieved January 7, 2014.
- "Passengers stuck on Amtrak train 8 hours; Amtrak cancels some Chicago train service". WLS-TV. January 6, 2014. Retrieved January 8, 2014.
- "500 passengers spend night on stranded Amtrak trains". Chicago Tribune. WGN-TV. January 7, 2014. Retrieved January 8, 2014.
- Bone-Chilling Temps Shut Down Detroit People Mover, CBS News, January 7, 2014
- Hayley Harmon (January 4, 2014). "Cold weather knocks out power for 7500 Blount County residents". WATE News. Retrieved January 9, 2014.
- 'Historic and life-threatening' freeze brings rare danger warning, CNN, January 6, 2014
- "Winter Storm Pax Update: Hundreds of Thousands Lose Power, Drivers Abandoning Cars on Charlotte and Raleigh Highways". The Weather Channel. February 12, 2014. Archived from the original on February 11, 2014. Retrieved February 12, 2014.
- CTVNews.ca Staff (January 6, 2014). "Power restored to majority of customers in Newfoundland". CTV News. Bell Media. Retrieved January 6, 2014.
- "Pearson airport delays: What you need to know". CBC News. January 7, 2014. Retrieved January 7, 2014.
- "Extreme cold causes more delays, cancellations at Pearson". CP24. January 7, 2014. Retrieved January 8, 2014.
- Ziezulewicz, Geoff (January 6, 2014). "Cold weather could limit ash borer threat". Chicago Tribune. Retrieved January 8, 2014.
- Purvis, Micheal (February 28, 2014). "Deer in danger from cold weather, but emerald ash borer likely not significantly affected, say scientists". Sault Star. Retrieved February 28, 2014.
- Spears, Tom (February 4, 2014). "False hope: Deep freeze poses no threat to tree-munching emerald ash borer". Ottawa Citizen. Retrieved February 4, 2014.
- Vermunt, Bradley; Cuddington, Kim; Sobek-Swant, Stephanie; Crosthwaite, Jill (2012). "Cold temperature and emerald ash borer: Modelling the minimum under-bark temperature of ash trees in Canada". Ecological Modelling. 235–236: 19–25. doi:10.1016/j.ecolmodel.2012.03.033.
- Livingston, Ian (January 7, 2014). "Polar vortex delivering D.C.'s coldest day in decades, and we're not alone". Washington Post. Retrieved January 7, 2014.
- "Gov. Orders Schools Closed Monday Over Dangerous Cold". CBS. January 3, 2014. Retrieved January 14, 2014.
- "'Polar vortex' set to bring dangerous, record-breaking cold to much of US". Fox News. January 4, 2014. Retrieved January 11, 2014.
- Richards, Erin (January 7, 2014). "2nd day of school closings cuts into planned snow days". Milwaukee Journal Sentinel. Retrieved January 7, 2014.
- Vielmetti, Bruce. "Dangerous blast of cold leads to more school closings". Milwaukee Journal Sentinel. Retrieved January 28, 2014.
- "Mayor Declares Snow Emergency" (Press release). Office of Mayor Virg Bernero, Lansing, MI. January 5, 2014. Archived from the original on January 7, 2014. Retrieved January 7, 2014.
- "County Travel Status for 01/06/2014 00:20:11 EDT" (PDF). Indiana Department of Homeland Security. January 6, 2014. Archived (PDF) from the original on January 6, 2014. Retrieved January 14, 2014.
- "Temperatures continue to drop; districts cancel school". The Columbus Dispatch. Retrieved January 7, 2014.
- "Frosty weather keeps Ohio State closed for a 2nd straight day". The Lantern. Retrieved January 7, 2014.
- Kasperowicz, Pete (February 28, 2014). "Cold snap prompts wave of energy bills". The Hill. Retrieved March 5, 2014.
- "H.R. 4076 – Summary". United States Congress. Retrieved March 4, 2014.
- Kasperowicz, Pete (March 4, 2014). "House votes to ease access to home heating oil". The Hill. Retrieved March 5, 2014.
- Mauriello, Tracie (March 4, 2014). "Fuel-trucker bill advances". Pittsburgh Post-Gazette. Retrieved March 5, 2014.
- "Hamilton board defends decision to keep schools open – Latest Hamilton news – CBC Hamilton". Cbc.ca. January 7, 2014. Retrieved January 15, 2014.
- "Extreme cold weather alert for Toronto | CTV Toronto News". Toronto.ctvnews.ca. December 16, 2013. Retrieved January 9, 2014.
- Cold weather Resource Kit City of Ottawa official site
- Sobel, Adam (January 7, 2014). "Record cold doesn't disprove global warming". CNN. Retrieved January 10, 2014.
- Baldwin, M. P.; Dunkerton, TJ (2001). "Stratospheric Harbingers of Anomalous Weather Regimes". Science. 294 (5542): 581–4. Bibcode:2001Sci...294..581B. doi:10.1126/science.1063315. PMID 11641495.
- Song, Yucheng; Robinson, Walter A. (2004). "Dynamical Mechanisms for Stratospheric Influences on the Troposphere". Journal of the Atmospheric Sciences. 61 (14): 1711–25. Bibcode:2004JAtS...61.1711S. doi:10.1175/1520-0469(2004)061<1711:DMFSIO>2.0.CO;2.
- Overland, James E. (2013). "Atmospheric science: Long-range linkage". Nature Climate Change. 4 (1): 11–2. Bibcode:2014NatCC...4...11O. doi:10.1038/nclimate2079.
- Tang, Qiuhong; Zhang, Xuejun; Francis, Jennifer A. (2013). "Extreme summer weather in northern mid-latitudes linked to a vanishing cryosphere". Nature Climate Change. 4 (1): 45–50. Bibcode:2014NatCC...4...45T. doi:10.1038/nclimate2065.
- Screen, J A (2013). "Influence of Arctic sea ice on European summer precipitation". Environmental Research Letters. 8 (4): 044015. Bibcode:2013ERL.....8d4015S. doi:10.1088/1748-9326/8/4/044015.
- Francis, Jennifer A.; Vavrus, Stephen J. (2012). "Evidence linking Arctic amplification to extreme weather in mid-latitudes". Geophysical Research Letters. 39 (6): n/a. Bibcode:2012GeoRL..39.6801F. doi:10.1029/2012GL051000.
- Petoukhov, Vladimir; Semenov, Vladimir A. (2010). "A link between reduced Barents-Kara sea ice and cold winter extremes over northern continents". Journal of Geophysical Research. 115 (D21): D21111. Bibcode:2010JGRD..11521111P. doi:10.1029/2009JD013568.
- Masato, Giacomo; Hoskins, Brian J.; Woollings, Tim (2013). "Winter and Summer Northern Hemisphere Blocking in CMIP5 Models". Journal of Climate. 26 (18): 7044–59. Bibcode:2013JCli...26.7044M. doi:10.1175/JCLI-D-12-00466.1.
- Wang, L.; Chen, W. (2010). "Downward Arctic Oscillation signal associated with moderate weak stratospheric polar vortex and the cold December 2009". Journal of Geophysical Research. 37 (9): 581–4. Bibcode:2010GeoRL..37.9707W. doi:10.1029/2010GL042659.
- Walsh, Bryan (January 6, 2014). "Polar Vortex: Climate Change Might Just Be Driving the Historic Cold Snap". TIME. Retrieved January 7, 2014.
- Friedlander, Blaine (March 4, 2013). "Arctic ice loss amplified Superstorm Sandy violence". Cornell Chronicle. Retrieved January 7, 2014.
- Spotts, Pete (January 6, 2014). "How frigid 'polar vortex' could be result of global warming (+video)". The Christian Science Monitor. Retrieved January 8, 2014.
- Wetzel, G; Oelhaf, H.; Kirner, O.; Friedl-Vallon, F.; Ruhnke, R.; Ebersoldt, A.; Kleinert, A.; Maucher, G.; Nordmeyer, H.; Orphal, J. (2012). "Diurnal variations of reactive chlorine and nitrogen oxides observed by MIPAS-B inside the January 2010 Arctic vortex". Atmospheric Chemistry and Physics. 12 (14): 6581–6592. Bibcode:2012ACP....12.6581W. doi:10.5194/acp-12-6581-2012.
- Weng, H. (2012). "Impacts of multi-scale solar activity on climate. Part I: Atmospheric circulation patterns and climate extremes". Advances in Atmospheric Sciences. 29 (4): 867–886. Bibcode:2012AdAtS..29..867W. doi:10.1007/s00376-012-1238-1.
- Lue, J.-M.; Kim, S.-J.; Abe-Ouchi, A.; Yu, Y.; Ohgaito, R. (2010). "Arctic Oscillation during the Mid-Holocene and Last Glacial Maximum from PMIP2 Coupled Model Simulations". Journal of Climate. 23 (14): 3792–3813. Bibcode:2010JCli...23.3792L. doi:10.1175/2010JCLI3331.1.
- Zielinski, G.; Mershon, G. (1997). "Paleoenvironmental implications of the insoluble microparticle record in the GISP2 (Greenland) ice core during the rapidly changing climate of the Pleistocene-Holocene transition". Bulletin of the Geological Society of America. 109 (5): 547–559. Bibcode:1997GSAB..109..547Z. doi:10.1130/0016-7606(1997)109<0547:PIOTIM>2.3.CO;2.
- Andrew Freedman (January 2, 2014). "Arctic Outbreak: When the North Pole Came to Ohio".
- Barron-Lopez, Laura (January 6, 2014). "Is global warming behind polar vortex?". The Hill. Retrieved January 8, 2014.
- More on the Drought; The Numbers Are Frightening AccuWeather, January 6, 2014
- GEOS-5 Analyses and Forecasts of the Major Stratospheric Sudden Warming of January 2013 NASA
- Extreme Cold Outbreaks in the United States and Europe, 1948–99 American Meteorological Society June 2001
- Walsh, Bryan (January 6, 2014). "Climate Change Might Just Be Driving the Historic Cold Snap". Time. Retrieved January 9, 2014.
- "Monthly & Seasonal Snowfall at Central Park". National Oceanic and Atmospheric Administration. Retrieved April 1, 2014.
- "NowData – NOAA Online Weather Data". National Oceanic and Atmospheric Administration. Retrieved April 1, 2014.
- "NowData – NOAA Online Weather Data". National Oceanic and Atmospheric Administration. Retrieved April 1, 2014.
- "NowData – NOAA Online Weather Data". National Oceanic and Atmospheric Administration. Retrieved April 1, 2014.
- Erdman, Jon. "NOAA: Winter 2013–2014 Among Coldest on Record in Midwest; Driest, Warmest in Southwest". The Weather Channel. Retrieved March 20, 2014.
- Rice, Doyle (March 13, 2014). "Numbing numbers: U.S. had coldest winter in 4 years". USA TODAY. Retrieved March 20, 2014.
- MWS Milwaukee-Sullivan January 13, 2014
- Dolce, Chris. "10 Cities Where February Could Rank as a Top 10 Coldest". Winter Storm Central. The Weather Channel. Retrieved March 12, 2014.
- Richardson, Renee (March 5, 2014). "2013–14 winter ranks third for all-time coldest". Brainerd Dispatch. Retrieved March 12, 2014.
- Dolce, Wiltgen, Chris, Nick. "Winter Won't Give Up: Now 18 States With All-Time Record March Cold". March 6, 2014. Weather Underground, Inc. Retrieved March 13, 2014.
- "It Was a Record-Cold March From Northern Great Lakes to Northern New England". Apr 2, 2014. Retrieved April 29, 2014.
- "Chicago Had Its Coldest Winter on Record in Over A Century". Archived from the original on April 4, 2014. Retrieved April 29, 2014.
- Hillaker, Harry. "AGRIBUSINESS: Iowa Goes Through Ninth Coldest Winter". March 12, 2014. Retrieved March 13, 2014.
- Sosnowski, Alex. "Rough Winter to Lag into March for Midwest, East". AccuWeather.
- "2014 Winter Olympics: Ten places colder than Sochi right now". Slate Magazine.
- CTVNews.ca Staff. "This winter is miserable: meteorologists have confirmed it". February 27, 2014. Bell Media. Retrieved March 13, 2014.
- Masters, Dr. Jeff. "An upside-down winter: coldest in 25 years in U.S., warmest on record in Canada". Retrieved April 29, 2014.
http://cancer.dartmouth.edu/pf/health_encyclopedia/nord596
National Organization for Rare Disorders, Inc.
It is possible that the main title of the report Anencephaly is not the name you expected.
Anencephaly is a term that refers to the incomplete development of the brain, skull, and scalp and is part of a group of birth defects called neural tube defects (NTD). The structure which will become the neural tube is supposed to fold and to close together (to form a tube) during the third and fourth weeks of pregnancy. From this neural tube, the brain and spinal cord of the embryo develop. Neural tube defects happen when the neural tube does not close as expected. Anencephaly occurs when the end of the neural tube that would have developed into the brain does not close properly, resulting in the failure of the development of major portions of brain, skull and scalp. Other neural tube defects, such as spina bifida, form when the neural tube does not close properly in a different part of the neural tube.
Infants with anencephaly are born without the front part of the brain (the forebrain) and the thinking and coordinating parts of the brain (the cerebral hemispheres and cerebellum). Most of the time, the remaining brain tissue is exposed, with no skull or scalp to cover and protect it. Although reflex actions such as breathing and responses to touch or sound may occur, gaining consciousness is not possible. Infants with anencephaly usually do not survive more than a few days or weeks.
Meroanencephaly and holoanencephaly are terms that refer to the extent of the cranial defect; however, they typically are not used in clinical descriptions and are not predictive of the severity of the condition. The term acrania has been used interchangeably with anencephaly in some parts of the world, but that practice is discouraged because it conflates two very different conditions.
NIH/National Institute of Neurological Disorders and Stroke
P.O. Box 5801
Bethesda, MD 20824
NIH/National Institute of Child Health and Human Development
31 Center Dr
Building 31, Room 2A32
Bethesda, MD 20892
Birth Defect Research for Children, Inc.
976 Lake Baldwin Lane
Orlando, FL 32814
MUMS National Parent-to-Parent Network
150 Custer Court
Green Bay, WI 54301-1243
Genetic and Rare Diseases (GARD) Information Center
PO Box 8126
Gaithersburg, MD 20898-8126
Infants Remembered In Silence, Inc. (IRIS)
101 Third Street NW
Faribault, MN 55021
Fetal Hope Foundation
9786 S Holland Street
Littleton, CO 80127
Share Pregnancy & Infant Loss Support, Inc.
402 Jackson Street
Saint Charles, MO 63301
For a Complete Report
This is an abstract of a report from the National Organization for Rare Disorders (NORD). A copy of the complete report can be downloaded free from the NORD website for registered users. The complete report contains additional information including symptoms, causes, affected population, related disorders, standard and investigational therapies (if available), and references from medical literature. For a full-text version of this topic, go to MyD-H, the Dartmouth-Hitchcock patient portal. You must be a registered MyD-H user for the Lebanon, Manchester, or Nashua locations to access this site.
The information provided in this report is not intended for diagnostic purposes. It is provided for informational purposes only. NORD recommends that affected individuals seek the advice or counsel of their own personal physicians.
It is possible that the title of this topic is not the name you selected. Please check the Synonyms listing to find the alternate name(s) and Disorder Subdivision(s) covered by this report
This disease entry is based upon medical information available through the date at the end of the topic. Since NORD's resources are limited, it is not possible to keep every entry in the Rare Disease Database completely current and accurate. Please check with the agencies listed in the Resources section for the most current information about this disorder.
For additional information and assistance about rare disorders, please contact the National Organization for Rare Disorders at P.O. Box 1968, Danbury, CT 06813-1968; phone (203) 744-0100; web site www.rarediseases.org or email email@example.com
Last Updated: 8/1/2012
Copyright 1988, 1989, 1990, 1992, 1999, 2002, 2009, 2012 National Organization for Rare Disorders, Inc.
Healthwise, Healthwise for every health decision, and the Healthwise logo are trademarks of Healthwise, Incorporated.
https://bblu.org/2014/09/18/happening/
For over 100 years, the Planck Length was virtually ignored. That length was so small, it seemed meaningless.¹ Nothing and nobody could measure it. It was just a ratio of known constants. Yet, it created a conceptual limit of a length which gave a New Orleans high school geometry class a goal or a boundary beyond which they did not have to go. Recent measurements from the Hubble telescope provided the upper limit so this class could define the number of base-2 exponential notations from the smallest measurement of a length to the Observable Universe, the largest.
Within that continuum everything can be placed in a mathematical and geometric order. Everything. That is, everything in the known universe. The most remarkable discovery was that it took no more than 205.1 base-2 exponential notations. It would be our very first view of an ordered universe. And, it readily absorbed all of our worldviews.
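For readers who want to check that count, take the CODATA value of the Planck Length (about 1.616 × 10⁻³⁵ m) and a commonly cited diameter of the Observable Universe (about 8.8 × 10²⁶ m); neither number is stated explicitly in this post, so treat the following as a sketch under those assumptions. The number of base-2 notations is a single logarithm:

```latex
N = \log_{2}\frac{d_{\text{universe}}}{\ell_{P}}
  \approx \log_{2}\frac{8.8 \times 10^{26}\ \text{m}}{1.616 \times 10^{-35}\ \text{m}}
  \approx \log_{2}\left(5.4 \times 10^{61}\right)
  \approx 205.1
```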
That was December 19, 2011. Formally dubbed "The Big Board – little universe," we then asked, "What does it mean? How do we use it?" When we engaged the experts, they appeared a bit puzzled and seemed to be asking, "Why haven't we seen this chart before?" Those who knew Kees Boeke's 1957 book, Cosmic View, asked, "How is it different from Boeke's work using base-10 exponential notation?" That was a challenge. Our best answers to date – it's more granular, it mimics chemical bonding and cellular reproduction, and it's based on cascading, embedded, and combinatorial geometries – were not good enough. In April 2012 even the Wikipedia experts protested: one editor (Steven Johnson, MIT) classified our analysis as "original research," and within a very short time our Wikipedia article was taken down. Others called the work idiosyncratic (John Baez, UC-Riverside), but they did not tell us what was wrong with our analysis.
“Let’s just make as many observations as we can to see what can we learn?” A NASA senior scientist and a French astrophysicist helped us with our calculations. Their results gave us a range; the low was 202.34 notations and the high, 205.11. We could identify many things between the 66th notation and the 199th notation. But, there were blanks everywhere so we got busy speculating about them. The biggest group of empty notations was from 2 to about 65. We asked, “Conceptually, what could be there?” Max Planck may have given us a clue when in 1944, in a speech in Florence, Italy; he said, “All matter originates and exists only by virtue of a force which brings the particle of an atom to vibration and holds this most minute solar system of the atom together. We must assume behind this force the existence of a conscious and intelligent mind. This mind is the matrix of all matter.” (The Nature of Matter, Archiv zur Geschichte der Max-Planck-Gesellschaft, Abt. Va, Rep. 11 Planck, Nr. 1797, 1944) Matrix is a good word. Throughout history others have described it as the aether, continuum, firmament, grid, hypostases, plenum and vinculum.
We made two columns and within the top notations, 100-to-103, we found humanity. That seemed politically incorrect until we discovered the cosmological principle that the universe is isotropic and homogeneous. So, if it is true for us, it would also have to be true for “everybody” anywhere in the universe.
This is high school. We had been following embedded geometries, particularly the tetrahedron and octahedron. We observed a tetrahedral-octahedral-tetrahedral chain. In no more than 206 layers everything in the universe is bound together. We learned about tilings and could see that the four hexagonal plates we discovered within the octahedron also created tiles in every possible direction.
“What is this all about? Just what’s happening here?”
We knew we were imposing a certain continuity and order with our mathematics (base-2 exponential notation), and we were also conveying certain simple symmetries and relations with our geometries. That wrapped our work within a conceptual framework that was quite the opposite of the chaotic world of quantum mechanics. Our picture of the known universe was increasingly intimate and warm; it was highly-ordered and had immediate value. And the more we looked at it, the more it seemed that all of science and life had an inherent valuation structure. Here numbers became the container for time, and geometries the container for space. How each was derived became our penultimate challenge. Ostensibly we had backed into a model of the universe and somehow we began to believe that if we could stick with it long enough, it just might ultimately give us some answers to the age-old question, “What is life?”
We had strayed quite far from those tedious chapters in our high school geometry textbook. Yet, we also quickly discovered how little we knew about basic structure when we attempted to guess about the transitions from one notation to the next. We asked, “How can we get from the most-simply defined structure, a sphere, to a sphere with a tetrahedron within it?” We needed more perspective.
“Who is doing this kind of work?” We began our very initial study of the Langlands Program and amplituhedrons. Then, we walked back through history, all the way to the ancient Greeks and we found strange and curious things all along the way. There were the circles of Metatron that seemed to generate the five platonic solids. “How does that work? Are there experts who use it? How?” We still do not have a clue. All the discussions about infinitesimals seemed to come to a crescendo with the twenty-year, rancorous debate between Thomas Hobbes and John Wallis. It was here that we began to understand how geometry lost ground to calculus and algorithms.
The Big Board-little universe was awkward to use. It was five feet tall and a foot wide. Using the Periodic Table as a model, an 8½-by-11 chart was created and quickly dubbed, The Universe Table. It would be our Universe View into which we could hopefully incorporate any worldview. It was an excellent ordering and valuation system.
Though the Planck Length became a natural unit of measurement, a limit based on known universal constants, it wasn't until Frank Wilczek of MIT opened the discussion that things really began to change. Pivotal was an obscure 1965 paper by C. Alden Mead that took the Planck Length seriously as a fundamental limit. In 2001, Wilczek's analysis of Mead's work and their ensuing dialogue were published in Physics Today. Wilczek, well on his way to a Nobel Prize, then wrote several provocative articles under the title Scaling Mt. Planck. Even his books were helpful. In January 2013 he personally encouraged us on our journey.
In 1899 Max Planck began his quest to define natural units. At that time he took some of the constants of science and he started figuring out natural limits based on them. There are now hundreds that have been defined. Each is a ratio and each can be related to our little chart and big board. The very nature of a ratio seems to be a special clue. It holds a dynamic tension and suggests that the relation is primary and all else is derivative.
We have a lot of work in front of us! And, we are up for the challenge.
Who would disagree with the observation that our world has deep and seemingly unsolvable problems? The human future has become so problematical and complex, proposals for redirecting human energies toward basic, realizable, and global values appear simplistic. Nevertheless, the need for such a vision is obvious. Rational people know that there is something profoundly missing. So, what is it? Is it ethics, morality, common sense, patience, virtues like charity, hope and love? We have hundreds of thousands of books, organizations and thoughtful people who extol all of these and more. The lists are robust. The work is compelling, but obviously none of it is quite compelling enough.
First, it has to be simple. Our chart is simple.
Second, it has to open up to enormous complexities. Using simple math, by the tenth notation there are 1,024 vertices. We dubbed it the Forms, or Eidos, after Plato. The 20th notation would add a million vertices; we called it Structure. The 30th adds a billion new vertices. We ask, "Why not Substances?" The 40th adds a trillion, so we think Qualities. The 50th adds a quadrillion vertices. We speculate Relations. By the 60th notation there are no less than 2 quintillion vertices in total with which to create complexity. We speculate Systems, and within Systems there could be The Mind. As if a quintillion vertices were not enough, the great physicist Freeman Dyson advises us that we should really be multiplying by 8, not by 2, so the potential complexity could be exponentially greater.
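The doubling arithmetic above is easy to check. A minimal C sketch, assuming the post's rule that each notation contributes twice as many new vertices as the one before, tabulates the vertices added at every tenth notation along with the running total:

```c
#include <stdio.h>

/* Tabulate the post's doubling rule: notation k contributes 2^k new
   vertices, so the running total after notation k is 2^(k+1) - 2. */
int main(void)
{
    unsigned long long total = 0;
    for (int k = 1; k <= 60; k++) {
        unsigned long long added = 1ULL << k;   /* 2^k new vertices */
        total += added;
        if (k % 10 == 0)
            printf("notation %2d: %20llu added, %20llu total\n",
                   k, added, total);
    }
    return 0;
}
```

Running it reproduces the figures quoted above: 1,024 new vertices at the 10th notation, roughly a million at the 20th, and a cumulative total just over 2.3 quintillion by the 60th.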
Third, it should be elegant. There is nothing more elegant than complex symmetries interacting dynamically to create special harmonies. We can feel it. And we believe the Langlands program and amplituhedrons will help us to further open that discussion.
What is life? Let us see if we can answer very basic questions about the essence of life for a sixth grade advanced-placement science class and for very-average, high-school students. These are our students. The dialogue is real. The container for these questions and answers is base-2 exponential notation from the Planck Length to the Observable Universe. To the best of our knowledge, December 19, 2011 was the first time base 2 exponential notation was used in a classroom as the parameter set to define the universe. Though our study at that time was geometry, this work was then generalized to all the scientific disciplines, and more recently it was generalized to business and religion. So, as of today, readers will see, and possibly learn, the following:
1. See the totality of the finite, highly-ordered, profoundly inter-related, very-small universe where humanity is quite literally back in the middle of it all.
2. Engage in speculations about the Infinite and infinity whereby the Creative and the Good take a prominent place within the universal constructs of Science.
3. Extend the scale of the universe by redefining the Small Scale and engaging in speculations about the deep symmetries of nature, giving the Mind its key role within Systems, and demonstrating the very nature of homogeneity and isotropy.
4. Adopt an integrated universe view based on Planck Length and Planck Time such that Science, Technology, Engineering and Mathematics are demythologized, new domains for research are opened, and philosophies and religions are empowered to be remythologized within the constraints of universals and constants.
People ask, “Aren’t you getting ahead of yourself? Isn’t this a bit ambitious?” The concepts of space and time raise age-old questions about who we are, where we have come from, and where we are going. With our little formulation, still in its infancy, we are being challenged to see life more fully and more deeply. And so we reply, “What’s wrong with that?”
1 http://www.phys.unsw.edu.au/einsteinlight/jw/module6_Planck.htm Physics professor Joe Wolfe (UNSW, Australia) says, "Nothing fundamentally changes at the Planck scale, and there's nothing special about the physics there, it's just that there's no point trying to deal with things that small. Part of why nobody bothers is that the smallest particle, the electron, is about 10^20 times larger (that's the difference between a single hair and a large galaxy)."
https://www.geologypage.com/2018/03/sediment-core-from-sluice-pond-contains-evidence-for-1755-new-england-earthquake.html
Signs of a 1755 earthquake that was strong enough to topple steeples and chimneys in Boston can be seen in a sediment core drawn from eastern Massachusetts’ Sluice Pond, according to a new report published in Seismological Research Letters.
Katrin Monecke of Wellesley College and her colleagues were able to identify a layer of light brown organic-rich mud within the core, deposited between 1740 and 1810, as a part of an underwater landslide, possibly unleashed by the 1755 Cape Ann earthquake.
The Cape Ann earthquake is the most damaging historic earthquake in New England. While its epicenter was probably located offshore in the Atlantic, the shaking was felt along the North American eastern seaboard from Nova Scotia to South Carolina. Based on contemporary descriptions of damage from Boston and nearby villages, the shaking has been classified at modified Mercalli intensities of "strong" to "very strong" (VI–VII), meaning that it would have caused slight to moderate damage to ordinary structures.
New England is located within a tectonic plate, so “it is not as seismically active as places like California, at an active tectonic plate margin,” said Monecke. “There are zones of weakness mid-plate in New England and you do build up tectonic stress here, you just don’t build it up at the same rate that would occur at a plate boundary.”
With few faults to study, however, researchers like Monecke and her colleagues are looking for signs of seismically-induced landslides or the deformation of soft soils to trace the historic and prehistoric record of earthquakes in the region.
Monecke hopes that the new Sluice Pond core will give seismologists a way “to calibrate the sedimentary record of earthquakes in regional lakes,” she said.
“It is important to see what an earthquake signature looks like in these sediments, so that we can start looking at deeper, older records in the region and then figure out whether 1755-type earthquakes take place for example, every 1000 years, or every 2000 years,” Monecke added.
The researchers chose Sluice Pond to look for signs of the Cape Ann earthquake for a variety of reasons. First, the lake is located within the area of greatest shaking from the 1755 event, “and we know from other studies of lakes that have been carried out elsewhere that you need intensities of approximately VII to cause any deformation within the lake sediments,” Monecke said.
Sluice Pond also has steep sides to its center basin, which would make it susceptible to landsliding or underwater sliding during an earthquake with significant shaking. The deep basin with a depth of close to 65 feet also harbored a relatively undisturbed accumulation of sediments for coring.
Through a painstaking analysis of sediment size and composition, pollen and plant material and even industrial contaminants, the research team was able to identify changes in sediment layers over time in the core. The light brown layer deposited at the time of the Cape Ann quake caught their eye, as it contained a coarser mix of sediments and a slightly different mix of plant microfossils.
“These were our main indicators that something had happened in the lake. We saw these near shore sediments and fragments of near-shore vegetation that appear to have been washed into the deep basin,” by strong shaking, said Monecke.
In an interesting twist, land clearing by early settlers from as far back as 1630 may have made the underwater slopes more susceptible to shaking, Monecke said. Sediment washed into the lake from cleared land loads up the underwater slopes and makes them more prone to failure during an earthquake, she noted.
For that reason, the sediment signature linked to prehistoric earthquakes may look a little different from that seen with the Cape Ann event, and Monecke and her colleagues are hoping to sample even older layers of New England lakes to continue building their record of past earthquakes.
The research team is taking a closer look at a more famous New England body of water: Walden Pond. “It got slightly less ground shaking [than Sluice Pond] in 1755, but it might have been affected by a 1638 earthquake in southern New Hampshire,” Monecke explained. “We already have sediment cores from that lake, and now we are unraveling its sedimentary history and trying to get an age model there as well.”
K. Monecke et al. The 1755 Cape Ann earthquake recorded in lake sediments of eastern New England: An interdisciplinary paleoseismic approach. Seismological Research Letters, 2018 DOI: 10.1785/0220170220
Note: The above post is reprinted from materials provided by Seismological Society of America.
http://www.infoplease.com/encyclopedia/people/james-thomas-english-navigator-explorer.html
James, Thomas, 1593?–1635?, English navigator and explorer (1631) of James Bay. Financed by Bristol merchants, he sailed in command of the Henrietta Maria in the spring of 1631 to find the Northwest Passage to the East. Having explored James Bay (the south extension of Hudson Bay), which was named for him, he wintered on Charlton Island, and in the summer of 1632 continued his attempt to find the passage, a quest that Luke Fox was also undertaking independently (1631). Upon his return to England, James wrote his Strange and Dangerous Voyage (1633), which was later to have a strong influence on the poet Samuel Taylor Coleridge.
See R. B. Bodilly, The Voyage of Captain Thomas James (1928); C. M. MacInnes, Captain Thomas James and the North West Passage (1967).
The Columbia Electronic Encyclopedia, 6th ed. Copyright © 2012, Columbia University Press. All rights reserved.
http://www.thelinuxblog.com/linux-man-pages/7/pipe
PIPE(7) Linux Programmer's Manual
NAME
pipe - overview of pipes and FIFOs
DESCRIPTION
Pipes and FIFOs (also known as named pipes) provide a unidirectional interprocess communication channel. A pipe has a read end and a write end. Data written to the write end of a pipe can be read from the read end of the pipe.
A pipe is created using pipe(2), which creates a new pipe and returns two file descriptors, one referring to the read end of the pipe, the other referring to the write end. Pipes can be used to create a communication channel between related processes; see pipe(2) for an example.
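As a concrete sketch (this example is not part of the original page, but follows the pattern the page describes), the following program creates a pipe, forks, and passes a short message from child to parent. Note the close(2) calls on each process's unused end; as explained below, closing unused descriptors is what makes end-of-file and SIGPIPE delivery work correctly:

```c
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/wait.h>

int main(void)
{
    int fds[2];          /* fds[0] is the read end, fds[1] the write end */
    char buf[64];
    ssize_t n;

    if (pipe(fds) == -1) {
        perror("pipe");
        exit(EXIT_FAILURE);
    }

    switch (fork()) {
    case -1:
        perror("fork");
        exit(EXIT_FAILURE);
    case 0:                       /* child writes, so close the read end */
        close(fds[0]);
        write(fds[1], "hello", 5);
        close(fds[1]);            /* reader will now see end-of-file */
        _exit(EXIT_SUCCESS);
    default:                      /* parent reads, so close the write end */
        close(fds[1]);
        n = read(fds[0], buf, sizeof(buf));
        if (n > 0)
            printf("parent read %zd bytes: %.*s\n", n, (int)n, buf);
        close(fds[0]);
        wait(NULL);
    }
    return 0;
}
```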
A FIFO (short for First In First Out) has a name within the file system (created using mkfifo(3)), and is opened using open(2). Any process may open a FIFO, assuming the file permissions allow it. The read end is opened using the O_RDONLY flag; the write end is opened using the O_WRONLY flag. See fifo(7) for further details. Note: although FIFOs have a pathname in the file system, I/O on FIFOs does not involve operations on the underlying device (if there is one).
I/O on Pipes and FIFOs
The only difference between pipes and FIFOs is the manner in which they are created and opened. Once these tasks have been accomplished, I/O on pipes and FIFOs has exactly the same semantics.
If a process attempts to read from an empty pipe, then read(2) will block until data is available. If a process attempts to write to a full pipe (see below), then write(2) blocks until sufficient data has been read from the pipe to allow the write to complete. Non-blocking I/O is possible by using the fcntl(2) F_SETFL operation to enable the O_NONBLOCK open file status flag.
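A minimal sketch of that fcntl(2) idiom (the helper name is illustrative, not from the manual):

```c
#include <fcntl.h>

/* Switch an existing pipe or FIFO descriptor to non-blocking mode.
   Returns 0 on success, -1 on error (errno is set by fcntl). */
int set_nonblocking(int fd)
{
    int flags = fcntl(fd, F_GETFL);      /* read the current status flags */
    if (flags == -1)
        return -1;
    return fcntl(fd, F_SETFL, flags | O_NONBLOCK) == -1 ? -1 : 0;
}
```

After this call, read(2) on an empty pipe and write(2) on a full pipe fail immediately with EAGAIN instead of blocking.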
The communication channel provided by a pipe is a byte stream: there is no concept of message boundaries.
If all file descriptors referring to the write end of a pipe have been closed, then an attempt to read(2) from the pipe will see end-of-file (read(2) will return 0). If all file descriptors referring to the read end of a pipe have been closed, then a write(2) will cause a SIGPIPE signal to be generated for the calling process. If the calling process is ignoring this signal, then write(2) fails with the error EPIPE. An application that uses pipe(2) and fork(2) should use suitable close(2) calls to close unnecessary duplicate file descriptors; this ensures that end-of-file and SIGPIPE/EPIPE are delivered when appropriate.
It is not possible to apply lseek(2) to a pipe.
Pipe Capacity
A pipe has a limited capacity. If the pipe is full, then a write(2) will block or fail, depending on whether the O_NONBLOCK flag is set (see below). Different implementations have different limits for the pipe capacity. Applications should not rely on a particular capacity: an application should be designed so that a reading process consumes data as soon as it is available, so that a writing process does not remain blocked.
PIPE_BUF
POSIX.1-2001 says that write(2)s of less than PIPE_BUF bytes must be atomic: the output data is written to the pipe as a contiguous sequence. Writes of more than PIPE_BUF bytes may be non-atomic: the kernel may interleave the data with data written by other processes. POSIX.1-2001 requires PIPE_BUF to be at least 512 bytes. (On Linux, PIPE_BUF is 4096 bytes.) The precise semantics depend on whether the file descriptor is non-blocking (O_NONBLOCK), whether there are multiple writers to the pipe, and on n, the number of bytes to be written:
- O_NONBLOCK disabled, n <= PIPE_BUF
- All n bytes are written atomically; write(2) may block if there is not room for n bytes to be written immediately
- O_NONBLOCK enabled, n <= PIPE_BUF
- If there is room to write n bytes to the pipe, then write(2) succeeds immediately, writing all n bytes; otherwise write(2) fails, with errno set to EAGAIN.
- O_NONBLOCK disabled, n > PIPE_BUF
- The write is non-atomic: the data given to write(2) may be interleaved with write(2)s by other processes; the write(2) blocks until n bytes have been written.
- O_NONBLOCK enabled, n > PIPE_BUF
- If the pipe is full, then write(2) fails, with errno set to EAGAIN. Otherwise, from 1 to n bytes may be written (i.e., a "partial write" may occur; the caller should check the return value from write(2) to see how many bytes were actually written), and these bytes may be interleaved with writes by other processes.
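Because of that last case, writers of buffers larger than PIPE_BUF should loop on the return value of write(2) rather than assume the whole buffer was transferred. A minimal sketch, with an illustrative helper name not taken from the manual:

```c
#include <errno.h>
#include <unistd.h>

/* Write all n bytes of buf to fd, resuming after partial writes.
   Returns 0 on success, -1 on unrecoverable error. */
int write_all(int fd, const char *buf, size_t n)
{
    while (n > 0) {
        ssize_t w = write(fd, buf, n);
        if (w == -1) {
            if (errno == EINTR)
                continue;         /* interrupted by a signal; just retry */
            if (errno == EAGAIN)
                continue;         /* O_NONBLOCK set; real code would poll(2) */
            return -1;
        }
        buf += w;                 /* advance past the bytes already written */
        n   -= (size_t)w;
    }
    return 0;
}
```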
Open File Status Flags
The only open file status flags that can be meaningfully applied to a pipe or FIFO are O_NONBLOCK and O_ASYNC.
Setting the O_ASYNC flag for the read end of a pipe causes a signal (SIGIO by default) to be generated when new input becomes available on the pipe (see fcntl(2) for details). On Linux, O_ASYNC is supported for pipes and FIFOs only since kernel 2.6.
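Enabling O_ASYNC involves two fcntl(2) calls: one to name the process that should receive the signal, and one to set the flag itself. A hedged sketch (the helper name is illustrative, and a SIGIO handler is assumed to have been installed separately with sigaction(2)):

```c
#include <fcntl.h>
#include <unistd.h>

/* Request SIGIO delivery to the calling process when input becomes
   available on the read end of a pipe (for pipes/FIFOs: Linux 2.6+). */
int enable_sigio(int readfd)
{
    if (fcntl(readfd, F_SETOWN, getpid()) == -1)  /* who gets the signal */
        return -1;
    int flags = fcntl(readfd, F_GETFL);
    if (flags == -1)
        return -1;
    return fcntl(readfd, F_SETFL, flags | O_ASYNC);
}
```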
Portability notes
On some systems (but not Linux), pipes are bidirectional: data can be transmitted in both directions between the pipe ends. According to POSIX.1-2001, pipes only need to be unidirectional. Portable applications should avoid reliance on bidirectional pipe semantics.
SEE ALSO
dup(2), fcntl(2), open(2), pipe(2), poll(2), select(2), socketpair(2), stat(2), mkfifo(3), epoll(7), fifo(7)
http://www.sacbee.com/opinion/op-ed/article94558767.html
In 18th-century America, colonial society and Native American society sat side by side. The former was buddingly commercial; the latter was communal and tribal. As time went by, the settlers from Europe noticed something: No Indians were defecting to join colonial society, but many whites were defecting to live in the Native American one.
This struck them as strange. Colonial society was richer and more advanced. And yet people were voting with their feet the other way.
The colonials occasionally tried to welcome Native American children into their midst, but they couldn’t persuade them to stay. Benjamin Franklin observed the phenomenon in 1753, writing, “When an Indian child has been brought up among us, taught our language and habituated to our customs, yet if he goes to see his relations and make one Indian ramble with them, there is no persuading him ever to return.”
During the wars with the Indians, many European settlers were taken prisoner and held within Indian tribes. After a while, they had plenty of chances to escape and return, and yet they did not. In fact, when they were “rescued,” they fled and hid from their rescuers.
Sometimes the Indians tried to forcibly return the colonials in a prisoner swap, and still the colonials refused to go. In one case, the Shawanese Indians were compelled to tie up some European women in order to ship them back. After they were returned, the women escaped the colonial towns and ran back to the Indians.
Even as late as 1782, the pattern was still going strong. Hector de Crèvecoeur wrote, “Thousands of Europeans are Indians, and we have no examples of even one of those aborigines having from choice become European.”
I first read about this history several months ago in Sebastian Junger’s excellent book “Tribe.” It has haunted me since. It raises the possibility that our culture is built on some fundamental error about what makes people happy and fulfilled.
The native cultures were more communal. As Junger writes, “They would have practiced extremely close and involved child care. And they would have done almost everything in the company of others. They would have almost never been alone.”
If Colonial culture was relatively atomized, imagine American culture of today. As we've gotten richer, we've used wealth to buy space: bigger homes, bigger yards, separate bedrooms, private cars, autonomous lifestyles. Each individual choice makes sense, but the overall atomizing trajectory sometimes seems to backfire. According to the World Health Organization, people in wealthy countries suffer depression at as much as eight times the rate of people in poor countries.
There might be a Great Affluence Fallacy going on – we want privacy in individual instances, but often this makes life generally worse.
Every generation faces the challenge of how to reconcile freedom and community – “On the Road” versus “It’s a Wonderful Life.” But I’m not sure any generation has faced it as acutely as millennials.
In the great American tradition, millennials would like to have their cake and eat it, too. A few years ago, Macklemore and Ryan Lewis came out with a song called “Can’t Hold Us,” which contained the couplet: “We came here to live life like nobody was watching/I got my city right behind me, if I fall, they got me.” In the first line they want complete autonomy; in the second, complete community.
But, of course, you can’t really have both in pure form. If millennials are heading anywhere, it seems to be in the direction of community. Politically, millennials have been drawn to the class solidarity of the Bernie Sanders campaign. Hillary Clinton – secretive and a wall-builder – is the quintessence of boomer autonomy. She has trouble with younger voters.
Professionally, millennials are famous for bringing their whole self to work: turning the office into a source of friendships, meaning and social occasions.
I’m meeting more millennials who embrace the mentality expressed in the book “The Abundant Community,” by John McKnight and Peter Block. The authors are notably hostile to consumerism.
They are anti-institutional and anti-systems. “Our institutions can offer only service – not care – for care is the freely given commitment from the heart of one to another,” they write.
Millennials are oriented around neighborhood hospitality, rather than national identity or the borderless digital world. “A neighborhood is the place where you live and sleep.” How many of your physical neighbors know your name?
Maybe we’re on the cusp of some great cracking. Instead of just paying lip service to community while living for autonomy, I get the sense a lot of people are actually about to make the break and immerse themselves in demanding local community movements. It wouldn’t surprise me if the big change in the coming decades were this: an end to the apotheosis of freedom; more people making the modern equivalent of the Native American leap.
http://bhhslc9a.blogspot.com/2016/12/historical-fiction-in-writ-lit.html
Each day, students have 30 minutes of class time to enjoy reading their books. As they go, students sticky-note their thoughts, observations about the characters, and questions about historical references.
Students have also been categorizing their sticky notes as literal, inferential, or critical, helping them to better understand the different levels of thinking involved when we interpret texts.
On any given day, students can be found reading in various spots around the room. On the left, Antigona decided she was most comfortable at the high top table, whereas Noah wanted to read on the floor. There is great energy in a room full of readers (even if no one is speaking)!
The second part of each class period gives students time to build a warehouse of resources that help them understand the historical context of their novels, events they are also studying in World History. So far, we have studied how the migration of the Germanic Tribes affected the evolution of the English language and how the Edict of Expulsion explains the beginnings of the Reconquista in Spain. Next? The Bubonic Plague!
We also used our very own BHHS statues of the Knight and the Baron to give us insight into creating historical fiction. Below, York and Amari decide on the best angle from which to photograph the Knight for their story, and to the right you can see part of Georgia's story about the Baron.
Here's what the students have to say about our unit so far:
https://www.smithsonianmag.com/smart-news/humans-and-dogs-may-have-hunted-together-prehistoric-jordan-180971291/
When and where dogs came to be domesticated is a subject of scientific debate, but there is a wealth of research that attests to the long, intertwined history of humans and their best animal buddies. One theory about the early origins of this relationship posits that dogs were used to help early humans hunt. And, as Ruth Schuster reports for Haaretz, a new study suggests that this may have been the case among prehistoric peoples of what is now Jordan.
A team of archaeologists from the University of Copenhagen and University College London studied a cache of animal bones at an 11,500-year-old settlement called Shubayqa 6, which is classified as “Pre-Pottery Neolithic A,” or belonging to the first stage of Neolithic culture in the Levant. In the Journal of Anthropological Archaeology, the researchers write that they found bones from a canid species, though they could not identify which one because the remains were poorly preserved. They also unearthed the bones of other animals that had been butchered. But perhaps most intriguing were the bones of animals—like gazelle, for instance—that bore clear signs of having passed through a digestive tract.
These bones were too big for humans to have swallowed, leading the researchers to surmise that they "must have been digested by dogs," says lead study author Lisa Yeomans, a zooarchaeologist at the University of Copenhagen. And the researchers don't think this was a case of wild carnivores sneaking into the settlement to grab a bite.
For one, archaeological evidence indicates that Shubayqa 6 was occupied year-round, suggesting that “dogs were allowed to freely roam around the site picking over the discarded waste, but also defecating in the vicinity of where humans were inhabiting,” the study authors write.
There was also a noticeable surge in hare bones around the time that dogs started to appear at the site, and the researchers think this may be because the dogs were helping humans hunt small prey. Previously, the people of Shubayqa 6 might have relied on tools like netting to catch hares and other animals, says Yeomans, but it wouldn’t have been very effective. Dogs, on the other hand, could selectively target elusive prey.
Humans and dogs thus appear to have forged a reciprocal relationship in Jordan more than 11,000 years ago. There is in fact evidence to suggest that dogs were domesticated by humans in the Near East as early as 14,000 years ago, and some of that evidence seems to point to dogs being used during hunts. Rock art from a site near Shubayqa, for instance, seems to show dogs driving gazelle into a trap.
In light of such archaeological finds, “it would be strange not to consider hunting aided by dogs as a likely explanation for the sudden abundance of smaller prey in the archaeological record," Yeomans says. Among the ancient peoples of Jordan, in other words, the complex history of dog domestication may have been well underway.
https://www.scientificamerican.com/article/scientists-sequence-genom/
Writing in Nature, a team of more than 150 scientists describes the genome of Plasmodium falciparum, a parasite that causes malaria. The analysis, which took six years to complete, identified 14 chromosomes containing almost 5,300 genes, including nearly 200 that produce proteins to help P. falciparum evade the body's defense mechanisms. A better understanding of their functions may point to potential new targets for antimalarial drugs.
Because transmission of malaria requires a mosquito vector, controlling or killing the insects is another route to disease control. To that end, the work published in Science could help. A consortium of researchers led by Celera Genomics has sequenced the DNA of Anopheles gambiae, the primary species of mosquito that transmits malaria to humans. According to the report, the genome is 278 million bases long and contains almost 14,000 genes. The scientists have started the daunting task of identifying their functions. In particular, they investigated which genes were turned on or off when female mosquitoes feed on blood. "Those are the pathways that are likely to be useful in finding points of intervention for developing new insecticides or transmission-blocking vaccines," says lead study author Robert A. Holt of Celera. "I think the most important thing the genome will facilitate in the immediate future is understanding the molecular basis of resistance to insecticides, and finding new insecticide targets."
Additional research published in both journals has shed further light on both P. falciparum and mosquito genetics. Laurence J. Zwiebel of Vanderbilt University and his colleagues pinpointed 276 genes that are critical to A. gambiae's sensory systems, which it uses to identify its human prey. If they can identify those that the mosquito uses to smell humans, Zwiebel says, it could be possible to create new repellants against the pests. Kenneth Vernick of New York University School of Medicine and his colleagues have discovered genes in natural populations of A. gambiae that confer resistance to the malarial parasite, allowing the insect to act as a vector while not succumbing to the disease.
Researchers with the World Health Organization's Special Program for Research and Training in Tropical Diseases in Switzerland describe the new findings, particularly the genome sequences, as a breakthrough for public health and "a major contribution to efforts to combat malaria and other mosquito-borne diseases." Despite these encouraging words, however, actually using the information to save lives will most likely require more money than the $200 million currently available each year for malaria research, scientists say. The required funds will near several billion dollars a year for a generation or so, writes Jeffrey D. Sachs of the Earth Institute of Columbia University in a commentary in Science. Although that amount may seem daunting, he concludes, it "will be a very small price to pay for millions of lives saved per year and for hundreds of millions of people to be given the chance to escape from the vicious cycle of poverty and disease."
http://searchworks.stanford.edu/view/10559457
The importance of using primary sources in social studies, K-8 : guidelines for teachers to utilize in instruction
Author: Bukowiecki, Elaine M., 1947-
Imprint: Lanham : Rowman & Littlefield
Physical description: xiii, 239 pages : illustrations ; 24 cm
Call number: LB1584 .B77 2014
Note: Includes bibliographical references.
Contents: Preface -- Introduction -- Part One: Primary Sources for Social Studies Instruction -- Chapter 1: Why Include Primary Sources in Social Studies Instruction -- Chapter 2: Exploring Primary Print Sources: Examining Authentic -- Chapter 3: The Visual Aspects of Social Studies: Investigating -- Part Two: Implementing Primary Sources in the Social Studies Classroom -- Chapter 4: Primary Sources for Personal Discovery: Exploring One's Own Community to Discover the Past and to Appreciate the Present -- Chapter 5: Connecting the Past to the Present: Employing Primary Sources to Understand Daily Events -- Chapter 6: Culminating Research Project: Applying Social Studies -- Afterword: Why Should I Include Primary Sources in My Social Studies Instruction? -- About the Author -- Appendix A: The Themes of Social Studies (National Curriculum Standards for Social Studies, 2010) -- Appendix B: Standards for English Language Arts and Literacy in History/Social Studies, Science, and Technical Subjects, K-5 (Common Core State Standards, 2010) -- Appendix C: Standards for English Language Arts, 6-12 (Common Core State Standards, 2010). (source: Nielsen Book Data)
Publisher's summary: This two-part book provides teachers in kindergarten through grade eight with a valuable resource on how to include primary sources in a social studies curriculum alongside a required social studies textbook. The first section of this book contains descriptions, with relevant examples, of primary documents and authentic artifacts that are appropriate for incorporation into social studies classrooms. In the second part of this book, the application of primary sources for specific social studies instruction is presented. This book specifically presents ways to use primary sources as a means to explore the community where the students reside, to make connections to past and present events, and to research a specific change agent in a particular place. Each chapter contains: questions and pedagogical strategies for critically reading, viewing, and responding to varied authentic artifacts; techniques for interacting with primary materials; modifications to meet the needs of diverse learners; assessment techniques; information tied to technology and the "new literacies"; and connections to the National Curriculum Standards for the Social Studies (2010) and the Common Core State Standards (2010). (source: Nielsen Book Data)
Statement of responsibility: Elaine M. Bukowiecki.
https://www.dana.org/News/Details.aspx?id=42998
How do the brain’s sensory regions adapt when input signals are cut off? How swiftly do deprived neurons become receptive to alternative inputs? And how does this adaptability—or “plasticity”—change as the brain grows older? Such questions have been contentious among neuroscientists, many of whom have theorized that neural plasticity greatly weakens with age. Now a new study hints that the brain is wired for a very fast-acting type of plasticity, even in adulthood.
The research, published in the July 15 Journal of Neuroscience, was prompted by an earlier evaluation of a man who had suffered a stroke. The loss of blood flow in his brain had destroyed nerve fibers that send visual information from his retina to his primary visual cortex. Daniel Dilks, then a doctoral student at the Massachusetts Institute of Technology, led a study showing that six months after the stroke, the information-deprived neurons in the man’s primary visual cortex had begun to respond to inputs from adjacent visual neurons instead. The “blind” part of the visual field thus displayed information from nearby, non-blind fields, resulting in a measurable distortion of perceived images.
Having developed the visual distortion measurement technique as a way to study plasticity in the cortex, Dilks, now a postdoctoral researcher, decided to find out how quickly this plasticity could manifest itself. “Does it in fact take six months for your cortex to change, or might it happen within hours?” he remembers asking.
To test the issue, Dilks and his colleagues exploited a quirk of vision involving the retina’s “blind spot,” where the bundled optic nerves depart the eye for the primary visual cortex, creating an area of the retina with no light-receptive cells. The cortical neurons corresponding to this “blind” part of the visual field, however, normally and seamlessly fill the perceptual gap by taking their input from a portion of the other eye’s visual field.
Dilks found that when he put an eye patch over one eye of a research subject, that person would experience an image distortion similar to that seen in the stroke patient. The subject’s “blind spot” neurons would lose their usual inputs, would start to take their inputs from adjacent neurons instead, and thus would effectively stretch a perceived image into their part of the visual field, turning a simple shape such as a square into an apparent rectangle.
Expecting this distortion to manifest only after some hours, as the neurons adjusted to the deprivation of their usual input stream, Dilks put eye patches on each of 48 volunteers and began perceptual tests to find the earliest point at which the distortion could be detected. “To my surprise, it wasn’t hours—it was seconds,” he says.
Too quick to be nerve growth
Such a rapid adjustment suggests that the blind field neurons were not relying on the extension of new connections to their neighbors, which would have required weeks of nerve growth. Instead they were strengthening pre-existing connections as needed on an almost instantaneous basis. “So at least part of this adult cortical change involves the changing of connections,” Dilks says.
Mark Huebener, who researches cortical plasticity at the Max Planck Institute of Neurobiology outside Munich, suggests that the fast adaptation seen in the Dilks study could represent “the unmasking of already present inputs, which were otherwise inhibited.”
Dilks plans to study the phenomenon further with functional magnetic resonance imaging, or fMRI, and he is working on an fMRI system with fine enough resolution to distinguish, for example, activity in the ordinary blind-spot portion of the primary visual cortex from activity in adjacent regions. So far the imaging isn’t sensitive enough to separate the two groups of neurons, he says. “But I’m not willing to give that [fMRI] up.”
Dilks also plans to do a follow-up study in children, to see whether the strength of their visual distortion in the blind-spot test is greater than that measured in adults—as one might expect from the theory that younger brains are more plastic. “Am I uncovering something that is left over from development,” he asks, “or is this something about the adult cortex?”
|
{
"palladium_score": 3.6562395095825195,
"timestamp": "2026-01-18T07:32:15.870030",
"source": "Palladium-STEM (Preview)"
}
|
https://www.drugs.com/cg/hepatitis-c-in-children.html
|
This material must not be used for commercial purposes, or in any hospital or medical facility. Failure to comply may result in legal action.
Hepatitis C In Children
WHAT YOU NEED TO KNOW:
What is hepatitis C?
Hepatitis C is inflammation of the liver caused by hepatitis C virus (HCV) infection. Hepatitis C is less common in children than in adults.
How is HCV spread?
Babies are usually infected during birth. Adolescents are usually infected through injecting drugs, sharing needles, or having unprotected sex with an infected person. The following may also increase your child's risk:
- A stick from an infected needle
- An object with infected blood or body fluids on it touches a wound
- Sharing personal items, such as razors, toothbrushes, or nail clippers with someone who has hepatitis C
- Rarely, a blood transfusion, organ transplant, or long-term kidney dialysis
What are the signs and symptoms of hepatitis C?
Your child may not have symptoms. If symptoms develop, he may have any of the following:
- Dark urine or pale bowel movements
- Jaundice (yellow skin or eyes) and itchy skin
- Joint pain, body aches, or weakness
- Loss of appetite, nausea, or vomiting
How is hepatitis C diagnosed?
If your child was infected during birth, healthcare providers will wait until he is at least 18 months old to check for HCV. This is because his body will have HCV antibodies from his mother. Your healthcare provider will ask about your child's signs and symptoms and any health problems he has. Tell him if your child has other infections, such as HIV or hepatitis B. Tell him if your adolescent drinks alcohol or uses any illegal drugs. He may also ask about your adolescent's sex partners. After 18 months of age, your child may need any of the following:
- Blood tests are used to check for HCV antibodies made by your child's body to fight the infection. The tests can show the type of HCV your child has, and how many viruses are present. This will help your child's healthcare provider make a treatment plan if needed.
- An ultrasound is used to check for liver problems caused by HCV.
- A liver biopsy is used to test a sample of your child's liver for swelling, scarring, and other damage. A liver biopsy may help healthcare providers learn if your child needs treatment for HCV.
How is hepatitis C treated?
HCV may go away without treatment when it is passed from a mother to her baby during birth. Your child may not need treatment if his body fights the HCV. His infection will be chronic if it has not gone away by the time he is 2 years old. Children younger than 3 years usually do not receive treatment. Your child's healthcare provider will talk to you about any treatments your child or adolescent may need. Medicines may be given to keep the virus from spreading. Medicines may also prevent or decrease liver swelling and damage. Rarely, surgery may be needed to replace your child's liver with a healthy liver.
What can I do to help prevent the spread of HCV?
- Have your child cover any open cuts or scratches. If blood from your child's wound gets on a surface, clean the surface with bleach right away. Put on gloves before you clean. Throw away any items with blood or body fluids on them, as directed by your healthcare provider.
- Do not let your child share personal items. These items include toothbrushes, nail clippers, and razors.
- Talk to your adolescent about safe sex. If your adolescent is sexually active, tell him to use a condom during sex. Sexually active girls should have their male partners wear a condom.
- Tell household members that your child has hepatitis C. They may need to be tested for HCV. Regular handwashing is important for your child and everyone who lives with him. Everyone should wash after the bathroom and before eating. Ask your healthcare provider if you should tell childcare providers or school officials that your child has hepatitis C.
- Protect your baby. Ask your healthcare provider if it is safe for you to breastfeed.
- Do not let your child donate blood. Donations are checked for HCV, but it is best not to donate.
What are the risks of hepatitis C?
Your child's risk for liver damage is increased if he has chronic hepatitis C. He may develop cirrhosis (scarring of the liver) when he is older. He may also develop liver cancer. Your child may need to be treated in a hospital if his symptoms are severe or he has liver damage.
What can I do to manage my child's hepatitis C?
- Talk to your child's healthcare provider about vaccines. He will need hepatitis A and B vaccines if he has not received them. He should also get the flu vaccine each year.
- Offer a variety of healthy foods. Healthy foods include fruits, vegetables, low-fat dairy products, beans, and lean meats and fish. Ask if your child needs to be on a special diet.
- Have your child drink extra liquids. Liquids help the liver function properly. Ask your child's healthcare provider how much liquid your child needs each day and which liquids are best for him.
- Help your child get more rest. Have your child slowly return to his normal activities when he feels better.
- Talk to your adolescent about not drinking alcohol. Alcohol can increase liver damage. Talk to your healthcare provider if your adolescent drinks alcohol and needs help to stop.
- Talk to your adolescent about not smoking. Nicotine can damage blood vessels and make it more difficult to manage hepatitis C. Smoking can also lead to more liver damage. Ask your healthcare provider for information if your adolescent currently smokes and needs help to quit. E-cigarettes or smokeless tobacco still contain nicotine. Talk to your healthcare provider before your adolescent uses these products.
When should I seek immediate care?
- Your child has severe abdominal pain.
- Your child is too dizzy to stand up.
- Your child feels confused or is very sleepy.
- Your child's bowel movements are red or black, and sticky.
- Your child vomits blood or material that looks like coffee grounds.
- Your child is vomiting and cannot keep food or liquids down.
When should I contact my child's healthcare provider?
- Your child has a fever.
- Your child's abdomen or legs have a rash or are swollen.
- Your child is bruising easily.
- You have questions or concerns about your child's condition or care.
Care Agreement
You have the right to help plan your child's care. Learn about your child's health condition and how it may be treated. Discuss treatment options with your child's caregivers to decide what care you want for your child. The above information is an educational aid only. It is not intended as medical advice for individual conditions or treatments. Talk to your doctor, nurse or pharmacist before following any medical regimen to see if it is safe and effective for you.
http://environmentalscience.oxfordre.com/abstract/10.1093/acrefore/9780199389414.001.0001/acrefore-9780199389414-e-7?rskey=4KiBP5&result=1
Summary and Keywords
The Anthropocene hypothesis—that humans have not only impacted "the environment" but have also changed the Earth's geology—has spread widely through the sciences and humanities. This hypothesis is currently being tested to see whether the Anthropocene may become part of the Geological Time Scale. An Anthropocene Working Group has been established to assemble the evidence. The decision regarding formalization is likely to be taken in the next few years by the International Commission on Stratigraphy, the body that oversees the Geological Time Scale. Whichever way the decision goes, there will remain the reality of the phenomenon and the utility of the concept.
The evidence, as outlined here, rests upon a broad range of signatures reflecting humanity's significant and increasing modification of Earth systems. These may be visible as markers in physical deposits, in the form of the greatest expansion of novel minerals in the last 2.4 billion years of Earth history and the development of ubiquitous materials, such as plastics, unique to the Anthropocene. The artefacts we produce to live as modern humans will form the technofossils of the future. Human-generated deposits now extend from our natural habitat on land into our oceans, transported at rates exceeding, by an order of magnitude, the sediment carried by rivers. That influence now extends increasingly underground in our quest for minerals, fuel, and living space, and to develop transport and communication networks. These human trace fossils may be preserved over geological durations, and the evolution of technology has created a new technosphere, yet to evolve into balance with other Earth systems.
The expression of the Anthropocene can also be seen in chemical markers in sediments and glaciers. Carbon dioxide in the atmosphere has risen by ~45 percent above pre-Industrial Revolution levels, mainly through the combustion, over a few decades, of a geological carbon store that took many millions of years to accumulate. Although this may ultimately drive climate change, average global temperature increases and resultant sea-level rises remain comparatively small, as yet. But the shift to isotopically lighter carbon locked into limestones and calcareous fossils will form a permanent record. Nitrogen and phosphorus contents in surface soils have approximately doubled through increased use of fertilizers to increase agricultural yields as the human population has also doubled in the last 50 years. Industrial metals, radioactive fallout from atomic weapons testing, and complex organic compounds have been widely dispersed through the environment and become preserved in sediment and ice layers.
Despite radical changes to flora and fauna across the planet, the Earth still has most of its complement of biological species. However, current trends of habitat loss and predation may push the Earth into the sixth mass extinction event in the next few centuries. At present the dramatic changes relate to trans-global species invasions and population modification through agricultural development on land and contamination of coastal zones.
Considering the entire range of environmental signatures, it is clear that the global, large-scale, and rapid changes associated with the mid-20th century mark the most obvious level to consider as the start of the Anthropocene Epoch.
The Nature of Geological History
Given that the Anthropocene is typically considered an interval of Earth history, or stratigraphy, rather than of human history, we first need to provide some explanation of the ground rules of stratigraphy.
The enormous duration of Earth history—in excess of four and a half thousand million years—is made tractable by means of the construction of the Geological Time Scale and of its units. This involves separating out intervals of Earth history that are distinctive because they share recognizable combinations of the Earth’s preserved biological, chemical, or physical characters.
Within stratigraphy there are two parallel means of classifying Earth history. There is a classification simply as time intervals within which certain events and processes took place (for example, the Jurassic Period). Then, there is also a parallel time-based classification of the material (stratal) record that preserves the evidence of that history (thus, the Jurassic System, comprising all the strata laid down during the Jurassic Period).
Both sets of time units are defined using a prominent and widespread environmental change. Typically, for the last half billion years of geological time, this is through recognition of the appearance of a common and representative fossil species, which broadly coincides with that environmental change. However, such a species would not have appeared (or disappeared) everywhere around the world simultaneously; hence it cannot define a single time plane, though it serves as a guide to the time boundary. Geologists circumvent the problem by selecting, at one place in the world, a single level at which this fossil first appears, and then defining this as the instant when the time interval begins. Then, geologists try to trace this level within strata all around the world, by any means possible. This single level is the well-known "golden spike," more technically, a Global Boundary Stratotype Section and Point (GSSP). Importantly, the exact level chosen remains the reference point, even if the key fossil is later found to have appeared lower down in strata (i.e., earlier) at the same location (which has indeed sometimes happened).
The Geological Time Scale is also hierarchical, with smaller units of shorter duration and less markedly distinctive character, nested within larger ones. We currently live within the Phanerozoic Eon, the fourth eon of Earth time, which started some 542 million years ago, with the defining event being the evolution of multicellular animals from the single-celled organisms that had comprised life on Earth beforehand.
Within the Phanerozoic, we live within the last of its three subdivisions, the Cenozoic Era, which began 65 million years ago, at the mass extinction event that saw the disappearance of the dinosaurs (and of many other lifeforms across the planet). This dynastic change to the Earth system was abrupt and was caused, at least in large part, by a large meteorite strike, which over large parts of the world is recorded as a thin layer with high concentrations of iridium, an element rare at the Earth's surface but common in meteorites.
Within the Cenozoic, we live in the last of its three subdivisions called the Quaternary Period (the strata of which comprise the Quaternary System). This commenced two and a half million years ago. In general, it is the time from when the world most recently entered a phase of overall bipolar glaciation. Within the Quaternary, we formally live within the Holocene Epoch. This is just the latest of some 50 marked oscillations of climate within the Quaternary Period, when the world emerged from a phase of glacial climate and high global ice volume (and therefore low sea level), into a warm (or interglacial) phase with higher sea level. Although it is just one of many interglacials, it is the one that has experienced a human population explosion, while its deposits (i.e., the Holocene Series) make up important parts of our landscapes—soils, river floodplains and coastal plains, deltas and so on.
The formal definition of the Holocene Epoch is also relevant to the consideration of the Anthropocene. The Holocene, until 2009, used to be defined numerically (as are, still, most of the geological time units prior to the Phanerozoic). Its beginning was placed at 10,000 radiocarbon years before the present (the present being defined as 1950 ce in this case). Studies of both Greenland ice cores and deep ocean sediments showed that the northern hemisphere last emerged from glacial cold into temperate warmth with extraordinary abruptness: a good deal of the change took place over a mere three years. This change can be identified in Greenland ice layers, which in this narrow time interval show chemical evidence of suddenly increased warmth and humidity (the ice layers become thicker and less dusty). Hence, the boundary level has been placed at the ice layer that shows the beginning of this change. As a historical event, it is thus located very precisely, with approximately annual resolution; however, because of the difficulty of working out exactly when this took place, there is a larger uncertainty, of a couple of centuries, in how long ago it happened. This uncertainty is expressed in the dating of the boundary in the "golden spike" ice core to 11,700 years before present (the present now being taken to be 2000 ce), plus or minus 99 years. The resultant interplay between relative and "absolute" (numerical) dating is also of significance to defining the Anthropocene.
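The interplay between the two "presents" reduces to simple arithmetic. The short sketch below (Python is used purely for illustration; the variable names are ours, and only the figures quoted above are taken from the definition) restates the conversion:

```python
# The Holocene GSSP age is quoted relative to 2000 ce ("b2k"), while the
# older radiocarbon convention anchored "before present" (BP) at 1950 ce.
HOLOCENE_AGE_B2K = 11_700   # years before 2000 ce, from the ice-core GSSP
UNCERTAINTY_YEARS = 99      # the quoted plus-or-minus uncertainty

# Calendar year of the boundary (a negative value means BCE):
boundary_year = 2000 - HOLOCENE_AGE_B2K             # -9700, i.e., ~9700 BCE

# The same age re-expressed against the older 1950 "present":
age_bp_1950 = HOLOCENE_AGE_B2K - (2000 - 1950)      # 11,650 years BP

print(f"Holocene begins ~{abs(boundary_year)} BCE, +/- {UNCERTAINTY_YEARS} years")
print(f"Equivalent to {age_bp_1950:,} years before the 1950 'present'")
```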
The Anthropocene: Historical Beginnings
We still live, formally, within the Holocene Epoch. However, the general idea that humans can cause significant change to the Earth has been circulating for a long time, since at least the Comte de Buffon's 1778 work Les Époques de la Nature. In this, Buffon divided our planet's history into seven epochs, the last and current one being "when the power of Man assisted that of Nature."
The idea resurfaced intermittently. George Perkins Marsh, the "first American conservationist," catalogued human-driven environmental change in his 1864 Man and Nature; Antonio Stoppani, in his 1871–1873 Corso di Geologia, suggested an "Anthropozoic Era," emphasizing that human activity was changing not just the Earth's present, but its future also. In the early 20th century, the Russian scientist Vladimir Vernadsky developed the concept of the Earth's biosphere and, with Édouard Le Roy and Teilhard de Chardin, of the noosphere (a "sphere of human thought" now enveloping the Earth). At the same time, the geologist Robert Sherlock took a more material approach, counting up the impressive amounts (even then) of rock and soil moved and transformed into construction materials by humans.
These early ideas were, until recently, not taken seriously by geologists. Appreciation of the very great age of the Earth rendered the human timescale almost absurdly short by comparison. And, the very great geological transformations of the past, including the growth and erosion of mountain ranges, the creation and destruction of entire ocean basins, and extraordinary volcanic outbursts and meteorite strikes, seemed to make the transformations wrought by humans both superficial and fleeting.
A wider realization that human impact could be geologically significant came with the development of what has become known as Earth system science, in which it became clear that seemingly subtle changes (for example, in the levels of the trace gases carbon dioxide and methane in the atmosphere) could nevertheless drive far-reaching changes in the Earth system, largely via changes in climate. The scale and long-term effects of changes to the Earth's biology were also becoming more widely understood, as was the scale of physical change to the Earth's surface.
Several “geological” time terms arose in the late 20th and early 21st century to express this appreciation. There was the Anthrocene of the environmental writer Andrew Revkin, the Homogenocene (to reflect the homogenization of the Earth’s biological communities via human-driven species invasions) of the zoologist John Curnutt, the Myxocene of the oceanographer Daniel Pauly (to denote the future ocean of “microbial slime and jellyfish”) and so on.
However, the term that caught on was independently created by two scientists, Eugene Stoermer and Paul Crutzen. At a meeting of the International Geosphere-Biosphere Programme (IGBP) in Mexico in 2000, Crutzen had been listening to debate in which the present state of the Earth was continually being referred to as the Holocene. At one point he lost patience and interjected that we were not in the Holocene but, improvising the word on the spot, in the Anthropocene. Much of the subsequent discussion, he recalled, revolved around this new idea.
He later researched the term, found that Stoermer had independently coined it, and wrote to him, suggesting that they publish jointly on it, which they did in the IGBP Newsletter (Crutzen & Stoermer, 2000), where this idea was first disseminated within an Earth systems science community. Subsequently, Crutzen restated the concept in Nature (Crutzen, 2002), which was when the term obtained global exposure.
From then, the term began to be used increasingly frequently in the literature, both within the earth sciences and also more widely beyond it. Commonly, it was employed simply as if it was part of the Geological Time Scale. However, the Anthropocene was (and remains) informal and it was rather vaguely defined, with a range of ideas about its duration, definition and hierarchical level, often being referred to somewhat interchangeably as either an epoch or an era—two very different things in formal stratigraphy.
Hence, the Anthropocene was discussed by the Stratigraphy Commission of the Geological Society of London, to see whether there was sufficient evidence for it to be formally considered as a time unit (Zalasiewicz et al., 2008). The majority view was that, at least on preliminary inspection, there was merit in the term, and it might be considered further with a view to eventual possible formalization. However, this national commission does not have any power over the international Geological Time Scale, which is not easily or lightly amended. As its terms underlie basic communication in the Earth sciences, there is a need for stability of nomenclature, and hence the approach taken to its modification is deliberately conservative.
In 2009, an Anthropocene Working Group, part of the Subcommission on Quaternary Stratigraphy, a component part of the International Commission on Stratigraphy (itself under the aegis of the International Union of Geological Sciences), was established to gather and consider evidence regarding the Anthropocene as a formal unit. So far, it has produced two thematic sets of papers, published by the Royal Society (Williams, Zalasiewicz, Haywood, & Ellis, 2011) and the Geological Society of London (Waters, Zalasiewicz, Williams, Ellis, & Snelling, 2014), and a number of other articles. It aims to produce a body of evidence, together with (at least interim) recommendations, by 2016.
Stratigraphic Analysis of the Anthropocene
Analysis of the Anthropocene may partly be carried out by “classical” analysis of information contained within strata, particularly where these form well-ordered successions (annual layers of snow on ice-caps, or sediment laminae in lake deposits). However, unlike exploration of older Earth histories, there is also an increasingly sophisticated and detailed observational record over the past centuries and (especially) decades. So, analysis may also involve taking “environmental” evidence and translating this into geological, and more precisely stratigraphic, terms. The analysis is rendered yet more complex both by the very short timescales involved, and also by several novel aspects of human-driven geology, that have little or no precedent in Earth history.
In outline, the evidence may be ordered as in classical stratigraphic analysis, involving physical aspects of the deposits (within lithostratigraphy), chemical signatures (chemostratigraphy), and biological patterns (biostratigraphy). These together provide the basis upon which chronostratigraphic division might be attempted (to define a recognizable and correlatable Anthropocene Series), the material equivalent of a time unit of the geochronological time scale (a putative Anthropocene Epoch).
The mineral signature
The material record of the Anthropocene consists in essence of the rock succession that will result, albeit that much of it now consists of unconsolidated sediment. Rocks and sediments are made of component minerals, and hence the mineralogical signature of the Anthropocene is a fundamental part of its characterization.
Here a deep-time context may be provided by the review of Robert Hazen and his colleagues, who proposed a pattern of mineral evolution: from the dozen or so minerals recognized in cosmic dust, diversity increased through reactions in the spinning debris disk around a young star, to the roughly 250 minerals that have been recognized in meteorites. Once a planet forms, the melting and crystallization taking place at different depths further increase the number of minerals, while on Earth the processes and diversity of mineral-forming environments associated with plate tectonics increased the number still further, to around 2,000 minerals. The origin of life, in particular the beginning of photosynthesis and the resultant oxygenation of the atmosphere (the "Great Oxygenation Event" of the early Proterozoic, about 2.4 billion years ago), roughly doubled the number of minerals, to a little in excess of 4,000, by creating a wide range of new oxides and hydroxides. Most of these minerals are extremely rare. Subsequent changes, including the origin of multicellular animals, considerably increased the diversity of mineral shape (in complex shell-related forms, for instance) but added little to the total inventory of mineral species.
Humans have added a considerable, if poorly constrained, number of minerals to the Earth's inventory, as detailed by Zalasiewicz, Kryza, and Williams (2014). Prominent among these are uncombined metals (which are rare in natural settings), including iron and steel, aluminium, titanium, copper, vanadium, and others. These have been made in very considerable amounts. For instance, some 500 million tons of aluminium have been produced globally to date, almost all since the mid-20th century, enough to completely coat the United States (and part of Canada) in standard aluminium kitchen foil. The amount of iron and steel produced has been roughly an order of magnitude greater. Other novel minerals include boron nitride, an abrasive that is harder than diamond; tungsten carbide (which makes the ball in many ballpoint pens); novel garnets (for lasers); graphene; and so on. There are also common "mineraloids," notably glasses and plastics, made in very large amounts (some 6 billion tons of plastics have been produced to date, for instance, again almost all since the mid-20th century) and widely distributed over the Earth's surface. There are also minerals that, relatively rare in nature, have now become much more widespread, such as ettringite, hillebrandite, and portlandite in cement and concrete. There are novel and widespread mineral forms too, such as fly ash (both spherical carbonaceous particles and inorganic ash spheres), dispersed since the mid-19th century from the early industrialized countries, and in greater amounts and globally since the mid-20th century.
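The kitchen-foil comparison can be sanity-checked with two assumed physical values that do not appear in the article, a typical foil thickness of roughly 16 micrometres and aluminium's density of about 2,700 kg per cubic meter. A minimal sketch:

```python
# Back-of-envelope check of the foil-coverage claim. The thickness and
# density below are assumed typical values, not figures from the article.
mass_kg = 500e6 * 1_000     # ~500 million tons of aluminium, in kilograms
density_kg_m3 = 2_700       # approximate density of aluminium
thickness_m = 16e-6         # assumed thickness of standard kitchen foil

area_km2 = mass_kg / (density_kg_m3 * thickness_m) / 1e6
print(f"Foil coverage: ~{area_km2:,.0f} km^2")  # ~11.6 million km^2

# The land area of the United States is ~9.8 million km^2, so the
# "United States and part of Canada" comparison is plausible.
```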
The scale of mineralogical novelty, the product of many active materials sciences laboratories, is increasing, but quite unknown in detail. However, the scale of this new phase of mineral evolution is almost certainly the greatest since the “Great Oxygenation Event.” The longevity of these minerals fossilized within strata is untested in detail (given that by definition they have no precise natural analogues), but it seems likely that many can survive in some form buried within strata.
The rock signature
Minerals make up rocks, and humans have made a significant addition to Earth's inventory of rock types. The most widespread and conspicuous are those associated with construction of different types: concrete, brick, cinder-block, asphalt, plaster. There are also heterogeneous "rock types" associated with garbage-dump fills, and more specific lithologies such as ceramics of different types.
Concrete is a major and distinctive component. It is a combination of cement (itself largely a combination of limestone and clay, fired to produce a mineralogy that transforms and hardens with the addition of water) and aggregate (typically sand and gravel). It is a cheap, easily moulded, and robust building material that has been used since at least Roman times (much of the Colosseum is made of it, for instance), but which in recent decades has been used in extraordinary quantities. The total amount produced worldwide is of the order of 500 billion tons, equivalent to about a kilogram for every square meter of the Earth's surface, land and sea. Of that, well over 90% has been made since the mid-20th century, and over 50% in the last couple of decades, and its production is still accelerating. Concrete blocks, invented in the 1830s, saw further innovation with the development in the 1930s of lightweight cinder (breeze) blocks incorporating fly-ash wastes, while reinforced concrete, with internal steel bars providing the reinforcement, allowed the development of tower blocks, the urban innovation of the 20th century.
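The per-square-meter figure is likewise easy to verify, taking the Earth's total surface area (an assumed standard value, roughly 5.1 x 10^14 square meters, not given in the article) as the denominator:

```python
# Checking "about a kilo for every square meter of the Earth's surface."
concrete_kg = 500e9 * 1_000    # ~500 billion tons of concrete, in kilograms
earth_surface_m2 = 5.1e14      # Earth's total surface area, land and sea

print(f"{concrete_kg / earth_surface_m2:.2f} kg per square meter")  # ~0.98
```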
Bricks are essentially metamorphosed clays with various admixtures of sand, where the heating process in kilns is taken to the point of melting. This scale of rapid heating is unusual in nature at the Earth's surface, and so bricks have a distinctive texture and mineralogy. Bricks have been made for several millennia (the early ones typically sun-dried), and their production now exceeds a trillion a year. Similar techniques were employed to produce ceramics and pottery. Important for producing food storage vessels and artistic figurines, these materials form a common component of, and are used to date, the archaeosphere (Edgeworth et al., 2015).
The global road network is about 50 million kilometers long, roughly the minimum distance from Earth to Mars. Much of this consists of graded soils, but in Europe and North America tarmacadam, an asphalt/aggregate admixture laid over an aggregate sub-base, forms the dominant road type, with some 1,600 million tonnes of asphalt produced in 2007.
A particular form of "rock" is that in garbage dumps. Typically a highly heterogeneous and poorly sorted conglomerate or breccia, its individual components reflect the human activities of the times, including unused food material, packaging, disused furniture and building material, clothes, nappies, and toys. The middens of archaeological times are often dominated by shell and bone material, while today's rubbish dumps have metals, plastics, and paper as major components (Ford, Price, Cooper, & Waters, 2014). These rock types can make up substantial strata.
Minerals and rocks are arranged into strata of various types and geometrical shapes, and the classification of rock strata is the business of lithostratigraphy. The classification largely reflects their physical nature and mineral composition.
While this account focuses on human-influenced strata, it must be recognized that the Anthropocene (formal or informal) is simply a time unit, which will include anthropogenic strata, strata which appear “natural” but which have some anthropogenic characters on closer examination, and strata which will have little or no human influence even upon close examination. Similarly—and depending where the Anthropocene boundary is placed—strata that remain within the Holocene may include a notable anthropogenic component.
The terrestrial realm
Human influence is most visible and pronounced in the terrestrial realm and humans have modified the ground surface in various ways: to provide habitation, to enable transport, to grow crops, to extract resources, to create agreeable (to us) landscapes in peacetime and degraded ones in war. These surface alterations almost always include a third dimension, hence a subsurface component.
Therefore these anthropogenic surfaces may be underlain by what geologists term as artificial ground, and what has recently been termed the archaeosphere (human-disturbed ground, often including artefacts, that lies above the “natural”) within the archaeological community (Edgeworth et al., 2015). There are also soils of various kinds, which are strongly modified by humans in agricultural and urban settings, and where (particularly for the latter) the term “anthrosols” has been proposed.
Artificial ground of various sorts is common to ubiquitous in urban areas, and increasingly figures on geological maps because of its importance to engineers and environmental planners. It may be meters to tens of meters thick, and in general it is voluminous. In the United Kingdom alone, almost a billion tons of rock and soil are moved annually by human activities and the scale of this activity worldwide is now greater than that of natural processes of erosion and sedimentation—perhaps by as much as an order of magnitude.
These heterogeneous strata can be classified in great detail but there are some main general categories, recognized by such organizations as the British Geological Survey (Ford et al., 2014): made ground is material dumped on the surface, as in raised embankments; worked ground is simply excavations in the ground, such as quarries and road cuttings; infilled ground represents holes in the ground, that have been wholly or partly infilled, such as quarries that have become landfill sites; disturbed ground is rock that has been involved, say, in a zone of collapse around an old underground mine working; and landscaped ground is a general term used where it is difficult to distinguish the various other individual categories, such as is the case beneath much urban housing.
As well as deliberately moved rock and soil, and the soil moved in agriculture, there is the human impact of increased sediment transport in rivers in response to deforestation and introduction of agriculture; this has been evident for thousands of years and has increased with time. However, the great number of dam schemes across major rivers, most constructed since the 1940s (Steffen, Broadgate, Deutsch, Gaffney, & Ludwig, 2015) has caused large volumes of sediment to accumulate behind the dams (Syvitski & Kettner, 2011). There are more subtle effects associated with changes in land use. For instance, in the Fenlands of eastern England, there used to be a surface peat stratum up to at least four meters thick, extending over something like 2000 square kilometers. This has almost completely disappeared, mostly since the 19th century, by deflation (windblow) and oxidation following drainage of the ground.
The human-modified areas have deep roots. Extending beneath urban, agricultural, and what may be otherwise considered "wild" landscapes are foundations and tunnels, underground wires and pipes, and mines and boreholes used for the extraction of resources including metals, coal, oil and gas, and water. These may be regarded as analogous to animal burrows, but whereas nonhuman animals typically burrow to a maximum of only a few meters depth, human burrowing—that has been termed anthroturbation—extends to several kilometers depth (Zalasiewicz, Waters, & Williams, 2014). It is extensive; oil boreholes alone in the United States have been calculated to total some 5 million kilometers in length, and a total for the world might reasonably be 50 million kilometers, roughly the same as the total length of the surface road network. There are substantial amounts of other boreholes, too—for water, mineral exploration, and for scientific purposes like those of the Ocean Drilling Program. Down one particular type of borehole, atomic bombs were lowered to be detonated as tests. Between 1951 and 1998, some 1,500 underground nuclear explosions were detonated, producing large (up to hundreds of meters across) masses of radioactive breccia and melt rock up to two kilometers underground.
The scale and nature of this anthroturbation is geologically novel, and these traces, being well below surface erosion processes, will persist for many millions of years in the rock mass. Stratigraphically, however, it is not simple, as these structures always cross-cut older rocks, and hence behave geologically a little like the sheets of magma that inject along fractures (igneous dykes) underground.
Coastal lithostratigraphic signals
A range of specific human-driven lithostratigraphic signals are present around coastlines, partly because of the concentration of human habitation around the shore, where coastal plains and deltas have for centuries offered fertile ground with access to marine resources and communication. Phenomena here (often generated or accelerated in recent times) include various harbor and breakwater structures and, more extensively, meters-scale subsidence caused by groundwater and hydrocarbon extraction, by sediment loading beneath buildings, and by sediment starvation resulting from the building of dams upriver. Deltas worldwide, notably the Mississippi, Ganges, and Yangtze, have consequently shown a pattern of subsidence that broadly began in the 1930s, at a rate that greatly outpaces current relative sea-level rise associated with global warming (Syvitski & Kettner, 2011).
Other major changes resulting in large anthropogenic stratal bodies are the building up of shorelines in land reclamation, including, in a number of cases, the use of garbage as fill material. Well-known examples include Palm Island and Hong Kong airport (Syvitski & Kettner, 2011), while a less well-known but even larger-scale example is the building of a sea wall along the Chinese coastline that will ultimately be 11,000 kilometers long, dwarfing the examples in, for instance, the Netherlands. These structures are not only very large rock bodies in themselves, but they also heavily modify the nature of sedimentation both in front of and behind them.
The continental shelf and slope
The undersea realm has a far shorter history of direct human perturbation than the terrestrial realm. Physical anthropogenic change in this realm now includes such engineered structures as oil platforms and pipelines.
The most widespread impact is bottom trawling, the dragging of weighted nets along the sea floor to capture bottom-living fish, shrimp, and other edible marine creatures. This has been practiced in some shallow near-shore waters since at least the 14th century, but with the advent of powered fishing-boats has now impacted on most continental shelves, in recent decades extending into continental slopes down to depths of one kilometer. The effect is similar to that of plowing on land, and can be similarly destructive to native biota. The most obvious examples of major perturbation are the trawling of slow-growing deep-water coral stands, which are devastated by the practice. Elsewhere, the repeated scraping of the sea floor can smooth topography and redistribute sediment, typically releasing plumes of fine sediment that drift off into deeper water, leaving a reworked surface layer (often with a coarsening-upward pattern) in which the biological composition shifts toward those species that are tolerant of (or even favored by) repeated disruption.
Deep ocean anthropogenic signature
The deep ocean has so far been affected by little in the way of direct sediment redistribution or the siting of engineered structures. However, the deep-sea sediments, which range from relatively rapidly accumulated turbidite deposits to very slowly accumulating pelagic oozes, have been increasingly affected by the addition of anthropogenic debris of different types. Scattered shipwrecks have been landing on the sea floor since humans began seafaring, but their number has been increasing in tandem with the development of global trade, some 90 percent of which by volume is now transported by sea. There is material simply dropped overboard, and this garbage signature has changed as technology has changed. One characteristic element is the "trackways" of coal clinker along the routes of the old steamships. More modern, and more widely dispersed, elements are tins and glass bottles and (since the mid-20th century) plastic, ranging in size from microplastic particles (now recognized to be almost ubiquitous in modern marine sediments) to larger objects, not least discarded fishing nets. As exploitation of the marine realm continues, it seems likely that characteristic new strata will be formed in the near future, such as those resulting from the extraction of deep-sea manganese nodules across wide areas of the abyssal plains.
Chemostratigraphy in geology is a rapidly developing tool that can help correlate strata of all ages across wide areas by means of particular features of chemical composition. It may involve both organic and inorganic components, and a particularly widely used aspect is ratios of isotopes of various elements (such as carbon, oxygen, nitrogen, neodymium) because these can be expressed and maintain their pattern in rocks of widely different contents of the elements involved—and also because these isotopic ratios often reflect fundamental changes to the Earth system. A number of chemical changes and patterns are associated with the Anthropocene, because of the extent of human modification of surface processes, and some of these chemical signals may prove crucial in defining this time unit.
There are a number of surface-related reservoirs of carbon, including living organisms, soil, permafrost, the atmosphere, ocean waters (a much bigger reservoir than the atmosphere), and sub-seafloor sediments. Carbon is cycled through surface and subsurface environments on various time scales. Short-term cycling is associated with photosynthesis and respiration, long-term cycles with carbon burial in strata and exhumation, and yet longer ones with carbon being taken into the Earth's interior via subduction zones and released by volcanism. Levels of the greenhouse gas carbon dioxide in the atmosphere have been a primary determinant of climate over geological time, as exemplified by the variations in its levels between ~180 ppm in glacial phases of the Quaternary and ~260–280 ppm in interglacial phases. The bulk of this gas is taken into the deep ocean in glacial phases and released at the beginning of interglacial phases, with a close correlation of global temperature and carbon dioxide levels observed in ice cores (Wolff, 2014).
Human perturbation of the carbon cycle, largely by the burning of fossil fuels, has seen atmospheric carbon dioxide levels rise from ~275 ppm at the start of the Industrial Revolution to ~400 ppm today, with the bulk of the rise since the mid-20th century (Steffen et al., 2015). Although this can be directly seen in ice cores, a much more widespread (and permanent) signal in rock and fossil material reflects the input into the atmosphere of isotopically light carbon (i.e., with a greater proportion of 12C to 13C) derived from hydrocarbons. This produces a large, rapid negative (i.e., light) carbon isotope shift in, for example, the shells of foraminifera (common marine protozoans that secrete a calcium carbonate shell and so provide a widely used palaeoenvironmental archive in Cenozoic deposits). This shift already amounts to some 2 permil (parts per thousand). Geologically significant, this isotope anomaly is similar to (if still smaller than) the negative carbon isotope anomaly produced in an ancient carbon release/global warming event at the end of the Paleocene and beginning of the Eocene epoch, 55 million years ago, and used to define the base of the latter. The developing carbon isotope anomaly for the Anthropocene can form an equally striking marker.
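For readers unfamiliar with the notation, permil shifts of this kind are conventionally expressed in delta notation relative to a reference standard. The sketch below assumes the approximate 13C/12C ratio of the VPDB standard; the sample ratios are illustrative, not measured values from the article:

```python
# Delta notation: delta13C = (R_sample / R_standard - 1) * 1000 permil,
# where R is the 13C/12C ratio. The reference value below is approximate.
R_STANDARD = 0.0112372  # assumed 13C/12C ratio of the VPDB standard

def delta13C(r_sample: float) -> float:
    """Carbon isotope composition in permil relative to the standard."""
    return (r_sample / R_STANDARD - 1) * 1000

# A 2 permil negative excursion corresponds to a tiny change in raw ratio:
r_pre = R_STANDARD                  # notionally unshifted carbonate
r_post = R_STANDARD * (1 - 2e-3)    # after a 2 permil negative shift

print(f"pre: {delta13C(r_pre):+.1f} permil, post: {delta13C(r_post):+.1f} permil")
```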
A 1610 dip in atmospheric CO2 has been linked to depopulation of about 50 million people following colonization of the Americas and has been proposed as a potential signature for the start of the Anthropocene (Lewis & Maslin, 2015).
There are other effects of the carbon dioxide input, including acidification of the oceans (Tyrrell, 2011) that also shows a marked change since 1950 (Steffen et al., 2015). This can produce physical effects (the acidification event at the Paleocene-Eocene boundary literally dissolved large areas of the ocean floor to produce a carbonate gap), but is of more significance as regards biological effects to organisms, including reef corals, that secrete skeletons of calcium carbonate.
Nitrogen and phosphorus
The doubling of the surface reservoir of reactive nitrogen, largely caused by the production of nitrogenous fertilizers from atmospheric nitrogen via the Haber-Bosch process, is a significant event in geological history. The relative scale is difficult to quantify, but this perturbation has been regarded by some scientists as the greatest change to the nitrogen cycle since the Great Oxygenation Event of the early Proterozoic, 2.4 billion years ago. Its reflection in strata is not straightforward, but a generally consistent change in nitrogen isotope composition has been detected in lakes far from direct human activity (Wolfe et al., 2013), presumably via long-distance transport of nitrogen-containing aerosols (which also fertilized the lakes to produce a change in the assemblages of diatoms—microscopic single-celled plants that secrete a siliceous shell). These changes began to appear after 1850, but accelerated in the mid-20th century, concomitant with the great expansion of nitrogenous fertilizers (Steffen et al., 2015), and have been suggested as providing the basis for a "golden spike" for the Anthropocene (Wolfe et al., 2013). Elevated ammonium concentrations are also found in mid-latitude glaciers in response to agricultural emissions, and in Greenland nitrate levels rose by a factor of two during the 20th century, mainly between 1950 and 1980, to levels higher than at any time in the previous 100,000 years (Wolff, 2014).
The content of phosphorus in surface soils has also roughly doubled, though in this case through the mining of phosphorus from geological sources. Although a consistently detectable stratal "phosphorus anomaly" has not been reported, one effect of the increases in both nitrogen and phosphorus has been the creation and growth of "dead zones" in the ocean (Tyrrell, 2011). These are the result of runoff of excess fertilizers via rivers into shallow, poorly circulating coastal waters. The fertilizers stimulate plankton blooms which, upon dying, sink to the sea floor and decay, using up dissolved oxygen and suffocating bottom-living faunas; these are generally seasonal kills, as autumn and winter bring storms which stir oxygen back into the waters (hence the surviving biota is that best adapted to rapid recolonization). Currently there are about 400 dead zones in the world, covering in total an area of some 250,000 square kilometers, the best known being in the Baltic Sea, the Gulf of Mexico, and Chesapeake Bay on the eastern seaboard of the United States. Although these phenomena are not yet on the scale of anoxic events of the geological past, further atmosphere/ocean warming will cause the seas to become more strongly thermally stratified, and so more prone to oxygen deprivation.
The importance of metals to the technology of human civilization means that there has been considerable "selective erosion" of them by mining, to bring them from subsurface (often deep subsurface) levels to the surface. Although most of these metals have been processed into artifacts of different kinds, the overspill from the mining, smelting, and production processes has spread metal-rich plumes into waters, soils, and near-shore marine sediments. Working out the precise scale of these local metal anomalies is not straightforward, as pre-disturbance background levels need first to be evaluated, but clear enrichments in the environmental levels of lead, cadmium, mercury, and other metals have been widely recognized in industrialized regions (Gałuszka, Migaszewski, & Zalasiewicz, 2014). More widely, aerosols (particularly of lead, from smelting and formerly from lead additives in fuel) have changed the composition of peat bogs, glaciers, and icecaps, and these changes may readily be detected. Indeed, various sources of lead have been discriminated in these stratigraphic archives by means of lead isotope ratios, and these patterns are significant to defining and recognizing the Anthropocene.
In addition to the many thousands of new solid mineral forms that human industry has created, there are many thousands of compounds, notably complex organic compounds, which have been created for various purposes and have subsequently been dispersed through the environment. Among these are what have been termed "persistent organic pollutants" (POPs), which degrade only slowly in the natural environment (some with half-lives of decades or more), are only weakly soluble in water, and tend to bind to sediment, especially clay particles. Hence, rather than simply traveling with water through the surface and subsurface environment, they can be preserved within sediment layers as an archive of the history of the arrival of these POPs into the local environment. Lake sediments are among the best of these archives, though these compounds have also been detected in sediments within rivers, estuaries, and seas.
POPs include organochlorine pesticides such as aldrin, dieldrin, and DDT, and industrial chemicals such as the polychlorinated biphenyls (PCBs) and dibenzofurans. In parts of the world these chemicals were only used for a few decades, before adverse environmental effects caused them to be banned, while elsewhere their use persisted. Hence they can provide a complex stratigraphic pattern involving their invention, more or less widespread use, and termination of input through legal ban or obsolescence.
Stratigraphical analyses of POPs have come a distant second to environmental monitoring studies, but the research carried out to date has shown that, superimposed on geographical variability, many of the commoner and more distinctive POPs appear from the mid-20th century. How long will this signal last? This will clearly vary from compound to compound, and these being novel compounds there are no direct analogues. Nevertheless, some organic compounds can persist for millions of years in strata essentially unaltered, such as the "TEX" long-chain alkanes used for palaeotemperature analysis of Cenozoic ocean floor strata.
The explosion of the first nuclear (“Trinity”) test bomb at Alamogordo, New Mexico, on July 16, 1945, began the dispersal of novel radionuclides into the surface environment (Zalasiewicz et al., 2015). This early test, and the only two (thus far) wartime uses, at Hiroshima and Nagasaki, Japan, only produced local effects (including the beginning of the formation of a fused radioactive sand rock type, trinitite, around the test site). However, many subsequent tests, from the early 1950s until the comprehensive test ban treaty in 1996 (albeit still not ratified), saw the global dispersal of these radionuclides worldwide, mainly in mid-latitudes, but with clearly detectable amounts in low-latitudes and both the Greenland and Antarctica icecaps (Waters et al., 2015). Other sources of widespread radioactive pollution include the nuclear accidents at Windscale (now Sellafield, Cumbria, U.K.) in 1957, Chernobyl, Ukraine, in 1986 and Fukushima, Japan, in 2011, and also events such as the fall of the SNAP 9 satellite off Mozambique in 1964 and the Soviet Kosmos 954 satellite over Canada in 1978, scattering radioactive debris.
The novel radionuclides involved—including caesium, americium, and plutonium—have different half-lives, the longest being that of plutonium 239, the signal of which can remain detectable for ~100,000 years (Waters et al., 2015). There was also significantly increased production of 14C above its natural abundance, a signal that was absorbed within many carbon reservoirs including wood and shell and bone material, to form another clear nuclear “spike” that will endure roughly half as long as will the plutonium signal.
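The differing persistence of these signals follows from the standard exponential decay law. The half-lives below are standard reference values, and real detectability horizons also depend on measurement thresholds and background levels; a minimal sketch:

```python
# Why the 14C "bomb spike" fades much sooner than the plutonium signal:
# after each half-life, half of the remaining inventory decays away.
def fraction_remaining(t_years: float, half_life_years: float) -> float:
    """Fraction of an initial radionuclide inventory left after t years."""
    return 0.5 ** (t_years / half_life_years)

HALF_LIFE_PU239 = 24_100  # years (standard value)
HALF_LIFE_C14 = 5_730     # years (standard value)

for t in (10_000, 50_000, 100_000):
    pu = fraction_remaining(t, HALF_LIFE_PU239)
    c14 = fraction_remaining(t, HALF_LIFE_C14)
    print(f"after {t:>7,} yr: 239Pu {pu:6.2%} left, 14C {c14:.2e} left")
```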
This artificial radionuclide signal is not problem-free. Radionuclides can migrate within some sediments, rather than staying adsorbed to the sediment lamina in which they were deposited. And, particularly in deep ocean settings, the radionuclides can make a long (decades-long) journey before eventually settling to the sea floor. In deep-sea settings, too, the burrowing activity of sea floor organisms can disrupt the primary order of these slowly deposited sedimentary layers. Nevertheless, this particular chemical signal has a strong claim to be regarded as a primary marker of the Anthropocene, and there have been suggestions that the boundary may be placed either at the moment of the Trinity test in 1945 (Zalasiewicz et al., 2015), at the beginning of the main global "bomb spike" in the early 1950s (Waters et al., 2015), or at its peak in 1964 (Lewis & Maslin, 2015). If one of these is chosen, then the decision will reflect the total ensemble of stratigraphic signals at least as much as the precise pattern of the radioactive signal itself.
The complex evolution of organisms, both single-celled and multicellular, has provided the main means of defining and using the Geological Time Scale in the Phanerozoic, within the strata of which fossils are generally abundant. However, in applying this technique to the Anthropocene, there are a number of reasons why "classical" biostratigraphy is difficult to apply. First, there is the short time scale of the Anthropocene, measured in thousands of years at most, compared with the millions of years over which normal processes of evolution and extinction take place. Then, there is the difficulty of comparing data collected by biologists and ecologists regarding the recent and current history of Earth's biota with the kind of criteria used by palaeontologists, who deal mainly with skeletal remains. And, there are some quite novel aspects of Earth's biological history that need to be taken into account in this analysis.
As regards gaining an idea of the Earth prior to human modification, a baseline state might be best represented by the last interglacial phase, 125,000 years ago. Considering the baseline state is not simple, as overall the Quaternary has been a ~2.6-million-year interval of considerable oscillatory climate change between glacial and interglacial states, with the glacial phases in particular (that make up the bulk of the time) showing complex and dynamic change. Nevertheless, it is notable that most of the Quaternary does not show a particularly elevated rate of either extinctions or evolution, suggesting that life overall had adapted to repeated climate change. It is only late in the Pleistocene that significant biotic change appears, suggesting that the human factor became significant from this time.
Following the unremarkable evolutionary pattern of most of the Quaternary, the late Pleistocene, from ~50,000 years ago to the early Holocene, saw waves of extinctions of large mammal species, with most such species (other than those in Africa) becoming extinct (Barnosky et al., 2011). The species affected included the mammoth, sabre-toothed cat, ground sloth, and woolly rhinoceros, and there has been considerable debate about whether climate/environmental change or human hunting was the cause. However, there was commonly a close link between the arrival of humans in any geographic region and subsequent extinctions, suggesting that "overkill" by humans often played a large or crucial role.
Following this, extinction continued through the Holocene, although not on such a dramatic scale. Nevertheless, it is clear that many species became extinct (particularly on hitherto isolated islands) in part because of predation by humans, in part because of competition from or predation by associated invasive species (see below) such as rats and cats and in part because of habitat loss as natural habitat was converted to farmland or urban areas.
Extinctions have accelerated in recent decades, and there has been considerable debate about whether the Earth is entering, or has entered, another major mass extinction (it would be the sixth recognized) in Earth history (Barnosky et al., 2011). The consensus seems to be that this mass extinction has not yet happened as, in most major groups of organisms, the number of species known to be extinct is of the order of 1 percent. However, current rates of extinction are far above background levels (perhaps by up to three orders of magnitude), and the number of species known to be endangered or critically endangered (i.e., in very low numbers) is, within many different major groups, of the order of a few to several tens of percent. With current "business as usual" trends of predation and habitat loss, a mass extinction on the scale of the "big five" is thought likely within two to three hundred years (Barnosky et al., 2011), even without considering the additional effects of climate change.
While species extinctions are not yet on a major scale, other biological changes are on a considerably greater scale. Species invasions, for instance (the invaders also termed neobiota), already comprise a widespread and (uniquely, in Earth history) global phenomenon. Homo sapiens, of course, is the invasive species par excellence, living on every continent—even Antarctica—and having reached, and mostly occupied, virtually every island on the planet. With humans have come a range of other species, either by design (pigs, goats, cats, rabbits) or accidentally (most famously, rats). For vascular plants, although native losses are great over at least half of the ice-free land surface, plant species counts have mostly been enriched, because species invasions exceed native losses (Ellis, Antill, & Kreft, 2012). Species have now been translocated between every continent and every ocean, and invasive species now commonly form up to a half (locally more) of the species complements of many regions—with particularly successful invasives often dominating assemblages (the name Homogenocene having been suggested by John Curnutt to reflect this phenomenon). The global character of this process is unique: previous invasions were confined to continents that became geographically conjoined (notably, the Americas some 7 million years ago), or to the oceans on either side of a landmass that separated, allowing their species to mingle. Such invasions can increase local biodiversity (as well as reducing it, by causing native species to become extinct), even while the total biodiversity of the Earth is undergoing reduction. As with extinctions, the effects are essentially permanent, as it is the successfully translocated and surviving species that will comprise the biology (and the palaeontology) of the future.
Part of the reason that so many species are in low numbers is the appropriation of a large proportion (currently ~40%) of primary productivity to support our own species. This is another unique signal of the Anthropocene. The distortion of the pre-human ecological pattern may be exemplified by land vertebrates. Prior to the late Quaternary megafaunal extinctions, resources were shared among some 350 large vertebrate species. Currently, about 180 of these still exist. Among these, considering just body mass, humans now make up about one-third. Most of the other two-thirds is distributed among those few vertebrate species that we keep to eat (cows, pigs, sheep, goats, and so on), and these have been heavily modified by selective breeding. Less than 5%, and probably less than 3%, is distributed among the wild large vertebrates of the world: elephants, rhinoceroses, hippopotami, lions and tigers, and others. In a further distortion, the total large vertebrate mass has been increased by about an order of magnitude over geological baseline levels, because of agriculturally hyper-fertilized primary productivity (via nitrogen and especially phosphorus; see above) that is now fed efficiently to our preferred prey species.
Researchers examining overall plant and animal communities speak of them in terms of biomes: large areas with particular patterns of ecosystems determined largely by climate. The increasing human influence on terrestrial areas has led to the concept of anthromes (Ellis et al., 2012), where these primary patterns have been transformed into human-dominated agricultural (dominated by a few selectively bred and genetically modified primary crops) and urban systems (of which the nonhuman biology is often largely invasive). The extent of this transformation means that it is no longer accurate to say that humans have created a variety of anthromes that are nested within the regional biomes; rather, with only a few percent of pristine landscape left, it is more appropriate to say that there are now patches of more or less undisturbed biome left within anthrome-dominated terrestrial biology.
Humans, uniquely among land vertebrates, have also changed the trophic structure of the oceans. Increasingly effective and widely applied fishing methods have reduced the numbers of top predators and those just below them in the food web—whales, sharks, tuna, and others—with most populations now decreased by one to two orders of magnitude, and some (such as the Newfoundland cod) having undergone even greater population crashes. With the main targets thus diminished, fishing effort is becoming focused on lower levels of the trophic structure—“fishing down the food chain.”
Within nearshore settings, including lagoons and estuaries, microflora such as diatoms and dinoflagellates, and microfauna such as foraminifera and ostracods, respond dramatically to human-driven stresses (Wilkinson, Poirier, Head, Sayer, & Tibby, 2014). The changes occur locally at a range of dates, but when viewed globally the population and assemblage changes are most prominent from 1940 to 1945, influenced by increases in the release of toxins and pollutants, increased runoff of agricultural fertilizers and input of sewage, changes to water acidity and oxygen levels, increased water turbidity, and salinization. However, in areas where stresses on microbiota have been reduced through environmental controls, populations show signs of recovery.
Human Trace Fossils: Technofossils and the Technosphere
Many animals leave not only body fossils, mainly of hard parts such as bones and shells, but also trace fossils, such as worm burrows and footprints. Some animals create more complex structures that are also capable of being fossilized: the casings of caddis fly larvae, or the nests of both solitary and colonial insects, some being extraordinarily sophisticated, such as termite nests. The structures created by humans (houses and factories, roads and cars, tools ranging from knives to computers) are also commonly potentially preservable, and may also be considered to be trace fossils. They have been termed technofossils (Zalasiewicz, Williams, Waters, Barnosky, & Haff, 2014), and have some unique attributes: their remarkable diversity (very many millions of types have been made, compared with the usual maximum of three or four traces made by any other species in the animal kingdom), and their rapid evolution, now on a scale of decades and years (sometimes less), which is completely decoupled from the biological evolution of the trace-maker.
They all, in one way or another, embody technology, and without this technology the Earth would not be able to support more than a small fraction of the present human population. The technology is produced by humans, but humans, being dependent on that technology, must also maintain it, and the entire technological system is now globally connected. This entire system has been termed the technosphere (Haff, 2014), an emergent system comprising both the technological objects (“hardware”) and their human organizational systems (“software”). It currently needs a great deal of energy to power it, largely from fossil fuels. It has developed from, and is perhaps now in part parasitic upon, the biosphere. It is the system behind all the environmental changes of the Anthropocene, and the future of this time interval will be determined by the nature of its further evolution. It currently has some considerable instabilities; for instance, it is extremely poor at recycling its constituent materials compared with the biosphere, and so risks being poisoned by its own waste products. But it is evolving rapidly, so time will tell.
Climate Change and the Anthropocene
There has been a rapid rise in major climate drivers (for example, carbon dioxide and methane), as noted above, and these are now outside Quaternary norms. With atmospheric carbon dioxide currently at ~400 ppm, it is at levels likely broadly typical of the Pliocene Epoch, 3 to 5 million years ago, when temperatures were 2−3 °C higher than today and sea levels 10−20 m higher.
However, there has so far been only a small rise in global average temperature, of ~0.8 °C over the past century. This is likely due to lag effects in the global system, together with the storage of heat in the oceans (which are much larger stores of heat than the atmosphere, and are measurably becoming warmer in their upper layers, with up to half of the last century’s sea level rise of ~30 cm being due to thermal expansion). Currently, therefore, the Earth is still well within the envelope of interglacial conditions as regards global temperature and sea level. Indeed, in the last interglacial, peak temperatures and sea levels were a little higher than today, without anthropogenic forcing. Nevertheless, evidence of the beginning of anthropogenic warming and climate destabilization is now clear, with the temperature rise so far virtually certain to be the result of the rise of greenhouse gases in the atmosphere. There have been clear signs too, over the last couple of decades, of increased ice melt and freshening of seawater around both Greenland and Antarctica, with ice mass loss now amounting to a few hundred billion tons each year.
Thus, unless human energy supplies rapidly become decarbonized, there will over the coming decades and centuries be rises of global temperature and then sea level that will take the Earth system out of Quaternary interglacial norms and into conditions more resembling those of the pre-Quaternary Cenozoic. The temperature changes in themselves will lead to many extinctions as species are forced out of their habitable ranges. Hence, as regards global climate and (especially) sea level, the signal of the Anthropocene currently remains weak, but it will likely increase considerably over future decades and centuries.
Synthesis, Definition, and Wider Significance
The evidence summarized above suggests that the Anthropocene hypothesis is founded upon a robust array of data indicating a major change in the Earth system (even if still larger changes lie ahead), also recorded as changes to strata, similar to signatures recorded in the geological past.
Hence, if the Anthropocene is real—how should it be defined? The boundaries of geological time units simply represent a temporal framework which captures, as well as possible, the main features of a complex and often protracted change from one state of the Earth system to another.
Three main candidate levels have been suggested for the beginning (or chronostratigraphic base) of the Anthropocene:
Firstly, an “early Anthropocene” or “Palaeoanthropocene” (Foley et al., 2013) that reflects early events, with ideas ranging from the great megafaunal extinctions to, more commonly quoted, events associated with the beginning and spread of agriculture in the early to mid Holocene, which produced significant changes to the landscape, though only marginal changes to the marine realm. Controversially, these landscape changes may have led to the slow rise in atmospheric carbon dioxide levels (from 260 to 280 ppm) through the pre-industrial Holocene, and may have prevented the slide back into a glacial phase.
Secondly, the beginning of the Industrial Revolution, when human population exceeded a billion (Steffen et al., 2015), and when the development of large-scale coal burning, steam engines, and industry began the rise in atmospheric carbon dioxide levels that continues to this day. Industrialization spread from Britain to Europe to North America between the late 18th and late 19th centuries, and subsequently more widely (Waters et al., 2014). It was this option that was favored in early descriptions of the Anthropocene by Crutzen and Stoermer (2000) and Crutzen (2002).
Thirdly, from the mid-20th century, there came a “Great Acceleration” in the scale and rate of population growth, energy use, manufacture, and habitat/biotic change, with widespread change beginning in the marine realm (Steffen et al., 2015). This was the start of the oil economy, the beginning of the nuclear age (Zalasiewicz et al., 2015; Waters et al., 2015), and the beginning of globalization, with rapid growth and increasing sophistication of the technosphere (Haff, 2014). Most of the anthropogenic rise in atmospheric carbon dioxide took place in this interval.
Other levels have been proposed, including the 16th and early 17th centuries (e.g., Lewis & Maslin, 2015)—and also a future level, once climate has warmed and sea level has risen further (Wolff, 2014), but those three remain the main candidates. Of them, the “early Anthropocene” is historically highly significant—but the processes, and stratigraphic signals, were diachronous, taking millennia to spread across those parts of the globe they affected (Edgeworth et al., 2015). The same, in a more compressed form, may be said about the Industrial Revolution. It is the mid-20th century “Great Acceleration” that represents the most widespread and synchronous or near-synchronous signals, and also the greatest changes (so far) to the Earth system, and it is likely therefore that a level some time in the mid-20th century will become accepted as the beginning of the Anthropocene (Zalasiewicz et al., 2015), whether this new time term is formalized or not.
As regards hierarchical level, the Anthropocene is currently being considered as a potential epoch, although other levels are possible. Given that it combines features that are geologically striking and completely novel (for example the whole-planet species invasions and technofossils), with others that are still trivial (e.g., sea level change), this is probably a reasonable compromise—especially given that yet larger changes seem likely.
The question of formalization of the Anthropocene will hinge as much on the perceived usefulness of having this unit on the Geological Time Scale (and for whom it is useful, given the wide interest in this concept) as on its geological reality. This is a complex question, the answer to which is hard to predict.
Nevertheless, whether formal or informal, this term and this concept have succeeded in conveying something of the overall rate and scale of global change in the context of all of Earth history, and thus in helping the analysis of this change and the handling of its human consequences. Moreover, it has helped refashion the relationship between humans and nature, in effect intertwining them so that one now cannot change without affecting the other. Thus it has also brought the sciences and humanities closer, as inquiry from both sides will be necessary to fully understand, and perhaps even direct the course of, the Anthropocene. Geologists, looking at past major phases of change to the Earth, are used to analyzing driving forces such as major volcanic outbursts and comet strikes. Here it is humans currently driving change to the Earth system: a far more difficult, and more unpredictable, phenomenon.
Barnosky, A. D., Matzke, N., Tomiya, S., Wogan, G. O. U., Swartz, B., Quental, T. B., et al. (2011). Has the Earth’s sixth mass extinction already arrived? Nature, 471(7336), 51–57.
Crutzen, P. J., & Stoermer, E. F. (2000). The Anthropocene. International Geosphere-Biosphere Programme Newsletter, 41, 17–18.
Crutzen, P. J. (2002). Geology of mankind. Nature, 415, 23.
Edgeworth, M., Richter, D. deB., Waters, C. N., Haff, P., Neal, C., & Price, S. J. (2015). Diachronous beginnings of the Anthropocene: The lower bounding surface of anthropogenic deposits. Anthropocene Review, 2(1), 33–58.
Ellis, E. C., Antill, E. C., & Kreft, H. (2012). All is not loss: Plant biodiversity in the Anthropocene. PLoS ONE, 7(1), e30535.
Foley, S., Gronenborn, D., Andreae, M., Kadereit, J., Esper, J., Scholz, D., et al. (2013). The Palaeoanthropocene: The beginnings of anthropogenic environmental change. Anthropocene, 3, 83–88.
Ford, J. R., Price, S. J., Cooper, A. H., & Waters, C. N. (2014). An assessment of lithostratigraphy for anthropogenic deposits. In C. N. Waters, J. Zalasiewicz, M. Williams, M. A. Ellis, & A. Snelling (Eds.), A Stratigraphical Basis for the Anthropocene (Special Publication 395, pp. 55–89). London: Geological Society.
Gałuszka, A., Migaszewski, Z. M., & Zalasiewicz, J. (2014). Assessing the Anthropocene with geochemical methods. In C. N. Waters, J. Zalasiewicz, M. Williams, M. A. Ellis, & A. Snelling (Eds.), A Stratigraphical Basis for the Anthropocene (Special Publication 395, pp. 221–238). London: Geological Society.
Haff, P. K. (2014). Technology as a geological phenomenon: Implications for human well-being. In C. N. Waters, J. Zalasiewicz, M. Williams, M. A. Ellis, & A. Snelling (Eds.), A Stratigraphical Basis for the Anthropocene (Special Publication 395, pp. 301–309). London: Geological Society.
Lewis, S. L., & Maslin, M. A. (2015). Defining the Anthropocene. Nature, 519, 171–180.
Steffen, W., Broadgate, W., Deutsch, L., Gaffney, O., & Ludwig, C. (2015). The trajectory of the Anthropocene: The Great Acceleration. The Anthropocene Review, 2(1), 81–98.
Syvitski, J. P. M., & Kettner, A. J. (2011). Sediment flux and the Anthropocene. Philosophical Transactions of the Royal Society A, 369, 957–975.
Tyrrell, T. (2011). Anthropogenic modification of the oceans. Philosophical Transactions of the Royal Society A, 369, 887–908.
Waters, C. N., Zalasiewicz, J., Williams, M., Ellis, M. A., & Snelling, A. (Eds.) (2014). A Stratigraphical Basis for the Anthropocene (Special Publication 395, 321 pp.). London: Geological Society.
Waters, C. N., Syvitski, J. P. M., Gałuszka, A., Hancock, G. J., Zalasiewicz, J., Cearreta, A., Grinevald, J., McNeill, J. R., & Summerhayes, C. (2015). Can environmental radiogenic signatures define the beginning of the Anthropocene Epoch? Bulletin of the Atomic Scientists, 71(3), 46–57.
Wilkinson, I. P., Poirier, C., Head, M. J., Sayer, C. D., & Tibby, J. (2014). Micropalaeontological signatures of the Anthropocene. In C. N. Waters, J. Zalasiewicz, M. Williams, M. A. Ellis, & A. Snelling (Eds.), A Stratigraphical Basis for the Anthropocene (Special Publication 395, pp. 185–219). London: Geological Society.
Williams, M., Zalasiewicz, J., Haywood, A., & Ellis, M. (Eds.) (2011). The Anthropocene: A new epoch of geological time? Philosophical Transactions of the Royal Society A, 369, 833–1112.
Williams, M., Zalasiewicz, J., Waters, C. N., & Landing, E. (2014). Is the fossil record of complex animal behaviour a stratigraphical analogue for the Anthropocene? In C. N. Waters, J. Zalasiewicz, M. Williams, M. A. Ellis, & A. Snelling (Eds.), A Stratigraphical Basis for the Anthropocene (Special Publication 395, pp. 143–148). London: Geological Society.
Wolfe, A. P., Hobbs, W. O., Birks, H. H., Briner, J. P., Holmgren, S. U., Ingólfsson, Ó., et al. (2013). Stratigraphic expressions of the Holocene–Anthropocene transition revealed in sediments from remote lakes. Earth-Science Reviews, 116, 17–34.
Wolff, E. W. (2014). Ice sheets and the Anthropocene. In C. N. Waters, J. Zalasiewicz, M. Williams, M. A. Ellis, & A. Snelling (Eds.), A Stratigraphical Basis for the Anthropocene (Special Publication 395, pp. 255–263). London: Geological Society.
Zalasiewicz, J., Kryza, R., & Williams, M. (2014). The mineral signature of the Anthropocene. In C. N. Waters, J. Zalasiewicz, M. Williams, M. A. Ellis, & A. Snelling (Eds.), A Stratigraphical Basis for the Anthropocene (Special Publication 395, pp. 109–117). London: Geological Society.
Zalasiewicz, J., Waters, C. N., & Williams, M. (2014). Human bioturbation, and the subterranean landscape of the Anthropocene. Anthropocene, 6, 3–9.
Zalasiewicz, J., Waters, C. N., Williams, M., et al. (2015). When did the Anthropocene begin? A mid-twentieth century boundary level is stratigraphically optimal. Quaternary International.
Zalasiewicz, J., Williams, M., Smith, A., Barry, T. L., Coe, A. L., Bown, P. R., et al. (2008). Are we now living in the Anthropocene? GSA Today, 18(2), 4–8.
Zalasiewicz, J., Williams, M., Waters, C. N., Barnosky, A. D., & Haff, P. (2014). The technofossil record of humans. Anthropocene Review, 1, 34–43.
https://www.balticasia.lt/en/straipsniai/ekonomika/household-registration-system-influence-on-migration/
Household Registration System
Even though today it is considered a very controversial topic, the household registration (hukou) system was inevitable in China. Population registration was necessary because the country could not afford to offer social benefits to those who were not in the danwei (work unit) system, and cities could not hold too many people without social and political upheavals. “In the years immediately after 1949 and during the first Five Year Plan in 1953, peasants flocked to the cities, heeding the call to help in reconstruction and industrialization of economy”. Implemented in the late 1950s by the Chinese Communist Party, the household registration system was a result of the command economy; it benefited China in numerous ways, but has been obstructing many people ever since. It led to the country having an officially acknowledged divisive socioeconomic structure and three castes of society. Although the system was created for the purpose of tracking changes in population figures, it has also influenced internal migration in China in many ways. This article will look into how the household registration (hukou) system influenced China’s rural-to-urban migration and what effects it had on China’s social and urban development.
Introduction of the Hukou System
The most obvious effect on rural-to-urban migration was also the main goal of the hukou system: the undesirable migratory flows were tightly controlled and almost stopped for several decades. When farmers were made to sell grain to the government at artificially low prices, it was essential to keep them tied to the land: “the purpose was not merely to monitor population movements, but to anchor people to their native places, and – in particular – to prevent unauthorised movement from the countryside to the city”. “By 1957, the problem of ‘blind influx’ into China’s cities finally brought the intervention of the highest bodies of Party and state, with the Central Committee and the State Council issuing a joint directive exhorting tighter controls over migrants”. As a result, in January 1958, the National People’s Congress passed “more specific regulations intended to put a stop to unauthorised population movements”. People were provided with a ‘household registration booklet’, which established a sort of legal identity. The key distinction has been between “nongye hukou” (agricultural household status) and “feinongye hukou” (non-agricultural household status). Since the implementation of the household registration system, it became practically impossible to leave the countryside, and internal rural-to-urban migration in China almost ceased to exist.
The newly established hukou system worked so that any kind of migration was strictly controlled, as changing one’s hukou status was extremely difficult. “Individuals could move voluntarily downward (to a smaller city or to a rural place) or horizontally (as when rural brides moved into the homes and villages of their grooms), but not upward”. Permission to migrate upwards was only granted if a person obtained approvals from both origin and destination governments, which happened in relatively rare and special situations, for example, “admission to an urban university, service in and then demobilization from the army as an officer, or in a situation in which an urban factory had taken over rural land for plant expansion”. But such cases were very rare for people living in villages: “in one village in Guangdong, on average only one man was able to change to urban registration every five years. Among women, this was only one every fifty years”. All of these restrictions meant that while cities were officially out of reach for rural people, internal migration did not disappear entirely, and limited “free” movement (downward or horizontal) remained available to all.
Separation between Rural and Urban Citizens
The household registration system resulted in some 700–800 million people in effect being treated as second-class citizens, deprived of the opportunity to settle legally in cities and of access to most of the basic welfare and state-provided services enjoyed by regular urban residents, which led China to de-urbanise. The main legal differences between the benefits attached to non-agricultural hukou status were as follows: social insurance (tax and benefit levels differ, as does access to benefits), local services (access to education, housing and healthcare, rationed food coupons), property rights (allocated housing) and job restrictions (allocated jobs). As only non-agricultural hukou status holders had access to the benefits listed, agricultural hukou holders could not survive in cities. Hence, the once much-pursued life in urban areas now meant having no means to obtain food or shelter, or even to earn money to purchase these necessities (even if food or land markets had existed, which they did not). This led to something never seen before: while other developing countries were slowly expanding their urban societies, “between the end of the 1950s and 1978, China de-urbanised”, as we can see in figure 1:
The data in the chart portrays the most important effect of the hukou system: internal migration to cities was stopped and the urban share of the population declined by more than two percentage points. This shows that the plan to reduce the urban population worked almost flawlessly in the first few years after its implementation.
Although the costs of changing one’s status were high, there continued to be a very high level of undocumented migration in China. During the years of the Great Leap Forward and the famine that followed, holding non-agricultural hukou status was a ticket to life. As millions of people had no possibility of changing their household registration status, they decided to try their luck by moving to cities illegally, even though they faced many hardships: illegal labour was poorly paid, and migrants had to rely on their family members back in the villages for grain (in addition, there were black markets in cities, even though these were officially outlawed as dangerous vestiges of capitalism). The Chinese government was happy with the “perfectly running” plan, but in reality there was fairly intense movement happening behind its back: “in addition to the estimated 30 million who migrated to the cities during the Great Leap Forward (20 million of whom were forcibly returned), an estimated 4 million voluntarily migrated during the first two years of the Cultural Revolution when state control was relatively weak”. Therefore it can be stated that, even though the hukou system made internal migration much harder, the phenomenon of rural-to-urban migration itself never stopped existing.
Hukou System Reformed
As the household registration system was changed and restrictions relaxed over the years, a third caste appeared in society. In 1978, when the opening reforms began, Chinese officials eventually realised they needed more labour to continue the process of intensive industrialisation. In the years that followed, more and more rural hukou holders were taken to the cities for manual labour, which meant that the household registration system needed to be relaxed. “From 1979 to 1995 the nonagricultural hukou population grew at an average of 7.8 million per year, or 3.7%, compared to an average of 2.5 million or 1.9% per year in the period between 1963 and 1978”. When analysing the causes of this change, it is worth noting that the Chinese government was not the main initiator of the liberalisations. The government was responding and adapting to a situation that was already changing from the ground up: the massive flows of people trying to get into cities. During the 1960s and 1970s, especially after Mao’s 1968 order, about 17 million urban youth were sent “up to the mountains and down to the villages”. But when, around 1975, they began coming back from the villages, “article 90, guaranteeing freedom of movement, was removed from the new constitution of the PRC”. The educated youth did not give up, and won the battle against the CCP: “after the Party’s Third Plenum of 1978, the returned youths petitioned the authorities to restore urban residential rights, putting up posters, organizing demonstrations and sit-ins, and disrupting rail traffic. In 1979, the great majority were allowed to return to their home cities or at least restored to “non-agricultural” status in local towns in a brilliant example of successful resistance to the state by a well-organized and determined group”.
Another reason why the Chinese government loosened its urban growth controls was the “growing rural prosperity arising directly from the decision to abolish the collective system of rural work and remuneration”. Together with these masses of (no longer young) returned city dwellers, more and more rural people were seeking jobs in the construction trades in urban areas. City governments took many of them in as a cheap labour force and gave them temporary non-agricultural household registration status, but did not provide any of the social benefits offered to urban hukou holders. Therefore, it is worth noting that the relaxations of the household registration system created a third caste in Chinese society; in addition to rural residents and urban hukou holders, rural-urban migrants appeared, also called the “floating population”. As the intermediate caste in this conception, migrants had access to many more opportunities than the rural kin they had left behind. However, on many fronts they were subject to inferior treatment and discrimination by both urban hukou holders and urban authorities, no matter how long they had been de facto urban residents. Despite being employed in urban areas, migrants have been vulnerable to police arrest, physical abuse, detention, and deportation to their native village, especially if they were unable to present proof of at least temporary urban registration.
This situation changed a little in 1985, when a new law introduced temporary residence permits (zanzhuzheng) that legalised the status of rural migrants in cities. The main requirements for obtaining the right to live in cities were that migrants “must either run businesses or be employed in enterprises, and have own accommodations in market towns. They must also self-provide their own food grain”. After that, town and city governments realised that selling permanent hukou to those who could afford it was a good source of income: “One of the first towns to do so, Laian, in Anhui, sold 773 permits at 5,000 yuan apiece in just six days”. Figure 2 shows how the number of so-called “floating migrants” increased from 1985 to 2000:
Starting with only 7 million floating residents in 1982, by 2000 the number of temporary migrants reached almost 80 million. “Generally, the size of the rural migrant labor grew from about 50-60 million in the early 1990s to exceed 100 million in the early years of this century. In 2009, the figure was close to 150 million”. This means that the rapid increase continues to this day, and that the number of floating migrants multiplied more than 20 times between 1982 and 2009. The figures show that, in general, the population has been more mobile in the reform era than ever before.
The Real Effect of the Hukou System
But what did having a chance to move again mean? The hukou system started to indirectly promote the pursuit of life in urban areas, because before the household registration system, not only could the population move freely, but moving left one’s social status untouched. After the implementation of the hukou system, being able to change location meant being inferior to the local people and having no social benefits: “Since fewer than 3 percent can afford health insurance, most avoid medical care altogether. City judges often impose harsher sentences on rural migrants, and employers frequently withhold wages, knowing undocumented workers cannot complain to police without risking exposure”. The two-caste society that was created had not only socioeconomic effects on migration, but also psychological ones. Some 260 million Chinese migrants, about 20 percent of China’s total population, live as second-class citizens in their adopted cities.
The line, or as the Southern Weekend newspaper called it, the “electric fence” that originally separated agricultural and non-agricultural hukou status holders, now “effectively lies between official residents of the largest, most prosperous and desirable cities and all others”. The two-tiered society that resulted from the household registration system now enthrones urban status and makes it an ultimate goal for millions of rural residents, who put in all their effort to obtain the benefits available only to “urbanites”, if not for themselves, then at least for their children. By not providing equal rights to all urban citizens, hukou restrictions indirectly promote migration and the striving for a legal status that ensures a less dangerous job with a better wage, a pension, and healthcare provided by the employer. People’s rights are being violated; for example, in 2000 over three-quarters of all childbirth deaths in Canton were of migrant women, as their admission to hospitals was refused. As there are greater chances of earning enough money to afford healthcare and other necessities in cities rather than in rural areas, it can be stated that the household registration system drives massive crowds of (especially young) people to cities to seek a better life, and therefore promotes China’s internal migration.
Another thing the household registration system (together with the wave of migration that came with it) influenced was urbanisation. Some areas, especially those on the coastline, grew so much that their demographic profiles were affected. From relatively small towns, some cities grew into metropolises in just a couple of years, all thanks to rural migrants, who in some places make up a bigger share of the population than the locals (see Figure 3):
The data shows that by 1990, in some industrial regions, there were more agricultural hukou status holders than local citizens. “Rural–urban migration turns out to be the dominant source of Chinese urban growth in 1978–1999. In the two decades since 1978, about 174 million people (more than the total population of many large countries in 1998, such as Brazil with 166 million and Russia with 147 million) have moved from rural areas to cities, creating the largest flow of migration in history”. This has led to cities expanding and China rapidly urbanising, even though many of these new urban residents live in poverty and lack many legal rights.
To summarise, the household registration system could not be avoided in a country such as China, because cities could not hold and provide for too many people. For some time the hukou system stopped internal migration to cities, but since rural areas were facing hardships and even famine, illegal migration continued to exist. Eventually, after the opening reforms, the household registration system was relaxed. Even though this lifted many people out of poverty, equal rights and social benefits are still not provided to millions of Chinese residents, and this is becoming more and more of a problem. Having the right to live in a city does not mean one is equal to those with an approved registration status, and the hukou system influences migration by fostering people’s desire to join the better caste, which results in many people at least attempting to migrate to cities and succeed there. Therefore, it can be said that the hukou system is a vicious circle, helping the country economically but dragging it down in terms of social equality.
Written by: Greta Oss
Edited by: Monika Dvirnaitė
- Atlaslin: Hukou certificate of P.R.C.
- Charlie fong: 2009 Chunyun period, Beijing West Railway Station, China
- Eichen and Zhang, 1992.
- Kirkby, 1985.
- Renmin Ribao, 1957.
- New China News Agency, 1958.
- Whyte, 2010.
- Chan, 2009.
- Potter, 1983.
- Chan, 2010.
- Kuhn, and Shen, 2013.
- Naughton, 2006.
- Yang, 1994.
- Ministry of Public Security, 1991.
- Nyíri, 2010.
- Perry and Selden, 2003.
- Kirkby, 1985.
- Whyte, 2010.
- Zhu, 1991.
- Wei, 2001.
- Chan, 2008.
- Edkins, 2010.
- Zhang and Song, 2003.
- Li, 2003.
- Nyíri, 2010.
- Luan, 2000.
- Zhao, 2003.
- Human Rights in China, 2002.
- Zhang and Song, 2003.
- Chan Kam Wing (2009), “The Chinese hukou system at 50”, Eurasian Geography and Economics 50(2): 197–221.
- _____(2010), “The Household Registration System and Migrant Labor in China: Notes on a Debate”, Population and Development Review, June 36 (2): 357.
- _____(2013), “China, Internal Migration,” in Immanuel Ness and Peter Bellwood (eds.) The Encyclopedia of Global Migration, Blackwell Publishing.
- Edkins, B (2010), “China’s Undocumented Migrant Problem” Slate Magazine, available at http://www.slate.com/articles/news_and_politics/foreigners/2010/05/chinas_undocumented_migrant_problem.html [8 December, 2013].
- Eichen, M. and Zhang Ming (1992), “Internal Migration in the People’s Republic of China”, Focus On Geography 42: 20.
- Guldin, G. E. (2001), What’s a Peasant to Do? Village Becoming Town in Southern China, Colorado: Westview Press.
- Human Rights in China (2002), INSTITUTIONALIZED EXCLUSION: The tenuous legal status of internal migrants in China’s major cities, November 6.
- Kirkby, R. J. R. (1985), Urbanisation in China: Town and Country in a Developing Economy 1949 – 2000 AD, Worcester: Billing & Sons Limited.
- Kuhn, P., Shen, K. (2013), “Do Employers Prefer Undocumented Workers? Evidence from China’s Hukou System”, Impacts of Hukou, Education and Wage on Job Search and Match: Evidence Based on Online Job Board Microdata 107: 15.
- Li Changping (2003), “Huji zhidu de yanbian” (The transformations of the hukou regime), Nanfang Zhoumo, 3 April.
- Liang Zai and Ma Zhongdong (2004), “China’s Floating Population: New Evidence from the 2000 Census”, Population and Development Review 30(3): 467.
- Luan Jingdong (2000), “Fada diqu nongcun wailai laodongli he yimin guanli yanjiu” (A study of outside labour and migrant management in villages in developed regions), a PhD dissertation, Turku University.
- Ministry of Public Security (1991), “Guanyu zhuanfa bufen shengshi hukou nongzhuanfei gongzuo zuotanhui jiyao de tongzhi” (On Circulating the Minutes of the Working Forum Concerning Hukou Transfer from Agricultural to Non-agricultural Status), May 4.
- Naughton, B. (2006), The Chinese Economy: Transitions and Growth, Cambridge, Massachusetts: The MIT Press.
- New China News Agency (1958) January 9: 2, in Orleans, L. (1959), “The Recent Growth of China’s Urban Population”, Geographical Review 49: 53.
- Nyíri, P. (2010), Mobility and Cultural Authority in Contemporary China, Seattle: University of Washington Press.
- Perry, E. J., Selden, M. (2003), Chinese Society: Change, Conflict and Resistance, Oxford: Taylor and Francis.
- Population Research Institute, Chinese Academy Social Science (1996), Zhongguo renkou nianjian 1996 (Almanac of China’s Population 1996), Beijing: Zhongguo minhang chubanshe.
- Potter, S. H. (1983), “The Position of Peasants in Modern China’s Social Order”, Modern China, 9:4.
- Renmin Ribao (1957), “Directive for checking the blind outflow of the rural populations”, December 9, 260-1.
- Wei Jun (2001), “Feichu hukou!!!” (Get rid of the hukou!!!), Zhongguo Shehui Daokan 4:21.
- Whyte, M. K. (ed.) (2010), One Country, Two Societies: Rural-Urban Inequality in Contemporary China, Massachusetts: Harvard University Press.
- Yang Yunyan (1994), Zhongguo renkou zhuanyi yu fazhan di changqi zhanlüe (Migration and long-term development strategy in China), Wuhan: Wuhan Chubanshe.
- Zhang, Kevin Honglin and Song, Shunfeng (2003), “Rural–urban migration and urbanization in China: Evidence from time-series and cross-section analyses”, China Economic Review, 14:4.
- Zhao Jin (2003), “Jiangsu huji gaige lengqing kaichang” (Jiangsu hukou reform proceeds calmly), Zhongguo Jingji Shibao, 19 May.
- Zhu, Baoshu (1991), “Nongcun renkou xian shaochengzhen zhuanyi de xindongtai hexinwenti” (New Situation and New Problems Concerning Rural Population Migrating to Small Towns), Zhongguo renkou kexue 1:49.
http://www.21alive.com/features/cool-science/Cool-Science---8513-UV-Light-217606781.html?vid=a
FORT WAYNE, Ind. (www.incnow.tv) -- In this week's episode of Cool Science Monday, Martin Fisher from Science Central will talk about ultraviolet light.
It’s light with a very short wavelength, which is not visible to the human eye. There are several bands of UV light, ranging from wavelengths just beyond visible violet down to extremely short wavelengths.
Unlike incandescent light, which comes from something hot (a hot light bulb, glowing lava, a fire), ultraviolet is considered cool light. UV light is capable of making some things fluoresce: they absorb the invisible UV light and release brightly colored visible light.
Martin used a black light and showed some items that fluoresce, such as the anti-counterfeit strips in money, the paint on a picture, and a safety vest.
https://thenewstack.io/new-algorithm-will-help-supercomputers-simulate-whole-brain-neural-connections/
Artificial intelligence has made enormous leaps in recent years. We are seeing this technology incorporated in autonomous cars, collaborative robots, and polyvalent deep learning systems that can master various board games on their own or reason their way around a subway map or a genealogical tree. Yet there is still some way to go before AI transitions from being relatively specialized to being able to master a variety of tasks as easily as humans.
One step towards developing this artificial general intelligence is to simulate how the human brain functions on a computer, in order to offer researchers deeper insights about the inner workings behind intelligence. The problem is, the human brain is incredibly complex, and even with the capabilities of the massive supercomputers available today, it is still impossible to simulate all the interactions between its 100 billion neurons and its trillions of synapses.
But that goal is now one step closer, thanks to a group of international researchers who have now developed an algorithm that not only accelerates brain simulations on existing supercomputers, but also takes a big leap toward realizing “whole-brain” simulations in future exascale supercomputers (machines capable of executing a billion billion calculations per second).
Computing for Whole-Brain Simulations
The research, published in Frontiers in Neuroinformatics, outlines the researchers’ new method of creating a neuronal network on a supercomputer. To give a sense of how colossal this task is, existing supercomputers such as the petascale K computer at the Advanced Institute for Computational Science in Kobe, Japan, can replicate the activity of only 10 percent of the brain.
That’s because it’s limited by the way the simulation model is set up, which affects how the supercomputer’s nodes communicate with each other. Supercomputers might have more than a hundred thousand of these nodes, each with its own processors to perform calculations. In larger simulations, virtual neurons are distributed across compute nodes to balance the processing workload. However, one of the challenges of these larger simulations is the high connectivity of neuronal networks, which requires a massive amount of computational power to replicate.
“Before a neuronal network simulation can take place, neurons and their connections need to be created virtually, which means that they need to be instantiated in the memory of the nodes,” explained Susanne Kunkel of KTH Royal Institute of Technology in Stockholm, one of the paper’s authors. “During the simulation a neuron does not know on which of the nodes it has target neurons, therefore, its short electric pulses need to be sent to all nodes. Each node then checks which of all these electric pulses are relevant for the virtual neurons that exist on this node.”
To put it in a simpler way, it’s like sending a whole haystack to each node, so that each will need to find the needles relevant to it out of the haystack. Needless to say, this process consumes a lot of memory, especially as the size of the virtual neuronal network grows. To scale things up and to simulate the whole human brain using current techniques, it would require 100 times more processing memory than is available in today’s supercomputers. However, the new algorithm changes the game, as it optimizes this process by allowing the nodes to first exchange information on which nodes will send and receive to whom, so that afterward each node will only need to send and receive the information it needs, without having to pick through the whole haystack.
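To make the contrast concrete, here is a minimal Python sketch of the two exchange schemes under simplified assumptions: a handful of nodes, spikes represented as plain tuples, and illustrative names throughout (Node, broadcast_exchange, and directed_exchange are not the actual simulator API described in the paper).

```python
# Sketch of the two spike-exchange schemes; all names are illustrative.

class Node:
    def __init__(self, node_id, targets_by_neuron):
        self.node_id = node_id
        # Map: local neuron id -> set of node ids hosting that neuron's targets.
        self.targets_by_neuron = targets_by_neuron
        self.inbox = []

def broadcast_exchange(nodes, spikes):
    """Old scheme: every spike goes to every node (the whole "haystack");
    each receiving node must then filter out the spikes relevant to it."""
    for spike in spikes:
        for node in nodes:
            node.inbox.append(spike)  # delivered regardless of relevance

def directed_exchange(nodes, spikes):
    """New scheme: nodes first establish who sends what to whom, so each
    spike is delivered only to nodes hosting one of the neuron's targets."""
    by_id = {n.node_id: n for n in nodes}
    for sender_id, neuron_id in spikes:
        for target_node in by_id[sender_id].targets_by_neuron.get(neuron_id, ()):
            by_id[target_node].inbox.append((sender_id, neuron_id))

# Toy usage: three nodes; neuron 0 on node 0 projects only to node 2.
nodes = [Node(0, {0: {2}}), Node(1, {}), Node(2, {})]
directed_exchange(nodes, [(0, 0)])
print([len(n.inbox) for n in nodes])  # [0, 0, 1] -- only node 2 receives the spike
```

With broadcast_exchange, the same spike would have landed in all three inboxes, and the memory needed per node would grow with the size of the whole network rather than with the node’s actual connectivity.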
“With the new technology we can exploit the increased parallelism of modern microprocessors a lot better than previously, which will become even more important in exascale computers,” said study author Jakob Jordan of the Jülich Research Center.
With the improved algorithm, the team found that a virtual network of 0.52 billion neurons connected by 5.8 trillion synapses, running on the supercomputer JUQUEEN in Jülich, was able to simulate one second of biological time in 5.2 minutes of computations, rather than the previous 28.5 minutes it required, using conventional methods.
It’s predicted that future machines capable of exascale computing will surpass the performance of current supercomputers by 10 to 100 times. The team’s algorithm, which will be made available as an open-source tool, would therefore mean a greater ability to explore how intelligence functions holistically.
There’s no doubt that future findings based on this tool will not only help push AI development further but will also benefit a range of scientific disciplines, noted Markus Diesmann, study author and director at the Jülich Institute of Neuroscience and Medicine: “The combination of exascale hardware and appropriate software brings investigations of fundamental aspects of brain function, like plasticity and learning unfolding over minutes of biological time, within our reach.”
Images: Pixabay, Frontiers in Neuroinformatics.
https://www.passeidireto.com/arquivo/2109979/design-patterns/2
to communicate using well-known, well-understood names for software interactions. Common design patterns can be improved over time, making them more robust than ad hoc designs.

Domain-specific patterns

Efforts have also been made to codify design patterns in particular domains, including use of existing design patterns as well as domain-specific design patterns. Examples include user interface design patterns, information visualization, secure design, "secure usability", Web design, and business model design. The annual Pattern Languages of Programming Conference proceedings include many examples of domain-specific patterns.

Classification and list

Design patterns were originally grouped into the categories: creational patterns, structural patterns, and behavioral patterns, and described using the concepts of delegation, aggregation, and consultation. For further background on object-oriented design, see coupling and cohesion, inheritance, interface, and polymorphism. Another classification has also introduced the notion of architectural design pattern, which may be applied at the architecture level of the software, such as the Model–View–Controller pattern.

Creational patterns

| Name | Description | In Design Patterns | In Code Complete | Other |
|---|---|---|---|---|
| Abstract factory | Provide an interface for creating families of related or dependent objects without specifying their concrete classes. | Yes | Yes | N/A |
| Builder | Separate the construction of a complex object from its representation, allowing the same construction process to create various representations. | Yes | No | N/A |
| Factory method | Define an interface for creating an object, but let subclasses decide which class to instantiate. Factory Method lets a class defer instantiation to subclasses (dependency injection). | Yes | Yes | N/A |
| Lazy initialization | Tactic of delaying the creation of an object, the calculation of a value, or some other expensive process until the first time it is needed. | No | No | PoEAA |
| Multiton | Ensure a class has only named instances, and provide a global point of access to them. | No | No | N/A |
| Object pool | Avoid expensive acquisition and release of resources by recycling objects that are no longer in use. Can be considered a generalisation of the connection pool and thread pool patterns. | No | No | N/A |
| Prototype | Specify the kinds of objects to create using a prototypical instance, and create new objects by copying this prototype. | Yes | No | N/A |
| Resource acquisition is initialization | Ensure that resources are properly released by tying them to the lifespan of suitable objects. | No | No | N/A |
| Singleton | Ensure a class has only one instance, and provide a global point of access to it. | Yes | Yes | N/A |

Structural patterns

| Name | Description | In Design Patterns | In Code Complete | Other |
|---|---|---|---|---|
| Adapter (or Wrapper, or Translator) | Convert the interface of a class into another interface clients expect. An adapter lets classes work together that could not otherwise because of incompatible interfaces. The enterprise integration pattern equivalent is the Translator. | Yes | Yes | N/A |
| Bridge | Decouple an abstraction from its implementation, allowing the two to vary independently. | Yes | Yes | N/A |
| Composite | Compose objects into tree structures to represent part-whole hierarchies. Composite lets clients treat individual objects and compositions of objects uniformly. | Yes | Yes | N/A |
| Decorator | Attach additional responsibilities to an object dynamically, keeping the same interface. Decorators provide a flexible alternative to subclassing for extending functionality. | Yes | Yes | N/A |
| Facade | Provide a unified interface to a set of interfaces in a subsystem. Facade defines a higher-level interface that makes the subsystem easier to use. | Yes | Yes | N/A |
| Flyweight | Use sharing to support large numbers of similar objects efficiently. | Yes | No | N/A |
| Front controller | Relates to the design of Web applications; provides a centralized entry point for handling requests. | No | Yes | N/A |
| Module | Group several related elements, such as classes, singletons, and globally used methods, into a single conceptual entity. | No | No | N/A |
| Proxy | Provide a surrogate or placeholder for another object to control access to it. | Yes | No | N/A |

Behavioral patterns

| Name | Description | In Design Patterns | In Code Complete | Other |
|---|---|---|---|---|
| Blackboard | Generalized observer, which allows multiple readers and writers. Communicates information system-wide. | No | No | N/A |
| Chain of responsibility | Avoid coupling the sender of a request to its receiver by giving more than one object a chance to handle the request. Chain the receiving objects and pass the request along the chain until an object handles it. | Yes | No | N/A |
| Command | Encapsulate a request as an object, thereby letting you parameterize clients with different requests, queue or log requests, and support undoable operations. | Yes | No | N/A |
| Interpreter | Given a language, define a representation for its grammar along with an interpreter that uses the representation to interpret sentences in the language. | Yes | No | N/A |
| Iterator | Provide a way to access the elements of an aggregate object sequentially without exposing its underlying representation. | Yes | Yes | N/A |
| Mediator | Define an object that encapsulates how a set of objects interact. Mediator promotes loose coupling by keeping objects from referring to each other explicitly, and it lets you vary their interaction independently. | Yes | No | N/A |
| Memento | Without violating encapsulation, capture and externalize an object's internal state, allowing the object to be restored to this state later. | Yes | No | N/A |
| Null object | Avoid null references by providing a default object. | No | No | N/A |
| Observer (or Publish/subscribe) | Define a one-to-many dependency between objects where a state change in one object results in all its dependents being notified and updated automatically. | Yes | Yes | N/A |
| Servant | Define common functionality for a group of classes. | No | No | N/A |
| Specification | Recombinable business logic in a Boolean fashion. | No | No | N/A |
| State | Allow an object to alter its behavior when its internal state changes. The object will appear to change its class. | Yes | No | N/A |
| Strategy | Define a family of algorithms, encapsulate each one, and make them interchangeable. Strategy lets the algorithm vary independently from clients that use it. | Yes | Yes | N/A |
| Template method | Define the skeleton of an algorithm in an operation, deferring some steps to subclasses. Template method lets subclasses redefine certain steps of an algorithm without changing the algorithm's structure. | Yes | Yes | N/A |
| Visitor | Represent an operation to be performed on the elements of an object structure. Visitor lets you define a new operation without changing the classes of the elements on which it operates. | Yes | No | N/A |

Concurrency patterns

| Name | Description | In POSA2 | Other |
|---|---|---|---|
| Active object | Decouples method execution from method invocation for objects that each reside in their own thread of control. The goal is to introduce concurrency by using asynchronous method invocation and a scheduler for handling requests. | Yes | N/A |
| Balking | Only execute an action on an object when the object is in a particular state. | No | N/A |
| Binding properties | Combining multiple observers to force properties in different objects to be synchronized or coordinated in some way. | No | N/A |
| Double-checked locking | Reduce the overhead of acquiring a lock by first testing the locking criterion (the "lock hint") in an unsafe manner; only if that succeeds does the actual locking proceed. Can be unsafe when implemented in some language/hardware combinations, and can therefore sometimes be considered an anti-pattern. | | N/A |
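As a concrete illustration of the last entry, here is a minimal Python sketch of double-checked locking applied to lazy singleton initialization. It is a sketch, not a canonical implementation: here the lock makes construction thread-safe, whereas in some language/hardware combinations (as the table notes) the naive form of this pattern is unsafe without additional memory-ordering guarantees.

```python
import threading

class Config:
    """Illustrative lazily created singleton guarded by double-checked locking."""
    _instance = None
    _lock = threading.Lock()

    @classmethod
    def instance(cls):
        # First (unlocked) check: cheap fast path once the instance exists.
        if cls._instance is None:
            with cls._lock:
                # Second (locked) check: another thread may have created the
                # instance while this one was waiting to acquire the lock.
                if cls._instance is None:
                    cls._instance = cls()
        return cls._instance

a = Config.instance()
b = Config.instance()
assert a is b  # every caller sees the same instance
```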
http://www.sv.vt.edu/classes/MSE2094_NoteBook/97ClassProj/anal/kelly/fatigue.html
The concept of fatigue is very simple: when a motion is repeated, the object doing the work becomes weak. For example, when you run, your leg and other muscles of your body become weak, not always to the point where you can't move them anymore, but there is a noticeable decrease in quality of output. This same principle is seen in materials: fatigue occurs when a material is subjected to alternating stresses over a long period of time. Examples of where fatigue may occur are springs, turbine blades, airplane wings, bridges, and bones.
This page will cover the topics included in Materials Science and Engineering, an Introduction by Callister, as well as other information that may be helpful to the student in an introductory materials science class.
There are three common ways in which stresses may be applied: axial, torsional, and flexural; examples of these are seen in Fig. 1. There are also three stress cycles with which loads may be applied to the sample. The simplest is the reversed stress cycle: a sine wave in which the maximum stress and minimum stress differ only by a negative sign. An example of this type of stress cycle occurs in an axle, where every half turn, or half period of the sine wave, the stress on a point is reversed. The most common type of cycle found in engineering applications is the repeated stress cycle, in which the maximum stress (σmax) and minimum stress (σmin) are asymmetric, i.e., not equal and opposite. A final type of cycle mode is one where stress and frequency vary randomly. An example of this is automobile shocks, where the frequency and magnitude of imperfections in the road produce varying minimum and maximum stresses.
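In symbols, the standard quantities used to characterize such cycles (consistent with the mean stress formula given later on this page) are the stress range, the stress amplitude, and the stress ratio:

```latex
\Delta\sigma = \sigma_{\max} - \sigma_{\min}, \qquad
\sigma_a = \frac{\sigma_{\max} - \sigma_{\min}}{2}, \qquad
R = \frac{\sigma_{\min}}{\sigma_{\max}}
```

For a fully reversed stress cycle R = −1, while for the asymmetric repeated cycle R differs from −1.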
The S-N Curve
A very useful way to visualize time to failure for a specific material is the S-N curve. "S-N" means stress versus cycles to failure: the stress amplitude, σa, is plotted on the vertical axis against the logarithm of the number of cycles to failure on the horizontal axis. An important characteristic of this plot, as seen in Fig. 2, is the fatigue limit. The significance of the fatigue limit is that if the material is loaded below this stress, then it will not fail, regardless of the number of times it is loaded. Materials such as aluminum, copper, and magnesium do not show a fatigue limit; they will therefore eventually fail at any stress level, given enough cycles. Other important terms are fatigue strength and fatigue life. The stress at which failure occurs for a given number of cycles is the fatigue strength; the number of cycles required for a material to fail at a certain stress is the fatigue life.
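In the finite-life region below any fatigue limit, an S-N curve is often summarized by a power-law fit. The sketch below assumes a Basquin-type relation, σa = σ'f (2Nf)^b, and inverts it to estimate cycles to failure; the material constants here are illustrative placeholders, not data from this page or from Callister.

```python
# Minimal sketch: estimating cycles to failure from a Basquin-type S-N fit,
#   sigma_a = sigma_f_prime * (2 * N_f) ** b
# The constants below are illustrative placeholders, not real material data.

sigma_f_prime = 900.0  # fatigue strength coefficient, MPa (assumed)
b = -0.1               # fatigue strength exponent (assumed)

def cycles_to_failure(sigma_a):
    """Invert the Basquin relation: 2 * N_f = (sigma_a / sigma_f_prime) ** (1/b)."""
    return 0.5 * (sigma_a / sigma_f_prime) ** (1.0 / b)

for stress in (450.0, 300.0, 200.0):
    print(f"sigma_a = {stress:6.1f} MPa -> N_f ~ {cycles_to_failure(stress):14.0f} cycles")
```

As expected for a negative exponent b, lowering the stress amplitude increases the predicted life by orders of magnitude.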
Crack Initiation and Propagation
Failure of a material due to fatigue may be viewed on a microscopic level in three steps: (1) crack initiation, in which a small crack forms at some point of high stress concentration; (2) crack propagation, during which the crack advances incrementally with each stress cycle; and (3) final failure, which occurs very rapidly once the advancing crack has reached a critical size.
One can determine that a material failed by fatigue by examining the fracture site. A fatigue fracture will have two distinct regions: the first is smooth, or burnished, as a result of the rubbing of the bottom and top of the crack (steps 1 and 2 above); the second is granular, due to the rapid failure of the material. These visual clues may be seen in Fig. 4.
Other features of a fatigue fracture are beachmarks and striations. Beachmarks, or clamshell marks, may be seen in fatigue failures of materials that are used for a period of time, allowed to rest for an equivalent time period, and then loaded again, as in factory usage. Striations are thought to be steps in crack propagation, where the distance between them depends on the stress range. A beachmark may contain thousands of striations. Visual examples of beachmarks and striations are seen below in Figs. 5 and 6.
An example of the striations found in a fatigue fracture. Each striation is thought to mark one advance of the crack; there may be thousands of striations in a beachmark.
Demonstration of Crack Propagation Due to Fatigue
The figure above illustrates the various ways in which cracks are initiated and the stages that occur after they start. This is extremely important, since these cracks will ultimately lead to failure of the material if not detected and recognized. The material shown is pulled in tension with a cyclic stress in the y (horizontal) direction. Cracks can be initiated by several different causes; the three discussed here are nucleating slip planes, notches, and internal flaws.
The rate at which a crack grows has considerable importance in determining the life of a material. The propagation of a crack occurs during the second step of fatigue failure. As a crack begins to propagate, the size of the crack also begins to grow, and the rate at which it continues to grow depends on the stress level applied. The growth rate is given by equation 8.16 in Callister (the Paris law):

da/dN = A (ΔK)^m    (Eq. 1)

The variables A and m are properties of the material, da is the change in crack length, and dN is the change in the number of cycles. ΔK is the range of the stress intensity factor, given by equation 8.17(a & b):

ΔK = Kmax − Kmin = Y Δσ √(πa)    (Eq. 2)

Rearrangement and integration of Eq. 1 relates the number of cycles to failure, Nf, to the initial flaw length, a0, and the critical crack length, ac:

Nf = ∫ da / [A (Y Δσ √(πa))^m], integrated from a = a0 to a = ac    (Eq. 3)
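For a geometry factor Y that does not vary with crack length, Eq. 3 can be integrated in closed form. The following C# sketch does so; the material constants are illustrative assumptions, not values taken from Callister:

    // Hedged sketch: closed-form integration of the Paris law,
    // da/dN = A (dK)^m with dK = Y * dSigma * sqrt(pi * a) and Y constant.
    using System;

    class ParisLaw
    {
        // Estimated cycles to failure Nf (valid for m != 2;
        // for m == 2 the integral yields a logarithm instead).
        static double CyclesToFailure(double A, double m, double Y,
                                      double deltaSigma, double a0, double ac)
        {
            double c = A * Math.Pow(Y * deltaSigma * Math.Sqrt(Math.PI), m);
            double p = 1.0 - m / 2.0;  // exponent from integrating a^(-m/2)
            return (Math.Pow(ac, p) - Math.Pow(a0, p)) / (c * p);
        }

        static void Main()
        {
            // Illustrative, steel-like inputs: A and m from crack-growth data,
            // stress range in MPa, crack lengths in meters.
            double nf = CyclesToFailure(1e-12, 3.0, 1.0, 200.0, 0.001, 0.01);
            Console.WriteLine($"Estimated fatigue life: {nf:E2} cycles");
        }
    }

With these assumed inputs the estimate comes out on the order of 10^6 cycles; the point is the structure of the calculation, not the particular numbers.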
Factors That Affect Fatigue Life and Solutions
The mean stress, discussed in Callister 8.8, is defined as:

σm = (σmax + σmin) / 2

The effect of mean stress is that as the mean stress increases, fatigue life decreases. This occurs because the applied stress is greater.
I mentioned previously that scratches and other imperfections on the surface will decrease the life of a material. Fatigue life can therefore be improved by reducing these imperfections: rounding sharp corners, eliminating unnecessary drilling and stamping, shot peening, and, most of all, careful fabrication and handling of the material.
Another surface treatment is called case hardening, which increases surface hardness and fatigue life. This is achieved by exposing the component to a carbon-rich atmosphere at high temperatures. Carbon diffuses into the material, filling interstices and other vacancies, up to 1 mm in depth.
Exposing a material to high temperatures is another cause of fatigue. Thermal expansion and contraction will weaken bonds in a material, as well as bonds between two different materials. For example, in space shuttle heat-shield tiles, the outer covering of silicon tetraboride (SiB4) has a different coefficient of thermal expansion than the carbon-carbon composite underneath. Upon re-entry into the earth's atmosphere, this thermal mismatch causes the protective covering to weaken and eventually fail with repeated cycles.
Another environmental effect on a material is chemical attack, or corrosion. Small pits may form on the surface of the material, similar to the etch pits produced when looking for dislocations. This chemical attack can be seen on the unprotected surfaces of an automobile, whether from road salt in the winter or from exhaust fumes. The problem can be solved by adding protective coatings to the material to resist chemical attack.
Design Example
Find: Estimate the maximum tensile stress that will yield the prescribed fatigue life.
Solution: Use Eq. 3 above to solve for Δσ.
Comments or Questions? Email Shawn Kelly.
1 Beer, Ferdinand P., and E. Russell Johnston, Jr. Mechanics of Materials. 2nd ed. New York: McGraw-Hill, Inc., 1992. Images: Fig. 2.54(b), Fig. 3.8(b), Fig. 4.19.
Reed-Hill, Robert E, and Reza Abbaschian. Physical Metallurgy Principles. 3rd ed. Boston: PWS Publishing Company, 1994.
The following figures appear in Reed-Hill:
2 Fig. 21.34, page 752
3 Fig. 21.43, page 761
4 Fig. 21.30, page 749
5 Fig. 21.31, page 749
Callister, William D Jr. Materials Science and Engineering, an Introduction. 3rd ed. New York: John Wiley & Sons, Inc., 1994.
The following figures appear in Callister:
6 Figure 8.24, page 209
7 Figure 5.0 , page 89
8 Reed-Hill: Fig. 5.3, page 127
Special thanks to those who provided links:
Chris Meyer, Jireh Yue, Jared Mutter, Ron Halahan, and Matt Gordon
Thanks to Brian Seal for his HTML skillz.
Submitted by Shawn M. Kelly
Virginia Tech Materials Science and Engineering
Last updated: 5/4/97 04:03:20.43 PM
|
http://www.thejournal.ie/how-insects-wings-help-engineers-900560-May2013/
|
EXAMINING HOW INSECTS’ wings and legs wear out over time may help engineers as they search for ways to make safe, more durable types of material.
Researchers at Trinity College Dublin are currently working towards a full understanding of the second-most common natural material in the world – insect cuticle.
Insects are one of the most diverse groups of animals in the world but they have one thing in common – they are all made from cuticle. Until Dr Jan-Henning Dirks, Professor David Taylor and Eoin Parle established their study, little was known about the fatigue properties of the material.
“The single biggest cause of failure in cars, aeroplanes and other mechanical structures is material fatigue,” explains lead researcher Dr Dirks. “For quite some time it has been known that this kind of fatigue behaviour easily happens in some materials, but far less in others. That’s why engineers are constantly looking for ideas to design safer, more durable types of materials.”
An insect's exoskeleton supports it the way bones support a human body. At the same time, the cuticle, an extremely versatile biological material, acts as a protective skin.
“If we understood how cuticle acts under repeated loads, we might be able to design more durable biomimetic materials for many kinds of applications,” continues Dr Dirks.
As a first step, the Trinity College team looked at the cuticle of the desert locust, which is capable of flying across oceans and deserts for days or weeks at a time.
Parle, who is writing a PhD thesis about the mechanical properties of insect cuticle, explains that the locusts' wings beat hundreds of thousands of times, and their hind legs perform thousands of jumps.
To measure the fatigue properties of the cuticle, the engineers and scientists took samples of the legs and wings and mechanically simulated the repeated loading that occurs in wing beats and during jumping. The researchers were able to show that both structures can withstand hundreds of thousands of cycles, with the legs being notably more resistant to fatigue.
“Our results also show that due to their shape and fibrous material the legs are very well adapted to withstand the types of failure that might occur in jumping and kicking,” says Parle.
“For the first time, we now actually know that insect cuticle shows material fatigue after repeated loading,” adds Taylor. “These results are obviously just a first step.
“Studying insect cuticle is not only thought-provoking from the engineering point of view, where our findings might help us to develop more durable composite materials. Our results are also interesting from the biological perspective, where we can learn more about how insects evolved to become one of the most successful groups of animals.”
The study’s findings have recently been published in the Journal of Experimental Biology.
|
http://www.conservapedia.com/Kaiser_Wilhelm_II
|
Kaiser Wilhelm II
Kaiser Wilhelm II (Friedrich Wilhelm Viktor Albert von Preußen, 27 January 1859 – 4 June 1941) was king of Prussia and German emperor from June 1888 to November 1918. He was the eldest child of Emperor Frederick III and of Vicky, eldest daughter of Britain's Queen Victoria.
Wilhelm was a constitutional monarch and shared power with his ministers and an elected parliament, or Reichstag. However, the kaiser had the final say on all significant matters, including appointments. His supporters boasted that he was the "mightiest ruler on earth." His manic depressive personality led to chaotic policymaking.
Wilhelm's foreign policy aimed at "Napoleonic supremacy" over France and Russia. He sought to sever the alliance between France and Britain. World War I, the most serious crisis of his reign, was triggered by the assassination of Austro-Hungarian Archduke Franz Ferdinand in Sarajevo in 1914. German involvement began with a massive invasion of France by way of Belgium. By using a Balkan quarrel as his pretext, Wilhelm hoped to discourage involvement by the British, who looked down on the Balkans. This proved to be a miscalculation. The British were outraged by the invasion of neutral Belgium. They responded with a declaration of war against Germany, as well as full military support for the Belgians and French.
For several years, fighting remained deadlocked as trench warfare continued across northern France. When German submarines blockaded Britain and attacked neutral shipping in 1917, the United States declared war on Germany. In November 1918, the war ended in a catastrophic defeat for Germany. By that time, over 4.2 million Germans had been killed. Wilhelm fled to the Netherlands and lived in exile until 1941.
- 1 Youth and personality
- 2 Foreign policy
- 3 Schlieffen and his plan
- 4 The Daily Telegraph Affair (1908)
- 5 Agadir and the origin of WWI (1911)
- 6 The Haldane mission (1912)
- 7 Opposition in the Reichstag (1912-1913)
- 8 Decision for war (1912-1914)
- 9 World War I (1914-1918)
- 10 Life in exile (1918-1941)
- 11 See also
- 12 References
Youth and personality
Vicky was a stern mother and a dominating figure in Wilhelm's early life. She attempted to indoctrinate him with the values of Britain's nineteenth-century Liberal Party. As a result, Wilhelm grew up with conflicted feelings regarding Britain. He respected British values but felt the need to use militarism to compensate for his mother's un-Germanic influence. He was also a manic depressive, which resulted in inconsistent and erratic decision making. His withered left arm, a congenital defect, led to an inferiority complex.
Wilhelm was easily entranced by conspiracy theories and was liable to blame foreign policy setbacks on the machinations of his uncle, King Edward VII of Britain. Wilhelm became obsessed with Edward as a youth, when the two future rulers were yachting rivals. After Edward visited Lisbon, Rome, and Paris in 1903, Wilhelm became convinced that his dreams of world power were being thwarted by Edward's efforts to "encircle" Germany.
Wilhelm favored the company of good-looking men, notably Philip of Eulenburg, a composer, writer, and diplomat. Eulenburg met Wilhelm on a hunting trip in 1886 and they were best friends for many years. Eulenburg was exposed as a homosexual in 1907 by the journalist Maximilian Harden. The Harden-Eulenburg Affair was one of the first major public discussions of homosexuality in Germany, comparable to the Oscar Wilde affair in Britain. Wilhelm's association with Eulenburg is unlikely to reflect his sexuality: Wilhelm had a wife and seven legitimate children, as well as mistresses and two illegitimate children.
Foreign policy
Wilhelm pursued an aggressive foreign policy known as Weltpolitik, literally "world policy." The rise of the German economy inspired Germans to seek a larger "place in the sun" in international affairs. Weltpolitik is to be distinguished from Bismarck's Realpolitik, which emphasized the balance-of-power concept. Wilhelm hoped to persuade Britain to withdraw from European affairs. If he could accomplish this, he imagined that Germany's economic and military power would allow him to dominate both France and Russia and emerge with a "Napoleonic supremacy."
The kaiser's dream of leading a continental league to outdo Britain's King Edward had been sabotaged long before by Bismarck. As a result of Bismarck's annexation of Alsace-Lorraine in 1871, generations of Frenchmen were inculcated with anti-German feeling. For Bismarck, this was the point: French hostility would unite Germans under his leadership. A French-German-Russian alliance against Britain was proposed at the time of the Boer War in 1900, but failed when Wilhelm demanded that the French recognize Alsace-Lorraine as German. London responded to Germany's rising power and hostility by signing an "Entente" with Paris in 1904.
Mindful of the role the British had played in Napoleon's downfall, Wilhelm built up the German navy in the hope that this would intimidate Britain. In 1900, the Reichstag enacted a Naval Law that authorized a program of long-term naval expansion. When the British launched the oversized HMS Dreadnought in 1906, all earlier warships suddenly became obsolete. Germany was, at least for the moment, only one modern warship behind Britain in the "naval race," as it came to be called. Rivalry between the two nations became a focus of intense public interest and national pride. The Anglo-French Entente proved itself in 1911 when Britain intervened to support France in the Agadir Crisis. Germany did not have the resources to challenge a combined allied fleet. The episode illustrated the futility of the naval strategy. German attention soon shifted to the army.
Finally recovered from the trauma of Russo-Japanese War (1904-1905), Russia reentered great power politics in 1912 by backing the Balkan League in a war against Turkey. Because Wilhelm wanted war with France and Russia, but not Britain, advisers who understood this could manipulate him. Chancellor Bethmann Hollweg withheld crucial telegrams to get his approval for war in 1914.
Schlieffen and his plan
In the early years of his reign, the kaiser had great enthusiasm for military adventures, including proposals to invade Denmark and America. It was the job of Schlieffen, Germany's highly respected army chief, to remind the All-Highest that the army needed to remain on the eastern frontier to deter the Russians.
The Russo-Japanese War of 1904-1905 inspired Schlieffen to sketch a plan to invade France by deploying through Belgium. His "Memorandum for a War against France," or Schlieffen Plan as it was later called, ignored Russia altogether, an assumption justified by that country's preoccupation with the Far East at the time. The Germans would outflank the French and attack from the rear, according to the plan.
When the Japanese defeated the Russians in 1905, the kaiser expressed interest in the plan. Schlieffen had to confess that the German army was not ready for war. The French had quick-firing recoil artillery, which the Germans lacked. The plan also contemplated a significantly larger army than the one Schlieffen commanded. Whatever its value in war, the plan could be used as a tool for obtaining weapons and more soldiers.
When Schlieffen retired in 1906, Wilhelm replaced him with Moltke, the emperor's former military adjutant. "The personal favour of the emperors...coupled with his great name, elevated him to offices for which he was completely unqualified," as Britannica puts it. Schlieffen himself strongly advised against the appointment.
The Daily Telegraph Affair (1908)
In October 1908, the emperor gave an interview to Britain's Daily Telegraph in which he stated, "You English, are mad, mad, mad as March hares," among other tactless comments. An international uproar followed and the kaiser was roundly condemned in the Reichstag and elsewhere. Wilhelm faulted Bülow, his chancellor since 1900, for failing to vet the advance copy of the interview that had been sent to him. The kaiser replaced Bülow with Bethmann, a Prussian bureaucrat with no foreign policy experience. That neither Moltke nor Bethmann was respected by their peers did not give Wilhelm pause. He would personally guide all decision making. Bismarck had done the same, so the German state was already set up to permit one-man rule. But Wilhelm's talents proved to be unequal to those of the Iron Chancellor.
With full control of policymaking apparently established, Wilhelm repeatedly pushed Europe to the brink of war. The Bosnian crisis of 1908-1909 was followed by the Agadir crisis of 1911 and by the Balkan crisis of 1912-1913. As a constitutional monarch, Wilhelm did not take the initiative on any of these occasions. Instead, he responded to proposals put to him by his advisers.
Agadir and the origin of WWI (1911)
In earlier diplomatic incidents, the kaiser and his government had asserted Germany's rights while public opinion had remained cautious. This dynamic reversed in the Agadir crisis of 1911. The crisis began on July 1 when Germany sent the gunboat SMS Panther to the port of Agadir in Morocco. This was a challenge to France, which considered Morocco to be part of its sphere of influence.
On July 21, David Lloyd George, then Chancellor of the Exchequer, announced that Britain was prepared to go to war to avoid French humiliation in Morocco. In Germany, this intervention triggered a wave of Anglophobia.
Unlike Wilhelm's dream of becoming a modern Napoleon, colonies were a war aim that the average German could relate to. Many argued that it was hugely unfair that Britain, with its vast colonial empire, was thwarting Germany's far more modest ambitions for a "place in the sun." Germany had devoted a great deal of resources to its "naval race" with Britain. The Naval League was a source of propaganda and a focus of German patriotic sentiment. Lloyd George showed that a naval buildup would not deter Britain from interfering in Germany's colonial ambitions. In 1912, France and Britain signed a deal dividing up naval responsibilities. Faced with a combined allied fleet, German attention shifted to the army, ending the naval race.
The French emerged from the crisis with a confidence that frightened the Germans. The French army adopted doctrine that emphasized offensive action and a plan to recover Alsace Lorraine called Plan XVII.
Agadir was the first diplomatic crisis since the Great Eastern Crisis of 1878 that engaged European public opinion. France and Germany faced each other with pre-war fury. Bernhardi's warmongering opus Germany and the Next War (1911) became a bestseller. The German Army League, founded in January 1912, soon eclipsed the Naval League. The kaiser's own views were largely unchanged by the episode. But his reluctance to fight the British began to look quaint, even to his own advisors and family.
In November 1911, Crown Prince Wilhelm told his father that, "I am convinced that the political situation at home, so fragmented and muddled with its internal party interests, would improve at a stroke if all the country’s sons had to take up arms for their land." German historian Fritz Fischer, author of the most influential theory concerning the origin of the First World War, argues that the elite's anxiety with the domestic political situation was the driving force toward war.
The Haldane mission (1912)
In February 1912, British Secretary of State for War Richard Haldane arrived in Berlin and proposed that Portuguese and Belgian colonies be partitioned between Britain and Germany. The Germans would get the central African empire they had demanded at the time of Agadir. Wilhelm wasn't interested. His focus was European supremacy. He denounced Haldane's proposal as a British trick designed to divide the German fleet. "We have enough colonies!" he wrote. "If I want some I shall buy them or get them without England!" If Germany received colonies only by Britain's leave, there was no point. The kaiser's rejection was just as well. Haldane's proposal was no better received in London than it was in Berlin.
Opposition in the Reichstag (1912-1913)
In the Reichstag election of January 1912, voters turned against the militarism of the right-wing parties. The number of seats held by the Social Democratic Party more than doubled from 43 to 110 (out of a total of 397). For the first time, the three left-of-center parties, namely the Social Democrats (110), Center Party (91), and the Progressive People's Party (42), together secured a working majority. The Center Party is the precursor of the Christian Democratic Union, Germany's post-World War II conservative movement. The PPP is the precursor of the modern Free Democrats.
The Social Democrats demanded reform of the "three-tier" method of voting used in the Prussian state elections. Prussia was by far the largest of the German states at this time. In peacetime, Germany's main army answered to Prussia, and to Wilhelm as Prussian king. Three-tier voting meant that the Prussian government was dominated by the Junkers, conservative landowners who lived east of the Elbe. Agriculture was the least efficient sector of the German economy at this time, so a voting system based on agricultural wealth was anachronistic.
In June 1913, a dramatic expansion of the army was approved with the support of deputies of the Center, the National Liberals, and the Radicals.
In December 1913, lawmakers revolted and voted no confidence in Bethmann, an episode called the Zabern Affair. However, Bethmann's position, along with army expansion, were confirmed when the Reichstag meekly passed the annual budget on December 9. Only the Social Democrats and the Polish Party voted against.
Decision for war (1912-1914)
In November 1912, Wilhelm approved a proposal to support Austria-Hungary against Serbia even if this led to war with both France and Russia. This proposal was presented to the kaiser jointly by both Bethmann and Moltke, as well as by Kiderlen, the foreign secretary. These advisers misled the kaiser into believing that the British might support an Austro-Hungarian attack on Serbia. In December, Lichnowsky, the German ambassador to Britain, informed the kaiser that Britain would likely side with France in the event of such a war. The kaiser responded by ordering a "postponement of the great fight" until the summer of 1914 so that additional preparations could be made. Wilhelm was anxious that 1913, his jubilee, be a year of peace.
Fischer identifies a meeting of the German Imperial War Council on December 8, 1912 as the moment when the imperial government resolved upon war. The meeting was attended by the kaiser and four advisors. In April 1913, Moltke discontinued work on a longstanding plan for an offensive against Russia. The move was intended to rule out "half measures" and left the Schlieffen Plan as Germany's only war plan. In June, the Reichstag passed a bill to expand the army from 544,000 soldiers to 870,000 soldiers. It was the largest expansion since 1871.
The murder of Austro-Hungarian Archduke Franz Ferdinand triggered a final diplomatic crisis in July 1914. This time, Bethmann withheld and even falsified the increasingly urgent messages sent by the troublesome Lichnowsky. When Wilhelm realized that the British still intended to side with the French, he attempted to reverse course. His last minute change of heart proved to be too little, too late.
World War I (1914-1918)
Germany declared war on Russia on August 1, 1914 and on France on August 3. On August 4, this was followed up with a declaration of war on Belgium. German plans called for a massive deployment through Belgium to outflank French fortifications in Lorraine. Army enlargement meant that the Germans finally had enough soldiers to implement the plan Schlieffen had proposed in 1904. Moltke gave the army only a few weeks to conquer France. After that, it would be moved east to face Russia.
When the news that Germany had declared war on Belgium arrived in London, the British cabinet was still discussing its response to the earlier declarations, deadlocked on the issue of war. The German invasion of Belgium allowed Britain to enter the war as a unified nation.
The long-planned French offensive in Lorraine proved to be an enormous suicide mission. From August 7 to September 13, French soldiers flung themselves at well-prepared German fortifications, faithfully following Plan XVII. Losses were staggering, but only a few villages were acquired. The German anxieties that had led to war were shown to be without foundation.
Meanwhile, German infantry advanced through Belgium and northeastern France until September 6–12, when they reached the Marne river north of Paris. At this point the Germans ran out of supplies and both sides began digging trenches. It was the beginning of four years of trench warfare.
When Antwerp fell in October 1914, Wilhelm made a speech in which he stated that the city must remain German. He wanted Belgians to be evacuated from Belgium and replaced with German military colonists. Wilhelm insisted on retaining control of the Belgian coastline in the hope that a German naval base could eventually be built there. This position was an obstacle to peace negotiations throughout the war.
Wilhelm appointed generals Ludendorff and Hindenburg to head the army in August 1916. Promising victory, their prestige grew so great they could boss the imperial government around. On Ludendorff's initiative, Germany adopted "unrestricted" submarine warfare in January 1917. In April, the United States responded with a declaration of war. In July, Ludendorff forced Bethmann to resign. Ludendorff launched a great offensive in France in March 1918.
By August 1918, the Allies had learned to counter Ludendorff's strategies and his offensive sputtered to a close. With American forces on the way, there would be no second chance. As the soldiers realized victory was no longer possible, the front collapsed. On November 7, sailors in Kiel rejected an order to take the fleet on a final suicide mission. It was a signal for revolution across Germany. On November 9, the divisional commanders were summoned to Spa and asked about the attitudes of the soldiers. They answered that few were willing to fight on, either for the emperor or for Germany. Wilhelm refused to abdicate, but fled to the Netherlands later that day.
Life in exile (1918-1941)
Wilhelm took 59 railway wagons of possessions with him when he fled to Huis Doorn in The Netherlands in 1918.
Wilhelm's correspondence with Tsar Nicholas was published in 1918 as the Willy-Nicky Correspondence. (The two rulers had corresponded in English.) Monarchists were disillusioned by the revelation that the kaiser had brushed aside the tsar's peace proposals. Some supporters of the four-day Kapp Putsch of 1920 were monarchists. This was the closest Germany ever came to a serious attempt to restore the monarchy.
Guests at Doorn often heard the former kaiser threaten vengeance against a wide variety of opponents. When Finance Minister Matthias Erzberger was murdered in 1921, Wilhelm celebrated with champagne. Erzberger had forced Bethmann's resignation as chancellor in 1917, thus ending the kaiser's political role.
A 1925 biography by Emil Ludwig did much to undermine Wilhelm's reputation.
Under an agreement reached with the German government in October 1926, one third of the sixty royal castles went to the Hohenzollerns. "No other exiled monarch in modern history was treated as generously," as biographer John Röhl put it.
The kaiser's non-political passions were Greek archaeology and chopping wood. He excavated the Temple of Artemis in Corfu, and he chopped down thousands of trees during his stay at Doorn.
Wilhelm was appalled when the Nazis murdered former Chancellor Schleicher and his wife in 1934. After Kristallnacht, a Nazi anti-Jewish pogrom conducted in November 1938, Wilhelm stated: "For the first time, I am ashamed to be a German."
Wilhelm died at Doorn in 1941, while the Netherlands was under German occupation. In 1945, the Dutch government confiscated the Doorn mansion. The mansion has been a museum since 1956 and receives 45,000 visitors a year.
- Röhl, John C. G., Wilhelm II: Into the Abyss of War and Exile, 1900–1941, Cambridge University Press, 2014, p. xxviii.
- Röhl, p. xxvii.
- Lewis-Stempel, John, "Lunatic in charge of an asylum: When Kaiser Wilhelm II ruled Germany", Sunday Express, Apr 20, 2014
- Röhl, pp. 245-246.
- Röhl, p. 46.
- Röhl, pp. 386-392.
- "How Kaiser Bill planned to invade United States" Telegraph, 09 May 2002
- Röhl, p. 416.
- "There was a Schlieffen Plan" by Gerhard P. Gross in The Schlieffen Plan: International Perspectives on the German Strategy for World War I, (2014), pp. 100-101.
- Gross, pp. 103-104. Schlieffen's memorandum is, "A program for further expansion of the army and its mobilization," according to the archive's official description.
- "Moltke, Helmuth (Johannes Ludwig) von." Encyclopædia Britannica. Encyclopædia Britannica Ultimate Reference Suite. Chicago: Encyclopædia Britannica, 2013.
- Röhl, p. 660.
- Röhl, p. 874. "The British decision of July 1911 to protect France by threatening Germany with war and mobilizing the Royal Navy had strengthened to an alarming degree the aversion felt in broad sections of the public in Germany and among the ruling elite towards Great Britain. The fatalistic belief gained ground that sooner or later war with the naval Power and its Entente partner France was inevitable."
- Future War Minister Falkenhayn, a moderate among military officers, saw the kaiser as an obstacle. A few weeks after Agadir, he wrote, "Outwardly, our political situation has improved...but internally nothing has changed, in so far as H.M. is as determined as before to avoid a war." Röhl, p. 875.
- Röhl, p. 876.
- Röhl, p. 842.
- "On Monday the Reichstag passed the new German Army Bill", Spectator, July 5, 1913.
- There was public outrage in response to abuses by a military officer in the Alsacian town of Zabern. On December 4, 1913, the Reichstag voted no confidence in Bethmann by a margin of 293 to 54. Under the German constitution, the emperor had exclusive authority over appointments. But if the budget had been rejected, Bethmann might have been forced out of office. In the end, only the Social Democrats (110 seats) and the Polish Party (18 seats) voted against the budget. Germany had failed to take advantage of its best chance to achieve parliamentary democracy and head off war.
- Moltke understood the war would be long and bloody, but he misled the civilian leadership: "As Kurt Riezler, the chancellor's private secretary, recorded retrospectively in 1915 (p. 212): 'Bethmann can blame the coming of the war ... on the answer that Moltke gave him ... He did say yes! We would succeed.'" ("Helmuth von Moltke and the Origins of the First World War")
- Röhl, p. 954.
- Helmuth Von Moltke and the Origins of the First World War, p. 103.
- Röhl, p. 1052.
- Röhl, p. 11.
- Röhl, pp. 1184-1186.
- "Berlin refuses kaiser a final resting place" The Telegraph, 15 Oct 2000.
- Röhl, p. 1221.
- Röhl, p. 1222.
- Röhl, pp. 1191-1192.
- Balfour, Michael (1964), The Kaiser and His Times, Houghton Milton, p. 419.
|
https://www.huffpost.com/entry/7-weird-facts-about-balance_n_5787b989e4b08608d3336779
|
Most people can stand up and walk across a room without giving it much thought. But to do this, your brain must get information from several complex systems in the body, which work together to keep you balanced. Exactly how the body keeps balance, and what happens when these systems don’t work properly, might surprise you. Here are some weird facts about your balance.
1. Your inner ear plays an important role in balance.
Your ears aren't just important for hearing; they aid in balance as well. Several structures in the inner ear, together called the vestibular system, send signals to the brain that help you orient yourself and maintain balance. Two structures, called the utricle and the saccule, monitor linear movements of your head (from side to side and up and down), and also detect gravity, according to the Mayo Clinic. Other structures, which form loops and contain fluid, monitor the rotation of your head.
Many balance problems stem from conditions that affect the inner ear. For example, if calcium crystals inside the inner ear end up in the wrong place, it can cause the vestibular system to send signals to the brain that your head is moving, when it’s actually still, causing you to feel dizzy.
2. Your muscles, joints and even skin help with balance, too.
Sensory receptors in your muscles, joints, ligaments and skin help tell your brain where your body is in space — a sense called proprioception, according to the Vestibular Disorders Association (VEDA). These receptors, such as those on the bottom of your feet or along your back, are sensitive to pressure or stretching sensations. Receptors in the neck can tell the brain which way the head is turned, and receptors in the ankles can tell the brain how the body is moving relative to the ground, VEDA says.
When a police officer asks a driver to touch his or her nose as part of a sobriety test, the officer is testing the driver’s proprioception. People who are impaired by alcohol may fail the test because their brains have difficulty determining the position of their limbs relative to their noses.
3. Balance gets worse with age.
As we age, we experience impairments in the three main systems that keep us in balance: vision, the vestibular system and proprioception. These impairments, combined with reduced muscle strength and flexibility, make older adults more prone to falls. One-third of American adults over age 65 experience a fall each year, according to the Centers for Disease Control and Prevention.
4. Your big toe isn’t crucial for balance.
To avoid being drafted into the Vietnam War, some young men deliberately amputated their big toe because this injury would make them unfit for duty in the government’s eyes. But the big toe by itself actually isn’t crucial for balance.
People missing a big toe can still walk and run, although they will likely be slower and have a shorter stride, according to Scientific American. A 1988 study of people who had their big toe amputated found that the patients showed changes in their gait and the forces their body generated when walking. But the patients had “little or no disability” from the loss of their big toe, the study concluded.
5. You can feel like you’re moving when you aren’t.
If you’ve ever sat on a train, looked out the window and suddenly felt like your train was moving when it wasn’t, you’ve experienced a phenomenon called “vection.” This happens because something that takes up a large part of your visual field has started to move. In the train example, what you actually saw was another train start to move, making you feel as though your train were moving in the opposite direction.
Vection can cause disorientation, because your brain experiences a conflict between the incoming sensory information from different sources, according to VEDA. Your vision tells you that you’re moving, but sensory receptors in your body tell you that you aren’t moving (you don’t feel any vibrations from your train). However, extra information from your vestibular system may override this conflict, VEDA says. You might also find yourself looking out the other window, to figure out whether you are really moving.
6. Migraines can be linked with balance problems.
About 40 percent of people who have migraines also experience dizziness or balance problems, which can accompany a migraine or occur at a totally separate time, according to VEDA. The condition is known as migraine-associated vertigo. The cause of the condition is not known, but it’s possible that migraines affect brain signaling and that this, in turn, slows down the brain’s ability to interpret sensory information from the eyes, inner ear and muscles, resulting in a feeling of dizziness, Dr. Sujana Chandrasekhar, president of the American Academy of Otolaryngology-Head and Neck Surgery, told Live Science in a 2014 interview. Another theory is that the dizziness is caused by the release of certain chemicals in the brain that affect the vestibular system.
7. Some people feel a rocking sensation for months after going on a boat.
It's common for people who've been on a boat to feel like they are still swaying and bobbing even after they set foot on land again. This sensation usually disappears within a few hours or days. But for some people, this sensation of feeling like you're still at sea lasts for months or years. Patients with these symptoms are said to have "mal de debarquement syndrome."
It’s not clear why some people develop mal de debarquement syndrome. But one hypothesis is that people with this condition have changes in their brain metabolism and brain activity that make it able to adapt to the unfamiliar movement of the ocean when they’re at sea but unable to readapt once this movement has stopped, according to VEDA.
Original article on Live Science.
Copyright 2016 LiveScience, a Purch company. All rights reserved. This material may not be published, broadcast, rewritten or redistributed.
|
http://www.codeproject.com/Articles/28948/Curve-Fitting-using-Lagrange-Interpolation?fid=1525911&df=90&mpp=25&sort=Position&spc=Relaxed&tid=4060771
|
In this article, I will explain curve fitting using the Lagrange interpolation polynomial. Curve fitting is used across a wide spectrum of engineering applications, such as car and aircraft surface design. The main problem is: given a set of points in the plane, we want to fit them with a smooth curve that passes through these points. The order of the curve f(x) depends on the number of points given. For example, if we have two points, the function will be of the first order and the curve will be the line that passes through those two points, while if we have three points, the function will be of the second order (a quadratic). Let's first explain the Lagrange polynomial, then proceed to the algorithm and the implementation. In this article, I am using C# for coding.
An interpolation on two points, (x0, y0) and (x1, y1), results in a linear equation or a straight line. The standard form of a linear equation is given by y = mx + c, where m is the gradient of the line and c is the y-intercept.
m = (y1 − y0) / (x1 − x0)
c = y0 − m·x0
which results in:
y = ((y1 − y0) / (x1 − x0))·x + (x1·y0 − x0·y1) / (x1 − x0)
The linear equation is rewritten so that the two interpolated points, (x0, y0) and (x1, y1), are directly represented.
With this in mind, the linear equation is rewritten as:
P1(x) = a0(x − x1) + a1(x − x0)
where a0 and a1 are constants. The points x0 and x1 in the factors of the above equation are called the centers. Applying the equation at (x0, y0), we obtain:
y0 = a0(x0 − x1) + a1(x0 − x0), so
a0 = y0 / (x0 − x1)
At (x1, y1), we get:
y1 = a0(x1 − x1) + a1(x1 − x0), or a1 = y1 / (x1 − x0)
Therefore, the linear equation becomes:
P1(x) = y0 (x − x1) / (x0 − x1) + y1 (x − x0) / (x1 − x0)
The quadratic form of the Lagrange polynomial interpolates three points, (x0, y0), (x1, y1), and (x2, y2). The polynomial has the form:
P2(x) = a0(x − x1)(x − x2) + a1(x − x0)(x − x2) + a2(x − x0)(x − x1)
with centers at x0, x1, and x2. At (x0, y0):
y0 = a0(x0 − x1)(x0 − x2) + a1(x0 − x0)(x0 − x2) + a2(x0 − x0)(x0 − x1), so
a0 = y0 / [(x0 − x1)(x0 − x2)]
Similarly, applying the equation at (x1, y1) and (x2, y2):
a1 = y1 / [(x1 − x0)(x1 − x2)]
a2 = y2 / [(x2 − x0)(x2 − x1)]
Therefore:
P2(x) = y0 (x − x1)(x − x2) / [(x0 − x1)(x0 − x2)] + y1 (x − x0)(x − x2) / [(x1 − x0)(x1 − x2)] + y2 (x − x0)(x − x1) / [(x2 − x0)(x2 − x1)]
In general, a Lagrange polynomial of degree n is a polynomial that is produced from an interpolation over a set of points, (xi , yi ) for i = 0, 1, . . ., n, as follows:
Pn(x) = y0L0(x) + y1L1(x) + ··· + ynLn(x)
Using the Code
Given the interpolating points (xi, yi) for i = 0, 1, ..., n:
for i = 0 to n
    // the cumulative multiplication over k = 0 to n, with k != i
    Evaluate Li(x) = ∏k=0..n, k != i (x − xk) / (xi − xk);
Evaluate Pn(x) = y0·L0(x) + y1·L1(x) + ··· + yn·Ln(x);
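The article's downloadable source is not reproduced here, so the following is a minimal self-contained C# sketch of the algorithm above; the class and method names are my own illustrative choices, not necessarily those used in the original project:

    // Minimal sketch of Lagrange interpolation as described above.
    using System;

    class LagrangeDemo
    {
        // Evaluates Pn(x) for the interpolating points (xs[i], ys[i]), i = 0..n.
        static double Lagrange(double[] xs, double[] ys, double x)
        {
            double result = 0.0;
            for (int i = 0; i < xs.Length; i++)
            {
                // Li(x) = product over all k != i of (x - xk) / (xi - xk)
                double li = 1.0;
                for (int k = 0; k < xs.Length; k++)
                {
                    if (k != i)
                        li *= (x - xs[k]) / (xs[i] - xs[k]);
                }
                result += ys[i] * li;
            }
            return result;
        }

        static void Main()
        {
            // Three points lying on y = x^2; the quadratic Lagrange polynomial
            // must reproduce the parabola exactly.
            double[] xs = { 0.0, 1.0, 2.0 };
            double[] ys = { 0.0, 1.0, 4.0 };
            Console.WriteLine(Lagrange(xs, ys, 1.5));  // prints 2.25
        }
    }

Because three points determine a unique quadratic, the interpolated value at x = 1.5 is exactly 1.5^2 = 2.25, which makes a quick sanity check for any implementation.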
Points of Interest
Numerical computing is a very interesting field in software development. This article is related to that field ... And, I will post more articles soon about Computational Geometry ...such as Convex Hull algorithm and Triangulation.
|
http://en.wikipedia.org/wiki/Spinneret_(spider)
|
A spinneret is a silk-spinning organ of a spider or the larva of an insect. Some adult insects also have spinnerets, such as those borne on the forelegs of Embioptera. Spinnerets are usually on the underside of a spider's abdomen, to the rear. While most spiders have six spinnerets, some have two, four, or eight. They move independently and in concert.
Most spinnerets are not simple structures with a single orifice producing a single thread, but highly complex structures of many microscopic spigots, each producing one filament. This is important partly because it produces the necessary orientation of the protein molecules, without which the silk would be weak and useless. It also permits spiders to combine multiple filaments in different ways to produce many kinds of silk for special purposes.
Various species of spiders use silk extruded from spinnerets to build webs, to entrap insects by running round them, to make egg-cases, to catch the wind and fly (ballooning), etc. Some insect larvae (including silkworms) extrude silk to make a protective cocoon for their metamorphosis. The insects known as web spinners weave silken galleries for protection from predators and the elements while foraging and breeding.
Observations suggesting that there might be silk-producing organs on the feet of the zebra tarantula (Aphonopelma seemanni) led to questions about the origins of spinnerets. It was hypothesised that spinnerets in spiders were originally used as climbing aids on the feet and evolved and were used for webmaking at a later time. However, these observations have since been challenged, as described in the main article on tarantulas.
- INSECTA: EMBIOPTERA (EMBIIDINA), Retrieved December 1, 2013
- Wiggins, Charlotte (Nov 10, 2013). "Gardening to Distraction: Along came a spider". The Rolla Daily News (Therolladailynews.com). Retrieved December 1, 2013.
- Spider Identification – Types of Spiders, Retrieved December 1, 2013
- Richards, O. W.; Davies, R.G. (1977). Imms' General Textbook of Entomology: Volume 1: Structure, Physiology and Development Volume 2: Classification and Biology. Berlin: Springer. ISBN 0-412-61390-5.
- Gorb, SN; Niederegger S; Hayashi CY; Summers AP; Votsch W; Walther P (September 28, 2006). "Bio materials: silk-like secretion from tarantula feet". Nature 443 (7110): 407. doi:10.1038/443407a. PMID 17006505.
|
https://kidshealth.org/KidsHealthDemo/en/parents/immune-thrombocytopenia.html?WT.ac=p-ra
|
What Is Immune Thrombocytopenia?
Immune thrombocytopenia — or immune thrombocytopenic purpura (ITP) — happens when the immune system, which fights germs and infections, attacks the body's platelets. Platelets are cells that stop bleeding by forming blood clots. Without enough platelets, kids with the condition bleed easily.
In most children, immune thrombocytopenia (throm-buh-sye-tuh-PEE-nee-uh) goes away within 6 months. But sometimes it can last longer, or come back after going away.
What Are the Signs & Symptoms of Immune Thrombocytopenia?
A child with immune thrombocytopenia may have bleeding that happens easily, such as bleeding under the skin that leads to:
- easy bruising
- small red or purple spots on the skin called petechiae (peh-TEE-kee-eye)
- purple spots that look like bruises called purpura (PURR-pyur-ah)
Very rarely, immune thrombocytopenia can cause bleeding in the brain (a stroke).
What Causes Immune Thrombocytopenia?
Immune thrombocytopenia happens when the immune system attacks platelets. Viral infections often trigger this in children. Less commonly, another illness or autoimmune disease or a medicine can trigger ITP. Often, it isn't clear what triggers the immune system to attack platelets.
Who Gets Immune Thrombocytopenia?
Most cases of childhood immune thrombocytopenia happen in kids 1–7 years old. But it can happen in older kids and teens. Usually, the child is otherwise healthy and feels well.
How Is Immune Thrombocytopenia Diagnosed?
To diagnose immune thrombocytopenia, doctors:
- ask questions
- do an exam
- do blood tests to:
  - check the platelet count
  - make sure the other blood counts (red blood cells and white blood cells) are normal
  - look for signs of infection
  - check for other causes of low platelets
How Is Immune Thrombocytopenia Treated?
Treating immune thrombocytopenia depends on how severe the symptoms are. Children who only have bruising and red pinpoint spots may not need any treatment.
When needed, treatments may include:
- medicines that stop the immune system from attacking platelets, such as an IV injection of antibodies (immunoglobulins or rituximab)
- medicines to help the body make more platelets
- surgery to remove the spleen, the organ that removes platelets from the blood; this is done only when a child has serious symptoms that don't improve with other treatments
What Can Parents Do?
While they have immune thrombocytopenia, kids need to:
- avoid sports and activities (such as bike riding and contact sports) that could lead to injury and bleeding
- not take medicines that contain ibuprofen (such as Motrin or Advil) or aspirin, which make bleeding more likely
Most children with immune thrombocytopenia recover fully within a few months. Help your child by:
- going to all doctor's appointments
- following the doctor's advice on which activities are OK and which to avoid
- contacting the doctor and going to a hospital right away if your child has a head injury
- making sure your child avoids any medicines the doctor recommends avoiding
- calling the doctor if your child has new symptoms of bleeding, bruising, or red or purplish spots on the skin
|
http://www1.bartleby.com/107/175.html
|
Henry Gray (1821–1865). Anatomy of the Human Body. 1918.
VIII. The Lymphatic System
THE LYMPHATIC SYSTEM consists (1) of complex capillary networks which collect the lymph in the various organs and tissues; (2) of an elaborate system of collecting vessels which conduct the lymph from the capillaries to the large veins of the neck at the junction of the internal jugular and subclavian veins, where the lymph is poured into the blood stream; and (3) lymph glands or nodes which are interspaced in the pathways of the collecting vessels filtering the lymph as it passes through them and contributing lymphocytes to it. The lymphatic capillaries and collecting vessels are lined throughout by a continuous layer of endothelial cells, forming thus a closed system. The lymphatic vessels of the small intestine receive the special designation of lacteals or chyliferous vessels; they differ in no respect from the lymphatic vessels generally excepting that during the process of digestion they contain a milk-white fluid, the chyle.
FIG. 592. Scheme showing relative positions of primary lymph sacs, based on the description given by Florence Sabin.
The Development of the Lymphatic Vessels. The lymphatic system begins as a series of sacs at the points of junction of certain of the embryonic veins. These lymph-sacs are developed by the confluence of numerous venous capillaries, which at first lose their connections with the venous system, but subsequently, on the formation of the sacs, regain them. The lymphatic system is therefore developmentally an offshoot of the venous system, and the lining walls of its vessels are always endothelial.
In the human embryo the lymph sacs from which the lymphatic vessels are derived are six in number; two paired, the jugular and the posterior lymph-sacs; and two unpaired, the retroperitoneal and the cisterna chyli. In lower mammals an additional pair, subclavian, is present, but in the human embryo these are merely extensions of the jugular sacs.
The position of the sacs is as follows: (1) jugular sac, the first to appear, at the junction of the subclavian vein with the primitive jugular; (2) posterior sac, at the junction of the iliac vein with the cardinal; (3) retroperitoneal, in the root of the mesentery near the suprarenal glands; (4) cisterna chyli, opposite the third and fourth lumbar vertebræ (Fig. 592). From the lymph-sacs the lymphatic vessels bud out along fixed lines corresponding more or less closely to the course of the embryonic bloodvessels. Both in the body-wall and in the wall of the intestine, the deeper plexuses are the first to be developed; by continued growth of these the vessels in the superficial layers are gradually formed. The thoracic duct is probably formed from anastomosing outgrowths from the jugular sac and cisterna chyli. At its connection with the cisterna chyli it is at first double, but the two vessels soon join.
All the lymph-sacs except the cisterna chyli are, at a later stage, divided up by slender connective tissue bridges and transformed into groups of lymph glands. The lower portion of the cisterna chyli is similarly converted, but its upper portion remains as the adult cisterna.
Lymphatic Capillaries. The complex capillary plexuses, which consist of a single layer of thin flat endothelial cells, lie in the connective-tissue spaces in the various regions of the body to which they are distributed and are bathed by the intercellular tissue fluids. Two views are at present held as to the mode in which the lymph is formed: one, that it forms by the physical processes of filtration, diffusion, and osmosis; the other, that in addition to these physical processes the endothelial cells have an active secretory function. The colorless liquid lymph has about the same composition as the blood plasma. It contains many lymphocytes and frequently red blood corpuscles. Granules and bacteria are also taken up by the lymph from the connective-tissue spaces, partly by the action of lymphocytes which pass into the lymph between the endothelial cells and partly by the direct passage of the granules through the endothelial cells.
The lymphatic capillary plexuses vary greatly in form; the anastomoses are usually numerous; blind ends or cul-de-sacs are especially common in the intestinal villi, the dermal papillæ and the filiform papillæ of the tongue. The plexuses are often in two layers: a superficial and a deep, the superficial being of smaller caliber than the deep. The caliber, however, varies greatly in a given plexus from a few micromillimeters to one millimeter. The capillaries are without valves.
Distribution. The Skin. Lymphatic capillaries are abundant in the dermis, where they form superficial and deep plexuses, the former sending blind ends into the dermal papillæ. The plexuses are especially rich over the palmar surface of the hands and fingers and over the plantar surface of the feet and toes. The epidermis is without capillaries. The conjunctiva has an especially rich plexus.
The alimentary canal is supplied with rich plexuses beneath the epithelium, often as a superficial plexus in the mucosa and a deeper submucosal plexus. Cul-de-sacs extend into the filiform papillæ of the tongue and the villi of the small intestine. Those portions of the alimentary canal covered by peritoneum, have in addition a subserous lymphatic capillary plexus beneath the mesothelium.
The liver has a rich subserous plexus in the capsule and also extensive plexuses which accompany the hepatic artery and portal vein. The lymphatic capillaries have not been followed into the liver lobules. The lymph from the liver forms a large part of that which flows through the thoracic duct. The gall-bladder and bile ducts have rich subepithelial plexuses and the former a subserous plexus.
The trachea and bronchi have plexuses in the mucosa and submucosa but the smaller bronchi have only a single layer. The capillaries do not extend to the air-cells. The plexuses around the smaller bronchi connect with the rich subserous plexus of the lungs in places where the veins reach the surface.
The kidney is supplied with a coarse subserous plexus and a deeper plexus of finer capillaries in the capsule. Lymphatics have been described within the substance of the kidney surrounding the tubules.
The urinary bladder has a rich plexus of lymphatic capillaries just beneath the epithelial lining, also a subserous set which anastomoses with the former through the muscle layer. The submucous plexus is continuous with the submucous plexus of the urethra.
Lymphatic capillaries are probably absent in the central nervous system, the meninges, the eyeball (except the conjunctiva), the orbit, the internal ear, within striated muscle, the liver lobule, the spleen pulp and kidney parenchyma. They are entirely absent in cartilage. In many places further investigation is needed.
Lymphatic Vessels. The lymphatic vessels are exceedingly delicate, and their coats are so transparent that the fluid they contain is readily seen through them. They are interrupted at intervals by constrictions, which give them a knotted or beaded appearance; these constrictions correspond to the situations of valves in their interior. Lymphatic vessels have been found in nearly every texture and organ of the body which contains bloodvessels. Such non-vascular structures as cartilage, the nails, cuticle, and hair have none, but with these exceptions it is probable that eventually all parts will be found to be permeated by these vessels.
Structure of Lymphatic Vessels. The larger lymphatic vessels are each composed of three coats. The internal coat is thin, transparent, slightly elastic, and consists of a layer of elongated endothelial cells with wavy margins by which the contiguous cells are dovetailed into one another; the cells are supported on an elastic membrane. The middle coat is composed of smooth muscular and fine elastic fibers, disposed in a transverse direction. The external coat consists of connective tissue, intermixed with smooth muscular fibers longitudinally or obliquely disposed; it forms a protective covering to the other coats, and serves to connect the vessel with the neighboring structures. In the smaller vessels there are no muscular or elastic fibers, and the wall consists only of a connective-tissue coat, lined by endothelium. The thoracic duct has a more complex structure than the other lymphatic vessels; it presents a distinct subendothelial layer of branched corpuscles, similar to that found in the arteries; in the middle coat there is, in addition to the muscular and elastic fibers, a layer of connective tissue with its fibers arranged longitudinally. The lymphatic vessels are supplied by nutrient vessels, which are distributed to their outer and middle coats; and here also have been traced many non-medullated nerves in the form of a fine plexus of fibrils.
The valves of the lymphatic vessels are formed of thin layers of fibrous tissue covered on both surfaces by endothelium which presents the same arrangement as on the valves of veins (p. 501). In form the valves are semilunar; they are attached by their convex edges to the wall of the vessel, the concave edges being free and directed along the course of the contained current. Usually two such valves, of equal size, are found opposite one another; but occasionally exceptions occur, especially at or near the anastomoses of lymphatic vessels. Thus, one valve may be of small size and the other increased in proportion.
In the lymphatic vessels the valves are placed at much shorter intervals than in the veins. They are most numerous near the lymph glands, and are found more frequently in the lymphatic vessels of the neck and upper extremity than in those of the lower extremity. The wall of the lymphatic vessel immediately above the point of attachment of each segment of a valve is expanded into a pouch or sinus which gives to these vessels, when distended, the knotted or beaded appearance already referred to. Valves are wanting in the vessels composing the plexiform net-work in which the lymphatic vessels usually originate on the surface of the body.
Lymph Glands (lymphoglandulæ). The lymph glands are small oval or bean-shaped bodies, situated in the course of lymphatic and lacteal vessels so that the lymph and chyle pass through them on their way to the blood. Each generally presents on one side a slight depression, the hilus, through which the bloodvessels enter and leave the interior. The efferent lymphatic vessel also emerges from the gland at this spot, while the afferent vessels enter the organ at different parts of the periphery. On section (Fig. 597) a lymph gland displays two different structures: an external, of lighter color, the cortical; and an internal, darker, the medullary. The cortical structure does not form a complete investment, but is deficient at the hilus, where the medullary portion reaches the surface of the gland; so that the efferent vessel is derived directly from the medullary structures, while the afferent vessels empty themselves into the cortical substance.
Structure of Lymph Glands. A lymph gland consists of (1) a fibrous envelope, or capsule, from which a frame-work of processes (trabeculæ) proceeds inward, imperfectly dividing the gland into open spaces freely communicating with each other; (2) a quantity of lymphoid tissue occupying these spaces without completely filling them; (3) a free supply of bloodvessels, which are supported in the trabeculæ; and (4) the afferent and efferent vessels communicating through the lymph paths in the substance of the gland. The nerves passing into the hilus are few in number and are chiefly distributed to the bloodvessels supplying the gland.
The capsule is composed of connective tissue with some plain muscle fibers, and from its internal surface are given off a number of membranous processes or trabeculæ, consisting, in man, of connective tissue, with a small admixture of plain muscle fibers; but in many of the lower animals composed almost entirely of involuntary muscle. They pass inward, radiating toward the center of the gland, for a certain distance, that is to say, for about one-third or one-fourth of the space between the circumference and the center of the gland. In some animals they are sufficiently well-marked to divide the peripheral or cortical portion of the gland into a number of compartments (so-called follicles), but in man this arrangement is not obvious. The larger trabeculæ springing from the capsule break up into finer bands, and these interlace to form a mesh-work in the central or medullary portion of the gland. In these spaces formed by the interlacing trabeculæ is contained the proper gland substance or lymphoid tissue. The gland pulp does not, however, completely fill the spaces, but leaves, between its outer margin and the enclosing trabeculæ, a channel or space of uniform width throughout. This is termed the lymph path or lymph sinus (Fig. 597). Running across it are a number of finer trabeculæ of retiform connective tissue, the fibers of which are, for the most part, covered by ramifying cells.
On account of the peculiar arrangement of the frame-work of the organ, the gland pulp in the cortical portion is disposed in the form of nodules, and in the medullary part in the form of rounded cords. It consists of ordinary lymphoid tissue (Fig. 598), being made up of a delicate net-work of retiform tissue, which is continuous with that in the lymph paths, but marked off from it by a closer reticulation; it is probable, moreover, that the reticular tissue of the gland pulp and the lymph paths is continuous with that of the trabeculæ, and ultimately with that of the capsule of the gland. In its meshes, in the nodules and cords of lymphoid tissue, are closely packed lymph corpuscles. The gland pulp is traversed by a dense plexus of capillary bloodvessels. The nodules or follicles in the cortical portion of the gland frequently show, in their centers, areas where karyokinetic figures indicate a division of the lymph corpuscles. These areas are termed germ centers. The cells composing them have more abundant protoplasm than the peripheral cells.
The afferent vessels, as stated above, enter at all parts of the periphery of the gland, and after branching and forming a dense plexus in the substance of the capsule, open into the lymph sinuses of the cortical part. In doing this they lose all their coats except their endothelial lining, which is continuous with a layer of similar cells lining the lymph paths. In like manner the efferent vessel commences from the lymph sinuses of the medullary portion. The stream of lymph carried to the gland by the afferent vessels thus passes through the plexus in the capsule to the lymph paths of the cortical portion, where it is exposed to the action of the gland pulp; flowing through these it enters the paths or sinuses of the medullary portion, and finally emerges from the hilus by means of the efferent vessel. The stream of lymph in its passage through the lymph sinuses is much retarded by the presence of the reticulum, hence morphological elements, either normal or morbid, are easily arrested and deposited in the sinuses. Many lymph corpuscles pass with the efferent lymph stream to join the general blood stream. The arteries of the gland enter at the hilus, and either go at once to the gland pulp, to break up into a capillary plexus, or else run along the trabeculæ, partly to supply them and partly running across the lymph paths, to assist in forming the capillary plexus of the gland pulp. This plexus traverses the lymphoid tissue, but does not enter into the lymph sinuses. From it the veins commence and emerge from the organ at the same place as that at which the arteries enter.
FIG. 598 Lymph gland tissue. Highly magnified. a. Trabeculæ. b. Small artery in substance of same. c. Lymph paths. d. Lymph corpuscles. e. Capillary plexus.
The lymphatic vessels are arranged into a superficial and a deep set. On the surface of the body the superficial lymphatic vessels are placed immediately beneath the integument, accompanying the superficial veins; they join the deep lymphatic vessels in certain situations by perforating the deep fascia. In the interior of the body they lie in the submucous areolar tissue, throughout the whole length of the digestive, respiratory, and genito-urinary tracts; and in the subserous tissue of the thoracic and abdominal walls. Plexiform networks of minute lymphatic vessels are found interspersed among the proper elements and bloodvessels of the several tissues; the vessels composing the net-work, as well as the meshes between them, are much larger than those of the capillary plexus. From these net-works small vessels emerge, which pass, either to a neighboring gland, or to join some larger lymphatic trunk. The deep lymphatic vessels, fewer in number, but larger than the superficial, accompany the deep bloodvessels. Their mode of origin is probably similar to that of the superficial vessels. The lymphatic vessels of any part or organ exceed the veins in number, but in size they are much smaller. Their anastomoses also, especially those of the large trunks, are more frequent, and are effected by vessels equal in diameter to those which they connect, the continuous trunks retaining the same diameter.
Lymph. Lymph, found only in the closed lymphatic vessels, is a transparent, colorless, or slightly yellow, watery fluid of specific gravity about 1.015; it closely resembles the blood plasma, but is more dilute. When it is examined under the microscope, leucocytes of the lymphocyte class are found floating in the transparent fluid; they are always increased in number after the passage of the lymph through lymphoid tissue, as in lymph glands. Lymph should be distinguished from tissue fluid, which is found outside the lymphatic vessels in the tissue spaces.
https://www.w3.org/International/articles/http-charset/index
Intended audience: script developers (PHP, JSP, etc.), webmasters, Web project managers, and anyone who wants to understand how to set or send HTTP charset information.
When a server sends a document to a user agent (e.g. a browser) it also sends information in the Content-Type field of the accompanying HTTP header about what type of data format this is. This information is expressed using a MIME type label. This article provides a starting point for those needing to set the encoding information in the HTTP header.
You should look elsewhere for information about how to declare character encoding in HTML pages, or about how to check the character encoding information being sent in an HTTP header.
Documents transmitted with HTTP that are of type text, such as text/html, text/plain, etc., can send a charset parameter in the HTTP header to specify the character encoding of the document.
It is very important to always label Web documents explicitly. HTTP 1.1 says that the default charset is ISO-8859-1. But there are too many unlabeled documents in other encodings, so browsers use the reader's preferred encoding when there is no explicit charset parameter.
The line in the HTTP header typically looks like this:
Content-Type: text/html; charset=utf-8
In theory, any character encoding that has been registered with IANA can be used, but there is no browser that understands all of them. The more widely a character encoding is used, the better the chance that a browser will understand it. A Unicode encoding such as UTF-8 is a good choice for a number of reasons.
How to make the server send out appropriate charset information depends on the server. You will need the appropriate administrative rights to be able to change server settings.
Apache. This can be done via the AddCharset (Apache 1.3.10 and later) or AddType directives, for directories or individual resources (files). With AddDefaultCharset (Apache 1.3.12 and later), it is possible to set the default charset for a whole server. For more information, see the article on Setting 'charset' information in .htaccess.
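For instance, a minimal .htaccess sketch (assuming UTF-8 is the desired encoding and that the server allows these directives; adjust the extension list to your site):
AddDefaultCharset utf-8
AddCharset utf-8 .html .htm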
Jigsaw. Use an indexer in JigAdmin to associate extensions with charsets, or set the charset directly on a resource.
IIS 5 and 6. In Internet Services Manager, right-click "Default Web Site" (or the site you want to configure) and go to "Properties" => "HTTP Headers" => "File Types..." => "New Type...". Put in the extension you want to map, separately for each extension; IIS users will probably want to map .htm, .html, ... Then, for Content type, add "text/html;charset=utf-8" (without the quotes; substitute your desired charset for utf-8; do not leave any spaces anywhere, because IIS ignores all text after spaces). For IIS 4, you may have to use "HTTP Headers" => "Creating a Custom HTTP Header" if the above does not work.
The appropriate header can also be set in server-side scripting languages. For example:
Perl. Output the correct header before any part of the actual page. After the last header, use a double newline to mark the end of the header block, e.g.:
print "Content-Type: text/html; charset=utf-8\n\n";
Python. Use the same solution as for Perl (except that you don't need a semicolon at the end).
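Spelled out, that presumably gives the following (Python 2 print-statement syntax, current when this article was written; note that print appends a newline of its own):
print "Content-Type: text/html; charset=utf-8\n\n"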
PHP. Use the header() function before generating any content, e.g.:
header('Content-type: text/html; charset=utf-8');
Java Servlets. Use the setContentType method on the ServletResponse before obtaining any object (Stream or Writer) used for output, e.g.:
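A minimal sketch of that call (setContentType is the standard ServletResponse method; the variable name response is an assumption):
response.setContentType("text/html; charset=utf-8");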
If you use a Writer, the Servlet automatically takes care of the conversion from Java Strings to the encoding selected.
JSP. Use the page directive, e.g.:
<%@ page contentType="text/html; charset=UTF-8" %>
Output produced through out.println() or the expression elements (<%= object %>) is automatically converted to the encoding selected. Also, the page itself is interpreted as being in this encoding.
ASP and ASP.Net. ContentType and charset are set independently, and are methods on the response object.
To set the charset, use e.g.:
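In classic ASP this is presumably the one-liner (Response.Charset is the documented property; utf-8 is just the example value):
Response.Charset = "utf-8"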
In ASP.Net, setting Response.ContentEncoding will take care both of the charset parameter in the HTTP Content-Type and of the actual encoding of the document sent out (which of course have to be the same). The default can be set in the globalization element in Machine.config, which is originally set to UTF-8.
http://nile.cirhep.org/introduction-to-ionic-and-covalent-bonding-worksheet-answers/
Key: Introduction to Ionic and Covalent Bonding
Pre-lab questions: 1. Define ionic bond: a chemical bond where electron(s) are transferred from a cation (usually a metal) to an anion (a nonmetal or polyatomic ion); the resulting opposite charges attract, and the bond gives the atoms involved a full octet. 2. Define covalent bond: a chemical bond where electron(s) are shared between two...
Ionic and Covalent Bonding Worksheet with Answers
Ionic and covalent bonding worksheets (docx, 2 MB) with answers (docx, 16 KB); created Feb 2, updated May 21. Available as part of a bundle, "Chemical Bonding Worksheets with Answers" (8.70), saving up to 35%. Categories: Chemistry; Acids and...
Answer Key for Ionic Bonding 1 (Teacher Worksheets)
Showing the top 8 worksheets in the category "Answer Key for Ionic Bonding 1". Among those displayed: Bonding Basics; Chemistry Name WS 1 Ionic Bonding Key; Date Block Ionic Bonding Work 1; Section Ionic Bonding; Naming Ionic Compounds Practice Work; Ionic and Covalent Compounds Name Key; Covalent Bonding Work. Once you find your worksheet, click on the pop-out icon or print icon to...
Bonding Basics: Ionic Bonds Worksheet Answers
Ionic vs. Covalent Bonds (Learny Kids)
Displaying the top 8 worksheets found for "Ionic vs Covalent Bonds". Among them: Bonding Review Work (University of Texas at Austin); Bonding Basics; Covalent Bonds Answer Key; Chapter 7 Practice Work: Covalent Bonds and Molecular Properties; Properties of Ionic and Covalent Compounds; Polarity Work; Chemical Bonding: Ionic, Covalent; Chemical Bonds: Ionic Bonds Answers.
Worksheet: Chemical Bonding, Ionic and Covalent, Answers Part 2
Together with the covalent bonding worksheet. In ionic bonding, the atom of the element is bonded to an atom of the opposite sign; for example, boron atoms bonded to oxygen atoms are covalent. Ionic bonding is often the outcome of the covalent bond as well.
Introduction to Ionic and Covalent Bonding (PhET contribution)
Description: use the simulation to observe properties of ionic and molecular compounds in conjunction with MSDS sheets. This is meant to introduce ionic and covalent bonding, as well as the properties associated with the resulting compounds. Duration: 60 minutes. Answers included: yes.
Ninth Grade Lesson: Introduction to Covalent Bonding
Objective: students will be able to predict the number and types of bonds (i.e., covalent, ionic) formed, based on elements' outermost electrons and position on the periodic table. Big idea: differentiating between ionic and covalent bonding can be difficult for students, but with a little practice they can succeed.
Worksheet: Chemical Bonding, Ionic and Covalent
Remember: an ionic bond forms between a metal and a nonmetal (M-NM); a covalent bond forms between a nonmetal and a nonmetal (NM-NM). Part 1: determine whether the elements in the following compounds are metals or nonmetals, and describe the type of bonding that occurs in each compound.
Chemistry Worksheet: Introduction to Chemical Bonding
Bonding Worksheet 1, Introduction to Ionic Bonds: the forces that hold matter together are called chemical bonds. There are four major types of bonds, and we need to learn in detail about these bonds and how they influence the properties of matter: (i) ionic bonds, (ii) covalent bonds, (iii) metallic bonds, and (iv) intermolecular (van der Waals) forces. Ionic bonds: the...
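The metal/nonmetal rule in the last two worksheets is mechanical enough to state as a tiny program. A minimal Python sketch (my illustration, not taken from any of the worksheets; the element sets are an illustrative subset of the periodic table):
# Classify a two-element compound as ionic (metal + nonmetal)
# or covalent (nonmetal + nonmetal), per the M-NM / NM-NM rule above.
METALS = {"Na", "K", "Ca", "Mg", "Al", "Fe", "Cu", "Zn"}          # illustrative subset
NONMETALS = {"H", "C", "N", "O", "F", "P", "S", "Cl", "Br", "I"}  # illustrative subset

def bond_type(element1, element2):
    elements = {element1, element2}
    if elements & METALS and elements & NONMETALS:
        return "ionic"
    if elements <= NONMETALS:
        return "covalent"
    return "other (metallic, or outside these example sets)"

print(bond_type("Na", "Cl"))  # ionic
print(bond_type("C", "O"))    # covalent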
http://mathforum.org/library/drmath/sets/high_circles.html?num_to_see=40&start_at=1&s_keyid=40506822&f_keyid=40506823
See also the Dr. Math FAQ: why 360 degrees? and segments of circles.
Browse High School Conic Sections, Circles
Stars indicate particularly interesting answers or good places to begin browsing.
Selected answers to common questions:
- Find the center of a circle.
- Is a circle a polygon?
- Volume of a tank.
- Why is a circle 360 degrees?
- 1/4 Tank Dipstick Problem (from Car Talk) [12/04/2002]
The gauge on Rick's 18-wheeler is broken, so he uses a dowel to
measure the diesel in his tank, which is cylinder-shaped, 20 inches in
diameter, and sits on its side. How can he mark the dipstick to show
1/4 of a tank of fuel?
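The dipstick entry is, at bottom, a circular-segment computation. A minimal Python sketch (my illustration, not the archive's answer; the function names are made up) bisects for the height h at which the segment holds a quarter of the circle's area:
import math

def segment_area(h, r):
    # Area of a circular segment of height h in a circle of radius r
    return r * r * math.acos(1 - h / r) - (r - h) * math.sqrt(2 * r * h - h * h)

def quarter_tank_height(r):
    # Bisect for the h where the segment holds 1/4 of the circle's area;
    # a quarter tank sits below the half-full mark h = r.
    target = math.pi * r * r / 4
    lo, hi = 0.0, r
    for _ in range(60):
        mid = (lo + hi) / 2
        if segment_area(mid, r) < target:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

print(quarter_tank_height(10.0))  # roughly 5.96 inches for the 20-inch tank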
- Hands of a Clock [10/10/1997]
How many times do the hour and minute hands cross in a 12-hour period of time?
- One Circle Revolving Around Another [05/26/1999]
How many revolutions will a smaller circle make while rotating around the
perimeter of a larger circle?
- Accuracy in Measurement [02/08/2002]
Since pi is irrational, either the circumference or the diameter of a
circle must be irrational. How is that possible?
- Accurate Drawing of an Ellipse [02/14/1999]
Draw an ellipse accurately using simple tools.
- Algebraic Spirals [5/19/1995]
I have read about "algebraic" spirals called "cissoid" and "conchoid," which Descartes deemed more exact than the "transcendental" logarithmic and Archimedean spirals, but I have not been able to find any other information on these figures.
- Analytic Geometry [08/31/1997]
How do I find the standard equations of the circles that pass through
(2,3) and are tangent to both the lines 3x - 4y = -1 and 4x + 3y = 7?
- Analytic Proof that Midpoints Form a Circle [03/10/1998]
Analytic proof that midpoints between a point within a circle and its
circumference form a circle.
- Angle Inscribed in a Semicircle [11/07/2001]
Prove that any angle inscribed in a semicircle is a right angle.
- Angle Measurements of Triangles inside Semicircle [11/26/1998]
If the area of a triangle inside a semicircle is equal to the area
outside the triangle within the semicircle, then find the values of the
acute angles in the triangle.
- Another Grazing Cow [6/7/1995]
A man has a barn that is 20 ft by 10 ft. He tethers a cow to one corner
of the outside of the barn using a 50-ft rope. What is the total area
that the cow is capable of grazing?
- Applications of Parabolas [10/24/2000]
How are parabolas used in real life?
- Approximating Pi using Geometry [08/12/1998]
I need to know a simple method to find the approximate value of pi using geometry.
- Arbelos Construction [03/10/2000]
Is there a Euclidean construction for the circles that are sandwiched in an arbelos?
- Arc Formulas [05/08/2003]
I am trying to determine the angle of an arc from the radius and arc
length. The radius is 630 and the arc length is 66.82.
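For reference, the relation behind this entry is s = r·θ with θ in radians, so for the numbers quoted: θ = s/r = 66.82/630 ≈ 0.1061 rad ≈ 6.08°.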
- Archimedes and the Area of a Circle [09/17/1997]
How do you find the area of a circle without pi?
- Archimedes' Method of Estimating Pi [5/29/1996]
What was Archimedes' method for estimating pi using inscribed and
circumscribed polygons about a circle?
- Arcs Inside a Square [07/25/1999]
What is the area of the figure created by the intersection of two arcs
drawn in a square of sidelength 5 units?
- Area and Perimeter in Polygons [06/24/1999]
How can I prove the formula A = (a^2n)/(4tan(180/n)) for computing the
area of a regular n-gon with sidelength a? How does this compare to the
area of a circle?
- Area, Angle of Chords of a Circle [7/25/1996]
Calculate the angles PAB and POB, the area of the sector bounded by OP,
OB and the minor arc PB.
- Are Angles Dimensionless? [08/31/2003]
If you look at the dimensions in the equation arc length = r*theta, it
appears that angles must be dimensionless. But this can't be right.
Or can it?
- Area of a Circle Segment [04/18/1999]
What are the steps for figuring out the area of a segment of a circle?
- Area of a Circle with Radius less than 1 [02/18/2002]
If the radius is less than 1 it just gets smaller and you get a smaller area.
- Area of a Crescent [11/10/2000]
What is the formula for the area of a crescent?
- Area of a Curved Figure [07/26/1997]
How can you find the area of a curved figure without using calculus?
- Area of an Annulus [10/13/2001]
How can I find the area of an annulus?
- Area of an Ellipse [08/18/1999]
How do you calculate the area of an ellipse?
- Area of an Ellipse Cut by a Chord [05/26/2000]
How can you calculate area of the part of an ellipse cut off by a chord,
if you know the major and minor axes, and the chord?
- Area of an Ellipse using Integral Calculus [11/4/1996]
How do you find the area of an ellipse?
- Area of an Ellipse without using Calculus [11/28/1997]
How do you find the area of an oval without using calculus?
- Area of an Oval [11/30/2001]
How do I figure out the area of an oval 17" x 38"?
- Area of a Parabola [09/05/1997]
How do you find the area of a parabola? (I just finished Algebra 2.)
- Area of A Sector of An Ellipse [02/28/1998]
Finding the area of a sector of an ellipse, given the semiminor and major
axes and the angles of the 2 vectors bounding the sector.
- Area of a Segment from Arc and Chord Length [11/27/2000]
How do you find the area of a segment of a circle if you know only the
arc length and chord length?
- The Area of a Square Inscribed in a Circle [12/23/1995]
What is the area of a square inscribed in a circle whose circumference is
- Area of Inscribed Circle [12/01/1998]
Find the area of the circle inscribed in a triangle ABC using Heron's formula.
- Area of Intersection of Two Circles [12/1/1995]
My teenage son asked me for the formula for the area of intersection of
two arbitrary circles.
- Area of Part of an Ellipse [04/07/2001]
Given an ellipse with a line bisecting it perpendicular to either the
major or minor axis of the ellipse, what is the formula for the area of
the ellipse either above or below that line?
- Area of Union of Two Circles [6/10/1996]
If the effective length of a rope tied to a goat is L, and the goat can
eat exactly half of the grass in a field, express L in terms of R.
- Areas of N-Sided Regular Polygon and Circle [02/24/2005]
The area of an n-sided regular polygon approaches the area of a circle
as n gets very large.
https://en.wikipedia.org/wiki/Copyright
Copyright is a type of intellectual property that grants the creator of an original creative work the exclusive legal right to determine whether and under what conditions it may be copied and used by others, usually for a limited term of years. Copyright is intended to protect the original expression of an idea in the form of a creative work, but not the idea itself. A copyright is subject to limitations based on public interest considerations, such as the fair use doctrine in the United States.
Some jurisdictions require "fixing" copyrighted works in a tangible form. It is often shared among multiple authors, each of whom holds a set of rights to use or license the work, and who are commonly referred to as rights holders. These rights frequently include reproduction, control over derivative works, distribution, public performance, and moral rights such as attribution.
Copyrights can be granted by public law and are in that case considered "territorial rights". This means that copyrights granted by the law of a certain state do not extend beyond the territory of that specific jurisdiction. Copyrights of this type vary by country; many countries, and sometimes a large group of countries, have made agreements with other countries on procedures applicable when works "cross" national borders or national rights are inconsistent.
Typically, the public law duration of a copyright expires 50 to 100 years after the creator dies, depending on the jurisdiction. Some countries require certain copyright formalities to establish copyright; others recognize copyright in any completed work, without formal registration.
- 1 History
- 2 Obtaining protection
- 3 Enforcement
- 4 Rights granted
- 5 Limitations and exceptions
- 6 Transfer, assignment and licensing
- 7 Criticism
- 8 Public domain
- 9 See also
- 10 References
- 11 Further reading
- 12 External links
The concept of copyright developed in England with the invention of the printing press. The consequent rise in literacy across Europe led to a dramatic increase in the demand for literary work. In reaction to the printing of "scandalous books and pamphlets", the English Parliament passed the Licensing of the Press Act 1662, which required all intended publications to be registered with the government-approved Stationers' Company, giving the Stationers the right to regulate what material could be printed.
Copyright laws allow products of creative human activities, such as literary and artistic production, to be preferentially exploited and thus incentivized. Different cultural attitudes, social organizations, economic models and legal frameworks are seen to account for why copyright emerged in Europe and not, for example, in Asia. In the Middle Ages in Europe, there was generally a lack of any concept of literary property due to the general relations of production, the specific organization of literary production and the role of culture in society. The latter refers to the tendency of oral societies, such as that of Europe in the medieval period, to view knowledge as the product and expression of the collective, rather than to see it as individual property. However, with copyright laws, intellectual production comes to be seen as a product of an individual, with attendant rights. The most significant point is that patent and copyright laws support the expansion of the range of creative human activities that can be commodified. This parallels the ways in which capitalism led to the commodification of many aspects of social life that earlier had no monetary or economic value per se.
Copyright has developed into a concept that has a significant effect on nearly every modern industry, including not just literary work, but also forms of creative work such as sound recordings, films, photographs, software, and architecture.
Often seen as the first real copyright law, the 1709 British Statute of Anne gave the publishers rights for a fixed period, after which the copyright expired. The act also alluded to individual rights of the artist. It began, "Whereas Printers, Booksellers, and other Persons, have of late frequently taken the Liberty of Printing ... Books, and other Writings, without the Consent of the Authors ... to their very great Detriment, and too often to the Ruin of them and their Families:". A right to benefit financially from the work is articulated, and court rulings and legislation have recognized a right to control the work, such as ensuring that the integrity of it is preserved. An irrevocable right to be recognized as the work's creator appears in some countries' copyright laws.
The Copyright Clause of the United States Constitution (1787) authorized copyright legislation: "To promote the Progress of Science and useful Arts, by securing for limited Times to Authors and Inventors the exclusive Right to their respective Writings and Discoveries." That is, by guaranteeing them a period of time in which they alone could profit from their works, they would be enabled and encouraged to invest the time required to create them, and this would be good for society as a whole. A right to profit from the work has been the philosophical underpinning for much legislation extending the duration of copyright, to the life of the creator and beyond, to their heirs.
The original length of copyright in the United States was 14 years, and it had to be explicitly applied for. If the author wished, they could apply for a second 14‑year monopoly grant, but after that the work entered the public domain, so it could be used and built upon by others.
Copyright law was enacted rather late in German states, and the historian Eckhard Höffner argues that the absence of copyright laws in the early 19th century encouraged publishing, was profitable for authors, led to a proliferation of books, enhanced knowledge, and was ultimately an important factor in the ascendency of Germany as a power during that century.
International copyright treaties
The 1886 Berne Convention first established recognition of copyrights among sovereign nations, rather than merely bilaterally. Under the Berne Convention, copyrights for creative works do not have to be asserted or declared, as they are automatically in force at creation: an author need not "register" or "apply for" a copyright in countries adhering to the Berne Convention. As soon as a work is "fixed", that is, written or recorded on some physical medium, its author is automatically entitled to all copyrights in the work, and to any derivative works unless and until the author explicitly disclaims them, or until the copyright expires. The Berne Convention also resulted in foreign authors being treated equivalently to domestic authors, in any country signed onto the Convention. The UK signed the Berne Convention in 1887 but did not implement large parts of it until 100 years later with the passage of the Copyright, Designs and Patents Act 1988. Specifically, for educational and scientific research purposes, the Berne Convention allows developing countries to issue compulsory licenses for the translation or reproduction of copyrighted works within the limits prescribed by the Convention. This was a special provision that had been added at the time of the 1971 revision of the Convention, because of the strong demands of the developing countries. The United States did not sign the Berne Convention until 1989.
The United States and most Latin American countries instead entered into the Buenos Aires Convention in 1910, which required a copyright notice on the work (such as all rights reserved), and permitted signatory nations to limit the duration of copyrights to shorter and renewable terms. The Universal Copyright Convention was drafted in 1952 as another less demanding alternative to the Berne Convention, and ratified by nations such as the Soviet Union and developing nations.
In 1961, the United International Bureaux for the Protection of Intellectual Property signed the Rome Convention for the Protection of Performers, Producers of Phonograms and Broadcasting Organizations. In 1996, this organization was succeeded by the founding of the World Intellectual Property Organization, which launched the 1996 WIPO Performances and Phonograms Treaty and the 2002 WIPO Copyright Treaty, which enacted greater restrictions on the use of technology to copy works in the nations that ratified it. The Trans-Pacific Partnership includes intellectual property provisions relating to copyright.
Copyright laws are standardized somewhat through these international conventions such as the Berne Convention and Universal Copyright Convention. These multilateral treaties have been ratified by nearly all countries, and international organizations such as the European Union or World Trade Organization require their member states to comply with them.
The original holder of the copyright may be the employer of the author rather than the author himself if the work is a "work for hire". For example, in English law the Copyright, Designs and Patents Act 1988 provides that if a copyrighted work is made by an employee in the course of that employment, the copyright is automatically owned by the employer, the work being a "work for hire". Typically, the first owner of a copyright is the person who created the work, i.e. the author. But when more than one person creates the work, then a case of joint authorship can be made, provided some criteria are met.
Copyright may apply to a wide range of creative, intellectual, or artistic forms, or "works". Specifics vary by jurisdiction, but these can include poems, theses, fictional characters, plays and other literary works, motion pictures, choreography, musical compositions, sound recordings, paintings, drawings, sculptures, photographs, computer software, radio and television broadcasts, and industrial designs. Graphic designs and industrial designs may have separate or overlapping laws applied to them in some jurisdictions.
Copyright does not cover ideas and information themselves, only the form or manner in which they are expressed. For example, the copyright to a Mickey Mouse cartoon restricts others from making copies of the cartoon or creating derivative works based on Disney's particular anthropomorphic mouse, but does not prohibit the creation of other works about anthropomorphic mice in general, so long as they are different enough to not be judged copies of Disney's. Note additionally that Mickey Mouse is not copyrighted because characters cannot be copyrighted; rather, Steamboat Willie is copyrighted and Mickey Mouse, as a character in that copyrighted work, is afforded protection.
Typically, a work must meet minimal standards of originality in order to qualify for copyright, and the copyright expires after a set period of time (some jurisdictions may allow this to be extended). Different countries impose different tests, although generally the requirements are low; in the United Kingdom there has to be some "skill, labour, and judgment" that has gone into it. In Australia and the United Kingdom it has been held that a single word is insufficient to comprise a copyright work. However, single words or a short string of words can sometimes be registered as a trademark instead.
Copyright law recognizes the right of an author based on whether the work actually is an original creation, rather than based on whether it is unique; two authors may own copyright on two substantially identical works, if it is determined that the duplication was coincidental, and neither was copied from the other.
In all countries where the Berne Convention standards apply, copyright is automatic, and need not be obtained through official registration with any government office. Once an idea has been reduced to tangible form, for example by securing it in a fixed medium (such as a drawing, sheet music, photograph, a videotape, or a computer file), the copyright holder is entitled to enforce his or her exclusive rights. However, while registration isn't needed to exercise copyright, in jurisdictions where the laws provide for registration, it serves as prima facie evidence of a valid copyright and enables the copyright holder to seek statutory damages and attorney's fees. (In the US, registering after an infringement only enables one to receive actual damages and lost profits.)
A widely circulated strategy to avoid the cost of copyright registration is referred to as the poor man's copyright. It proposes that the creator send the work to himself in a sealed envelope by registered mail, using the postmark to establish the date. This technique has not been recognized in any published opinions of the United States courts. The United States Copyright Office says the technique is not a substitute for actual registration. The United Kingdom Intellectual Property Office discusses the technique and notes that the technique (as well as commercial registries) does not constitute dispositive proof that the work is original or establish who created the work.
The Berne Convention allows member countries to decide whether creative works must be "fixed" to enjoy copyright. Article 2, Section 2 of the Berne Convention states: "It shall be a matter for legislation in the countries of the Union to prescribe that works in general or any specified categories of works shall not be protected unless they have been fixed in some material form." Some countries do not require that a work be produced in a particular form to obtain copyright protection. For instance, Spain, France, and Australia do not require fixation for copyright protection. The United States and Canada, on the other hand, require that most works must be "fixed in a tangible medium of expression" to obtain copyright protection. U.S. law requires that the fixation be stable and permanent enough to be "perceived, reproduced or communicated for a period of more than transitory duration". Similarly, Canadian courts consider fixation to require that the work be "expressed to some extent at least in some material form, capable of identification and having a more or less permanent endurance".
Before 1989, United States law required the use of a copyright notice, consisting of the copyright symbol (©, the letter C inside a circle), the abbreviation "Copr.", or the word "Copyright", followed by the year of the first publication of the work and the name of the copyright holder. Several years may be noted if the work has gone through substantial revisions. The proper copyright notice for sound recordings of musical or other audio works is a sound recording copyright symbol (℗, the letter P inside a circle), which indicates a sound recording copyright, with the letter P indicating a "phonorecord". In addition, the phrase All rights reserved was once required to assert copyright, but that phrase is now legally obsolete. Almost everything on the Internet has some sort of copyright attached to it. Whether these things are watermarked, signed, or carry any other indication of copyright is a different story, however.
In 1989 the United States enacted the Berne Convention Implementation Act, amending the 1976 Copyright Act to conform to most of the provisions of the Berne Convention. As a result, the use of copyright notices has become optional to claim copyright, because the Berne Convention makes copyright automatic. However, the lack of notice of copyright using these marks may have consequences in terms of reduced damages in an infringement lawsuit – using notices of this form may reduce the likelihood of a defense of "innocent infringement" being successful.
Copyrights are generally enforced by the holder in a civil law court, but there are also criminal infringement statutes in some jurisdictions. While central registries are kept in some countries which aid in proving claims of ownership, registering does not necessarily prove ownership, nor does the fact of copying (even without permission) necessarily prove that copyright was infringed. Criminal sanctions are generally aimed at serious counterfeiting activity, but are now becoming more commonplace as copyright collectives such as the RIAA are increasingly targeting the file sharing home Internet user. Thus far, however, most such cases against file sharers have been settled out of court. (See: Legal aspects of file sharing)
In most jurisdictions the copyright holder must bear the cost of enforcing copyright. This will usually involve engaging legal representation, administrative or court costs. In light of this, many copyright disputes are settled by a direct approach to the infringing party in order to settle the dispute out of court.
"...by 1978, the scope was expanded to apply to any 'expression' that has been 'fixed' in any medium, this protection granted automatically whether the maker wants it or not, no registration required."
For a work to be considered to infringe upon copyright, its use must have occurred in a nation that has domestic copyright laws or adheres to a bilateral treaty or established international convention such as the Berne Convention or WIPO Copyright Treaty. Improper use of materials outside of legislation is deemed "unauthorized edition", not copyright infringement.
Statistics regarding the effects of copyright infringement are difficult to determine. Studies have attempted to determine whether there is a monetary loss for industries affected by copyright infringement by predicting what portion of pirated works would have been formally purchased if they had not been freely available. Other reports indicate that copyright infringement does not have an adverse effect on the entertainment industry, and can have a positive effect. In particular, a 2014 university study concluded that free music content, accessed on YouTube, does not necessarily hurt sales, instead has the potential to increase sales.
According to the World Intellectual Property Organization, copyright protects two types of rights. Economic rights allow right owners to derive financial reward from the use of their works by others. Moral rights allow authors and creators to take certain actions to preserve and protect their link with their work. The author or creator may be the owner of the economic rights, or those rights may be transferred to one or more copyright owners. Many countries do not allow the transfer of moral rights.
With any kind of property, its owner may decide how it is to be used, and others can use it lawfully only if they have the owner’s permission, often through a license. The owner’s use of the property must, however, respect the legally recognised rights and interests of other members of society. So the owner of a copyright-protected work may decide how to use the work, and may prevent others from using it without permission. National laws usually grant copyright owners exclusive rights to allow third parties to use their works, subject to the legally recognised rights and interests of others. Most copyright laws state that authors or other right owners have the right to authorise or prevent certain acts in relation to a work. Right owners can authorise or prohibit:
- reproduction of the work in various forms, such as printed publications or sound recordings;
- distribution of copies of the work;
- public performance of the work;
- broadcasting or other communication of the work to the public;
- translation of the work into other languages; and
- adaptation of the work, such as turning a novel into a screenplay.
Moral rights are concerned with the non-economic rights of a creator. They protect the creator’s connection with a work as well as the integrity of the work. Moral rights are only accorded to individual authors and in many national laws they remain with the authors even after the authors have transferred their economic rights. In some EU countries, such as France, moral rights last indefinitely. In the UK, however, moral rights are finite. That is, the right of attribution and the right of integrity last only as long as the work is in copyright. When the copyright term comes to an end, so too do the moral rights in that work. This is just one reason why the moral rights regime within the UK is often regarded as weaker or inferior to the protection of moral rights in continental Europe and elsewhere in the world. The Berne Convention, in Article 6bis, requires its members to grant authors the following rights:
- the right to claim authorship of a work (sometimes called the right of paternity or the right of attribution); and
- the right to object to any distortion or modification of a work, or other derogatory action in relation to a work, which would be prejudicial to the author’s honour or reputation (sometimes called the right of integrity).
These and other similar rights granted in national laws are generally known as the moral rights of authors. The Berne Convention requires these rights to be independent of authors’ economic rights. Moral rights are only accorded to individual authors, and in many national laws they remain with the authors even after the authors have transferred their economic rights. This means that even where, for example, a film producer or publisher owns the economic rights in a work, in many jurisdictions the individual author continues to have moral rights. Recently, in debates held at the U.S. Copyright Office on whether moral rights should be included in the framework of United States copyright law, the Copyright Office concluded that many diverse aspects of the current moral rights patchwork, including copyright law’s derivative work right, state moral rights statutes, and contract law, are generally working well and should not be changed. Further, the Office concluded that there is no need for the creation of a blanket moral rights statute at this time. However, there are aspects of the U.S. moral rights patchwork that could be improved to the benefit of individual authors and the copyright system as a whole.
Under United States copyright law, several exclusive rights are granted to the holder of a copyright, as listed below:
- protection of the work
- to determine and decide how, and under what conditions, the work may be marketed, publicly displayed, reproduced, distributed etc.
- to produce copies or reproductions of the work and to sell those copies (including, typically, electronic copies)
- to import or export the work
- to create derivative works (works that adapt the original work)
- to perform or display the work publicly
- to sell or cede these rights to others
- to transmit or display by radio, video or internet.
The basic right when a work is protected by copyright is that the holder may determine and decide how and under what conditions the protected work may be used by others. This includes the right to decide to distribute the work for free. This part of copyright is often overlooked. The phrase "exclusive right" means that only the copyright holder is free to exercise those rights, and others are prohibited from using the work without the holder's permission. Copyright is sometimes called a "negative right", as it serves to prohibit certain people (e.g., readers, viewers, or listeners, and primarily publishers and would-be publishers) from doing something they would otherwise be able to do, rather than permitting people (e.g., authors) to do something they would otherwise be unable to do. In this way it is similar to the unregistered design right in English law and European law. The rights of the copyright holder also permit him/her to not use or exploit their copyright, for some or all of the term. There is, however, a critique which rejects this assertion as being based on a philosophical interpretation of copyright law that is not universally shared. There is also debate on whether copyright should be considered a property right or a moral right.
UK copyright law gives creators both economic rights and moral rights. Copying someone else’s work without permission may constitute an infringement of their economic rights, that is, the reproduction right or the right of communication to the public, whereas mutilating it might infringe the creator’s moral rights. In the UK, moral rights include the right to be identified as the author of the work, generally known as the right of attribution, and the right not to have your work subjected to ‘derogatory treatment’, that is, the right of integrity.
Indian copyright law is at parity with the international standards as contained in TRIPS. The Indian Copyright Act, 1957, pursuant to the amendments in 1999, 2002 and 2012, fully reflects the Berne Convention for Protection of Literary and Artistic Works, 1886 and the Universal Copyrights Convention, to which India is a party. India is also a party to the Geneva Convention for the Protection of Rights of Producers of Phonograms and is an active member of the World Intellectual Property Organization (WIPO) and United Nations Educational, Scientific and Cultural Organization (UNESCO). The Indian system provides both the economic and moral rights under different provisions of its Indian Copyright Act of 1957.
Copyright subsists for a variety of lengths in different jurisdictions. The length of the term can depend on several factors, including the type of work (e.g. musical composition, novel), whether the work has been published, and whether the work was created by an individual or a corporation. In most of the world, the default length of copyright is the life of the author plus either 50 or 70 years. In the United States, the term for most existing works is a fixed number of years after the date of creation or publication. Under most countries' laws (for example, the United States and the United Kingdom), copyrights expire at the end of the calendar year in which they would otherwise expire.
The length and requirements for copyright duration are subject to change by legislation, and since the early 20th century there have been a number of adjustments made in various countries, which can make determining the duration of a given copyright somewhat difficult. For example, the United States used to require copyrights to be renewed after 28 years to stay in force, and formerly required a copyright notice upon first publication to gain coverage. In Italy and France, there were post-wartime extensions that could increase the term by approximately 6 years in Italy and up to about 14 in France. Many countries have extended the length of their copyright terms (sometimes retroactively). International treaties establish minimum terms for copyrights, but individual countries may enforce longer terms than those.
In the United States, all books and other works published before 1923 have expired copyrights and are in the public domain. In addition, works published before 1964 that did not have their copyrights renewed 28 years after first publication year also are in the public domain. Hirtle points out that the great majority of these works (including 93% of the books) were not renewed after 28 years and are in the public domain. Books originally published outside the US by non-Americans are exempt from this renewal requirement, if they are still under copyright in their home country.
But if the intended exploitation of the work includes publication (or distribution of derivative work, such as a film based on a book protected by copyright) outside the U.S., the terms of copyright around the world must be considered. If the author has been dead more than 70 years, the work is in the public domain in most, but not all, countries.
In 1998, the length of a copyright in the United States was increased by 20 years under the Copyright Term Extension Act. This legislation was strongly promoted by corporations which had valuable copyrights which otherwise would have expired, and has been the subject of substantial criticism on this point.
Limitations and exceptions
In many jurisdictions, copyright law makes exceptions to these restrictions when the work is copied for the purpose of commentary or other related uses. US copyright does not cover names, titles, short phrases, or listings (such as ingredients, recipes, labels, or formulas). However, there are protections available for those areas copyright does not cover, such as trademarks and patents.
There are some exceptions to what copyright will protect. Copyright will not protect:
- Names of products
- Names of businesses, organizations, or groups
- Pseudonyms of individuals
- Titles of works
- Catchwords, catchphrases, mottoes, slogans, or short advertising expressions
- Listings of ingredients in recipes, labels, and formulas, though the directions can be copyrighted
Idea–expression dichotomy and the merger doctrine
The idea–expression divide differentiates between ideas and expression, and states that copyright protects only the original expression of ideas, and not the ideas themselves. This principle, first clarified in the 1879 case of Baker v. Selden, has since been codified by the Copyright Act of 1976 at 17 U.S.C. § 102(b).
The first-sale doctrine and exhaustion of rights
Copyright law does not restrict the owner of a copy from reselling legitimately obtained copies of copyrighted works, provided that those copies were originally produced by or with the permission of the copyright holder. It is therefore legal, for example, to resell a copyrighted book or CD. In the United States this is known as the first-sale doctrine, and was established by the courts to clarify the legality of reselling books in second-hand bookstores.
Some countries may have parallel importation restrictions that allow the copyright holder to control the aftermarket. This may mean for example that a copy of a book that does not infringe copyright in the country where it was printed does infringe copyright in a country into which it is imported for retailing. The first-sale doctrine is known as exhaustion of rights in other countries and is a principle which also applies, though somewhat differently, to patent and trademark rights. It is important to note that the first-sale doctrine permits the transfer of the particular legitimate copy involved. It does not permit making or distributing additional copies.
In Kirtsaeng v. John Wiley & Sons, Inc., in 2013, the United States Supreme Court held in a 6–3 decision that the first-sale doctrine applies to goods manufactured abroad with the copyright owner's permission and then imported into the US without such permission. The case involved Asian editions of textbooks that had been manufactured abroad with the publisher-plaintiff's permission. The defendant, without permission from the publisher, imported the textbooks into the US and resold them on eBay. The Supreme Court's holding severely limits the ability of copyright holders to prevent such importation.
In addition, copyright, in most cases, does not prohibit one from acts such as modifying, defacing, or destroying his or her own legitimately obtained copy of a copyrighted work, so long as duplication is not involved. However, in countries that implement moral rights, a copyright holder can in some cases successfully prevent the mutilation or destruction of a work that is publicly visible.
Fair use and fair dealing
Copyright does not prohibit all copying or replication. In the United States, the fair use doctrine, codified by the Copyright Act of 1976 as 17 U.S.C. Section 107, permits some copying and distribution without permission of the copyright holder or payment to same. The statute does not clearly define fair use, but instead gives four non-exclusive factors to consider in a fair use analysis. Those factors are:
- the purpose and character of one's use
- the nature of the copyrighted work
- what amount and proportion of the whole work was taken, and
- the effect of the use upon the potential market for or value of the copyrighted work.
In the United Kingdom and many other Commonwealth countries, a similar notion of fair dealing was established by the courts or through legislation. The concept is sometimes not well defined; however, in Canada, private copying for personal use has been expressly permitted by statute since 1999. In Alberta (Education) v. Canadian Copyright Licensing Agency (Access Copyright), 2012 SCC 37, the Supreme Court of Canada concluded that limited copying for educational purposes could also be justified under the fair dealing exemption.

In Australia, the fair dealing exceptions under the Copyright Act 1968 (Cth) are a limited set of circumstances under which copyrighted material can be legally copied or adapted without the copyright holder's consent. Fair dealing uses are research and study; review and critique; news reportage; and the giving of professional advice (i.e. legal advice). Under current Australian law, although it is still a breach of copyright to copy, reproduce or adapt copyright material for personal or private use without permission from the copyright owner, owners of a legitimate copy are permitted to "format shift" that work from one medium to another for personal, private use, or to "time shift" a broadcast work for once-only later viewing or listening. Other technical exemptions from infringement may also apply, such as the temporary reproduction of a work in machine-readable form for a computer.
In the United States the AHRA (Audio Home Recording Act of 1992, codified in Chapter 10 of Title 17) prohibits action against consumers making noncommercial recordings of music, in return for royalties on both media and devices plus mandatory copy-control mechanisms on recorders.
- Section 1008. Prohibition on certain infringement actions
- No action may be brought under this title alleging infringement of copyright based on the manufacture, importation, or distribution of a digital audio recording device, a digital audio recording medium, an analog recording device, or an analog recording medium, or based on the noncommercial use by a consumer of such a device or medium for making digital musical recordings or analog musical recordings.
Later acts amended US Copyright law so that for certain purposes making 10 copies or more is construed to be commercial, but there is no general rule permitting such copying. Indeed, making one complete copy of a work, or in many cases using a portion of it, for commercial purposes will not be considered fair use. The Digital Millennium Copyright Act prohibits the manufacture, importation, or distribution of devices whose intended use, or only significant commercial use, is to bypass an access or copy control put in place by a copyright owner. An appellate court has held that fair use is not a defense to engaging in such distribution.
The copyright directive allows EU member states to implement a set of exceptions to copyright. Examples of those exceptions are:
- photographic reproductions on paper or any similar medium of works (excluding sheet music), provided that the rightholders receive fair compensation
- reproductions made by libraries, educational establishments, museums, or archives, which are non-commercial
- archival reproductions of broadcasts
- uses for the benefit of people with a disability
- uses for demonstration or repair of equipment
- uses for non-commercial research or private study
- uses in parody
It is legal in several countries including the United Kingdom and the United States to produce alternative versions (for example, in large print or braille) of a copyrighted work to provide improved access to a work for blind and visually impaired persons without permission from the copyright holder.
Transfer, assignment and licensing
A copyright, or aspects of it (e.g. reproduction alone, all but moral rights), may be assigned or transferred from one party to another. For example, a musician who records an album will often sign an agreement with a record company in which the musician agrees to transfer all copyright in the recordings in exchange for royalties and other considerations. The creator (and original copyright holder) benefits, or expects to benefit, from production and marketing capabilities far beyond those available to the author alone. In the digital age of music, music may be copied and distributed at minimal cost through the Internet; however, the record industry attempts to provide promotion and marketing for the artist and his or her work so it can reach a much larger audience. A copyright holder need not transfer all rights completely, though many publishers will insist. Some of the rights may be transferred, or else the copyright holder may grant another party a non-exclusive license to copy or distribute the work in a particular region or for a specified period of time.
A transfer or licence may have to meet particular formal requirements in order to be effective, for example under the Australian Copyright Act 1968 the copyright itself must be expressly transferred in writing. Under the U.S. Copyright Act, a transfer of ownership in copyright must be memorialized in a writing signed by the transferor. For that purpose, ownership in copyright includes exclusive licenses of rights. Thus exclusive licenses, to be effective, must be granted in a written instrument signed by the grantor. No special form of transfer or grant is required. A simple document that identifies the work involved and the rights being granted is sufficient. Non-exclusive grants (often called non-exclusive licenses) need not be in writing under U.S. law. They can be oral or even implied by the behavior of the parties. Transfers of copyright ownership, including exclusive licenses, may and should be recorded in the U.S. Copyright Office. (Information on recording transfers is available on the Office's web site.) While recording is not required to make the grant effective, it offers important benefits, much like those obtained by recording a deed in a real estate transaction.
Copyright may also be licensed. Some jurisdictions may provide that certain classes of copyrighted works be made available under a prescribed statutory license (e.g. musical works in the United States used for radio broadcast or performance). This is also called a compulsory license, because under this scheme, anyone who wishes to copy a covered work does not need the permission of the copyright holder, but instead merely files the proper notice and pays a set fee established by statute (or by an agency decision under statutory guidance) for every copy made. Failure to follow the proper procedures would place the copier at risk of an infringement suit. Because of the difficulty of tracking every individual work, copyright collectives or collecting societies and performing rights organizations (such as ASCAP, BMI, and SESAC) have been formed to collect royalties for hundreds of thousands of works at once. Though this market solution bypasses the statutory license, the availability of the statutory fee still helps dictate the price per work collective rights organizations charge, driving it down to what avoidance of procedural hassle would justify.
Copyright licenses known as open or free licenses seek to grant several rights to licensees, either for a fee or not. Free in this context is not as much a reference to price as it is to freedom. What constitutes free licensing has been characterised in a number of similar definitions, including, in order of longevity, the Free Software Definition, the Debian Free Software Guidelines, the Open Source Definition, and the Definition of Free Cultural Works. Further refinements to these definitions have resulted in categories such as copyleft and permissive. Common examples of free licences are the GNU General Public License, BSD licenses and some Creative Commons licenses.
Founded in 2001 by James Boyle, Lawrence Lessig, and Hal Abelson, the Creative Commons (CC) is a non-profit organization which aims to facilitate the legal sharing of creative works. To this end, the organization provides a number of generic copyright license options to the public, gratis. These licenses allow copyright holders to define conditions under which others may use a work and to specify what types of use are acceptable.
Some sources are critical of particular aspects of the copyright system; this is known as the debate over copynorms. Particularly against the background of uploading content to internet platforms and the digital exchange of original works, there is discussion about the copyright aspects of downloading and streaming, and of hyperlinking and framing.
Concerns are often couched in the language of digital rights, digital freedom, database rights, open data, or censorship. Discussions include Free Culture, a 2004 book by Lawrence Lessig, who coined the term "permission culture" to describe a worst-case system. The documentaries Good Copy Bad Copy and RiP!: A Remix Manifesto also discuss copyright. Some suggest an alternative compensation system. In Europe, consumer resistance to the rising costs of music, film, and books has given rise to a political movement, the Pirate Party. Some groups reject copyright altogether, taking an anti-copyright stance. The perceived inability to enforce copyright online leads some to advocate ignoring legal statutes when on the web.
Copyright, like other intellectual property rights, is subject to a statutorily determined term. Once the term of a copyright has expired, the formerly copyrighted work enters the public domain and may be used or exploited by anyone without obtaining permission, and normally without payment. However, in "paying public domain" regimes the user may still have to pay royalties to the state or to an authors' association. Courts in common law countries, such as the United States and the United Kingdom, have rejected the doctrine of a common law copyright. Public domain works should not be confused with works that are publicly available. Works posted on the internet, for example, are publicly available, but are not generally in the public domain. Copying such works may therefore violate the author's copyright.
- Adelphi Charter
- Artificial scarcity
- Authors' rights and related rights, roughly equivalent concepts in civil law countries
- Conflict of laws
- Copyright Alliance
- Copyright in architecture in the United States
- Copyright on the content of patents and in the context of patent prosecution
- Copyright for Creativity
- Copyright infringement
- Copyright on religious works
- Creative Barcode
- Digital rights management
- Digital watermarking
- Entertainment law
- Freedom of panorama
- Intellectual property protection of typefaces
- List of Copyright Acts
- List of copyright case law
- Literary property
- Model release
- Criticism of copyright
- Photography and the law
- Pirate Party
- Printing patent, a precursor to copyright
- Private copying levy
- Production music
- Reproduction fees
- Software copyright
- Threshold pledge system
- "Definition of copyright". Oxford Dictionaries. Retrieved 20 December 2018.
- "Definition of Copyright". Merriam-Webster. Retrieved 20 December 2018.
- Nimmer on Copyright, vol. 2, § 8.01.
- "Intellectual property", Black's Law Dictionary, 10th ed. (2014).
- "Understanding Copyright and Related Rights" (PDF). www.wipo.int. p. 4. Retrieved 6 December 2018.
- Stim, Rich. "Copyright Basics FAQ". The Center for Internet and Society Fair Use Project. Stanford University. Retrieved 21 July 2019.
- Daniel A. Tysver. "Works Unprotected by Copyright Law". Bitlaw.
- Lee A. Hollaar. "Legal Protection of Digital Information". p. Chapter 1: An Overview of Copyright, Section II.E. Ideas Versus Expression.
- Copyright, University of California, 2014, retrieved 15 December 2014
- "Journal Conventions – Vanderbilt Journal of Entertainment & Technology Law". www.jetlaw.org.
- Blackshaw, Ian S. (20 October 2011). Sports Marketing Agreements: Legal, Fiscal and Practical Aspects. Springer Science & Business Media – via Google Books.
- Kaufman, Roy (16 July 2008). Publishing Forms and Contracts. Oxford University Press – via Google Books.
- "Copyright Basics" (PDF). www.copyright.gov. U.S. Copyright Office. Retrieved 20 February 2019.
- "International Copyright Law Survey". Mincov Law Corporation.
- Copyright in Historical Perspective, p. 136-137, Patterson, 1968, Vanderbilt Univ. Press
- "Cum privilegio: Licensing of the Press Act of 1662". doi:10.1086/677787.
- Bettig, Ronald V. (1996). Copyrighting Culture: The Political Economy of Intellectual Property. Westview Press. p. 9–17. ISBN 0-8133-1385-6.
- Ronan, Deazley (2006). Rethinking copyright: history, theory, language. Edward Elgar Publishing. p. 13. ISBN 978-1-84542-282-0 – via Google Books.
- "Statute of Anne". Copyrighthistory.com. Retrieved 8 June 2012.
- Frank Thadeusz (18 August 2010). "No Copyright Law: The Real Reason for Germany's Industrial Expansion?". Der Spiegel. Retrieved 11 April 2015.
- "Berne Convention for the Protection of Literary and Artistic Works Article 5". World Intellectual Property Organization. Retrieved 18 November 2011.
- Garfinkle, Ann M; Fries, Janet; Lopez, Daniel; Possessky, Laura (1997). "Art conservation and the legal obligation to preserve artistic intent". JAIC 36 (2): 165–179.
- "International Copyright Relations of the United States", U.S. Copyright Office Circular No. 38a, August 2003.
- Parties to the Geneva Act of the Universal Copyright Convention (archived 25 June 2008 at the Wayback Machine) as of 1 January 2000: the dates given in the document are dates of ratification, not dates of coming into force. The Geneva Act came into force on 16 September 1955, for the first twelve to have ratified (which included four non-members of the Berne Union as required by Art. 9.1), or three months after ratification for other countries.
- 165 Parties to the Berne Convention for the Protection of Literary and Artistic Works Archived 6 March 2016 at the Wayback Machine as of May 2012.
- MacQueen, Hector L; Charlotte Waelde; Graeme T Laurie (2007). Contemporary Intellectual Property: Law and Policy. Oxford University Press. p. 39. ISBN 978-0-19-926339-4 – via Google Books.
- 17 U.S.C. § 201(b); Cmty. for Creative Non-Violence v. Reid, 490 U.S. 730 (1989)
- Stim, Rich. "Copyright Ownership: Who Owns What?". The Center for Internet and Society Fair Use Project. Stanford University. Retrieved 21 July 2019.
- World Intellectual Property Organization. "Understanding Copyright and Related Rights" (PDF). WIPO. p. 8. Retrieved 1 December 2017.
- Simon, Stokes (2001). Art and copyright. Hart Publishing. pp. 48–49. ISBN 978-1-84113-225-9 – via Google Books.
- Express Newspaper Plc v News (UK) Plc, F.S.R. 36 (1991)
- "Subject Matter and Scope of Copyright" (PDF). copyright.gov. Retrieved 4 June 2015.
- "Copyright in General (FAQ)". U.S Copyright Office. Retrieved 11 August 2016.
- "Copyright Registers" Archived 5 October 2013 at the Wayback Machine, United Kingdom Intellectual Property Office
- "Automatic right", United Kingdom Intellectual Property Office
- See Harvard Law School, Module 3: The Scope of Copyright Law. See also Tyler T. Ochoa, Copyright, Derivative Works and Fixation: Is Galoob a Mirage, or Does the Form(GEN) of the Alleged Derivative Work Matter?, 20 Santa Clara High Tech. L.J. 991, 999–1002 (2003) ("Thus, both the text of the Act and its legislative history demonstrate that Congress intended that a derivative work does not need to be fixed in order to infringe."). The legislative history of the 1976 Copyright Act says this difference was intended to address transitory works such as ballets, pantomimes, improvised performances, dumb shows, mime performances, and dancing.
- Copyright Act of 1976, Pub. L. 94–553, 90 Stat. 2541, § 401(a) (19 October 1976)
- The Berne Convention Implementation Act of 1988 (BCIA), Pub. L. 100–568, 102 Stat. 2853, 2857. One of the changes introduced by the BCIA was to section 401, which governs copyright notices on published copies, specifying that notices "may be placed on" such copies; prior to the BCIA, the statute read that notices "shall be placed on all" such copies. An analogous change was made in section 402, dealing with copyright notices on phonorecords.
- Taylor, Astra (2014). The People's Platform:Taking Back Power and Culture in the Digital Age. New York City, New York, USA: Picador. pp. 144–145. ISBN 978-1-250-06259-8.
- "U.S. Copyright Office – Information Circular" (PDF). Retrieved 7 July 2012.
- 17 U.S.C. § 401(d)
- Taylor, Astra (2014). The People's Platform: Taking Back Power and Culture in the Digital Age. New York, New York: Picador. p. 148. ISBN 978-1-250-06259-8.
- Owen, L. (2001). "Piracy". Learned Publishing. 14: 67–70. doi:10.1087/09531510125100313.
- Butler, S. Piracy Losses "Billboard" 199(36)
- "Urheberrechtsverletzungen im Internet: Der bestehende rechtliche Rahmen genügt". Ejpd.admin.ch.
- Tobias Kretschmer & Christian Peukert (2014). "Video Killed the Radio Star? Online Music Videos and Digital Music Sales". Social Science Electronic Publishing. ISSN 2042-2695. SSRN 2425386.
- "World Intellectual Property Organisation (WIPO)" (PDF). 20 April 2019.
- "THE MUTILATED WORK" (PDF). Copyright User.
- ""authors, attribution, and integrity: examining moral rights in the united states"" (PDF). U.S. Copyright Office. April 2019.
- Peter K, Yu (2007). Intellectual Property and Information Wealth: Copyright and related rights. Greenwood Publishing Group. p. 346. ISBN 978-0-275-98883-8.
- Tom G. Palmer, "Are Patents and Copyrights Morally Justified?" Accessed 5 February 2013.
- Dalmia, Vijay Pal (14 December 2017). "Copyright Law In India". Mondaq.
- 17 U.S.C. § 305
- The Duration of Copyright and Rights in Performances Regulations 1995, part II, Amendments of the UK Copyright, Designs and Patents Act 1988
- Nimmer, David (2003). Copyright: Sacred Text, Technology, and the DMCA. Kluwer Law International. p. 63. ISBN 978-90-411-8876-2. OCLC 50606064 – via Google Books.
- "Copyright Term and the Public Domain in the United States.", Cornell University.
- See Peter B. Hirtle, "Copyright Term and the Public Domain in the United States 1 January 2015" online at footnote 8 Archived 26 February 2015 at the Wayback Machine
- Lawrence Lessig, Copyright's First Amendment, 48 UCLA L. Rev. 1057, 1065 (2001)
- "Copyright Protection Not Available for Names, Titles, or Short Phrases" (2012). U.S. Copyright Office.
- "John Wiley & Sons Inc. v. Kirtsaeng" (PDF). Archived from the original (PDF) on 2 July 2017.
- "US CODE: Title 17,107. Limitations on exclusive rights: Fair use". .law.cornell.edu. 20 May 2009. Retrieved 16 June 2009.
- "Chapter 1 – Circular 92 – U.S. Copyright Office". www.copyright.gov.
- "Copyright (Visually Impaired Persons) Act 2002 comes into force". Royal National Institute of Blind People. 1 January 2011. Retrieved 11 August 2016.
- WIPO Guide on the Licensing of Copyright and Related Rights. World Intellectual Property Organization. 2004. p. 15. ISBN 978-92-805-1271-7.
- WIPO Guide on the Licensing of Copyright and Related Rights. World Intellectual Property Organization. 2004. p. 8. ISBN 978-92-805-1271-7.
- WIPO Guide on the Licensing of Copyright and Related Rights. World Intellectual Property Organization. 2004. p. 16. ISBN 978-92-805-1271-7.
- "Creative Commons Website". creativecommons.org. Retrieved 24 October 2011.
- Rubin, R. E. (2010) 'Foundations of Library and Information Science: Third Edition', Neal-Schuman Publishers, Inc., New York, p. 341
- "MEPs ignore expert advice and vote for mass internet censorship". European Digital Rights. Retrieved 24 June 2018.
- Dowd, Raymond J. (2006). Copyright Litigation Handbook (1st ed.). Thomson West. ISBN 0-314-96279-4.
- Ellis, Sara R. Copyrighting Couture: An Examination of Fashion Design Protection and Why the DPPA and IDPPPA are a Step Towards the Solution to Counterfeit Chic, 78 Tenn. L. Rev. 163 (2010), available at http://ssrn.com/abstract=1735745.
- Ghosemajumder, Shuman. Advanced Peer-Based Technology Business Models. MIT Sloan School of Management, 2002.
- Lehman, Bruce: Intellectual Property and the National Information Infrastructure (Report of the Working Group on Intellectual Property Rights, 1995)
- Lindsey, Marc: Copyright Law on Campus. Washington State University Press, 2003. ISBN 978-0-87422-264-7.
- Mazzone, Jason. Copyfraud. SSRN
- McDonagh, Luke. Is Creative use of Musical Works without a licence acceptable under Copyright? International Review of Intellectual Property and Competition Law (IIC) 4 (2012) 401–426, available at SSRN
- Nimmer, Melville; David Nimmer (1997). Nimmer on Copyright. Matthew Bender. ISBN 0-8205-1465-9.
- Patterson, Lyman Ray (1968). Copyright in Historical Perspective. Online Version. Vanderbilt University Press. ISBN 0-8265-1373-5.
- Rife, by Martine Courant. Convention, Copyright, and Digital Writing (Southern Illinois University Press; 2013) 222 pages; Examines legal, pedagogical, and other aspects of online authorship.
- Rosen, Ronald (2008). Music and Copyright. Oxford Oxfordshire: Oxford University Press. ISBN 0-19-533836-7.
- Shipley, David E. Thin But Not Anorexic: Copyright Protection for Compilations and Other Fact Works UGA Legal Studies Research Paper No. 08-001; Journal of Intellectual Property Law, Vol. 15, No. 1, 2007.
- Silverthorne, Sean. Music Downloads: Pirates- or Customers?. Harvard Business School Working Knowledge, 2004.
- Sorce Keller, Marcello. "Originality, Authenticity and Copyright", Sonus, VII(2007), no. 2, pp. 77–85.
- Steinberg, S.H.; Trevitt, John (1996). Five Hundred Years of Printing (4th ed.). London and New Castle: The British Library and Oak Knoll Press. ISBN 1-884718-19-1.
- Story, Alan; Darch, Colin; Halbert, Deborah, eds. (2006). The Copy/South Dossier: Issues in the Economics, Politics and Ideology of Copyright in the Global South (PDF). Copy/South Research Group. ISBN 978-0-9553140-1-8. Archived from the original (PDF) on 16 August 2013.
- Ransom, Harry Huntt (1956). The First Copyright Statute. Austin: University of Texas.
- Rose, M. (1993), Authors and Owners: The Invention of Copyright, London: Harvard University Press
- Loewenstein, J. (2002), The Author’s Due: Printing and the Prehistory of Copyright, London: University of Chicago Press.
- Copyright at Curlie
- WIPOLex from WIPO; global database of treaties and statutes relating to intellectual property
- Copyright Berne Convention: Country List List of the 164 members of the Berne Convention for the protection of literary and artistic works
- A Bibliography on the Origins of Copyright and Droit d'Auteur
- MIT OpenCourseWare 6.912 Introduction to Copyright Law Free self-study course with video lectures as offered during the January 2006, Independent Activities Period (IAP)
- About Copyright at the UK Intellectual Property Office
- UK Copyright Law fact sheet (April 2000) a concise introduction to UK Copyright legislation
- IPR Toolkit – An Overview, Key Issues and Toolkit Elements (September 2009) by Professor Charles Oppenheim and Naomi Korn at the Strategic Content Alliance
Quantum encryption is one of the trendiest topics in cybersecurity practice and theory. In essence it uses the principles of quantum mechanics to secure message transmissions. Its distinct advantage is that it exploits the properties of quantum systems to strengthen cryptographic tasks.
What is Quantum Encryption
Quantum encryption is the practice of applying principles from quantum mechanics within the complex mathematical algorithms of cryptography. Security has long relied on "traditional" encryption methods, which are well known among the general population. However, as cybersecurity threats have grown enormously in number, more and more companies are seeking to implement the technology in their products and services.
This type of cryptography allows users to encrypt messages sent to recipients without anyone else being able to decode them. This is achieved by exploiting, in practical solutions, the ability of quantum systems to occupy multiple states. The use of quantum mechanics was first proposed by Stephen Wiesner decades ago in a paper called "Conjugate Coding". Since then the most well-known and widely used application of the principle has been quantum key distribution, which allows for the creation and maintenance of a secure connection between two parties. There are two very important characteristics security-wise:
- The tunnel is created in a way which prohibits a third party from retrieving the key that secures the communications.
- If a third party attempts to use a stolen key to access the private secure connection, the two parties will be notified immediately.
All quantum encryption operations are secure against quantum computer brute force attacks as the strength of the protective cipher does not depend on mathematical complexity but on physical principles.
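To illustrate why a tampered key exchange is detectable, here is a short classical simulation in the spirit of BB84, a well-known quantum key distribution protocol (the article does not name a specific protocol, so take this as an assumed example). The simulation models only the protocol's logic, not real quantum hardware: when an eavesdropper measures in the wrong basis, she disturbs the state, which shows up as an elevated error rate in the sifted key.

```python
import random

def random_bits(n):
    return [random.randint(0, 1) for _ in range(n)]

def measure(bit, prep_basis, meas_basis):
    # Matching bases recover the bit; mismatched bases give a random result.
    return bit if prep_basis == meas_basis else random.randint(0, 1)

def bb84_error_rate(n=10_000, eavesdrop=False):
    alice_bits, alice_bases = random_bits(n), random_bits(n)
    # What travels to Bob: the bits and the bases they are encoded in.
    sent_bits, sent_bases = alice_bits, alice_bases
    if eavesdrop:
        # Eve measures each qubit in a random basis and re-sends her result,
        # so the forwarded qubits are now encoded in *her* bases.
        eve_bases = random_bits(n)
        sent_bits = [measure(b, ab, eb)
                     for b, ab, eb in zip(alice_bits, alice_bases, eve_bases)]
        sent_bases = eve_bases
    bob_bases = random_bits(n)
    bob_bits = [measure(b, sb, bb)
                for b, sb, bb in zip(sent_bits, sent_bases, bob_bases)]
    # Sifting: keep only positions where Alice's and Bob's bases agree.
    kept = [i for i in range(n) if alice_bases[i] == bob_bases[i]]
    return sum(alice_bits[i] != bob_bits[i] for i in kept) / len(kept)

print("error rate without eavesdropper:", bb84_error_rate())               # ~0.00
print("error rate with eavesdropper:   ", bb84_error_rate(eavesdrop=True)) # ~0.25
```

The roughly 25% error rate under eavesdropping is the detection signal: comparing a random subset of the sifted key reveals the intrusion, which is the notification property described in the list above.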
Quantum Encryption Use In Security Products
Quantum encryption can be implemented in many different products and services. As the technology grows further and more vendors integrate it in their offerings it will become more well-known across users and specialists. There is great potential among concepts that power up the world’s Internet technologies and consumer devices.
Quantum key distribution has already been implemented by major manufacturers such as Toshiba. Predictions suggest it will grow rapidly in the years to come. Earlier this year, scientists from Austria and China performed the first quantum-encrypted video call. This was done via the Chinese satellite "Micius", launched specifically for conducting physics experiments. One reason China is investing in this approach is the belief that it can secure communications against spying by foreign state agencies such as the CIA or NSA.
However, even as specialist interest builds, a report from the UK's National Cyber Security Centre states that quantum key distribution, at its current level of sophistication, presents fundamental practical limitations. As a result, it cannot be used as a single solution when it comes to security.
Analysts speak of a future hybrid encryption that will combine the characteristics of quantum encryption with the mathematical approaches of traditional cryptography. This will make it easier for developers to integrate the secure technology in their services and products without resorting to expensive and complex infrastructure, while at the same time deploying optimized protective measures.
And while mainstream adoption is unlikely in the short term, quantum encryption may well appear first in critical services and important components. Hybrid measures will probably continue to be developed as developer interest grows.
New data from the European Space Agency's (ESA) Solar Orbiter suggests that "campfires", the miniature flares seen dotted across the Sun's surface, are driven by a process that may also contribute to heating of the Sun's outer atmosphere, or corona.
The Sun's corona is around 300 times hotter than the layers below, a strange feature that has puzzled scientists and is considered one of the biggest mysteries in solar physics.
Solar flares are brief eruptions of high-energy radiation from the Sun’s surface, which can cause radio and magnetic disturbances on the Earth. Experts have previously wondered whether these eruptions are linked to the mysterious solar corona heating phenomenon.
In June 2020, the ESA released the closest images ever taken of the Sun which, for the first time, showed campfires dotted across its surface.
These images were captured by Solar Orbiter, a probe designed and built in the UK, when it came within 76 million kilometres (47 million miles) of the Sun’s surface. They revealed around 1,500 small, flickering brightenings that last for between 10 and 200 seconds and span between 400km and 4,000km.
The latest findings are based on computer simulations conducted by an international team of researchers collaborating with the ESA.
“Our model calculates the emission, or energy, from the Sun as you would expect a real instrument to measure,” said Professor Hardi Peter, from the Max Planck Institute for Solar System Research in Germany. “The model generated brightenings just like the campfires.”
The simulations also revealed a process known as component reconnection around the campfires, where magnetic field lines of opposite direction break and then reconnect, releasing energy when they do so.
“Our model shows that the energy released from the brightenings through component reconnection could be enough to maintain the temperature of the solar corona predicted from observations,” said Yajie Chen, a PhD student from Peking University in China.
However, the researchers caution that their work is still in its early stages and requires further observations to confirm their findings.
“We’re looking forward to see what further insights our models bring to help us improve our theories on the processes behind the heating,” Peter said.
Aside from helping unlock the mysteries of coronal heating, the Solar Orbiter will also help scientists piece together the Sun’s atmospheric layers and analyse the solar wind, the stream of highly energetic particles emitted by the star.
Understanding more about solar activity could also help scientists make predictions on space weather events, which can damage satellites in orbit and disrupt the infrastructure on Earth that mobile phones, transport, GPS signals and the electricity networks rely on.
The spacecraft, which was constructed by Airbus in Stevenage, has been designed to withstand the scorching heat from the Sun that will hit one side, while maintaining freezing temperatures on the other side of the spacecraft as the orbit keeps it in shadow.
This research was reported by BBC Science.
The Hubble Space Telescope, while soon to be succeeded by the James Webb Space Telescope, continues to capture important images of the universe. In two recent images, perspective plays an important role in how these galaxies appear.
The Truth Behind This Cosmic Duo
Earlier this month, Hubble captured an unusual photo of two galaxies that appear to be colliding with one another. Hubble's focus was on the spiral galaxy NGC 105, which lies roughly 215 million light-years away in the constellation Pisces. Looking at the photo, it appears as though NGC 105 is colliding with a neighboring galaxy, but it is here that perspective plays an important role.
“This is just a circumstance of perspective,” NASA explains. “NGC 105’s elongated neighbor is actually far more distant. Such visual associations are the result of our Earthly perspective and they occur frequently in astronomy.”
NASA says constellations are a great example of how perspective shapes our perception. The stars that form constellations in Earth's night sky are not as close to each other as they seem; they are actually at vastly different distances from Earth and from each other.
“To us they appear to form these patterns because they are aligned along the same sightline, while an observer in another part of the galaxy would see different patterns,” NASA explains.
In this case, the smaller, elongated neighbor of NGC 105 remains relatively unknown to astronomers thus far.
A Galaxy With a “Sail” Made of Stars
Hubble also photographed the galaxy below, named NGC 3318, which lies in the constellation Vela and is much closer than NGC 105, at a mere 115 million light-years from Earth. NASA explains that Vela was originally part of a much larger constellation known as Argo Navis, but the size of the massive constellation proved unwieldy and impractically large.
So, it was split into three separate parts called Carina, Puppis, and Vela. Mythology fans may recognize all these names. The Argo is a fabled ship in Greek mythology and Carina, Puppis, and Vela are each the name of parts of that ship.
The nautical references don't end there, as NASA scientists note that the outer edges of NGC 3318 almost resemble a ship's sails blowing in the wind. Just as was the case with NGC 105, the appearance of these "sails" is very likely due to Hubble's unique perspective.
Questions about the Expansion of the Universe
Astronomers recently analyzed the distances to a sample of galaxies including NGC 105 and their velocities to measure how fast the universe is expanding — this is a value known as the Hubble constant. The results aren’t consistent, and that’s proving troubling.
Their results don’t agree with predictions made by the most widely accepted cosmological model, and their analysis shows that there is only a one-in-a-million chance that this discrepancy is the result of measurement errors. The difference between galaxy measurements and cosmological predictions is a long-standing source of consternation for astronomers, and these recent findings provide credible new evidence that something is either wrong or lacking in our standard model of cosmology.
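For intuition, the core of such a measurement is fitting the linear Hubble law v = H0 × d to a sample of galaxies. The sketch below uses made-up distances and velocities purely for illustration; the actual study relies on calibrated distance ladders and careful error analysis far beyond this toy fit.

```python
# Toy Hubble-constant fit, assuming the simple linear law v = H0 * d.
# The galaxy sample below is invented for illustration only.
import numpy as np

distances = np.array([30.0, 65.0, 120.0, 215.0, 300.0])        # Mpc (mock)
velocities = np.array([2200., 4700., 8700., 15600., 21900.])   # km/s (mock)

# Least-squares slope of a line through the origin:
# H0 = sum(v*d) / sum(d*d)
H0 = np.sum(velocities * distances) / np.sum(distances ** 2)
print(f"estimated H0 = {H0:.1f} km/s/Mpc")  # ~73 with these mock numbers
```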
Image credits: NGC 105 photo by ESA/Hubble and NASA, D. Jones, A. Riess et al.; Acknowledgment: R. Colombari. NGC 3318 by ESA/Hubble and NASA, European Southern Observatory (ESO), R. J. Foley; Acknowledgment: R. Colombari.
a. A point on the rim of a disk rotating at constant angular velocity has no tangential acceleration. b. The point on the rim now has both radial and tangential acceleration. c. In case (a), linear acceleration will not change. In case (b), the tangential acceleration stays constant and the magnitude of the radial component of acceleration will increase as linear speed also increases.
Work Step by Step
a. The tangential speed is constant. The point does have radial, i.e., centripetal, acceleration. The point’s velocity is changing, because the velocity vector is changing direction. b. Now let the disk’s angular velocity increase uniformly. The point on the rim now has both radial and tangential acceleration. It is speeding up (tangential acceleration) and its velocity is continuously changing direction (radial acceleration). c. In the case of constant angular speed, neither component of linear acceleration will change. In the case of angular velocity increasing uniformly, the magnitude of the radial component of acceleration will increase as the linear speed increases. The tangential acceleration stays constant.
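A short numeric illustration of both cases may help. The radius, angular velocity, and angular acceleration below are arbitrary values chosen for demonstration, using the standard relations a_radial = ω²r and a_tangential = αr.

```python
# Numeric illustration of cases (a) and (b) for a point on the rim of a
# disk of radius r. All values are arbitrary, for demonstration only.
r = 0.5          # rim radius, m (assumed)
omega0 = 10.0    # initial angular velocity, rad/s (assumed)
alpha = 2.0      # angular acceleration, rad/s^2 (zero in case (a))

# Case (a): constant angular velocity -> only radial (centripetal) acceleration
print(f"(a) radial = {omega0**2 * r:.1f} m/s^2, tangential = 0.0 m/s^2")

# Case (b): omega increases uniformly -> the tangential term alpha*r stays
# constant, while the radial term omega^2 * r grows as omega grows
for t in (0.0, 1.0, 2.0):
    omega = omega0 + alpha * t
    print(f"(b) t = {t:.0f} s: radial = {omega**2 * r:.1f} m/s^2, "
          f"tangential = {alpha * r:.1f} m/s^2")
```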
Dust explosion strategies
Dust explosions are among the most challenging, complex, dangerous, and least understood hazards facing industry today. On one level it is not hard to understand what a dust explosion is: a loud noise, accompanied by a pressure wave and a lot of heat. But what is required for dust to constitute an explosion hazard?
Dust is a solid material (organic or unoxidized metal) not larger than 500 microns in cross section. Smaller particles make more explosive dust, and less noble metals are more explosive than more noble ones. In the simplest possible terms: oxygen + fuel = oxides + heat.
Dust must be agitated so that each particle is surrounded by an oxidant (usually air) to constitute an explosion hazard. Putting this dust into an enclosed space and concentrating it sufficiently supports the explosion. Lastly, a source of ignition is needed. When these factors meet, an explosion results. About 80% of all industrial dusts are explosive if oxygen and ignition are available.
There are a number of technologies and strategies available to deal with possible dust explosions. There is no one approach that is best for every circumstance. A blend of strategies is often optimal.
Strategies can be subdivided into those which mitigate the damage, but assume an explosion will occur; and those that seek to prevent the explosion from developing by controlling one or more of the contributing factors.
A dust explosion is a pentagon consisting of fuel (combustible dust), suspension (agitated dust), confinement (with an explosible concentration of dust), oxidizer (usually air), and ignition source (of sufficient strength and duration).
There are three avoidance approaches to dust explosions (assuming the pentagon can be avoided):
- Avoid ignition sources
- Avoid explosible dust concentrations
- Inert the atmosphere to reduce oxygen concentration

There are also several ways to mitigate an explosion (assuming the pentagon could occur):
- Locate the vessel outside so that an explosion causes no consequential damage
- Locate the vessel inside, but adjacent to an outside wall, so that a vent can be directed outside through a straight, short duct
- Locate the vessel indoors, but duct through the roof
Sources of ignition must be identified. Open flames are introduced from within a system or from an external source. Hot surfaces come from many sources, such as dryers, steam pipes, and heaters. Also consider the not-so-obvious possibilities such as blowers, fans, conveyors, milling machines, and other rotating machinery that could develop a hot bearing.
Another source of ignition is the introduction of heat between moving and nonmoving parts in process equipment. Mechanical impact is also a potential source of sparks. Static electrical charges must be prevented.
It is not possible to engineer a system that avoids the possibility of an ignition source for every eventuality. Because of the numerous possibilities, it is less than prudent to rely on avoidance of ignition sources as the only means of dust explosion prevention.
Avoid explosible dust concentrations. There are lower and upper limits to dust concentrations that support an explosible dust cloud. These limits are difficult to define because of the distribution of particle sizes in a cloud and the presence of fines.
In practical terms, the lower explosible dust concentration is in the range of 50-100 g/cu m and the upper limit is in the range of 2-3 kg/cu m.
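As a toy illustration of how those bounds might be used in a screening check, the sketch below classifies a measured concentration against the article's rough figures. The band edges and the helper function are invented for this example; real explosible limits depend on the specific dust and must come from testing.

```python
# Screening check against the article's rough explosible-concentration
# figures. Band edges and function are illustrative assumptions; real
# limits are dust-specific and must be established by testing.
LOWER_BAND = (50.0, 100.0)      # g/m^3: approximate lower explosible limit
UPPER_BAND = (2000.0, 3000.0)   # g/m^3: approximate upper explosible limit

def classify(c_g_per_m3):
    c = c_g_per_m3
    if c < LOWER_BAND[0]:
        return "below the explosible range"
    if c > UPPER_BAND[1]:
        return "above the explosible range"
    if c <= LOWER_BAND[1] or c >= UPPER_BAND[0]:
        return "near a limit; treat as explosible (limits are approximate)"
    return "inside the explosible range"

for c in (10.0, 75.0, 500.0, 2500.0, 5000.0):
    print(f"{c:7.1f} g/m^3: {classify(c)}")
```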
Inerting uses a gas such as nitrogen or carbon dioxide that displaces the air and with it the oxidant that supports combustion. This subject is complex with many variables.
Equipment must be sealed from the surrounding environment to contain the inerting agent and protect plant personnel. Loss of sealing has several negative consequences. The least serious is that the usage rate for the inerting gas goes up substantially. Of greater concern is the possibility of a leak allowing a concentration of oxygen sufficient to support an explosion. Lastly, leaks or inattention could cause asphyxiation.
Venting is a simple principle. When the dust explodes inside a confined area, a deliberately weakened wall releases early in the pressure rise caused by the rapidly increasing temperature. Once this weak area is opened, burned and unburned dust and flame are allowed to escape the confinement so that the vessel itself does not experience the full rise in pressure. If the weakened area releases early enough and is large enough, the pressure remains sufficiently low inside the vessel to protect it from damage. The explosion is allowed to fully develop, but it does so outside the vessel.
From such a simple concept, venting has turned into a complex solution. How big does the vent really need to be? At what pressure should it burst? What part does vessel strength play in the venting solution? Where can the discharge of the vent be directed? How far will the blast travel?
National Fire Protection Association Guideline (NFPA 68) includes procedures that provide help in venting solutions.
Containment is very useful, especially where dust explosions are considered inherent or very likely to occur, where other methods are considered intrusive to the process, or where it is too easily rendered ineffective by externalities.
Containment is approached from two perspectives. First, equipment is designed to contain the maximum pressure exerted by the dust so that the elastic limits of the vessel’s materials of construction are not exceeded. Second, assume that the explosion will deform the vessel by exceeding those limits, but fall short of causing a breach. This technique assumes that the vessel would need repair or replacement after an explosion, but it provides a useful approach, especially where the vessel is relatively large and the cost of the first alternative is considered prohibitive.
In equipment where the possibility of explosion is inherent to the process and expected to be fairly frequent, the first design makes sense. In situations where other controls make the possibility of explosion unlikely, the second method is more practical.
Active suppression provides a permanently available, pressurized extinguishing agent and a means to ensure fast discharge of the agent. The systems are active and rely on carefully positioned sensors, which may be pressure, UV, or infrared optical detectors. They must be selected with great care to be sensitive enough to protect, while at the same time not being so sensitive as to trigger false alarms unnecessarily.
There are a number of strategies available that focus on (1) localizing the explosion, (2) isolating various areas of the system to prevent propagation, or (3) total saturation of the entire system.
Quenching pipe combines characteristics of inerting, containment, and venting. The operating principle is that in the event of an explosion in a protected vessel, a venting panel releases, burned and unburned dust and flame enter the chamber of the quenching pipe, and dust is retained on the surface of a mesh material. When the flame encounters the mesh, the temperature is quenched from at least 1500°C to approximately 90°C in a few milliseconds.
The device actually performs three functions. First, as dust enters the quench tube, the concentration increases rapidly in an area of limited oxygen; thus, the explosibility of the mixture is significantly reduced. Second, as the mesh filters the dust, fuel is removed from the deflagration. Third, the temperature is rapidly quenched because the mesh acts as a heat sink.
There are a number of variables that affect the magnitude of a dust explosion. These factors include specific surface area of dust particles, shape of particles (cubes, spheres, fibers, etc.), distribution of particle sizes and shape in the cloud, disbursal of dusts (evenly spaced or clumped), amount of turbulence of the cloud, concentration of the particles, percent of moisture in dust, strength of ignition, length and duration of ignition spark, concentration of oxidant, heat of combustion of the dust, and total volume and shape of the vessel. Also consider the process and, in particular, what is being done with the product.
These factors make it impossible to determine more than empirical approximations to predict the outcome of any given explosion. In fact, it is safe to say that no two dust explosions are exactly alike, which makes the subject of mitigation complex and challenging.
Dust explosions have a tendency to propagate from the area of origination to adjacent locations. Explosions travel up and down stairwells, as well as from building-to-building through corridors and tunnels. Dust explosions move along the tops of I-beams via a layer of dust. To prevent this phenomenon, eliminate the availability of sufficient amounts of dust to support propagation.
Use isolation valves to stop propagation from vessel-to-vessel connected by piping. Dust explosions develop so rapidly that isolation valves must be able to close quickly. It is also important that enough space be allowed between the vessel where an explosion could occur and the placement of the valve.
The author is willing to answer technical questions concerning this article. Mr. Stevenson is available at 561-735-9556. The company web site is www.cvtechnology.com.
Children diagnosed with autism are characterized as having poor social and emotional awareness and responsiveness. These children often have difficulty managing their emotions and may not know why they feel a certain way. However, a recent study of autistic toddlers and their mothers shows that toddlers with autism can display a range of coping strategies to deal with unexpected emotions. Moreover, the study suggests that the response of the mother may play a crucial role in helping autistic children deal with their emotions in a more positive way.
As part of the study, 34 mother/child pairs had their behavior video recorded during a series of 24 early intervention play sessions. During the series a trained interventionist helped coach and guide the mother through 10 different parenting modules, "each targeting specific early joint attention, language skills, and joint engagement with the mother." The goal was to teach the mothers various skills to co-regulate their autistic child's emotional response during negative situations. The last ten minutes of the playtime were recorded while the interventionist simply observed the mother and child at play, without giving advice or training.
Toddlers displayed emotional distress frequently, about 20% of the time, but they also displayed a wide range of strategies designed to cope with that distress. Most of the strategies were actions, like finding ways to relieve tension or distracting themselves with something else. These strategies differed from those of older children, who tend to use more verbal strategies to calm themselves down.
The more the mothers responded to their children's coping strategies, the more their toddlers were likely to mimic those coping strategies themselves. And as many of the toddlers' active coping strategies were directed at the mothers, the children seemed to be expecting emotional support from them. Furthermore, when the mothers comforted their toddlers, the children were less distressed. This is comforting news for parents of autistic children who may worry that their children don't seek them out for comfort. Active responses, such as redirecting toddlers away from frustrations or helping them with difficult tasks, showed the strongest correlation with active and varied toddler coping strategies. Verbal comforting was not as effective. Interestingly, most mothers primarily used active responses naturally.
Emotion regulation is a process of coping with new feelings as they arise. The process involves learning coping strategies to prevent these feelings from becoming disruptive. For example, some of us leave a room and count to ten when we’re angry to avoid snapping at someone. Developing coping strategies to regulate our emotions is linked to many social skills that make complicated social interactions possible. Autistic children often have difficulty with these coping strategies, which is coupled with higher than average rates of distress.
The study does have some limitations. Since it could only measure correlations, we can't say what, if any, kind of causal relationship child coping strategies and mother responses have with each other. Also, almost all the toddlers were boys, who could have different coping strategies than girls. But it's clear that autistic toddlers do employ a wide range of coping strategies to deal with upsetting emotions. Mothers' active responses seem to be crucial to decreasing their children's distress and encouraging the use of these coping strategies. The more autistic toddlers can learn to use healthy coping strategies, the better equipped they may be to handle socially and emotionally challenging situations later in life.
Scientists have long thought that rainfall has a dramatic effect on the evolution of mountainous landscapes, but the reasons for how and why have been elusive. This seemingly logical concept has never been quantitatively demonstrated until now, thanks to a new technique that captures precisely how even the mightiest of mountain ranges — the Himalaya — bends to the will of raindrops.
The research, recently published in Science Advances with Arizona State University co-authors Kelin Whipple, Arjun Heimsath and Kip Hodges of the School of Earth and Space Exploration, not only improves our understanding of how mountain ranges evolve over millions of years, but also paves the way for understanding natural hazards associated with climate-driven erosion and, in turn, human life.
“These findings are the latest outcome from a collaborative study that began several years ago at ASU of the distinct tectonic, topographic, and erosional evolution of the Bhutan Himalaya,” Whipple said. “Our major motivation was to achieve an improved understanding of how current and past rainfall patterns sculpt topography and potentially influence the pattern and rate of tectonic uplift.”
Although there is no shortage of scientific models aiming to explain how the Earth works, the greater challenge can be making enough good observations to test which are most accurate.
The study was based in the central and eastern Himalaya of Bhutan and Nepal, because this region of the world has become one of the most sampled landscapes for erosion-rate studies.
Lead author Byron Adams of the University of Bristol, with ASU collaborators Whipple, Heimsath and Hodges, and with Adam Forte of Louisiana State University, used cosmic clocks within sand grains to measure the speed at which rivers erode the rocks beneath them. Adams and Forte were both formerly at ASU.
“When a cosmic particle from outer space reaches Earth, it is likely to hit sand grains on hillslopes as they are transported toward rivers. When this happens, some atoms within each grain of sand can transform into a rare element. By counting how many atoms of this element are present in a bag of sand, we can calculate how long the sand has been there, and therefore how quickly the landscape has been eroding,” Adams said.
“Once we have erosion rates from all over the mountain range, we can compare them with variations in river steepness and rainfall. However, such a comparison is hugely problematic because each data point is very difficult to produce and the statistical interpretation of all the data together is complicated.”
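The underlying arithmetic is compact enough to sketch. Under the common steady-state approximation (erosion fast enough that radioactive decay is negligible), the erosion rate E follows from the measured nuclide concentration N as E ≈ PΛ/(Nρ), where P is the surface production rate, Λ the attenuation length, and ρ the rock density. The values below are illustrative placeholders, not numbers from the study.

```python
# Back-of-the-envelope cosmogenic-nuclide erosion-rate calculation, using
# the steady-state approximation E ~ P * Lambda / (N * rho). All numbers
# here are illustrative placeholders, not values from the study.
P_surface = 4.0       # surface production rate, atoms/g/yr (assumed)
attenuation = 160.0   # attenuation length Lambda, g/cm^2 (typical value)
rho = 2.7             # rock density, g/cm^3
N_measured = 1.0e5    # measured nuclide concentration, atoms/g (assumed)

erosion_cm_per_yr = P_surface * attenuation / (N_measured * rho)
erosion_m_per_Myr = erosion_cm_per_yr * 1e4   # cm/yr -> m per million years
print(f"erosion rate: approx {erosion_m_per_Myr:.0f} m per million years")
```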
Adams and his team overcame this challenge by combining regression techniques with numerical models of how rivers erode.
“We tested a wide variety of numerical models to reproduce the observed erosion-rate pattern across Bhutan and Nepal. Ultimately only one model was able to accurately predict the measured erosion rates,” Adams said. “This model allows us for the first time to quantify how rainfall influences river incision and hillslope erosion rates in mountainous landscapes.”
“Our findings show how critical it is to account for rainfall when assessing patterns of tectonic activity using topography, and also provide an essential step forward in addressing how much the slip rate on tectonic faults may be controlled by climate-driven erosion at the surface,” Whipple said.
The findings of this study also carry important implications for land-use management, infrastructure maintenance and hazards in the Himalaya.
Rainfall-driven variations in erosion rates can lead to important differences in landscape instabilities and hazards. In the Himalaya, there is the ever-present risk that high erosion rates can drastically increase sedimentation rates behind dams, jeopardizing critical hydropower projects.
Furthermore, enhanced river incision can undermine hillslopes, elevating the risk of debris flows or landslides, some of which may be large enough to dam the river, creating a new hazard — lake outburst floods.
“Our data and analysis provide an effective tool for estimating patterns of erosion in mountainous landscapes such as the Himalaya, and thus, can provide invaluable insight into the hazards that influence the hundreds of millions of people who live within and at the foot of these mountains,” Adams said.
Building on these findings, Whipple is leading the team in an effort to extend this analysis in an application to the full length of the Himalaya.
“This study will test our model against additional data sets from the central and western Himalaya and apply the results to estimate patterns of erosion rate across the entire range,” Whipple said. “Erosional patterns will allow us to differentiate among competing models of the mountain-building process and help refine estimates of seismic and erosional hazards.”
This article was written by Byron Adams and Victoria Tagg of Bristol University (UK) with contributions by Karin Valentine of ASU’s School of Earth and Space Exploration.
In a study to be published in the January 2006 issue of Nature Biotechnology, researchers led by a team of scientists at Memorial Sloan-Kettering Cancer Center have devised a novel strategy that uses stem cell-based gene therapy and RNA interference to genetically reverse sickle cell disease (SCD) in human cells. This research is the first to demonstrate a way to genetically correct this debilitating blood disease using RNA interference technology.
To prevent the production of the abnormal hemoglobin that causes sickle cell disease, a viral vector was introduced into cell cultures from patients who have the disease. The vector carried a therapeutic globin gene harboring an embedded small interfering RNA precursor designed to suppress abnormal hemoglobin formation. Tested in adult stem cells from SCD patients, the researchers found that the newly formed red blood cells made normal hemoglobin and suppressed production of the sickle hemoglobin typical of the disease.
"Sickle cell disease can only be cured by transplanting healthy blood-forming stem cells from another individual, but this option is not available to most patients due to the difficulty in finding a compatible donor," explained Michel Sadelain, MD, PhD, of the Immunology Program at MSKCC and the study's senior author. "By using gene transfer, there is always a donor match because the patient's own stem cells are used to treat the disease."
Sickle cell disease is a genetic condition that causes an abnormal type of hemoglobin to be made in red blood cells. The aggregation of hemoglobin S inside red cells interferes with their ability to flow through small blood vessels, depriving tissues of adequate oxygen supply. This can cause pain, anemia, infections, organ damage, and stroke. Approximately 80,000 people in the United States have this inherited condition, which is primarily found in people of African, Mediterranean, Indian, or Middle Eastern origin. There is no known cure other than stem cell transplantation.
To treat SCD, Sloan-Kettering scientists devised a novel engineering strategy combining RNA interference with globin gene transfer by creating a therapeutic transgene, consisting of the gamma-globin gene and small interfering RNA specific for beta S-globin, the globin mutant chain that causes sickle cell disease.
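The targeting idea can be illustrated apart from the vector itself. The sickle mutation is the codon-6 GAG→GUG (Glu→Val) change in beta-globin, so an siRNA guide centered on the mutant base can silence the beta-S transcript while sparing the therapeutic gamma-globin message. The helper below is a hypothetical sketch of only that selection step, with a placeholder sequence; it is not the authors' published design.

```python
# Hypothetical sketch: choose a 21-nt siRNA target window centered on the
# sickle mutation (codon 6 GAG -> GUG). The fragment below is a placeholder
# stand-in, not the actual beta-globin mRNA used in the study.

def complement(base):
    return {"A": "U", "U": "A", "G": "C", "C": "G"}[base]

def sirna_strands(mrna, mutation_index, window=21):
    """Return (sense, antisense) strands centered on the mutant base.

    Centering the guide on the mutation helps the siRNA discriminate the
    beta-S transcript from non-target globin messages."""
    half = window // 2
    sense = mrna[mutation_index - half : mutation_index + half + 1]
    antisense = "".join(complement(b) for b in reversed(sense))
    return sense, antisense

# Placeholder mutant fragment; the 'U' at index 16 marks the A->U change
# that turns codon 6 GAG (Glu) into GUG (Val).
fragment = "GUGCAUCUGACUCCUGUGGAGAAGUCUGC"
sense, guide = sirna_strands(fragment, mutation_index=16)
print(sense)
print(guide)
```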
"An innovative and sophisticated approach was needed to genetically engineer hematopoietic stem cells using a practical and clinically applicable process. The transferred gene must not disrupt the cells' normal functions," explained Isabelle Riviere, PhD, Co-Director of the Gene Transfer and Somatic Cell Engineering Facility and a study co-author.
The new gene had two functions: to produce normal hemoglobin and to suppress the generation of the sickle hemoglobin, HbS. The therapeutic gene was engineered into a lentiviral vector and introduced into hematopoietic stem cells. After the cells received the treatment, they made normal hemoglobin.
"This proved our hypothesis that you can simultaneously add one function and delete another in the same cell and obtain synergistic genetic modifications within a single cell," said Selda Samakoglu, PhD, a member of Dr. Sadelain's laboratory and the study's first author. "In this case, we used the technique to correct sickle cell disease, but it should be broadly applicable to use therapeutically in stem cells or malignant cells."
The study's co-authors are Leszek Lisowski, Tulin Budak-Alpodogan, Ping Zhu, and Yelena Usachenko of Memorial Sloan-Kettering; Santina Acuto, Rosalba Di Marzo, and Aurelio Maggio of Ospedale V. Cervello; and John F. Tisdale of the National Institutes of Health. It was supported, in part, by grants from the National Institutes of Health, the Leonardo Giambrone Foundation, and the Associazione per la Ricerca Piera Cutino.
Memorial Sloan-Kettering Cancer Center is the world's oldest and largest institution devoted to prevention, patient care, research, and education in cancer. Our scientists and clinicians generate innovative approaches to better understand, diagnose, and treat cancer. Our specialists are leaders in biomedical research and in translating the latest research to advance the standard of cancer care worldwide.
Completing written work and homework on time is one of the biggest challenges students with attention deficit disorder (ADD/ADHD) face. In fact, over 50 percent of these students have difficulty with written expression because of limited working memory, low processing speed, fine motor difficulties, or another problem.
But teachers who are willing to be creative with written assignments and note-taking—without diluting the material—can help students excel. Here are several strategies to consider:
1. Assign fewer problems or questions.
Math homework can be a real challenge. Teachers might modify assignments so that a student is required to do only every second or third problem. In the classroom or for homework, some students may benefit from photocopying math, science, or history pages from their textbooks and filling in the blanks instead of writing out whole problems or sentences.
2. Streamline note-taking.
If an ADHD student is distracted by the note-taking process, he’ll have trouble focusing on what is being said in class. One solution is to ask a student who excels in the subject to take notes for the whole class, having him draw stars near the important themes of the lesson. Then make copies for any students who need notes. Another strategy is to provide direction to students who are frantically trying to take down every word you say. Make a point of frequently saying, “Now this is really important—write it down!”
3. Allow dictation.
Researchers have found that the quality and length of reports and essays improved when students spoke them into a tape recorder. Instead of having a student write a paper, allow him to dictate his material to a parent or friend, who can type it up.
4. Get creative with reports.
Develop an assignment “menu” that offers creative, active assignments, not just written ones. One language arts teacher allowed her students to film themselves acting out two or three favorite scenes or baking a cake that was described in the written material. Other creative activities include building a model or calling up an official—at NASA, say—for an opinion on the topic.
5. If students are in a crisis, cut them some slack.
If the child seems to understand the basic concept of the lesson, accept unfinished class work on occasion. Piling work onto an already heavy load of homework assignments can cause a student to underperform, or worse. ADHD students may be so overwhelmed by overdue schoolwork that they give up because they feel they'll never catch up.
This article comes from the October/November issue of ADDitude.
Species Extinction and Biodiversity
Species are currently going extinct at a faster rate than at any time in the past, with the exception of cataclysmic encounters with extraterrestrial objects. A good proxy for the rate of extinction is the rate of growth in energy used by the human population. In other words, extinction rates are increasing in step with the product of population growth times the growth in affluence.
- The graph (not reproduced here) is based on a mathematical model linking species loss to habitat loss, developed by Edward O. Wilson and others.
- Assumptions: total number of species = 10 million; the background extinction rate from fossil records is one extinction per million species per year, so with 10 million species the background rate is 10 species per year.
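The Wilson model behind the graph is the species-area relationship, S = cA^z: if habitat shrinks so that a fraction A1/A0 of the original area remains, the fraction of species eventually lost is 1 − (A1/A0)^z. A minimal sketch, assuming a commonly used exponent z ≈ 0.25 (the exponent and the example figure are illustrative):

```python
# Species-area relationship S = c * A**z. The fraction of species expected
# to be lost when habitat shrinks from A0 to A1 is 1 - (A1/A0)**z.
# z ~ 0.25 is a commonly used exponent, assumed here for illustration.

def fraction_lost(area_remaining_fraction, z=0.25):
    return 1.0 - area_remaining_fraction ** z

# E.g., the ~68% average habitat loss cited below for African nations:
print(f"{fraction_lost(1 - 0.68):.0%} of species eventually lost")  # ~25%
```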
- Extinction rates:
- 1. Edward O. Wilson estimates 27,000 species are currently lost per year; by 2022, 22% of all species will be extinct if no action is taken.
- 2. Niles Eldredge estimates 30,000 per year currently.
- 3. Georgina M. Mace, using a different methodology based on extrapolations from the current lists of endangered species, arrives at a figure of 14-22% loss of species and subspecies over the next 100 years.
- 4. Paul Ehrlich, using another approach based on total energy use, estimates extinction rates at 7,000 to 13,000 times the background rate (70,000 to 130,000 species per year), which he says is higher than figures based on data for higher orders of animals indicate; but we have little data on insects and microflora and microfauna.
Species extinctions are very difficult to quantify. In the past, human-caused extinctions were primarily due to hunting. As humans crossed over from Asia into the North American continent, a series of extinctions occurred, caused in part by predation of slow-moving species like the mammoth and other mega-herbivores and perhaps, according to a newer hypothesis, by the introduction of new diseases by humans or their domesticated animals. The loss of these species caused other dependent species, like the giant vultures and the long-nosed bear, to go extinct as well. A similar wave of extinctions happened as the Polynesians colonized the Pacific islands; this time it was the flightless birds that were defenseless against the new predators. The Moa was the largest, but on many islands there is evidence that up to 50% of all bird species were hunted to extinction, including such remarkable species as the Hawaiian Eagle.
- The cascade of current extinctions, however, is related mostly to destruction of habitat and displacement by introduced species. Chemical pollutants, over-harvesting, and hybridization have played smaller but still significant roles. While the actual extinction rate is difficult to pin down, there is no doubt that the planet is in the midst of a mass extinction of major proportions. The most conservative estimates place the extinction rate at 1,000 times the background rate. These numbers are more easily accepted when placed in the context of habitat destruction.
- Habitat loss:
- African nations: 68% average, with Gambia suffering the most at 91% loss.
- Asian nations: 69% average, with a range of 34% to 96% (excluding …).
- Mexico: 66%.
- US: 26% (data: WRI, 1990).
- Habitat destruction can also be estimated in terms of the product of population times affluence times a factor for technology, all of which can be summed up by total energy use. Pre-agricultural-revolution energy use is estimated at 0.001 to 0.002 terawatts for a population of 5-10 million. World consumption in 1990 was 13 terawatts, or roughly 7,000 to 13,000 times higher (sketched below).
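A quick check of that arithmetic, using only the figures quoted above:

```python
# IPAT-style proxy: impact ~ population x affluence x technology, summed up
# here as total energy use. Figures are the ones quoted in the text.
pre_agricultural_tw = (0.001, 0.002)
world_1990_tw = 13.0
for pre in pre_agricultural_tw:
    print(f"1990 use is {world_1990_tw / pre:,.0f}x pre-agricultural levels")
# -> 13,000x and 6,500x, i.e. roughly the text's 7,000-13,000x range
```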
- In the future, the rapid increase in temperature will become an increasing threat to species. A tree species, for instance, can only migrate at a rate much slower than the rate at which its climatic zone will shift toward the poles.
- "Biodiversity is our most valuable but least appreciated resource. We need to reclassify environmental problems in a way that more accurately reflects reality. There are two major environmental problems, and only two. One is the alteration of the physical environment to a state uncongnial to life, the now familiar syndrome of toxic pollution, loss of the ozone layer, climactic warming... all accelerated by the continued growth of human populations.
- The second category is the loss of biological diversity. ...Merely the attempt to solve the biodiversity crisis offers great benefits never before enjoyed, for to save species is to study them closely, and to learn them is to exploit their characteristics in novel ways.*
The benefits of species diversity can be approached from two angles: one is the cost-benefit approach of our monetary-based value system, and the other is the Safe Minimum Standard, which holds each species as irreplaceable and worthy of preservation for its own sake. The cost-benefit analysis would be quite adequate if all the costs and benefits were included, but many benefits are not known and the potential benefits cannot be known. For example, in 1970 grassy stunt virus devastated rice crops in India and Indonesia. Severe famine was averted by the development of a resistant strain. After testing 6,273 varieties, the resistance gene was found in only one variety, discovered in 1966.
- The full valuation of species diversity must include, but is not limited to:
- 1. The value of the products obtained from the species. In addition to potential medicinal value, many species have great potential as agricultural crops and livestock, offering greater efficiency and less destruction of the environment. 7,000 species are grown and collected for food, but 50% of our food comes from three plants: wheat, maize, and rice. The same is true of commercially raised animals, especially cattle, which are ill-suited and destructive to many of the environments they are raised in. For example, a study shows that the green iguana, a delicacy of the rainforest, could produce up to 10 times the yield per acre of cattle, and the forest would be left intact to provide …
- 2. The present and future value of the genetic material held in the species. The potential for utilizing genetic material in beneficial ways is expanding so rapidly that to put a value on the material from any species would be impossible. There are many examples, like the rice disease mentioned above, where a single gene, only recently discovered, has had incalculable benefits.
- 3. Stability of the ecosystem. "In a world created by natural selection, homogeneity means vulnerability."* Diversity gives the natural system the ability to resist and adapt to disease, severe weather, and climatic change. Disease striking a society dependent on monoculture has been a major factor in many famines, the Irish Potato Famine being the most infamous; such famines have since been averted mostly because of the ability of plant geneticists to invent new resistant varieties as fast as the diseases arise.
- 4. Maximizing the efficiency of the ecosystem. Many studies have shown that a diversity of species is better able to utilize the inputs of water, sun, and nutrients than a single or small number of species, which leads to greater biomass and less soil erosion and nutrient loss. This is especially true of rainforests, but it also applies to temperate forests and grasslands.
Recovery time: Berkeley environmental scientist James W. Kirchner and paleontologist Anne Weil released a report on 3/9/2000 finding that the recovery time from a mass extinction is around 10 million years. This is significantly longer than previously thought, and it seems to hold true for any level of extinction, from major extinctions of 75% of species to minor ones of 25%. http://www.urel.berkeley.edu/urel_1/CampusNews/PressReleases/releases/03-09-2000.html
- Resources (listed in order of usefulness and readability):
- 1) Wilson, Edward O., The Diversity of Life, The Belknap Press of Harvard University Press, Cambridge, Mass., 1992. A classic.
- 2) Baskin, Yvonne, The Work of Nature, Island Press, 1998. A very fascinating account of the interrelationships that make an ecosystem work.
- 3) Eldredge, Niles, Life in the Balance: Humanity and the Biodiversity Crisis, Princeton University Press, Princeton, 1998.
- 4) Lawton, John H. & May, Robert M., editors, Extinction Rates, Oxford University Press, 1995. An academic treatment of the techniques of population estimating.
- There are far too many works to list here, but any of these will list 100 more references.
Paleo-Indians migrated from Eurasia to what is now the U.S. mainland at least 15,000 years ago, with European colonization beginning in the 16th century. The United States emerged from 13 British colonies along the East Coast. Disputes between Great Britain and the colonies led to the American Revolution. On July 4, 1776, as the colonies were fighting Great Britain in the American Revolutionary War, delegates from the 13 colonies unanimously adopted the Declaration of Independence. The war ended in 1783 with recognition of the independence of the United States by the Kingdom of Great Britain, and was the first successful war of independence against a European colonial empire.Greene, Jack P.; Pole, J.R., eds. (2008). A Companion to the American Revolution. pp. 352–361.
The country's constitution was adopted on September 17, 1787, and ratified by the states in 1788. The first ten amendments, collectively named the Bill of Rights, were ratified in 1791 and designed to guarantee many fundamental civil liberties.
Driven by the doctrine of Manifest Destiny, the United States embarked on a vigorous expansion across North America throughout the 19th century. This involved displacing American Indian tribes, acquiring new territories, and gradually admitting new states, until by 1848 the nation spanned the continent. During the second half of the 19th century, the American Civil War ended legal slavery in the country. By the end of that century, the United States extended into the Pacific Ocean, and its economy, driven in large part by the Industrial Revolution, began to soar. The Spanish–American War and World War I confirmed the country's status as a global military power. The United States emerged from World War II as a global superpower, the first country to develop nuclear weapons, the only country to use them in war, and a permanent member of the United Nations Security Council. The end of the Cold War and the dissolution of the Soviet Union in 1991 left the United States as the world's sole superpower.
The United States is a developed country and has the world's largest national economy by nominal and real GDP, benefiting from an abundance of natural resources and high worker productivity. While the U.S. economy is considered post-industrial, the country continues to be one of the world's largest manufacturers. Accounting for 34% of global military spending and 23% of world GDP, it is the world's foremost military and economic power, a prominent political and cultural force, and a leader in scientific research and technological innovation.
The first known publication of the phrase "United States of America" was in an anonymous essay in The Virginia Gazette newspaper in Williamsburg, Virginia, on April 6, 1776. In June 1776, Thomas Jefferson wrote the phrase "UNITED STATES OF AMERICA" in all capitalized letters in the headline of his "original Rough draught" of the Declaration of Independence.DeLear, Byron (August 16, 2012). "Who coined the name 'United States of America'? Mystery gets new twist." Christian Science Monitor (Boston, MA). In the final Fourth of July version of the Declaration, the title was changed to read, "The unanimous Declaration of the thirteen united States of America". In 1777 the Articles of Confederation announced, "The Stile of this Confederacy shall be 'The United States of America.'" The preamble of the Constitution states "...establish this Constitution for the United States of America."
The short form "United States" is also standard. Other common forms are the "U.S.", the "USA", and "America". Colloquial names are the "U.S. of A." and, internationally, the "States". "Columbia", a name popular in poetry and songs of the late 1700s, derives its origin from Christopher Columbus; it appears in the name "District of Columbia". ξ16 In non-English languages, the name is frequently the translation of either the "United States" or "United States of America", and colloquially as "America". In addition, an abbreviation (e.g. USA) is sometimes used.For example, the U.S. embassy in Spain calls itself the embassy of the "Estados Unidos", literally the words "states" and "united", and also uses the initials "EE.UU.", the doubled letters implying plural use in Spanish Elsewhere on the site "Estados Unidos de América" is used
The phrase "United States" was originally plural, a description of a collection of independent states—e.g., "the United States are"—including in the Thirteenth Amendment to the United States Constitution, ratified in 1865. The singular form—e.g., "the United States is"— became popular after the end of the American Civil War. The singular form is now standard; the plural form is retained in the idiom "these United States". The difference is more significant than usage; it is a difference between a collection of states and a unit.G. H. Emerson, The Universalist Quarterly and General Review, Vol. 28 (Jan. 1891), p. 49, quoted in Zimmer paper above.
A citizen of the United States is an "American". "United States", "American" and "U.S." refer to the country adjectivally ("American values", "U.S. forces"). "American" rarely refers to subjects not connected with the United States.Wilson, Kenneth G. (1993). The Columbia Guide to Standard American English. New York: Columbia University Press, pp. 27–28. ISBN 0-231-06989-8.
In the early days of colonization many European settlers were subject to food shortages, disease, and attacks from Native Americans. Native Americans were also often at war with neighboring tribes and allied with Europeans in their colonial wars.Juergens, 2011, p. 69 At the same time, however, many natives and settlers came to depend on each other. Settlers traded for food and animal pelts, natives for guns, ammunition and other European wares.Ripper, 2008 p. 6 Natives taught many settlers where, when and how to cultivate corn, beans and squash. European missionaries and others felt it was important to "civilize" the Indians and urged them to adopt European agricultural techniques and lifestyles.Ripper, 2008 p. 5Calloway, 1998, p. 55
Most settlers in every colony were small farmers, but other industries developed within a few decades, as varied as the settlements. Cash crops included tobacco, rice and wheat. Extraction industries grew up in furs, fishing and lumber. Manufacturers produced rum and ships, and by the late colonial period Americans were producing one-seventh of the world's iron supply.Walton, 2009, chapter 3 Cities eventually dotted the coast to support local economies and serve as trade hubs. English colonists were supplemented by waves of Scotch-Irish and other groups. As coastal land grew more expensive, freed indentured servants pushed further west.Lemon, 1987
Slave cultivation of cash crops began with the Spanish in the 1500s, and was adopted by the English, but life expectancy was much higher in North America because of less disease and better food and treatment, leading to a rapid increase in the numbers of slaves.Clingan, 2000, p. 13Tadman, 2000, p. 1534Schneider, 2007, p. 484 Colonial society was largely divided over the religious and moral implications of slavery and colonies passed acts for and against the practice.Lien, 1913, p. 522Davis, 1996, p. 7 But by the turn of the 18th century, African slaves were replacing indentured servants for cash crop labor, especially in southern regions.Quirk, 2011, p. 195
With the British colonization of Georgia in 1732, the 13 colonies that would become the United States of America were established. All had local governments with elections open to most free men, with a growing devotion to the ancient rights of Englishmen and a sense of self-government stimulating support for republicanism. With extremely high birth rates, low death rates, and steady settlement, the colonial population grew rapidly. Relatively small Native American populations were eclipsed.Walton, 2009, pp. 38–39 The movement of the 1730s and 1740s known as the Great Awakening fueled interest in both religion and religious liberty.Foner, Eric, The Story of American Freedom, 1998, ISBN 0-393-04665-6, pp. 4–5.
In the French and Indian War, British forces seized Canada from the French, but the francophone population remained politically isolated from the southern colonies. Excluding the Native Americans, who were being conquered and displaced, those 13 colonies had a population of over 2.1 million in 1770, about one-third that of Britain. Despite continuing new arrivals, the rate of natural increase was such that by the 1770s only a small minority of Americans had been born overseas.Walton, 2009, p. 35 The colonies' distance from Britain had allowed the development of self-government, but their success motivated monarchs to periodically seek to reassert royal authority.
Following the passage of the Lee Resolution on July 2, 1776, which was the actual vote for independence, the Congress adopted the Declaration of Independence on July 4, which proclaimed, in a long preamble, that all people are created equal in their unalienable rights, asserted that those rights were not being protected by Great Britain, and finally declared, in the words of the resolution, that the Thirteen Colonies were independent states and owed no allegiance to the British crown. The fourth day of July is celebrated annually as Independence Day. In 1777, the Articles of Confederation established a weak government that operated until 1789.
Britain recognized the independence of the United States following their defeat at Yorktown.Greene and Pole, A Companion to the American Revolution p 357. Jonathan R. Dull, A Diplomatic History of the American Revolution (1987) p. 161. Lawrence S. Kaplan, "The Treaty of Paris, 1783: A Historiographical Challenge", International History Review, Sept 1983, Vol. 5 Issue 3, pp 431–442 In the peace treaty of 1783, American sovereignty was recognized from the Atlantic coast west to the Mississippi River. Nationalists led the Philadelphia Convention of 1787 in writing the United States Constitution, ratified in state conventions in 1788. The federal government was reorganized into three branches, on the principle of creating salutary checks and balances, in 1789. George Washington, who had led the revolutionary army to victory, was the first president elected under the new constitution. The Bill of Rights, forbidding federal restriction of personal freedoms and guaranteeing a range of legal protections, was adopted in 1791.Boyer, 2007, pp. 192–193
Although the federal government criminalized the international slave trade in 1808, after 1820 cultivation of the highly profitable cotton crop exploded in the Deep South, and along with it the slave population.Walton, 2009, p. 43; Gordon, 2004, pp. 27, 29 The Second Great Awakening, beginning about 1800, converted millions to evangelical Protestantism. In the North it energized multiple social reform movements, including abolitionism; in the South, Methodists and Baptists proselytized among slave populations.Heinemann, Ronald L., et al., Old Dominion, New Commonwealth: a history of Virginia 1607–2007, 2007, ISBN 978-0-8139-2609-4, p. 197
Americans' eagerness to expand westward prompted a long series of American Indian Wars. The Louisiana Purchase of French-claimed territory in 1803 almost doubled the nation's size. The War of 1812, declared against Britain over various grievances and fought to a draw, strengthened U.S. nationalism. A series of U.S. military incursions into Florida led Spain to cede it and other Gulf Coast territory in 1819. Expansion was aided by steam power, when steamboats began traveling along America's large water systems, which were connected by new canals, such as the Erie and the I&M; then, even faster railroads began their stretch across the nation's land.Winchester, pp. 198, 216, 251, 253
From 1820 to 1850, Jacksonian democracy began a set of reforms which included wider male suffrage; it led to the rise of the Second Party System of Democrats and Whigs as the dominant parties from 1828 to 1854. The Trail of Tears in the 1830s exemplified the Indian removal policy that moved Indians into the west to their own reservations. The U.S. annexed the Republic of Texas in 1845 during a period of expansionist Manifest Destiny. The 1846 Oregon Treaty with Britain led to U.S. control of the present-day American Northwest. Victory in the Mexican–American War resulted in the 1848 Mexican Cession of California and much of the present-day American Southwest.
The California Gold Rush of 1848–49 spurred western migration and the creation of additional western states. After the American Civil War, new transcontinental railways made relocation easier for settlers, expanded internal trade and increased conflicts with Native Americans. Over a half-century, the loss of the American bison (often called buffalo) was an existential blow to many Plains Indians cultures. In 1869, a new Peace Policy sought to protect Native Americans from abuses, avoid further war, and secure their eventual U.S. citizenship, although conflicts, including several of the largest Indian Wars, continued throughout the West into the 1900s.Smith (2001), Grant, pp. 523–526
With the 1860 election of Abraham Lincoln, the first president from the largely anti-slavery Republican Party, conventions in thirteen states ultimately declared secession and formed the Confederate States of America, while the U.S. federal government maintained that secession was illegal. The ensuing war was at first fought for Union; then, after 1863, as casualties mounted and Lincoln delivered his Emancipation Proclamation, a second war aim became the abolition of slavery. The war remains the deadliest military conflict in American history, resulting in the deaths of approximately 618,000 soldiers as well as many civilians.
Following the Union victory in 1865, three amendments to the U.S. Constitution brought about the prohibition of slavery, gave U.S. citizenship to the nearly four million African Americans who had been slaves,Page 7 lists a total slave population of 3,953,760. and promised them voting rights. The war and its resolution led to a substantial increase in federal powerDe Rosa, Marshall L. (1997). The Politics of Dissolution: The Quest for a National Identity and the American Civil War. Edison, NJ: Transaction. p. 266. ISBN 1-56000-349-9. aimed at reintegrating and rebuilding the Southern states while ensuring the rights of the newly freed slaves. Following the Reconstruction Era, Jim Crow laws throughout the South soon effectively disenfranchised most blacks and some poor whites. Over the subsequent decades, in both the North and the South, blacks and some whites faced systemic discrimination, including racial segregation and occasional vigilante violence, sparking national movements against these abuses.
The end of the Indian Wars further expanded acreage under mechanical cultivation, increasing surpluses for international markets. Mainland expansion was completed by the purchase of Alaska from Russia in 1867. In 1893, pro-American elements in Hawaii overthrew the monarchy and formed the Republic of Hawaii, which the U.S. annexed in 1898. Puerto Rico, Guam, and the Philippines were ceded by Spain in the same year, following the Spanish–American War.
Rapid economic development at the end of the 19th century produced many prominent industrialists, and the U.S. economy became the world's largest. Dramatic changes were accompanied by social unrest and the rise of populist, socialist, and anarchist movements.Zinn, 2005 This period eventually ended with the advent of the Progressive Era, which saw significant reforms in many societal areas, including women's suffrage, alcohol prohibition, regulation of consumer goods, greater antitrust measures to ensure competition and attention to worker conditions.
In 1920, the women's rights movement won passage of a constitutional amendment granting women's suffrage. The 1920s and 1930s saw the rise of radio for mass communication and the invention of early television.Winchester, pp. 410–411 The prosperity of the Roaring Twenties ended with the Wall Street Crash of 1929 and the onset of the Great Depression. After his election as president in 1932, Franklin D. Roosevelt responded with the New Deal, which included the establishment of the Social Security system. The Great Migration of millions of African Americans out of the American South began around World War I and extended through the 1960s, whereas the Dust Bowl of the mid-1930s impoverished many farming communities and spurred a new wave of western migration.
The United States was at first effectively neutral during World War II's early stages but began supplying materiel to the Allies in March 1941 through the Lend-Lease program. On December 7, 1941, the Empire of Japan launched a surprise attack on Pearl Harbor, prompting the United States to join the Allies against the Axis powers. During the war, the United States was referred to as one of the "Four Policemen" of the Allies, meeting to plan the post-war world alongside Britain, the Soviet Union and China. Though the nation lost more than 400,000 soldiers, it emerged relatively undamaged from the war with even greater economic and military influence.Kennedy, Paul (1989). The Rise and Fall of the Great Powers. New York: Vintage. p. 358. ISBN 0-679-72019-7. Indeed, World War II ushered in the zenith of U.S. power in what came to be called the American Century, as one account of the period indicates: "Truman presided over the greatest military and economic power the world had ever known. War production had lifted the United States out of the Great Depression and had inaugurated an era of unimagined prosperity. Gross national product increased by 60 percent during the war, total earnings by 50 percent. Despite social unrest, labor agitation, racial conflict, and teenage vandalism, Americans had more discretionary income than ever before. Simultaneously, the U.S. government had built up the greatest war machine in human history. By the end of 1942, the United States was producing more arms than all the Axis states combined, and, in 1943, it made almost three times more armaments than did the Soviet Union. In 1945, the United States had two-thirds of the world's gold reserves, three-fourths of its invested capital, half of its shipping vessels, and half of its manufacturing capacity. Its GNP was three times that of the Soviet Union and more than five times that of Britain. It was also nearing completion of the atomic bomb, a technological and production feat of huge costs and proportions."
Allied conferences at Bretton Woods and Yalta outlined a new system of international organizations that placed the United States and Soviet Union at the center of world affairs. As victory was won in Europe, a 1945 international conference held in San Francisco produced the United Nations Charter, which became active after the war. The United States developed the first nuclear weapons and used them on Japan; the Japanese surrendered on September 2, 1945, ending World War II.Pacific War Research Society (2006). Japan's Longest Day. New York: Oxford University Press. ISBN 4-7700-2887-3.
The U.S. often opposed Third World movements that it viewed as Soviet-sponsored. American troops fought communist Chinese and North Korean forces in the Korean War of 1950–53. The Soviet Union's 1957 launch of the first artificial satellite and its 1961 launch of the first manned spaceflight initiated a "Space Race" in which the United States became the first nation to land a man on the Moon in 1969. A proxy war in Southeast Asia eventually evolved into full American participation in the Vietnam War.
At home, the U.S. experienced sustained economic expansion and a rapid growth of its population and middle class. Construction of an Interstate Highway System transformed the nation's infrastructure over the following decades. Millions moved from farms and inner cities to large housing developments.Winchester, pp. 305–308 In 1959 Hawaii became the 50th and last state added to the US. A growing civil rights movement used nonviolence to confront segregation and discrimination, with Martin Luther King, Jr. becoming a prominent leader and figurehead. A combination of court decisions and legislation, culminating in the Civil Rights Act of 1964, sought to end racial discrimination. Meanwhile, a counterculture movement grew, fueled by opposition to the Vietnam War, black nationalism, and the sexual revolution. The launch of a "War on Poverty" expanded entitlement and welfare spending.
The 1970s and early 1980s saw the onset of stagflation. After his election in 1980, President Ronald Reagan responded to economic stagnation with free-market-oriented reforms. Following the collapse of détente, he abandoned "containment" and initiated the more aggressive "rollback" strategy towards the USSR.Soss, 2010, p. 277; Fraser, 1989; Ferguson, 1986, pp. 43–53; Williams, pp. 325–331 After a surge in female labor participation over the previous decade, by 1985 the majority of women aged 16 and over were employed.
The late 1980s brought a "thaw" in relations with the USSR, and its collapse in 1991 finally ended the Cold War.
Hayes, 2009; USHistory.org, 2013. This brought about unipolarity,Charles Krauthammer, "The Unipolar Moment," Foreign Affairs, 70/1 (Winter 1990/91), pp. 23–33. with the U.S. unchallenged as the world's dominant superpower. The concept of Pax Americana, which had appeared in the post-World War II period, gained wide popularity as a term for the post-Cold War new world order.
Beginning in 1994, the U.S. entered into the North American Free Trade Agreement (NAFTA), linking 450 million people producing $17 trillion worth of goods and services. The goal of the agreement was to eliminate trade and investment barriers among the U.S., Canada, and Mexico by January 1, 2008; trade among the partners has soared since the agreement went into force. "North American Free Trade Agreement (NAFTA)" Office of the United States Trade Representative. Retrieved January 11, 2015.
Barack Obama, the first African American and multiracial president, was elected in 2008 amid the Great Recession, which began in December 2007 and ended in June 2009.US Business Cycle Expansions and Contractions, NBER, accessed January 11, 2015.
The United States is the world's third or fourth largest nation by total area (land and water), ranking behind Russia and Canada and just above or below China. The ranking varies depending on how two territories disputed by China and India are counted and how the total size of the United States is measured: published calculations differ, with the largest figure at 3,805,927 square miles (about 9.9 million km²). Measured by land area alone, the United States is third in size behind Russia and China, just ahead of Canada.
The coastal plain of the Atlantic seaboard gives way further inland to deciduous forests and the rolling hills of the Piedmont. The Appalachian Mountains divide the eastern seaboard from the Great Lakes and the grasslands of the Midwest. The Mississippi–Missouri River, the world's fourth longest river system, runs mainly north–south through the heart of the country. The flat, fertile prairie of the Great Plains stretches to the west, interrupted by a highland region in the southeast.
The Rocky Mountains, at the western edge of the Great Plains, extend north to south across the country, reaching altitudes higher than 14,000 feet (4,300 m) in Colorado. Farther west are the rocky Great Basin and deserts such as the Chihuahua and Mojave. The Sierra Nevada and Cascade mountain ranges run close to the Pacific coast, both ranges reaching altitudes higher than 14,000 feet (4,300 m). The lowest and highest points in the contiguous United States are in the state of California, and only about 84 miles (135 km) apart. At an elevation of 20,310 feet (6,190 m), Alaska's Denali (Mount McKinley) is the highest peak in the country and North America. Active volcanoes are common throughout Alaska's Alexander and Aleutian Islands, and Hawaii consists of volcanic islands. The supervolcano underlying Yellowstone National Park in the Rockies is the continent's largest volcanic feature.
The United States, with its large size and geographic variety, includes most climate types. To the east of the 100th meridian, the climate ranges from humid continental in the north to humid subtropical in the south. The Great Plains west of the 100th meridian are semi-arid. Much of the Western mountains have an alpine climate. The climate is arid in the Great Basin, desert in the Southwest, Mediterranean in coastal California, and oceanic in coastal Oregon and Washington and southern Alaska. Most of Alaska is subarctic or polar. Hawaii and the southern tip of Florida are tropical, as are the populated territories in the Caribbean and the Pacific. Extreme weather is not uncommon: the states bordering the Gulf of Mexico are prone to hurricanes, and most of the world's tornadoes occur within the country, mainly in Tornado Alley areas in the Midwest and South.
There are 58 national parks and hundreds of other federally managed parks, forests, and wilderness areas. Altogether, the government owns about 28% of the country's land area. Most of this is protected, though some is leased for oil and gas drilling, mining, logging, or cattle ranching; about 0.86% is used for military purposes.
Environmental issues have been on the national agenda since 1970. Environmental controversies include debates on oil and nuclear energy, dealing with air and water pollution, the economic costs of protecting wildlife, logging and deforestation, and international responses to global warming.Daynes & Sussman, 2010, pp. 3, 72, 74–76, 78; Hays, Samuel P. (2000). A History of Environmental Politics since 1945. Many federal and state agencies are involved. The most prominent is the Environmental Protection Agency (EPA), created by presidential order in 1970. The idea of wilderness has shaped the management of public lands since 1964, with the Wilderness Act.Turner, James Morton (2012). The Promise of Wilderness. The Endangered Species Act of 1973 is intended to protect threatened and endangered species and their habitats, which are monitored by the United States Fish and Wildlife Service.
The United States has a very diverse population; 37 ancestry groups have more than one million members. German Americans are the largest ethnic group (more than 50 million), followed by Irish Americans (circa 37 million), Mexican Americans (circa 31 million) and English Americans (circa 28 million).
White Americans are the largest racial group; Black Americans are the nation's largest racial minority (note that in the U.S. Census, Hispanic and Latino Americans are counted as an ethnic group, not a racial group) and third largest ancestry group. Asian Americans are the country's second largest racial minority; the three largest Asian American ethnic groups are Chinese Americans, Filipino Americans, and Indian Americans.
The United States has a birth rate of 13 per 1,000, which is 5 births below the world average. Its population growth rate is positive at 0.7%, higher than that of many developed nations. In fiscal year 2012, over one million immigrants (most of whom entered through family reunification) were granted legal residence."U.S. Legal Permanent Residents: 2012". Office of Immigration Statistics Annual Flow Report. Mexico has been the leading source of new residents since the 1965 Immigration Act. China, India, and the Philippines have been in the top four sending countries every year since the 1990s. Approximately 11.4 million residents are illegal immigrants. As of 2015, 47% of all immigrants are Hispanic, 26% are Asian, 18% are white and 8% are black. The percentage of immigrants who are Asian is increasing while the percentage who are Hispanic is decreasing.
According to a survey conducted by the Williams Institute, nine million Americans, or roughly 3.4% of the adult population identify themselves as homosexual, bisexual, or transgender. A 2012 Gallup poll also concluded that 3.5% of adult Americans identified as LGBT. The highest percentage came from the District of Columbia (10%), while the lowest state was North Dakota at 1.7%. In a 2013 survey, the Centers for Disease Control and Prevention found that 96.6% of Americans identify as straight, while 1.6% identify as gay or lesbian, and 0.7% identify as being bisexual.
In 2010, the U.S. population included an estimated 5.2 million people with some American Indian or Alaska Native ancestry (2.9 million exclusively of such ancestry) and 1.2 million with some native Hawaiian or Pacific island ancestry (0.5 million exclusively). The census counted more than 19 million people of "Some Other Race" who were "unable to identify with any" of its five official race categories in 2010.
The population growth of Hispanic and Latino Americans (the terms are officially interchangeable) is a major demographic trend. The 50.5 million Americans of Hispanic descent are identified as sharing a distinct "ethnicity" by the Census Bureau; 64% of Hispanic Americans are of Mexican descent. Between 2000 and 2010, the country's Hispanic population increased 43% while the non-Hispanic population rose just 4.9%. Much of this growth is from immigration; in 2007, 12.6% of the U.S. population was foreign-born, with 54% of that figure born in Latin America.
Fertility is also a factor; in 2010 the average Hispanic woman gave birth to 2.35 children in her lifetime, compared to 1.97 for non-Hispanic black women and 1.79 for non-Hispanic white women (both below the replacement rate of 2.1). Minorities (as defined by the Census Bureau as all those beside non-Hispanic, non-multiracial whites) constituted 36.3% of the population in 2010 (this is nearly 40% in 2015), U.S. Census Bureau: "U.S. Census Bureau Delivers Final State 2010 Census Population Totals for Legislative Redistricting" see custom table, 2nd worksheet and over 50% of children under age one, and are projected to constitute the majority by 2042. This contradicts the report by the National Vital Statistics Reports, based on the U.S. census data, which concludes that 54% (2,162,406 out of 3,999,386 in 2010) of births were non-Hispanic white. The Hispanic birth rate plummeted 25% between 2006 and 2013 while the rate for non-Hispanics decreased just 5%.
About 82% of Americans live in urban areas (including suburbs); about half of those reside in cities with populations over 50,000. The US has numerous clusters of cities known as megaregions, the largest being the Great Lakes Megalopolis followed by the Northeast Megalopolis and Southern California. In 2008, 273 incorporated places had populations over 100,000, nine cities had more than one million residents, and four global cities had over two million (New York City, Los Angeles, Chicago, and Houston). There are 52 metropolitan areas with populations greater than one million. Of the 50 fastest-growing metro areas, 47 are in the West or South. The metro areas of San Bernardino, Dallas, Houston, Atlanta, and Phoenix all grew by more than a million people between 2000 and 2008.
[Table: Languages spoken at home by more than 1,000,000 persons in the U.S., as of 2010; figures not preserved.]
English (American English) is the de facto national language. Although there is no official language at the federal level, some laws—such as U.S. naturalization requirements—standardize English. In 2010, about 230 million, or 80% of the population aged five years and older, spoke only English at home. Spanish, spoken by 12% of the population at home, is the second most common language and the most widely taught second language."Language Spoken at Home by the U.S. Population, 2010", American Community Survey, U.S. Census Bureau, in World Almanac and Book of Facts 2012, p. 615. Some Americans advocate making English the country's official language, as it is in 28 states.
Both Hawaiian and English are official languages in Hawaii, by state law. Alaska recognizes twenty Native languages.Alaska OKs Bill Making Native Languages Official, April 21, 2014; Bill Chappell; NPR.org While neither has an official language, New Mexico has laws providing for the use of both English and Spanish, as Louisiana does for English and French. Other states, such as California, mandate the publication of Spanish versions of certain government documents, including court forms. Many jurisdictions with large numbers of non-English speakers produce government materials, especially voting information, in the most commonly spoken languages in those jurisdictions.
Several insular territories grant official recognition to their native languages, along with English: Samoan and Chamorro are recognized by American Samoa and Guam, respectively; Carolinian and Chamorro are recognized by the Northern Mariana Islands; Cherokee is officially recognized by the Cherokee Nation within the Cherokee tribal jurisdiction area in eastern Oklahoma; and Spanish is an official language of Puerto Rico and is more widely spoken than English there.
According to the Center for Immigration Studies, Arabic and Urdu (Pakistan's national language) are the fastest growing foreign languages spoken at American households. According to the survey, more than 63.2 million US residents speak a language other than English at home. In recent years, Arabic speaking residents increased by 29%, Urdu by 23% and Persian by 9%.
The most widely taught foreign languages at all levels in the United States (in terms of enrollment numbers) are: Spanish (around 7.2 million students), French (1.5 million), and German (500,000). Other commonly taught languages (with 100,000 to 250,000 learners) include Latin, Japanese, American Sign Language, Italian, and Chinese. 18% of all Americans claim to speak at least one language in addition to English.
[Table: Religious affiliation in the U.S. (2014), with categories including other non-Christian faiths, "nothing in particular", and "don't know/refused"; figures not preserved.]
The First Amendment of the U.S. Constitution guarantees the free exercise of religion and forbids Congress from passing laws respecting its establishment. Christianity is by far the most common religion practiced in the U.S., but other religions are followed, too. In a 2013 survey, 56% of Americans said that religion played a "very important role in their lives", a far higher figure than that of any other wealthy nation. In a 2009 Gallup poll 42% of Americans said that they attended church weekly or almost weekly; the figures ranged from a low of 23% in Vermont to a high of 63% in Mississippi.
As with other Western countries, the U.S. is becoming less religious. Irreligion is growing rapidly among Americans under 30. Polls show that overall American confidence in organized religion is declining, and that younger Americans in particular are becoming increasingly irreligious. According to a 2012 study, the Protestant share of the U.S. population dropped to 48%, ending its status as the religious category of the majority for the first time."Nones" on the Rise: One-in-Five Adults Have No Religious Affiliation Americans with no religion have 1.7 children compared to 2.2 among Christians. The unaffiliated are less likely to marry, with 37% marrying compared to 52% of Christians.
According to a 2014 survey, 70.6% of adults identified themselves as Christian; Protestant denominations accounted for 46.5%, while Roman Catholicism, at 20.8%, was the largest individual denomination. The total reporting non-Christian religions in 2014 was 5.9%. Other religions include Judaism (1.9%), Islam (0.9%), Buddhism (0.7%) and Hinduism (0.7%). The survey also reported that 22.8% of Americans described themselves as agnostic, atheist or simply having no religion, up from 8.2% in 1990. There are also Unitarian Universalist, Baha'i, Sikh, Jain, Shinto, Confucian, Taoist, Druid, Native American, humanist and deist communities.Media, Minorities, and Meaning: A Critical Introduction, p. 88, Debra L. Merskin, 2010
Protestantism is the largest Christian religious grouping in the United States. Baptists collectively form the largest branch of Protestantism, and the Southern Baptist Convention is the largest individual Protestant denomination. About 26% of Americans identify as Evangelical Protestants, while 15% are Mainline and 7% belong to a traditionally Black church. Roman Catholicism in the United States has its origin in the Spanish and French colonization of the Americas, and later grew because of Irish, Italian, Polish, German and Hispanic immigration. Rhode Island is the only state where a majority of the population is Catholic. Lutheranism in the U.S. has its origin in immigration from Northern Europe and Germany. North and South Dakota are the only states in which a plurality of the population is Lutheran. Presbyterianism was introduced in North America by Scottish and Ulster Scots immigrants. Although it has spread across the United States, it is heavily concentrated on the East Coast. Dutch Reformed congregations were founded first in New Amsterdam (New York City) before spreading westward. Utah is the only state where Mormonism is the religion of the majority of the population. The Mormon Corridor also extends to parts of Idaho, Nevada and Wyoming.
The Bible Belt is an informal term for a region in the Southern United States in which socially conservative Evangelical Protestantism is a significant part of the culture and Christian church attendance across the denominations is generally higher than the nation's average. By contrast, religion plays the least important role in New England and in the Western United States.
The U.S. teenage pregnancy rate is 26.5 per 1,000 women. The rate has declined by 57% since 1991. In 2013, the highest teenage birth rate was in Alabama, and the lowest in Wyoming. Abortion is legal throughout the U.S., owing to Roe v. Wade, a 1973 landmark decision by the Supreme Court of the United States. While the abortion rate is falling, the abortion ratio of 241 per 1,000 live births and abortion rate of 15 per 1,000 women aged 15–44 remain higher than those of most Western nations. In 2013, the average age at first birth was 26 and 40.6% of births were to unmarried women.
The total fertility rate (TFR) was estimated for 2013 at 1.86 births per woman. Adoption in the United States is common and relatively easy from a legal point of view (compared to other Western countries). In 2001, with over 127,000 adoptions, the U.S. accounted for nearly half of the total number of adoptions worldwide. It is legal for same-sex couples to adopt. Polygamy is illegal throughout the U.S.
In the American federalist system, citizens are usually subject to three levels of government: federal, state, and local. The local government's duties are commonly split between county and municipal governments. In almost all cases, executive and legislative officials are elected by a plurality vote of citizens by district. There is no proportional representation at the federal level, and it is rare at lower levels.
The federal government is composed of three branches: the legislative (the bicameral Congress), the executive (the President), and the judicial (the federal courts, headed by the Supreme Court).
The House of Representatives has 435 voting members, each representing a congressional district for a two-year term. House seats are apportioned among the states by population every tenth year. At the 2010 census, seven states had the minimum of one representative, while California, the most populous state, had 53.
The Senate has 100 members with each state having two senators, elected at-large to six-year terms; one third of Senate seats are up for election every other year. The President serves a four-year term and may be elected to the office no more than twice. The President is not elected by direct vote, but by an indirect electoral college system in which the determining votes are apportioned to the states and the District of Columbia. The Supreme Court, led by the Chief Justice of the United States, has nine members, who serve for life.
The state governments are structured in roughly similar fashion; Nebraska uniquely has a unicameral legislature. The governor (chief executive) of each state is directly elected. Some state judges and cabinet officers are appointed by the governors of the respective states, while others are elected by popular vote.
The original text of the Constitution establishes the structure and responsibilities of the federal government and its relationship with the individual states. Article One protects the right to the "great writ" of habeas corpus. The Constitution has been amended 27 times (Feldstein & Fabozzi 2011, p. 9); the first ten amendments, which make up the Bill of Rights, and the Fourteenth Amendment form the central basis of Americans' individual rights. All laws and governmental procedures are subject to judicial review, and any law ruled by the courts to be in violation of the Constitution is voided. The principle of judicial review, not explicitly mentioned in the Constitution, was established by the Supreme Court in Marbury v. Madison (1803), in a decision handed down by Chief Justice John Marshall (Schultz 2009, pp. 38, 164, 453, 503).
Congressional districts are reapportioned among the states following each decennial Census of Population. Each state then draws single-member districts to conform with the census apportionment. The total number of Representatives is 435; in addition, non-voting delegates represent the District of Columbia and the five major US territories (House of Representatives, History, Art & Archives, Determining Apportionment and Reapportioning, viewed August 21, 2015).
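Since 1941, the seats have been assigned by the Huntington-Hill "method of equal proportions". The following is a minimal Python sketch of that method for illustration only; the three-state mini-country and its populations are hypothetical placeholders, not census data.

```python
import heapq
import math

def apportion(populations, seats=435):
    """Huntington-Hill (method of equal proportions) sketch.

    Every state is guaranteed one seat; each remaining seat goes to the
    state with the highest priority value pop / sqrt(n * (n + 1)), where
    n is the number of seats that state currently holds.
    """
    counts = {state: 1 for state in populations}  # constitutional minimum
    heap = [(-pop / math.sqrt(2), state) for state, pop in populations.items()]
    heapq.heapify(heap)  # max-heap simulated with negated priorities
    for _ in range(seats - len(populations)):
        _, state = heapq.heappop(heap)
        counts[state] += 1
        n = counts[state]
        heapq.heappush(heap, (-populations[state] / math.sqrt(n * (n + 1)), state))
    return counts

# Hypothetical three-state example with 10 seats, for illustration only.
print(apportion({"A": 5_000_000, "B": 1_500_000, "C": 500_000}, seats=10))
```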
The United States also observes the tribal sovereignty of the Native American nations. Though reservations lie within state borders, each reservation is a sovereign entity. While the United States recognizes this sovereignty, other countries may not.
Within American political culture, the Republican Party is considered "conservative" and the Democratic Party is considered "liberal". The states of the Northeast and West Coast and some of the Great Lakes states, known as "blue states", are relatively liberal. The "red states" of the South and parts of the Great Plains and Rocky Mountains are relatively conservative.
The winner of the 2008 and 2012 presidential elections, Democrat Barack Obama, is the 44th, and current, U.S. president. Current leadership in the Senate includes Democratic Vice President Joseph Biden, Republican President Pro Tempore (Pro Tem) Orrin Hatch, Majority Leader Mitch McConnell, and Minority Leader Harry Reid (US Senate, Senate Organization Chart for the 114th Congress, viewed August 25, 2015). Leadership in the House includes Speaker of the House Paul Ryan, Majority Leader Kevin McCarthy, and Minority Leader Nancy Pelosi (US House of Representatives, Leadership, viewed August 25, 2015).
In the 114th United States Congress, both the House of Representatives and the Senate are controlled by the Republican Party. The Senate currently consists of 54 Republicans and 44 Democrats, with two independents who caucus with the Democrats; the House consists of 246 Republicans and 188 Democrats, with one vacancy. In state governorships, there are 31 Republicans, 18 Democrats and one independent (MultiState Associates Incorporated, 2015 Governors and Legislatures, viewed January 14, 2015). Among the DC mayor and the 5 territorial governors, there are 2 Republicans, 2 Democrats (one also a member of the PPD), and 2 independents (National Governors Association, Current Governors, viewed January 14, 2015; DeBonis, Mike, "Bowser is elected D.C. Mayor", Washington Post, November 5, 2014).
The United States has a "special relationship" with the United Kingdom and strong ties with Canada, Australia, New Zealand, the Philippines, Japan, South Korea, Israel, and several European Union countries, including France, Italy, Germany, and Spain. It works closely with fellow NATO members on military and security issues and with its neighbors through the Organization of American States and free trade agreements such as the trilateral North American Free Trade Agreement with Canada and Mexico. In 2008, the United States spent a net $25.4 billion on official development assistance, the most in the world. As a share of America's large gross national income (GNI), however, the U.S. contribution of 0.18% ranked last among 22 donor states. By contrast, private overseas giving by Americans is relatively generous.
The U.S. exercises full international defense authority and responsibility for three sovereign nations through Compacts of Free Association with Micronesia, the Marshall Islands and Palau, all Pacific island nations that were part of the U.S.-administered Trust Territory of the Pacific Islands after World War II and gained independence in subsequent years.
U.S. taxation is generally progressive, especially the federal income taxes, and is among the most progressive in the developed world. The highest 10% of income earners pay a majority of federal taxes, and about half of all taxes. Payroll taxes for Social Security are a flat regressive tax, with no tax charged on income above $118,500 (for 2015 and 2016) and no tax at all paid on unearned income from sources such as stocks and capital gains. The historic reasoning for the regressive nature of the payroll tax is that entitlement programs have not been viewed as welfare transfers. However, according to the Congressional Budget Office ("Is Social Security Progressive?"), the net effect of Social Security is that the benefit-to-tax ratio ranges from roughly 70% for the top earnings quintile to about 170% for the lowest earning quintile, making the system progressive.
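To make the cap's effect concrete, here is a minimal sketch of the employee-side Social Security tax. The $118,500 wage base comes from the paragraph above; the 6.2% employee rate is the standard statutory rate and is an added assumption, not stated in the text.

```python
SS_RATE = 0.062      # employee-side Social Security rate (assumed; not stated above)
WAGE_BASE = 118_500  # 2015-2016 taxable maximum, from the paragraph above

def social_security_tax(wages):
    """Flat tax on wages up to the cap; wages above the cap go untaxed."""
    return SS_RATE * min(wages, WAGE_BASE)

for wages in (50_000, 118_500, 500_000):
    tax = social_security_tax(wages)
    print(f"wages {wages:>7,}: tax {tax:>8,.0f}, effective rate {tax / wages:.2%}")
```

The effective rate is flat up to the cap and then falls as wages rise, which is the regressivity described above.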
The top 10% paid 51.8% of total federal taxes in 2009, and the top 1%, with 13.4% of pre-tax national income, paid 22.3% of federal taxes. In 2013 the Tax Policy Center projected total federal effective tax rates of 35.5% for the top 1%, 27.2% for the top quintile, 13.8% for the middle quintile, and −2.7% for the bottom quintile. The incidence of the corporate income tax has been a matter of considerable ongoing controversy for decades.
During FY 2012, the federal government spent $3.54 trillion on a budget or cash basis, down $60 billion or 1.7% vs. FY 2011 spending of $3.60 trillion. Major categories of FY 2012 spending included: Medicare & Medicaid ($802B or 23% of spending), Social Security ($768B or 22%), Defense Department ($670B or 19%), non-defense discretionary ($615B or 17%), other mandatory ($461B or 13%) and interest ($223B or 6%).
Historically, the U.S. public debt as a share of GDP increased during wars and recessions, and subsequently declined. For example, debt held by the public as a share of GDP peaked just after World War II (113% of GDP in 1945), but then fell over the following 30 years. In recent decades, large budget deficits and the resulting increases in debt have led to concern about the long-term sustainability of the federal government's fiscal policies. However, these concerns are not universally shared.
Military service is voluntary, though conscription may occur in wartime through the Selective Service System. American forces can be rapidly deployed by the Air Force's large fleet of transport aircraft, by the Navy's 10 active aircraft carriers, and at sea with the Navy's Atlantic and Pacific fleets. The military operates 865 bases and facilities abroad, and maintains deployments greater than 100 active duty personnel in 25 foreign countries.
The military budget of the United States in 2011 was more than $700 billion, 41% of global military spending and equal to the next 14 largest national military expenditures combined. At 4.7% of GDP, the rate was the second-highest among the top 15 military spenders, after Saudi Arabia. U.S. defense spending as a percentage of GDP ranked 23rd globally in 2012 according to the CIA. Defense's share of U.S. spending has generally declined in recent decades, from Cold War peaks of 14.2% of GDP in 1953 and 69.5% of federal outlays in 1954 to 4.7% of GDP and 18.8% of federal outlays in 2011.
The proposed base Department of Defense budget for 2012, $553 billion, was a 4.2% increase over 2011; an additional $118 billion was proposed for the military campaigns in Iraq and Afghanistan. The last American troops serving in Iraq departed in December 2011; 4,484 service members were killed during the Iraq War. Approximately 90,000 U.S. troops were serving in Afghanistan in April 2012; by November 8, 2013, 2,285 had been killed during the War in Afghanistan.
In 2012 there were 4.7 murders per 100,000 persons in the United States, a 54% decline from the modern peak of 10.2 in 1980.
In 2001–2, the United States had above-average levels of violent crime and particularly high levels of gun violence compared to other developed nations. A cross-sectional analysis of the World Health Organization Mortality Database from 2003 showed that United States "homicide rates were 6.9 times higher than rates in the other high-income countries, driven by firearm homicide rates that were 19.5 times higher." Gun ownership rights continue to be the subject of contentious political debate.
From 1980 through 2008 males represented 77% of homicide victims and 90% of offenders. Blacks committed 52.5% of all homicides during that span, at a rate almost eight times that of whites ("whites" includes most Hispanics), and were victimized at a rate six times that of whites. Most homicides were intraracial, with 93% of black victims killed by blacks and 84% of white victims killed by whites. In 2012, Louisiana had the highest rate of murder and non-negligent manslaughter in the U.S., and New Hampshire the lowest. The FBI's Uniform Crime Reports estimates that there were 3,246 violent and property crimes per 100,000 residents in 2012, for a total of over 9 million total crimes.
Capital punishment is sanctioned in the United States for certain federal and military crimes, and used in 31 states. No executions took place from 1967 to 1977, owing in part to a U.S. Supreme Court ruling striking down arbitrary imposition of the death penalty. In 1976, that Court ruled that, under appropriate circumstances, capital punishment may constitutionally be imposed. Since the decision there have been more than 1,300 executions, a majority of these taking place in three states: Texas, Virginia, and Oklahoma. Meanwhile, several states have either abolished or struck down death penalty laws. In 2014, the country had the fifth highest number of executions in the world, following China, Iran, Saudi Arabia, and Iraq.
The United States has the highest documented incarceration rate and total prison population in the world (for recent data, see National Research Council, The Growth of Incarceration in the United States: Exploring Causes and Consequences, National Academies Press, 2014; Human Rights Watch, Nation Behind Bars: A Human Rights Solution, May 2014). At the start of 2008, more than 2.3 million people were incarcerated, more than one in every 100 adults. At year-end 2012, the combined U.S. adult correctional systems supervised about 6,937,600 offenders; about 1 in every 35 adult residents was under some form of correctional supervision, the lowest rate observed since 1997. The prison population has quadrupled since 1980. However, the imprisonment rate for prisoners sentenced to more than a year in state or federal facilities was 478 per 100,000 in 2013, and the rate for pre-trial/remand prisoners was 153 per 100,000 residents in 2012. The country's high rate of incarceration is largely due to changes in sentencing guidelines and drug policies. According to the Federal Bureau of Prisons, the majority of inmates held in federal prisons are convicted of drug offenses. The privatization of prisons and prison services, which began in the 1980s, has been a subject of debate (Selman, Donna and Paul Leighton, Punishment for Sale: Private Prisons, Big Business, and the Incarceration Binge, Rowman & Littlefield, 2010, p. xi; Gottschalk, Marie, Caught: The Prison State and the Lockdown of American Politics, Princeton University Press, 2014, p. 70; Kerwin, Peter, "Study finds private prisons keep inmates longer, without reducing future crime", University of Wisconsin–Madison News, June 10, 2015). In 2008, Louisiana had the highest incarceration rate, and Maine the lowest.
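The paragraph above mixes three ways of expressing the same quantity: raw counts, "1 in N" ratios, and rates per 100,000. A small sketch of the conversion; the adult-population figure is a rough assumption used only to illustrate the arithmetic.

```python
def rate_per_100k(count, population):
    """Convert a raw count into a rate per 100,000 residents."""
    return count / population * 100_000

ADULT_POPULATION = 230_000_000  # assumed rough U.S. adult population, circa 2008
incarcerated = 2_300_000        # "more than 2.3 million people" from the text

print(rate_per_100k(incarcerated, ADULT_POPULATION))  # 1000.0 per 100,000 adults
print(ADULT_POPULATION // incarcerated)               # 100 -> "one in every 100 adults"
```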
The US's nominal GDP is estimated to be $17.528 trillion. From 1983 to 2008, U.S. real compounded annual GDP growth was 3.3%, compared to a 2.3% weighted average for the rest of the G7. The country ranks ninth in the world in nominal GDP per capita and sixth in GDP per capita at PPP. The U.S. dollar is the world's primary reserve currency.
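"Real compounded annual GDP growth" is a compound annual growth rate (CAGR). A minimal sketch of the definition, using an index value of 100 rather than actual GDP figures:

```python
def cagr(start_value, end_value, years):
    """Compound annual growth rate: the constant yearly rate that grows
    start_value into end_value over the given number of years."""
    return (end_value / start_value) ** (1 / years) - 1

# 3.3% compounded over the 25 years from 1983 to 2008 multiplies real
# output by about 2.25x; cagr() recovers the rate from the endpoints.
index_1983 = 100.0
index_2008 = index_1983 * 1.033 ** 25
print(round(index_2008 / index_1983, 2))           # ~2.25
print(round(cagr(index_1983, index_2008, 25), 3))  # 0.033
```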
The United States is the largest importer of goods and second-largest exporter, though exports per capita are relatively low. In 2010, the total U.S. trade deficit was $635 billion. Canada, China, Mexico, Japan, and Germany are its top trading partners. In 2010, oil was the largest import commodity, while transportation equipment was the country's largest export. Japan is the largest foreign holder of U.S. public debt, but the largest holders of U.S. debt overall are American entities, including federal government accounts and the Federal Reserve, which hold the majority of it.
The Stockholm International Peace Research Institute (SIPRI) found that the United States' arms industry was the world's biggest exporter of major weapons from 2005 to 2009, and that it remained the largest exporter from 2010 to 2014, followed by Russia, China (PRC), and Germany.
In 2009, the private sector was estimated to constitute 86.4% of the economy, with federal government activity accounting for 4.3% and state and local government activity (including federal transfers) the remaining 9.3%. Employees at all levels of government outnumber those in manufacturing by 1.7 to 1. While its economy has reached a postindustrial level of development and its service sector constitutes 67.8% of GDP, the United States remains an industrial power. The leading business field by gross business receipts is wholesale and retail trade; by net income it is manufacturing. In the franchising business model, McDonald's and Subway are the two most recognized brands in the world. Coca-Cola is the most recognized soft drink company in the world.
Chemical products are the leading manufacturing field. The United States is the largest producer of oil in the world, as well as its second-largest importer. It is the world's number one producer of electrical and nuclear energy, as well as liquid natural gas, sulfur, phosphates, and salt. The National Mining Association provides data pertaining to coal and to minerals that include beryllium, copper, lead, magnesium, zinc, titanium and others.
Agriculture accounts for just under 1% of GDP, yet the United States is the world's top producer of corn and soybeans. The National Agricultural Statistics Service maintains agricultural statistics for products including rye, wheat, rice, cotton, corn, barley, hay, sunflowers, and oilseeds. In addition, the United States Department of Agriculture (USDA) provides livestock statistics regarding beef, poultry, pork, and dairy products. The country is the primary developer and grower of genetically modified food, representing half of the world's biotech crops.
Consumer spending comprised 68% of the U.S. economy in 2015 ("Personal Consumption Expenditures (PCE)/Gross Domestic Product (GDP)", FRED Graph, Federal Reserve Bank of St. Louis). In August 2010, the American labor force consisted of 154.1 million people. With 21.2 million people, government is the leading field of employment. The largest private employment sector is health care and social assistance, with 16.4 million people. About 12% of workers are unionized, compared to 30% in Western Europe. The World Bank ranks the United States first in the ease of hiring and firing workers. The United States is also ranked among the top three in the Global Competitiveness Report. It has a smaller welfare state and redistributes less income through government action than European nations tend to.
The United States is the only advanced economy that does not guarantee its workers paid vacation (Ray, Sanes & Schmitt, No-Vacation Nation Revisited, Center for Economic and Policy Research, May 2013), and it is one of just a few countries in the world without paid family leave as a legal right, the others being Papua New Guinea, Suriname and Liberia (Bernard, Tara Siegel, "In Paid Family Leave, U.S. Trails Most of the Globe", The New York Times, February 22, 2013). However, 74% of full-time American workers get paid sick leave, according to the Bureau of Labor Statistics, although only 24% of part-time workers get the same benefits. While federal law currently does not require sick leave, it is a common benefit for government workers and full-time employees at corporations. In 2009, the United States had the third-highest workforce productivity per person in the world, behind Luxembourg and Norway. It was fourth in productivity per hour, behind those two countries and the Netherlands.
The 2008–2012 global recession had a significant impact on the United States, with output still below potential according to the Congressional Budget Office. It brought high unemployment (which has been decreasing but remains above pre-recession levels), along with low consumer confidence, the continuing decline in home values and increase in foreclosures and personal bankruptcies, an escalating federal debt crisis, inflation, and rising petroleum and food prices. There remains a record proportion of long-term unemployed, continued decreasing household income, and tax and federal budget increases.
There has been a widening gap between productivity and median incomes since the 1970s (Mishel, Lawrence, The wedges between productivity and median compensation growth, Economic Policy Institute, April 26, 2012). However, the gap between total compensation and productivity is not as wide, because of increased employee benefits such as health insurance. While inflation-adjusted ("real") household income had been increasing almost every year from 1947 to 1999, it has since been flat on balance and has even decreased recently.
According to the Congressional Research Service, during this same period immigration to the United States increased while the incomes of the lower 90% of tax filers stagnated, and eventually declined after 2000. The rise in the share of total annual income received by the top 1 percent, which has more than doubled from 9 percent in 1976 to 20 percent in 2011, has had a significant impact on income inequality (Alvaredo, Atkinson, Piketty & Saez, "The Top 1 Percent in International and Historical Perspective", Journal of Economic Perspectives, 2013), leaving the United States with one of the widest income distributions among OECD nations (see Focus on Top Incomes and Taxation in OECD Countries: Was the crisis a game changer?, OECD, May 2014).
The post-recession income gains have been very uneven, with the top 1 percent capturing 95 percent of the income gains from 2009 to 2012 (Saez, Emmanuel, "Striking it Richer: The Evolution of Top Incomes in the United States", University of California, Berkeley, September 3, 2013). The extent and relevance of income inequality is a matter of debate.
Wealth, like income and taxes, is highly concentrated; the richest 10% of the adult population possess 72% of the country's household wealth, while the bottom half claim only 2% (Piketty, Thomas, Capital in the Twenty-First Century, Belknap Press, 2014, p. 257). Between June 2007 and November 2008, the global recession led to falling asset prices around the world, and assets owned by Americans lost about a quarter of their value ("Americans' wealth drops $1.3 trillion", CNN Money, June 11, 2009). Since peaking in the second quarter of 2007, household wealth fell by $14 trillion, but it has since recovered and risen $14 trillion over 2006 levels. At the end of 2014, household debt amounted to $11.8 trillion, down from $13.8 trillion at the end of 2008.
There were about 578,424 sheltered and unsheltered homeless persons in the U.S. in January 2014, with almost two-thirds staying in an emergency shelter or transitional housing program. In 2011, 16.7 million children lived in food-insecure households, about 35% more than 2007 levels, though only 1.1% of U.S. children, or 845,000, saw reduced food intake or disrupted eating patterns at some point during the year, and most cases were not chronic. According to a 2014 report by the Census Bureau, one in five young adults lives in poverty today, up from one in seven in 1980 (New Census Bureau Statistics Show How Young Adults Today Compare With Previous Generations in Neighborhoods Nationwide, United States Census Bureau, December 4, 2014).
About 12% of children are enrolled in parochial or nonsectarian private schools. Just over 2% of children are homeschooled. The U.S. spends more on education per student than any nation in the world, spending more than $11,000 per elementary student in 2010 and more than $12,000 per high school student. Some 80% of U.S. college students attend public universities.
The United States has many competitive private and public institutions of higher education. The majority of the world's top universities, as listed by various ranking organizations, are in the US. There are also local community colleges with generally more open admission policies, shorter academic programs, and lower tuition. Of Americans 25 and older, 84.6% graduated from high school, 52.6% attended some college, 27.2% earned a bachelor's degree, and 9.6% earned graduate degrees. The basic literacy rate is approximately 99% (for more detail on U.S. literacy, see A First Look at the Literacy of America's Adults in the 21st Century, U.S. Department of Education, 2003). The United Nations assigns the United States an Education Index of 0.97, tying it for 12th in the world.
As for public expenditures on higher education, the U.S. trails some other OECD nations but spends more per student than the OECD average, and more than all nations in combined public and private spending. As of 2012, student loan debt exceeded one trillion dollars, more than Americans owe on credit cards (Student Loan Debt Exceeds One Trillion Dollars, NPR, April 4, 2012).
Core American culture was established by Protestant British colonists and shaped by the frontier settlement process, with the traits derived passed down to descendants and transmitted to immigrants through assimilation. Americans have traditionally been characterized by a strong work ethic, competitiveness, and individualism, as well as a unifying belief in an "American creed" emphasizing liberty, equality, private property, democracy, rule of law, and a preference for limited government (see also The American's Creed, written by William Tyler Page and adopted by Congress in 1918). Americans are extremely charitable by global standards. According to a 2006 British study, Americans gave 1.67% of GDP to charity, more than any other nation studied, more than twice the second-place British figure of 0.73%, and around twelve times the French figure of 0.14%.
The American Dream, or the perception that Americans enjoy high social mobility, plays a key role in attracting immigrants. Whether this perception is realistic has been a topic of debate (Gould, Elise, "U.S. lags behind peer countries in mobility", Economic Policy Institute, October 10, 2012; CAP, Understanding Mobility in America, April 26, 2006). While mainstream culture holds that the United States is a classless society, scholars identify significant differences between the country's social classes, affecting socialization, language, and values. Americans' self-images, social viewpoints, and cultural expectations are associated with their occupations to an unusually close degree. While Americans tend greatly to value socioeconomic achievement, being ordinary or average is generally seen as a positive attribute.
Characteristic dishes such as apple pie, fried chicken, pizza, hamburgers, and hot dogs derive from the recipes of various immigrants. French fries, Mexican dishes such as burritos and tacos, and pasta dishes freely adapted from Italian sources are widely consumed. Americans drink three times as much coffee as tea. Marketing by U.S. industries is largely responsible for making orange juice and milk ubiquitous breakfast beverages (Smith, 2004, pp. 131–132; Levenstein, 2003, pp. 154–55).
American eating habits owe a great deal to their British culinary roots, with some variations. Although American land could grow vegetables that England could not, most colonists would not eat these new foods until they had been accepted by Europeans. Over time American foods changed to the point that, in 1972, food critic John L. Hess stated: "Our founding fathers were as far superior to our present political leaders in the quality of their food as they were in the quality of their prose and intelligence".
The American fast food industry, the world's largest, pioneered the drive-through format in the 1940s. Fast food consumption has sparked health concerns. During the 1980s and 1990s, Americans' caloric intake rose 24%; frequent dining at fast food outlets is associated with what public health officials call the American "obesity epidemic" (Boslaugh, Sarah, "Obesity Epidemic", in Culture Wars: An Encyclopedia of Issues, Viewpoints, and Voices, ed. Roger Chapman, M. E. Sharpe, 2010, pp. 413–14). Highly sweetened soft drinks are widely popular, and sugared beverages account for nine percent of American caloric intake.
Eleven U.S. citizens have won the Nobel Prize in Literature, most recently Toni Morrison in 1993. William Faulkner, Ernest Hemingway and John Steinbeck are often named among the most influential writers of the 20th century (Quinn, A Dictionary of Literary and Thematic Terms, Infobase, 2006, p. 361; Seed, A Companion to Twentieth-Century United States Fiction, John Wiley and Sons, 2009, p. 76; Meyers, Hemingway: A Biography, Da Capo, 1999, p. 139). Popular literary genres such as the Western and hardboiled crime fiction developed in the United States. The Beat Generation writers opened up new literary approaches, as have postmodernist authors such as John Barth, Thomas Pynchon, and Don DeLillo.
The transcendentalists, led by Thoreau and Ralph Waldo Emerson, established the first major American philosophical movement. After the Civil War, Charles Sanders Peirce and then William James and John Dewey were leaders in the development of pragmatism. In the 20th century, the work of W. V. O. Quine and Richard Rorty, and later Noam Chomsky, brought analytic philosophy to the fore of American philosophical academia. John Rawls and Robert Nozick led a revival of political philosophy. Cornel West and Judith Butler have led a continental tradition in American philosophical academia. Chicago school economists like Milton Friedman, James M. Buchanan, and Thomas Sowell have impacted various fields in social and political philosophy.
In the visual arts, the Hudson River School was a mid-19th-century movement in the tradition of European naturalism. The realist paintings of Thomas Eakins are now widely celebrated. The 1913 Armory Show in New York City, an exhibition of European modernist art, shocked the public and transformed the U.S. art scene (Brown, Milton W., The Story of the Armory Show, Abbeville, 1988 [1963]). Georgia O'Keeffe, Marsden Hartley, and others experimented with new, individualistic styles. Major artistic movements such as the abstract expressionism of Jackson Pollock and Willem de Kooning and the pop art of Andy Warhol and Roy Lichtenstein developed largely in the United States. The tide of modernism and then postmodernism has brought fame to American architects such as Frank Lloyd Wright, Philip Johnson, and Frank Gehry.
One of the first major promoters of American theater was impresario P. T. Barnum, who began operating a lower Manhattan entertainment complex in 1841. The team of Harrigan and Hart produced a series of popular musical comedies in New York starting in the late 1870s. In the 20th century, the modern musical form emerged on Broadway; the songs of musical theater composers such as Irving Berlin, Cole Porter, and Stephen Sondheim have become pop standards. Playwright Eugene O'Neill won the Nobel literature prize in 1936; other acclaimed U.S. dramatists include multiple Pulitzer Prize winners Tennessee Williams, Edward Albee, and August Wilson.
Though little known at the time, Charles Ives's work of the 1910s established him as the first major U.S. composer in the classical tradition, while experimentalists such as Henry Cowell and John Cage created a distinctive American approach to classical composition. Aaron Copland and George Gershwin developed a new synthesis of popular and classical music. Choreographers Isadora Duncan and Martha Graham helped create modern dance, while George Balanchine and Jerome Robbins were leaders in 20th-century ballet. Americans have long been important in the modern artistic medium of photography, with major photographers including Alfred Stieglitz, Edward Steichen, and Ansel Adams.
Elvis Presley and Chuck Berry were among the mid-1950s pioneers of rock and roll. In the 1960s, Bob Dylan emerged from the folk revival to become one of America's most celebrated songwriters and James Brown led the development of funk. More recent American creations include hip hop and house music. American pop stars such as Presley, Michael Jackson, and Madonna have become global celebrities, as have contemporary musical artists such as Taylor Swift, Britney Spears, Katy Perry, and Beyoncé as well as hip hop artists Jay Z, Eminem and Kanye West.
Director D. W. Griffith, America's top filmmaker during the silent film period, was central to the development of film grammar, and producer/entrepreneur Walt Disney was a leader in both animated film and movie merchandising. Directors such as John Ford redefined the image of the American Old West and history, and, like others such as John Huston, broadened the possibilities of cinema with location shooting, with great influence on subsequent directors. The industry enjoyed its golden years, in what is commonly referred to as the "Golden Age of Hollywood", from the early sound period until the early 1960s, with screen actors such as John Wayne and Marilyn Monroe becoming iconic figures. In the 1970s, film directors such as Martin Scorsese, Francis Ford Coppola and Robert Altman were a vital component in what became known as "New Hollywood" or the "Hollywood Renaissance": grittier films influenced by French and Italian realist pictures of the post-war period. Since then, directors such as Steven Spielberg, George Lucas and James Cameron have gained renown for their blockbuster films, often characterized by high production costs and, in return, high earnings at the box office, with Cameron's Avatar (2009) earning more than $2 billion.
Notable films topping the American Film Institute's AFI 100 list include Orson Welles's Citizen Kane (1941), which is frequently cited as the greatest film of all time (Village Voice: 100 Best Films of the 20th Century, 2001; Filmsite), Casablanca (1942), The Godfather (1972), Gone with the Wind (1939), Lawrence of Arabia (1962), The Wizard of Oz (1939), The Graduate (1967), On the Waterfront (1954), Schindler's List (1993), Singin' in the Rain (1952), It's a Wonderful Life (1946) and Sunset Boulevard (1950). The Academy Awards, popularly known as the Oscars, have been held annually by the Academy of Motion Picture Arts and Sciences since 1929, and the Golden Globe Awards have been held annually since January 1944.
The market for professional sports in the United States is roughly $69 billion, roughly 50% larger than that of all of Europe, the Middle East, and Africa combined ("Global sports market to hit $141 billion in 2012", Reuters). Baseball has been regarded as the national sport since the late 19th century, with Major League Baseball (MLB) being the top league, while American football is now by several measures the most popular spectator sport (MacCambridge, Michael, America's Game: The Epic Story of How Pro Football Captured a Nation, Random House, 2004), with the National Football League (NFL) having the highest average attendance of any sports league in the world and a Super Bowl watched by millions globally. Basketball and ice hockey are the country's next two leading professional team sports, with the top leagues being the National Basketball Association (NBA) and the National Hockey League (NHL). These four major sports, when played professionally, each occupy a season at different, but overlapping, times of the year. College football and basketball attract large audiences.
Boxing and horse racing were once the most watched individual sports, but they have been eclipsed by golf and auto racing, particularly NASCAR. In the 21st century, televised mixed martial arts has also gained a strong following of regular viewers. While soccer is less popular in the United States than in many other nations, the country hosted the 1994 FIFA World Cup, and the men's national soccer team has qualified for the past six World Cups. The United States women's national soccer team has won the Women's World Cup three times, more than any other nation. Major League Soccer is the professional soccer league in the United States.
Mass transit accounts for 9% of total U.S. work trips. Transport of goods by rail is extensive, though relatively low numbers of passengers (approximately 31 million annually) use intercity rail to travel, partly because of the low population density throughout much of the U.S. interior. However, ridership on Amtrak, the national intercity passenger rail system, grew by almost 37% between 2000 and 2010. Also, light rail development has increased in recent years. Bicycle usage for work commutes is minimal.
The civil airline industry is entirely privately owned and has been largely deregulated since 1978, while most major airports are publicly owned. The three largest airlines in the world by passengers carried are U.S.-based; American Airlines is number one after its 2013 acquisition by US Airways. Of the world's 30 busiest passenger airports, 12 are in the United States, including the busiest, Hartsfield–Jackson Atlanta International Airport.
For decades, nuclear power has played a limited role relative to many other developed countries, in part because of public perception in the wake of a 1979 accident. In 2007, several applications for new nuclear plants were filed. The United States has 27% of global coal reserves. It is the world's largest producer of natural gas and crude oil.
Cities, utilities, state governments and the federal government have addressed water supply issues in various ways. To keep pace with demand from an increasing population, utilities traditionally have augmented supplies. However, faced with increasing costs and droughts, water conservation is beginning to receive more attention and is being supported through the federal WaterSense program. The reuse of treated wastewater for non-potable uses is also becoming increasingly common. Pollution through wastewater discharges, a major issue in the 1960s, has been brought largely under control.
Most Americans are served by publicly owned water and sewer utilities. Eleven percent of Americans receive water from private (so-called "investor-owned") utilities. In rural areas, cooperatives often provide drinking water. Finally, up to 15 percent of Americans are served by their own wells. Water supply and wastewater systems are regulated by state governments and the federal government. At the state level, health and environmental regulation is entrusted to the corresponding state-level departments, while public utility commissions or Public Service Commissions regulate tariffs charged by private utilities; in some states they also regulate tariffs charged by public utilities. At the federal level, drinking water quality and wastewater discharges are regulated by the United States Environmental Protection Agency, which also provides funding to utilities through State Revolving Funds.
Water consumption in the United States is more than double that in Central Europe, with large variations among the states. In 2002, the average American family spent $474 on water and sewerage charges, which is about the same level as in Europe. The median household spent about 1.1 percent of its income on water and sewerage (calculated on the basis of a median household income of $42,409 in 2002).
In 1876, Alexander Graham Bell was awarded the first U.S. patent for the telephone. Thomas Edison's research laboratory, one of the first of its kind, developed the phonograph, the first long-lasting light bulb, and the first viable movie camera. The latter led to the emergence of the worldwide entertainment industry. In the early 20th century, the automobile companies of Ransom E. Olds and Henry Ford popularized the assembly line. The Wright brothers, in 1903, made the first sustained and controlled heavier-than-air powered flight.
The rise of Nazism in the 1930s led many European scientists, including Albert Einstein, Enrico Fermi, and John von Neumann, to immigrate to the United States. During World War II, the Manhattan Project developed nuclear weapons, ushering in the Atomic Age, while the Space Race produced rapid advances in rocketry, materials science, and aeronautics.
The invention of the transistor in the 1950s, a key active component in practically all modern electronics, led to many technological developments and a significant expansion of the U.S. technology industry (Goodheart, July 2, 2006; Silicon Valley: 110 Year Renaissance, McLaughlin, Weimers & Winslow, 2008). This in turn led to the establishment of many new technology companies and regions around the country, such as Silicon Valley in California. Advancements by American microprocessor companies such as Advanced Micro Devices (AMD) and Intel, along with computer software and hardware companies that include Adobe Systems, Apple Computer, IBM, GNU-Linux, Microsoft, and Sun Microsystems, created and popularized the personal computer. The ARPANET was developed in the 1960s to meet Defense Department requirements, and became the first of a series of networks which evolved into the Internet.
These advancements then led to greater personalization of technology for individual use. As of 2013, 83.8% of American households owned at least one computer, and 73.3% had high-speed Internet service; 91% of Americans also own a mobile phone. The United States ranks highly with regard to freedom of use of the internet.
In the 21st century, 64% of research and development funding comes from the private sector. The United States leads the world in scientific research papers and impact factor.
Approximately one-third of the adult population is obese and an additional third is overweight; the obesity rate, the highest in the industrialized world, has more than doubled in the last quarter-century. Obesity-related type 2 diabetes is considered epidemic by health care professionals. The infant mortality rate of 6.17 per thousand places the United States 169th-highest out of 224 countries, where the 224th-ranked country has the lowest mortality rate.
In 2010, coronary artery disease, lung cancer, stroke, chronic obstructive pulmonary disease, and traffic accidents caused the most years of life lost in the U.S. Low back pain, depression, musculoskeletal disorders, neck pain, and anxiety caused the most years lost to disability. The most deleterious risk factors were poor diet, tobacco smoking, obesity, high blood pressure, high blood sugar, physical inactivity, and alcohol use. Alzheimer's disease, drug abuse, kidney disease and cancer, and falls caused the most additional years of life lost over their age-adjusted 1990 per-capita rates. U.S. teenage pregnancy and abortion rates are substantially higher than in other Western nations, especially among blacks and Hispanics. U.S. underage drinking among teenagers is among the lowest in industrialized nations.
The U.S. is a global leader in medical innovation. America solely developed or contributed significantly to 9 of the top 10 most important medical innovations since 1975, as ranked by a 2001 poll of physicians, while the EU and Switzerland together contributed to five. Since 1966, more Americans have received the Nobel Prize in Medicine than the rest of the world combined. From 1989 to 2002, four times more money was invested in private biotechnology companies in America than in Europe. The U.S. health-care system far outspends any other nation's, measured in both per capita spending and percentage of GDP (OECD Health Data 2000: A Comparative Analysis of 29 Countries, CD-ROM, OECD: Paris, 2000).
Health-care coverage in the United States is a combination of public and private efforts and is not universal. In 2014, 13.4% of the population did not carry health insurance. The subject of uninsured and underinsured Americans is a major political issue. In 2006, Massachusetts became the first state to mandate universal health insurance. Federal legislation passed in early 2010 would ostensibly create a near-universal health insurance system around the country by 2014, though the bill and its ultimate impact are issues of controversy.
In 1998, the number of U.S. commercial radio stations had grown to 4,793 AM stations and 5,662 FM stations. In addition, there are 1,460 public radio stations. Most of these stations are run by universities and public authorities for educational purposes and are financed by public or private funds, subscriptions and corporate underwriting. Much public-radio broadcasting is supplied by NPR (formerly National Public Radio). NPR was incorporated in February 1970 under the Public Broadcasting Act of 1967; its television counterpart, PBS, was created by the same legislation. (NPR and PBS are operated separately from each other.) According to the U.S. Federal Communications Commission (FCC), there are 15,433 licensed full-power radio stations in the US.
Well-known newspapers include The New York Times, USA Today and The Wall Street Journal. Although the cost of publishing has increased over the years, the price of newspapers has generally remained low, forcing newspapers to rely more on advertising revenue and on articles provided by a major wire service, such as the Associated Press or Reuters, for their national and world coverage. With very few exceptions, all the newspapers in the U.S. are privately owned, either by large chains such as Gannett or McClatchy, which own dozens or even hundreds of newspapers; by small chains that own a handful of papers; or, in a situation that is increasingly rare, by individuals or families. Major cities often have "alternative weeklies" to complement the mainstream daily papers, such as New York City's The Village Voice or Los Angeles' LA Weekly, to name two of the best-known. Major cities may also support a local business journal, trade papers relating to local industries, and papers for local ethnic and social groups. Early versions of the American newspaper comic strip and the American comic book began appearing in the 19th century. In 1938, Superman, the comic book superhero of DC Comics, developed into an American icon. Aside from web portals and search engines, the most popular websites are Facebook, YouTube, Wikipedia, Amazon, eBay, and Twitter.
Locus of control
Locus of control is a theory in personality psychology referring to the extent to which individuals believe that they can control events that affect them. Understanding of the concept was developed by Julian B. Rotter in 1954, and has since become an aspect of personality studies. A person's "locus" (Latin for "place" or "location") is conceptualised as either internal (the person believes they can control their life) or external (meaning they believe that their decisions and life are controlled by environmental factors which they cannot influence).
Individuals with a high internal locus of control believe that events in their life derive primarily from their own actions; for example, if a person with an internal locus of control does not perform as well as they wanted to on a test, they would blame it on their own lack of preparedness. If they performed well, they would attribute this to their ability to study. In the same test-performance example, a person with a high external locus of control who does poorly might attribute this to the difficulty of the test questions; if they performed well, they might think the teacher was lenient or that they were lucky.
Locus of control has generated much research in a variety of areas in psychology. The construct is applicable to fields such as educational psychology, health psychology or clinical psychology. There will probably continue to be debate about whether specific or more global measures of locus of control will prove to be more useful. Careful distinctions should also be made between locus of control (a concept linked with expectancies about the future) and attributional style (a concept linked with explanations for past outcomes), or between locus of control and concepts such as self-efficacy. The importance of locus of control as a topic in psychology is likely to remain quite central for many years.
Locus of control has also been included as one of four dimensions of core self-evaluations – one's fundamental appraisal of oneself – along with neuroticism, self-efficacy, and self-esteem. In a follow-up study, Judge et al. (2002) argued the concepts of locus of control, neuroticism, self-efficacy and self-esteem measured the same, single factor. The concept of core self-evaluations was first examined by Judge, Locke, and Durham (1997), and since has proven to have the ability to predict several work outcomes, specifically, job satisfaction and job performance.
Locus of control is the framework of Rotter's (1954) social-learning theory of personality. In 1966 he published an article in Psychological Monographs which summarized over a decade of research (by Rotter and his students), much of it previously unpublished. In 1976, Herbert M. Lefcourt defined the perceived locus of control: "...a generalised expectancy for internal as opposed to external control of reinforcements". Attempts have been made to trace the genesis of the concept to the work of Alfred Adler, but its immediate background lies in the work of Rotter and his students. Early work on the topic of expectations about control of reinforcement had been performed in the 1950s by James and Phares (prepared for unpublished doctoral dissertations supervised by Rotter at Ohio State University).
Another Rotter student, William H. James (not to be confused with William James), studied two types of "expectancy shifts":
- Typical expectancy shifts, believing that success (or failure) would be followed by a similar outcome
- Atypical expectancy shifts, believing that success (or failure) would be followed by a dissimilar outcome
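A tiny sketch can make James's two definitions above concrete; the success/failure string encoding here is purely illustrative and not part of the original instruments.

```python
def classify_expectancy_shift(last_outcome, expected_next):
    """James's distinction: expecting the next outcome to repeat the last
    one is a typical shift; expecting it to reverse is an atypical shift."""
    return "typical" if expected_next == last_outcome else "atypical"

print(classify_expectancy_shift("success", "success"))  # typical
print(classify_expectancy_shift("failure", "success"))  # atypical
```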
Weiner's attribution theory as applied to student motivation can be summarized in a two-by-two table:

| Perceived locus of control | Stable cause | Unstable cause |
| Internal | Ability | Effort |
| External | Task difficulty | Luck |
Additional research led to the hypothesis that typical expectancy shifts were displayed more often by those who attributed their outcomes to ability, whereas those who displayed atypical expectancy shifts were more likely to attribute their outcomes to chance. This was interpreted to mean that people could be divided into those who attribute to ability (an internal cause) and those who attribute to luck (an external cause). Bernard Weiner argued that rather than ability-versus-luck, locus may relate to whether attributions are made to stable or unstable causes.
Rotter (1975, 1989) has discussed problems and misconceptions in others' use of the internal-versus-external construct.
Personality orientation
Rotter (1975) cautioned that internality and externality represent two ends of a continuum, not an either/or typology. Internals tend to attribute outcomes of events to their own control: people with an internal locus of control believe that the outcomes of their actions result from their own abilities, and that hard work will lead to positive outcomes. They also believe that every action has its consequence, and they accept that whether they exert control over what happens is up to them. Externals attribute outcomes of events to external circumstances: people with an external locus of control believe that many things that happen in their lives are out of their control, and even that their own actions result from external factors beyond their control. Rotter suggested that externals hold four types of beliefs: in powerful others (such as doctors and nurses), in fate, in luck, and in a world too complex for its outcomes to be predicted. People with an external locus of control also tend to blame others, rather than themselves, for outcomes. It should not be thought, however, that internality is linked exclusively with attribution to effort and externality with attribution to luck (as Weiner's work, discussed below, makes clear). This has obvious implications for differences between internals and externals in terms of achievement motivation, suggesting that internal locus is linked with higher levels of need for achievement. Because they locate control outside themselves, externals tend to feel they have less control over their fate, and they tend to be more stressed and prone to clinical depression.
Internals were believed by Rotter (1966) to exhibit two essential characteristics: high achievement motivation and low outer-directedness. This was the basis of the locus-of-control scale proposed by Rotter in 1966, although it was based on Rotter's belief that locus of control is a single construct. Since 1970, Rotter's assumption of uni-dimensionality has been challenged, with Levenson (for example) arguing that different dimensions of locus of control (such as beliefs that events in one's life are self-determined, or organized by powerful others and are chance-based) must be separated. Weiner's early work in the 1970s suggested that orthogonal to the internality-externality dimension, differences should be considered between those who attribute to stable and those who attribute to unstable causes.
This new, dimensional theory meant that one could now attribute outcomes to ability (an internal stable cause), effort (an internal unstable cause), task difficulty (an external stable cause) or luck (an external, unstable cause). Although this was how Weiner originally saw these four causes, he has been challenged as to whether people see luck (for example) as an external cause, whether ability is always perceived as stable, and whether effort is always seen as changing. Indeed, in more recent publications (e.g. Weiner, 1980) he uses different terms for these four causes (such as "objective task characteristics" instead of "task difficulty" and "chance" instead of "luck"). Psychologists since Weiner have distinguished between stable and unstable effort, knowing that in some circumstances effort could be seen as a stable cause (especially given the presence of words such as "industrious" in English).
Regarding locus of control, there is also a type of control that entails a mix of the internal and external types; people with this combination are often referred to as "bi-locals". People with bi-local characteristics are known to handle stress and cope with disease more efficiently by drawing on both internal and external loci of control: they can take personal responsibility for their actions and the consequences of those actions while remaining capable of relying upon, and having faith in, outside resources. An example of this mixed system would be a person with alcoholism who accepts that he brought the disease upon himself while remaining open to treatment and acknowledging that there are people, mainly doctors and therapists, trying to cure his addiction, on whom he can rely.
Measuring scales
The most widely used questionnaire to measure locus of control is the 23-item (plus six filler items), forced-choice scale of Rotter (1966). However, this is not the only questionnaire; Bialer's (1961) 23-item scale for children predates Rotter's work. Also relevant to the locus-of-control scale are the Crandall Intellectual Ascription of Responsibility Scale (Crandall, 1965) and the Nowicki-Strickland Scale. One of the earliest psychometric scales to assess locus of control (using a Likert-type scale, in contrast to the forced-choice alternative measure in Rotter's scale) was that devised by W. H. James for his unpublished doctoral dissertation, supervised by Rotter at Ohio State University; however, this remains unpublished.
Many measures of locus of control have appeared since Rotter's scale. These were reviewed by Furnham and Steele (1993) and include those related to health psychology, industrial and organizational psychology and those specifically for children (such as the Stanford Preschool Internal-External Control Index for three- to six-year-olds). Furnham and Steele (1993) cite data suggesting that the most reliable, valid questionnaire for adults is the Duttweiler scale. For a review of the health questionnaires cited by these authors, see "Applications" below.
The Duttweiler (1984) Internal Control Index (ICI) addresses perceived problems with the Rotter scales, including their forced-choice format, susceptibility to social desirability and heterogeneity (as indicated by factor analysis). She also notes that, while other scales existed in 1984 to measure locus of control, "they appear to be subject to many of the same problems". Unlike the forced-choice format used on Rotter's scale, Duttweiler's 28-item ICI uses a Likert-type scale in which people must state whether they would rarely, occasionally, sometimes, frequently or usually behave as specified in each of 28 statements. The ICI assesses variables pertinent to internal locus: cognitive processing, autonomy, resistance to social influence, self-confidence and delay of gratification. A small validation study (with 133 student subjects) indicated that the scale had good internal reliability (a Cronbach's alpha of 0.85).
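For readers unfamiliar with the internal-reliability statistic quoted above, here is a minimal sketch of Cronbach's alpha applied to hypothetical Likert-type responses; the data below are invented for illustration and are not ICI items.

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, n_items) score matrix:
    alpha = k/(k-1) * (1 - sum of item variances / variance of total score)."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1)
    total_variance = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_variances.sum() / total_variance)

# Invented 1-5 Likert responses from 6 respondents to 4 items, made
# deliberately consistent so that alpha comes out high.
data = np.array([
    [4, 5, 4, 5],
    [2, 2, 3, 2],
    [5, 4, 5, 5],
    [1, 2, 1, 2],
    [3, 3, 4, 3],
    [4, 4, 4, 5],
])
print(round(cronbach_alpha(data), 2))  # ~0.96 for this toy data
```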
Attributional style
Attributional style (or explanatory style) is a concept introduced by Lyn Yvonne Abramson, Martin Seligman and John D. Teasdale; Buchanan and Seligman (1995) have edited a book-length review of the topic. This concept goes a stage further than Weiner, stating that in addition to the dimensions of internality-externality and stability, a dimension of globality-specificity is also needed. Abramson et al. believed that how people explain successes and failures in their lives relates to whether they attribute these to internal or external factors, to short-term or long-term factors, and to factors that affect all situations or only specific ones.
The topic of attribution theory (introduced to psychology by Fritz Heider) has had an influence on locus-of-control theory, but differences exist between the history of these two models. Attribution theorists have been (largely speaking) social psychologists (concerned with the general processes characterizing how and why people make the attributions they do), whereas locus-of-control theorists have been more concerned with individual differences.
Significant to the history of both approaches were the contributions made by Bernard Weiner in the 1970s. Before this time, attribution theorists and locus-of-control theorists had been largely concerned with divisions into external and internal loci of causality. Weiner added the dimension of stability-instability (and later controllability), indicating how a cause could be perceived as having been internal to a person yet still beyond the person's control. The stability dimension added to the understanding of why people succeed or fail after such outcomes. Although not part of Weiner's model, a further dimension of attribution was added by Abramson, Seligman and Teasdale (globality-specificity).
Applications
Locus of control's best-known application may have been in the area of health psychology, largely owing to the work of Kenneth Wallston. Scales to measure locus of control in the health domain were reviewed by Furnham and Steele in 1993. The best known are the Health Locus of Control Scale and the Multidimensional Health Locus of Control Scale (MHLC). The latter scale is based on the idea (echoing Levenson's earlier work) that health may be attributed to three sources: internal factors (such as self-determination of a healthy lifestyle), powerful others (such as one's doctor) or luck (an orientation associated with disregarding lifestyle advice, which makes such patients very difficult to help).
Some of the scales reviewed by Furnham and Steele (1993) relate to health in more specific domains, such as obesity (for example, Saltzer's (1982) Weight Locus of Control Scale or Stotland and Zuroff's (1990) Dieting Beliefs Scale), mental health (such as Wood and Letak's (1982) Mental Health Locus of Control Scale or the Depression Locus of Control Scale of Whiteman, Desmond and Price, 1987) and cancer (the Cancer Locus of Control Scale of Pruyn et al., 1988). In discussing applications of the concept to health psychology, Furnham and Steele refer to Claire Bradley's work linking locus of control to the management of diabetes mellitus. Empirical data on health locus of control in a number of fields were reviewed by Norman and Bennett in 1995; they note that data on whether certain health-related behaviors are related to internal health locus of control have been ambiguous. Some studies found that internal health locus of control is linked with increased exercise, but others found a weak (or no) relationship between exercise behaviors (such as jogging) and internal health locus of control. A similar ambiguity is noted for data on the relationship between internal health locus of control and other health-related behaviors (such as breast self-examination, weight control and preventive-health behavior). Of particular interest are the data cited on the relationship between internal health locus of control and alcohol consumption.
Norman and Bennett note that some studies that compared alcoholics with non-alcoholics suggest alcoholism is linked to increased externality for health locus of control; however, other studies have linked alcoholism with increased internality. Similar ambiguity has been found in studies of alcohol consumption in the general, non-alcoholic population. They are more optimistic in reviewing the literature on the relationship between internal health locus of control and smoking cessation, although they also point out that there are grounds for supposing that powerful-others and internal-health loci of control may be linked with this behavior.
They argue that a stronger relationship is found when health locus of control is assessed for specific domains than when general measures are taken. Overall, studies using behavior-specific health locus scales have tended to produce more positive results, and such scales have been found to be more predictive of the corresponding behavior than general scales such as the MHLC. Norman and Bennett cite several studies that used health-related locus-of-control scales in specific domains, including smoking cessation, diabetes, tablet-treated diabetes, hypertension, arthritis, cancer, and heart and lung disease.
They also argue that health locus of control is better at predicting health-related behavior if studied in conjunction with health value (the value people attach to their health), suggesting that health value is an important moderator variable in the health locus of control relationship. For example, Weiss and Larsen (1990) found an increased relationship between internal health locus of control and health when health value was assessed. Despite the importance Norman and Bennett attach to specific measures of locus of control, there are general textbooks on personality which cite studies linking internal locus of control with improved physical health, mental health and quality of life in people with diverse conditions: HIV, migraines, diabetes, kidney disease and epilepsy.
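Moderation of the kind Norman and Bennett describe is usually tested by adding an interaction term to a regression. The sketch below simulates data in which locus of control predicts a health behavior mainly when health value is high, then recovers the interaction with ordinary least squares; all variable names and numbers are invented for illustration and are not Weiss and Larsen's data.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200
ihlc = rng.normal(size=n)    # internal health locus of control (simulated)
value = rng.normal(size=n)   # value the person attaches to health (simulated)
# Behavior depends on locus of control mostly when health value is high:
behavior = 0.1 * ihlc + 0.2 * value + 0.5 * ihlc * value + rng.normal(size=n)

# Regress behavior on both predictors and their product term; a clearly
# nonzero coefficient on the product term is the signature of moderation.
X = np.column_stack([np.ones(n), ihlc, value, ihlc * value])
coef, *_ = np.linalg.lstsq(X, behavior, rcond=None)
print("interaction coefficient:", round(coef[3], 2))  # close to 0.5
```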
During the 1970s and 1980s, Cassandra B. Whyte correlated locus of control with the academic success of students enrolled in higher-education courses. Students who were more internally controlled believed that hard work and focus would produce successful academic progress, and they performed better academically. Students identified as more externally controlled (believing that their future depended upon luck or fate) tended to have lower levels of academic performance. Whyte examined how control tendency influenced behavioral outcomes in the academic realm by studying the effects of various modes of counseling on grade improvements and on the locus of control of high-risk college students.
Organizational psychology and religion
Other fields to which the concept has been applied include industrial and organizational psychology, sports psychology, educational psychology and the psychology of religion. Richard Kahoe has published work in the latter field, suggesting that intrinsic religious orientation correlates positively (and extrinsic religious orientation correlates negatively) with internal locus. Of relevance to both health psychology and the psychology of religion is the work of Holt, Clark, Kreuter and Rubio (2003) on a questionnaire to assess spiritual-health locus of control. The authors distinguished between an active spiritual-health locus of control (in which "God empowers the individual to take healthy actions") and a more passive spiritual-health locus of control (where health is left up to God). In industrial and organizational psychology, it has been found that internals are more likely to take positive action to change their jobs (rather than merely talk about occupational change) than externals.
Consumer research
Locus of control has also been applied to the field of consumer research. For example, Martin, Veer and Pervan (2007) examined how women's weight locus of control (i.e., beliefs about the control of body weight) influences how they react to female models of different body shapes in advertising. They found that women who believe they can control their weight ("internals") respond most favorably to slim models in advertising, and that this favorable response is mediated by self-referencing. In contrast, women who feel powerless about their weight ("externals") self-reference larger-sized models but prefer them only when the advertisement is for a non-fattening product; for fattening products, they show a similar preference for larger-sized and slim models. The weight locus of control measure was also found to correlate with measures of weight control beliefs and willpower.
Familial origins
The development of locus of control is associated with family style and resources, cultural stability and experiences with effort leading to reward. Many internals have grown up with families modeling typical internal beliefs; these families emphasized effort, education, responsibility and thinking, and parents typically gave their children rewards they had promised them. In contrast, externals are typically associated with lower socioeconomic status. Societies experiencing social unrest increase the expectancy of being out-of-control; therefore, people in such societies become more external.
Research by Schneewind (1995) suggests that "children in large single parent families headed by women are more likely to develop an external locus of control". Schultz and Schultz also claim that children develop an internal locus of control in families where parents have been supportive and consistent in discipline. At least one study has found that children whose parents had an external locus of control are more likely to attribute their successes and failures to external causes. Findings from early studies on the familial origins of locus of control were summarized by Lefcourt: "Warmth, supportiveness and parental encouragement seem to be essential for development of an internal locus". However, causal evidence on how parental locus of control influences offspring locus of control (whether genetically or environmentally mediated) is lacking.
Age
Locus of control becomes more internal with age. As children grow older, they gain skills that give them more control over their environment; however, whether this or biological development is responsible for the change is unclear.
It is sometimes assumed that as people age they will become less internal and more external, but data here have been ambiguous. Longitudinal data collected by Gatz and Karel (cited in Johnson et al., 2004) imply that internality may increase until middle age and decrease thereafter. Noting the ambiguity of data in this area, Aldwin and Gilmer (2004) cite Lachman's claim that the evidence on locus of control in later life is ambiguous. Indeed, there is evidence that changes in locus of control in later life relate more visibly to increased externality (rather than reduced internality) if the two concepts are taken to be orthogonal. Evidence cited by Schultz and Schultz (2005) (for example, Heckhausen and Schulz (1995) or Ryckman and Malikosi, 1975) suggests that locus of control increases in internality until middle age. The authors also note that attempts to control the environment become more pronounced between ages eight and fourteen.
Health locus of control describes how people relate their health, their health behavior and their recovery from disease to their own actions, and it can influence how they think about and act on health decisions. Older adults often experience progressive declines in health, and such declines can be accompanied by lower levels of internal health locus of control; this does not necessarily mean that aging affects locus of control negatively. As people age, they become more aware that events outside their own control happen and that other individuals can influence their health outcomes, yet an older adult may also have more control over his or her attitude and approach to a health situation than a young child would.
A study published in the journal Psychosomatic Medicine examined the health effects of childhood locus of control. Among 7,500 British adults followed from birth, those who had shown an internal locus of control at age 10 were less likely to be overweight at age 30. The children with an internal locus of control also appeared to have higher levels of self-esteem.
Gender-based differences
As Schultz and Schultz (2005) point out, significant gender differences in locus of control have not been found for adults in the U.S. population. However, these authors also note that there may be specific sex-based differences for specific categories of items to assess locus of control; for example, they cite evidence that men may have a greater internal locus for questions related to academic achievement.
A study by Takaki and colleagues (2006) focused on gender differences in the relationship of internal locus of control and self-efficacy to compliance in hemodialysis patients. It found that women with a high internal locus of control were less compliant with their health regimen and medical advice than the men who participated in the study. Compliance is the degree to which a patient's behavior corresponds with medical advice; a compliant patient, for example, correctly follows his or her doctor's advice.
Cross-cultural issues
The question of whether people from different cultures vary in locus of control has long been of interest to social psychologists. Japanese people tend to be more external in locus-of-control orientation than people in the U.S.; however, differences in locus of control between different countries within Europe (and between the U.S. and Europe) tend to be small. As Berry et al. pointed out in 1992, ethnic groups within the United States have been compared on locus of control; African Americans in the U.S. are more external than whites, even when socioeconomic status is controlled. Berry et al. also pointed out in 1992 how research on other ethnic minorities in the U.S. (such as Hispanics) has been ambiguous. More on cross-cultural variations in locus of control can be found in Shiraev and Levy (2004). Research in this area indicates that locus of control has been a useful concept for researchers in cross-cultural psychology.
Self-efficacy
Self-efficacy, a related concept introduced by Albert Bandura, is a person's belief that he or she can accomplish a particular activity; it has been measured by means of a psychometric scale. It differs from locus of control in relating to competence in circumscribed situations and activities (rather than more general cross-situational beliefs about control). Bandura has also emphasized the difference between self-efficacy and self-esteem, using examples where low self-efficacy (for instance, in ballroom dancing) is unlikely to result in low self-esteem because competence in that domain is not very important to the individual. Although individuals may have a high internal health locus of control and feel in control of their own health, they may not feel efficacious in performing a specific treatment regimen that is essential to maintaining that health. Self-efficacy plays an important role in health because when people feel that they have self-efficacy over their health conditions, those conditions become less of a stressor.
Smith (1989) has argued that locus of control only weakly measures self-efficacy; "only a subset of items refer directly to the subject's capabilities". Smith noted that training in coping skills led to increases in self-efficacy, but did not affect locus of control as measured by Rotter's 1966 scale.
Stress
Historically, stress was not considered to play a part in a person's health or illness. Stress, however, is related to both locus of control and self-efficacy: self-efficacy can be a resource people draw on to deal with the stress they face in everyday life, and some findings suggest that an external health-related locus of control combined with self-efficacy moderates illness-related psychological distress. Given the definition of an external locus of control, externality can be associated with higher levels of stress. A study conducted by Bollini and others found that individuals with a highly external locus of control tend to have more psychological and physical problems; they are also more vulnerable to external influences and, as a result, more responsive to stress.
Veterans with spinal cord injuries and post-traumatic stress are an instructive group with regard to locus of control and stress. Aging appears to be an important factor in the severity of the PTSD symptoms patients experience following the trauma of war. Research suggests that patients who have suffered a spinal cord injury benefit from knowing that they have control over their health problems and their disability, which reflects an internal locus of control. A study by Chung et al. (2006) examined how responses to spinal cord injury and post-traumatic stress varied with age, testing young adult, middle-aged and elderly groups (average ages 25, 48 and 65, respectively). The researchers concluded that age made no difference in how spinal cord injury patients responded to the traumatic events themselves, but that it did affect the extent to which an external locus of control was used: the young adult group showed more external locus of control characteristics than the other age groups.
See also
- Explanatory style
- Fundamental attribution error
- Industrial and organizational psychology
- Learned helplessness
References
- Carlson, N.R., et al. (2007). Psychology: The Science of Behaviour (4th Canadian ed.). Toronto, ON: Pearson Education Canada.
- Judge, T. A.; Locke, E. A.; Durham, C. C. (1997). "The dispositional causes of job satisfaction: A core evaluations approach". Research in Organizational Behavior 19: 151–188.
- Judge, Timothy A.; Erez, Amir; Bono, Joyce E.; Thoresen, Carl J. (2002). "Are measures of self-esteem, neuroticism, locus of control, and generalized self-efficacy indicators of a common core construct?". Journal of Personality and Social Psychology 83 (3): 693–710. PMID 12219863.
- Dormann, C.; Fay, D.; Zapf, D.; Frese, M. (2006). "A state-trait analysis of job satisfaction: On the effect of core self-evaluations". Applied Psychology: an International Review 55 (1): 27–51. doi:10.1111/j.1464-0597.2006.00227.x.
- Lefcourt 1976, p. 27
- Herbert M. Lefcourt, Locus of Control: Current Trends in Theory and Research. Psychology Press, 1982.
- April, K.A.; Dharani, B.; Peters, K. "Impact of Locus of Control Expectancy on Level of Well-Being". Review of European Studies 4 (2). doi:10.5539/res.v4n2p124.
- Jacobs-Lawson, J. M.; Waddell, E. L.; Webb, A. K. "Predictors of Health Locus of Control in Older Adults". Current Psychology 30 (2): 173–183. doi:10.1007/s12144-011-9108-z.
- Benassi, Sweeney & Dufour, 1988; cited in Maltby, Day & Macaskill, 2007.
- Weiner, 1974
- Nowicki and Strickland, 1971
- Lefcourt, 1976
- Mischel et al., 1974; cited in Furnham & Steele, 1993
- Duttweiler 1984, p. 211
- Abramson, Seligman and Teasdale, 1978.
- Wallston, Wallston, & DeVellis, 1976; Wallston, Wallston, Kaplan & Maides, 1976
- Lefcourt, 1991
- Norman & Bennett, 1995, p. 72
- Georgio & Bradley, 1992
- Ferraro, Price, Desmond and Roberts, 1987
- Bradley, Lewis, Jennings and Ward, 1990
- Stanton, 1987
- Nicasio et al., 1985
- Pruyn et al., 1988
- Allison, 1987
- Norman and Bennett, 1995
- Maltby, Day & Macaskill, 2007
- Whyte, C., "An Integrated Counseling and Learning Assistance Center" (1980). New Directions Sourcebook-Learning Assistance Centers. Jossey-Bass, Inc.
- Whyte, C., "Effective Conseling Methods for High-Risk College Freshmen (1978)." Measurement and Evaluation in Guidance. January, 6, (4). 198-200.
- Kahoe, 1974
- Holt et al., p. 294
- Allen, Weeks and Moffat, 2005; cited in Maltby et al., 2007
- Martin Brett A. S., Veer Ekant, Pervan Simon J. (2007). "Self-Referencing and Consumer Evaluations of Larger-sized Female Models: A Weight Locus of Control Perspective" (PDF). Marketing Letters 18 (3): 197–209. doi:10.1007/s11002-007-9014-1.
- Meyerhoff (2004), p.8
- Schultz and Schultz (2005), p.439.
- "Social Learning Theory of Julian B. Rotter" Archived from the original 2012-04-07.
- Lefcourt, 1976, p. 100
- Aldwin & Gilmer, 2004; Johnson, Grant, Plomin, Pedersen, Ahern, Berg & McClearn, 2001
- Strickland & Haley, 1980; cited in Schultz & Schultz, 2005.
- Takaki, J; Yano E. (2006). "Possible gender differences in the relationships of self-efficacy and the internal locus of control with compliance in hemodialysis patients". Behavioral Medicine 32 (1): 5–11. doi:10.3200/BMED.32.1.5-11. PMID 16637257.
- Berry, Poortinga, Segall and Dasen, 1992.
- Dyal, 1984; cited in Berry et al., 1992.
- Sherer, Madux et al., 1982
- Roddenberry, Angela; Renk, Kimberly (2010). "Locus of Control and Self-Efficacy: Potential Mediators of Stress, Illness, and Utilization of Health Services in College Students". Child Psychiatry & Human Development 41 (4): 353–370. doi:10.1007/s10578-010-0173-6.
- Smith, p. 229.
- Chung, M.C.; Preveza, E.; Papandreou, K.; Prevezas, N. (2006). "Spinal cord injury, posttraumatic stress, and locus of control among the elderly". Psychiatry: Interpersonal & Biological Processes 69 (1): 69–80. doi:10.1521/psyc.2006.69.1.69.
- Abramson, L.Y., Seligman, M.E.P., Teasdale, J.D. (1978). "Learned helplessness in humans: Critique and reformulation". Journal of Abnormal Psychology 87 (1): 49–74. doi:10.1037/0021-843X.87.1.49. PMID 649856.
- Abramson, L.Y., Metasky, G.I., Alloy, L.B. (1989). "Hopelessness depression: A theory-based subtype of depression". Psychological Review 96 (2): 358–72. doi:10.1037/0033-295X.96.2.358.
- Aldwin, C.M., Gilman, D.F. (2004). Health, Illness and Optimal Ageing. London: Sage. ISBN 0-7619-2259-8.
- Anderson, C.A., Jennings, D.L., Arnoult, L.H. (1988). "Validity and utility of the attributional style construct at a moderate level of specificity". Journal of Personality and Social Psychology 55 (6): 979–90.
- Berry, J.W., Poortinga, Y.H.,Segall, M.H., Dasen, P.R. (1992). Cross-cultural Psychology: Research and Applications. Cambridge: Cambridge University Press. ISBN 0-521-37761-7.
- Buchanan, G.M., Seligman, M.E.P., eds. (1997). Explanatory Style. NJ: Lawrence Erlbaum Associates. ISBN 0-8058-0924-5.
- Burns, M., Seligman, M.E.P. (1989). "Explanatory style across the lifespan". Journal of Personality and Social Psychology 56 (3): 471–7. PMID 2926642.
- Cutrona, C.E., Russell, D., Jones, R.D. (1985). "Cross-situational consistency in causal attributions: Does attributional style exist?". Journal of Personality and Social Psychology.
- Duttweiler, P.C. (1984). "The Internal Control Index: A Newly Developed Measure of Locus of Control". Educational and Psychological Measurement 44 (2): 209–21. doi:10.1177/0013164484442004.
- Furnham, A., Steele, H. (1993). "Measures of Locus of Control: A critique of children's, health and work-related locus of control questionnaires". British Journal of Psychology 84 (4): 443–79. doi:10.1111/j.2044-8295.1993.tb02495.x. PMID 8298858.
- Eisner, J.E. (1997). "The origins of explanatory style: Trust as a determinant of pessimism and optimism". In Buchanan, G.M., Seligman, M.E.P. Explanatory Style. NJ: Lawrence Erlbaum Associates. pp. 49–55. ISBN 0-8058-0924-5.
- Gong-Guy, E., Hammen, C. (1980). "Causal perceptions of stressful events in depressed and nondepressed outpatients". Journal of Abnormal Psychology 89 (5): 662–9. doi:10.1037/0021-843X.89.5.662. PMID 7410726.
- Holt, C.L., Clark, E.M., Kreuter, M.W., Rubio, D. (2003). "Spiritual health locus of control and cancer beliefs among urban African American women". Health Psychology 22 (3): 294–9. PMID 12790257.
- Kahoe, R. (1974). "Personality and achievement correlates of intrinsic and extrinsic religious orientations". Journal of Personality and Social Psychology 29 (6): 812–8. doi:10.1037/h0036222.
- Lefcourt, H.M. (1966). "Internal versus external control of reinforcement: A review". Psychological Bulletin 65 (4): 206–20. doi:10.1037/h0023116. PMID 5325292.
- Lefcourt, H.M. (1976). Locus of Control: Current Trends in Theory and Research. NJ: Lawrence Erlbaum Associates.
- Maltby, J., Day, L., Macaskill, A. (2007). Personality, Individual Differences and Intelligence. Harlow: Pearson Prentice Hall. ISBN 0-13-12976-0.
- Meyerhoff, Michael K. (2004). Locus of Control.
- Norman, P., Antaki, C. (1988). "The Real Events Attributional Style Questionnaire". Journal of Social and Clinical Psychology 7 (2–3): 97–100. doi:10.1521/jscp.1988.7.2-3.97.
- Norman, P., Bennett, P. (1995). "3. Health Locus of Control". In Conner, M., Norman, P. Predicting Health Behaviour. Buckingham: Open University Press. pp. 62–94.
- Nowicki, S., Strickland, B. (1973). "A locus of control scale for children". Journal of Consulting and Clinical Psychology 42: 148–55. doi:10.1037/h0033978.
- Peterson, C., Semmel, A., von Baeyer, C., Abramson, L., Metalsky, G.I., Seligman, M.E.P. (1982). "The Attributional Style Questionnaire". Cognitive Therapy and Research 6 (3): 287–9. doi:10.1007/BF01173577.
- Robbins and Hayes (1997). "The role of causal attributions in the prediction of depression". In Buchanan, G.M., Seligman, M.E.P. Explanatory Style. NJ: Lawrence Erlbaum Associates. pp. 71–98. ISBN 0-8058-0924-5.
- Rotter, J.B. (1954). Social learning and clinical psychology. NY: Prentice-Hall.
- Rotter, J.B. (1966). "Generalized expectancies of internal versus external control of reinforcements". Psychological Monographs 80 (609).
- Rotter, J.B. (1975). "Some problems and misconceptions related to the construct of internal versus external control of reinforcement". Journal of Consulting and Clinical Psychology 43: 56–67. doi:10.1037/h0076301.
- Rotter, J.B. (1990). "Internal versus external control of reinforcement: A case history of a variable". American Psychologist 45 (4): 489–93. doi:10.1037/0003-066X.45.4.489. Retrieved 10 December 2010.
- Schultz, D.P., Schultz, S.E. (2005). Theories of Personality (8th ed.). Wadsworth: Thomson. ISBN 0-534-62402-2.
- Sherer, M., Maddux, J., et al. (1982). "The self-efficacy scale: Construction and validation". Psychological Reports 51 (2): 663–71.
- Shiraev, E., Levy, D. (2004). Cross-cultural Psychology: Critical Thinking and Contemporary Applications (2nd ed.). Boston: Pearson. ISBN 0-205-38612-1.
- Smith, R.E. (1989). "Effects of coping skills training on generalized self-efficacy and locus of control". Journal of Personality and Social Psychology 56 (2): 228–33. PMID 2926626.
- Wallston, K.A., Wallston, B.S., Devellis, R. (1978). "Development of Multidimensional Health Locus of Control (MHLC) Scales". Health Education Monographs 6 (2): 160–70. doi:10.1177/109019817800600107. PMID 689890.
- Weiner, B., ed. (1974). Achievement Motivation and Attribution Theory. NY: General Learning Press.
- Weiner, B. (1980). Human Motivation. New York: Holt, Rinehart and Winston.
- Whyte, C., "An Integrated Counseling and Learning Assistance Center" (1980). New Directions Sourcebook-Learning Assistance Centers. Jossey-Bass, Inc.
- Whyte, C. (January 1978). "Effective Conseling Methods for High-Risk College Freshmen". Measurement and Evaluation in Guidance 6 (4): 198–200.
- Xenikou, A.; Furnham, A.; McCarrey, M. (1997). "Attributional style for negative events: A proposition for a more valid and reliable measure of attributional style". British Journal of Psychology 88: 53–69. doi:10.1111/j.2044-8295.1997.tb02620.x.
- Morton TL (June 1997). "The relationship between parental locus of control and children's perceptions of control". J Genet Psychol 158 (2): 216–25. doi:10.1080/00221329709596663. PMID 9168590.
External links
- Locus of Control & Attributional Style Test by PsychTests.com
- Locus of control: A class tutorial
- Locus of control
- http://www.similarminds.com/locus.html (a locus of control self-test)
Endangered Species Act
The Endangered Species Act (ESA) is one of the most powerful of this nation's environmental laws. Passed in 1973, the act's purpose is to both conserve and restore species that have been listed by the federal government as either endangered or threatened (referred to as "listed" species). The act has several provisions that promote those goals:
- First, the act broadly prohibits anyone from doing anything that would kill, harm, or harass an endangered species. Those prohibitions even apply when listed animal species are on private lands.
- Second, federal agencies have a special obligation to ensure that they do nothing that would harm a listed species. That obligation significantly affects activities on federal lands, like grazing, logging, and mining. But it also means that a federal agency has to assess whether its actions could affect a listed species before the agency signs off on projects like a new highway or a dam on non-federal land.
- Third, the act tells federal agencies to develop plans that show how the listed species could be restored—or "recovered"—so that it no longer needs the act's protections ("delisted").
If an animal or plant species is listed as "endangered," the species is considered to be in danger of extinction throughout a large part of its range. It is possible for a species to be listed as endangered, the highest level of protection the act provides, in one place but not another. The U.S. Fish and Wildlife Service (USFWS) maintains a list of endangered species.
For a species to be listed as "threatened," there must be a significant risk that the species is going to become endangered. Threatened species have a lower risk of extinction than do "endangered" species. As a result, state and federal agencies may have some greater flexibility in how they manage a threatened species than an endangered species. The USFWS maintains a list of threatened species.
Generally speaking, a "species" is a group of related plants or animals that can interbreed to produce offspring. Under the ESA, the word "species" is used more broadly to include any "subspecies" of fish, wildlife, or plants, and also any "distinct population segment" of fish and wildlife species that can interbreed.
- A "subspecies" is a subdivision of a species, which is genetically different from other subspecies and often is geographically separated. Examples of subspecies are the Mexican and the Northern Spotted Owls.
- A "distinct population segment" is not genetically different from the species as a whole, but it has very specific habitat or reproduction habits. An example of a distinct population segment is a particular group of salmon, which, after spending their formative years in the ocean, return to the same mountain stream in which they were born. Thus, the winter run of the Chinook salmon on the Sacramento River in California is endangered, and many other runs of Chinook salmon are threatened, but the spring run of Chinook up the Clackamas River in Oregon and Washington is neither endangered nor threatened.
The Secretary of the Interior has delegated most of his or her duties under the ESA to the U.S. Fish and Wildlife Service (USFWS), which is responsible for all land-based species. The Secretary of Commerce has delegated most of his or her responsibilities for sea life and salmon and steelhead ("anadromous fish" that spawn in inland waters, migrate to the ocean for several years, and then return to their spawning grounds) to the National Marine Fisheries Service (NMFS).
Secretarial Order Principles:
Many tribes believe that the ESA should not apply to tribal lands, both because the act itself is silent on its applicability to tribes and because of the special legal status of tribal lands. Additionally, tribes saw conflict between their proposed economic development projects and enforcement of the ESA, and felt that implementation of the ESA was giving control of their lands to persons living hundreds of miles away. As a result of these concerns, extensive negotiations between tribal representatives and federal officials produced a 1997 Secretarial Order outlining the policy to be followed by the Departments of Commerce (for NMFS) and the Interior (for USFWS).
The Secretarial Order sets out five principles for the Departments. The order also includes explanatory text that emphasizes the sovereignty of tribes including a provision that "the Departments shall give deference to tribal conservation and management plans." Overall, the Order seeks to "ensure that Indian tribes do not bear a disproportionate burden for the conservation of listed species."
An example of the success of this cooperative approach is the experience of the White Mountain Apache Tribe in Arizona with restoration of the Mexican gray wolf, Apache trout, and Mexican spotted owl.
The Best Available Science
The ESA requires USFWS and NMFS to base listing decisions on the best available science and to use this science as one factor in critical habitat designations. In 2003, a GAO report found procedures in place at USFWS to base listing decisions on best available science. In that report, however, the GAO cited continued concerns over agency use of best available science in critical habitat designations.
More recently, critics have charged that both listing and critical habitat decisions have been tainted by political interference. In late 2007, an Idaho District Court judge ruled that USFWS must reconsider its refusal to list the greater sage grouse under the ESA and the USFWS decided to revisit a number of decisions on listing, critical habitat and recovery plans that may have been tainted by political interference by Julie MacDonald, a former Department of the Interior political appointee.
"Listing" refers to the process by which a species is formally designated as a threatened or endangered species. Currently there are more than 1,260 species listed as endangered or threatened under the ESA. Anyone can submit apetition to the federal government to have a species listed. However, that petition must include scientific information that explains why listing is necessary. The two federal agencies that receive petitions are the USFWS and the NMFS. These agencies have a year to evaluate the species for listing. Either agency can also start the process without a petition.
After evaluating the species, the agency has three options:
- It can agree that a species should be listed, that is, it concludes that the listing is "warranted" in all or a specific part of its range.
- It can decide that listing is not justified, that is "not warranted."
- It can conclude that while adding the species to the list is justified, other species have a higher priority; that is, listing is "warranted but precluded."
Regardless of what decision the agency makes, it must publish its decision in the Federal Register and explain how it reached its conclusion.
After years of litigation, the USFWS and several environmental groups announced a proposed settlement agreement in May 2011, under which the agency would make listing decisions on each of the more than 250 species on its 2010 candidate list within the next six years. In return, the environmental groups agreed not to sue to compel 90-day and 12-month findings on new listing petitions they submit and to limit the number of listing petitions they submit per year. In a separate settlement agreement with the Center for Biological Diversity in July 2011 (see CBD's announcement here), the USFWS agreed to specific deadlines for listing decisions on additional species. See "Wolverine, other species jump to top of endangered review list with agreement," Missoulian, 7/17/11. U.S. District Court Judge Emmet Sullivan approved both settlement agreements on Sep. 9, 2011. See "Federal judge OKs deal on imperiled species," Great Falls Tribune, 9/9/11. In February 2013, the FWS issued a work plan with target dates for these listing decisions over the next five years. See "Endangered or not, but at least no longer waiting," New York Times, 3/6/13.
Endangered or Threatened in Part of its Range
In March 2007, the USFWS adopted a policy that allows it to list a species in only a portion of the species' range. The policy was controversial, with critics claiming that it would allow the agency to avoid listings and avoid restoring species' historic ranges. In December 2009, a group of scientists sent a letter asking Secretary of the Interior Salazar to change the 2007 policy guiding agencies' determination of whether a species is endangered or threatened. The old policy, they argue, wrongly limits the analysis to the present habitat range and fails to examine historic range.
In 2010, litigation concerning the Northern Rockies population of the gray wolf invalidated this policy (see discussion below). The USFWS subsequently withdrew the 2007 memo and announced a draft policy in December 2011 that would end the practice of classifying species differently based on state lines. See Washington Post story on the significance of the change here.
This flowchart depicts the normal listing review process used by the USFWS. For a current list of candidate species for listing, see the USFWS's 11/9/09 Candidate Notice of Review.
Flowchart courtesy of the U.S. Fish and Wildlife Service
The ESA has broad provisions to prevent extinction of plant and animal species. The act prohibits anyone from "taking" a species that has been listed as threatened or endangered. "Take" can be as simple as hunting, shooting or killing a listed animal species. It can also include "harming" a listed species by activities that cause major changes to habitat and leave an animal unable to feed, breed, or find shelter.
When the federal government lists a species as endangered, it is also supposed to identify that species' critical habitat. Critical habitat includes those areas that are important for the species' survival or recovery and which need special management. While a designated critical habitat area is not intended to include all of the potential habitat of the species, it can include habitat that is not currently occupied by the species. The federal government is required to use the best available scientific information in making a decision about critical habitat. The agency can also consider economics when deciding what areas should be designated as critical habitat, although it does not consider economic impacts in a species listing decision.
In February 2011, the U.S. Supreme Court declined an opportunity to weigh in on how economic impacts should be considered in critical habitat designation. For a discussion of the issues and the cases presented for review, see "Supreme Court Won't Hear Critical Habitat Cases," Legal Planet, 2/22/11. In April 2012, the Fifth Circuit Court of Appeals ruled that the agency's denial of citizen groups' petitions to designate critical habitat is not subject to judicial review under the Administrative Procedure Act because it is "committed to agency discretion by law." The Eleventh Circuit followed suit in 2012 (Conservancy of Southwest Florida v. U.S. Fish & Wildlife Service, No. 11-11915, April 18, 2012).
The Secretary of the Interior is not allowed to designate critical habitat at a military site if the Secretary decides that the site has a resource management plan in place that benefits the affected species. In advocating for this relatively new provision, the Pentagon claimed that it is necessary to maintain high standards of military training.
On July 16, 2009, the Interior Department announced plans to withdraw a Bush-era plan for managing forests and protecting spotted owls in the Pacific Northwest because it is "legally indefensible," according to Interior Secretary Ken Salazar. While it revises the withdrawn proposal, the Department reinstated the Northwest Forest Plan, a landmark 1994 agreement reached by the Clinton administration, timber companies and environmentalists. The Department is also asking a federal district court to vacate the Fish and Wildlife Service's 2008 revision of critical habitat for the spotted owl, on which the forest plan was based. See "Limits on logging are reinstated," New York Times, 7/17/09.
The federal agency responsible for a listed species must develop a recovery plan. The plan outlines how it will ensure the species' survival and restore it to the point where it no longer needs the act's protections and can be "delisted" or removed from the list of threatened or endangered species. Examples of recovery efforts include reintroduction of a species into formerly occupied habitat (bald eagles), land acquisition (Florida scrub jays), captive propagation (black-footed ferrets and California condors), habitat restoration and protection (Aleutian Canada geese), population assessments and research (Peter's Mountain mallow), technical assistance for landowners and public education. In most cases, the USFWS or NMFS works with state wildlife agencies, user groups, conservationists, and others in developing such a plan. Because developing and implementing recovery plans is expensive, the agencies focus their efforts on species that would most benefit from a plan. While few species have gone extinct since 1973, only nine have been "recovered" or removed from the list because they no longer need the act's protection.
For additional information, see GAO Report: Endangered Species: Time and Costs Required to Recover Species Are Largely Unknown.
National Wildlife Federation v. State of Idaho
The 9th Circuit Court of Appeals decided in April 2007 that federal agencies must consider potential impacts of proposed actions on both species survival and on a species' chance of recovery. This analysis is necessary when developing a biological opinion to evaluate whether an agency action will result in jeopardy to the species.
For more information on this case, see Court's salmon ruling strengthens enviros' hand on species recovery.
For text of the decision and more information on the endangered salmon litigation, see National Wildlife Federation v. State of Idaho, No. 0635011p - 04/09/2007.
An "experimental population" is a group of individuals of an endangered species that has been established outside the current range of the animals. Animals may be reintroduced to their historical range or to new areas because there is insufficient habitat in the animals' traditional range. Experimental populations are considered threatened, not endangered, and "taking" individual animals is permitted under certain circumstances. Protections of experimental populations vary widely, depending on whether the population is considered "essential" or "nonessential" for species survival. Designation as a "nonessential experimental population" under the 10(j) rule of the ESA assures that endangered species are fully protected from intentional harm, but keeps their presence from restricting current and future land management practices. Use of this special designation helped reduce concerns raised by local communities, landowners and political entities about the intentional release of endangered species that might enter and remain on public lands in their region. The reintroduction of gray wolves to their traditional range in Wyoming and the California condor to its historic range in Arizona are examples of experimental populations that are considered "nonessential" to survival of the species.
The process for "delisting"—removing a species from either the endangered or threatened list, or changing its status from endangered to threatened—is similar to the formal listing process. The process starts with a notice published in the Federal Register. Delisting may include requirements for special management plans to help ensure a healthy population in the future.
Consider, for example, the saga of the gray wolf:
- In 2007, the U.S. Fish and Wildlife Service (USFWS) issued a rule delisting the Western Great Lakes population of the gray wolf.
- Environmental groups filed a lawsuit (Humane Society of the U.S. v. Kempthorne) challenging this rule, and in a 2008 decision (579 F. Supp. 2d 7 (D.D.C. Sep. 29, 2008)) the U.S. District Court for the District of Columbia invalidated the rule, holding that the ESA's language is ambiguous about the authority of the USFWS to delist a population rather than the species as a whole.
- The USFWS issued a new delisting rule for the Western Great Lakes population in April 2009. That rule remains in force today, although it too has been challenged.
- In April 2009, the USFWS issued a rule delisting the Northern Rockies population of the gray wolf, but excepted the "Distinct Population Segment" in Wyoming because that state failed to promulgate an acceptable state management plan for the gray wolf. Several legal challenges followed, with contradictory results:
- Defenders of Wildlife v. Salazar, 729 F. Supp. 2d 1207 (D. Mont. Aug. 5, 2010), vacated the Rocky Mountain delisting rule, on the grounds that the ESA prohibits listing or delisting only part of a distinct population segment. In response, the USFWS reversed its earlier rule with a new rule on October 26, 2010 that reinstated regulatory protections.
- Wyoming v. Dept. of the Interior, 2010 WL 4814950 (D. Wyo. Nov. 18, 2010), declared that the USFWS acted arbitrarily and capriciously by not delisting the wolf in Wyoming. The court remanded the case back to the agency; it did not delist the wolf in Wyoming (but see subsequent action described below).
- Defenders of Wildlife v. Salazar, 2011 WL 1345670 (D. Mont. Apr. 9, 2011), rejected a proposed settlement that would have delisted the wolves in Montana and Idaho, holding that this settlement would violate the ESA for the same reasons that the court rejected the USFWS rule in its August 2010 ruling.
- Congress, in its budget compromise bill in April 2011, included a rider requiring the USFWS to reinstate its April 2009 rule delisting the Northern Rockies population in Washington, Oregon, Idaho, Utah, and Montana. The rider excluded Wyoming because that state lacks "adequate regulatory mechanisms" to protect the wolf.
- The USFWS acted immediately to re-delist the wolves as ordered by Congress. In early May 2011, several environmental groups filed lawsuits attempting to block implementation of the rider on the grounds that it violates constitutional separation of powers, but this argument was rejected by both the District Court and the Ninth Circuit Court of Appeals, so the states proceeded with limited wolf hunting seasons in the fall of 2011.
- In July 2011, the state of Wyoming and the Department of Interior reached an agreement "in principle" (finalized in August 2011) on delisting the wolf in that state, providing for a minimum population of 100 wolves and 10 breeding pairs in the Yellowstone region.
- The USFWS issued its proposed delisting rule for the gray wolf in Wyoming on October 4, 2011. For information on this decision, see:
- USFWS press release (10/4/11)
- Federal Register notice (10/4/11)
- 2011 Wyoming wolf management plan
- "Feds ready to delist wolves in Wyoming, shoot on sight," Ravalli Republic (10/4/11)
- Environmental groups filed two federal lawsuits (one in Wyoming, and one in Washington, D.C.) seeking to force the USFWS to relist the wolf in Wyoming, claiming that the state management plan is inadequate to ensure the species' recovery and that the USFWS should have prepared an environmental impact statement before finalizing the delisting. For a summary of the litigation, see "Court: Dual lawsuits proceed over Wyoming wolves," Denver Post, 4/23/13.
- In a draft plan circulated in April 2013, the USFWS indicated its intention to delist the wolf throughout the lower 48 states. See "Feds draft plan to end protection of wolves," Missoulian, 4/27/13.
Similar battles are underway concerning the USFWS' decision to delist the grizzly bear in the Greater Yellowstone Ecosystem; that delisting was reversed by a federal judge in September 2009, and the reversal was upheld by the Ninth Circuit Court of Appeals in November 2011, in part based on concerns about habitat changes related to global warming. See:
- "Park grizzlies' threatened status appealed in Oregon court," Billings Gazette, 3/7/11
- "You're next: Are grizzlies ready to come off the endangered species list?" Missoula Independent, 5/26/11.
- "Grizzlies need care despite high numbers," Jackson Hole News and Guide, 9/7/11.
- "Yellowstone grizzlies need federal protection, appeals court says," Missoulian, 11/22/11.
- "Yellowstone grizzly bears: New cause celebre for effects of global warming?" Christian Science Monitor, 12/6/11.
- "Bears keep threatened status until at least 2014," Billings Gazette, 4/20/12.
- "Interior leader expects Yellowstone grizzlies delisted by 2014," Flathead Beacon, 7/23/12.
Anatomy of a Delisting — the Bald Eagle
- The bald eagle was originally listed as endangered in 1967 (under the Endangered Species Preservation Act of 1966).
- The eagle was downlisted to threatened in 1995.
- In 1999, USFWS proposed to delist the bald eagle, but never completed the process.
- In February 2006, USFWS issued a new proposal to delist and reopened the public comment period.
- USFWS then extended the comment period to June 2006, but took no final action.
- In August 2006, a U.S. District Court ordered the USFWS to finalize their decision on delisting by February 2007.
- In March 2007, delisting was still being discussed because of controversy over the definition of "disturb."
- On June 28, 2007, the USFWS removed the bald eagle from the list of threatened and endangered species. The two main factors that led to the recovery of the bald eagle were the banning of the pesticide DDT and habitat protection afforded by the Endangered Species Act for nesting sites and important feeding and roost sites.
For more information on the delisting as well as continued eagle protections and management guidelines, see the USFWS website.
Section 7 of the ESA has been at the center of much of the debate over endangered species protection. Section 7 says that federal agencies must make sure that none of their actions, or any action they authorize or fund, is likely either to jeopardize the existence of a listed species or to damage its critical habitat. To meet this requirement, federal agencies considering taking some action—from selling timber to re-issuing a grazing permit or permitting a new dam—must "consult" with the U.S. Fish and Wildlife Service (USFWS), for land-based species, or the National Marine Fisheries Service (NMFS), in the case of sea life or salmon and steelhead. The agencies usually use an informal process to determine whether formal consultation is necessary.
Typically, the agency that wants to take an action will informally consult with USFWS or NMFS, asking whether there are any proposed or listed threatened or endangered species or critical habitat in the project area. If the answer is "yes," then the consulting agency (also known as the "action agency") must do a biological assessment (BA) to assess what impact its action might have on the species or habitat. (See flowchart below.) The contents of the BA are left to the discretion of the action agency, but USFWS regulations suggest the following:
- The results of an on-site inspection of the affected area;
- The views of recognized experts on the species at issue;
- A review of the literature and other information;
- An analysis of the effects of the action on the species and habitat;
- An analysis of alternate actions considered by the action agency.
If the assessment indicates that there will be no impact, and the USFWS or NMFS agrees, then informal consultation is over and the project can go forward. If the BA indicates that the action is likely to have an effect, then informal consultation is over and "formal consultation" begins. During the informal consultation, the USFWS or NMFS may suggest project modifications that the action agency could take to avoid the likelihood of adverse impacts.
In some cases, federal courts have ruled that consultation is simply not necessary, due to the nature of the decision process. For example:
- The U.S. Supreme Court decided that EPA does not have to consult with USFWS when it empowers states to issue certain Clean Water Act permits. Once they have primacy, the states can follow their own more lenient rules regarding endangered species protection in issuing permits. National Association of Home Builders v. Defenders of Wildlife, S. Ct., June 25, 2007
- The 10th Circuit Court of Appeals ruled that implementation of a Forest Service land and resource management plan (LRMP) does not necessarily require consultation under the ESA. The standards, guidelines, policies, etc. of a plan are not "agency actions" requiring consultation with USFWS. Forest Guardians v. Forsgren, 10th Circuit, February 23, 2007
- The 9th Circuit Court of Appeals ruled that a Forest Service decision that a proposed mining operation may proceed based on the mining company's "Notice of Intent" does not constitute an "agency action" that triggers ESA consultation. The key point was that the agency decided not to require a plan of operations, and thus this constituted "inaction" rather than "action," according to the appellate court. Karuk Tribe of California v. United States Forest Service, 9th Circuit, No. 05-16801, Apr. 7, 2011.
In December 2003 several land management agencies, USFWS, and NMFS adopted new regulations that exempt National Fire Plan projects from the informal consultation process. For more details on these regulations, see Controversies: Special Rules for National Fire Plan Consultation.
Flowchart courtesy of the U.S. Fish and Wildlife Service
If a federal agency informs the USFWS or NMFS that a proposed action might affect any proposed or listed threatened or endangered species or critical habitat (typically done as part of a BA), the agencies begin a formal consultation process. In this process, the USFWS or NMFS prepares a biological opinion (BiOp)—a detailed evaluation of the impacts on the species and critical habitat—based on the BA produced by the action agency. The BiOp thoroughly explains the current status of the species and describes how the proposed action would affect the species. The USFWS (or NMFS) can come to one of three conclusions in its BiOp:
- If the opinion concludes that the action will not adversely affect the species (a "no jeopardy" opinion), the action can go forward.
- If the BiOp concludes that the action could harm the species, the USFWS or NMFS typically proposes a set of mitigation measures ("reasonable and prudent" alternatives) that would allow the activity to proceed.
- It is also possible, though rare, for the BiOp to conclude that the action would jeopardize the species and that there are no effective mitigation measures. The practical result of such an opinion is that the agency either has to revise its proposal, abandon it altogether, or try to invoke an exemption from the Endangered Species Committee. See the "Exemptions" section of the Endangered Species Act: Consideration of Economic Factors.
Whichever conclusion it reaches, the agency has to explain how it determined that the action would, or would not, jeopardize the species that is the subject of the opinion.
Flowchart courtesy of the U.S. Fish and Wildlife Service
For more information on consultation, see the USFWS ESA web site.
On December 16, 2008, the USFWS and NMFS announced a final rule that changes the consultation process, allowing agencies to skip the wildlife consultation process if they believe there would be little harm to a species. Within two weeks, the State of California filed a lawsuit challenging this rule, claiming that the federal action could put listed species in greater jeopardy and could burden the state financially. In January 2009, members of the 111th Congress joined the call to repeal these controversial changes, and immediately after President Obama's inauguration, his chief of staff issued an order to halt all pending federal regulations until the new White House team conducted a legal and policy review of the last-minute Bush administration rules. In late February, the House of Representatives approved an omnibus appropriations bill that included a provision authorizing the repeal of this controversial rule, and on March 3, 2009, President Obama announced that he had signed a memorandum directing the Interior and Commerce departments to review the regulation. In early April 2009, a group of environmental organizations called on the administration to reverse the controversial rule; they were joined by leading Senate Democrats later that month. Secretary Salazar announced full repeal of the rule on April 28, 2009.
In a report released in May, 2009, the U.S. Government Accountability Office concluded that the USFWS has no established way to track cumulative threats or injuries to most of the imperiled species the agency is charged with protecting. The report, titled "The U.S. Fish and Wildlife Service Has Incomplete Information about Effects on Listed Species from Section 7 Consultations," found that FWS lacks a systematic way to track required monitoring reports or the harm to or death of protected species. Instead, the agency relies on individual biologists to maintain crucial species information. Thus, the retirement or loss of a biologist could be disastrous for the agency and the species it protects.
Participation by private landowners is extremely important to the protection and recovery of listed species because many listed species depend on private lands for habitat during at least part of their lives. Several federal policies and grant programs are designed to help landowners cooperate in protection of listed species.
A Habitat Conservation Plan (HCP) is developed to help protect species from being harmed by activities on private lands and, at the same time, to protect private landowners from liability under the ESA. Sometimes, a private landowner finds out that a planned project (for example, a housing development) may harm or "take" an endangered species. By developing an HCP, the non-federal entity can get the permits it needs to proceed. An HCP outlines what actions the private party plans to take in order to minimize, or mitigate, the impact of his or her actions on the endangered species. When the U.S. Fish and Wildlife Service (USFWS) signs off on an HCP, it generally gives permission to the private party to "take" endangered species as an incident to the development activity (it issues an "incidental take permit"). Plans can be developed for listed threatened or endangered species and for other rare species. Including unlisted species in an HCP can provide early protection that might keep those species from needing to be listed in the future.
For more information on HCPs, see the USFWS Habitat Conservation Planning website.
No Surprises Policy
In an effort to encourage private property owners to protect endangered species and their habitat, federal agencies have developed a "no surprises" policy that can be written into an HCP. This policy promises the private landowner that if he or she develops an HCP in good faith and the federal agency later concludes that additional measures (e.g., protection of more land) are needed to protect the endangered species, the federal agency cannot require the private landowner to do anything more than what he or she already has committed to do. In other words, the private party who commits to helping to conserve an endangered species doesn't have to be worried about a "surprise" down the road.
When the USFWS approves an HCP, it issues an "incidental take" permit that prevents the private property owner from being prosecuted if an endangered species is incidentally killed or injured during the development. Because several conservation groups and an Indian tribe were concerned that there would be no recourse for a species in peril of extinction, the USFWS created a new rule, the permit revocation rule, which allows the agency to revoke incidental take permits, despite the "no surprises" policy, when incidental takes would "appreciably reduce the likelihood of survival and recovery of the species in the wild."
For more information on Incidental Take Permits, see Process Essentials: ESA Exceptions or Exemptions.
One way developers can fulfill a promise to mitigate damage to a species is through the use of conservation banks. Conservation banks are lands acquired and managed for specific endangered species. The lands are usually protected permanently by conservation easements. Once a conservation bank is established, the "banker" may sell a fixed number of "mitigation credits" to developers to offset adverse effects of the developer's project on a species. These effects may include destruction of some of the species' habitat or disturbance of the species from increased activity in the area of the development.
The banks operate on the theory that species conservation will be most effective, and people will be most willing to participate in conservation efforts, if everyone benefits from conserving species. Conservation banking benefits all parties:
- Species benefit from protection of much-needed, secure habitat.
- Developers benefit because they can go forward with the development and receive an incidental take permit. Buying credits is easier, and usually more economical, for the developer than developing an individual mitigation project.
- Owners/managers of the conservation banks benefit monetarily through the developers' purchase of mitigation credits.
Some private landowners are unwilling to adopt conservation measures that improve habitat for threatened or endangered species on their land for fear that their future development decisions would then be limited by the presence of the endangered species. Unfortunately, that restricts the amount of privately owned land available for use by threatened and endangered species. Safe Harbor Agreements are designed to get around this conflict. The agreements assure landowners who voluntarily improve habitat for endangered species that their future land development won't be limited if they attract endangered species to their property or increase their numbers.
Title V of the Healthy Forests Restoration Act requires the Secretary of Agriculture to establish a healthy forests reserve program for the purpose of restoring and enhancing forest ecosystems to improve biodiversity, enhance carbon sequestration, and promote the recovery of threatened and endangered species. The program provides both funding and technical assistance to landowners who volunteer to enroll their land. Safe harbor agreements and other assurances will be made with the landowners as part of the program.
For more information on the reserve program, see Healthy Forests Restoration Act: Title 5.
Candidate Conservation Agreements with Assurances (CCAA) are agreements made between the USFWS or NMFS and landowners. These formal agreements are created to address the specific conservation needs of a particular species, in hopes of keeping it off of the endangered or threatened species lists. The private parties to these agreements voluntarily commit to manage their land and water to decrease current and future threats to a species, so that the population of that species may thrive without federal protection. In exchange, the owners receive assurances from the agency, much like the "no surprises policy" of an HCP, that they will not be required to do more than what they agreed to when they entered the agreement. In order to receive the assurances, the landowner's management activities must significantly contribute to eliminating the need to list the covered species. Species covered in a CCAA may include both animals and plants, and either candidates for listing or species that have already been proposed as threatened or endangered.
Not all species are created equal under the ESA. Different categories of species receive different protection. There are three types of species in the ESA listing process:
- Listed species (either as endangered or threatened);
- Proposed species;
- Candidate species.
Protection under the ESA also differs between plants and animals, and between species listed with or without a critical habitat designation.
A "listed species" is any species of fish, wildlife, or plant that has been determined, through the full, formal ESA listing process, to be either threatened or endangered. Endangered species receive the full protections of the ESA—protection from "takings" and other specific prohibited acts (like commercial trade in the species), designations of critical habitat, requirements for Section 7 consultations, and recovery plans. Threatened species are protected with critical habitat designations, Section 7 consultations, and recovery plans, but they are only protected from takings and other prohibited acts if the USFWS or the NMFS decides it is necessary to do so.
A "proposed species" is any species of fish, wildlife, or plant that has been formally proposed for listing as either a threatened or endangered species under the ESA. The USFWS or the NMFS publishes a proposal to list the species—a "proposed rule"—in the Federal Register, prior to making a final decision to list the species by publishing a "final rule." Proposed species are not protected from "takings" or other prohibited acts, but the USFWS or NMFS can propose critical habitat for them. Federal agencies must follow the Section 7 consultation process for proposed species in order to avoid jeopardizing the species or destroying its proposed critical habitat.
"Candidate species"are plants and animals on a "waiting list" for threatened or endangered status. This means the USFWS or NMFS has sufficient information to list these species, but other, higher-priority species have to be listed first—the agency has concluded that a listing is "warranted but precluded." Candidate species are not legally protected under the ESA, but USFWS and the NMFS encourage partnerships to protect them because effective conservation might reverse their decline and ultimately eliminate the need for ESA protection.
Under the ESA, plants and animals have the same protections from most "prohibited acts"—import-export, possession, transport, or commercial dealing in the species. They have similar protections from more direct harm: it is illegal to kill, harm, harass, or even hunt (collectively called "take") listed animal species; listed plants cannot be picked, dug up or destroyed. Animals are protected from these actions on all lands, but plants are only protected on federal lands unless there is a state law that also protects them.
Critical Habitat Working Group
The working group's process did not achieve consensus, but it clarified some of the central issues of critical habitat.
For more information, see The Keystone Center website.
Only about 12 percent of listed species have a designated critical habitat area. According to the USFWS, a critical habitat designation affords little extra protection to most listed species. The agency has, therefore, used its limited staff and funding to list more species rather than spending resources on designating critical habitat. In some cases, the agency decides not to designate critical habitat in order to better protect the species. Sometimes a critical habitat designation may do more harm than good: it can provoke public hostility, it can make a species like a rare cactus easier to locate, or it can create the misconception that land outside the designated critical area has no value to the species.
Having a critical habitat designation only gives extra protection to a species if there is a federal agency involved, and then only under certain circumstances. If there is no federal agency involved in a project (for example, when a landowner builds a housing development on private land without federal funding or a federal permit), there is no extra protection for the species if the land has been designated as critical habitat. If a federal agency is involved (e.g., in issuing a permit for the housing development), a critical habitat designation may make a difference during the Section 7 consultation process.
In a Section 7 consultation, the agency must consult with the USFWS or NMFS to ensure that its actions will not jeopardize the survival of the species or destroy or adversely modify critical habitat. In most places, ensuring that its actions won't jeopardize survival of the plant or animal provides at least as much protection as protecting the species' critical habitat. Protecting its critical habitat could provide extra protection to the species if the land being developed were currently "unoccupied" by the species, but were nonetheless important to its future survival.
The ESA provides strong protection for threatened and endangered species, but a few exceptions to the law are available through the USFWS, the NMFS, or the Endangered Species Committee after following a formal application process. These exceptions/exemptions allow individuals or agencies to do a variety of things that are otherwise prohibited, like transporting or even causing the death of a listed animal, without fear of prosecution. The most common exceptions are for:
- permits for scientific purposes or to enhance the propagation or survival of the species;
- permits for "incidental take" of a species during otherwise lawful activities; and
- exemptions for federal agency actions, granted by the Endangered Species Committee.
The USFWS and the NMFS can issue permits for scientific purposes or for projects that enhance the propagation or survival of the species. For example, the agency might issue a permit for a project designed to establish or maintain a new population of wolves, lynx, or condors. While the intention of the recovery team would be to better understand the species to help it survive, biologists might harass an animal while trying to capture it and might even inadvertently kill it in transport. Or the team might need to intentionally kill it for a special medical test or because an individual from an experimental population threatens livestock.
USFWS or NMFS can issue permits to either federal agencies or private landowners for taking a species (harming or killing it or destroying its habitat) if the taking is "incidental to," and not the purpose of, the action. To apply for this kind of permit, the individual, corporation, or state or local government has to prepare a Habitat Conservation Plan (HCP). The permit applicant must describe actions he or she will take to minimize and mitigate impacts to the species. The applicant must also justify why there is no reasonable way of completely avoiding a potential taking.
Federal agencies have a special duty under the ESA to make sure that their actions don't harm threatened or endangered species or their critical habitat. If an agency completes the Section 7 consultation process and is told that its proposed action is likely to jeopardize a species or damage its habitat, the agency can apply for an exemption that would enable it to go ahead with its proposed action (e.g., building a visitor center, operating a dam, or issuing just about any kind of permit or license). The project permittee or licensee, or the governor of the state affected by it, can also apply for the exemption. The final decision on whether to grant an exemption is made by the Endangered Species Committee (the so-called "god squad") after an elaborate public process. The seven-member committee includes several cabinet members, the chairman of the Council on Environmental Quality (CEQ), and other high-level appointees.
When granting an exemption, the committee must develop reasonable mitigation and enhancement measures to minimize the negative impacts of the agency's action. The committee has been convened only three times—for the snail darter fish in Tennessee, the spotted owl in Oregon and the whooping crane in Nebraska.
"Recovery crediting" is a conservation tool being proposed to provide incentives for private landowners to conserve endangered species and act as environmental stewards of the nation's natural resources. The recovery crediting system would work like other mitigation banks – the system would create a "bank" of credits that federal agencies can accrue through conservation actions on non-federal lands. Agencies could later use these conservation credits to offset the effects of their actions on the species on federal lands. Proponents argue that agencies will benefit in terms of greater flexibility in their operations on federal land; landowners will benefit from revenue for managing their land for the species; and the species will benefit in having more habitat being managed and protected. Skeptics of the system, like the Center for Biological Diversity, question a program that allows the destruction of habitat on public lands in exchange for arguably less secure protection of the species on private lands. The Center is particularly critical of the Ft. Hood, Texas pilot project where recovery crediting is being tested to mitigate military exercises that threatened the golden-cheeked warbler and other birds. Here, the Center charges that there is little accountability for federal dollars going to private landowners because public knowledge and oversight of the program is very restricted.
Interaction with Other Laws
The ESA has been described as the "bulldog" of environmental laws, in part because it applies so broadly and with so little room for administrative discretion. Its mandates may trump or otherwise strongly influence the implementation of other federal laws or programs, as has been the case with forest plans, public land grazing programs, and water management.
On June 1, 2012, the Ninth Circuit Court of Appeals issued a ruling that confirmed the reach of the ESA over activities governed by the Mining Law of 1872, although the court limited the circumstances for this application. The plaintiff in the lawsuit, the Karuk Tribe of California, filed the action to protect the threatened Coho salmon from recreational mining in 35 miles of the Klamath River and its tributaries in Northern California. In July 2005 U.S. District Court Judge Saundra B. Armstrong for the Northern District of California ruled against the tribe, and initially a three-judge panel of the Ninth Circuit upheld that decision.
In the June 2012 decision (Karuk Tribe of California v. U.S. Forest Service), the court said that federal land managers must engage in an ESA Section 7 consultation when considering notices of intent (NOI) to conduct hard rock mining activities that "might" disturb the surface lands.
Changing agency regulations — rather than enacting new legislation — to change the way public resources are managed is not new. The Clinton administration did it with the Roadless Rule, Forest Service planning regulations, new mining reclamation regulations, and other executive branch actions. The Bush administration reversed (or tried to reverse) most of those changes and has aggressively used both formal and informal rulemaking processes to make its own changes in public lands management. Examples include agency categorical exclusions from NEPA, Clean Air Act regulations and proposed grazing regulations.
Many of these proposed or final rule changes have been controversial with both the public and Congress. The Bush Administration's draft proposals to substantially change rules regarding endangered species quickly raised the ire of both Congress and the public in early 2007. The draft regulatory changes included many of the changes that Republicans had unsuccessfully sought to make through ESA reform in previous sessions of Congress. After the draft proposal was leaked in March 2007, the USFWS quickly denied that the proposals represented current thinking on regulatory reform, even though the draft, written in June 2006, had been revised in February 2007.
In November of 2008, the Bush Administration indicated its intention to issue new regulations that would change the consultation process significantly, shifting responsibility for determining whether an action would impact protected species to the federal agencies directly involved in the action, not the U.S. Fish & Wildlife Service or the National Marine Fisheries Service. The USFWS and NMFS announced the final rule on December 16, 2008.
On December 15, 2008, the Interior Inspector General issued a scathing 141-page report, "The Endangered Species Act and the Conflict Between Science and Policy." Singling out former Deputy Assistant Secretary of Fish, Wildlife and Parks Julie MacDonald for criticism, the report blasts the processes by which many ESA decisions were made during her tenure. Another recent report, this one by the Government Accountability Office, concludes that the agencies responsible for implementing the ESA have cooperated well with one another but done poorly in decisions regarding critical habitat designation. See Endangered Species Act: Many GAO Recommendations Have Been Implemented, but Some Issues Remain Unresolved, GAO-09-225R, 1/21/09.
Immediately after President Obama's inauguration, his chief of staff issued an order to halt all pending federal regulations while the new White House team conducted a legal and policy review of the last-minute Bush administration rules. In 2009, the Interior and Commerce secretaries withdrew the Bush Administration's relaxed rules on Section 7 consultations (see discussion here). But, in early 2011, the USFWS included a provision in the President's budget bill that requests a cap on the total amount of money the agency can spend to process citizen petitions to list species. For a blog by a law professor critical of this proposal, see "A risky FWS proposal to limit ESA petitions," Legal Planet, 4/4/11.
On May 26, 2011, the Department of the Interior announced a joint process (involving the USFWS and NOAA's Fisheries Service) to improve implementation of the ESA through changes in practices, guidance, policies, and/or regulations, consistent with President Obama's Executive Order 13563, "Improving Regulation and Regulatory Review." The agency is not seeking any changes in legislation, but has identified four areas for regulatory reform to improve implementation of the ESA. See the USFWS Regulatory Reform website for details.
In December 2003, several federal agencies jointly enacted regulations designed to streamline the consultation process on proposed projects that support the National Fire Plan. This alternative consultation process eliminates the need to conduct informal consultation with USFWS and NMFS for National Fire Plan projects. Under the new process, the USFWS or NMFS will develop an Alternative Consultation Agreement (ACA) with action agencies (Forest Service, Bureau of Indian Affairs, Bureau of Land Management and National Park Service). With an agreement in place, USFWS or NMFS will train the agencies to make independent determinations of whether their fire plan projects are likely to adversely affect protected species. Projects might include prescribed fire, thinning and removal of fuels, emergency stabilization, burned area rehabilitation, road maintenance and ecosystem restoration. This process is designed to accelerate the rate at which the agencies process fire projects without changing the actual standards for Section 7 consultations.
Alternative Consultation Agreements must include:
- Who will make determinations;
- Procedures for training to make determinations;
- Standards for assessing the effects of a project;
- Provisions for incorporating new information, species, or critical habitat into the analysis;
- Monitoring and periodic program evaluation; and
- Provisions for the action agency to maintain a list of Fire Plan Projects for which it has made determinations.
Critics of this exception contend that the ESA requires at least informal consultation and do not believe that the land management agencies will have the expertise—despite the promise of training—to make the proper determinations alone. Even assuming the agencies have sufficient expertise, critics fear that the conflicting missions of the agencies will lead to decisions less protective of species and their critical habitats. A coalition of environmental groups is challenging the new regulations in court.
For a copy of the new regulation and the agencies' justification of it, see Joint Counterpart Endangered Species Act Section 7 Consultation Regulations in the Federal Register.
For a copy of the ACA, see the USFWS web page on consultation.
For other USFWS recommendations for streamlining Section 7 consultation, see the agency's memorandum on Alternative Approaches to Section 7.
Will fuels reduction projects jeopardize endangered species?
In evaluating the effects of fuels reduction projects on species, the USFWS balances short-term effects of fuels treatment—including destruction of endangered and threatened species' habitat—against the long-term benefits of the projects. Long-term benefits may include:
- Reestablishing native vegetation;
- Reestablishing a natural fire regime; and
- Reducing risk of catastrophic fires.
Successful collaboration is hard work, and it depends on having the right folks working on the right issues at the right time. (See the RLCH Collaboration Handbook for specifics on starting and maintaining collaborative processes.) Endangered species issues are more frequently litigated than collaborated, but a few have been tackled in collaborative processes. These processes range from high-level, multi-state and multi-party negotiations to state-level planning processes and more local, project-specific discussions. Political realities and pending or threatened actions prompt and guide collaboration to avoid listing (sage grouse), deal with "warranted but precluded" opinions that delay listing (black-tailed prairie dogs), develop and implement recovery plans (fish and birds on the Platte and Colorado Rivers), and facilitate delisting (gray wolves in Montana, Idaho and Wyoming). Political realities can also make continuing with collaboration futile.
The Bitterroot Mountains of Idaho and Montana provide an example of a highly controversial collaboration on hold. In the Bitterroots, the prospect of grizzly bear reintroduction under the ESA spurred creative collaboration. Many view the bear as a "bloodthirsty predator" while others "celebrate it as the living symbol of wilderness." A coalition of conservationists, timber mill owners, and timber workers designed a citizen management committee for the federal government's reintroduction plan. The plan, finalized by the USFWS under the Clinton Administration, would have established a management committee of fifteen members nominated by the governors of Montana and Idaho and the Nez Perce Tribe. Additional members would also represent the U.S. Forest Service and the USFWS. The plan charged the committee with making decisions that would "lead toward recovery of the grizzly bear in the Bitterroot ecosystem and minimize social and economic impacts." If the plan is ever implemented, this committee will be the first of its kind to share management authority with the USFWS.
While the USFWS adopted the plan in late 2000, the Department of the Interior under Gale Norton initiated a process to kill the plan by proposing to adopt a "no action" alternative for grizzly reintroduction. After receiving thousands of comments in support of the reintroduction, as well as strong opposition to it from Idaho and the Idaho congressional delegation, the Department has neither implemented the plan nor followed through with officially abandoning it.
In 2010, the U.S. Fish and Wildlife Service issued its decision not to list the Greater sage-grouse as endangered (the agency found that listing was "warranted but precluded" by other priority listing actions). See "No endangered status for Plains bird," New York Times, 3/5/10.
That same year, the USDA Natural Resources Conservation Service launched the Sage Grouse Initiative, which incorporates federal funding for working lands protection (through Farm Bill funding) and a variety of interagency collaborative approaches to protect sage grouse habitat and avoid the need to list the species in the future.
For its part, the Western Governors Association (WGA) adopted a policy position in 2011 urging expansion of this cooperation and support for state and local efforts to implement conservation strategies. In December 2011, the Western Governors Wildlife Council presented a report to the WGA that outlines the measures underway by state and local authorities to conserve sage-grouse habitat, and in the same month the Bureau of Land Management issued instructions for its managers to follow in implementing Greater sage-grouse recovery, including an interim instruction memorandum (IM) No. 2012-043 and a planning strategy IM No. 2012-044.
In February 2013, the BLM issued a Resource Management Plan for the Lander WY area that is intended to implement the state policy for protecting the Greater sage grouse.
Endangered Species Act of 1973, 16 USC sections 1531 to 1544.
The text of the ESA can be viewed at the U.S. Fish and Wildlife Service web site.
Endangered Species Act Regulations
Can be found in 50 CFR sections 17.1 to 17.23.
U.S. Fish and Wildlife Service
An extensive web site with information on the USFWS's Endangered Species Program, including information on species and species lists, laws, publications, and links to other agencies and sites about the ESA.
National Marine Fisheries Service
The NMFS website has general information on the ESA and specific information on the role of the NMFS in implementing it. The site also focuses on marine-related life in general.
Endangered Species Link
A site listing web links under a wide array of topics. It has good definitions and is very easy to navigate. The web site covers just about every possible aspect of the ESA.
Defenders of Wildlife
The Defenders web site has a section on the ESA with an explanation of how it works, success stories, and a discussion of misinformation about the ESA and its consequences.
National Endangered Species Act Reform Coalition
The National Endangered Species Act Reform Coalition is a group of organizations dedicated to improving the ESA. The web site has information about effects of the ESA, including the negative effects. The site includes news and op-ed pieces, as well as a map linking to lists of species for each state.
Girls’ Labor and Leisure in the Progressive Era
The central story in many textbooks is one of tireless reformers committed to protecting the poor and helping vulnerable children by eliminating child labor and expanding education. Working on all levels, reformers expanded educational opportunities and increased literacy rates by reforming education from kindergarten through high school.
Florence Kelley, a pioneering social reformer featured in many textbooks, formed the National Child Labor Committee and employed photographer Lewis Hine to document the experiences of children. Hine’s photographs led in part to the era’s notable campaigns and legislation against child labor. His images of children working in mines, factories, and sweatshops illustrate all textbook chapters on the Progressive era.
Yet textbooks minimize parts of the past that alter the collective portrait of reformers as unfaltering. Textbooks underplay the influence of Southern entrepreneurs and government officials who either fought or ignored child labor laws. Not only were reformers less successful than portrayed, but they were also not wholly responsible for all the gains for which they took, and are still given, credit. Child labor, for example, was already on the decline when reformers marshaled efforts against it.
Also missing is discussion of reformers' motivation. Reformers' feelings about immigrants, migrants, and their potential impact on American society and culture ranged from compassion to apprehension. Similarly, reformers' confidence in the superiority of middle-class values—especially childhood ideals—guided many of their efforts.
Reformers’ principles about girlhood found expression in institutions providing domestic training and maternal protection. Reformers’ racial attitudes led them to establish segregated facilities. Their beliefs in the benefits of country life blinded them to the problems faced by young rural workers, especially African Americans and Hispanics, who labored on family farms and in commercial agriculture in the South and Southwest.
While the Progressive era is one of the few periods in which girls appear in the national narrative, a few photographs and a sprinkling of quotes in textbooks do not establish significant historical visibility. Girls are largely subsumed within the broader category of (adult) “women,” even though the term “working girls” used by contemporaries better describes the youthful composition of the feminized labor force at the turn of the century.
When categorized as children, girls’ historical prominence is overshadowed by examples, illustrations, quotations, and questions about boys. Moreover, the relationship of gender differences within racial and ethnic categories is rarely recognized. It was, however, critical in determining young people’s opportunities and obstacles, responsibilities and rights.
Textbook descriptions that feminize, sentimentalize, and minimize girls' labor are based on assumptions about girls as dutiful daughters. Yet the girls who assumed family responsibilities, called "Little Mothers" by reformers, complained about being saddled with babies while brothers played in city streets. Along with European immigrant girls, Chinese mui tsai indentured servants—not mentioned in textbooks—also complained about having to carry and care for the children of Chinese merchant families in California (see Primary Source "Mui Tsai"). Images of girls ornament chapters, but they do not effectively illustrate the diversity of their lives or illuminate girls' value to families, economies, and cultures.
Going beyond the textbook emphasis on girls as family helpers provides insight into the everyday lives and expectations of girls, and reframes the narrative to include a variety of girlhoods that competed for dominance during the Progressive period. While immigrants transported a variety of Old World notions of girlhood deeply rooted in gendered beliefs, national traditions, and ethnic customs, their Americanizing daughters shed Old World attitudes along with their homemade European costumes.
Girls' agency took place within immigrant families where daughters taught their parents English and American customs and arbitrated with landlords and shopkeepers. Jewish girls in the Northeast, Chinese girls on the West Coast, and those in small towns also ran errands for their families, generating economic, social, and cultural capital. Picturesque depictions of girls on commercialized trade cards document the reality of girl couriers and consumers working in the private and the public sphere, as well as in informal and household economies (see Primary Source "Going Momma's Errands").
Girls also expanded the customary notion of “work” to include activism, which further reshaped the dominant framework. At age 16, for example, Pauline Newman and her girl friends organized 10,000 families to protest rent hikes in New York City (see Annotated Bibliography, Bartoletti, ch. 3).
Similarly, and much to the consternation of parents, freedom-seeking daughters purchased consumer goods and challenged courting traditions. Some parents turned to the police, courts, and reformers for help with their “reckless” daughters whose vibrant mixed-sex (“heterosocial”) youth culture redefined social values and sexual mores in modernizing America.
This adolescent reinvention of modern girlhood increased clashes with reformers whose middle-class upbringing informed the gender ideals they fostered. Progressives feared that tawdry commercial culture imperiled girls’ morals and manners.
The alleged "girl problem" stirred reformers to raise the age of consent, expand status offenses, and incarcerate the "incorrigible." Seeking to provide more wholesome and feminine forms of entertainment and education, reformers offered classes on cooking and childcare in settlement houses and established scouting organizations. In schools, the word problems in revised elementary arithmetic books similarly conveyed educational reformers' conventional goals for girls (see Primary Source "Essentials of Arithmetic Primary Book").
The expansion of middle-class ideals about girlhood found expression in Hine's photographs of child laborers as well. Images of destitute girls typically pathologize working-class beliefs that obligated children to contribute to the family economy. Images of laboring girls stand in contrast to photographs that model the lifestyle and ideals of girlhood among the rising middle class, images that reflect the shifting social value of children from economically useful to "emotionally priceless." In one Hine photograph, middle-class girls playing with a doll instead of making one portray the ideal of modern American girlhood (see Primary Source Lewis Hine Photographs). In representations like these, girls did "cultural work," reinforcing and reifying middle-class ideals.
Textbooks often marginalize girls’ role in shaping the past. In contrast, historians have worked, especially in recent decades, to:
- investigate girls’ everyday lives;
- use intersecting categories of analysis;
- examine girlhood as a social construction;
- deconstruct discourses (e.g., female adolescent sexuality), and
- interrogate girl-centered sources for evidence of consensus and conflict, continuity and change.
There are still attitudes within most societies that view symptoms of psychopathology as threatening and uncomfortable, and these attitudes frequently foster stigma and discrimination towards people with mental health problems. Such reactions are common when people are brave enough to admit they have a mental health problem, and they can often lead to various forms of exclusion or discrimination – either within social circles or within the workplace.
What is mental health stigma?: Mental health stigma can be divided into two distinct types: social stigma is characterized by prejudicial attitudes and discriminating behaviour directed towards individuals with mental health problems as a result of the psychiatric label they have been given. In contrast, perceived stigma or self-stigma is the internalizing by the mental health sufferer of their perceptions of discrimination (Link, Cullen, Struening & Shrout, 1989), and perceived stigma can significantly affect feelings of shame and lead to poorer treatment outcomes (Perlick, Rosenheck, Clarkin, Sirey et al., 2001).
In relation to social stigma, studies have suggested that stigmatising attitudes towards people with mental health problems are widespread and commonly held (Crisp, Gelder, Rix, Meltzer et al., 2000; Byrne, 1997; Heginbotham, 1998). In a survey of over 1700 adults in the UK, Crisp et al. (2000) found that (1) the most commonly held belief was that people with mental health problems were dangerous – especially those with schizophrenia, alcoholism and drug dependence, (2) people believed that some mental health problems such as eating disorders and substance abuse were self-inflicted, and (3) respondents believed that people with mental health problems were generally hard to talk to. People tended to hold these negative beliefs regardless of their age, regardless of what knowledge they had of mental health problems, and regardless of whether they knew someone who had a mental health problem. More recent studies of attitudes to individuals with a diagnosis of schizophrenia or major depression convey similar findings: in both cases, a significant proportion of members of the public considered that people with mental health problems such as depression or schizophrenia were unpredictable and dangerous, and said they would be less likely to employ someone with a mental health problem (Wang & Lai, 2008; Reavley & Jorm, 2011).
Who holds stigmatizing beliefs about mental health problems?: Perhaps surprisingly, stigmatizing beliefs about individuals with mental health problems are held by a broad range of individuals within society, regardless of whether they know someone with a mental health problem, have a family member with a mental health problem, or have a good knowledge and experience of mental health problems (Crisp et al., 2000; Moses, 2010; Wallace, 2010). For example, Moses (2010) found that stigma directed at adolescents with mental health problems came from family members, peers, and teachers: 46% of these adolescents described experiencing stigmatization by family members in the form of unwarranted assumptions (e.g. the sufferer was being manipulative), distrust, avoidance, pity and gossip; 62% experienced stigma from peers, which often led to friendship losses and social rejection (Connolly, Geller, Marton & Kutcher, 1992); and 35% reported stigma perpetrated by teachers and school staff, who expressed fear, dislike, avoidance, and under-estimation of abilities. Mental health stigma is even widespread in the medical profession, at least in part because it is given a low priority during the training of physicians and GPs (Wallace, 2010).
What factors cause stigma?: The social stigma associated with mental health problems almost certainly has multiple causes. Throughout history people with mental health problems have been treated differently, excluded and even brutalized. This treatment may come from the misguided views that people with mental health problems may be more violent or unpredictable than people without such problems, or somehow just “different”, but none of these beliefs has any basis in fact (e.g. Swanson, Holzer, Ganju & Jono, 1990). Similarly, early beliefs about the causes of mental health problems, such as demonic or spirit possession, were ‘explanations’ that would almost certainly give rise to reactions of caution, fear and discrimination. Even the medical model of mental health problems is itself an unwitting source of stigmatizing beliefs. First, the medical model implies that mental health problems are on a par with physical illnesses and may result from medical or physical dysfunction in some way (when many may not be simply reducible to biological or medical causes). This itself implies that people with mental health problems are in some way ‘different’ from ‘normally’ functioning individuals. Secondly, the medical model implies diagnosis, and diagnosis implies a label that is applied to a ‘patient’. That label may well be associated with undesirable attributes (e.g. ‘mad’ people cannot function properly in society, or can sometimes be violent), and this again will perpetuate the view that people with mental health problems are different and should be treated with caution.
I will discuss ways in which stigma can be addressed below, but it must also be acknowledged here that the media regularly play a role in perpetuating stigmatizing stereotypes of people with mental health problems. The popular press is a branch of the media that is frequently criticized for perpetuating these stereotypes. Blame can also be levelled at the entertainment media. For example, cinematic depictions of schizophrenia are often stereotypic and characterized by misinformation about symptoms, causes and treatment. In an analysis of English-language movies released between 1990 and 2010 that depicted at least one character with schizophrenia, Owen (2012) found that most schizophrenic characters displayed violent behaviour, one-third of these violent characters engaged in homicidal behaviour, and a quarter committed suicide. This suggests that negative portrayals of schizophrenia in contemporary movies are common and are likely to reinforce biased beliefs and stigmatizing attitudes towards people with mental health problems. While the media may have become better at publishing anti-stigmatising material over recent years, studies suggest that there has been no proportional decrease in the news media's publication of stigmatising articles, suggesting that the media is still a significant source of stigma-relevant misinformation (Thornicroft, Goulden, Shefer, Rhydderch et al., 2013).
Why does stigma matter?: Stigma embraces both prejudicial attitudes and discriminating behaviour towards individuals with mental health problems, and the social effects of this include exclusion, poor social support, poorer subjective quality of life, and low self-esteem (Livingston & Boyd, 2010). As well as its effect on the quality of daily living, stigma also has a detrimental effect on treatment outcomes, and so hinders efficient and effective recovery from mental health problems (Perlick, Rosenheck, Clarkin, Sirey et al., 2001). In particular, self-stigma is correlated with poorer vocational outcomes (employment success) and increased social isolation (Yanos, Roe & Lysaker, 2010). These factors alone represent significant reasons for attempting to eradicate mental health stigma and ensure that social inclusion is facilitated and recovery can be efficiently achieved.
How can we eliminate stigma?: We now have a good knowledge of what mental health stigma is and how it affects sufferers, both in terms of their role in society and their route to recovery. It is not surprising, then, that attention has most recently turned to developing ways in which stigma and discrimination can be reduced. As we have already described, people tend to hold these negative beliefs about mental health problems regardless of their age, regardless of what knowledge they have of mental health problems, and regardless of whether they know someone who has a mental health problem. The fact that such negative attitudes appear to be so entrenched suggests that campaigns to change these beliefs will have to be multifaceted, will have to do more than just impart knowledge about mental health problems, and will need to challenge existing negative stereotypes, especially as they are portrayed in the general media (Pinfold, Toulmin, Thornicroft, Huxley et al., 2003). In the UK, the "Time to Change" campaign is one of the biggest programmes attempting to address mental health stigma and is supported by both charities and mental health service providers (http://www.time-to-change.org.uk). This programme provides blogs, videos, TV advertisements, and promotional events to help raise awareness of mental health stigma and the detrimental effect this has on mental health sufferers. However, raising awareness of mental health problems simply by providing information about these problems may not be a simple solution – especially since individuals who are most knowledgeable about mental health problems (e.g. psychiatrists, mental health nurses) regularly hold strong stigmatizing beliefs about mental health themselves (Schlosberg, 1993; Caldwell & Jorm, 2001). As a consequence, attention has turned towards some methods identified in the social psychology literature for improving inter-group relations and reducing prejudice (Brown, 2010). These methods aim to promote events encouraging mass-participation social contact between individuals with and without mental health problems and to facilitate positive intergroup contact and disclosure of mental health problems (one example is the "Time to Change" Roadshow, which sets up events in prominent town centre locations with high footfall). Analysis of these kinds of inter-group events suggests that they (1) improve attitudes towards people with mental health problems, (2) increase future willingness to disclose mental health problems, and (3) promote behaviours associated with anti-stigma engagement (Evans-Lacko, London, Japhet, Rusch et al., 2012; Thornicroft, Brohan, Kassam & Lewis-Holmes, 2008). A fuller evidence-based evaluation of the Time to Change initiative can be found in a special issue dedicated to this topic in the British Journal of Psychiatry (Vol. 202, Issue s55, April 2013).
For those of you who would like to test your own knowledge of mental health problems, Time to Change provides a quiz to assess your awareness of mental health problems.
Polynomial functions are useful when solving problems that ask us to find things like maximum income, revenue or production quantities. Finding maximum and minimum values of polynomial functions helps us solve these types of problems. When setting up these functions, we first determine what the problem is asking us to maximize and then set up the function accordingly.
So we've got some word problems I want to talk about. And these are specifically optimization problems, and that just means that we're looking for the maximum or minimum value of some function. And in this case we'll deal with polynomial functions mostly.
Let's read this one. It says: "A farmer currently has 200 crates of apples and can harvest an additional 10 crates per day. The current price of apples is $120 per crate and is expected to drop $4 each day. When should the farmer sell to maximize her income?" Okay.
Well, let's back up for a second. Income. That's going to be our function. And that's what we want to maximize. We want to maximize income. And we want to maximize over time, when should she sell. So -- and the variable is going to be the number of days. So income. Income is going to be the number of crates she sells, times the price per crate.
Now, first of all, I need to get a function for each of these guys, the number of crates. And the problem says that she has 200. 200 crates. And that she can harvest an additional 10 per day. Let me write that as plus 10X. The minute I do, I've identified the variable X as the number of days.
So here X is the number of days she waits. So, for example, if she doesn't wait at all, 0 days, then she'll have 200 crates. Now, the price per crate, this guy, that's also going to be a function of X. It's going to be -- well now the price is $120 and the price drops by $4 each day. So minus 4X.
And that means that the income function, let's call it I of X, is the number of crates, 200 plus 10X times the price per crate, 120 minus 4X.
Now, this is a quadratic function. And the great thing about quadratic functions is we know exactly where the maximum is going to occur. It's going to occur at the vertex. So we're going to make use of that fact.
Now, here if you look at this equation, I can actually tell where the X-intercepts of its graph would be. So I'm actually going to graph this function. One X-intercept is going to be when 200 plus 10X equals 0. That means 10X equals negative 200, so X equals negative 20.
So one intercept is going to be at negative 20. Let me mark that. Negative 10. Negative 20. That's the intercept. And then another one will come from this factor. When does 4X equal 120? When X equals 30. So there's another intercept. Positive 30.
Now, one thing I need to think about is what values of X makes sense in this case? We're not just dealing with a function in the abstract sense. We're dealing with a function that describes income for this farmer after she's waited X days. X can't be negative. So we should probably specify that X is greater than or equal to 0. All right.
Now let me draw a rough graph of this quadratic. It's going to be a downward opening quadratic. We can tell that because the leading coefficient would be negative 40 X squared. So let me just draw this. And because we're not really concerned with the negative part of this graph, right, that would represent a negative number of days, which doesn't make sense, I'm just going to dash this part of the graph.
We're also not really concerned with this part. Now, why would you think that is? Here, the income's negative, right? I'm sure she wants to stay away from this, and of course it doesn't actually make any sense. It represents a time when the price is actually negative. That doesn't make sense. We'll just stay between 0 and 30. And remember the maximum is going to occur right where the vertex is.
Now, the vertex, how are we going to find that? Well, parabolas have the marvelous quality that the vertex is exactly halfway between the intercepts. The vertex, we'll call it X max, the X coordinate where the maximum occurs is going to be the average of these two. Negative 20 plus 30 over 2. That's 10 over 2 or 5. And that's it.
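As a quick cross-check (this step is not in the original lesson; it uses the standard vertex formula instead of averaging the intercepts), expanding the product into standard form gives the same answer:

$$I(x) = (200 + 10x)(120 - 4x) = 24000 + 400x - 40x^2, \qquad x_{\max} = -\frac{b}{2a} = -\frac{400}{2(-40)} = 5.$$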
The problem only asked for us to find the number of days that she should wait. So that her income would be the maximum. 5's the answer. So she should wait five days.
Now, if you're curious about how much she would make, you can easily calculate that by plugging 5 back into the function. And even though the problem doesn't call for it, I'm kind of curious. Let's find income of 5. We have 200 plus 10 times 5, and we have 120 minus 4 times 5. So that's 250 times 100, which is 25,000. She'll make $25,000.
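To make the arithmetic concrete, here is a minimal Python sketch (not part of the original lesson; the function name and the 0–30 day range are illustrative assumptions) that checks the answer numerically:

```python
# Income if the farmer waits x days to sell:
# (crates on hand) * (price per crate), both changing with x.
def income(x):
    crates = 200 + 10 * x   # 200 crates now, plus 10 harvested per day
    price = 120 - 4 * x     # $120 per crate now, dropping $4 per day
    return crates * price

# Only days 0 through 30 make sense: x >= 0, and after day 30
# the price per crate would be negative.
best_day = max(range(31), key=income)

print(best_day)          # 5
print(income(best_day))  # 25000
```

Because the parabola is symmetric about x = 5, waiting 4 or 6 days gives $24,960 either way, just short of the $25,000 maximum.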
Brought to France under the cover of canvas sheets, the first British tanks entered battle against the Germans in September 1916. In his book, Tanks In Battle, Colonel H.C.B. Rogers describes the supplies that were carried into action in the British tanks: “Rations for the first tank battle consisted of sixteen loaves of bread and about thirty tins of foodstuffs. The various types of stores included four spare Vickers machine-gun barrels, one spare Vickers machine-gun, one spare Hotchkiss machine-gun, two boxes of revolver ammunition, thirty-three thousand rounds of ammunition for the machine-guns, a telephone instrument and a hundred yards of cable on a drum, a signalling lamp, three signalling flags, two wire cutters, one spare drum of engine oil, two small drums of grease and three water cans. Added to this miscellaneous collection was all the equipment which was stripped off the eight inhabitants of the tank, so that there was not very much room to move about.”
The training of the crews that went to war in these early tanks had been sub-standard and there had been no instruction in cooperation between the tanks and the infantry. The only point of agreement between the two arms was that the tanks ought to reach their first objective five minutes ahead of the infantry forces and that the primary task of the tanks was to destroy the enemy strongpoints which were preventing the advance of the infantry.
In their initial combat action, it was intended to deploy forty-nine British tanks, but only thirty-two were able to take part. Nine of these suffered breakdowns, five experienced “ditching” (becoming stuck in a trench or soft ground) and nine more couldn’t keep up the pace, lagging well behind the infantry. But the remaining nine met their objective and inflicted severe losses on the German forces. While accomplishing less than had been hoped for, this first effort of the British tank force produced an important and unanticipated side effect. Those tanks that reached the enemy line made a powerful impression on the German troops facing them, causing many to bolt in fear even before the tanks had come into firing range.
Shrieking its message the flying death
Cursed the resisting air,
Then buried its nose by a tattered church,
A skeleton gaunt and bare.
The brains of science, the money of fools,
Had fashioned an iron slave
Destined to kill, yet the futile end
Was a child’s uprooted grave.
—The Shell by Private H. Smalley Sarson
When the war is over and the Kaiser's out of print,
I'm going to buy some tortoises and watch the beggars sprint;
When the war is over and the sword at last we sheathe,
I'm going to keep a jelly-fish and listen to it breathe.
—from A Full Heart by A.A. Milne
For it’s clang, bang, rattle,
W’en the tanks go into battle,
And they plough their way across the tangled wire,
They are sighted to a fraction,
When the guns get into action,
An' the order of the day is rapid fire;
W'en the hour is zero
Ev'ry man's a bloomin' 'ero,
W’atsoever ’is religion or ’is nime,
You can bet yer bottom dollar
W'ether death or glory foller,
That the tanks will do their duty ev’ry time.
—from A Song of the Tanks by J. Dean Atkinson
The following passage from the book Iron Fist by Bryan Perrett describes operational conditions for the crew of the early British tanks in France around the midpoint of the First World War: “Such intense heat was generated by the engine that the men wore as little as possible. The noise level, a compound of roaring engine, unsilenced exhaust on the early Marks, the thunder of tracks crossing the hull, weapons firing and the enemy’s return fire striking the armor, made speech impossible and permanently damaged the hearing of some. The hard ride provided by the unsprung suspension faithfully mirrored every pitch and roll of the ground so that the gunners, unaware of what lay ahead, would suddenly find themselves thrown off their feet and, reaching out for support, sustain painful burns as they grabbed at machinery that verged on the red hot. Worst of all was the foul atmosphere, polluted by the fumes of leaking exhausts, hot oil, petrol and expended cordite. Brains starved of oxygen refused to function or produced symptoms of madness. One officer is known to have fired into a malfunctioning engine with his revolver, and some crews were reduced to the level of zombies, repeatedly mumbling the orders they had been given but physically unable to carry them out. Small wonder then, that after even a short spell in action, the men would collapse on the ground beside their vehicles, gulping in air, incapable of movement for long periods.
“In addition, of course, there were the effects of the enemy’s fire. Wherever this struck, small glowing flakes of metal would be flung off the inside of the armor, while bullet splash penetrated visors and joints in the plating; both could blind, although the majority of such wounds were minor though painful. Glass vision blocks became starred and were replaced by open slits, thereby increasing the risk, especially to the commander and the driver. In an attempt to minimise this, leather crash helmets, slotted metal goggles and chain mail visors were issued, but these were quickly discarded in the suffocating heat of the vehicle’s interior. The tanks of the day were not proof against field artillery so that any penetration was likely to result in a fierce petrol or ammunition fire followed by an explosion that would tear the vehicle apart. In such a situation the chances of being able to evacuate a casualty through the awkward hatches were horribly remote.
“Despite these sobering facts, the crews willingly accepted both the conditions and the risks in the belief that they had a war-winning weapon.”
A British Army corporal said they looked like giant toads. The specter of nearly 400 British tanks emerging from the early morning ground fog and mists of Cambrai in north-eastern France on 20 November 1917 must have impressed all who saw it. After years of stalemate and staggering attrition, this first use of massed tanks in warfare was the turning point. British armored commanders had awakened to the possibilities of the tank when imaginatively and skillfully utilized.
For most of 1917 the Allies on the Western Front had been bogged down in their trenches, unable to breach the German defenses. Now in November, the tank commanders saw an opportunity to break the cycle of despair and hopelessness that hung over the Allied armies. They proposed a massive tank raid to be launched against German positions near the town of Cambrai. They liked the prospects. The terrain of the attack was gently rolling, well-drained land. As their plan called for surprising the Germans with a fast and relatively quiet approach, there was to be no conventional softening-up artillery bombardment in advance of the raid. The commanders had intended that the great tank force would arrive quickly, inflict maximum damage and get out fast, having completed their task in three hours or less. They had presented their plan to Sir Douglas Haig, the British Commander-in-Chief on the Western Front, in August when he was incurring catastrophic losses fifty miles to the north of Cambrai in the swamps of Passchendaele. At the time, the optimistic Haig was still looking for a victory and shelved the Cambrai idea. But by the autumn his Passchendaele ambitions had sunk in the mud there and he was forced to accept the proposal of his tank men.
The plan called for the great mass of tanks to force a breakthrough between the two canals at Cambrai, capture the town itself as well as the higher ground surrounding the village of Flesquieres and the Bourdon Wood. They were then to roll on towards Valenciennes, twenty-five miles to the northeast. The tanks were carrying great bundles of brushwood which would be used to fill in the trenches that they would encounter when crossing the German defenses of the Siegfried Line. It was intended that the tanks would advance line abreast while the accompanying infantry troops would follow in columns close behind to defend against close-quarters attacks.
Deception and diversion were employed by the British in the days leading up to the attack. Dummy tanks, smoke and gas were all used to fool the Germans, and the men and equipment that would be involved in the attack were moved up entirely by night and kept in hiding by day. All 381 tanks allocated for the attack advanced toward Cambrai along a six-mile front.
British planning and attention to detail had been thorough and fastidious, but they had failed to factor in the possibility of one of their own commanders, a General Harper of the 51st Highland Division, deviating from the plan. It seems that Harper had doubts about the ability of the new-fangled tanks to breach the Siegfried Line as quickly as the planners required. On the day of the attack Harper delayed sending his tanks and infantry troops forward until an hour after the rest of the force had left. The delay allowed German field artillery to be positioned with disastrous results for some tank crews. Five burned-out tank hulls were found after the action. Elsewhere along the tank line, however, the armor and infantry had moved swiftly through the German lines, advancing five miles to Bourdon Wood by noon. It had been a brilliant achievement for the British tank crews.
The push continued the next day with the British taking Flesquieres and advancing a further 17 miles. In the next nine days, they won and lost the village of Fontaine-Notre Dame and the surrounding area several times. Then, on 30 November, the Germans counterattacked. Like the British, they struck without the usual initial artillery bombardment, hiding behind heavy gas and smoke screens. The British troops, exhausted by their recent effort, were forced to retreat from the rapidly advancing German forces and in just a few days had to relinquish all of their gains. In the action, the Germans took 6,000 prisoners. Blame for the defeat fell on everyone except those actually responsible—the commanders. There was concern in Whitehall that pointing the finger at their Army commanders would crush the faith of the British people in their military leadership. Still, the British had learned the valuable lesson of how effective tanks and artillery could be when properly employed in concert.
The German offensive of March 1918 began on the 21st and saw the first appearance of their tanks in battle. Designed early in 1917, the A7V was much larger and heavier than the British heavy tank of the day. It weighed thirty-three tons and was operated by a crew of eighteen. The armament consisted of one forward-mounted 57mm gun (roughly equivalent to the British six-pounder) and six machine-guns positioned at the sides and the rear. The maximum armor thickness was 30mm, enabling the front of the tank to resist direct hits from field guns at long range, but the overhead armor was too thin to provide much protection. The fitting of the armor plating was such that the hull was very susceptible to bullet splash. The tank's power came from two 150 hp Daimler sleeve-valve engines. Sprung tracks allowed the vehicle to achieve eight mph on smooth and level ground, a high speed for the time. However, the design and the low ground clearance resulted in relatively poor cross-country performance. The Germans built only fifteen A7Vs. In their initial venture into combat, four of the German tanks were used together with five captured British Mk IVs. One month later, thirteen A7Vs participated in the capture of Villers-Bretonneux, and in this action the German tanks had the same psychological effect on the British infantry as British tanks had had earlier on their German counterparts. Tanks broke the opposing lines.
Shortly after the German success at Villers-Bretonneux, the world's first tank-versus-tank action took place in the same neighborhood. In the early morning light, one male and two female Mk IVs were ordered forward to stem the German penetration. Though some of the British tank crew members had suffered from gas shelling, they all advanced and soon sighted one of the A7Vs. The machine-guns of the two females were useless against the armor of the German tank and both were put out of action. But the male was able to maneuver for a flank shot and scored a hit, causing the German tank to run up a steep embankment and overturn. Two more A7Vs then arrived and engaged the British tank, which saw one off. The crew of the second A7V abandoned their tank and fled.
The Cambrai experience undoubtedly saved many lives, influencing the British attack of 8 August 1918, the battle of Amiens, in which 456 tanks finally broke the enemy lines. It was the decisive battle of the war, leading to the German surrender. The battle was launched along a thirteen-mile front. The three objectives were the Green Line, three miles from the start line; the Red Line, six miles from the start and in the center of the front; and the Blue Line, eight miles from the start. The attack was to begin at 4:20 a.m. with the tanks moving out 1,000 yards to the start line. A thick mist helped the British forces to achieve complete surprise and overrun the German forward defenses.
The main attacks were to be delivered by the Canadian Corps on the right and the Australian Corps on the left, both of them being south of the Somme. The Third Corps was to make a limited advance while covering the left flank. Before the Canadian Fifth Tank Battalion reached and crossed the Green Line objective, it had suffered heavily, losing fifteen tanks. It lost another eleven tanks achieving the Red Line, leaving it only eight machines still operable. The Canadian Fourth Tank Battalion was advancing across firm ground and achieved the Green and Red Lines with ease. Heavy German artillery then took a great toll of the Fourth’s tanks, leaving only eleven for the push on toward the Blue Line.
The Australian Corps, attacking with vehicles of the Fifth Tank Brigade, reached the Green Line by 7 a.m., the Red Line by 10 a.m. and they took the Blue Line an hour later. The tanks had eliminated German opposition up to the Red Line. After that, the Australian infantry poured through the weakened enemy defenses and the tanks were unable to keep pace with them.
After the fighting, most tank crews were suffering the ill effects from having spent upwards of three hours buttoned-up for action. With their guns firing, most of them suffered from headaches, high temperatures and even heart disturbances.
Though it was not immediately apparent, the Allies had won a great victory at Amiens, taking 22,000 German prisoners, and the German High Command realized that it had no more hope of winning the war.
In the Reichstag, the German politicians heard from their military commanders that it was, above all, the tanks that had brought an end to their resistance against the Allies. That evening the downcast Kaiser said to one of his military commanders: “It is very strange that our men cannot get used to tanks.” Major (now General) J.F.C. Fuller summed up the result: “The battle of Amiens was the strategic end of the war, a second Waterloo; the rest was minor tactics.”
https://astronomy.stackexchange.com/questions/1842/what-is-event-horizon-of-a-black-hole
The boundary of a black hole is said to be surrounded by an event horizon - the point of no return! What is its significance in terms of general relativity?
It is exactly the point of no return for light. In other words, it is the point (actually more like a sphere) at which the escape velocity from the black hole's gravity reaches $c$.
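For a non-rotating (Schwarzschild) black hole, a quick way to locate that sphere is to set the Newtonian escape velocity equal to $c$; this heuristic is not rigorous, but it happens to reproduce the exact general-relativistic radius:

$$v_{\mathrm{esc}} = \sqrt{\frac{2GM}{r}} = c \quad\Longrightarrow\quad r_s = \frac{2GM}{c^2}.$$

For a solar-mass black hole this gives $r_s \approx 3\ \mathrm{km}$.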
What I mean to say is, how are the characteristics of the event horizon related to spacetime?
Formally, an event horizon is the boundary of a region of spacetime that's not in the causal past of future null infinity. In other words, it is the boundary of a region from which even idealized light rays cannot escape to infinity. Whenever the event horizon is smooth, it is also a null hypersurface--i.e., the direction perpendicular to it is a light ray.
Because the event horizon is defined in terms of the infinite future, the definition is very non-local, and one would need to know the entire future history to be sure where the event horizon is. As such, it is only one of a half-dozen different types of horizons used to study black holes.
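In the notation commonly used for this definition (added here for concreteness; the original answer states it in words), the horizon $\mathcal{H}$ of a spacetime $M$ is

$$\mathcal{H} \;=\; \partial\!\left(M \setminus J^{-}\!\left(\mathscr{I}^{+}\right)\right),$$

where $J^{-}(\mathscr{I}^{+})$ denotes the causal past of future null infinity $\mathscr{I}^{+}$.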
Though the event horizon itself is a three-dimensional hypersurface in spacetime, it can also be viewed as an evolving two-dimensional membrane made of a viscous, electrically conducting fluid with finite temperature and entropy but zero thermal conductivity.
http://reaction-science.com/articles/ed4-nature-1.php
Photo credit: Naturepicsonline, Wikimedia Commons
It is that time of year again when all the shops are full of hearts, roses and fluffy teddies with declarations of love printed on their tummies. Gifts are one of the ways we as a species attract or keep a partner. But how do other members of the animal kingdom pull it off?
Within the animal kingdom there are two main ways that animals find a mate. In one, the females choose a male partner; this is called intersexual selection, and it drives the process of courtship (the main focus of this article). Alternatively, intrasexual selection may occur, where males compete over access to females; for example, elephant seals fight to become 'beach master'.
Courtship displays are a set of species-specific behaviours whose sole purpose is to attract a mate for copulation (mating). These displays provide a clear communication of intention between the sexes, and because they differ between species, they also ensure mating within the same species. For example, a mutual display may be carried out whereby both sexes perform responsive or synchronised displays. Not only does this allow species recognition, it also facilitates bonding. Mutual displays are common among birds; Crested Auklets, for example, perform a vocal mutual display by cackling to one another.
It is often the males that will begin a courtship display and exhibit many behaviours to impress and attract females. In fact, many of the sexual dimorphisms you witness, such as males being more brightly coloured, are due to the different roles the sexes play in courtship. So why is it that the males go to all the effort? The answer lies with the difference between the sexes in gamete production (production of eggs and sperm). Female displays are rare because displays are very energetically costly. Females have already put a lot of energy into egg production and, as few eggs are made, each carries a high investment. Comparatively, males produce many sperm and invest little energy in each one. In addition, there is a difference between the sexes in the variance in mating success. This is known as Bateman's principle. The variance in mating success among the females of a population is low, as they only really need to mate once and their eggs will most likely be fertilised. There is a high variance among males, however, as their reproductive success is generally based on the number of times they mate. Usually a few males will mate with many females, thus having a high reproductive output, while most will have little or no output. Also, in systems where multiple males and females mate with each other (polygynandry), males need to ensure it is their genes that are passed on, so the more mates they have, the higher the chance of doing this. A male that can mate with a large number of females will therefore have a higher reproductive output. This is why it is believed that males, rather than females, have developed behaviours and characteristics for courtship: they need to attract mates.
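As a rough illustration of Bateman's principle (a toy sketch added here, not from the article; the skewed "attractiveness" weights are invented for the example), a short simulation shows how male mating success ends up far more variable than female success when each female mates once but favours a small subset of males:

import random
import statistics

random.seed(1)

N = 1000  # equal numbers of males and females

# Invented skew: cubing a uniform draw concentrates weight on a few males.
attractiveness = [random.random() ** 3 for _ in range(N)]

# Each female mates exactly once, with a male chosen by attractiveness.
mates = random.choices(range(N), weights=attractiveness, k=N)
male_matings = [0] * N
for m in mates:
    male_matings[m] += 1

female_matings = [1] * N  # so female success is uniform

print("variance in male mating success:  ", round(statistics.pvariance(male_matings), 1))
print("variance in female mating success:", statistics.pvariance(female_matings))

The female variance is zero by construction, while the male variance is large because a handful of males account for most matings, which is exactly the asymmetry the principle describes.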
The fact that females' gamete production is more energetically costly also makes them more discerning when it comes to picking a mate, because a good mate must be selected to increase the likelihood of the offspring's survival. It is for this reason that courtship displays are so important: they are a chance for males to show off their desirable characteristics to females. Several hypotheses have been proposed to describe how females pick a mate. One is called the 'good gene model'. Under this model, females pick males with certain desirable traits, for example resources such as a good territory to support offspring development. Or the males with the brightest feathers show they have superior foraging ability, since the brightness comes from consuming more carotenoids, found in fruit and the like. Another hypothesis is the 'handicap hypothesis'. Many features of courtship are actually very risky for males; very bright markings, for instance, make them more visible and therefore more vulnerable as prey. Although these features are a risk for males, they are attractive to females because they signal the male's ability to cope with the handicap.
Courtship displays usually involve a dance, vocalisations, or displays of beauty (e.g. bright colours) or strength. Some animals may also release pheromones to signal their availability. The following are some specific examples of behaviours and displays:
Bowerbirds
Instead of relying on physical characteristics, these birds display by building 'bowers', which are colourful shrines. A lot of time and care is put into their construction, with much work going into gathering objects for decoration. Examples of objects used in these bowers are flowers, berries, coins, glass, shells and plastic. The female will choose the most artistic.
Hooded Seal
Males of this species have evolved a red nasal sac that they can inflate. They also bounce the inflated sac around as part of the display. Females are attracted to the males they deem to have the superior or most alluring nasal balloons.
Red Capped Manakin
The males of this species perform an impressive-looking mating dance, coupled with specific sounds such as clicks.
But it is not just exotic animals that take part in courtship rituals; there are many examples in the UK which you will probably have witnessed at some point. A very common sight in towns and cities throughout the UK are pigeons. You have probably seen a seemingly desperate male parading in front of a disinterested female many times. This is part of their courtship display. To attract a mate, or to reinforce a bond between already paired pigeons, they carry out a number of behaviours, including bowing, where the male puffs out his neck feathers and lowers his head, and tail-dragging, where he spreads his tail and drags it along the ground as he runs after a female.

There are approximately 19,000 breeding pairs of Tawny owls in the UK. Their courtship begins in late February. At this time the male changes his hunting habits to catch prey more frequently in daylight hours so the food can be presented to a female. In addition, as he patrols his territory he may screech to repel rival males and to attract females.
Animals may not be able to pop to the shops to buy a bunch of roses to show their intentions, but that doesn't stop them giving 'gifts' of their own! These 'gifts' are presented in some species by the male to try to ensure that copulation occurs. Spiders may wrap prey in silk parcels to present. Male Paratrechalea ornata spiders will also incorporate a pheromone into their silk wrap to encourage the female. Kingfishers will give fish as a gift; it has been found that females are fussy about both the size and species of the fish being gifted. The male will first swallow the fish so that it can be presented to the female head first. Experimental investigation has shown that males who produce gifts may be choosier in picking the right female to give them to. This is because, much like female egg production, producing good gifts is energetically costly, so males don't want to waste them on a non-viable mate.
Overall, there are many different ways that animals attract partners, from dancing to building shrines and even giving gifts, all with the sole aim of mating and passing on their genes to the next generation.
https://www.nationalgeographic.com/environment/article/trees-release-methane-what-it-means-climate-change?cjevent=d2309f3872b511e981a8000d0a1c0e0b&utm_source=6172777&utm_medium=affiliates&utm_campaign=CJ
In 1907, Francis W. Bushong, a chemistry professor at the University of Kansas, reported a novel finding in the journal Chemical and Physical Papers. He’d found methane, the main ingredient in natural gas, in a tree.
Years earlier, he wrote, he’d cut down some cottonwood trees and “observed the formation of bubbles in the sap upon the freshly cut trunk, stump and chips.” When he struck a match, the gas ignited in a blue flame. At the university, he replicated the flame test on a campus cottonwood and this time captured gas samples. The concentration of methane was not much below the level measured in samples from Kansas’s natural gas fields.
The finding was reported mainly as a novelty and faded into obscurity.
Tree methane is back, in a big way.
An expanding network of researchers has discovered methane flowing out of trees from the vast flooded forests of the Amazon basin to Borneo’s soggy peatlands, from temperate upland woods in Maryland and Hungary to forested mountain slopes in China.
Even as they strap $50,000 instruments to trees to record gas flows, more than a few of these researchers have been unable to resist using a lighter or match to produce the same blue flame that took Professor Bushong by surprise more than a century ago.
But the research now is driven by far more than novelty. Methane is second only to carbon dioxide in its importance as a greenhouse-gas emission linked to global warming. In a natural gas pipeline, methane is a relatively clean fossil fuel. But it is a powerful heat-trapping addition to the planet’s greenhouse effect when it accumulates in the atmosphere.
The gas builds up as long as new emissions outpace the rate at which natural chemical reactions in the air or some forest soils break it down (that generally takes about a decade, compared to centuries for carbon dioxide). Since 1750, the atmospheric concentration has increased by roughly 160 percent (from around 700 parts per billion to more than 1,800 parts per billion). The main human sources linked to the rise are global agriculture (particularly livestock and rice paddies), landfills, and emissions from oil and gas operations and coal mines.
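As a quick check on those figures (my arithmetic, using the numbers quoted above):

$$\frac{1800 - 700}{700} \approx 1.57,$$

an increase of roughly 157 percent, i.e. today's concentration is about 2.6 times the 1750 level.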
Natural sources have always produced large amounts of the gas—currently on a par with those from agriculture. The main source is microbial activity in oxygen-deprived soggy soils and wetlands. (Increasingly, human-driven warming appears to be expanding wetlands, particularly in high latitudes, adding even more methane emissions.)
The full climate impact of methane from trees is nowhere near that of the tens of billions of tons of carbon dioxide released annually from smokestacks and tailpipes, or the methane from, say, humanity’s vast cattle herds or gas fields. But there is sufficient uncertainty in the estimates setting the “global methane budget” that trees could turn out to be a substantial source.
For the moment, this is a newly revealed frontier, said Kristofer Covey, a Skidmore College scientist focused on the chemistry and ecology of forests.
“At the global scale this could be huge”
“The emissions from an individual tree are small,” Covey said. “But there are several trillion trees. At the global scale this could be huge.” Covey organized an international workshop last spring to identify research priorities and just published a paper in New Phytologist that is, in essence, a call for help from a host of disciplines not yet focused on this issue. His coauthor is J. Patrick Megonigal, a tree researcher at the Smithsonian Environmental Research Center in Maryland.
New papers are being published month by month with remarkable rapidity, with each field measurement essentially constituting a new publishable finding.
“We’re very much still in the stamp collecting phase,” Covey said.
The findings are already challenging old norms. Dry upland forests were long assumed to be removing methane from the air through the action of a class of soil microbes called methanotrophs. But work by Megonigal and others is showing tree emissions can diminish or possibly exceed that methane-scrubbing capacity.
Misled by “a flat world”
How did this effect, measured by Bushong in 1907 and noted informally by forestry scientists for generations, stay hidden so long?
For decades, scientists studying flows of methane between terrestrial ecosystems and the air had set their instruments on the ground, never thinking trees might be involved, said Vincent Gauci, a professor of global change ecology at Britain’s Open University and an author of a string of recent papers on trees’ methane role.
What everyone had missed is that the stems and trunks and leaves of trees are surfaces, too, and the gas can flow there as well. “We’d been looking at a flat world,” Gauci said.
No more. Much of the methane now found to be escaping from trees in such wet conditions is thought simply to be microbial methane pumped up and out as oxygen flows down to the roots. But Gauci and other scientists are finding many instances in which trees produce their own methane—sometimes from microbes in the heartwood or other tissues and in other cases from a remarkable direct photochemical reaction thought to be driven by the ultraviolet wavelengths in sunlight.
The tree emissions measured in some regions are enormous, with an international team led by Sunitha Pangala of Lancaster University last year estimating in Nature that just the trees in the Amazon’s seasonally flooded forests were the source of between 14 million and 25 million metric tons of the gas annually—an amount similar to estimates for methane emissions from tundra all around the Arctic.
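To put the Pangala estimate on a more familiar CO2 scale, here is a minimal conversion sketch (added for illustration; the 100-year global warming potential of 28 for methane is an assumed textbook value, not a figure from the article):

# Convert the Amazon flooded-forest methane range (Pangala et al.)
# into CO2-equivalent emissions.
GWP100_CH4 = 28  # assumed 100-year global warming potential for CH4

low_mt_ch4, high_mt_ch4 = 14, 25  # million metric tons CH4 per year

low_co2e = low_mt_ch4 * GWP100_CH4    # 392 Mt CO2e/yr
high_co2e = high_mt_ch4 * GWP100_CH4  # 700 Mt CO2e/yr
print(f"{low_co2e}-{high_co2e} million tonnes CO2-equivalent per year")

On that basis, the seasonally flooded Amazon trees alone would sit in the range of a large industrialized country's annual CO2 output, which is why the "several trillion trees" concern quoted earlier is taken seriously.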
It might not seem so surprising to think of trees in Amazon forests as conduits for this gas, given that soggy soils, peat bogs, and other low-oxygen environments are the domain of microbes that generate this gas. But other studies have found trees generating substantial methane even in dry upland ecosystems—in some cases within the trunk of the tree, not the soil.
Such findings have spurred even more work, and it seems that everywhere someone looks, the more consequential, and confounding, the picture becomes.
At every scale, from whole forests to clusters of similar trees in a forest to the dynamics in individual trees, the one constant is variation, said Megonigal, at the Smithsonian research center in Maryland.
Covey described forests where similar trees in similar soils have been measured with a fiftyfold difference in methane emissions.
Some trees have been measured emitting methane near the base, where microbial processes dominate, while absorbing it higher up the trunk; flows along a single tree can run in both directions at once.
Adding another, perhaps hopeful, twist, it appears that some trees actually sop up methane. The work has not yet been published, but was outlined last year at the European Geosciences Union meeting by Gauci, Pangala, and another colleague.
The study surveyed methane flows in trees in wet and dry soils from Central America and the Amazon to Britain and Sweden. Trees in wet soils uniformly were net emitters of methane but those in drier conditions in some regions actually were net absorbers of the gas.
Lessons for climate policy
The emerging findings on methane and forests are likely to stir discussions about next steps for climate policy related to forests, which has long focused on trees’ capacity to absorb and store carbon dioxide, with little attention to other properties.
“The thing we know about forests is that they sequester carbon,” Covey said. “That’s what you learn, what’s in a third grader’s cartoon drawing of a forest.”
The reality for climate is more complicated. “There is global warming but there’s no global forest,” he said.
The 2015 Paris Agreement on climate change supports forest projects as a way to draw down carbon dioxide emissions that countries have so far failed to constrain at the source. The United Nations has launched a Trillion Tree Campaign. There are a host of ways for companies and consumers to spend money on forest projects through “carbon offsets” to compensate for emissions from travel and the like.
In interviews, Covey and other researchers looking at the tree methane question stress they aren’t arguing that such efforts should pause, noting the many benefits of forest conservation, including carbon storage, resilience against floods, and safeguarding species-rich ecosystems.
Independent of climate diplomacy, countries around the world are working to accelerate forest conservation under a separate agreement, the Convention on Biological Diversity, to safeguard their value as home to vast arrays of species.
But the methane findings do highlight the importance of assessing the full range of climate impacts—for better or worse—of different forest and tree types in different regions. As with better understanding of forest ecology, this can then guide projects to maximize benefits and limit risks.
In recent years, other studies looking at the full impact of forests on the climate system have illuminated how a CO2-centered focus can miss significant additional cooling benefits of forests and—in some regions and forest properties—significant warming effects.
“For some forests all the arrows point in the same direction,” Covey said, describing the various ways trees can affect climate. “There are other places where the arrows don’t line up much.”
He and other researchers said a clearer view can improve climate models and also help ensure that programs centered on the climate value of forests are as effective as possible.
In higher latitudes, the simple shift from light-reflecting open land to dark, rough-surfaced tree canopies can warm the local climate by absorbing more sunlight. Forests in the tropics are particularly valuable for local climate, cooling the air around them as their metabolic machinery results in enormous evaporation—and that also can result in more sun-blocking cloud cover and precipitation.
Other work has shown how a complicated array of volatile organic compounds emitted by trees react to create haze and clouds, influencing temperature and precipitation in a variety of ways. In 2014, debate erupted over over-distilled headlines implying that this work, particularly studies by the atmospheric chemist Nadine Unger, then at Yale, meant forests should not be saved.
No one interviewed for this story, including Unger, sees that as the case. Now at Exeter University, she said what’s needed are comprehensive assessments of forests and climate accounting for the full suite of properties.
What’s particularly notable now is that she and some of her past critics are all stressing that the prime focus of the world needs to be cutting emissions of carbon dioxide at the source, even as forests are saved for all the benefits they provide.
“Our best shot at achieving Paris Agreement global temperature targets is a laser focus on reducing CO2 emissions from energy-use in the wealthy mid-latitude countries,” Unger said.
Her point echoes a commentary by a range of scientists in the March 1 edition of Science on making sure “natural climate solutions”—including forest-focused projects—are not seen as an alternative to pursuing deep, prompt cuts in greenhouse-gas emissions. Both will be needed, they said.
William R. Moomaw, an emeritus professor of international environmental policy at Tufts University, said there will always be uncertainties in gauging the full mix of climate influences of forests. But that should not stand in the way of moving forward with programs to expand them or boost their carbon-holding capacity. The weight of evidence still points to forests as a key to maintaining a safe climate, Moomaw said.
“Given that forests were major factors in the stable carbon and temperature balance for the past 10,000 years until humans began cutting them down and also burning them, that suggests the balance of all factors was about right.”
http://forums.cannabisculture.com/forums/ubbthreads.php?ubb=showflat&Number=1271541
Human influences on climate
Anthropogenic factors are acts by humans that change the environment and influence climate. The biggest factor of present concern is the increase in CO2 levels due to emissions from fossil fuel combustion, followed by aerosols (particulate matter in the atmosphere), which exert a cooling effect. Other factors, including land use, ozone depletion, animal agriculture and deforestation, also impact climate.
Fossil fuels
Carbon dioxide variations over the last 400,000 years, showing a rise since the industrial revolution.
Beginning with the industrial revolution in the 1850s and accelerating ever since, the human consumption of fossil fuels has elevated CO2 levels from a concentration of ~280 ppm to more than 370 ppm today. These increases are projected to reach more than 560 ppm before the end of the 21st century. Along with rising methane levels, these changes are anticipated to cause an increase of 1.4–5.8 °C between 1990 and 2100 (see global warming).
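The warming pull of that CO2 rise is usually summarized as a radiative forcing; a standard simplified expression (Myhre et al. 1998; added here for illustration, not part of the original text) is

$$\Delta F \;=\; 5.35 \,\ln\!\left(\frac{C}{C_0}\right)\ \mathrm{W\,m^{-2}},$$

which gives $\Delta F \approx 5.35\ln(370/280) \approx 1.5\ \mathrm{W\,m^{-2}}$ for the increase quoted above, and about $3.7\ \mathrm{W\,m^{-2}}$ for a full doubling to 560 ppm.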
Anthropogenic aerosols, particularly sulphate aerosols from fossil fuel combustion, are believed to exert a cooling influence. This, together with natural variability, is believed to account for the relative "plateau" in the 20th-century temperature record around the middle of the century.
Land use
Prior to widespread fossil fuel use, humanity's largest impact on local climate likely resulted from land use. Irrigation, deforestation, and agriculture fundamentally change the environment. For example, they change the amount of water going into and out of a given locale. They also may change the local albedo by influencing the ground cover and altering the amount of sunlight that is absorbed. For example, there is evidence to suggest that the climate of Greece and other Mediterranean countries was permanently changed by widespread deforestation between 700 BC and the start of the common era (the wood being used for ship-building, construction and fuel), with the result that the modern climate in the region is significantly hotter and drier, and the species of trees that were used for ship-building in the ancient world can no longer be found in the area.
A controversial hypothesis by William Ruddiman, called the early anthropocene hypothesis, suggests that the rise of agriculture and the accompanying deforestation led to the increases in carbon dioxide and methane during the period 5000–8000 years ago. These increases, which reversed previous declines, may have been responsible for delaying the onset of the next glacial period, according to Ruddiman's overdue-glaciation hypothesis.
Animal agriculture
According to a 2006 United Nations report, animal agriculture is responsible for 18% of the world's greenhouse gas emissions as measured in CO2 equivalents. By comparison, all transportation emits 13.5% of the CO2. In addition to increased CO2 emissions, animal agriculture produces 65 percent of human-related nitrous oxide (which has 296 times the global warming potential of CO2) and 37% of all human-induced methane (which is 23 times as warming as CO2).
Source: http://en.wikipedia.org/wiki/Climate_change
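For readers unfamiliar with "CO2 equivalents": emissions of each gas are weighted by its global warming potential and summed. A minimal sketch using the GWP values quoted above (the emission masses below are made-up placeholders, not figures from the report):

# CO2-equivalent aggregation with the GWPs quoted in the text
# (CH4 = 23, N2O = 296, CO2 = 1 by definition).
GWP = {"co2": 1, "ch4": 23, "n2o": 296}

def co2_equivalent(emissions_mt):
    """Sum emissions (million tonnes of each gas) weighted by GWP."""
    return sum(mass * GWP[gas] for gas, mass in emissions_mt.items())

# Placeholder example: 100 Mt CO2, 2 Mt CH4 and 0.1 Mt N2O
example = {"co2": 100.0, "ch4": 2.0, "n2o": 0.1}
print(co2_equivalent(example), "Mt CO2e")  # 100 + 46 + 29.6 = 175.6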
Statements by organizations
Various prominent bodies have commented on global warming, most notably the Intergovernmental Panel on Climate Change (IPCC). National and international scientific groups have issued statements both detailing and summarizing the current state of scientific knowledge on the earth's climate.
Intergovernmental Panel on Climate Change (IPCC)
Main article: Intergovernmental Panel on Climate Change
In 2007, as part of its Fourth Assessment Report, the IPCC concluded that human actions are "very likely" the cause of global warming, meaning a 90% or greater probability.
"The world's leading climate scientists said global warming has begun, is very likely caused by man, and will be unstoppable for centuries, ... . The phrase very likely translates to a more than 90 percent certainty that global warming is caused by man's burning of fossil fuels. That was the strongest conclusion to date, making it nearly impossible to say natural forces are to blame."
"The report said that an increase in hurricane and tropical cyclone strength since 1970 more likely than not can be attributed to man-made global warming. The scientists said global warming's connection varies with storms in different parts of the world, but that the storms that strike the Americas are global warming-influenced."
"On sea levels, the report projects rises of 7-23 inches by the end of the century. That could be augmented by an additional 4-8 inches if recent surprising polar ice sheet melt continues."
Joint science academies’ statement
In 2005 the national science academies of the G8 nations, plus Brazil, China and India, three of the largest emitters of greenhouse gases in the developing world, signed a statement on the global response to climate change. The statement stresses that the scientific understanding of climate change is now sufficiently clear to justify nations taking prompt action, and explicitly endorsed the IPCC consensus.
US National Research Council, 2001
In 2001 the Committee on the Science of Climate Change of the National Research Council published Climate Change Science: An Analysis of Some Key Questions. This report explicitly endorses the IPCC view of attribution of recent climate change as representing the view of the science community:
The changes observed over the last several decades are likely mostly due to human activities, but we cannot rule out that some significant part of these changes is also a reflection of natural variability. Human-induced warming and associated sea level rises are expected to continue through the 21st century... The IPCC's conclusion that most of the observed warming of the last 50 years is likely to have been due to the increase in greenhouse gas concentrations accurately reflects the current thinking of the scientific community on this issue.
American Meteorological Society
The American Meteorological Society (AMS) statement adopted by their council in 2003 said:
There is now clear evidence that the mean annual temperature at the Earth's surface, averaged over the entire globe, has been increasing in the past 200 years. There is also clear evidence that the abundance of greenhouse gases in the atmosphere has increased over the same period. In the past decade, significant progress has been made toward a better understanding of the climate system and toward improved projections of long-term climate change... Human activities have become a major source of environmental change. Of great urgency are the climate consequences of the increasing atmospheric abundance of greenhouse gases... Because greenhouse gases continue to increase, we are, in effect, conducting a global climate experiment, neither planned nor controlled, the results of which may present unprecedented challenges to our wisdom and foresight as well as have significant impacts on our natural and societal systems.
Federal Climate Change Science Program, 2006
On May 2, 2006, the Federal Climate Change Science Program, commissioned by the Bush administration in 2002, released the first of 21 assessments; it concluded that there is clear evidence of human influences on the climate system (due to changes in greenhouse gases, aerosols, and stratospheric ozone). The study said that observed patterns of change over the past 50 years cannot be explained by natural processes alone, though it did not state what percentage of climate change might be anthropogenic in nature.
Other organizations
Other scientific organizations have made position statements on climate change.
* American Geophysical Union position statement on greenhouse gases and climate change (also endorsed by the American Institute of Physics)
* Climate Change Science: An Analysis of Some Key Questions, National Academy of Sciences, Commission on Geosciences, Environment and Resources, (Washington, DC: National Academy Press, 2001).
* Joint statement on the Science of Climate Change, issued by the Australian Academy of Sciences, Royal Flemish Academy of Belgium for Sciences and the Arts, Brazilian Academy of Sciences, Royal Society of Canada, Caribbean Academy of Sciences, Chinese Academy of Sciences, French Academy of Sciences, German Academy of Natural Scientists Leopoldina, Indian National Science Academy, Indonesian Academy of Sciences, Royal Irish Academy, Accademia Nazionale dei Lincei (Italy), Academy of Sciences Malaysia, Academy Council of the Royal Society of New Zealand, Royal Swedish Academy of Sciences, and Royal Society (UK).
* A position paper of the Stratigraphy Commission of the Geological Society of London.
* Position Statement on Global Climate Change adopted by the Geological Society of America
* Policy Statement on Climate Variability and Change by the American Association of State Climatologists (AASC)
* Australian Medical Association statement on climate change
* American Chemical Society statement on Global Climate Change
The only major scientific organization that rejects the finding of human influence on recent climate is the American Association of Petroleum Geologists.
Recent Surveys of scientists
Various surveys have been conducted to determine a scientific consensus on global warming. Only one has been conducted within the last ten years.
Oreskes, 2004
In December 2004, an article by geologist and historian of science Naomi Oreskes summarized a study of the scientific literature on climate change. The essay concluded that there is a scientific consensus on the reality of anthropogenic climate change. The author analyzed 928 abstracts of papers from refereed scientific journals between 1993 and 2003, listed with the keywords "global climate change". The abstracts were divided into six categories: explicit endorsement of the consensus position, evaluation of impacts, mitigation proposals, methods, paleoclimate analysis, and rejection of the consensus position. 75% of the abstracts were placed in the first three categories, thus either explicitly or implicitly accepting the consensus view; 25% dealt with methods or paleoclimate, thus taking no position on current anthropogenic climate change; none of the abstracts disagreed with the consensus position, which the author found to be "remarkable". It was also pointed out, "authors evaluating impacts, developing methods, or studying paleoclimatic change might believe that current climate change is natural. However, none of these papers argued that point."
Older surveys
Survey of US state climatologists 1997
In 1997, the conservative advocacy group Citizens for a Sound Economy surveyed America's 48 official state climatologists on questions related to climate change. Of the 36 respondents, 44% considered global warming to be a largely natural phenomenon, compared to 17% who considered warming to be largely manmade. The survey further found that 58% of the climatologists disagreed or somewhat disagreed with then-President Clinton's assertion that "the overwhelming balance of evidence and scientific opinion is that it is no longer a theory, but now fact, that global warming is for real". Eighty-nine percent of the climatologists agreed that "current science is unable to isolate and measure variations in global temperatures caused ONLY by man-made factors," and 61% said that historical data do not indicate "that fluctuations in global temperatures are attributable to human influences such as burning fossil fuels."
60% of the respondents said that reducing man-made CO2 emissions by 15% below 1990 levels would not prevent global temperatures from rising, and 86% said that reducing emissions to 1990 levels would not prevent rising temperatures. By a 39% to 33% margin, more climatologists agreed that "evidence exists to suggest that the earth is headed for another glacial period" though the time scale for the next glacial period was not specified.
Bray and von Storch, 1996
In 1996, Dennis Bray and Hans von Storch undertook a survery of climate scientists on attitudes towards global warming and related matters. The results were subsequently published in Bulletin of the American Meteorological Society Vol. 80, No. 3, March 1999 439-455. The paper addressed the views of climate scientists, with a response rate of 40% from a mail survey questionnaire to 1000 scientists in Germany, the USA and Canada. Most of the scientists believed that global warming was occurring and appropriate policy action should be taken, but there was wide disagreement about the likely effects on society and almost all agreed that the predictive ability of currently existing models was limited.
The abstract says:
The international consensus was, however, apparent regarding the utility of the knowledge to date: climate science has provided enough knowledge so that the initiation of abatement measures is warranted. However, consensus also existed regarding the current inability to explicitly specify detrimental effects that might result from climate change. This incompatibility between the state of knowledge and the calls for action suggests that, to some degree at least, scientific advice is a product of both scientific knowledge and normative judgment, suggesting a socioscientific construction of the climate change issue.
The survey was extensive, and asked numerous questions on many aspects of climate science, model formulation, and utility, and science/public/policy interactions. To pick out some of the more vital topics, from the body of the paper:
The resulting questionnaire, consisting of 74 questions, was pre-tested in a German institution and after revisions, distributed to a total of 1,000 scientists in North America and Germany... The number of completed returns was as follows: USA 149, Canada 35, and Germany 228, a response rate of approximately 40%...
...With a value of 1 indicating the highest level of belief that predictions are possible and a value of 7 expressing the least faith in the predictive capabilities of the current state of climate science knowledge, the mean of the entire sample of 4.6 for the ability to make reasonable predictions of inter-annual variability tends to indicate that scientists feel that reasonable prediction is not yet a possibility... mean of 4.8 for reasonable predictions of 10 years... mean of 5.2 for periods of 100 years...
...a response of a value of 1 indicates a strong level of agreement with the statement of certainty that global warming is already underway or will occur without modification to human behavior... the mean response for the entire sample was 3.3 indicating a slight tendency towards the position that global warming has indeed been detected and is underway.... Regarding global warming as being a possible future event, there is a higher expression of confidence as indicated by the mean of 2.6.
Other older surveys of scientists
It should be noted that these surveys are over 15 years old and the state of climate science has changed radically since their time; current beliefs of the scientific community are different as shown in the reviews cited above.
* Global Environmental Change Report, 1990: GECR climate survey shows strong agreement on action, less so on warming. Global Environmental Change Report 2, No. 9, pp. 1-3
* Stewart, T.R., Mumpower, J.L., and Reagan-Cirincione, P. (1992). Scientists' opinions about global climate change: Summary of the results of a survey. NAEP (National Association of Environmental Professionals) Newsletter, 17(2), 6-7.
* A 1991 Gallup poll of 400 members of the American Geophysical Union and the American Meteorological Society
o Fairness and Accuracy in Reporting states that the report said that 66% of the scientists said that human-induced global warming was occurring, with 10% disagreeing and the rest undecided. In a correction Gallup stated: "Most scientists involved in research in this area believe that human-induced global warming is occurring now."
o George Will reported "53 percent do not believe warming has occurred, and another 30 percent are uncertain." (Washington Post, September 3, 1992)
o A 1993 publication by the politically conservative Heartland Institute states: "A Gallup poll conducted on February 13, 1992 of members of the American Geophysical Union and the American Meteorological Society - the two professional societies whose members are most likely to be involved in climate research - found that 18 percent thought some global warming had occurred, 33 percent said insufficient information existed to tell, and 49 percent believed no warming had taken place."
Scientists opposing consensus opinions
Main article: List of scientists opposing global warming consensus
Alleged US governmental interference in reporting
According to an Associated Press release of 30 January 2007:
"Climate scientists at seven government agencies say they have been subjected to political pressure aimed at downplaying the threat of global warming.
"The groups presented a survey that shows two in five of the 279 climate scientists who responded to a questionnaire complained that some of their scientific papers had been edited in a way that changed their meaning. Nearly half of the 279 said in response to another question that at some point they had been told to delete references to "global warming" or "climate change" from a report."
See also
* Attribution of recent climate change
* Global warming controversy
1. ^ "Warming 'very likely' human-made", BBC News, BBC, 2007-02-01. Retrieved on 2007-02-01.
2. ^ Naomi Oreskes (December 3, 2004). "Beyond the Ivory Tower: The Scientific Consensus on Climate Change". Science 306 (5702): 1686. DOI:10.1126/science.1103618. (see also for an exchange of letters to Science)
External links
* Sherwood Rowland (Nobel Laureate for work on ozone depletion) gives his opinion on climate change 2006 Freeview video provided by the Vega Science Trust.
* Newer reports on EPA website
* Climate Change Science: An Analysis of Some Key Questions, National Academy of Sciences
* Joint Science Academies' Statement: Global Response to Climate Change, National Academy of Sciences
* Climate change special: State of denial New Scientist 4 November 2006
* The Denial Machine CBC Television 15 November 2006
Source: http://en.wikipedia.org/wiki/Scientific_opinion_on_climate_change
http://frogsaregreen.org/salamanders-in-crisis-an-overview-of-why-salamander-conservation-is-needed/
Guest blog by Matt Ellerbeck, founder of Save the Salamanders
Although they are rarely given much thought, and often overlooked when they are, salamanders are in a terrible crisis. Around half of all the world’s salamander species are listed as threatened by the International Union for Conservation of Nature (IUCN). These species are all facing a high risk of extinction. A further 62 species have been designated as near-threatened with populations rapidly dwindling. This means they are quickly getting closer to threatened status and to the brink of extinction. Sadly for some salamanders it is already too late, as both the Yunnan Lake Newt (Cynops wolterstorffi) and Ainsworth’s Salamander (Plethodon ainsworthi) have already gone extinct.
Salamanders have been on the earth for over 160 million years, and the terrible state that they now find themselves in is due to the detrimental acts of humans. Even those species that are not experiencing population declines deserve attention and conservation to ensure that they remain healthy and stable.
One of the biggest issues affecting salamanders is the loss of their natural habitat. Many areas that were once suitable for salamanders to live in have now been destroyed for developmental construction and agriculture. Habitats of all kinds are being lost at an alarming rate. Wetlands are drained, forests are logged and cut down, and waterfronts are developed. Salamanders are literally losing their homes and they are losing them rapidly. The expansion of urban areas threatens the suitable habitats that still remain.
Where natural habitats do still exist, they are often fragmented or degraded. Fragmentation occurs when healthy areas of habitat are isolated from one another. These fragmented areas are known as habitat islands. Salamander populations are affected since gene flow between the populations is prevented. This increases the occurrence of inbreeding, which results in a decrease in genetic variability and the birthing of weaker individuals.
Fragmented populations where inbreeding occurs often end in a genetic bottleneck. This is an evolutionary event where a significant percentage of the population or species is killed or otherwise prevented from reproducing. Habitat fragmentation is also harmful because it often eliminates features of the area which are critical to the survival of salamander populations, including spaces that can be utilized for thermoregulation, prey capture, breeding, and over-wintering. Without such habitat requirements, populations dwindle.
Breeding sites, often in the form of vernal pools, are particularly important. The loss of such areas through habitat destruction can negatively affect the entire population and its reproductive output. According to the Committee on the Status of Endangered Wildlife in Canada (COSEWIC), there is some evidence that certain salamander species have individuals that return to the pond in which they were born once they reach maturity. Therefore, destruction of a breeding pond may result in loss of the entire population returning to that site. Habitat complexity is also important as it offers shelter to salamanders from both predators and human persecution.
Degradation occurs when the natural habitat has been altered to such a degree that it is unlikely that any remaining salamander species would be able to survive. Developments and agriculture near fragmented habitats put salamanders at serious risk. As amphibians, salamanders have extremely absorbent skins. Industrial contaminants, the introduction of sedimentation into waterways, sewage run-off, pesticides, oils, and other chemicals and toxic substances from developmental construction sites and human settlements can all be absorbed by salamanders. This can quickly lead to deaths. It can also cause widespread, horrific deformities. A study conducted at Purdue University found that, of 2,000 adult and juvenile salamanders examined, 8 percent had visible deformities.
According to Save The Frogs, Atrazine (perhaps the most commonly used herbicide on the planet, with some 33 million kg applied annually in the US alone) can reduce survivorship in salamanders. Many products are sold with the claim that they are eco-friendly, but these claims should be viewed with caution. For example, according to N.C. Partners in Amphibian and Reptile Conservation, Roundup and many other surfactant-loaded glyphosate formulations are not labeled for aquatic use. When these formulations are applied to upland sites according to label instructions, the risk to surfactant-sensitive species is considered low. While this may be the case for fish, it does not necessarily apply to amphibians. Salamanders that breed in water also routinely use non-aquatic areas and could easily be exposed to glyphosate formulations that contain harmful surfactants through direct application, not just incidental drift.
Habitat destruction and degradation can also affect the availability of prey items, causing unnatural declines in appropriate food sources.
Habitats are often isolated and cut off from one another by the roads and highways that now run through them. Countless numbers of salamanders are killed on roads and highways every year when they are hit by vehicles. Salamanders migrating to breeding and egg-laying sites often must cross roads to reach such areas, and here many of the mature members of the breeding population are killed. Removing members of the breeding population greatly limits reproductive output, making it incredibly hard for salamander numbers to rebound.
Roads present an additional problem because they represent a form of habitat loss. Roads that run through natural areas also fragment the existing populations, drastically reducing their size. This limits gene flow and genetic diversity between the isolated populations on either side, greatly increasing the chances of extirpation. When salamanders attempt to cross roads to travel between populations, or to reach critical breeding and birthing sites, their chances of being hit and killed by vehicles rise sharply.
Population projections from spotted salamander (Ambystoma maculatum) life tables published in Wetlands Ecology and Management (2005) imply that an annual risk of road mortality for adults greater than 10% can lead to local population extirpation. Unfortunately, it is estimated that mortality rates can often be as high as 50 to 100%, which means populations are at extreme risk of extirpation and extinction due to road mortality. Wyman (1991) reported average mortality rates of 50.3 to 100% for hundreds of salamanders attempting to cross a paved rural road in New York State, USA. Given that this figure pertains to a rural area from over a decade ago, it is fair to assume that even higher mortality rates occur today, as the numbers of cars and roads have increased over the years. Reducing road mortality is paramount to preserving salamander species.
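To see why a seemingly modest 10% annual road toll matters, consider a minimal projection sketch. The survival and recruitment values below are hypothetical round numbers chosen so the population is stable without road mortality; they are not the published life-table values.

```python
# Toy projection of an adult salamander population under added road mortality.
# All parameter values are illustrative, not taken from published life tables.

def project(n_adults, annual_survival, recruits_per_adult, road_mortality, years):
    """Apply road deaths on top of natural mortality, year by year."""
    for _ in range(years):
        n_adults = (n_adults * annual_survival * (1.0 - road_mortality)
                    + n_adults * recruits_per_adult)
    return n_adults

start = 1000
for rm in (0.0, 0.10, 0.50):
    final = project(start, annual_survival=0.75, recruits_per_adult=0.25,
                    road_mortality=rm, years=20)
    print(f"road mortality {rm:.0%}: {start} adults -> {final:.0f} after 20 years")
```

With these illustrative numbers the population holds steady at 0% road mortality, loses roughly four fifths of its adults over 20 years at 10%, and essentially vanishes at 50%.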
Being hit and killed by vehicles is not the only threat that roads create for salamanders. Chemical run-off from vehicles contaminates roadside ditches and pools, sites that salamanders often use for breeding and birthing. According to Steven P. Brady (2012), survival in roadside pools averaged just 56%, compared with 87% in woodland pools. Thus, on average about 36% fewer embryos (1 - 56/87 is roughly 0.36) survived to hatching in roadside versus woodland pools.
Salamanders are also threatened by harvest from the wild: they are taken for the pet trade, for food markets, and for use as fishing bait.
There is much about salamanders that scientists do not know; aspects of the biology, ecology, and lifestyles of many species remain a mystery. This almost certainly means human interference is negatively affecting salamanders in ways we do not yet recognize. The intricate relationships between species and the vital roles they play within ecosystems are also being altered. Such alterations can have serious consequences not just for salamanders, but for many other animals as well (including humans).
To find out how you can help please see: www.savethesalamanders.com
About Matt Ellerbeck and Save the Salamanders:
Over the years Matt Ellerbeck has observed hundreds of salamanders in their natural habitats. This interest eventually led to Matt becoming a Salamander Advocate and Conservationist.
Matt also has considerable experience and expertise with regard to salamanders and their care, having kept and observed numerous species. These include forms belonging to the genera Plethodon, Ambystoma, Necturus, Notophthalmus, Hypselotriton, Pleurodeles, Taricha, Salamandra, Hemidactylium, Eurycea, Pseudotriton, Amphiuma, Siren, and Paramesotriton. Matt also holds a license, granted by the Ontario Ministry of Natural Resources, to keep Specially Protected Amphibians in captivity for the purpose of education.
Along with wildlife preservation, Matt also believes in the ideals of Environmentalism, Deep Ecology, Biocentrism, Ecocentrism, and anti-Speciesism, and draws from these various movements to help salamanders.
Matt is committed to continuing his efforts to help salamanders. His love and concern for these animals is second to none!
https://www.eurekalert.org/pub_releases/2010-07/su-bhf072710.php
The quickest, best way to slow the rapid melting of Arctic sea ice is to reduce soot emissions from the burning of fossil fuel, wood and dung, according to a new study by Stanford researcher Mark Z. Jacobson.
His analysis shows that soot is second only to carbon dioxide in contributing to global warming. But, he said, climate models to date have mischaracterized the effects of soot in the atmosphere.
Because of that, soot's contribution to global warming has been ignored in national and international global warming policy legislation, he said.
"Controlling soot may be the only method of significantly slowing Arctic warming within the next two decades," said Jacobson, director of Stanford's Atmosphere/Energy Program. "We have to start taking its effects into account in planning our mitigation efforts and the sooner we start making changes, the better."
To reach his conclusions, Jacobson used an intricate computer model of global climate, air pollution, and weather that he developed over the last 20 years and that includes atmospheric processes not incorporated in previous models.
He examined the effects of soot - black and brown particles that absorb solar radiation - from two types of sources. He analyzed the impacts of soot from fossil fuels - diesel, coal, gasoline, jet fuel - and from solid biofuels, such as wood, manure, dung, and other solid biomass used for home heating and cooking in many locations. He also focused in detail on the effects of soot on heating clouds, snow and ice.
What he found was that the combination of both types of soot is the second-leading cause of global warming after carbon dioxide. That ranks the effects of soot ahead of methane, an important greenhouse gas. He also found that soot emissions kill over 1.5 million people prematurely worldwide each year, and afflict millions more with respiratory illness, cardiovascular disease, and asthma, mostly in the developing world where biofuels are used for home heating and cooking.
Jacobson's study will be published this week in Journal of Geophysical Research (Atmospheres).
It is the magnitude of soot's contribution, combined with the fact that it lingers in the atmosphere for only a few weeks before being washed out, that leads to the conclusion that a reduction in soot output would start slowing the pace of global warming almost immediately.
Greenhouse gases, in contrast, typically persist in the atmosphere for decades - some up to a century or more - creating a considerable time lag between when emissions are cut and when the results become apparent.
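The timescale contrast can be illustrated with a one-box model in which an atmospheric burden decays with e-folding lifetime tau, so concentrations fall as exp(-t/tau) once emissions stop. The lifetimes below are round illustrative numbers, not values from Jacobson's study, and CO2 in particular does not truly follow a single exponential:

```python
import math

# One-box model: after emissions stop, the remaining burden decays as
# exp(-t / tau), where tau is the species' atmospheric lifetime.
def fraction_remaining(tau_years, years):
    return math.exp(-years / tau_years)

soot_tau = 2.0 / 52.0  # ~2 weeks, expressed in years (illustrative)
co2_tau = 50.0         # decades-scale persistence (illustrative)

for years in (0.1, 1.0, 10.0):
    print(f"after {years:>4} yr: soot {fraction_remaining(soot_tau, years):.3f}, "
          f"CO2 {fraction_remaining(co2_tau, years):.3f}")
```

Under these assumptions the soot burden is nearly gone within weeks of an emissions cut, while most of the CO2 burden persists for decades, which is the asymmetry behind the "almost immediately" claim.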
Jacobson found that eliminating soot produced by the burning of fossil fuel and solid biofuel could reduce warming above parts of the Arctic Circle in the next fifteen years by up to 1.7 degrees Celsius. For perspective, net warming in the Arctic has been at least 2.5 degrees Celsius over the last century and is expected to warm significantly more in the future if nothing is done.
The most immediate, effective and low-cost way to reduce soot emissions is to put particle traps on vehicles, diesel trucks, buses, and construction equipment. Particle traps filter out soot particles from exhaust fumes.
Soot could be further reduced by converting vehicles to run on clean, renewable electric power.
Jacobson found that although fossil fuel soot contributed more to global warming, biofuel-derived soot caused about eight times the number of deaths as fossil fuel soot. Providing electricity to rural developing areas, thereby reducing usage of solid biofuels for home heating and cooking, would have major health benefits, he said.
Soot from fossil fuels contains more black carbon than soot produced by burning biofuels, which is why there is a difference in impact.
Black carbon is highly efficient at absorbing solar radiation in the atmosphere, just like a black shirt on a sunny day. Black carbon converts sunlight to heat and radiates it back to the air around it. This is different from greenhouse gases, which primarily trap heat that rises from the Earth's surface. Black carbon can also absorb light reflecting from the surface, which helps make it such a potent warming agent.
Jacobson's climate model is the first global model to use mathematical equations to describe the physical and chemical interactions of soot particles in cloud droplets in the atmosphere. This allowed him to include details such as light bouncing around inside clouds and within cloud drops, which he said are critical for understanding the full effect of black carbon on heating the atmosphere.
"The key to modeling the climate effects of soot is to account for all of its effects on clouds, sea ice, snow, and atmospheric heating," Jacobson said. Because of the complexity of the processes, he said it is not a surprise that previous models have not correctly treated the physical interactions required to simulate cloud, snow, and atmospheric heating by soot. "But without treating these processes, no model can give the correct answer with respect to soot's effects," he said.
Jacobson argues that leaving out this scale of detail in other models has led many scientists and policy makers to undervalue the role of black carbon as a warming agent.
The strong global heating due to soot that Jacobson found is supported by some recent findings of Veerabhadran Ramanathan, a professor of climate and atmospheric science at the Scripps Institution of Oceanography, who measures and models the climate effects of soot.
"Jacobson's study is the first time that a model has looked at the various ways black carbon can impact climate in a quantitative way," said Ramanathan, who was not involved in the study.
Black carbon has an especially potent warming effect over the Arctic. When black carbon is present in the air over snow or ice, sunlight can hit the black carbon on its way towards Earth, and also hit it as light reflects off the ice and heads back towards space.
"It's a double-whammy over the ice surface in terms of heating the air," Jacobson said.
Black carbon also lands on the snow, darkening the surface and enhancing melting.
"There is a big concern that if the Arctic melts, it will be a tipping point for the Earth's climate because the reflective sea ice will be replaced by a much darker, heat absorbing, ocean below," said Jacobson. "Once the sea ice is gone, it is really hard to regenerate because there is not an efficient mechanism to cool the ocean down in the short term."
https://www.cognixia.com/blog/translating-long-lost-languages-with-machine-learning
Machine learning could help read languages that still haven’t been deciphered, helping us discover things we never knew about history
In the years since machine learning became widely used, it has significantly transformed the study of linguistics, thanks to the availability of huge annotated databases and techniques for having machines learn from them. As a result, machine translation from one language to another is now quite commonplace. Though machines are not always perfect with language, machine learning has given a fresh new perspective to linguistics. Given almost any language or dialect from across the world, machines can now learn it and help translate anything to and from it.
This ability is now being extended to languages of bygone eras that academics, historians, and linguists have not yet been able to decipher at all, languages that were spoken or written, or both, thousands of years before the Common Era. Jiaming Luo and Regina Barzilay from the Massachusetts Institute of Technology (MIT) and Yuan Cao from Google's AI Lab in Mountain View, California, have developed a machine learning system capable of deciphering lost languages. They have conducted numerous trials and have successfully deciphered a language found on historic relics discovered by the British archaeologist Arthur Evans in 1886, dated to approximately 1400 BCE. Their approach for this project was slightly different from the usual machine learning techniques, though.
Generally, machine translation is powered by the insight that words are related to each other in similar ways, irrespective of the language involved. The process begins with mapping out these relations for a specific language, which requires a huge database of text. A machine searches this database to count how often each word appears next to every other word. This pattern of co-occurrences becomes a unique signature that defines the word in a multi-dimensional parameter space. The word can then be thought of as a vector within this space, and this vector acts as a powerful constraint on how the word can appear in any translation the machine comes up with. These vectors obey some simple mathematical rules. For example: king – man + woman = queen. A sentence can be thought of as a set of vectors that follow one after the other to form a kind of trajectory through this space.
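To make the analogy arithmetic concrete, here is a minimal sketch with hand-picked toy vectors. The words and values are illustrative only; real systems learn embeddings with hundreds of dimensions from co-occurrence statistics rather than setting them by hand.

```python
import numpy as np

# Toy 3-d embeddings chosen by hand so that the gender and royalty
# directions are separable; real embeddings are learned from text.
emb = {
    "king":  np.array([0.9, 0.8, 0.1]),
    "queen": np.array([0.9, 0.1, 0.8]),
    "man":   np.array([0.1, 0.9, 0.1]),
    "woman": np.array([0.1, 0.1, 0.9]),
}

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

target = emb["king"] - emb["man"] + emb["woman"]
best = max((w for w in emb if w != "king"), key=lambda w: cosine(emb[w], target))
print(best)  # -> queen
```

Excluding the query word itself, the nearest vector to king – man + woman is queen, which is exactly the rule quoted above.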
The fundamental idea that drives machine translation is that words in different languages occupy corresponding points in their respective parameter spaces. This enables mapping an entire language onto another language with one-to-one correspondence, and the whole process can be described as finding similar trajectories through these spaces. During the process, the machine never needs to understand what a sentence means.
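One standard way to realize such a mapping, assuming a small seed dictionary of paired words is available, is the closed-form orthogonal Procrustes solution. The sketch below recovers a known rotation between two synthetic "languages"; it illustrates the idea rather than the MIT team's actual model.

```python
import numpy as np

# Align a source embedding space onto a target space with an orthogonal
# map W fitted from paired vectors (orthogonal Procrustes).
rng = np.random.default_rng(0)
true_rotation, _ = np.linalg.qr(rng.normal(size=(5, 5)))

X = rng.normal(size=(100, 5))   # "source language" word vectors
Y = X @ true_rotation            # their "target language" counterparts

U, _, Vt = np.linalg.svd(X.T @ Y)  # closed-form Procrustes solution
W = U @ Vt

print(np.allclose(X @ W, Y))       # True: the mapping is recovered
```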
Luo et al., in their quest to use machine translation to decipher languages that have been lost entirely, add the constraint of how languages are known to evolve over time. The principle is that any language can change only in certain ways: the symbols in related languages still appear with similar distributions, related words still preserve the same order of characters, and so on. Applying these constraints makes it simpler to decipher a particular language, but it requires the progenitor language to be known. So, if one can figure out the progenitor language of a lost language, the machine translation process being developed can be used to decipher it.
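The character-order constraint can be operationalized, for instance, as a normalized edit distance between a lost-language token and candidate words in the presumed progenitor language, with cognates expected to score low. This is a simplified illustration with hypothetical word forms, not Luo et al.'s model:

```python
# Rank progenitor-language candidates for a lost-language token by
# normalized Levenshtein distance: related words tend to preserve
# character order, so cognates should score low. (Illustrative only.)

def levenshtein(a: str, b: str) -> int:
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]

def rank_cognates(token: str, candidates: list[str]) -> list[tuple[str, float]]:
    scored = [(c, levenshtein(token, c) / max(len(token), len(c)))
              for c in candidates]
    return sorted(scored, key=lambda t: t[1])

# Hypothetical forms, purely for illustration:
print(rank_cognates("patir", ["pater", "mater", "frater"]))
```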
One of the biggest challenges that linguists often face is fatigue, which does not affect machines. In some cases, linguists might even need a trial-and-error approach, attempting to decipher a particular lost language against every known language. This is feasible with the machine translation approach, while being quite impossible through human effort alone.
Machine translation could help us find so much more about the ancient civilizations, and how things panned out over the decades. Who knows, with this approach, we might even discover Atlantis!
https://www.msdmanuals.com/en-in/home/older-people%E2%80%99s-health-issues/aging-and-drugs/aging-and-drugs
Drugs, the most common medical intervention, are an important part of medical care for older people. Without drugs, many older people would function less well or die at an earlier age.
Older people tend to take more drugs than younger people because they are more likely to have more than one chronic medical disorder, such as high blood pressure, diabetes, or arthritis. Most drugs used by older people for chronic disorders are taken for years. Other drugs may be taken for only a short time to treat such problems as infections, some kinds of pain, and constipation. Almost 90% of older adults regularly take at least 1 prescription drug, almost 80% regularly take at least 2 prescription drugs, and 36% regularly take at least 5 different prescription drugs. When over-the-counter and dietary supplements are included, these rates are even higher. Older people who are frail, hospitalized, or in a nursing home take the most drugs. Nursing home residents are prescribed multiple different drugs to take on a regular basis.
Older people also take many nonprescription (over-the-counter, or OTC) drugs. Many OTC drugs are potentially hazardous for older people (see Precautions With Over-the-Counter Drugs: Older People).
Benefits and Risks of Prescription Drugs
Many of the improvements in the health and function of older people during the past several decades can be attributed to the benefits of drugs.
Vaccines help prevent many infectious diseases (such as influenza and pneumonia) that once killed many older people.
Antibiotics are often effective in treating serious infections, including pneumonia.
Drugs to control high blood pressure (antihypertensives) help prevent strokes and heart attacks.
Drugs to control blood sugar levels (insulin and other antihyperglycemic drugs) enable millions of people with diabetes to lead normal lives. These drugs also reduce the risk of eye and kidney problems that diabetes can cause.
Drugs to control pain and other symptoms enable millions of people with arthritis to continue to function.
However, drugs can have effects that are not intended or desired (side effects). Starting in late middle age, the risk of side effects related to the use of drugs increases. Older people are more than twice as susceptible to the side effects of drugs as younger people (see Overview of Adverse Drug Reactions). Side effects are also likely to be more severe, affecting quality of life and resulting in visits to the doctor and in hospitalization.
Older people are more susceptible to the side effects of drugs for several reasons:
As people age, the total amount of water in the body decreases, and the amount of fat tissue increases. Thus, in older people, drugs that dissolve in water reach higher concentrations because there is less water to dilute them, and drugs that dissolve in fat accumulate more because there is relatively more fat tissue to store them (see Drug Distribution).
As people age, the kidneys are less able to excrete drugs into urine, and the liver is less able to break down (metabolize) many drugs (see Drug Metabolism). Thus, drugs are less readily removed from the body (see Drug Elimination).
Older people usually take more drugs and have more disorders.
Fewer studies have been done in older people to help identify appropriate doses of drugs.
Older people are more likely to have chronic medical disorders that may be worsened by drugs or that may affect how the drugs work.
Because of these age-related changes, many drugs tend to stay in an older person’s body much longer, prolonging the drug’s effect and increasing the risk of side effects. Therefore, older people often need to take smaller doses of certain drugs or perhaps fewer daily doses. For example, digoxin, a drug sometimes used to treat certain heart disorders, dissolves in water and is eliminated by the kidneys. Because the amount of water in the body decreases and the kidneys function less well as people age, digoxin concentrations in the body may be increased, resulting in a greater risk of side effects (such as nausea or abnormal heart rhythms). To prevent this problem, doctors may use a smaller dose. Or sometimes other drugs can be substituted.
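The dosing logic follows from the standard steady-state relation: average concentration equals dosing rate divided by clearance, so a halved clearance doubles the concentration at the same dose. A minimal sketch with illustrative numbers follows; it is not digoxin dosing guidance.

```python
# Steady-state plasma concentration for a regularly dosed drug:
#   C_ss = (bioavailability * dose / interval) / clearance
# Numbers below are illustrative only -- not clinical dosing guidance.

def steady_state_conc(dose_mg, interval_h, clearance_l_per_h, bioavailability=1.0):
    return bioavailability * dose_mg / interval_h / clearance_l_per_h

young = steady_state_conc(dose_mg=0.25, interval_h=24, clearance_l_per_h=7.0)
older = steady_state_conc(dose_mg=0.25, interval_h=24, clearance_l_per_h=3.5)

print(f"younger kidneys: {young * 1000:.2f} ug/L")
print(f"older kidneys:   {older * 1000:.2f} ug/L  (same dose, ~2x the level)")
```

This is why, as noted above, a smaller dose in an older patient can restore the same target concentration.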
Older people are more sensitive to the effects of many drugs. For example, older people tend to become sleepier and are more likely to become confused when using certain antianxiety drugs (see table Drugs Used to Treat Anxiety Disorders) or sleep aids to treat insomnia. Some drugs that lower blood pressure tend to lower blood pressure much more dramatically in older people than in younger people. Larger decreases in blood pressure can lead to side effects such as dizziness, light-headedness, and falls. Older people who have such side effects should discuss them with their doctor.
Many commonly used drugs have anticholinergic effects (see Anticholinergic: What Does It Mean?). These drugs include some antidepressants (amitriptyline and imipramine), many antihistamines (such as diphenhydramine, contained in over-the-counter sleep aids, cold remedies, and allergy drugs), and many antipsychotics (such as chlorpromazine and clozapine). Older people, particularly those with memory impairment, are particularly susceptible to anticholinergic effects, which include confusion, blurred vision, constipation, dry mouth, and difficulty starting to urinate. Some anticholinergic effects, such as reduction of tremor (as in the treatment of Parkinson disease) and reduction of nausea, are desirable, but most are not.
A drug may have a side effect because it interacts with
A disorder, symptom, or condition other than the one for which the drug is being taken (drug–disease interaction)
Another drug (drug–drug interaction)
Food (drug–food interaction)
A medicinal herb (drug–medicinal herb interaction; see table Some Possible Medicinal Herb–Drug Interactions)
Because older people tend to have more diseases and take more drugs than younger people, they are more likely to have drug–disease and drug–drug interactions. In many drug–disease interactions, taking a drug can worsen a disorder, symptom, or condition (see table Some Disorders and Symptoms That Can Be Worsened by Drugs in Older People).
Patients, doctors, and pharmacists can take steps to reduce the risk of drug–disease and drug–drug interactions. Because over-the-counter drugs and medicinal herbs can interact with other drugs, people should ask their doctor or pharmacist about combining the use of these drugs with prescription drugs.
Not following a doctor’s directions for taking a drug (called nonadherence) can be risky (see Adherence to Drug Treatment). Older age alone does not make people less likely to take drugs as directed. However, up to half of older people do not take drugs as directed. Not taking a drug, taking too little, or taking too much can cause problems. Taking less of a drug because it has side effects may seem reasonable, but people should talk to a doctor before they make any changes in the way they take a drug.
Maximizing the Benefits and Reducing the Risks of Taking Drugs
Older people and the people who care for them can do many things to maximize the benefits and reduce the risks of taking drugs. Any questions about or problems with a drug should be discussed with a doctor or pharmacist. Taking drugs as instructed and communicating with health care providers is essential for avoiding problems and promoting good health.
Know about the drugs and disorders being treated:
Keep a list of all medical problems and drug allergies.
Keep a list of all drugs being taken, including over-the-counter drugs and supplements, such as vitamins, minerals, and medicinal herbs.
Learn why each drug is taken and what its benefits are supposed to be.
Learn what side effects each drug may have and what to do if a side effect occurs.
Learn how to take each drug, including what time of day it should be taken, whether it can be taken with food, or taken at the same time as other drugs, and when to stop taking the drug.
Learn what to do if a dose is missed.
Write down information about how to take the drug or ask the doctor, nurse, or pharmacist to write it down (because such information can easily be forgotten).
Use drugs correctly:
Take drugs as instructed.
Use memory aids, such as a medication organizer, to take drugs as instructed.
Before stopping a drug, consult the doctor about any problems—for example, if side effects occur, if the drug does not seem to work, or if the cost of the drug is burdensome.
Discard any unused drug from a previous prescription, unless instructed not to do so by a doctor, nurse, or pharmacist.
When discarding a drug, follow the disposal instructions on the label, review the information at the Food and Drug Administration's web site, take drugs to an authorized disposal center (possibly at a pharmacy or local law enforcement site), or mix the drug with kitty litter or coffee grounds, tightly wrap in plastic or a similar material, place in a sealable or watertight container or bag, and discard in the trash.
Do not take another person’s drug, even if that person’s problem seems similar.
Check the expiration date on drugs, and do not use the drug if it has expired.
Work closely with the doctor and pharmacist:
Get all prescriptions from the same pharmacy, preferably one that provides comprehensive services (including checking for possible drug interactions) and that maintains a complete drug profile for each person.
Bring all drugs being taken to medical appointments if requested to do so.
Periodically discuss the list of drugs being taken and the list of disorders with the doctor, nurse, or pharmacist to make sure the drugs are correct and should be continued. For example, people can test themselves by telling their health care providers how they are supposed to take all drugs and asking whether what they have said is correct.
Review the list of drugs with the doctor, nurse, or pharmacist every time a drug is changed (doctors and pharmacists can check for interactions between drugs).
Make sure the doctor and pharmacist know about all over-the-counter drugs and supplements being taken, including vitamins, minerals, and medicinal herbs.
Consult the doctor before taking any new drugs, including over-the-counter drugs and supplements.
Report to the doctor or pharmacist any symptoms that might be related to the use of a drug (such as new or unexpected symptoms).
If the schedule of taking drugs is too complex to follow, ask the doctor or pharmacist about simplifying it.
If seeing more than one doctor, make sure each doctor knows all the drugs being taken.
Ask the pharmacist to print the label in large print, and check to make sure it can be read.
Ask the pharmacist to package the drug in containers that are easy to hold and to open.
Remembering to Take Drugs as Prescribed
To benefit from taking drugs, people must remember not only to take their drugs but also to take them at the right time and in the right way. When several drugs are taken, the schedule for taking them can be complex. For example, drugs may have to be taken at different times throughout the day to avoid interactions. Some drugs may have to be taken with food. Other drugs have to be taken when no food is in the stomach. The more complex the schedule, the more likely people are to make mistakes. For example, bisphosphonates (such as alendronate, risedronate, and ibandronate), which are used to increase bone density, need to be taken on an empty stomach and with only water (at least a full glass). If these drugs are taken with other liquids or food, they are not absorbed well and do not work effectively.
If older people have memory problems, following a complex schedule is even harder. Such people usually need help, often from family members. The doctor can be asked about simplifying the schedule. Often, doses can be rescheduled to make taking the drugs more convenient or reduce the total number of daily doses. Also, over time, some drugs may not be needed any longer and can be stopped.
The following things can help people remember to take their drugs as prescribed:
Memory aids can help older people remember to take their drugs. For example, taking a drug can be associated with a specific daily task, such as eating a meal.
A pharmacist can provide containers that help people take drugs as instructed. Daily doses for 1 week or 2 weeks may be packaged in a plastic pack marked with the days or with the times of the day, so that people can keep track of doses taken by noting the empty spaces. Some pharmacies can package drugs in blister packs, so that the daily dose can be easily removed and kept track of. However, such packaging may cost a little more. Additionally, many pharmacies can adjust refill schedules so that regularly used drugs are picked up on a single day each month. This decreases confusion, helps reduce trips to the pharmacy, and minimizes mistakes filling pill organizers.
More elaborate containers with a computerized reminder system are available. These containers beep, flash, or talk at dosing time.
Smartphone apps
Apps that help people manage their drugs can be downloaded to multiple smartphones and tablets. These apps can help older people or their family members remember to take their drugs on time. Many of these apps include reminder alerts, which are sent to the device. Some of these apps may cost money.
The following English-language resources may be useful. Please note that THE MANUAL is not responsible for the content of these resources.
AARP (American Association of Retired People): Broad range of resources intended to help people choose how they live as they age
Benefits Check Up (The National Council on Aging): Information about and search tools to find benefit programs and resources
National Institute on Aging—Tracking Your Medications: Provides guidance and a worksheet to track and manage drugs
How to Dispose of Unused Medicines from the Food and Drug Administration (FDA): Videos and information on drug-take-back programs and disposal methods for drugs
https://www.historicalclimatology.com/blog/archives/11-2016
Dr. Tim Newfield, Princeton University, and Dr. Inga Labuhn, Lund University.
Carolingian mass grave, Entrains-sur-Nohain, INRAP.
Will climate change trigger widespread food shortages and result in huge excess mortality in our future? Many historians have argued that it has before. Anomalous weather, abrupt climate change, and extreme dearth often work together in articles and books on early medieval demography, economy and environment. Few historians of early medieval Europe would now doubt that severe winters, droughts and other weather extremes led to harvest failures and, through those failures, food shortages and mortality events.
Most remaining doubters adhere to the idea that food shortages had causes internal to medieval societies. Instead of extreme weather or abrupt climate change, they blame accidents of (population) growth, deficient agrarian technology, unequal socioeconomic relations, and weak institutions. Yet only rarely have such explanations stolen the show or dominated the scholarship. For example, Amartya Sen’s “entitlement approach” to subsistence crises, which assigns primary importance to internal processes, has made few inroads in the literature on early medieval dearth, although for later periods it has many adherents.
Of course, the idea that big events have a single cause – monocausality, in other words – rarely convinces historians for long. Famine theorists and historians of other eras and world regions now argue that neither external forces such as weather, nor internal forces such as entitlements, alone capture the complexity of food shortages. They propose that these two explanatory mechanisms, often labeled “exogenous” and “endogenous,” respectively, should not be considered independent of one another or mutually exclusive. To them, periods of dearth can be explained by environmental anomalies, like unusual and severe plant-damaging weather, that coincide with socioeconomic vulnerability and declining (for most people) entitlement to food.
These explanations are more convincing. It seems that diverse factors acted in concert to cause, prolong and worsen food shortages. But proof for complex explanations for dearth in the distant past is hard to come by. Though they can be misleading, simpler, linear explanations are much easier to pull out of the extant evidence. This is true even when the sources are plentiful, as they are, at least by early medieval standards, for some regions and decades of Carolingian Europe. Food shortages in the Carolingian period, especially those that occurred during the reign of Charlemagne, have attracted the attention of scholars since the 1960s.
Left: Bronze equestrian statuette of Charlemagne or possibly his grandson Charles the Bald (823-877). Discovered in Saint-Étienne de Metz and now in the Louvre. The figure is ninth century in date. The horse might be earlier and Byzantine. Charles the Bald ruled the western portion of the post-Verdun empire, although whether he was actually bald is still debated.
Right: A Carolingian denarius (812-814) depicting Charlemagne. The Charlemagne of the Charlemagne reliquary mask (center) is handsomer; the coin, though, is contemporary with the emperor, while the bust dates from the mid-fourteenth century. Housed in the Aachener Dom’s treasury, the reliquary contains a skullcap thought to be that of the emperor.
For the Carolingian period, ordinances from the royal court, capitularies, reveal hoarding and speculation, and document official attempts to control the prices and movements of grain, while annalists and hagiographers recount severe winters and droughts. All of this evidence sheds light on dearth. Yet the legislative acts point to internal pressures on food supply, while the narrative sources highlight external ones. As we have seen, neither pressure adequately explains subsistence crises alone.
Unfortunately, however, we rarely have evidence for endogenous and exogenous factors at the same time. Around the year 800, when Leo III crowned Charlemagne imperator, most evidence for dearth comes from the capitularies. Before and after, narrative evidence dominates. So Charlemagne’s food shortages appear to have had internal drivers, and Charles the Bald’s external ones. Or so the written sources lead us to believe.
Carolingian Europe as of August 843 following the Treaty of Verdun. Under rex and imperator Charlemagne (742-814), Carolingian territory stretched to include the area of Europe outlined here.
Fortunately, evidence from other disciplines allows historians to fill in some of the gaps. External pressures are easier to establish by turning to the palaeoclimatic sciences. Using them, we are beginning to rewrite the history of continental European dearth, weather and climate from 750 to 950 CE. We are working on a new study that combines a near-exhaustive assessment of Carolingian written evidence for subsistence crises and weather with scientific evidence for changes in average temperature, precipitation, and volcanic activity (which can influence climate).
We are trying to answer some big questions, such as: What role did droughts, hard winters and extended periods of heavy rainfall have in sparking, prolonging or worsening Carolingian food shortages? Were these external forces the classic triggers of dearth that many early medievalists think they were?
Indicators of past climate embedded in trees and ice can test and corroborate observations of anomalous temperature and precipitation. For instance, the droughts of 794 and 874 CE, documented respectively in the Annales Mosellani and Annales Bertiniani, show up in the tree ring-based Old World Drought Atlas (OWDA, see below). Additionally, as McCormick, Dutton and Mayewski demonstrated, multiple severe Carolingian winters also align fairly neatly with atmosphere-clouding Northern Hemisphere volcanism reconstructed using the GISP2 Greenlandic ice core.
The Old World Drought Atlas (OWDA) for 794 and 874. Negative values indicate dry conditions, positive values indicate wet conditions (from Cook et al. 2015).
By marrying written and natural archives, we are able to perfect our appreciation of the scale and extent of the weather extremes that coincide with Carolingian periods of dearth. Yet instead of simply providing answers, our integrated data are raising questions, and pushing us towards a messier history of early medieval food shortage. This is because the independent lines of evidence often do not agree. For example, only two of the 15 driest years between 750 and 950 CE in the OWDA coincide with drought in Carolingian sources.
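The kind of comparison behind that count can be sketched as a simple set intersection between the driest reconstructed years and the drought years in the written record. The series below is a synthetic stand-in, not the OWDA data:

```python
import heapq

# Synthetic stand-in for a tree-ring drought index (PDSI-like values,
# negative = dry); a real comparison would load the OWDA reconstruction.
reconstructed = {year: ((year * 37) % 13) - 6 for year in range(750, 951)}
documented_droughts = {794, 874}  # e.g. Annales Mosellani, Annales Bertiniani

driest_15 = set(heapq.nsmallest(15, reconstructed, key=reconstructed.get))
overlap = driest_15 & documented_droughts
print(f"{len(overlap)} of the 15 driest reconstructed years are documented droughts")
```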
Admittedly, some of this dissonance may be artificial. The written record for weather and dearth is incomplete; some places and times during the Carolingian era, broadly defined as it is here, are poorly documented. As a result, reported drought years can appear relatively wet in the tree-based OWDA in some Carolingian regions (parts of northern Italy and Provence in 794 and 874, for instance).
Moreover, the detailed or “high-resolution” palaeoclimatology available now for early medieval Europe is much better for some regions than others. Tree-ring series extending back to 750 presently exist for few European regions. It is simply not possible to precisely pair some reported weather extremes or dearths to palaeoclimate reconstructions. Indeed, spatially the two lines of evidence can be mismatched. They can also be seasonally inconsistent, as the trees tell us far less about temperature and precipitation in the winter than they do for the summer.
Matches between historical and scientific evidence are therefore generally limited to the growing seasons, in places where written sources and palaeoclimate data overlap. That is enough to yield some surprising results. When the written record is densest, there is natural evidence for severe weather and rapid climate change, but not for food shortages.
Take the dramatic drop in average temperatures registered in European trees at the opening of the ninth century. According to the 2013 PAGES 2K Network European temperature reconstruction, temperatures were cooler around the time of Charlemagne’s coronation than they had been at any time between the mid-sixth and early eleventh centuries. This dramatic cooling aligns well with a relatively small Northern Hemisphere volcanic eruption, detected in the recent ice-core record of volcanism compiled by the team led by Sigl. The eruption would have ejected sunlight-scattering sulfur aerosols into the atmosphere. Notably, larger events in the Carolingian era, like those of 750, 817 and 822, clearly had less of an influence on European temperature. The cold of 800 is equally pronounced but less unusual in a tree-based temperature reconstruction from the Alps. In this series, the late 820s are remarkably cooler.
Documentary sources register the falling temperatures. The Carolingian Annales regni francorum report severe growing-season frosts (aspera pruina) in 800. The Irish Annals of Ulster document a difficult and mortal winter in an entry quite possibly misdated in the Hennessy edition at 798 (799 or the 799/800 winter is more likely). Yet surprisingly, there is no contemporary record of food shortages in Europe.
Top: European Temperature Reconstruction, 0-2000 CE (data from Pages 2K Consortium, 2013).
Bottom: a multi-panel comparison. Top panel: Sigl et al. 2015 ice-core record of Global Volcanic Forcing (GVF); middle panels: PAGES 2K Consortium 2013 European temperatures (red) and Büntgen et al. 2011 Alpine temperature reconstruction (burgundy); bottom panel: written evidence for food shortages, both famines (F) and lesser shortages (LS). ‘W’ indicates no evidence for dearth but evidence for extreme weather. Between 750 and 950 we have identified 23 food shortages: 12 spatially and temporally circumscribed lesser shortages and 11 large multi-year famines.
Scholars tend to focus on instances when the written evidence for dearth and the natural evidence for anomalous weather align tidily. It seems that just as often, however, the two lines of evidence do not match so neatly. Severe weather may not always have triggered dearth in the early Middle Ages. Contemporary peoples could apparently cope with weather extremes in ways that allowed them to escape food shortages.
Early medieval vulnerability to external forces of dearth seems to have varied over space and time. We need to investigate the contrasting abilities of peoples from different early medieval regions and subperiods, participating in distinct agricultural economies with their own agrarian technologies, to withstand plant-damaging environmental extremes.
Several studies already suggest early medievals were capable of responding to gradual climate change. But to argue that they were not rigid or helpless when faced with marked seasonal temperature or precipitation anomalies, we must first identify, from sparse sources, potential moments of resilience. In this we run the risk of reading too much into absences of evidence. Yet the conclusion seems inescapable: when written sources are relatively abundant and there is no record of dearth during notable deviations in temperature and precipitation, early medievals must have adapted successfully.
Going forward, we must identify both moments and mechanisms of early medieval resilience in the face of climate change. Teasing these out from diverse sources might be tough going, but these elements are missing from the history of early medieval dearth and climate. Their omission has allowed for misleadingly neat histories of climate change and disaster in the period. Similar problems might well plague other histories that too clearly link climate changes to food shortages and mortality crises. Research that complicates these links could offer compelling new insights about our warmer future.
Authors' note: this is a short sampling of a much longer and more detailed multidisciplinary examination of Carolingian dearth, weather and climate, currently in preparation.
P. Bonnassie, “Consommation d’aliments immondes et cannibalisme de survie dans l’Occident du Haut Moyen Âge” Annales: Économies, Sociétés, Civilisations 44 (1989), pp. 1035-1056.
U. Büntgen et al., “2,500 Years of European Climate Variability and Human Susceptibility” Science 331 (2011), pp. 578-582.
U. Büntgen and W. Tegel, “European Tree-Ring Data and the Medieval Climate Anomaly” PAGES News 19 (2011), pp. 14-15.
F. Cheyette, “The Disappearance of the Ancient Landscape and the Climatic Anomaly of the Early Middle Ages: A Question to be Pursued” Early Medieval Europe 16 (2008), pp. 127-165.
E. Cook et al., “Old World Megadroughts and Pluvials during the Common Era” Science Advances 1 (2015), e1500561.
S. Devereux, Theories of Famine (Harvester Wheatsheaf, 1993).
R. Doehaerd, Le Haut Moyen Âge occidental: Economies et sociétés (Nouvelle Clio, 1971).
P.E. Dutton, “Charlemagne’s Mustache” and “Thunder and Hail over the Carolingian Countryside” in his Charlemagne’s Mustache and Other Cultural Clusters of a Dark Age (Palgrave, 2004), pp. 3-42, 169-188.
M. McCormick, P.E. Dutton and P. Mayewski, “Volcanoes and the Climate Forcing of Carolingian Europe, A.D. 750-950” Speculum 82 (2007), pp. 865-895.
T. Newfield, “The Contours, Frequency and Causation of Subsistence Crises in Carolingian Europe (750-950)” in P. Benito i Monclús ed., Crisis alimentarias en la edad media: Modelos, explicaciones y representaciones (Editorial Milenio, 2013), pp. 117-172.
PAGES 2k Network, “Continental-Scale Temperature Variability during the Past Two Millennia” Nature Geoscience 6 (2013), pp. 339-346.
K. Pearson, “Nutrition and the Early Medieval Diet” Speculum 72 (1997), pp. 1-32.
A. Sen, Poverty and Entitlements: An Essay on Entitlement and Deprivation (Oxford University Press, 1981).
M. Sigl et al., “Timing and Climate Forcing of Volcanic Eruptions for the Past 2,500 Years” Nature 523 (2015), pp. 543-549.
P. Slavin, “Climate and Famines: A Historical Reassessment” WIREs Climate Change 7 (2016), pp. 433-447.
A. Verhulst, “Karolingische Agrarpolitik: Das Capitulare de Villis und die hungersnöte von 792/793 und 805/806” Zeitschrift fur Agrargeschichte und Agrarsoziologie 13 (1965), pp. 175-189.
https://animals.mom.me/turtle-sleep-5644.html
Turtles are well protected by their shells, but that does not mean they can brazenly sleep wherever they like. Depending on the species, habitat and size of a given turtle, it may sleep in a variety of places. In most cases, the turtle will choose a refuge that affords him some extra protection while he takes a nap.
Soil and Leaf Litter
Box turtles apparently find leaf litter to be a cozy place for a snooze. In a 1971 issue of “Copeia,” Richard A. Dolbeer detailed his observations of box turtle (Terrapene carolina) hibernating behavior in eastern Tennessee. In the course of his research, Dolbeer found that during the fall, box turtles were always buried in the leaf litter -- presumably asleep -- when they weren't active. Most often, the box turtles were buried deep enough so the tops of their shells were flush with the ground. In another study -- this one by Vincent J. Burke, et al., of the Savannah River Ecology Laboratory, published in a 1994 issue of “American Midland Naturalist” -- mud turtles (Kinosternon subrubrum) were observed leaving the water and digging egg chambers. The turtles then entered the holes, deposited their eggs and remained underground for several days before exiting the holes and returning to the water. Further analysis by researchers showed that the turtles dug their burrows during or after a rainstorm and stayed in the hole until the next rainstorm; presumably the rain softened the ground, facilitating easier entry and exit.
Burrows and Tree Stumps
In Dolbeer’s study, he found that box turtles greatly preferred deep burrows and holes to sleep in during the coldest days of winter. Most often, box turtles used decaying tree stump holes, but some dug their own burrows. Gopher tortoises (Gopherus polyphemus) are perhaps the most remarkable tunnelers of all turtles, and their elaborate tunnels support an ecosystem of some 360 species, including the endangered indigo snake (Drymarchon couperi).
Sometimes, when a burrow or hole isn’t available, a turtle will just crawl deep into dense vegetation for the night. Eastern box turtles (Terrapene carolina) sometimes use blackberry (Rubus sp.) tangles as a sleeping spot. This choice provides two benefits for the turtles: the sharp thorns of the bushes dissuade some predators, and blackberry is an important dietary staple for the turtles.
Most aquatic and some semi-aquatic turtles burrow into the mud to hibernate or sleep. Though turtles breathe air, many species, like soft-shells (Apalone sp.), sliders (Trachemys scripta ssp.) and snapping turtles (Chelydra serpentina), can absorb oxygen directly from the water. Some more terrestrial species, like African mud turtles (Pelusios subniger), will bury themselves in the mud to aestivate when their temporary pools dry up. These turtles will sleep there for up to six months as they wait for the next rainy season.
Turtles may wedge themselves into tight crevices in rock piles or submerged tree stumps for the night. Turtles may also use rock pilings, riprap, dams and other man-made structures for sleeping. Very large turtles, like alligator snapping turtles (Macrochelys temminckii), have few natural predators and may not feel the need for protective structure; these animals may just sleep on the bottom of the pond.
Some aquatic turtles will sleep exposed on their basking spots. The turtles must feel safe to do so; as such, they're more likely to sleep on basking spots surrounded on all sides by deep water. Turtles basking on a tree stump within a few feet of the shore are likely to bask with one eye open so they can quickly dive to safety.
- Copeia: Winter Behavior of the Eastern Box Turtle, Terrapene Carolina, in Eastern Tennessee
- Herpetologica: Mortality in Hibernating Ornate Box Turtles, Terrapene Ornata
- Journal of Herpetology: Diet-Dependent Differences in Digestive Efficiency in Two Sympatric Species of Box Turtles, Terrapene Carolina and Terrapene Ornata
- Physiological Zoology: The Viability of Chrysemys Picta Submerged at Various Temperatures
- American Midland Naturalist: Prolonged Nesting Forays by Common Mud Turtles (Kinosternon Subrubrum)
- Zoology: Mechanisms of Homeostasis During Long-term Diving and Anoxia in Turtles
- Gopher Tortoise Services, Inc. : Why All the Concern About Gopher Tortoises?
- Arkive: East African Mud Turtle
https://arthistoryteachingresources.org/lessons/prehistory-and-prehistoric-art-in-europe/
Prehistory and Prehistoric Art in Europe
First Things First...
The lesson on Prehistory often comes at the very beginning of the semester, so it offers the opportunity to introduce the activity of close looking as well as a historical overview of the very earliest art practices.
There’s a body of literature by educators on the practice of close looking and inviting personal responses from the students (Portland-based museum educator Mike Murawski has a great post on these topics here). You might use the Woman of Willendorf as the key image from which your lecture and discussion will stem for this class on Prehistory. You could use this class activity at the beginning of your class to get your students to look closely at the Woman of Willendorf in pairs (i.e. to practice the art historical skill of formal analysis) before then opening up the discussion to the whole class and figuring out her context together. Some easy first questions to open with after the activity are, “What did you see? What were the first things you noticed?”
The term Prehistory refers to all of human history that precedes the invention of writing systems c. 3100 BCE and the keeping of written records, and it is an immensely long period of time, some ten million years according to current theories. For the purposes of an art history survey, we split our study of Prehistory into two camps: Paleolithic and Neolithic (old vs. new, nomadic vs. settled, “lithic” from the Greek for stone, as these peoples worked with stone tools). This is often a good time to discuss the differences between culture and civilization, the growth of which will become apparent as the survey progresses, and which can be problematized in terms of how your survey narrates the “development” of civilizations.
The timeline covered in this area of the survey is vast, c. 32,000 BCE (Chauvet Cave) to c. 7,000 BCE (Neolithic settlements), but the question that unites this vast chronology is simple and compelling: what can we find out about objects from so long ago, and how do they connect to our contemporary experiences today?
In an hour and fifteen minutes, this question can be investigated through many ancient objects, including:
- Woman of Willendorf, 22,000 BCE
- Lion Human, 32,000 BCE
- Spotted Horses and Human Hands, Pech-Merle, Lot, France, 25,000 BCE
- Hall of Bulls, Lascaux Cave, Dordogne, France, c. 15,000 BCE
- Chauvet Cave, Ardeche Gorge, France, 32,000–30,000 BCE
- Çatal Hüyük, Turkey, 7400–6200 BCE
- Stonehenge, c. 3000–1500 BCE
An object like the Woman of Willendorf can tell us how the female form was viewed culturally, as this isn’t an exact replication of what we know Homo sapiens women looked like at this time. And although she’s been canonized in the art history survey, she’s really not that unique: many similar figures have been found from this era. Her body has been changed to accentuate certain characteristics. Why? Most of the figures from the Upper Paleolithic period are women. Women bear children, and she seems well-nourished; this may have ensured the continuation of the community. She’s also a portable object, which makes sense as prehistoric communities were nomadic and needed to be able to carry their possessions with them.
The Lion Human (found in a cave in the Swabian Jura of southern Germany in 1939) shares certain similarities with French cave wall paintings, which also show hybrid creatures. The French paintings, however, are several thousand years younger than the German sculpture. This sculpture was found with flutes near it, suggesting it was part of a musical ritual or tradition. (Contemporary artists Jennifer Allora & Guillermo Calzadilla’s ‘Raptor’s Rapture’, a work shown at Documenta 2012 in Kassel, Germany, includes a musician specializing in prehistoric instruments playing a flute just like this.) It’s the oldest known zoomorphic sculpture in the world, but as there are no written records left to us, the meaning and purpose of this art can only be guessed at. At the end of the Paleolithic era, there were perhaps over five million inhabitants of the earth.
Cave art is often hidden deep in cave formations, suggesting that it was intended for a privileged subset of people; this theory is similar to that of burial sites such as Stonehenge, where remains of men of a certain age are found. It suggests a society built on hierarchies, one that was structured and ordered. One of the first questions to think about is: how did prehistoric humans work and paint in deep cave formations that would have been pitch-black? They did so by using animal fat lamps (see, for example, the Lamp with Ibex Design). What materials were they using? Natural pigments derived from stone and plants, and charcoal, applied with their hands or rough brushes. Some archaeologists believe that pigment may have been mixed in the mouth and then spat onto the walls (see the archaeological reenactment of painting techniques slide). What was the purpose of the handprints on the walls? It’s unclear: were they signatures? Hand signals used while stalking prey? Used to signify the presence of humans in this animal world? Great examples of cave art include the Lascaux Cave, Le Tuc d’Audoubert, and the Chauvet Cave. Prehistoric cave and rock art was also produced in Australia, Malta (an island between Italy and North Africa), and Algeria, among other sites.
There may be no one single “function” for these works. They changed over generations, over many thousands of years, so while some of their functions may have been passed down orally, these too changed and mutated over time. We can’t even be sure if the works are about the act of painting or the finished images. Even within one generation, or a short span of a few generations, the cave paintings would mean different things to different people depending on their age, experience, perhaps their gender. We can only make educated guesses about what they were used for. However, the difficulty and time required to make the works means they weren’t just for aesthetic pleasure alone. They could have been used for clan rites, such as an initiation for younger (male) clan members. They may have been believed to have magical powers (i.e., showing a successful hunt could prefigure that happening in real life), a precursor to modern systems of belief and religion. However, as Herzog’s documentary points out, this theory has since been somewhat dismissed, as further archaeological evidence suggests that the animals portrayed are not the ones that were hunted.
The Neolithic or Agricultural Revolution followed the Paleolithic era, beginning in the Ancient Near East about 10,000 BCE. Not long afterwards, Neolithic settlements appeared in Europe, Africa, Asia, and the Western Hemisphere. During the next 3,500 years, men and women all over the world radically transformed their relationship to nature, from a dependent one to a more independent one. This was a slow change; it didn’t happen overnight by any means. Human beings learned to manipulate nature: they invented agriculture, which allowed production of a food surplus; they manufactured new types of tools; and they domesticated animals, like dogs, sheep, goats, cattle, pigs, and so on.
Once a food surplus was produced, human beings began to live in fixed village settlements like Çatal Hüyük, in Turkey (7400–6200 BCE). Humans began to develop more lasting ties to specific sites and places. Innovations of the Neolithic era included a division and specialization of labor, the emergence of an artisan class, such as weavers or potters, the development of trade, the invention of private property, and the development of basic political and social institutions.
Neolithic people also created impressive megalithic (“massive-stone”) constructions, such as Stonehenge. Stone was used because of its durability (think of the words or phrases like “written in stone”). Stonehenge uses post-and-lintel construction techniques, and the technology used to lift, transport, and maneuver the massive stones is still the subject of much discussion by archeologists today.
Next came the Urban Revolution (Mesopotamia and Egypt, from about 3500 BCE). This “urban revolution” formed the symbolic boundary between “prehistory” and “history,” as the first writing emerged from the Fertile Crescent, in present-day Iraq. For many, this is the moment that mankind invented “civilization.”
https://stateofamericandemocracy.org/renewing-the-american-democratic-faith/
We hold these truths to be self-evident, that all men are created equal, that they are endowed by their Creator with certain unalienable Rights, that among these are Life, Liberty and the pursuit of Happiness. – That to secure these rights, Governments are instituted among Men, deriving their just powers from the consent of the governed. – Declaration of Independence, 1776
We the People of the United States, in Order to form a more perfect Union, establish Justice, insure domestic Tranquility, provide for the common defence, promote the general Welfare, and secure the Blessings of Liberty to ourselves and our Posterity, do ordain and establish this Constitution for the United States of America. – Preamble, U.S. Constitution, 1787
The founders of the United States of America embarked on an audacious experiment. Could a diverse people given individual freedom and political sovereignty work together to build a unified nation that promotes freedom and advances equality, justice, and the common good? Aware of the fallibility of human nature, many of the founders had serious doubts. They knew that freedom and self-interest would always exist in tension with the ideals of justice and the common good. Nevertheless, they had the faith and courage to persist with the experiment. Their hopes for creating a free, self-governing society rested on the possibility of establishing an innovative constitution with checks and balances that recognizes certain inalienable human rights. They also believed that if the American experiment was to succeed, it would require a moral and spiritual foundation. Essential to the flourishing of a free society, they argued, are shared moral and political ideals leading to a way of life that involves a strong sense of national solidarity, social responsibility, and engaged citizenship. Even though the founders lived in a very different world from ours, their thinking in this regard is one key to rectifying the deficits in our democratic practice in the twenty-first century.
In recent decades, the United States has become a deeply divided and fragmented nation unable to address pressing social, economic, environmental, political, and international challenges. There is an urgent need to reawaken a shared faith in unifying democratic ideals and values that can provide the nation’s people, in all of their cultural diversity, with an inspiring sense of a national identity and common destiny. From the beginning, what has defined the American people as Americans has been their shared moral and political ideals, which is not to deny the history of repeated attempts by some Americans to marginalize others on the basis of race, religion or ethnic origin. A revitalized democratic faith can help to focus the nation on the task of advancing social and economic justice and creating and implementing a compelling, inclusive vision of the common good. This essay explores the ideals and values that formed the early moral and spiritual foundations of the nation and offers reflections on pathways to a renewal of American democracy.
Historians, sociologists, and philosophers have adopted different ways of describing America’s moral and spiritual foundations. They refer, for example, to the American creed, a creedal national identity, the American Covenant, American civil religion, Americanism, the democratic faith, moral democracy, spiritual democracy, and a public moral philosophy. Walt Whitman, John Dewey, and Jane Addams get to the heart of the matter when they conceive of democracy as first and foremost a great moral ideal. It is with this understanding that this essay identifies the nation’s deepest foundations with a faith in shared ideals and a personal way of life along the lines envisioned in Dewey’s A Common Faith.1 Faith in an integrated vision of ideals and values involves wholehearted commitment to a way of being and relating to the world. When the American democratic faith possesses a person’s mind and heart, it can become a relational spirituality that integrates spiritual life and everyday life.
I. The Ideal and the Real
In the midst of significant ethnic and religious diversity there emerged with the American Revolution of 1776 and the subsequent founding of the republic a set of shared moral and political ideals, which were set forth in the Declaration of Independence and the Preamble of the Constitution. Among the ideals affirmed in these documents by “We the people” are freedom, equality, individual rights, the pursuit of happiness, the sovereignty of the people, the rule of law, justice, the common good, peace, unity in the midst of diversity, and intergenerational responsibility. James Madison, credited with being the principal drafter of the Constitution, explained that “the public good, the real welfare of the great body of the people, is the supreme object to be pursued.”2 The Constitution is a secular document that makes no reference to God, prohibits the establishment of religion by the government, and proclaims freedom of religion as a basic right. The Declaration of Independence, however, does affirm the widely held eighteenth century faith in God as the Creator who has endowed “all men” with equal dignity and inalienable rights.
America’s founding ideals were set forth as self-evident truths and universal values that all can know and understand through rational reflection and an awakened conscience. For many they were sacred truths that promised liberation from the oppression, conflicts, and burdens of life in the Old World. “We have it in our power to begin the world over again,” wrote Thomas Paine in Common Sense, his widely debated tract in support of revolution. America was the New World. Further, in the spirit of the eighteenth century Enlightenment movement, many of the founders adopted a cosmopolitan worldview and, like George Washington, saw no conflict between being a patriotic American and identifying as “a citizen of the great republic of humanity at large.”3
There have, however, always been contradictions between America’s aspirational founding vision and the reality that is American society. Moreover, there were complex problems with what the founders succeeded in accomplishing, revealing ways in which the colonists had not left behind the attitudes and practices of the Old World. Women were not understood to possess the rights of men. It would take 150 years of protest by women for the nation to adopt the Nineteenth Amendment, which granted them the right to vote. The 1787 Constitution does not recognize the rights of Native Americans for whom the European settlement of North America was a catastrophe. They were driven from their lands, forced onto reservations, and in the process suffered the destruction of their cultures and the denial of their livelihood with devastating long term consequences. It was not until 1924 that Native Americans were granted the right to vote. The Constitution allowed the continuation of slavery, and Congress was prohibited from outlawing the slave trade for twenty years. The first African slaves arrived on the shores of North America in 1619, and by the time the Constitution was ratified by the thirteen states in 1791 there were over 700,000 enslaved persons in the new nation. During the second half of the eighteenth century the debate over slavery became intense. Founders like Benjamin Franklin, president of the Pennsylvania Society for the Abolition of Slavery, and many citizens of the new republic viewed slavery as an atrocity that should be abolished. The Constitution did permit the states to outlaw slavery, and it was outlawed in the New England states by 1791.4
Leaders like George Washington and Thomas Jefferson, who owned plantations and were slaveholders, could not escape having to face the fact that slavery presented a terrible moral dilemma, and they feared that the nation would suffer grave consequences if it were not ended. Washington did grant his slaves their freedom in his will. However, he and Jefferson knew that if faced with abolition, some of the southern states, whose agricultural economies were built on slavery, would refuse to join the Union. The nation was created deeply divided over slavery, leading tragically within a relatively short time to the Civil War (1861-1865) that took the lives of over 600,000 men and left much of the country in ruins. Only then did Congress adopt the Thirteenth, Fourteenth, and Fifteenth Amendments to the Constitution abolishing slavery, granting African-American citizens “the equal protection of the laws,” and granting African-American men the right to vote. It was in many ways a second founding of the nation. However, confronted during the Jim Crow era with brutal systems of racial segregation and discrimination, including denial of education, jobs, and the right to vote, the struggle of African-Americans for their freedom and rights was far from over. A hundred years after the Civil War, the civil rights movement would generate massive protests over long denied justice that would force major change, but in the twenty-first century the persistence of discrimination and the need for healing remain critical issues facing the nation.
The general principles set forth in the Declaration of Independence and the Preamble of the Constitution remain among the most powerful, transformative social and political ideals ever articulated. However, given the prevalence of ignorance, the strength of narrow self-interest, and the corrupting influence of money and power, advancing freedom, equality, and justice is a never-ending task. Real progress has been and can be achieved, but intolerance and injustice inevitably reappear as human civilization evolves.
America has always been a work in progress, and its founding ideals express the unfulfilled promise of American life and function as a force driving social change. Langston Hughes, the Harlem Renaissance poet, spoke for all marginalized groups in the nation when in 1935 he wrote: “Let America be America again. Let it be the dream it used to be…The land it never has been yet – And yet must be…” Each generation faces the challenge of renewing the nation’s faith in its founding moral and political ideals and striving to give fresh expression to what that faith and commitment to the common good means.
II. Freedom, Virtue, and the Common Good
At the time of the founding of the nation, biblical religion, especially the prophetic call for compassion and social justice, was a major source of moral and spiritual inspiration. The Puritans, who settled New England in the seventeenth century, are sometimes called the first founders, and “Puritanism provided the moral and religious background for fully three quarters of the people who declared their independence in 1776.”5 The Puritans were Protestant Christian reformers bent on purifying the church and transforming society. In seeking to understand themselves and their mission, they drew heavily on biblical narratives and language. America was the new Promised Land. Their destiny was to create a New Israel in America founded on a new Covenant between God and his people. They were especially mindful that in the biblical narrative God’s liberation of the Hebrew people from slavery in Egypt is followed by revelation of the law and establishment of the Mosaic Covenant at Mt. Sinai. In short, the Puritans like the ancient Hebrews believed that freedom must be harnessed and guided to become a force for the good.
The Puritans’ spiritual and moral idealism has had an enduring influence. Most of us want to believe that America is a unique nation with a higher moral purpose. George Washington reflected this idealism when in his Farewell Address he stated: “It will be worthy of a free, enlightened… nation, to give to mankind the magnanimous and too novel example of a people always guided by an exalted justice and benevolence.” In addition, from John Winthrop’s call in 1630 to build “a city upon a hill” to Abraham Lincoln’s Second Inaugural Address to Martin Luther King Jr.’s civil rights movement speeches on the American dream, reformers and political leaders have used biblical ideals and themes to interpret the meaning of American history and to inspire change.6
Prior to the Revolution there was much discussion in the colonies about republicanism as a form of government and way of life that provided an alternative to British monarchical society with its rigid social hierarchy characterized by inequality, dependence, and subordination. A republic is a self-governing society without a king in which the supreme power resides with those recognized as citizens who enjoy a certain measure of freedom and equality and elect their own government leaders.7 Many of the founders were influenced by ancient models of republican government, especially Athens and Rome, as well as by more contemporary experiments with republicanism. It was republican ideals and values, which were embraced by the Enlightenment movement, that gradually undermined support for monarchical society in the colonies leading eventually to revolution.8 The goal of those involved in drafting the Constitution was to establish the United States as a modern republic sustained by both Christian and classical moral values.
The founders believed that the republic they were creating would require independent, visionary leaders with exceptional intelligence and moral integrity and citizens with a high level of moral and social responsibility. In this regard, they emphasized the vital importance of virtue as the guide to individual fulfillment and social well-being. In accord with the Socratic tradition, they identified virtue with wisdom, knowledge of the good – the product of practical experience, rational inquiry, and deep experiential insight. In the republican tradition, the defining characteristic of civic virtue is devotion to the common good, and its highest expression is self-sacrifice for the general welfare. For the founders, who aspired to create a new, enlightened moral and social order, a love of virtue and wisdom and a liberal arts education were essential to sound political leadership.
The federal and state constitutions manifest a qualified trust that the citizens of the new republic would have the intelligence and moral responsibility to elect good leaders. The founders did fear that self-interest could generate a divisive factionalism and anarchy, leading to new forms of tyranny. To guard against such a development they insisted that certain cultural conditions be established, including access to a basic education and information on public affairs. The state constitutions limited the vote to tax-paying male property holders. In addition, most of the founders agreed with George Washington when in his Farewell Address he asserted: “Of all the dispositions and habits which lead to political prosperity, religion and morality are indispensable supports.” Washington understood that shared moral values underlie respect and support for the law, and he believed that religion and the love and fear of God are necessary to sustaining a commitment to “the national morality.” He considered “wisdom and virtue” to be “the great pillars of human happiness” and “the foundation of the fabric… of free government.”9
The founders were realists about human nature. They were well aware of the Christian doctrine of the universality of sin and of the warnings in classical philosophy regarding the corrupting power of the passions. For this reason they crafted a constitution that divided sovereignty between the federal government and the thirteen states, constructed a federal government with a separation of powers and checks and balances, and protected freedom of speech and the press. It was an innovative system of government carefully designed to prevent the centralization of power in the hands of any individual leader or group. Defending the Constitution, Washington declared: “A just estimate of that love of power, and proneness to abuse it, which predominates in the human heart, is sufficient to satisfy us of the truth of this position.”10
The founders’ republican philosophy also emphasized the interrelation between the realization of freedom, the nurturing of virtue, and citizen engagement in the process of self-government. Republicanism regards humans as social and political beings who find themselves and thrive in and through their relationships and as members of self-governing communities. People may be born with an inalienable right to liberty, but they also need each other and are interdependent. In this tradition, autonomy is a prized value, but there is understood to be no inherent contradiction between freedom and respect for the law. For the revolutionaries, political independence meant living in a society governed by the rule of law as opposed to the arbitrary will of a tyrannical monarch. Further, personal freedom meant more than the absence of restraint. Someone in the grip of delusion and blind passion was not considered to be free. The defenders of republican values taught that becoming a truly free person requires self-knowledge, self-discipline, and the wisdom and will to pursue what is good and just.
In addition, human beings naturally aspire to be part of something greater than their own self-interest, and the founders believed that people realize their full potential as free persons in and through engaged citizenship – working collaboratively with fellow citizens to protect their rights and to advance shared goals. They viewed active participation in self-government and other forms of public service in civil society as the primary way independent citizens acquire a sense of belonging to a community, are educated about public affairs, and develop the virtues and skills needed to deliberate productively and cooperate in advancing the common good.11 Here was a way of shared life that promised to nurture and strengthen a sustaining, spiritual sense of connection, meaning, and purpose.
III. Equality and the Democratic Way of Life
The result of the Revolution was a far-reaching reconstruction of American society that gave birth to the dream of a new democratic culture. It is the original American dream of what America could be. In this new democratic, moral, and spiritual ethos is found the only sure foundation for the flourishing of the nation’s political democracy. Again and again the nation has strayed from this understanding, and it finds itself anew by rediscovering its foundational moral and spiritual ideals, what Alexis de Tocqueville described as those “habits of the heart” that create authentic community.
The two most powerful American ideals driving revolutionary change are the principles of freedom and equality. Freedom has been America’s most cherished ideal, “the holy light,” celebrated in the nation’s statues of liberty and in song, sermons, protests, and political rhetoric. However, when reduced to a self-centered individualism, freedom undermines equality, the general welfare, and democracy itself. The ideal of equality is “the most radical and most powerful ideological force let loose in the Revolution.”12 Freedom and equality together with the ideals of individual rights, the rule of law, justice, and the common good are the fundamental values at the heart of the spiritual and ethical vision that expresses the true spirit and deeper meaning of American democracy.
One major influence shaping the spiritual and moral thinking that would over many centuries eventually lead to the American concept of equality is found in biblical religion. The Book of Genesis proclaims that human beings are created in the image of God, recognizing the immeasurable, intrinsic value and dignity of each and every person.
The basic moral teaching of the ancient Hebrew prophets is the imperative to do what is right, good and just and to avoid evil. The most concise, general formulation of what this means is set forth in the Bible’s Golden Rule, which calls on us to treat others with a certain equal moral consideration. God commands the people of Israel: “You shall love your neighbor as yourself.” (Leviticus 19:18) Jesus’ teaching in the Sermon on the Mount summarizes the principle as follows: “In everything do to others as you would have them do to you; for this is the law and the prophets.” (Matthew 7:12) This very general ethical principle invites us to identify with the other as a person who shares with us a common humanity and who like us seeks happiness and wants to avoid suffering. It encourages an attitude of empathy and sympathy. It is the imperative to treat all persons as an end and never as a means only. The clear implication of the principle is that one should strive to help others and avoid harming them.
In the West, the Golden Rule was promoted by the Christian and Jewish traditions, but it is a universal ethical principle that finds expression in many different religions and cultures around the world. Further, it is a very general ethical principle that tells us what to think about when we are trying to decide what to do, but it does not tell us exactly what to do in any specific situation. The implications of the principle are interpreted differently by diverse cultures and generations. As the ethical consciousness of a people expands and becomes more tolerant and inclusive, their understanding of these implications can change dramatically as is happening with contemporary efforts to end discrimination of all kinds.
In the new republic, equality meant rejection of the notion that one’s worth and social standing as a person are determined by birth and class. “Ordinary Americans came to believe that no one in a basic down-to-earth and day-in-and-day-out manner was really better than anyone else. That was equality as no other nation has ever quite had it,” asserts the American historian, Gordon Wood.13 The core notion in the American ideal of equality is the moral principle that all people in the midst of their diversity share a common humanity and are born equal in dignity and all, therefore, are worthy of respect and moral consideration. Right relationship begins with recognition of the inherent and equal dignity of other persons as human beings like oneself. In the eighteenth century, it was, of course, primarily men, and not all men, who were recognized as equal. However, if the American story has a moral meaning, to a large extent it is found in the ongoing, contentious process of working out the full implications of the idea that all human beings are “created equal.”
Equality and liberty are interrelated ideals. The principle of equality meant to the revolutionaries that all white male citizens should be free in the sense of independent and autonomous, able to think and decide for themselves. Further, inherent in the ideals of equality and freedom is the belief that all should have the opportunity to develop their abilities and pursue their aspirations. Moreover, it was widely understood that with freedom and opportunity goes an obligation to strive to realize one’s potential and to contribute according to one’s talents to the life of the nation. Equality of opportunity meant that all careers, including leadership in government, should be open to any white man with the self-discipline, intelligence, and virtue required. The revolutionaries were not interested in leveling society and eliminating all economic inequality. However, they did believe that a large gap between the rich and the poor is a destabilizing force in a republic, and they strongly opposed the development of an aristocracy in America. It was their expectation that equality of opportunity would create more equitable economic conditions, but they did not foresee the dramatic way in which capitalism would increase economic inequality.
Respect for the equal dignity of other persons includes respect for their basic rights. The concept of individual rights had a long history in England stretching from the thirteenth-century Magna Carta to John Locke’s liberal defense of the equality and natural rights of all men in the seventeenth century. Initially the concept of rights and liberties was developed as a way to secure protection for the people from the arbitrary power of monarchs and the abuse of power by governments. However, as the human rights tradition has developed, the vision of humanity’s civil, political, economic, and social rights has also become an essential part of the vision of what constitutes the common good. Human rights clarify the basic conditions necessary for the flourishing of freedom, equality of opportunity, self-government, and full human development.
It was the hope of the revolutionaries that the spirit of freedom coupled with cultivation of mutual respect among the American people would open new pathways to authentic community and national unity. The goal was E Pluribus Unum, out of many one. The Enlightenment movement taught that when the inborn moral and spiritual capacities for empathy, sympathy, and love are supported and nurtured, people are naturally drawn to form friendships, cooperate, and build community. As they abandoned the old monarchical social order and looked for new ways to hold the American people together and build “a more perfect Union,” many of the revolutionaries envisioned the spirit of freedom and equality being infused with these natural feelings of sympathy, love, and benevolence, creating the new social ties and bonds needed.14 George Washington expressed this ideal in his “Farewell Address” when he envisioned a nation “bound together by fraternal affection.” Encouraged that the end of the Civil War was in sight and the Union would prevail, Abraham Lincoln in his Second Inaugural Address endeavored to restore this spirit with his concluding words: “With malice toward none, with charity for all, with firmness in the right as God gives us to see the right, let us strive to finish the work we are in, to bind up the nation’s wounds, to care for him who shall have borne the battle and for his widow and his orphan, to do all which may achieve and cherish a just and lasting peace among ourselves and with all nations.”
In the late eighteenth and early nineteenth centuries, the newly created republic very rapidly underwent a major democratic transformation. In the midst of an enormous release of creative energy, the franchise was steadily expanded. Men from all walks of life with a host of different interests, not just the highly educated and public spirited, were elected to serve at all levels of government. In this regard, historians describe a transition from the republicanism envisioned by the founders to the kind of fractious democratic republic that has existed ever since. It is not, however, the purpose of this essay to explore this history. It remains to consider the contemporary relevance of the founders’ revolutionary ideals and dreams.
IV. Radical Individualism and the Fraying Moral Fabric of Society
We live in a highly complex industrial-technological society linked with a global civilization that the founders, who traveled on horseback and illuminated their homes with candlelight, could never have imagined. Nevertheless, we can still benefit from their republican social and political philosophy, especially their understanding of freedom and firm belief that a free society requires a foundation of shared moral and political ideals to hold the nation together at all levels. In recent decades, however, the influence of the founders’ republicanism has waned, and, as Michael Sandel, the political philosopher, explains, there is a widespread sense “that from family to neighborhood to nation, the moral fabric of community is unraveling around us.”15 In “A Report to the Nation,” the Council on Civil Society states that “the core challenge” facing America is “the moral renewal of our democratic project.” The Council Report asks: “What are those ways of life that self-government requires? What are those qualities the Constitution presupposes in the American people? They are precisely those qualities that are currently disappearing from our society.”16 As the bonds of community have frayed, the safety and wellbeing of children has declined, increasing numbers of people struggle with loneliness, depression, and drug addiction, suicide rates keep climbing, and the nation’s politics become ever more abrasive. There are many contributing factors to this moral and spiritual crisis, including economic inequality, the internet, and social media. However, the most fundamental underlying cause is the culture of radical individualism that became widespread in the wake of the 1960s.17
Radical individualism involves the notion that the self exists as a separate entity independent of other selves and society. It makes the autonomous self, not the community, the center of the individual’s world. Freedom is defined simply as the absence of restraint and the ability to do whatever one desires. Radical individualism values personal achievement over social relationships. It emphasizes being faithful to one’s own personal values, but it also involves the notion that a person is not bound by any moral obligations or civic duties that the individual has not chosen to embrace. This brand of individualism may lead to a personal spiritual quest, but it also has generated a culture of narcissism and consumerism and a secular moral relativism that considers moral judgments to be just subjective personal preferences. One implication of this outlook is liberal neutrality, the widespread idea that government should be neutral regarding any particular set of moral and spiritual values beyond respect for human rights, tolerance, and fair procedures for resolving conflicts.
The weakening of the bonds of community and the social fragmentation created by radical individualism have been further complicated in recent decades by the rise of identity politics, which divides society into an ever-increasing number of minority groups who feel deep resentment over how they have been marginalized and oppressed. Many in these groups have grown disillusioned with having faith in traditional American ideals. On the one hand, identity politics has focused attention on real injustices suffered by women, Native Americans, African-Americans, immigrants, and LGBT people. On the other hand, identity politics is a force generating new forms of tribalism and fragmenting society. This has contributed to a form of multiculturalism that supports a vision of American society as divided into many competing groups with distinct identities and no common national identity.
The most fundamental problem with America’s radical individualism is that it is based on a misguided concept of the self and freedom. It involves what Thomas Merton, the American Trappist monk, has called “the illusion of separateness.” Everything that exists is both a unique individual and interconnected with the larger universe. People are born interconnected with families, local communities, spiritual traditions, nations, the larger human family, and the greater community of life on Earth. With these interconnections go ethical responsibilities without which human life and development are not possible. Moreover, the individual finds meaning and purpose and realizes true freedom through a sense of belonging, self-mastery, honesty, humility, trustworthiness, courage, caring, compassion, and service. It is love that makes us whole. Centuries ago Rabbi Hillel said it simply: “If I am not for myself, who will be for me? If I am only for myself, what am I? And if not now, when?”
V. Renewing and Reconstructing the Democratic Way of life
The ideal of building a cohesive, pluralistic society guided by reason and experience and inspired by a love of freedom coupled with a compassionate spirit of respect for the equal dignity of all persons is the meaning of spiritual democracy.18 A commitment to human rights, justice, and the common good flows from respect for the freedom and equality of the other. In the world’s great spiritual traditions, spiritual development is centrally concerned with nurturing a sense of meaning and purpose and the spirit of understanding, compassion, and love. It is in this regard that one can talk about democracy as a spiritual practice. For many, spiritual democracy is a way of being and living together inspired by a religious faith but for others it springs from an open-hearted ethical humanism. Whatever its source, it is both a demanding and a rewarding spiritual and moral discipline that never forgets there is a tendency toward injustice as well as goodness in all of us. In these troubled times, those seeking the soul of America, the animating spirit of the nation, will find it here.
Paula Winkler, who promoted the role of women in Judaism, in a letter to her soulmate and future husband, Martin Buber, sharpens the focus on the spiritual dimension of respect for the dignity of the other that is at the heart of the democratic faith and spiritual democracy. She writes: “Our attitude toward each other ought above all to be person to person – not ‘Frenchman to German,’ not ‘Jew to Christian,’ and perhaps less of ‘man to woman.’”19 Had Winkler been writing in America today, she might have added “black to white, Democrat to Republican, and established citizen to immigrant.” One ideally encounters the other first and foremost simply as a fellow human being. As Paula Winkler makes clear in her letter, embracing this attitude should not be confused with an attempt to deny the importance of difference and to blur the distinctions between cultures and religions. It is, however, a corrective to a one-sided emphasis on difference that generates separatism and fails to perceive what can bring people together in the midst of their diversity. It involves what Howard Thurman, the black spiritual leader and philosopher, describes as listening for “the sound of the genuine” in the other. It awakens compassion. It makes possible dialogue, forgiveness, and cooperation. The nation’s many social problems in the twenty-first century involving sexism, sexual abuse, racial discrimination, interreligious hatred, oppression of minorities, and economic inequality, as well as the contempt and hatred that pollute the nation’s politics, are rooted in a failure to respect the inherent dignity of the other.
In concluding it is important to ask: from the perspective of the twenty-first century, what fundamental ideals and values are missing in our federal and state constitutions? The science of ecology has made clear that human beings are an interdependent part of the greater community of life on Earth and that the degradation of the planet’s environment by human activity threatens to undermine the right to life, liberty, and the pursuit of happiness for present and future generations. The Preamble to the Constitution does recognize that the American people have a basic responsibility to “secure the Blessings of Liberty to ourselves and our Posterity.” In this regard, it should be made explicit in the nation’s legal systems that future as well as present generations have a right to a clean and healthy environment. Were there to be a constitutional amendment on the environment, it should include a declaration affirming the interdependence of humanity and nature and the ethical imperative to respect and care for the greater community of life in all its diversity. The Earth Charter, a people’s treaty that is the product of a worldwide, cross-cultural dialogue on shared values and the common good, provides a very good example of how democratic and ecological ideals can be integrated in a comprehensive ethical vision for building a better world.20
The moral and spiritual transformation necessary for the renewal of our democracy will have to come from the bottom up beginning with families, educational institutions, religious organizations, local communities, and voluntary civic associations. Only pressure from the people will force the national political parties to find ways to work together intelligently and responsibly for the public welfare. A transformation at the local level in how people interact, deliberate, and build community is what is needed. It is the direct experience of a better way of life that will change people’s thinking, attitudes, and behavior. Such change can help to promote a national dialogue about the spiritual and moral values essential to the flourishing of a free society in an age of economic globalization and the internet. Hopefully it might lead to what Michael Sandel calls “a new politics of the common good” that generates “moral and spiritual aspiration,” healing and reconciliation, and a compelling vision of America’s national purpose.21
John Dewey argued that the most effective way to promote social reconstruction is to start with a transformation of the schools, and one promising development is the growing support for PreK-12 school reform movements that promote education of the whole child. These initiatives include social and emotional learning, character education, education in compassion, training in mindfulness, and citizenship education. In addition, guided by new scientific research demonstrating the close correlation between the wellbeing of young people and their spiritual and moral development, a National Council on Spirituality in Education has been formed with strong support from educators.22
There are many schools that are already committed to education for the whole child and that promote the spiritual, moral, and aesthetic dimensions of development as well as social, emotional and academic learning. Some have a religious affiliation and some are secular. What is especially critical in this regard is the school culture. Renewing American democracy will require taking the spirit of holistic education and spiritual democracy into all the nation’s schools.
How the founding of the nation and American history are taught is an especially critical issue if the American people are to recover a sense of shared national identity. Given the influence of identity politics, some want to tell the American story primarily from the perspective of marginalized and oppressed groups, emphasizing the grave injustices that have occurred and the moral failures of the nation’s dominant white, male leadership. While the truth in this story needs to be honestly addressed, insofar as the objective of such a narrative is to condemn America and discredit her institutions, it contains no real answer to the problems that face oppressed groups, the nation, and the world. The alternative is to highlight what is truly enlightened, innovative, and good in the revolutionary founding of the nation and its history without denying the moral failures and injustice that are tragically very much a part of the story. If there is to be progressive change, hope is essential. The nation’s founding ideals and the struggle to realize those ideals by women and men of courage and good will from all races, ethnic groups, and religions should be viewed as one major source of hope that can inspire oncoming generations.
In his inaugural address as president, John F. Kennedy invoked the spirit of the civic republicanism of the founders when he stated: “Ask not what your country can do for you, ask what you can do for your country.” One way to awaken a new sense of national identity, solidarity, and commitment to the common good among the American people is to require of all citizens, women and men, one or two years of national service after high school or college. The requirement could be fulfilled by serving in the military, teaching in the public schools, or through any number of other socially beneficial domestic or international projects.
In conclusion, if the American people are to come together to address the great social, economic, and environmental challenges they face in the twenty-first century, it will require deepening our understanding of how to live meaningfully, responsibly, and joyfully with freedom. In this regard, the upcoming 250th anniversary of the Revolution presents a unique opportunity to rediscover the common sense in the founders’ republicanism and to commit ourselves to a fresh vision of the democratic faith infused with reverence for the mystery of being, a passion for what is right and just, and a new respect for Earth, our shared home in the universe.
1 Jo Ann Boydston, ed., John Dewey: The Later Works, 1925-1953, Vol. 9 (Carbondale, Ill.: Southern Illinois University Press, 1986), 1-58. See also John Dewey, “Creative Democracy—The Task Before Us” in LW 14:224-230.
2 Federalist Paper 45. Robert Bellah, et al., Habits of the Heart: Individualism and Commitment in American Life (Berkeley, Cal.: University of California Press, 1985), 252-256.
3 As quoted in Gordon S. Wood, The Radicalism of the American Revolution (New York: Alfred Knopf, 1992), 221.
4 Jill Lepore, These Truths: A History of the United States (New York: W.W. Norton & Co., 2018), 36-38, 123-128.
5 Sydney Ahlstrom, A Religious History of the American People (New Haven: Yale University Press, 1972), 124.
6 Philip Gorski, American Covenant: A History of Civil Religion From the Puritans to the Present (Princeton: Princeton University Press, 2017), viii-ix, 14-23, 37-44, 88-92, 116-131, 148-157.
7 James Madison, Federalist Paper 39.
8 Wood, Radicalism, 95-109.
9 George Washington, Farewell Address.
10 Ibid.; James Madison, Federalist Paper 47.
11 Gorski, American Covenant, 23-26, 62, 82, 223-225; Wood, Radicalism, 104.
12 Wood, Radicalism, 232.
13 Ibid., 234.
14 Ibid., 229-240.
15 Michael J. Sandel, Democracy’s Discontent: America in Search of a Public Philosophy (Cambridge: Harvard University Press, 1996), 3.
16 See “A Call to Civil Society: Why Democracy Needs Moral Truths” issued by the Council on Civil Society, a joint project of the Institute for American Values and the University of Chicago Divinity School. The Council’s report was issued in 1998, but the moral decline it describes has only worsened and its recommendations remain as relevant as when first released.
17 For an illuminating account of the origins and nature of radical individualism in America and the spiritual challenge it presents, see Robert Bellah, Habits of the Heart. See also David Brooks, The Second Mountain: The Quest for a Moral Life (New York: Random House, 2019), 3-11; Gorski, American Covenant, 28-30.
18 I was first introduced to the concept of “spiritual democracy” by Maurice Wohlgelernter’s Introduction to M. Wohlgelernter, ed., History, Religion, and Spiritual Democracy: Essays in Honor of Joseph L. Blau (New York: Columbia University Press, 1980), lxv-lxxiv.
19 Paul Mendes-Flohr, Martin Buber: A Life of Faith and Dissent (New Haven, CT: Yale University Press, 2019), 13.
20 The Earth Charter has been endorsed by over 750 organizations in the United States and over seven thousand organizations worldwide. See www.earthcharter.org.
21 Michael J. Sandel, Justice: What’s the Right Thing to Do? (New York: Farrar, Straus and Giroux, 2009), 261-269.
22 For information on the Collaborative for Spirituality in Education and the National Council, see www.spiritualityineducation.org.
Originally printed on September 19, 2019. Republished with permission of the author.
http://www.teachingideas.co.uk/art/persp.htm
Age Range: 5 to 11
This activity highlights a very easy method of making pictures and text which look three-dimensional. First of all, start with a few shapes dotted around the page:
Next, draw a dot anywhere on the page (it doesn't always have to be in the middle). This dot is called the "vanishing point". When you have drawn your vanishing point, start drawing STRAIGHT lines from it to the corners of one of the shapes:
Now, repeat this for all of the shapes (on shapes with curved edges, draw lines to the sides - see the picture).
Subtle shading of the parts "furthest away" helps to increase the sense of perspective:
The following two pictures illustrate the effect of moving the vanishing point around the page. As you can see, the placing of the VP has an immense effect on the outcome of the drawing.
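For teachers who want to demonstrate the geometry on a whiteboard, here is a minimal Python/matplotlib sketch of the technique (my own illustration, not part of the original activity; the square, the vanishing point position, and the variable names are all made up for the example):

```python
# A sketch of the vanishing-point technique: draw a shape, then draw
# straight lines from the vanishing point (VP) to each of its corners.
import matplotlib.pyplot as plt

vp = (7.0, 5.0)                              # the vanishing point (hypothetical position)
corners = [(1, 1), (3, 1), (3, 3), (1, 3)]   # corners of a simple square

xs, ys = zip(*(corners + corners[:1]))       # repeat the first corner to close the outline
plt.plot(xs, ys, "k-")                       # the original shape
for cx, cy in corners:
    plt.plot([vp[0], cx], [vp[1], cy], "k--")  # straight lines from the VP to each corner
plt.plot(*vp, "ko")                          # mark the VP itself
plt.axis("equal")
plt.show()
```

Moving `vp` around and re-running the script mimics the effect described above: the placing of the vanishing point changes the apparent depth of the shape.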
Make a 3D Name Display...
When the children are familiar with this technique, they can help to make a display in their classroom. Here is one which I created in a classroom...
To make your name display, the teacher should first make a name page for each child in the class. This page can be created using WordArt. Make sure that you use first names only (which should fill most of an A4 page), and change the text colouring to white and add a thin black border to the text.
When you have your name pages, ask the children to use the above technique to make them look 3D. When this is done, they can colour their names (using random colours, or using a pattern if desired). These should then be cut out (by the teacher, as the names can be tricky to cut) and stuck on to your display.
You could also make a "Welcome to..." sign to indicate the name of the class as in the above photo. This can be done by making 2 A4 name pages (with "Welcome to" on one page and the name of the class on the other). The pages should then be enlarged onto A3 paper and stuck together. Then, draw a vanishing point in the middle, and make your picture look 3 dimensional!
https://en.wikibooks.org/wiki/Signals_and_Systems/Time_Domain_Analysis
Signals and Systems/Time Domain Analysis
There are many tools available to analyze a system in the time domain, although many of these tools are very complicated and involved. Nonetheless, these tools are invaluable for use in the study of linear signals and systems, so they will be covered here.
- 1 Linear Time-Invariant (LTI) Systems
- 2 Linear Time Invariant (LTI) Systems
- 3 Other Function Properties
- 4 Linear Operators
- 5 Impulse Response
- 6 Convolution
- 7 Correlation
Linear Time-Invariant (LTI) Systems
This page will contain the definition of a LTI system and this will be used to motivate the definition of convolution as the output of a LTI system in the next section. To begin with, a system has to be defined and the LTI properties have to be listed. Then, for a given input it can be shown (in this section or the following) that the output of a LTI system is a convolution of the input and the system's impulse response, thus motivating the definition of convolution.
Consider a system for which an input of xi(t) results in an output of yi(t) respectively for i = 1, 2.
There are 3 requirements for linearity. A function must satisfy all 3 to be called "linear".
- Additivity: An input of x1(t) + x2(t) results in an output of y1(t) + y2(t).
- Homogeneity: An input of c·x1(t) results in an output of c·y1(t), for any constant c.
- If x(t) = 0, then y(t) = 0.
"Linear" in this sense is not the same word as is used in conventional algebra or geometry. Specifically, linearity in signals applications has nothing to do with straight lines. Here is a small example:
y(t) = x(t) + 5
This function is not linear, because when x(t) = 0, y(t) = 5 (fails requirement 3). This may surprise people, because this equation is the equation for a straight line!
Being linear is also known in the literature as "satisfying the principle of superposition". Superposition is a fancy term for saying that the system is additive and homogeneous. The terms linearity and superposition can be used interchangeably, but in this book we will prefer to use the term linearity exclusively.
We can combine the three requirements into a single equation: In a linear system, an input of c1·x1(t) + c2·x2(t) results in an output of c1·y1(t) + c2·y2(t), for any constants c1 and c2.
A system is said to be additive if a sum of inputs results in a sum of outputs. To test for additivity, we need to create two arbitrary inputs, x1(t) and x2(t). We then use these inputs to produce two respective outputs:
x1(t) → y1(t), x2(t) → y2(t)
Now, we need to take a sum of inputs, and prove that the system output is a sum of the previous outputs:
x1(t) + x2(t) → y1(t) + y2(t)
If this final relationship is not satisfied for all possible inputs, then the system is not additive.
Similar to additivity, a system is homogeneous if a scaled input (multiplied by a constant) results in a scaled output. If we have two inputs to a system:
x1(t) → y1(t), x2(t) = c·x1(t) → y2(t)
where c is an arbitrary constant, then the system is homogeneous if
y2(t) = c·y1(t)
for any arbitrary c.
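These two tests are easy to spot-check numerically for memoryless systems. Below is a minimal Python sketch (my own example, not from the original page; the helper `is_linear` and the two lambda systems are hypothetical names chosen for illustration):

```python
# Spot-check superposition on random sampled inputs: a linear system must
# satisfy system(a*x1 + b*x2) == a*system(x1) + b*system(x2).
import numpy as np

def is_linear(system, trials=5, n=100, tol=1e-9):
    rng = np.random.default_rng(0)
    for _ in range(trials):
        x1, x2 = rng.standard_normal(n), rng.standard_normal(n)
        a, b = rng.standard_normal(2)
        lhs = system(a * x1 + b * x2)
        rhs = a * system(x1) + b * system(x2)
        if not np.allclose(lhs, rhs, atol=tol):
            return False  # superposition fails for some input
    return True

print(is_linear(lambda u: 2 * u))   # True: pure scaling is linear
print(is_linear(lambda u: u + 5))   # False: y = x + 5 fails requirement 3
```

The second call confirms the point made above: y(t) = x(t) + 5 draws a straight line but is not a linear system.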
If the input signal x(t) produces an output y(t) then any time shifted input, x(t + δ), results in a time-shifted output y(t + δ).
This property can be satisfied if the transfer function of the system is not a function of time except expressed by the input and output.
Example: Simple Time Invariance
To demonstrate how to determine if a system is time-invariant, consider the two systems:
- System A: y(t) = t·x(t)
- System B: y(t) = 10·x(t)
Since system A explicitly depends on t outside of x(t) and y(t), it is time-variant. System B, however, does not depend explicitly on t, so it is time-invariant.
Example: Formal Proof
A more formal proof of why systems A & B from above are respectively time varying and time-invariant is now presented. To perform this proof, the second definition of time invariance will be used.
- System A
- Start with a delay of the input: x(t) → x(t + δ) gives the output t·x(t + δ)
- Now delay the output by δ: y(t + δ) = (t + δ)·x(t + δ)
- Clearly t·x(t + δ) ≠ (t + δ)·x(t + δ), therefore the system is not time-invariant.
- System B
- Start with a delay of the input: x(t) → x(t + δ) gives the output 10·x(t + δ)
- Now delay the output by δ: y(t + δ) = 10·x(t + δ)
- Clearly the two results are equal, therefore the system is time-invariant.
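The same comparison can be run numerically on sampled signals. The sketch below (my own example; `shift` is a hypothetical helper, and the two systems are the ones from the example above) delays the input, delays the output, and checks whether the two results agree:

```python
# Time-invariance check: feeding a delayed input should give the same
# result as delaying the original output.
import numpy as np

t = np.arange(50, dtype=float)
x = np.sin(0.3 * t)
d = 7  # integer delay, in samples (d > 0)

def shift(sig, d):
    """Delay a sampled signal by d samples, zero-padding the front."""
    return np.concatenate([np.zeros(d), sig[:-d]])

systems = [("A: y(t) = t*x(t)", lambda u: t * u),
           ("B: y(t) = 10*x(t)", lambda u: 10 * u)]
for name, system in systems:
    out_of_shifted = system(shift(x, d))  # response to the delayed input
    shifted_output = shift(system(x), d)  # the original output, delayed
    print(name, "time-invariant:", np.allclose(out_of_shifted, shifted_output))
# System A prints False (it depends explicitly on t); system B prints True.
```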
Linear Time Invariant (LTI) Systems
The system is linear time-invariant (LTI) if it satisfies both the property of linearity and time-invariance. This book will study LTI systems almost exclusively, because they are the easiest systems to work with, and they are ideal to analyze and design.
Other Function Properties
Besides being linear, or time-invariant, there are a number of other properties that we can identify in a function:
A system is said to have memory if the output from the system is dependent on past inputs (or future inputs) to the system. A system is called memoryless if the output is only dependent on the current input. Memoryless systems are easier to work with, but systems with memory are more common in digital signal processing applications. A memory system is also called a dynamic system whereas a memoryless system is called a static system.
Causality is a property that is very similar to memory. A system is called causal if it is only dependent on past or current inputs. A system is called non-causal if the output of the system is dependent on future inputs. Most practical systems are causal.
Stability is a very important concept in systems, but it is also one of the hardest function properties to prove. There are several different criteria for system stability, but the most common requirement is that the system must produce a finite output when subjected to a finite input. For instance, if we apply 5 volts to the input terminals of a given circuit, we would like it if the circuit output didn't approach infinity, and the circuit itself didn't melt or explode. This type of stability is often known as "Bounded Input, Bounded Output" stability, or BIBO.
Studying BIBO stability is a relatively complicated course of study, and later books on the Electrical Engineering bookshelf will attempt to cover the topic.
Mathematical operators that satisfy the property of linearity are known as linear operators. Here are some common linear operators:
- Derivative
- Integral
- Fourier Transform
Example: Linear Functions
Determine if the following two functions are linear or not:
The zero-input response is the system's natural (transient) response, produced by its initial conditions alone; the zero-state response is the response to the input when all initial conditions are zero. The total response of a system is the sum of the two.
- Example. Finding the total response of a driven RLC circuit.
Convolution (folding together) is a complicated operation involving integrating, multiplying, adding, and time-shifting two signals together. Convolution is a key component to the rest of the material in this book.
The convolution a * b of two functions a and b is defined as the function:
(a * b)(t) = ∫ a(τ)·b(t − τ) dτ, where the integral is taken over all τ, from −∞ to +∞.
The greek letter τ (tau) is used as the integration variable, because the letter t is already in use. τ is used as a "dummy variable" because we use it merely to calculate the integral.
In the convolution integral, all references to t are replaced with τ, except for the -t in the argument to the function b. Function b is time inverted by changing τ to -τ. Graphically, this process moves everything from the right-side of the y axis to the left side and vice-versa. Time inversion turns the function into a mirror image of itself.
Next, function b is time-shifted by the variable t. Remember, once we replace everything with τ, we are now computing in the tau domain, and not in the time domain like we were previously. Because of this, t can be used as a shift parameter.
We multiply the two functions together, time shifting along the way, and we take the area under the resulting curve at each point. Two functions overlap in increasing amounts until some "watershed" after which the two functions overlap less and less. Where the two functions overlap in the t domain, there is a value for the convolution. If one (or both) of the functions do not exist over any given range, the value of the convolution operation at that range will be zero.
After the integration, the definite integral plugs the variable t back in for remaining references of the variable τ, and we have a function of t again. It is important to remember that the resulting function will be a combination of the two input functions, and will share some properties of both.
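On sampled signals the integral becomes a sum, so the continuous convolution can be approximated with np.convolve scaled by the sample spacing. Here is a minimal Python sketch (my own example with hand-picked signals, not from the original text):

```python
# Approximate the continuous convolution integral with a Riemann sum.
import numpy as np

dt = 0.01
t = np.arange(0, 5, dt)
a = np.exp(-t)                    # a(t) = e^(-t) for t >= 0
b = np.where(t < 1.0, 1.0, 0.0)   # b(t): a rectangular pulse of width 1

# np.convolve computes the discrete sum; scaling by dt turns it into
# an approximation of the integral of a(tau)*b(t - tau) dtau.
z = np.convolve(a, b)[: len(t)] * dt

# Closed form on 0 <= t < 1: (a * b)(t) = 1 - e^(-t)
mask = t < 1.0
print(np.max(np.abs(z[mask] - (1 - np.exp(-t[mask])))))  # small, on the order of dt
```

The printed error shrinks as dt does, since the sum is a Riemann approximation of the integral.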
Properties of Convolution
The convolution function satisfies certain conditions:
- Commutativity: a * b = b * a
- Associativity with scalar multiplication: α·(a * b) = (α·a) * b = a * (α·b), for any real (or complex) number α.
- Distributivity: a * (b + c) = (a * b) + (a * c)
- Differentiation rule: (a * b)′(t) = a′(t) * b(t) = a(t) * b′(t)
Find the convolution, z(t), of the following two signals, x(t) and y(t), by using (a) the integral representation of the convolution equation and (b) multiplication in the Laplace domain.
The signal y(t) is simply the Heaviside step, u(t).
The signal x(t) is given by the following infinite sinusoid, x0(t), and windowing function, xw(t):
Thus, the convolution we wish to perform is therefore:
From the distributive law:
Akin to Convolution is a technique called "Correlation" that combines two functions in the time domain into a single resultant function in the time domain. Correlation is not as important to our study as convolution is, but it has a number of properties that will be useful nonetheless.
The correlation of two functions, g(t) and h(t), is defined as:
Rgh(t) = ∫ g(τ)·h(t + τ) dτ, with the integral taken over all τ,
where the capital R is the Correlation Operator, and the subscripts to R are the arguments to the correlation operation.
We notice immediately that correlation is similar to convolution, except that we don't time-invert the second argument before we shift and integrate. Because of this, we can define correlation in terms of convolution, as such:
Rgh(t) = g(−t) * h(t)
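This relationship is easy to verify numerically: for real signals, NumPy's correlate of two arrays equals convolve with one argument time-reversed. A minimal sketch (my own example, not from the original page):

```python
# For real signals, correlating h against g is the same as convolving h
# with a time-reversed copy of g -- the identity R_gh(t) = g(-t) * h(t).
import numpy as np

rng = np.random.default_rng(1)
g, h = rng.standard_normal(64), rng.standard_normal(64)

via_correlate = np.correlate(h, g, mode="full")
via_convolve = np.convolve(h, g[::-1], mode="full")
print(np.allclose(via_correlate, via_convolve))  # True
```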
Uses of Correlation
Correlation is used in many places because it demonstrates one important fact: Correlation determines how much similarity there is between the two argument functions. The greater the area under the correlation curve, the greater the similarity between the two signals.
The term "autocorrelation" is the name of the operation when a function is correlated with itself. The autocorrelation is denoted when both of the subscripts to the Correlation operator are the same:
Rgg(t) = ∫ g(τ)·g(t + τ) dτ
While it might seem ridiculous to correlate a function with itself, there are a number of uses for autocorrelation that will be discussed later. Autocorrelation satisfies several important properties:
- The maximum value of the autocorrelation always occurs at t = 0. The function always decreases (or stays constant) as t approaches infinity.
- Autocorrelation is symmetric about the x axis.
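Both properties can be verified on an arbitrary signal; the sketch below assumes numpy and uses a random test signal:

```python
import numpy as np

# Spot-checking the autocorrelation properties on a random signal.
rng = np.random.default_rng(1)
x = rng.standard_normal(200)

r = np.correlate(x, x, mode="full")   # R_xx over lags -199 .. +199
mid = len(r) // 2                     # index of lag t = 0

assert r.argmax() == mid              # the maximum occurs at zero lag
assert np.allclose(r, r[::-1])        # R_xx is even: symmetric about t = 0
```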
Cross-correlation is every instance of correlation that is not autocorrelation: in general, cross-correlation occurs when the two function arguments to the correlation are not equal. Cross-correlation is used to find the similarity between two different signals.

RADAR is a system that uses pulses of electromagnetic waves to determine the position of a distant object. RADAR operates by sending out a signal and then listening for echoes. If there is an object in range, the signal will bounce off that object and return to the RADAR station. The RADAR then takes the cross-correlation of two signals: the sent signal and the received signal. A spike in the cross-correlation indicates that an object is present, and the location of the spike indicates how much time elapsed between transmission and echo (and therefore how far away the object is).
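The sketch below (assuming numpy, with made-up numbers throughout) mimics this idea: a known pulse is "transmitted", a delayed and attenuated copy plus noise is "received", and the lag of the cross-correlation peak recovers the delay. A pseudo-random coded pulse is used so the correlation peak is sharp:

```python
import numpy as np

rng = np.random.default_rng(2)

pulse = rng.standard_normal(64)   # pseudo-random coded pulse
true_delay = 300                  # round-trip travel time, in samples

received = 0.05 * rng.standard_normal(512)           # receiver noise
received[true_delay:true_delay + 64] += 0.4 * pulse  # weak returning echo

# Slide the sent pulse along the received signal and look for the spike.
xcorr = np.correlate(received, pulse, mode="valid")
estimated_delay = int(xcorr.argmax())

print(estimated_delay)   # expected: 300; time-of-flight * c / 2 gives range
```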
http://www.oxforddictionaries.com/definition/english/long+sight
The inability to see things clearly, especially if they are relatively close to the eyes, owing to the focusing of rays of light by the eye at a point behind the retina. Also called hypermetropia.
- Defective vision due to short sight or long sight can be corrected by wearing spectacles, contact lenses or by LASIK.
- A significant proportion of the EMI students chose Options C and D, indicating that many of them could not distinguish between the effects of short sight and long sight.
- Some conditions such as short or long sight, eye muscle co-ordination problems and most lazy eyes can be corrected, and glasses are not always necessary.
http://www.frac.org.php56-17.dfw3-1.websitetestlink.com/obesity-health/factors-contributing-obesity
Obesity is a complex condition with biological, genetic, behavioral, social, cultural, and environmental influences. For example:
- Individual behaviors and environmental factors can contribute to excess caloric intake and inadequate amounts of physical activity. The current high rates of obesity have been attributed to, in part, increased snacking and eating away from home, larger portion sizes, greater exposure to food advertising, limited access to physical activity opportunities, and labor-saving technological advances (Duffey & Popkin, 2011; Piernas & Popkin, 2011; Powell et al., 2011; Sallis & Glanz, 2009).
- Certain medical conditions (e.g., polycystic ovary syndrome) and prescription drugs (e.g., steroids, anti-depressants) can cause weight gain.
- Recent evidence suggests that inadequate sleep, prenatal and post-natal influences (e.g., maternal pre-pregnancy weight status, maternal smoking during pregnancy), chemical exposure, and stress may affect energy balance or obesity risk (Gore et al., 2015; Gundersen et al., 2011; Knutson, 2012; Shlisky et al., 2012; Weng et al., 2012).
- Race-ethnicity, gender, age, income, and other socio-demographic factors also can play a role in this complex health issue, as discussed elsewhere on this website. (See the sections on Obesity in the U.S. and Relationship Between Poverty and Obesity.)
Many of these and other contributing factors affect everyone at some point during their lives, at least to some extent, but those who are food insecure or low-income face additional challenges and risks.
https://workplacewellbeing.co/about-training-programs/mindfulness-in-schools
Mindfulness within Educational & School-Based Settings.
Over the past couple of decades there has been a growing debate about the role schools need to play in the lives of children and young people, their families, the wider community and society. No longer are schools expected to deliver a purely formalised academic education. Schools are increasingly being asked to provide both an academic and a more holistic education which considers each student's overall wellbeing. To date, this focus on student wellbeing has primarily been concerned with identifying and managing mental health problems, bullying issues, and a plethora of antisocial behaviours within the school environment.
However, as Professor Martin Seligman (2010, 2012, 2013) states, it is more than just the alleviation of symptoms that promotes positive mental health and wellbeing; it is the presence and teaching of concepts such as flourishing, wellbeing, resilience, learned optimism and posttraumatic growth. In fact, Seligman (2012, 2013) is calling for a revolution in world education: the teaching of wellbeing in our school systems through the application of Positive Education and Positive Psychology. Seligman and his colleagues believe that through the practical application of Positive Education, the alarmingly high levels of youth depression, anxiety, suicide and allied mental health problems will be directly addressed. This is further enhanced by fostering school communities that actively promote happiness, confidence, contentment, compassion, kindness and balance; in short, wellbeing. Seligman (2014) says, "I believe that schools can teach both traditional skills for learning and help teach students the skills to lead a flourishing life".
In support of Seligman's claims are reports from UNICEF (2007) and the OECD (2009) which highlight the alarmingly low rates of well-being, both objective (e.g. health, educational attainment) and subjective (e.g. life satisfaction) among children and adolescents in the majority of Western economically advantaged countries. There appears to be a direct correlation between the rise of national and personal economic wealth and an overall decrease in personal wellbeing and life satisfaction in many advantaged Western countries (Seligman, 2012).
One clear practice that appears to address many of the above concerns within school and classroom environments is Mindfulness. Mindfulness Practices & Techniques have now been used formally within schools, both in Australia and internationally, since the late 1990s and the research is clearly indicating the many benefits, for both teachers and students, that Mindfulness Practice in the classroom can have (please refer to the following table).
Empirically Researched Benefits of Mindfulness Practice for Teachers & Students
(Fernando, 2012; Flook, Goldberg, Pinger, Bonus & Davidson, 2013; Gold et al., 2010; Meiklejohn et al., 2012; Roeser et al., 2013; Schueberlein & Sheth, 2009; Waters, Barsky, Ridd & Allen, 2014)
| Benefits for Teachers | Benefits for Students |
| --- | --- |
| Reduced Occupational Stress Levels | Reduced Stress & Anxiety Levels |
| Reduced Occupational Burnout Rates | Improved Academic Performance |
| Improved Focus, Attention, Concentration & Awareness | Increased Openness & Willingness to Learn |
| Improved Emotional Balance | Promotes Positive Mental Health |
| Increased Responsiveness to Student Needs | Improved Self-Reflection Abilities |
| Improved Professional Interpersonal Relationships | Improved Self-Regulation & Impulse Control Abilities |
| Improved Personal Interpersonal Relationships | Enhanced Social & Emotional Intelligence |
| Fostering a Positive & Respectful Classroom Climate | Improved Prosocial Behaviours & Interpersonal Relationships |
| Improved Classroom Organisation & Performance | Improved Classroom & School-Based Interpersonal Relationships |
| Improved Rates of Teacher Retention & Reduced Rates of Teacher Turnover | Improved Focus, Attention, Concentration & Awareness |
| Increased Occupational Self-Compassion | Promotes Ethical-Moral Reasoning |
| Increased General Sense of Wellbeing | Promotes a Greater Acceptance of Self, Others & Difference (e.g. gender, race, culture & sexual orientation) |
Fernando, R. (2012). Measuring the efficacy and sustainability of a mindfulness-based in-class intervention. Mindful Schools. Retrieved from http://www.mindfulschools.org/about-mindfulness/research/
Flook, L., Goldberg, S. B., Pinger, L., Bonus, K., & Davidson, R. J. (2013). Mindfulness for teachers: A pilot study to assess effects on stress, burnout and teaching efficacy. Mind, Brain and Education, 7(3), 182-195.
Gold, E., Smith, A., Hooper, I., Herne, D., Tansey, G., & Hulland, C. (2010). Mindfulness-based stress reduction (MBSR) for primary school teachers. Journal of Child and Family Studies, 19, 184-189.
Meiklejohn, J., Phillips, C., Freedman, M. L., Griffin, M. L., Biegel, G., Roach, A., ... Saltzman, A. (2012). Integrating mindfulness training into K-12 education: Fostering the resilience of teachers and students. Mindfulness, 3(4), 291-307.
Organisation for Economic Cooperation and Development (OECD). (2009). Comparative child wellbeing across the OECD. In OECD, Doing better for children (pp. 21-63). DOI:10.1787/9789264059344-en
Roeser, R. W., Schonert-Reichl, K. A., Jha, A., Cullen, M., Wallace, L., Wilensky, R., ... Harrison, J. (2013). Mindfulness training and reduction in teacher stress and burnout: Results from two randomised wait-list field trials. Journal of Educational Psychology, 105(3), 787-804.
Schueberlein, D., & Sheth, S. (2009). Mindful Teaching and Teaching Mindfulness: A Guide for Anyone Who Teaches Anything. Boston, MA: Wisdom Publications.
Seligman, M. (2010). Flourish: Positive psychology and positive intentions. The Tanner Lectures on Human Values, University of Michigan. Retrieved from http://tannerlectures.utah.edu/_documents/a-to-z/s/Seligman_10.pdf
Seligman, M. (2012). Flourish. North Sydney, NSW: Random House.
Seligman, M. (2013). Building the state of wellbeing: A strategy for South Australia, a summary of progress. Adelaide Thinker in Residence 2012-2013. Retrieved from http://www.thinkers.sa.gov.au/seligmanaddendum/files/inc/8c48cde37c.pdf
Seligman, M. (2014). A message from our patron. Positive Education Schools Association. Retrieved from http://www.pesa.edu.au/
United Nations Children's Fund (UNICEF). (2007). Child poverty in perspective: An overview of child well-being in rich countries. A comprehensive assessment of the lives and well-being of children and adolescents in the economically advanced nations. Innocenti Report Card 7. UNICEF Innocenti Research Centre, Florence. Retrieved from http://www.unicef-irc.org/publications/pdf/rc7_eng.pdf
Waters, L., Barsky, A., Ridd, A., & Allen, K. (2014). Contemplative education: A systematic evidence-based review of the effects of meditation interventions in schools. Educational Psychology Review, online first article. DOI 10.1007/s10648-014-9258-2
https://www.messagetoeagle.com/neanderthal-dna-still-influence-modern-human-genes-scientists-say/
MessageToEagle.com – Everyone living outside of Africa today carries a small amount of Neanderthal DNA. The last Neanderthal died 40,000 years ago, but much of their genome lives on and contributes to certain traits in modern humans.

The impact of Neanderthals' genetic contribution has been uncertain, but researchers now report evidence that Neanderthal DNA sequences still influence how genes are turned on or off in modern humans. Neanderthal genes' effects on gene expression likely contribute to traits such as height and susceptibility to schizophrenia or lupus, the researchers found.
One of the major problems when investigating how Neanderthals’ genes affect modern humans is the lack of material. DNA can be extracted from fossils and sequenced, but RNA cannot. Without this source of information, scientists cannot be certain if Neanderthal genes functioned differently than their modern human counterparts.
The alternative is to look at gene expression in modern humans who possess Neanderthal ancestry.
Scientists have previously found correlations between Neanderthal genes and traits such as fat metabolism and depression.
A new study now reveals that Neanderthal genes also contribute to traits such as height and susceptibility to schizophrenia or lupus.
One example uncovered by this study is a Neanderthal allele of a gene called ADAMTSL3 that decreases risk of schizophrenia, while also influencing height.
“Previous work by others had already suggested that this allele affects alternative splicing. Our results support this molecular model, while also revealing that the causal mutation was inherited from Neanderthals,” said postdoctoral researcher Rajiv McCoy from the University of Washington School of Medicine.
Alternative splicing refers to a process in which mRNAs are modified before they leave the cell’s nucleus. When the Neanderthal mutation is present, the cell’s machinery removes a segment of the mRNA that is expressed in the modern human version. The cell ends up making a modified protein because of a single mutation from a Neanderthal ancestor.
The connection between that modified protein, height, and schizophrenia still requires more investigation, but it’s an example of how small differences between modern humans and Neanderthals can contribute to variation in people.
“Even 50,000 years after the last human-Neanderthal mating, we can still see measurable impacts on gene expression,” said geneticist and study co-author Joshua Akey of the University of Washington School of Medicine.
“And those variations in gene expression contribute to human phenotypic variation and disease susceptibility.”
The mysterious disappearance of Neanderthals has puzzled scientists for a very long time. Neanderthals thrived in Europe for several hundred thousand years, but they died out about 30,000 years ago. It was during this period that modern humans arrived in Europe.
Some scientists have suggested modern humans out-competed or outright killed the Neanderthals. However, new genetic evidence provides support for another theory: Perhaps our ancestors made love, not war, with their European cousins, and the Neanderthal lineage disappeared because it was absorbed into the much larger human population.
Next, researchers will investigate whether Denisovans, another group of hominins that interbred with modern humans, are also contributing to gene expression.
Reference: Rajiv C. McCoy, Jon Wakefield, Joshua M. Akey. Impacts of Neanderthal-Introgressed Sequences on the Landscape of Human Gene Expression. Cell, 2017. DOI: 10.1016/j.cell.2017.01.038
https://www.physicsforums.com/threads/tensor-notation-epsilon-ijk.640970/
1. The problem statement, all variables and given/known data

Prove that (A×B) is perpendicular to A. (We know that it is from the definition, but this requires an actual proof.) This is what I did on the exam because it was quicker than writing out the vectors and crossing and dotting them.

2. Relevant equations

X · Y = 0 when X and Y are perpendicular.
(A×B)_i = ε_ijk A_j B_k

3. The attempt at a solution

(A×B)_i = ε_ijk A_j B_k, so (A×B) · A = ε_ijk A_j B_k A_i = ε_ijk A_i A_j B_k. The product A_i A_j is symmetric under swapping i and j, while ε_ijk is antisymmetric, so relabeling the dummy indices i and j shows the sum equals its own negative and must vanish. (Any term with i = j has ε_ijk = 0 outright.) The value of the dot product is therefore 0, which means the angle between the vectors is 90°.
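For anyone who wants to sanity-check the index gymnastics numerically, here is a small sketch (assuming numpy; the Levi-Civita symbol is built by hand from its definition):

```python
import numpy as np

# Numerical check that (A x B) . A = eps_ijk A_i A_j B_k = 0.
eps = np.zeros((3, 3, 3))
for i, j, k in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    eps[i, j, k] = 1.0     # even permutations of (0, 1, 2)
    eps[i, k, j] = -1.0    # odd permutations

rng = np.random.default_rng(0)
A = rng.standard_normal(3)
B = rng.standard_normal(3)

cross = np.einsum("ijk,j,k->i", eps, A, B)   # (A x B)_i = eps_ijk A_j B_k
assert np.allclose(cross, np.cross(A, B))    # agrees with numpy's cross

# Contracting antisymmetric eps_ijk with the symmetric product A_i A_j
# forces the result to vanish.
assert np.isclose(np.einsum("ijk,i,j,k", eps, A, A, B), 0.0)
```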
https://www.psychtestingsolutions.com/anxiety-in-children.html
Anxiety in children or adolescents is a concern for parents and teachers alike. All of us feel anxious sometimes. It is just part of life. We all know what it is like to feel worried, anxious, nervous or fearful. This is as true for children, as it is for adults. However, most of us are able to manage our anxious feelings. We learn to cope with them and are able to carry on despite them.
Anxiety in children or adolescents becomes a problem when children find it difficult to manage their anxious feelings and become stressed, upset and unable to cope with everyday challenges. For these children feelings of anxiety are constant and far more pervasive than an occasional wave of apprehension or anxiety. Anxious feelings dominate and interfere with healthy child functioning and optimum development.
Anxiety in children and adolescents manifests in many different ways, and ranges from relatively minor concerns to more serious and debilitating problems. Children or adolescents who exhibit high levels of anxiety that interfere with normal functioning in everyday life may meet the criteria for an anxiety disorder. Anxiety disorders include panic attacks, phobias, extreme shyness, obsessive thoughts and compulsive behaviors. In some cases, children who suffer from an anxiety disorder may find it difficult to leave home to go to school or to visit family and friends or go on outings and errands with their parents.
In addition, they may find it difficult to cope with academic tasks at school. They may also show neurocognitive concerns such as problems with concentration and sustained attention, or difficulties with working memory, and related executive function tasks.
Social Anxiety Disorder: Children who suffer from social anxiety feel self-conscious and fear being around other people. Some of these children may feel that everyone is watching and staring at them or being critical in some way. These children will avoid social situations and other people. In severe cases, children with a social anxiety disorder may prefer to be alone much of the time. They often suffer from performance anxiety and may show concerns around feelings of rejection and humiliation.
Other children or adolescents who suffer from social anxiety know their thoughts and fears are irrational. They know others are not really judging or evaluating them at every moment. But this knowledge does not make their fears and anxieties disappear, nor does it make it any easier for them to engage in social interactions.
Panic Disorder: Children who suffer from panic disorder have panic attacks without warning. A panic attack usually lasts several minutes and can be extremely upsetting and frightening. In some cases, panic attacks last longer than a few minutes or occur several times in a short period. A panic attack is frequently followed by feelings of depression and helplessness. For these children, their greatest fear is that a panic attack will happen again. Often the child doesn't know what caused the panic attack. It seems to come out of the blue. At other times, the child may report that he or she was feeling stressed and upset and expected the panic attack.
Generalized Anxiety Disorder: Anxiety in children or adolescents may also present as a generalized anxiety disorder. Children who suffer from a generalized anxiety disorder are filled with worry, anxiety and fear. They constantly think about and dwell on the "what ifs" of the situation. They feel trapped in a destructive cycle of anxiety and worry and are vulnerable to feeling depressed. These children and adolescents feel incapacitated by their inability to shut the mind off, and are overcome with feelings of worry. In addition, the child's mood may change regularly, perhaps daily or even hour to hour. The child's feelings of anxiety and mood swings become habitual and disrupt the child's ability to cope with everyday life.
Children with generalized anxiety disorder frequently exhibit physical symptoms, including headaches, irritability, frustration, trembling, problems with concentration and sleep disturbances. They may also exhibit symptoms of social withdrawal and panic disorder.
The treatment for anxiety in children varies and depends on how extreme the problem is and how long the problem has been going on. In addition, the causes of anxiety in children are unique to each child and depend on a unique set of circumstances. Successful treatment outcomes will also depend on unique factors and circumstances. Some children, for example, will feel better after a few weeks or months of treatment, while other children may need a year or more to show positive effects in treatment.
Anxiety in children may coexist with other disorders such as ADD or depression. Treatment in these cases will differ and may take longer. While a treatment plan must be specifically designed for each child, there are a number of standard approaches that can help address anxiety in children. Mental health professionals who specialize in treating anxiety often use a combination of the following treatments:
Anxiety in children affects large numbers of children and ranges from relatively minor concerns to more severe and debilitating issues and behaviors. Children or adolescents who are experiencing high levels of anxiety or who suffer from an anxiety disorder often have difficulty coping at home and at school. Their social, academic and interpersonal skills can suffer, and they may experience difficulty doing well at school.
Early intervention can help reduce anxious feelings in children or adolescents, and prevent problems from escalating. Children who exhibit high levels of anxiety or who suffer from an anxiety disorder also need help to develop healthy coping strategies to ameliorate and reduce their feelings of anxiety.
Contact Dr. O'Connor about anxiety in children. She will respond to your concerns and suggest options for addressing them.

Dr. O'Connor also offers Psychological Assessment Services that she tailors to fit the needs of an anxious child or adolescent. A Psychological Assessment can get to "the root of the problem" and point to effective solutions and intervention strategies to address it.

You can also purchase our case studies about children who suffer from an anxiety disorder.
http://www.teacherspayteachers.com/Product/Biology-Lab-Diversity-of-Cell-Structure-96912
This is a short, one-hour lab that I wrote for my Biology I students. This lab would be excellent for an average Biology class or middle school science class. I use it at the start of the school year, after teaching about the microscope and the cell organelles.
The purpose of the lab is to observe a variety of cells, to compare plant cells and animal cells, and to observe the life in a drop of pond water. Students will make wet mount slides of onion cells, cheek cells, and Elodea cells. Students are required to draw and label the cell organelles and answer various questions comparing and contrasting plant and animal cells.
NOTE: This product is also included in a large, bundled unit lesson plan and can be viewed here: Cell Structure and Function Complete Bundled Unit Plan
Finally, let the students have some fun with the pond water! My students always get so excited when viewing pond water. The lab asks students to fill in a chart about their observations of pond water. Students will have to find four different organisms and describe their method of locomotion, whether or not they have cell walls and chloroplasts, and how they obtain food.
Materials required for this lab are: Microscope, slides, onion, iodine, coverslips, methylene blue, Elodea plant, pond water, colored pencils.
The download contains 7 pages: a 4-page lab for the student and a 3-page answer key for the teacher. You will receive both a PDF version of the lab and a Word document in case you would like to make changes to my work.
You might also be interested in the following products:
Cell Structure and Function Powerpoint and Notes
Cell Structure and Function Bundled Unit of 19 products
Cells (Plant and Animal) Quiz / Homework / Review
Cell Organelles Crossword Puzzle
Cell Organelles Matching Worksheet
Cell Structure and Function - Set of 3 Quizzes
Cell Organelles Jeopardy Review Game
Test: Cell Structure and Function and Membrane Transport
Microscope Powerpoint with Notes for Teacher and Student
Microscope Crossword Puzzle
Microscope Powerpoint Jeopardy Review Game
Microscope Homework / Study Guide
Amy Brown - Science Stuff
http://www.ibiblio.org/hyperwar/USA/USA-MTO-Salerno/USA-MTO-Salerno-25.html
The Bombing of Cassino
To the Allied forces, the Anzio beachhead toward the end of February was a defensive liability that placed great strain on naval and air resources. Yet it threatened the enemy's major supply routes south of Rome; a comparatively short Allied advance from the beachhead would imperil all the German troops on the Tenth Army front. The strength of the barrier erected at Anzio by the Germans ruled out such an advance for the moment. Was it then possible that the strong German effort at Anzio had been made at the expense of weakening the Gustav Line? If so, it was time for the Allies to make another effort to get into the Liri valley.
After the bombardment of Monte Cassino on 15 February and the subsequent ground attack, General Alexander considered the New Zealand Corps capable of making one more attempt to break through. But if the corps failed again, and Alexander was hardly optimistic, offensive operations would have to be brought to a halt--"after the New Zealand Corps has shot its bolt, a certain pause in land operations will be essential to enable troops to be reorganized and prepared to continue the battle."1
While the New Zealand Corps prepared to renew its attack, Alexander continued to regroup his forces to provide the overwhelming strength needed to break the Gustav Line. Since the troops of Fifth Army were divided between Anzio and Cassino, they were too weak to exert decisive pressure at either place. The Eighth Army, already stripped of units, could do little more than maintain the Adriatic front.
How to find fresh reserves was settled during a series of conferences at General Alexander's headquarters in late February, which set into motion a large-scale shift of forces to the area west of the Apennines. Eventually the Fifth Army zone would be narrowed to the coastal area, where the II Corps and the French Expeditionary Corps would be located under Fifth Army control, along with the VI Corps at Anzio. The Eighth Army, after moving across the Apennines to the Cassino area, would take control of two British corps, the 10 and the 13, as well as of the 2 Polish Corps and 1st Canadian Corps--the provisional New Zealand Corps would be disbanded. The 5 Corps operating directly under Alexander's 15th Army Group headquarters would remain on the Adriatic front.2
Before these new arrangements were
completed, Fifth Army would try once more to break the Gustav Line in the Cassino area. The attempt would be made by General Freyberg's New Zealand Corps in mid-March.
To General Freyberg, there were several reasons for the failure of the experienced mountain fighters of the 4th Indian Division to capture Monte Cassino in February: the Indians could not attack on a broad front and the Germans were therefore able to shift reinforcements quickly to threatened areas; the Germans could concentrate defensive fires quickly and effectively because they had the advantage of observation; the Allies had found it virtually impossible to conduct effective supply operations on the Cassino massif. Believing that a major attack across the high ground was impractical, General Freyberg looked to the town of Cassino. Possession of the town, he felt, would allow an easier approach to Monte Cassino and access to the Liri valley. By putting the 78th Division into the left portion of the New Zealand Corps zone, south of Highway 6, Freyberg could concentrate the 2d New Zealand Division in depth on a narrow front directly before Cassino. The New Zealand division, attacking from the east in the main effort, was to take the town, while the 4th Indian Division assisted by striking into Cassino from the north. Then, while these two divisions advanced to seize Monte Cassino, the 78th Division and CCB of the 1st Armored Division were to enter the Liri valley and begin a drive toward Valmontone. As in the earlier attack of the New Zealand Corps, air power was to come into play--the ground troops were to attack Cassino immediately after a heavy bombing of the town.
General Clark was "really shocked" by General Freyberg's idea of starting the exploitation before the reduction of the Cassino massif, and particularly Monte Cassino. "It is absolutely impossible," he wrote, "to mass for an attack down the Liri Valley without first securing the commanding elevation on one flank or the other." Since 10 Corps had too few troops to seize the heights dominating the Liri valley from the south, Clark felt strongly that the Cassino spur had to be in Allied possession before troops could enter the Liri valley. This seemed to be the principal lesson of the failure to cross the Rapido River at Sant'Angelo in January. General Wilson agreed that it was necessary to secure the high ground before, as he put it, sticking one's head into what otherwise would be a Liri valley trap.3
What explained Freyberg's interest in Cassino and his proposal to bomb the town, Clark believed, was Freyberg's deepening conviction that Monte Cassino was impregnable. "He has weakened from day to day," Clark wrote in his diary, "in his [belief in his] ability to take the monastery." But as a result of discussion between Clark and Freyberg, the corps commander altered his plan. Although he retained Cassino as his primary target, he now included a simultaneous attack to secure Monte Cassino.4
Issuing his order on 21 February, General Freyberg outlined his attack in four phases: (1) the 4th Indian Division was to capture a hill 500 yards due north of the abbey of Monte Cassino and from there cover with fire the western edge of Cassino and the eastern slope of Monte
Cassino; (2) aircraft were then to strike the town of Cassino in a heavy bombardment; (3) the 2d New Zealand Division, with CCB of the 1st Armored Division attached, was to capture the town of Cassino and seize a bridgehead over the Rapido at Highway 6, while the Indian division captured Monte Cassino and cut Highway 6 several miles west of the Rapido River; (4) while New Zealand tanks under 78th Division control passed through the Rapido bridgehead and captured Sant'Angelo from the north, CCB was to exploit westward along Highway 6 in the Liri valley, the 78th Division was to cross the Rapido near Sant'Angelo, and the 36th Division was to keep one regiment in readiness to support the exploitation.5
The air forces were to set D-day and H-hour any time after 24 February, but General Freyberg insisted that a weather forecast of three successive days without rain be a prerequisite. This would give the planes good visibility for the bombardment and for subsequent supporting attacks and the tanks dry ground and good traction for the exploitation. Air and ground commanders decided to execute the large scale bombing in the morning. The ground attack would follow at noon. The date would be announced when the weather conditions were suitable for air and ground forces alike.6
At a meeting held at the New Zealand Corps headquarters on 21 February, General Freyberg discussed his plan of attack, with special attention to the role of the air forces. In attendance were General Brann, the Fifth Army G-3, Brig. Gen. Thomas E. Lewis, the Fifth Army artillery officer, Colonel Hansborough, the Fifth Army air support control officer, Col. Stephen B. Mack of the XII Air Support Command, and several New Zealand officers. At the outset of the conference, Freyberg declared that he would not attack "unless a large scale air effort was made." He wanted at least 75 tons of bombs to be dropped to level the town of Cassino and permit his infantry and tanks "to walk through." Colonel Mack assured him that planes could destroy the town. They could drop that amount of bombs on a single target in about three hours, but no less, for the bomber groups would have to wait for the dust and smoke to clear between attacks. As for what General Freyberg hoped the result would be, Mack stated his conviction that the infantry "could advance [only] with difficulty" after the bombardment and that it would be impossible "to get tanks through the town for two days" because the streets would be blocked with debris. Freyberg impatiently "brushed aside" Mack's statement. He expected his tanks to be through the town in six to twelve hours.7
Like General Freyberg, the commander of the U.S. Army Air Forces, General Arnold, hoped for a great victory through the use of air power. Early in March, he wrote from Washington to suggest to General Eaker, who commanded the Mediterranean Allied Air
Forces, that a massive air attack be launched:
We are all very greatly disturbed here at the apparent "bogging down" of the Italian campaign. I admit that I am looking at this from a great distance away from the actual scene of battle . . . .
The Ground Forces are at almost the exact position in which they found themselves during my last visit. The hill overlooking Cassino is still in German hands. That hill apparently dominates the military situation in that it must be taken before we can hope to effect a juncture between the main army and the beachhead force. With different terrain, the desert force found itself in similar positions during its fight across the top of Africa. They solved the problem, I believe, by convincing the Ground Forces that they could and would blow a hole through the opposition providing those Ground Forces were ready and set to take advantage of the opportunity . . . .
What he recommended was gathering together all the aircraft of the Coastal Air Force, all the heavy bombers, medium bombers, and fighters of the strategic and tactical air forces--including crews in rest camps, those not yet quite ready for battle, and those in Africa--to establish a force "which, for one day, could really make air history." Withdraw the ground forces temporarily, General Arnold continued, and use all the available air power to "break up every stone in the town behind which a German soldier might be hiding. When the smoke of the last bombers and fighters begins to die down, have the ground troops rapidly take the entire town of Cassino."8
General Eaker was somewhat dubious. He thought this was easier said than done, and he wrote to General Arnold:
It was clearly demonstrated in the bombing of the Abbey that little useful purpose is served by our blasting the opposition unless the army does follow through.
I am anxious that you do not set your heart on a great victory as a result of this operation. Personally, I do not feel it will throw the German out of his present position completely and entirely, or compel him to abandon the defensive role, if he decides and determines to hold on to the last man as he now has orders to do. It may, however, and I hope will permit the present line [at Cassino] and bridgehead [Anzio] to join up. From our [air] point of view that is the first and major consideration. The bridgehead [at Anzio] is so limited that we are forced to abandon our landing strip in the bridgehead. We lost twenty-four airplanes before we gave up . . . .
. . . . It apparently is difficult for anyone not here to understand the full effect of the combination of terrain and rainfall on the battle. The streams are swollen; there are no bridges, these have all been destroyed; the land is a complete quagmire--it will not support foot troops let alone heavy equipment. Everything must move on the few important roads and these, of course, are in the battle zone and completely enfiladed by heavy artillery fire . . . . we must remember that the terrain and the weather conspired to bring about an entirely different situation than that which pertained in the desert. In the desert campaign flanking movements were always possible. The weather and the terrain made that possible. Here, both the weather and the terrain have forced any advances to be made through mined defiles with heavy artillery concentrations on the high ground on either side. That makes a different picture out of it entirely . . . .
The picture with respect to the future is this and you can rely on it . . . . We shall go forward and capture Rome when the
weather permits . . . and not before; we shall be able, with Spring and Summer weather, to contain the German divisions now in Italy.9
If General Eaker was far from optimistic about the effect of a heavy air attack at Cassino, he had high hopes for the efficacy of a sustained bomber program directed against enemy coastal shipping and the road and rail nets used by the Germans. Operation STRANGLE, as it was called, was designed to cut German supply routes to the divisions located south of the Pisa-Rimini line. Eaker had sufficient aircraft to carry out the plan over a period of six weeks to two months. All he needed was good weather. With this operation he was sure he could help the Allied ground forces take Rome and compel the Germans to withdraw into northern Italy.10
The details of Operation STRANGLE were worked out as early as the first days of March; the operational directive was issued later in the month. The XII Air Support Command, charged with the primary responsibility for this large-scale interdiction program, would be unable to throw its full weight into the task until after the breakthrough attack at Cassino, which required top priority for close support missions.11
Despite General Eaker's conviction that a bombardment of Cassino would be of little practical help to the ground troops, he tried to make the operation a success. After studying photographs of a B-24 attack on marshaling yards and airfields, he reminded Maj. Gen. Nathan E. Twining, the Mediterranean Allied Strategic Air Force commander, early in March, that he "was again disappointed at the scattered bombing and poor results obtained . . . . we need to press very hard to improve accuracy, formation flying and leadership."12
As finally ordered, General Freyberg's attack would have the 2d New Zealand Division capture the town of Cassino and break out into the Liri valley near Highway 6, while the 4th Indian Division assisted by neutralizing enemy positions on the eastern slopes of Monte Cassino, maintaining pressure to prevent the enemy from moving reserve forces against the main effort, and capturing Monte Cassino. The daytime attack by infantry and tanks was to follow a heavy air bombardment of four hours' duration and an artillery preparation in maximum strength. The bombing was to increase in intensity and reach a climax at H-hour of the ground attack. A total of 360 heavy and 200 medium bombers was expected to level Cassino, and fighter-bombers would be on hand to support the developing ground operation.13
Hoping to avoid getting his tanks bogged down in street fighting, General Freyberg directed maximum use of fire and movement, not only by his tanks but also by his self-propelled artillery. To prevent tanks from being hit by friendly fire, those vehicles moving from the direction of the enemy were to elevate their guns to maximum height. These instructions applied to the New
Zealand elements and also to two predominantly American task forces that were to exploit the breakthrough of the Gustav Line. Both task forces were composed mainly of units from CCB of the 1st Armored Division.14
CCB had been ready to exploit an opening into the Liri valley as early as January. The terrain had been thoroughly studied and preparations carefully made--radio-equipped control posts established, routes of advance delineated, wreckers and recovery vehicles stationed at appropriate points.15 For a week in mid February, CCB had remained on a 6-hour alert near San Pietro, Ceppagna, and Monte Trocchio, awaiting word for commitment across the Rapido River.16 Now once again the troops were ready. "We are scheduled to go around the corner from Cassino," General Allen, the CCB commander, wrote to General Harmon, who was at Anzio with the bulk of the 1st Armored Division, "with the First Tank Group leading, followed by some armor of the New Zealand Division, after which CC 'B' proper pushes on." He had conferred with New Zealand officers on the plan of attack, and he had conducted command post exercises, though he had been unable to have demonstrations or field exercises. Allen was not entirely optimistic about the prospects of the new attack. His letter to Harmon continued:
The weather here has been terrible and the valley is a sea of mud. I don't believe that any medium tank will be able to venture far from firm standing under the conditions that now exist, and operations [will be] restricted to roads, only a few of which exist in that valley . . . .
. . . nor can I give you any dope on when this planned operation will go into effect. We sit at the end of a telephone on a two hour alert with the engineers . . . ready with matériel for the bridging. Our artillery is in position firing some missions as are the T.D. battalions . . . . everyone is anxious for the attack to start the push up and rejoin the Division for the march into Rome.17
The weather continued to be miserable, and Freyberg continued to wait for a forecast of three clear days. Impatient after the first week in March had gone by, General Clark urged the New Zealand Corps commander to go ahead, to stop waiting for ideal weather. "I fully realize that we are not going to completely break through," the army commander wrote, "and the tanks will
play only a small part in this attack."18 But General Freyberg was immovable. More time passed. One of the difficulties was the variation in weather within the theater. When it was clear at Cassino, it might be zero visibility at the airfields--foggy in Naples, raining in Foggia, and cloudy over Corsica, Sardinia, and North Africa.19
The meteorologists finally produced the proper forecast. At 1800, 14 March, the Mediterranean Air Force headquarters announced D-day for the following day. During the night, New Zealand and Indian troops withdrew 1,500 yards from their most advanced positions for safety during the bombardment of Cassino that would start the next morning.20
To drop a minimum of 75 tons of bombs on Cassino in the shortest possible time, and to have the most destructive effect on the stone houses and concrete pillboxes in the town, the aircraft would use nothing less than 1,000 pound bombs, with fusings adjusted to penetrate the buildings to basement depth. Bombers would attack in waves, striking every fifteen minutes from 0830 to noon. The artillery, which would fire between the bombing waves, would deliver at noon a final concentration lasting forty minutes. When the infantrymen jumped off, a creeping artillery barrage would precede them, the fires moving through Cassino 100 to 200 yards ahead of the assault troops. Fighter-bombers would assist by attacking selected targets, especially the railway station, the ancient coliseum at the base of Monte Cassino, and Monte Cassino itself.21
On the morning of 15 March, General Clark drove to Cervaro to witness what would be, up to that time, the greatest massed air onslaught in direct tactical support of ground forces. Together with Devers, Alexander, Eaker, Freyberg, and others, he watched Cassino, plainly visible a little less than three miles away. Like all the troops in the Cassino area, he heard what someone later would call a "locust-like drone [that] came from afar." The "uncertain murmur swelled gradually; a steady, pulsing throb." Then "the specks began to appear, high and small against the sky."
First to arrive at 0830 were the medium bombers, B-25's and B-26's, in flights of a dozen or more, escorted by fighters flying high above them and marking the sky with vapor trails. The bombers approached the target, almost passed, then turned left. The bellies of the planes opened, and the bombs tumbled out. Then the planes wheeled again, this time to fly home.
About 80 percent of the bombs dropped by the aircraft in the first wave fell into the heart of Cassino. The others landed nearby, a few short ones coming to earth on the Allied side of the Rapido River. As the bombs struck, "stabbing flashes of orange flame" shot through a holocaust of erupting smoke and debris.
Next, at 0845, came the heavy bombers, the Flying Fortresses, along with the dive bombers. As the pilots roared over the town, already obliterated from
BOMBING OF THE TOWN OF CASSINO
view by smoke and dust, the bombardiers let go their loads. Bright orange bursts appeared over Cassino, Monte Cassino, and the Rapido valley. Only the impact of the first bombs was visible. The bombs of the later strikes were lost in a billowing ocean of gray and white dust and smoke.
The ground for at least five miles around Cassino shook violently as though in an earthquake. How could any human being in the town "survive such punishment and retain his sanity"?
Almost without interruption, the bombs fell until noon. Between the waves of planes, artillery pounded the target.
Finally came the 40-minute cannonade, joined by every field piece in the area--American, British, New Zealand, Indian, and French. An artilleryman's dream, the target was in plain sight, the range was virtually point-blank, the calibration was exact, the registration perfect. The artillery thundered, the gunners perspiring in the chill winter air.
Monte Cassino seemed to jump and writhe under the detonations. Great holes appeared in the few walls of the abbey still standing. Huge chunks of masonry flew through the air.
When the artillery barrage ceased and the ground troops moved out in the attack, "Surely, there were no defenders
left with any fight in them. Surely it would be but a question of bodies and prisoners, perhaps very few of either."22
Between 0830 and 1200, 15 March, 72 B-25's, 101 B-26's, 262 B-17's and B-24's--a total of 435 aircraft--bombed the Cassino area. The planes dropped more than 2,000 bombs, a total weight of almost 1,000 tons, in an unprecedented bombardment of awesome proportions.23 There was little flak at Cassino, and no German planes appeared to oppose the bombing. The Allied aircraft suffered no losses.
The medium bomber attacks were generally punctual, their bombing concentrated and accurate. The heavy bombers were often at fault on all three counts. Thus, the target received less than the full weight of the bombs dropped. Only about 300 tons fell into the town of Cassino. The remainder landed on the slopes of Monte Cassino and elsewhere. Only half in all found the target area. In addition, there were frequent and long pauses between the attacking waves.
Even this imperfect bombardment demolished Cassino, toppling walls, crushing buildings, and covering the streets with debris.
Some heavy bomber pilots were unable to identify the target, and twenty-three returned to their bases with their bombs intact; two jettisoned their loads in the sea. Rack failure on the leading plane of one formation sent forty bombs into Allied-held areas, killing and wounding civilians and troops. These short bombs and others inflicted about 142 casualties--28 were killed--among the Allied units in the Cassino area. Ten air miles away, several planes bombed Venafro by mistake, killing 17 soldiers and 40 civilians, and wounding 79 soldiers and 100 civilians. The bombing errors were an "appalling" tragedy that General Clark attributed to "poor training and inadequate briefing of crews."24
The artillery firing went as planned. A total of 746 guns and howitzers delivered 2,500 tons of high explosive immediately ahead of the assault troops and an additional 1,500 tons on hostile batteries and other preselected targets. Between 1220 and 2000 that day, artillery pieces in the Cassino area fired almost 200,000 rounds.
General Freyberg and other commanders expected the air bombardment and artillery shelling to pulverize Cassino, destroy enemy strongpoints, disrupt German communications, neutralize hostile artillery, and inflict heavy casualties on the Germans--in short, to so stupefy, daze, and demoralize the Cassino defenders that the ground troops would attain their objectives and occupy the town quickly with hardly any losses.25 Contrary to their anticipations, "plenty of defenders remained; plenty of fight, plenty of guns, ammunition, observation points, and plenty of perseverance."26
The air attack had come as a surprise to the Germans and had tossed men about "like scraps of paper." But the demoralizing effect of the bombing lasted only a short time. The stone houses in Cassino gave excellent protection against all but psychological strain. The men of the 1st Parachute Division, who had moved into Cassino on 26 February, were exceptionally well trained and conditioned, and did not panic.
At 1040 that morning, in the midst of the bombardment, Vietinghoff phoned Senger to instruct him to stand fast. "The Cassino massif," he said, "must be held at all costs by the 1st Parachute Division." Senger had every intention of doing just that. Although prisoners taken by the Allies would later report that the bombing had inflicted a considerable number of casualties, the defenders at Cassino actually sustained comparatively few losses. Their heavy weapons and artillery fire were only partially neutralized. Against the New Zealand and Indian infantrymen in the first assault, the German paratroopers put out extremely heavy mortar and machine gun fire. The paratroopers also found that the bombing had its compensations--toppled walls formed effective bulwarks for defense.27
Not only the hostile fire but the immense destruction wrought in Cassino impeded the Allied attack. When tankers in immediate support of the assaulting infantry advanced, they found their routes blocked by debris and craters. Some commanders and staff members had realized that progress through Cassino would be slowed by the bomb holes and the wreckage of the buildings, but the actual conditions were far worse than they had expected. Rubble choked the narrow streets, and some craters were so large--forty to fifty feet in diameter in a few instances--that they had to be bridged before the tanks could pass. Since the New Zealand Corps headquarters was a provisional entity, it lacked organic corps engineers, and the improvised engineer units were inadequate for the tremendous task of clearing avenues of advance. Germans concealed in ruined houses picked off engineers trying to do their work.28
More aircraft--120 B-17's and 140 B-24's--arrived over Cassino early on the afternoon of 15 March to help the ground troops, but heavy cloud formations covered the area and prevented the pilots from finding their targets. They returned to their bases without
releasing their loads. Lighter planes had better success. Between 1300 and 1500, 49 fighter-bombers dropped 18 tons of bombs on the railroad station in Cassino. Between 1345 and 1630, 96 P-47's, A-36's, and P-40's struck the base of Monte Cassino with 44 tons. Between 1500 and 1700, 32 P-40's and A-36's hit the forward slopes of Monte Cassino with 10 tons. And 66 A-20's and P-40's loosed 34 tons on various targets at different times during the afternoon.
The massive support from the air had little result. New Zealand infantrymen fought a bitter house-to-house battle in Cassino and came close to reaching Highway 6 along the base of Monte Cassino, but they were unable to break through to the Liri valley. Other New Zealand troops on the massif won a hill quite close to the abbey of Monte Cassino, but could go no farther. Indian troops trying to fight their way into Cassino from the north made little progress.
As dusk fell on the afternoon of 15 March, the clouds that had moved over Cassino became dark and menacing, the weather broke and the rain came. Contrary to the forecaster's predictions of three days of clear weather, a torrential downpour beat upon the battered town. The bomb craters and exposed cellars soon filled with water. As the rain continued throughout the night, it became obvious that tanks would be unable to pass through Cassino for at least thirty-six hours. And General Freyberg was depending to a large extent on the power of tanks.29
During the night the tankers could hardly form up to renew the attack. New Zealand infantrymen stumbled
RUINS OF THE CONTINENTAL HOTEL
through mud-filled craters and crumbling debris, their communications deteriorating because water had damaged their radio sets and enemy fire had cut down wire teams.
There was no progress in Cassino on 16 March, as confused fighting took place around the Continental Hotel and the railway station. Indian troops advanced toward Monte Cassino but could get no closer to the abbey than a half mile. Planes dropped 266 tons of high explosive to help the ground troops, but with no effect on the situation.
It was the artillery fire that the Germans found devastating. Of the ninety-four gun barrels that the 71st Projector Regiment had started with on 16 March, only five were left at the end of the day--the rest had been knocked out by counterbattery fire. To the defenders, the Allied forces seemed to be employing "the tactics of El Alamein; namely,
concentrated fire from planes and guns, and infantry attacks on a narrow front." But the Allied strength massed at Cassino failed to overwhelm the Gustav Line.30
The pattern was much the same on 17 March. New Zealand troops, fighting at close range, sought to clear the southwestern corner of Cassino. Indian troops attempted to gain the slope of Monte Cassino. Planes dropped about 200 tons of bombs in direct support of ground operations without noticeable effect. General Clark noted that day:
The battle of Cassino is progressing slowly. Freyberg's enthusiastic plans are not keeping up to his time schedule . . . .
I have repeatedly told Freyberg from his inception of this plan that aerial bombardment alone never has and never will drive a determined enemy from his position. Cassino has again proven this theory, for, although no doubt heavy casualties were inflicted upon the enemy in Cassino, sufficient have remained to hold up our advance and cause severe fighting in the town for the past two days . . . .
Due to General Alexander's direct dealing with Freyberg and the fact that this is an all-British show, I am reluctant to give a direct order to Freyberg . . . .31
By the night of 17 March, the situation at Cassino was thoroughly confused. The difficulty of locating and reporting forward positions made effective artillery support impossible. Tanks still could not maneuver. Highway 6 was still blocked.
Yet the attack continued in this grim and desperate battle in the weird ghost town of Cassino and on the slopes of the Cassino massif surrealistically decorated by ravaged trees and the debris of combat. The forces remained deadlocked. The Germans held two principal centers of resistance in Cassino, one in the northwest, the other in the southwest corner of the town, immobilizing and grinding down six battalions of New Zealand infantry. The Germans also held the principal ridges protecting the approaches to Monte Cassino and had completely isolated New Zealand and Indian forces on two hills.
By 21 March, as the battle of Cassino entered its seventh day, some commanders, General Juin for one, believed that the attack was proving too costly and should be stopped. General Freyberg was unwilling to call it off. At a conference during the afternoon General Alexander supported Freyberg--if the New Zealand Corps could keep up the pressure for twenty-four or forty-eight hours more, the German defense might collapse. General Clark admitted he had been discouraged about continuing the attack until he had talked with some of Freyberg's subordinate commanders, who were determined to fight until the objective was gained. General Leese agreed with Freyberg. Alexander decided to review the situation each day to see when to call a halt.32
Although no one wanted to admit defeat--"I hate to see the Cassino show flop" was the way General Clark put it--it was apparent two days later, on 23 March, that the New Zealand and Indian divisions were exhausted. Freyberg agreed with Clark, and recommended that the attack be halted. At a meeting with Leese and Clark, Alexander gave the order.33
There was no other choice. Despite the unprecedented air bombardment of Cassino, the expenditure of almost 600,000 artillery shells and the loss of 2,000 New Zealand and Indian troops in nine days--almost 300 killed, nearly 250 missing, and more than 1,500 wounded--the latest attempt to break the Gustav Line and gain entrance to the Liri valley had failed.34
General Alexander's chief of staff explained the reasons for failure. There had been too much optimism about the effect of the air bombardment on the German defenders, and this in turn had led to employing too few Allied troops in the attack. The heavy rain had bogged down the assault elements, particularly the tanks. And the enemy resistance had been stubborn.35
General Allen, together with his troops of CCB, was waiting to enter the Liri valley when word came on 16 March that the New Zealanders would probably not be able to provide him with a bridgehead. He decided that if CCB were now committed, he would try to gain a bridgehead himself. CCB continued in alert status until the morning of 18 March, when Allen was informed that the exploitation "planned for months" had become impossible. Although in reserve, CCB had nevertheless suffered casualties--several German dive bombers attacked and destroyed the tactical command post of the 1st Tank Group, completely demolishing a small building housing the headquarters and all the vehicles around it, killing six men and badly wounding five, all of them key noncommissioned officers. On 24 March orders arrived for CCB to withdraw from the Cassino area for movement to Anzio.36
One company of American tanks had participated in the battle for Cassino. Before the battle General Freyberg had asked whether General Allen could provide an armored force to help the Indian division and whether he could do so without weakening CCB to the point of hindering the projected exploitation. Allen made available a company of light tanks. In the hope that the "appearance of tanks, and the fire we could deliver, would cause chaos and panic among the Germans," 1st Lt. Herman R. Crowder, Jr., commanding Company D, 760th Tank Battalion, received the mission of spearheading an infantry attack in the Cassino massif and providing impetus for a final thrust to the abbey of Monte Cassino. The attack was first delayed, then changed to an assault on one of the spurs of Monte Castellone.
In rough terrain that caused four tanks to throw tracks at once and against heavy German mortar fire, the tank company jumped off on 19 March, but soon had to retire. The tankers then gave supporting fire to Indian infantrymen. Early in the afternoon the company moved forward again, the tankers firing as they advanced. Despite shell holes, bomb craters, and enemy artillery and small arms fire, the company had started to move along a trail directly toward Monte Cassino when the lead tank ran over a mine and was disabled, blocking the column. Although the appearance of tanks
in such difficult ground seemed to surprise and disconcert the Germans, no Indian infantrymen moved up to consolidate the gain. Crowder ordered his tanks to pull back slowly. During the withdrawal, his company lost four more tanks--one was destroyed by a mine, another by antitank fire, and two bogged down in mudholes.
All together, ten tanks were lost that day. Hoping to recover some of them, Crowder tried to get a small force of infantry and engineers to accompany the tankers. The G-3 of the Indian division refused to make the infantry and engineers available--the Germans, he said, had probably already mined and booby-trapped the tanks. Crowder estimated that the tanks were no more than 150 yards ahead of the front, but an advance of this distance, he later reported, the Indians "considered a major operation." Crowder's tank company, in the opinion of the division staff, had nevertheless given valuable assistance.37
The failure to break the Cassino defenses disappointed ground force commanders but positively shocked the air forces commanders. General Eaker, who had watched the bombardment, had returned to his headquarters that afternoon and had at once conferred by radio teletype with Maj. Gen. Barney Giles, General Arnold's chief of staff in Washington.38 The conversation was apparently amplified in a letter Eaker sent several days later to General Arnold to describe and explain what had happened.
The air phases of the Cassino battle, General Eaker wrote, went according to plan until about 1500, when an abrupt break in the weather prevented most of the remaining missions. Despite the rain, low clouds, poor visibility, and the cancellation of some missions, the air bombardment, according to ground force commanders, had provided the destruction desired. Prisoners of war indicated that the bombing had come as a great shock and surprise to the Germans and "really knocked their ears off." Yet about 300 troops living or taking shelter in a long tunnel deep under Cassino and other Germans equally well protected had survived the bombing and had resisted the ground advance, continuing to fight even though some infantry companies numbered less than thirty men. Significantly, Eaker stated, the defenders received no reinforcements during the battle.
"I think," General Eaker continued, "if I had been sitting in Washington and had been unfamiliar with the terrain at Cassino, I would have wondered what this Cassino battle was all about." Since the map showed Cassino to be a compact town at the foot of a mountain and astride the main highway into the Liri valley behind the mountain, why had the Allied command not bypassed Cassino in the broad valley to the left? This would have perhaps been possible in dry weather. But the ground during much of the first three months of 1944 had been a morass of mud that bogged down not only tanks and motor vehicles but also foot troops. That was why Cassino was a roadblock and why it had to be taken before any large-scale
offensive could be made through the valley. Furthermore, the ground commanders felt that they had to have the high ground north of Cassino before striking through the valley in order to prevent the Germans from placing fire on the rear of the exploiting forces, from launching counterattacks, and from using the heights as observation posts.
General Eaker had watched the tanks and infantry move into the eastern edge of Cassino and come to a stop. The bombs had created tremendous craters that soon filled with water. These had to be bridged or filled before the tanks could proceed, for cliffs and impassably wet ground prevented the tanks from going around the holes. "You will remember," Eaker wrote, "that I warned you in a letter written before the battle of Cassino not to expect a large-scale breakthrough as a result of this operation. That estimate of the situation has proved correct." Nor was it possible, with the forces available, with troops who were weary and depressed, to anticipate a large-scale advance in the Cassino area until the ground dried. Even as he wrote, Eaker commented, it was "raining buckets full."
General Eaker was aware that some persons outside the theater might attribute the ground force failure to poor performance by the air forces. Inside the theater, there was no such feeling. Considering the weather, Wilson, Devers, Alexander, and Clark all felt that the air forces had done everything possible.39
Air officers in Washington were sympathetic. General Giles sent congratulations and assurance that General Arnold and everyone else in the Army Air Forces headquarters were pleased with the "very fine showing you made with the air power at Cassino." Their displeasure was directed against the ground boys, as Giles called them, who did not follow through. Air commanders, he said, had "never guaranteed [the ability] to land on top of the rubble and occupy the ground." The air forces people felt that the ground follow-up of the bombing was "puny" in comparison to
the greatest concentration of air power in the world. It is too bad that our ground forces did not build up strength in depth consisting of three or four divisions in column and push on through Cassino or go around it. I believe that if we could find a few jugs of corn liquor of the same brand that General Grant did so well with, that situation could be cleared up in a few days.40
There was, nevertheless, a persistent feeling that something, somewhere, had gone wrong. And someone was going to be blamed. To repudiate comment appearing in the press that the unsuccessful outcome of the Cassino battle was due to air force failure, General Clark sent General Eaker a letter stating categorically, "I do not share that view." The tendency to blame the air forces, he wrote, "has not been inspired by my headquarters." No bombardment, in his opinion, could eliminate determined infantrymen occupying good defensive positions in a fortified area.41 Bombing could be demoralizing for a short time, but it had no lasting results when prepared positions protected men from
concussion and gave them a sense of security. The effect of the bombardment of Cassino, "though potent, was of relatively short duration and intermittent."42
General Twining wrote:
Cassino is not an indictment of the value of heavy bombs in close support of the Army. Their ability to land a knock-out blow, without warning is still an advantage which no other form of attack enjoys, but . . . there are limiting and controlling factors for this as with all other types of fire support.43
The outstanding performance at Cassino was that of the German paratroopers. To Senger, the XIV Panzer Corps commander, their "iron tenacity and unswerving resolution of true soldiers had overcome a concentration of matériel on a narrow front which probably had no precedent in this war." Their constant optimism, during even the most critical phases of the battle, was a source of amazement and inspiration to corps and army headquarters. "No troops but the 1st Parachute Division," declared Vietinghoff, the Tenth Army commander, "could have held Cassino."44
Three times the Allied forces had tried to break the Gustav Line and get into the Liri valley, and three times they had failed--in January the frontal attack across the Rapido, in February the attempt to outflank the Cassino spur, and in March the effort to drive between the abbey and the town. They would try again, but only after the weather cleared and the ground was firm, after the troops had rested. Only then, in May, would they again take up the struggle.
1. ACMF Appreciation 1, 22 Feb 44.
2. ACMF Min of CofS Mtg, 1430, 28 Feb 44, dated 4 Mar 44, AG 337; Ltr, Alexander to Clark, 18 Feb 44, sub: Regrouping; Ltrs, Alexander to Clark and to Leese, 22 Feb 44. Last three in AAI 17/3/44 - 10/10/44.
3. Clark Diary, 19 Feb 44.
4. Ibid., 21 Feb 44.
5. New Zealand Corps OI 5, 21 Feb 44; 36th Div Ltr, 9 Mar 44, sub: OI, 36th Div File; 4th New Zealand Armd Brigade OI 4, 16 Feb 44, Amendment 1, 18 Feb 44, Amendment 2, 23 Feb 44, and OI 5, 9 Mar 44. Last two in 4th New Zealand Armd Brigade File.
6. Fifth Army Ltr, Air Support, 7 Apr 44, Cassino Study.
7. Memo, Hansborough to Brann, 31 Mar 44, Cassino Study.
8. Ltr, Arnold to Eaker, undated (early Mar 44), Mathews File, OCMH.
9. Ltr, Eaker to Arnold, 6 Mar 44, Mathews File, OCMH.
11. XII Tactical Air Command Operational History, 1 January-30 June 1944, pp. 14-43. See below, p. 451.
12. Ltr, Eaker to Twining, 10 Mar 44, Mathews File, OCMH. See Craven and Cate, eds., Europe: ARGUMENT to V-E Day, p. 326.
13. 2d New Zealand Div Opn Order 41, 23 Feb 44.
14. See 4th New Zealand Brigade OI 4, 16 Feb 44, 4th New Zealand Armd Brigade File. Task Force A consisted of the 13th Armored Regiment, with the 1st, 2d, and 3d Battalions and the Reconnaissance Company, the 636th Tank Destroyer Battalion, the 16th Armored Engineer Battalion (Provisional), the 434th Antiaircraft Battalion (Provisional), the 6617th Mine Clearance Company, and a platoon of the 1st Armored Division Military Police Company. Task Force B was composed of the 1st Tank Group, with the 753d Tank Battalion, the 760th Tank Battalion (less two companies), the 776th Tank Destroyer Battalion, a company of the 48th Engineer Combat Battalion, a troop of the 91st Cavalry Reconnaissance Squadron, and the 21st New Zealand Infantry Battalion. In support of the two task forces were four battalions of 155-mm. howitzers under the control of the 6th Field Artillery Group headquarters. 1st Armd Div CCB FO 1, 2100, 14 Mar 44.
15. See CCB Paper, Movement of Assault Elements to the Rapido, 22 Jan 44, CCB S-3 Jnl File.
16. 1st Tank Group (later 1st Armd Group) AAR, 13 Feb-26 Mar 44. During part of this time, CCB was also alerted to the possibility of going to the Anzio beachhead. See Keyes to Allen, 1130, 25 Jan 44, CCB S-3 Jnl File. See also CCB Liri Valley Plan (Cassino Phase), 3 Jan 44, revised plan, 4 Feb 44 and CCB S-3 Msg, 4 Feb 44, CCB S-3 Jnl File; 36th Div Artillery Annex 3 to FO 45, 1200, 4 Feb 44.
17. Ltr, Allen to Harmon, 4 Mar 44, CCB S-3 Jnl File.
18. Clark Diary, 8 Mar 44.
19. Ibid., 10, 11 Mar 44.
20. Fifth Army Ltr, Air Support, 7 Apr 44, Cassino Study.
21. Mediterranean Allied Tactical Air Force Report, Attack on Cassino, 15 March 1944, dated 11 Jul 44, AFHQ G (Ops), Lessons from Opns, vol. II.
22. Fifth Army Engr History, I, 28; Clark Diary, 5 Mar 44.
23. Four months later in Normandy, on two different occasions, more than three times as many strategic bombers in direct support of tactical operations would drop much more than three times as many tons of high explosive. (See Martin Blumenson, Breakout and Pursuit, UNITED STATES ARMY IN WORLD WAR II (Washington, 1961), pp. 191, 234.) And in November 1944 the largest operation of this sort in World War II would take place. (See Charles B. MacDonald, The Siegfried Line Campaign, UNITED STATES ARMY IN WORLD WAR II (Washington, 1963), pp. 403ff.)
24. Quotation from Clark Diary, 17 Mar 44; figures from New Zealand Rpt, Bombing of Cassino, 23 Mar 44, Cassino Study; Mediterranean Allied Tactical Air Force Rpt, Attack on Cassino, dated 11 Jul 44, AFHQ G (Ops), Lessons from Opns, vol. II; Fifth Army Ltr, Air Support, 7 Apr 44, Cassino Study. The Fifth Army Report of Operations for March gives the figure as 1,400 tons of bombs dropped on Cassino. General Clark recorded in his diary on 15 March that 334 heavy bombers, 255 fighter-bombers and light bombers, and some medium bombers had dropped a total of 1,320 tons of bombs. According to figures received by Clark and recorded in his diary on 15 and 16 March, there were 138 Allied casualties lost to short bombs in the Cassino area--3 Polish, 7 British, 64 French, and 45 New Zealand soldiers were wounded, 8 French and 14 New Zealand soldiers were killed. On 17 March, he recorded totals of about 75 Allied troops killed and 550 wounded by the bombing.
25. Mediterranean Allied Tactical Air Force Rpt, Attack on Cassino, 15 Mar 44, AFHQ G (Ops), Lessons from Opns, vol. II.
26. Fifth Army Engr History, I, 28.
27. Vietinghoff to Senger, 1040, 15 Mar 44, quoted in Steiger MS; Vietinghoff MSS; New Zealand Rpt, Bombing of Cassino, 23 Mar 44, Cassino Study.
28. Ibid.; Fifth Army Engr History, I, 31ff.; Clark Diary, 16 Mar 44.
29. New Zealand Rpt, Bombing of Cassino, 23 Mar 44, Cassino Study.
30. Steiger MS.
31. Clark Diary, 17 Mar 44.
32. Clark Diary, 21 Mar 44.
33. Clark Diary, 23 Mar 44.
34. The 4th Indian Division lost 4,000 men in the fighting around Cassino during the months of February and March. The Tiger Triumphs, pp. 62-64.
35. General Harding's Press Conference, 25 Mar 44, Cassino Study.
36. 1st Tank Group (later 1st Armd Group) AAR, 13 Feb-26 Mar 44; Fifth Army Msg, 24 Mar 44, Fifth Army G-3 Jnl.
37. Rpt by Crowder, 24 Mar 44; Memo dictated by Gen Allen at 1130, 11 Mar 44; Ltr, Galloway to Crowder, 21 Mar 44; Allen Memos, 12, 21 Mar 44. All in CCB S-3 Jnl. See also Rpt (Col Devore), The Attack on Albanete House, AGF Bd Rpts, NATO.
38. Eaker Diary, 15 Mar 44, Mathews File, OCMH.
39. Ltr, Eaker to Arnold, The Cassino Battle, 21 Mar 44, Mathews File, OCMH.
40. Giles to Eaker, 29 Mar 44, Mathews File, OCMH.
41. Clark to Eaker, 5 Apr 44, Mathews File, OCMH. See Ltr, Gruenther to Alexander, Preliminary Rpt of Bombing of Cassino, 31 Mar 44, Cassino Study; Fifth Army Rpt on Cassino Opn, 5 Jun 44.
42. Fifth Army Rpt of Cassino Opn, 5 Jun 44; Fifth Army Rpt on Effect of Bombing and Shelling of Cassino, 27 Apr 44, AFHQ G (Ops), Lessons from Opns, vol. II. See also AFHQ Lessons from Opns, vol. I.
43. Twining Memo 5, 4 Jun 44, AFHQ Files. See also Memo, Hansborough for Brann, 31 Mar 44, Cassino Study.
44. MS # C-095b (Senger), OCMH; Vietinghoff MSS; MS # T-1a (Westphal et al.), OCMH; Vietinghoff to Kesselring, quoted in Steiger MS.
http://www.readwritethink.org/classroom-resources/lesson-plans/poetry-reading-interpretation-through-30746.html?tab=1
Poetry Reading and Interpretation Through Extensive Modeling
Grades: 9 – 12
Lesson Plan Type: Standard Lesson
Estimated Time: Eight 50-minute sessions
Through the use of extensive modeling with John Berryman’s “Sole Watchman,” students will understand the steps involved in the analysis and interpretation of poetry. The teacher will model how to summarize and analyze the poem, construct a thesis, and develop an essay. Students will review and discuss a sample essay complete with comments that highlight strong writing decisions. After reading and interpreting Berryman’s “The Ball Poem,” students will construct a 3-4 page essay on this poem.
Beth Hewett’s “Writing Onstage: Giving Students an Authentic Model” was the driving force behind not only this unit, but many of my writing lessons. Hewett’s claim that “When we write with students, they benefit” (6) led to the creation of a model-heavy classroom during writing lessons. A structured model unit such as this can provide great help to students who struggle with the writing process, teachers who are just getting into spontaneous writing, and educators who would like to work extensively with editing and revising.
Hewett, Beth L. “Writing Onstage: Giving Students an Authentic Model.” Classroom Notes Plus, 27.2. October 2009.
http://www1.bartleby.com/107/133.html
Henry Gray (1821–1865). Anatomy of the Human Body. 1918.
THE VASCULAR system is divided for descriptive purposes into (a) the blood vascular system, which comprises the heart and bloodvessels for the circulation of the blood; and (b) the lymph vascular system, consisting of lymph glands and lymphatic vessels, through which a colorless fluid, the lymph, circulates. It must be noted, however, that the two systems communicate with each other and are intimately associated developmentally.
The heart is the central organ of the blood vascular system, and consists of a hollow muscle; by its contraction the blood is pumped to all parts of the body through a complicated series of tubes, termed arteries. The arteries undergo enormous ramification in their course throughout the body, and end in minute vessels, called arterioles, which in their turn open into a close-meshed network of microscopic vessels, termed capillaries. After the blood has passed through the capillaries it is collected into a series of larger vessels, called veins, by which it is returned to the heart. The passage of the blood through the heart and blood-vessels constitutes what is termed the circulation of the blood, of which the following is an outline.
The human heart is divided by septa into right and left halves, and each half is further divided into two cavities, an upper termed the atrium and a lower the ventricle. The heart therefore consists of four chambers, two, the right atrium and right ventricle, forming the right half, and two, the left atrium and left ventricle the left half. The right half of the heart contains venous or impure blood; the left, arterial or pure blood. The atria are receiving chambers, and the ventricles distributing ones. From the cavity of the left ventricle the pure blood is carried into a large artery, the aorta, through the numerous branches of which it is distributed to all parts of the body, with the exception of the lungs. In its passage through the capillaries of the body the blood gives up to the tissues the materials necessary for their growth and nourishment, and at the same time receives from the tissues the waste products resulting from their metabolism. In doing so it is changed from arterial into venous blood, which is collected by the veins and through them returned to the right atrium of the heart. From this cavity the impure blood passes into the right ventricle, and is thence conveyed through the pulmonary arteries to the lungs. In the capillaries of the lungs it again becomes arterialized, and is then carried to the left atrium by the pulmonary veins. From the left atrium it passes into the left ventricle, from which the cycle once more begins.
The course of the blood from the left ventricle through the body generally to the right side of the heart constitutes the greater or systemic circulation, while its passage from the right ventricle through the lungs to the left side of the heart is termed the lesser or pulmonary circulation.
It is necessary, however, to state that the blood which circulates through the spleen, pancreas, stomach, small intestine, and the greater part of the large intestine is not returned directly from these organs to the heart, but is conveyed by the portal vein to the liver. In the liver this vein divides, like an artery, and ultimately ends in capillary-like vessels (sinusoids), from which the rootlets of a series of veins, called the hepatic veins, arise; these carry the blood into the inferior vena cava, whence it is conveyed to the right atrium. From this it will be seen that the blood contained in the portal vein passes through two sets of vessels: (1) the capillaries in the spleen, pancreas, stomach, etc., and (2) the sinusoids in the liver. The blood in the portal vein carries certain of the products of digestion: the carbohydrates, which are mostly taken up by the liver cells and stored as glycogen, and the protein products which remain in solution and are carried into the general circulation to the various tissues and organs of the body.
Speaking generally, the arteries may be said to contain pure and the veins impure blood. This is true of the systemic, but not of the pulmonary vessels, since it has been seen that the impure blood is conveyed from the heart to the lungs by the pulmonary arteries, and the pure blood returned from the lungs to the heart by the pulmonary veins. Arteries, therefore, must be defined as vessels which convey blood from the heart, and veins as vessels which return blood to the heart.
Structure of Arteries (Fig. 448). The arteries are composed of three coats: an internal or endothelial coat (tunica intima of Kölliker); a middle or muscular coat (tunica media); and an external or connective-tissue coat (tunica adventitia). The two inner coats together are very easily separated from the external, as by the ordinary operation of tying a ligature around an artery. If a fine string be tied forcibly upon an artery and then taken off, the external coat will be found undivided, but the two inner coats are divided in the track of the ligature and can easily be further dissected from the outer coat.
The inner coat (tunica intima) can be separated from the middle by a little maceration, or it may be stripped off in small pieces; but, on account of its friability, it cannot be separated as a complete membrane. It is a fine, transparent, colorless structure which is highly elastic, and, after death, is commonly corrugated into longitudinal wrinkles. The inner coat consists of: (1) A layer of pavement endothelium, the cells of which are polygonal, oval, or fusiform, and have very distinct round or oval nuclei. This endothelium is brought into view most distinctly by staining with nitrate of silver. (2) A subendothelial layer, consisting of delicate connective tissue with branched cells lying in the interspaces of the tissue; in arteries of less than 2 mm. in diameter the subendothelial layer consists of a single stratum of stellate cells, and the connective tissue is only largely developed in vessels of a considerable size. (3) An elastic or fenestrated layer, which consists of a membrane containing a net-work of elastic fibers, having principally a longitudinal direction, and in which, under the microscope, small elongated apertures or perforations may be seen, giving it a fenestrated appearance. It was therefore called by Henle the fenestrated membrane. This membrane forms the chief thickness of the inner coat, and can be separated into several layers, some of which present the appearance of a net-work of longitudinal elastic fibers, and others a more membranous character, marked by pale lines having a longitudinal direction. In minute arteries the fenestrated membrane is a very thin layer; but in the larger arteries, and especially in the aorta, it has a very considerable thickness.
FIG. 448 Transverse section through a small artery and vein of the mucous membrane of the epiglottis of a child. X 350. (Klein and Noble Smith.) A. Artery, showing the nucleated endothelium, e, which lines it; the vessel being contracted, the endothelial cells appear very thick. Underneath the endothelium is the wavy elastic lamina. The chief part of the wall of the vessel is occupied by the circular muscle coat m; the rod-shaped nuclei of the muscle cells are well seen. Outside this is a, part of the adventitia. This is composed of bundles of connective tissue fibers, shown in section, with the nuclei of the connective tissue corpuscles. The adventitia gradually merges into the surrounding connective tissue. V. Vein showing a thin endothelial membrane, e, raised accidentally from the intima, which on account of its delicacy is seen as a mere line on the media m. This latter is composed of a few circular unstriped muscle cells a. The adventitia, similar in structure to that of an artery.
The middle coat (tunica media) is distinguished from the inner by its color and by the transverse arrangement of its fibers. In the smaller arteries it consists principally of plain muscle fibers in fine bundles, arranged in lamellæ and disposed circularly around the vessel. These lamellæ vary in number according to the size of the vessel; the smallest arteries having only a single layer (Fig. 449), and those slightly larger three or four layers. It is to this coat that the thickness of the wall of the artery is mainly due (Fig. 448A, m). In the larger arteries, as the iliac, femoral, and carotid, elastic fibers unite to form lamellæ which alternate with the layers of muscular fibers; these lamellæ are united to one another by elastic fibers which pass between the muscular bundles, and are connected with the fenestrated membrane of the inner coat (Fig. 450). In the largest arteries, as the aorta and innominate, the amount of elastic tissue is very considerable; in these vessels a few bundles of white connective tissue also have been found in the middle coat. The muscle fiber cells are about 50μ in length and contain well-marked, rod-shaped nuclei, which are often slightly curved.
The external coat (tunica adventitia) consists mainly of fine and closely felted bundles of white connective tissue, but also contains elastic fibers in all but the smallest arteries. The elastic tissue is much more abundant next the tunica media, and it is sometimes described as forming here, between the adventitia and media, a special layer, the tunica elastica externa of Henle. This layer is most marked in arteries of medium size. In the largest vessels the external coat is relatively thin; but in small arteries it is of greater proportionate thickness. In the smaller arteries it consists of a single layer of white connective tissue and elastic fibers; while in the smallest arteries, just above the capillaries, the elastic fibers are wanting, and the connective tissue of which the coat is composed becomes more nearly homogeneous the nearer it approaches the capillaries, and is gradually reduced to a thin membranous envelope, which finally disappears.
Some arteries have extremely thin walls in proportion to their size; this is especially the case in those situated in the cavity of the cranium and vertebral canal, the difference depending on the thinness of the external and middle coats.
The arteries, in their distribution throughout the body, are included in thin fibro-areolar investments, which form their sheaths. The vessel is loosely connected with its sheath by delicate areolar tissue; and the sheath usually encloses the accompanying veins, and sometimes a nerve. Some arteries, as those in the cranium, are not included in sheaths.
FIG. 449 Small artery and vein, pia mater of sheep. X 250. Surface view above the interrupted line; longitudinal section below. Artery in red; vein in blue.
All the larger arteries, like the other organs of the body, are supplied with bloodvessels. These nutrient vessels, called the vasa vasorum, arise from a branch of the artery, or from a neighboring vessel, at some considerable distance from the point at which they are distributed; they ramify in the loose areolar tissue connecting the artery with its sheath, and are distributed to the external coat, but do not, in man, penetrate the other coats; in some of the larger mammals a few vessels have been traced into the middle coat. Minute veins return the blood from these vessels; they empty themselves into the vein or veins accompanying the artery. Lymphatic vessels are also present in the outer coat.
Arteries are also supplied with nerves, which are derived from the sympathetic, but may pass through the cerebrospinal nerves. They form intricate plexuses upon the surfaces of the larger trunks, and run along the smaller arteries as single filaments, or bundles of filaments which twist around the vessel and unite with each other in a plexiform manner. The branches derived from these plexuses penetrate the external coat and are distributed principally to the muscular tissue of the middle coat, and thus regulate, by causing the contraction and relaxation of this tissue, the amount of blood sent to any part.
The Capillaries. The smaller arterial branches (excepting those of the cavernous structure of the sexual organs, of the splenic pulp, and of the placenta) terminate in net-works of vessels which pervade nearly every tissue of the body. These vessels, from their minute size, are termed capillaries. They are interposed between the smallest branches of the arteries and the commencing veins, constituting a net-work, the branches of which maintain the same diameter throughout; the meshes of the net-work are more uniform in shape and size than those formed by the anastomoses of the small arteries and veins.
The diameters of the capillaries vary in the different tissues of the body, the usual size being about 8μ. The smallest are those of the brain and the mucous membrane of the intestines; and the largest those of the skin and the marrow of bone, where they are stated to be as large as 20μ in diameter. The form of the capillary net varies in the different tissues, the meshes being generally rounded or elongated.
The rounded form of mesh is most common, and prevails where there is a dense network, as in the lungs, in most glands and mucous membranes, and in the cutis; the meshes are not of an absolutely circular outline, but more or less angular, sometimes nearly quadrangular, or polygonal, or more often irregular.
Elongated meshes are observed in the muscles and nerves, the meshes resembling parallelograms in form, the long axis of the mesh running parallel with the long axis of the nerve or muscle. Sometimes the capillaries have a looped arrangement; a single vessel projecting from the common net-work and returning after forming one or more loops, as in the papillæ of the tongue and skin.
The number of the capillaries and the size of the meshes determine the degree of vascularity of a part. The closest network and the smallest interspaces are found in the lungs and in the choroid coat of the eye. In these situations the interspaces are smaller than the capillary vessels themselves. In the intertubular plexus of the kidney, in the conjunctiva, and in the cutis, the interspaces are from three to four times as large as the capillaries which form them; and in the brain from eight to ten times as large as the capillaries in their long diameters, and from four to six times as large in their transverse diameters. In the adventitia of arteries the width of the meshes is ten times that of the capillary vessels. As a general rule, the more active the function of the organ, the closer is its capillary net and the larger its supply of blood; the meshes of the network are very narrow in all growing parts, in the glands, and in the mucous membranes, wider in bones and ligaments which are comparatively inactive; bloodvessels are nearly altogether absent in tendons, in which very little organic change occurs after their formation. In the liver the capillaries take a more or less radial course toward the intralobular vein, and their walls are incomplete, so that the blood comes into direct contact with the liver cells. These vessels in the liver are not true capillaries but sinusoids; they are developed by the growth of columns of liver cells into the blood spaces of the embryonic organ.
Structure. The wall of a capillary consists of a fine transparent endothelial layer, composed of cells joined edge to edge by an interstitial cement substance, and continuous with the endothelial cells which line the arteries and veins. When stained with nitrate of silver the edges which bound the epithelial cells are brought into view (Fig. 451). These cells are of large size and of an irregular polygonal or lanceolate shape, each containing an oval nucleus which may be displayed by carmine or hematoxylin. Between their edges, at various points of their meeting, roundish dark spots are sometimes seen, which have been described as stomata, though they are closed by intercellular substance. They have been believed to be the situations through which the colorless corpuscles of the blood, when migrating from the bloodvessels, emerge; but this view, though probable, is not universally accepted.
Kolossow describes these cells as having a rather more complex structure. He states that each consists of two parts: of hyaline ground plates, and of a protoplasmic granular part, in which is imbedded the nucleus, on the outside of the ground plates. The hyaline internal coat of the capillaries does not form a complete membrane, but consists of plates which are inelastic, and though in contact with each other are not continuous; when therefore the capillaries are subjected to intravascular pressure, the plates become separated from each other; the protoplasmic portions of the cells, on the other hand, are united together. In some organs, e. g., the glomeruli of the kidneys, intercellular cement cannot be demonstrated in the capillary wall and the cells are believed to form a syncytium.
In many situations a delicate sheath or envelope of branched nucleated connective tissue cells is found around the simple capillary tube, particularly in the larger ones; and in other places, especially in the glands, the capillaries are invested with retiform connective tissue.
Sinusoids. In certain organs, viz., the heart, the liver, the suprarenal and parathyroid glands, the glomus caroticum and glomus coccygeum, the smallest bloodvessels present various differences from true capillaries. They are wider, with an irregular lumen, and have no connective tissue covering, their endothelial cells being in direct contact with the cells of the organ. Moreover, they are either arterial or venous and not intermediate as are the true capillaries. These vessels have been called sinusoids by Minot. They are formed by columns of cells or trabeculæ pushing their way into a large bloodvessel or blood space and carrying its endothelium before them; at the same time the wall of the vessel or space grows out between the cell columns.
Structure of Veins. The veins, like the arteries, are composed of three coats: internal, middle, and external; and these coats are, with the necessary modifications, analogous to the coats of the arteries; the internal being the endothelial, the middle the muscular, and the external the connective tissue or areolar (Fig. 452). The main difference between the veins and the arteries is in the comparative weakness of the middle coat in the former.
FIG. 451 Capillaries from the mesentery of a guinea-pig, after treatment with solution of nitrate of silver. a. Cells. b. Their nuclei.
In the smallest veins the three coats are hardly to be distinguished (Fig. 449). The endothelium is supported on a membrane separable into two layers, the outer of which is the thicker, and consists of a delicate, nucleated membrane (adventitia), while the inner is composed of a network of longitudinal elastic fibers (media). In the veins next above these in size (0.4 mm. in diameter), according to Kölliker, a connective tissue layer containing numerous muscle fibers circularly disposed can be traced, forming the middle coat, while the elastic and connective tissue elements of the outer coat become more distinctly perceptible. In the middle-sized veins the typical structure of these vessels becomes clear. The endothelium is of the same character as in the arteries, but its cells are more oval and less fusiform. It is supported by a connective tissue layer, consisting of a delicate net-work of branched cells, and external to this is a layer of elastic fibers disposed in the form of a net-work in place of the definite fenestrated membrane seen in the arteries. This constitutes the internal coat. The middle coat is composed of a thick layer of connective tissue with elastic fibers, intermixed, in some veins, with a transverse layer of muscular tissue. The white fibrous element is in considerable excess, and the elastic fibers are in much smaller proportion in the veins than in the arteries. The outer coat consists, as in the arteries, of areolar tissue, with longitudinal elastic fibers. In the largest veins the outer coat is from two to five times thicker than the middle coat, and contains a large number of longitudinal muscular fibers. These are most distinct in the inferior vena cava, especially at the termination of this vein in the heart, in the trunks of the hepatic veins, in all the large trunks of the portal vein, and in the external iliac, renal, and azygos veins. In the renal and portal veins they extend through the whole thickness of the outer coat, but in the other veins mentioned a layer of connective and elastic tissue is found external to the muscular fibers. All the large veins which open into the heart are covered for a short distance with a layer of striped muscular tissue continued on to them from the heart. Muscular tissue is wanting: (1) in the veins of the maternal part of the placenta; (2) in the venous sinuses of the dura mater and the veins of the pia mater of the brain and medulla spinalis; (3) in the veins of the retina; (4) in the veins of the cancellous tissue of bones; (5) in the venous spaces of the corpora cavernosa. The veins of the above-mentioned parts consist of an internal endothelial lining supported on one or more layers of areolar tissue.
Most veins are provided with valves which serve to prevent the reflux of the blood. Each valve is formed by a reduplication of the inner coat, strengthened by connective tissue and elastic fibers, and is covered on both surfaces with endothelium, the arrangement of which differs on the two surfaces. On the surface of the valve next the wall of the vein the cells are arranged transversely; while on the other surface, over which the current of blood flows, the cells are arranged longitudinally in the direction of the current. Most commonly two such valves are found placed opposite one another, more especially in the smaller veins or in the larger trunks at the point where they are joined by smaller branches; occasionally there are three and sometimes only one. The valves are semilunar. They are attached by their convex edges to the wall of the vein; the concave margins are free, directed in the course of the venous current, and lie in close apposition with the wall of the vein as long as the current of blood takes its natural course; if, however, any regurgitation takes place, the valves become distended, their opposed edges are brought into contact, and the current is interrupted. The wall of the vein on the cardiac side of the point of attachment of each valve is expanded into a pouch or sinus, which gives to the vessel, when injected or distended with blood, a knotted appearance. The valves are very numerous in the veins of the extremities, especially of the lower extremities, these vessels having to conduct the blood against the force of gravity. They are absent in the very small veins, i. e., those less than 2 mm. in diameter, also in the venæ cavæ, hepatic, renal, uterine, and ovarian veins. A few valves are found in each spermatic vein, and one also at its point of junction with the renal vein or inferior vena cava respectively. The cerebral and spinal veins, the veins of the cancellated tissue of bone, the pulmonary veins, and the umbilical vein and its branches, are also destitute of valves. A few valves are occasionally found in the azygos and intercostal veins. Rudimentary valves are found in the tributaries of the portal venous system.
http://www.sciencemeetsreligion.org/blog/2011/03/can-computers-think/
Are human brains different than computers?
It has been widely believed through history (and is still widely believed by many religious-minded people) that the human mind is fundamentally distinct from anything mechanical or otherwise non-living. However, as with many other beliefs of this sort, modern science has discovered many of the workings of the human mind, and while a complete picture of conscious thought is not yet in hand, many scientists believe that the essential details are understood. Many of these findings were outlined in a 1995 book by Francis Crick, the co-discoverer of DNA [Crick1995], and more details have been discovered since then.
Along this line, considerable research has been done analyzing intelligent behavior of animals. In a 2007 study by a Japanese research team, chimpanzees out-performed university students in a task to remember the location of numbers on a screen [Briggs2007]. In another remarkable study, researchers at Wofford College taught a border collie to recognize the names of 1022 objects, and to distinguish between names of the objects themselves and orders to fetch them. The researchers halted the training of this dog after three years, not because the dog could not learn more names but because of time and budget constraints [SD2011e]. Also, the average scaled brain size of various species of dolphins is greater than the average scaled brain size of chimpanzees, gorillas and orangutans, our closest primate relatives [ConwayMorris2003, pg. 247]. In short, these studies point to human intelligence only as the end of a long spectrum of intelligence among other species on this planet.
One of the more interesting lines of research in this area is in “artificial intelligence,” which is a term used rather broadly to describe advances in computer science that attempt to perform similar functions to the operation of human minds. Early pioneers of computing were convinced that numerous real-world applications of artificial intelligence were just around the corner. In the early 1950s, for instance, it was widely expected that practical computer systems for machine translation would be working within “a year or two.” But these early efforts foundered on the reality that emulating functions of the human brain was much more difficult than originally expected. Spurred in part by Moore’s Law, within the past 20 years a new generation of researchers in the artificial intelligence field has revisited some of the older applications that proved so troublesome. One breakthrough in this arena was the discovery of “Bayesian” methods, which, by the way, are significantly more akin to the experience-based process by which humans think and learn. As one example of the numerous lines of research and development in this area, some rather good computer translation tools are now available — try Google’s online translation tool at http://translate.google.com.
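To make the flavor of these “Bayesian” methods concrete, here is a minimal Python sketch (purely illustrative; the function name, numbers, and word-sense scenario are invented, not drawn from any system mentioned above) of a single Bayesian update, in which a prior belief is revised according to how likely the observed evidence would be under each hypothesis:

def bayes_update(prior, likelihood, false_alarm_rate):
    # P(H | E) = P(E | H) P(H) / P(E), where
    # P(E) = P(E | H) P(H) + P(E | not-H) P(not-H)
    evidence = likelihood * prior + false_alarm_rate * (1.0 - prior)
    return likelihood * prior / evidence

# Hypothetical example: a translator is 20% sure "bank" means the river
# sense; context words like "water" appear 70% of the time in that sense
# and 10% of the time otherwise.
posterior = bayes_update(prior=0.2, likelihood=0.7, false_alarm_rate=0.1)
print(round(posterior, 2))  # 0.64

Each new piece of context repeats the same update with the previous posterior as the new prior, which is the experience-based accumulation the analogy to human learning rests on.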
IBM’s Deep Blue System
This recent wave of progress in artificial intelligence was brought to the public’s attention in 1996, when an IBM computer system known as “Deep Blue” defeated Garry Kasparov, the reigning world chess champion, in game one of a six-game match. After the defeat, Kasparov quickly left the stage and was described as “devastated” [Weber1996]. Kasparov went on to win the 1996 tournament, but in a 1997 rematch Deep Blue won 3.5 games to 2.5 games, decisively establishing computer supremacy in tournament chess [Weber1997].
IBM’s Watson System
In 2004, an IBM research executive, while having dinner with some coworkers, noticed that everyone in the restaurant started watching a telecast of the American quiz show Jeopardy!, where Ken Jennings was in the middle of a long winning streak. After discussions with IBM scientists and executives, IBM embarked on a plan to develop a natural language question-answering system whose goal was to be powerful enough that it could compete with top human contestants on Jeopardy! The project proved every bit as challenging as it was first thought to be. According to some reports, IBM spent roughly $1 billion on the project, which was dubbed “Watson” after Thomas J. Watson, IBM’s first president and longtime chief executive.
In 2010, after the IBM team felt that enough progress had been made, IBM executives contacted executives of the Jeopardy! show, and an agreement was reached to stage a tournament. On the human side, Jeopardy! recruited legendary champs Ken Jennings, who set an all-time record of 74 consecutive wins, and Brad Rutter, who until the Watson match was undefeated and the biggest money winner. The match was conducted on 14-16 February 2011 at IBM’s headquarters. Questions were fed to Watson electronically as soon as they were displayed to the human contestants. When Watson was confident of an answer, it depressed the signaling button, and, if it was the first to ring in, enunciated the response in a computer-synthesized voice.
On the first day, Watson opened impressively, but in the end was tied with Rutter for the lead. But on the second day Watson performed extremely well — it rang in first on 25 of the 30 questions, and was correct on 24 of the 25. It also did very well on the third day, although not as decisively as the second day. Watson’s three-day total “winnings” were $77,147, far ahead of Jennings at $24,000 and Rutter at $21,600, and so Watson was declared the victor (IBM’s actual winnings of US $1,000,000 were split between two charities). In his memorable inscription conceding defeat to Watson at the end of the Jeopardy! match, Ken Jennings wrote on his screen “I for one welcome our new computer overlords” [Markoff2011a].
The real significance of the Watson project was IBM’s demonstration that a computer system can rather well “understand” and respond to natural language queries, which has long been a major obstacle in real-world applications of artificial intelligence. Computers have not yet passed the “Turing test,” wherein humans exchange messages with an unseen partner (a computer) and judge it to be human, but they are getting close.
So where is all this heading? A 2011 Time article featured an interview with futurist Ray Kurzweil [Grossman2011]. Kurzweil is a leading figure in the “Singularity” movement, a loosely coupled group of scientists and technologists who foresee an era, which they predict will occur by roughly 2045, when machine intelligence will far transcend human intelligence. Such future intelligent systems will then design even more powerful technology, resulting in a dizzying advance that we can only dimly foresee at the present time. Kurzweil outlines this vision in his recent book The Singularity Is Near [Kurzweil2005]. Many of these scientists and technologists believe that we are already on the cusp of this transition.
Futurists such as Kurzweil certainly have their skeptics and detractors. Many question the timetable of these predictions. Others (including the present author) are concerned that these writers are soft-pedaling enormous societal, legal, financial and ethical challenges, some of which we are just beginning to see clearly. Still others, such as Bill Joy, acknowledge that many of these predictions will materialize, but are very concerned that humans could be relegated to minor players in the future, or that out-of-control robots or nanotech-produced “grey goo” could destroy life on our fragile planet [Joy2000].
Nonetheless, the basic conclusions of the Singularity community appear to be on target: Moore’s Law is likely to continue for at least another 20 years or so. Progress in a wide range of other technologies is here to stay (in part because of Moore’s Law). Scientific progress is here to stay (again, in part because of Moore’s Law-based advances in instrumentation and computer simulation tools). And all this is leading directly and inexorably to real-world artificial intelligence within 20-40 years. Whether “we” merge with “them,” or “they” advance along with “us” is an interesting question, but either way, the future is coming.
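As a back-of-the-envelope illustration of what “another 20 years or so” of Moore’s Law implies, the short Python sketch below compounds the doublings, assuming the commonly quoted two-year doubling period (the figures are simple arithmetic, not predictions made in this article):

def growth_factor(years, doubling_period=2.0):
    # Capacity doubles once per doubling_period, so growth
    # compounds as 2 ** (years / doubling_period).
    return 2 ** (years / doubling_period)

print(growth_factor(20))  # 1024.0 -- about a thousandfold in 20 years
print(growth_factor(40))  # 1048576.0 -- about a millionfold in 40 years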
In the wake of these developments, many have noted that advances in technology are heading directly to a form of “immortality,” in two different ways. First of all, medical technology is in the midst of a revolution on several fronts. These developments range from advanced, high-tech prosthetics for the handicapped to some remarkable cancer therapies currently being developed [Kurzweil2005]. Of even greater interest is research into the fundamental causes of aging. Scientists have identified seven broad categories of molecular and cellular differences between older and younger people, and are working on ways to retard or stop each factor [deGrey2004].
Another line of thinking in this direction follows recent developments in artificial intelligence mentioned above. Some in the Singularity community, for instance, believe that the time will come that one’s brain can be scanned with such resolution that the full contents of one’s mind can be “uploaded” into a super-powerful computer system, after which the “person” will achieve a sort of immortality [Kurzweil2005]. Physicist Frank Tipler is even more expansive, predicting that every human who has ever lived will ultimately be “resurrected” in an information-theoretic sense [Tipler1994]. Even if one discounts the boundless optimism of such writers, the disagreement is generally not a matter of if, but only when such changes will transpire, and whether mankind can muster the wisdom to carefully control and direct these technologies for good rather than evil.
In any event, it is curious to note that at the pinnacle of modern science and technology, mankind has identified the extension of life and, even more boldly, the conquering of death as top future priorities, goals which are also the pinnacles of Judeo-Christian religion. Further, many futurist thinkers (who by and large are of highly secular sentiment) also recognize that extension of life has significant implications for human morality. As Marc Geddes explains [Geddes2004]:
Rational people understand that actions have consequences. A life of crime may help a person in the short term, but in the long run it may get you killed or imprisoned. … People are more likely to be moral when they understand they will have to face the consequences of their actions in the future. It follows that the further into the future one plans for, the more moral one’s behavior should become.
In a similar vein, humanitarian Albert Schweitzer based his sense of ethics in a deep reverence for human life, a reverence that reverberates even today in a very different environment than the one Schweitzer originally envisioned [Schweitzer1933, pg. 157]:
Affirmation of life is the spiritual act by which man ceases to live unreflectively and begins to devote himself to his life with reverence in order to raise it to its true value. To affirm life is to deepen, to make more inward, and to exalt the will to live. At the same time the man who has become a thinking being feels a compulsion to give to every will-to-live the same reverence for life that he gives to his own. He experiences that other life in his own. He accepts as being good: to preserve life, to promote life, to raise to its highest value life which is capable of development; and as being evil: to destroy life, to injure life, to repress life which is capable of development. This is the absolute, fundamental principle of the moral, and it is a necessity of thought.
In short, although some disagree, the consensus of scientists who have studied mind and consciousness is that there does not appear to be anything fundamental in human intelligence that cannot one day be exhibited in machine intelligence. Those of religious faith who hold out for a fundamental distinction that cannot be bridged — a cognitive science “proof” of God — are welcome to hold this view, but from all indications this notion is another instance of a “God of the gaps” theological error, wherein one looks for God in the recesses of what remains unknown in scientific knowledge at one particular point in time. Note that these findings do not refute the religious notion of a “soul,” nor do they suggest that humans do not assume responsibility for their actions and decisions, but instead merely that many if not all normal operations of human minds may one day be replicated in machine intelligence. Thus, as with most other aspects of the science-religion discussion, fundamentally there is no basis for conflict, provided that each discipline recognizes its own limitations. For additional discussion, see God of the gaps.
Some additional analysis of this issue, and some additional references, may be found at Computers-think.
- [Briggs2007] Helen Briggs, “Chimps beat humans in memory test,” BBC News, 3 Dec 2007, available at Online article.
- [ConwayMorris2003] Simon Conway Morris, Life’s Solutions: Inevitable Humans in a Lonely Universe, Cambridge University Press, Cambridge, UK, 2003.
- [Crick1995] Francis Crick, The Astonishing Hypothesis: The Scientific Search for the Soul, Touchstone, New York, 1995.
- [deGrey2004] Aubrey de Grey, “The War on Aging,” in Immortality Institute, The Scientific Conquest of Death, Libros en Red Publishers, Buenos Aires, 2004, pg. 29-46.
- [Geddes2004] Marc Geddes, “An Introduction to Immortality Morality,” in Immortality Institute, The Scientific Conquest of Death, Libros en Red Publishers, Buenos Aires, 2004, pg. 239-256.
- [Grossman2011] Lev Grossman, “2045: The Year Man Becomes Immortal,” Time, 10 Feb 2011, available at Online article.
- [Hodges2000] Andrew Hodges, Alan Turing: The Enigma, originally published 1983, republished by Walker and Co., New York, 2000.
- [Joy2000] Bill Joy, “The Future Doesn’t Need Us,” Wired, Apr 2000, available at Online article.
- [Kurzweil2005] Ray Kurzweil, The Singularity Is Near, Viking Penguin, New York, 2005.
- [Macrae1992] Norman Macrae, John Von Neumann: The Scientific Genius Who Pioneered the Modern Computer, Game Theory, Nuclear Deterrence, and Much More, Pantheon, New York, 1992.
- [Markoff2011a] John Markoff, "Computer Wins on 'Jeopardy!': Trivial, It's Not," New York Times, 16 Feb 2011, available at Online article.
- [Schweitzer1933] Albert Schweitzer, Out of My Life and Thought: An Autobiography, Felix Meiner Verlag, Leipzig, 1931, English Translation 1933, reprinted by Johns Hopkins University Press, 1998.
- [SD2011e] [no author] “Border Collie Comprehends Over 1,000 Object Names as Verbal Referents,” Science Daily, 6 Jan 2011, available at Online article.
- [Tipler1994] Frank J. Tipler, The Physics of Immortality: Modern Cosmology, God and the Resurrection of the Dead, Doubleday, New York, 1994.
- [Weber1996] Bruce Weber, “In Upset, Computer Beats Chess Champion,” New York Times, 11 Feb 1996, available at Online article.
- [Weber1997] Bruce Weber, “Computer Defeats Kasparov, Stunning the Chess Experts,” New York Times, 5 May 1997, available at Online article.
http://www.doctorslounge.com/nephrology/procedures/dialysis.htm
In a clinical context, dialysis is a method for removing wastes such as
urea from the blood when the kidneys can no longer do the job. The two
types of dialysis are hemodialysis and peritoneal dialysis.
In hemodialysis, the patient's blood is passed through a tube into a
machine that filters out waste products. The cleansed blood is then
returned to the body.
In peritoneal dialysis, a special solution is run through a tube into
the peritoneum, a thin tissue that lines the cavity of the abdomen.
The body's waste products are removed through the tube.
There are three types of peritoneal dialysis:
- Continuous ambulatory
peritoneal dialysis (CAPD), the most common type, needs no machine and
can be done at home.
- Continuous cyclic peritoneal dialysis (CCPD) uses
a machine and is usually performed at night when the person is asleep.
- Intermittent peritoneal dialysis (IPD) uses the same type of
machine as CCPD, but is usually done in the hospital because treatment
takes longer.
Hemodialysis and peritoneal dialysis may be used to
treat people with diabetes who have kidney failure.
Hemodialysis works by having the blood flow along one side of a
semi-permeable membrane, with the dialysis solution (usually a highly
concentrated saline) flowing along the other side. Due to the
difference in osmolarity between the two liquids, water traverses the
membrane in order to dilute the dialysis liquid, carrying along the
unwanted blood solutes.
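As a rough illustration of exchange across the membrane, here is a minimal sketch of a two-compartment model with first-order solute transfer; the rate constant, step count, and concentrations are all illustrative assumptions, not clinical values:

```python
# Toy two-compartment model of solute exchange across a dialysis membrane.
# Transfer is modeled as simple first-order movement down the concentration
# gradient; all numbers are illustrative, not clinical values.

def dialyze(blood_conc, dialysate_conc, k=0.05, steps=60):
    """Step the blood/dialysate solute concentrations toward equilibrium."""
    history = []
    for _ in range(steps):
        flux = k * (blood_conc - dialysate_conc)  # solute moves down the gradient
        blood_conc -= flux
        dialysate_conc += flux
        history.append((blood_conc, dialysate_conc))
    return history

if __name__ == "__main__":
    # Start with solute-rich blood and fresh dialysate.
    trace = dialyze(blood_conc=100.0, dialysate_conc=0.0)
    print(f"final blood concentration: {trace[-1][0]:.1f}")
```

In practice the dialysate is continuously replaced, which keeps the gradient (and hence the flux) from vanishing; this sketch omits that refinement.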
How Hemodialysis is typically done
- Dialysis is conducted in a dedicated facility, either a special
room in a hospital or a clinic that specializes in hemodialysis.
- Nurses and technicians working in the facility have special
training specific to dialysis.
- A dialysis patient will be given a prescription by a nephrologist
(a doctor specializing in kidney issues). All dialysis treatment
issues are ultimately referred back to this doctor or an alternate,
though the attending nurse will often make minor care decisions
without referring to the doctor.
- The dialysis prescription will specify various parameters for
setting up dialysis machines. It will also specify times and durations
of dialysis sessions. In the US, 3-4 hour sessions, 3 times a week,
are typical.
- The dialysis center to be used by the patient is contacted and
schedules the patient for a specific time period.
- Before or around the time the patient arrives for his/her scheduled
session, a dialysis machine will be prepared. There are many models of
dialysis machine, but a typical modern machine includes a computer, a
CRT display, a pump, and fittings for disposable tubing and filters.
The filters (the actual artificial kidneys) are cylindrical, with a
clear plastic outside and the filter material visible inside (it looks
like thick paper). They are perhaps 15-18" long and 2-3" thick, with
connectors at both ends. The technician or nurse will set up the
plumbing on the machine in a moderately complex pattern that has been
worked out to move blood through the filter, allow for a saline drip
(or not), and allow various other medications/chemicals to be
administered. How the plumbing is set up may vary between models of
machine and types of filter. For some filters, it is necessary to
clear sterilizing fluid (Renalin, or others) from the filter before
connecting the patient. This is done by altering the plumbing to push
saline through the filter, and the result is carefully checked with a
type of litmus test. The pump does not directly contact the blood or
fluid in the plumbing; it is a peristaltic (roller) design that works
by applying pressure to the tubing and then moving that pressure point
around. Think of a disk with a protrusion on it, placed inside a
close-fitting 270-degree enclosure, with plastic tubing run between
the enclosure and the disk, entering and exiting through the 90 open
degrees. As the disk turns, it puts pressure on the tubing, and the
pressure point rolls around through the 270 degrees, forcing the fluid
to move. It is characteristic of dialysis machines that most of the
blood outside the patient's body at any given time is visible. This
facilitates troubleshooting, particularly detection of clotting.
- The patient arrives and is carefully weighed. Standing and sitting
blood pressures are taken. Temperature is taken.
- Access is set up. For patients with a fistula (a surgical
modification to an arm or leg vein to make it more robust, and
therefore usable for the high-capacity blood movement required by
dialysis), this means inserting two large-gauge needles into the
fistula. (Yes, it hurts.) Fistulas are widely considered the desirable
way to get access for hemodialysis, but they take time to create and
mature. For other patients, access may be via a catheter installed to
connect to large veins in the chest. (This means no needles, but there
are other severe downsides to a catheter.) There are some other
arrangements that can be made as well.
- When access has been set up, the patient is connected to the
preconfigured plumbing, creating a complete loop through the pump and
filter. The pump and a timer are started. Hemodialysis is underway.
- Periodically (every half hour, nominally) blood pressure is taken.
As a practical matter, fluid is also removed during dialysis. Most
dialysis patients are on moderate to severe fluid-restrictive diets
(in addition to other dietary restrictions). This is because kidney
failure usually includes an inability to properly regulate fluid
levels in the body. A session of hemodialysis may typically remove 2-5
kilograms (about 4-11 pounds) of fluid from the patient. The removal
of fluid is done to achieve a predetermined "dry weight" for the
patient: a weight that the care staff believes represents what the
patient should weigh without the fluid built up because of kidney
failure (a small worked example of this arithmetic appears after this
list). Removing this much fluid can cause or exacerbate low blood
pressure. Monitoring is intended to detect this before it becomes too
severe. Low blood pressure can cause cramping, lightheadedness, and
unconsciousness.
- At the end of the prescribed time, the patient is disconnected
from the plumbing (which is removed and discarded, except perhaps for
the filter, which may be sterilized and reused with the same patient
at a later date). Needle wounds (in the case of a fistula) are
bandaged with gauze, held for 5-10 minutes with direct pressure to
stop bleeding, then taped in place. It's just like getting blood
drawn, only it takes a lot longer, and more fluid is lost.
- Temperature, standing and sitting blood pressure, and weight are
all measured again. Temperature changes may indicate infection; blood
pressure concerns are discussed in the monitoring step above. Weighing
confirms the removal of the desired amount of fluid.
- Care staff verifies that the patient is in condition suitable for
leaving. The patient must be able to stand (if previously able),
maintain a reasonable blood pressure, and be coherent (if normally
coherent). Different rules apply for in-patient treatment - in those
cases the patient isn't leaving the facility.
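As promised above, here is a worked sketch of the dry-weight arithmetic; the weights are hypothetical and the code is illustration only, not clinical guidance:

```python
# Toy dry-weight calculation for a hemodialysis session.
# The weights below are hypothetical; illustration, not clinical guidance.

KG_PER_LB = 0.4536  # kilograms per pound

def fluid_to_remove_kg(pre_session_weight_kg, dry_weight_kg):
    """Fluid (in kg, roughly litres) to remove to reach the prescribed dry weight."""
    excess = pre_session_weight_kg - dry_weight_kg
    return max(excess, 0.0)  # never a negative removal target

if __name__ == "__main__":
    pre, dry = 74.5, 71.0  # hypothetical pre-session and dry weights
    goal = fluid_to_remove_kg(pre, dry)
    print(f"target fluid removal: {goal:.1f} kg (~{goal / KG_PER_LB:.1f} lb)")
```

The 3.5 kg target here sits within the 2-5 kg range quoted above; care staff would also weigh the removal rate against the blood-pressure risks noted earlier.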
https://en.m.wikipedia.org/wiki/Humor
Humour (British English), also spelt as humor (American English; see spelling differences), is the tendency of experiences to provoke laughter and provide amusement. The term derives from the humoral medicine of the ancient Greeks, which taught that the balance of fluids in the human body, known as humours (Latin: humor, "body fluid"), controlled human health and emotion.
People of all ages and cultures respond to humour. Most people are able to experience humour—be amused, smile or laugh at something funny—and thus are considered to have a sense of humour. The hypothetical person lacking a sense of humour would likely find the behaviour inducing it to be inexplicable, strange, or even irrational. Though ultimately decided by personal taste, the extent to which a person finds something humorous depends on a host of variables, including geographical location, culture, maturity, level of education, intelligence and context. For example, young children may favour slapstick such as Punch and Judy puppet shows or the Tom and Jerry cartoons, whose physical nature makes it accessible to them. By contrast, more sophisticated forms of humour such as satire require an understanding of its social meaning and context, and thus tend to appeal to a more mature audience.
Many theories exist about what humour is and what social function it serves. The prevailing types of theories attempting to account for the existence of humour include psychological theories, the vast majority of which consider humour-induced behaviour to be very healthy; spiritual theories, which may, for instance, consider humour to be a "gift from God"; and theories which consider humour to be an unexplainable mystery, very much like a mystical experience.
The benign-violation theory, endorsed by Peter McGraw, attempts to explain humour's existence. The theory says 'humour only occurs when something seems wrong, unsettling, or threatening, but simultaneously seems okay, acceptable or safe'. Humour can be used as a method to easily engage in social interaction by taking away that awkward, uncomfortable, or uneasy feeling of social interactions.
Others believe that 'the appropriate use of humour can facilitate social interactions'.
Some claim that humour should not be explained. Author E.B. White once said, "Humor can be dissected as a frog can, but the thing dies in the process and the innards are discouraging to any but the pure scientific mind." Counter to this argument, protests against "offensive" cartoons invite the dissection of humour or its lack by aggrieved individuals and communities. This process of dissecting humour does not necessarily banish a sense of humour but begs attention towards its politics and assumed universality (Khanduri 2014).
Arthur Schopenhauer lamented the misuse of humour (a German loanword from English) to mean any type of comedy. However, both humour and comic are often used when theorising about the subject. The connotations of humour as opposed to comic are said to be that of response versus stimulus. Additionally, humour was thought to include a combination of ridiculousness and wit in an individual; the paradigmatic case being Shakespeare's Sir John Falstaff. The French were slow to adopt the term humour; in French, humeur and humour are still two different words, the former referring to a person's mood or to the archaic concept of the four humours.
As with any art form, the acceptance of a particular style or incidence of humour depends on sociological factors and varies from person to person. Throughout history, comedy has been used as a form of entertainment all over the world, whether in the courts of the Western kings or the villages of the Far East. Both a social etiquette and a certain intelligence can be displayed through forms of wit and sarcasm. Eighteenth-century German author Georg Lichtenberg said that "the more you know humour, the more you become demanding in fineness."
Western humour theory begins with Plato, who attributed to Socrates (as a semi-historical dialogue character) in the Philebus (p. 49b) the view that the essence of the ridiculous is an ignorance in the weak, who are thus unable to retaliate when ridiculed. Later, in Greek philosophy, Aristotle, in the Poetics (1449a, pp. 34–35), suggested that an ugliness that does not disgust is fundamental to humour.
In ancient Sanskrit drama, Bharata Muni's Natya Shastra defined humour (hāsyam) as one of the nine nava rasas, or principal rasas (emotional responses), which can be inspired in the audience by bhavas, the imitations of emotions that the actors perform. Each rasa was associated with specific bhavas portrayed on stage.
In Arabic and Persian culture
The terms comedy and satire became synonymous after Aristotle's Poetics was translated into Arabic in the medieval Islamic world, where it was elaborated upon by Arabic writers and Islamic philosophers such as Abu Bischr, his pupil Al-Farabi, Persian Avicenna, and Averroes. Due to cultural differences, they disassociated comedy from Greek dramatic representation, and instead identified it with Arabic poetic themes and forms, such as hija (satirical poetry). They viewed comedy as simply the "art of reprehension" and made no reference to light and cheerful events or troublesome beginnings and happy endings associated with classical Greek comedy. After the Latin translations of the 12th century, the term comedy thus gained a new semantic meaning in Medieval literature.
Mento star Lord Flea stated in a 1957 interview that he thought: "West Indians have the best sense of humour in the world. Even in the most solemn song, like Las Kean Fine ["Lost and Can Not Be Found"], which tells of a boiler explosion on a sugar plantation that killed several of the workers, their natural wit and humour shine through."
Confucian and Neo-Confucian orthodoxy, with its emphasis on ritual and propriety, has traditionally looked down upon humour as subversive or unseemly; humour was perceived as irony and sarcasm. The Confucian "Analects" itself, however, depicts the Master as fond of humorous self-deprecation, once comparing his wanderings to the existence of a homeless dog. Early Daoist philosophical texts such as "Zhuangzi" pointedly make fun of Confucian seriousness and make Confucius himself a slow-witted figure of fun. Joke books containing a mix of wordplay, puns, situational humour, and play with taboo subjects like sex and scatology remained popular over the centuries. Local performing arts, storytelling, vernacular fiction, and poetry offer a wide variety of humorous styles and sensibilities.
Famous Chinese humorists include the ancient jesters Chunyu Kun and Dongfang Shuo; writers of the Ming and Qing dynasties such as Feng Menglong, Li Yu, and Wu Jingzi; and modern comic writers such as Lu Xun, Lin Yutang, Lao She, Qian Zhongshu, Wang Xiaobo, and Wang Shuo, and performers such as Ge You, Guo Degang, and Zhou Libo.
Modern Chinese humor has been heavily influenced not only by indigenous traditions, but also by foreign humor, circulated via print culture, cinema, television, and the internet. During the 1930s, the transliteration "youmo" (humour) caught on as a new term for humour, sparking a fad for humour literature, as well as impassioned debate about what type of humorous sensibility best suited China, a poor, weak country under partial foreign occupation. While some types of comedy were officially sanctioned during the rule of Mao Zedong, the Party-state's approach towards humour was generally repressive. Social liberalisation in the 1980s, commercialisation of the cultural market in the 1990s, and the advent of the internet have each—despite an invasive state-sponsored censorship apparatus—enabled new forms of humour to flourish in China in recent decades.
Social transformation model
The social transformation model of humour predicts that specific characteristics, such as physical attractiveness, interact with humour. This model involves linkages between the humorist, an audience, and the subject matter of the humour. The two transformations associated with this model involve the subject matter of the humour and the change in the audience's perception of the humorous person, thereby establishing a relationship between the humorous speaker and the audience. The social transformation model views humour as adaptive because it communicates a present desire to be humorous as well as future intentions of being humorous. This model applies to deliberate self-deprecating humour, where one communicates a desire to be accepted into someone else's specific social group. Although self-deprecating humour communicates weakness and fallibility in the bid to gain another's affection, it can be concluded from the model that this type of humour can increase romantic attraction towards the humorist when other variables are also favourable. The social transformation model can also be followed in teaching and lecturing, where humour is used to improve the cognitive capabilities of students. Humour can create a positive and informal classroom environment that triggers students' enthusiasm and interest.
90% of men and 81% of women, all college students, report that having a sense of humour is a crucial characteristic sought in a romantic partner. Humour and honesty were ranked as the two most important attributes in a significant other. It has since been recorded that humour becomes more evident and significantly more important as the level of commitment in a romantic relationship increases. Recent research suggests that expressions of humour in relation to physical attractiveness are two major factors in the desire for future interaction. Women regard physical attractiveness less highly than men do when it comes to dating, a serious relationship, and sexual intercourse. However, women rate humorous men as more desirable than nonhumorous individuals for a serious relationship or marriage, but only when these men are physically attractive.
Furthermore, humorous people are perceived by others to be more cheerful but less intellectual than nonhumorous people. Self-deprecating humour has been found to increase the desirability of physically attractive others for committed relationships. The results of a study conducted at McMaster University suggest that humour can positively affect one's desirability for a specific relationship partner, but this effect is most likely to occur only when men use humour and are evaluated by women. No evidence was found to suggest that men prefer women with a sense of humour as partners, nor that women prefer other women with a sense of humour as potential partners. When women were given the forced-choice design in the study, they chose funny men as potential relationship partners even though they rated them as being less honest and intelligent. Post-hoc analysis showed no relationship between humour quality and favourable judgments.
It is generally known that humour contributes to higher subjective wellbeing (both physical and psychological). Previous research on humour and psychological wellbeing shows that humour is in fact a major factor in achieving, and sustaining, higher psychological wellbeing. This hypothesis is known as the general facilitative hypothesis for humour: positive humour leads to positive health. Not all contemporary research, however, supports the earlier assertion that humour is in fact a cause of healthier psychological wellbeing. One limitation of the previous research is that it tended to take a unidimensional approach, in which humour was always assumed to be positive; it did not consider other types of humour, or humour styles, such as self-defeating or aggressive humour. Later research has proposed two types of humour that each consist of two styles, making four styles in total: adaptive humour consists of affiliative and self-enhancing humour, while maladaptive humour consists of self-defeating and aggressive humour (a compact sketch of this two-by-two typology follows the list below). Each of these styles can have a different impact on psychological wellbeing and individuals' overall subjective wellbeing.
- Affiliative humour. Individuals with this dimension of humour tend to use jokes as a means of affiliating relationships, amusing others, and reducing tensions.
- Self-enhancing humour. People who fall under this dimension tend to take a humorous perspective on life, and tend to use humour as a mechanism for coping with stress.
- Aggressive humour. Racist jokes, sarcasm, and disparagement of individuals for the purpose of amusement. This type of humour is used by people who do not consider the consequences of their jokes, and mainly focus on the entertainment of the listeners.
- Self-defeating humour. People with this style of humour tend to amuse others with self-disparaging jokes, and also tend to laugh along with others when being taunted. It is hypothesised that people use this style of humour as a means of gaining social acceptance. It is also suggested that these people may have an implicit feeling of negativity, and use this humour as a means of hiding it.
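For readers who find a compact representation easier to scan, here is the two-by-two typology above expressed as a small data structure; the style names follow the text, and the code itself is just an illustration:

```python
# The four humour styles described above, organised by the
# adaptive/maladaptive split. Purely illustrative.

HUMOUR_STYLES = {
    "affiliative":    ("adaptive",    "amusing others, easing tension"),
    "self-enhancing": ("adaptive",    "humorous outlook, coping with stress"),
    "aggressive":     ("maladaptive", "sarcasm and disparagement of others"),
    "self-defeating": ("maladaptive", "self-disparaging jokes for acceptance"),
}

def styles_of(kind):
    """Return the styles belonging to one type ('adaptive' or 'maladaptive')."""
    return [name for name, (type_, _) in HUMOUR_STYLES.items() if type_ == kind]

print(styles_of("adaptive"))     # ['affiliative', 'self-enhancing']
print(styles_of("maladaptive"))  # ['aggressive', 'self-defeating']
```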
In studies of humour and psychological wellbeing, research has concluded that high levels of adaptive humour (affiliative and self-enhancing) are associated with better self-esteem, positive affect, greater self-competency, and better control of anxiety and social interactions, all of which are constituents of psychological wellbeing. Additionally, adaptive humour styles may enable people to preserve their sense of wellbeing despite psychological problems. In contrast, maladaptive humour styles (aggressive and self-defeating) are associated with poorer overall psychological wellbeing, including higher levels of anxiety and depression. Therefore, humour may have detrimental effects on psychological wellbeing when that humour is of a negative character.
Humour is often used to make light of difficult or stressful situations and to brighten up a social atmosphere in general. It is regarded by many as an enjoyable and positive experience, so it would be reasonable to assume that humour might have some positive physiological effects on the body.
A study designed to test the positive physiological effects of humour, in particular the relationship between exposure to humour and pain tolerance, was conducted in 1994 by Karen Zweyer, Barbara Velker, and Willibald Ruch. To test the effects of humour on pain tolerance, the test subjects were first exposed to a short humorous video clip and then to the cold pressor test. To identify the aspects of humour which might contribute to an increase in pain tolerance, the study separated its fifty-six female participants into three groups: cheerfulness, exhilaration, and humour production. The subjects were further separated into two groups, high Trait-Cheerfulness and high Trait-Seriousness, according to the State-Trait-Cheerfulness-Inventory. The instructions for the three groups were as follows: the cheerfulness group was told to get excited about the movie without laughing or smiling; the exhilaration group was told to laugh and smile excessively, exaggerating their natural reactions; and the humour production group was told to make humorous comments about the video clip as they watched. To ensure that the participants actually found the movie humorous and that it produced the desired effects, the participants took a survey on the topic, which resulted in a mean score of 3.64 out of 5. The results of the cold pressor test showed that the participants in all three groups experienced a higher pain threshold, a higher pain tolerance, and a lower pain intensity than before the film. The results did not show a significant difference between the three groups.
There are also potential relationships between humour and having a healthy immune system. SIgA is a type of antibody that protects the body from infections. In a method similar to the previous experiment, the participants were shown a short humorous video clip and then tested for the effects. The participants showed a significant increase in SIgA levels.
There have been claims that laughter can be a supplement for cardiovascular exercise and might increase muscle tone. However, an early study by Paskind showed that laughter can lead to a decrease in skeletal muscle tone, because the short, intense muscle contractions caused by laughter are followed by longer periods of muscle relaxation. The cardiovascular benefits of laughter also seem to be a figment of the imagination: a study designed to test the oxygen saturation levels produced by laughter showed that even though laughter creates sporadic episodes of deep breathing, oxygen saturation levels are not affected.
As humour is often used to ease tension, it might make sense that the same would be true for anxiety. A study by Yovetich, Dale, and Hudak was designed to test the effects humour might have on relieving anxiety. The study subjects were told that they would be given an electric shock after a certain period of time. One group was exposed to humorous content, while the other was not. Anxiety levels were measured through self-report measures as well as heart rate. Subjects who rated high on sense of humour reported less anxiety in both groups, while subjects who rated lower on sense of humour reported less anxiety in the group which was exposed to the humorous material. However, there was not a significant difference in heart rate between the subjects.
In the workplace
The significant role that laughter and fun play in organisational life has been seen as a sociological phenomenon and has increasingly been recognised as also creating a sense of involvement among workers. Sharing humour at work not only offers a relief from boredom, but can also build relationships, improve camaraderie between colleagues and create positive affect. Humour in the workplace may also relieve tension and can be used as a coping strategy. In fact, one of the most agreed upon key impacts that workplace humour has on people’s well being, is the use of humour as a coping strategy to aid in dealing with daily stresses, adversity or other difficult situations. Sharing a laugh with a few colleagues may improve moods, which is pleasurable, and people perceive this as positively affecting their ability to cope. Fun and enjoyment are critical in people's lives and the ability for colleagues to be able to laugh during work, through banter or other, promotes harmony and a sense of cohesiveness.
Humour may also be used to offset negative feelings about a workplace task or to mitigate the use of profanity, or other coping strategies, that may not be otherwise tolerated. Not only can humour in the workplace assist with defusing negative emotions, but it may also be used as an outlet to discuss personal painful events, in a lighter context, thus ultimately reducing anxiety and allowing more happy, positive emotions to surface. Additionally, humour may be used as a tool to mitigate the authoritative tone by managers when giving directives to subordinates. Managers may use self-deprecating humour as a way to be perceived as more human and "real" by their employees. Furthermore, ethnography studies, carried out in a variety of workplace settings, confirmed the importance of a fun space in the workplace. The attachment to the notion of fun by contemporary companies has resulted in workplace management coming to recognise the potentially positive effects of "workplay" and realise that it does not necessarily undermine workers’ performance.
Laughter and play can unleash creativity and raise morale, so in the interest of encouraging employee consent to the rigours of the labour process, management often ignores, tolerates and even actively encourages playful practices, with the purpose of furthering organisational goals. Essentially, fun in the workplace is no longer seen as frivolous. The current approach of managed fun and laughter in the workplace originated in North America, where it has taken off to such a degree that humour consultants are flourishing and some states have introduced an official "fun at work" day. The results have carried claims of wellbeing benefits to workers, improved customer experiences and an increase in productivity for the organisations involved. Others have examined the results of this movement while focusing on the science of happiness—concerned with mental health, motivation, community building and national wellbeing—and have drawn attention to the ability to achieve "flow" through playfulness and to stimulate "outside the box" thinking. Parallel to this movement is the "positive" scholarship that has emerged in psychology, which seeks to empirically theorise the optimisation of human potential. This happiness movement suggests that investing in fun at the workplace, by allowing for laughter and play, will not only create enjoyment and a greater sense of wellbeing, but will also enhance energy, performance and commitment in workers.
One of the main focuses of modern psychological humour theory and research is to establish and clarify the correlation between humour and laughter. The major empirical findings here are that laughter and humour do not always have a one-to-one association. While most previous theories assumed the connection between the two almost to the point of them being synonymous, psychology has been able to scientifically and empirically investigate the supposed connection, its implications, and significance.
In 2009, Diana Szameitat and colleagues conducted a study to examine the differentiation of emotions in laughter. They hired actors and told them to laugh with one of four different emotional associations by using auto-induction, where they would focus exclusively on the internal emotion and not on the expression of laughter itself. They found an overall recognition rate of 44%, with joy correctly classified at 44%, tickle at 45%, schadenfreude at 37%, and taunt at 50%. Their second experiment tested the behavioural recognition of laughter during an induced emotional state, and they found that different laughter types did differ with respect to emotional dimensions. In addition, the four emotional states displayed a full range of high and low sender arousal and valence. This study showed that laughter can be correlated with both positive (joy and tickle) and negative (schadenfreude and taunt) emotions, with varying degrees of arousal in the subject.
This brings into question the definition of humour, then. If it is to be defined by the cognitive processes which display laughter, then humour itself can encompass a variety of negative as well as positive emotions. However, if humour is limited to positive emotions and things which cause positive affect, it must be delimited from laughter and their relationship should be further defined.
Humour has been shown to be effective for increasing resilience in dealing with distress and also effective in undoing negative affect.
Madelijn Strick, Rob Holland, Rick van Baaren, and Ad van Knippenberg (2009) of Radboud University conducted a study that showed the distracting nature of a joke on bereaved individuals. Subjects were presented with a wide range of negative pictures and sentences. Their findings showed that humorous therapy attenuated the negative emotions elicited by the negative pictures and sentences. In addition, the humour therapy was more effective in reducing negative affect as the affect increased in intensity. Humour was immediately effective in helping to deal with distress. The escapist nature of humour as a coping mechanism suggests that it is most useful in dealing with momentary stresses; stronger negative stimuli require a different therapeutic approach.
Humour is an underlying character trait associated with the positive emotions used in the broaden-and-build theory of cognitive development.
Studies, such as those testing the undoing hypothesis, have shown several positive outcomes of humour as an underlying positive trait in amusement and playfulness. Several studies have shown that positive emotions can restore autonomic quiescence after negative affect. For example, Fredrickson and Levenson showed that individuals who expressed Duchenne smiles during the negative arousal of a sad and troubling event recovered from the negative affect approximately 20% faster than individuals who did not smile.
Using humour judiciously can have a positive influence on cancer treatment.
Humour can serve as a strong distancing mechanism in coping with adversity. In 1997, Keltner and Bonanno found that Duchenne laughter correlated with reduced awareness of distress. Positive emotion is able to loosen the grip of negative emotions on people's thinking. A distancing of thought leads to a distancing of the unilateral responses people often have to negative arousal. In parallel with the distancing role humour plays in coping with distress, it supports the broaden-and-build theory that positive emotions lead to increased multilateral cognitive pathways and social resource building.
Humour has been shown to improve and help the ageing process in three areas: improving physical health, improving social communication, and helping to achieve a sense of satisfaction in life.
Studies have shown that constant humour in the ageing process provides health benefits to individuals: higher self-esteem; lower levels of depression, anxiety, and perceived stress; and a more positive self-concept, among other benefits recorded and acknowledged through various studies. Even patients with specific diseases have shown improvement with ageing when using humour. Overall, there is a strong correlation between constant humour in ageing and better health in the individuals.
Research also indicates that humour helps with the ageing process by helping the individual to create and maintain strong social relationships during transitional periods in their lives. One such example is when people are moved into nursing homes or other facilities of care. With this transition, certain social interactions with friends and family may be limited, forcing the individual to look elsewhere for social interaction. Humour has been shown to make such transitions easier: it reduces stress, facilitates socialisation, and serves a social bonding function. Humour may also help the transition by helping the individual to maintain positive feelings towards those who are enforcing the changes in their lives. These new social interactions can be critical during such transitions, and humour helps them take place, making the transitions easier.
Humour can also help ageing individuals maintain a sense of satisfaction in their lives. Through the ageing process many changes will occur, such as losing the right to drive a car, and these can decrease satisfaction in an individual's life. Humour helps to alleviate this decrease in satisfaction by releasing the stress and anxiety caused by changes in the individual's life. Laughing and humour can substitute for the decrease in satisfaction by allowing individuals to feel better about their situations through the alleviation of stress. This, in turn, can help them to maintain a sense of satisfaction towards their new and changing lifestyle.
"Humour seems to engage a core network of cortical and subcortical structures, including temporo-occipito-parietal areas involved in detecting and resolving incongruity (mismatch between expected and presented stimuli); and the mesocorticolimbic dopaminergic system and the amygdala, key structures for reward and salience processing."
Humour can be verbal, visual, or physical. Non-verbal forms of communication–for example, music or visual art–can also be humorous.
- Being reflective of or imitative of reality
- Surprise/misdirection, contradiction/paradox, ambiguity.
Behaviour, place and size
- by behaving in an unusual way,
- by being in an unusual place,
- by being the wrong size.
Most sight gags fit into one or more of these categories.
Some theoreticians of the comic consider exaggeration to be a universal comic device. It may take different forms in different genres, but all rely on the fact that the easiest way to make things laughable is to exaggerate to the point of absurdity their salient traits.
There are many taxonomies of humour; one such taxonomy is used to classify humorous tweets in (Rayz 2012), and a generic sketch of that kind of automatic classification appears below.
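As an illustration of what automatic humour classification of short texts can look like, here is a minimal bag-of-words baseline; this is not the taxonomy or method of (Rayz 2012), and the example tweets and labels are invented:

```python
# Minimal bag-of-words humour classifier for short texts.
# A generic baseline for illustration only -- not the method of (Rayz 2012).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Invented toy examples; a real study would use a labelled tweet corpus.
tweets = [
    "I told my computer a joke about UDP, but I'm not sure it got it.",
    "Why don't scientists trust atoms? They make up everything.",
    "Traffic is heavy this morning near the north junction.",
    "The library closes at 9pm on weekdays.",
]
labels = [1, 1, 0, 0]  # 1 = humorous, 0 = not humorous

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(tweets, labels)

print(model.predict(["Why did the chicken cross the road?"]))
```

A serious classifier would need far more data and features sensitive to incongruity and wordplay; the point here is only the overall pipeline shape.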
Different cultures have different typical expectations of humour so comedy shows are not always successful when transplanted into another culture. For example, a 2004 BBC News article discusses a stereotype among British comedians that Americans and Germans do not understand irony, and therefore UK sitcoms are not appreciated by them.
- Raymond Smullyan, "The Planet Without Laughter", This Book Needs No Title
- Peter McGraw, "Too close for Comfort, or Too Far to care? Finding Humor in Distant Tragedies and Close Mishaps"
- Nicholas Kuiper, "Prudence and Racial Humor: Troubling Epithets"
- "The Quotations Page: Quote from E.B. White". Retrieved 26 August 2018.
- Ritu Gairola Khanduri. 2014. Caricaturing Culture in India: Cartoons and History of the Modern World. Cambridge: Cambridge University Press.
- Seth Benedict Graham A cultural analysis of the Russo-Soviet Anekdot 2003 p. 13
- Bakhtin, Mikhail. Rabelais and His World [1941, 1965]. Trans. Hélène Iswolsky. Bloomington: Indiana University Press p. 12
- Webber, Edwin J. (January 1958), "Comedy as Satire in Hispano-Arabic Spain", Hispanic Review, 26 (1): 1–11, doi:10.2307/470561, JSTOR 470561
- Michael Garnice (11 March 2012). "Mento Music Lord Flea". Retrieved 14 April 2013.
- Xiao, Dong Yue (2010). "Exploration of Chinese humor: Historical review, empirical findings, and critical reflections". Humor. 23 (3). doi:10.1515/HUMR.2010.018.
- C. Harbsmeier, "Confucius-Ridens, Humor in the Analects." Harvard Journal of Asiatic Studies 50. 1: 131–61.
- Jocelyn Chey and Jessica Milner Davis, eds. "Humour in Chinese Life and Letters: Classical and Traditional Approaches" (HKUP, 2011)
- "The Invention of Li Yu – Patrick Hanan – Harvard University Press". www.hup.harvard.edu. Retrieved 26 August 2018.
- "Comic Visions of Modern China": http://u.osu.edu/mclc/files/2014/09/intro20.2-158jzq5.pdf
- Christopher Rea, "The Age of Irreverence: A New History of Laughter in China" (University of California Press, 2015)
- "Archived copy". Archived from the original on 2016-10-02. Retrieved 2016-09-30.CS1 maint: Archived copy as title (link)
- "Archived copy". Archived from the original on 2016-10-02. Retrieved 2016-09-30.CS1 maint: Archived copy as title (link)
- David Moser, "Stifled Laughter": http://www.danwei.org/tv/stifled_laughter_how_the_commu.php
- Jessica Milner Davis and Jocelyn Chey, eds. "Humour in Chinese Life and Culture: Resistance and Control in Modern Times" (HKUP, 2013): http://www.hkupress.org/Common/Reader/Products/ShowProduct.jsp?Pid=1&Version=0&Cid=16&Charset=iso-8859-1&page=-1&key=9789888139248
- Lundy, Tan, Cunningham (1998). "Heterosexual romantic preferences: The importance of humor and physical attractiveness for different types of relationships". Personal Relationships. 5 (3): 311–325. doi:10.1111/j.1475-6811.1998.tb00174.x.
- Nasiri, F.; Mafakheri, F. (2015). "Higher Education Lecturing and Humor: From Perspectives to Strategies". Higher Education Studies. 5 (5): 26–31. doi:10.5539/hes.v5n5p26.
- Hewitt, L (1958). "Student perceptions of traits desired in themselves as dating and marriage partners". Marriage and Family Living. 20 (4): 344–349. doi:10.2307/348256. JSTOR 348256.
- Goodwin, R (1990). "Sex differences among partner preferences: Are the sexes really very similar?". Sex Roles. 23 (9–10): 501–513. doi:10.1007/bf00289765.
- Kenrick, Sadalla, Groth, Trost (1990). "Evolution, traits, and the stages of the parental investment model". Journal of Personality. 58 (1): 97–116. doi:10.1111/j.1467-6494.1990.tb00909.x. PMID 23750377.
- Bressler, Balshine (2006). "The influence of humour on desirability". Evolution and Human Behavior. 27: 29–39. doi:10.1016/j.evolhumbehav.2005.06.002.
- Kuiper & Martin (1993). "Humor and self-concept". Humor: International Journal of Humor Research. 6 (3). doi:10.1515/humr.1922.214.171.124.
- Bos, E.H.; Snippe, E.; de Jonge, P.; Jeronimus, B.F. (2016). "Preserving Subjective Wellbeing in the Face of Psychopathology: Buffering Effects of Personal Strengths and Resources". PLOS ONE. 11 (3): e0150867. Bibcode:2016PLoSO..1150867B. doi:10.1371/journal.pone.0150867. PMC 4786317. PMID 26963923.
- Kuiper & Martin (1998). "Laughter and stress in daily life: Relation to positive and negative affect". Motivation and Emotion.
- Martin, Puhlik-Doris, Larsen, Gray, & Weir (2003). "Individual differences in uses of humor and their relation to psychological well-being: Development of the humor styles questionnaire". Journal of Research in Personality. 37: 48–75. doi:10.1016/s0092-6566(02)00534-2.
- Kuiper, Grimshaw, Leite, & Kirsh (2004). "Humor is not always the best medicine: Specific components of sense of humor and psychological well-being". Humor: International Journal of Humor Research. 17 (1–2). doi:10.1515/humr.2004.002.
- "Do cheerfulness, exhilaration, and humor production moderate pain tolerance? A FACS study". ResearchGate. Retrieved 2015-08-11.
- Bennett, Mary Payne; Lengacher, Cecile (2009). "Humor and Laughter May Influence Health IV. Humor and Immune Function". Evidence-Based Complementary and Alternative Medicine. 6 (2): 159–164. doi:10.1093/ecam/nem149. PMC 2686627. PMID 18955287.
- Bennett, Mary Payne; Lengacher, Cecile (2008). "Humor and Laughter May Influence Health: III. Laughter and Health Outcomes". Evidence-Based Complementary and Alternative Medicine. 5 (1): 37–40. doi:10.1093/ecam/nem041. PMC 2249748. PMID 18317546.
- Fry, W.F.; Stoft, P.E. (1971). "Mirth and oxygen saturation levels of peripheral blood". Psychotherapy and Psychosomatics. 19 (1): 76–84. doi:10.1159/000286308. PMID 5146348.
- Yovetich, N.A.; Dale, J.A.; Hudak, M.A. (1990). "Benefits of humor in reduction of threat-induced anxiety". Psychological Reports. 66 (1): 51–58. PMID 2326429.
- Plester, Barbara (2009-01-01). "Healthy humour: Using humour to cope at work". Kōtuitui: New Zealand Journal of Social Sciences Online. 4 (1): 89–102. doi:10.1080/1177083X.2009.9522446.
- Bolton, Sharon C.; Houlihan, Maeve (2009-10-02). "Are we having fun yet? A consideration of workplace fun and engagement". Employee Relations. 31 (6): 556–568. doi:10.1108/01425450910991721. ISSN 0142-5455.
- Szameitat, Diana P., et al. Differentiation of Emotions in Laughter at the Behavioural Level. 2009 Emotion 9 (3).
- Strick, Madelijn; et al. (2009). "Finding Comfort in a Joke: Consolatory Effects of Humor Through Cognitive Distraction". Emotion. 9 (4): 574–578. doi:10.1037/a0015951. PMID 19653782.
- Fredrickson, Barbara L. (1998). "What Good Are Positive Emotions?". Review of General Psychology. 2 (3): 300–319. PMC 3156001. PMID 21850154.
- "Humor in Cancer Treatment". Retrieved 22 January 2017.
- Keltner, D.; Bonanno, G.A. (1997). "A study of laughter and dissociation: Distinct correlates of laughter and smiling during bereavement". Journal of Personality and Social Psychology. 73 (4): 687–702. PMID 9325589.
- Abel, M (2002). "Humor, stress, and coping strategies". International Journal of Humor Research. 15 (4): 365–381. doi:10.1515/humr.15.4.365.
- Kuiper, N.A.; Martin, R.A. (1993). "Humor and self-concept". International Journal of Humor Research. 6 (3): 251–270.
- Crew Solomon, Jennifer (January 1996). "Humor and Aging Well: A Laughing Matter or a Matter of Laughing?". American Behavioral Scientist. 39 (3): 249–271. doi:10.1177/0002764296039003004.
- Shelley A. Crawford & Nerina J. Caltabiano (2011): Promoting emotional well-being through the use of humour, The Journal of Positive Psychology: Dedicated to furthering research and promoting good practice, 6: 3, 237–252
- Vrticka, Pascal; Black, Jessica M.; Reiss, Allan L. (30 October 2013). "The neural basis of humour processing". Nature Reviews Neuroscience. 14 (12): 860–868. doi:10.1038/nrn3566. PMID 24169937.
- Rowan Atkinson/David Hinton, Funny Business (tv series), Episode 1 – aired 22 November 1992, UK, Tiger Television Productions
- Emil Draitser, Techniques of Satire (1994) p. 135
- M. Eastman/W. Fry, Enjoyment of Laughter (2008) p. 156
- "Automatic Humor Classification on Twitter" (PDF). 2012.
- "Do the Americans get irony?". BBC News. 27 January 2004. Retrieved 2 April 2012.
- Alexander, Richard (1984), Verbal humor and variation in English: Sociolinguistic notes on a variety of jokes
- Alexander, Richard (1997), Aspects of verbal humour in English
- Basu, S (December 1999), "Dialogic ethics and the virtue of humor", Journal of Political Philosophy, 7 (4): 378–403, doi:10.1111/1467-9760.00082, retrieved 2007-07-06 (Abstract)
- Billig, M. (2005). Laughter and ridicule: Towards a social critique of humour. London: Sage. ISBN 1-4129-1143-5
- Bricker, Victoria Reifler (Winter 1980). "The Function of Humor in Zinacantan". Journal of Anthropological Research. 36 (4): 411–418.
- Buijzen, Moniek; Valkenburg, Patti M. (2004), "Developing a Typology of Humor in Audiovisual Media", Media Psychology, 6 (2): 147–167, doi:10.1207/s1532785xmep0602_2 (Abstract)
- Carrell, Amy (2000), Historical views of humour, University of Central Oklahoma. Retrieved on 2007-07-06.
- García-Barriocanal, Elena; Sicilia, Miguel-Angel; Palomar, David (2005), A Graphical Humor Ontology for Contemporary Cultural Heritage Access (PDF), Ctra. Barcelona, km.33.6, 28871 Alcalá de Henares (Madrid), Spain: University of Alcalá, archived from the original (PDF) on 2006-05-23, retrieved 2007-07-06
- Goldstein, Jeffrey H., et al. (1976) "Humour, Laughter, and Comedy: A Bibliography of Empirical and Nonempirical Analyses in the English Language." It's a Funny Thing, Humour. Ed. Antony J. Chapman and Hugh C. Foot. Oxford and New York: Pergamon Press, 1976. 469–504.
- Hurley, Matthew M., Dennett, Daniel C., and Adams, Reginald B. Jr. (2011), Inside Jokes: Using Humor to Reverse-Engineer the Mind. Cambridge, Massachusetts: The MIT Press. ISBN 978-0-262-01582-0
- Holland, Norman. (1982) "Bibliography of Theories of Humor." Laughing; A Psychology of Humor. Ithaca: Cornell UP, 209–223.
- Martin, Rod A. (2007). The Psychology Of Humour: An Integrative Approach. London, UK: Elsevier Academic Press. ISBN 978-0-12-372564-6
- McGhee, Paul E. (1984) "Current American Psychological Research on Humor." Jahrbuche fur Internationale Germanistik 16.2: 37–57.
- Mintz, Lawrence E., ed. (1988) Humor in America: A Research Guide to Genres and Topics. Westport, CT: Greenwood, 1988. ISBN 0-313-24551-7; OCLC 16085479.
- Mobbs, D.; Greicius, M.D.; Abdel-Azim, E.; Menon, V.; Reiss, A.L. (2003), "Humor modulates the mesolimbic reward centres", Neuron, 40 (5): 1041–1048, doi:10.1016/S0896-6273(03)00751-7, PMID 14659102.
- Nilsen, Don L.F. (1992) "Satire in American Literature." Humor in American Literature: A Selected Annotated Bibliography. New York: Garland, 1992. 543–48.
- Pogel, Nancy, and Paul P. Somers Jr. (1988) "Literary Humor." Humor in America: A Research Guide to Genres and Topics. Ed. Lawrence E. Mintz. London: Greenwood, 1988. 1–34.
- Roth, G.; Yap, R; Short, D. (2006). "Examining humour in HRD from theoretical and practical perspectives". Human Resource Development International. 9 (1): 121–127. doi:10.1080/13678860600563424.
- Smuts, Aaron. "Humor". Internet Encyclopedia of Philosophy
- Wogan, Peter (Spring 2006), "Laughing At First Contact", Visual Anthropology Review (published 12 December 2006), 22 (1): 14–34, doi:10.1525/var.2006.22.1.14 (Abstract)
https://bookboon.com/nb/matrix-algebra-for-engineers-ebook
This book and its accompanying YouTube video lectures are all about matrices, and concisely cover the linear algebra that an engineer should know. We define matrices and how to add and multiply them, and introduce some special types of matrices. We describe the Gaussian elimination algorithm used to solve systems of linear equations and the corresponding LU decomposition of a matrix. We explain the concept of vector spaces and define the main vocabulary of linear algebra. Finally, we develop the theory of determinants and use it to solve the eigenvalue problem.
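As a small taste of the LU-decomposition material the blurb mentions (this sketch is illustrative and not an excerpt from the book):

```python
# Solve A x = b via LU decomposition with SciPy.
import numpy as np
from scipy.linalg import lu_factor, lu_solve

A = np.array([[ 2.0,  1.0, 1.0],
              [ 4.0, -6.0, 0.0],
              [-2.0,  7.0, 2.0]])
b = np.array([5.0, -2.0, 9.0])

# lu_factor performs Gaussian elimination with partial pivoting,
# storing L and U compactly along with the pivot indices.
lu, piv = lu_factor(A)
x = lu_solve((lu, piv), b)

print("x =", x)          # expected: [1. 1. 2.]
print("A @ x =", A @ x)  # should reproduce b
```

Once A has been factored, lu_solve can be reused cheaply for additional right-hand sides, which is the practical payoff of the LU decomposition over re-running elimination.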
https://betterlesson.com/lesson/631849/human-impact-on-biodiversity?from=mtp_lesson
The purpose of this lesson is for students to identify ways in which people can work to protect biodiversity. In the "Google Age" where students are seconds away from any information they need, it is easy to allow students to find their own sources of information. While I am a believer in this method more often than not, for this particular lesson I am going to supply students with the beginning sources to use.
There are two reasons that I am restricting student resources (at least initially). First, a Google search of "how do we maintain biodiversity" brings up a list of things that students can do in this regard. Because I want students to develop their thinking/problem solving skills, I do not want the answer to this important question to be given in a format that allows students to blindly copy information without thinking and/or processing what they are reading. Second, the resources I plan to give to students have additional information on why biodiversity is so important. This builds on the prior lesson that allows students to take their learning further than just creating a list of actions.
I start by using the Maintaining Biodiversity PowerPoint in which students view a series of images and are asked the same question for each: How do you think this picture demonstrates people helping to maintain biodiversity?
I have the students discuss each image with their lab group to develop their best response. I ask two or three lab groups to share their responses for each slide; however, I do not comment on their answers other than to prompt them for more information if needed to ensure understanding of their ideas. The goal of this, at this point, is to activate student thinking. We will revisit this same PowerPoint in the wrap-up when I will convey what is occurring in each photograph.
This video demonstrates how I used this PowerPoint to help students take their learning deeper so they are able to really show what they know within their final project.
Biodiversity and Importance of Biodiversity, both by cK-12, are the resources that I am providing students. These web-based resources work best if your students have 1:1 devices (such as Chromebooks or tablets) but a computer lab is the next best thing. If online access is not available, the articles can be printed and the videos can be viewed as a whole class or assigned as homework.
My favorite aspects of the cK-12 resources are that the articles are easy for most students to read and they incorporate a lot of images and videos that elaborate on the written content. Additionally, teachers can customize the articles, adding their own questions, videos and diagrams as desired. If needed, there are tools such as TLDR (a Google app) or Rewordify; both can change the reading level or shorten articles, but proofread the results.
No matter how students access this information, I use the Note-taking Sheet to help students organize their information. The note-taking sheet is set up with three columns: headings from the reading, a summary of information learned, and questions/comments/connections. I have been working hard with students to develop their ability to "wonder" things again, and I try to incorporate this into as many activities as possible. Students are familiar with two-column notes from 7th grade; however, it is still beneficial to model the strategy, as it is the first time we are using this method in class.
To model this, I would select one of the articles (Biodiversity) and write the first heading. In this case it is not really a heading but rather an important term that is used to open the section: Biodiversity. I would then model how I read to pull out important information and how I put that into my own words in the chart. Then I demonstrate making "I wonder" statements or connections and again copy these ideas into my chart. I explain to students that while this method of note-taking does take more time, it is worth it in the end because the learning is remembered longer, thanks to how the information is processed at a deeper level. See the Modeling Example to see the information I used to show the students.
When students complete their note-taking, they are asked to complete the last section of information they need for their final project.
The students will have 2-3 days to complete this, as well as to put the finishing touches on their spider biodiversity project and get it ready to turn in for grading.
Now that students have had some time to develop their own ideas, I again go through the Maintaining Biodiversity PowerPoint. This time I explain what each image is showing and how it helps support and maintain biodiversity. The notes section of the PowerPoint has the information needed for each picture.
https://blogs.ncl.ac.uk/speccoll/tag/grace-darling/
The 13th August 2010 marks the 100th anniversary of the death of Florence Nightingale. Florence was an English nurse who became famous while looking after injured soldiers during the Crimean War.
Florence was from an upper-class Victorian family and was frustrated by her sheltered upbringing. She wanted to pursue an education, and at sixteen she felt a calling from God to become a nurse. In the mid-nineteenth century nursing was an occupation on a par with domestic service, and her parents were horrified by her decision. Undeterred, she went to Germany to carry out her training.
At the same time anger was mounting about the suffering of British soldiers fighting in the Crimean War. Florence was sent to Scutari in Turkey to head a team of female nurses. In letters home, she described the appalling conditions. Four times as many were dying from infection and disease as in battle. The death rates were high until the Sanitary Commission arrived and flushed out the sewers upon which the hospital was built.
As head nurse, Florence ran a tight ship. She instilled discipline in her nurses and introduced uniforms and a strict curfew. She also took on tasks that went beyond her duty, but increased her popularity with the soldiers, such as writing letters of condolence to relatives and setting up a banking system that allowed soldiers to send money home.
However, Florence Nightingale the woman and Florence Nightingale the legend were two very different people. The legend was born on 24th February 1855, when the Illustrated London News published an engraving of her holding a lamp in a hospital ward while tending to injured soldiers.
The public couldn’t get enough of the story of the beautiful, caring young woman who was risking her life in a warzone. An industry sprung up producing statuettes, figurines and posters, mainly by artists who had never seen her! Songs and poems were written describing her efforts in tending for the sick and the dying soldiers. The media frenzy was so great that when Florence arrived back in England in August 1856, she had to travel under the pseudonym Miss Smith, so that she could return to her family home without drawing attention!
Florence, however, detested her celebrity status. She felt that the legend that had been created around her hid what she was trying to achieve. The idea of fame was different in the 19th Century – it was associated with criminals and travelling entertainers so it is not surprising that Florence didn’t exactly bask in it.
However, it was the Florence of legend that allowed the real Florence to make the changes she had dreamt of. Government ministers were aware of her popularity and felt they couldn’t refuse her. Using public money donated in her honour, she set up the Nightingale School for nurses. In 1860 her Notes on Nursing, which advised ordinary women on how to care for relatives, became a bestseller. She used her fame to campaign for reforms in many areas of health and throughout her life wrote 200 books, pamphlets and articles, and more than 14,000 letters, campaigning for improvements until her death.
Florence’s experience of unwanted fame echoes that of an earlier Victorian heroine, Grace Darling. Grace grew up on the Farne Islands, where her father was a lighthouse keeper. On 7th September 1838, along with her father, she saved nine people from the wreck of the SS Forfarshire. The ship had crashed on the rocks, and in the early hours of the morning Grace spotted the wreck and survivors from the lighthouse. She and her father took a rowing boat across to the wreck, and Grace kept it steady while her father helped the survivors in.
News of the rescue was reported by the local newspapers who cast Grace in a heroic light. The myth began to take shape as journalists competed to create the most exciting account of events. It was reported that Grace was awoken by the cries of the survivors, which would have been an impressive feat given the noise of the gale-force wind blowing outside at the time, as her sister Thomasina later commented! It was also suggested that Grace forced her father to take the boat out regardless of the risks. It is highly unlikely that Grace would have disobeyed her father and in turn William Darling, an experienced seaman, would never have taken such a chance if he felt their own lives would be at risk.
The media frenzy which surrounded Grace was akin to that which we associate with modern-day celebrities, and with a young Queen on the throne the press wanted female heroines. Soon her image could be found on trinkets, plates, postcards, chocolates and even soap boxes! Even William Wordsworth wrote a poem about her in 1843. Grace received letters requesting locks of her hair and scraps of the dress she wore during the rescue. Pressure also came from the Victorian paparazzi: the portrait artists. In a letter to the press five weeks after the rescue, her father requested that other painters take their likenesses from one of the seven paintings already completed!
Grace’s story became popular as it fitted in well with the romanticism of the Victorian period. All of the elements – that she lived on a remote island, was beautiful and obedient, and had such an angelic name – created an enchanting story. Fiction writers would have struggled to create a character that embodied the ideal Victorian woman in the way that Grace did.
A number of books were written about Grace. Grace Darling, or the Maid of the Isles by Jerrold Vernon, gave birth to the legend ‘of the girl with windswept hair’. This is particularly amusing as Grace went out in the boat with her hair in curling-rags! It could be argued that it was the ordinariness of Grace’s life as a lighthouse keeper’s daughter, which led to the creation of the myths. Journalists felt they had to invent stories about Grace to ensure she lived up to her reputation as a heroine.
Tragically, Grace died of tuberculosis in 1842, aged 26. However, the Grace Darling story continued long after her death. If she had married and grown old, she would no longer have been the girl of the legend. Through her untimely death she was immortalised as the brave and beautiful heroine she never wanted to be.
https://news.algaeworld.org/2015/06/seaweed-colonizing-ice-free-parts-of-antarctica/
Newly ice-free areas exposed by glacial retreat in Potter Cove, Antarctica, are being colonized by seaweed. As glaciers melt, the once white and mostly lifeless Antarctic coastline is becoming darker and livelier with seaweed. These macroalgae not only produce oxygen for marine species through photosynthesis but also serve as the base of the marine food chain. Scientists predict this seaweed colonization could lead to a higher rate of carbon sequestration and higher productivity in the marine system, though local biodiversity might be reduced.
Glacial retreat has a major influence on coastal ecosystems: it creates ice-free areas which can then be taken over by marine species. However, the process is not always that simple. A recent study published in Polar Biology by D. Deregibus et al. found that although newly exposed ice-free areas favor colonization, sediment carried by glacial runoff makes seawater less clear and adversely affects coastal marine species by reducing survival and reproductive rates. Nonetheless, seaweed in Potter Cove has adapted to shade and can tolerate darkness for long periods, as it is accustomed to ice cover blocking sunlight. Increased turbidity, or cloudiness, caused by sediment affects the distribution rather than the survival of Antarctic seaweed, Deregibus and colleagues found.
The study investigated how the availability of incoming sunlight affects seaweed distribution in newly ice-free areas of Potter Cove, in the South Shetland Islands of Antarctica. Researchers found that the more sunlight penetrates the surface of the water, the more seaweed can thrive. Light levels are influenced by the amount of sediment that runs off glaciers as they melt, because suspended sediment decreases water clarity and light penetration.
In Potter Cove, high loads of sediment are produced during the summer melting season. This effect is most pronounced in newly ice-free areas close to glacial runoff. Both seasonal and spatial variations in water clarity affect the depth distribution of macroalgae: the vertical range of seaweed in areas close to glacier runoff is compressed by the higher concentration of sediment, the researchers found.
In this study, three major factors – turbidity, salinity and temperature – were examined to assess their influence on seaweed’s vertical distribution. The results indicate that changes in salinity and temperature do not significantly affect photosynthetic performance of seaweed; instead, turbidity is the main controlling factor.
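To make the link between turbidity and depth concrete, light attenuation in seawater is commonly described by the Beer-Lambert relation. The sketch below uses standard optical-oceanography notation and an illustrative 1% light threshold; neither the symbols nor the threshold value is taken from the Deregibus study itself.

```latex
% Downwelling irradiance decays exponentially with depth z:
\[
  I(z) = I_0 \, e^{-K_d z}
\]
% where I_0 is the irradiance just below the surface and K_d is the
% diffuse attenuation coefficient, which rises as sediment makes the
% water more turbid. If seaweed needs at least a fraction f of surface
% light to maintain a positive carbon balance (f = 0.01, i.e. 1%, is a
% common illustrative threshold), its maximum colonization depth is
\[
  z_{\max} = \frac{\ln(1/f)}{K_d}
\]
% so doubling K_d (e.g. during the summer melt) halves the depth limit.
```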
Specifically, how deep light can penetrate determines the maximum depth limit of seaweed distribution, because the depth at which seaweed can survive is controlled by the amount of available light. Carbon balance also affects which kinds of seaweed can be found at different depths.
The mystery of how these seaweeds survive even when there is little light lies in that carbon balance. During spring, when more sunlight reaches deep water, the seaweed accumulates extra carbon storage compounds. These reserves can then be drawn down to sustain its metabolism in summer, when inflowing sediment blocks the sun.
The rapid increase in temperature has caused significant glacial retreat as well as a decrease in sea ice in the Western Antarctic Peninsula. This glacial retreat has led to an increase in the rate of sediment deposition, and such an inflow of sediment into the marine system will affect coastal ecosystems, especially the distribution of species, according to the researchers.
As temperatures continue to rise in the future, the spatial distribution of seaweed is expected to expand further in new coastal areas. However, how exactly such expansion will affect the coastal ecosystem remains a question for future study.
Photo: South Shetland Islands in Antarctica. Image credit: David Stanley/Flickr
View original article at: Seaweed colonizing ice-free parts of Antarctica
https://mdbytes.com/abstract-classes-and-interfaces/
This week’s post briefly describes the different roles played by abstract classes and interfaces in the Java programming environment. An illustration is given for the use of an interface, along with a justification as to why the interface approach is more appropriate than using an abstract class.
An abstract class is a Java class with many of the same characteristics as any other class: fields, constructors, and methods (Murach, 2011, pp. 266-267). Any abstract methods declared in the abstract superclass must be implemented by its subclasses. Serving as a model only, the abstract class cannot be instantiated, i.e. objects cannot be created directly from the abstract superclass. An abstract class is most useful when a superclass is intended to serve as a generic type to be inherited by two or more subclasses (Lowe, 2017, p. 301).
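As a minimal sketch of these rules (the Shape and Circle names are invented for illustration, not drawn from the cited texts):

```java
// An abstract class: a generic type that cannot be instantiated directly.
abstract class Shape {
    private final String name;

    // Abstract classes may still declare fields and constructors.
    protected Shape(String name) {
        this.name = name;
    }

    // Every concrete subclass must implement this abstract method.
    abstract double area();

    // Concrete behavior shared by all subclasses.
    public String describe() {
        return name + " with area " + area();
    }
}

// A concrete subclass that supplies the missing implementation.
class Circle extends Shape {
    private final double radius;

    Circle(double radius) {
        super("circle");
        this.radius = radius;
    }

    @Override
    double area() {
        return Math.PI * radius * radius;
    }
}
```

Here `new Shape("x")` would not compile, while `new Circle(2.0)` works, which is exactly the "model only" behavior described above.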
To illustrate the use of abstract classes, the JavaFX Application class provides a meaningful example (JavaFX Class Application Javadoc, 2019). JavaFX, currently the standard graphical user interface (GUI) toolkit for Java, offers the programmer the ability to open a window on the desktop and to run an application which interacts with users through the graphical window. Users can enter text, click on buttons, check boxes, and more as part of the program. But if there were no window, there would be no GUI application. The JavaFX code ensures the creation of a window by using an abstract class.
The JavaFX abstract class Application provides the basic framework for all JavaFX applications. Most importantly, the Application class contains the abstract method start(Stage primaryStage). All JavaFX applications inherit this important method, which is used to launch the GUI application. The Stage class is a descendant of Window and is then used to control the appearance of the graphical user interface. Without a stage (window) properly defined there can be no GUI. In this case, the abstract class ensures the essential components are present for the GUI to operate.
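A brief sketch of that framework in code (the HelloApp class name and label text are invented for the example; Application, Stage, Scene, and launch() come from the JavaFX javadoc cited above):

```java
import javafx.application.Application;
import javafx.scene.Scene;
import javafx.scene.control.Label;
import javafx.stage.Stage;

// Every JavaFX program extends the abstract Application class
// and overrides its abstract start() method.
public class HelloApp extends Application {

    @Override
    public void start(Stage primaryStage) {
        // The runtime hands us the primary Stage: the top-level window.
        primaryStage.setScene(new Scene(new Label("Hello, JavaFX"), 300, 100));
        primaryStage.setTitle("Hello");
        primaryStage.show();
    }

    public static void main(String[] args) {
        launch(args); // boots the JavaFX runtime, which then calls start()
    }
}
```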
Interfaces are also used in JavaFX GUI components. Elements that appear within the GUI window (primaryStage) often have different attributes, many of which are provided through interfaces. Consider, for example, the Button class in JavaFX (JavaFX Class Button Javadoc, 2019). The Button class implements three interfaces: Styleable, EventTarget, and Skinnable. Each of these interfaces offers the programmer different attributes and methods to be used with a JavaFX button object. For example, a button class without Styleable would offer a very narrow range of appearance for the button itself. The Styleable interface allows the programmer to apply CSS metadata to the button object.
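This is also the justification for preferring interfaces here over an abstract class: a Java class can extend only one superclass, but it can implement any number of interfaces, so independent capabilities such as styling, event handling, and skinning can be combined in one type. A hedged sketch of the idea follows; the Paintable, Focusable, and FancyButton names are invented for illustration and are not the actual JavaFX declarations.

```java
// Two independent capabilities, each modeled as an interface.
interface Paintable {
    void paint();
}

interface Focusable {
    void focus();
}

// A class may extend at most one superclass, yet it can mix in as many
// interfaces as it needs, something a single abstract superclass cannot
// express because Java forbids multiple class inheritance.
class FancyButton implements Paintable, Focusable {

    @Override
    public void paint() {
        System.out.println("painting the button skin");
    }

    @Override
    public void focus() {
        System.out.println("button received keyboard focus");
    }
}
```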
Deitel, P., & Deitel, H. (2012). Java: How to Program, 9th Edition. Boston: Prentice Hall.
Lowe, D. (2017). Java All-in-One For Dummies, 5th Edition. Hoboken: John Wiley & Sons.
Murach, J. (2011). Java Programming, 4th Edition. Fresno: Mike Murach & Associates.
Oracle. (2019, March 19). JavaFX Class Application Javadoc. Retrieved from docs.oracle.com: https://docs.oracle.com/javase/8/javafx/api/javafx/application/Application.html
Oracle. (2019, March 19). JavaFX Class Button Javadoc. Retrieved from docs.oracle.com: https://docs.oracle.com/javase/8/javafx/api/toc.htm
StackOverflow.Com. (2012, December 9). Responsibilities and use of service and DAO Layers. Retrieved from StackOverflow.Com: https://stackoverflow.com/questions/13785634/responsibilities-and-use-of-service-and-dao-layers