Between 3000 BCE and 1800 CE there were more than 60 'mega-empires' that controlled, at their peak, an area of at least one million square kilometers. An empire is a state extended beyond what might seem to be the natural limits of a government. By conquest, by colonization, or by any other means, lawful or lawless, the central power has stretched its arms, east and west, north and south, through many degrees of latitude and longitude. Empires are states whose chiefs have come to be called emperors, but not all rulers called emperors rule actual empires.
It seemed to some that in the arts of empire the English were mere plagiarists, stupid plagiarists who spoilt what they had stolen. They had not, so it was affirmed, one single original or admirable quality. They were not great discoverers like the Portuguese, nor a great Christianizing power like the Spaniards. They had not the art of conciliating natives like the French, nor even of making themselves beloved by their own colonists. They had not even the wits to make their empire pay like the Dutch. They rolled up, everywhere, mountains of debt; they extorted only that they might squander. The single quality that they possessed in abundance was neither rare nor original. Heavy bloodsuckers, they bestrode the earth with their so-called empire like a nightmare; the world would be a sweeter place to live in without them; the damage they had wrought was as wide as the realm they had filched from their betters with so much violence and fraud. These pleasantries, oft repeated, grew to have the weight of arguments; and, indeed, to others, they formed a very ingenious substitute for argument.
Like the British Empire, the Roman Empire commenced from a very small beginning. Rome, the greatest city of the world, started life as a village composed of a few roughly-built hovels surrounded by an earthen rampart. Such an empire is mighty so long as it is thought to be so; but it ceases to be mighty the moment the breath of opinion fails to pronounce it omnipotent. It lives hourly in peril upon the prestige of its reputation. An empire, in this sense, can have no period of stable equilibrium, for at every moment it must be either in growth or in decay. Accretion or dissolution are its only conditions.
Beginning with Gibbon, most theoretical efforts have been directed to the causes of imperial disintegration and fragmentation. Slowly, the great Empire passes away. Troubles at home, discontent, luxury and discord are doing their work; the savage tribes on the frontier begin to raid the farther provinces, and the mercenary soldiers, instead of driving them back, are fighting one another. The disintegration of an empire may take place in a manner likened to the blowing up of a machine through faulty construction, or an ill-adjusted relationship of its motive functions, which, being partly chemical and partly mechanical, as in the steam-engine, require great care and skill in the engine-room. Dismemberment may be the breaking up of a crazy and cumbrous machine, which is sure to ensue if too high a speed is put upon its movements relative to the age of the framework, the quality of the materials, the manner of the jointings, the worn condition of the revolving parts, the loss of steady-pins, and the wear of the cogs.
China is unique in that it has seen a continuous sequence of the rise and fall of empires since the Bronze Age. Had not great captains skillfully marshaled hosts for battle, the name of the United States of America might have been added to the long list of empires that have fallen.
Modern Empires - Since 1800
Plus a few others not on the other lists.
| Date (peak) | Empire name | Region | Area (million km²) |
|---|---|---|---|
| 1790 | Spanish Empire | Worldwide | 19.40 |
| 1822 | Portuguese Empire | Worldwide | 8.90 |
| 1914 | Imperial Germany | Worldwide | 3.30 |
| 1922 | British Empire | Worldwide | 36.70 |
| 1942 | Axis Italy | Europe / Africa | 3.70 |
| 1949 | Dutch Empire | Worldwide | 2.10 |
| 1953 | Soviet Bloc | Worldwide | 25.60 |
| 1960 | Belgian Empire | Worldwide | 2.40 |
| 1960 | French Empire | Worldwide | 12.60 |
60 "Mega" Empires - 3000 BCE to 1800 CE
Over 1,000,000 square kilometers
| Date (peak) | Empire name | Region | Area (million km²) |
|---|---|---|---|
| 1300 BC | Egypt (New Kingdom) | Africa | 1.00 |
| 800 | Tufan (Tibet) | Central Asia | 4.60 |
| 1310 | Golden Horde | Central Asia | 6.00 |
| 1122 BC | Shang | East Asia | 1.25 |
| 50 | China - Han | East Asia | 6.00 |
| 715 | China - Tang | East Asia | 5.40 |
| 947 | Liao (Khitan) | East Asia | 2.60 |
| 980 | China - Sung | East Asia | 3.10 |
| 1126 | Jin (Jurchen) | East Asia | 2.30 |
| 1450 | China - Ming | East Asia | 6.50 |
| 1790 | China - Manchu | East Asia | 14.70 |
| 500 | Hephthalite Huns | South Asia | 1.70 |
| 648 | Harsha (Kanyakubja) | South Asia | 1.00 |
| 670 BC | Assyria | Southwest Asia | 1.40 |
| 585 BC | Media | Southwest Asia | 2.80 |
| 500 BC | Achaemenid Persia | Southwest Asia | 5.50 |
| 323 BC | Hellenistic (Alexander's) | Southwest Asia | 5.20 |
| 301 BC | Seleucid | Southwest Asia | 3.90 |
| 550 | Sassanian Persia | Southwest Asia | 3.50 |
| 980 | Buyid (Buwayhid) | Southwest Asia | 1.60 |
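For readers who want to work with these figures, here is a minimal Python sketch (mine, not part of the original list) that holds a few rows of the table above as records and queries them; areas are in millions of square kilometers, dates as printed (BC where noted):

```python
# A small subset of the mega-empire table above, stored as plain tuples.
mega_empires = [
    ("500 BC", "Achaemenid Persia", "Southwest Asia", 5.50),
    ("50", "China - Han", "East Asia", 6.00),
    ("800", "Tufan (Tibet)", "Central Asia", 4.60),
    ("1310", "Golden Horde", "Central Asia", 6.00),
    ("1450", "China - Ming", "East Asia", 6.50),
    ("1790", "China - Manchu", "East Asia", 14.70),
]

# Every entry clears the one-million-square-kilometer "mega-empire" threshold.
assert all(area >= 1.0 for _, _, _, area in mega_empires)

# The largest entry in this subset (and in the full table) is Manchu China.
largest = max(mega_empires, key=lambda row: row[3])
print(largest[1], largest[3])  # China - Manchu 14.7
```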
Source: http://www.globalsecurity.org/military/world/empires.htm
On November 5, 2009, the FCC released its Consumer Facts on "Wireless Devices and Health Concerns." In this document, the FCC recommends precautions for the use of cell phones.
According to the FCC, “Recent reports by some health and safety interest groups have suggested that wireless device use can be linked to cancer and other illnesses. These questions have become more pressing as more and younger people are using the devices, and for longer periods of time.”
They now recommend the following steps:
- Use an earpiece or headset
- If possible, keep wireless devices away from your body when they are on, mainly by not attaching them to belts or carrying them in pockets
- Use the cell phone speaker to reduce exposure to your head
- Consider texting rather than talking
- Buy a wireless device with lower Specific Absorption Rate (SAR)
Source: http://articles.mercola.com/sites/articles/archive/2010/04/27/fcc-now-recommends-precautions-for-cell-phone-use.aspx
Journal > The Reading Teacher
Motivating Young Writers Through Write-Talks: Real Writers, Real Audiences, Real Purposes
by Amy Alexandra Wilson
Grades 6–12
Modeled after the popular teaching technique of book talks, write talks are brief motivational talks designed to engage students in writing. Teachers can invite adults from their communities into their classrooms to give write talks, thereby conveying to students that real people go through different writing processes to write real texts for real audiences.
Wilson, A. (2008, March). Motivating Young Writers Through Write-Talks: Real Writers, Real Audiences, Real Purposes. The Reading Teacher, 61(6), 485–487. doi: 10.1598/RT.61.6.5
Source: http://www.readwritethink.org/professional-development/professional-library/motivating-young-writers-through-20945.html?tab=2
Is this topic for you?
Atrial fibrillation and ventricular tachycardia are types of fast heart rates that can be serious. If you have one of these heart problems, see the topic Atrial Fibrillation or Ventricular Tachycardia.
What is supraventricular tachycardia?
During an episode of SVT, the heart's electrical system doesn't work right, causing the heart to beat very fast. The heart beats at least 100 beats per minute and may reach 300 beats per minute. After treatment or on its own, the heart usually returns to a normal rate of 60 to 100 beats a minute.
SVT may start and end quickly, and you may not have symptoms. SVT becomes a problem when it happens often, lasts a long time, or causes symptoms.
SVT also is called paroxysmal supraventricular tachycardia (PSVT) or paroxysmal atrial tachycardia (PAT).
What causes SVT?
Most episodes of SVT are caused by faulty electrical connections in the heart. What causes the electrical problem is not clear.
SVT also can be caused by certain medicines. Examples include very high levels of the heart medicine digoxin or the lung medicine theophylline.
What are the symptoms?
Some people with SVT have no symptoms. Others may have:
How is SVT diagnosed?
Your doctor will diagnose SVT by asking you questions about your health and symptoms, doing a physical exam, and perhaps giving you tests.
If you do not have an episode of SVT while you're at the doctor's office, your doctor probably will ask you to wear a portable electrocardiogram (EKG) monitor, also called an ambulatory EKG monitor. When you have an episode, the device will record it.
How is it treated?
Some SVTs don't cause symptoms, and you may not need treatment. If you do have symptoms, your doctor probably will recommend treatment.
To treat sudden episodes of SVT, your doctor may:
If these treatments don't work, you may have to go to your doctor's office or the emergency room. You may get a fast-acting medicine such as adenosine or verapamil. If the SVT is serious, you may have electrical cardioversion, which uses an electrical current to reset the heart rhythm.
If you often have episodes of SVT, you may need to:
What can you do at home to prevent SVT?
You can try some things at home to help prevent SVT by avoiding the things that trigger it.
To find your triggers, keep a diary of your heart rate and your symptoms. You might find, for example, that smoking or caffeine causes your SVT episodes.
eMedicineHealth Medical Reference from Healthwise
To learn more visit Healthwise.org
Source: http://www.emedicinehealth.com/supraventricular_tachycardia-health/article_em.htm
Students form anti-bullying task force
FRONT PAGE STORY
by Debbie Bosak
MICHIGAN CITY - Bullies are nothing new.

For ages, schools have been testing grounds for those who feel a need to assert their misperceived strength or superiority by choosing one or two unfortunate souls as targets of their unfounded aggression. There was a time when teachers and parents might have shaken their heads, saying, "kids will be kids." Those days are over.

As the current culture becomes increasingly numb to brutality in many forms and school violence continues to escalate, parents and administrators alike struggle to find a solution to the problem.

"The taunts are more personal and involve sexual identity and physical appearance, topics the culture exaggerates," said Dr. Susan Bryant, principal at St. Stanislaus Kosta. "In the past, no one asked about a person's sexual preference. Now it's all over the news."

At St. Stanislaus Kosta, 17 sixth-grade students have taken a leadership role in an effort to create awareness and stop the practice of bullying cold in its tracks. As part of the Unique Gifted Lovable You, or Hey UGLY, program, this middle school task force launched a school-wide initiative on May 13 geared toward empowering students to step up and take control.

According to the Centers for Disease Control, more than 864,000 teens miss at least one day of school each month because they fear for their safety. A study by the National Youth Violence Prevention Resource Center states that 5.7 million children have bullied or been bullied. Bullying can take many forms, from verbal to emotional, as well as physical. In light of developing technology, bullies now employ modern tools, such as the Internet, email, camera phones, online social networking sites and text messaging, which can provide an insidious veil of anonymity.

Bryant was attracted to UGLY because of its dedication to helping students of all ages overcome issues of self-esteem and the bullying that can happen as a result. Bullies hide their behavior well, ensuring that teachers cannot always observe it, said Bryant. Parents of students accused of bullying often do not believe that their child is even capable of such mean behavior.

"We originally contacted Hey UGLY to help us create an anti-bullying environment, and it became apparent that their approach to developing self-esteem directly attacks the bullying syndrome," said Bryant. "Boys and girls who feel empowered to be genuine and accepting of others don't bully others and don't usually become the victims of bullies."

Gathering in the school gymnasium, members of the student task force addressed the assembly. "Bullies feel scared so they feel the need to hurt others with their words or actions," said Darria Burt.

"If you fear for your safety, tell someone right away," added Alex Miramontes. "Tell a teacher, a counselor, a friend or your parents."

Following the presentation, each of the school's 170 students stood before a task force member to take a pledge: "I promise to stop bullying and respect others' feelings." Students then created a large banner with cutouts of their hands as a continued reminder of their pledge.

"Some bully because they think it will make them feel good, but it doesn't. It makes them feel worse," commented Madeleine Wojasinski, a task force member. And so it continues. "The bully makes someone else feel bad and then that person bullies someone else," she said. "It's like a chain. One bully creates another bully."

According to Carrie Miller, a St. Stanislaus middle school teacher and project facilitator, the task force will reinforce its message with classroom visits, which will include various self-esteem and diversity awareness activities. They are also sponsoring a school-wide essay and art contest on the effects of bullying.

"I've taught in both Catholic and public schools and I've seen bullying everywhere," Miller noted. "The best place to start is with the kids and the Golden Rule: do unto others as you would have them do unto you."

"That's what Jesus teaches us and what we want to teach the children," Bryant added. "We need to take care of ourselves and be good to one another."

Hey U.G.L.Y., a not-for-profit organization, was designed to give children and teens struggling with the effects of low self-esteem the tools needed to counter bullying, eating disorders, other forms of violence, substance abuse and suicide. Betty Hoeffner, president and co-founder, was present at St. Stanislaus to watch students take the pledge.

Inspired by a teen who came to Hoeffner after failed suicide attempts, the advocate for youth embarked on developing a program that would enable children and teens to learn to feel good about themselves. And when it came to bullying, Hoeffner was able to draw on personal experience.

"I remember it like it was yesterday - what it felt like in grade school and high school," Hoeffner recalled. "I was bullied and I was ..."

The Hey U.G.L.Y. program has now reached out to more than 650,000 youth through in-school presentations and activity plans.

"I hope our program stops bullying because it hurts people, even the bully," said Martin Lomay, a team member.

Despite the fact that victims of bullies often are made to feel isolated and marginalized, according to the bullying task force, one of the strongest deterrents to this behavior is the old adage that strength lies in numbers.

"Surround anyone being bullied to show unity and protection and then take them to the principal to tell what you saw," advised student Skyler Lagneau. "Bullying is just mean, and no one should be mean to anyone."
Source: http://heyugly.org/NEWS_NWICatholicBullying.php
Hirschsprung disease is a congenital disorder of the colon that causes a functional obstruction to the passage of stool. The disease is characterized by a lack of ganglion cells--nerve cells that coordinate peristalsis of the bowel--in the rectum and occasionally in the entire colon, or even the small intestine. The diagnosis of Hirschsprung disease may be made in a newborn because of late passage of stool, vomiting, or severe abdominal distension. Sometimes, Hirschsprung disease is not detected until later in childhood as part of the evaluation of severe constipation. Infants and children with Hirschsprung disease can become extremely ill due to a condition called enterocolitis, in which the colon becomes very dilated and inflamed. Children with this complication can die if treatment is not instituted immediately.
The definitive diagnosis of Hirschsprung disease is made by obtaining a biopsy of the rectum. This can usually be done with the child awake since there is no sensation of pain in the rectum. The biopsy is obtained using a device narrower than a pencil that is passed into the anus. If no ganglion cells are seen in the specimens obtained, then Hirschsprung disease is diagnosed. Children suspected of having Hirschsprung disease will also undergo a barium enema. This will often show a "transition zone" where dilated colon becomes narrow at the point where there are no ganglion cells. This test is important for planning operative treatment.
The ultimate treatment of Hirschsprung disease involves an operation that brings colon containing ganglion cells--normally contracting colon--down to a point just above the anus. If an infant or child is not severely ill when the diagnosis is made, this operation can often be done within a week of the diagnosis. However, some patients are extremely ill when they first present with Hirschsprung disease because of enterocolitis or a dramatically dilated colon from long-term obstruction. In these children, it is necessary to perform a colostomy to allow the colon to decompress. After several months, the definitive "pull through" operation can be performed.
The definitive operation for Hirschsprung disease has a number of variations, all of which are effective. We use an operation called the Soave endorectal pullthrough. We usually perform this operation using a minimally-invasive technique that avoids an abdominal incision and usually requires only the placement of a single three-millimeter-wide laparoscope into the abdomen. The mucosa, or inner lining of the rectum, is removed through the anus, and the colon with normal ganglion cells is "pulled through" the remaining cuff of rectal muscle to near the anus, where it is sewn in place. The laparoscope is used only to monitor the process of pulling the rectum through and occasionally to assist with dividing the blood vessels to the colon.
Before the operation, it is very important that the colon be well-irrigated and any enterocolitis be under control. This may require from several days to a week of antibiotics, rectal irrigations, and dilations.
In cases where it had been necessary to place a colostomy, we still can use a relatively minimally invasive technique, but an abdominal incision must still be made to take down the colostomy.
After the operation, children are permitted to eat once bowel function has returned, and they may then be discharged home. This is usually within one to two days of the operation when the minimally invasive procedure has been used. Many children have difficulties with severe diaper rash after the operation due to an increased frequency of stool output. This persists until there has been accommodation to the operation. We instruct parents in the application of various cream preparations that reduce the severity of the inflammation in the skin around the anus. After several weeks we begin a regimen of occasional dilatation of the rectum that can be performed at home. In addition, we usually ask parents to give rectal irrigations for three months after the operation in order to decrease the incidence of colon infection.
The long-term results from pullthrough operations for Hirschsprung disease are quite good. However, children are still at risk for the development of enterocolitis in the colon that remains, even though it has ganglion cells. This presents as fever, abdominal pain, abdominal distension, and possibly bloody diarrhea. Urgent medical attention must be obtained.
Source: http://lomalindahealth.org/childrens-hospital/our-services/clinical-services/pediatric-surgery/conditions-and-treatments/hirschsprung-disease.page
More affirmation for RTB's cosmic creation model comes from new polarization measurements of type-Ia supernovae that have confirmed their current and future usefulness as indicators of cosmological distance. To use a particular class of objects to measure astronomical distances, scientists must know how bright the objects are. While not all type-Ia supernovae are of the same brightness, astronomers use other techniques to compensate for these differences, making supernovae good “standard candles.” Recent analysis of polarization from these supernovae helped clarify the mechanism by which the explosions proceed. This provides another tool astronomers can use to correct for brightness differences, consequently allowing them to measure distances more accurately using type-Ia supernovae detected in the future. Better distance measures help establish the accuracy of a class of big bang models, including RTB's creation model.
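As a hedged illustration of the standard-candle idea described above (the absolute magnitude and the example apparent magnitude below are typical textbook-style values, not taken from the article), the distance modulus turns an apparent magnitude into a distance:

```python
# Standard-candle sketch: type-Ia supernovae peak near a common absolute
# magnitude, so measuring an apparent magnitude yields a distance.
M_ABS = -19.3  # assumed typical peak absolute magnitude of a type-Ia SN

def distance_mpc(apparent_mag, absolute_mag=M_ABS):
    """Distance from the distance modulus m - M = 5*log10(d / 10 pc)."""
    d_pc = 10 ** ((apparent_mag - absolute_mag + 5.0) / 5.0)
    return d_pc / 1e6  # parsecs -> megaparsecs

# A supernova seen at apparent magnitude 16.7 would lie at about 158 Mpc.
print(round(distance_mpc(16.7), 1))  # 158.5
```

Corrections of the kind discussed in the article effectively tighten the scatter around `M_ABS`, which is what makes the distance estimate reliable.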
- Lifan Wang, Dietrich Baade, and Ferdinando Patat, “Spectropolarimetric Diagnostics of Thermonuclear Supernova Explosions,” Science 315 (2007): 212–14.

Related Resource:
- Hugh Ross, “A Beginner’s—and Expert’s—Guide to the Big Bang”

Product Spotlight:
- Creation as Science, by Hugh Ross
Source: http://www.reasons.org/articles/more-type-ia-supernovae-tools
The Delta serves as a unique “hub” in California’s water system, receiving runoff from other watersheds that goes for many beneficial uses throughout the state. The Delta provides a portion of the drinking water for more than 27 million Californians—nearly two-thirds of the state’s population. As the West Coast’s largest estuary, the Delta relies on water flows to ensure a healthy ecosystem while also providing water to irrigate more than 3 million acres of agricultural land. California’s water hub cannot continue to meet these demands.
Water deliveries from the Delta have been reduced significantly in recent years due to years of drought and other systemic problems in the Delta. Left unaddressed, this will create tremendous impacts on California’s economy, environment, agricultural industry and millions of residents throughout the state.
Water is essential to human life and health, and human consumptive uses are the top priority for developed water supply in California under existing law. Water supply, regardless of source, also is an important part of the California economy. Thus, water is both an important natural resource and an important economic resource, to be managed appropriately for identifiable public benefit, and to be preserved for future generations. There is great competition for the limited amount of developed water supply.
Public trust principles, well established in the American legal system with roots back to England and parallel principles in other legal systems, provide a way to frame decisions about the use of water in the Delta and Delta watershed. In our legal system, water is not owned by any user, but the State of California and public retain ownership. Users gain the right for use of water in various ways (riparian, appropriative, etc.), but those rights are conditional as stated both in the term reasonable use and by the underlying public trust for protection of the resource.
The Delta’s watershed is 27 percent of the land area of California and receives 36 percent of the precipitation for the state. Large populations outside of the watershed are serviced by exported Delta water. California has changed little over 116 years, though climate change projections suggest more rainfall than snow, reduced snow pack, and more severe storms in the future. This telling fact is often lost in our discussion over state water policy.
Because of California’s Mediterranean climate, the key challenge for the statewide water system has been to shift water from wet years, wet seasons, and wet locations to drier times and places. California’s major supply of water is from rain and snow that falls north and east of the Delta (with a relatively modest amount imported from other states). But the major demand for water is west and south of the Delta.
The Delta is an important, but not dominant, part of the California’s water supply. A relatively small proportion of total state water from rain, snow or inflow from other states flows into the Delta—15 percent in a wet year, 13 percent in an average year, and 9 percent in a dry year. But the Delta is more important than its share of water because it is the hub of the two largest water systems in the state, the federal Central Valley Project and the State Water Project. These projects use the Delta as a hub of their water conveyance system. The Delta also plays that role in some local water systems such as Contra Costa Water District, while other users take water directly from the Delta’s waterways for use in the Delta. In total, taking water from the Delta has increased significantly over the past half century, mostly for export.
More water is commonly exported from the Delta in average or dry water years than is exported during wet years. In wet years, about 4.6 million acre-feet of water is exported from the Delta; in average and dry years, water exports are about 6.3 million and 5.1 million acre-feet, respectively. The current infrastructure for water conveyance and storage limits ability to capture and store water during high flows for use in dry years.
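The export figures above can be put in metric terms with a simple unit conversion (1 acre-foot ≈ 1,233.48 cubic meters, the standard conversion factor); the snippet below is my illustration, not part of the Council's materials:

```python
ACRE_FOOT_M3 = 1233.48  # cubic meters per acre-foot (standard conversion)

def acre_feet_to_km3(acre_feet):
    """Convert a volume in acre-feet to cubic kilometers."""
    return acre_feet * ACRE_FOOT_M3 / 1e9

# Delta exports cited above, by water-year type, in acre-feet.
exports_af = {"wet": 4.6e6, "average": 6.3e6, "dry": 5.1e6}
exports_km3 = {kind: round(acre_feet_to_km3(af), 2)
               for kind, af in exports_af.items()}
print(exports_km3)  # {'wet': 5.67, 'average': 7.77, 'dry': 6.29}
```

So, counterintuitively, average and dry years see roughly one to two cubic kilometers more exported than wet years, reflecting the limited capacity to capture high flows.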
This 36-page booklet provides information on a wide range of water issues facing California, with particular focus on the Delta.
Source: http://deltacouncil.ca.gov/water-supply
Title: First Search for Dark Matter Annihilation in the Sun Using the ANTARES Neutrino Telescope
Authors: ANTARES Collaboration: S. Adrián-Martinez, I. Al Samarai, A. Albert, et al.
First Author Institution: Institut d’Investigació per a la Gestió Integrada de les Zones Costaneres (IGIC) – Universitat Politècnica de València.
The quest for identifying the dark matter particle is well underway. Many experiments are relying on different detection methods to look for this elusive particle. In this paper, we discuss the work of the ANTARES (Astronomy with a Neutrino Telescope and Abyss environmental RESearch project) collaboration, which is using a neutrino telescope to search for signals of dark matter annihilation in the Sun.
Most of the matter in the Universe is not visible: it does not interact with light. Think about it: casting light on something and seeing what comes back is the main physical means we have of knowing it is there. In astronomy, we observe the light emitted by an object, or the inverse: we notice light being absorbed by an object. However, most of the matter in the Universe will not allow us to perform that experiment. We know of its existence because of its gravitational influence on the evolution of the Universe, the dynamics of galaxies, and its lensing signatures. The leading theory for “dark” matter is that it is made of WIMPs, weakly interacting massive particles that arise naturally in supersymmetry (SUSY), an extension of the Standard Model of particle physics.
In spite of all the gravitational evidence in favor of dark matter, we still have not had a definitive detection of dark matter particles. The literature on these searches is vast, with experiments looking for dark matter directly, through the recoil of targets that collide with dark matter particles, or indirectly, through the products of the annihilation or decay of dark matter particles. We gave a short overview of the recent advances of this field in this astrobite. We also described an interesting experimental design of a new type of dark matter detector here.
The WIMP annihilation signal
If dark matter is indeed a supersymmetric particle, when two dark matter particles collide, they annihilate into other particles (including photons, neutrinos and antimatter). Indirect dark matter experiments search for those products by looking at regions where we expect the dark matter density to be high. WIMPs can become gravitationally trapped in the center of the Sun, annihilate and produce neutrinos that can escape and reach the Earth. Neutrino telescopes, such as ANTARES, can be used to search for this signal.
How does ANTARES work?
Neutrino telescopes do not resemble optical telescopes at all. When neutrinos interact with the Earth or the atmosphere, they produce charged particles (muons). If these particles have very high energies (10 GeV–100 TeV), they emit Cherenkov light as they traverse water.
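The Cherenkov condition also sets a minimum muon energy in water. As a back-of-the-envelope sketch (standard physics with textbook constants, not a calculation from the paper):

```python
import math

M_MU = 105.66   # muon rest energy in MeV (PDG value)
N_WATER = 1.33  # approximate refractive index of sea water (assumption)

# Cherenkov light is emitted when the particle outpaces light in the
# medium, i.e. beta > 1/n; the total-energy threshold follows from
# E = gamma * m * c^2 with gamma = 1 / sqrt(1 - beta^2).
beta_th = 1.0 / N_WATER
gamma_th = 1.0 / math.sqrt(1.0 - beta_th ** 2)
e_threshold_mev = M_MU * gamma_th

print(round(e_threshold_mev))  # ~160 MeV
```

That threshold is far below the 10 GeV–100 TeV range quoted above, so any muon energetic enough to be of interest here radiates Cherenkov light.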
The ANTARES telescope is located at ~2500 m underwater in the Mediterranean Sea. Its array of photomultipliers collects the Cherenkov light with the aim of reconstructing the direction of the original incoming muon. The figure below shows a diagram of the experiment underwater.
The background, always the background
As usual in this business, one needs to be careful to distinguish signal from background. In this particular case, cosmic rays entering the Earth's atmosphere can produce downgoing muons, as well as both downgoing and upgoing neutrinos, that could mimic the muons from dark matter annihilation. The most significant contribution to the background comes from the downgoing atmospheric muons. To avoid them, the best strategy is to use only measurements triggered at night, when the muons from dark matter annihilation arrive from the direction opposite the background: from below the detector.
Moreover, to increase the signal-to-background ratio, one can place a cut on the inferred direction of the muons (not the original neutrinos!) with respect to that of the Sun. This cut depends on the energy of the muon: at high energies, the directions of the muon and the neutrino are more closely aligned than at low energies. The trade-off is that a more stringent cut on the direction gives a cleaner signal, but at the same time it throws away information from the low-energy muons. Nevertheless, there is always a residual background that you need to model, to see whether the data are in excess of it (which would be a detection of dark matter annihilation) or consistent with it.
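As a toy illustration of such an energy-dependent cut: the median angle between a muon neutrino and its daughter muon is often quoted with a rule-of-thumb scaling of roughly 0.7°/√(E/TeV). This is an assumption for illustration, not the actual cut used by ANTARES:

```python
import math

def cone_half_angle_deg(e_nu_tev, k=0.7):
    # rule-of-thumb median nu-mu kinematic angle; k is an assumed constant
    return k / math.sqrt(e_nu_tev)

def passes_sun_cut(sep_deg, e_nu_tev, n_widths=2.0):
    # keep events whose muon points within a few kinematic widths of the Sun
    return sep_deg <= n_widths * cone_half_angle_deg(e_nu_tev)

# toy events: (separation from the Sun in degrees, neutrino energy in TeV)
events = [(0.3, 1.0), (2.5, 1.0), (0.9, 0.1), (5.0, 0.05)]
kept = [ev for ev in events if passes_sun_cut(*ev)]
print(kept)
```

Note how the cone widens at low energy: a 5° separation survives the cut for a 50 GeV neutrino but a 2.5° separation fails it at 1 TeV.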
From January 2007 to December 2008, the experiment took an effective total of 294.6 days of data. We reproduce here Figure 4 in the paper, where the main results are presented. This figure shows the distribution of separations between the inferred direction of the muon and the Sun. The number of muons coming from the direction of the Sun is consistent with the background expectation. There is no detection of dark matter annihilation in the Sun by ANTARES. In the next section, we explore what constraints can be placed on the properties and models of dark matter particles using these results. If you are brave enough, carry on reading!
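Since the counts toward the Sun match the background expectation, the result is an upper limit rather than a detection. A minimal sketch of a classical Poisson counting limit, using made-up event numbers (not the values from the paper) and ignoring systematics:

```python
import math

def poisson_cdf(n, mu):
    # P(N <= n) for a Poisson variable with mean mu
    return sum(math.exp(-mu) * mu**k / math.factorial(k) for k in range(n + 1))

def upper_limit(n_obs, bkg, cl=0.90, step=0.01):
    """Smallest signal s such that observing <= n_obs events becomes
    unlikely at the given CL for expectation s + bkg (simple classical
    counting limit, illustrative only)."""
    s = 0.0
    while poisson_cdf(n_obs, s + bkg) > 1.0 - cl:
        s += step
    return s

# toy numbers: 4 events observed, 3.6 expected from background
ul = upper_limit(4, 3.6)
print(f"90% CL upper limit on signal: ~{ul:.1f} events")
```

The limit on the event count is then propagated, via the detector's effective area and exposure, into a limit on the neutrino flux from the Sun.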
For the brave at heart: Constraints on SUSY parameter space
What do these results tell us about dark matter particle candidates? Different models for dark matter particles predict different production mechanisms for neutrinos. The minimal supersymmetric models (MSSM) considered in this work predict that the neutrino signature comes mostly from the decay of tau leptons, W bosons or bottom quarks that are produced when the dark matter particles annihilate.
The measurements of the previous section allow the authors to place an upper limit on the neutrino flux from dark matter annihilation in the Sun. That is to say, if dark matter annihilation were happening in the interior of the Sun, the neutrino flux from this process would have to be below a certain threshold to be consistent with the observations by ANTARES.
This information comes in handy. In the interior of the Sun, the rate of dark matter annihilation depends on the rate of capture of dark matter: you need to be capturing dark matter to have anything to annihilate. This is just an equilibrium argument, but a very useful one, since it allows us to set constraints on the cross-section of the interaction of dark matter with nucleons in the Sun (most abundantly, protons). The cross-section has a spin-dependent (SD) and a spin-independent (SI) contribution, depending on whether or not the interaction depends on the spin of the nucleon. We already have very stringent limits on the SI cross-section from direct detection experiments. Neutrino telescopes are thus ideal for constraining the SD WIMP-proton cross-section. For a more technical explanation, see this paper.
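The equilibrium argument can be sketched in a few lines: if capture and annihilation balance, the annihilation rate is half the capture rate (each annihilation removes two WIMPs), and the flux at Earth follows from simple geometry. The capture rate and neutrino multiplicity below are placeholder values, not model predictions:

```python
import math

AU_CM = 1.496e13  # Earth-Sun distance in cm

def neutrino_flux(capture_rate_per_s, nnu_per_annihilation, dist_cm=AU_CM):
    """In capture-annihilation equilibrium the annihilation rate is
    half the capture rate (two WIMPs are consumed per annihilation);
    the flux then dilutes over a sphere of radius dist_cm."""
    gamma_ann = 0.5 * capture_rate_per_s
    return gamma_ann * nnu_per_annihilation / (4.0 * math.pi * dist_cm**2)

# toy inputs: assumed capture rate and one neutrino per annihilation
flux = neutrino_flux(capture_rate_per_s=1e20, nnu_per_annihilation=1.0)
print(f"{flux:.2e} neutrinos / cm^2 / s")
```

Because the flux scales with the capture rate, and the capture rate scales with the WIMP-proton scattering cross-section, a flux upper limit translates directly into a cross-section limit.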
In the left panels of Figure 6 (reproduced below), the ANTARES collaboration presents their bounds on the parameter space defined by the SD cross-section of the interaction and the WIMP mass. Overall, even if we have not had a detection of annihilating dark matter in the Sun, the constraints on the parameter space of SUSY models are getting tighter day by day.
Description of Historic Place
Fort George National Historic Site of Canada is a largely recreated 18th-century military fort located on the west bank of the Niagara River near the river's mouth. It is situated on the remains of the original Fort George, largely destroyed during the War of 1812.
Fort George was declared a national historic site because:
- it served as the principal fortification on the Niagara Peninsula during the War of 1812 and as Headquarters of the Central Division of the British Army,
- it played a key role in the defence of Upper Canada,
- its destruction by artillery contributed to the American victory in the Battle of Fort George and the subsequent seven-month occupation by American forces.
The heritage value of Fort George lies in the remnants of a late 18th-century British fortification embedded in its cultural landscape, and the residues of the history to which they bear witness, particularly those associated with the War of 1812, the Battle of Fort George, British and American occupancy of the fortress, and its destruction in May 1813. Fort George was the site of a historic reconstruction during the 1930s, an activity which reconfigured most of its earthworks and resulted in the construction of several buildings inside the footprint of the original fort.
Sources: HSMBC Minutes, October 1963; Commemorative Integrity Statement.
Key features contributing to the heritage value of this site include:
- archaeological remnants of the Battle of Fort George, British and American occupancy, the original fortification with its palisade and the sloping glacis, buildings, structures, and landscape features,
- the massing, form, materials and craftsmanship of the powder magazine, particularly its stone construction and buttressed bomb- and fire-proof walls,
- the location of the powder magazine, away from other buildings,
- the spatial inter-relationships between the remains of original facilities inside the palisade,
- the core of "original Fort George dirt" in the rebuilt earthworks and any indications of the outline of the earthworks of the original fortification,
- the remaining natural topographical features of the site, particularly as they are integrated with military requirements (such as the natural ravine in which the powder magazine is located),
- the extensive cleared area extending to the Niagara River and across the Commons,
- archaeological remains of life at the fortress witnessing both British and American occupancy (including American trench lines both above and below grade, remnants of wharves, buildings, supply yards of Navy Hall),
- archaeological remains of the Battle of Fort George and destruction of the fort,
- the siting of the fortress on a steep rise, near the mouth of the river,
- evidence of 1930s commemorative activities on the site which re-configured its earthworks and added new buildings,
- viewplanes from the fort to the river, Butler's Barracks, the town, and across the river to the former site of the American Fort Niagara.
Studying a Vanishing Bird
In the spring of 1924, ornithologist Arthur Allen, founder of the Laboratory of Ornithology at Cornell, was traveling with his wife Elsa in Florida when they decided to check out an alleged sighting of an Ivory-billed Woodpecker. Ivory-bills had not been seen for several years. The Allens managed to find a pair and decided to study the birds by observing them but elected not to camp nearby for fear of disturbing what might be the last nesting pair. Much to their dismay, a pair of local taxidermists got a permit and shot the birds legally while the Allens were away.
In the early 1930s Mason Spencer, a state legislator from northeastern Louisiana, shot a male ivory-bill in a huge tract of virgin timber, known as the Singer Tract, along Louisiana's Tensas River and word went out to the ornithological community.
In 1935 Allen organized the Brand-Cornell University-American Museum of Natural History Ornithological Expedition. The expedition--including Cornell professors Arthur Allen and Peter Paul Kellogg, James Tanner, a graduate student, and bird artist George Miksch Sutton, who was also an ornithologist and curator of the Cornell bird collection--traveled across America to record motion pictures and sounds of vanishing birds.
In 1924, Allen and his graduate student, Peter Paul Kellogg, had assisted the Fox-Case Movietone Corporation in recording bird songs on motion-picture sound film, the first bird song recordings ever made. Allen recognized the tremendous potential for using sound recordings to study birds.
One of the goals of the 1935 expedition was to check out the 81,000-acre Singer Tract where Spencer had shot an Ivory-billed Woodpecker. After grilling Spencer about his sighting, the expedition headed into the swamp led by Jack Kuhn, the local game warden. After three days in the swamp, the expedition found an ivory-bill nest 40 feet above the ground in a cavity in a red maple.
"The whole experience was like a dream," wrote Sutton in his 1936 book Birds in the Wilderness. "There we sat in the wild swamp, miles and miles from any highway, with two ivory-billed woodpeckers so close to us that we could see their eyes, their long toes, even their slightly curved claws with our binoculars."
Allen set up Camp Ephilus--a play on the scientific name of the ivory-bill (Campephilus principalis) --within 200 yards of the nest and kept watch, recording every detail of the birds' behavior, for a couple of weeks. Peter Paul Kellogg had stayed in town moving all of the equipment from their truck to a wagon that would be hauled to the campsite by mules. It was impossible to get a motor vehicle into the swamp.
When Kellogg arrived, he and the crew produced the first motion pictures and sound recordings ever made of the ivory-billed woodpecker. Tape recorders had not yet been invented so Kellogg recorded bird sounds using the movietone system.
The movietone system used to record the ivory-bills worked by converting vibrations striking the microphone into electrical impulses and then into light of varying intensity, which was captured on motion-picture film. After the film was developed, the process would be reversed, converting the light images back into electrical impulses, which were then converted back into sound.
The sounds of the ivory-bills captured by Kellogg in 1935 are the ones still used for playback today by ivory-bill searchers. They are also the sounds against which modern recordings of possible kent calls are checked.
From 1937 to 1939, Jim Tanner spent two years studying ivory-bills in the Singer Tract and searching for them across the South as part of his PhD dissertation for Cornell. Funded by the National Audubon Society, Tanner produced an in-depth report, which was later published as The Ivory-billed Woodpecker. In 1939 Tanner estimated there might have been 22 to 24 ivory-bills remaining in the United States, with not more than 6 to 8 birds at any one place. Although Tanner spent months checking out sightings of the ivory-bill around the South, the only birds he ever found were in the Singer Tract. He concluded that the only hope of saving the species lay in preserving that ancient forest.
The Singer Tract (named after the sewing machine company that owned the land) was the largest piece of primeval forest left in the South. The logging rights to the Singer Tract had been sold to the Chicago Mill and Lumber Company. The National Audubon Society mounted a campaign to save the Singer Tract, but the campaign only accelerated the rate of cutting. The Chicago Mill and Lumber Company had no interest in saving the forest or compromising with John Baker, the president of the National Audubon Society. Baker wanted to buy the rights to the trees and obtained a pledge of $200,000 from the governor of Louisiana for that purpose.
The lumber company refused the offer and the Singer Sewing Machine Company, which still owned the land, refused to intercede. Richard Pough, who later became the first president of The Nature Conservancy, was sent by Audubon to search for the remaining ivory-bills in the Singer Tract in December 1943-January 1944. In a letter to John Baker he wrote, "It is sickening to see what a waste a lumber company can make of what was a beautiful forest." He found one female ivory-bill in a small stand of uncut timber, surrounded by destruction.
The artist, Don Eckelberry, who also worked for Audubon, went to the swamp in April 1944 looking for the bird Pough had spotted. He found her at her roost hole and spent two weeks watching and sketching her. Eckelberry's time in the swamp is the last universally accepted sighting of one of these birds in the United States.
Total fertility rate: 3.58 children born/woman (2012 est.)
Definition: This entry gives a figure for the average number of children that would be born per woman if all women lived to the end of their childbearing years and bore children according to a given fertility rate at each age. The total fertility rate (TFR) is a more direct measure of the level of fertility than the crude birth rate, since it refers to births per woman. This indicator shows the potential for population change in the country. A rate of two children per woman is considered the replacement rate for a population, resulting in relative stability in terms of total numbers. Rates above two children indicate populations growing in size and whose median age is declining. Higher rates may also indicate difficulties for families, in some situations, to feed and educate their children and for women to enter the labor force. Rates below two children indicate populations decreasing in size and growing older. Global fertility rates are in general decline and this trend is most pronounced in industrialized countries, especially Western Europe, where populations are projected to decline dramatically over the next 50 years.
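The definition above amounts to summing age-specific fertility rates over a woman's reproductive span. A small sketch with hypothetical rates (these are illustrative numbers, not Iraq's actual age-specific values):

```python
def total_fertility_rate(asfr_per_1000, interval_years=5):
    """TFR = sum of age-specific fertility rates (births per woman
    per year) multiplied by the width of each age interval."""
    return sum(rate / 1000.0 * interval_years for rate in asfr_per_1000)

# hypothetical ASFRs (births per 1,000 women per year) for the seven
# standard 5-year groups, ages 15-19 through 45-49
asfr = [45, 160, 180, 140, 90, 35, 6]
tfr = total_fertility_rate(asfr)
print(f"TFR: {tfr:.2f} children per woman")
```

This is why the TFR is described as a synthetic measure: it tells you how many children a woman would have if she experienced today's rates at every age, independent of the population's current age structure.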
Source: CIA World Factbook. Unless otherwise noted, information on this page is accurate as of February 21, 2013.
Q: We have been growing tomatoes in the same place about 10 years and each year they are getting worse. Sue, we have very poor quality and quantity. Small plants. We try Miracle-Gro but even that doesn't seem to help. What can I do to improve the nutrients of the soil and get better plants?
A: The first suggestion, often difficult in the small backyard garden, is to move the tomatoes. I know it sounds too simple but many of the things that cause problems for tomatoes winter over in the soil.
Tomatoes and their kin (potato, peppers, eggplant, tobacco, petunias and tomatillo) should be rotated so that they aren't planted in the same area within three or more years of the last planting. Common soil borne problems include:
Wilts from soil fungi: Verticillium and Fusarium wilts.
Virus: Tomato/Tobacco mosaic virus survives in the soil and can infect transplants with the virus.
Nematode, root rot: stunted plants with pale green leaves; roots have multiple swellings. The tiny worm winters in soil or plant debris.
Other things that may cause problems include:
Inadequate sunlight: Fruiting plants need full sun for peak performance; Six to 10 hours daily, six is a bare minimum and far from ideal.
Water: Too much or too little. Too much leads to rotting, fungal problems, cracking and splitting; too little can cause stunted plants, poor nutrient uptake.
Toxins from the roots of black walnut and butternut trees are another source of wilting, particularly in tomatoes; avoid planting within the root zone of these trees. Blossom-end rot is a calcium deficiency, but it is usually caused by either uneven watering or improper pH, both of which can affect the ability of the plant to use the calcium available in the soil.
Over-fertilization: Applying too much nitrogen (the N of the N-P-K on a fertilizer bag) will cause excessive foliage and few flowers. Consider fertilizing before the tomatoes bloom and again after the first harvest.
Cool temperatures: Planting too early risks frost damage, stunted growth, delayed growth. Plant outside only when the danger of a late frost is past and the soil has warmed. Early planting will require protection such as cloche, water wall or other heat-conserving devices.
Pests: Hornworms, stinkbugs, aphids, whiteflies, cutworms. Check plants often and treat as soon as discovered. Often, handpicking is the easiest option and quite effective.
Amend soil: Improve the organic content of the soil by adding compost, aged manure or other soil conditioners. This will improve the texture of the soil and its water-retention ability, as well as the amount and variety of microorganisms in the soil, some of which combat the soil-borne pests that winter over. Do this as soon as the soil can be worked.
Consider raised beds: Creating a new bed with fresh soil should lessen the chance of soil-borne problems.
Water at the base: Water in the early part of the day so the plants have a chance to dry off. Avoid watering the leaves as this can stir up fungus spores and spread problems.
Water during dry spells: the soil should be uniformly moist, not wet.
Grow resistant tomato varieties: Look for A=Alternaria, F=Fusarium, N=nematodes, T=Tobacco mosaic virus, and V=Verticillium on the plant tag or seed description.
Fertilize: Give plants a good start with an initial dose of weak fertilizer (starter fertilizer strength: 2 tablespoons of 5-10-10 or 5-10-5 per gallon or a weak fish emulsion) when transplanting out to the garden.
Readers: write in with your success solutions to this tomato problem.
Sue Kittek is a freelance garden writer. Send questions to Garden Keeper at firstname.lastname@example.org or mail: Garden Keeper, The Morning Call, P.O. Box 1260, Allentown, PA 18105.
THIS WEEK IN THE GARDEN
Indoors for transplant: Finish sowing dahlia, larkspur, portulaca and head lettuce. Sow: Leaf lettuce, peppers and tomatoes. Next week sow: cucumbers.
As soon as the soil can be worked, sow outdoors: peas and sweet peas, potatoes, poppies and mignonette.
Pot dahlia tubers and grow on indoors for cuttings in April.
As hostas peek through the soil, dig and divide large mounds.
Clear iris beds and cut back plants to about four inches to deter iris borers.
Purchase onion sets for planting later this spring.
Consider building a cold frame. It will extend the garden season by a few months.
Do soil tests before amending soil with anything other than compost.
Cut back old, dead foliage from perennials. Clear spring bulbs from leaf litter and mulch. Clear branches, twigs and other debris from the yard and garden beds.
Move plants wintering over into lighted area and increase watering as they break dormancy.
Clean, groom, divide and repot indoor plants so they are settled in before they move outdoors for the season. Remember root pruning can be used to control the size of a plant.
Continue to feed birds regularly and provide fresh water. Encourage nesting as part of any insect control program.
Map and plan changes to old beds or design new ones.
|
<urn:uuid:1aaed8a9-694e-48f4-86b7-3a9cbbb2ffbb>
|
CC-MAIN-2013-20
|
http://articles.mcall.com/2009-03-20/features/4334193_1_soil-fusarium-wilts-plants
|
2013-05-19T02:10:16Z
|
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368696383156/warc/CC-MAIN-20130516092623-00051-ip-10-60-113-184.ec2.internal.warc.gz
|
en
| 0.910596
| 1,168
|
The average person needs a minimum of seven hours of sleep a night. There are said to be many consequences of not getting enough sleep that most of us are not aware of, and these consequences can lead to not living a healthy life.
Most teens don't recognize how not getting enough sleep can have a major effect on their lives. The consequences of not getting enough sleep include eating more, which leads to more calories in your body; living a shorter life; weight gain; brain shrinking; higher blood pressure; an increased risk of getting sick; a worse memory; skipping your workouts; being cranky; and not being on your game. These are all serious problems that people should know about. Teens are affected by this every day, which has an impact on their school work. It is common in schools to hear students complain about being tired. Could the amount of school work also be part of the problem?
"I don't get enough sleep because I can't fall asleep at night. I am always tired, so I fall asleep in class. The fact that I am involved in a sport and have practice every day affects my sleeping routine as well as my ability to pay attention in school. The most sleep I get is 4 to 5 hours," said New Dorp student Monica River. "I feel that I do get enough sleep, but if I do not get enough sleep I will end up sleeping during class time, and that makes me wish school started later. The amount of homework and projects given to me does affect my sleeping routine at times. I usually get 8 hours of sleep a night, and to help me get enough sleep I would stop watching television," said another New Dorp student, Kathleen Braunstein.
Clearly, many people do not realize the importance of getting enough sleep. Sleep affects many aspects of your everyday life. If people start getting the right amount of sleep each night, it will change their lifestyle for the better.
*Mary Pantaleon contributed to this article
Mar. 18, 2006 Researchers at Oregon State University have created the world's first completely transparent integrated circuit from inorganic compounds, another major step forward for the rapidly evolving field of transparent electronics.
The circuit is a five-stage "ring oscillator," commonly used in electronics for testing and new technology demonstration. It marks a significant milestone on the path toward functioning transparent electronics applications, which many believe could be a large future industry.
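A ring oscillator is simply an odd number of inverters connected in a loop; its oscillation frequency follows directly from the per-stage delay. The sketch below uses a hypothetical stage delay, since the article does not state the actual speed of the OSU circuit:

```python
def ring_oscillator_freq_hz(n_stages, stage_delay_s):
    """An odd-stage inverter ring oscillates with period 2 * N * t_pd:
    the signal must traverse the ring twice to return to its
    starting logic level."""
    if n_stages % 2 == 0:
        raise ValueError("a simple inverter ring needs an odd stage count")
    return 1.0 / (2 * n_stages * stage_delay_s)

# assumed per-stage delay of 1 microsecond, purely for illustration
f = ring_oscillator_freq_hz(5, 1e-6)
print(f"{f / 1e3:.0f} kHz")
```

Because the frequency is set entirely by the stage delay, measuring it on a fabricated ring is a standard way to benchmark how fast a new transistor technology switches, which is why the OSU team built one.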
A report on the findings has been accepted for publication in a professional journal, Solid State Electronics. The research has been supported by the National Science Foundation, Army Research Office, and HP. Recently, OSU also licensed to HP the rights to market new products based on this work, which provides the university a partner to help scale-up and commercialize the technology.
"This is a quantum leap in moving transparent electronics from the laboratory toward working commercial applications," said John Wager, a professor of electrical engineering at OSU. "It's proof that transparent transistors can be used to create an integrated circuit, tells us quite a bit about the speeds we may be able to achieve, and shows we can make transparent circuits with conventional photolithography techniques, the basic patterning methods used to create electronics all over the world."
Collaborators on the work at OSU include Wager; Doug Keszler, professor and head of the OSU Department of Chemistry; Janet Tate, a professor of physics; and Rick Presley, who as a master's candidate in electrical engineering at OSU has been at the cutting edge of a new electronics industry.
Transparent electronics, scientists say, may hold the key to new industries, employment opportunities, and new, more effective or less costly consumer products. Uses could range from transparent displays in the windshield of an automobile to cell phones, televisions, copiers, "smart" glass or game and toy applications. More efficient solar cells or better liquid crystal displays are possible.
Recently, OSU announced the creation of a transparent transistor based on zinc-tin-oxide. The new transparent integrated circuit is made from indium gallium oxide. Both of these compounds, which are amorphous heavy-metal cation multi-component oxides, share some virtues - they have high electron mobility, chemical stability, physical durability and ease of manufacture at low temperatures.
They also will be cost-effective and safe - alternative heavy metals such as gold and silver have been ruled out because of their expense, and others such as mercury, lead or arsenic avoided due to environmental concerns.
There are still challenges to be met, Wager said. The technology needs to be scaled up to larger sizes, all process steps must be made functional for manufacturing, physical protection is needed for the new circuits, and new markets and products must be identified. Work will also continue toward a "P-channel" device, which would provide a number of advantages, such as lower power consumption, simpler electronic architecture, and the ability to do both analog and digital processing.
"What's exciting is that all of the remaining work seems very feasible," Wager said. "It will take some time, but we just don't see any major obstacles that are going to preclude the commercial use of transparent electronics with these compounds.
"In a way," Wager added, "it's shocking how fast this field has progressed. We might be able to bring transparent integrated circuits to widespread use in five years or so, a process that took a couple of decades in the early evolution of conventional electronics."
When perfected, researchers say, some transparent electronics applications may be so cheap and effective that they could be used in "throw away" devices, or used to replace conventional circuits that don't even require transparency. The electronic capabilities of the materials are sufficiently impressive that they have already outperformed the organic and polymer materials that are the basis of millions of dollars of research every year.
OSU officials believe the evolution of these products and the collaboration with HP may be one of the most valuable the university has ever developed with private industry.
The project is affiliated with the Oregon Nanoscience and Microtechnologies Institute, a research collaboration involving Oregon's three public research universities - OSU, Portland State University, and the University of Oregon - as well as the Pacific Northwest National Laboratory, the state of Oregon, and the regional business community.
In order to move the research along more quickly, the university has emphasized collaboration between scientists and engineers to address basic science, as well as engineering and manufacturing issues. The end result should be not only breakthroughs in fundamental science, but also compounds that will be practical to manufacture and use.
What migrating Sharpies eat
Feathers plucked from Sharp-shinned Hawks’ beaks and talons reveal an unexpected taste for larger-than-expected birds
Published: February 15, 2013
The standard way to determine what a bird of prey eats is to examine what’s left behind after its meals. The method works well during the breeding season, but it’s of no use after the birds move on.
Photo by U.S. Fish and Wildlife Service
What do raptors eat while migrating? Biologists who captured 72 Sharp-shinned Hawks at a hawk watch in the Manzano Mountains in New Mexico recently took a novel approach to answering that question.
They checked the hawks’ beaks and talons for prey species’ feathers, which the scientists gathered. Then, back in the lab, they extracted nucleotide sequences from the feathers and compared them to genes obtained from reference feathers taken from 57 bird species netted at banding sites located not far from the hawk watch.
The results showed that migrating Sharpies take most of their prey in proportion to its abundance. Twenty species were identified conclusively, including three species never before described as Sharp-shinned Hawk prey: Ladder-backed Woodpecker, Bullock’s Oriole, and Townsend’s Warbler.
The most common prey species were American Robin and Hermit Thrush. Both are larger than most potential prey, and the hawks took both more frequently than expected.
Why Sharpies show an inclination to feed upon relatively uncommon and large prey isn’t clear, write the researchers, “but selecting larger prey would be consistent with an optimal foraging strategy whereby hawks maximize energy intake per hunting attempt or hunting time.”
|
<urn:uuid:a9ec397d-d4e9-455d-912d-184b24053abc>
|
CC-MAIN-2013-20
|
http://birdwatchingdaily.com/en/Getting%20Started/Birding%20Briefs/2013/02/What%20migrating%20Sharpies%20eat.aspx
|
2013-05-26T02:48:33Z
|
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368706499548/warc/CC-MAIN-20130516121459-00051-ip-10-60-113-184.ec2.internal.warc.gz
|
en
| 0.95546
| 365
|
If you look at the various components in your computer, the hard disk is typically the slowest. This means that disk I/O can become a major bottleneck.
Back in the days of DOS, the file system of choice was FAT (file allocation table). The FAT file system limited the length of a filename to eight characters plus a three-character extension. Newer file systems such as FAT32 and NTFS allow the use of long filenames, but they also retain a FAT-style filename for backward compatibility purposes.
This filename alias can cause performance problems in a couple of different ways. First, the process of writing the filename alias consumes disk I/O cycles. Also, the filename alias forces the operating system to stop and calculate what the abbreviated filename should be.
That in itself incurs a minor performance hit. The real problem, though, is in the way that the filename alias is calculated. To create a filename alias, Windows looks at the first six characters of the original filename and then derives the alias from those six characters. Typically, the alias consists of the first six characters of the filename, a ~ sign and a number. A number is used at the end of the filename because it's possible that multiple files within a folder will have filenames in which the first six characters are identical.
Herein lies the problem. If only a few files share the same first few characters of their filenames, it isn't really a big deal. However, when large numbers of files share the initial portion of their filenames, Windows begins to spend more and more time figuring out what each filename alias should be. Once a directory contains a few thousand files with these characteristics, performance can really begin to suffer. According to some sources, if a folder contains 300,000 or more files with identical initial filename portions, naming conflicts can begin to occur because Windows exhausts its pool of aliases for those files.
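A simplified model of the alias scheme shows why same-prefix names get progressively more expensive. This is only a sketch: the real Windows algorithm switches to a hash-based short name after a few collisions, but the collision-scan behavior is the point here:

```python
def short_alias(name, taken):
    """Very simplified sketch of 8.3 alias generation: the first six
    valid characters of the name, plus '~' and a number.  (Real
    Windows uses a hash-based form after a few collisions.)"""
    base, _, ext = name.rpartition(".")
    stem = "".join(c for c in base.upper() if c.isalnum())[:6]
    n = 1
    while f"{stem}~{n}.{ext.upper()[:3]}" in taken:
        n += 1  # every retry is extra work for the file system
    alias = f"{stem}~{n}.{ext.upper()[:3]}"
    taken.add(alias)
    return alias, n

taken = set()
# many long names sharing the same first six characters
tries = [short_alias(f"report-2024-{i:04d}.txt", taken)[1] for i in range(5)]
print(tries)  # the collision counter climbs with every added file
```

With a shared prefix, the fifth file already needs five candidate aliases checked; scale that to hundreds of thousands of files and the cost of each new file keeps growing.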
Today, the old "eight-dot-three"-style filenames are rarely necessary. Almost everyone uses FAT32 or NTFS as their file system, and backward compatibility with the FAT file system is seldom an issue anymore. That being the case, you might consider completely disabling support for backward compatibility. Doing so can greatly improve the file system's performance.
To disable backward compatibility, you can use a tool that was included with the Windows Resource Kit called Fsutil. Fsutil is a command-line tool designed to modify the behavior of the file system. The command for disabling backward compatibility is:
Fsutil behavior set disable8dot3 1
If you need to retain backward compatibility with the FAT file system, then you can improve performance by being careful how you name files. You should avoid placing large numbers of files into a single folder whenever possible. However, if you must place a lot of files into a folder, try structuring the filenames so that the first few characters of each filename are different, as opposed to making the last part of the filename the differentiating factor. Remember that Windows looks at the first six characters of a filename when creating an alias.
Resize the master file table
Another way to improve performance is to adjust the size of the master file table. The master file table is similar to the File Allocation Table used by the FAT file system. It is essentially a directory of all of the files and folders found on the hard disk volume. For this reason, it is critical to the volume's performance that the master file table remains as unfragmented as possible. As such, Microsoft designed NTFS so that 12.5% of the volume's disk space is reserved for the master file table.
Normally, this works out OK, but if the volume contains a large number of files (not necessarily large files), the amount of space reserved for the master file table can become inadequate. Likewise, if the volume starts to become low on disk space, Windows may start placing some of the smaller files into the area reserved for the master file table in an effort to avoid running out of disk space.
Either of these situations can cause performance problems. Fortunately, you can adjust the amount of disk space that is reserved for the master file table by using the fsutil command. The actual command is shown below:
Fsutil behavior set mftzone 1
You might have noticed that the command above ends with a "1." The number at the end of the command tells fsutil how much disk space to reserve. The "1" indicates that 12.5% of the total capacity will be reserved. In addition, a "2" reserves 25% of the disk's capacity, while a "3" reserves 37.5% and a "4" reserves 50%.
ABOUT THE AUTHOR
Brien M. Posey, MCSE, has received Microsoft's Most Valuable Professional Award four times for his work with Windows Server, IIS and Exchange Server. He has served as CIO for a nationwide chain of hospitals and healthcare facilities, and was once a network administrator for Fort Knox. You can visit his personal Web site at www.brienposey.com.
This was first published in February 2008
Risk factors
By Mayo Clinic staff
While anyone can catch infectious diseases, you may be more likely to get sick if your immune system isn't working properly. This may occur if:
- You're taking steroids or other medications that suppress your immune system, such as anti-rejection drugs for a transplanted organ
- You have HIV or AIDS
- You have certain types of cancer or other disorders that affect your immune system
In addition, certain other medical conditions may predispose you to infection, including implanted medical devices, malnutrition and extremes of age, among others.
We will focus on the following: the dynamics of a revolutionary movement leading to Bolshevik victory; the international and internal processes that transformed an international socialist project into a Soviet imperial one; and the endurance of nationalism within the Soviet imperial framework. We will also explore in some depth the role of personalities in politics. In terms of coverage, we begin with the crises of the Romanov regime and study the Russian revolutions of 1905 and 1917, the Civil War, the Soviet period, Stalinism, the Second World War, and the Cold War. We move to the arms races and nuclear perils of the Cold War and study the failed efforts to reform the Soviet system. Finally, we will examine the struggles of both Gorbachev and the post-Soviet leadership to integrate their state into a world order dominated by democratic values and capitalist markets while sustaining or reviving the Russian and Soviet empires' traditional great-power status.
COURSE FORMAT: Lecture/Discussion
Level: UGRD Credit: 1 Gen Ed Area Dept: SBS HIST Grading Mode: Graded
Prerequisites: NONE
Last Updated on MAR-30-2006
Copyright Wesleyan University, Middletown, Connecticut, 06459
In Jewish consciousness, a fast day is a time of reckoning, a time to correct a previous mistake. What happened on the Tenth of Tevet that we have to correct?
On the Tenth of Tevet, 2,500 years ago, Nebuchadnezzar began his siege of Jerusalem. Actually, there was little damage on that first day and no Jews were killed. So why is this day so tragic? Because the siege was a message, to get the Jewish people to wake up and fix their problems. They failed, and the siege led to the destruction of King Solomon's Temple.
Today we are also under siege. Much of the Jewish world is ignorant of our precious heritage. Children whose Jewish education ended at age 13 now carry that perception through adulthood. The results are catastrophic: assimilation in the diaspora, and a blurring of our national goals in Israel.
The siege was a message to the Jewish people to wake up and fix their problems.
So what's the message for us? Wake up and understand. What does the Almighty want? If there's a siege, hear the message now. Don't wait for the destruction.
If the Jewish problem today is a lack of appreciation of our heritage, then the solution is clear: increased love of Torah, love of Jews, and love of Israel and Jerusalem. The Almighty is telling us: The siege will not be lifted until you correct the mistake.
Responsibility To Teach
The Talmud speaks about two sages concerned over the threat of Torah being forgotten by the Jewish people. As a precaution, Rav Chiyah captured a deer, slaughtered it, and gave the meat to orphans. Then he tanned the hides and wrote five separate scrolls, one for each of the Five Books of Moses. He took five children, and taught each of them one book. He then took six more children, and taught each of them one of the six orders of Mishnah, the oral law.
Then he told each of the 11 children: Teach what you've learned to each other. With this, the Talmud says, Rav Chiyah ensured that the Torah would never be forgotten by the Jewish people.
This raises a question: 11 children is a pretty small class. Why didn't Rav Chiyah simply teach all the children all the books? Why did he teach each child only one book?
The answer is that the children having to teach each other was essential to the process. To ensure that Torah should not be forgotten, you have to teach what you've learned to others. That's the secret. You've got an obligation to your fellow Jews. If you know something -- teach it.
To ensure that Torah is not forgotten, teach what you've learned to others.
Realize that the most destructive, painful, contagious disease of all is ignorance. Ignorance leads to wasted lives and untold suffering.
So if you know the key to happiness, teach it. Do you see human beings walking around depressed, half dead? Give them some joy. If you have the ability, you must help. Otherwise you'll always bear the knowledge of what you "could have done."
This is not about "forcing your opinion" on others. No. A good teacher conveys information that allows the student to get in touch with what he already knows -- and re-discover it on his own. Get others to see and understand it on their own terms.
Don't sell yourself short. You have the ability to make a dramatic impact on others. You don't have to be a U.S. Senator to make a difference. With one piece of wisdom you can help humanity.
The director of Aish HaTorah's Russian Program is Rabbi Eliyahu Essas, a former refusenik from the Soviet Union. He lived there at a time when it was totally illegal to study Torah. Consequently, Rabbi Essas had nobody to teach him, and at the time, he didn't know how to even read Aleph-Bet. So he got a hold of some underground books, hid out from the KGB, and began to teach himself Torah.
After a while, word got out that Rabbi Essas knew Torah, and people started coming to study in secret. But of 5 million Soviet Jews, Rabbi Essas was one of the few teaching Torah. So you can imagine that his time was in great demand. That's why Rabbi Essas made a rule: "Before I begin teaching you, you must agree to teach over what you've learned to others." In this way, Rabbi Essas was able to multiply his effect.
Before I teach you, you must agree to teach over what you've learned to others.
Although we don't live under an oppressive Soviet regime, the concept still applies to us as well. You learned something precious? Say to yourself: "That was fascinating. How did it change me? What did it teach me about living? Now how can I transfer this insight to others?"
Don't forget: Teaching benefits you as well. Until you share an idea, it's not yours. It remains but a hazy notion in your imagination. Having to explain an idea to others forces you to clarify it for yourself. You've taken it out of potential and made it a reality.
When you teach someone, make sure they understand how important it is to teach it over to someone else. If they do, then that's part of your success as a teacher. That's ensuring that Torah would never be forgotten by the Jewish people.
There's one more lesson to be learned from the story of Rav Chiyah. By teaching the 11 children only one book each, these children knew they had to learn from one another. The Jewish people are one and we're all in this together. Every person is worthy of profound respect, regardless of their beliefs and level of observance, and there is something to be learned from everyone.
We live in serious times. Whether it's assimilation in America, or international forces pressing our holy city of Jerusalem, the message is essentially the same: The siege is on and the clock is ticking. We have to communicate the Torah message to our people. It is a matter of utmost national urgency.
We who believe in the power of Torah and the eternal mission of the Jewish people are required to act.
Who is responsible? We who believe in the power of Torah and the eternal mission of the Jewish people are required to act. To teach wisdom and be a "Light Unto the Nations."
On the Tenth of Tevet, when Nebuchadnezzar surrounded the city of Jerusalem, we did not get the message. Will we get the message now? Will we change? Will we wake up to reality?
You've got to care. If you don't make the effort, you don't care enough. You have powers. Are you going to use them?
We must get the message. Before the destruction. Now is the time.
updated: Jun 04, 2011, 8:15 AM
Eyes-in-the-Sky (EITS) is a Santa Barbara Audubon program whose mission is to educate children and families in Santa Barbara County about birds, their natural habitats, and how people affect the "wellness" of birds' natural environment. Since 2001, EITS' popular educational outreach program has remained unique in its teaching approach, using rescued, un-releasable birds of prey (raptors) to open the door to learning about wildlife and nature. Raptors currently in the care of EITS are a Great horned owl, Western screech owl, Red-tailed hawk, Peregrine falcon and American kestrel.

Last year, more than 7,000 children and adults benefited from science classes, weekend guided bird walks, summer camp presentations and public events.

Community members who wish to apply to be a volunteer with EITS may contact Gabriele Drozdowski, Program Director, at firstname.lastname@example.org (or 805-898-0347). Volunteers learn to care for the raptors, to "meet and greet" the public, and to share stories about the raptors and their wild counterparts. Tax-deductible contributions to help with the care and maintenance of the raptors, their aviary and the program costs are gratefully accepted and can be mailed to: Attn: EITS, Santa Barbara Audubon, Inc., 5679 Hollister Avenue, Suite 5B, Goleta, CA 93117.
Autism news in 2012 once again centered on the dramatic increase in autism rates. In March, the Centers for Disease Control estimated that one in 88 children has autism, up 78 percent from 2002.
Scientists increasingly learned through research that autism is largely caused by environmental and man-made factors, a departure from the view held years ago that autism’s causes were nearly all genetic.
Meanwhile, educational and therapeutic interventions continued to evolve, with a strong emphasis on play skills as a way to improve social and life skills for children on the spectrum.
Links and excerpts from 10 autism articles from 2012 are below.
Autism advocates and government officials testified in front of a congressional committee Thursday about the federal response to the dramatic increase in autism diagnoses in recent years.
One in every 88 babies born in the U.S. will develop autism, according to the Centers for Disease Control, a 23 percent increase since 2006 and a 78 percent increase since 2002. In the 1960s, autism was believed to affect one in 10,000 children in the U.S.
Members of the House Oversight and Government Reform Committee questioned representatives of the National Institutes of Health and CDC about research priorities and subsequent results. A second panel of autism advocates testified about concerns ranging from research to services for people with autism. See the video here.
Numerous congressmen on the committee harshly criticized the NIH and CDC for a lack of effective research results, while agency officials at times struggled to come up with answers. The safety of vaccines was discussed, an issue that NIH and CDC insists is not linked to the rise in autism. However, many parents still steadfastly believe vaccines are one of the causes of the disorder. Members of the House committee recounted instances in which parents told them of children developmentally regressing immediately after being subjected to vaccines.
- Study: Traffic pollution, air quality associated with increased autism risk
- Memories of working with children with autism: School, sports, play, and fun
- Social skills and play date activities for girls with autism: Yoga, music, games
- Surfing camps offer instruction for children with autism and other special needs
- Tips to keep children with autism and other disabilities safe from sexual abuse
- Use flashcards to prompt reading and speech for children on autism spectrum
- Play Beatles songs as music therapy for children with autism and special needs
- Use Gardner’s Multiple Intelligences for special needs play date activities
- For students with autism and other disabilities, continuity enhances learning
To read excerpts from the articles on Examiner.com, click here.
Introduction to ASP
What Is ASP?
Microsoft Active Server Pages (ASP) is a server-side scripting environment that you can use to create and run dynamic, interactive Web server applications. With ASP, you can combine HTML pages, script commands, and COM components to create interactive Web pages or powerful Web-based applications, which are easy to develop and modify. For example, you can use the ActiveX Data Objects (ADO) components to add database connectivity to your Web pages.
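For example, a minimal ASP page that uses ADO to read from an Access database might look like the following sketch. The database path, table name, and column name are illustrative assumptions, not part of any particular application:

```asp
<%@ Language=VBScript %>
<html>
<body>
<%
' Illustrative sketch: "data/site.mdb" and the Articles table are assumptions.
Dim conn, rs
Set conn = Server.CreateObject("ADODB.Connection")
conn.Open "Provider=Microsoft.Jet.OLEDB.4.0;Data Source=" & _
          Server.MapPath("data/site.mdb")
Set rs = conn.Execute("SELECT Title FROM Articles")
Do While Not rs.EOF
    Response.Write Server.HTMLEncode(rs("Title")) & "<br>"
    rs.MoveNext
Loop
rs.Close
conn.Close
%>
</body>
</html>
```

Everything between <% and %> runs on the server; the browser receives only the generated HTML.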
How ASP Works
When you incorporate ASP into your Web site, here's what happens: when a browser requests an .asp file, the Web server hands the file to the ASP script engine, which executes any script commands in the page and sends the resulting standard HTML to the browser.
Because your script runs on the server, the Web server does all of the processing and standard HTML pages can be generated and sent to the browser. This means that your Web pages are limited only by what your Web server supports.
Internet Information Services (IIS)
To run ASP on your computer you will need the Internet Information Services (IIS) component installed on your machine.
IIS is a web server application and set of feature extension modules created by Microsoft for use with Microsoft Windows. IIS is not turned on by default when Windows is installed.
Read Internet Information Services for more information.
Local IIS Web Sites
A local Internet Information Services (IIS) Web site is an IIS Web application on your computer. Creating or opening a local IIS Web site is useful in the following situations:
To open an existing local IIS Web site, the preceding must be true as well as the following:
Running Local IIS Web Sites
Deploying Local IIS Web Sites
Creating Virtual Directories in IIS
In most cases, the content you publish to your Web or FTP site is located in a root or home directory on your computer, such as C:\Inetpub\wwwroot\. However, there might be instances when the content is located somewhere else, or even on a remote computer.
To publish from any directory not contained within your home or root directory, you can create a virtual directory. A virtual directory is a directory that is not contained in the home directory but appears to client browsers as though it were.
You can create a virtual directory through IIS Manager or by using Windows Explorer.
To create a virtual directory by using IIS Manager
To create a virtual directory by using Windows Explorer
Creating Virtual Directories in IIS 7 (Windows Vista or Later)
The IIS manager user interface consists of three panes.
The left hand side pane is Connections, the middle pane is Workspace and the right hand side pane is Actions.
The Connections pane lists application pools and websites. The workspace pane consists of two tabs at the bottom namely Features View and Content View. The Features View allows you to work with the settings of the selected item from Connections pane whereas the Content View displays all the child nodes (content) of the selected item.
Application pool is a group of IIS applications that are isolated from other application pools. Each application pool runs in its own worker process. Any problem with that process affects the applications residing in it and not the rest of the applications. You can configure application pools individually.
In order to create a new application pool, select "Application Pools" under Connections pane. Then click on "Add application pool" from Actions pane. This will open a dialog as shown below:
Specify a name for the new pool to be created. Select .NET framework version that all the applications from the pool will use. Also select pipeline mode. There are two pipeline modes viz. integrated and classic. The integrated mode uses the integrated request processing model whereas the classic mode uses the older request processing model. Click OK to create the application pool.
Your new application pool will now be displayed in the Workspace pane. To configure the application pool click on the "Advanced Settings" option under Actions pane. The following figure shows many of the configurable properties of an application pool.
If you use 64-bit Windows, set Enable 32-Bit Applications to True. (See Running Classic ASP on 64-bit Windows Operating System below.)

To create a new web site, select the Web Sites node under the Connections pane and then click on "Add Web Site" under the Actions pane. This opens a dialog as shown below:
Here, you can specify properties of the new web site, including its application pool and physical location.

Creating an IIS application or a virtual directory is quick and simple. Just right-click on the web site and choose either "Add Application" or "Add Virtual Directory" to open the respective dialogs (see below).
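The same operations are scriptable with the AppCmd.exe tool that ships with IIS 7 (found in %windir%\system32\inetsrv). The site name, paths, and folders below are placeholders:

```shell
appcmd add app /site.name:"Default Web Site" /path:/shop /physicalPath:C:\inetpub\shop
appcmd add vdir /app.name:"Default Web Site/" /path:/images /physicalPath:C:\data\images
```

The first command creates an IIS application under the default site; the second creates a plain virtual directory in the site's root application.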
An existing Virtual directory can be marked as an IIS application by right clicking on it and selecting "Convert to Application".
Once you create a website or an IIS application, you can then set several ASP related configuration properties via Workspace pane.
For example, go to ASP -> Debugging Properties -> Send Errors To Browser and set it to True so that detailed ASP error messages are returned to the browser during development.
You may encounter the following error messages when you run ASP pages with IIS 7:
1. Error message when you request an ASP page that connects to an Access database in IIS 7.0: "Microsoft JET Database Engine error '80004005'"
2. Error message when you request an ASP page: "An error occurred on the server when processing the URL. Please contact the system administrator"
To see the underlying error details instead of a generic page, go to Internet Options -> Advanced in Internet Explorer and disable "Show friendly HTTP error messages".
An important aspect of working with an Access .mdb file and file upload to a folder on the Web server is to correctly configure permissions.
When a Web application uses an Access database, the application must have Read permission to the .mdb file so the application can access the data. Additionally, the application must have Write permission to the folder that contains the .mdb file. Write permission is required because Access creates an additional file that has the extension .ldb in which it maintains information about database locks for concurrent users. The .ldb file is created at run time.
To use an Access database in an ASP Web application, you must configure the folder that contains the Access database to have both Read and Write permissions for the IIS user account.
The default anonymous IIS user depends on the IIS version. In IIS 5, it is IUSR_<MachineName>. In IIS 6.0 and IIS 7 it can be NETWORK SERVICE or IUSR. In IIS 7.5 it depends on the Application Pool; read Application Pool Identities for details.
If you specify a database path, ASPMaker also creates the database folder, but you may need to set the permissions yourself. To set permissions in the database folder,
Similarly, set permissions in the folder where the uploaded file and audit trail log file reside.
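From a command prompt, such permissions can also be granted with icacls. A sketch assuming the IUSR account and an App_Data folder (adjust the account and path to your IIS version and layout):

```shell
icacls C:\Inetpub\wwwroot\App_Data /grant "IUSR:(OI)(CI)M"
```

(OI)(CI) makes the grant inherit to files and subfolders, and M (Modify) covers both the Read access the .mdb needs and the Write access required for the .ldb lock file.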
Running Classic ASP on 64-bit Windows Operating System
Windows Server 2008 or Windows 7 64-bit (IIS 7.x)
On 64-bit Windows 2008/7, IIS 7.x can run both 32-bit and 64-bit worker processes simultaneously. To run 32-bit Web applications in IIS 7.x on 64-bit Windows all it needs is to assign the 32-bit applications to a separate application pool in IIS and turn on the Enable 32-Bit Applications switch for that application pool. To do this, open IIS Manager, open Application Pool, select the application pool, and then click Advanced Settings. In Enable 32-Bit Applications, select True.
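The same switch can be flipped from the command line with AppCmd; the pool name here is a placeholder:

```shell
appcmd set apppool "Classic ASP Pool" /enable32BitAppOnWin64:true
```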
Windows Server 2003 64-bit (IIS 6)
On 64-bit Windows 2003, although IIS 6 supports running both 64-bit and 32-bit worker processes, it doesn't support running in both modes simultaneously. By default IIS 6 is configured to run in native 64-bit mode and work only with 64-bit worker processes, which means you can only run 64-bit Web applications (for ASP.NET applications they can only target ASP.NET version 2.0 or higher) in the native mode. In order to run 32-bit Web applications you will need to set IIS 6 to run in 32-bit mode. Note: This means all your Web applications will now run in 32-bit mode.
To enable IIS 6 to run 32-bit worker processes follow these steps:
The following article explains the details of the changes in the behavior of IIS after configuring it to run 32-bit worker processes: Running 32-bit Applications on 64-bit Windows (IIS 6.0)
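In brief, the procedure Microsoft documents comes down to one adsutil.vbs command (run from the AdminScripts folder) followed by an IIS restart:

```shell
cscript %SYSTEMDRIVE%\inetpub\adminscripts\adsutil.vbs SET W3SVC/AppPools/Enable32bitAppOnWin64 1
iisreset
```

Setting the metabase property back to 0 returns IIS 6 to native 64-bit mode.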
©2001-2013 e.World Technology Ltd. All rights reserved.
Magic and version numbers
The first four bytes of every class file are always 0xCAFEBABE. This magic number makes Java class files easier to identify, because the odds are slim that non-class files would start with the same initial four bytes. The number is called magic because it can be pulled out of a hat by the file format designers. The only requirement is that it is not already being used by another file format that may be encountered in the real world. According to Patrick Naughton, a key member of the original Java team, the magic number was chosen "long before the name Java was ever uttered in reference to this language. We were looking for something fun, unique, and easy to remember. It is only a coincidence that 0xCAFEBABE, an oblique reference to the cute baristas at Peet's Coffee, was foreshadowing for the name Java."
The second four bytes of the class file contain the major and minor version numbers. These numbers identify the version of the class file format to which a particular class file adheres and allow JVMs to verify that the class file is loadable. Every JVM has a maximum version it can load, and JVMs will reject class files with later versions.
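As a sketch (not tied to any particular JVM implementation), the header layout described above can be checked with a few lines of Python. All multi-byte values in a class file are big-endian:

```python
import struct

def read_class_header(data: bytes):
    """Return (magic, minor_version, major_version) from the first
    eight bytes of a class file, verifying the magic number."""
    magic, minor, major = struct.unpack(">IHH", data[:8])
    if magic != 0xCAFEBABE:
        raise ValueError("not a Java class file")
    return magic, minor, major

# A fabricated header: the magic number, minor 0, major 52 (the Java 8 format)
header = struct.pack(">IHH", 0xCAFEBABE, 0, 52)
print(read_class_header(header))  # (3405691582, 0, 52)
```

Reading the same eight bytes from a real file on disk, e.g. open(path, "rb").read(8), yields the same tuple.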
The class file stores constants associated with its class or interface in the constant pool. Some constants that may be seen frolicking in the pool are literal strings, final variable values, class names, interface names, variable names and types, and method names and signatures. A method signature is its return type and set of argument types.
The constant pool is organized as an array of variable-length elements. Each constant occupies one element in the array. Throughout the class file, constants are referred to by the integer index that indicates their position in the array. The initial constant has an index of one, the second constant has an index of two, etc. The constant pool array is preceded by its array size, so JVMs will know how many constants to expect when loading the class file.
Each element of the constant pool starts with a one-byte tag specifying the type of constant at that position in the array. Once a JVM grabs and interprets this tag, it knows what follows the tag. For example, if a tag indicates the constant is a string, the JVM expects the next two bytes to be the string length. Following this two-byte length, the JVM expects to find length number of bytes, which make up the characters of the string.
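The tag-dispatch parsing just described can be sketched in Python. Only the two entry kinds this article discusses are handled (a UTF-8 string constant, tag value 1, and a class reference, tag value 7, per the JVM specification), and for simplicity the bytes are decoded as standard UTF-8 rather than the spec's modified UTF-8:

```python
import struct

CONSTANT_UTF8, CONSTANT_CLASS = 1, 7  # tag values defined by the JVM spec

def parse_constant(data, offset):
    """Parse one constant-pool entry; return (entry, offset_of_next_entry)."""
    tag = data[offset]
    if tag == CONSTANT_UTF8:
        # 1-byte tag, 2-byte big-endian length, then `length` bytes of text
        (length,) = struct.unpack_from(">H", data, offset + 1)
        text = data[offset + 3 : offset + 3 + length].decode("utf-8")
        return ("Utf8", text), offset + 3 + length
    if tag == CONSTANT_CLASS:
        # 1-byte tag, 2-byte index of the Utf8 entry holding the class name
        (name_index,) = struct.unpack_from(">H", data, offset + 1)
        return ("Class", name_index), offset + 3
    raise NotImplementedError(f"constant tag {tag} not handled in this sketch")

entry = bytes([CONSTANT_UTF8]) + struct.pack(">H", 16) + b"java/lang/Object"
print(parse_constant(entry, 0))  # (('Utf8', 'java/lang/Object'), 19)
```

Because entries have variable lengths, a loader must parse them in order, each parse telling it where the next entry begins, which is exactly why the tag comes first.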
In the remainder of the article I'll sometimes refer to the nth element of the constant pool array as constant_pool[n]. This makes sense to the extent the constant pool is organized like an array, but bear in mind that these elements have different sizes and types and that the first element has an index of one.
The first two bytes after the constant pool, the access flags, indicate whether or not this file defines a class or an interface, whether the class or interface is public or abstract, and (if it's a class and not an interface) whether the class is final.
The next two bytes, the this class component, are an index into the constant pool array. The constant referred to by this class, constant_pool[this_class], has two parts, a one-byte tag and a two-byte name index. The tag will equal CONSTANT_Class, a value that indicates this element contains information about a class or interface. Constant_pool[name_index] is a string constant containing the name of the class or interface.
The this class component provides a glimpse of how the constant pool is used. This class itself is just an index into the constant pool. When a JVM looks up constant_pool[this_class], it finds an element that identifies itself as a CONSTANT_Class with its tag. The JVM knows CONSTANT_Class elements always have a two-byte index into the constant pool, called name index, following their one-byte tag. So it looks up constant_pool[name_index] to get the string containing the name of the class or interface.
Following the this class component is the super class component, another two-byte index into the constant pool. Constant_pool[super_class] is a CONSTANT_Class element that points to the name of the super class from which this class descends.
The interfaces component starts with a two-byte count of the number of interfaces implemented by the class (or interface) defined in the file. Immediately following is an array that contains one index into the constant pool for each interface implemented by the class. Each interface is represented by a CONSTANT_Class element in the constant pool that points to the name of the interface.
The fields component starts with a two-byte count of the number of fields in this class or interface. A field is an instance or class variable of the class or interface. Following the count is an array of variable-length structures, one for each field. Each structure reveals information about one field such as the field's name, type, and, if it is a final variable, its constant value. Some information is contained in the structure itself, and some is contained in constant pool locations pointed to by the structure.
The only fields that appear in the list are those that were declared by the class or interface defined in the file; no fields inherited from superclasses or superinterfaces appear in the list.
The methods component starts with a two-byte count of the number of methods in the class or interface. This count includes only those methods that are explicitly defined by this class, not any methods that may be inherited from superclasses. Following the method count are the methods themselves.
The structure for each method contains several pieces of information about the method, including the method descriptor (its return type and argument list), the number of stack words required for the method's local variables, the maximum number of stack words required for the method's operand stack, a table of exceptions caught by the method, the bytecode sequence, and a line number table.
Bringing up the rear are the attributes, which give general information about the particular class or interface defined by the file. The attributes section has a two-byte count of the number of attributes, followed by the attributes themselves. For example, one attribute is the source code attribute; it reveals the name of the source file from which this class file was compiled. JVMs will silently ignore any attributes they don't recognize.
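The count-plus-length layout of the attributes section is what lets a JVM skip attributes it doesn't recognize: each attribute carries a two-byte name index and a four-byte length, so an unknown attribute can be stepped over byte-for-byte. The minimal sketch below parses a hypothetical single-attribute section; the name index value and info bytes are invented for illustration.

```python
import struct

# A hypothetical attributes section: a two-byte count, then for each
# attribute a two-byte name index, a four-byte length, and that many
# info bytes. The name index would point at a Utf8 constant such as
# "SourceFile"; the values here are made up.
attrs_bytes = struct.pack(">H", 1) + struct.pack(">HI", 4, 2) + b"\x00\x07"

def read_attributes(data):
    (count,) = struct.unpack_from(">H", data)
    i, attrs = 2, []
    for _ in range(count):
        name_index, length = struct.unpack_from(">HI", data, i)
        info = data[i + 6 : i + 6 + length]
        attrs.append((name_index, info))  # unrecognized: keep or skip bytes
        i += 6 + length                   # length lets us step over safely
    return attrs

print(read_attributes(attrs_bytes))       # [(4, b'\x00\x07')]
```

Because the length field is always present, a JVM written before a new attribute was invented can still load class files that contain it.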
Islam in Albania
During Ottoman rule, the majority of Albanians converted to Islam (Sunni and Bektashi). However, decades of state atheism, which ended in 1991, brought a decline in religious practice in all traditions.
A recent Pew Research Center demographic study put the percentage of Muslims in Albania at 79.9%. However, a recent Gallup poll gives percentages of religious affiliations with only 43% Muslim, 19% Eastern Orthodox, 15% Catholic and 23% atheist or nonreligious. In the 2011 census the declared religious affiliation of the population was: 56.70% Muslims, 2.09% Bektashis, 10.03% Catholics, 6.75% Orthodox, 0.14% Evangelists, 0.07% other Christians, 5.49% believers without denomination, 2.50% Atheists, 13.79% undeclared.
Ottoman period
Islam came to Albania with Ottoman rule in the 14th century and confronted Christianity. In the north, the spread of Islam was slower, owing to resistance from the Roman Catholic Church and to the mountainous terrain, which helped to curb Muslim influence. In the center and south, however, by the end of the seventeenth century the urban centers had largely adopted the religion of the growing Albanian Muslim elite. Joining the Albanian Muslim class of pashas and beys, who played an increasingly important role in Ottoman political and economic life, became an attractive career path for many Albanians.
The Muslims of Albania were divided into two main communities: those associated with Sunni Islam and those associated with the Bektashi Sufis, a mystic Dervish order that came to Albania during the Ottoman period, primarily during the 18th and 19th centuries. The Bektashi sect is considered heretical by most mainstream Muslims. Historically Sunni Islam found its strongest base in northern and central Albania, while Bektashis were found primarily in the Tosk lands of the south.
During Ottoman rule the Albanian population gradually began to convert to Islam, partly through the teachings of Bektashism and partly to gain considerable advantages in the Ottoman trade networks, bureaucracy, and army. Many Albanians were recruited into the Janissary corps through the devşirme levy, and 42 Grand Viziers of the Ottoman Empire were of Albanian origin. The most prominent Albanians during Ottoman rule included Davud Pasha, Hamza Kastrioti, Iljaz Hoxha, Nezim Frakulla, Köprülü Mehmed Pasha, Ali Pasha, Edhem Pasha, Haxhi Shehreti, Ali Pasha of Gucia, Ibrahim Pasha of Berat, Köprülü Fazıl Ahmed, Muhammad Ali of Egypt, Kara Mahmud Bushati, and Ahmet Kurt Pasha.
The country won its independence from the Ottoman Empire in 1912. Following the tenets of the National Renaissance and a general lack of religious conviction, the democratic, monarchic, and later communist regimes of the 20th century pursued a systematic de-religionization of the nation and the national culture. As a result of this policy, Islam, like all other faiths in the country, underwent radical changes.
In 1923, following the government program, the Albanian Muslim congress convened at Tirana decided to break with the Caliphate, to establish a new form of prayer (standing, instead of the traditional salah ritual), and to ban both polygamy and the mandatory wearing of the veil (hijab) by women in public, practices the Ottomans had imposed on the urban population.
The Muslim clergy, like the Catholic and Orthodox clergy, was totally eradicated during the communist regime of Enver Hoxha, who declared Albania the only non-religious country in the world, banning all forms of religious practice in public in 1967.
See also
- Miller, Tracy, ed. (October 2009), Mapping the Global Muslim Population: A Report on the Size and Distribution of the World’s Muslim Population (PDF), Pew Research Center, retrieved 2009-10-08
- Albanian census 2011
- John Hutchinson, Anthony D. Smith, "Nationalism: Critical Concepts in Political Science"
- Albania dispatch, Time magazine, April 14, 1923
- Official website of the OIC
- The Muslim Forum of Albania
- Albanian Institute of Islamic Thought & Civilization
- The Bektashi Community
- Muslim Albania
As the world edged into financial crisis, there were repeated warnings that we were headed for disaster. In the end, disaster struck. In many ways, the challenge of climate change has a similar feel, and the alarm bells are ringing just as loudly. But while it was possible to bail out the banks and to stimulate economic recovery with trillions of dollars of public finance, it will not be possible to bail out the climate—unless we act now.
Yet even though the basic science of climate change has been accepted by almost all scientists, many others still seem to think that it is unfounded, and that the world has more important questions to address. Reducing poverty, increasing food production, combating terrorism, and sustaining economic recovery are seen as more deserving of our attention. But this is a false choice, for climate change is not an alternative priority to all of these; it is in fact a "risk multiplier," a factor that will undermine our ability to achieve any of these things.
For example, ending poverty so that every person has the opportunity to lead a good life is already a hugely challenging ambition, and rapid climate change will make it more so. Several studies have set out how climatic change will threaten economic development, especially in the most vulnerable and poorest countries. This will, in turn, damage programs to reduce poverty.
Food security is already at risk because of soil erosion and the volatility of oil and gas prices that sustain industrial farming, while demand is rising because of population growth and changing diets. Climate change will exacerbate this squeeze. According to a United Nations Environment Program projection, agricultural productivity could drop by up to 50 percent in many developing countries by 2080—not least because of changed patterns of rainfall.
These environmental stresses are likely to heighten social tensions. If in the future it becomes clear that the world's big polluters knew but did little or nothing about these problems, a whole new generation of resentment might be born.
With this in mind, it seems to me that we need to adopt a new approach. Surely the starting point must be to see the world as it really is, and perhaps to accept that the economy is a wholly owned subsidiary of Nature and not the other way around. Nature is, after all, the capital that underpins capitalism. The world's tropical rainforests provide a powerful case in point.
These incredible ecosystems harbor more than half the earth's terrestrial biodiversity, on which, whether we like it or not, human survival depends. They generate rainfall; they are home to many of the world's indigenous peoples; and they help meet the needs of hundreds of millions of other people. They also hold vast quantities of carbon. But they are being cleared and burned at a rate of about 6 million hectares per year. In addition to hastening a mass extinction of species—many of which could hold the answer to the treatment of human diseases as well as the key to new technologies based on mimicking Nature's genius—this is causing massive greenhouse-gas emissions, accounting for about a fifth of the total.
This is precisely why my Rainforests Project has expended so much effort during these last two years to help facilitate a consensus on increasing international cooperation to cut deforestation. Back in April, I was able to host a meeting of world leaders at St. James's Palace in London, in the margins of the G20 summit, where it was agreed to establish a new informal working group to look at how rates of deforestation could be slowed as rapidly as possible. The group came back with recommendations just a few weeks ago, and it is enormously heartening to see the degree of partnership that has developed between countries, environmental groups, and companies that are determined to work together toward implementing the proposals for dealing with the underlying economic root causes of deforestation.
Through providing countries with financial rewards for their positive performance in cutting deforestation (or for not starting it in the first place), we would make it possible for rainforest nations to implement strategies for sustainable development more quickly and without having to rely so heavily on the kind of economic activities that cause deforestation. By using—in addition to public-sector finance—innovative, long-term investment instruments, perhaps facilitated by the multilateral development banks, we could restore vast areas of already degraded land to increase food output. At the same time, money would be available for new health and education programs, as well as genuinely integrated rural-development models. In return, the world would sustain the vital ecosystem services upon which we all rely for our economic, physical, and spiritual survival.
The idea that the world should pay in some way for the essential utility services provided by the rainforests (after all, we already pay for our water, gas, and electricity) is not a new one. But there does, at last, appear to be agreement that this is one way we can quickly begin to reduce emissions and, thus, buy urgently needed time in the battle against catastrophic climate change. Through a constructive process, countries have been able to find a mutually agreeable approach that I hope, in the months ahead, will lead to the kind of international cooperation that could make a decisive difference.
While initiatives like this will need to be a part of the solution, they are not, I believe, the whole answer. In some ways the climate challenge is not first and foremost due to an absence of sound policy ideas or technology, but more a crisis of perception. As we have become progressively more separate from Nature, and more reliant on technological inventiveness to solve our problems, we have become less able to see our predicament for what it really is—namely as being utterly out of balance, having lost any sense of harmony with the earth's natural rhythms, cycles, and finite systems. The fact that we generally regard economics as being separate from Nature is just one, albeit quite fundamental, sign of this imbalance.
Forging a reconnection with Nature and reintegrating our societies and economies with her capacities is, as far as I can see, the real challenge to which we must rise. The Copenhagen summit will, I hope, contribute to a shift at this deeper level, as well as set out the plan for transition to a low-carbon economy based on official targets, policies, and technologies. As things stand, the world is not short of all these—what it does lack, however, is a mindset fit for the situation we face.
While time may not be on our side, our ability to cooperate and innovate to find solutions appears to be with us still. We have in the past faced huge challenges and prevailed. This time the challenge seems greater than ever before, but I hope with all my heart that in Copenhagen we will be able to exploit these very human attributes to the full. It is the very least we can do for future generations.
A blood smear is a blood test that gives information about the number and shape of blood cells.
How the Test is Performed
Blood is typically drawn from a vein, usually from the inside of the elbow or the back of the hand. The site is cleaned with germ-killing medicine (antiseptic). The health care provider wraps an elastic band around the upper arm to apply pressure to the area and make the vein swell with blood.
Next, the health care provider gently inserts a needle into the vein. The blood collects into an airtight vial or tube attached to the needle. The elastic band is removed from your arm.
Once the blood has been collected, the needle is removed, and the puncture site is covered to stop any bleeding.
In infants or young children, a sharp tool called a lancet may be used to puncture the skin and make it bleed. The blood collects into a small glass tube called a pipette, or onto a slide or test strip. A bandage may be placed over the area if there is any bleeding.
The blood sample is sent to a lab, where the health care professional looks at it under a microscope. Or, the blood may be examined by an automated machine. The smear shows the number and kinds of white blood cells (differential), abnormally shaped blood cells, and gives a rough estimate of white blood cell and platelet counts.
How to Prepare for the Test
No special preparation is necessary.
How the Test Will Feel
When the needle is inserted to draw blood, some people feel moderate pain. Others feel only a prick or stinging sensation. Afterward, there may be some throbbing.
Why the Test is Performed
This test may be performed as part of a general health exam to help diagnose many illnesses. Or, your doctor may order this test if you have signs of a blood disorder.
Other conditions under which the test may be performed:
Red blood cells are normally similar in size and color and have a lighter-colored area in the center. The blood smear is considered normal if there is:
- Normal appearance of cells
- Normal white blood cell differential
Normal value ranges may vary slightly among different laboratories. Talk to your doctor about the meaning of your specific test results.
What Abnormal Results Mean
Abnormal results mean there is an abnormality in the size, shape, color, or coating of the red blood cells.
Some abnormalities may be graded on a 4-point scale:
- 1+ means 25% of cells are affected
- 2+ means half of cells are affected
- 3+ means 75% of cells are affected
- 4+ means all of the cells are affected
The presence of target cells may be due to:
The presence of sphere-shaped cells (spherocytes) may be due to:
The presence of elliptocytes may be a sign of hereditary elliptocytosis or hereditary ovalocytosis.
The presence of fragmented cells (schistocytes) may be due to:
The presence of a type of immature red blood cell called a normoblast may be due to:
The presence of burr cells (echinocytes) may indicate:
The presence of spur cells (acanthocytes) may indicate:
The presence of teardrop-shaped cells may indicate:
- Leukoerythroblastic anemia
- Severe iron deficiency
- Thalassemia major
The presence of Howell-Jolly bodies may indicate:
The presence of Heinz bodies may indicate:
- Alpha thalassemia
- Congenital hemolytic anemia
- G6PD deficiency
- Unstable form of hemoglobin
The presence of slightly immature red blood cells (reticulocytes) may indicate:
- Anemia with bone marrow recovery
- Hemolytic anemia
The presence of basophilic stippling may indicate:
The presence of sickle cells may indicate sickle cell anemia.
Veins and arteries vary in size from one patient to another and from one side of the body to the other. Obtaining a blood sample from some people may be more difficult than from others.
Other risks associated with having blood drawn are slight but may include:
- Excessive bleeding
- Fainting or feeling light-headed
- Hematoma (blood accumulating under the skin)
- Infection (a slight risk any time the skin is broken)
The accuracy of this test depends, in part, on the experience of the person looking at the sample. Experienced cell examiners can get a lot of information from the blood smear.
Newland J. The peripheral blood smear. In: Goldman L, Ausiello D, eds. Cecil Medicine. 23rd ed. Philadelphia, Pa: Saunders Elsevier; 2007:chap 161.
David C. Dugdale, III, MD, Professor of Medicine, Division of General Medicine, Department of Medicine, University of Washington School of Medicine. Also reviewed by David Zieve, MD, MHA, Medical Director, A.D.A.M., Inc.
How to grow Erysimum
From the Greek erus, to draw up; some species are said to produce blisters (Cruciferae). Alpine wallflower. Hardy annual, biennial and perennial plants, closely related to Cheiranthus. Some are rather weedy, but others make good edging plants for a perennial border, or on gravelly banks and retaining walls.
Annual species cultivated E. perofskianum, 1 foot, reddish-orange, summer, Afghanistan.
Biennial E. allionii see Cheiranthus allionii. E. arkansanum, 1½-2 feet, golden-yellow, July to October, Arkansas and Texas. E. asperum, 1 foot, vivid orange, early summer, North America. E. linifolium (syn. Cheiranthus linifolius), 1-1¼ feet, rosy-lilac, early summer, Spain.
Perennial E. dubium (syn. E. ochroleucum), 1 foot, pale yellow, April to July, Europe. E. rupestre, 1 foot, sulphur-yellow, spring, Asia Minor.
Cultivation The alpine wallflowers like ordinary soil in dryish, sunny beds or in the rock garden. Propagation of annuals is by seed sown in April where the plants are to flower; biennials by seed sown out of doors, in June in a sunny place, transplanting the seedlings to their flowering positions in August; perennials by seed sown in a similar manner or by division in March or April, or by cuttings inserted in sandy soil in August in a cold propagating frame.
People’s history: No longer can these display prints, which once occupied pride of place in middle-class Indian homes, be written off as either kitsch or banality. They represent the common man's response to the momentous events of the late 19th and mid-20th centuries that climaxed in the Independence of India. Bharat Mata: India's Freedom Movement in Popular Art by Erwin Neumayer and Christine Schelberger (Oxford, Rs 2,750) presents a gallery of popular prints mass-produced (many, ironically, in Europe) to rouse Indians from a deep slumber to take up arms and protect a nation visualized as the Mother, for whom her worthy sons would sacrifice themselves. Bharat Mata herself served as a milkmaid decanting the nourishing beverage for Britannia, and carried the body of Gandhi in her lap, Pietà fashion. The Calcutta prints of Jinnah's burial and of Chacha Nehru present the two strands of this grand narrative.
- n. Indian chief and founder of the Powhatan confederacy of tribes in eastern Virginia; father of Pocahontas (1550?-1618)
“Susquehannocks, and as brave as the children of Wahunsonacock.”
“Wahunsonacock, who was chief of all the Powhatans, sits now within his wigwam, sharpening flints for his arrows, making his tomahawk bright and keen, thinking of a day three suns hence, when the tribes will shake off forever the hand upon their shoulder, -- the hand so heavy and white that strives always to bend them to the earth and keep them there. ”
“Nantauquas, the son of Wahunsonacock, a war chief of the Powhatans.”
We have, in the next place, to treat of Memory and Remembering, considering its nature, its cause, and the part of the soul to which this experience, as well as that of Recollecting, belongs. For the persons who possess a retentive memory are not identical with those who excel in power of recollection; indeed, as a rule, slow people have a good memory, whereas those who are quick-witted and clever are better at recollecting.
We must first form a true conception of these objects of memory, a point on which mistakes are often made. Now to remember the future is not possible, but this is an object of opinion or expectation (and indeed there might be actually a science of expectation, like that of divination, in which some believe); nor is there memory of the present, but only sense-perception. For by the latter we know not the future, nor the past, but the present only. But memory relates to the past. No one would say that he remembers the present, when it is present, e.g. a given white object at the moment when he sees it; nor would one say that he remembers an object of scientific contemplation at the moment when he is actually contemplating it, and has it full before his mind;-of the former he would say only that he perceives it, of the latter only that he knows it. But when one has scientific knowledge, or perception, apart from the actualizations of the faculty concerned, he thus 'remembers' (that the angles of a triangle are together equal to two right angles); as to the former, that he learned it, or thought it out for himself, as to the latter, that he heard, or saw, it, or had some such sensible experience of it. For whenever one exercises the faculty of remembering, he must say within himself, 'I formerly heard (or otherwise perceived) this,' or 'I formerly had this thought'.
Memory is, therefore, neither Perception nor Conception, but a state or affection of one of these, conditioned by lapse of time. As already observed, there is no such thing as memory of the present while present, for the present is object only of perception, and the future, of expectation, but the object of memory is the past. All memory, therefore, implies a time elapsed; consequently only those animals which perceive time remember, and the organ whereby they perceive time is also that whereby they remember.
The subject of 'presentation' has been already considered in our work On the Soul. Without a presentation intellectual activity is impossible. For there is in such activity an incidental affection identical with one also incidental in geometrical demonstrations. For in the latter case, though we do not for the purpose of the proof make any use of the fact that the quantity in the triangle (for example, which we have drawn) is determinate, we nevertheless draw it determinate in quantity. So likewise when one exerts the intellect (e.g. on the subject of first principles), although the object may not be quantitative, one envisages it as quantitative, though he thinks it in abstraction from quantity; while, on the other hand, if the object of the intellect is essentially of the class of things that are quantitative, but indeterminate, one envisages it as if it had determinate quantity, though subsequently, in thinking it, he abstracts from its determinateness. Why we cannot exercise the intellect on any object absolutely apart from the continuous, or apply it even to non-temporal things unless in connexion with time, is another question. Now, one must cognize magnitude and motion by means of the same faculty by which one cognizes time (i.e. by that which is also the faculty of memory), and the presentation (involved in such cognition) is an affection of the sensus communis; whence this follows, viz. that the cognition of these objects (magnitude, motion time) is effected by the (said sensus communis, i.e. the) primary faculty of perception. Accordingly, memory (not merely of sensible, but) even of intellectual objects involves a presentation: hence we may conclude that it belongs to the faculty of intelligence only incidentally, while directly and essentially it belongs to the primary faculty of sense-perception.
Hence not only human beings and the beings which possess opinion or intelligence, but also certain other animals, possess memory. If memory were a function of (pure) intellect, it would not have been as it is an attribute of many of the lower animals, but probably, in that case, no mortal beings would have had memory; since, even as the case stands, it is not an attribute of them all, just because all have not the faculty of perceiving time. Whenever one actually remembers having seen or heard, or learned, something, he includes in this act (as we have already observed) the consciousness of 'formerly'; and the distinction of 'former' and 'latter' is a distinction in time.
Accordingly if asked, of which among the parts of the soul memory is a function, we reply: manifestly of that part to which 'presentation' appertains; and all objects capable of being presented (viz. aistheta) are immediately and properly objects of memory, while those (viz. noeta) which necessarily involve (but only involve) presentation are objects of memory incidentally.
One might ask how it is possible that though the affection (the presentation) alone is present, and the (related) fact absent, the latter-that which is not present-is remembered. (The question arises), because it is clear that we must conceive that which is generated through sense-perception in the sentient soul, and in the part of the body which is its seat-viz. that affection the state whereof we call memory-to be some such thing as a picture. The process of movement (sensory stimulation) involved the act of perception stamps in, as it were, a sort of impression of the percept, just as persons do who make an impression with a seal. This explains why, in those who are strongly moved owing to passion, or time of life, no mnemonic impression is formed; just as no impression would be formed if the movement of the seal were to impinge on running water; while there are others in whom, owing to the receiving surface being frayed, as happens to (the stucco on) old (chamber) walls, or owing to the hardness of the receiving surface, the requisite impression is not implanted at all. Hence both very young and very old persons are defective in memory; they are in a state of flux, the former because of their growth, the latter, owing to their decay. In like manner, also, both those who are too quick and those who are too slow have bad memories. The former are too soft, the latter too hard (in the texture of their receiving organs), so that in the case of the former the presented image (though imprinted) does not remain in the soul, while on the latter it is not imprinted at all.
But then, if this truly describes what happens in the genesis of memory, (the question stated above arises:) when one remembers, is it this impressed affection that he remembers, or is it the objective thing from which this was derived? If the former, it would follow that we remember nothing which is absent; if the latter, how is it possible that, though perceiving directly only the impression, we remember that absent thing which we do not perceive? Granted that there is in us something like an impression or picture, why should the perception of the mere impression be memory of something else, instead of being related to this impression alone? For when one actually remembers, this impression is what he contemplates, and this is what he perceives. How then does he remember what is not present? One might as well suppose it possible also to see or hear that which is not present. In reply, we suggest that this very thing is quite conceivable, nay, actually occurs in experience. A picture painted on a panel is at once a picture and a likeness: that is, while one and the same, it is both of these, although the 'being' of both is not the same, and one may contemplate it either as a picture, or as a likeness. Just in the same way we have to conceive that the mnemonic presentation within us is something which by itself is merely an object of contemplation, while, in-relation to something else, it is also a presentation of that other thing. In so far as it is regarded in itself, it is only an object of contemplation, or a presentation; but when considered as relative to something else, e.g. as its likeness, it is also a mnemonic token. 
Hence, whenever the residual sensory process implied by it is actualized in consciousness, if the soul perceives this in so far as it is something absolute, it appears to occur as a mere thought or presentation; but if the soul perceives it qua related to something else, then,-just as when one contemplates the painting in the picture as being a likeness, and without having (at the moment) seen the actual Koriskos, contemplates it as a likeness of Koriskos, and in that case the experience involved in this contemplation of it (as relative) is different from what one has when he contemplates it simply as a painted figure-(so in the case of memory we have the analogous difference for), of the objects in the soul, the one (the unrelated object) presents itself simply as a thought, but the other (the related object) just because, as in the painting, it is a likeness, presents itself as a mnemonic token.
We can now understand why it is that sometimes, when we have such processes, based on some former act of perception, occurring in the soul, we do not know whether this really implies our having had perceptions corresponding to them, and we doubt whether the case is or is not one of memory. But occasionally it happens that (while thus doubting) we get a sudden idea and recollect that we heard or saw something formerly. This (occurrence of the 'sudden idea') happens whenever, from contemplating a mental object as absolute, one changes his point of view, and regards it as relative to something else.
The opposite (sc. to the case of those who at first do not recognize their phantasms as mnemonic) also occurs, as happened in the cases of Antipheron of Oreus and others suffering from mental derangement; for they were accustomed to speak of their mere phantasms as facts of their past experience, and as if remembering them. This takes place whenever one contemplates what is not a likeness as if it were a likeness.
Mnemonic exercises aim at preserving one's memory of something by repeatedly reminding him of it; which implies nothing else (on the learner's part) than the frequent contemplation of something (viz. the 'mnemonic', whatever it may be) as a likeness, and not as out of relation.
As regards the question, therefore, what memory or remembering is, it has now been shown that it is the state of a presentation, related as a likeness to that of which it is a presentation; and as to the question of which of the faculties within us memory is a function, (it has been shown) that it is a function of the primary faculty of sense-perception, i.e. of that faculty whereby we perceive time.
Next comes the subject of Recollection, in dealing with which we must assume as fundamental the truths elicited above in our introductory discussions. For recollection is not the 'recovery' or 'acquisition' of memory; since at the instant when one at first learns (a fact of science) or experiences (a particular fact of sense), he does not thereby 'recover' a memory, inasmuch as none has preceded, nor does he acquire one ab initio. It is only at the instant when the aforesaid state or affection (of the aisthesis or upolepsis) is implanted in the soul that memory exists, and therefore memory is not itself implanted concurrently with the continuous implantation of the (original) sensory experience.
Further: at the very individual and concluding instant when first (the sensory experience or scientific knowledge) has been completely implanted, there is then already established in the person affected the (sensory) affection, or the scientific knowledge (if one ought to apply the term 'scientific knowledge' to the (mnemonic) state or affection; and indeed one may well remember, in the 'incidental' sense, some of the things (i.e. ta katholou) which are properly objects of scientific knowledge); but to remember, strictly and properly speaking, is an activity which will not be immanent until the original experience has undergone lapse of time. For one remembers now what one saw or otherwise experienced formerly; the moment of the original experience and the moment of the memory of it are never identical.
Again, (even when time has elapsed, and one can be said really to have acquired memory, this is not necessarily recollection, for firstly) it is obviously possible, without any present act of recollection, to remember as a continued consequence of the original perception or other experience; whereas when (after an interval of obliviscence) one recovers some scientific knowledge which he had before, or some perception, or some other experience, the state of which we above declared to be memory, it is then, and then only, that this recovery may amount to a recollection of any of the things aforesaid. But, (though as observed above, remembering does not necessarily imply recollecting), recollecting always implies remembering, and actualized memory follows (upon the successful act of recollecting).
But secondly, even the assertion that recollection is the reinstatement in consciousness of something which was there before but had disappeared requires qualification. This assertion may be true, but it may also be false; for the same person may twice learn (from some teacher), or twice discover (i.e. excogitate), the same fact.
Accordingly, the act of recollecting ought (in its definition) to be distinguished from these acts; i.e. recollecting must imply in those who recollect the presence of some spring over and above that from which they originally learn.
Acts of recollection, as they occur in experience, are due to the fact that one movement has by nature another that succeeds it in regular order.
If this order be necessary, whenever a subject experiences the former of two movements thus connected, it will (invariably) experience the latter; if, however, the order be not necessary, but customary, only in the majority of cases will the subject experience the latter of the two movements. But it is a fact that there are some movements, by a single experience of which persons take the impress of custom more deeply than they do by experiencing others many times; hence upon seeing some things but once we remember them better than others which we may have seen frequently.
Whenever therefore, we are recollecting, we are experiencing certain of the antecedent movements until finally we experience the one after which customarily comes that which we seek. This explains why we hunt up the series (of kineseis) having started in thought either from a present intuition or some other, and from something either similar, or contrary, to what we seek, or else from that which is contiguous with it. Such is the empirical ground of the process of recollection; for the mnemonic movements involved in these starting-points are in some cases identical, in others, again, simultaneous, with those of the idea we seek, while in others they comprise a portion of them, so that the remnant which one experienced after that portion (and which still requires to be excited in memory) is comparatively small.
Thus, then, it is that persons seek to recollect, and thus, too, it is that they recollect even without the effort of seeking to do so, viz. when the movement implied in recollection has supervened on some other which is its condition. For, as a rule, it is when antecedent movements of the classes here described have first been excited, that the particular movement implied in recollection follows.
We need not examine a series of which the beginning and end lie far apart, in order to see how (by recollection) we remember; one in which they lie near one another will serve equally well. For it is clear that the method is in each case the same, that is, one hunts up the objective series, without any previous search or previous recollection. For (there is, besides the natural order, viz. the order of the pragmata, or events of the primary experience, also a customary order, and) by the effect of custom the mnemonic movements tend to succeed one another in a certain order. Accordingly, therefore, when one wishes to recollect, this is what he will do: he will try to obtain a beginning of movement whose sequel shall be the movement which he desires to reawaken. This explains why attempts at recollection succeed soonest and best when they start from a beginning (of some objective series). For, in order of succession, the mnemonic movements are to one another as the objective facts (from which they are derived). Accordingly, things arranged in a fixed order, like the successive demonstrations in geometry, are easy to remember (or recollect) while badly arranged subjects are remembered with difficulty.
Recollecting differs also in this respect from relearning, that one who recollects will be able, somehow, to move, solely by his own effort, to the term next after the starting-point. When one cannot do this of himself, but only by external assistance, he no longer remembers (i.e. he has totally forgotten, and therefore of course cannot recollect). It often happens that, though a person cannot recollect at the moment, yet by seeking he can do so, and discovers what he seeks. This he succeeds in doing by setting up many movements, until finally he excites one of a kind which will have for its sequel the fact he wishes to recollect. For remembering (which is the condicio sine qua non of recollecting) is the existence, potentially, in the mind of a movement capable of stimulating it to the desired movement, and this, as has been said, in such a way that the person should be moved (prompted to recollection) from within himself, i.e. in consequence of movements wholly contained within himself.
But one must get hold of a starting-point. This explains why it is that persons are supposed to recollect sometimes by starting from mnemonic loci. The cause is that they pass swiftly in thought from one point to another, e.g. from milk to white, from white to mist, and thence to moist, from which one remembers Autumn (the 'season of mists'), if this be the season he is trying to recollect.
It seems true in general that the middle point also among all things is a good mnemonic starting-point from which to reach any of them. For if one does not recollect before, he will do so when he has come to this, or, if not, nothing can help him; as, e.g. if one were to have in mind the numerical series denoted by the symbols A, B, G, D, E, Z, I, H, O. For, if he does not remember what he wants at E, then at E he remembers O; because from E movement in either direction is possible, to D or to Z. But, if it is not for one of these that he is searching, he will remember (what he is searching for) when he has come to G if he is searching for H or I. But if (it is) not (for H or I that he is searching, but for one of the terms that remain), he will remember by going to A, and so in all cases (in which one starts from a middle point). The cause of one's sometimes recollecting and sometimes not, though starting from the same point, is, that from the same starting-point a movement can be made in several directions, as, for instance, from G to I or to D. If, then, the mind has not (when starting from E) moved in an old path (i.e. one in which it moved first having the objective experience, and that, therefore, in which un-'ethized' phusis would have it again move), it tends to move to the more customary; for (the mind having, by chance or otherwise, missed moving in the 'old' way) Custom now assumes the role of Nature. Hence the rapidity with which we recollect what we frequently think about. For as regular sequence of events is in accordance with nature, so, too, regular sequence is observed in the actualization of kinesis (in consciousness), and here frequency tends to produce (the regularity of) nature. And since in the realm of nature occurrences take place which are even contrary to nature, or fortuitous, the same happens a fortiori in the sphere swayed by custom, since in this sphere natural law is not similarly established.
Hence it is that (from the same starting-point) the mind receives an impulse to move sometimes in the required direction, and at other times otherwise, (doing the latter) particularly when something else somehow deflects the mind from the right direction and attracts it to itself. This last consideration explains too how it happens that, when we want to remember a name, we remember one somewhat like it, indeed, but blunder in reference to (i.e. in pronouncing) the one we intended.
Thus, then, recollection takes place. But the point of capital importance is that (for the purpose of recollection) one should cognize, determinately or indeterminately, the time-relation (of that which he wishes to recollect). There is, let it be taken as a fact, something by which one distinguishes a greater and a smaller time; and it is reasonable to think that one does this in a way analogous to that in which one discerns (spatial) magnitudes. For it is not by the mind's reaching out towards them, as some say a visual ray from the eye does (in seeing), that one thinks of large things at a distance in space (for even if they are not there, one may similarly think them); but one does so by a proportionate mental movement. For there are in the mind the like figures and movements (i.e. 'like' to those of objects and events).
Therefore, when one thinks the greater objects, in what will his thinking those differ from his thinking the smaller? (In nothing,) because all the internal though smaller are as it were proportional to the external. Now, as we may assume within a person something proportional to the forms (of distant magnitudes), so, too, we may doubtless assume also something else proportional to their distances. As, therefore, if one has (psychically) the movement in AB, BE, he constructs in thought (i.e. knows objectively) GD, since AG and GD bear equal ratios respectively (to AB and BE), (so he who recollects also proceeds). Why then does he construct GD rather than ZH? Is it not because as AG is to AB, so is O to I? These movements therefore (sc. in AB, BE, and in O:I) he has simultaneously. But if he wishes to construct in thought ZH, he has in mind BE in like manner as before (when constructing GD), but now, instead of (the movements of the ratio) O:I, he has in mind (those of the ratio) K:L; for K:L :: ZA:BA. (See diagram.)
When, therefore, the 'movement' corresponding to the object and that corresponding to its time concur, then one actually remembers. If one supposes (himself to move in these different but concurrent ways) without really doing so, he supposes himself to remember.
For one may be mistaken, and think that he remembers when he really does not. But it is not possible, conversely, that when one actually remembers he should not suppose himself to remember, but should remember unconsciously. For remembering, as we have conceived it, essentially implies consciousness of itself. If, however, the movement corresponding to the objective fact takes place without that corresponding to the time, or, if the latter takes place without the former, one does not remember.
The movement answering to the time is of two kinds. Sometimes in remembering a fact one has no determinate time-notion of it, no such notion as that e.g. he did something or other on the day before yesterday; while in other cases he has a determinate notion of the time. Still, even though one does not remember with actual determination of the time, he genuinely remembers, none the less.
Persons are wont to say that they remember (something), but yet do not know when (it occurred, as happens) whenever they do not know determinately the exact length of time implied in the 'when'.
It has been already stated that those who have a good memory are not identical with those who are quick at recollecting. But the act of recollecting differs from that of remembering, not only chronologically, but also in this, that many also of the other animals (as well as man) have memory, but, of all that we are acquainted with, none, we venture to say, except man, shares in the faculty of recollection. The cause of this is that recollection is, as it were a mode of inference. For he who endeavours to recollect infers that he formerly saw, or heard, or had some such experience, and the process (by which he succeeds in recollecting) is, as it were, a sort of investigation. But to investigate in this way belongs naturally to those animals alone which are also endowed with the faculty of deliberation; (which proves what was said above), for deliberation is a form of inference.
That the affection is corporeal, i.e. that recollection is a searching for an 'image' in a corporeal substrate, is proved by the fact that in some persons, when, despite the most strenuous application of thought, they have been unable to recollect, it (viz. the anamnesis = the effort at recollection) excites a feeling of discomfort, which, even though they abandon the effort at recollection, persists in them none the less; and especially in persons of melancholic temperament. For these are most powerfully moved by presentations. The reason why the effort of recollection is not under the control of their will is that, as those who throw a stone cannot stop it at their will when thrown, so he who tries to recollect and 'hunts' (after an idea) sets up a process in a material part, (that) in which resides the affection. Those who have moisture around that part which is the centre of sense-perception suffer most discomfort of this kind. For when once the moisture has been set in motion it is not easily brought to rest, until the idea which was sought for has again presented itself, and thus the movement has found a straight course. For a similar reason bursts of anger or fits of terror, when once they have excited such motions, are not at once allayed, even though the angry or terrified persons (by efforts of will) set up counter motions, but the passions continue to move them on, in the same direction as at first, in opposition to such counter motions. The affection resembles also that in the case of words, tunes, or sayings, whenever one of them has become inveterate on the lips. People give them up and resolve to avoid them; yet again they find themselves humming the forbidden air, or using the prohibited word. 
Those whose upper parts are abnormally large, as is the case with dwarfs, have abnormally weak memory, as compared with their opposites, because of the great weight which they have resting upon the organ of perception, and because their mnemonic movements are, from the very first, not able to keep true to a course, but are dispersed, and because, in the effort at recollection, these movements do not easily find a direct onward path. Infants and very old persons have bad memories, owing to the amount of movement going on within them; for the latter are in process of rapid decay, the former in process of vigorous growth; and we may add that children, until considerably advanced in years, are dwarf-like in their bodily structure. Such, then, is our theory as regards memory and remembering, their nature, and the particular organ of the soul by which animals remember; also as regards recollection, its formal definition, and the manner and causes of its performance.
Coenzyme A (CoA, CoASH, or HSCoA) is derived from pantothenic acid and adenosine triphosphate and is used in metabolic pathways such as fatty acid oxidation and the citric acid cycle. Its main function is to carry acyl groups, such as acetyl, as thioesters. A molecule of coenzyme A carrying an acetyl group is referred to as acetyl-CoA.
Acetyl-CoA is an important molecule in its own right. It is the precursor to HMG-CoA, a vital intermediate in cholesterol and ketone-body synthesis. It also contributes the acetyl group to acetylcholine; the addition of the acetyl group to choline is a reaction catalysed by choline acetyltransferase. Its main task, however, is conveying the carbon atoms of the acetyl group into the citric acid cycle, where they are oxidized for energy production.
The conversion of pyruvate into acetyl-CoA is referred to as the pyruvate dehydrogenase reaction. It is catalyzed by an enzyme complex called pyruvate dehydrogenase, which consists of 60 subunits: 24 pyruvate dehydrogenase (E1), 24 dihydrolipoyl transacetylase (E2), and 12 dihydrolipoyl dehydrogenase (E3). E1 has the coenzyme TPP incorporated into it, E2 carries lipoate and coenzyme A, and E3 uses the coenzymes FAD and NAD+. Through a complex sequence of steps, pyruvate is decarboxylated to a two-carbon intermediate, which is then attached to coenzyme A while NAD+ is reduced to NADH and H+.
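As a sanity check on the description above, the overall stoichiometry of the reaction (pyruvate + CoA-SH + NAD+ → acetyl-CoA + CO2 + NADH + H+) can be verified with a small Python sketch. This is an illustration, not part of the source text: the large CoA and NAD scaffolds are treated as single symbolic units, since only the atoms that change hands matter for the balance.

```python
from collections import Counter

# Only the atoms that move are tracked; "CoA" and "NAD" stand for the
# unchanged scaffolds of the two cofactors.
pyruvate   = Counter({"C": 3, "H": 4, "O": 3})            # CH3-CO-COOH
coa_sh     = Counter({"CoA": 1, "H": 1})                  # CoA-SH, thiol H explicit
nad_plus   = Counter({"NAD": 1})
acetyl_coa = Counter({"CoA": 1, "C": 2, "H": 3, "O": 1})  # CH3-CO-S-CoA
co2        = Counter({"C": 1, "O": 2})
nadh       = Counter({"NAD": 1, "H": 1})
proton     = Counter({"H": 1})

lhs = pyruvate + coa_sh + nad_plus
rhs = acetyl_coa + co2 + nadh + proton
assert lhs == rhs  # every atom on the left is accounted for on the right
print(dict(lhs))   # {'C': 3, 'H': 5, 'O': 3, 'CoA': 1, 'NAD': 1}
```

The balance confirms the text: one carbon leaves as CO2, the remaining two-carbon acetyl fragment ends up on coenzyme A, and the hydride plus proton are taken up by NAD+.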
While playing with ice balloons, robotic sensors and sound, the idea came up: What if one could lick a Popsicle embedded with a sensor to show how our bodies conduct electricity?
When a young welder saw someone with a digital camera taking pictures using a lens with shallow depth, she grew obsessed with making a viewer - using a wooden box, mirror, and lens - that turns real life images to miniature size.
Behind the scenes at the Exploratorium, San Francisco's museum of art and science, are a handful of artists and scientists who have created the Tinkering Studio, a small and ambitious space where visitors can tinker away on their own projects, whether by soldering and spot-welding, creating LED-based jewelry, dissecting discarded mechanical toys and making something new, or engineering large-scale chain reaction devices.
The Tinkering Studio, conceptualized in 2008 to give visitors a chance to learn about science and art by "thinking with their hands," now attracts about 15,000 people a year. It is being used as a prototype for programs in museums across the globe - from Berkeley to New York, South Korea to Canada.
Started by two teachers, the five-person core team boasts a doctor in neuroscience and a MacArthur grant recipient. They are welders, birders and magicians, and create inventive and interactive displays. Situated across from the cafe inside the Exploratorium, the Tinkering Studio is a place where visitors can build, hack and invent. Above all, it is a place where the process is considered much more interesting than the end result.
"It's kindergarten for adults," said Mike Petrich, who co-founded the studio with Karen Wilkinson, his partner in work and life. Petrich and Wilkinson, both 44, met 20 years ago, as undergraduate students at the Minneapolis College of Art and Design. He taught her to use computers; she taught him to weld.
"In 1999, Karen and I came out to the Exploratorium to teach a two-week class, and they told us to stick around for a couple of months," laughed Petrich, whose black Lab, Luke, accompanies them to work. "We started by removing much of the technology and replacing it with low-tech tools and materials."
The biggest draw of the Tinkering Studio is the wall of peg boards, billed as the world's largest marble track. It's where tracks can be made using wood, 1/4-inch dowels, funnels, broccoli bands, and all kinds of tubing for all kinds of configurations.
The studio also invites educators to attend "Tinkering 101" courses, to learn things such as mechanical toy dissection, marble runs, and circuit exploration. The tinkerers have exported their mix of whimsy and science to everyone from engineers at MIT to the Dalai Lama's monks in India.
Walter Kitundu, 38, a tinkerer who won a MacArthur Fellowship award in 2008 for inventing a class of musical instruments called phonoharps - stringed instruments incorporating a record player - said, "I love my job every day. I'm a cog for artistic direction. The people who work here are really good at what they do. They take their work seriously, but not themselves."
"The tables and the furniture of the Tinkering Studio is something I came up with," added Kitundu, a hobbyist bird photographer who recently spent six months documenting the life of a red-tail hawk in Alta Plaza Park and has a giant interactive bird mural on permanent display at San Francisco International Airport.
"I try to shy away from things where I already know the end result," Kitundu said. "I take on projects I feel like I can do, but may not know how to do. It's how I've made my way through life. We are kind of lucky in that we get to play around with things and mess around with things all day."
Kitundu is now working on designs for the Tinkering Studio's future site at the new Exploratorium, to open on Pier 15 in 2013. The studio will be larger, with more tinkering exhibits and works by international artists. It will be situated directly across from the museum's workshop, allowing visitors to construct and experiment at the same time staffers are building and experimenting.
"What's so great about the Tinkering Studio is that you are having a conversation with the materials," Kitundu said, recalling that the first thing he made as a kid was a case for his record player. "I've seen how people have a lot of opinions about failure. We see it as you have found a way not to do it. You figure out a way to problem solve."
Several of the Tinkering Studio's tinkerers started as "explainers," volunteers in a three-year program who help visitors navigate the exhibits and do their own creating.
Nicole Catrett, who grew up in Austin, Texas, and just turned 30, worked as an explainer before being hired for the machine shop. She joined the tinkering group a year ago. One of the greatest compliments she hears is that she's a "crack welder."
Catrett makes eye-popping exhibits, including light and color displays, and created a stroboscope camera from simple materials and a toy motor. She has collaborated with fellow tinkerer Ryan Jenkins on a musical bench, and built the telescope that makes everyday life look like a miniature animated scene.
Another tinkerer who started as an explainer is Luigi Anzivino, 36, who is from Bologna, Italy, and has a doctorate in neuroscience.
"The way I used to think about education is that you learn from people who are smarter than you," he said, sitting in the Tinkering Studio, where several young explainers were being trained. "Here, the philosophy is that whatever you come up with on your own is truly valuable, and the important thing is to take charge of your path."
Anzivino has helped build the studio's website and document the tinkering.
"I was here for two weeks and the time just blew my mind," he said. "I think that the philosophy here ultimately leads to better science."
Looking at the marble tracks being made by explainers, he said, "You own the experience and the process. What was lacking for me in all of those years of school was a joy of learning. It always felt like a duty. Here, learning is fun."
The Eastern Redbud grows in acidic, alkaline, loamy, moist, rich, sandy, well drained, wide range, clay soils.
Spectacular spring blossoms. The seeds provide winter food for birds. An excellent tree for planting near utility lines. Provides good shade when planted near patios. Well known for its beauty, it is the state tree of Oklahoma.
Rosy pink flowers appear in April. Reddish-purple leaves change to dark green, then to yellow. Forms a spreading, graceful crown. Full sun or light shade. Partial shade preferred in windy, dry areas. Grows to 20' to 30', 30' spread. (zones 4-9)
Northern bobwhite and a few songbirds, such as chickadees, will eat the seeds, and it can be used for nesting sites and nesting materials, it also provides shelter for birds and mammals.
Native to North America, with cousins in Europe and Asia. First cultivated in 1811. The Spaniards noted redbuds and made distinctions between the New World species and their Mediterranean cousins in 1571. George Washington reported in his diary on many occasions about the beauty of the tree and spent many hours in his garden transplanting seedlings obtained from the nearby forest.
The leaves of this tree are reddish-purple, changing to dark green and then yellow.
This tree produces a pod, brown-brownish black and 2 to 3 inches long.
Users of differing capabilities should have the opportunity to interact with important resources in many aspects of life, such as education, employment, government, e-commerce and health care. It is critical that the Web be available to everyone, especially when there are more than 750 million people with disabilities around the world. This number, according to the World Health Organization (WHO), is increasing due to population growth, ageing, the emergence of chronic diseases, and medical advances that preserve and prolong life. Accessible and universal design has therefore become an important aspect of corporate social responsibility, and in some cases is required by laws and policies. In addition, accessibility and usability have important benefits for all users, not only for disabled people: for example, elderly people, people with low literacy or who are not fluent in the language, and new and infrequent web users. The aim of this paper is to evaluate two software applications and two Web 2.0 applications for accessibility and usability.
The Disability Discrimination Act (DDA) defines a disabled person as “someone who has a physical or mental impairment that has a substantial and long-term adverse effect on his or her ability to carry out normal day-to-day activities”. For the purposes of this paper, 'disability' is defined as any impairment that obstructs the normal use of web and software applications, including visual, physical, speech, cognitive, and neurological impairments.
Accessibility means having equal access to information and services regardless of physical or developmental abilities or impairments. It ensures that people with disabilities can perceive, understand, navigate, and interact with the Web and software, which in turn enables them to contribute to these technologies.
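Automated tools can catch only a subset of accessibility barriers, but they illustrate how such evaluation works in practice. The following Python sketch (a hypothetical illustration, not one of the tools used in this paper's evaluation) uses the standard library's HTML parser to flag `img` elements that lack alternative text, one of the most common failures against WCAG's text-alternatives requirement:

```python
from html.parser import HTMLParser

class AltTextChecker(HTMLParser):
    """Flag <img> tags with a missing or empty alt attribute."""
    def __init__(self):
        super().__init__()
        self.missing = []  # src values of images without alt text

    def handle_starttag(self, tag, attrs):
        if tag == "img":
            attr_map = dict(attrs)
            if not attr_map.get("alt"):
                self.missing.append(attr_map.get("src", "<no src>"))

checker = AltTextChecker()
checker.feed('<img src="logo.png"><img src="chart.png" alt="Sales chart">')
print(checker.missing)  # ['logo.png']
```

A real evaluation would combine many such checks with manual testing, since criteria like meaningful alt text, keyboard operability, and reading order cannot be judged automatically.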
The ISO 9241 defines ‘usability’ as “The extent to which a product can be used
by specified users to achieve specified goals with effectiveness, efficiency and
satisfaction in a specified context of use” . Universal design means that the
design of products and environments are usable by all users, to the furthermost
extent possible, without the need for individual accommodation .
(W3C). W3C Launches International Web Accessibility Initiative. Retrieved December 20, 2009, from World Wide Web Consortium: http://www.w3.org.
World Health Organization. (2006). Promoting access to healthcare services for persons with disabilities.
Henry, S. L. Understanding Web Accessibility. Retrieved December 20, 2009, from Universal Interface Design,: www.uiaccess.com.
(DDA). Definition of 'disability' under the Disability Discrimination Act. Retrieved December 20, 2009, from The UK Government: http://www.direct.gov.uk.
Johns Hopkins University. (2008). What is accessibility? Retrieved December 22, 2009, from Web Accessibility: http://webaccessibility.jhu.edu.
(ISO). (1998). ISO 9241: Ergonomics Requirements for Office Work with Visual Display Terminal (VDT).
Zaphiris, P., & Ellis, R. D. (2001, October 23-27). Website Usability and Content Accessibility of the top USA Universities, In Proceedings of WebNet 2001 Conference.
The test reactor, part of the Department of Energy's (DOE) Idaho National Laboratory (INL), sits on an 890-square-mile tract of land known simply as "The Site." Located 45 minutes from Idaho Falls in the southeastern corner of the state, this swath of windswept desert is the epicenter of American nuclear energy research. Over the past half century, 51 reactors have been built here, including first-generation prototypes of the 1950s; only three still operate. But it is among the relics of these early experiments that the country's energy future is taking shape.
In recent years, the debate over nuclear power has moved to the front burner, spurred by concerns about foreign oil and the specter of global warming. But what many on both sides of the issue often fail to note is that America's 103 existing nuclear reactors are aging. Over the next few decades, they will have to be decommissioned, taking 20 percent of the country's electrical supply with them.
In the Energy Policy Act of 2005, Congress approved up to $2.95 billion in incentives for new nuclear plants, and set aside another $1.25 billion for an experimental reactor to be built here in the Idaho desert. The reactor will be the centerpiece of a modern-day Manhattan Project, with scientists from around the world working together to revolutionize the production of nuclear power.
At the heart of every reactor is fuel, usually uranium, undergoing a chain reaction that generates heat and fast-moving neutrons. A coolant draws away the heat and uses it to spin a turbine to generate electricity, and a moderator slows the neutrons to keep the reaction under control. Any material used in building a reactor has to withstand the heat, as well as intense pressure and a constant barrage of neutrons, for the reactor's projected lifetime. To prove that a new alloy can last 25 years, you could put it in a furnace for 25 years and bombard it with neutrons; or, if you don't want to wait that long, you can use the ATR.
"It is like a time machine," says Duling, the facility's former deputy director. The reactor uses uranium enriched to 92 percent (anything more than 20 percent is considered weapons-grade) to generate a quadrillion neutrons per square centimeter per second100 to 1000 times greater than commercial reactors. By cranking up the neutron dose, the ATR can simulate as much as 40 years of wear and tear on a new fuel or alloy in a single year.
The test reactor is a simple water-cooled model built in 1967. But by tuning the pressure, temperature and chemistry inside its core, scientists can use it to reproduce the conditions in just about any other type of reactor. Recently, they tested chunks of graphite to see whether it's safe to extend the life of Britain's antiquated Magnox reactors. INL staff are now gearing up for an even bigger challenge: testing parts for proposed Generation IV reactors, which would leap technologically two steps ahead of the Gen II designs operating commercially in the United States today.
Despite concerns about catastrophic accidents and radioactive waste disposal, Gen II plants "are cost-effective and working well, and safety continues to improve," says James Lake, INL's associate director. Yet, no new reactors have been ordered in the States since the industry's peak sales year of 1973. Simple economics quashed further growth.
Thanks to the 2005 congressional incentives, a dozen utilities around the country have once again started the lengthy process of applying to build nuclear plants. If all goes smoothly, they could produce power by the middle of the next decade. These reactors would be Generation III and III+ designs: evolutionary improvements on today's Generation II reactors, which use water in some form as both a coolant and a moderator.
But, according to the DOE, what is really needed are even safer, cheaper reactors that produce less waste and use fuel that's not easily adapted for weapons production. To develop this kind of reactor, 10 countries, including the United States, joined forces in 2000 to launch the Generation IV International Forum. A committee of 100-plus scientists from participating countries evaluated more than 100 designs; after two years, they picked the six best. All of the final Gen IV concepts make a clean break from past designs. Some don't use a moderator, for instance. Others call for helium or molten lead to be used as coolants.
How It Works: Generation II and III Reactors:
All 103 nuclear power plants now operating in the United States employ light-water reactors, which use ordinary water as both a moderator and a coolant. The next wave of nuclear plants has taken these Generation II concepts to the next level, improving both safety and efficiency. Utilities plan to begin building Generation III reactors by the end of the decade.
In a Gen II Pressurized Water Reactor, water circulates through the core where it is heated by the fuel's chain reaction. The hot water is then piped to a steam generator, and the steam spins a turbine that produces electricity. The Gen III Evolutionary Pressurized Reactor improves upon this design primarily by enhancing safety features. Two separate 51-in.-thick concrete walls, the inner one lined with metal, are each strong enough to withstand the impact of a heavy commercial airplane. The reactor vessel sits on a 20-ft. slab of concrete with a leaktight "core catcher," where the molten core would collect and cool in the event of a meltdown. There are also four safeguard buildings with independent pressurizers and steam generators, each capable of providing emergency cooling of the reactor core.
|
<urn:uuid:4b7846b4-fbea-4bf0-b9dc-5dcbee0ca0c4>
|
CC-MAIN-2013-20
|
http://www.popularmechanics.com/science/energy/nuclear/3760347
|
2013-05-18T17:47:53Z
|
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368696382584/warc/CC-MAIN-20130516092622-00053-ip-10-60-113-184.ec2.internal.warc.gz
|
en
| 0.943115
| 1,147
|
Zinc is one of the most common elements in the earth's crust. Zinc is found in the air, soil, and
water and is present in all foods. In its pure elemental (or metallic) form, zinc is a bluish-white
shiny metal. There is no information on the taste and odor of metallic zinc. Powdered zinc is
explosive and may burst into flames if stored in damp places. Metallic zinc has many uses in
industry. A common use is as coating for iron or other metals so that they do not rust or corrode.
Metallic zinc is also mixed with other metals to form alloys such as brass and bronze. A zinc and
copper alloy is used to make pennies in the United States. Metallic zinc is also used to make dry cell batteries.
Zinc can also combine with other elements, such as chlorine, oxygen, and sulfur, to form zinc
compounds. Zinc compounds that may be found at hazardous waste sites are zinc chloride, zinc
oxide, zinc sulfate, and zinc sulfide. This profile focuses primarily on metallic zinc and commonly
found or used zinc compounds. Most zinc ore found naturally in the environment is in the form of
zinc sulfide. Zinc compounds are widely used in industry. Zinc compounds are not explosive or
flammable. Zinc sulfide is gray-white or yellow-white, and zinc oxide is white. Both of these
compounds are used to make white paints, ceramics, and several other products. Zinc oxide is
also used in producing rubber. Zinc compounds, such as zinc acetate, zinc chloride, and zinc sulfate, are used in preserving wood and in manufacturing and dyeing fabrics. Zinc chloride is
also the major ingredient in smoke from smoke bombs. Zinc compounds are also used by the
drug industry as ingredients in some common products, such as sun blocks, diaper rash ointments,
deodorants, athlete's foot preparations, acne and poison ivy preparations, and antidandruff shampoos.
Zinc is an essential element needed by the body in small amounts. Too little zinc in the diet can
lead to poor health, reproductive problems, and lowered ability to resist disease. Too much zinc
can be harmful to health.
Fate & Transport
Zinc enters the air, water, and soil as a result of both natural processes and human activities.
Most zinc enters the environment as the result of human activities, such as mining, purifying of
zinc, lead, and cadmium ores, steel production, coal burning, and burning of wastes. These
releases can increase zinc levels in the atmosphere. Waste streams from zinc and other metal
manufacturing and zinc chemical industries, domestic waste water, and run-off from soil
containing zinc can discharge zinc into waterways. The level of zinc in soil increases mainly from
disposal of zinc wastes from metal manufacturing industries and coal ash from electric utilities. In
air, zinc is present mostly as fine dust particles. This dust eventually settles over land and water.
Rain and snow aid in removing zinc from air. Most of the zinc in bodies of water, such as lakes
or rivers, settles on the bottom. However, a small amount may remain either dissolved in water
or as fine suspended particles. The level of dissolved zinc in water may increase as the acidity of water increases. Some fish can collect zinc in their bodies if they live in water containing zinc.
Most of the zinc in soil is bound to the soil and does not dissolve in water. However, depending
on the characteristics of the soil, some zinc may reach groundwater. Contamination of
groundwater from hazardous waste sites has been noticed. Zinc may be taken up by animals
eating soil or drinking water containing zinc. If other animals eat these animals, they will also
have increased amounts of zinc in their bodies.
We are exposed to small amounts of zinc compounds in food every day. The average daily zinc
intake through the diet in this country ranges from 7 to 16.3 milligrams (mg). Food may contain
levels of zinc ranging from approximately 2 parts of zinc per million (ppm) parts of foods (e.g.,
leafy vegetables) to 29 ppm (meats, fish, poultry). Zinc is also present in most drinking water.
Drinking water or other beverages may contain high levels of zinc if they are stored in metal
containers or flow through pipes that have been coated with zinc to resist rust. Drinking water
may also be contaminated by zinc from industrial sources or toxic waste sites. High-level
exposure to zinc may also result from taking too many zinc dietary supplements. Fetuses and
nursing children may be exposed to the zinc in the blood or milk of their mothers.
In general, levels of zinc in air are relatively low and fairly constant. Average levels of zinc in the
air throughout the United States are less than 1 microgram of zinc per cubic meter
(ug/m3) of air, but range from 0.1 to 1.7 ug/m3 in areas near cities.
Air near industrial areas may have higher levels of zinc. The average zinc concentration for a 1-year period was 5 ug/m3 in one area near an industrial source.
About 150,000 workers are exposed to zinc at their jobs. Jobs where people are exposed to zinc
include zinc mining, smelting, and welding; manufacture of brass, bronze, or other zinc-containing
alloys; manufacture of galvanized metals; and manufacture of machine parts, rubber, paint,
linoleum, oilcloths, batteries, some kinds of glass and ceramics, and dyes. People at construction
jobs, automobile mechanics, and painters are also exposed to zinc.
Zinc can enter the body through the digestive tract if you eat food or drink water containing it.
Zinc can also enter through your lungs if you inhale zinc dust or fumes from zinc-smelting or zinc-welding operations on your job. The amount of zinc that passes directly through the skin is
relatively small. The most likely route of exposure near NPL waste sites is through drinking
water containing a high amount of zinc. Zinc is stored throughout the body. Zinc increases in
blood and bone most rapidly after exposure. Zinc may stay in the bone for many days after
exposure. Normally, zinc leaves the body in urine and feces.
Inhaling large amounts of zinc (as zinc dust or fumes from smelting or welding) can cause a
specific short-term disease called metal fume fever. However, very little is known about the long-term effects of breathing zinc dust or fumes.
Taking too much zinc into the body through food, water, or dietary supplements can also affect
health. The levels of zinc that produce adverse health effects are much higher than the
Recommended Daily Allowances (RDAs) for zinc of 15 mg/day for men and 12 mg/day for
women. If large doses of zinc (10-15 times higher than the RDA) are taken by mouth even for a
short time, stomach cramps, nausea, and vomiting may occur. Ingesting high levels of zinc for
several months may cause anemia, damage the pancreas, and decrease levels of high-density
lipoprotein (HDL) cholesterol. We do not know if high levels of zinc affect the ability of people
to have babies or cause birth defects in humans.
Eating food containing very large amounts of zinc (1,000 times higher than the RDA) for several
months caused many health effects in rats, mice, and ferrets, including anemia and injury to the
pancreas and kidney. Rats that ate very large amounts of zinc became infertile. Rats that ate very
large amounts of zinc after becoming pregnant had smaller babies. Putting low levels of certain
zinc compounds, such as zinc acetate and zinc chloride, on the skin of rabbits, guinea pigs, and
mice caused skin irritation. Skin irritation from exposure to these compounds would probably
occur in humans. EPA has determined that zinc is not classifiable as to its human carcinogenicity.
Consuming too little zinc is at least as important a health problem as consuming too much zinc.
Without enough zinc in the diet, people may experience loss of appetite, decreased sense of taste
and smell, decreased immune function, slow wound healing, and skin sores. Too little zinc in the
diet may also cause poorly developed sex organs and retarded growth in young men. If a pregnant
woman does not get enough zinc, her babies may have growth retardation.
Information excerpted from
Toxicological Profile for Zinc May 1994 Update
Agency for Toxic Substances and Disease Registry
United States Public Health Service
|
<urn:uuid:7ad6b84f-22aa-40d1-8376-8a72ad4d1b11>
|
CC-MAIN-2013-20
|
http://eco-usa.net/toxics/chemicals/zinc.shtml
|
2013-06-19T19:04:57Z
|
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368709037764/warc/CC-MAIN-20130516125717-00003-ip-10-60-113-184.ec2.internal.warc.gz
|
en
| 0.947665
| 1,861
|
From Satishkumar, University of Agricultural Sciences, Raichur
Posted 22 June 2011
I work with the Department of Soil and Water Engineering, University of Agricultural Sciences, Raichur, Karnataka. I have been engaged in teaching, research and extension activities on soil and water resource conservation, irrigation and drainage, surface and groundwater hydrology.
I am developing a knowledge information system on watershed management (a watershed being the area defined by a natural boundary, demarcated by runoff that leaves it through a single outlet, i.e., a stream or river). Essentially, the watershed is the managerial unit for the planning, development and management of natural (land, water and energy), animal, plant, and human resources for their sustainable use.
A watershed could be as small as a few hectares or as large as several thousand square kilometres, covering a river basin. Because of human interventions such as indiscriminate agriculture, deforestation, industrialization and urbanization, land and water resources are susceptible to degradation: excessive erosion, siltation, salinity, barren land, droughts, desertification, depletion of groundwater, and the repercussions of climate change.
Again, the stakeholders of the different land-uses including farmers, government line departments, scientists and engineers, policy makers, social activists, NGOs and the public are slowly becoming aware of the problems ahead. They are putting their efforts into solutions from their own perspectives and limitations. Hence, there is need to develop and manage a knowledge base in the form of an information system. This can consolidate all aspects, activities, data and information related to natural resources status, planning, development and management. It can also facilitate the dissemination of this knowledge.
For developing this information base, I am looking at natural resource management with special attention to water. For this, I request members to please share the following
- Experiences in developing a knowledge system especially for water resources management
- What are the suitable data management systems and the software involved?
- How can various aspects of the issue be linked?
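On the data-management question, one minimal way to structure such a knowledge base is around the watershed as the core record, with time-stamped observations attached to it. A hypothetical sketch (entity and field names are invented for illustration; any relational or spatial database could realize the same structure):

```python
from dataclasses import dataclass, field

# Hypothetical core entities for a watershed knowledge base. Names and
# fields are illustrative only, not a recommendation of specific software.

@dataclass
class GaugeReading:
    date: str        # ISO date, e.g. "2011-06-22"
    parameter: str   # "rainfall", "runoff", "groundwater_level", ...
    value: float
    unit: str

@dataclass
class Watershed:
    name: str
    outlet_lat_lon: tuple     # location of the single outlet
    area_ha: float            # from a few hectares up to a river basin
    land_uses: list = field(default_factory=list)
    readings: list = field(default_factory=list)

# Example usage with invented data:
ws = Watershed("Example sub-watershed", (16.2, 77.3), 12500.0)
ws.land_uses.append("rainfed agriculture")
ws.readings.append(GaugeReading("2011-06-22", "rainfall", 14.2, "mm"))
print(len(ws.readings))  # -> 1
```

Linking the various aspects then becomes a matter of keying every dataset (soil, land use, hydrology, socio-economic surveys) to the watershed identifier.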
Your comments will open up ways and means of database collection and will serve as a repository of the various knowledge-based systems available. This will help provide a robust knowledge base to development practitioners working on natural resource management.
Download the attachment below for responses.
|
<urn:uuid:d11e769b-2e73-4648-a58c-25ca480ce7b9>
|
CC-MAIN-2013-20
|
http://www.indiawaterportal.org/questions/need-experiences-developing-knowledge-system-water-resources-management-what-are-data
|
2013-05-18T05:58:38Z
|
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368696381249/warc/CC-MAIN-20130516092621-00003-ip-10-60-113-184.ec2.internal.warc.gz
|
en
| 0.937366
| 459
|
Hormonal effects in newborns
Hormonal effects in newborns occur because, while they are in the womb, babies are exposed to many chemicals (hormones) present in the mother's bloodstream. After birth, the infants are no longer exposed to these hormones. This exposure may cause temporary conditions in a newborn.
Newborn breast swelling; Physiologic leukorrhea
Hormones from the mother (maternal hormones) are some of the chemicals that pass through the placenta into the baby's blood during pregnancy. These hormones can affect the baby.
For example, during pregnancy, high levels of the hormone estrogen are produced. This causes breast enlargement in the mother. By the third day after birth, breast swelling may also be seen in newborn boys and girls. Such newborn breast swelling does not last, but it is a common concern among new parents.
The breast swelling should go away by the second week after birth as the hormones leave the newborn's body. Do not squeeze or massage the newborn's breasts because this can cause an infection under the skin (abscess).
Hormones from the mother may also cause some fluid to leak from the infant's nipples. This is called witch's milk. It is common and usually goes away within 2 weeks.
Newborn girls may also have temporary changes in the vaginal area.
- The skin tissue around the vaginal area, called the labia, may look puffy as a result of estrogen exposure.
- There may be a white fluid (discharge) from the vagina. This is called physiologic leukorrhea.
- There may also be a small amount of bleeding from the vagina.
These changes are common and should slowly go away over the first 2 months of life.
Last reviewed 1/24/2011 by Neil K. Kaneshiro, MD, MHA, Clinical Assistant Professor of Pediatrics, University of Washington School of Medicine. Also reviewed by David Zieve, MD, MHA, Medical Director, A.D.A.M., Inc.
- The information provided herein should not be used during any medical emergency or for the diagnosis or treatment of any medical condition.
- A licensed medical professional should be consulted for diagnosis and treatment of any and all medical conditions.
- Call 911 for all medical emergencies.
- Links to other sites are provided for information only -- they do not constitute endorsements of those other sites.
Any duplication or distribution of the information contained herein is strictly prohibited.
|
<urn:uuid:9e8b0412-c498-4c85-8e45-e094e0433c0c>
|
CC-MAIN-2013-20
|
http://www.uihealthcare.org/Adam/?/HIE%20Multimedia/1/001911
|
2013-05-19T11:09:18Z
|
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368697420704/warc/CC-MAIN-20130516094340-00051-ip-10-60-113-184.ec2.internal.warc.gz
|
en
| 0.923778
| 508
|
Bust of the tragedian Sophocles (c. 496-406 BCE).
Made of casting stone with an antique, ivory-colored finish.
Approx. 150mm (15cm) x 60mm (6cm) x 60mm (6cm)
Sophocles (ancient Greek Σοφοκλῆς , c. 496 BCE-406 BCE) was the second of the three ancient Greek tragedians whose work has survived. His first plays were written later than those of Aeschylus and earlier than those of Euripides. According to the Suda, a 10th century encyclopedia, Sophocles wrote 123 plays during the course of his life, but only seven have survived in a complete form: Ajax, Antigone, Trachinian Women, Oedipus the King, Electra, Philoctetes and Oedipus at Colonus. For almost 50 years, Sophocles was the most-awarded playwright in the dramatic competitions of the city-state of Athens that took place during the religious festivals of the Lenaea and the Dionysia.
Sophocles, the son of Sophillus, was a wealthy member of the rural deme (small community) of Colonus Hippius in Attica, which would later become a setting for his plays, and was probably born there. His birth took place a few years before the Battle of Marathon in 490 BCE: the exact year is unclear, although 497/6 is perhaps most likely. Sophocles' first artistic triumph was in 468 BCE when he took first prize in the Dionysia theatre competition over the reigning master of Athenian drama, Aeschylus. According to Plutarch the victory came under unusual circumstances. Instead of following the custom of choosing judges by lot, the archon asked Cimon and the other strategoi present to decide the victor of the contest. Plutarch further contends that Aeschylus soon left for Sicily following this loss to Sophocles.
|
<urn:uuid:ddd28fbf-6199-4cca-b3a2-08483b2b2477>
|
CC-MAIN-2013-20
|
http://www.royalolympiccruises.com/Ancient_Greek_Replicas/Miniature_busts/Sophocles_Bust_6.html?pdi=EA_B105_W&ug=45
|
2013-05-22T14:53:33Z
|
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368701852492/warc/CC-MAIN-20130516105732-00000-ip-10-60-113-184.ec2.internal.warc.gz
|
en
| 0.979765
| 428
|
It is now widely accepted that excessive burning of fossil fuels and the resultant CO2 gases produced has had a major impact on the world's climate. If this continues, scientists predict that it will cause irreversible damage, such as the melting of the polar ice caps.
The planet's fossil fuel stock is a finite and depleting resource. Converting fossil fuels into energy creates by-products that are harmful to the environment. This, together with increasing instability in the world's energy markets, makes a compelling case for renewable energy.
Increasingly, governments are introducing progressive legislation to curb and replace the use of fossil fuels. With global energy demand set to increase, there is an undeniable and urgent need to develop new renewable energy technologies. OpenHydro is leading the way in developing technologies that harness the power of the world's largest natural resource - the world's oceans.
|
<urn:uuid:75059b47-a6c7-4a7c-a3ec-0bab41770afa>
|
CC-MAIN-2013-20
|
http://www.openhydro.com/environment.html
|
2013-05-19T19:37:57Z
|
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368698017611/warc/CC-MAIN-20130516095337-00000-ip-10-60-113-184.ec2.internal.warc.gz
|
en
| 0.923361
| 174
|
An ice shelf is a thick floating platform of ice that forms where a glacier or ice sheet flows down to a coastline and onto the ocean surface. Ice shelves are only found in Antarctica, Greenland and Canada. The boundary between the floating ice shelf and the grounded (resting on bedrock) ice that feeds it is called the grounding line. The thickness of ice shelves ranges from about 100 to 1000 metres.
Ice shelves are principally driven by gravity-induced pressure from the grounded ice. That flow continually moves ice from the grounding line to the seaward front of the shelf. The primary mechanism of mass loss from ice shelves is iceberg calving, in which a chunk of ice breaks off from the seaward front of the shelf. Typically, a shelf front will extend forward for years or decades between major calving events. Snow accumulation on the upper surface and melting from the lower surface are also important to the mass balance of an ice shelf. Ice may also accrete onto the underside of the shelf.
The density contrast between glacial ice, which is denser than normal ice, and liquid water means that only about 1/9 of the floating ice is above the ocean surface. The world's largest ice shelves are the Ross Ice Shelf and the Filchner-Ronne Ice Shelf in Antarctica.
Canadian ice shelves
All Canadian ice shelves are attached to Ellesmere Island and lie north of 82°N. Ice shelves that are still in existence are the Alfred Ernest Ice Shelf, Milne Ice Shelf, Ward Hunt Ice Shelf and Smith Ice Shelf. The M'Clintock Ice Shelf broke up from 1963 to 1966; the Ayles Ice Shelf broke up in 2005; and the Markham Ice Shelf broke up in 2008.
Antarctic ice shelves
See also: List of Antarctic ice shelves
A total of 44 percent of the Antarctic coastline has ice shelves attached. Their aggregate area is 1,541,700 km².
Ice shelf disruption
In the last several decades, glaciologists have observed consistent decreases in ice shelf extent through melt, calving, and complete disintegration of some shelves.
The Ellesmere ice shelf reduced by 90 percent in the twentieth century, leaving the separate Alfred Ernest, Ayles, Milne, Ward Hunt, and Markham Ice Shelves. A 1986 survey of Canadian ice shelves found that 48 km² (3.3 cubic kilometers) of ice calved from the Milne and Ayles ice shelves between 1959 and 1974. The Ayles Ice Shelf calved entirely on August 13, 2005. The Ward Hunt Ice Shelf, the largest remaining section of thick (>10 m) landfast sea ice along the northern coastline of Ellesmere Island, lost 600 square km of ice in a massive calving in 1961-1962. It further decreased by 27% in thickness (13 m) between 1967 and 1999. In summer 2002, the Ward Hunt Ice Shelf experienced another major breakup.
Two sections of Antarctica's Larsen Ice Shelf broke apart into hundreds of unusually small fragments (hundreds of meters wide or less) in 1995 and 2002.
The breakup events may be linked to the dramatic polar warming trends that are part of global warming. The leading ideas involve enhanced ice fracturing due to surface meltwater and enhanced bottom melting due to warmer ocean water circulating under the floating ice.
The cold, fresh water produced by melting underneath the Ross and Filchner-Ronne ice shelves is a component of Antarctic Bottom Water.
Melting floating ice shelves raises sea level only slightly. Sea water is about 2.6% denser than fresh water, and ice shelves are overwhelmingly "fresh" (having virtually no salinity), so the volume of sea water an ice shelf displaces while afloat is slightly less than the volume of fresh water it contains. When a mass of floating ice melts, sea level therefore rises, but the effect is small: if all extant sea ice and floating ice shelves were to melt, the corresponding rise is estimated at roughly 4 cm.
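The arithmetic behind that ~4 cm estimate is Archimedes' principle: a floating shelf displaces seawater equal to its weight, so the meltwater volume slightly exceeds the displaced volume. A back-of-envelope sketch (the total floating-ice volume is an assumed round number chosen to reproduce the cited figure, not a measured inventory):

```python
rho_fresh = 1000.0    # kg/m^3, fresh meltwater (ice shelves are ~salt-free)
rho_sea   = 1026.0    # kg/m^3, seawater, ~2.6% denser
ocean_area = 3.61e14  # m^2, global ocean surface

v_fresh = 5.7e14      # m^3, assumed fresh-water equivalent of all floating ice

# Floating ice displaces its own *weight* of seawater, so the displaced
# volume is smaller than the meltwater volume by the density ratio.
v_displaced = v_fresh * rho_fresh / rho_sea
rise_m = (v_fresh - v_displaced) / ocean_area

print(f"{rise_m * 100:.1f} cm")  # -> 4.0 cm
```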
However, if and when these ice shelves melt sufficiently, they no longer impede glacier flow off the continent, so that glacier flow would accelerate. This new source of ice volume would flow down from above sea level, thus resulting in its total mass contributing to sea rise.
See also
- Greve, R.; Blatter, H. (2009). Dynamics of Ice Sheets and Glaciers. Springer. doi:10.1007/978-3-642-03415-2. ISBN 978-3-642-03414-5.
- "Antarctic ice shelf 'hanging by thread': European scientists". July 10, 2008. Yahoo! News.
- Jeffries, Martin O. Ice Island Calvings and Ice Shelf Changes, Milne Ice Shelf and Ayles Ice Shelf, Ellesmere Island, N.W.T.. Arctic 39 (1) (March 1986)
- Hattersley-Smith, G. The Ward Hunt Ice Shelf: recent changes of the ice front. Journal of Glaciology 4:415-424. 1963.
- Vincent, W.F., J.A.E. Gibson, M.O. Jeffries. Ice-shelf collapse, climate change, and habitat loss in the Canadian high Arctic. Polar Record 37 (201): 133-142 (2001)
- NASA Earth Observatory. "Breakup of the Ward Hunt Ice Shelf".
- Peter Noerdlinger, PHYSORG.COM "Melting of Floating Ice Will Raise Sea Level"
- Noerdlinger, P.D.; Brower, K.R. (July 2007). "The melting of floating ice raises the ocean level". Geophysical Journal International 170 (1): 145–150. Bibcode:2007GeoJI.170..145N. doi:10.1111/j.1365-246X.2007.03472.x.
- Jenkins, A.; Holland, D. (August 2007). "Melting of floating ice and sea level rise". Geophysical Research Letters 34 (16): L16609. Bibcode:2007GeoRL..3416609J. doi:10.1029/2007GL030784.
|Wikimedia Commons has media related to: Ice shelf|
|Wikinews has related news: Ice shelf breaks free in Canadian Arctic|
- http://www.antdiv.gov.au/default.asp?casid=1547 - from the Australian Antarctic Division
- http://nsidc.org/quickfacts/iceshelves.html - from the U.S. National Snow and Ice Data Center
- http://www.cnn.com/2006/TECH/science/12/29/canada.arctic.ap/index.html - CNN story about the Canadian Ayles ice shelf break up in August 2005
- http://ice-glaces.ec.gc.ca/ - from the Canadian Ice Service
|
<urn:uuid:51062eb6-1ed1-4329-aadf-eb77cf3926eb>
|
CC-MAIN-2013-20
|
http://en.wikipedia.org/wiki/Ice_shelf
|
2013-05-21T17:46:31Z
|
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368700264179/warc/CC-MAIN-20130516103104-00001-ip-10-60-113-184.ec2.internal.warc.gz
|
en
| 0.873155
| 1,477
|
Just a quicky on what will probably become a fairly large story:
A difference in the way British and American ships measured the temperature of the ocean during the 1940s may explain why the world appeared to undergo a period of sudden cooling immediately after the Second World War.
The scientists point out that the British measurements were taken by throwing canvas buckets over the side and hauling water up to the deck for temperatures to be measured by immersing a thermometer for several minutes, which would result in a slightly cooler record because of evaporation from the bucket.
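The implied correction amounts to adding back the evaporative heat loss, but only for bucket-measured readings. A toy illustration (the 0.3 °C bias and the sample readings are invented for the example, not values from any real SST dataset):

```python
# Toy illustration of the bucket-bias adjustment idea. The bias magnitude
# and readings below are assumptions for demonstration only.
BUCKET_BIAS_C = 0.3   # assumed evaporative cooling of canvas-bucket readings

readings = [
    ("1944-08", 18.1, "engine_intake"),   # US practice: no bucket cooling
    ("1946-03", 17.2, "canvas_bucket"),   # UK practice: cool-biased
]

# Add the assumed bias back to bucket readings; leave intake readings alone.
adjusted = [(m, t + (BUCKET_BIAS_C if method == "canvas_bucket" else 0.0), method)
            for m, t, method in readings]
print(round(adjusted[1][1], 1))  # -> 17.5
```

Homogenizing the two measurement methods this way removes the artificial post-war "step" without touching the underlying climate signal.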
This finding actually makes the AGW story go more smoothly:
Professor Jones said that the study lends support to the idea that a period of global cooling occurred later during the mid-twentieth century as a result of sulphate aerosols being released during the 1950s with the rise of industrial output. These sulphates tended to cut sunlight, counteracting global warming caused by rising carbon dioxide.
"This finding supports the sulphates argument, because it was bit hard to explain how they could cause the period of cooling from 1945, when industrial production was still relatively low," Professor Jones said.
Although it's perhaps a bit of an embarrassment.
And the weird thing is, Steve McIntyre seems to have got to this one first. Too bad Steve grinds out blog posts rather than writing up a real paper now and again.
Go through the links for details. The James Annan post ( through "a bit of...") is especially good.
|
<urn:uuid:db86bd2a-1385-4036-9b01-d43c6cbef7b1>
|
CC-MAIN-2013-20
|
http://bigcitylib.blogspot.com/2008/05/climate-anomaly-caused-by-buckets-of.html?showComment=1212232320000
|
2013-05-22T21:51:20Z
|
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368702448584/warc/CC-MAIN-20130516110728-00002-ip-10-60-113-184.ec2.internal.warc.gz
|
en
| 0.970522
| 306
|
Every species becomes extinct eventually. Some leave descendants that continue the evolutionary proliferation of life that kicked off on this planet over 3.5 billion years ago, but no parent species is immortal. Life on Earth is in continual flux, with new lineages emerging as others die back.
But what if we could resurrect lost species? And even if we developed the technology to do so, are such efforts wise during a time when the same attention and energy could be applied to preventing extant species from slipping away? This Friday, researchers are going to converge at the TEDX DeExtinction symposium, partnered with National Geographic, to discuss the possibilities and pitfalls of reviving species that have been lost over the past 12,000 years.
The woolly mammoth – the shaggy Ice Age icon that persisted until a scant 3,700 years ago – is probably the most charismatic “deextinction” candidate. For decades now, scientists have been considering how the lost proboscidean might be brought back through cloning, and we’re continually told that the necessary advances to accomplish the task are just around the corner. (Although, much like a Windows software release, the debut of woolly mammoth 2.0 has long been delayed. I’m not optimistic about estimates that we’re only four or five years away from squeeing over the photos of the first cloned baby mammoth.) But the woolly mammoth may be more of a symbolic conversation-starter that has obscured other Lazarus-wannabes, including the Tasmanian tiger, passenger pigeon, Steller’s sea cow, and the Xerces blue butterfly.
These candidate species, the “Revive & Restore” project says, were selected according to three sets of criteria. These requirements run the gamut from the squishy and snuggly – “Is the species missed?” – to matters of technological knowhow and whether the species is “rewildable.” What seems missing, or at least glossed over, are the ecological and ethical implications of reviving these lost species, and the focus on charismatic species has skewed attention towards animals that may not actually be good selections for resurrection.
Just as the woolly mammoth symbolizes the great hope of species revival, the proboscidean also highlights the lack of attention ecology receives in such proposals. The challenge of deextinction is almost always framed in technological terms – can we bring back species? – but what will happen to the animals after they have been recreated has received comparatively little attention.
Let’s say that scientists are able to clone a woolly mammoth within their ambitious five year time frame. Where would such an animal live? The woolly mammoth’s natural habitat – the cold, dry mammoth steppe of the last Ice Age – does not exist anymore. Perhaps there are modern ecological proxies in scattered refugia, but should we really strive to bring back an animal that might only exist in zoos, or may face shrinking habitats in the wake of future climate change?
To bring back a species that no longer has a place in the world would be irresponsible and undercuts the moral imperative that deextinction advocates so often rely on to make their case. Indeed, one of the primary arguments for deextinction is that we must pay penance by restoring animals that previous generations of humans have wiped out, yet we’d only repeat our mistakes if we brought back a species without consideration of the creature’s future survival on a changing planet. Trying to replicate the Ice Age doesn’t make much sense when our species is hurtling the planet towards a greenhouse world.
Smilodon, a sabercat also listed as a top candidate, is an even worse choice. Wildlife specialists in and around Yellowstone National Park have enough trouble trying to get the public to accept the presence of wolves – carnivores that were extirpated from the area within recent history before being reintroduced two decades ago – and conservationists continue to struggle with the persistent conflict between jaguars and ranchers in South America. Can you imagine the uproar over sabertoothed cats being returned to the western United States or South American grasslands? There may not be a country for revived sabercats.
A simplistic argument could be made that Smilodon de nouveau would be necessary to keep cloned mammoths and mastodons in check at some future date, but such a position relies on the assumption that the cat actually hunted the large herbivores. Thanks to geochemical and anatomical evidence, paleontologists have found that Smilodon preferentially targeted camels and bison, not the giant proboscideans of its time. This isn’t just technical nitpicking. If we’re not only going to restore species, but try to recreate communities and interactions from deep time, we must heed the evidence of the fossil and historical record and not just restore species because we think it would be cool to see them.
The Shasta ground sloth might be a better deextinction candidate. Chris Clarke recently made a case for bringing back the trundling herbivore. Thankfully, Clarke totally avoided the guilt trip that deextinction advocates often use to insist that we have a duty to bring an extinct species back, and instead considered how the sloth might resume its role as a seed disperser within imperiled Joshua tree habitats. I’m not entirely convinced that reviving the Shasta ground sloth would be a worthwhile endeavor, especially since we don’t know exactly why the species died out nor whether the sloth would be able to cope with environmental changes that are already underway due to climate change, but I believe Clarke made a far better case for his favorite sloth than woolly mammoth or sabercat advocates have made for their candidates of choice. (And, I must admit, seeing baby sloths cling to the backs of their plodding mothers would be absolutely adorable.)
Of course, the Ice Age megamammals are extreme examples. Most of the candidate animals were wiped out much closer in time. But the same questions still apply. The best candidates for deextinction may not be the biggest, most beautiful, or famous, but species that will be resilient and adaptable to the altered nature of their old haunts and to future ecological fluctuations. More than that, some of the candidate animals might face the same threats to their existence that exterminated them in the first place. Human conflict might be just as bad, if not worse, for revived species, particularly carnivores such as the Tasmanian tiger and sport animals that require populations of staggering size to survive, such as the passenger pigeon.
Conjuring extinct species back into life will require a great deal of care, planning, and management. Is all the effort worth it, especially when conservation efforts worldwide are suffering from a lack of funding?
One way that deextinction advocates could make a stronger case for their projects would be to identify applications to threatened and endangered species that are still living. Perhaps genomic engineering could add variation to populations of animals suffering from the effects of population decline and inbreeding, such as cheetahs. And maybe cloning could help keep a truly critically-endangered species afloat long enough to have a chance to keep adapting and evolving. Some of these techniques are already being used, or at least considered.
Hybridization and careful back-breeding, Carl Zimmer points out, has given the American Chestnut tree a chance at long-term survival. Other techniques might not be so useful. As Ferris Jabr reported in Scientific American yesterday, conservation biologists aren’t optimistic about the prospect of restoring or saving species through cloning. Beyond the technological difficulties, cloning doesn’t address habitat loss, poaching, climate change, and other pressures that have pushed species to the edge of existence. Creating more of a species will not save that organism if it no longer has a place to live. Furthermore, as Stuart Pimm argues in an online National Geographic piece, sexy deextinction projects might distract from more pressing conservation problems that living species face.
I’m not totally against deextinction efforts. Some, such as Clarke’s Shasta ground sloth proposal, may actually have significant benefits for ecosystems that are at risk of deteriorating. But the conversation needs to move beyond charismatic characters and details about technology to the ecological consequences of reviving lost creatures – not only for the species in question, but for the ecosystem it might be reintroduced into and still-living animals that are nearing extinction.
And despite the question posed in National Geographic’s own promotional video for the event, deextinction is not a matter of scientists “playing god.” That’s trite fluff that the film adaptation of Jurassic Park tried to sell audiences through Ian Malcolm’s rambling soliloquies. Our species has driven others to extinction, and is having such a substantial impact on global ecology that the imprint of what we’re doing today will be visible for thousands of years to come. We’re already intervening and rearranging nature, intentionally or not. Once we own that fact, we can start to make decisions about conservation triage and what the future of wildlife might look like. Should resurrected species be part of the future? That’s the question driving this week’s DeExtinction symposium, and I’ll be tweeting and blogging my reaction to the day-long discussion of that critical and controversial place where past and future ecology meet.
[TEDxDeExtinction will be held at the National Geographic Society in Washington, DC this Friday. If you’re in town, you can look into tickets, and anyone can watch a free livestream of the talks on the web. And for a little more background on the methods of deextinction, see this brief news piece I wrote for the National Geographic news site, as well as the National Geographic deextinction hub.]
|
<urn:uuid:028cffd5-7e35-4fac-8871-4419cf75ee09>
|
CC-MAIN-2013-20
|
http://phenomena.nationalgeographic.com/2013/03/12/the-promise-and-pitfalls-of-resurrection-ecology/
|
2013-05-21T18:06:26Z
|
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368700380063/warc/CC-MAIN-20130516103300-00001-ip-10-60-113-184.ec2.internal.warc.gz
|
en
| 0.941626
| 2,047
|
The Cassini spacecraft has been orbiting Saturn since late 2004, and has spent most of that time more or less in the same plane as the rings and moons. That allows it to pass close to these interesting places and see them in high resolution.
But scientists and engineers recently changed that, flinging the probe into a more inclined orbit so that it can see things from a different vantage point, literally getting a new perspective on them. For example, from this tipped path, it was able to clearly see the south pole of Titan, Saturn’s ginormous moon – the second-biggest in the solar system, and bigger than the planet Mercury! And what it saw surprised everyone, and for good reason:
Isn’t that weird looking? Like some kind of bacterium, or a cell. In fact, it is a cell, but not the biological kind. It’s an air cell, a vortex, a mass of air spinning around the pole. Titan has a thick atmosphere (thicker than Earth’s in fact) and it moves. This cell of air rotates once every 9 hours or so, far faster than Titan’s own 16 day spin. Cassini took enough images to make this animation of the vortex’s motion:
Things like this are seen at the poles of other worlds; Saturn itself has one, as does Venus. Titan also has a "hood," a haze layer over its north pole. That may be a seasonal feature, and right now winter is coming for Titan’s southern hemisphere. Perhaps this vortex plays a part in forming the polar hood, and we’ll see one over the south pole soon.
That’s not clear yet, but it may become so as Cassini continues to investigate this incredible system. It’s been there for almost 8 years, and we’ve barely scratched the surface of what’s going on. There’s a whole lot of real estate in the Saturn system, and it changes all the time. We could use 50 Cassinis stationed there, and it still wouldn’t be enough to gather up all the beauty and amazing slices of nature to be seen.
Credits: Video: NASA/JPL-Caltech/Space Science Institute; Music: “Passing Action” by Kevin MacLeod
|
<urn:uuid:fddbdc38-b706-43ed-b97c-ebbdc30e80c0>
|
CC-MAIN-2013-20
|
http://blogs.discovermagazine.com/badastronomy/2012/07/11/titanic-antarctic-vortex-antics/
|
2013-05-22T00:42:52Z
|
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368700958435/warc/CC-MAIN-20130516104238-00051-ip-10-60-113-184.ec2.internal.warc.gz
|
en
| 0.939696
| 474
|
Ceiling from the Palacio de Altamira, Torrijos
(detail of Enriquez coat of arms)
Museum no. 407-1905
Spain was part of the Islamic world for nearly 800 years, so Islamic ornament was prevalent there through the continuity of local traditions, rather than the import of exotic art from the East. After the conquest of Granada in 1492 Islamic styles and techniques remained popular with the ruling elite. Luxurious palaces were created in which the architecture and furnishings were covered with Islamic designs. Sometimes rooms were crowned by magnificent marquetry ceilings like this one. Their construction method as well as their decoration continued an Islamic architectural form which emerged under the Nasrids, who installed many such ceilings in the Alhambra palace.
|
<urn:uuid:373e8a0a-e941-4cd9-b15e-f11e3cac78c5>
|
CC-MAIN-2013-20
|
http://www.vam.ac.uk/users/node/6217
|
2013-05-25T12:35:21Z
|
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368705953421/warc/CC-MAIN-20130516120553-00003-ip-10-60-113-184.ec2.internal.warc.gz
|
en
| 0.970321
| 158
|
“Money politics” has become even more prominent in the U.S. presidential race this year.
In 2010, the U.S. Supreme Court removed the limits on corporate donations to political campaigns and ruled that corporate donations are a protected form of free speech. As a result, this year’s congressional and presidential elections have become the most expensive in U.S. history, with billions of U.S. dollars spent already.
While rich people are throwing loads of money into the presidential election, ordinary Americans are worried about their own financial conditions.
Over the past 20 years, the income of middle-class Americans has been on the decline, and the income gap is becoming increasingly wide.
A poll has found that most Americans believe that too much money has been spent on the elections, and political contributions will only enhance rich people’s influence over policy-making. No matter who is elected the U.S. president, he is bound to pay more attention to the needs of the rich than those of the poor.
Rich people are enjoying greater influence in politics, while the rights of ordinary voters are being damaged, which runs counter to the U.S. constitutional principle of “political equality.”
|
<urn:uuid:1f0155f1-1ebb-46ad-8a99-8ee036b83c02>
|
CC-MAIN-2013-20
|
http://english.peopledaily.com.cn/90777/8007566.html
|
2013-05-22T07:47:21Z
|
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368701459211/warc/CC-MAIN-20130516105059-00002-ip-10-60-113-184.ec2.internal.warc.gz
|
en
| 0.964411
| 315
|
Here's another chance to play geographical detective! This natural-color image from the Multi-angle Imaging SpectroRadiometer (MISR) represents an area of about 372 kilometers x 425 kilometers, and was captured by the instrument's vertical-viewing (nadir) camera in December, 2000. Only some of the following 9 statements about the region shown are true.
Use any reference materials you like to mark each statement true or false:
1. Of the two large smoke plumes rising from fires near image center, one is burning within 10 kilometers of a major gas pipeline.
2. The blue, green, and silver-colored lakes and lagoons, and the white salt-encrusted lakes and marshes that appear throughout the image area, are usually drier during winter and wetter in the summer.
3. Agriculture in this region is devoted primarily to vegetable and fruit production.
4. There are fewer trees and forests in the region today than there were 500 years ago.
5. The fresh waters that feed the silver-colored lakes in the upper-right corner of the image are described as an aid to digestion in a 19th century novel by a French science fiction author.
6. The silver-colored area along the right-hand edge at image center is situated along the boundary of a city that was originally named for its white beaches.
7. In the same year in which this image was acquired, a water contamination event occurred and residents of the aforementioned city were warned not to drink from the municipal water supply.
8. The dark blue lake apparent at left-hand edge of image center is named for its sweet waters and supports year-round commercial and sport fishing.
9. The waters of the river that ends in a large alluvial fan (situated near the right-hand edge below image center), are saltier than the waters of the river below it, which continues to flow beyond the right-hand image edge.
E-mail your answers, name (initials are acceptable if you prefer), and your hometown by the quiz deadline of Tuesday, February 17, 2004, to email@example.com
Answers will be published on the MISR Quiz page. The names and home towns of respondents who answer all questions correctly by the deadline will also be published in the order responses were received. The first 3 people on this list who are not affiliated with NASA, JPL, or MISR and who have not previously won a prize will be sent a print of the image.
A new "Where on Earth...?" mystery appears as the MISR "latest featured image" approximately once every two months. New featured images are released on Wednesdays at noon Pacific time on the MISR home page, http://www-misr.jpl.nasa.gov . The image also appears on the Earth Observatory, http://earthobservatory.nasa.gov/, and on the Atmospheric Sciences Data Center home page, http://eosweb.larc.nasa.gov/, though usually with a several-hour delay.
MISR was built and is managed by NASA's Jet Propulsion Laboratory, Pasadena, CA, for NASA's Office of Earth Science, Washington, DC. The Terra satellite is managed by NASA's Goddard Space Flight Center, Greenbelt, MD. JPL is a division of the California Institute of Technology.
|
<urn:uuid:8cbc50d0-60f7-4e3d-ad09-69b954462aee>
|
CC-MAIN-2013-20
|
http://photojournal.jpl.nasa.gov/catalog/PIA04351,OrigCaption
|
2013-05-22T21:39:12Z
|
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368702448584/warc/CC-MAIN-20130516110728-00000-ip-10-60-113-184.ec2.internal.warc.gz
|
en
| 0.944199
| 707
|
This is a database of the 1885 Territorial Census of New Mexico. The census included all counties in the territory and was partly paid for by funds provided by the Federal Government. The census was taken during two months starting the first Monday in June of 1885. The census contains schedules for population, agriculture, manufactures, and mortality. These records were transferred from the Census Department to the National Archives in 1944.
Like other censuses, this one provides information on who was in a household and how its members were related. It also provides birthplaces for parents and children, as well as estimated ages. The records contain:
- Enumeration district
- Birth month
- Birth year
- Relation to head of household
- Marital status
- Father’s birthplace
- Mother’s birthplace
|
<urn:uuid:6e02c93e-7496-4acf-b7b0-5d505b5572ac>
|
CC-MAIN-2013-20
|
http://search.ancestry.co.uk/search/db.aspx?dbid=1976
|
2013-06-19T14:31:04Z
|
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368708142388/warc/CC-MAIN-20130516124222-00052-ip-10-60-113-184.ec2.internal.warc.gz
|
en
| 0.964516
| 229
|
Worship in the Diocese
Christian worship involves praising God in music and speech, readings from scripture, prayers of various sorts, a sermon, and various holy ceremonies (often called sacraments) such as the Eucharist.
While worship is often thought of only as services in which Christians come together in a group, individual Christians can worship God on their own, and in any place.
Christian worship grew out of Jewish worship.
Jesus Christ was a religious Jew who attended the synagogue and celebrated Jewish festivals, and his disciples were familiar with Jewish ritual and tradition.
The first obvious divergence from Judaism was making Sunday the holy day instead of Saturday. This made the day of Christian worship the same as the day on which Jesus rose from the dead.
Jesus's promise to stay with his followers, fulfilled in the sending of the Holy Spirit, illuminated the development of Christian worship from early times.
God is present
So Christians regard worship not only as something they do for God, but as something in which God, through Jesus's example and the presence of the Holy Spirit, is also at work.
The Eucharist and the Word
Church services on a Sunday divide into two general types: Eucharistic services and services of the Word.
Both types of service will include hymns, readings and prayers.
The Eucharistic service will be focused on the act of Holy Communion.
The service of the Word does not include this rite, but instead features a much longer sermon, in which the preacher will speak at length to expound a biblical text and bring out its relevance to those present.
Different churches, even within the same denomination, will use very different styles of worship. Some will be elaborate, with a choir singing difficult music, others will hand the music over to the congregation, who sing simpler hymns or worship songs.
Some churches leave much of the action to the minister, while others encourage great congregational participation.
(Of course all churches encourage the full participation of the congregation in praising God with heart, mind, and soul, but some churches give the congregation more physical participation.)
|
<urn:uuid:c1af4a52-5f86-4544-b45a-bb05fd478823>
|
CC-MAIN-2013-20
|
http://www.sodorandman.im/worship_in_the_diocese
|
2013-05-22T07:34:54Z
|
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368701459211/warc/CC-MAIN-20130516105059-00050-ip-10-60-113-184.ec2.internal.warc.gz
|
en
| 0.963755
| 435
|
Flag It (Vocabulary) Poster & 30 bookmarks
Grades K - 5. Build vocabulary during reading time. Good readers acknowledge words they do not know and do what it takes to find the meaning so they can continue reading. Classroom poster is great for guided reading. Independent readers use one of 30 bookmarks and clips to mark the location of the challenging word.
|
<urn:uuid:dea2210d-0ab5-4e24-a290-ed381e53c17f>
|
CC-MAIN-2013-20
|
http://www.kaplantoys.com/store/trans/productDetailForm.asp?CatID=product&PID=14368
|
2013-05-25T12:48:47Z
|
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368705953421/warc/CC-MAIN-20130516120553-00001-ip-10-60-113-184.ec2.internal.warc.gz
|
en
| 0.925093
| 76
|
By Craig Mason, Principal, AIA, LEED AP and
Lisa Johnson, Principal, AIA, LEED AP
When Green River Community College chose to build the new Marv Nelson Science Learning Center, its vision was to create a building that supports and celebrates science education and sustainability. This presented the design team with a distinct challenge: Science lab spaces involve stringent health and safety requirements that often increase energy needs.
The resulting design combined simple building organization ideas with innovative systems that allow sustainable processes to occur and to be expressed.
The Green River campus envelops visitors in woodland. In designing the new center, the project team determined that the building should complement that campus character.
The building is arranged in three stories, minimizing the footprint and thus its impact on the site. The building plan places labs at the center of the building, bookended by two wings of classroom, administrative and informal gathering space.
Co-locating the labs at the core allowed the team to stack them vertically in ascending orders of complexity. The simple, elegant organization of the building allowed for the sharing of utilities systems between the labs, and the more efficient and effective operation of those systems.
In strategizing design solutions for the laboratories, the team implemented Labs 21 recommendations and best practices related to energy consumption and system safety. Labs 21 is a voluntary partnership program sponsored by the U.S. Department of Energy, U.S. Environmental Protection Agency and the International Institute for Sustainable Laboratories. Puget Sound Energy awarded a grant for beating local energy code requirements by 30 percent.
The lab ventilation system, which uses a heat recovery system for all fume hood exhaust, captures heat while exhausting pollutants, and works as either a heat source or heat sink as conditions warrant. For user safety and comfort, science labs typically require eight to 12 air changes per hour. For comparison, consider that an office typically processes one to three air changes an hour. Using variable air volume and space occupancy sensors, the system reduces air changes to six per hour for further energy savings.
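The energy stakes behind air-change rates are easy to estimate: required airflow scales directly with room volume and changes per hour. As an illustrative sketch (the room volume and rates here are hypothetical examples, not Green River's actual figures):

```python
def airflow_m3_per_hr(room_volume_m3: float, air_changes_per_hour: float) -> float:
    """Volumetric airflow the ventilation system must move to achieve
    a given air-change rate in a room of the given volume."""
    return room_volume_m3 * air_changes_per_hour

# Hypothetical 250 m^3 lab: compare a constant 12 ACH design with the
# sensor-driven 6 ACH minimum used when the space is unoccupied.
full_rate = airflow_m3_per_hr(250, 12)  # 3000 m^3/hr
reduced = airflow_m3_per_hr(250, 6)     # 1500 m^3/hr
savings = 1 - reduced / full_rate       # 0.5 -- half the conditioned air
```

Because fan power rises steeply (roughly with the cube of airflow), halving the air-change rate during unoccupied hours can cut fan energy by far more than half.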
Additional energy savings were gained by the building's co-location with Green River's existing technology center. This adjacency allowed both buildings to share mechanical systems, including high-efficiency boilers.
Fitting Into Surroundings
The design of the building allows it to nestle into the forested campus. The smaller footprint allowed for the preservation of all trees on the site, and maximizes site permeability and natural water cycling. Detention ponds were crafted as rain gardens, while additional plantings included native and drought-tolerant species to minimize water demand and the use of maintenance chemicals.
Seen from the outside, the building appears to grow from the ground, with the two bookend wings reaching out in an embrace of the landscape. From the inside, users experience direct connection to the campus. Extensive glazing provides views of the natural surroundings and abundant natural light. Solar shading is used to control glare and to darken rooms when necessary for instruction.
The design integrates durable materials appropriate to science lab conditions, using regionally available products, low-emitting materials and recycled materials: Marmoleum and concrete flooring, composite board wainscoting and accents, and tack surfaces composed of recycled rubber for display use.
Green River stakeholders wanted visitors to engage with science from the moment they walked through the door, and to allow the building to convey the essence of its function within the campus. Working with faculty to brainstorm translatable scientific principles, the designers crafted elements that were both identifiable scientific concepts and enriching design features:
- Newton's second law of motion: Horizontal paneling on an atrium wall communicates acceleration due to gravity
- Geological strata: Fiberboard panels on a corridor wall were patterned in varying textures mimicking the complex geological strata of the Grand Canyon
- Weather, seismography and cosmic rays: Sensors incorporated into the building for various programs also provide data, displayed as digital readouts on flatscreen monitors located in communal atriums
- Greenhouse: The third-floor greenhouse reflects the integration of the associated sciences
- Green touchscreen: The system gathers energy consumption data (water, gas, electricity) and displays the data on kiosks located throughout the building
Honoring Marv Nelson
Green River named its new science center in honor of retired physics instructor Marv Nelson, who, over his more than 30-year tenure there, challenged conventional educational practices and worked with other faculty to provide interdisciplinary education. At the building dedication, Nelson humbly stated that the most important words in the name of the building were "Science Learning Center."
In every aspect, this building aims to live up to his collaborative spirit and passion for the sciences, and will serve as an effective, sustainable tool in providing a first-rate education to Green River students.
# # #
Published in Daily Journal of Commerce (Seattle, Wash.), August 28, 2008
|
<urn:uuid:8f5e74bf-0a5e-4bbe-a0f0-a7b378e48d3e>
|
CC-MAIN-2013-20
|
http://dlrgroup.com/?p=4.2.13
|
2013-05-21T10:14:27Z
|
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368699881956/warc/CC-MAIN-20130516102441-00002-ip-10-60-113-184.ec2.internal.warc.gz
|
en
| 0.926233
| 998
|
Chalk one up for the supercapacitor side in the ongoing debate about the role of batteries versus supercapacitors for serious energy storage applications. Energy storage is seen as not only critical for the adoption of electrical vehicles but also for helping with people get past the intermittent nature of alternative energy technologies such as wind or solar.
Atomic resolution electron micrograph of activated graphene. The images show that the material is composed of single sheets of crystalline carbon, which are highly curved to form a three-dimensional porous network.
Scientists at the Brookhaven National Laboratory have discovered that adding a form of "activated graphene" to supercapacitors enhances their ability to soak up energy while preserving their ability to charge and release energy quickly. The material was developed by researchers at the University of Texas, Austin, and it is the subject of a paper published in the May 2011 issue of Science. The reason this is interesting and relevant is because supercapacitors typically haven't been able to hold as much charge as battery alternatives.
Said Eric Stach, who is one of the co-authors of the paper:
"Those properties make this new form of carbon particularly attractive for meeting electrical energy storage needs that also require a quick release of energy -- for instance in electric vehicles or to smooth out power availability from intermittent energy sources, such as wind and solar power."
The material in question is a more porous form of carbon (yep, carbon), made by using potassium hydroxide to restructure graphene platelets at the nanoscale and "activate" them. This kind of activated carbon is typically used in filters or in supercapacitor applications.
You can read more about the potential of graphene as a green technology in some posts over at ZDNet sister site, SmartPlanet:
- Graphene supes up supercapacitors
- DARPA funds $17 million for new ultracapacitor
- Graphene: a hot new material for keeping electronics cool
- Graphene goes green
|
<urn:uuid:5b98de3c-4194-4c07-b53a-e2906418996c>
|
CC-MAIN-2013-20
|
http://www.zdnet.com/blog/green/researchers-new-supercapacitor-acts-like-energy-sponge/17389
|
2013-05-22T01:00:17Z
|
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368700984410/warc/CC-MAIN-20130516104304-00001-ip-10-60-113-184.ec2.internal.warc.gz
|
en
| 0.943349
| 416
|
Ralph Hartley: Biography
Died: 1 May 1970
Ralph V. L. Hartley, inventor of the electronic oscillator circuit that bears his name, was born in Spruce, Nevada, on 30 November 1888. He graduated with the A.B. degree from the University of Utah in 1909. As a Rhodes Scholar, he received the B.A. degree in 1912 and the B.Sc. degree in 1913 from Oxford University.
Upon returning from England, Hartley joined the Research Laboratory of the Western Electric Company and was given charge of radio-receiver development for the Bell System's transatlantic radiotelephone tests of 1915. He invented his oscillating circuit during that time and also invented a neutralizing circuit to eliminate triode singing resulting from internal coupling.
During World War I, Hartley worked out the principles that led to the development of sound-type directional finders. After the war, he worked at Western Electric and later at the Bell Laboratories, doing research on repeaters, and voice and carrier transmission. During this period he formulated the law "that the total amount of information that can be transmitted is proportional to frequency range transmitted and the time of the transmission."
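Hartley's law can be put in quantitative form: a channel of bandwidth B supports at most 2B symbols per second, so with M distinguishable signal levels the achievable rate is R = 2B log2(M) bits per second, growing in proportion to both bandwidth and transmission time. A minimal sketch (the channel figures below are hypothetical examples, not from Hartley's own work):

```python
from math import log2

def hartley_rate(bandwidth_hz: float, levels: int) -> float:
    """Maximum bit rate R = 2*B*log2(M) for a noiseless channel of
    bandwidth B using M distinguishable signal levels (Hartley's law)."""
    return 2 * bandwidth_hz * log2(levels)

# A hypothetical 3100 Hz voice-band channel with 4 signal levels:
rate = hartley_rate(3100, 4)  # 12400.0 bits per second
```

Shannon later extended this relationship to noisy channels, replacing the fixed level count M with a limit set by the signal-to-noise ratio.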
Illness kept Hartley away from research for about ten years, but in 1939 he returned to Bell Labs as a consultant, and during World War II he was particularly involved with servo problems. He retired from Bell Labs in 1950.
Hartley was awarded the IRE Medal of Honor in 1946 "For his early work on oscillating circuits employing triode tubes and likewise for his early recognition and clear exposition of the fundamental relationship between the total amount of information which may be transmitted over a transmission system of limited band-width and the time required." He was a Fellow of the American Association for the Advancement of Science. He died on 1 May 1970 at the age of 81.
The most important feature in eddy current testing is the way in which the eddy currents are induced and detected in the material under test. This depends on the design of the probe. As discussed in the previous pages, probes can contain one or more coils, a core and shielding. All have an important effect on the probe, but the coil requires the most design consideration.
A coil consists of a length of wire wound in a helical manner around the length of a former. The main purpose of the former is to provide a sufficient amount of rigidity in the coil to prevent distortion. Formers used for coils with diameters greater than a few millimeters (i.e. encircling and pancake coils) generally take the form of tubes or rings made from dielectric materials. Small-diameter coils are usually wound directly onto a solid former.
The region inside the former is called the core, which can consist of either a solid material or just air. When the core is air or a nonconductive material, the probe is often referred to as an air-core probe. Some coils are wound around a ferrite core, which concentrates the coil's magnetic field into a smaller area. These coils are referred to as "loaded" coils.
The wire used in an eddy current probe is typically made from copper or another nonferrous metal to avoid magnetic hysteresis effects. The winding usually has more than one layer so as to increase the value of inductance for a given length of coil. The higher the inductance (L) of a coil at a given frequency, the greater the sensitivity of the eddy current test.
It is essential that the current through the coil is as low as possible. Too high a current may produce:
- a rise in temperature, hence an expansion of the coil, which increases the value of L.
- magnetic hysteresis, which is small but detectable when a ferrite core is used.
The simplest type of probe is the single-coil probe, which is in widespread use. The self-inductance of a simple probe design depends on its inner and outer diameters, length, number of turns and wire diameter. The value of L is given by:
L = K n² π [ (r_o² − r_c²) + µ_r r_c² ] µ_0 / l

where:
- r_o is the mean radius of the coil.
- r_c is the radius of the core.
- l is the length of the coil.
- n is the number of turns.
- µ_r is the relative magnetic permeability of the core.
- µ_0 is the permeability of free space (i.e. 4π × 10⁻⁷ H/m).
- K is a dimensionless constant characteristic of the length and the external and internal radii.
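As a quick numerical illustration of this formula, here is a short Python sketch. The coil dimensions and the shape factor K = 1 are assumptions chosen for illustration, not values from the source:

```python
import math

MU_0 = 4 * math.pi * 1e-7  # permeability of free space, H/m

def coil_inductance(K, n, r_o, r_c, mu_r, length):
    """L = K n^2 pi [ (r_o^2 - r_c^2) + mu_r * r_c^2 ] mu_0 / l
    All dimensions in meters; returns inductance in henries."""
    return K * n**2 * math.pi * ((r_o**2 - r_c**2) + mu_r * r_c**2) * MU_0 / length

# Air-core coil: 100 turns, 5 mm mean radius, 10 mm long, K assumed to be 1.
L_air = coil_inductance(K=1.0, n=100, r_o=5e-3, r_c=0.0, mu_r=1.0, length=10e-3)

# The same winding on a ferrite core (2.5 mm radius, mu_r assumed ~2000):
L_ferrite = coil_inductance(K=1.0, n=100, r_o=5e-3, r_c=2.5e-3, mu_r=2000.0, length=10e-3)

print(f"air core:     {L_air * 1e6:.1f} uH")
print(f"ferrite core: {L_ferrite * 1e6:.1f} uH")  # far larger: the core concentrates the field
```

The comparison shows why ferrite-loaded coils are attractive: the µ_r term dominates the bracketed sum, raising the inductance, and with it the sensitivity, by orders of magnitude for the same winding.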
All things considered, we're pretty happy that life evolved into multicellular organisms — but what was the advantage to this shift? The answer may lie in brewer's yeast. Beyond just giving us the wonderful liquid that is beer, Saccharomyces cerevisiae has an interesting feature: it's far more efficient at certain tasks when in groups. Could yeast social networking really be the origin of our complicated bodies, with their trillions of cells?
Researchers put the yeast through its paces in a very straightforward task: breaking down and transporting sugar, so that it could eat, survive and multiply. In order to devour sucrose, the yeast has to split it into glucose and fructose, and then get it to the cell membrane where it can be absorbed. The problem is that both of these activities are incredibly inefficient for a single-celled organism on its own — the researchers calculated that a single cell would only capture 1% of the sugar it broke down.
However, once the yeast cells started gathering in clumps, the efficiency improved dramatically. Because sugar was being broken down all over the place, it was much easier for each yeast cell to absorb the stuff. This allowed a higher chance for all cells to grow and divide. It's a possible explanation for the evolutionary advantage of cells grouping with others, rather than hanging around on their own.
Disunion follows the Civil War as it unfolded.
It was nearing midnight of New Year’s Eve, 1862, when the men of the Fourth Texas Cavalry began their dash down the final stretch of the 10,000-foot railroad bridge that connected the Texas mainland to their target: the island city of Galveston, currently under the control of Union forces.
There was no obstacle they hadn’t faced in their bid to dislodge the Union invasion force. Though the tracks had been covered with wooden planks for ease and quiet, the stubborn pack mules refused to set foot on the bridge. By hand, troops had to haul 6 pieces of siege artillery, 14 pieces of field artillery and a railroad ram across the bridge under a full moon that illuminated every move of their surprise attack. To top it off, their ships making for the harbor were outnumbered three to one.
The Confederate battle plan about to unfold owed its audacity to one man: Maj. Gen. John Bankhead Magruder, a first-rate soldier and one of the most important and most misunderstood military commanders of the Civil War. The man who actually won the first battle of the entire war, Magruder had just arrived in Texas to replace a generally incompetent commander and face a Union invasion force that had begun to do what President Abraham Lincoln had so badly wanted to accomplish: occupy Texas and cleave it from the Confederacy.
(ARA) - This year's tick population, including the increased number of ticks throughout the Mid-Atlantic, has a somewhat surprising cause ... acorns.
Oak trees produced an extremely high number of acorns in 2010, which led to an increase in the white-footed mouse population in 2011. In turn, the deer tick (or black-legged tick), had ample supply of its preferred food source. As a result, you may spot more of the most common tick in the Mid-Atlantic in your backyard.
Ranging from the size of a sesame seed to 5/8-inch long, most ticks are ectoparasites, or parasites that live on the surface of their host. The deer tick goes through three life stages - larva, nymph and adult - requiring a blood meal during each stage. Typically, ticks feed on wildlife where they can come into contact with dangerous bacteria, including Lyme disease and Rocky Mountain spotted fever. The bacteria may be transferred to humans through tick bites.
"In most cases, a tick must be attached to your body for 24 to 36 hours to transmit disease. As a result, prevention and early detection are critical," says Phil Pierce, entomologist and technical services manager for Western Pest Services. "It is important to always check yourself, children and pets promptly after spending time outdoors in wooded or grassy areas and to take steps to limit your exposure to these blood-sucking pests."
Pierce recommends the following tips to help you avoid ticks when outdoors:
* Wear long-sleeved shirts and long pants when working outside near woodlands, fields and areas with shrubbery and tall grass.
* Choose light-colored clothing so it's easier to identify any ticks on your body, and tuck pants into socks or boots to prevent ticks from crawling into pant legs.
* Apply an EPA-approved insect repellant on clothing and exposed skin near potential entrance areas (pants cuff, shirt cuff, collar and around socks). You can also purchase clothing treated with materials that repel and control ticks.
Ticks generally do not infest areas that are well maintained. To help control tick populations around the home, keep vegetation in the yard trimmed, especially along the edges of your property.
Should you encounter ticks, it is best to remove them with fine point tweezers. Grasp the tick as close to the point of the bite as possible. Gently, but firmly, lift the tick at the head with tweezers. Avoid using rubbing alcohol, nail polish, hot matches, petroleum jelly or other items to remove ticks as these may startle them, causing them to regurgitate and possibly infect you with disease or bacteria.
"Ticks are a year-round pest, so we expect residents will continue to encounter this pest into the fall," adds Pierce.
Contact your local pest management professional should you suspect tick activity in or around your home. Experts also recommend consulting your doctor should you notice an attached tick lodged onto your body, as well as working with your veterinarian to make sure your pets are protected.
Bullies have a need to dominate others. They tend to be aggressive toward adults; they are quick-tempered, impulsive and intolerant of frustration; they tend to be coldhearted and unfeeling toward their victims; they find it difficult to follow rules; they are good at talking themselves out of getting into trouble; and have a favorable view of violence.
Bullying involves isolation, humiliation and persecution. Eighty-five percent of bullying occurs in front of people. By publicly dominating the victim and demonstrating a victim's lack of social support, the bully establishes a "right" to torture the victim. Once that happens, it reduces the chance that anyone will step forward and help the victim.
Perhaps because the bully senses that most onlookers don't like what he or she is doing, bullies can be stopped pretty much in the act. If just one person speaks up for the victim, most incidents end quickly.
Young people shouldn't take that as a recommendation to physically challenge a bully; bullies are generally short-tempered, mean and accompanied by friends who act as henchmen. However, if you see someone being bullied, diplomatically urging the bully to "cool it" and leading the victim away may end the incident. But even that approach can be risky. It is best to find an adult to step in. Drugs and/or alcohol can be involved with hostile behavior, what some call "whiskey muscles."
More than 80 percent of students say that watching bullying makes them uncomfortable. However, 54 percent admit they reinforce the bullying by passively watching, and 21 percent of the remainder say they actively participate. Only 1 student in 4 tries to help the victim.
Since a bully needs an audience, watching is almost as bad as joining in. If you see a bullying incident, find an adult to break it up. That doesn't make you a tattletale. Caring enough to look out for someone else takes guts, and it's the right thing to do.
Maddox is president and founder of Drug and Alcohol Presentations Inc. of Charleston.
This name is given to the convention of the 26th Messidor, year IX (July 16, 1801), whereby Pope Pius VII and Bonaparte, First Consul, re-established the Catholic Church in France. Bonaparte understood that the restoration of religious peace was above all things necessary for the peace of the country. The hostility of the Vendeans to the new state of affairs which resulted from the Revolution was due chiefly to the fact that their Catholic consciences were outraged by the Revolutionary laws. Of the 136 sees of ancient France a certain number had lost their titulars by death; the titulars of many others had been forced to emigrate. In Paris the Cathedral of Notre-Dame and the church of St-Sulpice were in the possession of "constitutional" clergy; Royer, a "constitutional" bishop, had taken the place of Mgr. de Juigné, the lawful Archbishop of Paris, an émigré; even in the churches which the Catholics had recovered, the rites of the "Theophilanthropists" and those of the "Decadi" were also celebrated. The nation suffered from this religious anarchy, and the wishes of the people coincided with Bonaparte's projected policy to restore the Catholic Church and Catholic worship to their normal condition in France.
On the 25th of June, 1800, Bonaparte, after his victory at Marengo, passed through Vercelli, where he paid a visit to Cardinal Martiniana, bishop of that city. He asked that prelate to go to Rome and inform Pius VII that Bonaparte wished to make him a present of thirty million French Catholics; that the first consul desired to reorganize the French dioceses, while lessening their number; that the émigré bishops should be induced to resign their sees; that France should have a new clergy untrammelled by past political conditions; that the pope's spiritual jurisdiction in France should be restored. Martiniana faithfully reported these words to Pius VII. It was only a few months before that Pius VI had died at Valence, a prisoner of revolutionary France. Pius VII, when elected at Venice, had announced his accession to the legitimate government of Louis XVIII, not to that of the Republic; and now Bonaparte, the representative of this de facto government, was making overtures of peace to the Holy See on the very morrow of his great victory. His action naturally caused the greatest surprise at Rome. The difficulties in the way, however, were very serious. They arose, chiefly;
(1) from the susceptibilities of the émigré bishops, from the future Louis XVIII, and from Cardinal Maury, who was suspicious of any attempt at reconciliation between the Roman Church and the new France;
The distinctive mark of the negotiations, taken as a whole, is the fact that the French bishops, whether still abroad or returned to their own country, had no heart whatever in them. The concordat as finally arranged practically ignored their existence.
Spina, titular Archbishop of Corinth, accompanied by Caselli, General of the Servites, arrived in Paris, on 5 November, 1800. Bernier, who had been parish priest of Saint-Laud, at Angers, and famous for the part he had played in the wars of La Vendée, was instructed by Bonaparte to confer with Spina. Four proposals for a concordat were submitted in turn to the pope's representative, who felt that he had no right to sign them without referring them to the Holy See. Finally, after numerous delays, for which Talleyrand was responsible, a fifth proposal, written by Napoleon himself, was brought to Rome, on 10 March, by the courier Palmoni.
Cacault, member of the Corps Legislatif, appointed as minister plenipotentiary to the Holy See, reached Rome on 8 April, 1801. He had received instructions from Napoleon to treat the pope as if he had 200,000 men. He was a good Christian, and anxious to bring the work of the concordat to a successful issue. What Bonaparte wished, however, was the immediate acceptance by Rome of his plan of the concordat; on the other hand, the cardinals to whom Pius VII had submitted it took two months to study it. On 12 May, 1801, the very day on which Napoleon, at Malmaison, was complaining to Spina of the slowness of the Holy See, the cardinals to whom the proposed concordat had been submitted sent yet another proposal to Paris. But, before this last proposal had reached its destination, Cacault received an ultimatum from Talleyrand, to the effect that he must leave Rome if, after an interval of five days, the concordat proposed by Bonaparte had not been signed by Pius VII. All might, even then, have been broken off, had the situation not been saved by Cacault. He left Rome, leaving his secretary Artaud there, but suggested to the Holy See the idea of sending Consalvi himself, Secretary of State to Pius VII, to treat with Bonaparte. On 6 June, 1801, Artaud and Consalvi left Rome in the same carriage.
Consalvi, after an audience with Bonaparte, discussed the various points of the proposed concordat with Bernier, and on 12 July they had reached an agreement. Bonaparte thereupon instructed his brother Joseph, Cretet, councillor of state, and Bernier to sign the concordat with Consalvi, Spina, and Caselli. During the day of the 13th, Bernier sent Consalvi a minute, adding: "Here is what they will propose to you at first; read it well, examine everything, despair of nothing." Between this minute and the proposal concerning which Consalvi and Bernier had come to the agreement of the day before, there were certain remarkable differences with regard to the publicity of worship; a clause relative to married priests, and always rejected by Consalvi, was inserted; the clauses relating to seminaries, to chapters, and that of the profession of the Catholic Faith by the consuls, to which the Holy See attached great importance were suppressed. Consalvi received the impression he expresses it in his "Memoirs", written in 1812 that the French Government intended to deceive him by substituting a fresh text for the text he had accepted; and d'Haussonville, in his book, "The Roman Church and the First Empire", has formally impugned the good faith of Bonaparte's representatives. Bernier's aforementioned note of 13 July, recently discovered by Cardinal Mathieu, asking Consalvi to "read" and "examine" carefully, proves that the French Government did not intend any deception; nevertheless, the presentation of this new draft reopened the whole question. Talleyrand had taken the initiative in this matter; for twenty consecutive hours Bonaparte's three plenipotentiaries and those of the Holy See carried on their discussion. The plan on which they finally agreed was thrown into the fire by Bonaparte, who that evening, at dinner, gave way to a violent fit of anger against Consalvi. 
Finally, on 15 July, a conference of twelve hours ended in a definite agreement; on the 16th Bonaparte approved of it. Pius VII, on his part, after consultation with the cardinals, sanctioned this arrangement, 11 August; on 10 September the signatures were exchanged, and on 18 April, 1802, Bonaparte caused the publication of the concordat and the reconciliation of France with the Church to be solemnly celebrated in the cathedral of Notre-Dame at Paris.
The French Government by the concordat recognized the Catholic religion as the religion of the great majority of Frenchmen. The phrase was no longer as in former times, the religion of the State. But it was a question of a personal profession of Catholicism on the part of the Consuls of the Republic. The Holy See had insisted on this mention, and it was only on this condition that the pope agreed to grant to the State police power in the matter of public worship. This question had been one of the most troublesome that arose during the course of the deliberations. In the matter of these police powers it had been agreed after many difficulties that the following should stand as Article I of the concordat: "The Catholic, Apostolic and Roman Religion shall be freely exercised in France. Its worship shall be public while conforming to such police regulations as the government shall consider necessary to public tranquillity." The pope agreed to a fresh circumscription of the French dioceses. When this subsequently took place, of the 136 sees only 60 were retained. The pope promised to inform the actual titulars of the dioceses that he should expect from them every sacrifice, even that of their sees.
According to Articles 4 and 5 the French Government was to present the new bishops, but the pope was to give them canonical institution. (See PRESENTATION; CANONICAL INSTITUTION; NOMINATION.) The bishops were to appoint as parish priests such persons only as were acceptable to the Government (Art. 9); the latter, in turn, stipulated that such churches as had not been alienated, and were necessary for worship, would be placed "at the disposition" of the bishops (Art. 12).
The Church agreed not to trouble the consciences of those citizens who, during the Revolution, had become possessed of ecclesiastical property (Art. 13); on the other hand the Government promised the bishops and parish priests a fitting maintenance (sustentationem, Art. 14).
Such were the principal stipulations of the concordat. Certain of its articles have been fully discussed, particularly by canonists and jurists, notably Articles 5, 12, and 14, relating to the nomination of bishops, the use of churches, and the maintenance of the clergy. Moreover, the law known as "The Organic Articles" (see THE ORGANIC ARTICLES), promulgated in April, 1802, and always upheld by later French governments in spite of the protest of the pope, made immediately after its publication, has in various ways infringed on the spirit of the concordat and given rise during the nineteenth century to frequent disputes between Church and State in France.
The concordat, notwithstanding the addition of the Organic Articles, must be credited with having restored peace to the consciences of the French people on the very morrow of the Revolution. To it also was due the reorganization of Catholicism in France, under the protection of the Holy See. It was also of great moment in the history of the Church. Only a few years after Josephinism and Febronianism had disputed the pope's rights to govern the Church, the Papacy and the Revolution, in the persons of Pius VII and Napoleon, came to an understanding which gave France a new episcopate and marked the final defeat of Gallicanism.
The French law of 9 December, 1905, on the Separation of Church and State, against which Pius X protested in his Allocution of 11 December, 1905, was based on the principle that the State of France should no longer recognize the Catholic Church, but only distinct associations cultuelles, i.e. associations formed in each parish for the purpose of worship "in accordance with the rules governing the organization of worship in general". In case of the non-formation of such associations destined to take over the property, real and personal, of the churches or fabriques (see ECCLESIASTICAL BUILDINGS; FABRICA ECCLESIÆ), this property was to be forever lost to the Church and to be turned over by decree to the charitable establishments of the respective communes. By the Encyclical "Gravissimo Officii", of 10 August, 1906, the pope forbade the formation of these associations cultuelles or associations for worship. Rome feared that they would furnish the State with a pretext for interfering with the internal life of the Church, and would offer to the laity a constant temptation to control the religious life of the parish. Thereupon, the State applied strictly the aforementioned law, considered the fabriques, i.e. the hitherto legally-recognized churches, as no longer existing, and, in the absence of associations cultuelles to take up their inheritance, gave over all their property to charitable establishments (établissements de bienfaisance). Exception was made for the church edifices actually used for worship; at the same time nothing was done concerning the numberless legal questions that arise apropos of these edifices, e.g. right of ownership, right of use, repairs, etc. At the present writing, therefore (end of 1908), the Church of France, stripped of all her property, is barely tolerated in her religious edifices, and has only a precarious enjoyment of them. 
On the other hand, since ecclesiastical authority has forbidden the only kind of corporations (associations cultuelles) which the State recognizes as authorized to collect funds for purposes of worship, the Church has no means of putting together in a legal and regular way such funds or capital as may be required for the ordinary needs of public worship. Thus the churches of France live from day to day; neither the parish nor the diocese can own any fund, however small, which the parish priest or the bishop is free to hand down to his successors; all this because the State stubbornly insists that only the above-described associations cultuelles (which it knows are impossible for French Catholics) shall be clothed with the right of ownership for purposes of worship. Though the present condition is necessarily a transitory one, it appears, unfortunately, to offer one permanent element, i.e. the certain loss of all the property once belonging to the fabriques. The worst enemies of the French clergy must admit that, in order to safeguard its principles, the Church which they accuse of avarice has sacrificed without hesitation all its temporal goods. (See CONCORDAT; FRANCE; ERCOLE CONSALVI; PIUS VII; NAPOLEON BONAPARTE.)
SECHÉ, Les origines du Concordat (2 vols., Paris, 1894); SICARD, L'Ancien clergé de France (Paris, 1903), III; GOYAU, Les origines populaires du Concordat in Autour du catholicisme social (Paris, 1906); LANZAC LABORIE, Paris sous Napoléon (Paris,1905 and 1907); BOULAY DE LA MEURTHE, Documents sur la négociation du Concordat (Paris, 1891-97); MATHIEU, Le Concordat de 1801 (Paris, 1903); RINIERI, La diplomatie Pontificale au XIXe siècle; Le Concordat entre Pie VII et le Premier Consul, tr. into Fr. by VERDIER (Paris, 1903). The last two works have really given an entirely new version of the history of the third phase of the negotiations, thanks to the fresh documents unknown to former historians, D'HAUSSONVILLE, CRÉTINEAU-JOLY, and THEINER. OLLIVIER, Nouveau manuel de droit ecclésiastique français (Paris, 1886); CROUZIL, Le Concordat de 1801 (Paris, 1904); BAUDRILLART, Quatre cents ans de Concordat (Paris, 1905); DE BROGLIE, Le Concordat (Paris, 1893); PERRAUD, La discussion concordataire (Paris, 1892); SÉVESTRE, Le Concordat (2d ed., Paris, 1906), the best documentary work. D'HAUSSONVILLE, Après la séparation (Paris, 1906); GABRIEL AUBRAY, La solution libératrice (Paris 1906); JENOUVRIER, Exposé de la situation légale de l'église en France (Paris, 1906); LAMARZELLE ET TAUDIÈRE, Commentaire de la loi du 9 Décembre, 1905 (Paris 1906); see also HOGAN, Church and State in France in Am. Cath. Quart. Rev. (1892), 333 sqq.; PARSONS, The Third French Republic as a Persecutor of the Church, ibid. (1899), 1 sqq.; BODLEY, The Church in France (London, 1906).
APA citation. (1908). The French Concordat of 1801. In The Catholic Encyclopedia. New York: Robert Appleton Company. http://www.newadvent.org/cathen/04204a.htm
Transcription. This article was transcribed for New Advent by Douglas J. Potter. Dedicated to the Sacred Heart of Jesus Christ.
Ecclesiastical approbation. Nihil Obstat. Remy Lafort, Censor. Imprimatur. +John M. Farley, Archbishop of New York.
Comments let you have a conversation about something you're working on. Comment threads, called discussions, help you keep track of comments, address your comments to specific people, and respond to and follow comments from your email inbox.
Comments are a handy way of adding notes to your documents, spreadsheets, and presentations that are visible to viewers and collaborators. These can be invaluable for communicating with collaborators about specific parts of the document, as well as making notes about changes you've made or would like to make.
To add a comment, follow these instructions:
- Highlight or select the text, object, or spreadsheet cell you'd like to comment on. If you're working with a presentation, you can highlight an entire slide by selecting it from the list of slides on the left.
- From the Insert menu, select Comment. You can also use the keyboard shortcut Ctrl + Alt + M (Cmd + Option + M on a Mac) to insert a comment.
- Type your comment in the box that appears to the right of the document.
If you'd like to address your comment to a specific person, type a plus sign followed by their email address, like this: +email@example.com. That person will receive an email with your comment.
Comments in spreadsheets
When working with spreadsheets, you can comment only on one cell at a time. Spreadsheet cells with comments are indicated by a yellow triangle in the cell's top-right corner.
To see all of a sheet's comments, click on the comment icon on the sheet's tab.
In addition to comments, you can also leave notes on individual cells. While comments are great for conversations, notes are useful for adding annotations that don't require a back-and-forth discussion. To add a note, select a cell, click the Insert menu, and select Note. Cells with notes are indicated by a black triangle in the cell's top-right corner.
Working with Comments
After inserting a comment, there are two main places you can work with it — within the yellow comment box, or from within the discussions thread, which you can access by clicking the Comments button in the top right-hand corner of your browser window.
You can reply to a comment with a new post, edit or delete a previous comment you’ve inserted, and resolve the discussion when you’re ready to remove it. Resolving a discussion removes the discussion from your document, spreadsheet, or presentation, but resolved threads will always be available under Comments in the right-hand corner of your document.
From the discussion thread, you can review all discussions, including those discussions that have been resolved. From this menu, you can also change the notification settings for discussions.
Sometimes it can be tedious to scan through all the comments you and your collaborators have made on an item. If you open the discussion thread with the Comments button, you can keep track of all of an item's discussions, including those that have already been resolved. In this view, you can also make comments about the entire document, spreadsheet, or presentation.
|
<urn:uuid:5911aea9-e9af-4a84-ba7d-7cf23e1ada3b>
|
CC-MAIN-2013-20
|
http://support.google.com/drive/bin/answer.py?hl=en&cbrank=3&cbid=e2aok3035u5y&ctx=cb&answer=65129&src=cb
|
2013-05-20T21:59:55Z
|
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368699273641/warc/CC-MAIN-20130516101433-00003-ip-10-60-113-184.ec2.internal.warc.gz
|
en
| 0.936596
| 635
|
"He was of an active, restless, indefatigable Genius even almost to the last, and always slept little to his death, seldom going to sleep till two three, or four a Clock in the Morning, and seldomer to Bed, often continuing his Studies all Night, and taking a nap in the day. His temper was Melancholy...."
These words were intended to describe Robert Hooke, but they have been said to describe Isaac Newton equally well. Both men played vital roles in the development of science in the seventeenth century, though at first glance Newton appears to outshine and outclass Robert Hooke. When Hooke is mentioned to this day, we usually speak of Newton as well, but not the other way around. They influenced one another far more than either would ever admit and, though each deserves his own separate identity, Hooke has rarely been granted his. This is largely because, though Newton and Hooke had much in common, they were bitter enemies, and Newton was able to exert far more influence over the Royal Society and, thereby, over the entire scientific community of his day. Robert Hooke's genius is hidden in shadows created partly by Hooke himself, but largely by Isaac Newton, a man who could not speak of Hooke without contempt, even long after Hooke's death, and who may well have taken steps to obliterate much of Hooke's contribution to science. Hooke's reputation is riddled with exaggerated accusations and misconceptions.
Like Newton, Robert Hooke was a significant influence on the advancement of science. An established physicist and astronomer, Hooke was with the Royal Society from its inception and served it tirelessly and loyally for over forty years; it was he who worded the society's credo: "To improve the knowledge of natural things, and all useful Arts, Manufactures, Mechanic practices, Engines and Inventions by Experiments (not meddling with divinity, Metaphysics, Morals, Politics, Grammar, Rhetoric or Logic)." But the rancor between Newton and Hooke did much to tarnish Hooke's reputation.
Hooke was born on the Isle of Wight on July 18, 1635. As a child he survived smallpox, but it scarred him physically and emotionally for life. When Hooke was thirteen years old, his father, John Hooke, a clergyman, hanged himself; young Robert knew much emotional pain in his youth. With a hundred-pound inheritance from his father, Robert Hooke, now an orphan of sorts, was sent off to London to the painter Sir Peter Lely, under whom he was to develop his artistic skills.
As a boy, Robert Hooke had shown considerable interest and skill in mechanical things, and this, along with Hooke's intelligence, did not escape the notice of Richard Busby, the most feared man of Westminster School. Busby had a reputation for "flogging sense into them," but there was no threat here for Robert Hooke. Busby saw great genius in Hooke, and got involved to the extent of taking the boy into his own home.
Hooke moved through Westminster to Oxford University, working his way through as a servant, as Newton had at Cambridge. At Oxford, Hooke met the physicist Robert Boyle and became his paid assistant. Their greatest accomplishment during Hooke's time with Boyle was the construction of the air pump. Hooke stayed with Boyle until 1662, when Boyle helped him secure the job of Curator of Experiments for the Royal Society.
No job could have suited Robert Hooke more, and most other scientists less, than that of Curator of Experiments. His task was to prepare three to four major experiments each week to be reported on and/or demonstrated to the Royal Society. The experiments varied greatly in topic: some were chemical, some astronomical, some biological; all fell under Natural Philosophy, and all had to be understood. It was no menial task, but Hooke performed it excellently for forty-one years, until his death.
Testimony to Hooke's stamina and ability to handle a tremendous workload lies in his endeavors in the years immediately after his appointment as curator. In 1663, Hooke was elected a Fellow of the Society. In 1665, he was appointed Professor of Geometry at Gresham College. The same year he published his Micrographia, a book with elaborate drawings of various things seen under the microscope.
And while Flamsteed, Cassini and Halley usually get the credit for getting Newton involved with comets, a great deal of Newton's interest was sparked by a book entitled "Cometa," published around 1666, whose author was Robert Hooke. Newton mentioned the book in his notes and later in his correspondence. Hooke had made close observations of the comets of 1664 and 1665, as well as collecting data from other astronomers. The only thing Hooke could not decide was what type of motion the comet would take: straight line, circular orbit, or ellipse. By 1666, Hooke had put the question aside for the time, apparently from the necessity of pursuing other matters. That year, after the Great Fire of London, Hooke was appointed surveyor of London, designing many buildings including Montague House, the Royal College of Physicians, and Bethlehem Hospital (Bedlam). Hooke was indeed a very busy man.
In 1677, after Henry Oldenburg's death, Hooke succeeded him to the post of Secretary of the Royal Society while still maintaining his responsibilities as Curator. Hooke continued in this capacity until 1683 when the post of secretary was filled by Richard Waller who would eventually write Hooke's biography.
Hooke continued as curator and pursued his interest in architecture, an interest he shared with Christopher Wren, though Wren practiced it far more diligently as an occupation. The two conversed often on the subject. While Wren was constructing St. Paul's Cathedral, his greatest work, Hooke assisted in modifying the great arches of the structure. And when the Royal Observatory was under construction, references to Hooke's connection with it appear, though precisely to what degree is not known.
While Hooke never married, there was only one instance in which he seemed to be in love: with his niece, Grace Hooke, who took over the duties of housekeeper at Gresham. But though he became obsessed with her, she would not be faithful to him. Hooke was ever a lonely person.
Though Hooke may have seemed outwardly arrogant and self-assured, underlying this was a great deal of insecurity. Perhaps his physical condition had much to do with it. While physical deformities and scars were far more common in those days, Hooke seems to have been an extreme case. Descriptions of him as "scarred to the point of ugliness," of his "twistedness, which grew worse with age," and references to a great deal of pain imply a tortured person. Certainly there were those who avoided him because of his condition, and some even mocked him; Newton once made a reference to a "dwarf" that was almost certainly a barb directed at Hooke.
Hooke devoted a great deal of time to the universe and its mysteries. The search for stellar parallax was on in the seventeenth century, and Hooke made an attempt to find it using a zenith telescope. The idea behind zenith telescopes was that atmospheric distortion is at a minimum directly overhead, making for the most accurate measurements. Hooke used the star Gamma Draconis, but his telescope was too crude to reach any definite conclusion.
Hooke anticipated some of the most important discoveries and inventions of his time. Among his contributions are the correct formulation of the theory of elasticity, the kinetic hypothesis of gases and the nature of combustion. He was the first to use the balance spring for the regulation of watches, devised improvements in pendulum clocks, and invented a machine for cutting the teeth of watch wheels. An expert microscopist, his microstudies of the composition of cork led him to suggest the word cell (meaning a tiny bare room, like a monk's cell), and the word survives as the name for living cells. The publication of his Micrographia in 1665, written in English, with its engraved magnifications of minute bodies, was a major milestone of English science.
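The elasticity result named above survives as Hooke's law: the restoring force of a spring is proportional to its extension, F = -kx. A minimal sketch of that relationship (the function name and the numbers are illustrative, not drawn from the text):

```python
def spring_force(k: float, x: float) -> float:
    """Restoring force (N) of a spring of stiffness k (N/m) stretched by x (m)."""
    return -k * x

# Doubling the extension doubles the restoring force:
print(spring_force(50.0, 0.1))  # -5.0
print(spring_force(50.0, 0.2))  # -10.0
```

The minus sign records that the force always opposes the displacement.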
Hooke was the first to report the Great Red Spot of Jupiter and the first to establish the rotation of the giant planet. He formulated the theory of planetary motion as a problem in mechanics and pioneered the scientific trail that led Newton to his goal, the formulation of the law of gravitation. As a scientist, Hooke made useful contributions to the wave theory of light. His interests ranged from these matters to pre-Daltonian atomic studies, astronomy, earthquakes and the physics of spring mechanisms. He set the thermometrical zero at the freezing point of water and studied the relationship of barometric readings to changes in the weather; he invented a land carriage, a diving bell and a method of telegraphy, and ascertained the number of vibrations corresponding to musical notes.
The first confrontation between Hooke and Newton came in 1672. Newton had written a paper on his demonstration that white light is a composite of other colours. It was presented to the Royal Society just prior to Newton's reception as a Fellow of the Society. Newton thought a great deal of his demonstration, referring to it as "the oddest if not the most considerable detection wch hath hitherto beene made in the operations of Nature."1 But Newton met a strong rebuff from Hooke, who had his own wave theory of light, set out in some detail in the Micrographia, and still believed in it strongly. Hooke claimed Newton had not proven his idea clearly and needed more detail.
Newton had the equivalent of a temper tantrum. The situation was made worse for Newton because Hooke was not the only one attacking his theory; he had been joined by Christiaan Huygens, Ignace Pardies and the Jesuits of Liège. Newton had, since childhood, reacted strongly to criticism. He constantly challenged authority, and to rebuff him was to become an enemy. Newton demonstrated this over and over during his lifetime; his response was often either complete withdrawal or open battle. On this occasion, Newton chose withdrawal (though for Newton, withdrawal was usually some form of manipulation in his battle plans). In March 1673, Newton wrote to Henry Oldenburg, the current secretary of the Royal Society, requesting to withdraw from the Society. It took much gushing of admiration and respect on Oldenburg's part, as well as an offer to waive his dues to the Society, to get Newton to change his mind. Oldenburg also offered an apology for the behavior of an "unnamed member." The stage was set: Newton had successfully established his place in the Society, and had scored a victory, of sorts, over Hooke.
In many ways, the problems between Hooke and Newton could be attributed to the traits they had in common rather than to their differences of opinion on scientific matters. Both were short-tempered, and both were quick to make someone an enemy. Newton once threw a colleague out of his office and refused to speak with him for years because the man had made a joke about a nun. He likewise refused to speak with Flamsteed for years because Flamsteed would not surrender raw data on comet observations. (Actually the refusal angered both Newton and Halley, who needed the data for their studies and did not want to wait for "finished data"; but while Newton ranted and raved, Halley took matters into his own hands, literally: he stole the data!) Hooke, for his part, made an enemy of Henry Oldenburg, secretary of the Royal Society, in 1675, because Oldenburg had taken Christiaan Huygens' side in a dispute over the claim to the invention of the spring-balanced watch.
Both Newton and Hooke were suspicious of other people's motives (especially each other's), to the point of paranoia. Newton seems always to have been that way, but Hooke seems to have developed the trait later in life. Richard Waller, who knew Hooke quite well and was with him until his death, wrote this of Hooke: "He was in the beginning of his being made known to the Learned, very communicative of his Philosophical Discoveries and Inventions, till some Accidents made him to a Crime close and reserv'd. He laid the cause upon some Persons, challenging his Discoveries for their own, taking occasion from his Hints to perfect what he had not; which made him say he would suggest nothing until he had time to perfect it himself, which has been the Reason that many things are lost, which he affirm'd he knew."2
In other ways Hooke and Newton were opposites, almost as if they had all the wrong things in common. While Newton was a recluse who seldom dined out, Hooke was gregarious and loved nothing better than the coffee house. He often dined there and stayed until one or two in the morning, drinking some, smoking, and talking with friends. When it came to experiments and work they were opposites as well: Newton would work on one project relentlessly until he had defeated it, while Hooke flitted from one topic to another (an attribute, it must be said, that his post required of him if he was to do a proper job as Curator of Experiments). He was, like Halley, curious to a fault about everything. It was quite probably the demands of the curatorship that kept Hooke from concentrating adequate time on any one subject; the very job at which he had worked so diligently and so faithfully would become the basis of later accusations that his work was "broken" and "disjointed."
The next major confrontation between Hooke and Newton surfaced openly in 1684. It concerned Newton's Principia and the involvement Hooke had in it. Newton claimed Hooke had none, and quite a few historians have agreed; but a closer look at the events prior to the Principia's publication leaves little doubt that Hooke was indeed involved.
The idea of gravity and its force of attraction was a common topic of interest in those days; Newton, Halley, Wren and Hooke all played with the concept. In 1679, several letters were exchanged between Hooke and Newton, both having made a slight attempt to work out their differences. Hooke had suggested that it was other people (namely Oldenburg) who had made the problems, and that the two of them should correspond directly in order to avoid misunderstandings. Newton seemed agreeable. The topic of the first letters between them was the old trajectory problem: what path would an object falling to the Earth follow? Newton suggested an experiment to prove it, but made a mistake, suggesting that the trajectory would be a spiral. Hooke grabbed this and ran with it, announcing to the Society that Newton was wrong.
Newton was incensed. He felt Hooke had no right to take their correspondence to the Society, and that the major issue was one of conduct on Hooke's part: Hooke had no right to announce to the Society that Newton was wrong. It is entirely possible that Hooke was making the most of it, but one can hardly blame him when one considers the godlike esteem in which many people held Newton. Newton may have been the "giver of laws," but he often upstaged the others of his time and was not inclined to give credit to anyone else.
Newton refused to correspond with Hooke any further; Hooke had written a third letter, which Newton refused to answer. And it is this third letter that is of particular interest. Written on January 6, 1680, it set out Hooke's theory of gravity: "But my supposition is that the Attraction always is in a duplicate proportion to the Distance from the Center Reciprocal, and Consequently that the Velocity will be in a subduplicate proportion to the Attraction and Consequently as Kepler supposes Reciprocal to the Distance." This was the main letter Hooke cited as evidence when he claimed Newton had robbed him of his theory, but Hooke had no answer from Newton acknowledging it.
Hooke first appealed to Halley, saying that Newton had taken all the credit for the theory of gravity when in fact he, Hooke, had given Newton the idea. This put Halley in a difficult situation: Halley was himself paying for the Principia to be published, and the last thing he needed was for Newton to get temperamental. However, Halley knew first hand, from previous communication with Hooke, that Hooke was not unreasonable in his claims. Halley and Hooke had long before discussed the idea that the force of gravitation must diminish with the square of the distance across which it is propagated, and had agreed that the inverse-square law could explain Kepler's discovery that the planets move in elliptical orbits, each sweeping out equal areas in equal times. Halley wrote Newton and told him, "He sais you had the notion from him, though he owns the Demonstration of the Curves generated thereby to be wholly your own: how much of this is so you know best, as likewise what you have to do in this matter, only Mr. Hooke seems to expect you should make some mention of him in the preface, which, it is possible, you may see reason to prefix."3
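The inverse-square relationship Halley and Hooke discussed is the one Newton later formalised as universal gravitation, F = G·m1·m2/r². A brief numerical sketch (function name and unit masses are illustrative):

```python
def gravitational_force(m1: float, m2: float, r: float,
                        G: float = 6.674e-11) -> float:
    """Magnitude (N) of the attraction between masses m1 and m2 (kg) at separation r (m)."""
    return G * m1 * m2 / r ** 2

# Doubling the separation cuts the attraction to a quarter:
f_near = gravitational_force(1.0, 1.0, 1.0)
f_far = gravitational_force(1.0, 1.0, 2.0)
print(f_near / f_far)  # 4.0
```

That factor-of-four scaling with halved distance is precisely the inverse-square behaviour at the heart of the dispute.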
Newton vehemently denied any such accusation to Halley. A second letter from Halley pointed out that Hooke had not made a formal complaint of the matter, and that he felt others had made Hooke's conduct seem worse than it was. Halley further pointed out that Hooke was not trying to lay claim to the entire theory. It must have been a terribly uncomfortable situation for the easy-going Halley.
Newton had another temper tantrum and told Halley he would not write the third book of the Principia. Halley thought this an incredible loss to mankind, and he had already invested much of his own resources in the publication of the first two books; he stopped at nothing to appease Newton. This incident only served to further harm Hooke's reputation. Newton still maintained Hooke was wrong; Newton would share his credit with no one, most certainly not with Hooke, and refused to do anything for him. The Principia was formally presented to the Royal Society in 1687 with no mention of Hooke in the preface; clearly, Newton had scored another victory over Hooke.
The year 1687 was indeed a dark one for Robert Hooke. The Principia was published with no recognition of Hooke, and, as if that were not enough, Hooke's niece, who had captured the heart of the aging scientist, also died that year. After these two blows his health declined at a greater rate. It is possible, judging by some descriptions, that Hooke was afflicted with scoliosis, an unnatural curvature of the spine that would account for his "incurvature" and stooping posture. He stayed active until the last year of his life, when he possibly suffered a stroke and was confined to bed; Waller reported that his mind stayed clear until his death, though he became increasingly melancholy and disagreeable.
Hooke died on March 3, 1703, having been blind and bedridden for the last year of his life. There had been little justice for Hooke during his life, and there would be little after his death. The location of his grave is not even known. Moreover, when Richard Waller published some of Hooke's works in 1705, he dedicated them to none other than Isaac Newton. This posthumous insult did little for Hooke, and it is quite doubtful Newton appreciated it anyway. What remained of Hooke's works then passed to the Reverend William Derham, an old friend of Newton's, who took until 1725 to publish any more of them.
What part Newton played in the events surrounding the Royal Society's move from Gresham is not known for certain. It was during the move, however, that Hooke's portrait, the only one known, disappeared, as did most of Hooke's instruments, papers, and the scientific contrivances he had fashioned with his own hands. Derham commented that even twenty years after Hooke's death, Newton could not speak of him and remain calm. There may be no evidence to prove Newton was responsible, but the motive is damning.
It was also probably due to Newton's spite that one of Hooke's gifts to the Society fell through. Hooke had spent little of his money, keeping it locked away in an iron chest. When he was a dying man he told Waller he wanted to give his money after his death, to the Society, so that new quarters, meeting rooms, laboratories, and a library might be constructed. But Hooke had unfortunately not made a will, or at least one was never found. It seems logical that, had Newton wanted to assert the Society's right to the money, based on Waller's testimony, he undoubtedly would have gotten it. Newton, who after becoming president of The Royal Society in 1703 had severed all ties that bound the Society to Hooke, wanted nothing of him.
Those who charge Robert Hooke with habitually and without justification accusing others of stealing his work need only consider that Wren's name had long been attached to the architecture of the Royal College of Physicians and of Willen Church in Buckinghamshire. Perhaps the only justice Hooke ever received, albeit posthumously, is that he was eventually recognized as the true architect.
Newton once wrote to Halley and, referring to his own works, said they were a garden from which Hooke had pilfered. Sometimes we need to look at the facts, rather than judge someone by a reputation his enemies helped create, in order to grasp the true picture. Robert Hooke may have had his faults, and he may have been too quick to make assertions, but he most certainly does not deserve his fate or lack of recognition. Newton's severing of all ties between Hooke and the Society did nothing to further the development of science, and it denied the rest of us the opportunity to know all the contributions to its advancement Hooke really made. Newton once said, "If I have seen further, it is by standing on ye sholders of giants." There can be little doubt that one of those giants was Robert Hooke. Indeed, it would be more appropriate to consider Hooke the sower of many of the seeds in Newton's garden.
TAPPED is dark today as we take a day to celebrate Martin Luther King Jr. Jamelle Bouie has a piece up on how King paved the way for other minority groups to demand equality:
[King's] legacy for other minority groups is less obvious. In public policy, we group racial and ethnic minorities together, even when their situations are very different. African Americans, with their legacy of slavery, apartheid, and institutionalized discrimination, face a vastly different set of circumstances than Latinos (who, until relatively recently, were classified as "white" in large parts of the country), Asians, Native Americans, and women.
That the federal government views these constituencies as a single group is a direct consequence of the civil-rights movement and King's successful push to fundamentally alter the federal government's relationship to African Americans. In the years following King's assassination, other movements -- for women's rights, for Latino rights, for Native American rights, for gay rights -- took advantage of these pathways in their struggle for rights and redress from the federal government.
Read the whole piece. We'll see you tomorrow.
Goalkeeper (association football)
Goalkeeper, often shortened to keeper or goalie, is one of the major positions of association football. It is the oldest, most specialised, and generally most important position in the sport. The goalkeeper's primary role is to prevent the opposing team from successfully moving the ball over the defended goal-line (between the posts and under the crossbar). This is accomplished by moving into the path of the ball and either catching it or directing it away from the vicinity of the goal line. Within the 18-yard box, goalkeepers are able to use their hands, making them (outside of throw-ins) the only players on the field able to handle the ball. Goalkeepers take goal kicks and give commands to their defence during corner kicks, direct and indirect free kicks, and marking. Goalkeepers also play an important role in directing on-field strategy, as their unrestricted view of the entire pitch gives them a unique perspective on the development of play.
Goalkeepers are required to remain on the pitch at all times, though that does not mean they must stay between the posts. A goalkeeper may, for example, take a penalty kick during a penalty shoot-out, or even go up for a corner late in a game, though this is rare, as it leaves the goal unguarded. If a goalkeeper is injured or sent off, the back-up goalkeeper takes his place; if none is available, an outfield player must do so. In the event of a sending-off, an outfield player must also leave the pitch so that the substitute keeper can come on. If both keepers are injured or sent off and there is no third-choice keeper, as can happen especially at club level, an outfield player (usually a defender) has to take the keeper's place and wear the goalkeeper's kit. Because goalkeeper is among the most important jobs in football, as well as one of the most difficult positions to master, most teams field the same keeper in the starting XI every season: Petr Cech, for example, is currently the no. 1 at Chelsea F.C., while Iker Casillas has been Real Madrid's no. 1 for more than nine years and Spain's starting goalkeeper for over a decade. As a result, a back-up keeper may wait a long time for a chance to play. This is also one of the main reasons why goalkeepers, on average, retire in their forties. The traditional squad number for a goalkeeper is 1, and while this is still common, some keepers now wear other numbers: despite being Liverpool F.C.'s first choice, Pepe Reina has worn the no. 25 jersey throughout his Liverpool career. No. 13 is, especially in Britain, the common number for the second-choice keeper, though 12, 16, 24, 25 and even 30 are also common.
Football, like many sports, has experienced many changes in tactics resulting in the generation and elimination of different positions. Goalkeeper is the only position that is certain to have existed since the codification of the sport. Even in the early days of organised football, when systems were limited or non-existent and the main idea was for all players to attack and defend, teams had a designated member to play as the goalkeeper.
The earliest account of football teams with player positions comes from Richard Mulcaster in 1581; however, he does not specify goalkeepers. The earliest specific reference to keeping goal comes from Cornish Hurling in 1602. According to Carew: "they pitch two bushes in the ground, some eight or ten foot asunder; and directly against them, ten or twelve score off, other twayne in like distance, which they term their Goals. One of these is appointed by lots, to the one side, and the other to his adverse party. There is assigned for their guard, a couple of their best stopping Hurlers". Other references to scoring goals begin in English literature in the early 16th century; for example, in John Day's play The Blind Beggar of Bethnal Green (performed circa 1600; published 1659): "I'll play a gole at camp-ball" (an extremely violent variety of football, popular in East Anglia). Similarly, in a 1613 poem, Michael Drayton refers to "when the Ball to throw, And drive it to the Gole, in squadrons forth they goe". It seems inevitable that wherever a game has evolved goals, some form of goalkeeping must also be developed. David Wedderburn refers to what has been translated from Latin as to "keep goal" in 1633, though this does not necessarily imply a fixed goalkeeper position.
Initially, goalkeepers typically played between the goalposts and had limited mobility, except when trying to save opposition shots. Throughout the years, goalkeeping has evolved, due to changes in systems of play, into a more active role. Goalkeeper is the only position in which a player may use his hands in the game of football (other than during throw-ins). The original Laws of the Game permitted goalkeepers to handle the ball anywhere in their half of the pitch. This was revised in 1912, restricting the goalkeeper's use of the hands to the penalty area.
In 1992, the International Football Association Board changed the laws of the game in ways that affected goalkeepers, notably through the back-pass rule, which prohibits goalkeepers from handling the ball when receiving a deliberate pass played to them by a team-mate's foot (a ball played back with any other part of the body except the hands may still be handled). As a result, all goalkeepers were required to improve their control of the ball with their feet.
General play and technique
The goalkeeper position is the most specialised of all positions on the field. Unlike other players, goalkeepers may touch the ball with any part of their body while in their own penalty area. Outside of their penalty area, goalkeepers have the same restrictions as other field players. They are also "protected" from active interference by opponents within their own goal area.
Perhaps the most spectacular move a goalkeeper routinely performs is the extension dive. To execute this manoeuvre properly, the goalkeeper pushes off the ground with the foot nearest the ball, launching himself into a horizontal position. At this point, the ball may be caught or simply pushed away. In the latter case, a good goalkeeper will attempt to ensure that the rebound cannot be taken by a player on the opposing team, although this is not always possible.
The tactical responsibilities of goalkeepers include:
- To keep goal by physically blocking attempted shots with any part of their body. The keeper is permitted to play the ball anywhere on the field, but he may not handle the ball using his hands outside the penalty area.
- To take free kicks from deep into their own territory and goal kicks.
- To organise the team's defenders during defensive set pieces such as free kicks and corners. For free kicks, this includes deciding how many players should form the defensive "wall" and how they should be organised. The wall provides a physical barrier to the incoming ball, though some goalkeepers position it so as to tempt the kick-taker into a particular type of shot, and occasionally a goalkeeper may opt to dispense with the wall altogether. Some goalkeepers are also entrusted with the responsibility of picking markers while defending at set pieces.
- To pick out crosses and attempted long passes either by punching them clear or collecting them in flight.
Although goalkeepers have special privileges, including the ability to handle the ball in the penalty area, they are otherwise subject to the same rules as any other player. Due to the increasing importance of crosses and set pieces that put the ball in the air, the goalkeeper is often the tallest member of the team, and most stand over 6 ft (183 cm) tall in professional competition, with many well-known keepers standing particularly tall at over 6 ft 4 in (193 cm).
Goalkeepers in playmaking and attack
Goalkeepers are not required to stay in the penalty area; they may get involved in play anywhere on the pitch, and it is common for them to act as an additional defender during certain passages of the game. Brazil's Rogério Ceni, Colombia's René Higuita, Germany's Hans-Jörg Butt, France's Fabien Barthez, Mexico's Jorge Campos and Zimbabwe's Bruce Grobbelaar were notable for their foot skills and their regular play outside the penalty area. Goalkeepers with a long throwing range or accurate long-distance kicks may be able to quickly create attacking positions for a team and generate goal-scoring chances from defensive situations, a tactic known as the long ball.
Some goalkeepers have even scored goals. This most commonly occurs where a goalkeeper has rushed up to the opposite end of the pitch to give his team a numerical advantage in attack. This rush is risky, as it leaves the goalkeeper's goal undefended. As such, it is normally only done late in a game at set-pieces where the consequences of scoring far outweigh those of conceding a further goal, such as for a team trailing in a knock-out tournament. As goalkeepers are usually tall, often taller than all the outfield players, they can be successful at connecting with headers.
Though this action rarely succeeds, it is regular enough to have occurred a number of times in professional football: goalscoring goalkeepers include Dimitar Ivankov, Michelangelo Rampulla, Peter Schmeichel, Mart Poom, Steve Ogrizovic, Marco Amelia, Andrés Palop, Jens Lehmann, Edwin van der Sar, Brad Friedel, Massimo Taibi, Jimmy Glass, Adam Federici, Paul Robinson, Michael Petkovic, Fabien Barthez, Federico Vilar, Daniel Aranzubia, Tim Howard, Chris Weale, Gavin Ward and Mark Crossley.
Some goalkeepers, such as Rogério Ceni and José Luis Chilavert, may also be expert set-piece takers. These players may take their team's attacking free kicks and even penalties. Ceni, São Paulo's long-time custodian, has scored 100 goals in his career, more than many outfield players.
In rarer situations still, goalkeepers have scored goals unintentionally, when a ball kicked downfield has caught the opposing goalkeeper out of position. Jung Sung-Ryong, Paul Robinson, Danny Cepero, Jason Matthews, Jérôme Palatsi, Andrew Lonergan, Dragan Pantelić, Neco Martínez, Michael Petkovic, Tim Howard, Pat Jennings and Ian Deakin are examples of goalkeepers who have scored under such circumstances. One notable example came in the final of the 2003 CAF Champions League, in which Al-Ahly goalkeeper Essam El-Hadary created a goal by driving an indirect free kick from near his own penalty area against the post of the opponent's goal; the ball then rebounded off the back of the opposing goalkeeper and went into the net.
Equipment and attire
The FIFA Laws of the Game require only that goalkeepers wear kit that clearly distinguishes them from other players and from the match officials. Some goalkeepers have received recognition for their match attire: Lev Yashin of the Soviet Union was nicknamed the "Black Spider" for his distinctive all-black outfit; Klaus Lindenberger of Austria designed his own variation of a clown's costume; and Jorge Campos of Mexico was popular for his colourful attire.
Most goalkeepers also wear gloves to improve their grip on the ball, and to protect themselves from injury. Some gloves now include rigid plastic spines down each finger to help prevent injuries such as jammed and sprained fingers. Though gloves are not mandatory attire, it is uncommon for goalkeepers to opt against them due to the advantages they offer. At UEFA Euro 2004, Portuguese goalkeeper Ricardo famously took off his gloves for the quarter-final penalty shoot-out against England.
When assigning numbers to players on the team, if a squad number system is not in use, the number 1 shirt is usually reserved for the goalkeeper. However, goalkeepers have not always been required to wear the number 1, as is now a regulation for the FIFA World Cup. For example, Argentina's Ubaldo Fillol wore the numbers 5 and 7 at the 1978 and 1982 FIFA World Cups, respectively. A goalkeeper may also wear another number when the number 1 shirt has already been assigned and a new goalkeeper subsequently becomes the starter. Even in these cases, the player is usually referred to as the team's "number one". Sixteen is a popular number for goalkeepers in France and its former African colonies.
Czech Republic and Chelsea goalkeeper Petr Čech wears a head guard, after having fractured his skull in a Premier League match against Reading, and a few goalkeepers, most notably Miguel Calero and Chris Kirkland, wear baseball caps to shield their eyes from the sun. Calero has also worn a bandana while keeping goal for Pachuca.
Goalkeepers are crucial in penalty shoot-outs. The record for most penalties saved in a shoot-out is held by Helmuth Duckadam of Steaua București, who saved four consecutive penalties in the 1986 European Cup Final against Barcelona. Stefano Tacconi is the only goalkeeper to have won all official club competitions.
The quickest goal scored by a goalkeeper came after just 22 seconds, by Nottingham Forest's Paul Smith on 18 September 2007. Leicester City had agreed to let Forest score an unopposed goal in the Football League Cup second round, restoring the 1–0 lead Forest held when the original tie was abandoned after City's Clive Clarke collapsed at half-time. Forest went on to lose the game 3–2.
A few goalkeepers have become notable set-piece takers; for example, José Luis Chilavert is the only goalkeeper to have scored a hat-trick (three goals in a game), doing so through penalty kicks. He was also a free-kick expert. Rogério Ceni has scored the most goals of any goalkeeper, reaching his 100th goal in official games on 27 March 2011; he scored through free kicks and penalty kicks.
At international level, Dino Zoff is the goalkeeper who has remained unbeaten for the longest period of time, whilst Walter Zenga holds the record for the longest unbeaten run in a single FIFA World Cup tournament. Gianluigi Buffon, Fabien Barthez and Iker Casillas share the record for fewest goals conceded by a World Cup-winning goalkeeper, each conceding only two goals while leading their team to victory; each was also awarded the Yashin Award for best keeper. Buffon is also the only World Cup-winning goalkeeper not to have conceded a goal in open play throughout the whole tournament (one goal coming from an own goal after a free kick, the other from a penalty). Barthez and Peter Shilton hold the record for most clean sheets in World Cup matches, with 10 each. Oliver Kahn is the only goalkeeper to have won the Adidas Golden Ball as the best player of a World Cup, whilst Lev Yashin is the only goalkeeper to have won the Ballon d'Or. Iker Casillas holds both the record for fewest goals conceded in a European Championship (1) and the record for the longest unbeaten run at a European Championship, beating the previous record held by Dino Zoff. He also holds the record for most international clean sheets (74), beating the previous record held by Edwin van der Sar (72).
Gianluca Pagliuca of Italy became the first goalkeeper to be sent off in a World Cup finals match, dismissed for handling outside his area against Norway. In the same tournament, he also became the first goalkeeper to save a penalty in a World Cup finals penalty shoot-out.
Highest fees
| Rank | Player | From | To | Fee (£) | Fee (€) | Year |
| --- | --- | --- | --- | --- | --- | --- |
| 1 | Gianluigi Buffon | Parma | Juventus | £33m | €54.2m | 2001 |
| 2 | Manuel Neuer | Schalke 04 | Bayern Munich | £19m | €24m | 2011 |
| 3 | David De Gea | Atlético Madrid | Manchester United | £18m | €22m | 2011 |
| 4 | Hugo Lloris | Lyon | Tottenham | £17m | €20.83m | 2012 |
| 5 | Angelo Peruzzi | Internazionale | Lazio | £15.7m | €17.8m | 2000 |
| 6 | Fernando Muslera | Lazio | Galatasaray | £9.93m | €11.75m | 2011 |
| 7 | Samir Handanović | Udinese | Internazionale | £9m | €11.00m | 2012 |
| 9 | Thibaut Courtois | Racing Genk | Chelsea | £7.8m | €8.8m | 2011 |
| 10 | Fabien Barthez | AS Monaco | Manchester United | £7.8m | €8.8m | 2000 |
Notable goalkeepers
See also
- FIFA World Cup awards#All-Star Team
- FIFA World Cup awards#Golden Glove
- List of goalscoring goalkeepers
Source: http://en.wikipedia.org/wiki/Goalkeeper_(association_football)
Science Fair Project Encyclopedia
In the early part of the 20th century, experiments by Ernest Rutherford and others had established that atoms consisted of a small dense positively charged nucleus surrounded by orbiting negatively charged electrons. However, physics at that time was unable to explain why the orbiting electrons did not spiral into the nucleus.
The simplest possible atom is hydrogen, which consists of a nucleus and one orbiting electron. Since the nucleus and the electron are oppositely charged, they attract one another by the Coulomb force, in much the same way that the Sun attracts the Earth by gravitational force. However, if the electron orbits the nucleus in a classical orbit, it ought to emit electromagnetic radiation (light) according to well-established theories of electromagnetism.
If the orbiting electron emits light, it must lose energy and spiral into the nucleus, so why do atoms even exist? What's more, the spectra of atoms show that the orbiting electrons can emit light but only at certain frequencies. This made no sense at all to the scientists of the time.
These difficulties were resolved in 1913 by Niels Bohr who proposed that:
- (1) The orbiting electrons existed in orbits that had discrete quantized energies. That is, not every orbit is possible but only certain specific ones. The exact energies of the allowed orbits depend on the atom in question.
- (2) The laws of classical mechanics do not apply when electrons make the jump from one allowed orbit to another.
- (3) When an electron makes a jump from one orbit to another the energy difference is carried off (or supplied) by a single quantum of light (called a photon) which has a frequency that directly depends on the energy difference between the two orbitals.
- f = E / h
- where f is the frequency of the photon, E the energy difference, and h is a constant of proportionality known as Planck's constant. Defining ℏ = h/2π, we can also write E = ℏω, where ω = 2πf is the angular frequency.
- (4) The allowed orbits depend on quantized (discrete) values of orbital angular momentum L, according to the equation
- L = nℏ = n·h/2π
- where n = 1, 2, 3, … and is called the angular momentum quantum number.
These assumptions explained many of the observations seen at the time, such as why spectra consist of discrete lines. Assumption (4) states that the lowest value of n is 1. This corresponds to a smallest possible radius of 0.0529 nm (for the mathematics see Hans Ohanian, Principles of Physics or any large introductory college physics textbook). This is known as the Bohr radius, and explains why atoms are stable. Once an electron is in the lowest orbit, it can go no further. It cannot emit any more light because it would need to go into a lower orbit, but it can't do that if it is already in the lowest allowed orbit.
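As a rough numerical check of these assumptions, a short script can reproduce the Bohr radius, the ground-state energy, and the frequency of an emitted photon via f = E/h. This sketch is illustrative only; the constants are standard textbook values, not taken from this article's references.

```python
# Illustrative check of Bohr-model quantities for hydrogen (SI units).
import math

H = 6.62607015e-34          # Planck's constant, J*s
HBAR = H / (2 * math.pi)    # reduced Planck's constant
ME = 9.1093837015e-31       # electron mass, kg
E_CHARGE = 1.602176634e-19  # elementary charge, C
EPS0 = 8.8541878128e-12     # vacuum permittivity, F/m

def bohr_radius(n=1):
    """Radius of the n-th Bohr orbit, in metres (n=1 gives the Bohr radius)."""
    a0 = 4 * math.pi * EPS0 * HBAR**2 / (ME * E_CHARGE**2)
    return n**2 * a0

def energy_level(n):
    """Energy of the n-th level in electronvolts (negative = bound)."""
    e_joules = -ME * E_CHARGE**4 / (8 * EPS0**2 * H**2 * n**2)
    return e_joules / E_CHARGE

def photon_frequency(n_from, n_to):
    """Frequency f = E/h of the photon emitted in a jump n_from -> n_to."""
    delta_e = (energy_level(n_from) - energy_level(n_to)) * E_CHARGE
    return delta_e / H

print(bohr_radius(1))          # ~5.29e-11 m, i.e. 0.0529 nm
print(energy_level(1))         # ~-13.6 eV, the ground state
print(photon_frequency(2, 1))  # ~2.47e15 Hz (the Lyman-alpha line)
```

The smallest radius (n = 1) comes out at 0.0529 nm, matching the Bohr radius quoted above.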
The Bohr model is sometimes known as the semiclassical model because although it does include some ideas of quantum mechanics it is not a full quantum mechanical description of the atom. Assumption (2) states that the laws of classical mechanics don't apply during a quantum jump but doesn't state what laws should replace classical mechanics. Assumption (4) states that angular momentum is quantised but does not explain why.
In order to fully describe an atom we need to use the full theory of quantum mechanics, which was worked out by a number of people in the years following the Bohr model. This theory treats the electrons as waves, which form three-dimensional standing wave patterns in the atom. (This is why quantum mechanics is sometimes called wave mechanics.) It rejects as simply wrong the idea of electrons as little billiard-ball-like particles travelling around in orbits; instead, electrons form probability clouds. You might find the electron here with a certain probability; you might find it over there with a different probability. However, it is interesting to note that if you work out the most probable radius of an electron in the lowest possible energy state, it turns out to be exactly equal to the Bohr radius (although it takes many more pages of mathematics to work it out).
The full quantum mechanics theory is a beautiful theory that has been experimentally tested and found to be incredibly accurate, but it is mathematically much more advanced, and often using the much simpler Bohr model will get you the results with much less hassle. The thing to remember is that it is only a model, an aid to understanding. Atoms are not really little solar systems. Bohr's genius, though, was to begin a breakaway from this view that continues to this day.
- An interactive demonstration of the probability clouds of the electron in the hydrogen atom, according to the full quantum mechanical solution.
The contents of this article are licensed from www.wikipedia.org under the GNU Free Documentation License.
Source: http://www.all-science-fair-projects.com/science_fair_projects_encyclopedia/Bohr_model
Last week there was a big splash (pun intended) made over some images taken by the Cassini spacecraft of Titan, Saturn's largest moon. The images suggest that there is indeed liquid (not necessarily liquid water) on the surface of Titan. NASA has an excellent summary of the interpretation of the images online here, but please keep in mind that water has a number of unique properties. One of them is that water freezes from the top down, because ice floats on liquid water. Thus, here on Earth you can have frozen ponds with fish still swimming beneath, and on Europa you may find liquid water beneath the frozen solid surface. Liquid ethane and methane don't behave in the same manner.
In a paper published in Nature, available online now here, an international team of astronomers, utilizing the data from the Cassini spacecraft between 2004 and 2007, announce their conclusions about the nature of the clouds of Titan, Saturn’s largest and most interesting moon. These astronomers conclude that the clouds on Titan are the result of condensation of methane and ethane. Here on Earth, our clouds result from the condensation of water vapor. The clouds on Titan are driven around the moon by global atmospheric circulation. These astronomers have developed a global circulation model for Titan, similar to the global circulation models developed for Earth.
Source: http://astrocast.tv/blog/?tag=ethane
The world of science publishing is changing dramatically. The internet has produced a plethora of online-only journals that have been instrumental in distributing the latest scientific results, as has the relatively new idea of open-access journals. Traditionally, experiments and data were published in peer-reviewed, subscription-based journals. Most schools and libraries owned subscription plans to hundreds of titles and provided access to their employees so they could easily (and affordably) conduct their research and, hence, do their jobs. In the late 2000s, things began to change. Funding agencies that provided the research dollars were increasingly frustrated that they had paid for this work to be done but were unable to read the results without shelling out a tidy sum for access to the journal. This opened the door for open-access journals and publishing. In 2008, the National Institutes of Health invoked a new policy: work paid for by NIH funds had to be "deposited" in a centralized database, PubMed Central, so that it was publicly available. Open-access journals, which allow anyone with access to the internet to download a complete scientific article for free, were born. Unlike traditional journals that charge a subscription fee, open-access journals charge a review fee for each paper that is submitted for review. This way they can pay for the publication of the work without charging a subscription fee.
Although open-access journals provide everyone with access to the scientific research published on their pages, to some readers these papers are far from accessible. Science is written in a very technical language, so while these journals provide access to the scientific papers themselves, few of the articles are translated into more accessible language. This is not the fault of the scientists or the journals, but it does point to a need to convert this scientific information so a broader audience can appreciate the work that is being done. Open access is a good idea, but maybe there should also be an open-access journal that reviews the science and translates it so it is accessible to a much broader audience.
Source: http://n3science.blogspot.com/2012/01/how-accessible-are-open-access-journal.html
Summer's End Signals Bulb Planting Time
If thinking about the end of summer is getting you down, start planning your spring flowering bulb show. Autumn is the time to plant crocus, daffodils, tulips and many other spring bloomers.
Spring flowering bulbs are planted in fall to allow them to establish roots before top growth begins in spring. Planting too early may cause the bulbs to sprout this fall, only to be killed back by winter weather. Planting too late may not give the bulbs adequate time to root before winter. Bulbs should be planted in late September through mid October in the Lafayette area. Plant a couple of weeks earlier in northern Indiana and likewise, later in southern Indiana.
Start your bulb garden out on the right path by planting only quality bulbs, which are available from local garden centers or reputable mail order sources. It's best to shop early to ensure the best selection of variety and quality. Select large, firm bulbs, and avoid those that are sprouting or molding.
While many bulbs can adapt to a wide range of soil types, none can tolerate poorly drained soil. Prepare the planting bed by adding organic matter, such as peat moss, well-rotted manure or compost. Adequate fertility can be achieved by adding a low-analysis, balanced fertilizer, such as 5-10-5 or 6-10-4, at the rate of 2-3 pounds per 100 square feet of bed. Mix all amendments thoroughly with the soil in the bed, before you plant the bulbs.
The size of the bulb and the species will dictate the proper planting depth and spacing. The bulbs should come with planting instructions specific to that particular flower.
For more information on the many types of bulbs that can be grown in Indiana, you can download a copy of HO-86 "Flowering Bulbs" from http://www.hort.purdue.edu/ext/HO-86.pdf.
Source: http://www.agriculture.purdue.edu/agcomm/newscolumns/archives/YGnews/2005/September/050901YG.htm
Gabon, flag of
The French did not allow the development of national flags in their colonies, fearing the flags might become symbols around which separatists could rally. Therefore there were few such traditions in French Africa when autonomous governments were established in 1958 (the year of the new constitution of France). Whereas some countries did not adopt flags for more than a year, Gabon, one of the more progressive of the territories, quickly settled on a distinctive design.
Instead of the vertical stripes of the French Tricolor, which was adopted with appropriate changes in colours by many former colonies, Gabon chose horizontal stripes. However, these were not of equal width: the central yellow stripe was narrower than the green stripe at the top of the flag and the blue stripe at the bottom. Gabon also set itself apart from its neighbours in rejecting the pan-African green-yellow-red and in having the French Tricolor as a canton. None of the other autonomous republics expressed a similar link with the metropolitan country, although Togo, as a trust territory, had the Tricolor in its flag prior to independence.
Shortly before Gabon proclaimed independence from France, its national flag was modified, on August 9, 1960. The French Tricolor was dropped, and the central yellow stripe, symbolic of the Equator, which runs through Gabon, was widened to give it equality with the other stripes. The green stripe symbolizes the extensive forested area, which is one of the country’s most important economic resources. The blue stripe is a reminder of the extensive coast along the South Atlantic Ocean.
Source: http://www.britannica.com/EBchecked/topic/1355240/Gabon-flag-of
The National Alzheimer's Council is dedicated to the dissemination of information about progress in understanding and moderating the causes and effects of Alzheimer disease on individuals, their families, and friends. This is accomplished through applied research and education.
What is Alzheimer disease?
Alzheimer disease, or Alzheimer's as it is also referred to (pronounced Alz'- hi-merz), is the leading cause of dementia. Named after the person who first described it, Alzheimer disease may be defined as a set of symptoms that include loss of memory, judgement, reasoning ability, and often changes in mood and behavior.
Two forms of Alzheimer disease:
There are two forms of Alzheimer disease identified by researchers:
Sporadic Alzheimer disease is the more common of the two forms, accounting for 90-95% of cases. People who have this form of Alzheimer disease may or may not have a family history of the disorder, but if they have had relatives with Alzheimer disease they have a greater chance of developing it themselves. The other known risk factor for developing this form of the disease is advancing age: the older you get, the greater your chances of developing it.
Familial autosomal dominant Alzheimer disease is clearly passed from one generation to the next. In families who have the disease with an affected parent, each child has a fifty percent chance of developing it. There is a genetic test available for those people who have this form of Alzheimer disease, so those with a family history that can be traced over several generations, where family members who are affected show a similar age of onset and duration of the disease, can be tested. This form of the disease is fairly rare, however, accounting for only five to ten percent of cases of Alzheimer disease.
How Alzheimer's affects a person:
Alzheimer disease affects a person's ability to think, understand, reason, remember, and communicate. Many of the changes that occur early are so subtle you may not notice them or think they are remarkable. But gradually, you will notice that someone seems unable to learn new things or make decisions. He or she will forget how to do tasks they've performed for years.
Difficulty with people's names is common. The person with Alzheimer disease may forget where she is, what she was supposed to be doing, or may not understand what is being said. Eventually, these difficulties increase until the past is forgotten. One of the most commonly used scales for determining the progression of Alzheimer disease is the Global Deterioration Scale, developed by Dr. Barry Reisberg, MD, clinical director of the Aging and Dementia Research Center at New York University.
The progression of Alzheimer disease is highly variable, so it's very difficult to predict when or whether a certain person will progress to the more advanced stages of the disease. Most people will live for about eight years after receiving a diagnosis of Alzheimer disease, so getting properly diagnosed is extremely important in helping the family prepare for what may come.
Is there a cure for Alzheimer's?
At the present time, Alzheimer's cannot be cured or stopped. Much of today's research, however, is focused on the question of whether Alzheimer disease can be prevented. There is growing hope that changes in lifestyle, diet, and exercise, and the use of alternative or complementary treatments such as vitamins and herbs, may make a difference.
Following are some of the educational materials developed and disseminated by the Council:
Support the National Alzheimer's Council
Source: http://www.nemahealth.org/programs/nac/index.htm
The essential nanoclay raw material is montmorillonite, a 2-to-1 layered smectite clay mineral with a platey structure. Individual platelet thicknesses are just one nanometer (one-billionth of a meter), but surface dimensions are generally 300 to more than 600 nanometers, resulting in an unusually high aspect ratio. Naturally occurring montmorillonite is hydrophilic. Since polymers are generally organophilic, unmodified nanoclay disperses in polymers with great difficulty. Through clay surface modification, montmorillonite can be made organophilic and, therefore, compatible with conventional organic polymers. Surface compatibilization is also known as “intercalation”. Compatibilized nanoclays disperse readily in polymers.
Montmorillonite's unique structure creates a platey particle
Nanocor employs a number of chemistries to surface-compatibilize its nanoclays. For example, in addition to traditional onium ion modification, Nanocor has developed and patented a novel means of modification that leaves the sodium ion on the surface and coordinates it via ion-dipole interaction. Regardless of the modification technology used, the resulting clay-chemical complex, which exhibits a definite gallery spacing between the platelets, is called a Nanomer® nanoclay and is supplied as a free-flowing, micronized powder. When Nanomer nanoclays are dispersed in a polymer matrix, they form a near-molecular blend called a nanocomposite.
For more information about montmorillonite structure and morphology, consult Technical Data G-105. Additional information about surface compatibilization is contained in Technical Data G-100.
Source: http://www.nanocor.com/nano_struct.asp
This Setting Prices in a Retail Store Guide is a checklist for the owner-manager of a retail business. These 52 questions probe the considerations – from markup to pricing strategy to adjustments – that lead to correct pricing decisions. You can use this checklist to establish prices in your new store, or to periodically review your established pricing policy.
A retailer's prices influence the quantities of various items that consumers will buy, which in turn affect total revenue and profit. Hence, correct pricing decisions are a key to successful retail management. With this in mind, the following checklist of 52 questions has been developed to assist retailers in making systematic, informed decisions regarding pricing strategies and tactics.
This checklist should be especially useful to a new retailer who is making pricing decisions for the first time. However, established retailers, including successful ones, can also benefit from this Guide. They may use it as a reminder of all the individual pricing decisions they should review periodically. And, it may also be used in training new employees who will have pricing authority.
The Central Concept of Setting Prices
A major step toward making a profit in retailing is selling merchandise for more than it cost you. This difference between cost of merchandise and retail price is called markup (or occasionally markon). From an arithmetic standpoint, markup is calculated as follows:
Dollar markup = Retail price − Cost of the merchandise
Percentage markup = (Dollar markup ÷ Retail price) × 100
If an item costs $6.50 and you feel consumers will buy it at $10.00, the dollar markup is $3.50 (which is $10.00 − $6.50). Going one step further, the percentage markup is 35 percent (which is $3.50 divided by $10.00). Anyone involved in retail pricing should be as knowledgeable about these formulas as about the name and preferences of his or her best customer!
Two other key points about markup should be mentioned. First, the cost of merchandise used in calculating markup consists of the base invoice for the merchandise plus any transportation charges minus any quantity and cash discounts given by the seller. Second, retail price, rather than cost, is ordinarily used in calculating percentage markup. The reason for this is that when other operating figures such as wages, advertising expenses, and profits are expressed as a percentage, all are being based on retail price rather than cost of the merchandise being sold.
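The markup arithmetic above can be sketched in a few lines of Python; the figures are the $6.50/$10.00 example from this Guide, and the function names are illustrative:

```python
def dollar_markup(retail_price, cost):
    """Dollar markup = Retail price - Cost of the merchandise."""
    return retail_price - cost

def percentage_markup(retail_price, cost):
    """Percentage markup is conventionally figured on retail price, not cost."""
    return dollar_markup(retail_price, cost) / retail_price

# The $6.50 item priced at $10.00:
print(dollar_markup(10.00, 6.50))      # 3.5 dollars
print(percentage_markup(10.00, 6.50))  # 0.35, i.e. 35 percent
```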
Target Consumers and the Retailing Mix
In this section, your attention is directed to price as it relates to your potential customers. These questions examine your merchandise, location, promotion, and customer services that will be combined with price in attempting to satisfy customers and make a profit. After some questions, brief commentary is provided.
1. Is the relative price of this item very important to your target consumers?
The importance of setting prices depends on the specific product and on the specific individual. Some shoppers are very price conscious. Others want convenience and knowledgeable sales personnel. Because of these variations, you need to learn about your customers' desires in relation to different products. Having sales personnel seek feedback from shoppers is a good starting point.
2. Are set prices based on estimates of the number of units that consumers will demand at various price levels?
Demand-orientated pricing such as this is superior to cost-orientated pricing. In the cost approach, a predetermined amount is added to the cost of the merchandise, whereas the demand approach considers what consumers are willing to pay.
3. Have you established a price range for the product?
The cost of merchandise will be at one end of the price range and the level above which consumers will not buy the product at the other end.
4. Have you considered what price strategies would be compatible with your store's total retailing mix that includes merchandise, location, promotion, and services?
5. Will trade-ins be accepted as part of the purchase price on items such as appliances and television sets?
Suppliers and Competitors
This set of questions looks outside your firm to two factors that you cannot directly control - suppliers and competitors.
6. Do you have final pricing authority?
With the repeal of fair trade laws, "yes" answers will be more common than in previous years. Still, a supplier can control retail prices by refusing to deal with non-conforming stores (a tactic which may be illegal) or by selling to you on consignment.
7. Do you know what direct competitors are doing price-wise?
8. Do you regularly review competitor's ads to obtain information on their prices?
9. Is your store large enough to employ either a full-time or a part-time comparison shopper?
These three questions emphasize the point that you must watch competitors' prices so that your prices will not be far out of line - too high or too low - without good reason. Of course, there may be a good reason for out-of-the-ordinary prices, such as seeking a special price image.
A Price Level Strategy
Selecting a general level of prices in relation to competition is a key strategic decision, perhaps the most important.
10. Should your overall strategy be to sell at prevailing market price levels?
The other alternatives are an above-the-market strategy or a below-the-market strategy.
11. Should competitor's temporary price reductions ever be matched?
12. Could private-brand merchandise be obtained in order to avoid direct price competition?
Calculating Planned Initial Markup
In this section you will have to look inside your business, taking into account sales, expenses, and profits before setting prices. The point is that your initial markup must be large enough to cover anticipated expenses and reductions and still produce a satisfactory profit.
13. Have you estimated sales, operating expenses, and reductions for the next selling season?
14. Have you established a profit objective for the next selling season?
15. Given estimated sales, expenses, and reductions, have you planned initial markup?
This figure is calculated with the following formula:
Initial markup percentage = (Operating expenses + Reductions + Profit) ÷ (Net sales + Reductions)
Reductions consist of markdowns, stock shortages, and employee and customer discounts. The following example uses dollar amounts, but the estimates can also be percentages. Suppose the retailer estimates operating expenses of $34,000, reductions of $6,000, and net sales of $94,000, and desires a $4,000 profit; the initial markup percentage can be calculated:
Initial markup percentage = ($34,000 + $6,000 + $4,000) ÷ ($94,000 + $6,000) = $44,000 ÷ $100,000 = 44%
The resulting figure, 44 percent in this example, indicates what size markup is needed on the average in order to make the desired profits.
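The same calculation as a short Python sketch, using the figures from the example above (the function name is illustrative):

```python
def initial_markup_pct(operating_expenses, reductions, profit, net_sales):
    """(Operating expenses + Reductions + Profit) / (Net sales + Reductions)."""
    return (operating_expenses + reductions + profit) / (net_sales + reductions)

# $34,000 expenses, $6,000 reductions, $4,000 desired profit, $94,000 net sales:
print(f"{initial_markup_pct(34_000, 6_000, 4_000, 94_000):.0%}")  # 44%
```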
16. Would it be appropriate to have different initial markup figures for various lines of merchandise or service?
You would seriously consider this when some lines have much different characteristics than others. For instance, a clothing retailer might logically have different initial markup figures for suits, shirts and pants, and accessories. (Various merchandise characteristics are covered in an upcoming section.) You may want those items with the highest turnover rates to carry the lowest initial markup.
Store Pricing Policies
Having calculated an initial markup figure, you could proceed to set prices on your merchandise. But an important decision such as this should not be rushed. Instead, you should consider additional factors which suggest what would be the best price.
Policies are written guidelines indicating appropriate methods or actions in different situations. If established with care, they can save you time in decision making and provide for consistent treatment of shoppers. Specific policy areas that you should consider are as follows:
18. Will a one-price system, under which the same price is charged to every purchaser of a particular item, be used on all items?
The alternative is to negotiate price with consumers.
19. Will odd-ending prices, such as $1.98 and $44.95, be more appealing to your customers than even-ending prices?
20. Will consumers buy more if multiple pricing, such as 2 for $8.50, is used?
21. Should any leader offerings (selected products with quite low, less profitable prices) be used?
22. Have the characteristics of an effective leader offering been considered?
Ordinarily, a leader offering needs the following characteristics to accomplish its purpose of generating much shopper traffic: used by most people, bought frequently, very familiar regular price, and not a large expenditure for consumers.
23. Will price lining, the practice of setting up distinct points (such as $5.00, $7.50 and $10.00) and then marking all related merchandise at these points, be used?
24. Would price lining by means of zones (such as $5.00 - $7.50 and $12.50 - $15.00) be more appropriate than price points?
25. Will cent-off coupons be used in newspaper ads or mailed to selected consumers on any occasion?
26. Would periodic special sales, combining reduced prices and heavier advertising, be consistent with the store image you are seeking?
27. Do certain items have greater appeal than others when they are part of a special sale?
28. Has the impact of various sale items on profit been considered?
Sales prices may mean little or no profit on these items. Still, the special sales may contribute to total profits by bringing in shoppers who may also buy some regular-price (and profitable) merchandise and by attracting new customers. Also, you should avoid featuring items that require a large amount of labor, which in turn would reduce or erase profits. For instance, according to this criterion, shirts would be a better special sales item than men's suits that often require free alterations.
29. Will "rain checks" be issued to consumers who come in for special-sale merchandise that is temporarily out of stock?
You should give particular attention to this decision since rain checks are required in some situations. Your lawyer or the regional Federal Trade Commission office should be consulted for specific advice regarding whether rain checks are needed in the special sales you plan.
Nature of the Merchandise
In this section you will be considering how selected characteristics of particular merchandise affect planned initial markup.
30. Did you get a "good deal" on the wholesale price of this merchandise?
31. Is this item at the peak of its popularity?
32. Are handling and selling costs relatively great due to the product being bulky, having a low turnover rate, and requiring much personal selling, installation, or alterations?
33. Are relatively large levels of reductions expected due to markdowns, spoilage, breakage, or theft?
With respect to the preceding four questions, "Yes" answers suggest the possibility of or need for larger-than-normal initial markups. For example, very fashionable clothing often will carry a higher markup than basic clothing such as underwear because the particular fashion may suddenly lose its appeal to consumers.
34. Will customer services such as delivery, alterations, gift wrapping, and installation be free of charge to customers?
The alternative is to charge for some or all of these services
Economic Conditions, Laws, and Consumerism
The questions in this section focus your attention on three factors outside your business, namely economic conditions, laws, and consumerism.
35. Are economic conditions in your trading area abnormal?
Consumers tend to be price-conscious when the economy is depressed, suggesting that lower-than-normal markups may be needed to be competitive. On the other hand, shoppers are less price-conscious when the economy is booming, which would permit larger markups on a selective basis.
36. Are the ways in which prices are displayed and promoted compatible with consumerism, one part of which has been a call for more straightforward price information?
37. If yours is a grocery store, is it feasible to use unit pricing, in which the item's cost per some standard measure is indicated?
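Unit pricing of the kind question 37 describes is straightforward division: the shelf price divided by the package size, expressed per standard measure. A sketch with hypothetical package sizes and prices:

```python
def unit_price(shelf_price, package_size):
    """Cost per one standard measure, e.g. dollars per ounce."""
    return shelf_price / package_size

# Hypothetical shelf comparison: which cereal box is cheaper per ounce?
print(round(unit_price(2.88, 16), 2))  # 0.18 dollars per ounce
print(round(unit_price(4.20, 28), 2))  # 0.15 dollars per ounce
```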
Having asked (and hopefully answered) more than three dozen questions, you are indeed ready to establish retail prices. When you have decided on an appropriate percentage markup, 35 percent on a garden hose, for example, the next step is to determine what percentage of the still unknown retail price is represented by the cost figure. The basic markup formula is simply rearranged to do this:
Cost = Retail price - Markup
Cost = 100% - 35% = 65%
Then the dollar cost, say $3.25 for the garden hose, is plugged in to the following formula to arrive at the retail price:
Retail price = Dollar cost ÷ Cost percentage = $3.25 ÷ 65% (or .65) = $5.00
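In Python, the rearrangement looks like this (a minimal sketch using the garden hose figures from the text; markup is assumed to be figured on retail, as this Guide recommends):

```python
def retail_price(cost, markup_on_retail):
    """Retail price = cost / (1 - markup percentage on retail)."""
    return cost / (1 - markup_on_retail)

# The $3.25 garden hose with a planned 35 percent markup:
print(round(retail_price(3.25, 0.35), 2))  # 5.0, i.e. a $5.00 retail price
```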
One other consideration is necessary:
38. Is the retail price consistent with your planned initial markups?
Price Adjustments
It would be ideal if all items sold at their original retail prices. But we know that things are not always ideal. Therefore, a section on price adjustments is necessary.
39. Are additional markups called for because wholesale prices have increased or because an item's low price causes consumers to question its quality?
40. Should employees be given purchase discounts?
41. Should any groups of customers, such as students or senior citizens, be given purchase discounts?
42. When markdowns appear necessary, have you first considered other alternatives such as retaining price but changing another element of the retailing mix or storing the merchandise until the next selling season?
43. Has an attempt been made to identify causes of markdown so that steps can be taken to minimize the number of avoidable buying, selling, and pricing errors that cause markdowns?
44. Has the relationship between timing and size of markdowns been taken into account?
In general, markdowns taken early in the selling season or shortly after sales slow down can be smaller than late markdowns. Whether an early or late markdown would be more appropriate in a particular situation depends on how many consumers might still be interested in the product, the size of the initial markup and the amount remaining in stock.
45. Would a schedule of automatic markdowns after merchandise has been in stock for specified intervals be appropriate?
46. Is the size of the markdown "just enough" to stimulate purchases?
This question stresses the point that you have to observe the effects of markdowns so that you can know what size markdowns are "just enough" for different kinds of merchandise.
47. Has a procedure been worked out for markdowns on price-lined merchandise?
48. Is the markdown price calculated from the off-retail percentage?
This question gets you into the arithmetic of markdowns. Usually, you first tentatively decide on the percentage amount price must be marked down to excite consumers. For example, if you think a 25 percent markdown will be necessary to sell a lavender sofa, the dollar amount of the markdown is calculated as follows:
Dollar markdown = Off-retail percentage x Previous retail price
Dollar markdown = 25% (or .25) x $500 = $125
Then the markdown price is obtained by subtracting the dollar markdown from the previous retail price. Hence, the sofa would be $375.00 after taking the markdown.
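The markdown arithmetic can be sketched the same way (sofa figures from the example above; names are illustrative):

```python
def markdown_price(previous_retail, off_retail_pct):
    """Subtract a markdown of off_retail_pct of the previous retail price."""
    dollar_markdown = off_retail_pct * previous_retail
    return previous_retail - dollar_markdown

# A 25 percent markdown on the $500 lavender sofa:
print(markdown_price(500, 0.25))  # 375.0
```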
49. Has cost of the merchandise been considered before setting the markdown price?
This is not to say that a markdown price should never be lower than cost; on the contrary, a price that low may be your only hope of generating some revenue from the item. But cost should be considered to make sure that below-cost markdown prices are the exception in your store rather than being so common that your total profits are really hurt.
50. Have procedures for recording the dollar amounts, percentages, and probable causes of markdowns been set up?
Markdown analysis can provide information to assist in calculating planned initial markup, in decreasing errors that cause markdowns, and in evaluating suppliers.
51. Have you marked the calendar for a periodic review of your pricing decisions?
Rather than leaving you to make careless pricing decisions, this checklist should help you lay a solid foundation of effective prices as you try to build retail profits.
Copyright © by Bizmove. All rights reserved.
City of Naperville History
DuPage and Will Counties, 28 miles W of the Loop.
Joseph Naper is credited with founding Naperville along the DuPage River in 1831. He drew the first plat in 1842 and was elected the president of the board when the village of Naperville was incorporated in 1857.
Early families like the Napers, Scotts, Hobsons, and Paines came primarily from the Northeast; by the 1840s they were joined by Pennsylvanians, Germans, English, and Scots. They built at least seven churches, four of which held most services in German.
Naperville became an important stop at the crossroads of two main stage routes that ran from Chicago to Galena and to Ottawa. By 1832, 180 residents had built sawmills, gristmills, stores, and the Pre-Emption House hotel. The town became the county seat when DuPage County was established in 1839.
Eight Naperville businesses contributed to the development of the Southwest Plank Road, which was completed in 1851 and connected Chicago, Naperville, and Oswego. These businessmen then opposed a Naperville right-of-way for the Galena & Chicago Union Railroad when its representatives came prospecting that same year. The Galena line went through Wheaton instead. But the town got a second chance when the Chicago, Burlington & Quincy Railroad ran its line through Naperville in 1864.
Naperville's growth for the next century was tied to this easy rail connection to Chicago. In 1870, North Central College (then North Western College) relocated to Naperville from Plainfield to serve the community and members of the Evangelical Association of North America. Stone quarries flourished, providing building materials for Chicago, especially after the disastrous fire of 1871. The Stenger Brewery shipped beer around the region. The Kroehler Manufacturing Company, which became Naperville's largest employer, shipped furniture by rail into Chicago and its all-important markets.
Naperville organized as a city in 1890 and had a population of 2,629 by 1900. Between 1890 and 1920, residents began receiving city services such as water, sewers, electricity, and telephones. Naperville grew to 12,933 by 1960.
While the suburban boom began in the near western suburbs after World War II, Naperville remained out of the range of this growth until 1954, when plans for the East-West toll road were announced. The route, which skirted the northern edge of Naperville and included an interchange, linked the city to downtown Chicago via the just completed Eisenhower Expressway. As a result of this new access, residential, retail, industrial, and service industries boomed in and around Naperville. The city grew to 50 square miles in 1993, with a population of 128,358 in 2000. Among municipalities in the metropolitan area only Aurora and Chicago itself were larger.
Many of the new enterprises attracted to the Naperville area were based in research and development. During the late 1950s and 1960s, Argonne National Laboratory, Northern Illinois Gas, Amoco Research Center, AT&T Bell Laboratories, and Fermi National Accelerator Laboratory were established in or near Naperville. Harold Moser led the residential building boom with his first subdivision in 1956. By 1995, Moser had subdivided 8,000 building lots and had built 3,500 homes in Naperville.
North Central College, now Methodist-affiliated, continues to serve the Naperville community. The Naper Settlement, established in 1969 under the Naperville Park District, has transported historic structures from across the area and serves as a focal point for the Naperville community. Beginning in the early 1980s, the Riverwalk revitalized the downtown area and today provides acres of park and paths.
Ann Durkin Keating - Copyright Newberry Library.
From James R. Grossman, Ann Durkin Keating, and Janice L. Reiff, eds., The Encyclopedia of Chicago (University of Chicago Press, 2004).
523 S. Webster St., Naperville, IL 60540 • 630.420.6010 • Fax: 630.305.4044
Administered by the Naperville Heritage Society
Accredited by the American Association of Museums
A Copenhagen Climate Agreement
The UN Conference on Climate Change in Copenhagen presents a critical opportunity to strengthen the international response to global climate change. The aim in Copenhagen should be a comprehensive political agreement that puts countries on a clear path to concluding a legally binding agreement in 2010. This interim agreement should deliver both immediate action and the broad architecture of a future treaty, including:
- Ambitious political commitments for mid-term action by all major economies: economy-wide emission reduction targets for developed countries, and quantified mitigation actions by major developing countries;
- A “prompt start” on adaptation, forestry, technology and capacity-building activities and support in developing countries;
- The core elements of a legally binding agreement to be finalized over the coming year, including: a framework for verifiable mitigation commitments by all major economies; new arrangements for sustained mitigation and adaptation support to developing countries; and a system to verify countries’ actions and support; and,
- A clear mandate to conclude negotiations on a legally binding agreement at COP 16 in December 2010.
The Ultimate Goal: A Ratifiable Treaty
Negotiations are proceeding on parallel tracks under the UN Framework Convention on Climate Change (UNFCCC), which includes the United States, and under the UNFCCC’s Kyoto Protocol, which does not. The ultimate outcome could take many forms; the most coherent would be a single comprehensive agreement under the UNFCCC.
Whatever its particular form, it is important that this final outcome be legally binding. Countries will deliver their strongest possible efforts only if they are confident that their major counterparts and competitors are as well. This confidence is best instilled and maintained through mutual and verifiable commitments. While the United States and other countries are moving to strengthen their domestic climate efforts, and most will be ready to announce political commitments in Copenhagen, not all are prepared to take on binding legal commitments. An interim agreement in Copenhagen would significantly advance the global climate effort by settling fundamental legal and design issues so that governments can then negotiate specific commitments in a ratifiable agreement post-Copenhagen.
In Copenhagen: A Strong Framework Agreement
Much of the focus in Copenhagen will be on the political commitments announced by governments on their domestic climate efforts, and on the decisions and “prompt-start” finance needed to quickly operationalize new support for developing countries. It is critical that the Copenhagen agreement also begin to establish the legal and institutional framework for converting these interim pledges and decisions into an effective treaty with legally binding commitments. It should go as far as possible to define:
Ambitious Goals. The agreement should recognize the imperative of limiting warming to 2 degrees Celsius and set an aspirational goal of reducing global emissions at least 50 percent by 2050.
A Framework for Mitigation Commitments. The agreement should clearly define the nature of mitigation commitments and how they are to be reflected in a final agreement (e.g., through “appendices” or “schedules”). Consistent with the UNFCCC’s principle of “common but differentiated responsibilities,” it should allow varying forms and levels of commitments depending on national circumstance:
- Absolute economy-wide emission targets for all developed countries; and
- A wider range of quantifiable policy-based commitments for major developing countries (e.g., sectoral emission targets, energy efficiency standards, renewable energy targets, sustainable forestry goals).
The agreement should launch and support a process, such as a “registry” process, to elaborate country-specific commitments for the major developing countries and to align support for them. It also should go as far as possible in defining implementation and accounting rules.
Support for Developing Countries. The agreement should broadly establish the mechanisms, sources, and levels of support to be provided in a final agreement for adaptation, capacity building, forestry and technology deployment in developing countries. It should: set initial funding levels and a timetable for periodic replenishment; set criteria to determine countries’ contributions to and/or eligibility for support; rely on, rather than replicate, existing multilateral financial mechanisms; provide for stronger developing country representation in the governance of climate finance; and, recognize the full range of multilateral and bilateral funding sources.
A Sound System of Verification. The agreement should establish basic terms for the measurement, reporting and verification of countries’ mitigation actions, and of support for developing country efforts, as called for in the Bali Action Plan. Building on existing reporting and review requirements under the UNFCCC and Kyoto Protocol, it should require annual emissions inventories by all major-emitting countries (with a phase-in period and support for developing countries); national verification of countries’ mitigation commitments; and, regular implementation reports subject to international review. The review process should culminate in a clear determination of whether or not a country is complying with its commitments, with facilitative remedies in cases of non-compliance.
Feb 02, 1876: National League born
Submitted by BTGrimes on Sat, 02/02/2013 - 7:00am
"Senior circuit" formed
NEW YORK, NEW YORK - The National League of Baseball Clubs was formed on this date in 1876. What became known as the National League survives to this day, and it owes as much to the marketing of sporting goods as it does to play on the field.
One of the chief architects of the new league was Albert G. Spalding of Rockford, Illinois. He was thinking of the sale of baseball equipment as much as balls and strikes.
As Leonard Koppett wrote in Koppett's Concise History of Major League Baseball, Spalding thought he had a better way to run a professional baseball organization than the loosely held National Association founded in 1871. He didn't have much faith that the east coast dominated Association would survive, and he wanted desperately for professional baseball to survive so teams and their fans would buy baseball equipment from him. He and William Hulbert of Chicago began to put together a plan. The problem was that Spalding and Hulbert were part of the National Association; Spalding played for Boston, and Hulbert was in the front office of the Chicago White Stockings.
The two needed a solid plan before the start of the next season to attract select east coast National Association teams. They got commitments from midwest teams in Cincinnati, Louisville and St. Louis to join Chicago. That's where the February 2, 1876 meeting came in. The gathering was held at the Central Hotel in Manhattan with representatives from Philadelphia, New York, Boston and Hartford. They all agreed and the National League was born. Play began that spring with those eight teams. As Koppett wrote, "It established a pattern that became the model for all commercialized spectator team sports from then on."
This daily dose of baseball history is brought to you by TODAY in BASEBALL. Spread the word. Link www.todayinbaseball.com to your website.
Mobile Design Pattern: Paging
This design pattern is part of the Mobile Design Patterns series.
A systematic technique of arranging the content of a long page across a number of numbered webpages is known as Paging.
- Gives users advance information about the quantity of the content.
Often the content of a page is too long - endlessly scrollable, in other words. Users need to look for something specific in a huge list of items, for example search results, available jobs, or shopping items, where numerous results are available. To avoid endless scrolling in such conditions, the content is systematically classified according to relevance, name, type, price, etc., and presented to the user in the form of multiple webpages.
As shown in the screenshots below, only part of the results is provided on the current webpage. Navigation links to the other lists of results, or next pages, are located at the top and/or bottom of the page. It is general practice to provide these navigation links both at the top and bottom of the webpage for the convenience of the user.
Depending on the number of other results available, the navigation links are numbered. For example, when there are very many results, links for pages "1-10" are shown, while fewer navigation links are shown when comparatively few results are available. Ref fig.
The navigation links are generally page numbers, as shown in the above screenshots. However, when mobilizing a website - where lower screen resolution results in less space - the page numbers tend to be very small. A special design with links such as "Show next 10 results" and "Show previous 10 results" may be used.
- Depending on the available screen space and the size and style of the fonts used, the navigation links may sometimes appear too small. An alternative is to use buttons that link to the appropriate pages.
- A scroll bar (with a Go button) with navigation links may also be used instead of making all number links visible. This may be helpful when the available screen space is small.
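The slicing arithmetic behind numbered page links can be sketched in a few lines of Python (the function name and the page size of 10 are illustrative, not part of any particular framework):

```python
import math

def page_of(results, page, per_page=10):
    """Return the items for a 1-based page number, plus the total page count."""
    total_pages = max(1, math.ceil(len(results) / per_page))
    start = (page - 1) * per_page
    return results[start:start + per_page], total_pages

# 25 results split into pages of 10: page 2 holds items 11 through 20.
items, total = page_of(list(range(1, 26)), page=2)
print(items, total)
```

The `total_pages` value is what drives how many numbered links (or "Show next 10 results" buttons) to render.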
--Submitted by croozeus 01:18, 12 May 2009 (EEST)
Aromatherapy is the use of organic compounds called essential oils to improve a person’s mood, mental state, or health. The oils are extracted from various plant parts, such as roots, seeds, leaves, and blossoms, and can be blended together. Treatment involves diffusing oil into the air, dissolving it in a bath, or applying it during a massage, among other techniques.
How Aromatherapy Works
Researchers aren’t sure how aromatherapy works, but most believe that chemicals in essential oils trigger smell receptors in the nose that are connected to areas of the brain related to mood. For people with depression, certain oils are believed to bring about a sense of calm or to elevate mood. Methods of aromatherapy include:
- Diffusion: Diffusion is the process of spreading a scent gently and continuously throughout an area. This is usually done by a machine called a diffuser, which will allow you to transform a room or your entire house into a therapy solution.
- Room spray: Sprays are a good way to get scents into an area quickly and conveniently. You can buy them premixed, or add some of your favorite oils to water in a spray bottle to make your own.
- Massage: Aromatherapy oils are a popular addition to a full-body massage. Combining aromatherapy with massage is a great way to relax and ease away both physical and mental stress.
- Baths: Various oils as well as salts that contain oils are available to turn your bath or shower into a relaxing, therapeutic experience.
- Skin and hair products: Scented beauty products are an easy way to keep a scent with you throughout the day.
The following essential oils are sometimes used to help ease the effects of depression:
- Clary sage
Pros of Aromatherapy
Aromatherapy is an easy way to help beat stress with calming and relaxing scents. The oils are usually inexpensive, and usage is simple based on one of the above methods.
Cons of Aromatherapy
Although aromatherapy is generally safe, the oils could cause an adverse or allergic reaction in some people. People who are particularly sensitive to strong scents should consider other options. Like most alternative therapies, aromatherapy should not be used as the only therapy for moderate to severe depression. There is little to no clinical research to support the efficacy of aromatherapy as a treatment for depression. However, people often use it as a complement to more traditional depression treatments.
What the Expert Says
Aromatherapy is best when incorporated with other alternative therapies, such as massage or meditation, according to Dr. Mason Turner, Chief of Psychiatry, Kaiser Permanente San Francisco.
“It really can help bring the person into the present moment,” he said.
Using comforting scents, such as the smell of fresh baked cookies, can also elicit a conditioned response. “Scents bring up memories the way no other senses can,” Dr. Turner said. “They can be very powerful in jogging fond memories.”
|
<urn:uuid:7d2d924d-f90b-4a4a-b6f1-cdfcfdfa1307>
|
CC-MAIN-2013-20
|
http://www.helpfordepression.com/article/alternative-methods/aromatherapy-depression
|
2013-05-23T18:39:20Z
|
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368703682988/warc/CC-MAIN-20130516112802-00052-ip-10-60-113-184.ec2.internal.warc.gz
|
en
| 0.950908
| 640
|
About Antarctica - Dramatic clouds above Reptile Ridge, Adelaide Island
The majority of the Antarctic continent is covered by permanent ice and snow leaving less than 1% available for colonization by plants. Most of this ice and snow-free land is found along the Antarctic Peninsula, its associated islands and in coastal regions around the edge of the rest of the Antarctic continent. Even in the most inhospitable ice-free habitats, such as inland mountains and nunataks, life can still be found.
There are no trees or shrubs, and only two species of flowering plants, Antarctic hair grass (Deschampsia antarctica) and Antarctic pearlwort (Colobanthus quitensis), are found, occurring on the South Orkney Islands, the South Shetland Islands and along the western Antarctic Peninsula. The vegetation is predominantly made up of lower plant groups (mosses, liverworts, lichens and fungi) which are specially adapted to surviving in extreme environments, in particular tolerating low temperatures and dehydration. There are, in total, around 100 species of mosses, 25 species of liverworts, 300 to 400 species of lichens and 20-odd species of macro-fungi. The greatest diversity of species is found along the western side of the Antarctic Peninsula, where the climate is generally warmer and wetter than elsewhere on the Antarctic continent. Certain species of moss and lichen, however, have a widespread distribution, and others specialise in surviving in very extreme conditions. In the dry valleys of Victoria Land, for example, where it is very dry and extremely cold, algae, fungi and lichens are found living in cracks and pore spaces inside the sandstone and granite rocks.

Terrestrial Plants
Antarctic pearlwort (Colobanthus quitensis) with very long flower stalks
Antarctic hair grass (Deschampsia antarctica) stand with a large patch of dead grass (cause unknown)
Tussock grass (Parodiochloa flabellata, dark green) and Antarctic hair grass (Deschampsia antarctica, light green) lawn
Tussock grass (Parodiochloa flabellata) stools eroded by elephant seal activity
The sub-Antarctic islands have a milder and wetter climate more favourable for plant growth, meaning these islands possess a more diverse flora, including a greater number of flowering plant species and some ferns. Dominant amongst sub-Antarctic vegetation is tussock grass, a tall (up to 2 m) robust plant forming a dense fringe near the coast. Wet habitats are covered by various kinds of bog, while drier terrain has extensive dry grassland with various herbs and, in exposed habitats, sparsely-vegetated moss- and lichen-dominated fellfield. Human activities such as whaling and sealing have led to many species being introduced. South Georgia, for example, has a vascular flora of 26 indigenous species, with a further 15 alien species which are well-established, and in some cases spreading, and a number of other alien species which are managing to survive close to the former whaling stations.
Dried specimens of most of the Antarctic and sub-Antarctic flora can be found in the British Antarctic Survey’s herbarium.
© NERC-BAS 2012
|
<urn:uuid:63e1dba6-bada-4de4-95a1-89d961f932b0>
|
CC-MAIN-2013-20
|
http://www.nerc-bas.ac.uk/cgi-bin/parser.pl/0345/www.antarctica.ac.uk/about_antarctica/wildlife/plants/index.php
|
2013-05-23T19:04:33Z
|
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368703682988/warc/CC-MAIN-20130516112802-00002-ip-10-60-113-184.ec2.internal.warc.gz
|
en
| 0.888597
| 879
|
This book focuses almost exclusively on cabinet-level politics, international diplomacy, and military grand strategy. It provides a detailed chronological account of the war, sometimes recounting events hour by hour (on average, each year of the war receives a separate chapter). A more apt title for the book would have been "Navalism and its Limitations". Many politicians cherished the view that navies were cheap and safe as opposed to the standing armies and expensive European entanglements that might put Britain on the road to despotism. Richard Harding is concerned with the practical side of this proposition. Could wars be won without a substantial investment in what we now like to call "boots on the ground"? Some historians, mesmerized by the navy's decisive victories in the 1756-1763 conflict, have assumed that "in the 1740s seapower had been wasted, by incompetence and muddled thinking." (p. 7) Harding sets out to complicate that picture, noting, first, that the navy could never have lived up to the unrealistic expectations placed upon it in 1739, and second, that decision-makers did not display incompetence but faced a risk-fraught and ever-changing environment. Harding's essential point is that the naval option did not exist in a vacuum; to determine whether the use of seapower would have a decisive impact, it must be evaluated in the wider context of concurrent events such as land warfare and diplomacy.
Memories of the defeat of the Armada and of the exploits of Elizabethan privateers tantalized war planners with the possibility that Spain's far-flung empire remained vulnerable to attack from the sea. A naval blockade had forced concessions from Spain as recently as 1726. Declaring war on Spain with "massive expectations of quick victory based on naval power" (p. 6) in 1739, Britain found herself instead drawn into a larger conflict, fighting France as well as Spain, and obligated to support Maria Theresa in the War of the Austrian Succession, a conflict that put George II's cherished Hanover close to the center of diplomatic and military action. The government's tortured maneuvers to protect its German-speaking possession provoked bitter debate in Parliament about "Patriotic" versus "Hanoverian" interests. Meanwhile, the Spanish grip on the Americas did not loosen, and major naval victories occurred too late in the war to exercise a decisive influence over the timing, or the terms, of the peace treaty. The assumption that command of the sea would shield Britain itself from any serious attempt at invasion while it pursued gains elsewhere also proved incorrect. The Jacobite advance from Scotland, combined with French invasion fleets mustering just offshore, briefly posed a threat to London itself.
The need to counter both French and Spanish fleets meant that "from being primarily an offensive force in 1739/40, with a focus on the Caribbean, the Royal Navy had, by the end of 1741, been forced to move to a more defensive posture with a Mediterranean focus." (p. 122) Even in these southern European waters, admirals were sometimes under orders to take no action for diplomatic reasons, which is hardly evidence for the impotence of seapower, but points rather to the complexity of Britain's balancing act as it protected Hanover while prosecuting the wider war. (p. 118) British planners concluded that "the best way of controlling France was to have 80,000 men in Flanders" (p. 89), threatening Paris. Harding could have made this point succinctly, but instead the land campaigns in Flanders muscle aside the naval war within the pages of his book, just as they did in real life; no less than eleven different detailed maps show the maneuverings of the rival armies in this tiny area, whereas the minimalist sketches of "The West Indies" and "Spanish Imperial Trade Routes" offer little to the reader.
Naval power might achieve exciting raids and captures, as Admiral Vernon demonstrated early in the war, but the toll of tropical diseases made it difficult to hold Spanish ports for long. By 1743, the only Spanish soil captured and still retained in the Caribbean was "the little island of Roatan in the Gulf of Honduras," tenuously held by a mutiny-prone garrison of American troops. (p. 167) The return to a maritime emphasis came only after victory in Flanders seemed impossible. News of the unexpected capture of Louisbourg in North America spurred debate over the possibility of forcing France to the negotiating table using overseas exploits alone. (p. 262) Using seapower to convey an invasion force to take Quebec seemed promising, but the possibility of further French attempts to invade across the English Channel made it seem imprudent to send large numbers of troops so far away. A bold stroke in North America was certainly an option, but it might have been met with an even more devastating riposte in Europe.
Despite the word "global" in the title, developments beyond the shores of the Atlantic receive limited attention; a late remark on the jubilation in London that French schemes had been foiled in both America and India (p. 318) comes as a surprise, since events on the Asian front of the war had not been brought to the reader's attention. A thoughtful account of the debate over whether the first blow against the Spanish should come at the Philippine port of Manila or at the South American port of Cartagena (pp. 60-65) is an honorable exception to this neglect of Asia, though Britain's long-term strategic plans for a captured Manila (if they existed) are not discussed.
While Harding occasionally cites historiography in French and Spanish, his archival work and his secondary reading are overwhelmingly in English-language sources alone. His focus remains consistently on the internal debates within the British political elite and the options available to them at any given moment, without supplying anything like an equivalent account of the strengths, weaknesses, or strategic objectives of Britain's enemies. Such a one-sided approach to military or diplomatic history has inherent limitations. Statements such as "French naval operations in the West Indies proved remarkably ineffective" (p. 337) and "France was becoming war weary as a result of its own confused policy" (p. 327) appear without adequate explanation or even supporting footnotes.
The American front receives better coverage than events in Asia, but Harding's treatment of the new colony in Georgia is representative of his weaknesses here. Georgia is mentioned on several different occasions, but this colony is not named in the index. He mentions the Native American population as one reason why the British would not wage war effectively in the Carolinas, Georgia and Florida. Harding's authorities on this matter are books more than thirty years old, and journal articles from 1927 and 1941. (p. 247, note 117). The unsuccessful interactions with potential native allies are an excellent example of how new insights from cultural history and Native American history could redefine our approach to both diplomacy and military affairs, but this opportunity went unnoticed. More broadly, there is little recognition in this volume that non-Europeans may have played a substantive role in this "global" and imperial conflict.
|
<urn:uuid:d9b686e3-e9c6-4816-9995-b8af5e564ee9>
|
CC-MAIN-2013-20
|
http://geschichte-transnational.clio-online.net/rezensionen/id=15258&count=29&recno=13&type=rezbuecher&sort=verfasser_herausgeber&order=up&geschichte=171&segment=16
|
2013-05-22T21:44:26Z
|
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368702448584/warc/CC-MAIN-20130516110728-00003-ip-10-60-113-184.ec2.internal.warc.gz
|
en
| 0.962723
| 1,447
|
Josephus and Jesus
By Paul L. Maier, The Russell H. Seibert Professor of Ancient History, Western Michigan University
Flavius Josephus (A.D. 37 – c. 100) was a Jewish historian born in Jerusalem four years after the crucifixion of Jesus of Nazareth in the same city. Because of this proximity to Jesus in terms of time and place, his writings have a near-eyewitness quality as they relate to the entire cultural background of the New Testament era. But their scope is much wider than this, encompassing also the world of the Old Testament. His two greatest works are Jewish Antiquities, unveiling Hebrew history from the Creation to the start of the great war with Rome in A.D. 66, and Jewish War, which, though written first, carries the record on to the destruction of Jerusalem and the fall of Masada in A.D. 73.
Against this background, we should certainly expect that he would refer to Jesus of Nazareth, and he does—twice in fact. In Antiquities 18:63—in the middle of information on Pontius Pilate (A.D. 26-36)—Josephus provides the longest secular reference to Jesus in any first-century source. Later, when he reports events from the administration of the Roman governor Albinus (A.D. 62-64) in Antiquities 20:200, he again mentions Jesus in connection with the death of Jesus' half-brother, James the Just of Jerusalem. These passages, along with other non-biblical, non-Christian references to Jesus in secular first-century sources—among them Tacitus (Annals 15:44), Suetonius (Claudius 25), and Pliny the Younger (Letter to Trajan)—prove conclusively that any denial of Jesus' historicity is maundering sensationalism by the uninformed and/or the dishonest.
Although this passage is so worded in the Josephus manuscripts as early as the third-century church historian Eusebius, scholars have long suspected a Christian interpolation, since Josephus could hardly have believed Jesus to be the Messiah or in his resurrection and have remained, as he did, a non-Christian Jew. In 1972, however, Professor Schlomo Pines of the Hebrew University in Jerusalem announced his discovery of a different manuscript tradition of Josephus's writings in the tenth-century Melkite historian Agapius, which reads as follows at Antiquities 18:63:
Here, clearly, is language that a Jew could have written without conversion to Christianity. (Schlomo Pines, An Arabic Version of the Testimonium Flavianum and its Implications [Jerusalem: Israel Academy of Sciences and Humanities, 1971.])
Scholars fall into three basic camps regarding Antiquities 18:63:
Josephus must have mentioned Jesus in authentic core material at 18:63 since this passage is present in all Greek manuscripts of Josephus, and the Agapian version accords well with his grammar and vocabulary elsewhere. Moreover, Jesus is portrayed as a "wise man" [sophos aner], a phrase not used by Christians but employed by Josephus for such personalities as David and Solomon in the Hebrew Bible.
Furthermore, his claim that Jesus won over "many of the Greeks" is not substantiated in the New Testament, and thus hardly a Christian interpolation but rather something that Josephus would have noted in his own day. Finally, the fact that the second reference to Jesus at Antiquities 20:200, which follows, merely calls him the Christos [Messiah] without further explanation suggests that a previous, fuller identification had already taken place. Had Jesus appeared for the first time at the later point in Josephus's record, he would most probably have introduced a phrase like "…brother of a certain Jesus, who was called the Christ."
This, Josephus's second reference to Jesus, shows no tampering whatever with the text and it is present in all Josephus manuscripts. Had there been Christian interpolation here, more material on James and Jesus would doubtless have been presented than this brief, passing notice. James would likely have been wreathed in laudatory language and styled, "the brother of the Lord," as the New Testament defines him, rather than "the brother of Jesus." Nor could the New Testament have served as Josephus's source since it provides no detail on James's death. For Josephus to further define Jesus as the one "who was called the Christos" was both credible and even necessary in view of the twenty other Jesuses he cites in his works.
Accordingly, the vast majority of contemporary scholars regard this passage as genuine in its entirety, and concur with ranking Josephus expert Louis H. Feldman in his notation in the Loeb Classical Library edition of Josephus: "…few have doubted the genuineness of this passage on James" (Louis H. Feldman, tr., Josephus, IX [Cambridge, MA: Harvard University Press, 1965], 496).
The preponderance of evidence, then, strongly suggests that Josephus did indeed mention Jesus in both passages. He did so in a manner totally congruent with the New Testament portraits of Christ, and his description, from the vantage point of a non-Christian, seems remarkably fair, especially in view of his well-known proclivity to roast false messiahs as wretches who misled the people and brought on war with the Romans.
Furthermore, his second citation regarding the attitudes of the high priest and Sanhedrin versus that of the Roman governor perfectly mirrors the Gospel versions of the two opposing sides at the Good Friday event. And this extrabiblical evidence comes not from a Christian source trying to make the Gospels look good, but from a totally Jewish author who never converted to Christianity.
|
<urn:uuid:86fe4160-eb8d-410b-b3ba-185b061df58e>
|
CC-MAIN-2013-20
|
http://crossfaithministry.org/josephusandjesus.html
|
2013-06-19T20:03:11Z
|
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368709101476/warc/CC-MAIN-20130516125821-00002-ip-10-60-113-184.ec2.internal.warc.gz
|
en
| 0.96024
| 1,195
|
How to use Excel
Editing Excel Cells
We continue our computer course on how to use Excel with a lesson demonstrating how to edit cells in Excel. The lesson briefly covers how you can set permissions and passwords to protect your work in Excel, then shows three ways of editing Excel cells: double-clicking the selected cell, using the Excel formula bar, and pressing the F2 key. The lesson for Excel beginners goes on to demonstrate the status bar at the bottom of the Excel worksheet.
The how to use Excel course is suitable for anybody looking to improve their computer skills and understand spreadsheets for work or home. Those looking to qualify for the European Computer Driving Licence (ECDL) will also find this computer lesson helpful for obtaining their qualification.
|
<urn:uuid:84559e0e-9b7c-400e-abd6-2f202133a261>
|
CC-MAIN-2013-20
|
http://www.meganga.com/tag/excel-beginners/
|
2013-06-18T04:33:52Z
|
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368706933615/warc/CC-MAIN-20130516122213-00051-ip-10-60-113-184.ec2.internal.warc.gz
|
en
| 0.915733
| 177
|
Evolved into: Middle, then Modern English
Spoken in: the British Isles
Language family: (West) Germanic; Indo-European
Late West Saxon reconstructed pronunciation:
| Diphthongs | Short (monomoraic) | Long (bimoraic) |
| --- | --- | --- |
| First element is close | iy | iːy |
| Both elements are mid | eo | eːo |
| Both elements are open | æɑ | æːɑ |
Old English grammar is quite complex and is typical of an archaic Indo-European language; if you know other archaic Germanic languages (or, to a lesser extent, German, or, to a lesser extent again, other modern Germanic languages) you will find it much easier to learn.
Old English is a highly phonetic language: the spelling can almost always be fully predicted from the pronunciation (with the rare exception of a double letter in the spelling, which only makes a difference in the pronunciation if it is a double plosive/stop), and vice versa (with the rare exception of a certain letter being pronounced several ways).
Common difficulties
Old English retains four (and in the earliest period five) grammatical cases, and also has several declensions, the main two of which are the strong declension and the weak declension.
Old English was spoken at a time before dictionaries, so spelling varied from dialect to dialect (the main four being Mercian, Northumbrian, Kentish, and West Saxon). There were also variations depending on the style of the writing, i.e. prose, poetry, or colloquial texts. This may be a point of confusion for speakers of Modern English, who are used to little spelling variation across the entire language.
There are far more strong verbs in Old English than there are in Modern English.
|
<urn:uuid:1bb0c2d0-ee97-40d4-9b1e-9ea8329f0406>
|
CC-MAIN-2013-20
|
http://learnanylanguage.wikia.com/wiki/Old_English
|
2013-06-19T12:53:52Z
|
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368708766848/warc/CC-MAIN-20130516125246-00001-ip-10-60-113-184.ec2.internal.warc.gz
|
en
| 0.836885
| 402
|
A working group of the Pontifical Academy of Sciences, one of the oldest scientific institutes in the world, has issued a sobering report on the retreat of mountain glaciers as a result of human activity leading to climate change, reports the Catholic Coalition on Climate Change.
In their declaration, the working group calls, “on all people and nations to recognise the serious and potentially irreversible impacts of global warming caused by the anthropogenic emissions of greenhouse gases and other pollutants, and by changes in forests, wetlands, grasslands, and other land uses.” They echoed Pope Benedict XVI’s 2010 World Day of Peace Message saying, “…if we want justice and peace, we must protect the habitat that sustains us.”
Veerabhadran Ramanathan of the Scripps Institution of Oceanography at the University of California San Diego, a member of the Pontifical Academy since 2004 and a co-chair of the working group, said, “I have never participated in any report in 30 years where the word 'God' is mentioned. I think the Vatican brings that moral authority.”
The report focuses on the impact of anthropogenic climate change on mountain glaciers and warns that, “Failure to mitigate climate change will violate our duty to the vulnerable of the Earth, including those dependent on the water supply of mountain glaciers, and those facing rising sea level and stronger storm surges. Our duty includes the duty to help vulnerable communities adapt to changes that cannot be mitigated. All nations must ensure that their actions are strong enough and prompt enough to address the increasing impacts and growing risk of climate change and to avoid catastrophic irreversible consequences.”
The working group recommended three measures to reduce the threat of climate change and its impacts:
* Reduce worldwide carbon dioxide emissions without delay, using all means possible to meet ambitious international global warming targets and ensure the long-term stability of the climate system. All nations must focus on a rapid transition to renewable energy sources and other strategies to reduce CO2 emissions. Nations should also avoid removal of carbon sinks by stopping deforestation, and should strengthen carbon sinks by reforestation of degraded lands. They also need to develop and deploy technologies that draw down excess carbon dioxide in the atmosphere. These actions must be accomplished within a few decades.
* Reduce the concentrations of warming air pollutants (dark soot, methane, lower atmosphere ozone, and hydrofluorocarbons) by as much as 50 per cent, to slow down climate change during this century while preventing millions of premature deaths from respiratory disease and millions of tons of crop damage every year.
* Prepare to adapt to the climatic changes, both chronic and abrupt, that society will be unable to mitigate.
The working party added, "in particular, we call for a global capacity building initiative to assess the natural and social impacts of climate change in mountain systems and related watersheds."
The full report can be read here: http://catholicclimatecovenant.org/
|
<urn:uuid:824c557c-ff39-4d6c-9bcf-147796fc8db2>
|
CC-MAIN-2013-20
|
http://www.ekklesia.co.uk/print/14751
|
2013-05-26T02:52:07Z
|
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368706499548/warc/CC-MAIN-20130516121459-00050-ip-10-60-113-184.ec2.internal.warc.gz
|
en
| 0.933729
| 607
|
(PhysOrg.com) -- The first complete genus-level dated phylogeny of palms reveals insights into the evolution of rainforests.
Understanding how biodiversity is shaped through time is a fundamental question in biology. Even though tropical rainforests represent the most diverse terrestrial biomes, the timing, location and mechanisms of their diversification remain poorly understood. In a recent paper, scientists from the Institut de Recherche pour le Développement (Montpellier), the New York Botanical Garden, and RBG Kew address these issues by constructing the first complete genus-level dated phylogeny of a largely rainforest-restricted plant family, the palms.
Their results indicate that diversification of extant lineages of palms started about 100 million years ago, during the mid-Cretaceous period. Using a range of diversification analyses, the authors conclude that palms diversified in a rainforest-like environment at northern latitudes and have conformed to a constant diversification model (the 'museum' model or Yule process), at least until the Neogene.
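The 'museum' model mentioned here is a pure-birth (Yule) process: every lineage speciates at a constant per-lineage rate, with no extinction, so expected richness grows exponentially through time. A minimal simulation sketch of that idea (illustrative only; the rate and timespan below are invented round numbers, not estimates from the palm data):

```python
import math
import random

def yule_lineages(rate, t_max, seed=None):
    """Simulate a pure-birth (Yule) process and return the number of
    lineages alive at time t_max, starting from one ancestral lineage."""
    rng = random.Random(seed)
    n = 1
    t = 0.0
    while True:
        # With n lineages, the waiting time to the next speciation event
        # anywhere in the tree is exponential with total rate n * rate.
        t += rng.expovariate(n * rate)
        if t > t_max:
            return n
        n += 1

# Under the Yule process, expected richness grows as e^(rate * t), so a
# clade diversifying at 0.05 events/lineage/Myr for 100 Myr should
# average roughly e^5, about 148 extant lineages.
runs = [yule_lineages(0.05, 100.0, seed=s) for s in range(500)]
mean_n = sum(runs) / len(runs)
print(round(mean_n), round(math.exp(0.05 * 100.0)))
```

Averaging many replicates shows the simulated mean tracking the exponential expectation, which is the sense in which a constant-rate model can accumulate high present-day richness over a long enough span.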
These results imply the presence of a rainforest-like biome in the mid-Cretaceous period of Laurasia, considerably earlier than the first reliable fossil evidence for rainforests in the early Tertiary. Controversially, the results also suggest that ancient and steady evolutionary processes dating back to the mid-Cretaceous period can contribute, at least in part, to present day species richness in rainforests, perhaps due to the persistence of refugia during climatically unfavourable periods.
More information: Couvreour, T. L. P., et al. (2011). Origin and global diversification patterns of tropical rain forests: inferences from a complete genus-level phylogeny of palms. BMC Biology 9: 44 (open access).
|
<urn:uuid:91ddb3c9-d5a6-4801-a0c1-d8b0a2839b74>
|
CC-MAIN-2013-20
|
http://phys.org/news/2011-11-palms-rainforest-evolution.html
|
2013-05-21T17:16:51Z
|
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368700264179/warc/CC-MAIN-20130516103104-00050-ip-10-60-113-184.ec2.internal.warc.gz
|
en
| 0.897311
| 403
|
African interlocking techniques
...the pulse of one performer or group of performers falls exactly in the middle of the other’s pulse. This type of interlocking occurs, for example, in the music of the amadinda and embaire xylophones of southern Uganda. A special type of notation is now used for these xylophones, consisting of numbers and periods. A...
...African xylophones show similarities to those of Southeast Asia in tuning and construction, but questions of the influences of trade and migration are controversial. The amadinda is made of logs. Gourd resonators are often provided for each key, sometimes with a mirliton (vibrating membrane) set in the resonator wall, giving a buzzing edge to the tone. It...
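The interlocking described above, in which one performer's strokes fall exactly halfway between the other's, can be sketched in a few lines of Python. This is a toy illustration only; the patterns and variable names are invented for demonstration, not actual amadinda repertoire.

```python
def interlock(part_a, part_b):
    """Merge two equal-length pulse patterns so that each stroke of
    part_b falls exactly halfway between two strokes of part_a."""
    combined = []
    for a, b in zip(part_a, part_b):
        combined.extend([a, b])
    return combined

# Hypothetical patterns in the number-and-period notation mentioned
# above (numbers = xylophone keys struck, '.' = rest):
part_a = [1, 2, '.', 1]
part_b = [3, '.', 3, 2]
print(interlock(part_a, part_b))  # [1, 3, 2, '.', '.', 3, 1, 2]
```

The combined stream is what a listener hears: a single fast pulse line that neither player performs alone.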
|
<urn:uuid:5ce84998-8450-429b-bcf4-36717cdf7652>
|
CC-MAIN-2013-20
|
http://www.britannica.com/EBchecked/topic/18298/amadinda
|
2013-06-20T08:52:28Z
|
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368711005985/warc/CC-MAIN-20130516133005-00050-ip-10-60-113-184.ec2.internal.warc.gz
|
en
| 0.886989
| 217
|
Russia has over its long history had only two principal dynasties: the House of Rurik and the House of Romanov. The Rurik dynasty was founded by Rurik and his son Oleg, Swedish Vikings known as the Rus. It is from their name that the name of Russia is derived. Few European families have dominated their country's history for such an extensive period as the Romanovs. The Romanovs followed the Ruriks, but it is the Romanovs that are generally associated with Russia because they ruled in modern times. The Romanov dynasty was founded by Michael Romanov (1613-45). The first great Romanov Tsar was Michael's grandson, whom we know today as Peter I or Peter the Great (1696-1725).
Oleh during the late 9th century became the first prince of Kiev and founded the Rurik Dynasty. Oleh employed mercenary troops to unite the Eastern Slavs for the first time. He introduced a complex system combining tribute and military democracy. Oleh also led impressive military operations against the Khazars, a nation whose rulers had adopted Judaism, residing on the Volga. He then led an even larger army against Byzantium and assaulted Constantinople. This was the first Western introduction to a Slavic power in the East, the Kievan Rus. After Oleh's death, his relative Ihor became the Great Prince of Kiev. Ihor's greed and cruelty, and his management of this newborn empire, infuriated some of his subjects and led to his downfall. His wife, Olga, replaced him as the Great Princess. She accepted Christianity, and this began the Christianization of the Eastern Slavs. Olga also sought to establish more cooperative relations with Byzantium. Olga's son Svyatoslav was a superb military leader. He fought predatory nomadic tribes like the Pechenigy and conquered Bulgaria. Svyatoslav was killed by a group of Pechenigy after his victory over Byzantium. Kiev declined as a result of a debilitating feud among his sons. This period of instability finally ended when Vladimir the Red Sun seized Kiev and became the fifth Great Prince. He baptised the Rus' into Christianity (988) and repulsed a Byzantine army. His son, Yaroslav the Wise, enacted the first legal code, which came to be known as the Rus' Truths. An internal power struggle and an invasion by the Cumans followed.
The death of Fedor left Russia without any legitimate heirs to the crown (1598). The result was what has become known as "The Time of Troubles", which brought Russia to its knees. Several powerful princes and boyars attempted to seize the crown. The resulting wars devastated the countryside, bringing famine to Russia. Ivan IV's reign had weakened Russian institutions. Many weak rulers after Fedor's death attempted to govern, but with little success. The best known is Boris Godunov, a boyar who had gained power during Fedor's reign. He had, however, no blood connection to the ruling family. Godunov was elected tsar by a zemskii sobor. His reign proved short (1598-1605) and was beset with both Church and boyar opposition, a powerful combination in Russia. Serfs fled the great estates, and the Cossacks in the south rebelled. After Boris Godunov died, a pretender to the crown claiming to be Dmitrii (a younger son of Ivan IV who had died mysteriously) seized the throne. He was soon murdered by dissatisfied boyars. Prince Vasilii Shuiskii reigned from 1606 to 1610 as Vasilii IV, but he was unable to prevent either domestic strife or foreign invasion. Moscow was threatened by a Cossack rebellion, and there was another rebellion by a second false Dmitrii, followed by two years of debilitating civil war. The Poles occupied Moscow (1610) and held the city for two years. Two Russian fighters became prominent, Minin and Prince Pozharsky, who led the army that retook the Kremlin (1612). It is after this that the boyars concluded that they had to put aside constant infighting and support a new tsar.
Few European families have dominated their country's history for such an extensive period as the Romanovs. The Romanovs followed the Ruriks, but it is the Romanovs who are generally associated with Russia because they ruled in modern times. The Romanov dynasty was founded by Michael Romanov (1613-45). The first great Romanov tsar was Michael's grandson, whom we know today as Peter I or Peter the Great (1696-1725). The Romanovs include two remarkable tsarinas, Elizabeth and Catherine. Alexander II freed the serfs. The last Romanov was Nicholas II, who led Russia into World War I and was forced to abdicate. He and his family were killed by the Bolsheviks (1918), ending the Romanov dynasty. This page provides historical background on the Romanov family. Not a lot of information is available yet on how the princes in early historical periods were dressed, but I do hope to acquire some eventually. The background history is also useful in understanding Nicholas II and his family.
Nicholas II, the last Russian emperor, was born on May 6, 1868, at the Alexander Palace, as the eldest son of Tsar Alexander III and Tsarina Maria Feodorovna, of the House of Romanov-Holstein-Gottorp, in the small town of Tsarskoe Selo ("The Tsar's Village" in Russian), near St. Petersburg. Nicholas and his siblings were brought up very simply at the Imperial Palace of Gatchina, their father's favorite residence. Although the palace had 900 rooms, their quarters were located on the mezzanine level, originally intended for servants. They slept in army camp beds without pillows and took cold showers every morning. Their father did not want them spoiled. As tsarevitch, and as was the rule in the family of a tsar, Nicholas was brought up by tutors and private teachers, the best of their time. Nicholas and his siblings attended classes in separate rooms but followed the same curriculum. Nicholas ascended the throne after the untimely death of his father on October 20, 1894, and was crowned on May 14, 1896. Nicholas was only 26 years old and probably not yet ready for the immense responsibilities he faced. According to contemporaries, Nicholas was gentle and approachable. Those who met him easily forgot that they were face to face with the emperor. In private life he was undemanding, but he had contradictions in his character, tending to weakness and inconsistency. A stubborn supporter of the rights of the sovereign, despite growing pressure for revolution, he did not give way on a single issue, even when common sense and circumstances demanded it. Nicholas married the daughter of Grand Duke Ludwig of Hessen, Alice Victoria Eleanor Louisa Beatrice (Alexandra Feodorovna). The story of Nicholas and Alexandra is one of the great love stories of the 20th century. The two were devoted to each other throughout their lives. They had five children. The youngest child, Alexis Nicolaievich, was born August 12, 1904. The tsarevich Alexis suffered from hemophilia and was a permanent invalid. There were four daughters: Olga, Tatiana, Maria and Anastasia. The First World War sealed the fate of Nicholas and his family. Without the war, Russia might have evolved into a democratic government. It would have been difficult, but not impossible. The war made such a transition virtually impossible. Horrendous losses were suffered in World War I, which Russia entered on the Allied side on August 1, 1914. Russian participation forced the Germans to divide their forces, probably saving France on the western front. Russia's loss of territory, massive casualties and confusion at home were the main reasons for the Second Russian Revolution in February 1917. Nicholas II abdicated on March 2, 1917, in favor of his brother Michael, who declined the throne. The family was placed under arrest and eventually moved to Ekaterinburg, where Lenin ordered them to be shot on July 17, 1918. The bodies were hidden and have only recently been found and identified. They were given a Christian burial in 199?. A good-hearted man, Nicholas was not capable of guiding his huge empire into the modern world and through the disaster of World War I.
The Tsarevitch Alexis was one of the most photographed boys of his age. Photography had been perfected by the time of Alexis' birth. George Eastman had popularized photography with his Brownie, and virtually anyone could take snapshots. Photography was very popular with the royal family. There are not only many official portraits; the tsar and his family also liked to take family snapshots, at least in the pre-World War I era. Thus many images of Alexis and his sisters exist. Sailor suits appear to have been his principal outfit, but once the war began he mostly wore an army uniform.
While the tsar's immediate family was massacred by the Bolsheviks, several nephews and nieces survived.
Nicholas and Alexandra and all their children were executed by the Bolsheviks at Ekaterinburg in 1918. There were no surviving immediate family members, only grandnephews and nieces. One woman claimed to be Anastasia and to have survived the massacre, but her claim has since been disproved by DNA testing. Another individual claims to be a grandson of Nicholas through a morganatic marriage.
The royal family set the styles for the Russian nobility as well as for the middle class. Some information is available on the clothes worn by noble Russian boys:
The Prince was born in 1890. Like other Russian boys he wore sailor suits. His mother, however, preferred long, shoulder-length hair to the closely cropped hair worn by other Russian boys. The Prince hated his long hair because his boy cousins used to pull it. (His girl cousins liked to comb it.) His mother also had the Prince wear his sailor suit with skirts rather than the knee pants his cousins of the same age were wearing.
A cousin of Prince Obilinski, at about age 4, wore wide-brimmed sailor hats, long shoulder-length hair and knee-length knickers.
Some good sources of information include:
The Romanovs: John van der Kiste's book The Romanovs centers on the life of Tsar Alexander II (1818-81) and all his Romanov and Yourievsky children, the last of whom, Princess Catherine Alexandrovna Yourievsky, died in 1959. The interesting aspect of this book is that it covers many Romanovs who had previously been ignored by royal historians.
|
<urn:uuid:fd539fd9-3480-426e-8a40-d697393b7308>
|
CC-MAIN-2013-20
|
http://www.histclo.com/royal/rus/royal-rus.htm
|
2013-06-19T19:19:30Z
|
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368709037764/warc/CC-MAIN-20130516125717-00002-ip-10-60-113-184.ec2.internal.warc.gz
|
en
| 0.981738
| 2,396
|
Colposcopy is a procedure that allows a physician to take a closer look at a woman's cervix and vagina using a special instrument called a colposcope. It is used to check for precancerous or abnormal areas. The colposcope can magnify the area between 10 and 40 times; some devices also can take photographs.
The colposcope helps to identify abnormal areas of the cervix or vagina so that small pieces of tissue (biopsies) can be taken for further analysis.
Colposcopy is used to identify or rule out the existence of any precancerous conditions in the cervical tissue.
If a PAP test shows abnormal cell growth, further testing, such as colposcopy, often is required. A PAP test is a screening test that involves scraping cells from the outside of the cervix. If abnormal cells are found, the physician will attempt to find the area that produced the abnormal cells and remove it for further study (biopsy). Only then can a diagnosis be made.
Colposcopy may also be performed if the cervix looks abnormal during a routine examination. It may also be suggested for women with genital warts and for diethylstilbestrol (DES) daughters (women whose mothers took DES when pregnant with them).
Women who are pregnant, or who suspect that they are pregnant, must tell their doctor before the procedure begins. Pregnant women can, and should, have a colposcopy if they have an abnormal PAP test. However, special precautions must be taken during biopsy of the cervix.
A colposcopy is performed in a physician's office and is similar to a regular gynecologic exam. An instrument called a speculum is used to hold the vagina open, and the gynecologist looks at the cervix and vagina through the colposcope instead of simply by eye, as in a routine examination.
The colposcope is placed outside the patient's body and never touches the skin. The cervix and vagina are swabbed with dilute acetic acid (vinegar). The solution highlights abnormal areas by turning them white (instead of a normal pink color). Abnormal areas can also be identified by looking for a characteristic pattern made by abnormal blood vessels.
If any abnormal areas are seen, the doctor will take a biopsy of the tissue, a common procedure that takes about 15 minutes. Several samples might be taken, depending on the size of the abnormal area. A biopsy may cause temporary discomfort and cramping, which usually go away within a few minutes. If the abnormal area appears to extend inside the cervical canal, a scraping of the canal may be done. The biopsy results are usually available within a week.
If the tissue sample indicates abnormal growth (dysplasia) or precancer, and if the entire abnormal area can be seen, the doctor can destroy the tissue using one of several procedures, including ones that use high heat (diathermy), extreme cold (cryosurgery), or lasers. Another procedure, called a loop electrosurgical excision (LEEP), uses low-voltage high-frequency radio waves to excise tissue. If any of the abnormal tissue is within the cervical canal, a cone biopsy (removal of a conical section of the cervix for inspection) will be needed.
Colposcopy is a painless procedure that does not require any anesthetic medication. If a biopsy is done, there may be mild cramps or a sharp pinching when the tissue is removed. To lessen this pain, your doctor may recommend 800 mg of ibuprofen (Motrin) taken the night before and the morning of the procedure (no later than 30 minutes before the appointment). Patients who are pregnant or allergic to aspirin or ibuprofen can take two tablets of acetaminophen (Tylenol) instead.
If a biopsy was done, there may be a dark vaginal discharge afterwards. After the sample is removed, the doctor applies Monsel's solution to the area to stop the bleeding. When this mixes with blood it creates a black fluid that looks like coffee grounds for a couple of days after the procedure. It is also normal to have some spotting after a colposcopy.
Patients should not use tampons or put anything else in the vagina for at least a week after the procedure, or until the doctor says it's safe. In addition, women should avoid douching and sexual intercourse during this time.
Occasionally, patients may have bleeding or infection after biopsy. Bleeding is usually controlled with a topical medication.
A patient should call her doctor right away if she notices any of the following symptoms:
- heavy vaginal bleeding (more than one sanitary pad an hour)
- fever, chills, or an unpleasant vaginal odor
- lower abdominal pain.
If visual inspection shows that the surface of the cervix is smooth and pink, this is considered normal. If abnormal areas are found and biopsied and the results show no indication of cancer, a precancerous condition, or other disease, this also is considered normal.
Carol A. Turkington
Biopsy—Removal of sample of abnormal tissue for more extensive examination under a microscope.
Cervix—The neck of the uterus.
Cryosurgery—Freezing and destroying abnormal cells.
DES—The abbreviation for diethylstilbestrol, a synthetic form of estrogen that was widely prescribed to women from 1940 to 1970 to prevent complications. It was linked to several serious birth defects and disorders of the reproductive system in daughters of women who took DES. In 1971, the FDA suggested it not be used during pregnancy and banned its use in 1979 as a growth promoter in livestock.
Human papilloma virus—A virus that causes common warts of the hands and feet, as well as lesions in the genital and vaginal area. More than 50 types of HPV have been identified, some of which are linked to cancerous and precancerous conditions, including cancer of the cervix.
Loop electrosurgical excision (LEEP)—A procedure that can help diagnose and treat cervical abnormalities, using a thin wire loop that emits a low-voltage high-frequency radio wave that can excise tissue. It is considered better than either lasers or electrocautery because it can both diagnose and treat precancerous cells or early stage cancer at the same time.
PAP test—The common term for the Papanicolaou test, a simple smear method of examining stained cells to detect cancer of the cervix.
Speculum—A retractor used to separate the walls of the vagina to make visual examination easier.
|
<urn:uuid:d6471e20-4146-4e17-954f-df1f229223e8>
|
CC-MAIN-2013-20
|
http://www.healthline.com/galecontent/colposcopy
|
2013-05-24T09:33:38Z
|
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368704433753/warc/CC-MAIN-20130516114033-00002-ip-10-60-113-184.ec2.internal.warc.gz
|
en
| 0.924503
| 1,498
|
Galileo Galilei «founder of modern experimental science»
Galileo Galilei was one of the most remarkable scientists ever. He discovered many new ideas and theories and introduced them to mankind.
Galileo helped society as an Italian astronomer and physicist, but how did he come to be such a great and well-known scientist? It took hard work and patience.
Galileo was born during the Renaissance in Pisa, Italy, on February 15, 1564. He was raised by his mother, Giulia Ammannati, and his father, Vincenzo Galilei. His family had enough money for school, but they were not rich. When he was about seven years old, his family moved to Florence, where he started his
education. In 1581, his father sent him to the University of Pisa because he thought his son should be a doctor. For four years, he studied medicine and the different theories of the scientist Aristotle. He was not interested in medicine, but soon he became interested in math. In 1585, he convinced his father to let him leave the school without a degree.
Galileo was a math tutor for the next four years in Florence. He spent a lot of the four years studying the scientific thoughts and philosophies of Aristotle. He also invented an instrument that could find the gravity of objects. This instrument, called a hydrostatic balance, was used by weighing the objects in water.
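The idea behind the hydrostatic balance can be stated with Archimedes' principle (a modern formulation, not part of the original essay): weighing an object in air and again submerged in water gives its specific gravity, because the apparent loss of weight equals the weight of the displaced water.

```latex
% Specific gravity from the hydrostatic balance:
%   W_air   = weight of the object measured in air
%   W_water = apparent weight measured while submerged in water
\text{specific gravity}
  \;=\; \frac{\rho_{\text{object}}}{\rho_{\text{water}}}
  \;=\; \frac{W_{\text{air}}}{W_{\text{air}} - W_{\text{water}}}
```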
Galileo returned to Pisa in 1589 and became a professor of math. He taught courses in astronomy at the University of Pisa based on Ptolemy's theory that the sun and all of the planets move around the earth. Teaching these courses deepened his understanding of astronomy.
In 1592, the University of Padua gave him a professorship in math. He stayed at that school for eighteen years. There he learned and came to believe Nicolaus Copernicus's theory that all of the planets move around the sun, made a mechanical tool called a sector, explained the tides based on the Copernican theory of the motion of the earth, found that the Milky Way was made up of many stars, and told people that machines cannot create power, they can only change it.
In 1602, still at Padua, Galileo did research on motion. The Aristotelian theory of motion went against the idea that the earth moves. Because of this, Galileo worked on forming a theory that would show that the earth does move. By watching a chandelier swing in the cathedral at Pisa, he formed the theory that all pendulums swing at the same rate no matter what size the arc is.
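Galileo's pendulum observation corresponds to the modern small-angle formula (a later formulation, not Galileo's own notation), in which the period depends on the pendulum's length and on gravity but not on the size of the swing:

```latex
% Period of a simple pendulum for small arcs:
%   L = pendulum length, g = gravitational acceleration
T \;\approx\; 2\pi \sqrt{\frac{L}{g}}
% The amplitude (arc size) does not appear -- this is the
% isochronism Galileo observed; for large arcs it holds
% only approximately.
```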
|
<urn:uuid:818eb1c7-b8df-4bea-b8ed-551ab71aca1a>
|
CC-MAIN-2013-20
|
http://www.mannmuseum.com/galileo-galilei-founder-of-modern-experimental-science/
|
2013-05-20T22:38:48Z
|
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368699273641/warc/CC-MAIN-20130516101433-00002-ip-10-60-113-184.ec2.internal.warc.gz
|
en
| 0.98444
| 996
|
Reviewed by Lois Spangler
Second Grade Science Teacher
This is an excellent book for early childhood classrooms. It is one of 8 volumes in the Plant-ology series by Bearport. Topics covered are in alignment with early childhood life science curriculum standards. In addition to exploring how plants use toxins to protect themselves from hungry animals, this volume establishes a very important safety concept: some plants are poisonous to animals and humans. Children will also learn that you cannot distinguish poisonous plants from nonpoisonous plants just by looking at them. Topics include poisonous flowers and leaves, poisonous bulbs, deadly berries, birds and berries, poisonous seeds, poison ivy and oak, and more.
The information in Lawrence's book will pique children's curiosity. There is a clear text section on each left-hand page, with enlarged images of details that students might observe in nature, like tree rings, pine cones, or leaf ribs. Full-page photographs on the opposing page illustrate the topics. The engaging photos are clearly labeled and present a strong visual impact, which will captivate children. Each book in the series includes a glossary, an index, a reference section, and an activity. The reading level is 2, with an interest level of K-3. Because the text is supported by such good graphics, the range of readers is broad. This publication is also available in eBook format. Classroom teachers and those using differentiated instruction will find this volume an excellent addition to their plant unit.
Review posted on 10/11/2012
|
<urn:uuid:bc11f347-b8de-46d1-bf93-5bd0813b3f60>
|
CC-MAIN-2013-20
|
http://www.nsta.org/recommends/ViewProduct.aspx?ProductID=21367
|
2013-05-19T09:55:37Z
|
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368697380733/warc/CC-MAIN-20130516094300-00002-ip-10-60-113-184.ec2.internal.warc.gz
|
en
| 0.947763
| 311
|
NASA launching solar-powered spacecraft to explore Jupiter
CAPE CANAVERAL, Fla. (AP) — NASA has launched a robotic explorer to Jupiter.
The spacecraft, named Juno, blasted off aboard an unmanned rocket Friday from Cape Canaveral. It will take Juno five years to reach the largest planet in the solar system.
Juno is solar powered with three huge panels, a first for a spacecraft intended to roam so far from the sun. The total mission costs $1.1 billion.
Scientists hope to discover the recipe for making planets, by identifying Jupiter's secret ingredients. The gas giant is believed to be the solar system's oldest planet.
Attached to Juno are three little Lego figures. They represent the Italian physicist Galileo, who discovered Jupiter's biggest moons; the Roman god Jupiter; and his wife Juno, for whom the spacecraft is named.
|
<urn:uuid:06a5c5b3-9c29-4829-8667-52701bb93707>
|
CC-MAIN-2013-20
|
http://www.wjla.com/articles/2011/08/nasa-launching-solar-powered-spacecraft-to-explore-jupiter-64732.html
|
2013-06-19T20:09:51Z
|
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368709101476/warc/CC-MAIN-20130516125821-00001-ip-10-60-113-184.ec2.internal.warc.gz
|
en
| 0.919444
| 260
|
In the 20th cent. the structural or descriptive linguistics school emerged. It dealt with languages at particular points in time (synchronic) rather than throughout their historical development (diachronic). The father of modern structural linguistics was Ferdinand de Saussure, who believed in language as a systematic structure serving as a link between thought and sound; he thought of language sounds as a series of linguistic signs that are purely arbitrary, as can be seen in the linguistic signs or words for horse: German Pferd, Turkish at, French cheval, and Russian loshad'. In America, a structural approach was continued through the efforts of Franz Boas and Edward Sapir, who worked primarily with Native American languages, and Leonard Bloomfield, whose methodology required that nonlinguistic criteria must not enter a structural description. Rigorous procedures for determining language structure were developed by Kenneth Pike, Bernard Bloch, Charles Hockett, and others.
See also structuralism.
The Columbia Electronic Encyclopedia, 6th ed. Copyright © 2012, Columbia University Press. All rights reserved.
|
<urn:uuid:999c21b5-7d56-4210-bb91-b2959dedb763>
|
CC-MAIN-2013-20
|
http://www.infoplease.com/encyclopedia/society/linguistics-structural-linguistics.html
|
2013-05-23T19:06:14Z
|
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368703682988/warc/CC-MAIN-20130516112802-00002-ip-10-60-113-184.ec2.internal.warc.gz
|
en
| 0.931253
| 255
|
Saqqara – Splendours of Egypt
Saqqara is one of the richest archaeological sites in Egypt and today we visit its Step Pyramid of Djoser. Saqqara’s monuments span some 3,000 years, with monuments including the earliest ancient funerary structures to Coptic monasteries.
There were masses of visitors at the entrance to the enclosure, so we had to wait our turn to move in. To get to the Great Southern Court, we walk through a magnificent colonnaded corridor. This corridor is lined with 40 pillars, ribbed in imitation palm stems.
Everyone knows of the Great Pyramids of Giza, but it's the Step Pyramid that holds the most significance in the history of Egyptian monuments. The Step Pyramid marked an unprecedented leap forward in the world of architecture. Imhotep has been hailed as the inventor of the art of building with hewn stone. Site excavations revealed Imhotep's name inscribed on a pedestal of Djoser. The vast enclosure surrounding the Step Pyramid was another achievement. The site design provided the template for subsequent Egyptian architecture.
Saqqara became the royal necropolis for the Old Kingdom capital of Memphis. As the city grew, so did its necropolis. It spanned an area over 6 km long and more than 1.5 km wide. The Step Pyramid of Djoser was built some time after 2630 BC. The pyramid was built for King Djoser by Imhotep, high priest and architect. It was the first pyramid in Egyptian history. The Step Pyramid was the first stone structure of its size in the world. Prior to this, royal tombs were underground rooms covered by low sandy mounds. The Step Pyramid was started as a large mastaba tomb. It followed the well-established Saqqara tradition. Imhotep chose to use stone rather than mud-brick. He built one mastaba on top of the other, each one smaller than the one below. In one of the restored sections of the wall you’ll see a frieze of cobras. It is quite a sight.
Helen

What are your thoughts on the subject?
|
<urn:uuid:81c47e7d-bd22-48f4-bc21-4c4acd40e23a>
|
CC-MAIN-2013-20
|
http://www.travelsignposts.com/wordpress/egypt/splendours-of-egypt/saqqara-splendours-of-egypt
|
2013-05-24T02:27:36Z
|
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368704132729/warc/CC-MAIN-20130516113532-00002-ip-10-60-113-184.ec2.internal.warc.gz
|
en
| 0.94958
| 445
|
December 3, 2012, by Kyle Adams of Thomas Nelson HS
In 1647, the English parliament passed a law that made Christmas illegal. The Puritan leader Oliver Cromwell, who considered feasting and revelry on what was supposed to be a holy day to be immoral, supported the ban on Christmas festivities. The ban was lifted only when the monarchy was restored in 1660.
|
<urn:uuid:decb63b4-b9fb-48bc-b4f5-83daaee5440f>
|
CC-MAIN-2013-20
|
http://www.ihigh.com/thomasnelson/article_149764.html
|
2013-06-19T19:48:45Z
|
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368709101476/warc/CC-MAIN-20130516125821-00000-ip-10-60-113-184.ec2.internal.warc.gz
|
en
| 0.888397
| 78
|
Two main types of cherries are produced in the United States: sweet cherries and tart or “sour” cherries. Washington, California, Oregon and Michigan are the primary sweet cherry producing states, accounting for more than 97 percent of the quantity produced nationwide. The primary tart cherry producing state is Michigan, which alone accounts for nearly 90 percent of tart cherry production.
Cherries are consumed in a variety of ways, including fresh, frozen and canned, or as juice, wine, brined or dried. In recent years, two-thirds of the sweet cherries produced have been destined for the fresh market, with the remaining one third used for processing. Of the sweet cherries that are processed, just over 50 percent are brined.
With regard to tart cherries, 83 percent of production is used for processing, with the majority processed as a frozen product (67 percent). A total of 16 percent of tart cherries are canned and the remainder (those neither frozen nor canned) are used for juice, wine, brined and dried products (NASS).
The marketing season for U.S. sweet cherries lasts from early May to mid-August, while the marketing season for tart cherries lasts from mid-June to mid-August (NASS).
- California Cherries, California Cherry Advisory Board - This site provides information on Bing, Rainer, Lambert and Van cherries to consumers and industry sources.
- Cherry Marketing Institute - The national research and promotion organization represents U.S. cherry growers and works to increase demand of processed tart cherries.
- Cherry Production and Trade Summary, Foreign Agricultural Service (FAS), USDA.
- Global Agricultural Trade System (GATS), FAS, USDA.
- National Cherry Growers and Industries Foundation - This organization represents growers and industry interests.
- Northwest Cherry Growers - Approximately 70 percent of the U.S. cherry production comes from this group, which represents growers in Washington, Oregon, Utah and Idaho.
- Oregon Cherry Growers - This grower-cooperative was formed in 1932.
- Stone Fruit: World Markets and Trade, FAS, USDA, 2012.
- Cherry, Integrated Pest Management guidelines, University of California, Davis.
- Cherry Production, National Ag Statistics Service (NASS), USDA - This report contains the mid-June production forecast for U.S. tart and sweet cherries.
- Commodity Highlight: Cherries, Fruit and Tree Nuts Outlook, Economic Research Service (ERS), USDA, 2012, p. 31 - The United States maintains its rank as the No. 1 cherry exporter globally, accounting for almost a quarter of the world’s average export volume during 2005-09 and around one-third the average of world cherry export value. The United States leads in both volume and value of cherry exports—averaging 117 million pounds in 2005-09 and valued at $261.4 million.
- Food Availability (Per Capita) Data System, ERS, USDA.
- Fruit and Tree Nuts Outlook and Yearbook, ERS, USDA.
- Michigan Cherries, Michigan State University - This page links to other university resources for sweet and tart cherries.
- Noncitrus Fruits and Nuts, NASS, USDA, 2012.
- PlantFacts, Ohio State University - This Web site provides a full-text search engine of all extension and academic department information from all land-grant universities in the United States. Additionally, there are significant image and video databases, a FAQ database and a glossary.
- Postharvest Information Network, Washington State University - This features information on topics such as cherry quality, marketing, packaging and storage.
- Sample Costs to Establish an Orchard and Produce Sweet Cherries, University of California Cooperative Extension, 2012 - This online guide provides sample costs for establishing a cherry orchard using sprinkler irrigation in California.
- 75 Years Strong: Oregon Cherry Co-op, Rural Cooperatives, USDA Rural Development, 2008 - Oregon Cherry Growers remains a giant in maraschino cherry production and annually packs thousands of tons of fresh cherries. In recent years, the co-op has branched into producing infused dried cherries and blueberries.
- Cherry Central, Traverse City, Michigan - Located in the cherry capital of the world, this company is owned by member cooperatives representing growers in Michigan, New York, Wisconsin, Utah and Washington.
- Chukar Cherries, Prosser, Washington - This case study, prepared by the University of Kentucky Cooperative Extension Service in 2001, describes Chukar's growth and development, focusing on their entry into the dried foods specialty market.
- Falcon Orchard, Sister Bay, Wisconsin - This "you-pick" and "we-pick" fresh cherry orchard features a craft store that sells country-folk-art-related gift items. Cherries are available in season, and the operation stresses the camaraderie of picking fruit together. Other fun activities and amusements include a seed-spitting range and life-size soft-sculpture figurines.
- Irons Fruit Farm, Lebanon, Ohio - This farm raises tart cherries, along with apples, berries, pumpkins, sweet corn and vegetables. The farm offers you-pick produce, a seasonal bakery and gift room, fall tours and hayrides.
- King Orchards Cherry Juice Concentrate, Central Lake, Michigan - This farm processes and markets its own tart cherry juice and dried cherries using its own label.
- Seaquist Orchards, Sister Bay, Wisconsin - This fifth-generation operation processes cherries in its own processing plant. Pressed cider and cherry juice are processed on the farm.
- Shaw Orchards, Stewartstown, Pennsylvania - This orchard has been in operation for seven generations. In addition to tart and sweet cherries, the orchard is also noted for its apples and peaches. A variety of fruits, vegetables and homemade jams and jellies are also marketed directly to customers.
- Tree-Mendus Fruit Farm, Eau Claire, Michigan - The farm grows more than 15 varieties of sweet cherries as well as tart cherries. You-pick customers can have their fresh picked cherries pitted while at the farm. Tree-Mendus also is host to the “world famous” International Cherry Pit-Spitting Championship the first Saturday in July.
Links checked January 2013.
Standing and waiting. Photo by Image Zen.
Ever since Charles Darwin and Alfred Russel Wallace first described the workings of natural selection, one popular way to summarize selective change has gone something like this: A population of critters is well-adapted to its environment until that environment changes—maybe the critters move to a new climate, maybe the climate changes on them, maybe some new competitors or predators move in. Life gets harder for our critters, until one of them is born ... different. That lucky mutant has a never-before-seen trait that lets it cope in the new conditions, and in a few generations, every critter in the population is a descendant of that original mutant.
That narrative isn't wrong. But it does miss one of the key insights that led to the discovery of natural selection—natural populations are variable.
That population of critters encountering new conditions of life may very well not need to wait around for the lucky mutant before it can begin adapting to new conditions. Mutations happen at random, and continuously—and, particularly if they don't leave the mutant much less fit, can hang around in a population for generations. And this "standing" variation is raw material waiting for natural selection to act.
High-octane fuel for adaptation
There's good reason to think that natural selection is more efficient when it has standing variation to work with. Joachim Hermisson and Pleuni Pennings demonstrated this principle rather neatly in a 2005 theory paper, in which they modeled the fate of new genetic mutations that had a weak negative effect when they first appeared in a population, but then became beneficial after the population's environment changed.
Normally, when a new mutation appears in a population, it's almost immediately lost to the random effects of genetic drift, even if it confers a benefit. This means that a new mutation needs to be quite strongly favored by selection to have a high probability of "fixing," or spreading through an entire population.
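To put a rough number on that, the standard diffusion approximation (Kimura 1962) gives the fixation probability as a function of population size, selection strength, and starting frequency; for a single new copy in a large population it reduces to Haldane's famous 2s. The population size and selection coefficient below are arbitrary illustrations, not values from the paper:

```python
import math

def p_fix(N, s, p0):
    """Kimura's diffusion approximation for the fixation probability
    of an allele at frequency p0 with selection coefficient s."""
    if s == 0:
        return p0  # neutral case: fixation probability equals current frequency
    return (1 - math.exp(-4 * N * s * p0)) / (1 - math.exp(-4 * N * s))

N = 10_000
new_copy = p_fix(N, s=0.01, p0=1 / (2 * N))  # a single brand-new copy
print(round(new_copy, 3))  # ~0.02, i.e. about 2s: even a 1% advantage
                           # fixes only about one time in fifty
```

Even a mutation with a one-percent fitness advantage is lost to drift about 98% of the time.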
However, under Hermisson and Pennings's model, the mutations considered are only those that survive the initial effects of drift. The flip side of the randomness that can make a weakly beneficial mutation disappear is that it can also help a weakly deleterious mutation persist, settling into an equilibrium between drift, selection, and new mutation events that create new copies of the same variant to replace the ones lost to selection or drift. So, when conditions changed, and the mutation became even weakly beneficial, it was ready to start spreading.
Natural selection is more effective when it works with standing variants. Figure 1 from Hermisson and Pennings (2005).
This graph, the key figure from Hermisson and Pennings's paper, shows the probability that a mutation will "fix," or spread to dominate the population over the course of several generations, given the power of natural selection (alpha, the term on the horizontal axis). The dotted line tracks the probability of fixation for a brand-new mutation; the solid line tracks the probability of fixation for a mutation that existed before selection began to act, and had achieved mutation-selection-drift equilibrium. No matter how strong selection is, the pre-existing mutation is more likely to "fix" than the new mutation—and that difference is most pronounced when selection favoring the mutation is weakest.
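The same qualitative pattern falls out of a bare-bones Wright–Fisher simulation. This is only a sketch, not Hermisson and Pennings's actual model—the population size, selection coefficient, and the 5% starting frequency for the standing variant are arbitrary choices for illustration:

```python
import numpy as np

rng = np.random.default_rng(42)

def fixation_rate(N, s, p0, trials=1000):
    """Fraction of replicate Wright-Fisher populations in which an
    allele starting at frequency p0 (selection coefficient s) fixes."""
    fixed = 0
    for _ in range(trials):
        p = p0
        while 0.0 < p < 1.0:
            # selection deterministically shifts the expected frequency...
            p_sel = p * (1 + s) / (p * (1 + s) + (1 - p))
            # ...and drift is binomial sampling of 2N gene copies around it
            p = rng.binomial(2 * N, p_sel) / (2 * N)
        fixed += p == 1.0
    return fixed / trials

N, s = 500, 0.02
brand_new = fixation_rate(N, s, p0=1 / (2 * N))  # a single new copy
standing = fixation_rate(N, s, p0=0.05)          # already at 5% frequency
print(brand_new, standing)  # the standing variant fixes far more often
```

With these toy numbers the new mutation fixes only a few percent of the time, while the standing variant fixes in most replicates.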
In other words, if mutations provide the variation that fuels evolution by natural selection, standing variation is fuel with a substantially higher octane rating.
Harder to spot
But the same features that make adaptation from standing variation so much more efficient also act as a sort of population genetic stealthing. This is because adaptation from standing variation has very different effects on the genetics of an adapting population than the spread of a single new mutation.
The key to this difference is that gene variants, or alleles, aren't transmitted from one generation to another one at a time. Instead, they come as part of chromosome regions, physically linked to genetic code that may have nothing to do with the function of the focal gene. And population geneticists use that fact to zero in on genetic regions that might have been recently affected by selection.
It's a little bit like buying LEGO bricks—or, at least, how it used to be when I was still buying a lot of LEGOs, back before you could custom-build your own sets online. Say you want a hundred copies of a particularly special type of LEGO brick, one that's only available in a single kit. To get those hundred bricks, you need to buy a hundred copies of that one kit. So you end up with a selection of bricks—the ones you wanted, and the ones that came with the ones you wanted—that probably doesn't have a very wide diversity of brick types.
But suppose you want a hundred copies of a more common LEGO brick, one that's included in dozens of different kits—kits for pirate ships and castles, race cars and railroads. You might still need to buy a hundred kits, but you can buy many different kinds of kits, and so in addition to the hundred copies of the brick you want, you also have bricks to build anything from a starship to a dragon.
LEGOs, evolving. Photo by Kaptain Kobold.
Selection on a single beneficial mutation is like that first LEGO shopping scenario, where there's only one kit containing the brick you want. The one lucky mutation exists with only one "genetic background" of other, associated genetic code, and so when the mutation spreads through the population, a chunk of that background code spreads with it. (At least, until recombination can separate the favored mutation from its background; that takes time, sometimes a lot of time.)
Just as purchasing a hundred copies of the same LEGO kit would leave an obvious mark on the makeup of your brick collection, a selective sweep that starts with a single mutation—what's called a "hard sweep"—results in a region of genetic code with noticeably lower variation across the population, because everyone is carrying the original lucky mutation plus its associated background.
Figure 4 from Linnen et al. (2009) demonstrates the reduced diversity in a gene region associated with fur color in deer mice. Image from Linnen et al. (2009).
In practice, biologists use this principle in two major ways. First, if a biologist has a particular gene in mind that might have recently experienced selection, she can collect DNA sequence data in the vicinity of that gene for many individuals in a population, and see whether it's less diverse than it ought to be. This is how Catherine Linnen and her collaborators demonstrated that a population of deer mice living on light-colored soils in the Sand Hills of Nebraska had experienced natural selection for lighter color. In a study [PDF] I've discussed previously, the team identified a genetic region that was associated with coat color in the mice, then collected sequence data from that region in mice collected from the light-soil population. Compared to the same genetic region in mice from nearby sites with dark soil, the light-soil mice had markedly less variation in the coat-color region.
Alternatively, biologists who don't know which genes might have been targeted by natural selection can collect sequence data from a whole lot of gene regions—or even "scan" the whole genome—and compare the diversity at each region. Any region that has lower diversity than most of the other sampled regions may have experienced selection recently, and is probably a good candidate for follow-up study.
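The diversity statistic behind such scans is usually nucleotide diversity (π), computed in windows along the genome. Here's a toy version with made-up 0/1 haplotype data and an artificially "swept" low-diversity region; the window size, step, and simple 2p(1-p) estimator are simplifications, and real scans also lean on statistics like Tajima's D or haplotype-based tests:

```python
import numpy as np

rng = np.random.default_rng(0)

def window_pi(haplotypes, win, step):
    """Nucleotide diversity (expected pairwise differences per site)
    in sliding windows across a 0/1 haplotype matrix (rows = samples)."""
    n, length = haplotypes.shape
    pis = []
    for start in range(0, length - win + 1, step):
        freqs = haplotypes[:, start:start + win].mean(axis=0)
        # per-site heterozygosity 2p(1-p), with the n/(n-1) sample correction
        pi = (2 * freqs * (1 - freqs) * n / (n - 1)).sum() / win
        pis.append(pi)
    return np.array(pis)

# Made-up data: 20 haplotypes, 10 kb, sites drawn independently at random
haps = rng.integers(0, 2, size=(20, 10_000))
haps[:, 4_000:5_000] = 0                         # a "swept" region: everyone identical...
haps[0, 4_000:5_000] = rng.random(1_000) > 0.9   # ...except a few new mutations

pi = window_pi(haps, win=1_000, step=1_000)
print(pi.argmin())  # → 4: the swept window has far lower diversity
```

The swept window's diversity crashes relative to the rest of the "genome," which is exactly the signal a hard-sweep scan looks for.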
But selection from standing variation doesn't leave such a clear mark on the genome. It's more like that second LEGO shopping spree, for a brick found in many different kits. If a useful variant is located on many different genetic backgrounds, then selection can make the variant more common in the population without necessarily reducing the diversity of gene regions near the focal variant. This is called a "soft sweep." Soft sweeps present a problem for those of us who want to find genes that have recently been affected by natural selection—without the loss of diversity, genetic regions that have undergone soft sweeps may not stand out in the genome as a whole.
Searching for soft sweeps
As we collect and analyze more genome-scale population genetic datasets, biologists are coming around to the idea that easy-to-detect hard sweeps may be the exception [$a], rather than the rule, for evolution in natural populations—in no small part because the evidence of hard sweeps just isn't there [PDF].
But the absence of hard sweeps doesn't mean that soft sweeps are going on all over the place instead. For instance, in an (ongoing) analysis I presented [PDF] at the recent Evolution meetings in Ottawa, I examined patterns of diversity in genetic regions close to genetic markers that are very strongly associated with differing climate conditions in the small but awesome wildflower Medicago truncatula—and I found little evidence of recent hard sweeps. Does that mean all those strongly associated gene variants are strongly associated as a result of adaptation from standing variation? Maybe; but some portion of the associations could also be due to population genetic processes like drift and isolation-by-distance—I'm still thinking about ways to kill the soft sweep hypothesis.
Pennings and Hermisson followed up their original theory paper with a study comparing the power of several different statistical tests to detect soft sweeps, and they found some promising results with an approach based on linkage between genetic variants in the vicinity of a favored variant. More recently, Pennings has approached the question of adaptation from standing variation from a somewhat different angle, by studying selective sweeps in human immunodeficiency virus, HIV. The evolution of HIV after it infects a patient, and as it adapts to antiviral drugs, is quite well understood—to the point that virologists know to expect particular mutations to sweep the viral population within a patient who starts taking a particular drug.
In an analysis recently published in PLoS Computational Biology, Pennings found that the virus's evolution of drug resistance could be based on standing variation in about 6% of patients on a standard anti-viral drug cocktail—which is to say, about 6% of all patients carry viral populations that are primed to evolve drug resistance the moment therapy begins. (Pennings's lab website has a good explanation of the clinical implications of this result, with video, even.)
Then, at the Ottawa Evolution meetings, Pennings presented [PDF] an examination of HIV genetic samples taken from multiple patients undergoing antiviral treatment. She identified cases when the virus's adaptation to the drugs was fueled by standing variation or based on a mutation that occurred after the drug treatment started; one resistance mutation evolved to fixation via a soft sweep in eight out of 23 patients. [Correction, 6 Aug 2012: See Pennings's comment below for a correction on this point; it's not known whether this particular soft sweep started from standing variation, or whether it's simply the case that two different mutations with the same effect managed to sweep the population together.]
If evolutionary biologists want to understand how natural selection helped make the living world we see around us today, it looks like we're going to have to learn to love soft sweeps. We're still learning how to differentiate the aftermath of soft sweeps from the results of other, non-selective processes. But fortunately, we live in an era when the genome-scale data that may let us untangle this question are increasingly easy to collect.◼
I started working on this post quite a while before the Ottawa Evolution meetings, when I was pleased to meet Pleuni Pennings for the first time. If there are mistakes in what I've written above, they're my own; but I hope she'll let me know if I've made any!
Flintoft, L. (2011). Human evolution: Sweep model is swept away. Nature Reviews Genetics, 12, 228-9 DOI: 10.1038/nrg2978
Hermisson, J., & Pennings, P.S. (2005). Soft sweeps: Molecular population genetics of adaptation from standing genetic variation. Genetics, 169 (4), 2335-52 DOI: 10.1534/genetics.104.036947
Hernandez, R. D., J. L. Kelley, E. Elyashiv, S. Melton, A. Auton, G. McVean, G. Sella, & M. Przeworski (2011). Classic selective sweeps were rare in recent human evolution. Science, 331, 920-4 DOI: 10.1126/science.1198878
Linnen, C. R., E. P. Kingsley, J. D. Jensen, & H. E. Hoekstra (2009). On the origin and spread of an adaptive allele in deer mice. Science, 325, 1095-8 DOI: 10.1126/science.1175826
Oleksyk, T. K., M. W. Smith, & S. J. O'Brien (2010). Genome-wide scans for footprints of natural selection. Phil. Trans. Royal Soc. B, 365, 185-205 DOI: 10.1098/rstb.2009.0219
Pennings, P.S. (2012). Standing genetic variation and the evolution of drug resistance in HIV. PLoS Computational Biology, 8 : 10.1371/journal.pcbi.1002527
Pennings, P.S., & J. Hermisson (2006). Soft sweeps III: The signature of positive selection from recurrent mutation. PLoS Genetics, 2 DOI: 10.1371/journal.pgen.0020186.eor
Pritchard, J. K., & A. Di Rienzo (2010). Adaptation—not by sweeps alone Nature Reviews Genetics, 11, 665-7 DOI: 10.1038/nrg2880
Since its founding in 1941, NASA's Glenn Research Center in Cleveland has made invaluable contributions to aeronautics and space flight. New aircraft technology has been developed, tested, designed and fabricated at Glenn. And for much of Glenn's history, the testing of cutting-edge technology took place in the Altitude Wind Tunnel.
The Altitude Wind Tunnel, or AWT, was built in Cleveland in 1944. The huge structure, which could accommodate a full-sized airplane within the tunnel, was the first of its kind to accurately simulate a variety of realistic flight conditions. Researchers could investigate the effects of high speeds and varying pressures, temperatures and elevations on aircraft by using this tunnel.
During its existence, the AWT played pivotal roles in myriad projects, from early astronaut training in Project Mercury to testing the Centaur rocket. It ceased operations in the 1970s and was demolished in 2009.
Although the tunnel itself is now gone, the lessons learned in the tunnel remain. And there are stories from the tunnel to be told.
The Altitude Wind Tunnel and Space Power Chambers
Bob Arrighi (Wyle Information Systems LLC) is the archivist for Glenn. His job is to archive, document and present to the public the rich history of Glenn. When the Altitude Wind Tunnel (AWT) was demolished in 2009, he was tasked with telling the tunnel's story. He has written a book, called "Revolutionary Atmosphere: The Story of the Altitude Wind Tunnel and the Space Power Chambers." Accompanying this book is a DVD, "A Tunnel Through Time: The History of NASA's Altitude Wind Tunnel." There is also a website that explores the history of the AWT, and a complete online version of the book "Revolutionary Atmosphere" is available.
The tales of the tunnel can, quite literally, fill a book. Bob Arrighi gives us a preview into the book and shares a bit about his background, how the project came to be, what the process of writing the book was like, and why the topic is relevant today.
Q: What does your job entail?
Bob Arrighi: I maintain a collection of documents and other media related to the history of NASA's Glenn Research Center. I assist researchers with reference requests and in-depth research. I also identify or am assigned noteworthy historical topics and asked to document them by creating publications, websites, documentaries and multimedia pieces.
Q: How long have you been at Glenn?
Arrighi: I started in July 2001 as a contractor for the Plum Brook Reactor Facility documentation effort. In July 2003 I assumed my current position in the Glenn History Office. Since May 2005, my time has been divided between working on documentation of the Altitude Wind Tunnel (AWT), Propulsion Systems Laboratory and other facilities for the Glenn Historic Preservation Officer and my continuing role in the History Office.
Q: How did the idea of the book come about?
Arrighi: Section 106 of the National Historic Preservation Act states that federal agencies must work with the State Historic Preservation Officers to mitigate any significant changes to historic facilities or structures. In recent years, Glenn has demolished several of its historic facilities, and so has been required to provide the State Historic Preservation Officers with documentation of the history of each facility. The type of documentation is worked out with the State Historic Preservation Officers prior to any demolition.
Past projects have included the Plum Brook Reactor Facility and Rocket Engine Test Facility. In both cases, an outside company was hired to perform the documentation. Books, documentaries, a website and other materials were created. When outlining the efforts undertaken to fulfill the mitigation for the Altitude Wind Tunnel (AWT) and Propulsion Systems Laboratory, the work on Plum Brook Reactor Facility and Rocket Engine Test Facility served as a template. A number of products were requested. These products, created in-house, included a website, documentary, a historic engineering report and a book.
Q: What was the timeline for creating the book?
Arrighi: I started the general AWT research during the summer of 2005. A big push on the manuscript came in the summer and fall of 2007. The manuscript was submitted to NASA Headquarters in Washington for peer review in September 2008. Editing and layout were conducted throughout 2009. We received copies in June 2010.
Q: Describe the process of writing this book.
Arrighi: As I started the research, I kept an ongoing narrative of information that was used for many of the different products. As the research became more complete and some of the other products had been created, I began converting this narrative into a story by integrating quotes from interviews, photographs and anecdotes from articles, and creating chapters and sidebars. I repeatedly reviewed and rewrote the text to make it more interesting and easier to grasp. After incorporating comments from the peer reviewers, I began working with Nancy O'Bryan (Wyle Information Systems LLC) and Kelly Shankland (Wyle Information Systems LLC) in the Publishing Department at Glenn. We worked on editing the text and laying out the chapters. This process, which included creating the index, formatting the citations and working on the cover art, took about a year to complete.
Q: Describe what the book is about.
Arrighi: The book tells the history of the Altitude Wind Tunnel, or AWT. For years, the AWT was not only Glenn's premier facility, but also the only tunnel in the country that could test engines in altitude conditions. In the late 1950s it was modified so that its interior could be used for several altitude tests and astronaut training for Project Mercury. In 1961 the facility was converted into two large test chambers and used for numerous different types of tests for the Centaur second-stage rocket. The facility was closed in the mid-1970s. A bid to restore the facility to its original wind tunnel function in the early 1980s failed. The tunnel was eventually demolished in 2009.
Q: Why did you think it was important to write this book?
Arrighi: Glenn has a rich history that includes many significant accomplishments and a high level of technical expertise. The AWT story provides insight into the center's history from its very inception through the turbojet revolution, to the early manned space program and into recent years. The AWT demonstrates the significance of modifying a facility to stay relevant, the importance of the center's technical staff and the accomplishments of Abe Silverstein, a former Glenn center director who was very instrumental in the creation of NACA and NASA wind tunnels.
Q: How is the subject material of your book relevant to the general public?
Arrighi: Although it is impossible to write a book about a wind tunnel without getting into some technical detail, I tried to focus on the staff who were involved, and what was happening contextually. The AWT story presents many of the center's accomplishments to the public, many of which have never been shared before.
Q: What is one of the most interesting things in the book?
Arrighi: I learned a tremendous amount about NACA, Glenn, jet engines, liquid hydrogen, and many other topics. Probably the most rewarding aspect was talking with or reading interview transcripts from former employees. These fill the cracks that reports, newspaper articles and other sources leave open. In almost every case, two themes arose in these interviews: the importance and skill of the mechanics and technicians and Abe Silverstein's role in guiding the center and agency for decades.
Read the Book, Watch the DVD
There are several ways to purchase a printed copy of "Revolutionary Atmosphere: The Story of the Altitude Wind Tunnel and the Space Power Chambers."
NASA Center for AeroSpace Information (CASI)
U.S. Government Printing Office (GPO)
"Revolutionary Atmosphere" is also available from private, commercial booksellers.
To acquire a copy of the DVD "A Tunnel Through Time: The History of NASA's Altitude Wind Tunnel," please email the NASA Glenn History Office.
-Tori Woods, SGT Inc.
NASA’s Glenn Research Center
Essay Topics/Writing Assignments
These 20 essay questions can be used as essay questions on a test, or as stand-alone essay topics for a take-home or in-class writing assignment. Students should have a full understanding of the text in order to answer these questions. They ask for a thorough analysis of the text.
1. Why do you think it is important that Lois Lowry write a book about such a mature subject for younger children?
2. Write an essay contrasting the fairy tale world and the actual events surrounding Ellen and Annemarie. Be sure to use specific examples from the story to support your contrasts.
3. Describe how Annemarie's concept and understanding of how to be heroic progress throughout the novel. Include specific examples from the text to support your answer.
4. Write an essay about heroism. Choose at least five characters and write how...
This section contains 712 words (approx. 3 pages at 300 words per page).
Lesson plan for the film
Fields of Sacrifice
Students will create a video or slide show presentation that answers the question “Why should we remember the sacrifices of World War II?”
Advanced 9–12 and beyond. This unit can be adapted to younger grades and different courses of study in various Canadian provinces and territories.
For Activity 1:
Handout 1: Timeline of World War II
For Culminating Activity:
Access to a computer lab
Video editing software (e.g., iMovie or MovieMaker) or PowerPoint
Handout 2: Why should we remember World War II?
These two activities explore the main theme of Fields of Sacrifice: the collective memory of World War II. The first considers these battles as turning points. The second is an extended project to answer the focus question “Why should we remember World War II?” for a Remembrance Day ceremony.
cordage (kôrˈdĭj), collective name for rope and other flexible lines. It is used for such purposes as wrapping, hauling, lifting, and power transmission. Early man used strips of hide, animal hair, and plant materials. Hemp and flax were formerly standard in Europe and America but were largely replaced in the 19th cent. by hard fibers, especially Manila hemp and sisal. In the 20th cent. the natural fibers were replaced in many applications by synthetic fibers such as nylon and polyester. The fibers are straightened, usually by combing, then spun into yarn. Twine, which is sometimes called cord, is formed by wrapping two or more yarns together. By twisting together a number of yarns, a strand is formed. By twisting together three or more strands, a rope is produced. A cable-laid rope is formed from three or more ropes. In general a synthetic fiber rope lasts much longer and is much stronger than a natural fiber rope. Steel wire, often with a fiber core, is also used for rope.
The Columbia Electronic Encyclopedia, 6th ed. Copyright © 2012, Columbia University Press. All rights reserved.
In what may prove to be a significant and exemplary discovery, researchers claim to have identified 20 genes responsible for causing kidney diseases. This finding could dramatically alter the treatment of these diseases.
Chronic kidney disease is a long-term condition. Patients afflicted with this disease gradually lose function of their kidneys, and this decline in kidney function takes a toll on other organs as well.
The importance of this research is evident. Kidney disorders are slow and they usually are not accompanied by any major symptoms. By the time patients usually discover that their kidneys have been affected, it is too late to reverse kidney damage.
Dr. Jim Wilson from the University of Edinburgh says, "No-one knows who will be affected or when kidney disease may strike next, so even more research needs to be funded to help us tackle this challenge".
Renal failure has also been noted to be a major cause of death. Moreover, there is a worldwide shortage of donor kidneys. It is known that patients in need of a healthy kidney have to wait for months or even years before they can undergo a transplant. The demand for kidneys is also the highest in comparison to other organs.
This dearth of kidneys leads to black marketeering of the organ. In third world countries like India, where 70% of the population is below the poverty line, the poor and needy are lured into kidney donation for a meager sum of money. These kidneys are then sold at exorbitant prices to those who wish to jump the wait list.
Even after a kidney is made available for transplant, there are high chances that the recipient's body will reject the organ. Dialysis is the other option available to patients.
In this scenario the identification of 20 genes, which could help enlighten doctors about the causes of kidney disease, has opened a window of hope for thousands of kidney disease patients throughout the world.
It would be fair to call the study a path-breaking discovery.